notion | Notion Official API client library for Rust
kandi X-RAY | notion Summary
Notion API client library for Rust. This project is under active development, and this README will be updated as this library gets closer to a reliable state. However, if you're really eager, see the example todo cli application provided in examples/todo.
Community Discussions
Trending Discussions on notion
QUESTION
Suppose I want to model, using Haskell pipes, a Python Generator[int, None, None] which keeps some internal state. Should I be using Producer Int (State s) () or StateT s (Producer Int m) (), where m is whatever type of effect I eventually want from the consumer? How should I think about the notion of transducers in pipes? So in Oleg's simple generators, there is
...
ANSWER
Answered 2022-Mar-31 at 18:32
In pipes, you typically wouldn't use effects in the base monad m of your overall Effect to model the internal state of a Producer. If you really wanted to use State for this purpose, it would be an internal implementation detail of the Producer in question (discharged by a runStateP or evalStateP inside the Producer, as explained below), and the State would not appear in the Producer's type.
It's also important to emphasize that a Producer, even when it's operating in the Identity base monad without any "effects" at its disposal, isn't some sort of pure function that would keep producing the same value over and over without monadic help. A Producer is basically a stream, and it can maintain state using the usual functional mechanisms (e.g., recursion, for one). So, you definitely don't need a State for a Producer to be stateful.
The upshot is that the usual model of a Python Generator[int, None, None] in pipes is just a Monad m => Producer Int m () polymorphic in an unspecified base monad m. Only if the Producer needs some external effects (e.g., IO to access the filesystem) would you require more of m (e.g., a MonadIO m constraint or something).
To give you a concrete example, a Producer that generates pseudorandom numbers obviously has "state", but a typical implementation would be a "pure" Producer:
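The Haskell snippet that followed has been elided above. As a rough analogue only (my own sketch, not from the original answer), the Python generator type the question mentions makes the same point: the state can live entirely inside the producer, with no external State effect:

```python
from typing import Generator

def lcg(seed: int) -> Generator[int, None, None]:
    # A pseudorandom "producer": the state (x) lives entirely inside the
    # generator, much as a pipes Producer can thread state through
    # recursion rather than through a State base monad.
    x = seed
    while True:
        x = (1103515245 * x + 12345) % 2**31  # classic LCG constants
        yield x

gen = lcg(42)
first_three = [next(gen) for _ in range(3)]
```

Two generators built from the same seed yield identical streams, so nothing about this statefulness requires a mutable environment outside the producer.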
QUESTION
I am new to Python and coming from C++, so I suspect my way of thinking is "tainted" by my preconceived notions. I will explain what I am trying to do and the issue I am facing, but please be aware that the code below is an "artificial" little example that reproduces my issue.
Say that at some point I have this scenario, where B only overrides A.plot_and_clear() as that is all I need from B:
...
ANSWER
Answered 2022-Mar-14 at 21:55
__init__ should only be used to initialize an existing object. (Though the creation of the object and the call to __init__ usually both happen inside the call to the type itself.)
Use dedicated class methods as alternative constructors (such as copy constructors or constructing an object from another object). For example,
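The answer's code is elided above; the following is a minimal sketch of the idea, with the class bodies and the from_other constructor invented for illustration:

```python
class A:
    def __init__(self, data):
        # __init__ only initializes an already-created object
        self.data = data

    def plot_and_clear(self):
        print("A plotting", self.data)

    @classmethod
    def from_other(cls, other):
        # Alternative constructor: build a new instance from an existing
        # object, instead of overloading __init__ for that purpose.
        return cls(list(other.data))

class B(A):
    def plot_and_clear(self):  # the only override needed, as in the question
        print("B plotting", self.data)

a = A([1, 2, 3])
b = B.from_other(a)  # a B constructed from an A's state
```

Because the classmethod receives cls, B.from_other(a) builds a B even though the method is defined on A.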
QUESTION
Haskell typeclasses often come with laws; for instance, instances of Monoid are expected to observe that x <> mempty = mempty <> x = x.
Typeclass laws are often written with single-equals (=) rather than double-equals (==). This suggests that the notion of equality used in typeclass laws is something other than that of Eq (which makes sense, since Eq is not a superclass of Monoid).
Searching around, I was unable to find any authoritative statement on the meaning of = in typeclass laws. For instance:
- The Haskell 2010 report does not even contain the word "law" in it.
- Speaking with other Haskell users, most people seem to believe that = usually means extensional equality or substitution but is fundamentally context-dependent. Nobody provided any authoritative source for this claim.
- The Haskell wiki article on monad laws states that = is extensional, but, again, fails to provide a source, and I wasn't able to track down any way to contact the author of the relevant edit.
The question, then: Is there any authoritative source on or standard for the semantics of = in typeclass laws? If so, what is it? Additionally, are there examples where the intended meaning of = is particularly exotic?
(As a side note, treating = extensionally can get tricky. For instance, there is a Monoid (IO a) instance, but it's not really clear what extensional equality of IO values looks like.)
ANSWER
Answered 2022-Feb-24 at 22:30
Typeclass laws are not part of the Haskell language, so they are not subject to the same kind of language-theoretic semantic analysis as the language itself.
Instead, these laws are typically presented as an informal mathematical notation. Most presentations do not need a more detailed mathematical exposition, so they do not provide one.
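As an illustration of reading such laws as informal statements about equal results (my own example, not from the answer): lists under concatenation form a monoid, and the law x <> mempty = mempty <> x = x can be spot-checked as plain equality of values:

```python
import random

# Lists form a monoid under concatenation, with [] playing mempty.
# Property-style spot checks of the Monoid laws:
for _ in range(100):
    x = [random.randint(0, 9) for _ in range(random.randint(0, 5))]
    assert x + [] == x == [] + x          # left and right identity
for _ in range(100):
    a, b, c = ([random.randint(0, 9)] for _ in range(3))
    assert (a + b) + c == a + (b + c)     # associativity
```

For lists the notion of equality is unproblematic; as the question notes, for instances like Monoid (IO a) it is much less clear what such a check would even mean.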
QUESTION
I'm using software that uses Gnuplot language to plot some data, but I've never used Gnuplot before.
So I was trying to place labels and rectangles in a way that created a nice and readable text, which wasn't bad (as you can see with the number 182 in the image below), but I wanted to learn how to rotate the rectangle and label so that they line up with the white line.
(I can't post images, but it's like the one in this link.) Right now, it looks like:
I've already learned to rotate the label (as you can see the number 171), but apparently, it doesn't work the same way with the object.
...
ANSWER
Answered 2022-Feb-03 at 21:33
Check the following example and help labels.
You can create a datablock, add your labels, and plot them rotated and boxed together with your map.
Edit: ...forgot the semitransparent boxes. You need to play with the alpha channel, i.e. 0xAARRGGBB.
Code:
QUESTION
I am trying to generate a shiny app that will first allow the user to (using the notion of dplyr verbs) select the variables they are interested in and then filter those variables based on subsequent selections. I am trying to do this using conditionalPanel(), but I am getting stuck finding a way to access the input$ from each conditional panel.
Here is an example:
...
ANSWER
Answered 2022-Jan-25 at 20:48
We may use across (if we want to filter the rows when both column conditions are TRUE) or replace across with if_any (if either one of them is TRUE when they are both selected).
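The R code itself is elided above. As a hedged plain-Python sketch of the same all-vs-any distinction over user-selected columns (the column names and condition here are my own invention, not from the answer):

```python
rows = [
    {"a": 1, "b": 4},
    {"a": 5, "b": 2},
    {"a": 3, "b": 6},
]
selected = ["a", "b"]        # hypothetical user-selected variables
cond = lambda v: v > 2       # hypothetical per-column filter condition

# across-like: keep rows where the condition holds for ALL selected columns
all_rows = [r for r in rows if all(cond(r[c]) for c in selected)]
# if_any-like: keep rows where the condition holds for ANY selected column
any_rows = [r for r in rows if any(cond(r[c]) for c in selected)]
```

With this data, only {"a": 3, "b": 6} survives the all-columns filter, while every row passes the any-column filter.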
QUESTION
Trying to examine the intricacies of JavaScript GC, I got deep into the weeds (that is, into the ECMAScript spec). I found that an object should not be collected as long as it is deemed "live". And liveness itself is defined as follows:
At any point during evaluation, a set of objects S is considered live if either of the following conditions is met:
- Any element in S is included in any agent's [[KeptAlive]] List.
- There exists a valid future hypothetical WeakRef-oblivious execution with respect to S that observes the Object value of any object in S.
The [[KeptAlive]] list is appended with an object once a special WeakRef is created which (weakly) refers to it, and is emptied after the current synchronous job ceases.
However, as for WeakRef-oblivious execution, I fail to get my mind around what it is:
For some set of objects S, a hypothetical WeakRef-oblivious execution with respect to S is an execution whereby the abstract operation WeakRefDeref of a WeakRef whose referent is an element of S always returns undefined.
WeakRefDeref of a WeakRef returns undefined when its referent has already been collected. Am I getting it right that it is implied here that all objects that make up S should be collected? So the notion of a future hypothetical WeakRef-oblivious execution is that there is still an object, an element of S, which is not yet collected and is observed by some WeakRef.
It all still makes little sense to me. I would appreciate some samples.
...
ANSWER
Answered 2021-Nov-21 at 22:07
Let's ignore the formalised, but incomplete, definitions. We find the actual meaning in the non-normative notes of that section.1
What is Liveness in JavaScript?
Liveness is the lower bound for guaranteeing which WeakRefs an engine must not empty (note 6). So live (sets of) objects are those that must not be garbage-collected because they still will be used by the program.
However, the liveness of a set of objects does not mean that all the objects in the set must be retained. It means that there are some objects in the set that still will be used by the program, and the live set (as a whole) must not be garbage-collected. This is because the definition is used in its negated form in the garbage collector Execution algorithm2: At any time, if a set of objects S is not live, an ECMAScript implementation may3 […] atomically [remove them]. In other words, if an implementation chooses a non-live set S in which to empty WeakRefs, it must empty WeakRefs for all objects in S simultaneously (note 2).
Looking at individual objects, we can say they are not live (garbage-collectable) if there is at least one non-live set containing them; and conversely we say that an individual object is live if every set of objects containing it is live (note 3). It's a bit weird, as a "live set of objects" is basically defined as "a set of objects where any of them is live"; however, the individual liveness is always "with respect to the set S", i.e. whether these objects can be garbage-collected together.
1: This definitely appears to be the section with the highest notes-to-content ratio in the entire spec.
2: emphasis mine
3: From the first paragraph of the objectives: "This specification does not make any guarantees that any object will be garbage collected. Objects which are not live may be released after long periods of time, or never at all. For this reason, this specification uses the term "may" when describing behaviour triggered by garbage collection."
Now, let's try to understand the definition.
At any point during evaluation, a set of objects S is considered live if either of the following conditions is met:
- Any element in S is included in any agent's [[KeptAlive]] List.
- There exists a valid future hypothetical WeakRef-oblivious execution with respect to S that observes the Object value of any object in S.
The first condition is pretty clear. The [[KeptAlive]] list of an agent represents the list of objects to be kept alive until the end of the current Job. It is cleared after a synchronous run of execution ends, and the note on WeakRef.prototype.deref4 provides further insight on the intention: If [WeakRefDeref] returns a target Object that is not undefined, then this target object should not be garbage collected until the current execution of ECMAScript code has completed.
The second condition, however, oh well. It is not well defined what "valid", "future execution" and "observing the Object value" mean. The intuition the second condition above intends to capture is that an object is live if its identity is observable via non-WeakRef means (note 2), aha. From my understanding, "an execution" is the execution of JavaScript code by an agent and the operations occurring during that. It is "valid" if it conforms to the ECMAScript specification. And it is "future" if it starts from the current state of the program.
An object's identity may be observed by observing a strict equality comparison between objects or observing the object being used as key in a Map (note 4), whereby I assume that the note only gives examples and "the Object value" means "identity". What seems to matter is whether the code does or does not care if the particular object is used, and all of that only if the result of the execution is observable (i.e. cannot be optimised away without altering the result/output of the program)5.
To determine liveness of objects by these means would require testing all possible future executions until the objects are no longer observable. Therefore, liveness as defined here is undecidable6. In practice, engines use conservative approximations such as reachability7 (note 6), but notice that research on more advanced garbage-collectors is under way.
Now for the interesting bit: what makes an execution "hypothetical WeakRef-oblivious with respect to a set of objects S"? It means an execution under the hypothesis that all WeakRefs to objects in S are already cleared8. We assume that during the future execution, the abstract operation WeakRefDeref of a WeakRef whose referent is an element of S always returns undefined (def), and then work back whether it still might observe an element of the set. If none of the objects can be observed after all weak references to them are cleared, they may be garbage-collected. Otherwise, S is considered live, the objects cannot be garbage-collected, and the weak references to them must not be cleared.
4: See the whole note for an example. Interestingly, the new WeakRef(obj) constructor also adds obj to the [[KeptAlive]] list.
5: Unfortunately, "the notion of what constitutes an "observation" is intentionally left vague" according to this very interesting es-discourse thread.
6: While it appears to be useless to specify undecidable properties, it actually isn't. Specifying a worse approximation, e.g. said reachability, would preclude some optimisations that are possible in practice, even if it is impossible to implement a generic 100% optimiser. The case is similar for dead code elimination.
7: Specifying the concept of reachability would actually be much more complicated than describing liveness. See Note 5, which gives examples of structures where objects are reachable through internal slots and specification type fields but should be garbage-collected nonetheless.
8: See also issue 179 in the proposal and the corresponding PR for why sets of objects were introduced.
Example time!
It is hard for me to recognize how the liveness of several objects may affect each other.
WeakRef-obliviousness, together with liveness, capture[s the notion] that a WeakRef itself does not keep an object alive (note 1). This is pretty much the purpose of a WeakRef, but let's see an example anyway:
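The JavaScript example itself is elided above. As a hedged analogy only (Python's weakref module, not the spec's mechanism), the core idea that a weak reference by itself does not keep its referent alive looks like this:

```python
import gc
import weakref

class Payload:
    pass

obj = Payload()
ref = weakref.ref(obj)
assert ref() is obj   # referent alive: dereferencing yields the object

del obj               # drop the only strong reference
gc.collect()          # force a collection for non-refcounting implementations
assert ref() is None  # the weak reference alone did not keep it alive
```

Once no "WeakRef-oblivious" path can observe the object, the implementation is free to collect it and the weak reference starts dereferencing to nothing.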
QUESTION
The general, more abstract procedure for writing and later executing JIT or self-modifying code is, to my understanding, something like the following.
- Write the generated code,
- make sure it's flushed and globally0 visible,
- and then make sure that instructions fetched thence will be what was written.
From what I can tell from this post about self-modifying code on x86, manual cache management is apparently not necessary. I imagined that a clflushopt would be necessary, but x861 apparently automatically handles cache invalidation upon loading from a location with new instructions, such that instruction fetches are never stale. My question is not about x86, but I wanted to include this for comparison.
The situation in AArch64 is a little more complicated, as it distinguishes between shareability domains and how "visible" a cache operation should be. From just the official documentation for ARMv8/ARMv9, I first came up with this guess.
- Write the generated code,
- dsb ishst to ensure it's all written before continuing,
- and then isb sy to ensure that subsequent instructions are fetched from memory.
But the documentation for DMB/DSB/ISB says that "instructions following the ISB are fetched from cache or memory". That gives me an impression that cache control operations are indeed necessary. My new guess is thus this.
- Write the generated code,
- dsb ishst to ensure it's all written before continuing,
- and then ic ivau all the cache lines occupied by the new code.
But I couldn't help but feel that even this is not quite right. A little while later, I found something in the documentation that I had missed, and something much the same in a paper. Both of them give an example that looks like this.
...
ANSWER
Answered 2022-Jan-15 at 22:27(Disclaimer: this answer is based on reading specs and some tests, but not on previous experience.)
First of all, there is an explanation and example code for this exact case (one core writes code for another core to execute) in B2.2.5 of the Architecture Reference Manual (version G.b). The only difference from the examples you've shown is that the final isb needs to be executed in the thread that will execute the new code (which I guess is your "consumer"), after the cache invalidation has finished.
I found it helpful to try to understand the abstract constructs like "inner shareable domain", "point of unification" from the architecture reference in more concrete terms.
Let's think about a system with several cores. Their L1d caches are coherent, but their L1i caches need not be unified with L1d, nor coherent with each other. However, the L2 cache is unified.
The system does not have any way for L1d and L1i to talk to each other directly; the only path between them is through L2. So once we have written our new code to L1d, we have to write it back to L2 (dc cvau), then invalidate L1i (ic ivau) so that it repopulates from the new code in L2.
In this setting, the PoU is the L2 cache, and that's exactly where we want to clean / invalidate to.
There's some explanation of these terms on page D4-2646. In particular:
The PoU for an Inner Shareable shareability domain is the point by which the instruction and data caches and the translation table walks of all the PEs in that Inner Shareable shareability domain are guaranteed to see the same copy of a memory location.
Here, the Inner Shareable domain is going to contain all the cores that could run the threads of our program; indeed, it is supposed to contain all the cores running the same kernel as us (page B2-166). And because the memory we are dc cvau-ing is presumably marked with the Inner Shareable attribute or better, as any reasonable OS should do for us, it cleans to the PoU of the domain, not merely the PoU of our core (PE). So that's just what we want: a cache level that all instruction cache fills from all cores would see.
The Point of Coherency is further down; it is the level that everything on the system sees, including DMA hardware and such. Most likely this is main memory, below all the caches. We don't need to get down to that level; it would just slow everything down for no benefit.
Hopefully that helps with your question 1.
Note that the cache clean and invalidate instructions run "in the background", as it were, so that you can execute a long string of them (like a loop over all affected cache lines) without waiting for them to complete one by one. dsb ish is used once at the end to wait for them all to finish.
Some commentary about dsb, towards your questions #2 and #3. Its main purpose is as a barrier; it makes sure that all the pending data accesses within our core (in store buffers, etc.) get flushed out to L1d cache, so that all other cores can see them. This is the kind of barrier you need for general inter-thread memory ordering. (Or for most purposes, the weaker dmb suffices; it enforces ordering but doesn't actually wait for everything to be flushed.) But it doesn't do anything else to the caches themselves, nor say anything about what should happen to that data beyond L1d. So by itself, it would not be anywhere near strong enough for what we need here.
As far as I can tell, the "wait for cache maintenance to complete" effect is a sort of bonus feature of dsb ish. It seems orthogonal to the instruction's main purpose, and I'm not sure why they didn't provide a separate wcm instruction instead. But anyway, it is only dsb ish that has this bonus functionality; dsb ishst does not.
D4-2658: "In all cases, where the text in this section refers to a DMB or a DSB, this means a DMB or DSB whose required access type is both loads and stores".
I ran some tests of this on a Cortex-A72. Omitting either of the dc cvau or ic ivau usually results in the stale code being executed, even if dsb ish is done instead. On the other hand, doing dc cvau ; ic ivau without any dsb ish, I didn't observe any failures; but that could be luck or a quirk of this implementation.
To your #4, the sequence we've been discussing (dc cvau ; dsb ish ; ic ivau ; dsb ish ; isb) is intended for the case when you will run the code on the same core that wrote it. But it actually shouldn't matter which thread does the dc cvau ; dsb ish ; ic ivau ; dsb ish sequence, since the cache maintenance instructions cause all the cores to clean / invalidate as instructed, not just this one. See table D4-6. (But if the dc cvau is in a different thread than the writer, maybe the writer has to have completed a dsb ish beforehand, so that the written data really is in L1d and not still in the writer's store buffer? Not sure about that.)
The part that does matter is isb. After ic ivau is complete, the L1i caches are cleared of stale code, and further instruction fetches by any core will see the new code. However, the runner core might previously have fetched the old code from L1i, and still be holding it internally (decoded and in the pipeline, uop cache, speculative execution, etc.). isb flushes these CPU-internal mechanisms, ensuring that all further instructions to be executed have actually been fetched from the L1i cache after it was invalidated.
Thus, the isb needs to be executed in the thread that is going to run the newly written code. And moreover, you need to make sure that it is done after all the cache maintenance has fully completed; maybe by having the writer thread notify it via a condition variable or the like.
I tested this too. If all the cache maintenance instructions, plus an isb, are done by the writer, but the runner doesn't isb, then once again it can execute the stale code. I was only able to reproduce this in a test where the writer patches an instruction in a loop that the runner is executing concurrently, which probably ensures that the runner had already fetched it. This is legal provided that the old and new instructions are, say, a branch and a nop respectively (see B2.2.5), which is what I did. (But it is not guaranteed to work for arbitrary old and new instructions.)
I tried some other tests to try to arrange it so that the instruction wasn't actually executed until it was patched, yet it was the target of a branch that should have been predicted taken, in hopes that this would get it prefetched; but I couldn't get the stale version to execute in that case.
One thing I wasn't quite sure about is this. A typical modern OS may well have W^X, where no virtual page can be simultaneously writable and executable. If, after writing the code, you call the equivalent of mprotect to make the page executable, then most likely the OS is going to take care of all the cache maintenance and synchronization for you (but I guess it doesn't hurt to do it yourself too).
But another way to do it would be with an alias: you map the memory writable at one virtual address, and executable at another. The writer writes at the former address, and the runner jumps to the latter. In that case, I think you would simply dc cvau the writable address, and ic ivau the executable one, but I couldn't find confirmation of that. But I tested it, and it worked no matter which alias was passed to which cache maintenance instruction, while it failed if either instruction was omitted altogether. So it appears that the cache maintenance is done by physical address underneath.
QUESTION
For an array of n integers, there are C(n,2) = n(n−1)/2 pairs of integers. Thus, we may check all n(n−1)/2 pairs and see if there is any pair with duplicates.
I was poking around a LeetCode question and the answer for one of the algorithms included the above formula in the question explanation.
What is the point of the C(n, 2) nomenclature on the left-hand side of the equation? Is this a known/named standard that I can read and interpret, or is this more general information that must/should be ascertained from context? I understand the math on the right, but I don't have any preconceived notions that add any detail to my understanding of the function on the left.
What is the 2 doing?
...
ANSWER
Answered 2021-Dec-13 at 06:30
It's called the binomial coefficient, or "nCk" or "n choose k".
Here n is the size of the set, and k = 2 is the number of elements to select, so that e.g. the sets {3, 6} and {6, 3} are considered equal.
AFAIK, the standard notation in combinatorics is as shown above and spelled "n choose k", whereas C(...) is non-standard, requiring clarification when first introduced.
QUESTION
I encountered an interesting change in a public PR. Initially they had:
...
ANSWER
Answered 2021-Dec-10 at 16:50
That's because every struct has fields, and hence this pattern will work for any struct, but will not compile with an enum:
QUESTION
Here is a piece of code taken from the seminal "A Pedagogical Implementation of the GSM A5/1 and A5/2 "Voice Privacy" Encryption Algorithms" by Marc Briceno, Ian Goldberg, and David Wagner:
...
ANSWER
Answered 2021-Dec-04 at 13:33
The "trick" is here:
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install notion
Rust is installed and managed by the rustup tool. Rust has a 6-week rapid release process and supports a great number of platforms, so there are many builds of Rust available at any time. Please refer to rust-lang.org for more information.