random | The most random module on npm
kandi X-RAY | random Summary
Seedable random number generator supporting many common distributions. Welcome to the most random module on npm!
random Examples and Code Snippets
def set_seed(seed):
    """Sets the global random seed.

    Operations that rely on a random seed actually derive it from two seeds:
    the global and operation-level seeds. This sets the global seed.
    """
    ...

def set_random_seed(seed):
    """Sets the graph-level random seed for the default graph.

    Operations that rely on a random seed actually derive it from two seeds:
    the graph-level and operation-level seeds. This sets the graph-level seed.
    """
    ...

def stateless_parameterized_truncated_normal(shape,
                                             seed,
                                             means=0.0,
                                             stddevs=1.0,
                                             minvals=-2.0,
                                             maxvals=2.0):
    ...
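These snippets appear to come from TensorFlow's Python API (tf.random.set_seed and friends); the function bodies are omitted in this extract. A minimal usage sketch, assuming TensorFlow is installed:

import tensorflow as tf

tf.random.set_seed(42)  # global seed; pairs with op-level seeds for reproducibility
print(tf.random.uniform([2], seed=1))  # same values on every run of the program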
Community Discussions
Trending Discussions on random
QUESTION
I'd like to construct an object that works like a random number generator, but generates numbers in a specified sequence.
...ANSWER
Answered 2022-Mar-29 at 00:47
You can call next() with a generator or iterator as an argument to withdraw exactly one element from it. Saving the generator to a variable beforehand allows you to do this multiple times.
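A minimal sketch of that idea (the sequence below is illustrative): wrap the fixed sequence in a generator and draw one value per call with next().

import itertools

# A "random number generator" that replays a fixed sequence forever.
gen = itertools.cycle([4, 8, 15, 16, 23, 42])
print(next(gen))  # 4
print(next(gen))  # 8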
QUESTION
The documentation for std::rand says:
int rand();
Returns a pseudo-random integral value between 0 and RAND_MAX (0 and RAND_MAX included).
Since it is guaranteed that a non-negative integer will be returned, why is the return type signed?
I am not asking whether we should use it here. Is it a historical issue or some bad design?
...ANSWER
Answered 2022-Mar-02 at 02:12
There is much debate about unsigned. Without going too much into subjective territory, consider the following: what matters is not whether the value returned from rand() can be negative. What matters is that rand() returns a value of a certain type, and that type determines what you can do with that value. rand() never returns a negative value, but does it make sense to apply operations to the value that make the result negative? Certainly yes. For example, you might want to do:
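The original answer's C++ example is not included above; as an illustration of why signedness matters, here is a Python sketch using NumPy's fixed-width unsigned integers, which wrap around instead of going negative:

import numpy as np

# With an unsigned result type, a difference that "should" be negative wraps.
a = np.uint32(5)
b = np.uint32(7)
print(a - b)             # 4294967294 (wrapped), not -2
print(int(a) - int(b))   # -2 once the values live in a signed/arbitrary type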
QUESTION
Background
I have a complex nested JSON object, which I am trying to unpack into a pandas df in a very specific way.
JSON Object
This is an extract containing randomized data from the JSON object, showing an example of the hierarchy (incl. children) for one family (i.e. 'Falconer Family'); the extract has just one family, but the full JSON object has hundreds -
ANSWER
Answered 2022-Feb-16 at 06:41
I think this gets you pretty close; you might just need to adjust the various name columns and drop the extra data (I kept the grouping column).
The main idea is to recursively use pd.json_normalize with pd.concat for all available children levels.
EDIT: Put everything into a single function and added a section to collapse the name columns like the expected output.
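The answer's actual function is not reproduced in this extract; below is a minimal, self-contained sketch of the recursive pd.json_normalize + pd.concat idea (the data and column names are illustrative):

import pandas as pd

# Toy nested structure: each node has a "name" and an optional "children" list.
data = [
    {"name": "Falconer Family",
     "children": [
         {"name": "Mia Falconer",
          "children": [{"name": "Goldfish"}]},
     ]},
]

def flatten(nodes, level=0):
    # Normalize the current level, then recurse into every children list.
    rows = pd.json_normalize(nodes).drop(columns="children", errors="ignore")
    rows["level"] = level
    child_frames = [flatten(n["children"], level + 1)
                    for n in nodes if n.get("children")]
    return pd.concat([rows, *child_frames], ignore_index=True)

print(flatten(data))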
QUESTION
There are so many ways to define colour scales within ggplot2. After just loading ggplot2, I count 22 functions beginning with scale_color_* (or scale_colour_*) and the same number beginning with scale_fill_*. Is it possible to briefly name the purpose of the functions below? In particular, I struggle with the differences between some of the functions and when to use them.
- scale_*_binned()
- scale_*_brewer()
- scale_*_continuous()
- scale_*_date()
- scale_*_datetime()
- scale_*_discrete()
- scale_*_distiller()
- scale_*_fermenter()
- scale_*_gradient()
- scale_*_gradient2()
- scale_*_gradientn()
- scale_*_grey()
- scale_*_hue()
- scale_*_identity()
- scale_*_manual()
- scale_*_ordinal()
- scale_*_steps()
- scale_*_steps2()
- scale_*_stepsn()
- scale_*_viridis_b()
- scale_*_viridis_c()
- scale_*_viridis_d()
What I tried
I've tried to do some research on the web, but the more I read, the more confused I get. To pick a random example: "The default scale for continuous fill scales is scale_fill_continuous(), which in turn defaults to scale_fill_gradient()". I do not get what the difference between these two functions is. Again, this is just an example; the same is true for scale_color_binned() and scale_color_discrete(), where I cannot name the difference either. And in the case of scale_color_date() and scale_color_datetime(), the description says "scale_*_gradient creates a two colour gradient (low-high), scale_*_gradient2 creates a diverging colour gradient (low-mid-high), scale_*_gradientn creates a n-colour gradient", which is nice to know, but how is this related to scale_color_date() and scale_color_datetime()? Looking for those functions on the web does not give me very informative sources either. Reading on this topic also gets chaotic because there are tons of colour palettes in different packages which are sequential/diverging/qualitative, plus one can set the same colour in different ways, i.e. by colour name, RGB, number, hex code or palette name. In part this is not directly related to the question about the 2*22 functions, but in some cases it is, because providing a "wrong" palette results in an error (e.g. "Continuous value supplied to discrete scale").
Why I ask this
I need to make many plots for my work, and I am supposed to provide a function that can return all kinds of plots. The plots are supposed to have a similar layout so that they fit well together, and one aspect I need to consider is that their colour scales go well together. See here for an example where many different kinds of plots share the same colour scale. I was hoping I could use some general function that applies a colour palette to any data, regardless of whether the data is continuous or categorical and whether it maps to a fill or colour aesthetic. But since this is not how colour scales are defined in ggplot2, I need to understand what all those functions are good for.
ANSWER
Answered 2022-Feb-01 at 18:14
This is a good question... and I would have hoped there would be a practical guide somewhere. One could question whether SO is a good place to ask this, but regardless, here's my attempt to summarize the various scale_color_*() and scale_fill_*() functions built into ggplot2. Here, we'll describe the range of functions using scale_color_*(); the same general rules apply to the scale_fill_*() functions.
There are 22 functions in all, but happily we can group them intelligently based on practical usage scenarios. Three key criteria define, in practical terms, how to use each of the scale_color_*() functions:
- Nature of the mapped data. Is the data mapped to the color aesthetic discrete or continuous? CONTINUOUS data is something that can be expressed via real numbers: time, temperature, lengths - these are all continuous because even if your observations are 1 and 2, something with a theoretical value of 1.5 can exist. DISCRETE data is just the opposite: you cannot express it via real numbers. Take, for example, observations of "Model A" and "Model B": there is no obvious way to express something in between those two. As such, you can only represent these as distinct colors.
- The colorspace. The color palette used to draw onto the plot. By default, ggplot2 uses (I believe) a color palette based on evenly-spaced hue values. There are other functions built into the library that use either Brewer palettes or Viridis colorspaces.
- The level of specification. Generally, once you have defined whether the scale function is continuous and in what colorspace, there is variation in the level of control the user needs or can specify. A good example is the progression *_continuous(), *_gradient(), *_gradient2(), and *_gradientn().
We can start with the continuous scales. These functions are all used when the mapped observations are continuous variables (see above). They can further be divided by whether or not they are binned. "Binning" is just a way of grouping ranges of a continuous variable so that each range is assigned a single color; you'll notice that the effect of binning is to change the legend from a "colorbar" to a "steps" legend.
The continuous example (colorbar legend):
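The R plot code itself is not included in this extract. As a rough illustration only, here is the same idea in Python using plotnine (a ggplot2 port; assumes plotnine is installed):

from plotnine import ggplot, aes, geom_point, scale_color_gradient
from plotnine.data import mtcars

# A continuous variable (hp) mapped to color: the legend renders as a colorbar.
p = (ggplot(mtcars, aes("wt", "mpg", color="hp"))
     + geom_point()
     + scale_color_gradient(low="lightblue", high="darkblue"))
print(p)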
QUESTION
I was looking for the canonical implementation of MergeSort on Haskell to port to HOVM, and I found this StackOverflow answer. When porting the algorithm, I realized something looked silly: the algorithm has a "halve" function that does nothing but split a list in two, using half of the length, before recursing and merging. So I thought: why not make a better use of this pass, and use a pivot, to make each half respectively smaller and bigger than that pivot? That would increase the odds that recursive merge calls are applied to already-sorted lists, which might speed up the algorithm!
I've done this change, resulting in the following code:
...ANSWER
Answered 2022-Jan-27 at 19:15
Your split splits the list in two ordered halves, so merge consumes its first argument first and then just produces the second half in full. In other words, it is equivalent to ++, doing redundant comparisons on the first half which always turn out to be True.
In the true mergesort, the merge actually does twice the work on random data because the two parts are not ordered.
The split, though, spends some work on the partitioning, whereas an online bottom-up mergesort would spend no work there at all. But the built-in sort tries to detect ordered runs in the input, and apparently that extra work is not negligible.
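A small Python sketch of the point about merge degenerating into concatenation: when every element of the first half is <= every element of the second (which a pivot-based split guarantees), every comparison favors the first list.

def merge(lo, hi):
    # Standard two-way merge of two sorted lists.
    out, i, j = [], 0, 0
    while i < len(lo) and j < len(hi):
        if lo[i] <= hi[j]:
            out.append(lo[i]); i += 1
        else:
            out.append(hi[j]); j += 1
    return out + lo[i:] + hi[j:]

# Pivot-split halves: merge does len(lo) comparisons, then copies hi verbatim,
# i.e. it behaves exactly like lo + hi (Haskell's ++).
print(merge([1, 2, 3], [4, 5, 6]))  # [1, 2, 3, 4, 5, 6]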
QUESTION
I made a bubble sort implementation in C, and was testing its performance when I noticed that the -O3 flag made it run even slower than no flags at all! Meanwhile, -O2 was making it run a lot faster, as expected.
Without optimisations:
...ANSWER
Answered 2021-Oct-27 at 19:53
It looks like GCC's naïveté about store-forwarding stalls is hurting its auto-vectorization strategy here. See also Store forwarding by example for some practical benchmarks on Intel with hardware performance counters, and What are the costs of failed store-to-load forwarding on x86? Also Agner Fog's x86 optimization guides.
(gcc -O3 enables -ftree-vectorize and a few other options not included by -O2, e.g. if-conversion to branchless cmov, which is another way -O3 can hurt with data patterns GCC didn't expect. By comparison, Clang enables auto-vectorization even at -O2, although some of its optimizations are still only on at -O3.)
It's doing 64-bit loads (and branching to store or not) on pairs of ints. This means, if we swapped the last iteration, this load comes half from that store, half from fresh memory, so we get a store-forwarding stall after every swap. But bubble sort often has long chains of swapping every iteration as an element bubbles far, so this is really bad.
(Bubble sort is bad in general, especially if implemented naively without keeping the previous iteration's second element around in a register. It can be interesting to analyze the asm details of exactly why it sucks, so wanting to try is fair enough.)
Anyway, this is pretty clearly an anti-optimization you should report on GCC Bugzilla with the "missed-optimization" keyword. Scalar loads are cheap, and store-forwarding stalls are costly. (Can modern x86 implementations store-forward from more than one prior store? No: microarchitectures other than in-order Atom can't efficiently load when the load partially overlaps with one previous store and partially needs data that has to come from the L1d cache.)
Even better would be to keep buf[x+1] in a register and use it as buf[x] in the next iteration, avoiding a store and a load. (Like good hand-written asm bubble sort examples, a few of which exist on Stack Overflow.)
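As a sketch only (the real payoff is in C or asm, where the loads and stores are explicit), here is that register-carrying bubble sort expressed in Python:

def bubble_sort(buf):
    # Carry the larger element of the current pair in a local variable,
    # so each position is read once per pass instead of twice.
    for end in range(len(buf) - 1, 0, -1):
        cur = buf[0]
        for x in range(end):
            nxt = buf[x + 1]
            if cur > nxt:
                buf[x] = nxt       # swap: smaller element settles at x
            else:
                buf[x] = cur       # no swap: settle cur, carry nxt onward
                cur = nxt
        buf[end] = cur             # largest element of this pass lands at the end
    return buf

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]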
If it wasn't for the store-forwarding stalls (which AFAIK GCC doesn't know about in its cost model), this strategy might be about break-even. SSE 4.1 for a branchless pminsd / pmaxsd comparator might be interesting, but that would mean always storing, and the C source doesn't do that.
If this strategy of double-width load had any merit, it would be better implemented with pure integer on a 64-bit machine like x86-64, where you can operate on just the low 32 bits with garbage (or valuable data) in the upper half. E.g.,
QUESTION
I'm using a string Encryption/Decryption class similar to the one provided here as a solution.
This worked well for me in .Net 5.
Now I wanted to update my project to .Net 6.
When using .Net 6, the decrypted string gets cut off at a certain point, depending on the length of the input string.
▶️ To make it easy to debug/reproduce my issue, I created a public repro repository here.
- The encryption code is on purpose in a Standard 2.0 project.
- Referencing this project are both a .Net 6 and a .Net 5 console project.
Both call the encryption methods with the exact same input of "12345678901234567890" and the pass phrase of "nzv86ri4H2qYHqc&m6rL".
.Net 5 output: "12345678901234567890"
.Net 6 output: "1234567890123456"
The difference in length is 4.
I also looked at the breaking changes for .Net 6, but could not find something which guided me to a solution.
I'm glad for any suggestions regarding my issue, thanks!
Encryption Class
...ANSWER
Answered 2021-Nov-10 at 10:25
The reason is this breaking change:
DeflateStream, GZipStream, and CryptoStream diverged from typical Stream.Read and Stream.ReadAsync behavior in two ways: they didn't complete the read operation until either the buffer passed to the read operation was completely filled or the end of the stream was reached.
And the new behaviour is:
Starting in .NET 6, when Stream.Read or Stream.ReadAsync is called on one of the affected stream types with a buffer of length N, the operation completes when: at least one byte has been read from the stream, or the underlying stream they wrap returns 0 from a call to its read, indicating no more data is available.
In your case you are affected because of this code in the Decrypt method:
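The C# snippet from the answer is not included in this extract, but the portable pattern the fix requires is the same in any language: loop until the requested number of bytes arrives or the stream ends, rather than assuming a single Read fills the buffer. A minimal Python sketch of that pattern:

import io

def read_exactly(stream, count):
    # Keep reading until `count` bytes are collected or the stream is exhausted.
    chunks, remaining = [], count
    while remaining > 0:
        chunk = stream.read(remaining)
        if not chunk:          # end of stream: no more data available
            break
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

print(read_exactly(io.BytesIO(b"12345678901234567890"), 16))  # b'1234567890123456'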
QUESTION
When extending a class, I can easily add some new properties to it.
But what if, when I extend a base class, I want to add new properties to an object (a property which is a simple object) of the base class?
Here is an example with some code.
base class
...ANSWER
Answered 2021-Dec-07 at 15:50
If you're just going to reuse a property from a superclass but treat it as a narrower type, you should probably use the declare property modifier in the subclass instead of re-declaring the field:
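The TypeScript code from the answer is elided above; as a loose Python analogue only (the class names are hypothetical), you can narrow an inherited attribute's type for static checkers by re-annotating it without assigning a value, comparable in spirit to TypeScript's declare modifier:

class Base:
    config: dict[str, object]

class Derived(Base):
    # Annotation only, no assignment: narrows the declared type for type
    # checkers without creating a new runtime attribute.
    config: dict[str, str]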
QUESTION
I have a small loop of code that throws Uncaught RangeError: Invalid Array Length.
I was able to reproduce it with just this in the Google Chrome console
...ANSWER
Answered 2021-Dec-02 at 16:18
The real reason is V8's memory optimization. When you store integers, it stores the 32-bit number in place, but when you store a double, it is stored differently (as an object), so the yValues array contains references while the actual values are stored on the heap. So in your example you simply used up all the heap memory. To see the limit, use console.memory and you'll see something like this:
QUESTION
In the following code, I create two lists with the same values: one list unsorted (s_not), the other sorted (s_yes). The values are created by randint(). I run some loop for each list and time it.
...ANSWER
Answered 2021-Nov-15 at 21:05
Cache misses. When N int objects are allocated back-to-back, the memory reserved to hold them tends to be in a contiguous chunk. So crawling over the list in allocation order tends to access the memory holding the ints' values in sequential, contiguous, increasing order too.
Shuffle it, and the access pattern when crawling over the list is randomized too. Cache misses abound, provided there are enough distinct int objects that they don't all fit in cache.
At r==1 and r==2, CPython happens to treat such small ints as singletons, so, e.g., even though you have 10 million elements in the list, at r==2 it contains only (at most) 100 distinct int objects, and all the data for those fits in cache simultaneously.
Beyond that, though, you're likely to get more, and more, and more distinct int objects. Hardware caches become increasingly useless then when the access pattern is random.
Illustrating:
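The answer's own illustration is not reproduced here; below is a rough reconstruction of the kind of experiment it describes (the sizes are illustrative, and timings will vary by machine):

import random
import time

N = 10_000_000
xs = [random.randint(1, 10**9) for _ in range(N)]  # mostly distinct int objects
shuffled = xs[:]
random.shuffle(shuffled)  # same objects, randomized memory-access order

for name, lst in (("allocation order", xs), ("shuffled order", shuffled)):
    t0 = time.perf_counter()
    sum(lst)  # crawl the list, touching every int object
    print(f"{name}: {time.perf_counter() - t0:.2f}s")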
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install random
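Assuming a standard Node.js setup, installation is the usual npm command:
npm install random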