q | Quantum Computation Simulator for Go
kandi X-RAY | q Summary
Quantum Computation Simulator written in golang
q Key Features
q Examples and Code Snippets
def _count_cross_inversions(P, Q):
    """
    Count the inversions across two sorted arrays
    and combine the two arrays into one sorted array.
    For all 1 <= i <= len(P) and 1 <= j <= len(Q),
    if P[i] > Q[j], then (i, j) is a cross inversion.
    """
public static Result commonAncestorHelper(TreeNode root, TreeNode p, TreeNode q) {
if (root == null) {
return new Result(null, false);
}
if (root == p && root == q) {
return new Result(root, true);
}
Result rx = commonAncestorHelper(root.left, p, q);
public static TreeNode commonAncestor(TreeNode root, TreeNode p, TreeNode q) {
if (!covers(root, p) || !covers(root, q)) {
return null;
} else if (covers(p, q)) {
return p;
} else if (covers(q, p)) {
return q;
}
TreeNode sibling
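None of the snippets above come from q itself. As a hedged sketch of basic usage of q, assuming an API along the lines of its README (the import path github.com/itsubaki/q and the names q.New, Zero, H, CNOT, and Measure are assumptions to verify against the actual repository), preparing and measuring a Bell state might look like this:

package main

import (
    "fmt"

    "github.com/itsubaki/q" // assumed import path; verify against the repository
)

func main() {
    qsim := q.New()

    // two qubits initialised to |0>
    q0 := qsim.Zero()
    q1 := qsim.Zero()

    // Bell state: Hadamard on q0, then CNOT with q0 as control
    qsim.H(q0)
    qsim.CNOT(q0, q1)

    // measuring one qubit fixes the other to the same value
    m0 := qsim.Measure(q0)
    m1 := qsim.Measure(q1)
    fmt.Println(m0, m1)
}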
Community Discussions
Trending Discussions on q
QUESTION
I am having trouble resolving a ReDoS vulnerability identified by npm audit. My application has a nested sub-dependency, ansi-html, that is vulnerable to attack, but unfortunately it seems that the maintainers have gone AWOL. As you can see in the comments section of that GitHub issue, to get around this problem the community has made a fork of the repo called ansi-html-community, located here, which addresses this vulnerability.
Thus, I would like to replace all nested references of ansi-html with ansi-html-community.
My normal strategy of using npm-force-resolutions does not seem to be able to override nested sub-dependencies with a different package altogether, but rather only with the same package at a different version number. I have researched this for several hours, but unfortunately, the only way I have found to fix this would appear to be with yarn, which I am now seriously considering using instead of npm. However, this is not ideal as our entire CI/CD pipeline is configured to use npm.
Does anyone know of any other way to accomplish nested sub-dependency package substitution/resolution without having to switch over to using yarn?
Related Questions
These are questions of interest that I was able to find, but unfortunately they tend to only discuss methods to override the package version number, not the package itself.
- Discusses how to override the version number: How do I override nested NPM dependency versions?
- Has a comment discussion about npm shrinkwrap (not ideal)
- Other related StackOverflow questions
...ANSWER
Answered 2021-Oct-29 at 21:01
I figured it out. As of October 2021, the solution using npm-force-resolutions is actually very similar to how you would specify it using yarn. You just need to provide a link to the tarball where you would normally specify the overriding version number. Your resolutions section of package.json should look like this:
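(The original snippet is not reproduced on this page; the entry below is a sketch of the idea, and the exact version in the tarball URL is an assumption to verify on the npm registry.)

{
  "resolutions": {
    "ansi-html": "https://registry.npmjs.org/ansi-html-community/-/ansi-html-community-0.0.8.tgz"
  }
}

npm-force-resolutions then patches package-lock.json so that nested ansi-html references resolve to that tarball instead.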
QUESTION
We have some apps (or maybe we should call them a handful of scripts) that use Google APIs to facilitate some administrative tasks. Recently, after making another client_id in the same project, I started getting an error message similar to the one described in localhost redirect_uri does not work for Google Oauth2 (results in 400: invalid_request error). I.e.,
Error 400: invalid_request
You can't sign in to this app because it doesn't comply with Google's OAuth 2.0 policy for keeping apps secure.
You can let the app developer know that this app doesn't comply with one or more Google validation rules.
Request details:
The content in this section has been provided by the app developer. This content has not been reviewed or verified by Google.
If you’re the app developer, make sure that these request details comply with Google policies.
redirect_uri: urn:ietf:wg:oauth:2.0:oob
How do I get through this error? It is important to note that:
- The OAuth consent screen for this project is marked as "Internal". Therefore any mentions of Google review of the project, or publishing status are irrelevant
- I do have "Trust internal, domain-owned apps" enabled for the domain
- Another client id in the same project works and there are no obvious differences between the client IDs - they are both "Desktop" type which only gives me a Client ID and Client secret that are different
- This is a command-line script, so I use the "copy/paste" verification method as documented here, hence the urn:ietf:wg:oauth:2.0:oob redirect URI (copy/paste is the only friendly way to run this on a headless machine which has no browser).
- I was able to reproduce the same problem in a dev domain. I have three client IDs. The oldest one is from January 2021, another one is from December 2021, and one I created today, in March 2022. Of those, only the December 2021 one works and lets me choose which account to authenticate with before it either accepts it or rejects it with "Error 403: org_internal" (this is expected). The other two give me an "Error 400: invalid_request" and do not even let me choose the "internal" account. Here are the URLs generated by my app (I use the Ruby Google client APIs), and the only difference between them is the client_id: January 2021, December 2021, March 2022.
Here is the part of the code around the authorization flow, and the URLs for the different client IDs are what was produced on the $stderr.puts url line. It is pretty much the same thing as documented in the official example here (version as of this writing).
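(The code block itself did not survive on this page; below is a rough sketch of the documented googleauth copy/paste flow. The file names, scope, and user_id are placeholders of mine, not values from the original script.)

require "googleauth"
require "googleauth/stores/file_token_store"

OOB_URI = "urn:ietf:wg:oauth:2.0:oob".freeze

client_id   = Google::Auth::ClientId.from_file("credentials.json")          # placeholder path
token_store = Google::Auth::Stores::FileTokenStore.new(file: "tokens.yaml") # placeholder path
scope       = ["https://www.googleapis.com/auth/drive.metadata.readonly"]   # placeholder scope
authorizer  = Google::Auth::UserAuthorizer.new(client_id, scope, token_store)
user_id     = "default"

credentials = authorizer.get_credentials(user_id)
if credentials.nil?
  url = authorizer.get_authorization_url(base_url: OOB_URI)
  $stderr.puts url                 # the URLs compared in the question come from this line
  code = $stdin.gets.chomp         # paste the verification code that Google displays
  credentials = authorizer.get_and_store_credentials_from_code(
    user_id: user_id, code: code, base_url: OOB_URI
  )
end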
ANSWER
Answered 2022-Mar-02 at 07:56
steps.oauth.v2.invalid_request 400: This error name is used for multiple different kinds of errors, typically for missing or incorrect parameters sent in the request. If is set to false, use fault variables (described below) to retrieve details about the error, such as the fault name and cause.
- GenerateAccessToken
- GenerateAuthorizationCode
- GenerateAccessTokenImplicitGrant
- RefreshAccessToken
QUESTION
After upgrading to Android 12, the application is not compiling. It shows
"Manifest merger failed with multiple errors, see logs"
Error showing in Merged manifest:
Merging Errors: Error: android:exported needs to be explicitly specified for . Apps targeting Android 12 and higher are required to specify an explicit value for android:exported when the corresponding component has an intent filter defined. See https://developer.android.com/guide/topics/manifest/activity-element#exported for details. main manifest (this file)
I have set all the activities with android:exported="false", but it is still showing this issue.
My manifest file:
...ANSWER
Answered 2021-Aug-04 at 09:18
I'm not sure what you're using to code, but in order to set it in Android Studio, open the manifest of your project and, under the "activity" section, put android:exported="true" (or false if that is what you prefer). I have attached an example.
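(The attached example is an image and is not reproduced here; below is a minimal sketch of the relevant AndroidManifest.xml entry, with a hypothetical activity name.)

<activity
    android:name=".MainActivity"
    android:exported="true">
    <!-- Any component with an intent filter must declare android:exported
         explicitly when targeting Android 12 (API 31) or higher. -->
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>

If every activity in your own manifest already declares android:exported and the merge still fails, the offending component typically comes from a library manifest being merged in, and it needs an explicit value as well.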
QUESTION
Hey, I am trying to run my application and I am getting this error
build.gradle
...ANSWER
Answered 2021-Nov-19 at 06:20
Dependency: androidx.lifecycle:lifecycle-runtime-ktx:2.4.0. The issue is with this dependency; there may be some transitive dependency error. I would suggest using the alpha version of this dependency, which worked for me. Replace it with this dependency: androidx.lifecycle:lifecycle-*:2.4.0-alpha03
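(As a sketch of that change in a Gradle Kotlin DSL module build script; the Groovy syntax is analogous, and the artifact name is the one given in the answer.)

dependencies {
    // previously: implementation("androidx.lifecycle:lifecycle-runtime-ktx:2.4.0")
    implementation("androidx.lifecycle:lifecycle-runtime-ktx:2.4.0-alpha03")
}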
QUESTION
I need to calculate the square root of some numbers, for example √9 = 3 and √2 = 1.4142. How can I do it in Python?
The inputs will probably be all positive integers, and relatively small (say less than a billion), but just in case they're not, is there anything that might break?
Related
- Integer square root in python
- Is there a short-hand for nth root of x in Python?
- Difference between **(1/2), math.sqrt and cmath.sqrt?
- Why is math.sqrt() incorrect for large numbers?
- Python sqrt limit for very large numbers?
- Which is faster in Python: x**.5 or math.sqrt(x)?
- Why does Python give the "wrong" answer for square root? (specific to Python 2)
- calculating n-th roots using Python 3's decimal module
- How can I take the square root of -1 using python? (focused on NumPy)
- Arbitrary precision of square roots
Note: This is an attempt at a canonical question after a discussion on Meta about an existing question with the same title.
...ANSWER
Answered 2022-Feb-04 at 19:44
math.sqrt()
The math module from the standard library has a sqrt function to calculate the square root of a number. It takes any type that can be converted to float (which includes int) as an argument and returns a float.
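For example, a small self-contained illustration:

import math

print(math.sqrt(9))   # 3.0 (the result is always a float)
print(math.sqrt(2))   # 1.4142135623730951
# math.sqrt raises ValueError for negative inputs; cmath.sqrt handles those.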
QUESTION
I am sorry but I am really confused and leery now, so I am resorting to SO to get some clarity.
I am running Android Studio Bumblebee and saw a notification about a major new release with the following text:
...ANSWER
Answered 2022-Feb-10 at 11:10
This issue was fixed by Google (10 February 2022).
You can now update Android Studio normally.
QUESTION
There are so many ways to define colour scales within ggplot2. After just loading ggplot2, I count 22 functions beginning with scale_color_* (or scale_colour_*) and the same number beginning with scale_fill_*. Is it possible to briefly name the purpose of the functions below? Particularly, I struggle with the differences between some of the functions and when to use them.
- scale_*_binned()
- scale_*_brewer()
- scale_*_continuous()
- scale_*_date()
- scale_*_datetime()
- scale_*_discrete()
- scale_*_distiller()
- scale_*_fermenter()
- scale_*_gradient()
- scale_*_gradient2()
- scale_*_gradientn()
- scale_*_grey()
- scale_*_hue()
- scale_*_identity()
- scale_*_manual()
- scale_*_ordinal()
- scale_*_steps()
- scale_*_steps2()
- scale_*_stepsn()
- scale_*_viridis_b()
- scale_*_viridis_c()
- scale_*_viridis_d()
What I tried
I've tried to do some research on the web, but the more I read the more confused I get. To give a random example: "The default scale for continuous fill scales is scale_fill_continuous() which in turn defaults to scale_fill_gradient()". I do not get what the difference between the two functions is. Again, this is just an example. The same is true for scale_color_binned() and scale_color_discrete(), where I cannot name the difference. And in the case of scale_color_date() and scale_color_datetime(), the description says "scale_*_gradient creates a two colour gradient (low-high), scale_*_gradient2 creates a diverging colour gradient (low-mid-high), scale_*_gradientn creates a n-colour gradient", which is nice to know, but how is this related to scale_color_date() and scale_color_datetime()? Looking for those functions on the web does not give me very informative sources either. Reading on this topic also gets chaotic because there are tons of colour palettes in different packages which are sequential/diverging/qualitative, plus one can set the same colour in different ways, i.e. by colour name, RGB, number, hex code or palette name. In part this is not directly related to the question about the 2*22 functions, but in some cases it is, because providing a "wrong" palette results in an error (e.g. the error "Continuous value supplied to discrete scale").
Why I ask this
I need to do many plots for my work, and I am supposed to provide some function that returns all kinds of plots. The plots are supposed to have a similar layout so that they fit well together. One aspect I need to consider here is that the colour scales of the plots go well together. See here for example, where so many different kinds of plots have the same colour scale. I was hoping I could use some general function which provides a colour palette for any data, regardless of whether the data is continuous or categorical, and whether it is a fill or colour aesthetic. But since this is not how colour scales are defined in ggplot2, I need to understand what all those functions are good for.
ANSWER
Answered 2022-Feb-01 at 18:14
This is a good question... and I would have hoped there would be a practical guide somewhere. One could question if SO would be a good place to ask this question, but regardless, here's my attempt to summarize the various scale_color_*() and scale_fill_*() functions built into ggplot2. Here, we'll describe the range of functions using scale_color_*(); however, the same general rules will apply for scale_fill_*() functions.
There are 22 functions in all, but happily we can group them intelligently based on practical usage scenarios. There are three key criteria that can be used to define practically how to use each of the scale_color_*() functions:
- Nature of the mapping data. Is the data mapped to the color aesthetic discrete or continuous? CONTINUOUS data is something that can be explained via real numbers: time, temperature, lengths - these are all continuous because even if your observations are 1 and 2, there can exist something that would have a theoretical value of 1.5. DISCRETE data is just the opposite: you cannot express this data via real numbers. Take, for example, if your observations were "Model A" and "Model B". There is no obvious way to express something in-between those two. As such, you can only represent these as single colors or numbers.
- The Colorspace. The color palette used to draw onto the plot. By default, ggplot2 uses (I believe) a color palette based on evenly-spaced hue values. There are other functions built into the library that use either Brewer palettes or Viridis colorspaces.
- The level of Specification. Generally, once you have defined if the scale function is continuous and in what colorspace, you have variation on the level of control or specification the user will need or can specify. A good example of this is the functions *_continuous(), *_gradient(), *_gradient2(), and *_gradientn().
We can start off with continuous scales. These functions are all used when applied to observations that are continuous variables (see above). The functions here can further be defined if they are either binned or not binned. "Binning" is just a way of grouping ranges of a continuous variable to all be assigned to a particular color. You'll notice the effect of "binning" is to change the legend keys from a "colorbar" to a "steps" legend.
The continuous example (colorbar legend):
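(The plot itself is not reproduced on this page; the lines below are a minimal sketch using ggplot2's built-in mpg data, which is my own choice of example data rather than the answer's.)

library(ggplot2)

# cty is continuous, so the default continuous colour scale draws a colourbar legend;
# swapping in scale_colour_binned() would turn it into a stepped legend instead.
ggplot(mpg, aes(displ, hwy, colour = cty)) +
  geom_point() +
  scale_colour_continuous()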
QUESTION
I would like to calculate a quadratic form x' Q y in Julia.
What would be the most efficient way to calculate this for the cases:
- No assumption.
- Q is symmetric.
- x and y are the same (x = y).
- Both Q is symmetric and x = y.
I know Julia has dot(), but I wonder if it is faster than a BLAS call.
ANSWER
Answered 2022-Jan-09 at 22:28
If your matrix is symmetric, use the Symmetric wrapper to improve performance (a different method is called then):
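(The answer's code is not reproduced on this page; the snippet below is a minimal sketch of the idea and assumes Julia 1.4 or newer for the three-argument dot.)

using LinearAlgebra

n = 1_000
A = rand(n, n)
Q = Symmetric(A)   # the wrapper lets Julia dispatch to symmetric-specific methods
x = rand(n)
y = rand(n)

dot(x, Q, y)       # three-argument dot evaluates x' * Q * y without forming Q * y first
x' * Q * y         # same result via ordinary matrix-vector multiplication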
QUESTION
The standard defines several 'happens before' relations that extend the good old 'sequenced before' over multiple threads:
[intro.races]
11 An evaluation A simply happens before an evaluation B if either
(11.1) — A is sequenced before B, or
(11.2) — A synchronizes with B, or
(11.3) — A simply happens before X and X simply happens before B.
[Note 10: In the absence of consume operations, the happens before and simply happens before relations are identical. — end note]
12 An evaluation A strongly happens before an evaluation D if, either
(12.1) — A is sequenced before D, or
(12.2) — A synchronizes with D, and both A and D are sequentially consistent atomic operations ([atomics.order]), or
(12.3) — there are evaluations B and C such that A is sequenced before B, B simply happens before C, and C is sequenced before D, or
(12.4) — there is an evaluation B such that A strongly happens before B, and B strongly happens before D.
[Note 11: Informally, if A strongly happens before B, then A appears to be evaluated before B in all contexts. Strongly happens before excludes consume operations. — end note]
(bold mine)
The difference between the two seems very subtle. 'Strongly happens before' is never true for matching pairs of release-acquire operations (unless both are seq-cst), but it still respects release-acquire synchronization in a way, since operations sequenced before a release 'strongly happen before' the operations sequenced after the matching acquire.
Why does this difference matter?
'Strongly happens before' was introduced in C++20, and pre-C++20, 'simply happens before' used to be called 'strongly happens before'. Why was it introduced?
[atomics.order]/4 says that the total order of all seq-cst operations is consistent with 'strongly happens before'.
Does it mean that it's not consistent with 'simply happens before'? If so, why not?
I'm ignoring the plain 'happens before', because it differs from 'simply happens before' only in its handling of memory_order_consume, the use of which is temporarily discouraged, since apparently most (all?) major compilers treat it as memory_order_acquire.
I've already seen this Q&A, but it doesn't explain why 'strongly happens before' exists, and doesn't fully address what it means (it just states that it doesn't respect release-acquire synchronization, which isn't completely the case).
Found the proposal that introduced 'simply happens before'.
I don't fully understand it, but it explains the following:
- 'Strongly happens before' is a weakened version of 'simply happens before'.
- The difference is only observable when seq-cst is mixed with acq-rel on the same variable (I think it means when an acquire load reads a value from a seq-cst store, or when a seq-cst load reads a value from a release store). But the exact effects of mixing the two are still unclear to me.
ANSWER
Answered 2022-Jan-02 at 18:21Here's my current understanding, which could be incomplete or incorrect. A verification would be appreciated.
C++20 renamed strongly happens before to simply happens before, and introduced a new, more relaxed definition for strongly happens before, which imposes less ordering.
Simply happens before is used to reason about the presence of data races in your code. (Actually that would be the plain 'happens before', but the two are equivalent in the absence of consume operations, the use of which is discouraged by the standard, since most (all?) major compilers treat them as acquires.)
The weaker strongly happens before is used to reason about the global order of seq-cst operations.
This change was introduced in proposal P0668R5: Revising the C++ memory model, which is based on the paper Repairing Sequential Consistency in C/C++11 by Lahav et al (which I didn't fully read).
The proposal explains why the change was made. Long story short, the way most compilers implement atomics on Power and ARM architectures turned out to be non-conformant in rare edge cases, and fixing the compilers had a performance cost, so they fixed the standard instead.
The change only affects you if you mix seq-cst operations with acquire-release operations on the same atomic variable (i.e. if an acquire operation reads a value from a seq-cst store, or a seq-cst operation reads a value from a release store).
If you don't mix operations in this manner, then you're not affected (i.e. can treat simply happens before and strongly happens before as equivalent).
The gist of the change is that the synchronization between a seq-cst operation and the corresponding acquire/release operation no longer affects the position of this specific seq-cst operation in the global seq-cst order, but the synchronization itself is still there.
This makes the seq-cst order for such seq-cst operations rather moot; see below.
The proposal presents the following example, and I'll try to explain my understanding of it:
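(The proposal's own listing is not reproduced on this page, so the snippet below is only my minimal sketch of the kind of mixing being described, i.e. a seq-cst load reading from a release store on the same atomic; it is not the example from P0668.)

#include <atomic>
#include <thread>

std::atomic<int> x{0};
int data = 0;

void writer() {
    data = 42;                              // non-atomic write
    x.store(1, std::memory_order_release);  // release store
}

void reader() {
    // A seq-cst load also has acquire semantics, so if it reads 1 it still
    // synchronizes with the release store and data == 42 is guaranteed.
    // What changed in C++20 is only how such mixed operations may be placed
    // in the single total order of seq-cst operations.
    if (x.load(std::memory_order_seq_cst) == 1) {
        (void)data;
    }
}

int main() {
    std::thread t1(writer), t2(reader);
    t1.join();
    t2.join();
}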
QUESTION
I couldn't find a question similar to the one that I have here. I have a very large named list of named vectors that match column names in a dataframe. I would like to use the list of named vectors to replace values in the dataframe columns that match each list element's name. That is, the name of the vector in the list matches the name of the dataframe column and the key-value pair in each vector element will be used to recode the column.
Reprex below:
...ANSWER
Answered 2021-Dec-13 at 04:44
One workaround would be to use your map2_dfr code, but then bind the columns that are needed to the map2_dfr output. Though you still have to drop the names column.
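(Neither the reprex nor the answer's exact code appears on this page; the snippet below is a hypothetical sketch of the general approach, with made-up column names and values.)

library(dplyr)
library(purrr)

df <- data.frame(id = 1:2, col_a = c("x", "y"), col_b = c("1", "2"))
lookup <- list(col_a = c(x = "X-ray", y = "Yankee"),
               col_b = c(`1` = "one", `2` = "two"))

# Recode only the columns named in the lookup list, then bind back the rest.
recoded <- map2_dfc(df[names(lookup)], lookup, ~ recode(.x, !!!.y))
result  <- bind_cols(df[setdiff(names(df), names(lookup))], recoded)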
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install q