complexity | Library for calculating the complexity of PHP code units
kandi X-RAY | complexity Summary
Library for calculating the complexity of PHP code units.
Top functions reviewed by kandi - BETA
- Calculates the complexity of an abstract syntax tree.
- Calculates the complexity for a string.
- Calculates the complexity for a given source file.
- Gets the default parser.
Community Discussions
Trending Discussions on complexity
QUESTION
For example, given a list of integers List<Integer> list = Arrays.asList(5,4,5,2,2), how can I get a maxHeap from this list in O(n) time complexity?
The naive method:
...ANSWER
Answered 2021-Aug-28 at 15:09
According to the Javadoc of PriorityQueue(PriorityQueue):
Creates a PriorityQueue containing the elements in the specified priority queue. This priority queue will be ordered according to the same ordering as the given priority queue.
So we can extend PriorityQueue as CustomComparatorPriorityQueue to hold the desired comparator and the Collection we need to heapify, then call new PriorityQueue(PriorityQueue) with an instance of CustomComparatorPriorityQueue.
Below is tested to work in Java 15.
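The Java snippet itself is omitted from this excerpt. As a cross-language illustration of the same idea — the O(n) heapify that the PriorityQueue(PriorityQueue) constructor relies on — here is a hedged Python sketch (Python's heapq only provides a min-heap, so values are negated to simulate a max-heap; the names are illustrative):

```python
import heapq

def build_max_heap(values):
    # heapq.heapify arranges a list into heap order in O(n),
    # the same bound the answer attributes to the copy constructor.
    heap = [-v for v in values]  # negate so the min-heap acts as a max-heap
    heapq.heapify(heap)
    return heap

heap = build_max_heap([5, 4, 5, 2, 2])
print(-heapq.heappop(heap))  # 5
```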
QUESTION
Is the time complexity of nested for, while, and if statements the same? Suppose a is given as an array of length n.
ANSWER
Answered 2022-Mar-02 at 14:43
Am I right?
Yes!
The double loop:
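The original snippet is omitted from this excerpt; as a hedged Python sketch of the kind of double loop being discussed: nested loops perform n * n constant-time steps, so both the for-loop and the while-loop formulation are O(n^2), and the if inside the body adds only O(1) per iteration.

```python
def count_equal_pairs(a):
    # Nested loops: n * n iterations, each doing O(1) work -> O(n^2).
    n = len(a)
    count = 0
    for i in range(n):
        j = 0
        while j < n:              # a while-loop has the same bound as a for-loop
            if a[i] == a[j]:      # the if-statement adds only constant work
                count += 1
            j += 1
    return count
```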
QUESTION
I'm tackling an exercise which is supposed to exactly benchmark the time complexity of such code.
The data I'm handling is made up of pairs of strings like this: hbFvMF,PZLmRb. Each string is present two times in the dataset, once in position 1 and once in position 2, so the first string would point to zvEcqe,hbFvMF for example, and the list goes on...
I've been able to produce code which doesn't have much problem sorting these datasets up to 50k pairs, where it takes about 4-5 minutes. 10k gets sorted in a matter of seconds.
The problem is that my code is supposed to handle datasets of up to 5 million pairs, so I'm trying to see what more I can do. I will post my two best attempts: the initial one with vectors, which I thought I could upgrade by replacing vector with unordered_map because of the better time complexity when searching. But to my surprise, there was almost no difference between the two containers when I tested it. I'm not sure if my approach to the problem or the containers I'm choosing are causing the steep sorting times...
Attempt with vectors:
...ANSWER
Answered 2022-Feb-22 at 07:13
You can use a trie data structure; here's a paper that explains an algorithm to do that: https://people.eng.unimelb.edu.au/jzobel/fulltext/acsc03sz.pdf
But you have to implement the trie from scratch, because as far as I know there is no default trie implementation in C++.
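For illustration only, here is a minimal trie sketch in Python (hypothetical, not the algorithm from the linked paper): lookups cost O(length of the key) rather than O(log n) whole-string comparisons, which is what makes the structure attractive for this workload.

```python
class TrieNode:
    __slots__ = ("children", "value")
    def __init__(self):
        self.children = {}   # maps one character to the next node
        self.value = None    # payload stored at the end of a key

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, key, value):
        # Walk the key character by character, creating nodes as needed.
        node = self.root
        for ch in key:
            node = node.children.setdefault(ch, TrieNode())
        node.value = value

    def get(self, key):
        # Lookup costs O(len(key)), independent of how many keys are stored.
        node = self.root
        for ch in key:
            node = node.children.get(ch)
            if node is None:
                return None
        return node.value

t = Trie()
t.insert("hbFvMF", "PZLmRb")
print(t.get("hbFvMF"))  # PZLmRb
```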
QUESTION
I have an array of positive integers. For example:
...ANSWER
Answered 2022-Feb-27 at 22:44
This problem has a fun O(n) solution.
If you draw a graph of cumulative sum vs index, then:
The average value in the subarray between any two indexes is the slope of the line between those points on the graph.
The first highest-average-prefix will end at the point that makes the highest angle from 0. The next highest-average-prefix must then have a smaller average, and it will end at the point that makes the highest angle from the first ending. Continuing to the end of the array, we find that...
These segments of highest average are exactly the segments in the upper convex hull of the cumulative sum graph.
Find these segments using the monotone chain algorithm. Since the points are already sorted, it takes O(n) time.
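A hedged Python sketch of that approach (names are illustrative): build the points (index, cumulative sum) and keep only the upper convex hull with a single monotone-chain pass, which is O(n) because the points already arrive sorted by index.

```python
def upper_hull_of_prefix_sums(a):
    # Cumulative-sum graph: point i is (i, sum of the first i elements).
    points = [(0, 0)]
    total = 0
    for i, x in enumerate(a, start=1):
        total += x
        points.append((i, total))

    def cross(o, p, q):
        # Positive for a counter-clockwise turn o -> p -> q.
        return (p[0] - o[0]) * (q[1] - o[1]) - (p[1] - o[1]) * (q[0] - o[0])

    hull = []
    for pt in points:
        # For the upper hull, pop while the last turn is not clockwise.
        while len(hull) >= 2 and cross(hull[-2], hull[-1], pt) >= 0:
            hull.pop()
        hull.append(pt)
    # Consecutive hull points delimit the highest-average segments above.
    return hull
```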
QUESTION
For example, if the list is [2,1,2,5,7,6,9], there are 3 possible ways of splitting:
[2,1,2] [5,7,6,9]
[2,1,2,5] [7,6,9]
[2,1,2,5,7,6] [9]
I'm supposed to calculate how many times the list can be split in a way that every element on the left is smaller than every element on the right. So with this list, the output would be 3.
Here's my current solution:
ANSWER
Answered 2022-Jan-28 at 15:40
Compute all prefix maxima and suffix minima in linear time, then combine them in linear time.
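A hedged Python sketch of that recipe (helper names are illustrative): a split after index i is valid exactly when everything on the left is smaller than everything on the right, i.e. prefix_max[i] < suffix_min[i + 1].

```python
def count_splits(a):
    n = len(a)
    # prefix_max[i]: largest value among a[0..i].
    prefix_max = [a[0]] * n
    for i in range(1, n):
        prefix_max[i] = max(prefix_max[i - 1], a[i])
    # suffix_min[i]: smallest value among a[i..n-1].
    suffix_min = [a[-1]] * n
    for i in range(n - 2, -1, -1):
        suffix_min[i] = min(suffix_min[i + 1], a[i])
    # Count cut points in a single linear pass.
    return sum(prefix_max[i] < suffix_min[i + 1] for i in range(n - 1))

print(count_splits([2, 1, 2, 5, 7, 6, 9]))  # 3, matching the example above
```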
QUESTION
I used a function in Python/Numpy to solve a problem in combinatorial game theory.
...ANSWER
Answered 2022-Jan-19 at 09:34
The original code can be re-written in the following way:
QUESTION
From various sources, I have come to the understanding that there are four main techniques of string formatting/interpolation in Python 3 (3.6+ for f-strings):
- Formatting with %, which is similar to C's printf
- The str.format() method
- Formatted string literals/f-strings
- Template strings from the standard library string module
My knowledge of usage mainly comes from Python String Formatting Best Practices (source A):
- str.format() was created as a better alternative to the %-style, so the latter is now obsolete
  - However, str.format() is vulnerable to attacks if user-given format strings are not properly handled
- f-strings allow str.format()-like behavior only for string literals but are shorter to write and are actually somewhat-optimized syntactic sugar for concatenation
- Template strings are safer than str.format() (demonstrated in the first source) and the other two methods (implied in the first source) when dealing with user input
I understand that the aforementioned vulnerability in str.format() comes from the method being usable on any normal strings where the delimiting braces are part of the string data itself. Malicious user input containing brace-delimited replacement fields can be supplied to the method to access environment attributes. I believe this is unlike the other ways of formatting, where the programmer is the only one that can supply variables to the pre-formatted string. For example, f-strings have similar syntax to str.format() but, because f-strings are literals and the inserted values are evaluated separately through concatenation-like behavior, they are not vulnerable to the same attack (source B). Both %-formatting and Template strings also seem to only be supplied variables for substitution by the programmer; the main difference pointed out is Template's more limited functionality.
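For concreteness, a minimal, hypothetical sketch of that attack shape (the class and template here are made up for illustration, not taken from the sources):

```python
class Config:
    def __init__(self):
        self.debug = False

# A user-supplied template: the replacement field walks attribute
# access on the passed object and leaks the module's global namespace.
user_template = "{c.__init__.__globals__}"
print(user_template.format(c=Config()))
```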
I have seen a lot of emphasis on the vulnerability of str.format(), which leaves me with questions about what I should be wary of when using the other techniques. Source A describes Template strings as the safest of the above methods "due to their reduced complexity":
The more complex formatting mini-languages of the other string formatting techniques might introduce security vulnerabilities to your programs.
- Yes, it seems like f-strings are not vulnerable in the same way str.format() is, but are there known concerns about f-string security, as is implied by source A? Is the concern more like risk mitigation for unknown exploits and unintended interactions?
I am not familiar with C and I don't plan on using the clunkier %/printf-style formatting, but I have heard that C's printf had its own potential vulnerabilities. In addition, both sources A and B seem to imply a lack of security with this method. The top answer in Source B says,
String formatting may be dangerous when a format string depends on untrusted data. So, when using str.format() or %-formatting, it's important to use static format strings, or to sanitize untrusted parts before applying the formatter function.
- Do %-style strings have known security concerns?
- Lastly, which methods should be used and how can user input-based attacks be prevented (e.g. filtering input with regex)?
  - More specifically, are Template strings really the safer option? And can f-strings be used just as easily and safely while granting more functionality?
ANSWER
Answered 2022-Jan-18 at 12:53
It doesn't matter which format you choose: any format and library can have its own downsides and vulnerabilities. The bigger question you need to ask yourself is what risk factor and scenario you are facing, and what you are going to do about it.
First ask yourself: will there be a scenario where a user or an external entity of some kind (for example, an external system) sends you a format string? If the answer is no, there is no risk. If the answer is yes, consider whether this is actually needed. If not, remove it to eliminate the risk. If you do need it, you can perform whitelist-based input validation and exclude all format-specific special characters from the list of permitted characters, in order to eliminate the risk. For example, no format string can pass the generic regular expression ^[a-zA-Z0-9\s]*$.
So the bottom line is: it doesn't matter which format string type you use; what's really important is what you do with it and how you reduce and eliminate the risk of it being tampered with.
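As a hedged sketch of that advice (the function and names are illustrative, not from the answer): keep the format string static and whitelist-validate the user-supplied value before it goes anywhere near a formatter.

```python
import re

ALLOWED = re.compile(r"^[a-zA-Z0-9\s]*$")  # the answer's generic whitelist

def render_greeting(user_name: str) -> str:
    # Reject anything containing format-specific characters such as { } %.
    if not ALLOWED.fullmatch(user_name):
        raise ValueError("disallowed characters in user input")
    # The template itself stays static and programmer-controlled.
    return "Hello, {name}!".format(name=user_name)

print(render_greeting("Alice"))       # Hello, Alice!
# render_greeting("{0.__class__}")    # would raise ValueError
```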
QUESTION
What is the time complexity of this particular implementation of Dijkstra's algorithm?
I know several answers to this question say O(E log V) when you use a min heap, and so do this article and this article. However, the article here says O(V + E log E), and it has similar (but not exactly the same) logic as the code below.
Different implementations of the algorithm can change the time complexity. I'm trying to analyze the complexity of the implementation below, but optimizations like checking visitedSet and ignoring repeated vertices in minHeap are making me doubt myself.
Here is the pseudo code:
...ANSWER
Answered 2021-Dec-22 at 00:38
Despite the test, this implementation of Dijkstra may put Ω(E) items in the priority queue. This will cost Ω(E log E) with every comparison-based priority queue.
Why not E log V? Well, assuming a connected, simple, nontrivial graph, we have Θ(E log V) = Θ(E log E) since log (V−1) ≤ log E < log V² = 2 log V.
The O(E + V log V)-time implementations of Dijkstra's algorithm depend on a(n amortized) constant-time DecreaseKey operation, avoiding multiple entries for an individual vertex. The implementation in this question will likely be faster in practice on sparse graphs, however.
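The pseudo code from the question is omitted in this excerpt; here is a hedged Python sketch of the lazy-deletion variant being analyzed (adjacency lists of (neighbor, weight) pairs are assumed):

```python
import heapq

def dijkstra(adj, source):
    # Lazy-deletion variant: duplicate entries may sit in the heap, so up
    # to O(E) pushes and O(E log E) total heap work, as analyzed above.
    dist = {source: 0}
    visited = set()
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue              # skip stale (repeated) heap entries
        visited.add(u)
        for v, w in adj[u]:
            if v not in visited and d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))  # may duplicate v
    return dist

# Example: shortest distances from vertex 0.
print(dijkstra({0: [(1, 4), (2, 1)], 1: [], 2: [(1, 2)]}, 0))  # {0: 0, 2: 1, 1: 3}
```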
QUESTION
I'm currently trying to implement a mutable slice/view into a buffer that supports taking subslices safely for in-memory message traversal. A minimal example would be
...ANSWER
Answered 2021-Dec-31 at 17:58
Returning a MutView<'s> from subview is unsound.
It would allow users to call subview multiple times and yield potentially overlapping ranges, which would violate Rust's referential guarantee that mutable references are exclusive. This can be done easily with immutable references, since they can be shared, but there are much stricter requirements for mutable references. For this reason, mutable references derived from self must have their lifetime bound to self in order to "lock out" access to it while the mutable borrow is still in use. The compiler is enforcing that by telling you &mut self.data[..] is &'a mut [u8] instead of &'s mut [u8].
The only option I see is to add an into_subview and into_slice method that consumes self.
That's the main option I see; the key part you need to guarantee is exclusivity, and consuming self would remove it from the equation. You can also take inspiration from the mutable methods on slices like split_mut, split_at_mut, chunks_mut, etc., which are carefully designed to yield multiple mutable elements/sub-slices at the same time.
You could use std::mem::transmute to force the lifetimes to be what you want (warning: transmute is very unsafe and is easy to use incorrectly); however, you are then burdened with upholding the referential guarantees mentioned above yourself. The subview() -> MutView<'s> function should then be marked unsafe, with the safety requirement that the ranges are exclusive. I do not recommend doing that except in exceptional cases where you are returning multiple mutable references and have checked that they don't overlap.
I'd have to see exactly what kind of API you're hoping to design to give better advice.
QUESTION
I've been reading about this for a while, and nothing makes sense, and the explanations are conflicting, and the comments are proving that.
So far what I understood is that JWTs are storing information encoded by the server, can have expiry times, and the server with its secret key can decode the information in it if it's valid. Makes sense.
It is useful for scalability, so independent APIs can decode, and validate the information in the token, as long as they have the secret key. Also, there's no need for the information to be stored in any database, not like in sessions. Makes sense.
If the token gets stolen, the API has no way to tell if the token is used by the right person, or not. It is the downside of the above.
By reducing the expiry time of a token, the security vulnerability can be reduced, so thieves have less time to use the tokens without permission. (side question, but if they were able to steal it once, they will probably do it second time as well)
But reducing the time of how long the token is valid means that the user will need to log in every time the token expires, and as from above, it's quite frequent, so wouldn't provide too good UX. Makes sense.
From now, nothing makes sense:
Introducing a refresh token would solve this problem, because it has a longer expiry time. With the refresh token access tokens can be generated, so the user can be logged in as long as they have the refresh token - which is for a longer period of time -, while a stolen access token is still only valid for a short time.
For me, the above seems like an extra layer of complexity without any improvement in security; i.e., it seems equivalent to a long-lived access token.
Why? Because for me it seems the refresh token is basically an access token (because that's what it generates). So having the refresh token means unlimited access tokens, so unlimited access to the API.
Then I read an answer saying there's a one-to-one mapping of refresh tokens and access tokens, so stealing the access token still means unauthorized access to the API, but only for a short time, while stealing the refresh token would generate a different access token, so the API could detect the anomaly (different access tokens used for the same account) and invalidate the access tokens.
It seems like I'm not the only one who's confused about the question.
If the above is not true, how refresh tokens really help?
If the above is true, and there really is one-to-one mapping of refresh tokens, and access tokens:
- it completely loses its benefit of being "stateless"
- the user cannot be logged in from multiple devices (it would have been an "anomaly")
- I can't understand how an access token could be invalidated - is there a session ID stored in the token data, or is the user "blocked"?
It would be really great if someone could clear this up, because from 5 explanations come 5 conflicting statements (sometimes the same explanation contains conflicting information), and many developers want to understand this method.
...ANSWER
Answered 2021-Nov-02 at 19:38
There is general confusion around token-based auth, so let's try to clear some of it up.
First, JWTs are not just "encoded" by the server, they are "signed" (which, more precisely, is usually message authentication). The purpose is that such a token cannot be altered or changed by the client; any field (claim) in the token can be trusted to be as the issuer created it, otherwise validation will fail.
This yields two important takeaways:
- validating tokens is important (obviously) in any implementation
- the contents (claims) of a JWT are not encrypted, i.e. they are not secret and can be viewed by the client
Such a token can be used to maintain a session without server-side state, if it contains some kind of an identity for the subject (user, like a user id or email address), and an expiry.
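As a hedged illustration of those points (a hand-rolled sketch, not a real JWT library): the claims are merely base64url-encoded and readable by anyone, while the HMAC signature is what prevents tampering.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side secret"  # hypothetical key; never shared with clients

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(user_id: str) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    # Claims: subject identity plus a 15-minute expiry; readable, not secret.
    claims = b64url(json.dumps({"sub": user_id, "exp": int(time.time()) + 900}).encode())
    signing_input = f"{header}.{claims}".encode()
    # The HMAC over header.claims is what makes the token tamper-evident.
    signature = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{claims}.{signature}"

print(issue_token("user-42"))  # anyone can decode the middle segment
```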
Another important takeaway though:
- Logout (immediate session invalidation) is not possible in a stateless way, which is a drawback. To be able to log out, i.e. invalidate an existing session, the server must store and check revoked tokens, which is necessarily a stateful operation.
Also, a JWT token is typically stored in a way that makes it accessible to client-side code (JavaScript), so things like who the user is and when the token will expire can be read by the client app. It need not be so, yet most implementations do this, e.g. storing it in localStorage. This makes these tokens susceptible to XSS attacks, meaning that any successful XSS will be able to get the token.
For the reasons discussed so far, JWT authentication is inherently less secure than a plain old session, and should only be used if there is a need. Many times when token auth is used, it is not actually necessary, just fancy.
Sometimes such a token is stored in a httpOnly cookie, but in that case the token cannot be sent to multiple origins (one benefit of localStorage) and a plain old session id could also have been used, and would actually be more secure.
Ok, so what are refresh tokens? As you correctly stated, limiting the lifetime of an access token is useful to limit the validity of a compromised token, so a refresh token can be used to get a new access token when the old one expires. The key is where these tokens are stored.
A key takeaway:
- If a refresh token is stored the same way as the access token, it usually doesn't make any sense. This is a common mistake in implementations.
In a better architecture, the following can happen:
- There are (both logically and "physically" as much as it makes sense in today's cloud world) at least two separate components: the identity provider (IdP, or "login service"), and the resource server (eg. an API).
- When a user logs in, they actually create a session with the IdP. In this case either a plain old session id (acting as refresh token) or an actual JWT refresh token is set up for the IdP origin (domain name).
- An access token is then created when needed for the resource server origin, using the existing session with the identity provider.
- Now even if there is a total compromise of the resource server, like in case of successful XSS, the refresh token belongs to a completely separate origin, so cannot be accessed by the attacker. Even if it's the same origin, but the refresh token is in a httpOnly cookie, that helps, because the attacker then needs to be able to perform repeated XSS against a victim user to receive new access tokens.
There can be implementation variants of this, but the point is the above, separation of access to the two tokens.
A one-to-one mapping of refresh tokens to access tokens as you described would I think be unusual and also unnecessary, but one session per user is in fact sometimes a requirement (especially in financial applications where you want to have a very clear audit trail of what a user did). But this is not much related to the things discussed above.
Also as stated above, proper logout (session invalidation) is not possible in a stateless way. Fortunately, very few applications actually need to be truly stateless on the server-side.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported