theory | Abstraction layer for cross-platform JavaScript | Reactive Programming library
kandi X-RAY | theory Summary
Theory is an abstraction layer for server-side and client-side JavaScript.
Support
Quality
Security
License
Reuse
Top functions reviewed by kandi - BETA
- Initiate the flow
- Constructs a theory object.
- Main test function
- Add fnns to functions
- Create list.
- Get an object
- Get text.
- Set num to number
- Creates a bi-number
- Registers an event listener.
theory Key Features
theory Examples and Code Snippets
Community Discussions
Trending Discussions on theory
QUESTION
Say that I have the following interface:
...ANSWER
Answered 2022-Mar-26 at 16:40
First of all, you may want to use AutoMoqDataAttribute to create a mock of the ITeam interface:
QUESTION
There are a lot of articles online about running an Elasticsearch multi-node cluster using docker-compose, including the official documentation for Elasticsearch 8.0. However, I cannot find a reason why you would set up multiple nodes on the same docker host. Is this the recommended setup for a production environment? Or is it an example of theory in practice?
...ANSWER
Answered 2022-Mar-04 at 15:49
You shouldn't consider this a production environment. The guides are examples, often for lab environments and testing scenarios with the application. I would not consider them production ready, and Compose is generally not considered a production-grade tool, since everything it does targets a single Docker node, whereas in production you typically want multiple nodes spread across multiple availability zones.
QUESTION
Is the Shannon-Fano coding as described in Fano's paper The Transmission of Information (1952) really ambiguous?
In detail: 3 papers
Claude E. Shannon published his famous paper A Mathematical Theory of Communication in July 1948. In this paper he coined the term bit as we know it today, and he also defined what we now call Shannon entropy. He also proposed an entropy-based data compression algorithm in this paper. But Shannon's algorithm was so weak that under certain circumstances the "compressed" messages could be even longer than in fixed-length coding. A few months later (March 1949) Robert M. Fano published an improved version of Shannon's algorithm in the paper The Transmission of Information. Three years after Fano (in September 1952) his student David A. Huffman published an even better version in his paper A Method for the Construction of Minimum-Redundancy Codes. Huffman coding is more efficient than its two predecessors and it is still used today. But my question is about the algorithm published by Fano, which is usually called Shannon-Fano coding.
The algorithm
This description is based on the description from Wikipedia. Sorry, I did not fully read Fano's paper. I only browsed through it. It is 37 pages long and I really tried hard to find a passage where he talks about the topic of my question, but I could not find it. So, here is how Shannon-Fano encoding works:
1. Count how often each character appears in the message.
2. Sort all characters by frequency, with the most frequent characters at the top of the list.
3. Divide the list into two parts, such that the sums of frequencies in both parts are as equal as possible. Add the bit 0 to one part and the bit 1 to the other part.
4. Repeat step 3 on each part that contains 2 or more characters, until all parts consist of only 1 character.
5. Concatenate all bits from all rounds. This is the Shannon-Fano code of that character.
An example
Let's execute this on a really tiny example (I think it's the smallest message where the problem appears). Here is the message to encode:
ANSWER
Answered 2022-Mar-08 at 19:00
To directly answer your question, without further elaboration about how to break ties, two different implementations of Shannon-Fano could produce different codes of different lengths for the same inputs.
As @MattTimmermans noted in the comments, Shannon-Fano does not always produce optimal prefix-free codings the way that, say, Huffman coding does. It might therefore be helpful to think of it less as an algorithm and more of a heuristic - something that likely will produce a good code but isn't guaranteed to give an optimal solution. Many heuristics suffer from similar issues, where minor tweaks in the input or how ties are broken could result in different results. A good example of this is the greedy coloring algorithm for finding vertex colorings of graphs. The linked Wikipedia article includes an example in which changing the order in which nodes are visited by the same basic algorithm yields wildly different results.
Even algorithms that produce optimal results, however, can sometimes produce different optimal results based on tiebreaks. Take Huffman coding, for example, which works by repeatedly finding the two lowest-weight trees assembled so far and merging them together. In the event that there are three or more trees at some intermediary step that are all tied for the same weight, different implementations of Huffman coding could produce different prefix-free codes based on which two they join together. The resulting trees would all be equally "good," though, in that they'd all produce outputs of the same length. (That's largely because, unlike Shannon-Fano, Huffman coding is guaranteed to produce an optimal encoding.)
That being said, it's easy to adjust Shannon-Fano so that it always produces a consistent result. For example, you could say "in the event of a tie, choose the partition that puts fewer items into the top group," at which point you would always consistently produce the same coding. It wouldn't necessarily be an optimal encoding, but, then again, since Shannon-Fano was never guaranteed to do so, this is probably not a major concern.
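To make that concrete, here is a minimal Python sketch of the partitioning steps listed in the question, using the deterministic "fewer items in the top group" tiebreak suggested above. The function name, the alphabetical ordering of equal-frequency symbols, and the overall structure are my own choices, not from Fano's paper, and the result is not guaranteed to be optimal.

```python
from collections import Counter

def shannon_fano(message):
    # Steps 1 and 2: count frequencies and sort, most frequent first
    # (equal frequencies broken alphabetically so the order is deterministic).
    freqs = sorted(Counter(message).items(), key=lambda kv: (-kv[1], kv[0]))
    codes = {symbol: "" for symbol, _ in freqs}

    def split(group):
        # Step 4: stop once a part holds a single character.
        if len(group) < 2:
            return
        # Step 3: pick the split whose two halves have the most equal frequency sums;
        # the strict '<' keeps the first (smallest) top group when two splits tie.
        total = sum(f for _, f in group)
        running, best_i, best_diff = 0, 1, float("inf")
        for i in range(1, len(group)):
            running += group[i - 1][1]
            diff = abs(total - 2 * running)
            if diff < best_diff:
                best_diff, best_i = diff, i
        top, bottom = group[:best_i], group[best_i:]
        for symbol, _ in top:
            codes[symbol] += "0"
        for symbol, _ in bottom:
            codes[symbol] += "1"
        split(top)
        split(bottom)

    split(freqs)
    return codes  # Step 5: each value is the concatenated bits for that character

print(shannon_fano("aabbcd"))  # {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
```

Changing the tiebreak to prefer the larger top group would produce a different, but still valid, prefix code for the same input, which is exactly the ambiguity the question is about.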
If, on the other hand, you're interested in the question of "when Shannon-Fano has to break a tie, how do I decide how to break the tie to produce the optimal solution?", then I'm not sure of a way to do this other than recursively trying both options and seeing which one is better, which in the worst case leads to exponentially slow runtimes. But perhaps someone else here can find a way to do that.
QUESTION
I use Delphi 10.3 Rio, and need to know the screen PixelsPerInch ratio to scale my application accordingly.
Calculating with the formula, my screen has 142 PPI (real values: 15.5" diagonal and 1920 x 1080 resolution). But when I read the Screen.PixelsPerInch property in Delphi, I get 134! And this value is reported in the PixelsPerInch property of every TForm I create, too. So why this difference, and which is the real PPI?
AIDA64 reports the real value of 142 PPI... So I think something is wrong with the pixels-per-inch ratio in Delphi...
Edit:
I managed to get the real PPI with this code... but I cannot change this in every Delphi component. So, if I use this value in my components, won't I mess everything up?
...ANSWER
Answered 2022-Jan-21 at 08:08
There is no bug, and Delphi returns the proper value in PixelsPerInch.
The PPI the OS returns for the purpose of scaling your application is not the actual PPI of the physical display device, but a virtual pixel density.
For developing applications, the PPI value you need is the one the OS gives you, not the actual PPI of the display device.
All you need to know is the baseline PPI for the OS and the current PPI (or the scale factor). Using those numbers you can then calculate the number of scaled pixels from some baseline pixel value.
For instance, if your control's baseline width is 100 pixels and the screen scale is 150% of the baseline PPI, then your runtime control width will be 150 pixels.
Different operating systems have different baseline PPI.
OS        Baseline PPI
Windows   96 PPI
macOS     72 PPI
Android   160 PPI
iOS       163 PPI
Calculation:
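The answer's own calculation snippet is not included in this excerpt. As an illustration only, here is the arithmetic described above in a short Python sketch (the function names are made up for this example):

```python
import math

def physical_ppi(width_px, height_px, diagonal_inches):
    # PPI of the physical panel: diagonal resolution divided by diagonal size.
    return math.hypot(width_px, height_px) / diagonal_inches

def scaled_pixels(baseline_px, current_ppi, baseline_ppi=96):
    # Scale a baseline pixel value by the OS scale factor (Windows baseline is 96 PPI).
    return round(baseline_px * current_ppi / baseline_ppi)

print(round(physical_ppi(1920, 1080, 15.5)))  # 142 -> the panel's real PPI from the question
print(scaled_pixels(100, 144))                # 150 -> a 100 px control at 150% scale (144 PPI)
```

Delphi's Screen.PixelsPerInch corresponds to the virtual value the OS reports for scaling, not to the physical panel PPI computed by the first function.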
QUESTION
I used a function in Python/Numpy to solve a problem in combinatorial game theory.
...ANSWER
Answered 2022-Jan-19 at 09:34
The original code can be re-written in the following way:
QUESTION
I created an extension method to add all JSON configuration files to the IConfigurationBuilder
ANSWER
Answered 2021-Dec-19 at 09:24
The logic of comparing files seems alright; I don't find any outstanding problem with it, and it is OK to prepend the "/" to match what you need.
It could be even better if you used System.IO.Path.DirectorySeparatorChar for the directory root path as well, so that you have no issues whether you run on Windows or Linux.
But there may be a conceptual problem with what you are doing. To my understanding, you aim to verify the existence of specific configuration files required for your program to work right; if those files are missing, the program should fail. But that kind of failure due to missing configuration files is an expected and valid result of your code. Yet you unit-test this as if missing files should fail the test, as if missing files were an indication that something is wrong with your code, and this is wrong.
Missing files are not an indication that your code does not work correctly, and a unit test should not be used as a validator to make sure the files exist before executing the program. You will likely agree that a unit test is not part of the actual process and should only test your code, not preconditions; the test should compare an expected result (a mock result of your code) against an actual result, and it certainly is not meant to become part of the code. That unit test looks like a validator that should be in the code.
So unless those files are produced by your specific code (and not by the deployment), there is no sense in testing that. In that case you should instead create configuration validator code, and your unit test could test that: it would check the validator's expected result against a mock input you provide. The point is that you would then know you are only testing the validation logic, not the actual existence of the files.
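As a language-agnostic illustration of that idea (the question itself is about .NET's IConfigurationBuilder; the file names and functions below are invented for this sketch), a configuration validator plus a pytest-style unit test of the validator could look like this in Python:

```python
from pathlib import Path

REQUIRED_FILES = ("appsettings.json", "appsettings.Production.json")  # hypothetical names

def missing_config_files(config_dir, required=REQUIRED_FILES):
    # Validator that belongs in the application code: report which required
    # configuration files are absent so the program can fail with a clear error.
    root = Path(config_dir)
    return [name for name in required if not (root / name).is_file()]

def test_reports_missing_files(tmp_path):
    # The unit test exercises the validator against a mock directory,
    # not against the real deployment.
    (tmp_path / "appsettings.json").write_text("{}")
    assert missing_config_files(tmp_path) == ["appsettings.Production.json"]
```

The test only asserts on the validator's behavior given mock input, which keeps the existence of the real deployment files out of the test suite.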
QUESTION
Following a previous question of mine, most comments say "just don't, you are in a limbo state, you have to kill everything and start over". There is also a "safeish" workaround.
What I fail to understand is why a segmentation fault is inherently nonrecoverable.
The moment at which a write to protected memory happens is caught (otherwise the SIGSEGV would not be sent).
If the moment of writing to protected memory can be caught, I don't see why, in theory, it can't be reverted at some low level, and the SIGSEGV converted to a standard software exception.
Please explain why after a segmentation fault the program is in an undetermined state, as very obviously, the fault is thrown before memory was actually changed (I am probably wrong and don't see why). Had it been thrown after, one could create a program that changes protected memory, one byte at a time, getting segmentation faults, and eventually reprogramming the kernel - a security risk that is not present, as we can see the world still stands.
- When exactly does a segmentation fault happen (= when is SIGSEGV sent)?
- Why is the process in an undefined behavior state after that point?
- Why is it not recoverable?
- Why does this solution avoid that unrecoverable state? Does it even?
ANSWER
Answered 2021-Dec-10 at 15:05
When exactly does a segmentation fault happen (= when is SIGSEGV sent)?
When you attempt to access memory you don't have access to, such as accessing an array out of bounds or dereferencing an invalid pointer. The signal SIGSEGV is standardized, but different OSes might implement it differently. "Segmentation fault" is mainly a term used in *nix systems; Windows calls it "access violation".
Why is the process in an undefined behavior state after that point?
Because one or several of the variables in the program didn't behave as expected. Let's say you have some array that is supposed to store a number of values, but you didn't allocate enough room for all of them. So only the values you allocated room for get written correctly, and the rest, written out of bounds of the array, can hold any values. How exactly is the OS to know how critical those out-of-bounds values are for your application to function? It knows nothing of their purpose.
Furthermore, writing outside allowed memory can often corrupt other, unrelated variables, which is obviously dangerous and can cause any random behavior. Such bugs are often hard to track down. Stack overflows, for example, are segmentation faults prone to overwriting adjacent variables, unless the error is caught by protection mechanisms.
If we look at the behavior of "bare metal" microcontroller systems without any OS and no virtual memory features, just raw physical memory, they will silently do exactly as told: for example, overwrite unrelated variables and keep going. Which in turn could cause disastrous behavior if the application is mission-critical.
Why is it not recoverable?
Because the OS doesn’t know what your program is supposed to be doing.
Though in the "bare metal" scenario above, the system might be smart enough to place itself in a safe mode and keep going. Critical applications such as automotive and med-tech aren’t allowed to just stop or reset, as that in itself might be dangerous. They will rather try to "limp home" with limited functionality.
Why does this solution avoid that unrecoverable state? Does it even?
That solution just ignores the error and keeps going. It doesn't fix the problem that caused it. It's a very dirty patch, and setjmp/longjmp in general are very dangerous functions that should be avoided for any purpose.
We have to realize that a segmentation fault is a symptom of a bug, not the cause.
QUESTION
I have code something like this:
...ANSWER
Answered 2021-Nov-24 at 04:11
It's not about the reference problem. A constexpr variable and a constexpr function are different things. Quoted from the answer https://stackoverflow.com/a/31720324:
The reference does not have a preceding initialization from the point of view of i, though: It's a parameter. It's initialized once ByReference is called.
This is fine, since f does have preceding initialization. The initializer of f is a constant expression as well, since the implicitly declared default constructor is constexpr in this case (§12.1/5). Hence i is initialized by a constant expression and the invocation is a constant expression itself.
And about "preceding initialization", quoted from this:
It does mean "be initialized", but it's more importantly about the visibility of a preceding initialization within the context of the expression being evaluated. In your example, in the context of evaluating func(0) the compiler has the context to see the initialization of rf with 0. However, in the context of evaluating just the expression rf within func, it doesn't see an initialization of rf. The analysis is local, as in it doesn't analyze every call-site. This leads to expression rf itself within func not being a constant expression while func(0) is a constant expression.
Corresponding to your case, the line:
QUESTION
I would like to parse binary files in Raku using its regex / grammar engine, but I haven't found how to do it because the input is coerced to a string.
Is there a way to avoid this string coercion and use objects of type Buf or Blob?
I was thinking maybe it is possible to change something in the Metamodel?
I know that I can use unpack, but I would really like to use the grammar engine instead, to have more flexibility and readability.
Am I hitting an inherent limit of Raku's capabilities here?
And before someone tells me that regexes are for strings and that I shouldn't do it, I should point out that Perl's regex engine can match bytes as far as I know, and I could probably use it with Regexp::Grammars, but I prefer not to and to use Raku instead.
Also, I don't see any fundamental reason why regexes should be reserved only for strings; an NFA from automata theory isn't intrinsically made for characters rather than bytes.
...ANSWER
Answered 2021-Nov-09 at 11:34
Is there a way to avoid this string coercion and use objects of type Buf or Blob?
Unfortunately not at present. However, one can use the Latin-1 encoding, which gives a meaning to every byte, so any byte sequence will decode with it and could then be matched using a grammar.
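The same Latin-1 trick can be shown outside Raku. Here is a quick Python illustration (Python's re module can also match bytes directly, so this sketch is only meant to show why the Latin-1 decoding is lossless):

```python
import re

data = bytes([0x00, 0x7F, 0x80, 0xFF, 0x42, 0x42])

# Latin-1 maps every byte value 0x00-0xFF to the codepoint with the same number,
# so decoding never fails and encoding back recovers the exact original bytes.
text = data.decode("latin-1")
assert text.encode("latin-1") == data

# An ordinary string regex can now effectively match raw bytes.
match = re.search(r"[\x80-\xff]+", text)
print(match.group().encode("latin-1"))  # b'\x80\xff'
```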
Also, I don't see any fundamental reason why regexes should be reserved only for strings; an NFA from automata theory isn't intrinsically made for characters rather than bytes.
There isn't one; it's widely expected that the regex/grammar engine will be rebuilt at some point in the future (primarily to deal with performance limitations), and that would be a good point to also consider handling bytes and also codepoint-level strings (Uni).
QUESTION
In C++20, some ranges have both const and non-const begin()/end(), while others only have non-const begin()/end().
In order to enable the range adapters that wrap the former to use begin()/end() when they are const-qualified, some range adapters such as elements_view, reverse_view and common_view all provide constrained const-qualified begin()/end() functions, for example:
ANSWER
Answered 2021-Oct-09 at 03:58
Recent SG9 discussion on LWG3564 concluded that the intended design is that x and as_const(x) should be required to be substitutable with equal results in equality-preserving expressions for which both are valid. In other words, they should be "equal" in the [concepts.equality] "same platonic value" sense. Thus, for instance, it is not valid for x and as_const(x) to have entirely different elements.
The exact wording and scope of the rule will have to await a paper, and we'll have to take care to avoid banning reasonable code. But certainly things like "x is a range of pairs but as_const(x) is a range of ints" are not within any reasonable definition of "reasonable".
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install theory
Support