papers | Supplementary materials and source code for papers | Data Mining library
kandi X-RAY | papers Summary
Supplementary materials and source code for papers.
Top functions reviewed by kandi - BETA
- Compute the binary epsilon constraint
- Merge sort
- Lexicographical sort algorithm
- Reload the contents
- Removes the smallest number of candidates
- Recursively splits a tree
- Recompute interval
- Removes the worst candidate from the network
- Removes the smallest number of elements from the network
- Main entry point
- Main method for testing
- Returns the next Gaussian distribution
- Compute the next Gaussian distribution
- Shortcut method
- Simulates an iteration
- Performs the decomposition of a problem
- Generate random number generator
- Internal function
- Splits the contents of the matrix with a median
- Performs an iteration
- Load instances from a reader
- Test program
- Adds a point
- Evaluates the flow times for each job
- This method computes the binary epsilon
- Add a point
Community Discussions
Trending Discussions on papers
QUESTION
Is the Shannon-Fano coding as described in Fano's paper The Transmission of Information (1952) really ambiguous?
Claude E. Shannon published his famous paper A Mathematical Theory of Communication in July 1948. In this paper he coined the term bit as we know it today, defined what we now call Shannon entropy, and also proposed an entropy-based data compression algorithm. But Shannon's algorithm was so weak that under certain circumstances the "compressed" messages could be even longer than in fixed-length coding. A few months later (March 1949) Robert M. Fano published an improved version of Shannon's algorithm in the paper The Transmission of Information. 3 years after Fano (in September 1952) his student David A. Huffman published an even better version in his paper A Method for the Construction of Minimum-Redundancy Codes. Huffman coding is more efficient than its two predecessors and it is still used today. But my question is about the algorithm published by Fano, which is usually called Shannon-Fano coding.
The algorithm
This description is based on the description from Wikipedia. Sorry, I did not fully read Fano's paper. I only browsed through it. It is 37 pages long and I really tried hard to find a passage where he talks about the topic of my question, but I could not find it. So, here is how Shannon-Fano encoding works:
- Count how often each character appears in the message.
- Sort all characters by frequency, characters with the highest frequency on top of the list.
- Divide the list into two parts, such that the sums of frequencies in both parts are as equal as possible. Add the bit 0 to one part and the bit 1 to the other part.
- Repeat step 3 on each part that contains 2 or more characters until all parts consist of only 1 character.
- Concatenate all bits from all rounds. This is the Shannon-Fano code of that character.
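For concreteness, here is a minimal C++ sketch of these steps. It is my own illustration, not code from Fano's paper; the tie-break rule (at an exact tie, put fewer symbols in the top group) is one arbitrary choice, and that choice is exactly the ambiguity the question and answer below are about.

```cpp
#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <limits>
#include <map>
#include <string>
#include <vector>

using Entry = std::pair<char, int>;  // a symbol and its frequency

// Recursively split a frequency-sorted list and append one bit per round.
// Tie-break (an arbitrary choice): the strict '<' keeps the earlier split,
// i.e. fewer symbols in the top group.
void assign(std::vector<Entry>& part, std::map<char, std::string>& codes) {
    if (part.size() < 2) return;
    int total = 0;
    for (const auto& e : part) total += e.second;
    std::size_t best = 1;
    int run = 0, bestDiff = std::numeric_limits<int>::max();
    for (std::size_t i = 0; i + 1 < part.size(); ++i) {
        run += part[i].second;
        int diff = std::abs(2 * run - total);  // |top sum - bottom sum|
        if (diff < bestDiff) { bestDiff = diff; best = i + 1; }
    }
    std::vector<Entry> top(part.begin(), part.begin() + best);
    std::vector<Entry> bottom(part.begin() + best, part.end());
    for (const auto& e : top)    codes[e.first] += '0';
    for (const auto& e : bottom) codes[e.first] += '1';
    assign(top, codes);
    assign(bottom, codes);
}

int main() {
    std::string message = "AAABBC";  // placeholder message, not the question's example
    std::map<char, int> freq;
    for (char c : message) ++freq[c];
    std::vector<Entry> sorted(freq.begin(), freq.end());
    std::stable_sort(sorted.begin(), sorted.end(),
                     [](const Entry& a, const Entry& b) { return a.second > b.second; });
    std::map<char, std::string> codes;
    assign(sorted, codes);
    for (const auto& [sym, code] : codes)
        std::cout << sym << " -> " << code << '\n';
}
```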
An example
Let's execute this on a really tiny example (I think it's the smallest message where the problem appears). Here is the message to encode:
ANSWER
Answered 2022-Mar-08 at 19:00

To directly answer your question, without further elaboration about how to break ties, two different implementations of Shannon-Fano could produce different codes of different lengths for the same inputs.
As @MattTimmermans noted in the comments, Shannon-Fano does not always produce optimal prefix-free codings the way that, say, Huffman coding does. It might therefore be helpful to think of it less as an algorithm and more of a heuristic - something that likely will produce a good code but isn't guaranteed to give an optimal solution. Many heuristics suffer from similar issues, where minor tweaks in the input or how ties are broken could result in different results. A good example of this is the greedy coloring algorithm for finding vertex colorings of graphs. The linked Wikipedia article includes an example in which changing the order in which nodes are visited by the same basic algorithm yields wildly different results.
Even algorithms that produce optimal results, however, can sometimes produce different optimal results based on tiebreaks. Take Huffman coding, for example, which works by repeatedly finding the two lowest-weight trees assembled so far and merging them together. In the event that there are three or more trees at some intermediary step that are all tied for the same weight, different implementations of Huffman coding could produce different prefix-free codes based on which two they join together. The resulting trees would all be equally "good," though, in that they'd all produce outputs of the same length. (That's largely because, unlike Shannon-Fano, Huffman coding is guaranteed to produce an optimal encoding.)
That being said, it's easy to adjust Shannon-Fano so that it always produces a consistent result. For example, you could say "in the event of a tie, choose the partition that puts fewer items into the top group," at which point you would always consistently produce the same coding. It wouldn't necessarily be an optimal encoding, but, then again, since Shannon-Fano was never guaranteed to do so, this is probably not a major concern.
If, on the other hand, you're interested in the question of "when Shannon-Fano has to break a tie, how do I decide how to break the tie to produce the optimal solution?," then I'm not sure of a way to do this other than recursively trying both options and seeing which one is better, which in the worst case leads to exponentially slow runtimes. But perhaps someone else here can find a way to do that.
QUESTION
I'm going through an example in A Taste of Linear Logic.
It first introduces the standard array with the usual operations defined (page 24):
Then suggests that a linear equivalent (using a linear logic for type signatures to restrict array copying) would have a slightly different type signature:
This is designed with the idea that the array contains values that are cheap to copy, but that the array itself is expensive to copy and thus should be passed along from use to use as a handle.
Question: The signatures for lookup and update correspond well to the standard signatures, but how do I interpret the signature for new?
In particular:
- The function new does not seem to return an array. How can I get an array to use if one is not provided?
- I think I do understand that Arr ⊸ Arr ⊗ X is not derivable using linear logic and therefore a function to extract individual values without consuming the array is needed, but I don't understand why new doesn't provide that function directly.
ANSWER
Answered 2022-Feb-28 at 10:13

In practical terms, this is about garbage collection.
Linear logic avoids making copies as well as leaving unused values lying around. So when you create an array with new, you also need to make sure it's eventually cleaned up again.
How can you make sure it is cleaned up? Well, in this example they do it by not giving back the array as the result, but instead “lending” it to the caller. The function Arr ⊸ Arr ⊗ X must give an array back in the end, in addition to the result you're actually interested in. It's assumed that this will be a modified form of the array you started out with. Only the X is passed back to the caller, the Arr is deallocated.
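Since C++ has no linear types, one can only approximate this "lending" discipline; here is a minimal sketch of the Arr ⊸ Arr ⊗ X shape (my own analogy, not from the paper), where the single array handle is moved in and moved back out:

```cpp
#include <array>
#include <cstddef>
#include <utility>

using Arr = std::array<int, 8>;

// Approximates the linear type Arr ⊸ Arr ⊗ X: the caller gives up its
// handle, and the function returns the (possibly modified) array together
// with the extracted value. Note that C++ does not enforce linearity.
std::pair<Arr, int> lookup(Arr a, std::size_t i) {
    int x = a[i];
    return {std::move(a), x};  // hand the array back alongside the X
}

int main() {
    Arr a{};
    auto [a2, v] = lookup(std::move(a), 3);  // 'a' is consumed; keep using 'a2'
    return v;
}
```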
QUESTION
return_void

Consider the coroutine's ReturnObject below and note the comments before the method ReturnObject::promise_type::return_void:
ANSWER
Answered 2022-Jan-20 at 12:51

[stmt.return.coroutine]/2 says that the expressions
QUESTION
Another question discusses the legitimacy of the optimizer removing calls to new: Is the compiler allowed to optimize out heap memory allocations?. I have read the question, the answers, and N3664.
From my understanding, the compiler is allowed to remove or merge dynamic allocations under the "as-if" rule, i.e. if the resulting program behaves as if no change was made, with respect to the abstract machine defined in the standard.
I tested compiling the following two-file program with both clang++ and g++ with -O1 optimizations, and I don't understand how it is allowed to remove the allocations.
ANSWER
Answered 2022-Jan-09 at 20:34

Allocation elision is an optimization that is outside of and in addition to the as-if rule. Another optimization with the same properties is copy elision (not to be confused with mandatory elision, since C++17): Is it legal to elide a non-trivial copy/move constructor in initialization?.
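For reference, a minimal single-file illustration of the effect (my own example, not the asker's two-file program): with -O1, GCC and Clang typically compile main to a plain return 1, with no call to operator new. That transformation is justified by the allocation-elision rules N3664 introduced, not by the as-if rule.

```cpp
int main() {
    int* p = new int(1);  // a replaceable global allocation
    int v = *p;
    delete p;
    return v;  // commonly folded to "return 1" with the allocation elided
}
```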
QUESTION
The standard defines several 'happens before' relations that extend the good old 'sequenced before' over multiple threads:
[intro.races]
11 An evaluation A simply happens before an evaluation B if either
(11.1) — A is sequenced before B, or
(11.2) — A synchronizes with B, or
(11.3) — A simply happens before X and X simply happens before B. [Note 10: In the absence of consume operations, the happens before and simply happens before relations are identical. — end note]
12 An evaluation A strongly happens before an evaluation D if, either
(12.1) — A is sequenced before D, or
(12.2) — A synchronizes with D, and both A and D are sequentially consistent atomic operations ([atomics.order]), or
(12.3) — there are evaluations B and C such that A is sequenced before B, B simply happens before C, and C is sequenced before D, or
(12.4) — there is an evaluation B such that A strongly happens before B, and B strongly happens before D. [Note 11: Informally, if A strongly happens before B, then A appears to be evaluated before B in all contexts. Strongly happens before excludes consume operations. — end note]
(bold mine)
The difference between the two seems very subtle. 'Strongly happens before' is never true for matching pairs of release-acquire operations (unless both are seq-cst), but it still respects release-acquire synchronization in a way, since operations sequenced before a release 'strongly happen before' the operations sequenced after the matching acquire.
Why does this difference matter?
'Strongly happens before' was introduced in C++20, and pre-C++20, 'simply happens before' used to be called 'strongly happens before'. Why was it introduced?
[atomics.order]/4
says that the total order of all seq-cst operations is consistent with 'strongly happens before'.
Does it mean that it's not consistent with 'simply happens before'? If so, why not?
I'm ignoring the plain 'happens before', because it differs from 'simply happens before' only in its handling of memory_order_consume, the use of which is temporarily discouraged, since apparently most (all?) major compilers treat it as memory_order_acquire.

I've already seen this Q&A, but it doesn't explain why 'strongly happens before' exists, and doesn't fully address what it means (it just states that it doesn't respect release-acquire synchronization, which isn't completely the case).
Found the proposal that introduced 'simply happens before'.
I don't fully understand it, but it explains following:
- 'Strongly happens before' is a weakened version of 'simply happens before'.
- The difference is only observable when seq-cst is mixed with acq-rel on the same variable (I think it means when an acquire load reads a value from a seq-cst store, or when a seq-cst load reads a value from a release store). But the exact effects of mixing the two are still unclear to me.
ANSWER
Answered 2022-Jan-02 at 18:21

Here's my current understanding, which could be incomplete or incorrect. A verification would be appreciated.
C++20 renamed strongly happens before to simply happens before, and introduced a new, more relaxed definition for strongly happens before, which imposes less ordering.
Simply happens before is used to reason about the presence of data races in your code. (Actually that would be the plain 'happens before', but the two are equivalent in the absence of consume operations, the use of which is discouraged by the standard, since most (all?) major compilers treat them as acquires.)
The weaker strongly happens before is used to reason about the global order of seq-cst operations.
This change was introduced in proposal P0668R5: Revising the C++ memory model, which is based on the paper Repairing Sequential Consistency in C/C++11 by Lahav et al (which I didn't fully read).
The proposal explains why the change was made. Long story short, the way most compilers implement atomics on Power and ARM architectures turned out to be non-conformant in rare edge cases, and fixing the compilers had a performance cost, so they fixed the standard instead.
The change only affects you if you mix seq-cst operations with acquire-release operations on the same atomic variable (i.e. if an acquire operation reads a value from a seq-cst store, or a seq-cst operation reads a value from a release store).
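As a schematic illustration of that kind of mixing (my own sketch, not the proposal's example):

```cpp
#include <atomic>
#include <thread>

std::atomic<int> x{0};

void writer() {
    x.store(1, std::memory_order_seq_cst);  // seq-cst store
}

void reader() {
    // An acquire load reading from a seq-cst store: the load still
    // synchronizes-with the store, but under C++20 that synchronization no
    // longer pins where the store sits in the single total order of
    // seq-cst operations.
    while (x.load(std::memory_order_acquire) != 1) {}
}

int main() {
    std::thread t1(writer), t2(reader);
    t1.join();
    t2.join();
}
```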
If you don't mix operations in this manner, then you're not affected (i.e. you can treat simply happens before and strongly happens before as equivalent).
The gist of the change is that the synchronization between a seq-cst operation and the corresponding acquire/release operation no longer affects the position of this specific seq-cst operation in the global seq-cst order, but the synchronization itself is still there.
This makes the seq-cst order for such seq-cst operations very moot, see below.
The proposal presents the following example, and I'll try to explain my understanding of it:
QUESTION
Some ranges adaptors such as filter_view, take_while_view and transform_view use std::optional's cousin copyable-box to store the callable object:
ANSWER
Answered 2021-Oct-09 at 14:20

All the algorithms require copy-constructible function objects, and views are basically lazy algorithms.
Historically, when these adaptors were added, views were required to be copyable, so we required the function objects to be copy_constructible (we couldn't require copyable without ruling out captureful lambdas). The change to make view only require movable came later.
It is probably possible to relax the restriction, but it will need a paper and isn't really high priority.
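As a rough illustration of what copyable-box buys (a hand-rolled sketch, not the standard's exposition-only specification): it wraps a copy-constructible callable in optional-like storage and synthesizes assignment from construction, so a view holding a capturing lambda can still be assigned.

```cpp
#include <optional>
#include <utility>

template <class F>
class copyable_box {
    std::optional<F> f_;
public:
    copyable_box(F f) : f_(std::move(f)) {}
    copyable_box(const copyable_box&) = default;

    // F itself may have no operator= (e.g. a capturing lambda), so
    // assignment is synthesized by destroying and copy-constructing.
    copyable_box& operator=(const copyable_box& other) {
        if (this != &other) f_.emplace(*other.f_);
        return *this;
    }

    template <class... Args>
    decltype(auto) operator()(Args&&... args) {
        return (*f_)(std::forward<Args>(args)...);
    }
};

int main() {
    int offset = 10;
    auto lam = [offset](int x) { return x + offset; };
    copyable_box<decltype(lam)> a(lam), b(lam);
    b = a;        // fine, even though decltype(lam) is not assignable
    return b(5);  // 15
}
```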
QUESTION
Consider the following:
...

ANSWER
Answered 2021-Dec-30 at 08:54

If you look closely at the specification of ranges::size in [range.prim.size], except when the type of R is a primitive array type, ranges::size obtains the size of r by calling the size() member function or by passing it into a free function.
And since the parameter type of the transform() function is a reference, ranges::size(r) cannot be used as a constant expression in the function body. This means we can only get the size of r through the type R, not the object r.
However, there are not many standard range types whose size is part of the type: primitive arrays, std::array, std::span, and some simple range adaptors. So we can define a function to detect whether R is one of these types, and extract the size from its type in the corresponding way.
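A sketch of that detection (the trait name static_size is made up for illustration): recover the element count from the type R for the standard ranges whose size is baked into the type.

```cpp
#include <array>
#include <cstddef>
#include <span>
#include <type_traits>

// Primary template: no 'value' member, i.e. the size is not part of the type.
template <class R>
struct static_size {};

template <class T, std::size_t N>
struct static_size<T[N]> : std::integral_constant<std::size_t, N> {};

template <class T, std::size_t N>
struct static_size<std::array<T, N>> : std::integral_constant<std::size_t, N> {};

template <class T, std::size_t N>
    requires (N != std::dynamic_extent)  // dynamic-extent spans carry no static size
struct static_size<std::span<T, N>> : std::integral_constant<std::size_t, N> {};

template <class R>
inline constexpr bool has_static_size =
    requires { static_size<std::remove_cvref_t<R>>::value; };

static_assert(static_size<int[4]>::value == 4);
static_assert(static_size<std::array<int, 3>>::value == 3);
static_assert(!has_static_size<std::span<int>>);  // size only known at run time
```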
QUESTION
I did bump into this question while searching for this topic, but this one seems to be outdated.
Reading https://blogs.mathworks.com/loren/2016/10/24/matlab-arithmetic-expands-in-r2016b, implicit expansion was introduced in R2016b, but I can still find reference code in papers using bsxfun for arithmetic expansion. So I assume that there are some circumstances that make bsxfun preferable to other methods.
I did compare the speeds of bsxfun, repmat, and implicit expansion (I used the code of Jonas from the link). The below shows the comparison in calculation time using tic/toc:
which shows that implicit expansion is clearly faster than bsxfun or repmat. Is there any reason to use bsxfun nowadays?
Here is the code I used to compare the speed:
...

ANSWER
Answered 2021-Nov-25 at 12:46

All bsxfun does is Binary Singleton eXpansion. It's more typing than the now-usual implicit expansion. I'd guess The MathWorks kept bsxfun around for backwards compatibility, but no longer works on it; it might even internally just map to implicit expansion.

The documentation on bsxfun states:
It is recommended that you replace most uses of bsxfun with direct calls to the functions and operators that support implicit expansion. Compared to using bsxfun, implicit expansion offers faster speed of execution, better memory usage, and improved readability of code. For more information, see Compatible Array Sizes for Basic Operations.
Additionally, implicit expansion seems to have internal optimisations beyond what bsxfun does; see this question of mine.
More helpful links can be found in this answer by nirvana-msu, amongst others to blogs by MathWorks employees discussing this.
So I'd say that the only reason to use bsxfun instead of implicit expansion would be if you'd run the code on a pre-R2016b version of MATLAB.
QUESTION
When running this simple program, different behaviour is observed depending on the compiler.
It prints true when compiled by GCC 11.2, and false when compiled by MSVC 19.29.30137 (both are the latest releases as of today).
ANSWER
Answered 2021-Dec-08 at 16:06

GCC and Clang report that S is trivially copyable in C++11 through C++23 standard modes. MSVC reports that S is not trivially copyable in C++14 through C++20 standard modes.
N3337 (~ C++11) and N4140 (~ C++14) say:
A trivially copyable class is a class that:
- has no non-trivial copy constructors,
- has no non-trivial move constructors,
- has no non-trivial copy assignment operators,
- has no non-trivial move assignment operators, and
- has a trivial destructor.
By this definition, S is trivially copyable.
N4659 (~ C++17) says:
A trivially copyable class is a class:
- where each copy constructor, move constructor, copy assignment operator, and move assignment operator is either deleted or trivial,
- that has at least one non-deleted copy constructor, move constructor, copy assignment operator, or move assignment operator, and
- that has a trivial, non-deleted destructor
By this definition, S is not trivially copyable.
N4860 (~ C++20) says:
A trivially copyable class is a class:
- that has at least one eligible copy constructor, move constructor, copy assignment operator, or move assignment operator,
- where each eligible copy constructor, move constructor, copy assignment operator, and move assignment operator is trivial, and
- that has a trivial, non-deleted destructor.
By this definition, S is not trivially copyable.
Thus, as published, S was trivially copyable in C++11 and C++14, but not in C++17 and C++20.
The change was adopted from DR 1734 in February 2016. Implementors generally treat DRs as though they apply to all prior language standards by convention. Thus, by the published standard for C++11 and C++14, S was trivially copyable, and by convention, newer compiler versions might choose to treat S as not trivially copyable in C++11 and C++14 modes. Thus, all compilers could be said to be correct for C++11 and C++14.
For C++17 and beyond, S is unambiguously not trivially copyable, so GCC and Clang are incorrect. This is GCC bug #96288 and LLVM bug #39050.
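The asker's S is not shown above, but a class of the shape DR 1734 targets looks like this (a representative example, not necessarily the question's exact code):

```cpp
#include <iostream>
#include <type_traits>

// Every copy/move special member is deleted; the destructor is trivial.
// C++11/14 wording: no *non-trivial* copy/move members -> trivially copyable.
// C++17/20 wording: no non-deleted copy/move member exists -> not trivially copyable.
struct S {
    S() = default;
    S(const S&) = delete;
    S(S&&) = delete;
    S& operator=(const S&) = delete;
    S& operator=(S&&) = delete;
    ~S() = default;
};

int main() {
    // GCC/Clang print true here, MSVC prints false, per the answer above.
    std::cout << std::boolalpha << std::is_trivially_copyable_v<S> << '\n';
}
```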
QUESTION
So I have code that uses std::weak_ptr and maintains them in an std::set, and that works just fine -- and has worked for the last 5 or 7 years. Recently I thought I'd fiddle with using them in an std::unordered_set (well, actually in an f14::F14ValueSet) and for that, I would need a hash of it. As of now, there is no std::hash, so what should I do instead?
The answer seems to be "just hash the control block", as implied by this question and reply: Why was std::hash not defined for std::weak_ptr in C++0x?, but how do I get access to the control block? In glibc, it's located at __weak_ptr<>::_M_refcount._M_pi, but that's private (and implementation specific). What else can I do?
One answer is "just wait": maybe someday there will be a standard owner_hash() for std::weak_ptr, but I'd prefer something available now.
ANSWER
Answered 2021-Nov-27 at 21:43

Make your own augmented weak ptr. It stores a hash value, and supports == based off owner_before().

You must make these from shared_ptrs, as a weak ptr with no strong references cannot be hashed to match its owner; this could create two augmented weak ptrs that compare equal but hash differently.
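A minimal sketch of that augmented weak ptr (the names hashed_weak_ptr and hashed_weak_ptr_hash are my own inventions for illustration): capture a hash while a shared_ptr is still alive and define equality via owner_before.

```cpp
#include <cstddef>
#include <functional>
#include <memory>
#include <unordered_set>

// Augmented weak ptr: stores a hash captured at construction and compares
// equal based on owner_before(), per the answer above. Must be built from a
// live shared_ptr, since an expired weak_ptr can no longer be hashed to
// match its owner.
template <class T>
class hashed_weak_ptr {
    std::weak_ptr<T> wp_;
    std::size_t hash_ = 0;
public:
    explicit hashed_weak_ptr(const std::shared_ptr<T>& sp)
        : wp_(sp),
          hash_(std::hash<T*>{}(sp.get())) {}  // assumption: the managed object's
                                               // address stands in for the owner

    std::size_t hash() const { return hash_; }
    std::weak_ptr<T> get_weak() const { return wp_; }

    friend bool operator==(const hashed_weak_ptr& a, const hashed_weak_ptr& b) {
        return !a.wp_.owner_before(b.wp_) && !b.wp_.owner_before(a.wp_);
    }
};

// Hash functor usable with unordered containers.
template <class T>
struct hashed_weak_ptr_hash {
    std::size_t operator()(const hashed_weak_ptr<T>& p) const { return p.hash(); }
};

int main() {
    auto sp = std::make_shared<int>(42);
    std::unordered_set<hashed_weak_ptr<int>, hashed_weak_ptr_hash<int>> set;
    set.insert(hashed_weak_ptr<int>(sp));
    return set.count(hashed_weak_ptr<int>(sp));  // 1
}
```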
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install papers
You can use papers like any standard Java library. Please include the jar files in your classpath. You can also use any IDE and you can run and debug the papers component as you would do with any other Java program. Best practice is to use a build tool that supports dependency management such as Maven or Gradle. For Maven installation, please refer to maven.apache.org. For Gradle installation, please refer to gradle.org.