cmp | An implementation of the MessagePack serialization format | Serialization library
kandi X-RAY | cmp Summary
CMP is a C implementation of the MessagePack serialization format. It currently implements version 5 of the MessagePack Spec. CMP's goal is to be lightweight and straightforward, forcing nothing on the programmer.
Community Discussions
Trending Discussions on cmp
QUESTION
While testing things around Compiler Explorer, I tried out the following overflow-free function for calculating the average of two unsigned 32-bit integers:
...ANSWER
Answered 2022-Mar-08 at 10:00
Clang does the same thing. Probably for compiler-construction and CPU architecture reasons:
Disentangling that logic into just a swap may allow better optimization in some cases; definitely something it makes sense for a compiler to do early so it can follow values through the swap.
Xor-swap is total garbage for swapping registers, the only advantage being that it doesn't need a temporary. But xchg reg,reg already does that better.
I'm not surprised that GCC's optimizer recognizes the xor-swap pattern and disentangles it to follow the original values. In general, this makes constant-propagation and value-range optimizations possible through swaps, especially for cases where the swap wasn't conditional on the values of the vars being swapped. This pattern-recognition probably happens soon after transforming the program logic to GIMPLE (SSA) representation, so at that point it will forget that the original source ever used an xor swap, and not think about emitting asm that way.
Hopefully sometimes that lets it then optimize down to only a single mov, or two movs, depending on register allocation for the surrounding code (e.g. if one of the vars can move to a new register, instead of having to end up back in the original locations). And whether both variables are actually used later, or only one. Or if it can fully disentangle an unconditional swap, maybe no mov instructions.
But worst case, three mov instructions needing a temporary register is still better, unless it's running out of registers. I'd guess GCC is not smart enough to use xchg reg,reg instead of spilling something else or saving/restoring another tmp reg, so there might be corner cases where this optimization actually hurts.
(Apparently GCC -Os does have a peephole optimization to use xchg reg,reg instead of 3x mov: PR 92549 was fixed for GCC10. It looks for that quite late, during RTL -> assembly. And yes, it works here: turning your xor-swap into an xchg: https://godbolt.org/z/zs969xh47)
Quoting the question: "with no memory reads, and the same number of instructions, I don't see any bad impacts and it feels odd that it would be changed. Clearly there is something I did not think through, but what is it?"
Instruction count is only a rough proxy for one of three things that are relevant for perf analysis: front-end uops, latency, and back-end execution ports. (And machine-code size in bytes: x86 machine-code instructions are variable-length.)
It's the same size in machine-code bytes, and the same number of front-end uops, but the critical-path latency is worse: 3 cycles from input a to output a for xor-swap, and 2 from input b to output a, for example.
MOV-swap has at worst 1-cycle and 2-cycle latencies from inputs to outputs, or less with mov-elimination. (Which can also avoid using back-end execution ports, especially relevant for CPUs like IvyBridge and Tiger Lake with a front-end wider than the number of integer ALU ports. And Ice Lake, except Intel disabled mov-elimination on it as an erratum workaround; not sure if it's re-enabled for Tiger Lake or not.)
Also related:
- Why is XCHG reg, reg a 3 micro-op instruction on modern Intel architectures? - and those 3 uops can't benefit from mov-elimination. But on modern AMD, xchg reg,reg is only 2 uops.
GCC's real missed optimization here (even with -O3) is that tail-duplication results in about the same static code size, just a couple extra bytes since these are mostly 2-byte instructions. The big win is that the a path then becomes the same length as the other, instead of twice as long to first do a swap and then run the same 3 uops for averaging.
Update: GCC will do this for you with -ftracer (https://godbolt.org/z/es7a3bEPv), optimizing away the swap. (That's only enabled manually or as part of -fprofile-use, not at -O3, so it's probably not a good idea to use all the time without PGO, potentially bloating machine code in cold functions / code-paths.)
Doing it manually in the source (Godbolt):
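The code from that Godbolt link isn't reproduced here; the following is only a minimal sketch, assuming uint32_t inputs, of what a manually tail-duplicated version could look like:

```c
#include <stdint.h>

/* Overflow-free average of two unsigned 32-bit integers.
 * Each branch computes the average directly, so neither path has to
 * swap a and b first (the manual tail-duplication discussed above). */
uint32_t average_u32(uint32_t a, uint32_t b)
{
    if (a < b)
        return a + (b - a) / 2;
    else
        return b + (a - b) / 2;
}
```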
QUESTION
I am trying to compare two strings I got as input, but I get an error like "syntax: unexpected "="".
...ANSWER
Answered 2022-Feb-17 at 16:42
In Julia, you should use == for comparison: https://docs.julialang.org/en/v1/manual/mathematical-operations/#Numeric-Comparisons
This is different from the assignment operator =, so for example a == b tests whether a and b are equal, while a = b assigns the value of b to a.
QUESTION
I have a vector [(5, 1), (9, 1), (4, 2)]
I want it to be sorted as [(4, 2), (9, 1), (5, 1)]
That is, sorted in descending order by the second element and then by the first element.
Would it be possible using the sort_by(|x, y| y.cmp(x)) function?
ANSWER
Answered 2022-Jan-30 at 22:59
It is most certainly possible, and you have the right idea using sort_by. You can sort by the second element in basically the way you suggest.
QUESTION
I made a bubble sort implementation in C, and was testing its performance when I noticed that the -O3 flag made it run even slower than no flags at all! Meanwhile -O2 was making it run a lot faster as expected.
Without optimisations:
...ANSWER
Answered 2021-Oct-27 at 19:53
It looks like GCC's naïveté about store-forwarding stalls is hurting its auto-vectorization strategy here. See also Store forwarding by example for some practical benchmarks on Intel with hardware performance counters, and What are the costs of failed store-to-load forwarding on x86? Also Agner Fog's x86 optimization guides.
(gcc -O3 enables -ftree-vectorize and a few other options not included by -O2, e.g. if-conversion to branchless cmov, which is another way -O3 can hurt with data patterns GCC didn't expect. By comparison, Clang enables auto-vectorization even at -O2, although some of its optimizations are still only on at -O3.)
It's doing 64-bit loads (and branching to store or not) on pairs of ints. This means, if we swapped the last iteration, this load comes half from that store, half from fresh memory, so we get a store-forwarding stall after every swap. But bubble sort often has long chains of swapping every iteration as an element bubbles far, so this is really bad.
(Bubble sort is bad in general, especially if implemented naively without keeping the previous iteration's second element around in a register. It can be interesting to analyze the asm details of exactly why it sucks, so it is fair enough for wanting to try.)
Anyway, this is pretty clearly an anti-optimization you should report on GCC Bugzilla with the "missed-optimization" keyword. Scalar loads are cheap, and store-forwarding stalls are costly. (Can modern x86 implementations store-forward from more than one prior store? no, nor can microarchitectures other than in-order Atom efficiently load when it partially overlaps with one previous store, and partially from data that has to come from the L1d cache.)
Even better would be to keep buf[x+1] in a register and use it as buf[x] in the next iteration, avoiding a store and load. (Like good hand-written asm bubble sort examples, a few of which exist on Stack Overflow.)
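A minimal C sketch of that register-carrying idea (hypothetical code, not taken from the question or the answer): one pass loads each element once, keeps the value currently bubbling in a local variable, and stores each slot once.

```c
#include <stddef.h>

/* One bubble-sort pass that keeps the element being bubbled in `cur`,
 * so each array slot is loaded once and stored once per pass. */
static void bubble_pass(int *buf, size_t n)
{
    if (n < 2)
        return;
    int cur = buf[0];                /* value currently bubbling rightwards */
    for (size_t x = 0; x + 1 < n; x++) {
        int next = buf[x + 1];
        if (cur > next) {
            buf[x] = next;           /* smaller value stays behind, cur keeps bubbling */
        } else {
            buf[x] = cur;            /* cur stays behind, next starts bubbling */
            cur = next;
        }
    }
    buf[n - 1] = cur;                /* largest value seen ends up at the right edge */
}
```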
If it wasn't for the store-forwarding stalls (which AFAIK GCC doesn't know about in its cost model), GCC's double-width-load strategy might be about break-even. SSE 4.1 for a branchless pminsd / pmaxsd comparator might be interesting, but that would mean always storing, and the C source doesn't do that.
If this strategy of a double-width load had any merit, it would be better implemented with pure integer registers on a 64-bit machine like x86-64, where you can operate on just the low 32 bits with garbage (or valuable data) in the upper half, e.g. along the lines of the sketch below.
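(A hypothetical illustration of that idea, not the answer's actual example; it assumes little-endian x86-64, where two adjacent int32_t elements can be moved as one 64-bit value:)

```c
#include <stdint.h>
#include <string.h>

/* Compare-and-maybe-swap one adjacent pair using a single 64-bit load and,
 * only when a swap is needed, a single 64-bit store.  Little-endian assumed:
 * p[0] sits in the low half of the 64-bit value, p[1] in the high half. */
static void maybe_swap_pair(int32_t *p)
{
    uint64_t pair;
    memcpy(&pair, p, sizeof pair);            /* one 64-bit load of p[0..1] */
    uint32_t lo = (uint32_t)pair;             /* bit pattern of p[0] */
    uint32_t hi = (uint32_t)(pair >> 32);     /* bit pattern of p[1] */
    if ((int32_t)lo > (int32_t)hi) {          /* signed comparison of the elements */
        pair = ((uint64_t)lo << 32) | hi;     /* halves exchanged */
        memcpy(p, &pair, sizeof pair);        /* one 64-bit store */
    }
}
```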
QUESTION
These two loops are equivalent in C++ and Rust:
...ANSWER
Answered 2022-Jan-12 at 10:20
Overflow in the iterator state.
The C++ version will loop forever when given a large enough input:
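The loops themselves are elided here; as a generic illustration of the hazard (not the code from the question), assume a 32-bit loop counter compared against a 64-bit bound:

```c
#include <stdint.h>

/* If count > UINT32_MAX, `t` wraps from 0xFFFFFFFF back to 0 before it can
 * ever reach the bound, so `t < count` stays true and the loop never ends. */
uint64_t count_iterations(uint64_t count)
{
    uint64_t n = 0;
    for (uint32_t t = 0; t < count; ++t)
        n++;
    return n;
}
```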
QUESTION
ANSWER
Answered 2022-Jan-05 at 06:47
Both historical and logical reasons.
Back in the day, before impl [T], sort first used cmp() instead of lt(). That changed about 5 years ago for optimization reasons. At that point, the constraint could have been changed from Ord to PartialOrd, and indeed it sparked another discussion about PartialOrd and Ord.
However, there's a logical reason too: for any two indices i and j within [0..values.len()] with i <= j, you expect the following to hold if values has been sorted: values[i] <= values[j].
QUESTION
In the documentation for Ord, it says:
Implementations must be consistent with the PartialOrd implementation [...]
That of course makes sense and can easily be achieved as in the example further down:
...ANSWER
Answered 2021-Dec-26 at 00:40
Apparently, there is a reference to that in a GitHub issue, rust-lang/rust#63104:
This conflicts with the existing blanket impl in core.
QUESTION
The following code produces the lifetime errors below despite the fact that the V instance in question is owned.
ANSWER
Answered 2021-Dec-23 at 08:01
Use a higher-rank trait bound, denoted by for<'a>:
QUESTION
Update: relevant GCC bug report: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103798
I tested the following code:
...ANSWER
Answered 2021-Dec-21 at 11:08
libstdc++'s std::string_view::find_first_of looks something like:
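(The actual library code is elided here. Purely as an illustration of the general shape, a plain-C sketch of that kind of scan, one haystack character at a time against the whole set, follows; in libstdc++ the inner scan is delegated to traits_type::find rather than an open-coded loop.)

```c
#include <stddef.h>

/* Naive find_first_of: for every position in the haystack, scan the whole
 * set of needle characters.  O(len * set_len), with nothing that dispatches
 * to a faster single-character search for the common cases. */
static size_t find_first_of_sketch(const char *s, size_t len,
                                   const char *set, size_t set_len)
{
    for (size_t i = 0; i < len; i++)
        for (size_t j = 0; j < set_len; j++)
            if (s[i] == set[j])
                return i;
    return (size_t)-1;   /* the equivalent of npos */
}
```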
QUESTION
I have tried speeding up a toy GEMM implementation. I deal with blocks of 32x32 doubles for which I need an optimized MM kernel. I have access to AVX2 and FMA.
I have two versions of the code (in asm; I apologize for the crudeness of the formatting) defined below: one makes use of AVX2 features, the other uses FMA.
Without going into micro-benchmarks, I would like to develop a (theoretical) understanding of why the AVX2 implementation is 1.11x faster than the FMA version, and possibly how to improve both versions.
The codes below are for a 3000x3000 MM of doubles and the kernels are implemented using the classical, naive MM with an interchanged deepest loop. I'm using a Ryzen 3700x/Zen 2 as development CPU.
I have not tried unrolling aggressively, in fear that the CPU might run out of physical registers.
AVX2 32x32 MM kernel:
...ANSWER
Answered 2021-Dec-13 at 21:36
Zen2 has 3-cycle latency for vaddpd, 5-cycle latency for vfma...pd (https://uops.info/).
Your code with 8 accumulators has enough ILP that you'd expect close to two FMA per clock, about 8 per 5 clocks (if there aren't other bottlenecks) which is a bit less than the 10/5 theoretical max.
vaddpd and vmulpd actually run on different ports on Zen2 (unlike Intel), port FP2/3 and FP0/1 respectively, so it can in theory sustain 2/clock vaddpd and vmulpd. Since the latency of the loop-carried dependency is shorter, 8 accumulators are enough to hide the vaddpd latency if scheduling doesn't let one dep chain get behind. (But at least multiplies aren't stealing cycles from it.)
Zen2's front-end is 5 instructions wide (or 6 uops if there are any multi-uop instructions), and it can decode memory-source instructions as a single uop. So it might well be doing 2/clock each multiply and add with the non-FMA version.
If you can unroll by 10 or 12, that might hide enough FMA latency and make it equal to the non-FMA version, but with less power consumption and more SMT-friendly to code running on the other logical core. (10 = 5 x 2 would be just barely enough, which means any scheduling imperfections lose progress on a dep chain which is on the critical path. See Why does mulss take only 3 cycles on Haswell, different from Agner's instruction tables? (Unrolling FP loops with multiple accumulators) for some testing on Intel.)
(By comparison, Intel Skylake runs vaddpd/vmulpd on the same ports with the same latency as vfma...pd, all with 4c latency, 0.5c throughput.)
I didn't look at your code super carefully, but 10 YMM vectors might be a tradeoff between touching two pairs of cache lines vs. touching 5 total lines, which might be worse if a spatial prefetcher tries to complete an aligned pair. Or might be fine. 12 YMM vectors would be three pairs, which should be fine.
Depending on matrix size, out-of-order exec may be able to overlap inner loop dep chains between separate iterations of the outer loop, especially if the loop exit condition can execute sooner and resolve the mispredict (if there is one) while FP work is still in flight. That's an advantage to having fewer total uops for the same work, favouring FMA.
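As a purely illustrative companion to the accumulator discussion above (hypothetical code, a simple dot-product loop rather than the 32x32 kernel from the question), several independent accumulators keep consecutive FMAs from serializing on the 5-cycle latency. The answer suggests 8-12 accumulators to fully hide it; only 4 are used here to keep the sketch short.

```c
#include <stddef.h>
#include <immintrin.h>   /* requires AVX2 + FMA, e.g. gcc -mavx2 -mfma */

/* Dot product with 4 independent FMA accumulators; assumes n % 16 == 0. */
double dot(const double *a, const double *b, size_t n)
{
    __m256d acc0 = _mm256_setzero_pd(), acc1 = _mm256_setzero_pd();
    __m256d acc2 = _mm256_setzero_pd(), acc3 = _mm256_setzero_pd();
    for (size_t i = 0; i < n; i += 16) {
        acc0 = _mm256_fmadd_pd(_mm256_loadu_pd(a + i),      _mm256_loadu_pd(b + i),      acc0);
        acc1 = _mm256_fmadd_pd(_mm256_loadu_pd(a + i + 4),  _mm256_loadu_pd(b + i + 4),  acc1);
        acc2 = _mm256_fmadd_pd(_mm256_loadu_pd(a + i + 8),  _mm256_loadu_pd(b + i + 8),  acc2);
        acc3 = _mm256_fmadd_pd(_mm256_loadu_pd(a + i + 12), _mm256_loadu_pd(b + i + 12), acc3);
    }
    /* Combine the four accumulators, then do a horizontal sum of the lanes. */
    __m256d sum = _mm256_add_pd(_mm256_add_pd(acc0, acc1), _mm256_add_pd(acc2, acc3));
    __m128d lo  = _mm_add_pd(_mm256_castpd256_pd128(sum), _mm256_extractf128_pd(sum, 1));
    lo = _mm_add_sd(lo, _mm_unpackhi_pd(lo, lo));
    return _mm_cvtsd_f64(lo);
}
```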
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities: No vulnerabilities reported.