Miscellaneous | Small programs and scripts that do not require their own repositories
kandi X-RAY | Miscellaneous Summary
Small programs and scripts that do not require their own repositories
Top functions reviewed by kandi - BETA
- Example demo function
- Return the type signature for the given apiName
- Prints a type signature
- Activate the pointer
- Changes the type of the lvar
- Determines if a pointer is a pTR var
- Creates MapType Map Types
- Parse a single declaration
- Install type setter
- Parses set types
- Tries to parse list types
- Creates shared pointer types
- Make a vector type
- Create a deque type definition
- Update the value of the widget
- Populates the given popup widget
- Uninstall a type set
- Check if IDF is 64 bit
Miscellaneous Key Features
Miscellaneous Examples and Code Snippets
Community Discussions
Trending Discussions on Miscellaneous
QUESTION
I want to implement a custom calculation for a specific row using the values from other rows in the same column. I found that AG Grid provides the ability to define Column Definition Expressions and aggFunc, but they don't solve what I want:

Column Definition Expressions (let's call them CDEs) allow users to reference other columns of the same row.

aggFunc is helpful in the case of grouping, where users can use built-in functions or define a custom aggregation function, which can use cell values of the same column only inside the particular group.
I need to solve the following:
...ANSWER
Answered 2021-Dec-20 at 18:01

For now, it seems that the only possible way and place to implement this is to use the onGridReady event, where it is possible to set values for calculated rows (via rowNode.setDataValue()). The grid has all data (plus aggregated data) at this stage. This link is useful for understanding how to collect all the data.

A better way is to define a getRowNodeId callback.
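The answer's approach can be sketched as follows. The calculation itself is plain JavaScript; the grid wiring in the comments uses the AG Grid API names mentioned in the answer (forEachNode, getRowNode, setDataValue), with 'price' and 'total-row' being illustrative placeholders, not from the question:

```javascript
// Compute a derived value for one row from the other rows' values in a column.
function computeColumnTotal(rows, field) {
  return rows.reduce((sum, row) => sum + (row[field] ?? 0), 0);
}

// Hypothetical grid wiring, per the answer:
// onGridReady: (params) => {
//   const rows = [];
//   params.api.forEachNode((node) => rows.push(node.data));
//   const total = computeColumnTotal(rows, 'price');
//   // getRowNodeId lets us address the calculated row by a stable id:
//   params.api.getRowNode('total-row').setDataValue('price', total);
// }

console.log(computeColumnTotal([{ price: 1 }, { price: 2 }], 'price'));
```

The pure calculation is kept separate from the grid callback so it can be unit-tested without a grid instance.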
QUESTION
According to this paper, one of the common mistakes developers make (#3) is using a smart pointer with array types, mainly because operator delete will be called instead of delete[], leaving the program with a memory leak.

Despite that, looking up the default deleter in the C++ reference, the following code is there:
...ANSWER
Answered 2022-Mar-09 at 09:35

If I understand correctly, you seem to think that example 4 contradicts the paper, and that the paper recommends using example 2. That is not what the paper is saying.
Example 1 is undefined behavior, as described in the paper: "Using smart pointers, such as auto_ptr, unique_ptr, shared_ptr, with arrays is also incorrect."
Example 2 works correctly, however using a custom deleter is not mentioned in the paper at all.
Example 3 creates a pointer to a single int. This is not related to the paper.

Example 4 creates a pointer to int[], which is mentioned in the paper as a valid usage: "If using of a smart pointer is required for an array, it is possible to use -- a unique_ptr specialization."
QUESTION
I have a requirement for transforming JSON, and I am trying to use the same value multiple times. Is there a way to use a value multiple times? Previously I used it in an array, but this time I have to go through the level. Any help is appreciated; thank you.

Note: I want to filter product configurations based on the name.

How to use the same field value in multiple places in Jolt
Input:
...ANSWER
Answered 2022-Mar-01 at 14:29

You can reference the object twice, along with a shift transformation spec such as
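The actual input and spec are not shown here, but the idea can be illustrated with Jolt's multiple-destination feature: in a shift spec, an array on the right-hand side fans one input value out to several output paths. The field names below are placeholders, not the asker's data:

```json
[
  {
    "operation": "shift",
    "spec": {
      "product": {
        "name": ["productName", "configurations.name"]
      }
    }
  }
]
```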
QUESTION
In std::hint
there's a spin_loop
function with the following definition in its documentation:
Emits a machine instruction to signal the processor that it is running in a busy-wait spin-loop (“spin lock”).
Upon receiving the spin-loop signal the processor can optimize its behavior by, for example, saving power or switching hyper-threads.
Depending on the target architecture, this compiles to either:
- _mm_pause, a.k.a. the pause intrinsic, on x86
- the yield instruction on 32-bit ARM
- ISB SY on 64-bit ARM (aarch64)
That last one has got my head spinning a little bit (😉). I thought that ISB
is a lengthy operation, which would mean that, if used within a spin lock, the thread lags a bit in trying to detect whether the lock is open again, but otherwise there's hardly any profit to it.
What are the advantages of using ISB SY
instead of a NOP
in a spin loop on aarch64?
ANSWER
Answered 2022-Jan-23 at 14:13

I had to dig into the Rust repository history to get to this answer:
The yield has been replaced with isb in c064b6560b7c:
On arm64 we have seen on several databases that ISB (instruction synchronization barrier) is better to use than yield in a spin loop. The yield instruction is a nop. The isb instruction puts the processor to sleep for some short time. isb is a good equivalent to the pause instruction on x86.
[...]
So essentially, it uses the time it takes for an ISB
to complete to pause the processor, so that it wastes less power.
Peter Cordes explained it nicely in one of his comments:
ISB SY doesn't stall for long, just saves a bit of power vs. spamming loads in a tight loop.
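A minimal Rust sketch of the intended usage; spin_loop compiles down to pause, yield, or isb sy as described above:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

// Busy-wait until `flag` becomes true, hinting the CPU on each iteration.
fn spin_until(flag: &AtomicBool) {
    while !flag.load(Ordering::Acquire) {
        std::hint::spin_loop(); // pause / yield / isb sy, per target
    }
}

fn main() {
    let flag = Arc::new(AtomicBool::new(false));
    let f2 = Arc::clone(&flag);

    let t = thread::spawn(move || {
        spin_until(&f2);
        42
    });

    flag.store(true, Ordering::Release);
    assert_eq!(t.join().unwrap(), 42);
}
```

In real code a bounded spin followed by a blocking wait (e.g. a parking primitive) is usually preferable to an unbounded busy-wait.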
QUESTION
I'm over here frying my brain trying to figure out how to prevent users from using someone else's selection menu, and I can't seem to fix that issue, so here I am asking for some help.

I know it has something to do with the collector, but I'm not sure what it is.

I have asked around all over Discord, but haven't really had a straightforward answer on how to solve this issue. It gets annoying when you try to use a selection menu and then someone else comes along and is able to use the same menu that you are trying to use, so I've had enough of that and I'm just trying to find a way to prevent people from using each other's menus.

Any help will be appreciated; I just want it to ignore them, or respond to them saying it's not their menu.
...ANSWER
Answered 2022-Jan-22 at 05:22

Hi, you can do that by adding
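The snippet the answer refers to is elided. A common approach (a discord.js-style API is assumed here; the collector call in the comments is illustrative) is a collector filter that only accepts interactions from the menu's owner:

```javascript
// Returns a filter that accepts only interactions from the given user id.
function makeOwnerFilter(ownerId) {
  return (interaction) => interaction.user.id === ownerId;
}

// Hypothetical wiring (discord.js-style, names illustrative):
// const collector = message.createMessageComponentCollector({
//   filter: makeOwnerFilter(message.author.id),
//   componentType: 'SELECT_MENU',
// });
// Interactions from other users then never reach the collector's handler.

console.log(makeOwnerFilter('123')({ user: { id: '123' } })); // true
console.log(makeOwnerFilter('123')({ user: { id: '456' } })); // false
```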
QUESTION
The general, more abstract procedure for writing and later executing JIT or self-modifying code is, to my understanding, something like the following.
- Write the generated code,
- make sure it's flushed and globally visible,
- and then make sure that instructions fetched thence will be what was written.
From what I can tell from this post about self-modifying code on x86, manual cache management is apparently not necessary. I imagined that a clflushopt would be necessary, but x86 apparently automatically handles cache invalidation upon loading from a location with new instructions, such that instruction fetches are never stale. My question is not about x86, but I wanted to include this for comparison.
The situation in AArch64 is a little more complicated, as it distinguishes between shareability domains and how "visible" a cache operation should be. From just the official documentation for ARMv8/ARMv9, I first came up with this guess.
- Write the generated code,
- dsb ishst to ensure it's all written before continuing,
- and then isb sy to ensure that subsequent instructions are fetched from memory.
But the documentation for DMB/DSB/ISB says that "instructions following the ISB are fetched from cache or memory". That gives me an impression that cache control operations are indeed necessary. My new guess is thus this.
- Write the generated code,
- dsb ishst to ensure it's all written before continuing,
- and then ic ivau all the cache lines occupied by the new code.
But I couldn't help feeling that even this is not quite right. A little while later, I found something in the documentation that I had missed, and something much the same in a paper. Both of them give an example that looks like this.
...ANSWER
Answered 2022-Jan-15 at 22:27

(Disclaimer: this answer is based on reading specs and some tests, but not on previous experience.)
First of all, there is an explanation and example code for this exact
case (one core writes code for another core to execute) in B2.2.5 of
the Architecture Reference Manual (version G.b). The only difference
from the examples you've shown is that the final isb
needs to
be executed in the thread that will execute the new code (which I
guess is your "consumer"), after the cache invalidation has finished.
I found it helpful to try to understand the abstract constructs like "inner shareable domain", "point of unification" from the architecture reference in more concrete terms.
Let's think about a system with several cores. Their L1d caches are coherent, but their L1i caches need not be unified with L1d, nor coherent with each other. However, the L2 cache is unified.
The system does not have any way for L1d and L1i to talk to each other
directly; the only path between them is through L2. So once we have
written our new code to L1d, we have to write it back to L2 (dc cvau
), then
invalidate L1i (ic ivau
) so that it repopulates from the new code in L2.
In this setting, PoU is the L2 cache, and that's exactly where we want to clean / invalidate to.
There's some explanation of these terms in page D4-2646. In particular:
The PoU for an Inner Shareable shareability domain is the point by which the instruction and data caches and the translation table walks of all the PEs in that Inner Shareable shareability domain are guaranteed to see the same copy of a memory location.
Here, the Inner Shareable domain is going to contain all the cores
that could run the threads of our program; indeed, it is supposed to
contain all the cores running the same kernel as us (page B2-166).
And because the memory we are dc cvau
ing is presumably marked with
the Inner Shareable attribute or better, as any reasonable OS should
do for us, it cleans to the PoU of the domain, not merely the PoU of
our core (PE). So that's just what we want: a cache level that all
instruction cache fills from all cores would see.
The Point of Coherency is further down; it is the level that everything on the system sees, including DMA hardware and such. Most likely this is main memory, below all the caches. We don't need to get down to that level; it would just slow everything down for no benefit.
Hopefully that helps with your question 1.
Note that the cache clean and invalidate instructions run "in the
background" as it were, so that you can execute a long string of them
(like a loop over all affected cache lines) without waiting for them
to complete one by one. dsb ish
is used once at the end to wait for
them all to finish.
Some commentary about dsb
, towards your questions #2 and #3. Its
main purpose is as a barrier; it makes sure that all the pending data
accesses within our core (in store buffers, etc) get flushed out to
L1d cache, so that all other cores can see them. This is the kind of
barrier you need for general inter-thread memory ordering. (Or for
most purposes, the weaker dmb
suffices; it enforces ordering but
doesn't actually wait for everything to be flushed.) But it doesn't
do anything else to the caches themselves, nor say anything about what
should happen to that data beyond L1d. So by itself, it would not be
anywhere near strong enough for what we need here.
As far as I can tell, the "wait for cache maintenance to complete"
effect is a sort of bonus feature of dsb ish
. It seems orthogonal
to the instruction's main purpose, and I'm not sure why they didn't
provide a separate wcm
instruction instead. But anyway, it is only
dsb ish
that has this bonus functionality; dsb ishst
does not.
D4-2658: "In all cases, where the text in this section refers to a DMB
or a DSB, this means a DMB or DSB whose required access type is
both loads and stores".
I ran some tests of this on a Cortex A-72. Omitting either of the dc cvau
or ic ivau
usually results in the stale code being executed, even if dsb ish
is done instead. On the other hand, doing dc cvau ; ic ivau
without any dsb ish
, I didn't observe any failures; but that could be luck or a quirk of this implementation.
To your #4, the sequence we've been discussing (dc cvau ; dsb ish ; ic ivau ; dsb ish ; isb
) is intended for the case when you will run
the code on the same core that wrote it. But it actually shouldn't
matter which thread does the dc cvau ; dsb ish ; ic ivau ; dsb ish
sequence, since the cache maintenance instructions cause all the cores
to clean / invalidate as instructed; not just this one. See table
D4-6. (But if the dc cvau
is in a different thread than the writer, maybe the writer has to have completed a dsb ish
beforehand, so that the written data really is in L1d and not still in the writer's store buffer? Not sure about that.)
The part that does matter is isb
. After ic ivau
is complete, the
L1i caches are cleared of stale code, and further instruction fetches
by any core will see the new code. However, the runner core might
previously have fetched the old code from L1i, and still be holding
it internally (decoded and in the pipeline, uop cache, speculative
execution, etc). isb
flushes these CPU-internal mechanisms,
ensuring that all further instructions to be executed have actually
been fetched from the L1i cache after it was invalidated.
Thus, the isb
needs to be executed in the thread that is going to
run the newly written code. And moreover you need to make sure that
it is done after all the cache maintenance has fully completed;
maybe by having the writer thread notify it via condition variable or
the like.
I tested this too. If all the cache maintenance instructions, plus an isb
, are done by the writer, but the runner doesn't isb
, then once again it can execute the stale code. I was only able to reproduce this in a test where the writer patches an instruction in a loop that the runner is executing concurrently, which probably ensures that the runner had already fetched it. This is legal provided that the old and new instruction are, say, a branch and a nop respectively (see B2.2.5), which is what I did. (But it is not guaranteed to work for arbitrary old and new instructions.)
I tried some other tests to try to arrange it so that the instruction wasn't actually executed until it was patched, yet it was the target of a branch that should have been predicted taken, in hopes that this would get it prefetched; but I couldn't get the stale version to execute in that case.
One thing I wasn't quite sure about is this. A typical modern OS may
well have W^X, where no virtual page can be simultaneously writable
and executable. If after writing the code, you call the equivalent of
mprotect
to make the page executable, then most likely the OS is
going to take care of all the cache maintenance and synchronization
for you (but I guess it doesn't hurt to do it yourself too).
But another way to do it would be with an alias: you map the memory
writable at one virtual address, and executable at another. The
writer writes at the former address, and the runner jumps to the
latter. In that case, I think you would simply dc cvau
the
writable address, and ic ivau
the executable one, but I couldn't
find confirmation of that. But I tested it, and it worked no matter which alias was passed to which cache maintenance instruction, while it failed if either instruction was omitted altogether. So it appears that the cache maintenance is done by physical address underneath.
QUESTION
I am working on an audio player with Vue 3 and the Napster API.
Project details

The player has a progress bar. I use the trackProgress computed property to update the progress in real-time:
ANSWER
Answered 2021-Dec-24 at 22:41

You should create a data property trackProgress and update it in a listener which you create in the created() hook (similar to the ended event).
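A sketch of the suggested change (a Vue 3 options-API shape is assumed; this.audio is an illustrative name). The percentage calculation is shown runnable; the component wiring is in comments:

```javascript
// Progress of a track as a percentage of its duration.
function progressPercent(currentTime, duration) {
  return duration > 0 ? (currentTime / duration) * 100 : 0;
}

// Hypothetical component wiring, per the answer: trackProgress becomes
// a data property updated from the audio element's "timeupdate" event,
// instead of a computed property.
// export default {
//   data: () => ({ trackProgress: 0 }),
//   created() {
//     this.audio.addEventListener('timeupdate', () => {
//       this.trackProgress = progressPercent(this.audio.currentTime, this.audio.duration);
//     });
//   },
// };

console.log(progressPercent(30, 120)); // 25
```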
QUESTION
I am having an issue with my navbar: once I make the browser tab smaller, the text stays the same size, but the logo gets smaller and smaller until it disappears. How can I make it so everything gets smaller? If any more information is needed, I will provide it. Here are some examples of my problem: the 100%-width page vs. the page made smaller, for smaller screens.
...ANSWER
Answered 2021-Dec-07 at 19:36

I think you're using @media wrong.
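Without the original markup, here is a hedged sketch of the usual fix: scale both the logo and the nav text under the same breakpoint so they shrink together, rather than letting only the logo shrink (class names and values are illustrative):

```css
/* Illustrative: below the breakpoint, shrink logo and text together. */
@media (max-width: 768px) {
  .navbar .logo { width: 120px; }
  .navbar a { font-size: 0.875rem; }
}
```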
QUESTION
I am trying to create a PyTorch Dataset and DataLoader object using a sample data.
This is the tab-separated dataset:
...ANSWER
Answered 2021-Dec-01 at 02:39

Your data seems to be space-separated, not tab-separated. So, when you specify delimiter="\t", the entire row is read as a single column. But because of usecols=range(0,7), NumPy expects there to be seven columns and throws an error when trying to iterate over them.
To fix this, either change the whitespaces to tabs in your data, or change the delimiter argument to delimiter=" "
.
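A small runnable illustration of the fix, assuming the data is loaded with numpy.loadtxt (the actual loading call is not shown in the question; the values below are made up):

```python
import io

import numpy as np

# Space-separated sample with seven columns, standing in for the real file.
space_data = io.StringIO("1 0 0.171429 1 0 0 2\n0 1 0.085714 0 1 0 0\n")

# delimiter="\t" would read each row as one column and then fail against
# usecols=range(0, 7); matching the actual separator fixes it.
arr = np.loadtxt(space_data, delimiter=" ", usecols=range(0, 7))
print(arr.shape)  # (2, 7)
```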
QUESTION
The C# compiler shows me Non-nullable property must contain a non-null value on:

- EF relationships
- DbSet

According to this documentation, Working with Nullable Reference Types, I could get rid of this warning for DbSet using:
ANSWER
Answered 2021-Nov-30 at 11:51

Answering my own question: I believe this is the most suitable way of removing the warning in EF relationships:
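The answer's code itself is elided; as a sketch, the cited "Working with Nullable Reference Types" documentation describes two patterns (entity names here are illustrative): the null-forgiving operator for required navigation properties, and a Set<T>()-backed property for DbSet, both relying on EF initializing these members at runtime:

```csharp
using System.Collections.Generic;
using Microsoft.EntityFrameworkCore;

public class Blog
{
    public int Id { get; set; }
    public List<Post> Posts { get; set; } = null!;  // EF relationship: EF populates this
}

public class MyContext : DbContext
{
    public DbSet<Blog> Blogs => Set<Blog>();        // DbSet without the warning
}
```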
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install Miscellaneous
You can use Miscellaneous like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.