stuff | Random stuff and scripts
kandi X-RAY | stuff Summary
Random stuff and scripts. For commercial support or customization you can contact me at edwin@datux.nl.
Community Discussions
Trending Discussions on stuff
QUESTION
I need to access the first element of a list. The problem is that the lists vary in how deeply they are nested. Here is an example:
ANSWER
Answered 2022-Feb-02 at 14:38
You can use rrapply::rrapply:
QUESTION
With the first patch for AS Arctic Fox, Jetpack Compose previews stopped working.
I'm getting this error for all previews - even older ones, which worked fine a while back:
...
ANSWER
Answered 2022-Feb-24 at 11:36
This got fixed in AS Bumblebee, patch 2.
QUESTION
I am working with WSL a lot lately because I need some native UNIX tools (and emulators aren't good enough). I noticed that the speed difference when working with NPM/Yarn is incredible.
I conducted a simple test that confirmed my feelings. The test was running npx create-react-app my-test-app; the WSL result was "Done in 287.56s." while Git Bash finished with "Done in 10.46s."
This is not the whole picture, because the perceived time was higher in both cases, but even based on that - there is a big issue somewhere. I just don't know where. The project I'm working on uses tens of libraries and changing even one of them takes minutes instead of seconds.
Is this something that I can fix? If so - where to look for clues?
Additional info:
- my processor: AMD Ryzen 7 5800H with Radeon Graphics, 3201 MHz, 8 cores, 16 logical processors
- I'm running Windows 11 with all the latest updates to both the system and the WSL; the chosen system is Ubuntu 20.04
- I've seen some somewhat similar questions, like "'npm install' extremely slow on Windows", but they don't touch WSL at all (and my pure-Windows NPM works fast)
- the issue is not limited to NPM; it also affects Yarn
- another problem I'm getting is that file watching is not happening (I need to restart the server with every change); in some applications I don't get any errors, sometimes I get the following:
...
ANSWER
Answered 2021-Aug-29 at 15:40
Since you mention executing the same files (with proper performance) from within Git Bash, I'm going to make an assumption here. Correct me if I'm wrong on this, and I'll delete the answer and look for another possibility.
This would be explained (and expected) if your files are stored on /mnt/c (a.k.a. C:, or /C under Git Bash) or any other Windows drive, as they would likely need to be in order to be accessed by Git Bash.
WSL2 uses the 9P protocol to access Windows drives, and it is currently known to be very slow when compared to:
- Native NTFS (obviously)
- The ext4 filesystem on the virtual disk used by WSL2
- And even the performance of WSL1 with Windows drives
I've seen a git clone of a large repo (the WSL2 Linux kernel repo on GitHub) take 8 minutes on WSL2 on a Windows drive, but only seconds on the root filesystem.
Two possibilities:
- If possible (and it is for most Node projects), convert your WSL to version 1 with wsl --set-version 1. I always recommend making a backup with wsl --export first. And since you are making a backup anyway, you may as well just create a copy of the instance by wsl --import-ing your backup as --version 1 (as the last argument). WSL1 and WSL2 both have their uses, and you may find it helpful to keep both around. See this answer for more details on the exact syntax.
- Or just move the project over to somewhere under the WSL root, such as /home/username/src/.
QUESTION
Herb Sutter, in his "atomic<> weapons" talk, shows several example uses of atomics, and one of them boils down to the following: (video link, timestamped)
A main thread launches several worker threads.
Workers check the stop flag:
...
ANSWER
Answered 2022-Jan-05 at 14:48
mo_relaxed is fine for both load and store of a stop flag.
There's also no meaningful latency benefit to stronger memory orders, even if latency of seeing a change to a keep_running or exit_now flag was important.
IDK why Herb thinks stop.store shouldn't be relaxed; in his talk, his slides have a comment that says // not relaxed on the assignment, but he doesn't say anything about the store side before moving on to "is it worth it".
Of course, the load runs inside the worker loop, but the store runs only once, and Herb really likes to recommend sticking with SC unless you have a performance reason that truly justifies using something else. I hope that wasn't his only reason; I find that unhelpful when trying to understand what memory order would actually be necessary and why. But anyway, I think it's either that or a mistake on his part.
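For concreteness, here is a minimal sketch of that stop-flag pattern (my own illustration, not code from Herb's talk):

```cpp
#include <atomic>
#include <thread>
#include <vector>

std::atomic<bool> stop{false};

void worker() {
    // Relaxed load: we only need to eventually observe the flag; no other
    // data is published through it, so no acquire/release pairing is needed.
    while (!stop.load(std::memory_order_relaxed)) {
        // ... do a chunk of work ...
    }
}

int main() {
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i) pool.emplace_back(worker);
    // ... later, time to shut down:
    stop.store(true, std::memory_order_relaxed);  // relaxed store is fine too
    for (auto& t : pool) t.join();  // join() supplies the synchronization
}
```

The join() at the end is what makes the workers' results visible to main; the flag itself carries no data.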
The ISO C++ standard doesn't say anything about how soon stores become visible or what might influence that, just Section 6.9.2.3 Forward progress
18. An implementation should ensure that the last value (in modification order) assigned by an atomic or synchronization operation will become visible to all other threads in a finite period of time.
Another thread can loop arbitrarily many times before its load actually sees this store value, even if they're both seq_cst, assuming there's no other synchronization of any kind between them. Low inter-thread latency is a performance issue, not correctness / formal guarantee.
And non-infinite inter-thread latency is apparently only a "should" QOI (quality of implementation) issue. :P Nothing in the standard suggests that seq_cst would help on an implementation where store visibility could be delayed indefinitely, although one might guess that could be the case, e.g. on a hypothetical implementation with explicit cache flushes instead of cache coherency. (Although such an implementation is probably not practically usable in terms of performance with CPUs anything like what we have now; every release and/or acquire operation would have to flush the whole cache.)
On real hardware (which uses some form of MESI cache coherency), different memory orders for store or load don't make stores visible sooner in real time; they just control whether later operations can become globally visible while still waiting for the store to commit from the store buffer to L1d cache. (After invalidating any other copies of the line.)
Stronger orders, and barriers, don't make things happen sooner in an absolute sense; they just delay other things until they're allowed to happen relative to the store or load. (This is the case on all real-world CPUs AFAIK; they always try to make stores visible to other cores ASAP anyway, so the store buffer doesn't fill up.)
See also (my similar answers on):
- Does hardware memory barrier make visibility of atomic operations faster in addition to providing necessary guarantees?
- If I don't use fences, how long could it take a core to see another core's writes?
- memory_order_relaxed and visibility
- Thread synchronization: How to guarantee visibility of writes (it's a non-issue on current real hardware)
The second Q&A is about x86 where commit from the store buffer to L1d cache is in program order. That limits how far past a cache-miss store execution can get, and also any possible benefit of putting a release or seq_cst fence after the store to prevent later stores (and loads) from maybe competing for resources. (x86 microarchitectures will do RFO (read for ownership) before stores reach the head of the store buffer, and plain loads normally compete for resources to track RFOs we're waiting for a response to.) But these effects are extremely minor in terms of something like exiting another thread; only very small scale reordering.
"because who cares if the thread stops with a slightly bigger delay."
More like, who cares if the thread gets more work done by not making loads/stores after the load wait for the check to complete. (Of course, this work will get discarded if it's in the shadow of a mis-speculated branch on the load result when we eventually load true.) The cost of rolling back to a consistent state after a branch mispredict is more or less independent of how much already-executed work had happened beyond the mispredicted branch. And it's a stop flag, so the total amount of wasted work costing cache/memory bandwidth for other CPUs is pretty minimal.
That phrasing makes it sound like an acquire load or release store would actually get the store seen sooner in absolute real time, rather than just relative to other code in this thread. (Which is not the case.)
The benefit is more instruction-level and memory-level parallelism across loop iterations when the load produces a false. And simply avoiding running extra instructions on ISAs where an acquire or especially an SC load needs extra instructions, especially expensive 2-way barrier instructions, not like ARM64 ldapr.
BTW, Herb is right that the dirty flag can also be relaxed, only because of the thread.join sync between the reader and any possible writer. Otherwise yeah, release / acquire.
But in this case, dirty only needs to be atomic<> at all because of possible simultaneous writers all storing the same value, which ISO C++ still deems data-race UB, e.g. because of the theoretical possibility of hardware race-detection that traps on conflicting non-atomic accesses.
QUESTION
Somehow the "build" directory doesn't exist within devtools-2.9.2 directory. I am getting this exception only while running the build on iPhone SE 2nd generation iOS 14.5 simulator though, which is weird. This began after an unexpected forced reboot of my mac. But I can not directly connect this event.
What is happening and how can I build this stuff or get rid of exceptions? And what is the cause?
flutter doctor -v
No issues found
ANSWER
Answered 2021-Dec-20 at 23:42
DevTools is no longer being shipped via pub and is now part of the Dart SDK. 2.9.2 was published unintentionally this morning and has since been retracted.
How were you starting DevTools? You might want to file an issue on the DevTools repository if you're still having issues and I (@bkonyi) can help you out further there.
QUESTION
In the documentation for Ord, it says:
"Implementations must be consistent with the PartialOrd implementation [...]"
That of course makes sense and can easily be achieved as in the example further down:
...
ANSWER
Answered 2021-Dec-26 at 00:40
Apparently, there is a reference to that in a GitHub issue, rust-lang/rust#63104:
"This conflicts with the existing blanket impl in core."
QUESTION
I have lists a, b, c, ... of equal length. I'd like to sort all of them in the order obtained by sorting a; i.e., I could do the decorate-sort-undecorate pattern:
ANSWER
Answered 2021-Dec-04 at 21:14
I think "without creating temporary objects" is impossible, especially since "everything is an object" in Python.
You could get O(1) space / number of objects if you implement some sorting algorithm yourself, though if you want O(n log n) time and stability, it's difficult. If you don't care about stability (seems likely, since you say you want to sort by a but then actually sort by a, b and c), heapsort is reasonably easy:
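The original code didn't survive extraction; purely as a sketch of the idea (in C++ rather than the question's Python, with hypothetical names), here is a heapsort whose swap step moves the elements of all three arrays together, so everything ends up ordered by a:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Heapsort on `a`, applying every swap to `b` and `c` as well, so all three
// end up in the order determined by `a`. O(1) extra space, O(n log n), not
// stable. Assumes the three vectors have equal length.
void heapsort_by_a(std::vector<int>& a, std::vector<int>& b, std::vector<int>& c) {
    auto swap3 = [&](std::size_t i, std::size_t j) {
        std::swap(a[i], a[j]);
        std::swap(b[i], b[j]);
        std::swap(c[i], c[j]);
    };
    auto sift_down = [&](std::size_t root, std::size_t end) {
        while (2 * root + 1 < end) {
            std::size_t child = 2 * root + 1;                  // left child
            if (child + 1 < end && a[child] < a[child + 1]) ++child;
            if (a[root] >= a[child]) return;                   // heap property holds
            swap3(root, child);
            root = child;
        }
    };
    std::size_t n = a.size();
    for (std::size_t i = n / 2; i-- > 0;) sift_down(i, n);     // heapify
    for (std::size_t end = n; end-- > 1;) {                    // pop max repeatedly
        swap3(0, end);
        sift_down(0, end);
    }
}
```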
QUESTION
According to the docs, nothing called .angular is mentioned. Yet, in my project, I get that directory, immediately in the root of the project (on the same level as e.g. package.json).
It wasn't there before, because my .gitignore would've barked at it. Currently, I'm trying out the latest Angular version, 13.0, and I conclude that it's a new addition to the tooling. Probably it's some temporary stuff, since its contents are the following:
- .angular/cache/angular-webpack
- .angular/cache/babel-webpack
It was pointless to google ".angular directory dot what is", and the only (semi-)relevant hit I got was the docs linked above.
What's up with the .angular directory, and do I need to care (and/or version-control it)?
...
ANSWER
Answered 2021-Dec-02 at 10:07
The ".angular/cache" folder should be ignored by your version control system (git, svn, etc.).
For git, add this line to your .gitignore file: .angular/cache
QUESTION
Note: I am trying to run packer.exe as a background process to work around a particular issue with the azure-arm builder, and I need to watch the output. I am not using Start-Process because I don't want to use an intermediary file to consume the output.
I have the following code setting up packer.exe to run in the background so I can consume its output and act upon a certain log message. This is part of a larger script, but this is the bit in question that is not behaving correctly:
ANSWER
Answered 2021-Oct-20 at 22:36
StreamReader.ReadLine() is blocking by design. There is an asynchronous alternative, .ReadLineAsync(), which returns a Task instance that you can poll for completion, via its .IsCompleted property, without blocking your foreground thread (polling is your only option in PowerShell, given that it has no language feature analogous to C#'s await).
Here's a simplified example that focuses on asynchronous reading from a StreamReader instance that happens to be a file, to which new lines are added only periodically; use Ctrl-C to abort.
I would expect the code to work the same if you adapt it to your stdout-reading System.Diagnostics.Process code.
QUESTION
I'm trying to figure out how alloca() actually works on a memory level. From the Linux man page:
The alloca() function allocates size bytes of space in the stack frame of the caller. This temporary space is automatically freed when the function that called alloca() returns to its caller.
Does this mean alloca() will forward the stack pointer by n bytes? Or where exactly is the newly created memory allocated?
And isn't this exactly the same as variable length arrays?
I know the implementation details are probably left to the OS and stuff. But I want to know how in general this is accomplished.
...
ANSWER
Answered 2021-Oct-02 at 00:31
Yes, alloca is functionally equivalent to a local variable length array, i.e. this:
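The original snippet didn't survive extraction; here is a minimal sketch of the equivalence it describes (my illustration, assuming a POSIX toolchain; note that VLAs are standard C99 but only a GCC/Clang extension in C++):

```cpp
#include <alloca.h>  // alloca() is not standard C or C++, but common on POSIX
#include <cstddef>
#include <cstring>

void with_alloca(std::size_t n) {
    // Allocates n bytes in this function's stack frame, typically by moving
    // the stack pointer down; freed automatically when the function returns.
    char* buf = static_cast<char*>(alloca(n));
    std::memset(buf, 0, n);
}

void with_vla(std::size_t n) {
    // Functionally equivalent: compilers emit essentially the same
    // stack-pointer adjustment for a variable length array.
    char buf[n];  // VLA: C99 feature; GCC/Clang accept it in C++ as an extension
    std::memset(buf, 0, n);
}
```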
Community Discussions, Code Snippets contain sources that include Stack Exchange Network