Threads | 23 multithreading examples from the blog

by Jackson0714 | C# | Version: Current | License: No License

kandi X-RAY | Threads Summary


Threads is a C# library. It has no reported bugs or vulnerabilities, and it has low support. You can download it from GitHub.

23 multithreading examples from the blog

Support

              Threads has a low active ecosystem.
It has 12 stars, 8 forks, and 2 watchers.
              It had no major release in the last 6 months.
              Threads has no issues reported. There are no pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of Threads is current.

Quality

              Threads has 0 bugs and 0 code smells.

Security

              Threads has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              Threads code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

License

              Threads does not have a standard license declared.
              Check the repository for any license declaration and review the terms closely.
              Without a license, all rights are reserved, and you cannot use the library in your applications.

Reuse

              Threads releases are not available. You will need to build from source code and install.

            Top functions reviewed by kandi - BETA

            kandi's functional review helps you automatically verify the functionalities of the libraries and avoid rework.
It currently covers the most popular Java, JavaScript, and Python libraries.

            Threads Key Features

            No Key Features are available at this moment for Threads.

            Threads Examples and Code Snippets

            No Code Snippets are available at this moment for Threads.

            Community Discussions

            QUESTION

            dlopen: libcrypt.so.1: cannot open shared object file: No such file or directory
            Asked 2022-Mar-13 at 10:37

I use EndeavourOS and updated my system on February 17, 2022 using

            sudo pacman -Syu

Ever since, when I run docker-compose, I get this error message: [4221] Error loading Python lib '/tmp/_MEIgGJQGW/libpython3.7m.so.1.0': dlopen: libcrypt.so.1: cannot open shared object file: No such file or directory

Some forum threads suggested reinstalling docker-compose, which I did. I also tried the following solution, but both were without success: Python3.7: error while loading shared libraries: libpython3.7m.so.1.0

            How can I resolve this issue?

            ...

            ANSWER

            Answered 2022-Feb-19 at 22:27

I found several forum posts explaining to install libxcrypt-compat from the AUR. I did not like this solution, but apparently this is the way for now: https://bbs.archlinux.org/viewtopic.php?id=274160&p=2

            If there is a PGP key error when building the package from AUR, use this workaround as explained by Stock44 on this page: https://aur.archlinux.org/packages/libxcrypt-compat

            Source https://stackoverflow.com/questions/71187944

            QUESTION

            Spring Boot WebClient stops sending requests
            Asked 2022-Feb-18 at 14:42

            I am running a Spring Boot app that uses WebClient for both non-blocking and blocking HTTP requests. After the app has run for some time, all outgoing HTTP requests seem to get stuck.

            WebClient is used to send requests to multiple hosts, but as an example, here is how it is initialized and used to send requests to Telegram:

            WebClientConfig:

            ...

            ANSWER

            Answered 2021-Dec-20 at 14:25

I would propose to take a look in the RateLimiter direction. Maybe it does not work as expected, depending on the number of requests your application makes over time. From the Javadoc for RateLimiter: "It is important to note that the number of permits requested never affects the throttling of the request itself ... but it affects the throttling of the next request. I.e., if an expensive task arrives at an idle RateLimiter, it will be granted immediately, but it is the next request that will experience extra throttling, thus paying for the cost of the expensive task." Also helpful might be this discussion: github or github

I could imagine there is some throttling adding up or some other effect in the RateLimiter; I would try to play around with it and make sure it really works the way you want. Alternatively, consider using Spring @Scheduled to read from your queue. You might want to spice it up using embedded JMS for further goodies (message persistence, etc.).

            Source https://stackoverflow.com/questions/70357582

            QUESTION

            log4js-node-DEP0004 - Deprecation warning in Angular 12 karma unit tests
            Asked 2022-Feb-16 at 17:00

            I'm getting the following deprecation warning when running unit tests in a brand new Angular 12 application:

            (node:14940) [log4js-node-DEP0004] DeprecationWarning: Pattern %d{DATE} is deprecated due to the confusion it causes when used. Please use %d{DATETIME} instead.

The warning comes from log4js, which karma depends on. The warning itself is clear as to what should be done, but there are two key missing pieces of information:

            • it doesn't say when/if the old syntax will stop working
            • it doesn't provide a workaround (other than forking karma and replacing the deprecated syntax with the new one - which I'm definitely not going to do).

Downgrading log4js to an earlier version that doesn't output the warning, using forceResolutions, doesn't seem like a good idea, especially since I've found a few GitHub threads related to vulnerabilities in it, although karma doesn't seem to be affected.

            The question: are there actionable paths for not getting the warning, or is "and now we wait" (for a karma update) the only option?

            Note: I've also asked it on karma's repo.

            ...

            ANSWER

            Answered 2022-Feb-16 at 17:00

            Got the fix from karma maintainers:

            Update karma (in package.json > devDependencies.karma) to ^6.3.12.

            Warnings gone. Well done, karma. That was fast!

            Source https://stackoverflow.com/questions/70841924

            QUESTION

            Linearize nested for loops
            Asked 2022-Feb-01 at 08:27

I'm working on a heavy algorithm, and now I'm trying to make it multithreaded. It has a loop with two nested loops:

            ...

            ANSWER

            Answered 2021-Dec-20 at 09:25

            A third attempt:

            I've taken your code, and at last got it to run properly (in python):

            Source https://stackoverflow.com/questions/70413446
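The answerer's Python code is omitted above, so here is a separate, hedged illustration of the general idea behind "linearizing" two nested loops for multithreading (my sketch in C++, not the poster's algorithm; process, rows and cols are hypothetical stand-ins): collapse the (i, j) iteration space into a single index k, split the flat range across threads, and recover i and j from k inside each worker.

#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical stand-in for the poster's inner-loop body.
void process(std::size_t i, std::size_t j) { (void)i; (void)j; /* real work on (i, j) */ }

void run_parallel(std::size_t rows, std::size_t cols, unsigned num_threads) {
    const std::size_t total = rows * cols;          // linearized iteration space
    std::vector<std::thread> workers;

    for (unsigned t = 0; t < num_threads; ++t) {
        // Each thread takes a contiguous chunk of the flattened index range.
        const std::size_t begin = total * t / num_threads;
        const std::size_t end   = total * (t + 1) / num_threads;
        workers.emplace_back([=] {
            for (std::size_t k = begin; k < end; ++k)
                process(k / cols, k % cols);        // recover (i, j) from k
        });
    }
    for (auto& w : workers) w.join();
}

Splitting the flat range rather than the outer loop keeps the chunks balanced even when the outer loop has fewer iterations than there are threads.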

            QUESTION

            What is 'serviceability memory category' of Native Memory Tracking?
            Asked 2022-Jan-17 at 13:38

I have a Java app (JDK13) running in a docker container. Recently I moved the app to JDK17 (OpenJDK17) and found a gradual increase of memory usage by the docker container.

During the investigation I found that the 'serviceability' memory category of NMT grows constantly (15 MB per hour). I checked the page https://docs.oracle.com/en/java/javase/17/troubleshoot/diagnostic-tools.html#GUID-5EF7BB07-C903-4EBD-A9C2-EC0E44048D37 but this category is not mentioned there.

Could anyone explain what this serviceability category means and what can cause such a gradual increase? Also, there are some additional new memory categories compared to JDK13. Maybe someone knows where I can read details about them.

            Here is the result of command jcmd 1 VM.native_memory summary

            ...

            ANSWER

            Answered 2022-Jan-17 at 13:38

            Unfortunately (?), the easiest way to know for sure what those categories map to is to look at OpenJDK source code. The NMT tag you are looking for is mtServiceability. This would show that "serviceability" are basically diagnostic interfaces in JDK/JVM: JVMTI, heap dumps, etc.

            But the same kind of thing is clear from observing that stack trace sample you are showing mentions ThreadStackTrace::dump_stack_at_safepoint -- that is something that dumps the thread information, for example for jstack, heap dump, etc. If you have a suspicion for the memory leak in that code, you might try to build a MCVE demonstrating it, and submitting the bug against OpenJDK, or showing it to a fellow OpenJDK developer. You probably know better what your application is doing to cause thread dumps, focus there.

            That being said, I don't see any obvious memory leaks in StackFrameInfo, neither can I reproduce any leak with stress tests, so maybe what you are seeing is "just" thread dumping over the larger and larger thread stacks. Or you capture it when thread dump is happening. Or... It is hard to say without the MCVE.

            Update: After playing with MCVE, I realized that it reproduces with 17.0.1, but not with either mainline development JDK, or JDK 18 EA, or JDK 17.0.2 EA. I tested with 17.0.2 EA before, so was not seeing it, dang. Bisection between 17.0.1 and 17.0.2 EA shows it was fixed with JDK-8273902 backport. 17.0.2 releases this week, so the bug should disappear after you upgrade.

            Source https://stackoverflow.com/questions/70709971

            QUESTION

            Are memory orders for each atomic correct in this lock-free SPSC ring buffer queue?
            Asked 2022-Jan-06 at 12:33

            I have a ring buffer that looks like:

            ...

            ANSWER

            Answered 2021-Dec-31 at 12:49

            Previous answers may help as background:
            c++, std::atomic, what is std::memory_order and how to use them?
            https://bartoszmilewski.com/2008/12/01/c-atomics-and-memory-ordering/

Firstly, the system you describe is known as a Single Producer - Single Consumer queue. You can always look at the boost version of this container to compare. I often examine boost code, even when I work in situations where boost is not allowed, because examining and understanding a stable solution will give you insights into problems you may encounter (why did they do it that way? Oh, I see it - etc). Having written many similar containers, I will say that your design has to be careful about distinguishing empty from full. If you use a classic {begin,end} pair, you hit the problem that, due to wrapping,

            {begin, begin+size} == {begin, begin} == empty
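One common way around that ambiguity (a minimal sketch of mine, not the poster's code) is to let head and tail run as free, monotonically increasing counters and reduce them modulo the capacity only when indexing the storage, so that full and empty stay distinguishable:

#include <atomic>
#include <cstddef>

// Sketch: free-running counters; unsigned wrap-around keeps the subtraction valid.
struct SpscIndices {
    static constexpr std::size_t capacity = 1024;
    std::atomic<std::size_t> head{0};   // total elements ever published
    std::atomic<std::size_t> tail{0};   // total elements ever consumed

    bool empty() const { return head.load() == tail.load(); }
    bool full()  const { return head.load() - tail.load() == capacity; }
    static std::size_t slot(std::size_t counter) { return counter % capacity; }
};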

Okay, so back to the synchronisation issue.

Given that the order only affects reordering, the use of release in Publish seems a textbook use of the flag. Nothing will read the value until the size of the container is incremented, so you don't care if the writes of the value itself happen in a random order; you only care that the value must be fully written before the count is increased. So I would concur, you are correctly using the flag in the Publish function.
I did question whether the "release" was required in the Consume, but if you are moving out of the queue, and those moves are side-effecting, it may be required. I would say that if you are after raw speed, it may be worth making a second version, specialised for trivial objects, that uses relaxed order for incrementing the head.

You might also consider in-place new/delete as you push/pop. Whilst most moves will leave an object in an empty state, the standard only requires that it is left in a valid state after a move. Explicitly deleting the object after the move may save you from obscure bugs later.

            You could argue that the two atomic loads in consume could be memory_order_consume. This relaxes the constraints to say "I don't care what order they are loaded, as long as they are both loaded by the time they are used". Although I doubt in practice it produces any gain. I am also nervous about this suggestion because when I look at the boost version it is remarkably close to what you have. https://www.boost.org/doc/libs/1_66_0/boost/lockfree/spsc_queue.hpp
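To make the ordering discussion concrete, here is a minimal single-producer / single-consumer push/pop in the spirit of the answer (a hedged sketch assuming the free-running-counter layout from the note above, not the poster's actual class): the producer writes the element first and then publishes it with a release store of the head index; the consumer loads the head with acquire before touching the element, and releases the slot by bumping the tail.

#include <array>
#include <atomic>
#include <cstddef>
#include <optional>

template <typename T, std::size_t Capacity>
class SpscQueue {
    std::array<T, Capacity> slots_{};
    std::atomic<std::size_t> head_{0};  // written by the producer only
    std::atomic<std::size_t> tail_{0};  // written by the consumer only

public:
    bool push(const T& value) {                               // producer thread
        const std::size_t head = head_.load(std::memory_order_relaxed);
        if (head - tail_.load(std::memory_order_acquire) == Capacity)
            return false;                                     // full
        slots_[head % Capacity] = value;                      // write the element first...
        head_.store(head + 1, std::memory_order_release);     // ...then publish it
        return true;
    }

    std::optional<T> pop() {                                  // consumer thread
        const std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (head_.load(std::memory_order_acquire) == tail)
            return std::nullopt;                              // empty
        T value = slots_[tail % Capacity];                    // acquire saw the publish
        tail_.store(tail + 1, std::memory_order_release);     // let the producer reuse the slot
        return value;
    }
};

The relaxed loads of a thread's own index are fine because only that thread ever stores to it; the acquire/release pairs on the other index carry the synchronisation discussed above.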

            Source https://stackoverflow.com/questions/70512371

            QUESTION

            Why set the stop flag using `memory_order_seq_cst`, if you check it with `memory_order_relaxed`?
            Asked 2022-Jan-05 at 15:38

Herb Sutter, in his "atomic<> weapons" talk, shows several example uses of atomics, and one of them boils down to the following: (video link, timestamped)

            • A main thread launches several worker threads.

            • Workers check the stop flag:

              ...

            ANSWER

            Answered 2022-Jan-05 at 14:48
            mo_relaxed is fine for both load and store of a stop flag

            There's also no meaningful latency benefit to stronger memory orders, even if latency of seeing a change to a keep_running or exit_now flag was important.

            IDK why Herb thinks stop.store shouldn't be relaxed; in his talk, his slides have a comment that says // not relaxed on the assignment, but he doesn't say anything about the store side before moving on to "is it worth it".

            Of course, the load runs inside the worker loop, but the store runs only once, and Herb really likes to recommend sticking with SC unless you have a performance reason that truly justifies using something else. I hope that wasn't his only reason; I find that unhelpful when trying to understand what memory order would actually be necessary and why. But anyway, I think either that or a mistake on his part.
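For reference, a minimal sketch of the relaxed stop-flag pattern being discussed (my example, not Herb's slide code):

#include <atomic>
#include <thread>
#include <vector>

std::atomic<bool> stop{false};

void worker() {
    while (!stop.load(std::memory_order_relaxed)) {
        // ... do a chunk of work ...
    }
}

int main() {
    std::vector<std::jthread> workers;
    for (int i = 0; i < 4; ++i)
        workers.emplace_back(worker);

    // ... some time later, tell everyone to finish up:
    stop.store(true, std::memory_order_relaxed);
    // The jthread destructors join; the joins are what order everything the
    // workers did before any later reads in the main thread.
}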

            The ISO C++ standard doesn't say anything about how soon stores become visible or what might influence that, just Section 6.9.2.3 Forward progress

            18. An implementation should ensure that the last value (in modification order) assigned by an atomic or synchronization operation will become visible to all other threads in a finite period of time.

            Another thread can loop arbitrarily many times before its load actually sees this store value, even if they're both seq_cst, assuming there's no other synchronization of any kind between them. Low inter-thread latency is a performance issue, not correctness / formal guarantee.

            And non-infinite inter-thread latency is apparently only a "should" QOI (quality of implementation) issue. :P Nothing in the standard suggests that seq_cst would help on an implementation where store visibility could be delayed indefinitely, although one might guess that could be the case, e.g. on a hypothetical implementation with explicit cache flushes instead of cache coherency. (Although such an implementation is probably not practically usable in terms of performance with CPUs anything like what we have now; every release and/or acquire operation would have to flush the whole cache.)

            On real hardware (which uses some form of MESI cache coherency), different memory orders for store or load don't make stores visible sooner in real time, they just control whether later operations can become globally visible while still waiting for the store to commit from the store buffer to L1d cache. (After invalidating any other copies of the line.)

Stronger orders, and barriers, don't make things happen sooner in an absolute sense; they just delay other things until they're allowed to happen relative to the store or load. (This is the case on all real-world CPUs AFAIK; they always try to make stores visible to other cores ASAP anyway, so the store buffer doesn't fill up.)

            See also (my similar answers on):

            The second Q&A is about x86 where commit from the store buffer to L1d cache is in program order. That limits how far past a cache-miss store execution can get, and also any possible benefit of putting a release or seq_cst fence after the store to prevent later stores (and loads) from maybe competing for resources. (x86 microarchitectures will do RFO (read for ownership) before stores reach the head of the store buffer, and plain loads normally compete for resources to track RFOs we're waiting for a response to.) But these effects are extremely minor in terms of something like exiting another thread; only very small scale reordering.

            because who cares if the thread stops with a slightly bigger delay.

More like, who cares if the thread gets more work done by not making loads/stores after the load wait for the check to complete. (Of course, this work will get discarded if it's in the shadow of a mis-speculated branch on the load result when we eventually load true.) The cost of rolling back to a consistent state after a branch mispredict is more or less independent of how much already-executed work had happened beyond the mispredicted branch. And it's a stop flag, so the total amount of wasted work costing cache/memory bandwidth for other CPUs is pretty minimal.

That phrasing makes it sound like an acquire load or release store would actually get the store seen sooner in absolute real time, rather than just relative to other code in this thread. (Which is not the case.)

            The benefit is more instruction-level and memory-level parallelism across loop iterations when the load produces a false. And simply avoiding running extra instructions on ISAs where an acquire or especially an SC load needs extra instructions, especially expensive 2-way barrier instructions, not like ARM64 ldapr.

            BTW, Herb is right that the dirty flag can also be relaxed, only because of the thread.join sync between the reader and any possible writer. Otherwise yeah, release / acquire.

            But in this case, dirty only needs to be atomic<> at all because of possible simultaneous writers all storing the same value, which ISO C++ still deems data-race UB. e.g. because of the theoretical possibility of hardware race-detection that traps on conflicting non-atomic accesses.
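A minimal sketch of that dirty-flag pattern (my illustration, with hypothetical worker bodies): every writer stores the same value with relaxed order, and the join is the only synchronization the reader needs.

#include <atomic>
#include <thread>
#include <vector>

std::atomic<bool> dirty{false};

int main() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) {
        workers.emplace_back([] {
            bool modified_something = true;                       // stand-in for real work
            if (modified_something)
                dirty.store(true, std::memory_order_relaxed);     // same value from every writer
        });
    }
    for (auto& w : workers) w.join();   // join synchronizes-with each thread's completion

    // After the joins, a relaxed load is enough: the stores happened-before this read.
    if (dirty.load(std::memory_order_relaxed)) {
        // ... flush / persist state ...
    }
}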

            Source https://stackoverflow.com/questions/70581645

            QUESTION

            Python threads difference for 3.10 and others
            Asked 2022-Jan-04 at 21:25

For some simple thread-related code, i.e.:

            ...

            ANSWER

            Answered 2021-Nov-17 at 14:58

            An answer from a core developer:

            Unintended consequence of Mark Shannon's change that refactors fast opcode dispatching: https://github.com/python/cpython/commit/4958f5d69dd2bf86866c43491caf72f774ddec97 -- the INPLACE_ADD opcode no longer uses the "slow" dispatch path that checks for interrupts and such.

            Source https://stackoverflow.com/questions/69993959

            QUESTION

            refusing to allow a Personal Access Token to create or update workflow
            Asked 2022-Jan-04 at 10:12

Today, when I added a workflow and pushed the code to the GitHub remote repo, it showed this error:

            ...

            ANSWER

            Answered 2021-Aug-17 at 05:15

Give the workflow privilege when creating the token in GitHub:

            Source https://stackoverflow.com/questions/68811838

            QUESTION

            What is the significance of 'strongly happens before' compared to '(simply) happens before'?
            Asked 2022-Jan-02 at 18:21

            The standard defines several 'happens before' relations that extend the good old 'sequenced before' over multiple threads:

            [intro.races]

            11 An evaluation A simply happens before an evaluation B if either

            (11.1) — A is sequenced before B, or
            (11.2) — A synchronizes with B, or
            (11.3) — A simply happens before X and X simply happens before B.

            [Note 10: In the absence of consume operations, the happens before and simply happens before relations are identical. — end note]

            12 An evaluation A strongly happens before an evaluation D if, either

            (12.1) — A is sequenced before D, or
            (12.2) — A synchronizes with D, and both A and D are sequentially consistent atomic operations ([atomics.order]), or
            (12.3) — there are evaluations B and C such that A is sequenced before B, B simply happens before C, and C is sequenced before D, or
            (12.4) — there is an evaluation B such that A strongly happens before B, and B strongly happens before D.

            [Note 11: Informally, if A strongly happens before B, then A appears to be evaluated before B in all contexts. Strongly happens before excludes consume operations. — end note]

            (bold mine)

The difference between the two seems very subtle. 'Strongly happens before' is never true for matching pairs of release-acquire operations (unless both are seq-cst), but it still respects release-acquire synchronization in a way, since operations sequenced before a release 'strongly happen before' the operations sequenced after the matching acquire.

            Why does this difference matter?

            'Strongly happens before' was introduced in C++20, and pre-C++20, 'simply happens before' used to be called 'strongly happens before'. Why was it introduced?

            [atomics.order]/4 says that the total order of all seq-cst operations is consistent with 'strongly happens before'.

            Does it mean that it's not consistent with 'simply happens before'? If so, why not?

            I'm ignoring the plain 'happens before', because it differs from 'simply happens before' only in its handling of memory_order_consume, the use of which is temporarily discouraged, since apparently most (all?) major compilers treat it as memory_order_acquire.

I've already seen this Q&A, but it doesn't explain why 'strongly happens before' exists, and doesn't fully address what it means (it just states that it doesn't respect release-acquire synchronization, which isn't completely the case).

            Found the proposal that introduced 'simply happens before'.

            I don't fully understand it, but it explains following:

            • 'Strongly happens before' is a weakened version of 'simply happens before'.
• The difference is only observable when seq-cst is mixed with acq-rel on the same variable (I think it means when an acquire load reads a value from a seq-cst store, or when a seq-cst load reads a value from a release store). But the exact effects of mixing the two are still unclear to me.
            ...

            ANSWER

            Answered 2022-Jan-02 at 18:21

            Here's my current understanding, which could be incomplete or incorrect. A verification would be appreciated.

            C++20 renamed strongly happens before to simply happens before, and introduced a new, more relaxed definition for strongly happens before, which imposes less ordering.

            Simply happens before is used to reason about the presence of data races in your code. (Actually that would be the plain 'happens before', but the two are equivalent in absence of consume operations, the use of which is discouraged by the standard, since most (all?) major compilers treat them as acquires.)

            The weaker strongly happens before is used to reason about the global order of seq-cst operations.

            This change was introduced in proposal P0668R5: Revising the C++ memory model, which is based on the paper Repairing Sequential Consistency in C/C++11 by Lahav et al (which I didn't fully read).

            The proposal explains why the change was made. Long story short, the way most compilers implement atomics on Power and ARM architectures turned out to be non-conformant in rare edge cases, and fixing the compilers had a performance cost, so they fixed the standard instead.

            The change only affects you if you mix seq-cst operations with acquire-release operations on the same atomic variable (i.e. if an acquire operation reads a value from a seq-cst store, or a seq-cst operation reads a value from a release store).

            If you don't mix operations in this manner, then you're not affected (i.e. can treat simply happens before and strongly happens before as equivalent).
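To make "mixing" concrete, a minimal illustration (my example, not the proposal's): the same atomic variable is accessed with both seq-cst and acquire/release orders, and the synchronization crosses between the two.

#include <atomic>

std::atomic<int> x{0};

void thread_a() {
    x.store(1, std::memory_order_seq_cst);        // seq-cst store
}

void thread_b() {
    // An acquire load that reads the seq-cst store above: exactly the
    // "mixed" case the C++20 change is concerned with.
    int v = x.load(std::memory_order_acquire);
    (void)v;
}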

            The gist of the change is that the synchronization between a seq-cst operation and the corresponding acquire/release operation no longer affects the position of this specific seq-cst operation in the global seq-cst order, but the synchronization itself is still there.

            This makes the seq-cst order for such seq-cst operations very moot, see below.

The proposal presents the following example, and I'll try to explain my understanding of it:

            Source https://stackoverflow.com/questions/70554277

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install Threads

            You can download it from GitHub.

            Support

For any new features, suggestions, or bugs, create an issue on GitHub. If you have any questions, check and ask on the community page Stack Overflow.
CLONE
• HTTPS: https://github.com/Jackson0714/Threads.git
• CLI: gh repo clone Jackson0714/Threads
• SSH: git@github.com:Jackson0714/Threads.git
