java-concurrency | clean up java concurrency example
kandi X-RAY | java-concurrency Summary
Community Discussions
QUESTION
I am watching a video tutorial on Room db. In the video, the instructor says:
"Volatile just means that the instance will be able to get rid of itself or remove itself if need be."
However, based on my understanding, volatile means that the variable will always be read from and written to main memory rather than from a CPU cache.
This post confirmed that as well: http://tutorials.jenkov.com/java-concurrency/volatile.html
How does volatile allow for an instance to remove itself?
Thanks!
...ANSWER
Answered 2021-May-31 at 04:32 The CPU caches are always coherent; the cache, not main memory, is the source of truth, so main memory can be completely out of sync indefinitely and that is perfectly fine. A write of a volatile variable doesn't trigger the whole cache to be flushed to main memory; that would be extremely inefficient. Even a single cache line isn't forced to flush to main memory.
What is typically meant (on X86) by 'flushing' to main memory is that when a volatile store is done, the CPU waits for the stores in the store buffer to be committed to the cache before allowing any subsequent loads to execute. This prevents an older store from being reordered with a newer load to a different address.
The "remove itself" stuff makes no sense at all.
This article is incorrect.
http://tutorials.jenkov.com/java-concurrency/volatile.html
This is not how it works. Caches are synchronized using a cache coherence protocol like MESI. So a write to a cacheline on one CPU will always lead to the copies of the cacheline in the other CPU to be invalidated.
Warning: when dealing with the JMM, it is best to keep it fully abstract and not reason about implementation concerns like cache invalidation. It is fun, but you can very easily shoot yourself in the foot.
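To keep things at the abstract JMM level the answer recommends, here is a minimal sketch of what volatile actually buys you: visibility between threads, not any kind of "self-removal". The class and field names are made up for illustration.

```java
public class VolatileVisibility {
    private static volatile boolean ready = false;
    private static int payload = 0;

    public static int waitForPayload() {
        Thread writer = new Thread(() -> {
            payload = 42;  // ordinary write...
            ready = true;  // ...published by the volatile write (happens-before)
        });
        writer.start();
        // Without 'volatile' on 'ready', this loop could spin forever
        // because the reader is allowed to use a stale value.
        while (!ready) {
            Thread.onSpinWait();
        }
        return payload; // guaranteed to see 42 once ready == true
    }

    public static void main(String[] args) {
        System.out.println(waitForPayload());
    }
}
```

Note that no cache is "flushed" here in any user-visible sense; the JMM only promises that the write of payload is visible to any thread that subsequently reads ready as true.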
QUESTION
While going through the Blocking/Non-Blocking Algorithms section at the link, and the code below explaining the atomic compareAndSet operation
ANSWER
Answered 2021-Jan-02 at 05:07 The value of the AtomicLong is on a cacheline. On X86 there is a feature called cacheline locking, which is used for locked instructions. So when a CAS is done, the cacheline is first acquired in modified/exclusive state and then locked.
If a different CPU wants to access the same cacheline, its cache coherence requests including the request-for-ownership will be ignored till the cacheline is unlocked.
So it is a very lightweight form of synchronization. If you are lucky, the other CPU has some out of order instructions it can execute while it waits for the cacheline.
This approach is called non blocking even though it could lead to other threads 'blocking' since they need to wait. The primary difference with a blocking algorithm is that it can't happen that the CPU (thread) owning the locked cacheline gets suspended while it has locked that cacheline. This is taken care of at the hardware level (so the CPU can't be interrupted between cacheline lock acquire and release). So the blocking is guaranteed to be very short instead of unbounded like with a blocking algorithm.
According to @BeeOnRope there might be some optimistic behavior as well involved but this goes beyond my knowledge level.
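The compareAndSet operation under discussion can be exercised directly. Below is a sketch of the standard non-blocking retry loop; the counter itself is illustrative, not from the question.

```java
import java.util.concurrent.atomic.AtomicLong;

public class CasCounter {
    private final AtomicLong value = new AtomicLong(0);

    // Classic non-blocking increment: read, compute, CAS, retry on conflict.
    public long increment() {
        while (true) {
            long current = value.get();
            long next = current + 1;
            // compareAndSet succeeds only if no other thread changed
            // 'value' between our get() and this call.
            if (value.compareAndSet(current, next)) {
                return next;
            }
            // Another thread won the race; loop and try again.
        }
    }

    public long get() {
        return value.get();
    }
}
```

Each retry corresponds to the situation the answer describes: another CPU held the locked cacheline first, so this thread's CAS observed a changed value and must recompute.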
QUESTION
I've read from this article that:
...Synchronized blocks also guarantee that all variables accessed inside the synchronized block will be read in from main memory, and when the thread exits the synchronized block, all updated variables will be flushed back to main memory again, regardless of whether the variable is declared volatile or not.
There's also an example showed in Effective Java:
...ANSWER
Answered 2020-Jul-02 at 08:11 In the first example, flags is initialised using a static initialiser. The Java Memory Model guarantees that any subsequent reads will see the updated value of the reference and the correct initial state of Flags (basically, Flags is properly published).
However, since Flags is mutable and might be mutated at a later point in time by multiple threads, you need proper synchronisation to ensure memory visibility of its state. So volatile would be needed for its fields (or proper synchronisation).
In the second example, simply declaring flags as volatile won't ensure memory visibility of writes to the array's elements. It just ensures a happens-before relationship between writes to the array reference and subsequent reads of it. To ensure a happens-before relationship between writes to array elements and subsequent reads of them, you need to use locking, which you are already doing.
Why does this work? The JMM guarantees a happens-before relationship between the release of a monitor and its re-acquisition. When a thread releases a lock that is later acquired by another thread, a total ordering (governed by happens-before) is established between whatever writes happened in the previous thread and any subsequent reads by the thread that re-acquired the lock.
Just remember that declaring a reference as volatile does not ensure proper visibility of the mutable state of the object it refers to; you still need proper synchronisation mechanisms to ensure memory visibility.
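A minimal sketch of the locking approach the answer endorses (the class and method names are illustrative, not the book's code): volatile would only publish the array reference, while synchronized accessors make the element writes themselves visible.

```java
public class Flags {
    private final boolean[] flags = new boolean[8];

    // Writes to elements are guarded by the monitor...
    public synchronized void set(int i, boolean v) {
        flags[i] = v;
    }

    // ...and reads take the same monitor, so the JMM's
    // monitor-release/acquire happens-before edge makes the
    // element writes visible to the reading thread.
    public synchronized boolean get(int i) {
        return flags[i];
    }
}
```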
QUESTION
I am reading two text files concurrently, line by line.
What I specifically want to do is: when the lineCount on each thread is the same, I want to take a look at the string that the scanner is currently reading.
I looked around for patterns I could implement, like compare-and-swap and slipped condition, but I cannot wrap my head around how they would help me achieve my goal. I am new to concurrent programming.
What I have managed so far is to synchronize the string reading and printing with the counterSync method, and I know that I have to carry out my thread lock/pause operation there and take a look at the string.
ANSWER
Answered 2020-Mar-19 at 15:01 It seems that you didn't post a complete example. But, a few general comments:
You might be able to get away with using "compare-and-swap" logic for an integer, but you should not expect it to work for a more-sophisticated thing like a Java "String" or any sort of container.
You should simply use the synchronization-objects provided in the language. If you are going to update or even to examine a shared data structure, you must be holding the proper lock.
Of course, "thread-safe queues" are very helpful in many designs because they facilitate the most common activity – message-passing – and allow the various threads to operate gracefully at slightly varying speeds. You still have to lock anything that's shared, but nonetheless it's a useful design that's really as old as the Unix® "pipe."
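The message-passing style the answer recommends can be sketched with the JDK's own thread-safe queue. This is a simplified illustration, not the questioner's file-reading code: one producer hands a string to a consumer through a BlockingQueue.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PipeDemo {
    // A bounded thread-safe queue: put() blocks when full, take() when empty.
    public static String passThrough(String message) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(16);
        Thread producer = new Thread(() -> {
            try {
                queue.put(message); // hand the message to the consumer
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        String received = queue.take(); // blocks until the producer delivers
        producer.join();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(passThrough("line 42"));
    }
}
```

All the locking lives inside the queue, which is exactly why this design lets threads at slightly different speeds cooperate without hand-rolled synchronization.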
QUESTION
public void test() {
    List<Integer> integers = new ArrayList<>();
    for (int i = 0; i < 1000; i++) {
        integers.add(i);
    }
    Map<Integer, Integer> cache = new ConcurrentHashMap<>();
    ExecutorService pool = new ForkJoinPool(10);
    try {
        pool.submit(() -> integers.parallelStream().forEach(integer -> {
            String name = Thread.currentThread().getName();
            System.out.println("Foo " + name);
            cache.put(integer, integer);
        })).get();
    } catch (Exception e) {
    }
    System.out.println(cache);
}
...ANSWER
Answered 2020-Mar-10 at 03:11 Yes, your code will work fine. ConcurrentHashMap guarantees that all the inserted mappings happen in a thread-safe manner.
You don't need to worry about pool and cache: they're effectively final variables, and as such, their values, once set at construction time (before you start any multi-threaded code), won't change anymore.
What may be confusing you is that when dealing with non-final fields, you may need to mark them as volatile if you intend to change them and want to be sure the change is correctly propagated across threads. But as said above, notice how in this case the values of pool and cache are never changed.
QUESTION
What is the difference between cpu cache and memory cache?
When data is cached in memory there is also a higher probability that this data is also cached in the CPU cache of the CPU executing the thread. [1]
And how can we relate caching in the CPU to caching in memory?
...ANSWER
Answered 2017-Feb-24 at 09:54 The "memory cache" appears to really just be talking about anywhere in memory. Sometimes this is a cache of data stored on disk or externally. This is a software cache.
The CPU cache is a hardware cache and is faster, more localised but smaller.
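A software "memory cache" in the sense the answer describes is just a data structure your program keeps in RAM to avoid repeating a slower lookup. A minimal sketch, where the slow source is simulated and all names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

public class SoftwareCache {
    private final Map<String, String> cache = new HashMap<>();
    private int slowLoads = 0; // counts cache misses

    // Simulates an expensive fetch (e.g. disk or network).
    private String loadSlowly(String key) {
        slowLoads++;
        return "value-of-" + key;
    }

    // A software cache: check RAM first, fall back to the slow source.
    public String get(String key) {
        return cache.computeIfAbsent(key, this::loadSlowly);
    }

    public int slowLoadCount() {
        return slowLoads;
    }
}
```

The CPU cache, by contrast, is invisible to this code: the hardware decides which of these map entries also sit in a cacheline near the executing core.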
QUESTION
I'm currently studying about signaling in threads and came across this article for signaling via shared objects,
http://tutorials.jenkov.com/java-concurrency/thread-signaling.html
It says that we can create a shared object and pass that object to the threads, which they can use to signal each other.
Following is the snippet provided for shared object,
...ANSWER
Answered 2019-Sep-15 at 07:32 "But even if I remove the synchronized on the MySignal methods, this provides the same output, as the sharedSignal object is locked by one of the threads."
Removing the synchronized from the methods won't make a difference, as there is already a synchronized block guarding the method access from different threads.
"And if I remove only the synchronized in run(), it does not work properly, as one of the threads ends before even going to sleep."
But if you remove the synchronized block, then the contents of the block are not executed atomically. What I mean is, without the synchronized block, any thread can call sharedSignal.hasDataToProcess(), acquire the lock on the MySignal object, and release it when it is done with the method; then another thread is free to call sharedSignal.setHasDataToProcess(false), as the lock on the MySignal instance was already released by the earlier thread when it was done with the method.
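The check-then-act hazard the answer describes can be sketched as follows. The class and method names MySignal, hasDataToProcess, and setHasDataToProcess come from the question; consumeIfAvailable is a hypothetical helper added to show why the outer synchronized block matters.

```java
public class MySignal {
    private boolean hasDataToProcess = false;

    public synchronized boolean hasDataToProcess() {
        return hasDataToProcess;
    }

    public synchronized void setHasDataToProcess(boolean hasData) {
        hasDataToProcess = hasData;
    }

    // The check and the act must form one atomic unit, so both calls
    // happen inside a single synchronized block. Java monitors are
    // reentrant, so calling the synchronized methods here is fine.
    public boolean consumeIfAvailable() {
        synchronized (this) {
            if (hasDataToProcess()) {       // check...
                setHasDataToProcess(false); // ...then act, still holding the lock
                return true;
            }
            return false;
        }
    }
}
```

With only the per-method synchronization, the lock is released between the check and the act, which is exactly the window the answer says another thread can slip into.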
QUESTION
I am working on a Java library which has a singleton class with two methods: createTask() and addPointsToTask().
The library is meant to be used in any Java service which executes multiple requests.
The service should be able to call createTask only once during its processing of a single request. Any further calls to createTask in the same thread execution should fail. addPointsToTask can be called any number of times.
As a library owner, how can I restrict this method to being called only once per thread?
I have explored ThreadLocal, but don't think it fits my purpose.
One solution is to ask the service that is using the library to set a unique id in a ThreadLocal, but as this 'set-to-thread-local' solution is outside the boundary of the library, it is not a foolproof solution.
Any hints?
...ANSWER
Answered 2019-Aug-30 at 13:01 You will not be able to prohibit multiple calls from the same request, simply because your library has no concept of what a "request" actually is. This very much depends on the service using the library. Some services may use a single thread per request, but others may not. Using thread-locals is error-prone, especially when you are working in multi-threaded or reactive applications where code processing a request can execute on multiple parallel threads.
If your requirement is that addPointsToTask is only called for a task that was actually started by some code that is processing the current request, you could set up your API like that. E.g. createTask could return a context object that is required to call addPointsToTask later.
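A hypothetical sketch of that context-object API (createTask and the points methods come from the question; TaskContext and its internals are invented for illustration): the task handle is only obtainable from createTask, so points can never be added to a task the caller didn't create.

```java
public final class TaskService {
    // The context object the answer suggests: createTask() hands back a
    // token, and points can only be added through that token.
    public static final class TaskContext {
        private int points = 0;

        private TaskContext() { } // only TaskService can create one

        public void addPoints(int p) {
            points += p;
        }

        public int totalPoints() {
            return points;
        }
    }

    public TaskContext createTask() {
        return new TaskContext();
    }
}
```

This moves the correctness guarantee from "one call per thread" (which the library cannot verify) to the type system: without a TaskContext in hand, there is simply no way to add points.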
QUESTION
I am reading an article about the Java volatile keyword and have some questions: click here
...ANSWER
Answered 2019-Aug-20 at 08:14 It's all about the happens-before relationship.
This relationship is simply a guarantee that memory writes by one specific statement are visible to another specific statement.
In the same thread,
QUESTION
I made a runnable class and created a thread with a unique name, but when I send this runnable through executor.scheduleAtFixedRate it creates its own thread, and I do not understand why.
I tried to read here but still I do not understand this: https://www.codejava.net/java-core/concurrency/java-concurrency-scheduling-tasks-to-execute-after-a-given-delay-or-periodically
...ANSWER
Answered 2019-Aug-04 at 11:48 The executor service is creating it for you. If you wish to override the naming of threads, you can set options in the executor service. See Naming threads and thread-pools of ExecutorService.
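A sketch of the naming option the answer points at: supply a ThreadFactory so the pool's own threads (the ones that actually run your runnable) carry your name. The prefix and class name are illustrative.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedScheduler {
    // The executor ignores the name of any thread you constructed yourself;
    // it runs tasks on threads obtained from its ThreadFactory.
    public static ScheduledExecutorService create(String prefix) {
        AtomicInteger counter = new AtomicInteger(1);
        ThreadFactory factory = runnable -> {
            Thread t = new Thread(runnable);
            t.setName(prefix + "-" + counter.getAndIncrement());
            return t;
        };
        return Executors.newSingleThreadScheduledExecutor(factory);
    }
}
```

Any task scheduled on this executor, including via scheduleAtFixedRate, then runs on a thread named with your prefix rather than the default pool name.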
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Install java-concurrency
You can use java-concurrency like any standard Java library. Please include the jar files in your classpath. You can also use any IDE to run and debug the java-concurrency component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org; for Gradle installation, please refer to gradle.org.