kandi X-RAY | java-thread Summary
java-thread Key Features
java-thread Examples and Code Snippets
Trending Discussions on java-thread
Asking a question from https://www.baeldung.com/java-thread-safety.
Code given is...
ANSWER: Answered 2021-Apr-28 at 12:02
Multiple threads can indeed call factorial concurrently. Each activation of factorial will have its own copy of f, accessible to it alone. That is the meaning of a "local variable". number is not modified, and acts like a local variable of factorial in any case.
As to your question in the comment: no, there's one copy of the code - no need to have more than one. But each thread has its own execution of the code, so if it helps to think of that as a "separate copy", not much conceptual harm is done.
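The point above can be sketched as follows. This is a minimal illustration (the class name and the exact body of factorial are my assumptions, not the asker's code): each concurrent activation gets its own local f, so the two threads cannot interfere.

```java
import java.math.BigInteger;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FactorialDemo {
    // number is only read; f is local to each activation, so concurrent
    // calls are safe without any synchronization.
    static BigInteger factorial(int number) {
        BigInteger f = BigInteger.ONE;
        for (int i = 2; i <= number; i++) {
            f = f.multiply(BigInteger.valueOf(i));
        }
        return f;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<BigInteger> a = pool.submit(() -> factorial(10));
        Future<BigInteger> b = pool.submit(() -> factorial(10));
        // Both threads compute the same correct result independently.
        System.out.println(a.get());
        System.out.println(b.get());
        pool.shutdown();
    }
}
```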
From this source one can read:
It's worth mentioning that synchronized and concurrent collections only make the collection itself thread-safe and not the contents.
I thought if a Collection is thread-safe then its contents will implicitly be thread-safe. I mean, if two threads cannot access my Collection object at the same time, then the objects which my Collection is holding will implicitly become thread-safe. Am I missing the point? Could someone please explain with an example?...
ANSWER: Answered 2021-Feb-05 at 08:48
The access to the elements is thread-safe, but not the usage of them. Two threads can get access to the same element (one after the other), and both then call methods on that element. The collection would not even know.
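A minimal sketch of that distinction (the Counter element and method names are mine, for illustration): the synchronized list protects its own structure, but two threads mutating the same element still race.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SyncCollectionDemo {
    // A mutable element; its counter is NOT thread-safe.
    static class Counter {
        int value = 0;
        void increment() { value++; }  // unsynchronized read-modify-write
    }

    // Two threads hammer the same element of a synchronized list.
    static int race() throws Exception {
        // The list itself is thread-safe: add/get/size are synchronized.
        List<Counter> list = Collections.synchronizedList(new ArrayList<>());
        list.add(new Counter());

        ExecutorService pool = Executors.newFixedThreadPool(2);
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                list.get(0).increment();  // safe access, unsafe mutation
            }
        };
        Future<?> f1 = pool.submit(task);
        Future<?> f2 = pool.submit(task);
        f1.get();
        f2.get();
        pool.shutdown();
        return list.get(0).value;
    }

    public static void main(String[] args) throws Exception {
        // Often prints less than 200000 because increments were lost.
        System.out.println(race());
    }
}
```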
I have a class (let's say SocketClass) which extends AsyncTask (I am using Sockets, which is why I am using AsyncTask). I am calling that class from a different class which runs on the main thread.
ANSWER: Answered 2020-Dec-02 at 05:53
You can call AsyncTask.get() to get the result back after doInBackground completes.
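Since AsyncTask is Android-specific, here is the same blocking-for-a-result pattern sketched in plain Java with a Future (the "socket response" string is a made-up placeholder for whatever doInBackground would return):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ResultDemo {
    // Background work standing in for doInBackground().
    static String fetch() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> result = pool.submit(() -> "socket response");
        // get() blocks the caller until the background work completes,
        // which is exactly what AsyncTask.get() does.
        String value = result.get();
        pool.shutdown();
        return value;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetch());
    }
}
```

Note that on Android, calling get() from the main thread blocks the UI for the duration of the task; overriding onPostExecute to receive the result as a callback is usually preferred.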
I had a problem which is fixed by the answer to the following question: Java Thread Start-Stop-Start on same button click I know what it does, but I do not know exactly why. The things I do not fully understand are the blocks that look like this:...
ANSWER: Answered 2020-Nov-06 at 16:21
what the putValue exactly does
It just sets a property of the Action.
Read the section from the Swing tutorial on How to Use Actions for more information and a list of all the properties.
When you add an Action to a Swing component (JButton, JMenuItem, etc.), the properties of the Action are used to configure the component. So the same "text" can be used on all components, the "enabled" state will be the same for all components, etc.
In the case of the mnemonic property, a Key Binding will be set up automatically so that you can invoke the Action when the KeyStroke is used. Read the section from the Swing tutorial on Key Bindings.
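A small sketch of the mechanism described above (the property values "Start" and VK_S are illustrative assumptions): putValue sets properties on the Action, and any component later built from it picks them up.

```java
import javax.swing.AbstractAction;
import javax.swing.Action;
import java.awt.event.ActionEvent;
import java.awt.event.KeyEvent;

public class ActionDemo {
    static Action makeAction() {
        Action action = new AbstractAction() {
            @Override
            public void actionPerformed(ActionEvent e) {
                System.out.println("action invoked");
            }
        };
        // putValue sets properties of the Action; every component created
        // from it (JButton, JMenuItem, ...) is configured from these, and
        // the mnemonic installs a Key Binding automatically.
        action.putValue(Action.NAME, "Start");
        action.putValue(Action.MNEMONIC_KEY, KeyEvent.VK_S);
        action.setEnabled(true);
        return action;
    }

    public static void main(String[] args) {
        Action action = makeAction();
        // new JButton(action) and new JMenuItem(action) would both show
        // "Start" and share this Action's enabled state.
        System.out.println(action.getValue(Action.NAME));
    }
}
```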
I am interested in low-latency code, and that's why I tried to configure thread affinity. In particular, it was supposed to help avoid context switches.
I have configured thread affinity using https://github.com/OpenHFT/Java-Thread-Affinity. I run very simple test code that just spins in a cycle checking a time condition....
ANSWER: Answered 2020-Oct-18 at 22:42
A voluntary context switch usually means a thread is waiting for something, e.g. for a lock to become free.
async-profiler can help to find where context switches happen.
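The kind of spin test the question describes can be sketched as below. This is my own minimal reconstruction, not the asker's code; pinning the thread to a core with OpenHFT's Java-Thread-Affinity (wrapping the loop in try (AffinityLock al = AffinityLock.acquireLock()) { ... }) is omitted here to keep the example dependency-free.

```java
public class SpinDemo {
    // Busy-spins until the deadline passes. There are no blocking calls
    // (no locks, no sleep), so the thread never waits voluntarily; any
    // voluntary context switches must come from elsewhere (e.g. safepoints,
    // page faults), which is what the linked answer investigates.
    static long spinFor(long millis) {
        long deadline = System.nanoTime() + millis * 1_000_000L;
        long iterations = 0;
        while (System.nanoTime() < deadline) {
            iterations++;  // checking a time condition in a tight cycle
        }
        return iterations;
    }

    public static void main(String[] args) {
        System.out.println(spinFor(50));
    }
}
```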
I've really thought about how I can catch the JIT's deoptimization events.
Today, I read the brilliant answer by Andrei Pangin to When busy-spining java thread is bound to physical core, can context switch happen by the reason that new branch in code is reached? and thought about it again.
I want to catch the JIT's deoptimization events (like unstable_if, class_check, etc.) with JNI+JVMTI and then send an alert to my monitoring system.
Is it possible? What is its impact on JVM performance?...
ANSWER: Answered 2020-Oct-19 at 11:41
Uncommon traps and deoptimization are HotSpot implementation details. You won't find them in a standard interface like JVM TI (which is designed for a generic virtual machine, not just HotSpot).
As suggested in my previous answer, one possible way to diagnose deoptimization is to add the -XX:+UnlockDiagnosticVMOptions -XX:+LogCompilation options and to look for deoptimization entries in the compilation log.
Another approach is to trace deoptimization events with async-profiler. This will show you the places in Java code where deoptimization happens, along with timestamps if using the jfr output format.
Since JDK 14, deoptimization events are also reported natively by Flight Recorder (JDK-8216041). Using Event Browser in JMC, you may find all uncommon traps, including method name, bytecode index, deoptimization reason, etc.
The overhead of all the above approaches is small. There is usually no problem in using async-profiler in production; JFR is also fine, as long as the recording settings are not excessive.
However, there is not much use in profiling deoptimizations, except in very special cases. It is absolutely normal for a typical Java application to recompile methods multiple times, as the JVM learns more about the application at runtime. It may sound weird, but uncommon traps are a common technique of speculative optimization :) Even basic methods like HashMap.put may cause deoptimization, and this is fine.
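The "unstable_if" case mentioned above can be sketched like this (my own illustrative example, not from the answer): during warmup the negative branch is never taken, so the JIT may compile check() with an uncommon trap in its place; the first negative argument then triggers deoptimization and recompilation, while the result stays correct throughout.

```java
public class DeoptDemo {
    // During warmup the branch is never taken, so a JIT compiler is free to
    // speculate it away and leave an uncommon trap behind. Hitting the
    // branch later deoptimizes the method; behavior is unchanged either way.
    static int check(int x) {
        if (x < 0) {        // "unstable if": untaken during warmup
            return -x;
        }
        return x;
    }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += check(i);   // warmup: branch never taken
        }
        sum += check(-42);     // may trigger an uncommon trap
        System.out.println(sum);
        // Running with -XX:+UnlockDiagnosticVMOptions -XX:+LogCompilation
        // (the flags from the answer above) lets you observe the
        // recompilation in the compilation log.
    }
}
```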
I am trying to create a WebLogic domain using Python code, which gets stuck at a file handler. Below are the code and the stack trace, which show it waiting on something. Can you help me fix it?...
ANSWER: Answered 2020-Sep-06 at 15:04
This was solved by changing the location of the log file from /SHARED to local storage.
I have been reading up on how the settings of Spring's ThreadPoolTaskExecutor work together and how the thread pool and queue work. This stackoverflow answer as well as this and this article from Baeldung have been useful to me.
As far as I understand thus far, corePoolSize number of threads are kept alive at all time (assuming allowCoreThreadTimeOut is not set to true). If all of these threads are currently in use, any additional requests will be put on the queue. Once queueCapacity is reached, the thread pool size will be increased until maxPoolSize is reached.
Intuitively, I would have thought it would instead work as follows:
corePoolSize number of threads are kept alive at all time (again assuming allowCoreThreadTimeOut is not set to true). If all of these threads are currently in use and new requests come in, the pool size will be increased until maxPoolSize is reached. If there are then still more requests coming in, they will be put on the queue until queueCapacity is reached.
I wonder what the reasoning is behind it working the way it does?...
ANSWER: Answered 2020-Jul-14 at 17:52
The first reference you should check is the documentation. Right from the documentation for ThreadPoolExecutor (ThreadPoolTaskExecutor is "just" a wrapper):
A ThreadPoolExecutor will automatically adjust the pool size (see getPoolSize()) according to the bounds set by corePoolSize (see getCorePoolSize()) and maximumPoolSize (see getMaximumPoolSize()). When a new task is submitted in method execute(Runnable), if fewer than corePoolSize threads are running, a new thread is created to handle the request, even if other worker threads are idle. Else if fewer than maximumPoolSize threads are running, a new thread will be created to handle the request only if the queue is full. [...]
If the pool currently has more than corePoolSize threads, excess threads will be terminated if they have been idle for more than the keepAliveTime (see getKeepAliveTime(TimeUnit)). This provides a means of reducing resource consumption when the pool is not being actively used. If the pool becomes more active later, new threads will be constructed. [...]
(You haven't mentioned the parameter for the BlockingQueue, but I suggest reading about it as well. It's very interesting.)
Why do the parameters not work like you've suggested they should?
If the pool size were increased up to maximumPoolSize before tasks are queued (as you've proposed), you'd have one problem: you'd have removed the thread pool's ability to determine when a new worker is worth it.
corePoolSize is the number of workers that stay in the pool. The benefit is that you don't have to create, terminate, create, terminate, create ... new workers for a given workload. If you can determine how much work there will always be, it's a smart idea to set the corePoolSize accordingly.
maximumPoolSize determines the maximum number of workers in the pool. You want to have control over that, as you could have multiple thread pools, hardware restrictions, or just a specific program where you don't need as many workers.
Now why does the work queue get filled up first? Because the queue capacity is an indicator that the amount of work is so high that it's worth creating new workers. As long as the queue is not full, the core workers are supposed to be enough to handle the given work. If the capacity is reached, then new workers are created to handle further work.
With this mechanism the thread pool dynamically creates workers when there is a need for them and only keeps as many workers as are usually needed. This is the point of a thread pool.
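The growth order described above can be observed directly (the pool dimensions here are illustrative choices, not from the question): with core=1, max=2 and a queue of capacity 1, the second task is queued rather than getting a new thread, and only the third task, which finds the queue full, causes the pool to grow.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolGrowthDemo {
    static int demo() throws Exception {
        // core=1, max=2, queue capacity=1 (illustrative numbers).
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 60, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1));
        CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        };

        pool.execute(blocker);  // 1st task: a core worker is created
        pool.execute(blocker);  // 2nd task: core worker busy, so it is queued
        pool.execute(blocker);  // 3rd task: queue full, so a 2nd worker is created
        int size = pool.getPoolSize();  // 2, not 3: growth only after the queue filled

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return size;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());  // prints 2
    }
}
```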
I'm using a PriorityBlockingQueue q to handle parallel processing of tasks. The queue is initialized with some tasks, and the processing of each task may produce more tasks, which will be added to the queue. I want to process the tasks in parallel and stop when all the tasks have been processed. Of course the queue may temporarily become empty before we're done, if tasks are still being processed by other threads.
My question is: what's a good (correct, of course, but also elegant, idiomatic, with no unnecessary locking or waiting when the queue is empty) way to do this in Java?
I'm using a priority queue, but the tasks may be processed in any order (there's some gain in handling the tasks in roughly the order specified by the priority - but I think it's safe to just ignore this bit).
This answer ("use the Task Parallel Library") seems to address the issue for C#, but it seems that this doesn't exist for Java. This is essentially the same question; it does not seem to have a completely satisfactory answer...
The processing of each task is quite lengthy, so it's ok to have a bit more overhead for the task management if it makes the code more elegant (and that's also why I'm happy to have a single thread in charge of polling tasks from the queue and assigning them to the workers)
Example: As a rough approximation, I'm trying to use parallel BFS to find the depth of a tree that's dynamically generated. You can think of it as looking for a strategy that will maximize your reward while playing a game, where you get a point for every move you make. Each state is a task to be processed. You start at the initial (root) state, you compute all the moves (this computation is lengthy and may generate thousands of moves), and add the states reached by these moves as tasks to be explored....
ANSWER: Answered 2020-Jun-21 at 15:47
I realized a solution should probably allow all threads to submit new tasks recursively, which led me to this answer.
Here's a fully fleshed-out version of that answer that handles traversal of a binary tree, obtained by starting from some string and roughly halving it. To support priorities, one could simply modify MyTask.run() to pop the string from some auxiliary PriorityBlockingQueue; I decided to omit this because it would clutter the essence of the solution.
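A sketch of that approach (my own reconstruction under the same idea, not the answer's exact code, and with made-up class names): a pending counter tracks tasks that are submitted but not yet finished, so the traversal is known to be complete exactly when it drops to zero, even though the queue may be empty in between.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class TreeDepthDemo {
    // Parallel traversal of the "halving tree" of a string, returning its depth.
    public static int maxDepth(String root, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicInteger pending = new AtomicInteger(0);  // submitted, not yet done
        AtomicInteger deepest = new AtomicInteger(0);
        CountDownLatch done = new CountDownLatch(1);

        class Task implements Runnable {
            final String node; final int depth;
            Task(String node, int depth) { this.node = node; this.depth = depth; }
            public void run() {
                deepest.accumulateAndGet(depth, Math::max);
                if (node.length() > 1) {  // "children": the two halves
                    int mid = node.length() / 2;
                    submit(new Task(node.substring(0, mid), depth + 1));
                    submit(new Task(node.substring(mid), depth + 1));
                }
                // Children were counted BEFORE this task finishes, so pending
                // can only reach zero when no task is running or queued.
                if (pending.decrementAndGet() == 0) done.countDown();
            }
            void submit(Task t) { pending.incrementAndGet(); pool.execute(t); }
        }

        pending.incrementAndGet();
        pool.execute(new Task(root, 0));
        done.await();  // no busy-waiting on an empty queue
        pool.shutdown();
        return deepest.get();
    }

    public static void main(String[] args) throws Exception {
        // Length 8 halves to length 1 in three steps: 8 -> 4 -> 2 -> 1.
        System.out.println(maxDepth("abcdefgh", 4));
    }
}
```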
I have a multi-threaded Java application that calls a Python program via Runtime.exec(). This works fine. I now wanted each Java thread to start its own Python process for concurrency.
While this is working, I ran into the issue that all Python processes seem to restrict themselves to a single CPU, so each process only uses part of the CPU to run. In top I can see my n Python processes: with n=1, the process uses 100% CPU; with n=2, both processes use approx. 50% CPU; with n=10, all processes use around 10% CPU. In htop I can see that only two CPUs are used: one for the Java stuff and the other for the Python stuff.
I thought that running multiple Python processes would allow them to run completely independently from each other.
Ideas and hints? Thank you!
EDIT: Here is the code that leads to the creation of the Python processes. It's not a minimal example. I would create one if this isn't clear enough....
ANSWER: Answered 2020-Apr-07 at 08:00
Threads share the CPU of the parent process. If we have 5 threads, that doesn't mean we can make use of all 5 cores we have; each thread will share the CPU/core of the main parent process. In your case 10 threads were sharing 100% CPU, so you got 10% each. Each thread is running Python code with 10% CPU, hence that is the computing power you get for Python. I suggest doing multiprocessing instead of multithreading: for example, each Java process starts one Python process, and you deploy multiple instances of Java.
You can use java-thread like any standard Java library. Please include the jar files in your classpath. You can also use any IDE to run and debug the java-thread component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org. For Gradle installation, please refer to gradle.org.