offheap | Used to be called go-offheap-hashtable
kandi X-RAY | offheap Summary
an off-heap hash-table in Go. Used to be called go-offheap-hashtable, but we shortened it.
Community Discussions
Trending Discussions on offheap
QUESTION
I am struggling to make my Spark program avoid exceeding YARN memory limits (on executors).
The "known" error which I get is:
ANSWER
Answered 2021-Mar-11 at 15:45
Spark may use off-heap memory during shuffle and cache block transfers, even with spark.memory.offHeap.enabled=false.
This problem is also mentioned in a Spark Summit 2016 talk (minute 4:05).
Spark 3.0.0 and above: this behavior can be disabled since Spark 3.0.0 with spark.network.io.preferDirectBufs=false.
The Spark configuration reference gives a short explanation for this: "If enabled then off-heap buffer allocations are preferred by the shared allocators. Off-heap buffers are used to reduce garbage collection during shuffle and cache block transfer. For environments where off-heap memory is tightly limited, users may wish to turn this off to force all allocations to be on-heap."
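A minimal sketch of disabling it, assuming Spark 3.x and that the flag is set before the session starts (with spark-submit you would pass it as a --conf flag instead):

import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder()
    .appName("shuffle-heavy-job")
    // Keep the shuffle/transport allocators on the heap so their memory
    // stays inside the executor's tracked allocation.
    .config("spark.network.io.preferDirectBufs", "false")
    .getOrCreate();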
Spark < 3.0.0: for versions lower than 3.0.0, using larger executors with a correspondingly higher memory overhead significantly remedies this problem, while keeping the same allocated memory per executor and the same overall resource consumption for your Spark job.
For example:
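A minimal sketch of the idea with purely hypothetical sizes, keeping the job's total memory roughly constant:

import org.apache.spark.SparkConf;

// Before (hypothetical): ten small executors with the default overhead:
//   10 x (4g heap + 384m overhead)
// After (hypothetical): fewer, larger executors with a bigger overhead:
//   5 x (7g heap + 1g overhead)
SparkConf conf = new SparkConf()
    .set("spark.executor.instances", "5")
    .set("spark.executor.cores", "4")
    .set("spark.executor.memory", "7g")
    .set("spark.executor.memoryOverhead", "1g");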
QUESTION
Our Spring Boot application needs to be able to fetch data from a remote Ehcache instance. For this, the following configuration has been added.
...
ANSWER
Answered 2021-Feb-17 at 13:09
If your application can run with EhCache disabled, you can simply edit the @SpringBootTest annotation to disable EhCache, e.g. @SpringBootTest(properties = {"property.to.disable.ehcache=true"}), guaranteeing EhCache will be disabled only during the test phase.
If your application requires EhCache to be enabled, your best bet is to add a src/test/resources/application.properties file containing properties (e.g. host, port, etc.) that point to a local, possibly test-embedded, instance of EhCache.
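For the first option, if the application caches through Spring's cache abstraction, one concrete property known to work is spring.cache.type=none, which swaps in a no-op cache manager for the test; a minimal sketch (the property key quoted in the answer above is a placeholder):

import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;

// Caching (including Ehcache behind Spring's cache abstraction) is
// replaced by a no-op implementation for this test only.
@SpringBootTest(properties = {"spring.cache.type=none"})
class RemoteCacheDisabledTest {

    @Test
    void contextLoads() {
        // application beans start without a live Ehcache connection
    }
}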
QUESTION
I recently changed from running Ignite with the Multicast IP Finder to using the Static IP Finder, to make distinct sets of clusters per machine.
Prior to this change I was able to establish a connection between the client and the Ignite server. However, after specifying the Static IP Finder in the configuration of the server and the client, when I attempt to connect to the server an IgniteCheckedException: No session found is thrown and takes down the JVM, killing my application. The Ignite server, however, stays up.
Just to test, I tried to revert to the Multicast IP Finder, but I am now getting the same error.
I have been able to connect to other Ignite clusters, but not the one local to the client.
This is the client configuration:
...
ANSWER
Answered 2021-Feb-10 at 00:39
Make sure you are using the java.net.preferIPv4Stack property on the server side as well.
Check your firewall config to make sure that all listed ports are open.
The issue is with the communication SPI, which by default uses the port range from 47100 upward; see: https://ignite.apache.org/docs/latest/clustering/network-configuration#communication
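A minimal sketch of a client configuration covering these points, with illustrative addresses and the default port ranges:

import java.util.Arrays;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class StaticIpClient {
    public static void main(String[] args) {
        // Prefer IPv4 on both client and server JVMs, as the answer suggests.
        System.setProperty("java.net.preferIPv4Stack", "true");

        // Discovery listens on 47500+ by default; list the server's range here
        // (address is a placeholder for your server host).
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47509"));
        TcpDiscoverySpi discovery = new TcpDiscoverySpi().setIpFinder(ipFinder);

        // Communication SPI uses ports 47100 and up by default; the firewall
        // must allow this range in both directions.
        TcpCommunicationSpi communication = new TcpCommunicationSpi();
        communication.setLocalPort(47100);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setClientMode(true)
            .setDiscoverySpi(discovery)
            .setCommunicationSpi(communication);

        Ignite ignite = Ignition.start(cfg);
    }
}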
QUESTION
I'm using Ehcache as a buffer to pass data to all the clients connected to a WebSocket (Spring).
CacheEventListener implementation:
...
ANSWER
Answered 2020-Dec-16 at 03:00
"Obs2: After searching the Spring Boot logs, I believe the CacheEventListener is being bound to the cache before Spring Boot finishes loading. Not sure if this is the problem."
This hints at your issue: you can't inject Spring beans into object instances that are not managed by Spring. You can use a simple class that gives you static access to the Spring context to load a bean, as described here.
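A minimal sketch of that static-access pattern (class and bean names are illustrative):

import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.stereotype.Component;

// Registered as a bean, so Spring hands it the context at startup;
// non-Spring-managed objects (like an Ehcache CacheEventListener) can
// then pull beans out of it statically.
@Component
public class SpringContext implements ApplicationContextAware {

    private static ApplicationContext context;

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) {
        context = applicationContext;
    }

    public static <T> T getBean(Class<T> beanClass) {
        return context.getBean(beanClass);
    }
}

// Inside the listener (not Spring-managed), hypothetically:
//   MyWebSocketBroadcaster b = SpringContext.getBean(MyWebSocketBroadcaster.class);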
QUESTION
I start Apache Ignite: a server node and another client node.
My scenario is: when the client node is closed, how can the server node's Topology Snapshot be updated at the same time?
Right now, the Topology Snapshot is refreshed only when the NodeFailed event is received by the server, after 20 seconds.
What method or configuration on the server side can receive the NodeFailed event immediately, or refresh the Topology Snapshot sooner?
This is server log:
...
ANSWER
Answered 2020-Oct-21 at 06:02
You can reduce the ClientFailureDetectionTimeout attribute on the server node, which increases the frequency with which the server checks client nodes. The default is 30 seconds.
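A minimal sketch of lowering the timeout in the server node's configuration (the 10-second value is only an example):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

IgniteConfiguration cfg = new IgniteConfiguration();
// Default is 30_000 ms; a lower value makes the server notice a closed
// client, and refresh the Topology Snapshot, sooner.
cfg.setClientFailureDetectionTimeout(10_000);
Ignite ignite = Ignition.start(cfg);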
QUESTION
We use Ignite 2.7.6 in both server and client modes: two servers and six clients.
At first, each app node with client Ignite inside had a 2G heap. Each Ignite server node had 24G off-heap and a 2G heap.
With the last app update we introduced new functionality which required about 2000 caches of 20 entries each (user groups). Each cache entry is small, up to 10 integers.
These caches are created via the ignite.getOrCreateCache(name) method, so they have default cache configurations (off-heap, partitioned).
But an hour after the update we got an OOM error on a server node:
...
ANSWER
Answered 2020-Sep-17 at 09:33
2000 caches is a lot. One cache probably takes up to 40M in data structures.
I recommend at least using the same cacheGroup for all caches of similar purpose and composition, to share some of these data structures.
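A minimal sketch of this, with illustrative cache and group names (assumes an existing Ignite instance named ignite):

import org.apache.ignite.IgniteCache;
import org.apache.ignite.configuration.CacheConfiguration;

// Caches in the same group share internal structures (partition maps,
// B+ trees), which cuts the per-cache overhead.
CacheConfiguration<Integer, int[]> cfg =
    new CacheConfiguration<Integer, int[]>("userGroup-42")
        .setGroupName("userGroups");
IgniteCache<Integer, int[]> cache = ignite.getOrCreateCache(cfg);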
QUESTION
When I search for off-heap in the Spark configuration, there are two related properties (spark.executor.memoryOverhead and spark.memory.offHeap.size), and I am not sure of the relationship between the two.
If I enable spark.memory.offHeap.enabled, will spark.memory.offHeap.size be part of spark.executor.memoryOverhead? Or are these two types of off-heap memory independent (so that the total off-heap memory is the sum of the two)?
ANSWER
Answered 2020-May-11 at 06:41See my full answer here: https://stackoverflow.com/a/61723456/6470969
Short answer: as of the current Spark version (2.4.5), if you specify spark.memory.offHeap.size, you should also add this portion to spark.executor.memoryOverhead. E.g. if you set spark.memory.offHeap.size to 500M and you have spark.executor.memory=2G, then the default spark.executor.memoryOverhead is max(2G * 0.1, 384M) = 384M, but you should increase the memoryOverhead to 384M + 500M = 884M.
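A minimal sketch of sizing the three settings consistently, using the numbers from the example above:

import org.apache.spark.SparkConf;

SparkConf conf = new SparkConf()
    .set("spark.executor.memory", "2g")
    .set("spark.memory.offHeap.enabled", "true")
    .set("spark.memory.offHeap.size", "500m")
    // default overhead would be max(2g * 0.1, 384m) = 384m;
    // add the off-heap portion on top: 384m + 500m = 884m
    .set("spark.executor.memoryOverhead", "884m");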
QUESTION
I am trying to set a few Spark configuration parameters inside the pyspark shell.
I tried the following
spark.conf.set("spark.executor.memory", "16g")
To check if the executor memory has been set, I did the following
spark.conf.get("spark.executor.memory")
which returned "16g"
.
I tried to check it through sc using sc._conf.get("spark.executor.memory"), and that returned "4g".
Why do these two return different values, and what is the correct way to set these configurations?
Also, I am fiddling with a bunch of parameters like
"spark.executor.instances"
"spark.executor.cores"
"spark.executor.memory"
"spark.executor.memoryOverhead"
"spark.driver.memory"
"spark.driver.cores"
"spark.driver.memoryOverhead"
"spark.memory.offHeap.size"
"spark.memory.fraction"
"spark.task.cpus"
"spark.memory.offHeap.enabled "
"spark.rpc.io.serverThreads"
"spark.shuffle.file.buffer"
Is there a way to set the configurations for all of these variables at once?
EDIT
I need to set the configuration programmatically. How do I change it after I have run spark-submit or started the pyspark shell? I am trying to reduce the runtime of my jobs, for which I am going through multiple iterations of changing the Spark configuration and recording the runtimes.
ANSWER
Answered 2019-Mar-08 at 13:37
You can set environment variables by using (e.g. in spark-env.sh, standalone mode only):
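Worth noting: executor-level properties such as spark.executor.memory are read once when the SparkContext is launched, so spark.conf.set on a running session cannot resize executors that already exist; that is why sc._conf still reports the value the shell was started with. A minimal sketch of setting such properties up front, in Java (the same builder pattern exists in pyspark; the values are illustrative):

import org.apache.spark.sql.SparkSession;

// These settings must be in place before the context starts; calling
// spark.conf.set(...) afterwards does not resize running executors.
SparkSession spark = SparkSession.builder()
    .appName("tuned-job")
    .config("spark.executor.memory", "16g")
    .config("spark.executor.cores", "4")
    .config("spark.memory.offHeap.enabled", "true")
    .config("spark.memory.offHeap.size", "2g")
    .getOrCreate();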
QUESTION
I have allocated 5 GB of non-heap memory to Ignite, as can be verified at application startup.
...
ANSWER
Answered 2019-Nov-21 at 10:45
It's a bug in Ignite. I found an open ticket for the fix: IGNITE-5583.
Update: the original issue seems to be resolved in 2.7, according to IGNITE-9305. What version do you use?
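For reference, the off-heap allocation in question is sized through the data region configuration; a minimal sketch of allocating 5 GB to the default region (Ignite 2.3+ storage API):

import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

IgniteConfiguration cfg = new IgniteConfiguration();
DataStorageConfiguration storage = new DataStorageConfiguration();
// Cap the default data region's off-heap size at 5 GB.
storage.getDefaultDataRegionConfiguration()
    .setMaxSize(5L * 1024 * 1024 * 1024);
cfg.setDataStorageConfiguration(storage);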
QUESTION
I'm trying to use Apache Ignite; I'm generating my node using the Ignite Web Console.
I needed to configure two caches from a database and enabled persistent storage, since the two tables have a lot of data.
Here is what I have done (in the console):
...
ANSWER
Answered 2019-Nov-19 at 12:48
This is a known issue, and it stems from the fact that you previously started the same cluster but without persistence.
Please remove your Ignite work dir (%TMP%\ignite\work or /tmp/ignite/work or ./ignite/work) and restart your node.
UPD: There is also this issue about LOCAL cache on client node with persistence: IGNITE-11677. My recommendation is to avoid using LOCAL caches at all.
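For context, a minimal sketch of a node configuration with persistence enabled; note that a persistent cluster also starts inactive and has to be activated explicitly:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

IgniteConfiguration cfg = new IgniteConfiguration();
DataStorageConfiguration storage = new DataStorageConfiguration();
// Persist the default data region to disk (the "work" directory noted above).
storage.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
cfg.setDataStorageConfiguration(storage);

Ignite ignite = Ignition.start(cfg);
// Persistent clusters start in the INACTIVE state.
ignite.cluster().active(true);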
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported