heapstats | JVMTI agent and JavaFX analyzer
kandi X-RAY | heapstats Summary
JVMTI agent and JavaFX analyzer to gather JVM runtime information for after-the-fact analysis.
Community Discussions
Trending Discussions on heapstats
QUESTION
I've embedded V8 9.5 into my app (a C++ HTTP server). When I started to use optional chaining in my JS scripts, I noticed an abnormal rise in memory consumption under heavy (CPU) load, leading to OOM. While there's some free CPU, memory usage is normal. I've displayed V8 HeapStats in Grafana (this is for only 1 isolate, of which I have 8 in my app).
Under heavy load there's a spike in peak_malloced_memory, while other stats are much less affected and seem normal.
I've passed the --expose-gc flag to V8 and called gc() at the end of my script. That completely solved the problem: peak_malloced_memory doesn't rise like that. Also, by repeatedly calling gc() I could free all the extra memory consumed without it. --gc-global also works. But these approaches seem more like a workaround than a production-ready solution.
--max-heap-size=64 and --max-old-space-size=64 had no effect: memory consumption still greatly exceeded 8 (the number of isolates in my app) * 64 MB (>2 GB physical RAM).
I don't use any GC-related V8 API in my app. My app creates a v8::Isolate and a v8::Context once and uses them to process HTTP requests.
Same behavior on v9.7.
Ubuntu Xenial.
Built V8 with these args.gn
ANSWER
Answered 2021-Nov-29 at 17:46
(V8 developer here.)
- Is it normal that peak_malloced_memory can exceed total_heap_size?
Malloced memory is unrelated to the heap, so yes: when the heap is tiny, malloced memory (which typically also isn't a lot) may well exceed it, maybe only briefly. Note that peak malloced memory (53 MiB in your screenshot) is not current malloced memory (24 KiB in your screenshot); it's the largest amount that was in use at any point in the past but has since been freed (and is hence not a leak, and won't cause an OOM over time).
Not being part of the heap, malloced memory isn't affected by --max-heap-size or --max-old-space-size, nor by manual gc() calls.
- Why could this occur only when using JS's optional chaining?
That makes no sense, and I bet that something else is going on.
- Are there any other, more correct solutions to this problem other than forcing full GC all the time?
I'm not sure what "this problem" is. A brief peak of malloced memory (which is freed again soon) should be fine. Your question title mentions a "leak", but I don't see any evidence of a leak. Your question also mentions OOM, but the graph doesn't show anything related (less than 10 MiB current memory consumption at the end of the plotted time window, with 2GB physical memory), so I'm not sure what to make of that.
Manually forcing GC runs is certainly not a good idea. The fact that it even affects (non-GC'ed!) malloced memory at all is surprising, but may have a perfectly mundane explanation. For example (and I'm wildly speculating here, since you haven't provided a repro case or other more specific data), it could be that the short-term peak is caused by an optimized compilation, and with the forced GC runs you're destroying so much type feedback that the optimized compilation never happens.
Happy to take a closer look if you provide more data, such as a repro case. If the only "problem" you see is that peak_malloced_memory is larger than the heap size, then the solution is simply not to worry about it.
QUESTION
I have a NodeJS application running in a k8s pod. The actual size of the pod is 2 GB, but in an environment variable we set --max-old-space-size=4096, i.e. 4 GB (which will not be true in my case: for some tenants we do allocate 4 GB, but most pods have 2 GB).
Now I tried 2 ways to detect the memory usage and the total memory, and each provides different stats.
- I'm fetching memory usage from the system file /sys/fs/cgroup/memory/memory.usage_in_bytes and total memory from /sys/fs/cgroup/memory/memory.limit_in_bytes. limit_in_bytes correctly returns 2 GB, but the value of usage_in_bytes fluctuates too much: it's around 1 GB for a few minutes, then spikes to 2 GB the next minute even though nothing changed in that minute (no stress on the system).
Stats of a process
ANSWER
Answered 2021-Nov-14 at 01:12
To get overall memory consumption of a process, look to (and trust) the operating system's facilities.
Node's v8.getHeapStatistics tells you about the managed (a.k.a. garbage-collected) heap where all the JavaScript objects live. But there can be quite a bit of other, non-garbage-collected memory in the process, for example Node buffers and certain strings, and various general infrastructure that isn't on the managed heap. In the average Chrome renderer process, the JavaScript heap tends to be around a third of total memory consumption, but with significant outliers in both directions; for Node apps it depends a lot on what your app is doing.
Setting V8's max heap size (which, again, is just the garbage-collected part of the process' overall memory usage) to a value larger than the amount of memory available to you doesn't make much sense: it invites avoidable crashes, because V8 won't spend as much time on garbage collection when it thinks there's lots of available memory left, but the OS/pod as a whole could already be facing an out-of-memory situation at that point. That's why I linked the other answer: you very likely want to set the max heap size to something a little less than available memory, to give the garbage collector the right hints about when to work harder to stay under the limit.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported