memcheck | Rails plugin for tracking down memory leaks
kandi X-RAY | memcheck Summary
Rails plugin to help with tracking down memory leaks, with a few caveats:
* This plugin will only work with *nix-type operating systems, as it relies on the output of "ps".
* This plugin runs garbage collection after every controller action, so it is only intended for diagnostic purposes.
Community Discussions
Trending Discussions on memcheck
QUESTION
I have the following code, where I want to pass a lambda or any other invocable object into an awaiter so that it is called at await_suspend(). Using co_await with a void(int) callable works properly. However, when I try to get a return value from the coroutine using co_return, eager evaluation makes the program misbehave, while lazy evaluation appears to work correctly; yet both attempts cause an "Invalid read of size 4" error. AFAIK everything is done according to the documentation (https://en.cppreference.com/w/cpp/language/coroutines).
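Since the question's snippets are not reproduced above, here is a minimal, self-contained sketch (all names invented, not the asker's code) of the general pattern being described: an awaiter whose await_suspend() hands the coroutine handle to an arbitrary callable, and an eager coroutine that produces its result with co_return. Compile with -std=c++20.

```cpp
#include <coroutine>
#include <iostream>
#include <utility>

struct task {
    struct promise_type {
        int value = 0;
        task get_return_object() {
            return task{std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_never initial_suspend() noexcept { return {}; }  // eager start
        std::suspend_always final_suspend() noexcept { return {}; }   // keep frame alive for the result
        void return_value(int v) noexcept { value = v; }
        void unhandled_exception() { std::terminate(); }
    };

    explicit task(std::coroutine_handle<promise_type> h) : handle(h) {}
    task(task&& other) noexcept : handle(std::exchange(other.handle, {})) {}
    task(const task&) = delete;
    ~task() { if (handle) handle.destroy(); }   // the coroutine frame dies here, not earlier

    int result() const { return handle.promise().value; }

    std::coroutine_handle<promise_type> handle;
};

// Awaiter that passes the suspended coroutine's handle to a stored callable.
template <class Callback>
struct callback_awaiter {
    Callback cb;
    bool await_ready() const noexcept { return false; }
    void await_suspend(std::coroutine_handle<> h) { cb(h); }   // the callable decides when to resume
    void await_resume() const noexcept {}
};

std::coroutine_handle<> pending;   // stand-in for a scheduler / event queue

task example() {
    co_await callback_awaiter{[](std::coroutine_handle<> h) { pending = h; }};
    co_return 42;
}

int main() {
    task t = example();               // runs eagerly up to the co_await, then suspends
    pending.resume();                 // resume later; the coroutine reaches co_return
    std::cout << t.result() << '\n';  // prints 42
}
```

The lambda here merely stashes the handle so main() can resume it later; a real scheduler would enqueue it instead.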
ANSWER
Answered 2022-Mar-29 at 15:21
Your problem is here:
QUESTION
I have a kernel that launches a lot of blocks, and I know that illegal memory reads happen for blockIdx.y = 312. Running it under cuda-gdb results in sequential execution of blocks, 16 at a time, and it takes very long for execution to reach this block index, even with a conditional breakpoint.
Is there any way to change the order in which thread blocks are scheduled when running under cuda-gdb? If not, is there any other debugging strategy that I might have missed?
I have a baseline convolution CUDA kernel that scales with problem size by launching more blocks. There is a bug for input images with dimensions on the order of 10_000 x 10_000. Running it under cuda-memcheck, I see the following.
ANSWER
Answered 2022-Mar-29 at 14:26
Is there any way to change the order in which thread blocks are scheduled when running under cuda-gdb?
There is no way to change the threadblock scheduling order unless you want to rewrite the code and take control of threadblock scheduling yourself. Note that the linked example is not exactly about how to redefine the threadblock scheduling order, but it has all the necessary ingredients. In practice I don't see a lot of people wanting to do this level of refactoring, but I mention it for completeness.
If not, is there any other debugging strategy that I might have missed?
The method described here can localize your error to a specific line of kernel code. From there you can use e.g. a conditional printf to identify the illegal index calculation, etc. Note that for that method there is no need to compile your code with debug switches, but you do need to compile with -lineinfo.
This training topic provides a longer treatment of CUDA debugging.
QUESTION
I am using valgrind 3.15 and 3.17. I am running valgrind in a job in a GitLab CI pipeline, and I want the pipeline to fail if there are any leaks or memory problems.
I can demonstrate this with ls, which is leaky:
ANSWER
Answered 2022-Mar-01 at 20:05
By default, only definite and possible leaks are considered as errors. You can use e.g. --errors-for-leak-kinds=all to have all leak kinds considered as errors:
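The answer's original command and output are not shown above. As a hedged sketch of the idea, here is a tiny C++ program whose only issue is a "still reachable" block, with a valgrind invocation (in the comments) that treats that as an error and returns a non-zero exit code so a CI job can fail. The file name and sizes are invented.

```cpp
// leaky.cpp -- deliberately leaves a block "still reachable" at exit.
//
// Build:  g++ -g -O0 leaky.cpp -o leaky
// Check:  valgrind --leak-check=full --errors-for-leak-kinds=all \
//                  --error-exitcode=1 ./leaky
//
// By default only "definite" and "possible" leaks count as errors; with
// --errors-for-leak-kinds=all the "still reachable" block below also counts,
// and --error-exitcode=1 turns it into a non-zero exit status that a
// GitLab CI job will treat as a failure.
#include <cstdlib>

int* g_block = nullptr;   // the block stays reachable through this global

int main() {
    g_block = static_cast<int*>(std::malloc(100 * sizeof(int)));
    std::exit(0);          // exit without freeing: "still reachable" at exit
}
```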
QUESTION
I'm new to C++ and I have code that compiles, but it won't publish to Linux because the error says I have a memory leak. The Linux side uses valgrind, which finds the leak. Please help me find the error in the code and fix it.
Output with memory leak:
ANSWER
Answered 2022-Jan-23 at 07:07
Here,
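The code from the question and the answer's snippet are not reproduced above. Purely as an illustration of the kind of problem valgrind typically reports in beginner code (not the asker's actual program), here is a leak flagged as "definitely lost" and two common fixes; all names are invented.

```cpp
#include <memory>

struct Widget { int data[100]; };

// Valgrind reports the allocation below as "definitely lost in 1 block":
// the pointer goes out of scope and nothing ever deletes the object.
void leaky() {
    Widget* w = new Widget{};
    w->data[0] = 1;        // used, but never deleted
}

// Fix 1: pair every new with exactly one delete.
void fixed_manual() {
    Widget* w = new Widget{};
    w->data[0] = 1;
    delete w;
}

// Fix 2 (preferred): let a smart pointer own the object, so it is freed
// automatically when the owner goes out of scope.
void fixed_raii() {
    auto w = std::make_unique<Widget>();
    w->data[0] = 1;
}

int main() {
    leaky();
    fixed_manual();
    fixed_raii();
}
```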
QUESTION
My first question: the object A(v) that was added to the map, should it be deleted automatically when it exits the scope?
My second question: what happens to the objects added to the map when the program exits? I believe that when I do a_[name] = A(v);, a copy is stored in the map. Also, do I need to provide a copy constructor?
ANSWER
Answered 2022-Jan-08 at 22:54
"still reachable" does not strictly mean it is a memory leak. I believe it is because you called exit(0) instead of just returning 0. The stack didn't get cleaned up because the program was terminated immediately, so the destructors of main's local objects never ran.
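As a small sketch of what the answer describes (names chosen to mirror the question's a_[name] = A(v); line, everything else invented): the map stores a copy of A, the compiler-generated copy operations are enough as long as A's members manage their own memory, and whether valgrind reports anything depends on how main ends.

```cpp
#include <cstdlib>
#include <map>
#include <string>
#include <vector>

struct A {
    std::vector<int> v;                       // owns heap memory
    explicit A(std::vector<int> v_ = {}) : v(std::move(v_)) {}
    // No user-written copy constructor: the implicit one copies the vector,
    // which is fine because std::vector manages its own memory.
};

int main() {
    std::map<std::string, A> a_;
    std::string name = "name";
    a_[name] = A(std::vector<int>(1000));     // a copy/move of A is stored in the map

    // return 0;    // the map and every A inside it are destroyed normally:
                    // valgrind is clean
    std::exit(0);   // main's locals are never destroyed, so the vector's
                    // buffer shows up as "still reachable" (not a real leak)
}
```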
QUESTION
Many sources say that using a small callable object, such as a lambda expression, can avoid heap allocation when using std::function. But my experiment shows otherwise.
This is my experiment code, which is very simple:
ANSWER
Answered 2022-Jan-05 at 09:28
Older versions of libstdc++, like the one shipped with gcc 4.8.5, seem to only optimise function pointers so that they do not allocate (as seen here).
Since that std::function implementation does not have the small-object optimisation that you want, you will have to use an alternative implementation: either upgrade your compiler or use boost::function, which is essentially the same as std::function.
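As a hedged sketch of how to measure this yourself (the counter and the test callables are invented, not taken from the question's code): replacing the global operator new makes it easy to see whether a given std::function construction allocates.

```cpp
#include <cstdio>
#include <cstdlib>
#include <functional>
#include <new>

static std::size_t g_allocs = 0;

// Count every allocation that goes through the global operator new.
void* operator new(std::size_t n) {
    ++g_allocs;
    if (void* p = std::malloc(n)) return p;
    throw std::bad_alloc{};
}
void operator delete(void* p) noexcept { std::free(p); }
void operator delete(void* p, std::size_t) noexcept { std::free(p); }

int main() {
    int x = 42;

    std::size_t before = g_allocs;
    std::function<int()> f = [x] { return x; };   // small capture: one int
    std::printf("allocations for small lambda:       %zu\n", g_allocs - before);

    before = g_allocs;
    std::function<int()> g = [] { return 7; };    // captureless lambda
    std::printf("allocations for captureless lambda: %zu\n", g_allocs - before);
}
```

On the gcc 4.8.5 libstdc++ the answer refers to, the captured lambda is expected to trigger an allocation, while recent implementations should keep a capture this small inside std::function's internal buffer.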
QUESTION
Edited to include MWE (removing example-lite) and added details about compilation and Valgrind output.
I am using the mutable keyword to implement lazy evaluation and cache a result. This works fine for a single object, but it doesn't seem to work as expected for a collection.
My case is more complex, but let's say I have a triangle class that can calculate the area of a triangle and cache the result. I use pointers in my case because the thing being lazily evaluated is a more complex class (it is actually another instance of the same class, but I'm trying to simplify this example).
I have another class that is essentially a collection of triangles. It has a way to calculate the total area of all the contained triangles.
Logically, tri::Area() is const -- and mesh::Area() is const. When implemented as above, Valgrind shows a memory leak (m_Area).
I believe since I am using a const_iterator, the call to tri::Area() is acting on a copy of the triangle. Area() is called on that copy, which does the new, calculates the area, and returns the result. At that point, the copy is lost and the memory is leaked.
In addition, I believe this means the area is not actually cached. The next time I call Area(), it leaks more memory and does the calculation again. Obviously, this is non-ideal.
One solution would be to make mesh::Area() non-const. This isn't great because it needs to be called from other const methods.
I think this might work (mark m_Triangles as mutable and use a regular iterator):
However, I don't love marking m_Triangles as mutable -- I'd prefer to keep the compiler's ability to protect the constness of m_Triangles in other unrelated methods. So, I'm tempted to use const_cast to localize the ugliness to just the method that needs it. Something like this (mistakes likely):
Not sure how to implement with const_cast -- should I be casting m_Triangles or this? If I cast this, is m_Triangles visible (since it is private)?
Is there some other way that I'm missing?
The effect I want is to keep mesh::Area() marked const, but have calling it cause all the tris calculate and cache their m_Area. While we're at it -- no memory leaks and Valgrind is happy.
I've found plenty of examples of using mutable in an object -- but nothing about using that object in a collection from another object. Links to a blog post or tutorial article on this would be great.
Thanks for any help.
Update: From this MWE, it looks like I was wrong about the point of the leak.
The code below is Valgrind-clean if the call to SplitIndx() is removed.
In addition, I added a simple test to confirm that the cached value is getting stored and updated in the container-stored objects.
It now appears that the call m_Triangles[indx] = t1; is where the leak occurs. How should I plug this leak?
ANSWER
Answered 2021-Dec-24 at 01:18
One way to avoid making it mutable is to make it always point at the data cache, which could be a std::optional. You'd then create and store a std::unique_ptr to that std::optional, which you keep for the tri object's lifetime.
Example:
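The answer's original example is not reproduced above. Here is a minimal sketch of the approach it describes, with names invented to mirror the question's tri/mesh/m_Area: the const method never reseats the unique_ptr, it only writes through it into the std::optional it owns, so neither mutable nor const_cast is needed.

```cpp
#include <iostream>
#include <memory>
#include <optional>
#include <vector>

class tri {
public:
    tri(double base, double height)
        : m_Base(base), m_Height(height),
          m_Area(std::make_unique<std::optional<double>>()) {}

    double Area() const {
        if (!m_Area->has_value()) {
            *m_Area = 0.5 * m_Base * m_Height;   // writes through the pointer,
        }                                        // never reseats the pointer itself
        return **m_Area;
    }

private:
    double m_Base;
    double m_Height;
    // The unique_ptr member stays const-protected in const methods; only the
    // pointee (the optional cache) is modified, which is allowed.
    std::unique_ptr<std::optional<double>> m_Area;
};

class mesh {
public:
    void Add(tri t) { m_Triangles.push_back(std::move(t)); }

    double Area() const {
        double total = 0.0;
        for (const tri& t : m_Triangles) total += t.Area();  // fills each cache
        return total;
    }

private:
    std::vector<tri> m_Triangles;
};

int main() {
    mesh m;
    m.Add(tri(3.0, 4.0));
    m.Add(tri(6.0, 2.0));
    std::cout << m.Area() << '\n';   // 6 + 6 = 12
}
```

The trade-off, compared to a mutable member, is one extra heap allocation and a pointer indirection per object.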
QUESTION
I'm learning about std::allocator. I tried to allocate, but then used deallocate incorrectly and saw that it seemingly ignores the size argument. I'm confused about this method; could you please explain it to me? Thanks.
- testcase1 "test": I didn't deallocate; valgrind detected the leak (correct).
- testcase2 "test_deallocate": I deallocate with size 0, less than the actual size (400); neither valgrind nor -fsanitize=address can detect a leak.
- testcase3 "test_deallocate2": I deallocate with size 10000, greater than the actual size (400); the compiler didn't warn, and g++ with -fsanitize=address also can't detect this.
ANSWER
Answered 2021-Dec-28 at 10:42
- testcase2 "test_deallocate": I deallocate with size 0, less than the actual size (400); neither valgrind nor -fsanitize=address can detect a leak.
When you deallocate with the wrong size, the behaviour of the program is undefined. When the behaviour is undefined, there is no guarantee that memory would even be leaked.
- testcase3 "test_deallocate2": I deallocate with size 10000, greater than the actual size (400); the compiler didn't warn, and g++ with -fsanitize=address also can't detect this.
Ditto.
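As a hedged illustration of the matched-size contract being discussed (the sizes reuse the question's 100-int / 400-byte example; everything else is invented):

```cpp
#include <memory>

int main() {
    std::allocator<int> alloc;

    int* p = alloc.allocate(100);        // raw storage for 100 ints
                                         // (400 bytes with a 4-byte int)
    for (int i = 0; i < 100; ++i) {
        std::construct_at(p + i, i);     // construct each element (C++20)
    }
    std::destroy_n(p, 100);              // destroy elements before releasing storage

    alloc.deallocate(p, 100);            // n must equal the n passed to allocate();
                                         // deallocate(p, 0) or deallocate(p, 10000)
                                         // is undefined behaviour, which is why
                                         // valgrind/ASan are not obliged to notice
}
```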
QUESTION
This is similar to, but not the same as, "Why does Valgrind not detect a memory leak in a Rust program using nightly 1.29.0?", since that one was solved in Rust 1.32.
A simple reproducible sample:
ANSWER
Answered 2021-Oct-18 at 07:10
Well, I see... Using Godbolt, the code compiles to nothing (no memory allocations), probably because of Rust's optimizations. However, with the 0u8 code, it indeed compiles to something (__rust_alloc_zeroed), so the memory leak really does happen in that case.
Finally, I used something like the following to prevent Rust from optimizing out the leaky code.
QUESTION
I have this code in func, where I am getting a pointer to an array of pointers. The purpose of this function is to write strings into two allocated spaces for chars, which are allocated in main. I am getting a segfault at this line.
ANSWER
Answered 2021-Oct-14 at 07:25
One way would be:
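The snippets from the question and answer are not shown above. As a hedged reconstruction of the pattern the question describes (a pointer to an array of two char pointers, with the buffers allocated in main), one working shape of it might look like this; the buffer size and strings are invented.

```cpp
#include <cstdio>
#include <cstdlib>
#include <cstring>

// arr points at an array of two char pointers; dereference it first,
// then index, i.e. (*arr)[i], before writing through each element.
void func(char *(*arr)[2]) {
    std::strcpy((*arr)[0], "first");
    std::strcpy((*arr)[1], "second");
}

int main() {
    char *bufs[2];
    bufs[0] = static_cast<char*>(std::malloc(16));
    bufs[1] = static_cast<char*>(std::malloc(16));
    if (!bufs[0] || !bufs[1]) return 1;

    func(&bufs);                        // pass a pointer to the whole array
    std::printf("%s %s\n", bufs[0], bufs[1]);

    std::free(bufs[0]);
    std::free(bufs[1]);
}
```

A typical way this pattern crashes is treating the parameter itself as the array of pointers and writing through arr[1], which points one whole array past the argument that was passed in.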
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install memcheck
On a UNIX-like operating system, using your system's package manager is easiest; however, the packaged Ruby version may not be the newest one. There is also an installer for Windows. Version managers help you switch between multiple Ruby versions on your system, while installers can be used to install a specific Ruby version or several versions. Please refer to ruby-lang.org for more information.