Description
Describe the bug
I have a 6 GB heap dump that I want to analyze, basically to find out where most of the memory leaks come from.
When I start determining the Dominators, it goes through 3 progress bars.
At the third one (Computing Retained Sizes) it instantly gets stuck at 33%, when the GC sizes are computed.
And while it seems to keep working (after allocating 16GB on startup), it barely uses any resources.
It uses 7% of my entire CPU to do its work.
Is this normal?
The progress bar, while improved, is basically still useless.
(I remember a time when there was no % shown at all.)
To Reproduce
Steps to reproduce the behavior:
Start VisualVM, take any heap dump that is around 6-10 GB in size, and try to determine the Dominators.
Expected behavior
I guess it should be faster? And the progress should actually be communicated,
like "1000/432312 files processed".
Also maybe some "performance options" that could be set to utilize the system hardware a lot more?
VisualVM log
Even when run from the command line there are zero logs, besides: "Program found another console was used so it will be using the console provided"
logfile.txt
Activity
thurka commented on May 22, 2023
Is there a way to get your heap dump, so we can investigate whether this is normal behaviour or not? Thanks.
thurka commented on May 22, 2023
BTW: There is no need to run VisualVM with 16G Xmx.
Speiger commented on May 23, 2023
@thurka that might take a while...
Due to the size of it. (The lines here are <100 kbit in download speed and <10 kbit upload.)
Also, I found the memory leak through other means, so I kind of have to break my code again to get a heap dump that big again.
But yeah, it should be possible. I just ask for some time.
thurka commented on May 23, 2023
Ouch!
No problem. Make sure that you compress the heap dump before uploading (zip -9, gzip -9, bzip2 -9). The compression ratio is very good for heap dumps.
Once you upload it, send me the link via email.
Thanks once again.
Speiger commented on Jun 5, 2023
@thurka sorry for the delay.
I found an old file that had only been thrown into the recycle bin instead of being deleted.
(Said file is one that was created during this exact issue.)
It is 5 GB. I first tried uploading it, then saw that compression would help. Now I am uploading a 777 MB file over a 13 kbit upload line.
That is fine ^^"
Next comment will be the link :)
thurka commented on Jun 6, 2023
Thanks, I have the heap dump.
Speiger commented on Jun 6, 2023
Thank you.
I hope this helps improve VisualVM :)
Speiger commented on Jun 14, 2023
@thurka anything interesting found with the heap dump?
thurka commented on Jun 16, 2023
Yes, there is an instance of java.util.concurrent.ConcurrentLinkedQueue which has >300000 elements (lambdas from net.minecraft.world.chunk.listener.ChainedChunkStatusListener). These lambdas and the corresponding java.util.concurrent.ConcurrentLinkedQueue$Node instances have very long paths to a GC root - up to 300200 references. This, together with the fact that the lambdas reference other objects, causes the unusually long computation of retained sizes. I will try to speed it up.
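For illustration only, a minimal sketch of the kind of structure involved (this is not the Minecraft or VisualVM code; the class name `DeepQueueSketch`, the payload, and the counts are made up). Each capturing lambda offered to the queue is a distinct object that holds references to its captured state, and the queue's internal Node objects form one long singly linked chain, so the path from a late element back to a GC root can be hundreds of thousands of references long:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class DeepQueueSketch {
    public static void main(String[] args) {
        ConcurrentLinkedQueue<Runnable> queue = new ConcurrentLinkedQueue<>();
        for (int i = 0; i < 300_000; i++) {
            final int chunkId = i;               // captured value
            final byte[] payload = new byte[64]; // stands in for other referenced data
            // Capturing lambda: a new object per iteration, each holding references
            // that a retained-size computation has to follow.
            queue.offer(() -> System.out.println("chunk " + chunkId + " uses " + payload.length + " bytes"));
        }
        System.out.println("queued listeners: " + queue.size());
    }
}
```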
Speiger commented on Jun 16, 2023
I love minecraft sometimes xD
The devs shredded a tool for finding performance issues/memory leaks without even knowing it xD
philipwhiuk commented on Jan 23, 2024
I have a similar (in terms of memory structure, not content) 1.8GB log file from an Akka-based application. This application makes extensive use of ConcurrentLinkedQueue$Nodes. The problem for analysis here is that you need the queue info to work out where the real usage of them is, and the only way to get this is to Compute Retained Size.
Unfortunately this involves traversing the entire queue, I expect (100K nodes in memory across a number of queues), which VVM takes a long time to do. I assume it's reasonably single-threaded.
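For illustration, a toy walk over such a node chain (not VisualVM's actual retained-size algorithm, which builds dominator information; the `Node` class and the counts below are made up) shows the shape of the work: every link has to be followed before the tool knows what the queue really retains:

```java
import java.util.ArrayDeque;
import java.util.Collections;
import java.util.Deque;
import java.util.IdentityHashMap;
import java.util.Set;

public class QueueWalkSketch {
    // Stand-in for ConcurrentLinkedQueue$Node: one item plus a link to the next node.
    static final class Node {
        final Object item;
        Node next;
        Node(Object item) { this.item = item; }
    }

    public static void main(String[] args) {
        // Build a 100,000-element chain, mimicking a long-lived queue.
        Node head = new Node("item-0");
        Node tail = head;
        for (int i = 1; i < 100_000; i++) {
            tail.next = new Node("item-" + i);
            tail = tail.next;
        }

        // Iterative traversal of every node and item reachable from the head.
        // A naive recursive walk would overflow the stack at this depth, which is
        // one reason very deep reference chains are awkward for analysis tools.
        Set<Object> visited = Collections.newSetFromMap(new IdentityHashMap<>());
        Deque<Node> work = new ArrayDeque<>();
        work.push(head);
        while (!work.isEmpty()) {
            Node n = work.pop();
            if (!visited.add(n)) continue;
            visited.add(n.item);
            if (n.next != null) work.push(n.next);
        }
        System.out.println("objects reachable from the queue head: " + visited.size());
    }
}
```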
The percentage is going up over a long period (>24 hrs), so I'm hopeful it will eventually finish, but there are some improvements that could be made.
Improvement Ideas
Edit: Automatic updates restarted my PC and I have to start from scratch again 😭
thurka commented on Jan 24, 2024
1.8GB log file?? Do you mean a 1.8 GB heap dump? If so, it would be great if you could share it, so we can make sure that we have improved your case.
Thanks.
sanket1729 commented on Mar 21, 2024
@Speiger, did you manage to complete the process after some time? Or was it always stuck at 33%? I am facing a similar issue and VisualVM has been running for the past hour, stuck at 33%.
Speiger commented on Mar 22, 2024
@sanket1729 Nope. I gave up and simply started scanning my code and found my bugs.
Anyway, using streams and lambda chains will basically kill VisualVM's usability, and that sadly becomes more common by the day.
So @thurka would have to fix this overloaded process somehow, otherwise they can effectively mark it as "useless", because it gets overloaded instantly by large-scale projects.
tabarakMohammed commented on Nov 11, 2024
And that means if we have lambdas or streams in our code, we cannot compute Retained sizes?
In my case, I increased the heap size, but it still takes too much time.
Speiger commented on Nov 11, 2024
@tabarakMohammed Minecraft is known to use lambdas excessively (not static ones that can be cached, but ones that create unique instances), which is the problem here.
You can use lambdas, but if you have 5-6 layers of multithreaded lambdas, the result is that you cannot use that functionality.
It's an extreme case where this program gives terrible feedback and leaves people wondering.
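For illustration, a minimal sketch of that difference (not from the reported code; `LambdaInstanceSketch` and the counts are made up): a lambda that captures nothing is typically reused as a single cached instance, while a lambda that captures local state creates a new object every time it is evaluated:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class LambdaInstanceSketch {
    // Captures nothing: HotSpot typically reuses a single cached instance
    // (an implementation detail, not guaranteed by the spec).
    static Supplier<String> nonCapturing() {
        return () -> "constant";
    }

    // Captures a local value: a new lambda object is created on every call.
    static Supplier<String> capturing(int i) {
        return () -> "value " + i;
    }

    public static void main(String[] args) {
        System.out.println(nonCapturing() == nonCapturing()); // usually true - same cached instance

        List<Supplier<String>> kept = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) {
            kept.add(capturing(i)); // 100,000 distinct lambda objects kept on the heap
        }
        System.out.println("capturing lambdas retained: " + kept.size());
    }
}
```

If such capturing lambdas also reference other objects or each other, the resulting reference chains are what make the retained-size computation so slow, as described earlier in this thread.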
I very recently made a 16 GB heap dump and the program was able to handle it fine.
NOTE: This wasn't for the same test; there were a lot fewer lambda layers involved.