Many hard page faults during full GC #674
When working with images that (necessarily) have a size of a few GBs, I frequently notice interruptions during full GCs that can take 30 seconds or longer. resmon confirms a high number of hard page faults during these GCs, as my overall RAM consumption is often above 90% and, most of the time, more than 50% of Squeak.exe's memory is swapped out to my SSD.

If somebody felt like investing effort into making the GC more swap-friendly, that would be highly welcome from my side. For example: https://people.cs.umass.edu/~emery/pubs/f034-hertz.pdf (I do not pretend to have read this paper.)

Comments
That time does not sound unreasonable. If you do the same on a machine with sufficient RAM, how much faster is the GC?
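To put numbers on that question, a full GC can be timed from a workspace. This is a minimal sketch using standard Squeak messages (BlockClosure>>timeToRun answers milliseconds; Smalltalk garbageCollect forces a full collection and answers the bytes available afterwards):

```smalltalk
"Time one full GC and report how much space is free afterwards."
| ms bytesFree |
ms := [bytesFree := Smalltalk garbageCollect] timeToRun.
Transcript cr; show: 'full GC took ', ms printString, ' ms; ',
	(bytesFree // (1024 * 1024)) printString, ' MB free afterwards'.
```

Running the same snippet on a machine with enough physical RAM versus one that is swapping should make the difference directly visible.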
Hmm... maybe some specs of your system would help. In general, with an overall memory load above 90%, I would expect any application in the system to slow down. An incremental GC seems like the more practical approach here: it avoids buying into any platform-specific virtual-memory manager while remaining comparable across platforms.
Hi Christoph, it is in the nature of global mark-sweep collectors, non-incremental and incremental alike, that they touch each object at least once in each cycle. In the sweep they visit all objects to add unmarked ones to the free lists. So even an incremental collector won't help much here (but I would encourage as many of you as possible to get involved with Tom Braun's incremental GC for Spur; we need to productize this asap).

In the short term there are two things you can do:

a) Increase the size of eden. If your image is very large, a 128 MB, 256 MB or even 512 MB eden will usually improve things.

b) Tune when the global GC runs. It runs in two ways: to attempt to reclaim memory when a large allocation fails (see handleFailingNew: et al), and when old space grows because scavenging tenures objects to old space.

Enlarging eden reduces the rate at which scavenging tenures. Being larger, more objects can stay in eden, and hence the tenuring rate decreases.

A VM parameter controls the latter initiation of the global GC. This parameter is the ratio of growth of old space since startup or the previous GC above which a GC should be done. By default this is set to 0.3333... (which is probably way too low). So every time scavenging causes old space to grow by a third, the scavenger will run the global GC immediately afterwards. Changing this parameter to, say, 1.0 will not do a global GC until the total size has doubled.

Experiment and see how you get on, and report back. You can find the parameter in the "VM parameters" tab of "About Squeak", or in the vmParameterAt: comment. setGCParameters is a good method to put your customization in.

HTH
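For concreteness, here is a sketch of both tweaks as workspace expressions. The parameter indices are assumptions taken from the vmParameterAt: comment in recent Squeak/Spur images (45 = desired eden size stored in the image file header, 55 = the old-space growth ratio that triggers a full GC after a scavenge, 44 = current eden size); check the comment in your own image before relying on them.

```smalltalk
"a) Request a larger eden (assumed: parameter 45 is the desired eden size
    in bytes, stored in the image file header; it takes effect after the
    next save and restart)."
Smalltalk vmParameterAt: 45 put: 512 * 1024 * 1024.

"b) Raise the old-space growth ratio above which a full GC runs after a
    scavenge (assumed: parameter 55, default 0.3333...). With 1.0 the
    full GC only runs once old space has doubled."
Smalltalk vmParameterAt: 55 put: 1.0.

"Read values back to verify (assumed: 44 answers the current eden size)."
{ Smalltalk vmParameterAt: 44. Smalltalk vmParameterAt: 55 }
```

As suggested above, such customizations are best placed in setGCParameters so they are re-applied whenever the image starts up.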