perf(path): reduce RAM spikes in large directory scans #2213
base: main
Conversation
Hey @mohammadfs243, could you please give this new PR a shot when you have a chance? It should improve memory usage significantly. To test it with LazyVim, add:

```lua
return {
  'saghen/blink.cmp',
  branch = 'perf/path-scan-optimize',
  -- ...
}
```

Thanks in advance!
Hi @soifou, the other issue is the memory leak I mentioned in your other PR. Every time completion is triggered, memory usage keeps adding up.
Hmm... Are you sure you're testing it correctly? Here is the RAM consumption before this PR: 2025-10-15_12-30-55.mp4. And with this PR: 2025-10-16_14-44-38.mp4.
Could you provide a similar video with the RAM stats and the actions you perform when you observe this increase?
Unfortunately I can't record the screen at the moment, but it runs as follows:
Wait, when you refer to 1.3M, is it actually... 1,300,000 files?? On an HDD 😄?? If I get this right, I can now understand why this takes so much time: we keep all the files in memory and do the filtering on each keystroke.
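A minimal sketch of the pattern described above, assuming a flat list of names kept in memory and re-filtered on every keystroke (illustrative names only, not blink.cmp's actual internals):

```lua
-- Naive approach: the full scan result stays resident (1.3M entries in the
-- reported case) and every completion request walks the whole list again.
local all_names = {} -- filled once per directory scan

local function filter(prefix)
  local matches = {}
  for _, name in ipairs(all_names) do
    if vim.startswith(name, prefix) then
      matches[#matches + 1] = name
    end
  end
  return matches
end
```

Both the resident list and the per-keystroke pass over it scale linearly with the directory size, which is why the limit discussed below helps.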
Yeah, we should re-introduce the limit, maybe at 10k files? 5b4055e
Limiting does not solve the problem, but short of a refactoring I guess this is an acceptable in-between solution! 10k is fine I guess (200k on SSD too 😄), I'll add that. In any case, these changes still seem valid with regard to the spotted memory leak, wdyt?
Add configurable `max_entries` and `chunk_size`, including validation and user notification when the limit is reached.
Quick follow-up: I re-added the entry limit we talked about (default 10k) and made it configurable, along with the chunk size, for the brave souls who want to tweak it. I also added a user notification when the limit kicks in.
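For reference, here is a minimal sketch of the chunked, capped scan the commit describes, assuming libuv's `fs_scandir` API; the `scan_dir` helper and its defaults are illustrative, not the PR's actual code:

```lua
local uv = vim.uv or vim.loop

--- Scan `dir` in chunks, stopping at `max_entries` with a notification.
--- @param dir string
--- @param opts { max_entries?: integer, chunk_size?: integer }
--- @param on_done fun(entries: string[])
local function scan_dir(dir, opts, on_done)
  local max_entries = opts.max_entries or 10000
  local chunk_size = opts.chunk_size or 200

  local handle = uv.fs_scandir(dir)
  if not handle then return on_done({}) end

  local entries = {}
  local function read_chunk()
    for _ = 1, chunk_size do
      local name = uv.fs_scandir_next(handle)
      if name == nil then return on_done(entries) end
      entries[#entries + 1] = name
      if #entries >= max_entries then
        vim.notify(
          ('path source: entry limit (%d) reached for %s'):format(max_entries, dir),
          vim.log.levels.WARN
        )
        return on_done(entries)
      end
    end
    -- Defer the next chunk so huge directories never block the UI thread.
    vim.schedule(read_chunk)
  end
  read_chunk()
end
```

Chunking via `vim.schedule` keeps the editor responsive during the scan, while the hard cap bounds both scan time and resident memory.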
@soifou It works pretty well, thanks for your effort 👍 Just one minor thing: how does the sorting or ranking of suggestions work? I expect the exact match to be in the top spot, but it isn't (there's a file named "36" that doesn't show up first). PS: I think the memory leak is still there.
Since we are limiting the total number of entries and, as far as I know, the order during the scan can be arbitrary, this impacts which entries are available for suggestions. That could explain why you don't see the "36" file. At what magnitude do you currently observe this memory leak?
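To illustrate the point above: whether a given file can be suggested at all depends on whether it falls within the first `max_entries` results of the (arbitrary-order) scan. A hypothetical check, with the helper name and signature assumed for illustration:

```lua
local uv = vim.uv or vim.loop

--- Returns the position of `wanted` within the capped scan window, or nil if it
--- falls outside the first `max_entries` entries and thus can never be suggested.
local function position_in_scan(dir, wanted, max_entries)
  local handle = uv.fs_scandir(dir)
  if not handle then return nil end
  for i = 1, max_entries do
    local name = uv.fs_scandir_next(handle)
    if name == nil then return nil end -- directory exhausted before hitting the cap
    if name == wanted then return i end
  end
  return nil -- beyond the cap: the fuzzy matcher never sees it
end

print(position_in_scan(vim.fn.getcwd(), '36', 10000))
```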
For the first invocation of suggestions the usage goes up to around 70MB, and if I trigger the suggestions again it reaches ~105MB and stays there for further invocations of the completion list. It's negligible and may be related to Lua's GC.
Yes, this is similar to what you can observe in my second video. After the second iteration, the RAM consumption stabilizes. Seems fine to me in this scenario.
Yeah, not everyone has a directory with 1.3M files, and if they do, it's only around 100MB 😄
Sorry it took so long to review this. I don't think we should expose the
Thanks for the feedback. Since I made Regarding


No description provided.