
Conversation

adriangb (Contributor)

Currently we prefetch 1 file. In bandwidth-limited systems (e.g. object storage), having wider IO than CPU tends to make sense. This makes the prefetch configurable so that we can open >2 files at a time. My main concern is memory use; for now users will have to tune this manually. Long term I think we should track how much prefetched data is 'waiting' using a memory pool plus the newly added memory tracking in arrow: you could then set a pretty large prefetch value and it would be throttled by memory. This means that scans with very selective filters (that produce few rows per file) would benefit heavily from wide IO prefetching, while scans that produce large amounts of data would do little prefetching once they hit memory limits.
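
A minimal sketch of the idea, not DataFusion's actual `FileStream` code: it uses `futures::stream::buffered` to keep a configurable number of file opens in flight. The `open_file` helper, the file paths, and the `prefetch` variable are illustrative assumptions, not names from this PR.

```rust
use futures::stream::{self, StreamExt};

// Stand-in for opening a file / fetching an object from object storage.
async fn open_file(path: String) -> Result<Vec<u8>, std::io::Error> {
    tokio::fs::read(path).await
}

#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
    let paths: Vec<String> = (0..16).map(|i| format!("data/part-{i}.parquet")).collect();

    // `prefetch` controls how many file opens are in flight at once.
    // 1 roughly matches the old behaviour; larger values widen IO at the
    // cost of holding more prefetched data in memory.
    let prefetch = 4;

    let mut results = stream::iter(paths)
        .map(open_file)
        .buffered(prefetch); // at most `prefetch` futures polled concurrently

    while let Some(bytes) = results.next().await {
        let bytes = bytes?;
        // ... decode / process the file contents here ...
        println!("read {} bytes", bytes.len());
    }
    Ok(())
}
```

The memory-throttled variant described above would, in this sketch, amount to gating each `open_file` call on a memory-pool reservation sized to the prefetched data, so a large `prefetch` value degrades gracefully under memory pressure.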

github-actions bot added the core (Core DataFusion crate), common (Related to common crate), and datasource (Changes to the datasource crate) labels on Sep 24, 2025
adriangb force-pushed the configurable-prefetch branch from 3173a4c to 6900198 on September 24, 2025 at 17:58