Link to the code that reproduces this issue
https://github.com/markomitranic/nextjs-nested-cache-reproduction
To Reproduce
Start the application in development or production mode, or use the hosted Vercel deployment.
Open http://localhost:3000, which contains a list of pages and API routes, each one being a standalone reproduction.
Current vs. Expected behavior
L1 cache (outer) is set to 60 seconds, while L2 cache (deeper) is set to 1 hour.
A refresh of the page after 60 seconds is supposed to invalidate only the L1 cache, leaving the L2 cache intact.
What we observe, in all provided test cases, is that both caches are invalidated every 60 seconds, causing repeated expensive L2 calls.
The same thing happens when revalidateTag is used to invalidate only the L1 cache.
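For reference, the nesting pattern in question looks roughly like this (a minimal sketch; the function, key, and tag names are illustrative and not taken from the reproduction repository):

```ts
import { unstable_cache } from "next/cache";

// L2 (deeper): the expensive upstream call, cached for 1 hour.
const getDatasetL2 = unstable_cache(
  async () => {
    console.log("L2 MISS - expensive upstream call");
    const res = await fetch("https://api.example.com/huge-dataset");
    return res.json();
  },
  ["dataset-l2"],
  { revalidate: 3600, tags: ["dataset-l2"] }
);

// L1 (outer): a cheap transformation on top of L2, cached for 60 seconds.
const getItemL1 = unstable_cache(
  async (id: string) => {
    console.log("L1 MISS - recomputing from L2");
    const dataset = await getDatasetL2();
    return dataset.items.find((item: { id: string }) => item.id === id);
  },
  ["item-l1"],
  { revalidate: 60, tags: ["item-l1"] }
);

// Expected: after 60 seconds only "L1 MISS" is logged again, and "L2 MISS"
// reappears only after 1 hour. Observed: both misses happen every 60 seconds.
```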
Provide environment information
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.3.0: Thu Jan 2 20:24:23 PST 2025; root:xnu-11215.81.4~3/RELEASE_ARM64_T6020
Available memory (MB): 32768
Available CPU cores: 10
Binaries:
Node: 22.14.0
npm: 11.1.0
Yarn: 1.22.22
pnpm: N/A
Relevant Packages:
next: 15.3.0-canary.18 // Latest available version is detected (15.3.0-canary.18).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.8.2
Next.js Config:
output: N/A
Which area(s) are affected? (Select all that apply)
Use Cache
Which stage(s) are affected? (Select all that apply)
Vercel (Deployed) - https://nextjs-nested-cache-reproduction.vercel.app
Additional context
I discovered this when one of our API suppliers asked why we were making 3 million API calls daily to a very expensive API, which I thought I had cached via unstable_cache (or fetch) for 6 days.
The API returns a 70-700 KB JSON payload with a large dataset, which I never use raw. Instead, I cache the dataset for 6 days, and cache individual findById or similar transformative calls separately with a shorter TTL (see the sketch after the list below).
You might ask "why not just set the same TTL, since the underlying dataset won't change for 6 days", and I have three answers to that:
This is a bug, regardless of the intended usage.
One may wish to run revalidateTag on the dataset, and rely on the short TTL to slowly recalculate its results.
There are situations where it is outside our control, such as library code or large codebases.
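Under these constraints, the intended two-layer setup looks roughly like the following sketch (the endpoint, identifiers, and the 30-minute lookup TTL are illustrative assumptions, not the actual supplier integration):

```ts
import { unstable_cache, revalidateTag } from "next/cache";

// The raw dataset: cached for 6 days, never served to users directly.
const getDataset = unstable_cache(
  async () => (await fetch("https://api.example.com/catalog")).json(),
  ["catalog"],
  { revalidate: 60 * 60 * 24 * 6, tags: ["catalog"] }
);

// A derived lookup with its own, much shorter TTL.
export const findById = unstable_cache(
  async (id: string) => {
    const catalog = await getDataset();
    return catalog.find((entry: { id: string }) => entry.id === id);
  },
  ["find-by-id"],
  { revalidate: 60 * 30, tags: ["find-by-id"] }
);

// Invalidating only the derived lookups (from a Server Action or Route
// Handler) should leave the 6-day dataset cache warm, so findById can
// recompute cheaply against it.
export function refreshLookups() {
  revalidateTag("find-by-id");
}
```

In practice, however, the expensive dataset call is repeated whenever the shorter TTL expires or the derived tag is revalidated, which matches the behavior described above.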
Furthermore, I have also created a custom cache handler according to the Next.js documentation (or used @neshca/cache-handler) with expanded logging, and thereby shown that this behavior is not a problem of the storage or adapter, but a rule within Next.js itself.
Here are some example screenshots of the provided repository, as seen on https://nextjs-nested-cache-reproduction.vercel.app.
@markomitranic I haven't looked at the code to see why, but I suspect the cache key/tag lives in the parent context. In your case, you could simply move the getall unstable_cache out instead of nesting it.
@zedquach Yes, please look at the code, or at least read the message, before answering.
The whole point of my bug report is that I am not supposed to flatten my cache calls. Sometimes it is also outside of my control.
If calls are nested, they are still individual calls and should have separate TTLs. There is no reason whatsoever for implicit, undocumented logic that makes them share the TTL of the lowest cache wrapper in the tree.