Fast? #244
Comments
You're right, something's going on with the demo. Will look into this asap. Thanks for bringing this up.
I'm writing to say that I'm very interested in understanding why the demo seems slow, and whether it's a scalability issue with D1/KV or an external one. As I'm a massive fan of Cloudflare's stack, I was wondering whether to use SonicJS or not, but if it doesn't scale well, I cannot use it for my specific use case 🤔
One of my early POCs had over a million records and it was still screaming fast. Not sure when I'll get time to dig into this, but it's on my list. FWIW - I am using SonicJs for a production project. One table has about 6k records, and queries against it that return 20 records at a time take about 240ms, so it looks like there may be an issue specifically with the demo.
I think that one of the things that slows down this application, and I believe will certainly break it at scale, is that in your server.ts you have an app.use(*) that passes every single API request to initializeLucia. This means that even API endpoints that are publicly accessible, e.g. posts, have to go thru Lucia. Lucia is very database intensive (it checks the session on every request), and this is not good for D1, or any SQLite database for that matter, for some use cases.

In my opinion, Lucia is great for the admin routes and for the APIs that are not publicly accessible, because these will most likely not receive a huge amount of requests (it depends on the number of users of course, but I imagine there is only ever a handful of users accessing the CMS for 99% of companies). But passing publicly accessible routes thru here is asking for issues, because of the amount of requests, and the subsequent useless database reads against D1, that a publicly available API will potentially receive (even at a small scale).

I think a better approach would be to ONLY pass the admin routes and private API routes, e.g. /users/, thru Lucia. For the publicly available routes, like posts, you should not pass them thru Lucia. What you would do instead is verify a JWT that is passed as a bearer token. This is the approach used by every major company nowadays, e.g. Shopify.

The way to do it is simple: create an admin route that will generate a JWT based on an ENV secret and some payload (eventually you can attach access rights to this token too). In the admin dashboard, create a simple form that will generate it. This "public" JWT can then be copied into any third-party app and must be sent as a bearer token in the header to a publicly accessible route. Then, for every publicly accessible route, just verify this JWT before returning data. This will provide a good level of security.
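As a minimal sketch of what this split could look like (this is not SonicJS's actual code; it assumes a Hono app on Cloudflare Workers, and the route paths, `JWT_SECRET` binding, and token-issuing endpoint are all illustrative):

```ts
import { Hono } from 'hono'
import { jwt, sign } from 'hono/jwt'

type Bindings = { JWT_SECRET: string }
const app = new Hono<{ Bindings: Bindings }>()

// Admin and private API routes: full session validation (database-backed).
app.use('/admin/*', async (c, next) => {
  // initializeLucia(c) and session validation would run here
  await next()
})
app.use('/v1/users/*', async (c, next) => {
  // same database-backed session check for private APIs
  await next()
})

// Public read routes: verify a signed bearer token, no database read needed.
app.use('/v1/posts/*', (c, next) => jwt({ secret: c.env.JWT_SECRET })(c, next))

// Hypothetical admin endpoint that issues a long-lived "public" token,
// which third-party apps then send as `Authorization: Bearer <token>`.
app.get('/admin/api-token', async (c) => {
  const token = await sign({ scope: 'read:content' }, c.env.JWT_SECRET)
  return c.json({ token })
})

export default app
```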
@osseonews You may very well be correct, thanks for your input. If that is what is happening, we need to cache those reads or, as you mentioned, eliminate them where possible. We do have multiple use cases to cover in terms of access control, but we still need that functionality to be highly performant and scalable.
Yeah, the access control is an amazing feature, but for publicly accessible routes I think it will be overkill, unless it's a create/edit route. So I guess the logic should be: first determine if something is "read: true"; if not, then obviously do all the auth work; if it is "read: true" but not a create/edit, then just pass it thru without going thru Lucia. For edits/creates, obviously you need to pass thru auth. Anyway, I think it would just be easier to create JWTs for auth. It's how something like Supabase works, and nobody has any issues with it. Lucia is good for the initial authentication, but I think JWT is better for ongoing auth, especially for public endpoints.
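As a hedged illustration of that branching (the names below are hypothetical, not SonicJS's actual access-control API), the fast path could be as small as:

```ts
// Hypothetical sketch of the fast path described above: public-read GETs skip
// session auth entirely; writes and non-public reads fall through to Lucia.
type AccessPolicy = { read: boolean }

function needsSessionAuth(method: string, policy: AccessPolicy): boolean {
  const isWrite = method !== 'GET' && method !== 'HEAD'
  return isWrite || !policy.read
}

// Example: a GET against a table marked { read: true } bypasses Lucia.
console.log(needsSessionAuth('GET', { read: true }))   // false
console.log(needsSessionAuth('POST', { read: true }))  // true
```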
SonicJs has been completely rewritten on top of Astro, so this issue should no longer be present.
Data Retrieval - Server: 1691ms, Client: 2443ms. Source: d1
@bugproof my bad here, I updated the issue before deploying the demo site. If you check the new demo, responses are under 300ms. However, the caching is not in place yet. Once it is, we should be back to <100ms response times in most cases.
Thanks for the info @bugproof. I will do some additional testing once I have the caching in place. I did some testing with a VM I had in France early last year and it was lightning fast, so I just need to figure out what's happening. I suspect caching will resolve it, but I will report back asap.
I do have KV caching back in place in a branch and a test URL here: I checked in Pingdom and PageSpeedPlus and they are both showing the response globally much faster: This is just with KV caching. When I fold in-memory caching back in, it should be much faster, which I will do asap. Even < 400ms for places like Tokyo would typically be a massive improvement for most real-world situations. Nevertheless, I will report back when in-memory caching is in place. Ultimately I will rerun the K6 performance tests that I have to get a more holistic performance baseline. If you do have a moment to test the above URL, please let me know what the initial response time is, and then what it is as you refresh the URL a few times in your browser (does it improve after the first hit?)
Here is a quick implementation branch of the in-memory cache. You'll have to hit the URL once and then take the timing of the second refresh to see the response time. This won't be the case once it's fully implemented, because SonicJs will automatically self-warm the cache the first time a user hits any endpoint in a new region. So the second hit on this URL should give us an idea of what you can expect in your local region:
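For readers following along, here is a rough sketch of the memory → KV → D1 fallthrough being described, written for a Cloudflare Worker. It is an assumption-laden illustration, not the actual SonicJs cache code: the `KV` and `DB` binding names, the TTL, and the `source` field (mirroring the `Source: d1` annotation in the demo responses) are made up for the example.

```ts
// Module-scoped map: survives across requests on a warm isolate, which is
// what makes the *second* hit in a region fast, as described above.
const memoryCache = new Map<string, unknown>()

interface Env {
  KV: KVNamespace   // assumed KV namespace binding
  DB: D1Database    // assumed D1 binding
}

async function getCached(key: string, sql: string, env: Env) {
  // 1. In-memory: ~0ms execution time, but only on a warm isolate in this region.
  if (memoryCache.has(key)) {
    return { data: memoryCache.get(key), source: 'memory' }
  }
  // 2. KV: a few ms once the value is replicated near the user.
  const fromKv = await env.KV.get(key, 'json')
  if (fromKv !== null) {
    memoryCache.set(key, fromKv) // self-warm the in-memory layer
    return { data: fromKv, source: 'kv' }
  }
  // 3. D1: the slow path; populate both cache layers on the way out.
  const { results } = await env.DB.prepare(sql).all()
  memoryCache.set(key, results)
  await env.KV.put(key, JSON.stringify(results), { expirationTtl: 3600 })
  return { data: results, source: 'd1' }
}
```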
Hey Lane, this is looking cool - thanks for the updates. Testing on this link from London, always hard refreshing (Ctrl+Shift+R) in a Chrome incognito window: https://feature-cache-locals.sonicjs-jo6.pages.dev/api/v1/categories?limit=5

1st hit - I tested a few times, pausing between refreshes, and got load times up to and just over 1s.

2nd hit - I tested a few times without pausing; the load time averages out at roughly 49ms. I got some as low as 39ms, and one random one at roughly 500ms too.

Sometimes, after waiting a bit, the first refresh is fast like the 2nd hit and the second refresh is slow like the 1st hit. Seems like unexpected behaviour?
@sebastianpetrovski thanks for testing and posting your results. I forgot to mention that the API itself tracks the server-side execution time and adds it to the payload. For example, in this case it's zero, but the request still takes 54ms due to transit time: If you see unexpected timing, check the executionTime in the payload to try and pinpoint where the delay is coming from. Hopefully it's just normal network latency, which we obviously can't do anything about. However, if the executionTime isn't near 0 for an in-memory cached response, then there is indeed an issue we need to resolve. Or, if a previously in-memory response switches to something else (KV or D1), it means that cache is no longer valid and had to be refreshed, which is again something that needs to be understood. Currently the cache is not tied to any keep-alive time period.
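A quick way to do that check from a client is sketched below (hedged: the `executionTime` and `source` field names are taken from this thread and may differ in the actual API response):

```ts
// Hypothetical test script: separates server-side work (executionTime in the
// payload, as described above) from network transit time.
const url =
  'https://feature-cache-locals.sonicjs-jo6.pages.dev/api/v1/categories?limit=5'

async function timeRequest(): Promise<void> {
  const started = Date.now()
  const res = await fetch(url)
  const body = (await res.json()) as { executionTime?: number; source?: string }
  const total = Date.now() - started
  const server = body.executionTime ?? 0

  console.log(`total: ${total}ms, server: ${server}ms, source: ${body.source}`)
  console.log(`implied network/transit: ~${total - server}ms`)
}

await timeRequest()
```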
This was also not the first fetch, so any caching layer doesn't seem to be helping much. When the source is KV, exec times are much lower, so it seems like this is an issue with D1 performance. Dunno if this is the best way to test, but the TTFB numbers here are... bad.
@lane711 Ah ok, thanks for explaining. I'll look out for that. It seems to revert back to D1 pretty quickly for me, after maybe 30 seconds, and that's when the execution time goes up from 0 (in-memory) to 400ms and up for D1.

Update: I got distracted while I was writing this comment and came back to test the link again before posting. Now it's going to KV storage only - even after multiple hard refreshes. KV is mostly quick though - the execution time averages 6ms. If I let it sit for about 30 seconds before refreshing, the execution time seems to increase to an average of around 130ms.
https://demo.sonicjs.com/
Data Retrieval - Server: 2469ms, Client: 2877ms