fix(types): rewrite type definitions #142
Conversation
Thanks, these look like sorely needed improvements to the types! I will need to find some time to review this carefully -- it may take a week or two, as I'm pretty swamped at the moment. But I wanted to let you know I've seen this and am eager to integrate it, as bizarre type errors are a sore spot I'm seeing a lot of right now.
My honest suggestion is to add type-only tests with recursive nesting and an N-depth type-inference case. We are manually restricting the type depth in our codebase by policy, so it's quite useful for us to know the new depth limit if this gets merged.
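For context, depth probes like the one suggested above are often written with a tuple-indexed decrement counter. This is a hypothetical sketch; the names `Prev` and `Nest` are illustrative and not from this codebase:

```typescript
// Hypothetical helpers for probing type-inference depth.
// Prev[N] maps a number literal N to N - 1 (tuple-indexed decrement).
type Prev = [never, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9];

// Nest<T, N> builds an N-level nested object type around T.
type Nest<T, N extends number> = N extends 0 ? T : { child: Nest<T, Prev[N]> };

// A 2-level nesting: { child: { child: string } }.
// This assignment only compiles if inference survives to that depth.
const probe: Nest<string, 2> = { child: { child: "x" } };
```

Raising the `N` argument until the compiler gives up is one way to measure the effective depth limit of a set of type definitions.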
Hi @VastBlast! Thanks so much for this PR. These fixes look great and I think are more thorough than what I would have likely come up with. A couple requests:
Thanks!

And sorry again for the slow review -- it's been an excessively busy few weeks.
This PR rewrites the type definitions to fix a ton of issues. I believe this closes all the open & confirmed type issues/bugs reported. Based on my testing, I did not find any new issues, but please let me know if anything slipped through.
My motivation for this stemmed from the current type definitions being incredibly hard to work with in my current project due to the bugs. The library is very neat in terms of actual functionality and performance (which I love) but the types are hard to work with.
I will admit, AI assisted with a lot of this (gpt-5.3-codex:Xhigh), but I did ultimately rewrite parts myself. However, my methodology was not blindly using AI: I first wrote several type tests and created a type-testing suite to run against many different base/edge cases. I then tested it against the current types before using AI to help rewrite them. After many hours, feedback loops, and handwritten rewrites, I finally landed on this.
I decided against adding those tests to this PR since I do not know if that is something wanted or structured in a specific way. Let me know if you'd like them added and I can push them here.
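For reference, type-only tests of this sort are often written with hand-rolled `Expect`/`Equal` helpers. This is a hypothetical sketch of that common community pattern, not the actual suite from this PR:

```typescript
// Hypothetical type-assertion helpers (common community pattern).
// Equal<A, B> resolves to true only when A and B are identical types,
// using generic-function variance so that e.g. `any` does not match `unknown`.
type Equal<A, B> =
  (<T>() => T extends A ? 1 : 2) extends (<T>() => T extends B ? 1 : 2)
    ? true
    : false;
type Expect<T extends true> = T;

// Example type-level assertion (compiles only if the types match).
type _parseReturn = Expect<Equal<Promise<unknown>, Promise<unknown>>>;

// A value-level form: this assignment only compiles when Equal is true.
const check: Equal<Promise<unknown>, Promise<unknown>> = true;
```

Such tests fail at compile time, so they can run as part of an ordinary `tsc --noEmit` check without any runtime harness.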
While working on this, I noticed a comment left by @kentonv:
So I understand if this is being worked on internally and you'd rather not accept a PR, but I do think it's worth looking into these types and potentially using what you can from them.
Here is the list of changes/fixes:
- `any` return values no longer leak through as `any`. `.parse(): any` now gives `api.parse(): Promise<unknown>` (not `Promise<any>`).
- Callback parameters are now inferred from `RpcStub<(...args) => ...>` signatures. Given `run(cb: RpcStub<(n: number) => void>)`, `stub.run(n => n.toFixed())` infers `n` as `number` (not `any`).
- `.map()` callback results now reject native promises (including nested ones), and inputs are intentionally non-thenable placeholders. `api.listUsers().map(u => Promise.resolve(u.getName()))` is a type error. `api.listNames().map(name => name.then(...))` is now a type error.
- `unknown` return values are now handled consistently. `getUnknown(): unknown` gives `api.getUnknown(): Promise<unknown>` instead of degrading to unusable `never` shapes.
- `unknown` object fields are now accepted in RPC-compatible types. `Record<string, unknown>` is treated as valid `RpcCompatible`.
- `.map()` callback placeholder values can now be forwarded into params directly. `api.listIds().map(id => api.getUser(id))` type-checks even though `id` is a map placeholder, not a plain number.
- Function placeholders inside `map()` keep their call signatures. If `formatter` is `(n: number) => string`, then `formatter(1)` is allowed and `formatter("x")` is rejected.
- Tuple results keep per-slot types. Given `Promise<readonly [number, User]>`, `stub[0]` is typed from the tuple slot instead of a generic array fallback.
- `Stubify`/`Unstubify` now filter to string/number keys in mapped objects, so symbol-keyed properties are excluded from RPC object mapping.
- `UnstubifyAll` now uses `readonly unknown[]`, so for example methods expecting `readonly [string, number]` keep that type in stub call args.
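The symbol-key filtering can be illustrated with a key-remapping mapped type. This is a hypothetical sketch of the idea, not the PR's actual `Stubify` implementation:

```typescript
// Hypothetical sketch: keep only string/number keys via key remapping,
// dropping symbol-keyed properties from the mapped object type.
type FilterKeys<T> = {
  [K in keyof T as K extends string | number ? K : never]: T[K];
};

const tag: unique symbol = Symbol("tag");

interface WithSymbol {
  name: string;
  count: number;
  [tag]: boolean; // symbol-keyed: excluded by FilterKeys
}

// `filtered` has only `name` and `count`; omitting the symbol key compiles.
const filtered: FilterKeys<WithSymbol> = { name: "a", count: 1 };
```

Because the `as` clause remaps symbol keys to `never`, they vanish from the resulting type rather than producing an error, which matches the "excluded from RPC object mapping" behavior described above.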