Current serde/shims don't parse Vec<u8>, only hex/base64/pem strings #86
I have some doubts related to this. There are currently some fields which contain binary content that at some point may be serialized/deserialized to a text container, e.g. the … So my doubts are:
This again comes back to the idea that TUF is moving away from rigid definitions for everything. Even as it stood at the time Notary was implemented, it didn't say that an implementation must support ed25519 keys or even sha256, so I could have written a TUF lib that still perfectly followed the spec but wouldn't be able to talk to Notary, which also perfectly followed the spec. I chose b64 because hex is ~50% more data to send per hash/keyid/sig. There are some obvious reasons for TUF not to specify things rigidly.
I think trying to make a lib that is fully general purpose that can handle all cases is too complex. This lib is intended to be used as both the client and server lib with no guarantees about interop. It might behoove me to document this fact better in both the readme and rustdocs. I do make some opinionated decisions that deviate from the non-security-critical designs in TUF (key encoding, for example). If this is seen as an issue with the TUF spec, you should file something upstream. Or say @trishankkarthik's name 3 times in front of a mirror and see if he appears behind you.
I'm not sure what you mean by this. Should the JSON have both hexlower and b64 as different fields? If so, I'm going to say no, since there is already a huge amount of metadata being transferred with TUF. I'm actually going to strip the default metadata generators to only use user-defined hash types and not "all", since that would further reduce the amount of data transferred. See #106.
Also, as a note, this is even more annoying because writing metadata is more complex. How would I take the bytes of a hash and encode them as both a string and a byte array? Each …
Thanks for the answer, you are touching many points that I omitted to avoid going out of scope, but those insights are appreciated. Getting to specific topics:
Yes, it would be nice to point out in the top-level crate doc that there is no interoperability with anything else, on purpose, as per the spec. It would also be nice to collect all your opinionated tweaks and their rationale in a markdown file here.
No. Just for reference, this was suggesting an internally tagged enum (in serde terms):
I think I will refrain from that. Both you and @trishankkarthik are upstream and well aware of the encoding details. The fact that in the whole current section 4 there is not a single …
I'll make a note in #109 to ensure I cover all the data formats and how everything works and not just the code you need to make the lib work.
This feels like adding cruft/bloat to a new lib that doesn't yet need to support backwards compatibility.
I actually see this as a deficiency of the spec, because at some point those were all mandatory (or so I believe), and I tried to increase the precision of the language, clean up the spec a bit, and turn it into an RFC here: theupdateframework/python-tuf#455. (Note: that PR is dated by now.)
Thanks, that would have helped in understanding the differences/caveats before digging into the code :)
I agree, as far as there is no need to interop and the chosen encoding is documented somewhere. I'm not suggesting any of those further, just clarifying the meaning.
As above, I'm not planning to drag this discussion there, so feel free to report it upstream to the spec if you deem it a deficiency there (or tell me explicitly if I should do so instead, but in general I'm not as involved in TUF processes as you are). For reference, I previously saw your stale RFC PR and would love to see the spec evolve in that direction.
Honestly, I didn't actually expect people to start using this so much yet. :)
Nah, it's fine. I actually support the flexibility and lack of interop for the reasons mentioned.
If you like the RFC and would like to see the spec RFC'd, you could let the homies over that-a-way know that you prefer the RFC format over the …
I'll just note that the question of "how to encode bytes in JSON" (i.e. hex vs base64) is one of the things I was trying to address in TJSON: it allows for both, with mandatory type tags selecting the encoding and base64url as the default.
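If I recall the TJSON draft correctly, the tags are suffixes on the member names, roughly like the fragment below (a sketch from memory of the draft spec, not from this project; tag names may differ from the current draft):

```json
{
  "digest:d": "SGVsbG8sIHdvcmxkIQ",
  "digest-as-hex:d16": "48656c6c6f2c20776f726c6421"
}
```

Here `d` marks binary data in the default base64url encoding and `d16` marks the same kind of data encoded as hex, so a parser always knows how to recover the bytes.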
This would prevent us from having a serialization format that parses bytes directly (and not bytes-that-are-a-string, I think). This should work: