While benchmarking vtprotobuf in our projects, we noticed a performance regression for lists of `int32` and `sfixed32` numbers.

Marshaling and unmarshaling both seem to be slower with vtprotobuf for `repeated int32` fields. Although unmarshaling is faster with vtprotobuf for `repeated sfixed32`, marshaling is slower.

This repository contains samples of these microbenchmarks: https://github.com/themreza/vtprotobuf-bench/tree/main

What could be causing this? Is there a way to improve the performance?

It would be helpful to have automated benchmarks for different data types comparing vtprotobuf with the built-in `proto.Marshal` and `proto.Unmarshal`.
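For reference, here is a minimal sketch of the kind of side-by-side benchmark being suggested. It assumes a hypothetical generated package `benchpb` with a message `Sample` containing a `repeated int32 values = 1;` field, compiled with both the standard Go plugin and the vtprotobuf plugin (so the message has `MarshalVT`/`UnmarshalVT` methods); the import path and message name are placeholders and would need to match the actual benchmark repository.

```go
package benchpb_test

import (
	"testing"

	"google.golang.org/protobuf/proto"

	// Hypothetical import path; adjust to the real generated package.
	benchpb "github.com/themreza/vtprotobuf-bench/benchpb"
)

// newSample builds a message with a reasonably long repeated int32 list.
func newSample() *benchpb.Sample {
	values := make([]int32, 1024)
	for i := range values {
		values[i] = int32(i)
	}
	return &benchpb.Sample{Values: values}
}

func BenchmarkMarshalProto(b *testing.B) {
	msg := newSample()
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		if _, err := proto.Marshal(msg); err != nil {
			b.Fatal(err)
		}
	}
}

func BenchmarkMarshalVT(b *testing.B) {
	msg := newSample()
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		if _, err := msg.MarshalVT(); err != nil {
			b.Fatal(err)
		}
	}
}

func BenchmarkUnmarshalProto(b *testing.B) {
	data, _ := proto.Marshal(newSample())
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		var out benchpb.Sample
		if err := proto.Unmarshal(data, &out); err != nil {
			b.Fatal(err)
		}
	}
}

func BenchmarkUnmarshalVT(b *testing.B) {
	data, _ := proto.Marshal(newSample())
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		var out benchpb.Sample
		if err := out.UnmarshalVT(data); err != nil {
			b.Fatal(err)
		}
	}
}
```

Running these with `go test -bench=. -benchmem` would make the regression directly comparable across data types, and similar pairs could be added for `sfixed32` and other field types.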