I have new results from timing serialization/deserialization of protobuf and JSON, both uncompressed and compressed with deflate:
| Items | write-proto | write-compressed-proto | write-json | write-compressed-json | read-proto | read-compressed-proto | read-json | read-compressed-json |
|---|---|---|---|---|---|---|---|---|
| 10000 | 17 | 24 | 30 | 31 | 16 | 33 | 29 | 21 |
| 100000 | 102 | 210 | 157 | 281 | 87 | 336 | 169 | 197 |
| 1000000 | 551 | 2144 | 946 | 2307 | 753 | 3175 | 1715 | 2038 |
| 10000000 | 4762 | 18999 | 9617 | 23476 | 7102 | 31885 | 18004 | 20854 |
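For reference, here is a minimal sketch of the kind of harness that would produce rows like the table above. It assumes protobuf-net (`Serializer`) for the protobuf side, System.Text.Json for JSON (the synchronous `Stream` overloads require .NET 6+), and `DeflateStream` for compression; the `Sample` type, file names, and the `Benchmark` class are placeholders, not the actual code in this repo.

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.IO.Compression;
using System.Text.Json;
using ProtoBuf;

// Placeholder record type; the real benchmark type lives in this repo.
[ProtoContract]
public class Sample
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(2)] public string Name { get; set; } = "";
    [ProtoMember(3)] public double Value { get; set; }
}

public static class Benchmark
{
    static void Time(string label, Action action)
    {
        var sw = Stopwatch.StartNew();
        action();
        sw.Stop();
        Console.Write($"{label}:{sw.ElapsedMilliseconds},");
    }

    public static void Run(List<Sample> items, string dir)
    {
        Console.Write($"{items.Count} ,");

        // Protobuf straight to a FileStream, uncompressed.
        Time("write-proto", () =>
        {
            using var fs = File.Create(Path.Combine(dir, "data.bin"));
            Serializer.Serialize(fs, items);
        });

        // Protobuf through a DeflateStream.
        Time("write-compressed-proto", () =>
        {
            using var fs = File.Create(Path.Combine(dir, "data.bin.deflate"));
            using var ds = new DeflateStream(fs, CompressionLevel.Optimal);
            Serializer.Serialize(ds, items);
        });

        // JSON via System.Text.Json (synchronous Stream overload needs .NET 6+).
        Time("write-json", () =>
        {
            using var fs = File.Create(Path.Combine(dir, "data.json"));
            JsonSerializer.Serialize(fs, items);
        });

        // Reads mirror the writes; the remaining JSON and compressed-JSON
        // cases follow the same pattern and are omitted for brevity.
        Time("read-proto", () =>
        {
            using var fs = File.OpenRead(Path.Combine(dir, "data.bin"));
            _ = Serializer.Deserialize<List<Sample>>(fs);
        });

        Time("read-compressed-proto", () =>
        {
            using var fs = File.OpenRead(Path.Combine(dir, "data.bin.deflate"));
            using var ds = new DeflateStream(fs, CompressionMode.Decompress);
            _ = Serializer.Deserialize<List<Sample>>(ds);
        });

        Console.WriteLine();
    }
}
```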
I need to update the program code here with the modifications from my local machine, and update readme.md as well. If speed is a concern, more thought may be needed. In short, protobuf is the clear winner for speed when writing uncompressed through FileStreams; once compression is added, JSON performance seems comparable to protobuf.
I bet I could still get better performance by leveraging Utf8JsonReader/Utf8JsonWriter, but I need to figure out how to work with the Span<T> and Memory<T> APIs properly to use them.
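As a rough sketch of what that might look like (not the actual implementation): the writer below streams each object straight to UTF-8 with `Utf8JsonWriter`, and the reader takes the easy path of loading the whole file into a `ReadOnlySpan<byte>` before walking it with `Utf8JsonReader`. True chunked streaming would mean carrying `JsonReaderState` across buffers, which is the part that still needs working out. The `Sample` shape is the same placeholder as above.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Text.Json;

public static class Utf8JsonSketch
{
    // Write each object directly as UTF-8, with no intermediate string.
    public static void WriteSamples(Stream output, IEnumerable<Sample> items)
    {
        using var writer = new Utf8JsonWriter(output);
        writer.WriteStartArray();
        foreach (var item in items)
        {
            writer.WriteStartObject();
            writer.WriteNumber("id", item.Id);
            writer.WriteString("name", item.Name);
            writer.WriteNumber("value", item.Value);
            writer.WriteEndObject();
        }
        writer.WriteEndArray();
    } // Dispose flushes the writer.

    // Utf8JsonReader is a ref struct over ReadOnlySpan<byte>; this simple
    // version reads the whole file into memory first rather than streaming.
    public static List<Sample> ReadSamples(string path)
    {
        ReadOnlySpan<byte> utf8 = File.ReadAllBytes(path);
        var reader = new Utf8JsonReader(utf8);
        var result = new List<Sample>();
        Sample? current = null;
        string? property = null;

        while (reader.Read())
        {
            switch (reader.TokenType)
            {
                case JsonTokenType.StartObject:
                    current = new Sample();
                    break;
                case JsonTokenType.PropertyName:
                    property = reader.GetString();
                    break;
                case JsonTokenType.Number when property == "id":
                    current!.Id = reader.GetInt32();
                    break;
                case JsonTokenType.Number when property == "value":
                    current!.Value = reader.GetDouble();
                    break;
                case JsonTokenType.String when property == "name":
                    current!.Name = reader.GetString() ?? "";
                    break;
                case JsonTokenType.EndObject:
                    result.Add(current!);
                    break;
            }
        }
        return result;
    }
}
```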