
Add hooks to json encoder to override default encoding or add support for unsupported types #7015

Open
wants to merge 13 commits into main

Conversation

adriangb
Contributor

We essentially had to vendor this module to implement this. I know this has been discussed in the past and there was pushback against exposing Encoder in the public API, but since I had to go through the work of updating our vendored version I thought I'd at least try to make a PR to upstream this, to make our lives easier in the future and bring the value back to the community.

In my opinion, exposing this is a rather small addition to the public API, and any breaking changes in the future will be restricted to the JSON encoder, which is not a core part of the project. It seems worth it and addresses various requests, e.g. how to encode a binary array (a base64-encoded string and an array of integers are both common). It has also been a stable API for ~1 year now.
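As an illustration of the kind of override the hook enables (the function here is hypothetical, not the PR's API): rendering binary data as a JSON array of integers, one of the two common representations mentioned above. The base64 alternative is omitted since it would need an extra dependency.

```rust
// Hypothetical, std-only sketch: encode a byte slice as a JSON array of
// integers, which is one of the encodings an EncoderFactory could choose.
fn encode_binary_as_ints(bytes: &[u8], out: &mut Vec<u8>) {
    out.push(b'[');
    for (i, b) in bytes.iter().enumerate() {
        if i > 0 {
            out.push(b',');
        }
        out.extend_from_slice(b.to_string().as_bytes());
    }
    out.push(b']');
}

fn main() {
    let mut out = Vec::new();
    encode_binary_as_ints(&[0xDE, 0xAD, 0xBE, 0xEF], &mut out);
    assert_eq!(String::from_utf8(out).unwrap(), "[222,173,190,239]");
}
```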

@adriangb
Contributor Author

TODO: add a test showing how to change the encoding of binary arrays.

@github-actions github-actions bot added the arrow Changes to the arrow crate label Jan 24, 2025
@adriangb
Contributor Author

cc @tustvold

@adriangb
Contributor Author

TODO: add a test showing how to change the encoding of binary arrays.

Done

Comment on lines 36 to 37
type EncoderFactoryResult<'a> =
Result<Option<(Box<dyn Encoder + 'a>, Option<NullBuffer>)>, ArrowError>;
Contributor Author

Should this be Option<&'a NullBuffer>?

Contributor

It probably could be for consistency, but it isn't likely to have any material performance impact

Contributor

@tustvold left a comment

I had a look, and I think it is probably fine to expose Encoder; it is a relatively straightforward trait that I don't see changing. It is the reader side that I'd be much more hesitant to alter, as that is quite coupled with the parsing machinery.

That being said there is one slight wrinkle, as documented here

The implementation of Encoder for ArrayFormatter assumes it does not produce
characters that would need to be escaped within a JSON string, e.g. '"'.
If support for user-provided format specifications is added, this assumption
may need to be revisited

I think we need to do the following:

  • Remove the impl of Encoder for ArrayFormatter in favour of a private newtype wrapper
  • Clearly document on Encoder the expectation that it generates valid escaped JSON

On a related note, the return type of (Box<dyn Encoder + 'a>, Option<NullBuffer>) is a bit of a mouthful, and honestly makes for a bit of a funky API where the semantics of an Encoder depend on a different data structure returned at construction time.

I wonder if a better API might be something like

pub trait Encoder {
    fn encode(&mut self, idx: usize, out: &mut Vec<u8>);
    fn nulls(&self) -> Option<NullBuffer>;
}

I also left a comment on potentially simplifying the factory down to a single method
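The proposed trait shape can be sketched std-only, with the null information owned by the encoder rather than returned separately at construction. Here `Vec<bool>` stands in for arrow_buffer::NullBuffer, and `Int32Encoder` and `write_array` are toy names invented for illustration; the real arrow-rs signatures differ.

```rust
// Sketch of the proposed Encoder shape: nulls() lives on the trait, so the
// (encoder, null buffer) tuple no longer needs to travel separately.
trait Encoder {
    /// Append the JSON encoding of the value at `idx` to `out`.
    fn encode(&mut self, idx: usize, out: &mut Vec<u8>);
    /// Validity for the whole array (true = valid); callers hoist this
    /// out of the per-value loop.
    fn nulls(&self) -> Option<Vec<bool>>;
}

struct Int32Encoder {
    values: Vec<i32>,
    validity: Option<Vec<bool>>,
}

impl Encoder for Int32Encoder {
    fn encode(&mut self, idx: usize, out: &mut Vec<u8>) {
        out.extend_from_slice(self.values[idx].to_string().as_bytes());
    }
    fn nulls(&self) -> Option<Vec<bool>> {
        self.validity.clone()
    }
}

/// Drive any Encoder over `len` values, emitting a JSON array.
fn write_array(enc: &mut dyn Encoder, len: usize) -> String {
    let nulls = enc.nulls(); // fetched once, outside the per-value loop
    let mut out = vec![b'['];
    for idx in 0..len {
        if idx > 0 {
            out.push(b',');
        }
        match &nulls {
            Some(v) if !v[idx] => out.extend_from_slice(b"null"),
            _ => enc.encode(idx, &mut out),
        }
    }
    out.push(b']');
    String::from_utf8(out).unwrap()
}

fn main() {
    let mut enc = Int32Encoder {
        values: vec![1, 2, 3],
        validity: Some(vec![true, false, true]),
    };
    assert_eq!(write_array(&mut enc, 3), "[1,null,3]");
}
```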

Comment on lines 48 to 49
_array: &'a dyn Array,
_data_type: &DataType,
Contributor

Why do we pass both DataType and Array?

Contributor Author

We don't need to; it just simplifies the implementation of the EncoderFactory a bit, because almost surely all of them are going to call array.data_type(). I can remove it if you'd prefer.

@@ -426,10 +438,13 @@ mod tests {

let actual: Vec<Option<Value>> = input
.split(|b| *b == b'\n')
.map(|s| (!s.is_empty()).then(|| serde_json::from_slice(s).unwrap()))
.map(|s| {
println!("{:?}", str::from_utf8(s));
Contributor

?

Contributor Author

upsies!

}
}
let (_, type_ids, _, buffers) = array
.as_any()
Contributor

Can we use AsArray instead of manual downcasts? It is much easier to read.


/// Make an encoder that if returned runs after all of the default encoders.
/// If this method returns None then an error will be returned.
fn make_fallback_encoder<'a>(
Contributor

Why do we need this? AFAICT we could remove this with no real loss of functionality, if anything it would be more resilient as adding a new default encoder wouldn't break existing overrides.

Contributor Author

I'll drop it in favor of a single method, I think it should work for our use case.

}

#[derive(Debug)]
struct UnionEncoder {
Contributor

FWIW adding a native UnionEncoder is likely not overly complicated

Contributor Author

That would be nice 😄

How about I try to write one on our end and then we can upstream it if it works properly?


@adriangb
Contributor Author

Thank you for the review. I've pushed a (broken) version with most comments addressed, aside from the part about ArrayFormatter. To be honest I don't understand what ArrayFormatter does or why it's special... would you mind elaborating a bit on how you propose to handle it?

@tustvold
Contributor

ArrayFormatter is a type from arrow_cast used to format array values to strings. It is used by arrow_json to format some values, however, being a general formatter it does not implement JSON escaping. This isn't a problem for the types arrow_json uses it for, but as we're now making Encoder public we need to ensure people don't think ArrayFormatter can be used as an Encoder in the general case
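The escaping requirement above is the crux: a conforming Encoder must emit valid JSON, which means escaping quotes, backslashes, and control characters inside strings. A minimal, hypothetical escaping routine (not arrow-rs code) shows what a general formatter like ArrayFormatter does not do:

```rust
// Hypothetical sketch of the JSON string escaping an Encoder must perform;
// a formatter that writes values verbatim would produce invalid JSON.
fn escape_json(s: &str) -> String {
    let mut out = String::with_capacity(s.len() + 2);
    out.push('"');
    for c in s.chars() {
        match c {
            '"' => out.push_str("\\\""),
            '\\' => out.push_str("\\\\"),
            '\n' => out.push_str("\\n"),
            '\r' => out.push_str("\\r"),
            '\t' => out.push_str("\\t"),
            // Remaining control characters get \u escapes per RFC 8259
            c if (c as u32) < 0x20 => out.push_str(&format!("\\u{:04x}", c as u32)),
            c => out.push(c),
        }
    }
    out.push('"');
    out
}

fn main() {
    assert_eq!(escape_json("say \"hi\""), "\"say \\\"hi\\\"\"");
}
```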

@adriangb
Contributor Author

Okay I think I understand now. Looking at the implementation it seems to me that it wouldn't be that much extra work to add formatters for Date, Timestamp, Time and Decimal, then we could ditch ArrayFormatter for JSON encoding altogether. I can do that as a prequel PR. Wdyt?

@tustvold
Contributor

A private newtype wrapper would avoid a non-trivial amount of code duplication whilst resolving the issue, and would be my preferred approach

@adriangb
Contributor Author

Makes sense, done!

There is quite a bit of code churn from changing the Encoder trait. Would you like me to break that out into a precursor PR?

/// This can be used to override how e.g. binary data is encoded so that it is an encoded string or an array of integers.
fn make_default_encoder<'a>(
&self,
_array: &'a dyn Array,
Contributor

I'd be interested in using the changes in this PR to write GeoJSON from geoarrow-rs.

However this API would not be sufficient for me because it assumes that the physical Array is enough to know how to encode the data. This is not true for geospatial data (at least for Arrow data according to the GeoArrow specification) because the same physical layout can describe multiple types.

E.g. an array of LineString and an array of MultiPoint would both be stored as an Arrow List[FixedSizeList[2, Float64]], but the extension metadata on the Field would be necessary to know whether to write "type": "MultiPoint" or "type": "LineString" in each JSON object.

Given that the existing json Writer API writes a RecordBatch, it should be possible to access the Field and pass that through here, instead of just using the Array?
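To make the ambiguity concrete, here are two hypothetical GeoJSON objects a writer might need to produce from arrays with the identical physical layout List[FixedSizeList[2, Float64]]; only the Field's extension metadata distinguishes them:

```json
{ "type": "LineString", "coordinates": [[30.0, 10.0], [10.0, 30.0]] }
{ "type": "MultiPoint", "coordinates": [[30.0, 10.0], [10.0, 30.0]] }
```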

Contributor Author

Great point yes I think that should be possible 😄

Contributor Author

@kylebarron will this work for you?

Contributor

I believe so! As long as we can access the field that should be enough!

@adriangb
Contributor Author

Hi @tustvold sorry to ping you. Given the use cases beyond our JSON serialization, would you mind taking another look at this PR when you have some time? Thanks!

@tustvold
Contributor

I'm a bit swamped at the moment, but I'll try to take a look this weekend.

@adriangb
Contributor Author

adriangb commented Feb 13, 2025

I've started trying to replace our vendored version with this + a hook to handle the JSON union and have a couple of notes:

  1. In order to handle unions we need make_encoder to be public: the union will need to delegate to make_encoder for its children.
  2. Handling dictionary types is annoying; I would consider making DictionaryEncoder public. I was able to work around this by handling the values type instead of the dictionary itself, so maybe that's not necessary. What is a bit weird is that in the dictionary case it ends up recursing (see DictionaryEncoder::try_new), but in the second pass the dictionary field gets passed in alongside the values array, so the type of the field parameter and the array parameter do not match.

Comment on lines 73 to 77
/// Check if the value at `idx` is null.
fn is_null(&self, _idx: usize) -> bool;

/// Short circuiting null checks if there are none.
fn has_nulls(&self) -> bool;
Contributor

I suspect this change will have major performance implications, as you're now adding a per-value dyn dispatch in order to check nullability

Contributor Author

Makes sense. I guess it does need to be dynamic, but I take it you'd rather have a method that returns a NullBuffer the length of the array in one go?

Contributor

I guess I'm not seeing why it needs to be changed, this wasn't necessary before...

Contributor Author

Hard to remember now but I think the primary motivation was to encapsulate the (Encoder, NullBuffer) into the Encoder to make the whole API easier to work with.

I've now collapsed these into a single method that returns an Option<NullBuffer> which can be hoisted out of loops. I think that should be equivalent to the old implementation but with a better API for implementers.

Contributor

@tustvold left a comment

Sorry for the delay, I have been (and still am) really swamped. I think this is getting closer to something we can merge, although the changes to null handling concern me, and I'd prefer to revert these if possible.

@adriangb
Contributor Author

adriangb commented Mar 8, 2025

Thanks for reviewing! I'll try to work out how to do the null handling differently.

if drop_nulls && is_null {

// Collect all the field nulls buffers up front to avoid dynamic dispatch in the loop
// This creates a temporary Vec, but avoids repeated virtual calls which should be a net win
Contributor

We're still doing this dynamic dispatch to get null information on every value write... Unless we can show with benchmarks it doesn't regress performance I'm lukewarm on this

Contributor Author

To be clear though, it currently does do dynamic dispatch for every non-null value write, right?

Contributor

Let's just run

cargo bench --bench json_writer

And see what the numbers say?

Contributor Author

I had run it, and there was a measurable (~30%) slowdown. This makes sense: it was doing 2 dynamic dispatch calls for every row instead of just 1.

I've now refactored this to avoid the dynamic dispatch altogether by encapsulating the dynamic encoder + null buffer in a helper struct. So it's basically the same as before except that things get passed around as a struct instead of a tuple of unrelated values. I don't love the name EncoderWithNullBuffer, open to suggestions.

Benchmarks now show no difference vs. main.
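The shape described above can be sketched std-only (all names and types here are illustrative, not the PR's actual code): the boxed encoder and its null buffer are resolved once and travel together, so the per-row null check is a plain lookup rather than a second virtual call.

```rust
// Sketch: pairing the dyn Encoder with its null buffer in a concrete struct
// keeps the hot loop at one virtual call per non-null value.
trait Encoder {
    fn encode(&mut self, idx: usize, out: &mut Vec<u8>);
}

struct EncoderWithNullBuffer<'a> {
    encoder: Box<dyn Encoder + 'a>,
    nulls: Option<Vec<bool>>, // resolved once at construction, true = valid
}

impl EncoderWithNullBuffer<'_> {
    fn encode(&mut self, idx: usize, out: &mut Vec<u8>) {
        match &self.nulls {
            // The null path costs no dyn dispatch at all
            Some(v) if !v[idx] => out.extend_from_slice(b"null"),
            // Exactly one virtual call per non-null value, as before
            _ => self.encoder.encode(idx, out),
        }
    }
}

struct BoolEncoder(Vec<bool>);

impl Encoder for BoolEncoder {
    fn encode(&mut self, idx: usize, out: &mut Vec<u8>) {
        out.extend_from_slice(if self.0[idx] { &b"true"[..] } else { &b"false"[..] });
    }
}

fn main() {
    let mut e = EncoderWithNullBuffer {
        encoder: Box::new(BoolEncoder(vec![true, false, true])),
        nulls: Some(vec![true, true, false]),
    };
    let mut out = Vec::new();
    for idx in 0..3 {
        if idx > 0 {
            out.push(b',');
        }
        e.encode(idx, &mut out);
    }
    assert_eq!(String::from_utf8(out).unwrap(), "true,false,null");
}
```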

Contributor Author

❯ git checkout main && cargo bench --bench json_writer && git checkout arrow-union && cargo bench --bench json_writer
Switched to branch 'main'
Your branch is ahead of 'origin/main' by 6 commits.
  (use "git push" to publish your local commits)
   Compiling arrow-json v54.2.1 (/Users/adriangb/GitHub/arrow-rs/arrow-json)
   Compiling arrow v54.2.1 (/Users/adriangb/GitHub/arrow-rs/arrow)
    Finished `bench` profile [optimized] target(s) in 3.03s
     Running benches/json_writer.rs (target/release/deps/json_writer-8d99cf5d0d08efba)
Gnuplot not found, using plotters backend
bench_integer           time:   [2.9239 ms 2.9488 ms 2.9810 ms]
                        change: [-1.4155% -0.3843% +0.7943%] (p = 0.52 > 0.05)
                        No change in performance detected.
Found 2 outliers among 100 measurements (2.00%)
  1 (1.00%) high mild
  1 (1.00%) high severe

bench_float             time:   [3.4054 ms 3.4216 ms 3.4394 ms]
                        change: [-3.7264% -2.9504% -2.1609%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 15 outliers among 100 measurements (15.00%)
  8 (8.00%) high mild
  7 (7.00%) high severe

bench_string            time:   [8.8852 ms 8.9466 ms 9.0106 ms]
                        change: [-1.3959% -0.5660% +0.3531%] (p = 0.21 > 0.05)
                        No change in performance detected.
Found 2 outliers among 100 measurements (2.00%)
  2 (2.00%) high mild

bench_mixed             time:   [11.975 ms 12.030 ms 12.087 ms]
                        change: [-3.7701% -3.0819% -2.4763%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 5 outliers among 100 measurements (5.00%)
  5 (5.00%) high mild

bench_dict_array        time:   [4.6221 ms 4.6771 ms 4.7345 ms]
                        change: [-2.4806% -0.5873% +1.2096%] (p = 0.53 > 0.05)
                        No change in performance detected.
Found 4 outliers among 100 measurements (4.00%)
  3 (3.00%) high mild
  1 (1.00%) high severe

bench_struct            time:   [19.521 ms 19.623 ms 19.731 ms]
                        change: [-0.9078% +0.1000% +0.9733%] (p = 0.84 > 0.05)
                        No change in performance detected.
Found 2 outliers among 100 measurements (2.00%)
  2 (2.00%) high mild

bench_nullable_struct   time:   [9.4059 ms 9.4565 ms 9.5092 ms]
                        change: [-3.5737% -2.7898% -1.9920%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 2 outliers among 100 measurements (2.00%)
  2 (2.00%) high mild

bench_list              time:   [27.854 ms 27.976 ms 28.118 ms]
                        change: [-0.8182% -0.3309% +0.1897%] (p = 0.22 > 0.05)
                        No change in performance detected.
Found 10 outliers among 100 measurements (10.00%)
  7 (7.00%) high mild
  3 (3.00%) high severe

bench_nullable_list     time:   [5.9283 ms 5.9849 ms 6.0429 ms]
                        change: [-8.7315% -6.8046% -5.0511%] (p = 0.00 < 0.05)
                        Performance has improved.

bench_struct_list       time:   [2.9264 ms 2.9694 ms 3.0130 ms]
                        change: [-5.8590% -3.8633% -1.8735%] (p = 0.00 < 0.05)
                        Performance has improved.

Switched to branch 'arrow-union'
Your branch is up to date with 'origin/arrow-union'.
   Compiling arrow-json v54.2.1 (/Users/adriangb/GitHub/arrow-rs/arrow-json)
   Compiling arrow v54.2.1 (/Users/adriangb/GitHub/arrow-rs/arrow)
    Finished `bench` profile [optimized] target(s) in 3.03s
     Running benches/json_writer.rs (target/release/deps/json_writer-8d99cf5d0d08efba)
Gnuplot not found, using plotters backend
bench_integer           time:   [2.9105 ms 2.9313 ms 2.9542 ms]
                        change: [-1.8789% -0.5924% +0.5939%] (p = 0.36 > 0.05)
                        No change in performance detected.
Found 3 outliers among 100 measurements (3.00%)
  1 (1.00%) high mild
  2 (2.00%) high severe

bench_float             time:   [3.3695 ms 3.3854 ms 3.4041 ms]
                        change: [-1.7458% -1.0557% -0.3581%] (p = 0.00 < 0.05)
                        Change within noise threshold.
Found 6 outliers among 100 measurements (6.00%)
  5 (5.00%) high mild
  1 (1.00%) high severe

bench_string            time:   [8.7954 ms 8.8473 ms 8.9044 ms]
                        change: [-2.0603% -1.1094% -0.2183%] (p = 0.02 < 0.05)
                        Change within noise threshold.
Found 1 outliers among 100 measurements (1.00%)
  1 (1.00%) high severe

bench_mixed             time:   [12.034 ms 12.099 ms 12.170 ms]
                        change: [-0.1335% +0.5726% +1.3337%] (p = 0.13 > 0.05)
                        No change in performance detected.
Found 4 outliers among 100 measurements (4.00%)
  3 (3.00%) high mild
  1 (1.00%) high severe

bench_dict_array        time:   [4.4275 ms 4.4639 ms 4.5020 ms]
                        change: [-5.9236% -4.5580% -3.1699%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 2 outliers among 100 measurements (2.00%)
  2 (2.00%) high mild

bench_struct            time:   [19.450 ms 19.538 ms 19.640 ms]
                        change: [-1.1211% -0.4330% +0.3004%] (p = 0.25 > 0.05)
                        No change in performance detected.
Found 6 outliers among 100 measurements (6.00%)
  3 (3.00%) high mild
  3 (3.00%) high severe

bench_nullable_struct   time:   [9.4349 ms 9.4862 ms 9.5417 ms]
                        change: [-0.4570% +0.3147% +1.0725%] (p = 0.43 > 0.05)
                        No change in performance detected.
Found 2 outliers among 100 measurements (2.00%)
  1 (1.00%) high mild
  1 (1.00%) high severe

bench_list              time:   [27.799 ms 27.901 ms 28.015 ms]
                        change: [-0.8950% -0.2687% +0.2907%] (p = 0.40 > 0.05)
                        No change in performance detected.
Found 4 outliers among 100 measurements (4.00%)
  3 (3.00%) high mild
  1 (1.00%) high severe

bench_nullable_list     time:   [5.9567 ms 6.0222 ms 6.0886 ms]
                        change: [-0.9060% +0.6229% +2.1762%] (p = 0.41 > 0.05)
                        No change in performance detected.

bench_struct_list       time:   [2.8755 ms 2.9589 ms 3.0925 ms]
                        change: [-3.6152% -0.3533% +4.3395%] (p = 0.89 > 0.05)
                        No change in performance detected.
Found 4 outliers among 100 measurements (4.00%)
  3 (3.00%) high mild
  1 (1.00%) high severe

Contributor

@tustvold left a comment

I think the changes make the null handling worse 😅

The API was designed the way it was for a reason, I think if we want to change it we need to justify doing so empirically

@adriangb
Contributor Author

I think the changes make the null handling worse 😅

The API was designed the way it was for a reason, I think if we want to change it we need to justify doing so empirically

To be clear, you're saying that the API make_encoder(...) -> (Box<dyn Encoder>, NullBuffer) is a better API than adding a nulls() method to Encoder to package them up together? Or is it something else about the existing implementation that I'm missing? I can try to revert it if that's it.

@alamb
Contributor

alamb commented Mar 12, 2025

Should we be trying to get this change in for the release in a day or two?

Contributor

@alamb left a comment

Thanks for this @adriangb

Sorry I haven't had a chance to review this more carefully.


@adriangb
Contributor Author

@alamb @tustvold could you please give this another look? I think I addressed the major outstanding issue of performance.

@alamb
Contributor

alamb commented Mar 13, 2025

@alamb @tustvold could you please give this another look? I think I addressed the major outstanding issue of performance.

I will try to do so later today, but I have several other things to do first

Labels
arrow Changes to the arrow crate
4 participants