
8693: Add FilterPredicate::filter_record_batch #7

Open
martin-augment wants to merge 3 commits into main from pr-8693-2025-10-24-12-05-03

Conversation

@martin-augment (Owner) commented Oct 24, 2025

8693: To review by AI

Summary by CodeRabbit

Release Notes

  • New Features

    • Added new public filtering method for batch-level operations that automatically maintains schema and reconstructs results.
  • Improvements

    • Enhanced filtering optimization to activate in additional scenarios involving complex data types, improving filtering performance.
  • Refactor

    • Centralized batch-filtering logic for improved code maintainability.

@coderabbitai bot commented Oct 24, 2025

Walkthrough

The filter functionality in arrow-select is refactored to centralize batch-level filtering logic. A new public method on FilterPredicate applies column filtering and reconstructs RecordBatch objects. The optimizer trigger is broadened to activate when the first column contains multiple internal arrays, not just with multi-column batches.
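As a sketch of the intended call pattern (the FilterBuilder::new/optimize/build chain is the existing arrow-select API; the filter_record_batch signature is assumed from this summary, and the batch itself is illustrative):

use std::sync::Arc;
use arrow_array::{ArrayRef, BooleanArray, Int32Array, RecordBatch};
use arrow_schema::ArrowError;
use arrow_select::filter::FilterBuilder;

fn filter_once() -> Result<(), ArrowError> {
    // An illustrative single-column batch.
    let batch = RecordBatch::try_from_iter([(
        "a",
        Arc::new(Int32Array::from(vec![1, 2, 3])) as ArrayRef,
    )])?;

    // Build (and optionally optimize) the predicate once from a boolean mask.
    let mask = BooleanArray::from(vec![true, false, true]);
    let predicate = FilterBuilder::new(&mask).optimize().build();

    // The new method filters every column and reconstructs the batch
    // under the original schema.
    let filtered = predicate.filter_record_batch(&batch)?;
    assert_eq!(filtered.num_rows(), 2);
    Ok(())
}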

Changes

Cohort / File(s): Filter API Refactoring – arrow-select/src/filter.rs
Summary: Enhanced optimizer trigger logic for RecordBatch filtering to activate on multiple columns or when the first column implies multiple internal arrays; introduced a new public filter_record_batch() method on FilterPredicate; updated the top-level filter_record_batch() to delegate to the FilterPredicate implementation.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 Records flow through filters fine,
Columns dance in optimized line,
Batches batch, arrays align,
Predicates now refine and shine! ✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Description Check – ⚠️ Warning: The pull request description is largely incomplete and does not follow the required template structure. The description "8693: To review by AI" is minimal and missing all critical sections required by the template, including the issue reference in proper format (Closes #NNN), rationale for the change, summary of individual changes, testing information, and user-facing changes documentation. While the title mentions issue 8693, the description itself provides no substantive information about the PR's purpose, scope, or impact.
✅ Passed checks (2 passed)
  • Title Check – ✅ Passed: The pull request title "8693: Add FilterPredicate::filter_record_batch" directly and clearly describes the main change in the changeset. The title specifically references the new public API method being added to FilterPredicate, which aligns with the core modification shown in the summary. The title is concise, specific, and provides clear information about what was implemented without being vague or overly generic.
  • Docstring Coverage – ✅ Passed: Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.
✨ Finishing touches
  • 📝 Generate docstrings
  • 🧪 Generate unit tests (beta)
    • Create PR with unit tests
    • Post copyable unit tests in a comment
    • Commit unit tests in branch pr-8693-2025-10-24-12-05-03

Comment @coderabbitai help to get the list of available commands and usage tips.

@github-actions bot added the arrow label Oct 24, 2025
@martin-augment (Owner, Author) commented

@coderabbitai help

@coderabbitai bot commented Oct 24, 2025

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

CodeRabbit commands

These commands are invoked using PR/Issue comments.

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai evaluate custom pre-merge check --instructions <custom-checks-instructions> --name <custom-checks-title> [--mode <error|warning>] to validate the custom pre-merge checks instructions. Defaults to error when --mode is omitted.
  • @coderabbitai ignore pre-merge checks to override pre-merge checks and get an approval on the PR.
  • @coderabbitai run pre-merge checks to run pre-merge checks on the pull request.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai generate unit tests to generate unit tests for this PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore or @coderabbit ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit configuration file (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Status, support, documentation and community

  • Visit our status page to check the current availability of CodeRabbit.
  • Create a ticket on our support page for assistance with any issues or questions.
  • Visit our documentation site for detailed information on how to use CodeRabbit.
  • Join our Discord community to connect with other users and get help from the community.
  • Follow us on X/Twitter for updates and announcements.

@claude bot commented Oct 24, 2025

Code Review for PR #7: Add FilterPredicate::filter_record_batch

Overall Assessment

This is a well-structured PR that adds a useful API improvement by allowing FilterPredicate instances to be reused for filtering RecordBatches. The changes are clean, follow existing patterns in the codebase, and provide a performance optimization by avoiding redundant validation.

Code Quality ✅

Strengths:

  • The new method follows consistent naming conventions and API patterns already established in the codebase
  • Code is well-documented with clear doc comments
  • The refactoring of the existing filter_record_batch function to use the new method is clean and maintains backward compatibility
  • The implementation properly reuses the filtered array collection pattern

Minor Observations:

  • The code properly handles the schema and row count, which are critical for correctness

Performance Considerations ✅

Positive:

  • Using RecordBatch::new_unchecked instead of try_new_with_options is appropriate here and follows the precedent set in PR apache/arrow-rs#8583 ("Skip redundant validation checks in RecordBatch::project") (arrow-select/src/filter.rs:317)
  • This avoids redundant validation since we know the filtered arrays will match the schema
  • The optimization logic (lines 177-183) correctly determines when to call .optimize() on the filter builder
  • This change enables the reuse of compiled/optimized FilterPredicate instances across multiple RecordBatches, which is the main performance benefit

Safety Analysis ⚠️

unsafe usage at line 317:
The use of RecordBatch::new_unchecked is justified and correct because:

  1. Schema match: The filtered arrays are produced by filter_array(a, self) which preserves data types
  2. Row count consistency: All filtered arrays will have the same length (self.count) since they're filtered with the same predicate
  3. Column count match: We iterate over record_batch.columns() so the number of filtered arrays matches the schema

The SAFETY comment is accurate and sufficient. However, it could be slightly more detailed:

Suggestion:

// SAFETY: The filtered arrays are guaranteed to match the schema because:
// 1. filter_array preserves the data type of each column
// 2. All arrays are filtered with the same predicate, so they all have length self.count
// 3. The number of columns matches since we iterate over record_batch.columns()

Potential Bugs 🔍

No issues found. The implementation correctly:

  • Handles empty record batches (existing test at line 1444)
  • Maintains proper error propagation with the ? operator
  • Uses the same filtering logic as the existing free function

Test Coverage ✅

The PR description states "Covered by existing tests for filter_record_batch", which is accurate since:

  • The refactored filter_record_batch function now calls the new method
  • All existing tests for filter_record_batch will exercise the new code path
  • The test at line 1444 covers the edge case of empty record batches

Recommendation: While existing tests provide coverage, consider adding a dedicated test that:

  1. Creates a FilterPredicate once
  2. Reuses it to filter multiple RecordBatches
  3. Verifies the results are correct

This would demonstrate the intended use case and serve as documentation for users.
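A minimal version of that test might look like the following (hedged sketch: the filter_record_batch signature is taken from this PR's summary; everything else is the existing arrow-rs API):

#[test]
fn filter_predicate_reused_across_batches() {
    use std::sync::Arc;
    use arrow_array::{ArrayRef, BooleanArray, Int32Array, RecordBatch};
    use arrow_select::filter::FilterBuilder;

    // Build the predicate once from a boolean mask.
    let mask = BooleanArray::from(vec![true, false, true]);
    let predicate = FilterBuilder::new(&mask).optimize().build();

    // Reuse the same compiled predicate on several batches.
    for values in [vec![1, 2, 3], vec![4, 5, 6]] {
        let batch = RecordBatch::try_from_iter([(
            "a",
            Arc::new(Int32Array::from(values.clone())) as ArrayRef,
        )])
        .unwrap();
        let filtered = predicate.filter_record_batch(&batch).unwrap();
        assert_eq!(filtered.num_rows(), 2);
        let col = filtered
            .column(0)
            .as_any()
            .downcast_ref::<Int32Array>()
            .unwrap();
        // Rows 0 and 2 survive the mask.
        assert_eq!(col.value(0), values[0]);
        assert_eq!(col.value(1), values[2]);
    }
}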

Security Concerns ✅

No security issues identified. The unsafe block is properly justified and the invariants are maintained.

Additional Observations

  1. Consistency: The change at lines 176-183 improves the optimization heuristic by also checking if a single-column record batch contains a struct/union with multiple arrays. This is a good enhancement.

  2. API Design: The new method signature is consistent with the existing FilterPredicate::filter method, making the API intuitive.

Recommendations

  1. Code is ready to merge - The implementation is correct and well-tested
  2. 📝 Optional: Expand the SAFETY comment slightly for better documentation
  3. 🧪 Optional: Add a dedicated integration test demonstrating the reuse pattern

Summary

This is a high-quality PR that adds valuable functionality while maintaining code quality and performance. The use of unsafe is appropriate and well-justified. The changes follow existing patterns and conventions in the codebase.

Verdict: LGTM (Looks Good To Me)


Review generated by Claude Code

@coderabbitai bot left a comment
Actionable comments posted: 0

🧹 Nitpick comments (2)
arrow-select/src/filter.rs (2)

176-183: Update the comment to reflect the broadened optimization condition.

The optimization condition now triggers not only when there are multiple columns, but also when a single column contains multiple internal arrays (e.g., a Struct with multiple fields). The comment on lines 180-181 is outdated and should be revised to accurately describe the new logic.

Apply this diff to update the comment:

     let mut filter_builder = FilterBuilder::new(predicate);
     let num_cols = record_batch.num_columns();
     if num_cols > 1
         || (num_cols > 0 && multiple_arrays(record_batch.schema_ref().field(0).data_type()))
     {
-        // Only optimize if filtering more than one column
+        // Only optimize if filtering more than one column or if the column contains multiple internal arrays
         // Otherwise, the overhead of optimization can be more than the benefit
         filter_builder = filter_builder.optimize();
     }

300-323: LGTM! Consider enhancing the safety comment for clarity.

The new public API provides a convenient way to filter entire RecordBatches. The use of unsafe with RecordBatch::new_unchecked is justified since:

  1. The filtered arrays maintain the same types as the original schema
  2. All filtered arrays have length self.count (produced by filter_array)
  3. The number of columns matches the schema

Optionally, you could enhance the safety comment to be more explicit about all invariants:

-        // SAFETY: we know that the set of filtered arrays will match the schema of the original
-        // record batch
+        // SAFETY: The filtered arrays satisfy all RecordBatch invariants:
+        // - Column count matches schema fields
+        // - Each array's data type matches the corresponding schema field
+        // - All arrays have length equal to self.count
         unsafe {
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7b3b262 and c28e65e.

📒 Files selected for processing (1)
  • arrow-select/src/filter.rs (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
arrow-select/src/filter.rs (3)
arrow-flight/src/sql/metadata/sql_info.rs (1)
  • record_batch (412-429)
arrow-flight/src/sql/metadata/xdbc_info.rs (1)
  • record_batch (98-106)
arrow-array/src/record_batch.rs (1)
  • new_unchecked (261-271)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (17)
  • GitHub Check: Rustdocs are clean
  • GitHub Check: Clippy
  • GitHub Check: Build wasm32
  • GitHub Check: Clippy
  • GitHub Check: Clippy
  • GitHub Check: Test on Windows
  • GitHub Check: Lint (cargo fmt)
  • GitHub Check: MIRI
  • GitHub Check: Test
  • GitHub Check: Test on Mac
  • GitHub Check: Verify MSRV (Minimum Supported Rust Version)
  • GitHub Check: Test
  • GitHub Check: Build wasm32
  • GitHub Check: Test
  • GitHub Check: Check Compilation
  • GitHub Check: PySpark Integration Test (stable)
  • GitHub Check: claude-review
🔇 Additional comments (1)
arrow-select/src/filter.rs (1)

186-186: LGTM! Good refactor to centralize filtering logic.

Delegating to the new FilterPredicate::filter_record_batch method reduces code duplication and provides a cleaner API.

@github-actions bot commented

Findings

  • arrow-json/src/writer/encoder.rs:301 – the make_encoder match no longer has a DataType::BinaryView arm. With this change a BinaryViewArray will fall through to the default branch and we now return ArrowError::JsonError("Unsupported data type…BinaryView"). That used to work (see the removed BinaryViewEncoder and the old build_array_binary_view test), so this is a functional regression for JSON writing of BinaryView.
  • arrow-json/src/writer/encoder.rs:698 – MapEncoder::try_new now rejects DataType::Utf8View. Map arrays built with StringViewArray keys (previously covered by the dropped run_json_writer_map_with_keys helper) will start erroring even though they worked before. That is another behavioural regression.

Follow-ups

  • Consider restoring a unit test for BinaryView JSON encoding; the refactor dropped the build_array_binary_view coverage.
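A sketch of what restored coverage could look like, assuming the pre-refactor behavior the finding describes (that writing a BinaryViewArray succeeds); the writer calls are the existing arrow-json API, and the test name and data are illustrative:

#[test]
fn write_binary_view_json() {
    use std::sync::Arc;
    use arrow_array::{ArrayRef, BinaryViewArray, RecordBatch};

    let array = BinaryViewArray::from_iter_values([b"a".as_ref(), b"bc".as_ref()]);
    let batch = RecordBatch::try_from_iter([("c", Arc::new(array) as ArrayRef)]).unwrap();

    // Writing should succeed rather than hit the "Unsupported data type" path.
    let mut writer = arrow_json::ArrayWriter::new(Vec::new());
    writer.write(&batch).unwrap();
    writer.finish().unwrap();
    assert!(!writer.into_inner().is_empty());
}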

@martin-augment (Owner, Author) commented

augment review

@augmentcode bot left a comment

Review completed. 1 suggestion posted.

    if num_cols > 1
        || (num_cols > 0 && multiple_arrays(record_batch.schema_ref().field(0).data_type()))
    {
        // Only optimize if filtering more than one column

The comment says to optimize only when filtering >1 column, but the condition also triggers for a single column when the first column’s type spans multiple arrays; consider updating the comment to reflect this (also applies to the following line).

🤖 React with 👍 or 👎 to let us know if the comment was useful.

@martin-augment (Owner, Author) replied
value: good-to-have; category: documentation; feedback: The AI agent is correct that the old comment became incomplete with the new changes in this PR, and it would be good to update it.
