[SPARK-52238][PYTHON] Python client for Declarative Pipelines #50963
base: master
Conversation
Is Declarative Pipelines supposed to be supported only in Connect mode?
from pyspark.sql.pipelines.block_connect_access import block_spark_connect_execution_and_analysis

class BlockSparkConnectAccessTests(ReusedConnectTestCase):
New tests should be registered in dev/sparktestsupport/modules.py, otherwise they are skipped.
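For context, registration in dev/sparktestsupport/modules.py adds the test files to a module's python_test_goals. A minimal sketch; the module name, dependencies, and test goal paths below are hypothetical, not the PR's actual entries:

```python
# Hypothetical entry in dev/sparktestsupport/modules.py; the name,
# dependencies, and test goal paths are illustrative only.
pyspark_pipelines = Module(
    name="pyspark-pipelines",
    dependencies=[pyspark_connect],
    source_file_regexes=["python/pyspark/sql/pipelines"],
    python_test_goals=[
        "pyspark.sql.tests.pipelines.test_block_connect_access",
        "pyspark.sql.tests.pipelines.test_graph_element_registry",
    ],
)
```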
Do we plan to also test this file in classic mode?
This should only be tested in Connect mode – do I need to add something to the file to set that up?
class GraphElementRegistryTest(unittest.TestCase):
    def test_graph_element_registry(self):
        spark = SparkSession.builder.getOrCreate()
Why not reuse ReusedSQLTestCase (for classic) and ReusedConnectTestCase (for Connect)?
@zhengruifeng this initial implementation is just for Connect. Connect is more straightforward to support, because Connect DataFrames are lazier than classic DataFrames. This means we can evaluate the user's decorated query function immediately rather than call back after all upstream datasets have been resolved. However, it's designed in a way that can support classic in the future – by implementing a …
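For intuition, a minimal sketch (not the PR's actual code; `registry` and `register_table` are hypothetical names) of why Connect's laziness makes eager evaluation safe:

```python
# Under Spark Connect, calling the user's function only builds an unresolved
# logical plan on the client; nothing is analyzed or executed on the server.
# So the decorator can invoke the function at definition time, even if the
# upstream datasets it reads from don't exist yet.
def table(func):
    df = func()  # lazy under Connect: merely constructs a plan
    registry.register_table(name=func.__name__, plan=df)  # hypothetical API
    return func
```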
What changes were proposed in this pull request?
Adds the Python client for Declarative Pipelines. This implements the command line interface and Python APIs described in the Declarative Pipelines SPIP.
Python API for defining pipeline graph elements
The Python API consists of these APIs for defining flows and datasets in a pipeline dataflow graph (see their docstring for more details):
create_streaming_table
@append_flow
@materialized_view
@table
@temporary_view
Example file of definitions:
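The example file from the original description isn't reproduced here. As a stand-in, a hedged sketch of what a definitions file using these APIs might look like; the import path mirrors the test imports above, and the decorator signatures (e.g. `append_flow(target=...)`) are assumptions rather than confirmed API:

```python
# A hypothetical pipeline definitions file.
from pyspark.sql import SparkSession
from pyspark.sql.pipelines import (
    append_flow,
    create_streaming_table,
    materialized_view,
    table,
    temporary_view,
)

spark = SparkSession.builder.getOrCreate()


@materialized_view
def raw_orders():
    # A batch dataset, fully recomputed on each pipeline update.
    return spark.read.format("json").load("/data/orders")


@temporary_view
def cleaned_orders():
    # An intermediate view that downstream datasets can read,
    # but that is not persisted to the catalog.
    return spark.read.table("raw_orders").filter("amount > 0")


@table
def orders_by_day():
    return spark.read.table("cleaned_orders").groupBy("order_date").count()


# A streaming table whose contents are populated by the append flow below.
create_streaming_table("events")


@append_flow(target="events")
def ingest_events():
    return spark.readStream.format("rate").load()
```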
Command line interface
The CLI is implemented as a Spark Connect client. It enables launching runs of declarative pipelines. It accepts a YAML spec, which specifies where on the local filesystem to look for the Python and SQL files that contain the definitions of the flows and datasets that make up the pipeline dataflow graph.
Example usage:
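The original usage example isn't reproduced here. A hypothetical spec and invocation, sketched from the description above; the `spark-pipelines` entry point, the `run` subcommand, the `--spec` flag, and all YAML field names are assumptions rather than confirmed API:

```yaml
# pipeline.yml (hypothetical spec; field names are illustrative)
name: hello_pipeline
definitions:
  - glob:
      include: transformations/**/*.py
```

```bash
# Launch a run of the pipeline described by the spec above (assumed CLI shape).
spark-pipelines run --spec pipeline.yml
```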
Example output:
Architecture diagram
Why are the changes needed?
In order to implement Declarative Pipelines, as described in the SPIP.
Does this PR introduce any user-facing change?
No previous behavior is changed, but new behavior is introduced.
How was this patch tested?
Unit testing
Includes unit tests for:
Note that, once the backend is wired up, we will submit additional unit tests that cover end-to-end pipeline execution with Python.
CLI testing
With the Declarative Pipelines Spark Connect backend (coming in a future PR), I ran the CLI and confirmed that it executed a pipeline as expected.
Was this patch authored or co-authored using generative AI tooling?