
Conversation

khakhlyuk (Contributor)

What changes were proposed in this pull request?

The Spark Connect Python client does not throw a proper error when creating a dataframe from a pandas dataframe that has an index but no data.
Generally, the Spark Connect client throws the client-side error [CANNOT_INFER_EMPTY_SCHEMA] Can not infer schema from an empty dataset. when creating a dataframe without data, for example via

spark.createDataFrame([]).show()

or

df = pd.DataFrame()
spark.createDataFrame(df).show()

or

df = pd.DataFrame({"a": []})
spark.createDataFrame(df).show()

This does not happen when the pandas dataframe has an index but no data, e.g.

df = pd.DataFrame(index=range(5))
spark.createDataFrame(df).show()

What happens instead is that the dataframe is successfully converted to a LocalRelation on the client and sent to the server, but the server then throws the following exception: INTERNAL_ERROR: Input data for LocalRelation does not produce a schema. SQLSTATE: XX000. XX000 is the SQLSTATE for internal errors, and the error is not actionable enough for the user.

This PR fixes the problem by throwing CANNOT_INFER_EMPTY_SCHEMA when the dataframe has rows (because of the index) but no columns, as sketched below.
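
A minimal sketch of that guard, using a hypothetical helper name; the actual change lives in the Spark Connect conversion path and uses PySpark's error framework rather than a bare ValueError:

import pandas as pd

def _require_inferable_schema(pdf: pd.DataFrame) -> None:
    # pd.DataFrame(index=range(5)) has 5 rows but 0 columns,
    # so there is nothing to infer a schema from.
    if len(pdf.columns) == 0:
        raise ValueError(
            "[CANNOT_INFER_EMPTY_SCHEMA] Can not infer schema from an empty dataset."
        )

_require_inferable_schema(pd.DataFrame(index=range(5)))  # raises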

Why are the changes needed?

Currently the error is thrown as a server-side internal error and the error message is not actionable enough for the user.

Does this PR introduce any user-facing change?

Creating a Spark Connect dataframe from a pandas dataframe with an index but no data will now throw the client-side error [CANNOT_INFER_EMPTY_SCHEMA] Can not infer schema from an empty dataset instead of the server-side INTERNAL_ERROR: Input data for LocalRelation does not produce a schema. SQLSTATE: XX000.

How was this patch tested?

New unit test.
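
For illustration, a sketch of what such a test could look like (test and class names are assumptions; in PySpark's own suites the session comes from a shared base-class fixture rather than being built inline):

import unittest
import pandas as pd

class EmptySchemaTest(unittest.TestCase):
    # Assumption: self.spark is a Spark Connect session provided by a
    # suite-level fixture.
    def test_index_only_pandas_dataframe_raises(self):
        pdf = pd.DataFrame(index=range(5))  # rows from the index, no columns
        with self.assertRaisesRegex(Exception, "CANNOT_INFER_EMPTY_SCHEMA"):
            self.spark.createDataFrame(pdf)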

Was this patch authored or co-authored using generative AI tooling?

No
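
Review thread on the following diff context from the client-side schema conversion code (the if check is the line this PR adds):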

spark_type = from_arrow_type(field_type)
struct.add(field.name, spark_type, nullable=field.nullable)
schema = struct
if len(schema) == 0:
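    # Sketched body per the PR description (the exact error-raising call
    # in the actual patch is not shown in this excerpt):
    raise ValueError("[CANNOT_INFER_EMPTY_SCHEMA] Can not infer schema from an empty dataset.")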

HyukjinKwon (Member)

qq, how does the Spark Classic code path behave here? I think we should keep them matched

khakhlyuk (Contributor, Author)

hey @HyukjinKwon!
In Spark Classic the code below does not throw an error:

df = pd.DataFrame(index=range(5))
spark.createDataFrame(df).collect()

it simply returns an empty list []. We could add the same error to Spark Classic, but technically that would be a breaking change. Do you think it makes sense to do that?
