[SPARK-54194][CONNECT][FOLLOWUP] Spark Connect Proto Plan Compression - Scala Client #53003
Conversation
sql/connect/client/jvm/pom.xml (comment on an outdated diff)
```xml
<include>io.perfmark:*</include>
<include>org.apache.arrow:*</include>
<include>org.codehaus.mojo:*</include>
<include>com.github.luben:zstd-jni</include>
```
For 3rd-party jars, if we choose to shade them, it's advisable to also perform relocation on them.
Thanks for the comment. This zstd-jni is tricky to shade because it ships native libraries. If we shade and relocate it, it fails with `java.lang.UnsatisfiedLinkError: 'int org.sparkproject.connect.com.github.luben.zstd.Zstd.defaultCompressionLevel()'`, because the native library was compiled against the original package name. I'm keeping it unshaded for now.
Update: I've removed this line so we won't bundle a copy of it into the client jar.
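For reference only: relocating zstd-jni with the maven-shade-plugin would look roughly like the sketch below, where the `org.sparkproject.connect` prefix is taken from the error message above and the rest of the configuration is illustrative rather than the project's actual pom. Relocation rewrites the Java package names in the bundled classes, but the JNI symbols compiled into zstd-jni's native libraries still reference `com.github.luben.zstd`, which is why the relocated class can no longer link against them.

```xml
<!-- Illustrative maven-shade-plugin relocation (not the actual pom change).
     This works for pure-Java dependencies, but breaks zstd-jni: its native
     libraries were compiled against com.github.luben.zstd, so the relocated
     classes fail with UnsatisfiedLinkError at runtime. -->
<relocations>
  <relocation>
    <pattern>com.github.luben.zstd</pattern>
    <shadedPattern>org.sparkproject.connect.com.github.luben.zstd</shadedPattern>
  </relocation>
</relocations>
```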
hvanhovell left a comment:
LGTM
Merging to master/4.1. Thanks!

[SPARK-54194][CONNECT][FOLLOWUP] Spark Connect Proto Plan Compression - Scala Client

Closes #53003 from xi-db/plan-compression-scala-client.

Authored-by: Xi Lyu <[email protected]>
Signed-off-by: Herman van Hovell <[email protected]>
(cherry picked from commit 6cb88c1)
Signed-off-by: Herman van Hovell <[email protected]>
### What changes were proposed in this pull request?
In the previous PR #52894 for Spark Connect Proto Plan Compression, both the server-side and PySpark client changes were implemented.
In this PR, the corresponding Scala client changes are implemented, so plan compression is now supported by the Scala client as well.
To reproduce the existing issue we are solving here, run this code on the Spark Connect Scala client:
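```scala
import scala.util.Random
import org.apache.spark.sql.DataFrame
import spark.implicits._

def randomLetters(n: Int): String = {
  Iterator.continually(Random.nextPrintableChar())
    .filter(_.isLetter)
    .take(n)
    .mkString
}

val numUniqueSmallRelations = 5
val sizePerSmallRelation = 512 * 1024

// Build a handful of ~512 KB single-row DataFrames.
val smallDfs: Seq[DataFrame] = (0 until numUniqueSmallRelations).map { _ =>
  Seq(randomLetters(sizePerSmallRelation)).toDF("value")
}

// Union them 500 times so the plan embeds many large local relations.
var resultDf = smallDfs.head
for (_ <- 0 until 500) {
  val idx = Random.nextInt(smallDfs.length)
  resultDf = resultDf.unionByName(smallDfs(idx))
}

resultDf.collect()
```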
It fails with a RESOURCE_EXHAUSTED error with the message `gRPC message exceeds maximum size 134217728: 269207219`, because the server is trying to send an ExecutePlanResponse of ~260 MB to the client. With the improvement introduced by this PR, the code above runs successfully and prints the expected result.
### Why are the changes needed?
It improves Spark Connect stability when handling large plans.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
New tests.
### Was this patch authored or co-authored using generative AI tooling?
No.