[jvm-packages] Support spark connect #11381
base: master
Conversation
I think the Python code should be part of the Python package; we can reuse a lot of code there, and it also helps unify the interfaces. In addition, Python packaging is not trivial: @hcho3 has been working on related projects and I can feel the difficulty there. Let's not do it for another package.
"Environment :: GPU :: NVIDIA CUDA :: 11", | ||
"Environment :: GPU :: NVIDIA CUDA :: 11.4", | ||
"Environment :: GPU :: NVIDIA CUDA :: 11.5", | ||
"Environment :: GPU :: NVIDIA CUDA :: 11.6", | ||
"Environment :: GPU :: NVIDIA CUDA :: 11.7", | ||
"Environment :: GPU :: NVIDIA CUDA :: 11.8", | ||
"Environment :: GPU :: NVIDIA CUDA :: 12", | ||
"Environment :: GPU :: NVIDIA CUDA :: 12 :: 12.0", | ||
"Environment :: GPU :: NVIDIA CUDA :: 12 :: 12.1", | ||
"Environment :: GPU :: NVIDIA CUDA :: 12 :: 12.2", | ||
"Environment :: GPU :: NVIDIA CUDA :: 12 :: 12.3", | ||
"Environment :: GPU :: NVIDIA CUDA :: 12 :: 12.4", | ||
"Environment :: GPU :: NVIDIA CUDA :: 12 :: 12.5", | ||
"Environment :: GPU :: NVIDIA CUDA :: 12 :: 12.6", |
These can be removed; we dropped support for all previous CUDA versions in 3.0.
from .params import XGBoostParams

class XGBoostClassifier(_JavaProbabilisticClassifier["XGBoostClassificationModel"], XGBoostParams):
Will this be a new class that needs to be used for both normal and Spark Connect invocation? Can we not modify the _fit method in SparkXGBClassifier to use the try_remote_fit decorator, or would that be a big change?
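To make the suggestion above concrete, the decorator pattern being referenced dispatches fit() to a Spark Connect backend when the active session is remote, and otherwise falls back to the classic JVM path. The sketch below is hypothetical: the `is_remote()` stub stands in for PySpark's real session check, and `_fit_remote` is an assumed connect-aware helper, neither of which is defined in this PR.

```python
from functools import wraps


def is_remote():
    """Stand-in for PySpark's remote-session check (hypothetical stub).

    A real implementation would inspect the active SparkSession to see
    whether it is a Spark Connect (remote) session.
    """
    return False


def try_remote_fit(fit_fn):
    """Sketch of a try_remote_* style decorator for an estimator's _fit."""
    @wraps(fit_fn)
    def wrapper(self, dataset):
        if is_remote():
            # Connect path: delegate to an assumed connect-aware helper.
            return self._fit_remote(dataset)
        # Classic path: run the original JVM-backed fit.
        return fit_fn(self, dataset)
    return wrapper


class SparkXGBClassifierSketch:
    """Illustrative stand-in for SparkXGBClassifier (not the real class)."""

    @try_remote_fit
    def _fit(self, dataset):
        return f"classic-fit:{dataset}"


print(SparkXGBClassifierSketch()._fit("df"))  # → classic-fit:df
```

With this shape, a single estimator class serves both invocation modes, which is the alternative the reviewer is weighing against introducing a separate connect-only class.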
Thanks @jjayadeep06 for your reply. When Spark Connect is involved, things become complicated. We could make the existing xgboost-pyspark package support Spark Connect by changing the RDD operations to DataFrame operations, without using any try_remote_xxxx decorators; yes, we have a plan to do that.
This PR, however, makes the xgboost JVM package support Spark Connect by introducing a lightweight Python wrapper. If we add the Python wrapper over the xgboost JVM package to the existing xgboost-pyspark package, it raises the question of which backend (the xgboost JVM package or the Python package) will be chosen when running xgboost over Connect.
Got it.
Hi @jjayadeep06, @WeichenXu123, @trivialfis, I propose integrating the Python wrapper over the xgboost JVM package into the xgboost Python package as an optional module, placed in a subdirectory such as xgboost/jvm/ or xgboost/4j, where the estimators would reside. This would allow users to install the JVM wrapper independently without requiring the full xgboost package.
I don't think it's easy to merge the package into the existing one but split it up during deployment. If it's merged, it's first class; you can put an experimental flag on it if you want. I just want to say that the design of Connect ML sets a very high bar for anyone who wants to dig in: one needs to know four different programming languages and some familiarity with Spark Connect ML just to poke around, and debugging native code is another level. I'm curious what led to this design. Lastly, we need CI.
Is the thought here that when someone wants to use Spark Connect, they can install only the Python JVM wrapper instead of the entire xgboost package? If that's the case, we can use the existing Python folder structure with optional dependencies, wherein if we do
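The optional-dependencies idea floated above could look roughly like the fragment below in the wheel's pyproject.toml. This is only a sketch: the extra name spark-connect and the dependency pin are illustrative, not taken from this PR.

```toml
[project.optional-dependencies]
# Hypothetical extra; users would opt in with: pip install "xgboost[spark-connect]"
spark-connect = [
    "pyspark[connect]>=4.0",
]
```

With this layout, a plain `pip install xgboost` stays unchanged, and only users who request the extra pull in the connect-related dependencies.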
The XGBoost JVM package still needs to live on the server side with a matching version.
Based on my understanding of Python wheel packaging,
In Spark Connect the server is always a JVM server, so I don't think we need the Python JVM package on the server side. Also, this PR uses the standard Scala/Java APIs that other Spark ML models use, so no additional deployment is needed. @wbo4958 - please correct me.
You are correct, it will install the additional dependencies.
This PR makes the xgboost JVM package run on Spark Connect. Currently I put the xgboost4j Python wrapper in jvm-packages/xgboost4j-spark/python, but I can move it elsewhere. @WeichenXu123 @trivialfis let's discuss it on this thread.
To Do: