Parallelize the filling of the std::vector in DMatrix #41
base: main
Conversation
-  data_ = std::vector<ElementType>{data_ptr, data_ptr + num_elem};
+  data_.resize(num_elem);  // resize, not reserve: elements must exist before indexed writes
+  #pragma omp parallel for
+  for (std::uint64_t i = 0; i < num_elem; ++i) {
+    data_[i] = data_ptr[i];
+  }
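For reference, here is a self-contained sketch of the pattern this diff introduces. The template wrapper and the function name `ParallelFill` are illustrative, not part of the PR; `ElementType`, `data_ptr`, and `num_elem` follow the snippet above.

```cpp
#include <cstdint>
#include <vector>

// Sketch of the parallel-fill pattern (compile with -fopenmp).
// OpenMP >= 3.0 is required for an unsigned loop index.
template <typename ElementType>
std::vector<ElementType> ParallelFill(ElementType const* data_ptr,
                                      std::uint64_t num_elem) {
  // Sized construction (like resize(), unlike reserve()) is required:
  // every element must be constructed before threads write to it by index.
  std::vector<ElementType> data_(num_elem);
  #pragma omp parallel for
  for (std::uint64_t i = 0; i < num_elem; ++i) {
    data_[i] = data_ptr[i];  // each iteration writes a distinct index: no data race
  }
  return data_;
}
```

The trade-off versus the one-line range constructor is an extra value-initialization pass over the buffer, which the multithreaded copy then has to win back at scale.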
Do we have benchmarks showing the impact of this change at various scales? How often is this code likely to get called inside a block that is already parallelized? You mentioned this was a bottleneck; can you give an example of a workload where it is a bottleneck?
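One common OpenMP idiom for the "caller is already parallelized" concern is to guard the pragma with an `if` clause. This is a sketch of that idiom, not something the PR does; the size threshold is an arbitrary placeholder.

```cpp
#include <cstdint>
#include <vector>
#include <omp.h>

// Sketch: skip spawning a thread team when the caller is already inside
// an active parallel region, or when the copy is too small to amortize
// the fork/join overhead (the 100000 threshold is arbitrary).
template <typename T>
void FillGuarded(std::vector<T>& dst, T const* src, std::uint64_t n) {
  dst.resize(n);
  const bool use_threads = !omp_in_parallel() && n > 100000;
  #pragma omp parallel for if (use_threads)
  for (std::uint64_t i = 0; i < n; ++i) {
    dst[i] = src[i];
  }
}
```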
Sorry for the late reply. In our application we run predictions at large scale: we call tl2cgen.Predictor.predict about 17,000 times on NumPy arrays of 100,000,000 sample points by 61 features. For this we use tl2cgen to compile the model from scikit-learn, convert the array to a DMatrix, and then predict. However, we noticed that the DMatrix conversion accounted for about 60% of the computing time and was not executed in parallel. Our machines have 96 CPUs, so this has a big impact. After parallelizing the filling of the std::vector for the DMatrix, the time dropped drastically. For comparison you can see the following screenshot (100,000,000 sample points by 61 features as input). An old version of Treelite was used for the comparison (since it has similar performance and had the same bottleneck), while tl2cgen is shown with the parallel DMatrix.
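For anyone who wants to reproduce the comparison locally, a minimal standalone benchmark sketch follows. The element count is scaled down from the full 100,000,000 × 61 workload (which would need roughly 24 GB per float32 array); the timings are whatever your machine produces, not numbers from this PR.

```cpp
#include <chrono>
#include <cstdint>
#include <iostream>
#include <vector>

// Compile with: g++ -O2 -fopenmp bench.cpp
int main() {
  const std::uint64_t num_elem = 100'000'000ULL;  // ~400 MB of float32
  std::vector<float> src(num_elem, 1.0f);

  auto t0 = std::chrono::steady_clock::now();
  // Serial fill: the single-threaded range construction the PR replaces.
  std::vector<float> serial(src.data(), src.data() + num_elem);
  auto t1 = std::chrono::steady_clock::now();

  // Parallel fill: allocate first, then copy with an OpenMP team.
  std::vector<float> parallel(num_elem);
  #pragma omp parallel for
  for (std::uint64_t i = 0; i < num_elem; ++i) {
    parallel[i] = src[i];
  }
  auto t2 = std::chrono::steady_clock::now();

  std::chrono::duration<double> serial_s = t1 - t0;
  std::chrono::duration<double> parallel_s = t2 - t1;
  std::cout << "serial:   " << serial_s.count() << " s\n"
            << "parallel: " << parallel_s.count() << " s\n";
  // Touch both results so the copies are not optimized away.
  std::cout << "check: " << serial[num_elem - 1] + parallel[num_elem - 1] << "\n";
  return 0;
}
```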
@@ -17,6 +17,8 @@ def py_str(string):
     """Convert C string back to Python string"""
     return string.decode("utf-8")
 
+def check_if_fast():
+    return True
Not sure if there's missing code here or if this was supposed to be removed before the PR was made.
You are right, sorry. This commit was only included to check internally which version of tl2cgen we were running; it got pushed to the PR because it was on the same branch, but it should not be included.
Currently, DMatrix creation is a bottleneck in the library because it is not performed in parallel. This small modification solves the issue, and I am sure it could also be done in a cleaner and more performant way (e.g. by playing with the OpenMP chunking; see the sketch below).
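As an illustration of the chunking idea mentioned above: the standard OpenMP `schedule` clause controls how loop iterations are distributed across threads. This is a sketch only; the chunk size 65536 is an arbitrary placeholder to tune, not a value from this PR.

```cpp
#include <cstdint>
#include <vector>

// Sketch of explicit OpenMP chunking. For a uniform memory-bound copy,
// plain schedule(static) is usually adequate; an explicit chunk size
// mainly matters when tuning for cache and NUMA behavior.
template <typename T>
void FillChunked(std::vector<T>& dst, T const* src, std::uint64_t n) {
  dst.resize(n);
  #pragma omp parallel for schedule(static, 65536)
  for (std::uint64_t i = 0; i < n; ++i) {
    dst[i] = src[i];
  }
}
```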