gpu offload host code generation #142097
base: master
Conversation
@oli-obk Featurewise, I am almost done. I'll add a few more lines to describe the layout of Rust types to the offload library, but in this PR I only intend to support one or two types (maybe arrays, raw pointers, or slices). I might even hardcode the length in the very first approach. In a follow-up PR I'll do some proper type parsing on a higher level, similar to what I did in the past with Rust TypeTrees. This work is much simpler and more reliable though, since offload doesn't care what type something has, just how large it is in bytes, and hence how many bytes need to be moved to/from the GPU. I was able to just move a few of the builder methods I needed to the generic builder.
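As a rough illustration of that point, a minimal sketch of how the byte count for an offloaded argument could be obtained on the rustc side; the helper, its name, and where it would live are hypothetical, only the layout_of/size queries are existing rustc APIs:

```rust
use rustc_middle::ty::layout::LayoutOf;

// Hypothetical helper: offload only needs to know how many bytes an argument
// occupies, so for fixed-size types a layout query is all that is required.
// The CodegenCx plumbing and the function's placement are assumptions.
fn offload_size_in_bytes<'tcx>(
    cx: &crate::context::CodegenCx<'_, 'tcx>,
    ty: rustc_middle::ty::Ty<'tcx>,
) -> u64 {
    cx.layout_of(ty).size.bytes()
}
```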
Not fully ready yet; I apparently missed yet another global needed to initialize the offload runtime. But at least it compiles successfully to a binary if I emit the IR from Rust and then use clang for the rest. I'll add the global today, then I should be done and will clean it up.
Some changes occurred in compiler/rustc_codegen_ssa
I did the first round of reviews for myself; I'll address them tomorrow.
I'll also clean up the code in gpu_builder more; it has a lot of duplication and IR comments from when I was trying to figure out what to generate.
@@ -117,6 +118,70 @@ impl<'a, 'll, CX: Borrow<SCx<'ll>>> GenericBuilder<'a, 'll, CX> {
        }
        bx
    }

    pub(crate) fn my_alloca2(&mut self, ty: &'ll Type, align: Align, name: &str) -> &'ll Value {
I'll find a better name for it.
also document why/how it is different from alloca
I gave it a name because I feel that alloca is the best place to add some names to make the IR more readable, without much effort.
Other than that, I could add a few more llvm wrappers to:
- save the current insert point
- jump to the end of the allocas
- insert the alloca
- restore the original insert point
If you feel strongly I could do it (see the sketch below), but for now I just documented that users must be sure they really want to insert their alloca at the current insert point. That is the default for all other (non-alloca) builder functions, so together with the new name it is hopefully not too surprising.
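A minimal sketch of what those wrappers could look like as a single builder method, mirroring the shape of my_alloca2 above. It assumes the raw LLVM-C bindings (LLVMGetInsertBlock, LLVMGetBasicBlockParent, LLVMGetEntryBasicBlock, LLVMGetFirstInstruction, LLVMPositionBuilderBefore/AtEnd, LLVMBuildAlloca, LLVMSetAlignment) and SmallCStr are available with these signatures; it is illustrative only, not what this PR implements:

```rust
// Illustrative only: place the alloca at the start of the entry block and
// restore the previous insert position afterwards, instead of inserting at the
// current position like my_alloca2 does.
pub(crate) fn alloca_in_entry(&mut self, ty: &'ll Type, align: Align, name: &str) -> &'ll Value {
    let name = SmallCStr::new(name);
    unsafe {
        // 1. Save the current insert point.
        let saved_bb = llvm::LLVMGetInsertBlock(self.llbuilder);
        // 2. Jump to the start of the entry block (before any existing allocas).
        let llfn = llvm::LLVMGetBasicBlockParent(saved_bb);
        let entry = llvm::LLVMGetEntryBasicBlock(llfn);
        match llvm::LLVMGetFirstInstruction(entry) {
            Some(first) => llvm::LLVMPositionBuilderBefore(self.llbuilder, first),
            None => llvm::LLVMPositionBuilderAtEnd(self.llbuilder, entry),
        }
        // 3. Insert the alloca and set its alignment.
        let val = llvm::LLVMBuildAlloca(self.llbuilder, ty, name.as_ptr());
        llvm::LLVMSetAlignment(val, align.bytes() as std::ffi::c_uint);
        // 4. Restore the original insert point (good enough while codegen keeps
        //    the builder at the end of the current block, as it usually does).
        llvm::LLVMPositionBuilderAtEnd(self.llbuilder, saved_bb);
        val
    }
}
```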
Ok, I think I'm mostly done. Do you have any suggestions? I don't want to add any actual run tests, as these would require a working clang based on the same commit.
@@ -667,6 +668,12 @@ pub(crate) fn run_pass_manager(
        write::llvm_optimize(cgcx, dcx, module, None, config, opt_level, opt_stage, stage)?;
    }

    if cfg!(llvm_enzyme) && enable_gpu && !thin {
There is no dependency of offload on Enzyme, but since I think I'm supposed to gate my features, for now I'll just re-use the ones from Enzyme.
I don't think the cfg gate is necessary, as long as the Offload::Enable is gated behind -Zunstable-options or something similar.
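A minimal sketch of that gating, assuming the enablement ends up as an Offload::Enable value behind -Zunstable-options; the config field name and surrounding plumbing are guesses, not this PR's actual code:

```rust
// Illustrative: gate on the unstable offload flag alone, with no cfg!(llvm_enzyme)
// requirement. `config.offload` / `config::Offload::Enable` mirror how other -Z
// options are threaded through the backend config, but the exact names are assumptions.
let enable_gpu = config.offload.contains(&config::Offload::Enable);
if enable_gpu && !thin {
    // ... generate the offload host glue ...
}
```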
☔ The latest upstream changes (presumably #143026) made this pull request unmergeable. Please resolve the merge conflicts.
I... don't know if I can review this properly. I can review it from the "does this fit into how I want the llvm backend to look" side, but what it actually does just looks random to me.
why is clang necessary for this?
This time I started with dev guide docs! https://rustc-dev-guide.rust-lang.org/offload/installation.html#usage
Thanks! And no worries, I'm discussing the offloading design with @jdoerfert and @kevinsala. The memory transfer is pretty straightforward and not that interesting. The only question was how many layers of abstraction we wanted, but we made a decision which should be fine, and we could always re-evaluate it later. For the kernel launches PR I'll ask them to also review the code, but they aren't rust devs, so your reviews on the rustc side are definitely appreciated!
@oli-obk I cleaned up the code a bit more and addressed your feedback. This reduced the number of values which I pass around between functions by a lot.
let mapper_update = "__tgt_target_data_update_mapper";
let mapper_end = "__tgt_target_data_end_mapper";
let begin_mapper_decl = declare_offload_fn(&cx, mapper_begin, mapper_fn_ty);
let update_mapper_decl = declare_offload_fn(&cx, mapper_update, mapper_fn_ty);
This function seems to be unused
Hmm, did you mean to add the comment somewhere else? The gen_tgt_data_mappers function is called, and the return values of declare_offload_fn are also used.
__tgt_target_data_update_mapper specifically is "Step 3" and unused for now; mark it as used with _ = update_mapper; at the Step 3 site.
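For context, a hedged reconstruction of the signature shared by the three mapper entry points (per libomptarget's omptarget.h); the cx.type_* calls follow rustc_codegen_llvm's usual type helpers, and the exact construction in this PR may differ:

```rust
// __tgt_target_data_{begin,update,end}_mapper all take the same parameters.
// Comments give the C-side meaning; treat this as a sketch, not the PR's code.
let tptr = cx.type_ptr();
let ti64 = cx.type_i64();
let ti32 = cx.type_i32();
let mapper_fn_ty = cx.type_func(
    &[
        tptr, // ident_t *loc        (source location info)
        ti64, // int64_t device_id   (-1 selects the default device)
        ti32, // int32_t arg_num
        tptr, // void **args_base
        tptr, // void **args
        tptr, // int64_t *arg_sizes
        tptr, // int64_t *arg_types  (map-type flags)
        tptr, // map_var_info_t *arg_names
        tptr, // void **arg_mappers
    ],
    cx.type_void(),
);
```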
// void *AuxAddr;
// } __tgt_offload_entry;
let kernel_elements =
    vec![ti32, ti32, tptr, tptr, tptr, tptr, tptr, tptr, ti64, ti64, tarr, tarr, ti32];
these entries do not match the fields in the comment above
Uuuh, yeah. The tgt_offload_entry ended up in the function above, where it also matches. Now just let me find the docs for what the tgt_kernel_argument values stand for.
Not pretty, but I found and added the right definition. I don't want to add links to a different GitHub repo, and looking the definition up would be annoying for contributors, so I think that's the best solution.
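For reference, a hedged sketch of the layout those 13 elements presumably mirror: LLVM's __tgt_kernel_arguments (KernelArgsTy in the offload runtime). The field names come from upstream LLVM and can drift between versions, and the cx.type_* helpers are assumptions about this PR's exact API:

```rust
// Assumed correspondence between the generated struct elements and LLVM's
// __tgt_kernel_arguments; verify against the LLVM version rustc is built with.
let ti32 = cx.type_i32();
let ti64 = cx.type_i64();
let tptr = cx.type_ptr();
let tarr = cx.type_array(ti32, 3);
let kernel_elements = vec![
    ti32, // uint32_t Version
    ti32, // uint32_t NumArgs
    tptr, // void **ArgBasePtrs
    tptr, // void **ArgPtrs
    tptr, // int64_t *ArgSizes
    tptr, // int64_t *ArgTypes
    tptr, // void **ArgNames
    tptr, // void **ArgMappers
    ti64, // uint64_t Tripcount
    ti64, // uint64_t Flags
    tarr, // uint32_t NumTeams[3]
    tarr, // uint32_t ThreadLimit[3]
    ti32, // uint32_t DynCGroupMem
];
```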
tgt_bin_desc_alloca,
cx.get_const_i8(0),
cx.get_const_i64(32),
Align::from_bytes(8).unwrap(),
explain magic numbers pls
https://llvm.org/docs/LangRef.html#llvm-memset-intrinsics
The 32 is just the bit width, because the first type is an i32. The byte value we set it to is zero, so no magic number.
fwiw, running this directly through rustc or as part of the rust test suite works. Calling cargo fails due to triggering an llvm assertion, because cargo adds
Which is fair, I'm still using clang/lld as the driver and linker. Also looking forward to https://github.com/rust-lang/rust/pull/143684/files, which will fix some offload logic, but isn't blocking. https://github.com/nikic/llvm-project/blob/c5e8134bf939bd5fcf48faeb4efd0d18c4721b4a/offload/libomptarget/PluginManager.cpp#L289
I removed the cfg gate, happy to not have it rely on Enzyme.
    l: Linkage,
) -> &'ll llvm::Value {
    let llglobal = add_global(cx, name, initializer, l);
    unsafe { llvm::LLVMSetUnnamedAddress(llglobal, llvm::UnnamedAddr::Global) };
Suggested change:
- unsafe { llvm::LLVMSetUnnamedAddress(llglobal, llvm::UnnamedAddr::Global) };
+ llvm::SetUnnamedAddress(llglobal, llvm::UnnamedAddr::Global);
Ok, one more round then I think this can ship 😆
let name = format!(".offloading.entry.kernel_{num}");
let ci64_0 = cx.get_const_i64(0);
let ci16_1 = cx.get_const_i16(1);
let elems: Vec<&llvm::Value> = vec![
which data structure is this?
let c_section_name = CString::new(".llvm.rodata.offloading").unwrap();
llvm::set_section(llglobal, &c_section_name);
Suggested change:
- let c_section_name = CString::new(".llvm.rodata.offloading").unwrap();
- llvm::set_section(llglobal, &c_section_name);
+ llvm::set_section(llglobal, c".llvm.rodata.offloading");
tgt_bin_desc_alloca,
cx.get_const_i8(0),
cx.get_const_i64(32),
Align::from_bytes(8).unwrap(),
Suggested change:
- Align::from_bytes(8).unwrap(),
+ Align::EIGHT,
let gep1 = builder.inbounds_gep(ty, a1, &[i32_0, i32_0]);
let gep2 = builder.inbounds_gep(ty, a2, &[i32_0, i32_0]);
let gep3 = builder.inbounds_gep(ty2, a4, &[i32_0, i32_0]);

let nullptr = cx.const_null(cx.type_ptr());
let o_type = o_types[0];
let args = vec![
    s_ident_t,
    cx.get_const_i64(u64::MAX),
    cx.get_const_i32(num_args),
    gep1,
    gep2,
    gep3,
    o_type,
    nullptr,
    nullptr,
];
builder.call(fn_ty, end_mapper_decl, &args, None);
Since almost the exact same call is gonna be happening thrice in the future, pull this into a function that deduplicates as much as possible
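A hedged sketch of such a helper, derived from the call sequence quoted above; the type names (SimpleCx, SBuilder) and parameter choices are guesses about this PR's internals, not its actual signatures:

```rust
// Illustrative deduplication: one helper that emits a
// __tgt_target_data_{begin,update,end}_mapper call. All type and helper names
// (SimpleCx, SBuilder, get_const_*) are assumptions based on the quoted diff.
fn gen_mapper_call<'ll>(
    builder: &mut SBuilder<'_, 'll>,
    cx: &SimpleCx<'ll>,
    mapper_decl: &'ll Value, // begin_mapper_decl / update_mapper_decl / end_mapper_decl
    fn_ty: &'ll Type,
    s_ident_t: &'ll Value,
    num_args: u64,
    geps: [&'ll Value; 3],   // GEPs into the base-pointer, pointer, and size arrays
    o_type: &'ll Value,      // map-type information for this call site (o_types[0] above)
) {
    let nullptr = cx.const_null(cx.type_ptr());
    let args = vec![
        s_ident_t,
        cx.get_const_i64(u64::MAX), // device id -1: the default device
        cx.get_const_i32(num_args),
        geps[0],
        geps[1],
        geps[2],
        o_type,
        nullptr, // no mapper names
        nullptr, // no custom mappers
    ];
    builder.call(fn_ty, mapper_decl, &args, None);
}
```

The begin, update, and end calls would then differ only in the declaration passed in.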
r? ghost
This will generate most of the host-side code needed to use LLVM's offload feature.
The first PR will only handle automatic mem-transfers to and from the device.
So if a user calls a kernel, we will copy inputs back and forth, but we won't do the actual kernel launch.
Before merging, we will use LLVM's Info infrastructure to verify that the memcopies match what OpenMP offload generates in C++.
LIBOMPTARGET_INFO=-1 ./my_rust_binary
should print that a memcpy to and later from the device is happening. A follow-up PR will generate the actual device-side kernel, which will then do computations on the GPU.
A third PR will implement manual host2device and device2host functionality, but the goal is to minimize cases where a user has to override our default handling due to performance issues.
I'm trying to get a full MVP out first, so this just recognizes GPU functions based on magic names (illustrated below). The final frontend will obviously move this over to proper macros, like I'm already doing for the autodiff work.
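To make the magic-name recognition concrete, a hedged example of what user code might look like in this MVP; the kernel_{num} naming is inferred from the .offloading.entry.kernel_{num} entries above, and the signature and fixed length are purely illustrative:

```rust
// Purely illustrative: in the MVP a function would be treated as a GPU kernel
// based on its symbol name alone; a later PR replaces this with a proper macro,
// similar to #[autodiff]. Name, signature, and the hardcoded size are assumptions.
#[no_mangle]
fn kernel_1(x: &mut [f32; 256]) {
    for v in x.iter_mut() {
        *v *= 2.0;
    }
}
```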
This work will also be compatible with std::autodiff, so one can differentiate GPU kernels.
Tracking: