
[CIR] Fix alignment when lowering set/get bitfield operations #148999


Merged
merged 3 commits on Jul 18, 2025

Conversation

Andres-Salamanca
Contributor

This PR fixes incorrect alignment when lowering the `set_bitfield` and `get_bitfield` operations to LLVM IR. The issue occurred because, during lowering, the LLVM load/store was created with an alignment of 0, which caused it to default to the natural alignment of the packed storage member. For example, if the bitfield was packed inside a `!u64i`, it would use an alignment of 8 even when the enclosing struct is only 4-byte aligned. With this change, the generated code matches what the classic codegen produces.
For the assembly format, I changed it to be similar to how it's done in `LoadOp`. If there's a better approach, please feel free to point it out.

@llvmbot llvmbot added clang Clang issues not falling into any other category ClangIR Anything related to the ClangIR project labels Jul 16, 2025
@llvmbot
Member

llvmbot commented Jul 16, 2025

@llvm/pr-subscribers-clangir

Author: None (Andres-Salamanca)

Changes

Patch is 28.15 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/148999.diff

7 Files Affected:

  • (modified) clang/include/clang/CIR/Dialect/IR/CIROps.td (+24-9)
  • (modified) clang/lib/CIR/CodeGen/CIRGenBuilder.h (+10-8)
  • (modified) clang/lib/CIR/CodeGen/CIRGenExpr.cpp (+9-5)
  • (modified) clang/lib/CIR/Lowering/DirectToLLVM/LowerToLLVM.cpp (+5-3)
  • (modified) clang/test/CIR/CodeGen/bitfields.c (+23-23)
  • (modified) clang/test/CIR/CodeGen/bitfields.cpp (+11-11)
  • (modified) clang/test/CIR/CodeGen/bitfields_be.c (+13-13)
diff --git a/clang/include/clang/CIR/Dialect/IR/CIROps.td b/clang/include/clang/CIR/Dialect/IR/CIROps.td
index 676ff76dff661..90bc9bf3be738 100644
--- a/clang/include/clang/CIR/Dialect/IR/CIROps.td
+++ b/clang/include/clang/CIR/Dialect/IR/CIROps.td
@@ -1720,7 +1720,8 @@ def SetBitfieldOp : CIR_Op<"set_bitfield"> {
     %2 = cir.load %0 : !cir.ptr<!cir.ptr<!record_type>>, !cir.ptr<!record_type>
     %3 = cir.get_member %2[1] {name = "e"} : !cir.ptr<!record_type>
                                                              -> !cir.ptr<!u16i>
-    %4 = cir.set_bitfield(#bfi_e, %3 : !cir.ptr<!u16i>, %1 : !s32i) -> !s32i
+    %4 = cir.set_bitfield align(4) (#bfi_e, %3 : !cir.ptr<!u16i>, %1 : !s32i)
+                                                                       -> !s32i
     ```
    }];
 
@@ -1728,12 +1729,15 @@ def SetBitfieldOp : CIR_Op<"set_bitfield"> {
     Arg<CIR_PointerType, "the address to store the value", [MemWrite]>:$addr,
     CIR_AnyType:$src,
     BitfieldInfoAttr:$bitfield_info,
+    OptionalAttr<I64Attr>:$alignment,
     UnitAttr:$is_volatile
   );
 
   let results = (outs CIR_IntType:$result);
 
-  let assemblyFormat = [{ `(`$bitfield_info`,` $addr`:`qualified(type($addr))`,`
+  let assemblyFormat = [{
+    (`align` `(` $alignment^ `)`)?
+    `(`$bitfield_info`,` $addr`:`qualified(type($addr))`,`
     $src`:`type($src) `)`  attr-dict `->` type($result) }];
 
   let builders = [
@@ -1745,14 +1749,18 @@ def SetBitfieldOp : CIR_Op<"set_bitfield"> {
                    "unsigned":$size,
                    "unsigned":$offset,
                    "bool":$is_signed,
-                   "bool":$is_volatile
+                   "bool":$is_volatile,
+                   CArg<"unsigned", "0">:$alignment
                    ),
    [{
       BitfieldInfoAttr info =
         BitfieldInfoAttr::get($_builder.getContext(),
                               name, storage_type,
                               size, offset, is_signed);
-      build($_builder, $_state, type, addr, src, info, is_volatile);
+      build($_builder, $_state, type, addr, src, info,
+            alignment == 0 ? IntegerAttr()
+                           : $_builder.getI64IntegerAttr(alignment),
+            is_volatile);
     }]>
   ];
 }
@@ -1804,20 +1812,23 @@ def GetBitfieldOp : CIR_Op<"get_bitfield"> {
     %2 = cir.load %0 : !cir.ptr<!cir.ptr<!record_type>>, !cir.ptr<!record_type>
     %3 = cir.get_member %2[1] {name = "e"} : !cir.ptr<!record_type>
                                                              -> !cir.ptr<!u16i>
-    %4 = cir.get_bitfield(#bfi_e, %3 : !cir.ptr<!u16i>) -> !s32i
+    %4 = cir.get_bitfield align(4) (#bfi_e, %3 : !cir.ptr<!u16i>) -> !s32i
     ```
     }];
 
   let arguments = (ins
     Arg<CIR_PointerType, "the address to load from", [MemRead]>:$addr,
     BitfieldInfoAttr:$bitfield_info,
+    OptionalAttr<I64Attr>:$alignment,
     UnitAttr:$is_volatile
     );
 
   let results = (outs CIR_IntType:$result);
 
-  let assemblyFormat = [{ `(`$bitfield_info `,` $addr attr-dict `:`
-   qualified(type($addr)) `)` `->` type($result) }];
+  let assemblyFormat = [{
+    (`align` `(` $alignment^ `)`)?
+    `(`$bitfield_info `,` $addr attr-dict `:`
+    qualified(type($addr)) `)` `->` type($result) }];
 
   let builders = [
     OpBuilder<(ins "mlir::Type":$type,
@@ -1827,14 +1838,18 @@ def GetBitfieldOp : CIR_Op<"get_bitfield"> {
                    "unsigned":$size,
                    "unsigned":$offset,
                    "bool":$is_signed,
-                   "bool":$is_volatile
+                   "bool":$is_volatile,
+                   CArg<"unsigned", "0">:$alignment
                    ),
    [{
       BitfieldInfoAttr info =
         BitfieldInfoAttr::get($_builder.getContext(),
                               name, storage_type,
                               size, offset, is_signed);
-      build($_builder, $_state, type, addr, info, is_volatile);
+      build($_builder, $_state, type, addr, info,
+            alignment == 0 ? IntegerAttr()
+                           : $_builder.getI64IntegerAttr(alignment),
+            is_volatile);
     }]>
   ];
 }
diff --git a/clang/lib/CIR/CodeGen/CIRGenBuilder.h b/clang/lib/CIR/CodeGen/CIRGenBuilder.h
index 5bd53ebc52ab5..4a948759b10f8 100644
--- a/clang/lib/CIR/CodeGen/CIRGenBuilder.h
+++ b/clang/lib/CIR/CodeGen/CIRGenBuilder.h
@@ -426,19 +426,21 @@ class CIRGenBuilderTy : public cir::CIRBaseBuilderTy {
   mlir::Value createSetBitfield(mlir::Location loc, mlir::Type resultType,
                                 mlir::Value dstAddr, mlir::Type storageType,
                                 mlir::Value src, const CIRGenBitFieldInfo &info,
-                                bool isLvalueVolatile, bool useVolatile) {
-    return create<cir::SetBitfieldOp>(loc, resultType, dstAddr, storageType,
-                                      src, info.name, info.size, info.offset,
-                                      info.isSigned, isLvalueVolatile);
+                                bool isLvalueVolatile, bool useVolatile,
+                                unsigned alignment = 0) {
+    return create<cir::SetBitfieldOp>(
+        loc, resultType, dstAddr, storageType, src, info.name, info.size,
+        info.offset, info.isSigned, isLvalueVolatile, alignment);
   }
 
   mlir::Value createGetBitfield(mlir::Location loc, mlir::Type resultType,
                                 mlir::Value addr, mlir::Type storageType,
                                 const CIRGenBitFieldInfo &info,
-                                bool isLvalueVolatile, bool useVolatile) {
-    return create<cir::GetBitfieldOp>(loc, resultType, addr, storageType,
-                                      info.name, info.size, info.offset,
-                                      info.isSigned, isLvalueVolatile);
+                                bool isLvalueVolatile, bool useVolatile,
+                                unsigned alignment = 0) {
+    return create<cir::GetBitfieldOp>(
+        loc, resultType, addr, storageType, info.name, info.size, info.offset,
+        info.isSigned, isLvalueVolatile, alignment);
   }
 };
 
diff --git a/clang/lib/CIR/CodeGen/CIRGenExpr.cpp b/clang/lib/CIR/CodeGen/CIRGenExpr.cpp
index 51da48d330f55..eb8b67bd0cc17 100644
--- a/clang/lib/CIR/CodeGen/CIRGenExpr.cpp
+++ b/clang/lib/CIR/CodeGen/CIRGenExpr.cpp
@@ -337,9 +337,10 @@ mlir::Value CIRGenFunction::emitStoreThroughBitfieldLValue(RValue src,
 
   mlir::Value dstAddr = dst.getAddress().getPointer();
 
-  return builder.createSetBitfield(dstAddr.getLoc(), resLTy, dstAddr,
-                                   ptr.getElementType(), src.getValue(), info,
-                                   dst.isVolatileQualified(), useVolatile);
+  return builder.createSetBitfield(
+      dstAddr.getLoc(), resLTy, dstAddr, ptr.getElementType(), src.getValue(),
+      info, dst.isVolatileQualified(), useVolatile,
+      dst.getAddress().getAlignment().getAsAlign().value());
 }
 
 RValue CIRGenFunction::emitLoadOfBitfieldLValue(LValue lv, SourceLocation loc) {
@@ -353,7 +354,7 @@ RValue CIRGenFunction::emitLoadOfBitfieldLValue(LValue lv, SourceLocation loc) {
 
   mlir::Value field = builder.createGetBitfield(
       getLoc(loc), resLTy, ptr.getPointer(), ptr.getElementType(), info,
-      lv.isVolatile(), false);
+      lv.isVolatile(), false, ptr.getAlignment().getAsAlign().value());
   assert(!cir::MissingFeatures::opLoadEmitScalarRangeCheck() && "NYI");
   return RValue::get(field);
 }
@@ -366,7 +367,10 @@ Address CIRGenFunction::getAddrOfBitFieldStorage(LValue base,
   cir::PointerType fieldPtr = cir::PointerType::get(fieldType);
   cir::GetMemberOp sea = getBuilder().createGetMember(
       loc, fieldPtr, base.getPointer(), field->getName(), index);
-  return Address(sea, CharUnits::One());
+  auto rec = cast<cir::RecordType>(base.getAddress().getElementType());
+  CharUnits offset = CharUnits::fromQuantity(
+      rec.getElementOffset(cgm.getDataLayout().layout, index));
+  return Address(sea, base.getAlignment().alignmentAtOffset(offset));
 }
 
 LValue CIRGenFunction::emitLValueForBitField(LValue base,
diff --git a/clang/lib/CIR/Lowering/DirectToLLVM/LowerToLLVM.cpp b/clang/lib/CIR/Lowering/DirectToLLVM/LowerToLLVM.cpp
index 7dcea0c8eb529..fa52efd3fb099 100644
--- a/clang/lib/CIR/Lowering/DirectToLLVM/LowerToLLVM.cpp
+++ b/clang/lib/CIR/Lowering/DirectToLLVM/LowerToLLVM.cpp
@@ -2555,7 +2555,7 @@ mlir::LogicalResult CIRToLLVMSetBitfieldOpLowering::matchAndRewrite(
     assert(storageSize > size && "Invalid bitfield size.");
 
     mlir::Value val = rewriter.create<mlir::LLVM::LoadOp>(
-        op.getLoc(), intType, adaptor.getAddr(), /* alignment */ 0,
+        op.getLoc(), intType, adaptor.getAddr(), op.getAlignment().value(),
         op.getIsVolatile());
 
     srcVal =
@@ -2572,7 +2572,8 @@ mlir::LogicalResult CIRToLLVMSetBitfieldOpLowering::matchAndRewrite(
   }
 
   rewriter.create<mlir::LLVM::StoreOp>(op.getLoc(), srcVal, adaptor.getAddr(),
-                                       /* alignment */ 0, op.getIsVolatile());
+                                       op.getAlignment().value(),
+                                       op.getIsVolatile());
 
   mlir::Type resultTy = getTypeConverter()->convertType(op.getType());
 
@@ -2646,7 +2647,8 @@ mlir::LogicalResult CIRToLLVMGetBitfieldOpLowering::matchAndRewrite(
       computeBitfieldIntType(storageType, context, storageSize);
 
   mlir::Value val = rewriter.create<mlir::LLVM::LoadOp>(
-      op.getLoc(), intType, adaptor.getAddr(), 0, op.getIsVolatile());
+      op.getLoc(), intType, adaptor.getAddr(), op.getAlignment().value(),
+      op.getIsVolatile());
   val = rewriter.create<mlir::LLVM::BitcastOp>(op.getLoc(), intType, val);
 
   if (info.getIsSigned()) {
diff --git a/clang/test/CIR/CodeGen/bitfields.c b/clang/test/CIR/CodeGen/bitfields.c
index 896acbfc854a4..a73c076ea81ab 100644
--- a/clang/test/CIR/CodeGen/bitfields.c
+++ b/clang/test/CIR/CodeGen/bitfields.c
@@ -87,14 +87,14 @@ int load_field(S* s) {
 // CIR:   [[TMP0:%.*]] = cir.alloca !cir.ptr<!rec_S>, !cir.ptr<!cir.ptr<!rec_S>>, ["s", init]
 // CIR:   [[TMP1:%.*]] = cir.load{{.*}} [[TMP0]] : !cir.ptr<!cir.ptr<!rec_S>>, !cir.ptr<!rec_S>
 // CIR:   [[TMP2:%.*]] = cir.get_member [[TMP1]][0] {name = "c"} : !cir.ptr<!rec_S> -> !cir.ptr<!u64i>
-// CIR:   [[TMP3:%.*]] = cir.get_bitfield(#bfi_c, [[TMP2]] : !cir.ptr<!u64i>) -> !s32i
+// CIR:   [[TMP3:%.*]] = cir.get_bitfield align(4) (#bfi_c, [[TMP2]] : !cir.ptr<!u64i>) -> !s32i
 
 // LLVM: define dso_local i32 @load_field
 // LLVM:   [[TMP0:%.*]] = alloca ptr, i64 1, align 8
 // LLVM:   [[TMP1:%.*]] = alloca i32, i64 1, align 4
 // LLVM:   [[TMP2:%.*]] = load ptr, ptr [[TMP0]], align 8
 // LLVM:   [[TMP3:%.*]] = getelementptr %struct.S, ptr [[TMP2]], i32 0, i32 0
-// LLVM:   [[TMP4:%.*]] = load i64, ptr [[TMP3]], align 8
+// LLVM:   [[TMP4:%.*]] = load i64, ptr [[TMP3]], align 4
 // LLVM:   [[TMP5:%.*]] = shl i64 [[TMP4]], 15
 // LLVM:   [[TMP6:%.*]] = ashr i64 [[TMP5]], 47
 // LLVM:   [[TMP7:%.*]] = trunc i64 [[TMP6]] to i32
@@ -115,13 +115,13 @@ unsigned int load_field_unsigned(A* s) {
 //CIR:   [[TMP0:%.*]] = cir.alloca !cir.ptr<!rec_A>, !cir.ptr<!cir.ptr<!rec_A>>, ["s", init] {alignment = 8 : i64}
 //CIR:   [[TMP1:%.*]] = cir.load align(8) [[TMP0]] : !cir.ptr<!cir.ptr<!rec_A>>, !cir.ptr<!rec_A>
 //CIR:   [[TMP2:%.*]] = cir.get_member [[TMP1]][3] {name = "more_bits"} : !cir.ptr<!rec_A> -> !cir.ptr<!u16i>
-//CIR:   [[TMP3:%.*]] = cir.get_bitfield(#bfi_more_bits, [[TMP2]] : !cir.ptr<!u16i>) -> !u32i
+//CIR:   [[TMP3:%.*]] = cir.get_bitfield align(1) (#bfi_more_bits, [[TMP2]] : !cir.ptr<!u16i>) -> !u32i
 
 //LLVM: define dso_local i32 @load_field_unsigned
 //LLVM:   [[TMP0:%.*]] = alloca ptr, i64 1, align 8
 //LLVM:   [[TMP1:%.*]] = load ptr, ptr [[TMP0]], align 8
 //LLVM:   [[TMP2:%.*]] = getelementptr %struct.A, ptr [[TMP1]], i32 0, i32 3
-//LLVM:   [[TMP3:%.*]] = load i16, ptr [[TMP2]], align 2
+//LLVM:   [[TMP3:%.*]] = load i16, ptr [[TMP2]], align 1
 //LLVM:   [[TMP4:%.*]] = lshr i16 [[TMP3]], 3
 //LLVM:   [[TMP5:%.*]] = and i16 [[TMP4]], 15
 //LLVM:   [[TMP6:%.*]] = zext i16 [[TMP5]] to i32
@@ -143,15 +143,15 @@ void store_field() {
 // CIR:   [[TMP0:%.*]] = cir.alloca !rec_S, !cir.ptr<!rec_S>
 // CIR:   [[TMP1:%.*]] = cir.const #cir.int<3> : !s32i
 // CIR:   [[TMP2:%.*]] = cir.get_member [[TMP0]][1] {name = "e"} : !cir.ptr<!rec_S> -> !cir.ptr<!u16i>
-// CIR:   cir.set_bitfield(#bfi_e, [[TMP2]] : !cir.ptr<!u16i>, [[TMP1]] : !s32i)
+// CIR:   cir.set_bitfield align(4) (#bfi_e, [[TMP2]] : !cir.ptr<!u16i>, [[TMP1]] : !s32i)
 
 // LLVM: define dso_local void @store_field()
 // LLVM:   [[TMP0:%.*]] = alloca %struct.S, i64 1, align 4
 // LLVM:   [[TMP1:%.*]] = getelementptr %struct.S, ptr [[TMP0]], i32 0, i32 1
-// LLVM:   [[TMP2:%.*]] = load i16, ptr [[TMP1]], align 2
+// LLVM:   [[TMP2:%.*]] = load i16, ptr [[TMP1]], align 4
 // LLVM:   [[TMP3:%.*]] = and i16 [[TMP2]], -32768
 // LLVM:   [[TMP4:%.*]] = or i16 [[TMP3]], 3
-// LLVM:   store i16 [[TMP4]], ptr [[TMP1]], align 2
+// LLVM:   store i16 [[TMP4]], ptr [[TMP1]], align 4
 
 // OGCG: define dso_local void @store_field()
 // OGCG:   [[TMP0:%.*]] = alloca %struct.S, align 4
@@ -169,24 +169,24 @@ void store_bitfield_to_bitfield() {
 // CIR: cir.func {{.*@store_bitfield_to_bitfield}}
 // CIR:   [[TMP0:%.*]] = cir.alloca !rec_S, !cir.ptr<!rec_S>, ["s"] {alignment = 4 : i64}
 // CIR:   [[TMP1:%.*]] = cir.get_member [[TMP0]][0] {name = "c"} : !cir.ptr<!rec_S> -> !cir.ptr<!u64i>
-// CIR:   [[TMP2:%.*]] = cir.get_bitfield(#bfi_c, [[TMP1]] : !cir.ptr<!u64i>) -> !s32i
+// CIR:   [[TMP2:%.*]] = cir.get_bitfield align(4) (#bfi_c, [[TMP1]] : !cir.ptr<!u64i>) -> !s32i
 // CIR:   [[TMP3:%.*]] = cir.get_member [[TMP0]][0] {name = "a"} : !cir.ptr<!rec_S> -> !cir.ptr<!u64i>
-// CIR:   [[TMP4:%.*]] = cir.set_bitfield(#bfi_a, [[TMP3]] : !cir.ptr<!u64i>, [[TMP2]] : !s32i) -> !s32i
+// CIR:   [[TMP4:%.*]] = cir.set_bitfield align(4) (#bfi_a, [[TMP3]] : !cir.ptr<!u64i>, [[TMP2]] : !s32i) -> !s32i
 
 // LLVM: define dso_local void @store_bitfield_to_bitfield()
 // LLVM:  [[TMP0:%.*]] = alloca %struct.S, i64 1, align 4
 // LLVM:  [[TMP1:%.*]] = getelementptr %struct.S, ptr [[TMP0]], i32 0, i32 0
-// LLVM:  [[TMP2:%.*]] = load i64, ptr [[TMP1]], align 8
+// LLVM:  [[TMP2:%.*]] = load i64, ptr [[TMP1]], align 4
 // LLVM:  [[TMP3:%.*]] = shl i64 [[TMP2]], 15
 // LLVM:  [[TMP4:%.*]] = ashr i64 [[TMP3]], 47
 // LLVM:  [[TMP5:%.*]] = trunc i64 [[TMP4]] to i32
 // LLVM:  [[TMP6:%.*]] = getelementptr %struct.S, ptr [[TMP0]], i32 0, i32 0
 // LLVM:  [[TMP7:%.*]] = zext i32 [[TMP5]] to i64
-// LLVM:  [[TMP8:%.*]] = load i64, ptr [[TMP6]], align 8
+// LLVM:  [[TMP8:%.*]] = load i64, ptr [[TMP6]], align 4
 // LLVM:  [[TMP9:%.*]] = and i64 [[TMP7]], 15
 // LLVM:  [[TMP10:%.*]] = and i64 [[TMP8]], -16
 // LLVM:  [[TMP11:%.*]] = or i64 [[TMP10]], [[TMP9]]
-// LLVM:  store i64 [[TMP11]], ptr [[TMP6]], align 8
+// LLVM:  store i64 [[TMP11]], ptr [[TMP6]], align 4
 // LLVM:  [[TMP12:%.*]] = shl i64 [[TMP9]], 60
 // LLVM:  [[TMP13:%.*]] = ashr i64 [[TMP12]], 60
 // LLVM:  [[TMP15:%.*]] = trunc i64 [[TMP13]] to i32
@@ -222,16 +222,16 @@ void get_volatile(V* v) {
 // CIR:   [[TMP1:%.*]] = cir.const #cir.int<3> : !s32i
 // CIR:   [[TMP2:%.*]] = cir.load align(8) [[TMP0]] : !cir.ptr<!cir.ptr<!rec_V>>, !cir.ptr<!rec_V>
 // CIR:   [[TMP3:%.*]] = cir.get_member [[TMP2]][0] {name = "b"} : !cir.ptr<!rec_V> -> !cir.ptr<!u64i>
-// CIR:   [[TMP4:%.*]] = cir.set_bitfield(#bfi_b, [[TMP3]] : !cir.ptr<!u64i>, [[TMP1]] : !s32i) {is_volatile} -> !s32i
+// CIR:   [[TMP4:%.*]] = cir.set_bitfield align(4) (#bfi_b, [[TMP3]] : !cir.ptr<!u64i>, [[TMP1]] : !s32i) {is_volatile} -> !s32i
 
 // LLVM: define dso_local void @get_volatile
 // LLVM:   [[TMP0:%.*]] = alloca ptr, i64 1, align 8
 // LLVM:   [[TMP1:%.*]] = load ptr, ptr [[TMP0]], align 8
 // LLVM:   [[TMP2:%.*]] = getelementptr %struct.V, ptr [[TMP1]], i32 0, i32 0
-// LLVM:   [[TMP3:%.*]] = load volatile i64, ptr [[TMP2]], align 8
+// LLVM:   [[TMP3:%.*]] = load volatile i64, ptr [[TMP2]], align 4
 // LLVM:   [[TMP4:%.*]] = and i64 [[TMP3]], -1095216660481
 // LLVM:   [[TMP5:%.*]] = or i64 [[TMP4]], 12884901888
-// LLVM:   store volatile i64 [[TMP5]], ptr [[TMP2]], align 8
+// LLVM:   store volatile i64 [[TMP5]], ptr [[TMP2]], align 4
 
 // OCGC: define dso_local void @get_volatile
 // OCGC:   [[TMP0:%.*]] = alloca ptr, align 8
@@ -249,16 +249,16 @@ void set_volatile(V* v) {
 //CIR:   [[TMP1:%.*]] = cir.const #cir.int<3> : !s32i
 //CIR:   [[TMP2:%.*]] = cir.load align(8) [[TMP0]] : !cir.ptr<!cir.ptr<!rec_V>>, !cir.ptr<!rec_V>
 //CIR:   [[TMP3:%.*]] = cir.get_member [[TMP2]][0] {name = "b"} : !cir.ptr<!rec_V> -> !cir.ptr<!u64i>
-//CIR:   [[TMP4:%.*]] = cir.set_bitfield(#bfi_b, [[TMP3]] : !cir.ptr<!u64i>, [[TMP1]] : !s32i) {is_volatile} -> !s32i
+//CIR:   [[TMP4:%.*]] = cir.set_bitfield align(4) (#bfi_b, [[TMP3]] : !cir.ptr<!u64i>, [[TMP1]] : !s32i) {is_volatile} -> !s32i
 
 // LLVM: define dso_local void @set_volatile
 // LLVM:   [[TMP0:%.*]] = alloca ptr, i64 1, align 8
 // LLVM:   [[TMP1:%.*]] = load ptr, ptr [[TMP0]], align 8
 // LLVM:   [[TMP2:%.*]] = getelementptr %struct.V, ptr [[TMP1]], i32 0, i32 0
-// LLVM:   [[TMP3:%.*]] = load volatile i64, ptr [[TMP2]], align 8
+// LLVM:   [[TMP3:%.*]] = load volatile i64, ptr [[TMP2]], align 4
 // LLVM:   [[TMP4:%.*]] = and i64 [[TMP3]], -1095216660481
 // LLVM:   [[TMP5:%.*]] = or i64 [[TMP4]], 12884901888
-// LLVM:   store volatile i64 [[TMP5]], ptr [[TMP2]], align 8
+// LLVM:   store volatile i64 [[TMP5]], ptr [[TMP2]], align 4
 
 // OGCG: define dso_local void @set_volatile
 // OGCG:   [[TMP0:%.*]] = alloca ptr, align 8
@@ -276,24 +276,24 @@ void unOp(S* s) {
 // CIR:   [[TMP0:%.*]] = cir.alloca !cir.ptr<!rec_S>, !cir.ptr<!cir.ptr<!rec_S>>, ["s", init] {alignment = 8 : i64}
 // CIR:   [[TMP1:%.*]] = cir.load align(8) [[TMP0]] : !cir.ptr<!cir.ptr<!rec_S>>, !cir.ptr<!rec_S>
 // CIR:   [[TMP2:%.*]] = cir.get_member [[TMP1]][0] {name = "d"} : !cir.ptr<!rec_S> -> !cir.ptr<!u64i>
-// CIR:   [[TMP3:%.*]] = cir.get_bitfield(#bfi_d, [[TMP2]] : !cir.ptr<!u64i>) -> !s32i
+// CIR:   [[TMP3:%.*]] = cir.get_bitfield align(4) (#bfi_d, [[TMP2]] : !cir.ptr<!u64i>) -> !s32i
 // CIR:   [[TMP4:%.*]] = cir.unary(inc, [[TMP3]]) nsw : !s32i, !s32i
-// CIR:   cir.set_bitfield(#bfi_d, [[TMP2]] : !cir.ptr<!u64i>, [[TMP4]] : !s32i)
+// CIR:   cir.set_bitfield align(4) (#bfi_d, [[TMP2]] : !cir.ptr<!u64i>, [[TMP4]] : !s32i)
 
 // LLVM: define {{.*@unOp}}
 // LLVM:   [[TMP0:%.*]] = getelementptr %struct.S, ptr [[LOAD0:%.*]], i32 0, i32 0
-// LLVM:   [[TMP1:%.*]] = load i64, ptr [[TMP0]], align 8
+// LLVM:   [[TMP1:%.*]] = load i64, ptr [[TMP0]], align 4
 // LLVM:   [[TMP2:%.*]] = shl i64 [[TMP1]], 13
 // LLVM:   [[TMP3:%.*]] = ashr i64 [[TMP2]], 62
 // LLVM:   [[TMP4:%.*]] = trunc i64 [[TMP3]] to i32
 // LLVM:   [[TMP5:%.*]] = add nsw i32 [[TMP4]], 1
 // LLVM:   [[TMP6:%.*]] = zext i32 [[TMP5]] to i64
-// LLVM:   [[TMP7:%.*]] = load i64, ptr [[TMP0]], align 8
+// LLVM:   [[TMP7:%.*]] = load i64, ptr [[TMP0]], align 4
 // LLVM:   [[TMP8:%.*]] = and i64 [[TMP6]], 3
 // LLVM:   [[TMP9:%.*]] = shl i64 [[TMP8]], 49
 // LLVM:   [[TMP10:%.*]] = and i64 [[TMP7]], -1688849860263937
 // LLVM:   [[TMP11:%.*]] = or i64 [[TMP10]], [[TMP9]]
-// LLVM:   store i64 [[TMP11]], ptr [[TMP0]], align 8
+// LLVM:   store i64 [[TMP11]], ptr [[TMP0]], align 4
 // LLVM:   [[TMP12:%.*]] = shl i64 [[TMP8]], 62
 // LLVM:   [[TMP13:%.*]] = ashr i64 [[TMP12]], 62
 // LLVM:   [[TMP14:%.*]] = trunc i64 [[TMP13]] to i32
diff --git a/clang/test/CIR/CodeGen/bitfields.cpp b/clang/test/CIR/CodeGen/bitfields.cpp
index 6715ebf1f48b6..7650e0b83faf6 100644
--- a/clang/test/CIR/CodeGen/bitfields.cpp
+++ b/clang/test/CIR/CodeGen/bitfields.cpp
@@ -39,14 +39,14 @@ int load_field(S* s) {
 // CIR:   [[TMP0:%.*]] = cir.alloca !cir.ptr<!rec_S>, !cir.ptr<!cir.ptr<!rec_S>>, ["s", init]
 // CIR:   [[TMP1:%.*]] = cir.load{{.*}} [[TMP0]] : !cir.ptr<!cir.ptr<!rec_S>>, !cir.ptr<!rec_S>
 // CIR:   [[TMP2:%.*]] = cir.get_member [[TMP1]][0] {name = "c"} : !cir.ptr<!rec_S> -> !cir.ptr<!u64i>
-// CIR:   [[TMP3:%.*]] = cir.get_bitfield(#bfi_c, [[TMP2]] : !cir.ptr<!u64i>) -> !s32i
+// CIR:   [[TMP3:%.*]] = cir.get_bitfield align(4) (#bfi_c, [[TMP2]] : !cir.ptr<!u64i>) -> !s32i
 
 // LLVM: define dso_local i32 @_Z10load_fieldP1S
 // LLVM:   [[TMP0:%.*]] = alloca ptr, i64 1, align 8
 // LLVM:   [[TMP1:%.*]] = alloca i32, i64 1, align 4
 // LLVM:   [[TMP2:%.*]] = lo...
[truncated]

@llvmbot
Member

llvmbot commented Jul 16, 2025

@llvm/pr-subscribers-clang

diff --git a/clang/test/CIR/CodeGen/bitfields.c b/clang/test/CIR/CodeGen/bitfields.c
index 896acbfc854a4..a73c076ea81ab 100644
--- a/clang/test/CIR/CodeGen/bitfields.c
+++ b/clang/test/CIR/CodeGen/bitfields.c
@@ -87,14 +87,14 @@ int load_field(S* s) {
 // CIR:   [[TMP0:%.*]] = cir.alloca !cir.ptr<!rec_S>, !cir.ptr<!cir.ptr<!rec_S>>, ["s", init]
 // CIR:   [[TMP1:%.*]] = cir.load{{.*}} [[TMP0]] : !cir.ptr<!cir.ptr<!rec_S>>, !cir.ptr<!rec_S>
 // CIR:   [[TMP2:%.*]] = cir.get_member [[TMP1]][0] {name = "c"} : !cir.ptr<!rec_S> -> !cir.ptr<!u64i>
-// CIR:   [[TMP3:%.*]] = cir.get_bitfield(#bfi_c, [[TMP2]] : !cir.ptr<!u64i>) -> !s32i
+// CIR:   [[TMP3:%.*]] = cir.get_bitfield align(4) (#bfi_c, [[TMP2]] : !cir.ptr<!u64i>) -> !s32i
 
 // LLVM: define dso_local i32 @load_field
 // LLVM:   [[TMP0:%.*]] = alloca ptr, i64 1, align 8
 // LLVM:   [[TMP1:%.*]] = alloca i32, i64 1, align 4
 // LLVM:   [[TMP2:%.*]] = load ptr, ptr [[TMP0]], align 8
 // LLVM:   [[TMP3:%.*]] = getelementptr %struct.S, ptr [[TMP2]], i32 0, i32 0
-// LLVM:   [[TMP4:%.*]] = load i64, ptr [[TMP3]], align 8
+// LLVM:   [[TMP4:%.*]] = load i64, ptr [[TMP3]], align 4
 // LLVM:   [[TMP5:%.*]] = shl i64 [[TMP4]], 15
 // LLVM:   [[TMP6:%.*]] = ashr i64 [[TMP5]], 47
 // LLVM:   [[TMP7:%.*]] = trunc i64 [[TMP6]] to i32
@@ -115,13 +115,13 @@ unsigned int load_field_unsigned(A* s) {
 //CIR:   [[TMP0:%.*]] = cir.alloca !cir.ptr<!rec_A>, !cir.ptr<!cir.ptr<!rec_A>>, ["s", init] {alignment = 8 : i64}
 //CIR:   [[TMP1:%.*]] = cir.load align(8) [[TMP0]] : !cir.ptr<!cir.ptr<!rec_A>>, !cir.ptr<!rec_A>
 //CIR:   [[TMP2:%.*]] = cir.get_member [[TMP1]][3] {name = "more_bits"} : !cir.ptr<!rec_A> -> !cir.ptr<!u16i>
-//CIR:   [[TMP3:%.*]] = cir.get_bitfield(#bfi_more_bits, [[TMP2]] : !cir.ptr<!u16i>) -> !u32i
+//CIR:   [[TMP3:%.*]] = cir.get_bitfield align(1) (#bfi_more_bits, [[TMP2]] : !cir.ptr<!u16i>) -> !u32i
 
 //LLVM: define dso_local i32 @load_field_unsigned
 //LLVM:   [[TMP0:%.*]] = alloca ptr, i64 1, align 8
 //LLVM:   [[TMP1:%.*]] = load ptr, ptr [[TMP0]], align 8
 //LLVM:   [[TMP2:%.*]] = getelementptr %struct.A, ptr [[TMP1]], i32 0, i32 3
-//LLVM:   [[TMP3:%.*]] = load i16, ptr [[TMP2]], align 2
+//LLVM:   [[TMP3:%.*]] = load i16, ptr [[TMP2]], align 1
 //LLVM:   [[TMP4:%.*]] = lshr i16 [[TMP3]], 3
 //LLVM:   [[TMP5:%.*]] = and i16 [[TMP4]], 15
 //LLVM:   [[TMP6:%.*]] = zext i16 [[TMP5]] to i32
@@ -143,15 +143,15 @@ void store_field() {
 // CIR:   [[TMP0:%.*]] = cir.alloca !rec_S, !cir.ptr<!rec_S>
 // CIR:   [[TMP1:%.*]] = cir.const #cir.int<3> : !s32i
 // CIR:   [[TMP2:%.*]] = cir.get_member [[TMP0]][1] {name = "e"} : !cir.ptr<!rec_S> -> !cir.ptr<!u16i>
-// CIR:   cir.set_bitfield(#bfi_e, [[TMP2]] : !cir.ptr<!u16i>, [[TMP1]] : !s32i)
+// CIR:   cir.set_bitfield align(4) (#bfi_e, [[TMP2]] : !cir.ptr<!u16i>, [[TMP1]] : !s32i)
 
 // LLVM: define dso_local void @store_field()
 // LLVM:   [[TMP0:%.*]] = alloca %struct.S, i64 1, align 4
 // LLVM:   [[TMP1:%.*]] = getelementptr %struct.S, ptr [[TMP0]], i32 0, i32 1
-// LLVM:   [[TMP2:%.*]] = load i16, ptr [[TMP1]], align 2
+// LLVM:   [[TMP2:%.*]] = load i16, ptr [[TMP1]], align 4
 // LLVM:   [[TMP3:%.*]] = and i16 [[TMP2]], -32768
 // LLVM:   [[TMP4:%.*]] = or i16 [[TMP3]], 3
-// LLVM:   store i16 [[TMP4]], ptr [[TMP1]], align 2
+// LLVM:   store i16 [[TMP4]], ptr [[TMP1]], align 4
 
 // OGCG: define dso_local void @store_field()
 // OGCG:   [[TMP0:%.*]] = alloca %struct.S, align 4
@@ -169,24 +169,24 @@ void store_bitfield_to_bitfield() {
 // CIR: cir.func {{.*@store_bitfield_to_bitfield}}
 // CIR:   [[TMP0:%.*]] = cir.alloca !rec_S, !cir.ptr<!rec_S>, ["s"] {alignment = 4 : i64}
 // CIR:   [[TMP1:%.*]] = cir.get_member [[TMP0]][0] {name = "c"} : !cir.ptr<!rec_S> -> !cir.ptr<!u64i>
-// CIR:   [[TMP2:%.*]] = cir.get_bitfield(#bfi_c, [[TMP1]] : !cir.ptr<!u64i>) -> !s32i
+// CIR:   [[TMP2:%.*]] = cir.get_bitfield align(4) (#bfi_c, [[TMP1]] : !cir.ptr<!u64i>) -> !s32i
 // CIR:   [[TMP3:%.*]] = cir.get_member [[TMP0]][0] {name = "a"} : !cir.ptr<!rec_S> -> !cir.ptr<!u64i>
-// CIR:   [[TMP4:%.*]] = cir.set_bitfield(#bfi_a, [[TMP3]] : !cir.ptr<!u64i>, [[TMP2]] : !s32i) -> !s32i
+// CIR:   [[TMP4:%.*]] = cir.set_bitfield align(4) (#bfi_a, [[TMP3]] : !cir.ptr<!u64i>, [[TMP2]] : !s32i) -> !s32i
 
 // LLVM: define dso_local void @store_bitfield_to_bitfield()
 // LLVM:  [[TMP0:%.*]] = alloca %struct.S, i64 1, align 4
 // LLVM:  [[TMP1:%.*]] = getelementptr %struct.S, ptr [[TMP0]], i32 0, i32 0
-// LLVM:  [[TMP2:%.*]] = load i64, ptr [[TMP1]], align 8
+// LLVM:  [[TMP2:%.*]] = load i64, ptr [[TMP1]], align 4
 // LLVM:  [[TMP3:%.*]] = shl i64 [[TMP2]], 15
 // LLVM:  [[TMP4:%.*]] = ashr i64 [[TMP3]], 47
 // LLVM:  [[TMP5:%.*]] = trunc i64 [[TMP4]] to i32
 // LLVM:  [[TMP6:%.*]] = getelementptr %struct.S, ptr [[TMP0]], i32 0, i32 0
 // LLVM:  [[TMP7:%.*]] = zext i32 [[TMP5]] to i64
-// LLVM:  [[TMP8:%.*]] = load i64, ptr [[TMP6]], align 8
+// LLVM:  [[TMP8:%.*]] = load i64, ptr [[TMP6]], align 4
 // LLVM:  [[TMP9:%.*]] = and i64 [[TMP7]], 15
 // LLVM:  [[TMP10:%.*]] = and i64 [[TMP8]], -16
 // LLVM:  [[TMP11:%.*]] = or i64 [[TMP10]], [[TMP9]]
-// LLVM:  store i64 [[TMP11]], ptr [[TMP6]], align 8
+// LLVM:  store i64 [[TMP11]], ptr [[TMP6]], align 4
 // LLVM:  [[TMP12:%.*]] = shl i64 [[TMP9]], 60
 // LLVM:  [[TMP13:%.*]] = ashr i64 [[TMP12]], 60
 // LLVM:  [[TMP15:%.*]] = trunc i64 [[TMP13]] to i32
@@ -222,16 +222,16 @@ void get_volatile(V* v) {
 // CIR:   [[TMP1:%.*]] = cir.const #cir.int<3> : !s32i
 // CIR:   [[TMP2:%.*]] = cir.load align(8) [[TMP0]] : !cir.ptr<!cir.ptr<!rec_V>>, !cir.ptr<!rec_V>
 // CIR:   [[TMP3:%.*]] = cir.get_member [[TMP2]][0] {name = "b"} : !cir.ptr<!rec_V> -> !cir.ptr<!u64i>
-// CIR:   [[TMP4:%.*]] = cir.set_bitfield(#bfi_b, [[TMP3]] : !cir.ptr<!u64i>, [[TMP1]] : !s32i) {is_volatile} -> !s32i
+// CIR:   [[TMP4:%.*]] = cir.set_bitfield align(4) (#bfi_b, [[TMP3]] : !cir.ptr<!u64i>, [[TMP1]] : !s32i) {is_volatile} -> !s32i
 
 // LLVM: define dso_local void @get_volatile
 // LLVM:   [[TMP0:%.*]] = alloca ptr, i64 1, align 8
 // LLVM:   [[TMP1:%.*]] = load ptr, ptr [[TMP0]], align 8
 // LLVM:   [[TMP2:%.*]] = getelementptr %struct.V, ptr [[TMP1]], i32 0, i32 0
-// LLVM:   [[TMP3:%.*]] = load volatile i64, ptr [[TMP2]], align 8
+// LLVM:   [[TMP3:%.*]] = load volatile i64, ptr [[TMP2]], align 4
 // LLVM:   [[TMP4:%.*]] = and i64 [[TMP3]], -1095216660481
 // LLVM:   [[TMP5:%.*]] = or i64 [[TMP4]], 12884901888
-// LLVM:   store volatile i64 [[TMP5]], ptr [[TMP2]], align 8
+// LLVM:   store volatile i64 [[TMP5]], ptr [[TMP2]], align 4
 
 // OCGC: define dso_local void @get_volatile
 // OCGC:   [[TMP0:%.*]] = alloca ptr, align 8
@@ -249,16 +249,16 @@ void set_volatile(V* v) {
 //CIR:   [[TMP1:%.*]] = cir.const #cir.int<3> : !s32i
 //CIR:   [[TMP2:%.*]] = cir.load align(8) [[TMP0]] : !cir.ptr<!cir.ptr<!rec_V>>, !cir.ptr<!rec_V>
 //CIR:   [[TMP3:%.*]] = cir.get_member [[TMP2]][0] {name = "b"} : !cir.ptr<!rec_V> -> !cir.ptr<!u64i>
-//CIR:   [[TMP4:%.*]] = cir.set_bitfield(#bfi_b, [[TMP3]] : !cir.ptr<!u64i>, [[TMP1]] : !s32i) {is_volatile} -> !s32i
+//CIR:   [[TMP4:%.*]] = cir.set_bitfield align(4) (#bfi_b, [[TMP3]] : !cir.ptr<!u64i>, [[TMP1]] : !s32i) {is_volatile} -> !s32i
 
 // LLVM: define dso_local void @set_volatile
 // LLVM:   [[TMP0:%.*]] = alloca ptr, i64 1, align 8
 // LLVM:   [[TMP1:%.*]] = load ptr, ptr [[TMP0]], align 8
 // LLVM:   [[TMP2:%.*]] = getelementptr %struct.V, ptr [[TMP1]], i32 0, i32 0
-// LLVM:   [[TMP3:%.*]] = load volatile i64, ptr [[TMP2]], align 8
+// LLVM:   [[TMP3:%.*]] = load volatile i64, ptr [[TMP2]], align 4
 // LLVM:   [[TMP4:%.*]] = and i64 [[TMP3]], -1095216660481
 // LLVM:   [[TMP5:%.*]] = or i64 [[TMP4]], 12884901888
-// LLVM:   store volatile i64 [[TMP5]], ptr [[TMP2]], align 8
+// LLVM:   store volatile i64 [[TMP5]], ptr [[TMP2]], align 4
 
 // OGCG: define dso_local void @set_volatile
 // OGCG:   [[TMP0:%.*]] = alloca ptr, align 8
@@ -276,24 +276,24 @@ void unOp(S* s) {
 // CIR:   [[TMP0:%.*]] = cir.alloca !cir.ptr<!rec_S>, !cir.ptr<!cir.ptr<!rec_S>>, ["s", init] {alignment = 8 : i64}
 // CIR:   [[TMP1:%.*]] = cir.load align(8) [[TMP0]] : !cir.ptr<!cir.ptr<!rec_S>>, !cir.ptr<!rec_S>
 // CIR:   [[TMP2:%.*]] = cir.get_member [[TMP1]][0] {name = "d"} : !cir.ptr<!rec_S> -> !cir.ptr<!u64i>
-// CIR:   [[TMP3:%.*]] = cir.get_bitfield(#bfi_d, [[TMP2]] : !cir.ptr<!u64i>) -> !s32i
+// CIR:   [[TMP3:%.*]] = cir.get_bitfield align(4) (#bfi_d, [[TMP2]] : !cir.ptr<!u64i>) -> !s32i
 // CIR:   [[TMP4:%.*]] = cir.unary(inc, [[TMP3]]) nsw : !s32i, !s32i
-// CIR:   cir.set_bitfield(#bfi_d, [[TMP2]] : !cir.ptr<!u64i>, [[TMP4]] : !s32i)
+// CIR:   cir.set_bitfield align(4) (#bfi_d, [[TMP2]] : !cir.ptr<!u64i>, [[TMP4]] : !s32i)
 
 // LLVM: define {{.*@unOp}}
 // LLVM:   [[TMP0:%.*]] = getelementptr %struct.S, ptr [[LOAD0:%.*]], i32 0, i32 0
-// LLVM:   [[TMP1:%.*]] = load i64, ptr [[TMP0]], align 8
+// LLVM:   [[TMP1:%.*]] = load i64, ptr [[TMP0]], align 4
 // LLVM:   [[TMP2:%.*]] = shl i64 [[TMP1]], 13
 // LLVM:   [[TMP3:%.*]] = ashr i64 [[TMP2]], 62
 // LLVM:   [[TMP4:%.*]] = trunc i64 [[TMP3]] to i32
 // LLVM:   [[TMP5:%.*]] = add nsw i32 [[TMP4]], 1
 // LLVM:   [[TMP6:%.*]] = zext i32 [[TMP5]] to i64
-// LLVM:   [[TMP7:%.*]] = load i64, ptr [[TMP0]], align 8
+// LLVM:   [[TMP7:%.*]] = load i64, ptr [[TMP0]], align 4
 // LLVM:   [[TMP8:%.*]] = and i64 [[TMP6]], 3
 // LLVM:   [[TMP9:%.*]] = shl i64 [[TMP8]], 49
 // LLVM:   [[TMP10:%.*]] = and i64 [[TMP7]], -1688849860263937
 // LLVM:   [[TMP11:%.*]] = or i64 [[TMP10]], [[TMP9]]
-// LLVM:   store i64 [[TMP11]], ptr [[TMP0]], align 8
+// LLVM:   store i64 [[TMP11]], ptr [[TMP0]], align 4
 // LLVM:   [[TMP12:%.*]] = shl i64 [[TMP8]], 62
 // LLVM:   [[TMP13:%.*]] = ashr i64 [[TMP12]], 62
 // LLVM:   [[TMP14:%.*]] = trunc i64 [[TMP13]] to i32
diff --git a/clang/test/CIR/CodeGen/bitfields.cpp b/clang/test/CIR/CodeGen/bitfields.cpp
index 6715ebf1f48b6..7650e0b83faf6 100644
--- a/clang/test/CIR/CodeGen/bitfields.cpp
+++ b/clang/test/CIR/CodeGen/bitfields.cpp
@@ -39,14 +39,14 @@ int load_field(S* s) {
 // CIR:   [[TMP0:%.*]] = cir.alloca !cir.ptr<!rec_S>, !cir.ptr<!cir.ptr<!rec_S>>, ["s", init]
 // CIR:   [[TMP1:%.*]] = cir.load{{.*}} [[TMP0]] : !cir.ptr<!cir.ptr<!rec_S>>, !cir.ptr<!rec_S>
 // CIR:   [[TMP2:%.*]] = cir.get_member [[TMP1]][0] {name = "c"} : !cir.ptr<!rec_S> -> !cir.ptr<!u64i>
-// CIR:   [[TMP3:%.*]] = cir.get_bitfield(#bfi_c, [[TMP2]] : !cir.ptr<!u64i>) -> !s32i
+// CIR:   [[TMP3:%.*]] = cir.get_bitfield align(4) (#bfi_c, [[TMP2]] : !cir.ptr<!u64i>) -> !s32i
 
 // LLVM: define dso_local i32 @_Z10load_fieldP1S
 // LLVM:   [[TMP0:%.*]] = alloca ptr, i64 1, align 8
 // LLVM:   [[TMP1:%.*]] = alloca i32, i64 1, align 4
 // LLVM:   [[TMP2:%.*]] = lo...
[truncated]

```
}];

let arguments = (ins
Arg<CIR_PointerType, "the address to store the value", [MemWrite]>:$addr,
CIR_AnyType:$src,
BitfieldInfoAttr:$bitfield_info,
OptionalAttr<I64Attr>:$alignment,
Contributor:
Suggested change
OptionalAttr<I64Attr>:$alignment,
ConfinedAttr<OptionalAttr<I64Attr>, [IntNonNegative]>:$alignment,

Contributor:
Maybe DefaultValuedOptionalAttr would be even better: it would eliminate the unnecessary ternary in builders, and op.getAlignment().value() would reduce to op.getAlignment().
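Combining both suggestions would look roughly like this in the op definition (a sketch only; the exact spelling in CIROps.td may differ):

```tablegen
// Hypothetical: a non-negative alignment attribute with a default of 0, so
// builders need no ternary and lowering can call op.getAlignment() directly.
DefaultValuedOptionalAttr<ConfinedAttr<I64Attr, [IntNonNegative]>, "0">:$alignment,
```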

@andykaylor (Contributor) left a comment:
Great work finding and fixing this! I have just one comment beyond what @xlauko has suggested.

return create<cir::SetBitfieldOp>(loc, resultType, dstAddr, storageType,
src, info.name, info.size, info.offset,
info.isSigned, isLvalueVolatile);
bool isLvalueVolatile, bool useVolatile,
Contributor:
It would be more consistent with createLoad/createStore to use Address for the dstAddr/addr parameter and get the alignment from that within the implementation of these functions.

@bcardosolopes (Member) left a comment:
LGTM after existing comments are addressed, nothing else to add on my part.

@xlauko (Contributor) left a comment:
lgtm after useVolatile is fixed

lv.isVolatile(), false, ptr.getAlignment().getAsAlign().value());
mlir::Value field =
builder.createGetBitfield(getLoc(loc), resLTy, ptr, ptr.getElementType(),
info, lv.isVolatile(), false);
Contributor:
If the argument is to stay, please add a /*useVolatile=*/ comment to the false argument, though as noted before we should remove it entirely.

Contributor Author:
Done

mlir::Value src, const CIRGenBitFieldInfo &info,
bool isLvalueVolatile, bool useVolatile,
unsigned alignment = 0) {
bool isLvalueVolatile, bool useVolatile) {
Contributor:
useVolatile is never used; can we remove it? We can always add it back later.

const CIRGenBitFieldInfo &info,
bool isLvalueVolatile, bool useVolatile,
unsigned alignment = 0) {
bool isLvalueVolatile, bool useVolatile) {
Contributor:
useVolatile is never used; can we remove it? We can always add it back later.

@Andres-Salamanca Andres-Salamanca merged commit b02787d into llvm:main Jul 18, 2025
9 checks passed