[mlir] Add isStatic* size check for ShapedTypes. NFCI. #147085

Merged
merged 5 commits on Jul 7, 2025

Conversation

kuhar (Member) commented Jul 4, 2025

The motivation is to avoid having to negate isDynamic* checks, to avoid double negations, and to allow ShapedType::isStaticDim to be used in ADT functions without having to wrap it in a lambda that performs the negation.

Also add the new functions to C and Python bindings.
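
As a minimal sketch of the call-site cleanup this enables (not part of the patch itself; it mirrors the new isStaticShape implementation in the diff below and assumes the usual MLIR and LLVM ADT headers):

```cpp
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/STLExtras.h"
#include "mlir/IR/BuiltinTypes.h"

// Check that every dimension in a shape is static.
static bool allDimsStatic(llvm::ArrayRef<int64_t> shape) {
  // Before this patch, the dynamic check had to be negated in a lambda:
  //   return llvm::all_of(shape, [](int64_t d) {
  //     return !mlir::ShapedType::isDynamic(d);
  //   });
  // With the new predicate, the check can be passed to ADT helpers directly:
  return llvm::all_of(shape, mlir::ShapedType::isStatic);
}
```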

llvmbot (Member) commented Jul 4, 2025

@llvm/pr-subscribers-mlir-core
@llvm/pr-subscribers-mlir
@llvm/pr-subscribers-mlir-ods

Author: Jakub Kuderski (kuhar)

Changes

The motivation is to avoid having to negate isDynamic* checks, to avoid double negations, and to allow ShapedType::isStaticDim to be used in ADT functions without having to wrap it in a lambda that performs the negation.

Also add the new functions to C and Python bindings.


Patch is 59.48 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/147085.diff

37 Files Affected:

  • (modified) mlir/include/mlir-c/BuiltinTypes.h (+12-1)
  • (modified) mlir/include/mlir/IR/BuiltinTypeInterfaces.td (+20-3)
  • (modified) mlir/lib/Bindings/Python/IRTypes.cpp (+24)
  • (modified) mlir/lib/CAPI/IR/BuiltinTypes.cpp (+13)
  • (modified) mlir/lib/Conversion/LLVMCommon/MemRefBuilder.cpp (+1-1)
  • (modified) mlir/lib/Conversion/LLVMCommon/TypeConverter.cpp (+1-1)
  • (modified) mlir/lib/Conversion/MemRefToLLVM/MemRefToLLVM.cpp (+4-4)
  • (modified) mlir/lib/Conversion/TosaToLinalg/TosaToLinalg.cpp (+1-1)
  • (modified) mlir/lib/Conversion/TosaToTensor/TosaToTensor.cpp (+2-2)
  • (modified) mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp (+1-1)
  • (modified) mlir/lib/Dialect/Bufferization/Transforms/BufferResultsToOutParams.cpp (+1-1)
  • (modified) mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp (+3-3)
  • (modified) mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp (+1-1)
  • (modified) mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp (+2-2)
  • (modified) mlir/lib/Dialect/Linalg/Utils/Utils.cpp (+1-1)
  • (modified) mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp (+13-13)
  • (modified) mlir/lib/Dialect/MemRef/Transforms/ExpandStridedMetadata.cpp (+4-4)
  • (modified) mlir/lib/Dialect/Mesh/IR/MeshOps.cpp (+7-8)
  • (modified) mlir/lib/Dialect/Mesh/Transforms/Spmdization.cpp (+2-2)
  • (modified) mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp (+5-5)
  • (modified) mlir/lib/Dialect/SparseTensor/Transforms/SparseReinterpretMap.cpp (+1-1)
  • (modified) mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp (+1-1)
  • (modified) mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp (+2-2)
  • (modified) mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp (+1-1)
  • (modified) mlir/lib/Dialect/Tensor/IR/TensorInferTypeOpInterfaceImpl.cpp (+1-1)
  • (modified) mlir/lib/Dialect/Tensor/IR/TensorOps.cpp (+4-4)
  • (modified) mlir/lib/Dialect/Tosa/IR/TosaOps.cpp (+26-36)
  • (modified) mlir/lib/Dialect/Utils/ReshapeOpsUtils.cpp (+1-1)
  • (modified) mlir/lib/Dialect/Utils/StaticValueUtils.cpp (+2-2)
  • (modified) mlir/lib/Dialect/Vector/IR/VectorOps.cpp (+2-2)
  • (modified) mlir/lib/Dialect/Vector/Utils/VectorUtils.cpp (+1-1)
  • (modified) mlir/lib/IR/BuiltinAttributes.cpp (+4-4)
  • (modified) mlir/lib/IR/BuiltinTypes.cpp (+2-2)
  • (modified) mlir/lib/IR/TypeUtilities.cpp (+1-1)
  • (modified) mlir/lib/Interfaces/ViewLikeInterface.cpp (+2-2)
  • (modified) mlir/python/mlir/_mlir_libs/_mlir/ir.pyi (+13)
  • (modified) mlir/test/python/ir/builtin_types.py (+17)
diff --git a/mlir/include/mlir-c/BuiltinTypes.h b/mlir/include/mlir-c/BuiltinTypes.h
index 6875fab7bf796..a73d57f9362fd 100644
--- a/mlir/include/mlir-c/BuiltinTypes.h
+++ b/mlir/include/mlir-c/BuiltinTypes.h
@@ -292,6 +292,9 @@ MLIR_CAPI_EXPORTED bool mlirShapedTypeHasStaticShape(MlirType type);
 /// Checks whether the dim-th dimension of the given shaped type is dynamic.
 MLIR_CAPI_EXPORTED bool mlirShapedTypeIsDynamicDim(MlirType type, intptr_t dim);
 
+/// Checks whether the dim-th dimension of the given shaped type is static.
+MLIR_CAPI_EXPORTED bool mlirShapedTypeIsStaticDim(MlirType type, intptr_t dim);
+
 /// Returns the dim-th dimension of the given ranked shaped type.
 MLIR_CAPI_EXPORTED int64_t mlirShapedTypeGetDimSize(MlirType type,
                                                     intptr_t dim);
@@ -300,14 +303,22 @@ MLIR_CAPI_EXPORTED int64_t mlirShapedTypeGetDimSize(MlirType type,
 /// in shaped types.
 MLIR_CAPI_EXPORTED bool mlirShapedTypeIsDynamicSize(int64_t size);
 
+/// Checks whether the given shaped type dimension value is statically-sized.
+MLIR_CAPI_EXPORTED bool mlirShapedTypeIsStaticSize(int64_t size);
+
 /// Returns the value indicating a dynamic size in a shaped type. Prefer
-/// mlirShapedTypeIsDynamicSize to direct comparisons with this value.
+/// mlirShapedTypeIsDynamicSize and mlirShapedTypeIsStaticSize to direct
+/// comparisons with this value.
 MLIR_CAPI_EXPORTED int64_t mlirShapedTypeGetDynamicSize(void);
 
 /// Checks whether the given value is used as a placeholder for dynamic strides
 /// and offsets in shaped types.
 MLIR_CAPI_EXPORTED bool mlirShapedTypeIsDynamicStrideOrOffset(int64_t val);
 
+/// Checks whether the given dimension value of a stride or an offset is
+/// statically-sized.
+MLIR_CAPI_EXPORTED bool mlirShapedTypeIsStaticStrideOrOffset(int64_t val);
+
 /// Returns the value indicating a dynamic stride or offset in a shaped type.
 /// Prefer mlirShapedTypeGetDynamicStrideOrOffset to direct comparisons with
 /// this value.
diff --git a/mlir/include/mlir/IR/BuiltinTypeInterfaces.td b/mlir/include/mlir/IR/BuiltinTypeInterfaces.td
index 367aeb6ac512b..91ffe6572ac41 100644
--- a/mlir/include/mlir/IR/BuiltinTypeInterfaces.td
+++ b/mlir/include/mlir/IR/BuiltinTypeInterfaces.td
@@ -36,7 +36,7 @@ def VectorElementTypeInterface : TypeInterface<"VectorElementTypeInterface"> {
     This may change in the future, for example, to require types to provide
     their size or alignment given a data layout. Please post an RFC before
     adding this interface to additional types. Implementing this interface on
-    downstream types is discourged, until we specified the exact properties of
+    downstream types is discouraged, until we specified the exact properties of
     a vector element type in more detail.
   }];
 }
@@ -221,7 +221,17 @@ def ShapedTypeInterface : TypeInterface<"ShapedType"> {
 
     /// Whether the given shape has any size that indicates a dynamic dimension.
     static bool isDynamicShape(ArrayRef<int64_t> dSizes) {
-      return any_of(dSizes, [](int64_t dSize) { return isDynamic(dSize); });
+      return llvm::any_of(dSizes, isDynamic);
+    }
+
+    /// Whether the given dimension size indicates a statically-sized dimension.
+    static constexpr bool isStatic(int64_t dValue) {
+      return !isDynamic(dValue);
+    }
+
+    /// Whether the given shape has static dimensions only.
+    static bool isStaticShape(ArrayRef<int64_t> dSizes) {
+      return llvm::all_of(dSizes, isStatic);
     }
 
     /// Return the number of elements present in the given shape.
@@ -273,11 +283,18 @@ def ShapedTypeInterface : TypeInterface<"ShapedType"> {
       return ::mlir::ShapedType::isDynamic($_type.getShape()[idx]);
     }
 
+    /// Returns true if this dimension has a static size (for ranked types);
+    /// aborts for unranked types.
+    bool isStaticDim(unsigned idx) const {
+      assert(idx < getRank() && "invalid index for shaped type");
+      return ::mlir::ShapedType::isStatic($_type.getShape()[idx]);
+    }
+
     /// Returns if this type has a static shape, i.e. if the type is ranked and
     /// all dimensions have known size (>= 0).
     bool hasStaticShape() const {
       return $_type.hasRank() &&
-             !::mlir::ShapedType::isDynamicShape($_type.getShape());
+             ::mlir::ShapedType::isStaticShape($_type.getShape());
     }
 
     /// Returns if this type has a static shape and the shape is equal to
diff --git a/mlir/lib/Bindings/Python/IRTypes.cpp b/mlir/lib/Bindings/Python/IRTypes.cpp
index 0f2719c10a027..b11e3f75b8463 100644
--- a/mlir/lib/Bindings/Python/IRTypes.cpp
+++ b/mlir/lib/Bindings/Python/IRTypes.cpp
@@ -544,6 +544,15 @@ void mlir::PyShapedType::bindDerived(ClassTy &c) {
       nb::arg("dim"),
       "Returns whether the dim-th dimension of the given shaped type is "
       "dynamic.");
+  c.def(
+      "is_static_dim",
+      [](PyShapedType &self, intptr_t dim) -> bool {
+        self.requireHasRank();
+        return mlirShapedTypeIsStaticDim(self, dim);
+      },
+      nb::arg("dim"),
+      "Returns whether the dim-th dimension of the given shaped type is "
+      "static.");
   c.def(
       "get_dim_size",
       [](PyShapedType &self, intptr_t dim) {
@@ -558,6 +567,12 @@ void mlir::PyShapedType::bindDerived(ClassTy &c) {
       nb::arg("dim_size"),
       "Returns whether the given dimension size indicates a dynamic "
       "dimension.");
+  c.def_static(
+      "is_static_size",
+      [](int64_t size) -> bool { return mlirShapedTypeIsStaticSize(size); },
+      nb::arg("dim_size"),
+      "Returns whether the given dimension size indicates a static "
+      "dimension.");
   c.def(
       "is_dynamic_stride_or_offset",
       [](PyShapedType &self, int64_t val) -> bool {
@@ -567,6 +582,15 @@ void mlir::PyShapedType::bindDerived(ClassTy &c) {
       nb::arg("dim_size"),
       "Returns whether the given value is used as a placeholder for dynamic "
       "strides and offsets in shaped types.");
+  c.def(
+      "is_static_stride_or_offset",
+      [](PyShapedType &self, int64_t val) -> bool {
+        self.requireHasRank();
+        return mlirShapedTypeIsStaticStrideOrOffset(val);
+      },
+      nb::arg("dim_size"),
+      "Returns whether the given shaped type stride or offset value is "
+      "statically-sized.");
   c.def_prop_ro(
       "shape",
       [](PyShapedType &self) {
diff --git a/mlir/lib/CAPI/IR/BuiltinTypes.cpp b/mlir/lib/CAPI/IR/BuiltinTypes.cpp
index a080adf0f8103..9d8554aabff8a 100644
--- a/mlir/lib/CAPI/IR/BuiltinTypes.cpp
+++ b/mlir/lib/CAPI/IR/BuiltinTypes.cpp
@@ -332,6 +332,11 @@ bool mlirShapedTypeIsDynamicDim(MlirType type, intptr_t dim) {
       .isDynamicDim(static_cast<unsigned>(dim));
 }
 
+bool mlirShapedTypeIsStaticDim(MlirType type, intptr_t dim) {
+  return llvm::cast<ShapedType>(unwrap(type))
+      .isStaticDim(static_cast<unsigned>(dim));
+}
+
 int64_t mlirShapedTypeGetDimSize(MlirType type, intptr_t dim) {
   return llvm::cast<ShapedType>(unwrap(type))
       .getDimSize(static_cast<unsigned>(dim));
@@ -343,10 +348,18 @@ bool mlirShapedTypeIsDynamicSize(int64_t size) {
   return ShapedType::isDynamic(size);
 }
 
+bool mlirShapedTypeIsStaticSize(int64_t size) {
+  return ShapedType::isStatic(size);
+}
+
 bool mlirShapedTypeIsDynamicStrideOrOffset(int64_t val) {
   return ShapedType::isDynamic(val);
 }
 
+bool mlirShapedTypeIsStaticStrideOrOffset(int64_t val) {
+  return ShapedType::isStatic(val);
+}
+
 int64_t mlirShapedTypeGetDynamicStrideOrOffset() {
   return ShapedType::kDynamic;
 }
diff --git a/mlir/lib/Conversion/LLVMCommon/MemRefBuilder.cpp b/mlir/lib/Conversion/LLVMCommon/MemRefBuilder.cpp
index 86d6643820376..e34d5f74d232f 100644
--- a/mlir/lib/Conversion/LLVMCommon/MemRefBuilder.cpp
+++ b/mlir/lib/Conversion/LLVMCommon/MemRefBuilder.cpp
@@ -53,7 +53,7 @@ MemRefDescriptor MemRefDescriptor::fromStaticShape(
 
   // Extract all strides and offsets and verify they are static.
   auto [strides, offset] = type.getStridesAndOffset();
-  assert(!ShapedType::isDynamic(offset) && "expected static offset");
+  assert(ShapedType::isStatic(offset) && "expected static offset");
   assert(!llvm::any_of(strides, ShapedType::isDynamic) &&
          "expected static strides");
 
diff --git a/mlir/lib/Conversion/LLVMCommon/TypeConverter.cpp b/mlir/lib/Conversion/LLVMCommon/TypeConverter.cpp
index 57c8f4402cf4b..efecea2d461a7 100644
--- a/mlir/lib/Conversion/LLVMCommon/TypeConverter.cpp
+++ b/mlir/lib/Conversion/LLVMCommon/TypeConverter.cpp
@@ -610,7 +610,7 @@ bool LLVMTypeConverter::canConvertToBarePtr(BaseMemRefType type) {
     if (ShapedType::isDynamic(stride))
       return false;
 
-  return !ShapedType::isDynamic(offset);
+  return ShapedType::isStatic(offset);
 }
 
 /// Convert a memref type to a bare pointer to the memref element type.
diff --git a/mlir/lib/Conversion/MemRefToLLVM/MemRefToLLVM.cpp b/mlir/lib/Conversion/MemRefToLLVM/MemRefToLLVM.cpp
index 7484e4b07390e..d767a24f6d698 100644
--- a/mlir/lib/Conversion/MemRefToLLVM/MemRefToLLVM.cpp
+++ b/mlir/lib/Conversion/MemRefToLLVM/MemRefToLLVM.cpp
@@ -44,7 +44,7 @@ static constexpr LLVM::GEPNoWrapFlags kNoWrapFlags =
 namespace {
 
 static bool isStaticStrideOrOffset(int64_t strideOrOffset) {
-  return !ShapedType::isDynamic(strideOrOffset);
+  return ShapedType::isStatic(strideOrOffset);
 }
 
 static FailureOr<LLVM::LLVMFuncOp>
@@ -1469,7 +1469,7 @@ struct MemRefReshapeOpLowering
       Value stride = nullptr;
       int64_t targetRank = targetMemRefType.getRank();
       for (auto i : llvm::reverse(llvm::seq<int64_t>(0, targetRank))) {
-        if (!ShapedType::isDynamic(strides[i])) {
+        if (ShapedType::isStatic(strides[i])) {
           // If the stride for this dimension is dynamic, then use the product
           // of the sizes of the inner dimensions.
           stride =
@@ -1723,7 +1723,7 @@ struct ViewOpLowering : public ConvertOpToLLVMPattern<memref::ViewOp> {
                 ArrayRef<int64_t> shape, ValueRange dynamicSizes, unsigned idx,
                 Type indexType) const {
     assert(idx < shape.size());
-    if (!ShapedType::isDynamic(shape[idx]))
+    if (ShapedType::isStatic(shape[idx]))
       return createIndexAttrConstant(rewriter, loc, indexType, shape[idx]);
     // Count the number of dynamic dims in range [0, idx]
     unsigned nDynamic =
@@ -1739,7 +1739,7 @@ struct ViewOpLowering : public ConvertOpToLLVMPattern<memref::ViewOp> {
                   ArrayRef<int64_t> strides, Value nextSize,
                   Value runningStride, unsigned idx, Type indexType) const {
     assert(idx < strides.size());
-    if (!ShapedType::isDynamic(strides[idx]))
+    if (ShapedType::isStatic(strides[idx]))
       return createIndexAttrConstant(rewriter, loc, indexType, strides[idx]);
     if (nextSize)
       return runningStride
diff --git a/mlir/lib/Conversion/TosaToLinalg/TosaToLinalg.cpp b/mlir/lib/Conversion/TosaToLinalg/TosaToLinalg.cpp
index c2be08ef40f21..c3ce71ee2c82c 100644
--- a/mlir/lib/Conversion/TosaToLinalg/TosaToLinalg.cpp
+++ b/mlir/lib/Conversion/TosaToLinalg/TosaToLinalg.cpp
@@ -759,7 +759,7 @@ computeTargetSize(PatternRewriter &rewriter, Location loc, IndexPool &indexPool,
   // dimension greater than 1 with a different value is undefined behavior.
   for (auto operand : operands) {
     auto size = cast<RankedTensorType>(operand.getType()).getDimSize(dim);
-    if (!ShapedType::isDynamic(size) && size > 1)
+    if (ShapedType::isStatic(size) && size > 1)
       return {rewriter.getIndexAttr(size), operand};
   }
 
diff --git a/mlir/lib/Conversion/TosaToTensor/TosaToTensor.cpp b/mlir/lib/Conversion/TosaToTensor/TosaToTensor.cpp
index 615c7ca1cfd15..f73821c4d35a2 100644
--- a/mlir/lib/Conversion/TosaToTensor/TosaToTensor.cpp
+++ b/mlir/lib/Conversion/TosaToTensor/TosaToTensor.cpp
@@ -84,7 +84,7 @@ TensorType inferReshapeExpandedType(TensorType inputType,
         return totalSize / totalSizeNoPlaceholder;
       });
 
-  bool resultIsStatic = !ShapedType::isDynamicShape(resultShape);
+  bool resultIsStatic = ShapedType::isStaticShape(resultShape);
 
   // A syntactic restriction in 'tensor.expand_shape' forbids a dynamically
   // shaped input from being reshaped into a statically shaped result. We may
@@ -306,7 +306,7 @@ class SliceConverter : public OpConversionPattern<tosa::SliceOp> {
       int64_t size = i.value();
       size_t index = i.index();
       sizes.push_back(size == -1 ? ShapedType::kDynamic : size);
-      if (!ShapedType::isDynamic(sizes.back()))
+      if (ShapedType::isStatic(sizes.back()))
         continue;
 
       auto dim = rewriter.create<tensor::DimOp>(loc, input, index);
diff --git a/mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp b/mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp
index 66949c96798de..5c1d42db18c47 100644
--- a/mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp
+++ b/mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp
@@ -44,7 +44,7 @@ FailureOr<Value> mlir::bufferization::castOrReallocMemRefValue(
         failed(target.getStridesAndOffset(targetStrides, targetOffset)))
       return false;
     auto dynamicToStatic = [](int64_t a, int64_t b) {
-      return ShapedType::isDynamic(a) && !ShapedType::isDynamic(b);
+      return ShapedType::isDynamic(a) && ShapedType::isStatic(b);
     };
     if (dynamicToStatic(sourceOffset, targetOffset))
       return false;
diff --git a/mlir/lib/Dialect/Bufferization/Transforms/BufferResultsToOutParams.cpp b/mlir/lib/Dialect/Bufferization/Transforms/BufferResultsToOutParams.cpp
index acf5f7767d12a..15e03fbefe9c5 100644
--- a/mlir/lib/Dialect/Bufferization/Transforms/BufferResultsToOutParams.cpp
+++ b/mlir/lib/Dialect/Bufferization/Transforms/BufferResultsToOutParams.cpp
@@ -33,7 +33,7 @@ static bool hasFullyDynamicLayoutMap(MemRefType type) {
     return false;
   if (!llvm::all_of(strides, ShapedType::isDynamic))
     return false;
-  if (!ShapedType::isDynamic(offset))
+  if (ShapedType::isStatic(offset))
     return false;
   return true;
 }
diff --git a/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp b/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
index 8075df730ccc6..1edf27201ee24 100644
--- a/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
+++ b/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
@@ -4569,7 +4569,7 @@ static SmallVector<OpFoldResult> getMixedTilesImpl(OpTy op) {
   SmallVector<OpFoldResult> mixedInnerTiles;
   unsigned dynamicValIndex = 0;
   for (int64_t staticTile : op.getStaticInnerTiles()) {
-    if (!ShapedType::isDynamic(staticTile))
+    if (ShapedType::isStatic(staticTile))
       mixedInnerTiles.push_back(builder.getI64IntegerAttr(staticTile));
     else
       mixedInnerTiles.push_back(op.getInnerTiles()[dynamicValIndex++]);
@@ -4834,7 +4834,7 @@ bool PackOp::requirePaddingValue(ArrayRef<int64_t> inputShape,
     std::optional<int64_t> constantTile = getConstantIntValue(tileSize);
 
     if (!constantTile) {
-      if (!ShapedType::isDynamic(outputTileSizes[pos]) &&
+      if (ShapedType::isStatic(outputTileSizes[pos]) &&
           (inputShape[pos] % outputTileSizes[pos] != 0))
         return true;
     } else if (inputShape[pos] % (*constantTile) != 0) {
@@ -4940,7 +4940,7 @@ SmallVector<OpFoldResult> PackOp::getResultShape(
   // use dispatchIndexOpFoldResults on the result, and rely on exact number of
   // dynamic dims returned by that.
   for (unsigned i = 0; i < resultDims.size(); ++i) {
-    if (!ShapedType::isDynamic(resultTypeShape[i]))
+    if (ShapedType::isStatic(resultTypeShape[i]))
       continue;
     resultDims[i] =
         getValueOrCreateConstantIndexOp(builder, loc, resultDims[i]);
diff --git a/mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp b/mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp
index d0031e047b770..6907df096252e 100644
--- a/mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp
+++ b/mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp
@@ -2065,7 +2065,7 @@ transform::PadOp::apply(transform::TransformRewriter &rewriter,
       rewriter.setInsertionPoint(linalgTarget);
       for (OpOperand &operand : linalgTarget->getOpOperands()) {
         for (auto [i, dim] : llvm::enumerate(linalgTarget.getShape(&operand))) {
-          if (!ShapedType::isDynamic(dim))
+          if (ShapedType::isStatic(dim))
             continue;
           options.setSizeToPadTo(operand.getOperandNumber(), i,
                                  tensor::getMixedSize(rewriter,
diff --git a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
index f8592e2ca2174..e3aebce8dfc09 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
@@ -338,7 +338,7 @@ VectorizationState::precomputeIterSpaceValueSizes(RewriterBase &rewriter,
                                                   LinalgOp linalgOp) {
   // TODO: Support 0-d vectors.
   for (int vecDim = 0, end = canonicalVecShape.size(); vecDim < end; ++vecDim) {
-    if (!ShapedType::isDynamic(iterSpaceStaticSizes[vecDim])) {
+    if (ShapedType::isStatic(iterSpaceStaticSizes[vecDim])) {
       // Create constant index op for static dimensions.
       iterSpaceValueSizes.push_back(rewriter.create<arith::ConstantIndexOp>(
           linalgOp.getLoc(), iterSpaceStaticSizes[vecDim]));
@@ -1655,7 +1655,7 @@ createWriteOrMaskedWrite(OpBuilder &builder, Location loc, Value vecToStore,
     for (unsigned i = 0; i < vecToStoreRank; i++)
       inBoundsVal[i] =
           (destShape[destRank - vecToStoreRank + i] >= vecToStoreShape[i]) &&
-          !ShapedType::isDynamic(destShape[destRank - vecToStoreRank + i]);
+          ShapedType::isStatic(destShape[destRank - vecToStoreRank + i]);
   }
 
   // If missing, initialize the write indices to 0.
diff --git a/mlir/lib/Dialect/Linalg/Utils/Utils.cpp b/mlir/lib/Dialect/Linalg/Utils/Utils.cpp
index 209309ddb413a..472d7479dad0c 100644
--- a/mlir/lib/Dialect/Linalg/Utils/Utils.cpp
+++ b/mlir/lib/Dialect/Linalg/Utils/Utils.cpp
@@ -697,7 +697,7 @@ computeSliceParameters(OpBuilder &builder, Location loc, Value valueToTile,
     int64_t shapeSize = shape[r];
     std::optional<int64_t> sizeCst = getConstantIntValue(size);
     auto hasTileSizeOne = sizeCst == 1;
-    auto dividesEvenly = sizeCst && !ShapedType::isDynamic(shapeSize) &&
+    auto dividesEvenly = sizeCst && ShapedType::isStatic(shapeSize) &&
                          ((shapeSize % *sizeCst) == 0);
     if (!hasTileSizeOne && !dividesEvenly) {
       LLVM_DEBUG(llvm::dbgs() << "makeTiledShape: shapeSize=" << shapeSize
diff --git a/mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp b/mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp
index 3c4d2562e6999..2371cff1043d6 100644
--- a/mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp
+++ b/mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp
@@ -99,7 +99,7 @@ static void constifyIndexValues(SmallVectorImpl<OpFoldResult> &values,
          "incorrect number of const values");
   for (auto [i, cstVal] : llvm::enumerate(constValues)) {
     Builder builder(values[i].getContext());
-    if (!ShapedType::isDynamic(cstVal)) {
+    if (ShapedType::isStatic(cstVal)) {
       // Constant value is known, use it directly.
       values[i] = builder.getIndexAttr(cstVal);
       continue;
@@ -189,7 +189,7 @@ struct SimplifyAllocConst : public OpRewritePattern<AllocLikeOp> {
     for (unsigned dim = 0, e = memrefType.getRank(); dim < e; ++dim) {
       int64_t dimSize = memrefType.getDimSize(dim);
       // If this is already static dimension, keep it.
-      if (!ShapedType::isDynamic(dimSize)) {
+      if (ShapedType::isStatic(dimSize)) {
         newShapeConstants.push_back(dimSize);
         continue;
       }
@@ -615,21 +615,21 @@ bool CastOp::canFoldIntoConsumerOp(CastOp castOp) {
   for (auto it : llvm::zip(sourceType.getShape(), resultType.getShape())) {
     auto ss = std::get<0>(it), st = std::get<1>(it);
     if (ss != st)
-      if (ShapedType::isDynamic(ss) && !ShapedType::isDynamic(st))
+      if (ShapedType::isDynamic(ss) && ShapedType::isStatic(st))
         return false;
   }
 
   // If cast is towards more static offset along any dimension, don't fold.
   if (sourceOffset != resultOffset)
     if (ShapedType::isDynamic(sourceOffset) &&
-        !ShapedType::isDynamic(resultOffset))
+        ShapedType::isStatic(resultOffset))
       return false;
 
   // If cast is towards more static strides along any dimension, don't fold.
   for (au...
[truncated]
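
For reference, a hedged usage sketch of the new C API entry points shown in the diff above (the shaped type value is assumed to be ranked and obtained elsewhere; the example values are illustrative):

```cpp
#include "mlir-c/BuiltinTypes.h"

// Query static-ness through the new C API predicates.
static void inspectShapedType(MlirType shapedType) {
  // Per-dimension query on a ranked shaped type.
  if (mlirShapedTypeIsStaticDim(shapedType, /*dim=*/0)) {
    // The size is a concrete value; no comparison against the dynamic
    // sentinel (mlirShapedTypeGetDynamicSize()) is needed.
    int64_t size = mlirShapedTypeGetDimSize(shapedType, /*dim=*/0);
    (void)size;
  }
  // Sentinel checks on raw values, without negating the dynamic variants.
  bool staticSize = mlirShapedTypeIsStaticSize(4);              // true
  bool staticStride = mlirShapedTypeIsStaticStrideOrOffset(1);  // true
  (void)staticSize;
  (void)staticStride;
}
```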

llvmbot (Member) commented Jul 4, 2025

@llvm/pr-subscribers-mlir-tensor

llvmbot (Member) commented Jul 4, 2025

@llvm/pr-subscribers-mlir-linalg

-  return !ShapedType::isDynamic(strideOrOffset);
+  return ShapedType::isStatic(strideOrOffset);
 }
 
 static FailureOr<LLVM::LLVMFuncOp>
@@ -1469,7 +1469,7 @@ struct MemRefReshapeOpLowering
       Value stride = nullptr;
       int64_t targetRank = targetMemRefType.getRank();
       for (auto i : llvm::reverse(llvm::seq<int64_t>(0, targetRank))) {
-        if (!ShapedType::isDynamic(strides[i])) {
+        if (ShapedType::isStatic(strides[i])) {
           // If the stride for this dimension is dynamic, then use the product
           // of the sizes of the inner dimensions.
           stride =
@@ -1723,7 +1723,7 @@ struct ViewOpLowering : public ConvertOpToLLVMPattern<memref::ViewOp> {
                 ArrayRef<int64_t> shape, ValueRange dynamicSizes, unsigned idx,
                 Type indexType) const {
     assert(idx < shape.size());
-    if (!ShapedType::isDynamic(shape[idx]))
+    if (ShapedType::isStatic(shape[idx]))
       return createIndexAttrConstant(rewriter, loc, indexType, shape[idx]);
     // Count the number of dynamic dims in range [0, idx]
     unsigned nDynamic =
@@ -1739,7 +1739,7 @@ struct ViewOpLowering : public ConvertOpToLLVMPattern<memref::ViewOp> {
                   ArrayRef<int64_t> strides, Value nextSize,
                   Value runningStride, unsigned idx, Type indexType) const {
     assert(idx < strides.size());
-    if (!ShapedType::isDynamic(strides[idx]))
+    if (ShapedType::isStatic(strides[idx]))
       return createIndexAttrConstant(rewriter, loc, indexType, strides[idx]);
     if (nextSize)
       return runningStride
diff --git a/mlir/lib/Conversion/TosaToLinalg/TosaToLinalg.cpp b/mlir/lib/Conversion/TosaToLinalg/TosaToLinalg.cpp
index c2be08ef40f21..c3ce71ee2c82c 100644
--- a/mlir/lib/Conversion/TosaToLinalg/TosaToLinalg.cpp
+++ b/mlir/lib/Conversion/TosaToLinalg/TosaToLinalg.cpp
@@ -759,7 +759,7 @@ computeTargetSize(PatternRewriter &rewriter, Location loc, IndexPool &indexPool,
   // dimension greater than 1 with a different value is undefined behavior.
   for (auto operand : operands) {
     auto size = cast<RankedTensorType>(operand.getType()).getDimSize(dim);
-    if (!ShapedType::isDynamic(size) && size > 1)
+    if (ShapedType::isStatic(size) && size > 1)
       return {rewriter.getIndexAttr(size), operand};
   }
 
diff --git a/mlir/lib/Conversion/TosaToTensor/TosaToTensor.cpp b/mlir/lib/Conversion/TosaToTensor/TosaToTensor.cpp
index 615c7ca1cfd15..f73821c4d35a2 100644
--- a/mlir/lib/Conversion/TosaToTensor/TosaToTensor.cpp
+++ b/mlir/lib/Conversion/TosaToTensor/TosaToTensor.cpp
@@ -84,7 +84,7 @@ TensorType inferReshapeExpandedType(TensorType inputType,
         return totalSize / totalSizeNoPlaceholder;
       });
 
-  bool resultIsStatic = !ShapedType::isDynamicShape(resultShape);
+  bool resultIsStatic = ShapedType::isStaticShape(resultShape);
 
   // A syntactic restriction in 'tensor.expand_shape' forbids a dynamically
   // shaped input from being reshaped into a statically shaped result. We may
@@ -306,7 +306,7 @@ class SliceConverter : public OpConversionPattern<tosa::SliceOp> {
       int64_t size = i.value();
       size_t index = i.index();
       sizes.push_back(size == -1 ? ShapedType::kDynamic : size);
-      if (!ShapedType::isDynamic(sizes.back()))
+      if (ShapedType::isStatic(sizes.back()))
         continue;
 
       auto dim = rewriter.create<tensor::DimOp>(loc, input, index);
diff --git a/mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp b/mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp
index 66949c96798de..5c1d42db18c47 100644
--- a/mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp
+++ b/mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp
@@ -44,7 +44,7 @@ FailureOr<Value> mlir::bufferization::castOrReallocMemRefValue(
         failed(target.getStridesAndOffset(targetStrides, targetOffset)))
       return false;
     auto dynamicToStatic = [](int64_t a, int64_t b) {
-      return ShapedType::isDynamic(a) && !ShapedType::isDynamic(b);
+      return ShapedType::isDynamic(a) && ShapedType::isStatic(b);
     };
     if (dynamicToStatic(sourceOffset, targetOffset))
       return false;
diff --git a/mlir/lib/Dialect/Bufferization/Transforms/BufferResultsToOutParams.cpp b/mlir/lib/Dialect/Bufferization/Transforms/BufferResultsToOutParams.cpp
index acf5f7767d12a..15e03fbefe9c5 100644
--- a/mlir/lib/Dialect/Bufferization/Transforms/BufferResultsToOutParams.cpp
+++ b/mlir/lib/Dialect/Bufferization/Transforms/BufferResultsToOutParams.cpp
@@ -33,7 +33,7 @@ static bool hasFullyDynamicLayoutMap(MemRefType type) {
     return false;
   if (!llvm::all_of(strides, ShapedType::isDynamic))
     return false;
-  if (!ShapedType::isDynamic(offset))
+  if (ShapedType::isStatic(offset))
     return false;
   return true;
 }
diff --git a/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp b/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
index 8075df730ccc6..1edf27201ee24 100644
--- a/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
+++ b/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
@@ -4569,7 +4569,7 @@ static SmallVector<OpFoldResult> getMixedTilesImpl(OpTy op) {
   SmallVector<OpFoldResult> mixedInnerTiles;
   unsigned dynamicValIndex = 0;
   for (int64_t staticTile : op.getStaticInnerTiles()) {
-    if (!ShapedType::isDynamic(staticTile))
+    if (ShapedType::isStatic(staticTile))
       mixedInnerTiles.push_back(builder.getI64IntegerAttr(staticTile));
     else
       mixedInnerTiles.push_back(op.getInnerTiles()[dynamicValIndex++]);
@@ -4834,7 +4834,7 @@ bool PackOp::requirePaddingValue(ArrayRef<int64_t> inputShape,
     std::optional<int64_t> constantTile = getConstantIntValue(tileSize);
 
     if (!constantTile) {
-      if (!ShapedType::isDynamic(outputTileSizes[pos]) &&
+      if (ShapedType::isStatic(outputTileSizes[pos]) &&
           (inputShape[pos] % outputTileSizes[pos] != 0))
         return true;
     } else if (inputShape[pos] % (*constantTile) != 0) {
@@ -4940,7 +4940,7 @@ SmallVector<OpFoldResult> PackOp::getResultShape(
   // use dispatchIndexOpFoldResults on the result, and rely on exact number of
   // dynamic dims returned by that.
   for (unsigned i = 0; i < resultDims.size(); ++i) {
-    if (!ShapedType::isDynamic(resultTypeShape[i]))
+    if (ShapedType::isStatic(resultTypeShape[i]))
       continue;
     resultDims[i] =
         getValueOrCreateConstantIndexOp(builder, loc, resultDims[i]);
diff --git a/mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp b/mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp
index d0031e047b770..6907df096252e 100644
--- a/mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp
+++ b/mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp
@@ -2065,7 +2065,7 @@ transform::PadOp::apply(transform::TransformRewriter &rewriter,
       rewriter.setInsertionPoint(linalgTarget);
       for (OpOperand &operand : linalgTarget->getOpOperands()) {
         for (auto [i, dim] : llvm::enumerate(linalgTarget.getShape(&operand))) {
-          if (!ShapedType::isDynamic(dim))
+          if (ShapedType::isStatic(dim))
             continue;
           options.setSizeToPadTo(operand.getOperandNumber(), i,
                                  tensor::getMixedSize(rewriter,
diff --git a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
index f8592e2ca2174..e3aebce8dfc09 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
@@ -338,7 +338,7 @@ VectorizationState::precomputeIterSpaceValueSizes(RewriterBase &rewriter,
                                                   LinalgOp linalgOp) {
   // TODO: Support 0-d vectors.
   for (int vecDim = 0, end = canonicalVecShape.size(); vecDim < end; ++vecDim) {
-    if (!ShapedType::isDynamic(iterSpaceStaticSizes[vecDim])) {
+    if (ShapedType::isStatic(iterSpaceStaticSizes[vecDim])) {
       // Create constant index op for static dimensions.
       iterSpaceValueSizes.push_back(rewriter.create<arith::ConstantIndexOp>(
           linalgOp.getLoc(), iterSpaceStaticSizes[vecDim]));
@@ -1655,7 +1655,7 @@ createWriteOrMaskedWrite(OpBuilder &builder, Location loc, Value vecToStore,
     for (unsigned i = 0; i < vecToStoreRank; i++)
       inBoundsVal[i] =
           (destShape[destRank - vecToStoreRank + i] >= vecToStoreShape[i]) &&
-          !ShapedType::isDynamic(destShape[destRank - vecToStoreRank + i]);
+          ShapedType::isStatic(destShape[destRank - vecToStoreRank + i]);
   }
 
   // If missing, initialize the write indices to 0.
diff --git a/mlir/lib/Dialect/Linalg/Utils/Utils.cpp b/mlir/lib/Dialect/Linalg/Utils/Utils.cpp
index 209309ddb413a..472d7479dad0c 100644
--- a/mlir/lib/Dialect/Linalg/Utils/Utils.cpp
+++ b/mlir/lib/Dialect/Linalg/Utils/Utils.cpp
@@ -697,7 +697,7 @@ computeSliceParameters(OpBuilder &builder, Location loc, Value valueToTile,
     int64_t shapeSize = shape[r];
     std::optional<int64_t> sizeCst = getConstantIntValue(size);
     auto hasTileSizeOne = sizeCst == 1;
-    auto dividesEvenly = sizeCst && !ShapedType::isDynamic(shapeSize) &&
+    auto dividesEvenly = sizeCst && ShapedType::isStatic(shapeSize) &&
                          ((shapeSize % *sizeCst) == 0);
     if (!hasTileSizeOne && !dividesEvenly) {
       LLVM_DEBUG(llvm::dbgs() << "makeTiledShape: shapeSize=" << shapeSize
diff --git a/mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp b/mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp
index 3c4d2562e6999..2371cff1043d6 100644
--- a/mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp
+++ b/mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp
@@ -99,7 +99,7 @@ static void constifyIndexValues(SmallVectorImpl<OpFoldResult> &values,
          "incorrect number of const values");
   for (auto [i, cstVal] : llvm::enumerate(constValues)) {
     Builder builder(values[i].getContext());
-    if (!ShapedType::isDynamic(cstVal)) {
+    if (ShapedType::isStatic(cstVal)) {
       // Constant value is known, use it directly.
       values[i] = builder.getIndexAttr(cstVal);
       continue;
@@ -189,7 +189,7 @@ struct SimplifyAllocConst : public OpRewritePattern<AllocLikeOp> {
     for (unsigned dim = 0, e = memrefType.getRank(); dim < e; ++dim) {
       int64_t dimSize = memrefType.getDimSize(dim);
       // If this is already static dimension, keep it.
-      if (!ShapedType::isDynamic(dimSize)) {
+      if (ShapedType::isStatic(dimSize)) {
         newShapeConstants.push_back(dimSize);
         continue;
       }
@@ -615,21 +615,21 @@ bool CastOp::canFoldIntoConsumerOp(CastOp castOp) {
   for (auto it : llvm::zip(sourceType.getShape(), resultType.getShape())) {
     auto ss = std::get<0>(it), st = std::get<1>(it);
     if (ss != st)
-      if (ShapedType::isDynamic(ss) && !ShapedType::isDynamic(st))
+      if (ShapedType::isDynamic(ss) && ShapedType::isStatic(st))
         return false;
   }
 
   // If cast is towards more static offset along any dimension, don't fold.
   if (sourceOffset != resultOffset)
     if (ShapedType::isDynamic(sourceOffset) &&
-        !ShapedType::isDynamic(resultOffset))
+        ShapedType::isStatic(resultOffset))
       return false;
 
   // If cast is towards more static strides along any dimension, don't fold.
   for (au...
[truncated]
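
To make the intended ergonomics concrete, here is a minimal sketch of a call site — hypothetical, not part of the patch — contrasting the old negated-lambda pattern with the new predicates. It assumes a ranked mlir::MemRefType named memrefType and only uses the helpers introduced above (ShapedType::isStatic, isStaticDim) alongside the existing isDynamic/isDynamicDim:

#include <cassert>

#include "mlir/IR/BuiltinTypes.h"
#include "llvm/ADT/STLExtras.h"

// Hypothetical helper sketched to illustrate the patch; not real upstream code.
static bool hasOnlyStaticDims(mlir::MemRefType memrefType) {
  // Before: the isDynamic check had to be negated inside a lambda.
  bool before = llvm::all_of(memrefType.getShape(), [](int64_t dimSize) {
    return !mlir::ShapedType::isDynamic(dimSize);
  });

  // After: isStatic is a plain static predicate, so it can be passed to ADT
  // functions directly; isStaticShape wraps exactly this pattern.
  bool after = llvm::all_of(memrefType.getShape(), mlir::ShapedType::isStatic);

  // Per-dimension queries mirror the existing isDynamicDim.
  if (memrefType.getRank() > 0)
    assert(memrefType.isStaticDim(0) == !memrefType.isDynamicDim(0));

  assert(before == after);
  return after;
}

Both forms are equivalent; the point of the patch is that the second reads without double negation and composes directly with llvm::all_of/any_of.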

@llvmbot
Member

llvmbot commented Jul 4, 2025

@llvm/pr-subscribers-mlir-sparse


@llvmbot
Member

llvmbot commented Jul 4, 2025

@llvm/pr-subscribers-mlir-memref

          "incorrect number of const values");
   for (auto [i, cstVal] : llvm::enumerate(constValues)) {
     Builder builder(values[i].getContext());
-    if (!ShapedType::isDynamic(cstVal)) {
+    if (ShapedType::isStatic(cstVal)) {
       // Constant value is known, use it directly.
       values[i] = builder.getIndexAttr(cstVal);
       continue;
@@ -189,7 +189,7 @@ struct SimplifyAllocConst : public OpRewritePattern<AllocLikeOp> {
     for (unsigned dim = 0, e = memrefType.getRank(); dim < e; ++dim) {
       int64_t dimSize = memrefType.getDimSize(dim);
       // If this is already static dimension, keep it.
-      if (!ShapedType::isDynamic(dimSize)) {
+      if (ShapedType::isStatic(dimSize)) {
         newShapeConstants.push_back(dimSize);
         continue;
       }
@@ -615,21 +615,21 @@ bool CastOp::canFoldIntoConsumerOp(CastOp castOp) {
   for (auto it : llvm::zip(sourceType.getShape(), resultType.getShape())) {
     auto ss = std::get<0>(it), st = std::get<1>(it);
     if (ss != st)
-      if (ShapedType::isDynamic(ss) && !ShapedType::isDynamic(st))
+      if (ShapedType::isDynamic(ss) && ShapedType::isStatic(st))
         return false;
   }
 
   // If cast is towards more static offset along any dimension, don't fold.
   if (sourceOffset != resultOffset)
     if (ShapedType::isDynamic(sourceOffset) &&
-        !ShapedType::isDynamic(resultOffset))
+        ShapedType::isStatic(resultOffset))
       return false;
 
   // If cast is towards more static strides along any dimension, don't fold.
   for (au...
[truncated]

Member

llvmbot commented Jul 4, 2025

@llvm/pr-subscribers-mlir-llvm

Author: Jakub Kuderski (kuhar)

Patch is 59.48 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/147085.diff

37 Files Affected:

  • (modified) mlir/include/mlir-c/BuiltinTypes.h (+12-1)
  • (modified) mlir/include/mlir/IR/BuiltinTypeInterfaces.td (+20-3)
  • (modified) mlir/lib/Bindings/Python/IRTypes.cpp (+24)
  • (modified) mlir/lib/CAPI/IR/BuiltinTypes.cpp (+13)
  • (modified) mlir/lib/Conversion/LLVMCommon/MemRefBuilder.cpp (+1-1)
  • (modified) mlir/lib/Conversion/LLVMCommon/TypeConverter.cpp (+1-1)
  • (modified) mlir/lib/Conversion/MemRefToLLVM/MemRefToLLVM.cpp (+4-4)
  • (modified) mlir/lib/Conversion/TosaToLinalg/TosaToLinalg.cpp (+1-1)
  • (modified) mlir/lib/Conversion/TosaToTensor/TosaToTensor.cpp (+2-2)
  • (modified) mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp (+1-1)
  • (modified) mlir/lib/Dialect/Bufferization/Transforms/BufferResultsToOutParams.cpp (+1-1)
  • (modified) mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp (+3-3)
  • (modified) mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp (+1-1)
  • (modified) mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp (+2-2)
  • (modified) mlir/lib/Dialect/Linalg/Utils/Utils.cpp (+1-1)
  • (modified) mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp (+13-13)
  • (modified) mlir/lib/Dialect/MemRef/Transforms/ExpandStridedMetadata.cpp (+4-4)
  • (modified) mlir/lib/Dialect/Mesh/IR/MeshOps.cpp (+7-8)
  • (modified) mlir/lib/Dialect/Mesh/Transforms/Spmdization.cpp (+2-2)
  • (modified) mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp (+5-5)
  • (modified) mlir/lib/Dialect/SparseTensor/Transforms/SparseReinterpretMap.cpp (+1-1)
  • (modified) mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp (+1-1)
  • (modified) mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp (+2-2)
  • (modified) mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp (+1-1)
  • (modified) mlir/lib/Dialect/Tensor/IR/TensorInferTypeOpInterfaceImpl.cpp (+1-1)
  • (modified) mlir/lib/Dialect/Tensor/IR/TensorOps.cpp (+4-4)
  • (modified) mlir/lib/Dialect/Tosa/IR/TosaOps.cpp (+26-36)
  • (modified) mlir/lib/Dialect/Utils/ReshapeOpsUtils.cpp (+1-1)
  • (modified) mlir/lib/Dialect/Utils/StaticValueUtils.cpp (+2-2)
  • (modified) mlir/lib/Dialect/Vector/IR/VectorOps.cpp (+2-2)
  • (modified) mlir/lib/Dialect/Vector/Utils/VectorUtils.cpp (+1-1)
  • (modified) mlir/lib/IR/BuiltinAttributes.cpp (+4-4)
  • (modified) mlir/lib/IR/BuiltinTypes.cpp (+2-2)
  • (modified) mlir/lib/IR/TypeUtilities.cpp (+1-1)
  • (modified) mlir/lib/Interfaces/ViewLikeInterface.cpp (+2-2)
  • (modified) mlir/python/mlir/_mlir_libs/_mlir/ir.pyi (+13)
  • (modified) mlir/test/python/ir/builtin_types.py (+17)
diff --git a/mlir/include/mlir-c/BuiltinTypes.h b/mlir/include/mlir-c/BuiltinTypes.h
index 6875fab7bf796..a73d57f9362fd 100644
--- a/mlir/include/mlir-c/BuiltinTypes.h
+++ b/mlir/include/mlir-c/BuiltinTypes.h
@@ -292,6 +292,9 @@ MLIR_CAPI_EXPORTED bool mlirShapedTypeHasStaticShape(MlirType type);
 /// Checks whether the dim-th dimension of the given shaped type is dynamic.
 MLIR_CAPI_EXPORTED bool mlirShapedTypeIsDynamicDim(MlirType type, intptr_t dim);
 
+/// Checks whether the dim-th dimension of the given shaped type is static.
+MLIR_CAPI_EXPORTED bool mlirShapedTypeIsStaticDim(MlirType type, intptr_t dim);
+
 /// Returns the dim-th dimension of the given ranked shaped type.
 MLIR_CAPI_EXPORTED int64_t mlirShapedTypeGetDimSize(MlirType type,
                                                     intptr_t dim);
@@ -300,14 +303,22 @@ MLIR_CAPI_EXPORTED int64_t mlirShapedTypeGetDimSize(MlirType type,
 /// in shaped types.
 MLIR_CAPI_EXPORTED bool mlirShapedTypeIsDynamicSize(int64_t size);
 
+/// Checks whether the given shaped type dimension value is statically-sized.
+MLIR_CAPI_EXPORTED bool mlirShapedTypeIsStaticSize(int64_t size);
+
 /// Returns the value indicating a dynamic size in a shaped type. Prefer
-/// mlirShapedTypeIsDynamicSize to direct comparisons with this value.
+/// mlirShapedTypeIsDynamicSize and mlirShapedTypeIsStaticSize to direct
+/// comparisons with this value.
 MLIR_CAPI_EXPORTED int64_t mlirShapedTypeGetDynamicSize(void);
 
 /// Checks whether the given value is used as a placeholder for dynamic strides
 /// and offsets in shaped types.
 MLIR_CAPI_EXPORTED bool mlirShapedTypeIsDynamicStrideOrOffset(int64_t val);
 
+/// Checks whether the given dimension value of a stride or an offset is
+/// statically-sized.
+MLIR_CAPI_EXPORTED bool mlirShapedTypeIsStaticStrideOrOffset(int64_t val);
+
 /// Returns the value indicating a dynamic stride or offset in a shaped type.
 /// Prefer mlirShapedTypeGetDynamicStrideOrOffset to direct comparisons with
 /// this value.
diff --git a/mlir/include/mlir/IR/BuiltinTypeInterfaces.td b/mlir/include/mlir/IR/BuiltinTypeInterfaces.td
index 367aeb6ac512b..91ffe6572ac41 100644
--- a/mlir/include/mlir/IR/BuiltinTypeInterfaces.td
+++ b/mlir/include/mlir/IR/BuiltinTypeInterfaces.td
@@ -36,7 +36,7 @@ def VectorElementTypeInterface : TypeInterface<"VectorElementTypeInterface"> {
     This may change in the future, for example, to require types to provide
     their size or alignment given a data layout. Please post an RFC before
     adding this interface to additional types. Implementing this interface on
-    downstream types is discourged, until we specified the exact properties of
+    downstream types is discouraged, until we specified the exact properties of
     a vector element type in more detail.
   }];
 }
@@ -221,7 +221,17 @@ def ShapedTypeInterface : TypeInterface<"ShapedType"> {
 
     /// Whether the given shape has any size that indicates a dynamic dimension.
     static bool isDynamicShape(ArrayRef<int64_t> dSizes) {
-      return any_of(dSizes, [](int64_t dSize) { return isDynamic(dSize); });
+      return llvm::any_of(dSizes, isDynamic);
+    }
+
+    /// Whether the given dimension size indicates a statically-sized dimension.
+    static constexpr bool isStatic(int64_t dValue) {
+      return !isDynamic(dValue);
+    }
+
+    /// Whether the given shape has static dimensions only.
+    static bool isStaticShape(ArrayRef<int64_t> dSizes) {
+      return llvm::all_of(dSizes, isStatic);
     }
 
     /// Return the number of elements present in the given shape.
@@ -273,11 +283,18 @@ def ShapedTypeInterface : TypeInterface<"ShapedType"> {
       return ::mlir::ShapedType::isDynamic($_type.getShape()[idx]);
     }
 
+    /// Returns true if this dimension has a static size (for ranked types);
+    /// aborts for unranked types.
+    bool isStaticDim(unsigned idx) const {
+      assert(idx < getRank() && "invalid index for shaped type");
+      return ::mlir::ShapedType::isStatic($_type.getShape()[idx]);
+    }
+
     /// Returns if this type has a static shape, i.e. if the type is ranked and
     /// all dimensions have known size (>= 0).
     bool hasStaticShape() const {
       return $_type.hasRank() &&
-             !::mlir::ShapedType::isDynamicShape($_type.getShape());
+             ::mlir::ShapedType::isStaticShape($_type.getShape());
     }
 
     /// Returns if this type has a static shape and the shape is equal to
diff --git a/mlir/lib/Bindings/Python/IRTypes.cpp b/mlir/lib/Bindings/Python/IRTypes.cpp
index 0f2719c10a027..b11e3f75b8463 100644
--- a/mlir/lib/Bindings/Python/IRTypes.cpp
+++ b/mlir/lib/Bindings/Python/IRTypes.cpp
@@ -544,6 +544,15 @@ void mlir::PyShapedType::bindDerived(ClassTy &c) {
       nb::arg("dim"),
       "Returns whether the dim-th dimension of the given shaped type is "
       "dynamic.");
+  c.def(
+      "is_static_dim",
+      [](PyShapedType &self, intptr_t dim) -> bool {
+        self.requireHasRank();
+        return mlirShapedTypeIsStaticDim(self, dim);
+      },
+      nb::arg("dim"),
+      "Returns whether the dim-th dimension of the given shaped type is "
+      "static.");
   c.def(
       "get_dim_size",
       [](PyShapedType &self, intptr_t dim) {
@@ -558,6 +567,12 @@ void mlir::PyShapedType::bindDerived(ClassTy &c) {
       nb::arg("dim_size"),
       "Returns whether the given dimension size indicates a dynamic "
       "dimension.");
+  c.def_static(
+      "is_static_size",
+      [](int64_t size) -> bool { return mlirShapedTypeIsStaticSize(size); },
+      nb::arg("dim_size"),
+      "Returns whether the given dimension size indicates a static "
+      "dimension.");
   c.def(
       "is_dynamic_stride_or_offset",
       [](PyShapedType &self, int64_t val) -> bool {
@@ -567,6 +582,15 @@ void mlir::PyShapedType::bindDerived(ClassTy &c) {
       nb::arg("dim_size"),
       "Returns whether the given value is used as a placeholder for dynamic "
       "strides and offsets in shaped types.");
+  c.def(
+      "is_static_stride_or_offset",
+      [](PyShapedType &self, int64_t val) -> bool {
+        self.requireHasRank();
+        return mlirShapedTypeIsStaticStrideOrOffset(val);
+      },
+      nb::arg("dim_size"),
+      "Returns whether the given shaped type stride or offset value is "
+      "statically-sized.");
   c.def_prop_ro(
       "shape",
       [](PyShapedType &self) {
diff --git a/mlir/lib/CAPI/IR/BuiltinTypes.cpp b/mlir/lib/CAPI/IR/BuiltinTypes.cpp
index a080adf0f8103..9d8554aabff8a 100644
--- a/mlir/lib/CAPI/IR/BuiltinTypes.cpp
+++ b/mlir/lib/CAPI/IR/BuiltinTypes.cpp
@@ -332,6 +332,11 @@ bool mlirShapedTypeIsDynamicDim(MlirType type, intptr_t dim) {
       .isDynamicDim(static_cast<unsigned>(dim));
 }
 
+bool mlirShapedTypeIsStaticDim(MlirType type, intptr_t dim) {
+  return llvm::cast<ShapedType>(unwrap(type))
+      .isStaticDim(static_cast<unsigned>(dim));
+}
+
 int64_t mlirShapedTypeGetDimSize(MlirType type, intptr_t dim) {
   return llvm::cast<ShapedType>(unwrap(type))
       .getDimSize(static_cast<unsigned>(dim));
@@ -343,10 +348,18 @@ bool mlirShapedTypeIsDynamicSize(int64_t size) {
   return ShapedType::isDynamic(size);
 }
 
+bool mlirShapedTypeIsStaticSize(int64_t size) {
+  return ShapedType::isStatic(size);
+}
+
 bool mlirShapedTypeIsDynamicStrideOrOffset(int64_t val) {
   return ShapedType::isDynamic(val);
 }
 
+bool mlirShapedTypeIsStaticStrideOrOffset(int64_t val) {
+  return ShapedType::isStatic(val);
+}
+
 int64_t mlirShapedTypeGetDynamicStrideOrOffset() {
   return ShapedType::kDynamic;
 }
diff --git a/mlir/lib/Conversion/LLVMCommon/MemRefBuilder.cpp b/mlir/lib/Conversion/LLVMCommon/MemRefBuilder.cpp
index 86d6643820376..e34d5f74d232f 100644
--- a/mlir/lib/Conversion/LLVMCommon/MemRefBuilder.cpp
+++ b/mlir/lib/Conversion/LLVMCommon/MemRefBuilder.cpp
@@ -53,7 +53,7 @@ MemRefDescriptor MemRefDescriptor::fromStaticShape(
 
   // Extract all strides and offsets and verify they are static.
   auto [strides, offset] = type.getStridesAndOffset();
-  assert(!ShapedType::isDynamic(offset) && "expected static offset");
+  assert(ShapedType::isStatic(offset) && "expected static offset");
   assert(!llvm::any_of(strides, ShapedType::isDynamic) &&
          "expected static strides");
 
diff --git a/mlir/lib/Conversion/LLVMCommon/TypeConverter.cpp b/mlir/lib/Conversion/LLVMCommon/TypeConverter.cpp
index 57c8f4402cf4b..efecea2d461a7 100644
--- a/mlir/lib/Conversion/LLVMCommon/TypeConverter.cpp
+++ b/mlir/lib/Conversion/LLVMCommon/TypeConverter.cpp
@@ -610,7 +610,7 @@ bool LLVMTypeConverter::canConvertToBarePtr(BaseMemRefType type) {
     if (ShapedType::isDynamic(stride))
       return false;
 
-  return !ShapedType::isDynamic(offset);
+  return ShapedType::isStatic(offset);
 }
 
 /// Convert a memref type to a bare pointer to the memref element type.
diff --git a/mlir/lib/Conversion/MemRefToLLVM/MemRefToLLVM.cpp b/mlir/lib/Conversion/MemRefToLLVM/MemRefToLLVM.cpp
index 7484e4b07390e..d767a24f6d698 100644
--- a/mlir/lib/Conversion/MemRefToLLVM/MemRefToLLVM.cpp
+++ b/mlir/lib/Conversion/MemRefToLLVM/MemRefToLLVM.cpp
@@ -44,7 +44,7 @@ static constexpr LLVM::GEPNoWrapFlags kNoWrapFlags =
 namespace {
 
 static bool isStaticStrideOrOffset(int64_t strideOrOffset) {
-  return !ShapedType::isDynamic(strideOrOffset);
+  return ShapedType::isStatic(strideOrOffset);
 }
 
 static FailureOr<LLVM::LLVMFuncOp>
@@ -1469,7 +1469,7 @@ struct MemRefReshapeOpLowering
       Value stride = nullptr;
       int64_t targetRank = targetMemRefType.getRank();
       for (auto i : llvm::reverse(llvm::seq<int64_t>(0, targetRank))) {
-        if (!ShapedType::isDynamic(strides[i])) {
+        if (ShapedType::isStatic(strides[i])) {
           // If the stride for this dimension is dynamic, then use the product
           // of the sizes of the inner dimensions.
           stride =
@@ -1723,7 +1723,7 @@ struct ViewOpLowering : public ConvertOpToLLVMPattern<memref::ViewOp> {
                 ArrayRef<int64_t> shape, ValueRange dynamicSizes, unsigned idx,
                 Type indexType) const {
     assert(idx < shape.size());
-    if (!ShapedType::isDynamic(shape[idx]))
+    if (ShapedType::isStatic(shape[idx]))
       return createIndexAttrConstant(rewriter, loc, indexType, shape[idx]);
     // Count the number of dynamic dims in range [0, idx]
     unsigned nDynamic =
@@ -1739,7 +1739,7 @@ struct ViewOpLowering : public ConvertOpToLLVMPattern<memref::ViewOp> {
                   ArrayRef<int64_t> strides, Value nextSize,
                   Value runningStride, unsigned idx, Type indexType) const {
     assert(idx < strides.size());
-    if (!ShapedType::isDynamic(strides[idx]))
+    if (ShapedType::isStatic(strides[idx]))
       return createIndexAttrConstant(rewriter, loc, indexType, strides[idx]);
     if (nextSize)
       return runningStride
diff --git a/mlir/lib/Conversion/TosaToLinalg/TosaToLinalg.cpp b/mlir/lib/Conversion/TosaToLinalg/TosaToLinalg.cpp
index c2be08ef40f21..c3ce71ee2c82c 100644
--- a/mlir/lib/Conversion/TosaToLinalg/TosaToLinalg.cpp
+++ b/mlir/lib/Conversion/TosaToLinalg/TosaToLinalg.cpp
@@ -759,7 +759,7 @@ computeTargetSize(PatternRewriter &rewriter, Location loc, IndexPool &indexPool,
   // dimension greater than 1 with a different value is undefined behavior.
   for (auto operand : operands) {
     auto size = cast<RankedTensorType>(operand.getType()).getDimSize(dim);
-    if (!ShapedType::isDynamic(size) && size > 1)
+    if (ShapedType::isStatic(size) && size > 1)
       return {rewriter.getIndexAttr(size), operand};
   }
 
[truncated]
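For reference, here is a minimal sketch of how the new C-API predicates might be exercised end-to-end. This is an editorial illustration based only on the declarations shown in the diff above, not code from the patch or its tests:

```cpp
#include <cstdio>

#include "mlir-c/BuiltinTypes.h"
#include "mlir-c/IR.h"

int main() {
  MlirContext ctx = mlirContextCreate();
  MlirType f32 = mlirF32TypeGet(ctx);

  // Build tensor<2x?xf32>: one static and one dynamic dimension.
  int64_t shape[2] = {2, mlirShapedTypeGetDynamicSize()};
  MlirType tensor =
      mlirRankedTensorTypeGet(/*rank=*/2, shape, f32, mlirAttributeGetNull());

  std::printf("%d\n", mlirShapedTypeIsStaticDim(tensor, 0));  // 1
  std::printf("%d\n", mlirShapedTypeIsStaticDim(tensor, 1));  // 0
  std::printf("%d\n", mlirShapedTypeIsStaticSize(shape[0]));  // 1
  std::printf("%d\n", mlirShapedTypeIsDynamicSize(shape[1])); // 1

  mlirContextDestroy(ctx);
  return 0;
}
```

The C entry points mirror the C++ ShapedType helpers one-for-one, so the same pattern carries over to the Python bindings via is_static_dim and is_static_size.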


github-actions bot commented Jul 4, 2025

✅ With the latest revision this PR passed the Python code formatter.

Contributor

@makslevental left a comment

LGTM

Contributor

@banach-space left a comment

Thanks, I find this very nice: ShapedType::isStatic(val) is much more readable than !ShapedType::isDynamic(val).

Just one minor suggestion inline.

Contributor

@hanhanW left a comment

LGTM, it looks more straightforward to me!

kuhar added 4 commits July 7, 2025 14:18
The motivation is to avoid having to negate `isDynamic*` checks, avoid
double negations, and allow for `ShapedType::isStaticDim` to be used in
ADT functions without having to wrap it in a lambda performing the
negation.

Also add the new functions to C and Python bindings.
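A hypothetical before/after sketch of the ADT ergonomics described in the commit message; hasOnlyStaticSizesOld/New are illustrative names, not functions from the patch:

```cpp
#include "mlir/IR/BuiltinTypes.h"
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/STLExtras.h"

using namespace mlir;

// Before this patch: negating isDynamic forces a wrapper lambda.
static bool hasOnlyStaticSizesOld(llvm::ArrayRef<int64_t> sizes) {
  return llvm::all_of(
      sizes, [](int64_t size) { return !ShapedType::isDynamic(size); });
}

// After this patch: the predicate composes directly with ADT functions.
static bool hasOnlyStaticSizesNew(llvm::ArrayRef<int64_t> sizes) {
  return llvm::all_of(sizes, ShapedType::isStatic);
}
```

Because isStatic is a plain static member function rather than a negated expression, it can be passed to llvm::all_of/any_of/none_of directly; this is exactly how the new isStaticShape is implemented in the diff above.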
@kuhar force-pushed the shaped-type-is-static branch from a9dba6d to 30d95a7 on July 7, 2025 18:23
@kuhar merged commit 6512ca7 into llvm:main on Jul 7, 2025
9 checks passed