Vulkan: Add DP4A MMQ and Q8_1 quantization shader #12135
base: master
Conversation
Hi @0cc4m, what are you thinking as the long-term plan for this? int8 everywhere (like CUDA?), or just for certain operations or HW that benefits from it? I think int8 is likely a win for mat-vec mul in most cases - even where we're not currently math-limited, it should have lower register usage and avoid some of the annoying perf issues where the compiler doesn't schedule things well. And for cases that are math-limited (particularly older HW) it should give a big boost. For coopmat/coopmat2, while int8 is faster in terms of peak rate than fp16 (at least on NVIDIA), the int32 accumulator takes up a lot of register space and limits the tile sizes, so it may not always be a win. Overall I'm excited to have the quantization path in place for the B matrix, it enables exploring a lot of new optimizations.
I basically started this with the goal of exploring new options to improve prompt processing on non-coopmat hardware, and also just to understand how to use int8 for acceleration. I don't think it's worth using over fp16/fp32 on hardware that doesn't have integer dot product acceleration, but for hardware that does, it may be worth opening a shader path that utilizes it. With Vega20 and also Nvidia Pascal the Vulkan backend is currently noticeably behind, and I think this may be a way to close the gap.
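(For anyone following along: the acceleration in question is the packed int8 dot product from GL_EXT_integer_dot_product that the new MMQ shader is built on. A minimal sketch of the primitive:)

```glsl
#version 450
#extension GL_EXT_integer_dot_product : require

void main() {
    // Each int packs four int8 values. On hardware with DP4A support this
    // maps to a single dot-product-accumulate instruction:
    // acc += a0*b0 + a1*b1 + a2*b2 + a3*b3 (with saturating accumulation).
    int packed_a = 0x01020304;
    int packed_b = 0x01010101;
    int acc = 0;
    acc = dotPacked4x8AccSatEXT(packed_a, packed_b, acc);
}
```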
Yes, looking into that would be the next step after this.
Since I store an entire q8_1 block in the k-direction in registers, instead of loading single values for each k, I already have to reconsider tile sizes here, or rethink that approach. The L-tile seems slow and I assume that means it's register-limited.
Yeah, you used fp16 for coopmat2 to reduce memory pressure; maybe it would be worth moving to q8_1? Dequantization in the shader would not require much more compute.
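(For reference, q8_1 keeps 32 int8 weights plus an fp16 scale and an fp16 sum per block, so dequantizing an element inside the shader is a single multiply once the block is loaded. A rough sketch with an illustrative layout, not the actual coopmat2 code:)

```glsl
#extension GL_EXT_shader_explicit_arithmetic_types_int8 : require
#extension GL_EXT_shader_explicit_arithmetic_types_float16 : require

// Illustrative q8_1 block (QUANT_K = 32): ds.x = scale d, ds.y = d * sum(qs).
struct block_q8_1 {
    f16vec2 ds;
    int8_t  qs[32];
};

// Dequantizing one element is just one fp16 multiply.
float16_t dequant_q8_1(const block_q8_1 b, const uint i) {
    return b.ds.x * float16_t(b.qs[i]);
}
```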
This shader already makes a positive difference on AMD and a huge difference on Intel. A770 performance is finally looking more like what's expected.
Yeah, you're right. With some changes I got it working on my RX 470, which has no FP16 (do all GPUs with DP4A support FP16?) and no DP4A. It's... slow.
My changes to make it run:
--------------------- ggml/src/ggml-vulkan/ggml-vulkan.cpp ---------------------
index b6cd2f21..8df9383c 100644
@@ -1926,6 +1926,7 @@ static void ggml_vk_load_shaders(vk_device& device) {
CREATE_MM(GGML_TYPE_IQ4_NL, pipeline_dequant_mul_mat_mat_id[GGML_TYPE_IQ4_NL].f16acc, matmul_id_iq4_nl_f32, _f16acc, mmq_wg_denoms, warptile_mmq, vk_mat_mat_id_push_constants, 4, _id);
#undef CREATE_MM2
#undef CREATE_MM
+#undef CREATE_MMQ
} else {
// Create 6 variants, {s,m,l}x{unaligned,aligned}
#define CREATE_MM(TYPE, PIPELINE_NAME, NAMELC, F16ACC, WG_DENOMS, WARPTILE, PUSHCONST, PARAMCOUNT, ID) \
@@ -1942,6 +1943,14 @@ static void ggml_vk_load_shaders(vk_device& device) {
if (device->mul_mat ## ID ## _s[TYPE]) \
ggml_vk_create_pipeline(device, device-> PIPELINE_NAME ->a_s, #NAMELC #F16ACC "_aligned_s", NAMELC ## _aligned ## F16ACC ## _fp32_len, NAMELC ## _aligned ## F16ACC ## _fp32_data, "main", PARAMCOUNT, sizeof(PUSHCONST), s_ ## WG_DENOMS, s_ ## WARPTILE, s_align); \
+#define CREATE_MMQ(TYPE, PIPELINE_NAME, NAMELC, F16ACC, WG_DENOMS, WARPTILE, PUSHCONST, PARAMCOUNT, ID) \
+ if (device->mul_mat ## ID ## _l[TYPE]) \
+ ggml_vk_create_pipeline(device, device-> PIPELINE_NAME ->l, #NAMELC #F16ACC "_l", NAMELC ## F16ACC ## _fp32_len, NAMELC ## F16ACC ## _fp32_data, "main", PARAMCOUNT, sizeof(PUSHCONST), l_ ## WG_DENOMS, l_ ## WARPTILE, 1); \
+ if (device->mul_mat ## ID ## _m[TYPE]) \
+ ggml_vk_create_pipeline(device, device-> PIPELINE_NAME ->m, #NAMELC #F16ACC "_m", NAMELC ## F16ACC ## _fp32_len, NAMELC ## F16ACC ## _fp32_data, "main", PARAMCOUNT, sizeof(PUSHCONST), m_ ## WG_DENOMS, m_ ## WARPTILE, 1); \
+ if (device->mul_mat ## ID ## _s[TYPE]) \
+ ggml_vk_create_pipeline(device, device-> PIPELINE_NAME ->s, #NAMELC #F16ACC "_s", NAMELC ## F16ACC ## _fp32_len, NAMELC ## F16ACC ## _fp32_data, "main", PARAMCOUNT, sizeof(PUSHCONST), s_ ## WG_DENOMS, s_ ## WARPTILE, 1); \
+
CREATE_MM(GGML_TYPE_F32, pipeline_matmul_f32, matmul_f32_f32, , wg_denoms, warptile, vk_mat_mat_push_constants, 3, );
CREATE_MM(GGML_TYPE_F32, pipeline_matmul_f32_f16, matmul_f32_f16, , wg_denoms, warptile, vk_mat_mat_push_constants, 3, );
CREATE_MM(GGML_TYPE_F16, pipeline_matmul_f16.f32acc, matmul_f16, , wg_denoms, warptile, vk_mat_mat_push_constants, 3, );
@@ -1968,6 +1977,9 @@ static void ggml_vk_load_shaders(vk_device& device) {
CREATE_MM(GGML_TYPE_IQ4_XS, pipeline_dequant_mul_mat_mat[GGML_TYPE_IQ4_XS].f32acc, matmul_iq4_xs_f32, , mmq_wg_denoms, warptile_mmq, vk_mat_mat_push_constants, 3, );
CREATE_MM(GGML_TYPE_IQ4_NL, pipeline_dequant_mul_mat_mat[GGML_TYPE_IQ4_NL].f32acc, matmul_iq4_nl_f32, , mmq_wg_denoms, warptile_mmq, vk_mat_mat_push_constants, 3, );
+ CREATE_MMQ(GGML_TYPE_Q4_0, pipeline_dequant_mul_mat_mat_q8_1[GGML_TYPE_Q4_0].f32acc, matmul_q4_0_q8_1, , mmq_wg_denoms, warptile_mmq, vk_mat_mat_push_constants, 3, );
+ CREATE_MMQ(GGML_TYPE_Q8_0, pipeline_dequant_mul_mat_mat_q8_1[GGML_TYPE_Q8_0].f32acc, matmul_q8_0_q8_1, , mmq_wg_denoms, warptile_mmq, vk_mat_mat_push_constants, 3, );
+
CREATE_MM(GGML_TYPE_F32, pipeline_matmul_id_f32, matmul_id_f32_f32, , wg_denoms, warptile, vk_mat_mat_push_constants, 4, _id);
CREATE_MM(GGML_TYPE_F16, pipeline_matmul_id_f16.f32acc, matmul_id_f16, , wg_denoms, warptile, vk_mat_mat_push_constants, 4, _id);
CREATE_MM(GGML_TYPE_F16, pipeline_matmul_id_f16_f32.f32acc, matmul_id_f16_f32, , wg_denoms, warptile, vk_mat_mat_push_constants, 4, _id);
@@ -1993,6 +2005,7 @@ static void ggml_vk_load_shaders(vk_device& device) {
CREATE_MM(GGML_TYPE_IQ4_XS, pipeline_dequant_mul_mat_mat_id[GGML_TYPE_IQ4_XS].f32acc, matmul_id_iq4_xs_f32, , mmq_wg_denoms, warptile_mmq, vk_mat_mat_id_push_constants, 4, _id);
CREATE_MM(GGML_TYPE_IQ4_NL, pipeline_dequant_mul_mat_mat_id[GGML_TYPE_IQ4_NL].f32acc, matmul_id_iq4_nl_f32, , mmq_wg_denoms, warptile_mmq, vk_mat_mat_id_push_constants, 4, _id);
#undef CREATE_MM
+#undef CREATE_MMQ
}
// mul mat vec
@@ -2431,7 +2444,8 @@ static vk_device ggml_vk_get_device(size_t idx) {
device->coopmat_support = false;
}
- device->integer_dot_product = device->integer_dot_product && shader_integer_dot_product_props.integerDotProductAccumulatingSaturating4x8BitPackedSignedAccelerated;
+ //device->integer_dot_product = device->integer_dot_product && shader_integer_dot_product_props.integerDotProductAccumulatingSaturating4x8BitPackedSignedAccelerated;
+ device->integer_dot_product = true;
std::vector<vk::QueueFamilyProperties> queue_family_props = device->physical_device.getQueueFamilyProperties();
@@ -3168,8 +3182,10 @@ static vk_matmul_pipeline ggml_vk_get_mul_mat_mat_pipeline(ggml_backend_vk_conte
default:
return nullptr;
}
-
- return ctx->device->pipeline_dequant_mul_mat_mat_q8_1[src0_type].f16acc;
+ if (ctx->device->fp16)
+ return ctx->device->pipeline_dequant_mul_mat_mat_q8_1[src0_type].f16acc;
+ else
+ return ctx->device->pipeline_dequant_mul_mat_mat_q8_1[src0_type].f32acc;
}
if (src1_type != GGML_TYPE_F32 && !ctx->device->coopmat2) {
--------------- ggml/src/ggml-vulkan/vulkan-shaders/mul_mmq.comp ---------------
index 81fa7b53..780182a3 100644
@@ -4,7 +4,7 @@
#extension GL_EXT_shader_16bit_storage : require
#extension GL_EXT_shader_explicit_arithmetic_types_int8 : require
-#extension GL_EXT_integer_dot_product : require
+//#extension GL_EXT_integer_dot_product : require
#ifdef FLOAT16
#extension GL_EXT_shader_explicit_arithmetic_types_float16 : require
@@ -318,9 +318,12 @@ void main() {
[[unroll]] for (uint cr = 0; cr < TM; cr++) {
const uint cache_a_idx = wsir * TM + cr;
const uint sums_idx = (wsic * TN + cc) * (WMITER * TM) + wsir * TM + cr;
- int32_t q_sum = 0;
+ float q_sum = 0;
[[unroll]] for (uint idx_k = 0; idx_k < BK / 4; idx_k++) {
- q_sum = dotPacked4x8AccSatEXT(cache_a[cache_a_idx].qs[idx_k], cache_b[cc].qs[idx_k], q_sum);
+ //q_sum = dotPacked4x8AccSatEXT(cache_a[cache_a_idx].qs[idx_k], cache_b[cc].qs[idx_k], q_sum);
+ vec4 cav = vec4(unpack8(cache_a[cache_a_idx].qs[idx_k]));
+ vec4 cbv = vec4(unpack8(cache_b[cc].qs[idx_k]));
+ q_sum += dot(cav, cbv);
}
#if QUANT_AUXF == 1
@@ -330,7 +333,7 @@ void main() {
// const float factor = float(cache_a[cache_a_idx].d) * float(cache_b[cc].d);
#endif
- sums[sums_idx] = ACC_TYPE(fma(float(q_sum), factor, float(sums[sums_idx])));
+ sums[sums_idx] = ACC_TYPE(fma(q_sum, factor, float(sums[sums_idx])));
}
}
}
I recreated the dot product instruction using floats, as that ended up being faster than using ints. On my card it takes eight cycles to extract the int8s from the int32s and another four to do the FMAs. If we use a float B matrix like what's on master, that becomes four FMAs, and of course with DP4A it's a single 1- or 2-cycle instruction. It's possible to make this run much faster on old GPUs by using the old mul_mm and dequantizing the Q8_1 B matrix first, but that's probably only worth doing if we see good improvements on the matvec side.
Since I don't have DP4A and am compute- rather than memory-bound for mat vec, I won't be able to optimize this properly. At this point I'm probably going to stick with the float implementation until I get a new GPU 😞.
const uint buf_ib = loadc_b + l;
// Should ds be gated to a single thread?
No: Shared memory bank conflict
Yes: You get a branch condition
Not having a gate shouldn't mess up the result (the threads will just line up and write to the buffer one at a time), but the bank conflict might be slower than the branch.
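(Spelled out, the two variants being weighed look roughly like this; everything except buf_ib is an illustrative name:)

```glsl
// Ungated: every thread that helps load the block also writes its ds value.
// The redundant writes to the same shared memory location serialize (bank
// conflict), but there is no branch.
buf_b_ds[buf_ib] = vec2(data_b[ib].ds);

// Gated: only one thread per block writes ds, at the cost of a branch
// condition in the load loop.
if (lane_in_block == 0) {
    buf_b_ds[buf_ib] = vec2(data_b[ib].ds);
}
```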
Ah yeah, I forgot about that. You could look into q8_1 src1 support on mul_mm and mat_vec if you have too much time on your hands, but I'm not sure if it would help.
All except Nvidia Pascal/GTX 1000.
I just remembered that I was thinking of @daniandtheweb when I pinged netrunnereve, my bad. The RX 5700 XT should benefit a lot from the use of DP4A.
RDNA1 removed the V_DOT* instructions that GCN had; they only returned in RDNA2, so no, this will not help the RX 5700 XT.
Oh wow, that's terrible.
That's pretty much the correct reaction to RDNA1 in general. Note it's a bit more complex than that, as Navi 12 (the variant with HBM used in the Pro 520) does have a few V_DOT variants, but that's an edge case hardly worth considering.
Apparently the RX 5500 XT (Navi 14) also supports it. It's quite unfortunate that the 5700 series lacks any support for it.
You're right, it does. This is very confusing.
This is a basic VK_KHR_shader_integer_dot_product (DP4A) implementation for matrix-matrix multiplication. I added a quantization shader that can quantize float32 src1 into q8_1, and an MMQ shader that can multiply a q8_0 src0 with a q8_1 src1.
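(For context, a q8_1 block only needs the absolute maximum of its 32 values for the scale, plus the sum of the quants for the q*_1 correction term. A minimal sketch of the quantization idea, one invocation per block and with illustrative buffer names rather than the actual shader:)

```glsl
#version 450
#extension GL_EXT_shader_explicit_arithmetic_types_int8 : require
#extension GL_EXT_shader_explicit_arithmetic_types_float16 : require

layout(local_size_x = 64, local_size_y = 1, local_size_z = 1) in;

struct block_q8_1 {
    f16vec2 ds;      // x = scale d, y = d * sum(qs)
    int8_t  qs[32];
};

layout(binding = 0) readonly  buffer Src { float      data_src[]; };
layout(binding = 1) writeonly buffer Dst { block_q8_1 data_dst[]; };

void main() {
    // One 32-value block per invocation; assumes the dispatch covers exactly
    // the number of blocks.
    const uint ib = gl_GlobalInvocationID.x;

    // The absolute maximum determines the scale.
    float amax = 0.0;
    for (uint i = 0; i < 32; i++) {
        amax = max(amax, abs(data_src[ib * 32 + i]));
    }
    const float d  = amax / 127.0;
    const float id = d != 0.0 ? 1.0 / d : 0.0;

    // Quantize to int8 and accumulate the sum for the correction term.
    int sum = 0;
    for (uint i = 0; i < 32; i++) {
        const int q = int(round(data_src[ib * 32 + i] * id));
        data_dst[ib].qs[i] = int8_t(q);
        sum += q;
    }
    data_dst[ib].ds = f16vec2(float16_t(d), float16_t(d * float(sum)));
}
```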
Features I have to implement before this could be merged:
I'm opening this already to get some feedback about the implementation. Thank you @jeffbolznv for finishing the GLSL integer dot extension.
@netrunnereve In the long run we probably also want to use DP4A and q8_1 for matrix-vector multiplication to reduce the memory bandwidth bottleneck. Let me know if you want to look into that.
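(A hedged sketch of what the per-block inner product could look like there for q8_0 × q8_1: eight packed dot products plus a couple of float multiplies instead of 32 float FMAs, with src1 read as int8 rather than fp32. Names and layout are illustrative, this is not part of the PR:)

```glsl
// Dot product of one q8_0 block (32 int8 quants packed into 8 ints, plus
// scale a_d) with one q8_1 block (same packing, scale in b_ds.x).
// Requires GL_EXT_integer_dot_product.
float block_dot_q8_0_q8_1(const int a_qs[8], const float a_d,
                          const int b_qs[8], const vec2 b_ds) {
    int q_sum = 0;
    for (uint k = 0; k < 8; k++) {
        q_sum = dotPacked4x8AccSatEXT(a_qs[k], b_qs[k], q_sum);
    }
    return a_d * b_ds.x * float(q_sum);
}
```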
As far as hardware goes, integer dot product / DP4A is supported by Nvidia since Pascal/GTX 1000, by AMD since Vega20/Radeon VII/MI50 (but not on RDNA1/RX 5000 series), and by Intel Xe (I think).