Better SIMD shuffles
I'm trying to do the following in AVX2 using intrinsics: shift x one byte to the right, while shifting the rightmost byte of y into the leftmost byte of x. This is best done with two instructions: vperm2i128 followed by vpalignr. However, simd_shuffle32 generates four instructions: vmovdqa (to load a constant), vpblendvb, then vperm2i128 and vpalignr. Here is a full example, which may be compiled with rustc -O -C target-feature=+avx2 --crate-type=lib --emit=asm shuffle.rs.
#![feature(platform_intrinsics, repr_simd)]
#[allow(non_camel_case_types)]
#[repr(simd)]
pub struct u8x32(u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8);
extern "platform-intrinsic" {
    fn simd_shuffle32<T, U>(x: T, y: T, idx: [u32; 32]) -> U;
}

pub fn right_shift_1(left: u8x32, right: u8x32) -> u8x32 {
    unsafe { simd_shuffle32(left, right, [31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62]) }
}
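(For readers unfamiliar with simd_shuffle32's index convention: indices 0..31 select bytes from the first vector and 32..63 from the second, so the array above selects left[31] followed by right[0..31]. A plain scalar model of that byte movement, with a hypothetical name right_shift_1_scalar that is not part of the issue, looks like this:)

```rust
// Scalar model of simd_shuffle32(left, right, [31, 32, ..., 62]):
// indices 0..31 address `left`, 32..63 address `right`.
fn right_shift_1_scalar(left: [u8; 32], right: [u8; 32]) -> [u8; 32] {
    let mut out = [0u8; 32];
    out[0] = left[31]; // index 31: the rightmost byte of `left`
    out[1..].copy_from_slice(&right[..31]); // indices 32..62: right[0..31]
    out
}

fn main() {
    let left: [u8; 32] = core::array::from_fn(|i| i as u8); // 0, 1, ..., 31
    let right: [u8; 32] = core::array::from_fn(|i| 100 + i as u8); // 100, ..., 131
    let out = right_shift_1_scalar(left, right);
    assert_eq!(out[0], 31); // rightmost byte of `left`
    assert_eq!(&out[1..], &right[..31]); // the rest comes from `right`, shifted
    println!("{:?}", &out[..4]); // prints [31, 100, 101, 102]
}
```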
This might be considered a bug in LLVM, in the sense that it's generating a sub-optimal shuffle. However, I think it should be addressed in rustc, because if I know what the right sequence of instructions is then I shouldn't have to hope that LLVM can generate it. Moreover, it's possible to get the right code from clang (compile with clang -emit-llvm -mavx2 -O -S shuffle.c):
#include <immintrin.h>

__m256i right_shift_1(__m256i left, __m256i right)
{
    __m256i new_left = _mm256_permute2x128_si256(left, right, 33);
    return _mm256_alignr_epi8(new_left, right, 1);
}
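(A side note not from the original issue: on today's stable Rust the same two-instruction sequence can be written with the std::arch intrinsics, whose immediate operands are now const generics. This is a sketch matching the simd_shuffle32 indices above; the immediates 0x03 and 15 are derived by hand from those indices, not taken from the C snippet, so verify them against your target.)

```rust
#[cfg(target_arch = "x86_64")]
use std::arch::x86_64::*;

// Sketch: produces [left[31], right[0..31]], i.e. the same result as the
// simd_shuffle32 version with indices [31, 32, ..., 62].
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
pub unsafe fn right_shift_1(left: __m256i, right: __m256i) -> __m256i {
    // imm 0x03: result low lane = high lane of `left` (selector 3 = second
    // operand's high lane), result high lane = low lane of `right`
    // (selector 0 = first operand's low lane).
    let t = _mm256_permute2x128_si256::<0x03>(right, left);
    // Per 128-bit lane, shift concat(right_lane, t_lane) right by 15 bytes:
    // this lands left[31] in byte 0, with right[0..31] following.
    _mm256_alignr_epi8::<15>(right, t)
}

#[cfg(target_arch = "x86_64")]
fn main() {
    if !is_x86_feature_detected!("avx2") {
        return; // nothing to check without AVX2 at runtime
    }
    let left: [u8; 32] = core::array::from_fn(|i| i as u8);
    let right: [u8; 32] = core::array::from_fn(|i| 100 + i as u8);
    let mut buf = [0u8; 32];
    unsafe {
        let l = _mm256_loadu_si256(left.as_ptr() as *const __m256i);
        let r = _mm256_loadu_si256(right.as_ptr() as *const __m256i);
        let out = right_shift_1(l, r);
        _mm256_storeu_si256(buf.as_mut_ptr() as *mut __m256i, out);
    }
    assert_eq!(buf[0], 31); // left[31]
    assert_eq!(&buf[1..], &right[..31]); // right[0..31]
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```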
A possibly interesting observation is that the unoptimized LLVM IR from clang contains a llvm.x86.avx2.vperm2i128 intrinsic followed by a shufflevector, while the optimized LLVM IR from clang contains two shufflevector instructions. To try to get the same output from rustc, I first patched it to support llvm.x86.avx2.vperm2i128. After modifying right_shift_1 to use the new intrinsic, I got rustc to emit llvm.x86.avx2.vperm2i128 followed by a shufflevector. However, the optimized LLVM IR from rustc still contains only a single shufflevector, and it still ends up producing the bad asm.
I think this means the fault lies in some optimization pass that rustc runs but clang doesn't, though I haven't had time to investigate it yet...
This still occurs even when using the specific intrinsics in std::arch, which is somewhat surprising! See: https://github.com/rust-lang/regex/commit/f962ddbff0d9b17488ef9e704ed0dfbe2b667670#r28069091

Andrew Gallant at 2018-03-13 15:43:13
Can reproduce, will try to file an LLVM bug for this.

Once the LLVM bug is filed and recognized as a real bug we could work around this in std::arch (rustc is definitely the wrong place to do it).

EDIT: reported https://bugs.llvm.org/show_bug.cgi?id=36933
gnzlbg at 2018-03-27 19:08:26
@rustbot modify labels: +A-simd
Jubilee at 2020-09-10 06:18:04
This was solved upstream, it seems. I twiddled this a bit to simplify reading it (for me, anyways):
#![feature(platform_intrinsics, repr_simd)]

#[allow(non_camel_case_types)]
#[repr(simd)]
pub struct u8x32([u8; 32]);

extern "platform-intrinsic" {
    fn simd_shuffle32<T, U>(x: T, y: T, idx: [u32; 32]) -> U;
}

pub fn right_shift_1(left: u8x32, right: u8x32) -> u8x32 {
    const IDX: [u32; 32] = {
        let mut a = [31u32; 32];
        let mut n: u32 = 0;
        while n < 32 {
            a[n as usize] += n;
            n += 1;
        }
        a
    };
    unsafe { simd_shuffle32(left, right, IDX) }
}

Output is now (Rust-Godbolt):
example::right_shift_1:
        mov rax, rdi
        vmovdqa ymm0, ymmword ptr [rdx]
        vperm2i128 ymm1, ymm0, ymmword ptr [rsi], 3
        vpalignr ymm0, ymm0, ymm1, 15
        vmovdqa ymmword ptr [rdi], ymm0
        vzeroupper
        ret

This seems "fixed" insofar as the vpblendvb is gone; the vmovdqa loads and stores are, at the moment, a somewhat unavoidable consequence of not passing vectors in registers. So further improvements require large systemic changes in the compiler.
For comparison, clang emits (Clang-Godbolt):
right_shift_1(long long __vector(4), long long __vector(4)): # @right_shift_1(long long __vector(4), long long __vector(4))
        vperm2i128 ymm0, ymm0, ymm1, 33 # ymm0 = ymm0[2,3],ymm1[0,1]
        vpalignr ymm0, ymm0, ymm1, 1 # ymm0 = ymm1[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15],ymm0[0],ymm1[17,18,19,20,21,22,23,24,25,26,27,28,29,30,31],ymm0[16]
        ret

Jubilee at 2021-10-05 03:25:19