rust-lang/stdarch

Implement all ARM NEON intrinsics

gnzlbg opened this issue · 32 comments

Steps for implementing an intrinsic:

  • Select an intrinsic below
  • Review coresimd/arm/neon.rs and coresimd/aarch64/neon.rs
  • Consult ARM official documentation about your intrinsic
  • Consult godbolt to see how clang codegens the intrinsic. Use the links below, replacing the intrinsic name in the code with yours. Note that if the ARM build produces an error, your intrinsic may be AArch64-only
  • If the codegen is the same on ARM/AArch64, place the intrinsic in coresimd/arm/neon.rs. If it's different place it in both with appropriate #[cfg] in coresimd/arm/neon.rs. If it's only AArch64 place it in coresimd/aarch64/neon.rs
  • Write a test for your intrinsic at the bottom of the file as well
  • Test! The easiest way is usually rustup run nightly sh ci/run-docker.sh aarch64-unknown-linux-gnu
  • When ready, send a PR!

All unimplemented NEON intrinsics

  • pub unsafe fn vbslq_f64(a: u64x2, b: f64x2, c: f64x2) -> f64x2 // Bitwise select (A64)
  • pub unsafe fn vcopy_lane_s8(a: i8x8, lane1: i, b: i8x8, lane2: i) -> i8x8 // Vector copy (A64)
  • pub unsafe fn vcopyq_lane_s8(a: i8x16, lane1: i, b: i8x8, lane2: i) -> i8x16 // Vector copy (A64)
  • pub unsafe fn vcopy_lane_s16(a: i16x4, lane1: i, b: i16x4, lane2: i) -> i16x4 // Vector copy (A64)
  • pub unsafe fn vcopyq_lane_s16(a: i16x8, lane1: i, b: i16x4, lane2: i) -> i16x8 // Vector copy (A64)
  • pub unsafe fn vcopy_lane_s32(a: i32x2, lane1: i, b: i32x2, lane2: i) -> i32x2 // Vector copy (A64)
  • pub unsafe fn vcopyq_lane_s32(a: i32x4, lane1: i, b: i32x2, lane2: i) -> i32x4 // Vector copy (A64)
  • pub unsafe fn vcopy_lane_s64(a: i64x1, lane1: i, b: i64x1, lane2: i) -> i64x1 // Vector copy (A64)
  • pub unsafe fn vcopyq_lane_s64(a: i64x2, lane1: i, b: i64x1, lane2: i) -> i64x2 // Vector copy (A64)
  • pub unsafe fn vcopy_lane_u8(a: u8x8, lane1: i, b: u8x8, lane2: i) -> u8x8 // Vector copy (A64)
  • pub unsafe fn vcopyq_lane_u8(a: u8x16, lane1: i, b: u8x8, lane2: i) -> u8x16 // Vector copy (A64)
  • pub unsafe fn vcopy_lane_u16(a: u16x4, lane1: i, b: u16x4, lane2: i) -> u16x4 // Vector copy (A64)
  • pub unsafe fn vcopyq_lane_u16(a: u16x8, lane1: i, b: u16x4, lane2: i) -> u16x8 // Vector copy (A64)
  • pub unsafe fn vcopy_lane_u32(a: u32x2, lane1: i, b: u32x2, lane2: i) -> u32x2 // Vector copy (A64)
  • pub unsafe fn vcopyq_lane_u32(a: u32x4, lane1: i, b: u32x2, lane2: i) -> u32x4 // Vector copy (A64)
  • pub unsafe fn vcopy_lane_u64(a: u64x1, lane1: i, b: u64x1, lane2: i) -> u64x1 // Vector copy (A64)
  • pub unsafe fn vcopyq_lane_u64(a: u64x2, lane1: i, b: u64x1, lane2: i) -> u64x2 // Vector copy (A64)
  • pub unsafe fn vcopy_lane_p64(a: p64x1, lane1: i, b: p64x1, lane2: i) -> p64x1 // Vector copy (A32/A64)
  • pub unsafe fn vcopyq_lane_p64(a: p64x2, lane1: i, b: p64x1, lane2: i) -> p64x2 // Vector copy (A32/A64)
  • pub unsafe fn vcopy_lane_f32(a: f32x2, lane1: i, b: f32x2, lane2: i) -> f32x2 // Vector copy (A64)
  • pub unsafe fn vcopyq_lane_f32(a: f32x4, lane1: i, b: f32x2, lane2: i) -> f32x4 // Vector copy (A64)
  • pub unsafe fn vcopy_lane_f64(a: f64x1, lane1: i, b: f64x1, lane2: i) -> f64x1 // Vector copy (A64)
  • pub unsafe fn vcopyq_lane_f64(a: f64x2, lane1: i, b: f64x1, lane2: i) -> f64x2 // Vector copy (A64)
  • pub unsafe fn vcopy_lane_p8(a: p8x8, lane1: i, b: p8x8, lane2: i) -> p8x8 // Vector copy (A64)
  • pub unsafe fn vcopyq_lane_p8(a: p8x16, lane1: i, b: p8x8, lane2: i) -> p8x16 // Vector copy (A64)
  • pub unsafe fn vcopy_lane_p16(a: p16x4, lane1: i, b: p16x4, lane2: i) -> p16x4 // Vector copy (A64)
  • pub unsafe fn vcopyq_lane_p16(a: p16x8, lane1: i, b: p16x4, lane2: i) -> p16x8 // Vector copy (A64)
  • pub unsafe fn vcopy_laneq_s8(a: i8x8, lane1: i, b: i8x16, lane2: i) -> i8x8 // Vector copy (A64)
  • pub unsafe fn vcopyq_laneq_s8(a: i8x16, lane1: i, b: i8x16, lane2: i) -> i8x16 // Vector copy (A64)
  • pub unsafe fn vcopy_laneq_s16(a: i16x4, lane1: i, b: i16x8, lane2: i) -> i16x4 // Vector copy (A64)
  • pub unsafe fn vcopyq_laneq_s16(a: i16x8, lane1: i, b: i16x8, lane2: i) -> i16x8 // Vector copy (A64)
  • pub unsafe fn vcopy_laneq_s32(a: i32x2, lane1: i, b: i32x4, lane2: i) -> i32x2 // Vector copy (A64)
  • pub unsafe fn vcopyq_laneq_s32(a: i32x4, lane1: i, b: i32x4, lane2: i) -> i32x4 // Vector copy (A64)
  • pub unsafe fn vcopy_laneq_s64(a: i64x1, lane1: i, b: i64x2, lane2: i) -> i64x1 // Vector copy (A64)
  • pub unsafe fn vcopyq_laneq_s64(a: i64x2, lane1: i, b: i64x2, lane2: i) -> i64x2 // Vector copy (A64)
  • pub unsafe fn vcopy_laneq_u8(a: u8x8, lane1: i, b: u8x16, lane2: i) -> u8x8 // Vector copy (A64)
  • pub unsafe fn vcopyq_laneq_u8(a: u8x16, lane1: i, b: u8x16, lane2: i) -> u8x16 // Vector copy (A64)
  • pub unsafe fn vcopy_laneq_u16(a: u16x4, lane1: i, b: u16x8, lane2: i) -> u16x4 // Vector copy (A64)
  • pub unsafe fn vcopyq_laneq_u16(a: u16x8, lane1: i, b: u16x8, lane2: i) -> u16x8 // Vector copy (A64)
  • pub unsafe fn vcopy_laneq_u32(a: u32x2, lane1: i, b: u32x4, lane2: i) -> u32x2 // Vector copy (A64)
  • pub unsafe fn vcopyq_laneq_u32(a: u32x4, lane1: i, b: u32x4, lane2: i) -> u32x4 // Vector copy (A64)
  • pub unsafe fn vcopy_laneq_u64(a: u64x1, lane1: i, b: u64x2, lane2: i) -> u64x1 // Vector copy (A64)
  • pub unsafe fn vcopyq_laneq_u64(a: u64x2, lane1: i, b: u64x2, lane2: i) -> u64x2 // Vector copy (A64)
  • pub unsafe fn vcopy_laneq_p64(a: p64x1, lane1: i, b: p64x2, lane2: i) -> p64x1 // Vector copy (A32/A64)
  • pub unsafe fn vcopyq_laneq_p64(a: p64x2, lane1: i, b: p64x2, lane2: i) -> p64x2 // Vector copy (A32/A64)
  • pub unsafe fn vcopy_laneq_f32(a: f32x2, lane1: i, b: f32x4, lane2: i) -> f32x2 // Vector copy (A64)
  • pub unsafe fn vcopyq_laneq_f32(a: f32x4, lane1: i, b: f32x4, lane2: i) -> f32x4 // Vector copy (A64)
  • pub unsafe fn vcopy_laneq_f64(a: f64x1, lane1: i, b: f64x2, lane2: i) -> f64x1 // Vector copy (A64)
  • pub unsafe fn vcopyq_laneq_f64(a: f64x2, lane1: i, b: f64x2, lane2: i) -> f64x2 // Vector copy (A64)
  • pub unsafe fn vcopy_laneq_p8(a: p8x8, lane1: i, b: p8x16, lane2: i) -> p8x8 // Vector copy (A64)
  • pub unsafe fn vcopyq_laneq_p8(a: p8x16, lane1: i, b: p8x16, lane2: i) -> p8x16 // Vector copy (A64)
  • pub unsafe fn vcopy_laneq_p16(a: p16x4, lane1: i, b: p16x8, lane2: i) -> p16x4 // Vector copy (A64)
  • pub unsafe fn vcopyq_laneq_p16(a: p16x8, lane1: i, b: p16x8, lane2: i) -> p16x8 // Vector copy (A64)
  • pub unsafe fn vrbit_s8(a: i8x8) -> i8x8 // Reverse bit order (A64)
  • pub unsafe fn vrbitq_s8(a: i8x16) -> i8x16 // Reverse bit order (A64)
  • pub unsafe fn vrbit_u8(a: u8x8) -> u8x8 // Reverse bit order (A64)
  • pub unsafe fn vrbitq_u8(a: u8x16) -> u8x16 // Reverse bit order (A64)
  • pub unsafe fn vrbit_p8(a: p8x8) -> p8x8 // Reverse bit order (A64)
  • pub unsafe fn vrbitq_p8(a: p8x16) -> p8x16 // Reverse bit order (A64)
  • pub unsafe fn vcreate_s8(a: u64) -> i8x8 // Create vector from bit pattern (v7/A32/A64)
  • pub unsafe fn vcreate_s16(a: u64) -> i16x4 // Create vector from bit pattern (v7/A32/A64)
  • pub unsafe fn vcreate_s32(a: u64) -> i32x2 // Create vector from bit pattern (v7/A32/A64)
  • pub unsafe fn vcreate_s64(a: u64) -> i64x1 // Create vector from bit pattern (v7/A32/A64)
  • pub unsafe fn vcreate_u8(a: u64) -> u8x8 // Create vector from bit pattern (v7/A32/A64)
  • pub unsafe fn vcreate_u16(a: u64) -> u16x4 // Create vector from bit pattern (v7/A32/A64)
  • pub unsafe fn vcreate_u32(a: u64) -> u32x2 // Create vector from bit pattern (v7/A32/A64)
  • pub unsafe fn vcreate_u64(a: u64) -> u64x1 // Create vector from bit pattern (v7/A32/A64)
  • pub unsafe fn vcreate_p64(a: u64) -> p64x1 // Create vector from bit pattern (A32/A64)
  • pub unsafe fn vcreate_f16(a: u64) -> f16x4 // Create vector from bit pattern (v7/A32/A64)
  • pub unsafe fn vcreate_f32(a: u64) -> f32x2 // Create vector from bit pattern (v7/A32/A64)
  • pub unsafe fn vcreate_p8(a: u64) -> p8x8 // Create vector from bit pattern (v7/A32/A64)
  • pub unsafe fn vcreate_p16(a: u64) -> p16x4 // Create vector from bit pattern (v7/A32/A64)
  • pub unsafe fn vcreate_f64(a: u64) -> f64x1 // Create vector from bit pattern (A64)
  • pub unsafe fn vdup_n_s8(value: i8) -> i8x8 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdupq_n_s8(value: i8) -> i8x16 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdup_n_s16(value: i16) -> i16x4 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdupq_n_s16(value: i16) -> i16x8 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdup_n_s32(value: i32) -> i32x2 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdupq_n_s32(value: i32) -> i32x4 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdup_n_s64(value: i64) -> i64x1 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdupq_n_s64(value: i64) -> i64x2 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdup_n_u8(value: u8) -> u8x8 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdupq_n_u8(value: u8) -> u8x16 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdup_n_u16(value: u16) -> u16x4 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdupq_n_u16(value: u16) -> u16x8 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdup_n_u32(value: u32) -> u32x2 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdupq_n_u32(value: u32) -> u32x4 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdup_n_u64(value: u64) -> u64x1 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdupq_n_u64(value: u64) -> u64x2 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdup_n_p64(value: p64) -> p64x1 // Vector duplicate (A32/A64)
  • pub unsafe fn vdupq_n_p64(value: p64) -> p64x2 // Vector duplicate (A32/A64)
  • pub unsafe fn vdup_n_f32(value: f32) -> f32x2 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdupq_n_f32(value: f32) -> f32x4 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdup_n_p8(value: p8) -> p8x8 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdupq_n_p8(value: p8) -> p8x16 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdup_n_p16(value: p16) -> p16x4 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdupq_n_p16(value: p16) -> p16x8 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdup_n_f64(value: f64) -> f64x1 // Vector duplicate (A64)
  • pub unsafe fn vdupq_n_f64(value: f64) -> f64x2 // Vector duplicate (A64)
  • pub unsafe fn vmov_n_s8(value: i8) -> i8x8 // Vector move (v7/A32/A64)
  • pub unsafe fn vmovq_n_s8(value: i8) -> i8x16 // Vector move (v7/A32/A64)
  • pub unsafe fn vmov_n_s16(value: i16) -> i16x4 // Vector move (v7/A32/A64)
  • pub unsafe fn vmovq_n_s16(value: i16) -> i16x8 // Vector move (v7/A32/A64)
  • pub unsafe fn vmov_n_s32(value: i32) -> i32x2 // Vector move (v7/A32/A64)
  • pub unsafe fn vmovq_n_s32(value: i32) -> i32x4 // Vector move (v7/A32/A64)
  • pub unsafe fn vmov_n_s64(value: i64) -> i64x1 // Vector move (v7/A32/A64)
  • pub unsafe fn vmovq_n_s64(value: i64) -> i64x2 // Vector move (v7/A32/A64)
  • pub unsafe fn vmov_n_u8(value: u8) -> u8x8 // Vector move (v7/A32/A64)
  • pub unsafe fn vmovq_n_u8(value: u8) -> u8x16 // Vector move (v7/A32/A64)
  • pub unsafe fn vmov_n_u16(value: u16) -> u16x4 // Vector move (v7/A32/A64)
  • pub unsafe fn vmovq_n_u16(value: u16) -> u16x8 // Vector move (v7/A32/A64)
  • pub unsafe fn vmov_n_u32(value: u32) -> u32x2 // Vector move (v7/A32/A64)
  • pub unsafe fn vmovq_n_u32(value: u32) -> u32x4 // Vector move (v7/A32/A64)
  • pub unsafe fn vmov_n_u64(value: u64) -> u64x1 // Vector move (v7/A32/A64)
  • pub unsafe fn vmovq_n_u64(value: u64) -> u64x2 // Vector move (v7/A32/A64)
  • pub unsafe fn vmov_n_f32(value: f32) -> f32x2 // Vector move (v7/A32/A64)
  • pub unsafe fn vmovq_n_f32(value: f32) -> f32x4 // Vector move (v7/A32/A64)
  • pub unsafe fn vmov_n_p8(value: p8) -> p8x8 // Vector move (v7/A32/A64)
  • pub unsafe fn vmovq_n_p8(value: p8) -> p8x16 // Vector move (v7/A32/A64)
  • pub unsafe fn vmov_n_p16(value: p16) -> p16x4 // Vector move (v7/A32/A64)
  • pub unsafe fn vmovq_n_p16(value: p16) -> p16x8 // Vector move (v7/A32/A64)
  • pub unsafe fn vmov_n_f64(value: f64) -> f64x1 // Vector move (A64)
  • pub unsafe fn vmovq_n_f64(value: f64) -> f64x2 // Vector move (A64)
  • pub unsafe fn vdup_lane_s8(vec: i8x8, lane: i) -> i8x8 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdupq_lane_s8(vec: i8x8, lane: i) -> i8x16 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdup_lane_s16(vec: i16x4, lane: i) -> i16x4 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdupq_lane_s16(vec: i16x4, lane: i) -> i16x8 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdup_lane_s32(vec: i32x2, lane: i) -> i32x2 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdupq_lane_s32(vec: i32x2, lane: i) -> i32x4 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdup_lane_s64(vec: i64x1, lane: i) -> i64x1 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdupq_lane_s64(vec: i64x1, lane: i) -> i64x2 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdup_lane_u8(vec: u8x8, lane: i) -> u8x8 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdupq_lane_u8(vec: u8x8, lane: i) -> u8x16 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdup_lane_u16(vec: u16x4, lane: i) -> u16x4 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdupq_lane_u16(vec: u16x4, lane: i) -> u16x8 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdup_lane_u32(vec: u32x2, lane: i) -> u32x2 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdupq_lane_u32(vec: u32x2, lane: i) -> u32x4 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdup_lane_u64(vec: u64x1, lane: i) -> u64x1 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdupq_lane_u64(vec: u64x1, lane: i) -> u64x2 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdup_lane_p64(vec: p64x1, lane: i) -> p64x1 // Vector duplicate (A32/A64)
  • pub unsafe fn vdupq_lane_p64(vec: p64x1, lane: i) -> p64x2 // Vector duplicate (A32/A64)
  • pub unsafe fn vdup_lane_f32(vec: f32x2, lane: i) -> f32x2 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdupq_lane_f32(vec: f32x2, lane: i) -> f32x4 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdup_lane_p8(vec: p8x8, lane: i) -> p8x8 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdupq_lane_p8(vec: p8x8, lane: i) -> p8x16 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdup_lane_p16(vec: p16x4, lane: i) -> p16x4 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdupq_lane_p16(vec: p16x4, lane: i) -> p16x8 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdup_lane_f64(vec: f64x1, lane: i) -> f64x1 // Vector duplicate (A64)
  • pub unsafe fn vdupq_lane_f64(vec: f64x1, lane: i) -> f64x2 // Vector duplicate (A64)
  • pub unsafe fn vdup_laneq_s8(vec: i8x16, lane: i) -> i8x8 // Vector duplicate (A64)
  • pub unsafe fn vdupq_laneq_s8(vec: i8x16, lane: i) -> i8x16 // Vector duplicate (A64)
  • pub unsafe fn vdup_laneq_s16(vec: i16x8, lane: i) -> i16x4 // Vector duplicate (A64)
  • pub unsafe fn vdupq_laneq_s16(vec: i16x8, lane: i) -> i16x8 // Vector duplicate (A64)
  • pub unsafe fn vdup_laneq_s32(vec: i32x4, lane: i) -> i32x2 // Vector duplicate (A64)
  • pub unsafe fn vdupq_laneq_s32(vec: i32x4, lane: i) -> i32x4 // Vector duplicate (A64)
  • pub unsafe fn vdup_laneq_s64(vec: i64x2, lane: i) -> i64x1 // Vector duplicate (A64)
  • pub unsafe fn vdupq_laneq_s64(vec: i64x2, lane: i) -> i64x2 // Vector duplicate (A64)
  • pub unsafe fn vdup_laneq_u8(vec: u8x16, lane: i) -> u8x8 // Vector duplicate (A64)
  • pub unsafe fn vdupq_laneq_u8(vec: u8x16, lane: i) -> u8x16 // Vector duplicate (A64)
  • pub unsafe fn vdup_laneq_u16(vec: u16x8, lane: i) -> u16x4 // Vector duplicate (A64)
  • pub unsafe fn vdupq_laneq_u16(vec: u16x8, lane: i) -> u16x8 // Vector duplicate (A64)
  • pub unsafe fn vdup_laneq_u32(vec: u32x4, lane: i) -> u32x2 // Vector duplicate (A64)
  • pub unsafe fn vdupq_laneq_u32(vec: u32x4, lane: i) -> u32x4 // Vector duplicate (A64)
  • pub unsafe fn vdup_laneq_u64(vec: u64x2, lane: i) -> u64x1 // Vector duplicate (A64)
  • pub unsafe fn vdupq_laneq_u64(vec: u64x2, lane: i) -> u64x2 // Vector duplicate (A64)
  • pub unsafe fn vdup_laneq_p64(vec: p64x2, lane: i) -> p64x1 // Vector duplicate (A64)
  • pub unsafe fn vdupq_laneq_p64(vec: p64x2, lane: i) -> p64x2 // Vector duplicate (A64)
  • pub unsafe fn vdup_laneq_f32(vec: f32x4, lane: i) -> f32x2 // Vector duplicate (A64)
  • pub unsafe fn vdupq_laneq_f32(vec: f32x4, lane: i) -> f32x4 // Vector duplicate (A64)
  • pub unsafe fn vdup_laneq_p8(vec: p8x16, lane: i) -> p8x8 // Vector duplicate (A64)
  • pub unsafe fn vdupq_laneq_p8(vec: p8x16, lane: i) -> p8x16 // Vector duplicate (A64)
  • pub unsafe fn vdup_laneq_p16(vec: p16x8, lane: i) -> p16x4 // Vector duplicate (A64)
  • pub unsafe fn vdupq_laneq_p16(vec: p16x8, lane: i) -> p16x8 // Vector duplicate (A64)
  • pub unsafe fn vdup_laneq_f64(vec: f64x2, lane: i) -> f64x1 // Vector duplicate (A64)
  • pub unsafe fn vdupq_laneq_f64(vec: f64x2, lane: i) -> f64x2 // Vector duplicate (A64)
  • pub unsafe fn vcombine_s8(low: i8x8, high: i8x8) -> i8x16 // Vector combine (v7/A32/A64)
  • pub unsafe fn vcombine_s16(low: i16x4, high: i16x4) -> i16x8 // Vector combine (v7/A32/A64)
  • pub unsafe fn vcombine_s32(low: i32x2, high: i32x2) -> i32x4 // Vector combine (v7/A32/A64)
  • pub unsafe fn vcombine_s64(low: i64x1, high: i64x1) -> i64x2 // Vector combine (v7/A32/A64)
  • pub unsafe fn vcombine_u8(low: u8x8, high: u8x8) -> u8x16 // Vector combine (v7/A32/A64)
  • pub unsafe fn vcombine_u16(low: u16x4, high: u16x4) -> u16x8 // Vector combine (v7/A32/A64)
  • pub unsafe fn vcombine_u32(low: u32x2, high: u32x2) -> u32x4 // Vector combine (v7/A32/A64)
  • pub unsafe fn vcombine_u64(low: u64x1, high: u64x1) -> u64x2 // Vector combine (v7/A32/A64)
  • pub unsafe fn vcombine_p64(low: p64x1, high: p64x1) -> p64x2 // Vector combine (A32/A64)
  • pub unsafe fn vcombine_f16(low: f16x4, high: f16x4) -> f16x8 // Vector combine (v7/A32/A64)
  • pub unsafe fn vcombine_f32(low: f32x2, high: f32x2) -> f32x4 // Vector combine (v7/A32/A64)
  • pub unsafe fn vcombine_p8(low: p8x8, high: p8x8) -> p8x16 // Vector combine (v7/A32/A64)
  • pub unsafe fn vcombine_p16(low: p16x4, high: p16x4) -> p16x8 // Vector combine (v7/A32/A64)
  • pub unsafe fn vcombine_f64(low: f64x1, high: f64x1) -> f64x2 // Vector combine (A64)
  • pub unsafe fn vget_high_s8(a: i8x16) -> i8x8 // Vector extract (v7/A32/A64)
  • pub unsafe fn vget_high_s16(a: i16x8) -> i16x4 // Vector extract (v7/A32/A64)
  • pub unsafe fn vget_high_s32(a: i32x4) -> i32x2 // Vector extract (v7/A32/A64)
  • pub unsafe fn vget_high_s64(a: i64x2) -> i64x1 // Vector extract (v7/A32/A64)
  • pub unsafe fn vget_high_u8(a: u8x16) -> u8x8 // Vector extract (v7/A32/A64)
  • pub unsafe fn vget_high_u16(a: u16x8) -> u16x4 // Vector extract (v7/A32/A64)
  • pub unsafe fn vget_high_u32(a: u32x4) -> u32x2 // Vector extract (v7/A32/A64)
  • pub unsafe fn vget_high_u64(a: u64x2) -> u64x1 // Vector extract (v7/A32/A64)
  • pub unsafe fn vget_high_p64(a: p64x2) -> p64x1 // Vector extract (A32/A64)
  • pub unsafe fn vget_high_f16(a: f16x8) -> f16x4 // Vector extract (A64)
  • pub unsafe fn vget_high_f32(a: f32x4) -> f32x2 // Vector extract (v7/A32/A64)
  • pub unsafe fn vget_high_p8(a: p8x16) -> p8x8 // Vector extract (v7/A32/A64)
  • pub unsafe fn vget_high_p16(a: p16x8) -> p16x4 // Vector extract (v7/A32/A64)
  • pub unsafe fn vget_high_f64(a: f64x2) -> f64x1 // Vector extract (A64)
  • pub unsafe fn vget_low_s8(a: i8x16) -> i8x8 // Vector extract (v7/A32/A64)
  • pub unsafe fn vget_low_s16(a: i16x8) -> i16x4 // Vector extract (v7/A32/A64)
  • pub unsafe fn vget_low_s32(a: i32x4) -> i32x2 // Vector extract (v7/A32/A64)
  • pub unsafe fn vget_low_s64(a: i64x2) -> i64x1 // Vector extract (v7/A32/A64)
  • pub unsafe fn vget_low_u8(a: u8x16) -> u8x8 // Vector extract (v7/A32/A64)
  • pub unsafe fn vget_low_u16(a: u16x8) -> u16x4 // Vector extract (v7/A32/A64)
  • pub unsafe fn vget_low_u32(a: u32x4) -> u32x2 // Vector extract (v7/A32/A64)
  • pub unsafe fn vget_low_u64(a: u64x2) -> u64x1 // Vector extract (v7/A32/A64)
  • pub unsafe fn vget_low_p64(a: p64x2) -> p64x1 // Vector extract (A32/A64)
  • pub unsafe fn vget_low_f16(a: f16x8) -> f16x4 // Vector extract (A64)
  • pub unsafe fn vget_low_f32(a: f32x4) -> f32x2 // Vector extract (v7/A32/A64)
  • pub unsafe fn vget_low_p8(a: p8x16) -> p8x8 // Vector extract (v7/A32/A64)
  • pub unsafe fn vget_low_p16(a: p16x8) -> p16x4 // Vector extract (v7/A32/A64)
  • pub unsafe fn vget_low_f64(a: f64x2) -> f64x1 // Vector extract (A64)
  • pub unsafe fn vdupb_lane_s8(vec: i8x8, lane: i) -> i8 // Vector duplicate (A64)
  • pub unsafe fn vduph_lane_s16(vec: i16x4, lane: i) -> i16 // Vector duplicate (A64)
  • pub unsafe fn vdups_lane_s32(vec: i32x2, lane: i) -> i32 // Vector duplicate (A64)
  • pub unsafe fn vdupd_lane_s64(vec: i64x1, lane: i) -> i64 // Vector duplicate (A64)
  • pub unsafe fn vdupb_lane_u8(vec: u8x8, lane: i) -> u8 // Vector duplicate (A64)
  • pub unsafe fn vduph_lane_u16(vec: u16x4, lane: i) -> u16 // Vector duplicate (A64)
  • pub unsafe fn vdups_lane_u32(vec: u32x2, lane: i) -> u32 // Vector duplicate (A64)
  • pub unsafe fn vdupd_lane_u64(vec: u64x1, lane: i) -> u64 // Vector duplicate (A64)
  • pub unsafe fn vdups_lane_f32(vec: f32x2, lane: i) -> f32 // Vector duplicate (A64)
  • pub unsafe fn vdupd_lane_f64(vec: f64x1, lane: i) -> f64 // Vector duplicate (A64)
  • pub unsafe fn vdupb_lane_p8(vec: p8x8, lane: i) -> p8 // Vector duplicate (A64)
  • pub unsafe fn vduph_lane_p16(vec: p16x4, lane: i) -> p16 // Vector duplicate (A64)
  • pub unsafe fn vdupb_laneq_s8(vec: i8x16, lane: i) -> i8 // Vector duplicate (A64)
  • pub unsafe fn vduph_laneq_s16(vec: i16x8, lane: i) -> i16 // Vector duplicate (A64)
  • pub unsafe fn vdups_laneq_s32(vec: i32x4, lane: i) -> i32 // Vector duplicate (A64)
  • pub unsafe fn vdupd_laneq_s64(vec: i64x2, lane: i) -> i64 // Vector duplicate (A64)
  • pub unsafe fn vdupb_laneq_u8(vec: u8x16, lane: i) -> u8 // Vector duplicate (A64)
  • pub unsafe fn vduph_laneq_u16(vec: u16x8, lane: i) -> u16 // Vector duplicate (A64)
  • pub unsafe fn vdups_laneq_u32(vec: u32x4, lane: i) -> u32 // Vector duplicate (A64)
  • pub unsafe fn vdupd_laneq_u64(vec: u64x2, lane: i) -> u64 // Vector duplicate (A64)
  • pub unsafe fn vdups_laneq_f32(vec: f32x4, lane: i) -> f32 // Vector duplicate (A64)
  • pub unsafe fn vdupd_laneq_f64(vec: f64x2, lane: i) -> f64 // Vector duplicate (A64)
  • pub unsafe fn vdupb_laneq_p8(vec: p8x16, lane: i) -> p8 // Vector duplicate (A64)
  • pub unsafe fn vduph_laneq_p16(vec: p16x8, lane: i) -> p16 // Vector duplicate (A64)
  • pub unsafe fn vld1_s8(ptr: *const i8) -> i8x8 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_s8(ptr: *const i8) -> i8x16 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_s16(ptr: *const i16) -> i16x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_s16(ptr: *const i16) -> i16x8 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_s32(ptr: *const i32) -> i32x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_s32(ptr: *const i32) -> i32x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_s64(ptr: *const i64) -> i64x1 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_s64(ptr: *const i64) -> i64x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_u8(ptr: *const u8) -> u8x8 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_u8(ptr: *const u8) -> u8x16 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_u16(ptr: *const u16) -> u16x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_u16(ptr: *const u16) -> u16x8 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_u32(ptr: *const u32) -> u32x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_u32(ptr: *const u32) -> u32x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_u64(ptr: *const u64) -> u64x1 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_u64(ptr: *const u64) -> u64x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_p64(ptr: *const p64) -> p64x1 // Vector load (A32/A64)
  • pub unsafe fn vld1q_p64(ptr: *const p64) -> p64x2 // Vector load (A32/A64)
  • pub unsafe fn vld1_f16(ptr: *const f16) -> f16x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_f16(ptr: *const f16) -> f16x8 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_f32(ptr: *const f32) -> f32x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_f32(ptr: *const f32) -> f32x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_p8(ptr: *const p8) -> p8x8 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_p8(ptr: *const p8) -> p8x16 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_p16(ptr: *const p16) -> p16x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_p16(ptr: *const p16) -> p16x8 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_f64(ptr: *const f64) -> f64x1 // Vector load (A64)
  • pub unsafe fn vld1q_f64(ptr: *const f64) -> f64x2 // Vector load (A64)
  • pub unsafe fn vld1_lane_s8(ptr: *const i8, src: i8x8, lane: i) -> i8x8 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_lane_s8(ptr: *const i8, src: i8x16, lane: i) -> i8x16 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_lane_s16(ptr: *const i16, src: i16x4, lane: i) -> i16x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_lane_s16(ptr: *const i16, src: i16x8, lane: i) -> i16x8 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_lane_s32(ptr: *const i32, src: i32x2, lane: i) -> i32x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_lane_s32(ptr: *const i32, src: i32x4, lane: i) -> i32x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_lane_s64(ptr: *const i64, src: i64x1, lane: i) -> i64x1 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_lane_s64(ptr: *const i64, src: i64x2, lane: i) -> i64x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_lane_u8(ptr: *const u8, src: u8x8, lane: i) -> u8x8 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_lane_u8(ptr: *const u8, src: u8x16, lane: i) -> u8x16 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_lane_u16(ptr: *const u16, src: u16x4, lane: i) -> u16x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_lane_u16(ptr: *const u16, src: u16x8, lane: i) -> u16x8 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_lane_u32(ptr: *const u32, src: u32x2, lane: i) -> u32x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_lane_u32(ptr: *const u32, src: u32x4, lane: i) -> u32x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_lane_u64(ptr: *const u64, src: u64x1, lane: i) -> u64x1 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_lane_u64(ptr: *const u64, src: u64x2, lane: i) -> u64x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_lane_p64(ptr: *const p64, src: p64x1, lane: i) -> p64x1 // Vector load (A32/A64)
  • pub unsafe fn vld1q_lane_p64(ptr: *const p64, src: p64x2, lane: i) -> p64x2 // Vector load (A32/A64)
  • pub unsafe fn vld1_lane_f16(ptr: *const f16, src: f16x4, lane: i) -> f16x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_lane_f16(ptr: *const f16, src: f16x8, lane: i) -> f16x8 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_lane_f32(ptr: *const f32, src: f32x2, lane: i) -> f32x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_lane_f32(ptr: *const f32, src: f32x4, lane: i) -> f32x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_lane_p8(ptr: *const p8, src: p8x8, lane: i) -> p8x8 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_lane_p8(ptr: *const p8, src: p8x16, lane: i) -> p8x16 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_lane_p16(ptr: *const p16, src: p16x4, lane: i) -> p16x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_lane_p16(ptr: *const p16, src: p16x8, lane: i) -> p16x8 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_lane_f64(ptr: *const f64, src: f64x1, lane: i) -> f64x1 // Vector load (A64)
  • pub unsafe fn vld1q_lane_f64(ptr: *const f64, src: f64x2, lane: i) -> f64x2 // Vector load (A64)
  • pub unsafe fn vld1_dup_s8(ptr: *const i8) -> i8x8 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_dup_s8(ptr: *const i8) -> i8x16 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_dup_s16(ptr: *const i16) -> i16x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_dup_s16(ptr: *const i16) -> i16x8 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_dup_s32(ptr: *const i32) -> i32x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_dup_s32(ptr: *const i32) -> i32x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_dup_s64(ptr: *const i64) -> i64x1 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_dup_s64(ptr: *const i64) -> i64x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_dup_u8(ptr: *const u8) -> u8x8 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_dup_u8(ptr: *const u8) -> u8x16 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_dup_u16(ptr: *const u16) -> u16x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_dup_u16(ptr: *const u16) -> u16x8 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_dup_u32(ptr: *const u32) -> u32x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_dup_u32(ptr: *const u32) -> u32x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_dup_u64(ptr: *const u64) -> u64x1 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_dup_u64(ptr: *const u64) -> u64x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_dup_p64(ptr: *const p64) -> p64x1 // Vector load (A32/A64)
  • pub unsafe fn vld1q_dup_p64(ptr: *const p64) -> p64x2 // Vector load (A32/A64)
  • pub unsafe fn vld1_dup_f16(ptr: *const f16) -> f16x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_dup_f16(ptr: *const f16) -> f16x8 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_dup_f32(ptr: *const f32) -> f32x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_dup_f32(ptr: *const f32) -> f32x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_dup_p8(ptr: *const p8) -> p8x8 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_dup_p8(ptr: *const p8) -> p8x16 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_dup_p16(ptr: *const p16) -> p16x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_dup_p16(ptr: *const p16) -> p16x8 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_dup_f64(ptr: *const f64) -> f64x1 // Vector load (A64)
  • pub unsafe fn vld1q_dup_f64(ptr: *const f64) -> f64x2 // Vector load (A64)
  • pub unsafe fn vst1_s8(ptr: *mut i8, val: i8x8) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_s8(ptr: *mut i8, val: i8x16) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_s16(ptr: *mut i16, val: i16x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_s16(ptr: *mut i16, val: i16x8) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_s32(ptr: *mut i32, val: i32x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_s32(ptr: *mut i32, val: i32x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_s64(ptr: *mut i64, val: i64x1) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_s64(ptr: *mut i64, val: i64x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_u8(ptr: *mut u8, val: u8x8) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_u8(ptr: *mut u8, val: u8x16) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_u16(ptr: *mut u16, val: u16x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_u16(ptr: *mut u16, val: u16x8) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_u32(ptr: *mut u32, val: u32x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_u32(ptr: *mut u32, val: u32x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_u64(ptr: *mut u64, val: u64x1) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_u64(ptr: *mut u64, val: u64x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_p64(ptr: *mut p64, val: p64x1) -> () // Vector store (A32/A64)
  • pub unsafe fn vst1q_p64(ptr: *mut p64, val: p64x2) -> () // Vector store (A32/A64)
  • pub unsafe fn vst1_f16(ptr: *mut f16, val: f16x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_f16(ptr: *mut f16, val: f16x8) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_f32(ptr: *mut f32, val: f32x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_f32(ptr: *mut f32, val: f32x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_p8(ptr: *mut p8, val: p8x8) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_p8(ptr: *mut p8, val: p8x16) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_p16(ptr: *mut p16, val: p16x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_p16(ptr: *mut p16, val: p16x8) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_f64(ptr: *mut f64, val: f64x1) -> () // Vector store (A64)
  • pub unsafe fn vst1q_f64(ptr: *mut f64, val: f64x2) -> () // Vector store (A64)
  • pub unsafe fn vst1_lane_s8(ptr: *mut i8, val: i8x8, lane: i) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_lane_s8(ptr: *mut i8, val: i8x16, lane: i) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_lane_s16(ptr: *mut i16, val: i16x4, lane: i) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_lane_s16(ptr: *mut i16, val: i16x8, lane: i) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_lane_s32(ptr: *mut i32, val: i32x2, lane: i) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_lane_s32(ptr: *mut i32, val: i32x4, lane: i) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_lane_s64(ptr: *mut i64, val: i64x1, lane: i) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_lane_s64(ptr: *mut i64, val: i64x2, lane: i) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_lane_u8(ptr: *mut u8, val: u8x8, lane: i) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_lane_u8(ptr: *mut u8, val: u8x16, lane: i) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_lane_u16(ptr: *mut u16, val: u16x4, lane: i) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_lane_u16(ptr: *mut u16, val: u16x8, lane: i) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_lane_u32(ptr: *mut u32, val: u32x2, lane: i) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_lane_u32(ptr: *mut u32, val: u32x4, lane: i) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_lane_u64(ptr: *mut u64, val: u64x1, lane: i) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_lane_u64(ptr: *mut u64, val: u64x2, lane: i) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_lane_p64(ptr: *mut p64, val: p64x1, lane: i) -> () // Vector store (A32/A64)
  • pub unsafe fn vst1q_lane_p64(ptr: *mut p64, val: p64x2, lane: i) -> () // Vector store (A32/A64)
  • pub unsafe fn vst1_lane_f16(ptr: *mut f16, val: f16x4, lane: i) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_lane_f16(ptr: *mut f16, val: f16x8, lane: i) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_lane_f32(ptr: *mut f32, val: f32x2, lane: i) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_lane_f32(ptr: *mut f32, val: f32x4, lane: i) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_lane_p8(ptr: *mut p8, val: p8x8, lane: i) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_lane_p8(ptr: *mut p8, val: p8x16, lane: i) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_lane_p16(ptr: *mut p16, val: p16x4, lane: i) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_lane_p16(ptr: *mut p16, val: p16x8, lane: i) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_lane_f64(ptr: *mut f64, val: f64x1, lane: i) -> () // Vector store (A64)
  • pub unsafe fn vst1q_lane_f64(ptr: *mut f64, val: f64x2, lane: i) -> () // Vector store (A64)
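When implementing and testing the `vst1_lane_*` family above, a portable scalar model of the semantics is useful as the expected value in unit tests. This is a sketch only: the `_ref` name and array types are hypothetical stand-ins, not the coresimd signatures (the real intrinsic takes an `i8x8` and should codegen to a single lane-store instruction).

```rust
/// Hypothetical scalar reference for what vst1_lane_s8 must do: store one
/// lane of an 8-lane vector through a pointer. Plain arrays stand in for
/// the SIMD type so the behavior is trivial to check on any host.
unsafe fn vst1_lane_s8_ref(ptr: *mut i8, val: [i8; 8], lane: usize) {
    assert!(lane < 8, "lane index out of range");
    // Write exactly one element; the other lanes never touch memory.
    *ptr = val[lane];
}

fn main() {
    let v: [i8; 8] = [10, 11, 12, 13, 14, 15, 16, 17];
    let mut out: i8 = 0;
    unsafe { vst1_lane_s8_ref(&mut out, v, 3) };
    assert_eq!(out, 13);
    println!("stored lane 3 -> {}", out);
}
```

A test for the real intrinsic can compare its result against a model like this for every lane index.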
  • pub unsafe fn vld2_s8(ptr: *const i8) -> i8x8x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_s8(ptr: *const i8) -> i8x16x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_s16(ptr: *const i16) -> i16x4x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_s16(ptr: *const i16) -> i16x8x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_s32(ptr: *const i32) -> i32x2x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_s32(ptr: *const i32) -> i32x4x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_u8(ptr: *const u8) -> u8x8x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_u8(ptr: *const u8) -> u8x16x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_u16(ptr: *const u16) -> u16x4x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_u16(ptr: *const u16) -> u16x8x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_u32(ptr: *const u32) -> u32x2x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_u32(ptr: *const u32) -> u32x4x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_f16(ptr: *const f16) -> f16x4x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_f16(ptr: *const f16) -> f16x8x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_f32(ptr: *const f32) -> f32x2x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_f32(ptr: *const f32) -> f32x4x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_p8(ptr: *const p8) -> p8x8x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_p8(ptr: *const p8) -> p8x16x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_p16(ptr: *const p16) -> p16x4x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_p16(ptr: *const p16) -> p16x8x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_s64(ptr: *const i64) -> i64x1x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_u64(ptr: *const u64) -> u64x1x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_p64(ptr: *const p64) -> p64x1x2 // 2-element vector load (A32/A64)
  • pub unsafe fn vld2q_s64(ptr: *const i64) -> i64x2x2 // 2-element vector load (A64)
  • pub unsafe fn vld2q_u64(ptr: *const u64) -> u64x2x2 // 2-element vector load (A64)
  • pub unsafe fn vld2q_p64(ptr: *const p64) -> p64x2x2 // 2-element vector load (A64)
  • pub unsafe fn vld2_f64(ptr: *const f64) -> f64x1x2 // 2-element vector load (A64)
  • pub unsafe fn vld2q_f64(ptr: *const f64) -> f64x2x2 // 2-element vector load (A64)
  • pub unsafe fn vld3_s8(ptr: *const i8) -> i8x8x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_s8(ptr: *const i8) -> i8x16x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_s16(ptr: *const i16) -> i16x4x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_s16(ptr: *const i16) -> i16x8x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_s32(ptr: *const i32) -> i32x2x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_s32(ptr: *const i32) -> i32x4x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_u8(ptr: *const u8) -> u8x8x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_u8(ptr: *const u8) -> u8x16x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_u16(ptr: *const u16) -> u16x4x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_u16(ptr: *const u16) -> u16x8x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_u32(ptr: *const u32) -> u32x2x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_u32(ptr: *const u32) -> u32x4x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_f16(ptr: *const f16) -> f16x4x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_f16(ptr: *const f16) -> f16x8x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_f32(ptr: *const f32) -> f32x2x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_f32(ptr: *const f32) -> f32x4x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_p8(ptr: *const p8) -> p8x8x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_p8(ptr: *const p8) -> p8x16x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_p16(ptr: *const p16) -> p16x4x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_p16(ptr: *const p16) -> p16x8x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_s64(ptr: *const i64) -> i64x1x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_u64(ptr: *const u64) -> u64x1x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_p64(ptr: *const p64) -> p64x1x3 // 3-element vector load (A32/A64)
  • pub unsafe fn vld3q_s64(ptr: *const i64) -> i64x2x3 // 3-element vector load (A64)
  • pub unsafe fn vld3q_u64(ptr: *const u64) -> u64x2x3 // 3-element vector load (A64)
  • pub unsafe fn vld3q_p64(ptr: *const p64) -> p64x2x3 // 3-element vector load (A64)
  • pub unsafe fn vld3_f64(ptr: *const f64) -> f64x1x3 // 3-element vector load (A64)
  • pub unsafe fn vld3q_f64(ptr: *const f64) -> f64x2x3 // 3-element vector load (A64)
  • pub unsafe fn vld4_s8(ptr: *const i8) -> i8x8x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_s8(ptr: *const i8) -> i8x16x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_s16(ptr: *const i16) -> i16x4x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_s16(ptr: *const i16) -> i16x8x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_s32(ptr: *const i32) -> i32x2x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_s32(ptr: *const i32) -> i32x4x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_u8(ptr: *const u8) -> u8x8x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_u8(ptr: *const u8) -> u8x16x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_u16(ptr: *const u16) -> u16x4x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_u16(ptr: *const u16) -> u16x8x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_u32(ptr: *const u32) -> u32x2x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_u32(ptr: *const u32) -> u32x4x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_f16(ptr: *const f16) -> f16x4x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_f16(ptr: *const f16) -> f16x8x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_f32(ptr: *const f32) -> f32x2x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_f32(ptr: *const f32) -> f32x4x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_p8(ptr: *const p8) -> p8x8x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_p8(ptr: *const p8) -> p8x16x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_p16(ptr: *const p16) -> p16x4x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_p16(ptr: *const p16) -> p16x8x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_s64(ptr: *const i64) -> i64x1x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_u64(ptr: *const u64) -> u64x1x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_p64(ptr: *const p64) -> p64x1x4 // 4-element vector load (A32/A64)
  • pub unsafe fn vld4q_s64(ptr: *const i64) -> i64x2x4 // 4-element vector load (A64)
  • pub unsafe fn vld4q_u64(ptr: *const u64) -> u64x2x4 // 4-element vector load (A64)
  • pub unsafe fn vld4q_p64(ptr: *const p64) -> p64x2x4 // 4-element vector load (A64)
  • pub unsafe fn vld4_f64(ptr: *const f64) -> f64x1x4 // 4-element vector load (A64)
  • pub unsafe fn vld4q_f64(ptr: *const f64) -> f64x2x4 // 4-element vector load (A64)
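The `vld2`/`vld3`/`vld4` loads above are de-interleaving loads: `VLD2` reads `2 * N` consecutive elements and splits even-indexed elements into the first result vector and odd-indexed elements into the second. A scalar sketch of that semantics (hypothetical `_ref` helper, arrays in place of the `i8x8x2` struct) makes a convenient expected value when writing the tests this issue asks for:

```rust
/// Scalar model of vld2_s8's de-interleaving load: 16 consecutive bytes
/// are split into evens (first vector) and odds (second vector).
unsafe fn vld2_s8_ref(ptr: *const i8) -> ([i8; 8], [i8; 8]) {
    let mut a = [0i8; 8];
    let mut b = [0i8; 8];
    for i in 0..8 {
        a[i] = *ptr.add(2 * i);     // element 2i goes to vector 0
        b[i] = *ptr.add(2 * i + 1); // element 2i+1 goes to vector 1
    }
    (a, b)
}

fn main() {
    let data: [i8; 16] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15];
    let (a, b) = unsafe { vld2_s8_ref(data.as_ptr()) };
    assert_eq!(a, [0, 2, 4, 6, 8, 10, 12, 14]);
    assert_eq!(b, [1, 3, 5, 7, 9, 11, 13, 15]);
    println!("{:?} {:?}", a, b);
}
```

`vld3`/`vld4` generalize this with stride 3 and 4 respectively.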
  • pub unsafe fn vld2_dup_s8(ptr: *const i8) -> i8x8x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_dup_s8(ptr: *const i8) -> i8x16x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_dup_s16(ptr: *const i16) -> i16x4x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_dup_s16(ptr: *const i16) -> i16x8x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_dup_s32(ptr: *const i32) -> i32x2x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_dup_s32(ptr: *const i32) -> i32x4x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_dup_u8(ptr: *const u8) -> u8x8x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_dup_u8(ptr: *const u8) -> u8x16x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_dup_u16(ptr: *const u16) -> u16x4x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_dup_u16(ptr: *const u16) -> u16x8x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_dup_u32(ptr: *const u32) -> u32x2x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_dup_u32(ptr: *const u32) -> u32x4x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_dup_f16(ptr: *const f16) -> f16x4x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_dup_f16(ptr: *const f16) -> f16x8x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_dup_f32(ptr: *const f32) -> f32x2x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_dup_f32(ptr: *const f32) -> f32x4x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_dup_p8(ptr: *const p8) -> p8x8x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_dup_p8(ptr: *const p8) -> p8x16x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_dup_p16(ptr: *const p16) -> p16x4x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_dup_p16(ptr: *const p16) -> p16x8x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_dup_s64(ptr: *const i64) -> i64x1x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_dup_u64(ptr: *const u64) -> u64x1x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_dup_p64(ptr: *const p64) -> p64x1x2 // 2-element vector load (A32/A64)
  • pub unsafe fn vld2q_dup_s64(ptr: *const i64) -> i64x2x2 // 2-element vector load (A64)
  • pub unsafe fn vld2q_dup_u64(ptr: *const u64) -> u64x2x2 // 2-element vector load (A64)
  • pub unsafe fn vld2q_dup_p64(ptr: *const p64) -> p64x2x2 // 2-element vector load (A64)
  • pub unsafe fn vld2_dup_f64(ptr: *const f64) -> f64x1x2 // 2-element vector load (A64)
  • pub unsafe fn vld2q_dup_f64(ptr: *const f64) -> f64x2x2 // 2-element vector load (A64)
  • pub unsafe fn vld3_dup_s8(ptr: *const i8) -> i8x8x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_dup_s8(ptr: *const i8) -> i8x16x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_dup_s16(ptr: *const i16) -> i16x4x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_dup_s16(ptr: *const i16) -> i16x8x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_dup_s32(ptr: *const i32) -> i32x2x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_dup_s32(ptr: *const i32) -> i32x4x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_dup_u8(ptr: *const u8) -> u8x8x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_dup_u8(ptr: *const u8) -> u8x16x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_dup_u16(ptr: *const u16) -> u16x4x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_dup_u16(ptr: *const u16) -> u16x8x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_dup_u32(ptr: *const u32) -> u32x2x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_dup_u32(ptr: *const u32) -> u32x4x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_dup_f16(ptr: *const f16) -> f16x4x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_dup_f16(ptr: *const f16) -> f16x8x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_dup_f32(ptr: *const f32) -> f32x2x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_dup_f32(ptr: *const f32) -> f32x4x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_dup_p8(ptr: *const p8) -> p8x8x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_dup_p8(ptr: *const p8) -> p8x16x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_dup_p16(ptr: *const p16) -> p16x4x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_dup_p16(ptr: *const p16) -> p16x8x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_dup_s64(ptr: *const i64) -> i64x1x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_dup_u64(ptr: *const u64) -> u64x1x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_dup_p64(ptr: *const p64) -> p64x1x3 // 3-element vector load (A32/A64)
  • pub unsafe fn vld3q_dup_s64(ptr: *const i64) -> i64x2x3 // 3-element vector load (A64)
  • pub unsafe fn vld3q_dup_u64(ptr: *const u64) -> u64x2x3 // 3-element vector load (A64)
  • pub unsafe fn vld3q_dup_p64(ptr: *const p64) -> p64x2x3 // 3-element vector load (A64)
  • pub unsafe fn vld3_dup_f64(ptr: *const f64) -> f64x1x3 // 3-element vector load (A64)
  • pub unsafe fn vld3q_dup_f64(ptr: *const f64) -> f64x2x3 // 3-element vector load (A64)
  • pub unsafe fn vld4_dup_s8(ptr: *const i8) -> i8x8x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_dup_s8(ptr: *const i8) -> i8x16x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_dup_s16(ptr: *const i16) -> i16x4x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_dup_s16(ptr: *const i16) -> i16x8x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_dup_s32(ptr: *const i32) -> i32x2x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_dup_s32(ptr: *const i32) -> i32x4x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_dup_u8(ptr: *const u8) -> u8x8x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_dup_u8(ptr: *const u8) -> u8x16x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_dup_u16(ptr: *const u16) -> u16x4x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_dup_u16(ptr: *const u16) -> u16x8x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_dup_u32(ptr: *const u32) -> u32x2x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_dup_u32(ptr: *const u32) -> u32x4x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_dup_f16(ptr: *const f16) -> f16x4x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_dup_f16(ptr: *const f16) -> f16x8x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_dup_f32(ptr: *const f32) -> f32x2x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_dup_f32(ptr: *const f32) -> f32x4x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_dup_p8(ptr: *const p8) -> p8x8x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_dup_p8(ptr: *const p8) -> p8x16x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_dup_p16(ptr: *const p16) -> p16x4x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_dup_p16(ptr: *const p16) -> p16x8x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_dup_s64(ptr: *const i64) -> i64x1x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_dup_u64(ptr: *const u64) -> u64x1x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_dup_p64(ptr: *const p64) -> p64x1x4 // 4-element vector load (A32/A64)
  • pub unsafe fn vld4q_dup_s64(ptr: *const i64) -> i64x2x4 // 4-element vector load (A64)
  • pub unsafe fn vld4q_dup_u64(ptr: *const u64) -> u64x2x4 // 4-element vector load (A64)
  • pub unsafe fn vld4q_dup_p64(ptr: *const p64) -> p64x2x4 // 4-element vector load (A64)
  • pub unsafe fn vld4_dup_f64(ptr: *const f64) -> f64x1x4 // 4-element vector load (A64)
  • pub unsafe fn vld4q_dup_f64(ptr: *const f64) -> f64x2x4 // 4-element vector load (A64)
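The `_dup` variants differ from the plain loads: `vld2_dup` reads only two consecutive elements from memory and broadcasts each one across all lanes of its respective result vector. A minimal scalar sketch (hypothetical `_ref` helper, arrays standing in for `i16x4x2`):

```rust
/// Scalar model of vld2_dup_s16: read two consecutive i16 values and
/// replicate each across all four lanes of its result vector.
unsafe fn vld2_dup_s16_ref(ptr: *const i16) -> ([i16; 4], [i16; 4]) {
    ([*ptr; 4], [*ptr.add(1); 4])
}

fn main() {
    let data: [i16; 2] = [7, 9];
    let (a, b) = unsafe { vld2_dup_s16_ref(data.as_ptr()) };
    assert_eq!(a, [7, 7, 7, 7]);
    assert_eq!(b, [9, 9, 9, 9]);
    println!("{:?} {:?}", a, b);
}
```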
  • pub unsafe fn vst2_s8(ptr: *mut i8, val: i8x8x2) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2q_s8(ptr: *mut i8, val: i8x16x2) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2_s16(ptr: *mut i16, val: i16x4x2) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2q_s16(ptr: *mut i16, val: i16x8x2) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2_s32(ptr: *mut i32, val: i32x2x2) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2q_s32(ptr: *mut i32, val: i32x4x2) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2_u8(ptr: *mut u8, val: u8x8x2) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2q_u8(ptr: *mut u8, val: u8x16x2) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2_u16(ptr: *mut u16, val: u16x4x2) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2q_u16(ptr: *mut u16, val: u16x8x2) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2_u32(ptr: *mut u32, val: u32x2x2) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2q_u32(ptr: *mut u32, val: u32x4x2) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2_f16(ptr: *mut f16, val: f16x4x2) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2q_f16(ptr: *mut f16, val: f16x8x2) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2_f32(ptr: *mut f32, val: f32x2x2) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2q_f32(ptr: *mut f32, val: f32x4x2) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2_p8(ptr: *mut p8, val: p8x8x2) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2q_p8(ptr: *mut p8, val: p8x16x2) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2_p16(ptr: *mut p16, val: p16x4x2) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2q_p16(ptr: *mut p16, val: p16x8x2) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2_s64(ptr: *mut i64, val: i64x1x2) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2_u64(ptr: *mut u64, val: u64x1x2) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2_p64(ptr: *mut p64, val: p64x1x2) -> () // 2-element vector store (A32/A64)
  • pub unsafe fn vst2q_s64(ptr: *mut i64, val: i64x2x2) -> () // 2-element vector store (A64)
  • pub unsafe fn vst2q_u64(ptr: *mut u64, val: u64x2x2) -> () // 2-element vector store (A64)
  • pub unsafe fn vst2q_p64(ptr: *mut p64, val: p64x2x2) -> () // 2-element vector store (A64)
  • pub unsafe fn vst2_f64(ptr: *mut f64, val: f64x1x2) -> () // 2-element vector store (A64)
  • pub unsafe fn vst2q_f64(ptr: *mut f64, val: f64x2x2) -> () // 2-element vector store (A64)
  • pub unsafe fn vst3_s8(ptr: *mut i8, val: i8x8x3) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3q_s8(ptr: *mut i8, val: i8x16x3) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3_s16(ptr: *mut i16, val: i16x4x3) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3q_s16(ptr: *mut i16, val: i16x8x3) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3_s32(ptr: *mut i32, val: i32x2x3) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3q_s32(ptr: *mut i32, val: i32x4x3) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3_u8(ptr: *mut u8, val: u8x8x3) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3q_u8(ptr: *mut u8, val: u8x16x3) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3_u16(ptr: *mut u16, val: u16x4x3) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3q_u16(ptr: *mut u16, val: u16x8x3) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3_u32(ptr: *mut u32, val: u32x2x3) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3q_u32(ptr: *mut u32, val: u32x4x3) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3_f16(ptr: *mut f16, val: f16x4x3) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3q_f16(ptr: *mut f16, val: f16x8x3) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3_f32(ptr: *mut f32, val: f32x2x3) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3q_f32(ptr: *mut f32, val: f32x4x3) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3_p8(ptr: *mut p8, val: p8x8x3) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3q_p8(ptr: *mut p8, val: p8x16x3) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3_p16(ptr: *mut p16, val: p16x4x3) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3q_p16(ptr: *mut p16, val: p16x8x3) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3_s64(ptr: *mut i64, val: i64x1x3) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3_u64(ptr: *mut u64, val: u64x1x3) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3_p64(ptr: *mut p64, val: p64x1x3) -> () // 3-element vector store (A32/A64)
  • pub unsafe fn vst3q_s64(ptr: *mut i64, val: i64x2x3) -> () // 3-element vector store (A64)
  • pub unsafe fn vst3q_u64(ptr: *mut u64, val: u64x2x3) -> () // 3-element vector store (A64)
  • pub unsafe fn vst3q_p64(ptr: *mut p64, val: p64x2x3) -> () // 3-element vector store (A64)
  • pub unsafe fn vst3_f64(ptr: *mut f64, val: f64x1x3) -> () // 3-element vector store (A64)
  • pub unsafe fn vst3q_f64(ptr: *mut f64, val: f64x2x3) -> () // 3-element vector store (A64)
  • pub unsafe fn vst4_s8(ptr: *mut i8, val: i8x8x4) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4q_s8(ptr: *mut i8, val: i8x16x4) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4_s16(ptr: *mut i16, val: i16x4x4) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4q_s16(ptr: *mut i16, val: i16x8x4) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4_s32(ptr: *mut i32, val: i32x2x4) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4q_s32(ptr: *mut i32, val: i32x4x4) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4_u8(ptr: *mut u8, val: u8x8x4) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4q_u8(ptr: *mut u8, val: u8x16x4) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4_u16(ptr: *mut u16, val: u16x4x4) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4q_u16(ptr: *mut u16, val: u16x8x4) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4_u32(ptr: *mut u32, val: u32x2x4) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4q_u32(ptr: *mut u32, val: u32x4x4) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4_f16(ptr: *mut f16, val: f16x4x4) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4q_f16(ptr: *mut f16, val: f16x8x4) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4_f32(ptr: *mut f32, val: f32x2x4) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4q_f32(ptr: *mut f32, val: f32x4x4) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4_p8(ptr: *mut p8, val: p8x8x4) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4q_p8(ptr: *mut p8, val: p8x16x4) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4_p16(ptr: *mut p16, val: p16x4x4) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4q_p16(ptr: *mut p16, val: p16x8x4) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4_s64(ptr: *mut i64, val: i64x1x4) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4_u64(ptr: *mut u64, val: u64x1x4) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4_p64(ptr: *mut p64, val: p64x1x4) -> () // 4-element vector store (A32/A64)
  • pub unsafe fn vst4q_s64(ptr: *mut i64, val: i64x2x4) -> () // 4-element vector store (A64)
  • pub unsafe fn vst4q_u64(ptr: *mut u64, val: u64x2x4) -> () // 4-element vector store (A64)
  • pub unsafe fn vst4q_p64(ptr: *mut p64, val: p64x2x4) -> () // 4-element vector store (A64)
  • pub unsafe fn vst4_f64(ptr: *mut f64, val: f64x1x4) -> () // 4-element vector store (A64)
  • pub unsafe fn vst4q_f64(ptr: *mut f64, val: f64x2x4) -> () // 4-element vector store (A64)
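The `vst2`/`vst3`/`vst4` stores are the inverse of the de-interleaving loads: they interleave their input vectors element-by-element into memory. A round-trip through a scalar model like the one below (hypothetical `_ref` helper; a tuple of arrays stands in for `i8x8x2`) is a cheap sanity check when writing tests:

```rust
/// Scalar model of vst2_s8's interleaving store: lanes of the two input
/// vectors are written alternately, producing a0 b0 a1 b1 ... in memory.
unsafe fn vst2_s8_ref(ptr: *mut i8, val: ([i8; 8], [i8; 8])) {
    for i in 0..8 {
        *ptr.add(2 * i) = val.0[i];
        *ptr.add(2 * i + 1) = val.1[i];
    }
}

fn main() {
    let a: [i8; 8] = [0, 2, 4, 6, 8, 10, 12, 14];
    let b: [i8; 8] = [1, 3, 5, 7, 9, 11, 13, 15];
    let mut out = [0i8; 16];
    unsafe { vst2_s8_ref(out.as_mut_ptr(), (a, b)) };
    // Interleaving the evens and odds reproduces 0..=15 in order.
    assert_eq!(out, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]);
    println!("{:?}", out);
}
```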
  • pub unsafe fn vld2_lane_s16(ptr: *const i16, src: i16x4x2, lane: i) -> i16x4x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_lane_s16(ptr: *const i16, src: i16x8x2, lane: i) -> i16x8x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_lane_s32(ptr: *const i32, src: i32x2x2, lane: i) -> i32x2x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_lane_s32(ptr: *const i32, src: i32x4x2, lane: i) -> i32x4x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_lane_u16(ptr: *const u16, src: u16x4x2, lane: i) -> u16x4x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_lane_u16(ptr: *const u16, src: u16x8x2, lane: i) -> u16x8x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_lane_u32(ptr: *const u32, src: u32x2x2, lane: i) -> u32x2x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_lane_u32(ptr: *const u32, src: u32x4x2, lane: i) -> u32x4x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_lane_f16(ptr: *const f16, src: f16x4x2, lane: i) -> f16x4x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_lane_f16(ptr: *const f16, src: f16x8x2, lane: i) -> f16x8x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_lane_f32(ptr: *const f32, src: f32x2x2, lane: i) -> f32x2x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_lane_f32(ptr: *const f32, src: f32x4x2, lane: i) -> f32x4x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_lane_p16(ptr: *const p16, src: p16x4x2, lane: i) -> p16x4x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_lane_p16(ptr: *const p16, src: p16x8x2, lane: i) -> p16x8x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_lane_s8(ptr: *const i8, src: i8x8x2, lane: i) -> i8x8x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_lane_u8(ptr: *const u8, src: u8x8x2, lane: i) -> u8x8x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2_lane_p8(ptr: *const p8, src: p8x8x2, lane: i) -> p8x8x2 // 2-element vector load (v7/A32/A64)
  • pub unsafe fn vld2q_lane_s8(ptr: *const i8, src: i8x16x2, lane: i) -> i8x16x2 // 2-element vector load (A64)
  • pub unsafe fn vld2q_lane_u8(ptr: *const u8, src: u8x16x2, lane: i) -> u8x16x2 // 2-element vector load (A64)
  • pub unsafe fn vld2q_lane_p8(ptr: *const p8, src: p8x16x2, lane: i) -> p8x16x2 // 2-element vector load (A64)
  • pub unsafe fn vld2_lane_s64(ptr: *const i64, src: i64x1x2, lane: i) -> i64x1x2 // 2-element vector load (A64)
  • pub unsafe fn vld2q_lane_s64(ptr: *const i64, src: i64x2x2, lane: i) -> i64x2x2 // 2-element vector load (A64)
  • pub unsafe fn vld2_lane_u64(ptr: *const u64, src: u64x1x2, lane: i) -> u64x1x2 // 2-element vector load (A64)
  • pub unsafe fn vld2q_lane_u64(ptr: *const u64, src: u64x2x2, lane: i) -> u64x2x2 // 2-element vector load (A64)
  • pub unsafe fn vld2_lane_p64(ptr: *const p64, src: p64x1x2, lane: i) -> p64x1x2 // 2-element vector load (A64)
  • pub unsafe fn vld2q_lane_p64(ptr: *const p64, src: p64x2x2, lane: i) -> p64x2x2 // 2-element vector load (A64)
  • pub unsafe fn vld2_lane_f64(ptr: *const f64, src: f64x1x2, lane: i) -> f64x1x2 // 2-element vector load (A64)
  • pub unsafe fn vld2q_lane_f64(ptr: *const f64, src: f64x2x2, lane: i) -> f64x2x2 // 2-element vector load (A64)
  • pub unsafe fn vld3_lane_s16(ptr: *const i16, src: i16x4x3, lane: i) -> i16x4x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_lane_s16(ptr: *const i16, src: i16x8x3, lane: i) -> i16x8x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_lane_s32(ptr: *const i32, src: i32x2x3, lane: i) -> i32x2x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_lane_s32(ptr: *const i32, src: i32x4x3, lane: i) -> i32x4x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_lane_u16(ptr: *const u16, src: u16x4x3, lane: i) -> u16x4x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_lane_u16(ptr: *const u16, src: u16x8x3, lane: i) -> u16x8x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_lane_u32(ptr: *const u32, src: u32x2x3, lane: i) -> u32x2x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_lane_u32(ptr: *const u32, src: u32x4x3, lane: i) -> u32x4x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_lane_f16(ptr: *const f16, src: f16x4x3, lane: i) -> f16x4x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_lane_f16(ptr: *const f16, src: f16x8x3, lane: i) -> f16x8x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_lane_f32(ptr: *const f32, src: f32x2x3, lane: i) -> f32x2x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_lane_f32(ptr: *const f32, src: f32x4x3, lane: i) -> f32x4x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_lane_p16(ptr: *const p16, src: p16x4x3, lane: i) -> p16x4x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_lane_p16(ptr: *const p16, src: p16x8x3, lane: i) -> p16x8x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_lane_s8(ptr: *const i8, src: i8x8x3, lane: i) -> i8x8x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_lane_u8(ptr: *const u8, src: u8x8x3, lane: i) -> u8x8x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3_lane_p8(ptr: *const p8, src: p8x8x3, lane: i) -> p8x8x3 // 3-element vector load (v7/A32/A64)
  • pub unsafe fn vld3q_lane_s8(ptr: *const i8, src: i8x16x3, lane: i) -> i8x16x3 // 3-element vector load (A64)
  • pub unsafe fn vld3q_lane_u8(ptr: *const u8, src: u8x16x3, lane: i) -> u8x16x3 // 3-element vector load (A64)
  • pub unsafe fn vld3q_lane_p8(ptr: *const p8, src: p8x16x3, lane: i) -> p8x16x3 // 3-element vector load (A64)
  • pub unsafe fn vld3_lane_s64(ptr: *const i64, src: i64x1x3, lane: i) -> i64x1x3 // 3-element vector load (A64)
  • pub unsafe fn vld3q_lane_s64(ptr: *const i64, src: i64x2x3, lane: i) -> i64x2x3 // 3-element vector load (A64)
  • pub unsafe fn vld3_lane_u64(ptr: *const u64, src: u64x1x3, lane: i) -> u64x1x3 // 3-element vector load (A64)
  • pub unsafe fn vld3q_lane_u64(ptr: *const u64, src: u64x2x3, lane: i) -> u64x2x3 // 3-element vector load (A64)
  • pub unsafe fn vld3_lane_p64(ptr: *const p64, src: p64x1x3, lane: i) -> p64x1x3 // 3-element vector load (A64)
  • pub unsafe fn vld3q_lane_p64(ptr: *const p64, src: p64x2x3, lane: i) -> p64x2x3 // 3-element vector load (A64)
  • pub unsafe fn vld3_lane_f64(ptr: *const f64, src: f64x1x3, lane: i) -> f64x1x3 // 3-element vector load (A64)
  • pub unsafe fn vld3q_lane_f64(ptr: *const f64, src: f64x2x3, lane: i) -> f64x2x3 // 3-element vector load (A64)
  • pub unsafe fn vld4_lane_s16(ptr: *const i16, src: i16x4x4, lane: i) -> i16x4x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_lane_s16(ptr: *const i16, src: i16x8x4, lane: i) -> i16x8x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_lane_s32(ptr: *const i32, src: i32x2x4, lane: i) -> i32x2x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_lane_s32(ptr: *const i32, src: i32x4x4, lane: i) -> i32x4x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_lane_u16(ptr: *const u16, src: u16x4x4, lane: i) -> u16x4x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_lane_u16(ptr: *const u16, src: u16x8x4, lane: i) -> u16x8x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_lane_u32(ptr: *const u32, src: u32x2x4, lane: i) -> u32x2x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_lane_u32(ptr: *const u32, src: u32x4x4, lane: i) -> u32x4x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_lane_f16(ptr: *const f16, src: f16x4x4, lane: i) -> f16x4x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_lane_f16(ptr: *const f16, src: f16x8x4, lane: i) -> f16x8x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_lane_f32(ptr: *const f32, src: f32x2x4, lane: i) -> f32x2x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_lane_f32(ptr: *const f32, src: f32x4x4, lane: i) -> f32x4x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_lane_p16(ptr: *const p16, src: p16x4x4, lane: i) -> p16x4x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_lane_p16(ptr: *const p16, src: p16x8x4, lane: i) -> p16x8x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_lane_s8(ptr: *const i8, src: i8x8x4, lane: i) -> i8x8x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_lane_u8(ptr: *const u8, src: u8x8x4, lane: i) -> u8x8x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4_lane_p8(ptr: *const p8, src: p8x8x4, lane: i) -> p8x8x4 // 4-element vector load (v7/A32/A64)
  • pub unsafe fn vld4q_lane_s8(ptr: *const i8, src: i8x16x4, lane: i) -> i8x16x4 // 4-element vector load (A64)
  • pub unsafe fn vld4q_lane_u8(ptr: *const u8, src: u8x16x4, lane: i) -> u8x16x4 // 4-element vector load (A64)
  • pub unsafe fn vld4q_lane_p8(ptr: *const p8, src: p8x16x4, lane: i) -> p8x16x4 // 4-element vector load (A64)
  • pub unsafe fn vld4_lane_s64(ptr: *const i64, src: i64x1x4, lane: i) -> i64x1x4 // 4-element vector load (A64)
  • pub unsafe fn vld4q_lane_s64(ptr: *const i64, src: i64x2x4, lane: i) -> i64x2x4 // 4-element vector load (A64)
  • pub unsafe fn vld4_lane_u64(ptr: *const u64, src: u64x1x4, lane: i) -> u64x1x4 // 4-element vector load (A64)
  • pub unsafe fn vld4q_lane_u64(ptr: *const u64, src: u64x2x4, lane: i) -> u64x2x4 // 4-element vector load (A64)
  • pub unsafe fn vld4_lane_p64(ptr: *const p64, src: p64x1x4, lane: i) -> p64x1x4 // 4-element vector load (A64)
  • pub unsafe fn vld4q_lane_p64(ptr: *const p64, src: p64x2x4, lane: i) -> p64x2x4 // 4-element vector load (A64)
  • pub unsafe fn vld4_lane_f64(ptr: *const f64, src: f64x1x4, lane: i) -> f64x1x4 // 4-element vector load (A64)
  • pub unsafe fn vld4q_lane_f64(ptr: *const f64, src: f64x2x4, lane: i) -> f64x2x4 // 4-element vector load (A64)
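The `vldN_lane` families above share one behavior: read N consecutive elements from memory and insert element k into lane `lane` of the k-th source vector, leaving all other lanes unchanged. A scalar sketch of that semantics (the name `vld3_lane_model` is illustrative, not the stdarch API):

```rust
// Scalar model of the vld3_lane family (e.g. vld3_lane_s16): reads 3
// consecutive i16 elements and replaces lane `lane` of each of the 3
// source vectors; every other lane is passed through untouched.
fn vld3_lane_model(mem: &[i16; 3], src: [[i16; 4]; 3], lane: usize) -> [[i16; 4]; 3] {
    let mut out = src;
    for v in 0..3 {
        out[v][lane] = mem[v]; // memory element v lands in vector v, same lane
    }
    out
}

fn main() {
    let mem = [10, 20, 30];
    let src = [[0i16; 4]; 3];
    let out = vld3_lane_model(&mem, src, 2);
    assert_eq!(out[0], [0, 0, 10, 0]);
    assert_eq!(out[1], [0, 0, 20, 0]);
    assert_eq!(out[2], [0, 0, 30, 0]);
    println!("ok");
}
```

The `vld2_lane` and `vld4_lane` variants follow the same pattern with 2 or 4 vectors.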
  • pub unsafe fn vst2_lane_s8(ptr: *mut i8, val: i8x8x2, lane: i) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2_lane_u8(ptr: *mut u8, val: u8x8x2, lane: i) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2_lane_p8(ptr: *mut p8, val: p8x8x2, lane: i) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst3_lane_s8(ptr: *mut i8, val: i8x8x3, lane: i) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3_lane_u8(ptr: *mut u8, val: u8x8x3, lane: i) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3_lane_p8(ptr: *mut p8, val: p8x8x3, lane: i) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst4_lane_s8(ptr: *mut i8, val: i8x8x4, lane: i) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4_lane_u8(ptr: *mut u8, val: u8x8x4, lane: i) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4_lane_p8(ptr: *mut p8, val: p8x8x4, lane: i) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst2_lane_s16(ptr: *mut i16, val: i16x4x2, lane: i) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2q_lane_s16(ptr: *mut i16, val: i16x8x2, lane: i) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2_lane_s32(ptr: *mut i32, val: i32x2x2, lane: i) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2q_lane_s32(ptr: *mut i32, val: i32x4x2, lane: i) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2_lane_u16(ptr: *mut u16, val: u16x4x2, lane: i) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2q_lane_u16(ptr: *mut u16, val: u16x8x2, lane: i) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2_lane_u32(ptr: *mut u32, val: u32x2x2, lane: i) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2q_lane_u32(ptr: *mut u32, val: u32x4x2, lane: i) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2_lane_f16(ptr: *mut f16, val: f16x4x2, lane: i) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2q_lane_f16(ptr: *mut f16, val: f16x8x2, lane: i) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2_lane_f32(ptr: *mut f32, val: f32x2x2, lane: i) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2q_lane_f32(ptr: *mut f32, val: f32x4x2, lane: i) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2_lane_p16(ptr: *mut p16, val: p16x4x2, lane: i) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2q_lane_p16(ptr: *mut p16, val: p16x8x2, lane: i) -> () // 2-element vector store (v7/A32/A64)
  • pub unsafe fn vst2q_lane_s8(ptr: *mut i8, val: i8x16x2, lane: i) -> () // 2-element vector store (A64)
  • pub unsafe fn vst2q_lane_u8(ptr: *mut u8, val: u8x16x2, lane: i) -> () // 2-element vector store (A64)
  • pub unsafe fn vst2q_lane_p8(ptr: *mut p8, val: p8x16x2, lane: i) -> () // 2-element vector store (A64)
  • pub unsafe fn vst2_lane_s64(ptr: *mut i64, val: i64x1x2, lane: i) -> () // 2-element vector store (A64)
  • pub unsafe fn vst2q_lane_s64(ptr: *mut i64, val: i64x2x2, lane: i) -> () // 2-element vector store (A64)
  • pub unsafe fn vst2_lane_u64(ptr: *mut u64, val: u64x1x2, lane: i) -> () // 2-element vector store (A64)
  • pub unsafe fn vst2q_lane_u64(ptr: *mut u64, val: u64x2x2, lane: i) -> () // 2-element vector store (A64)
  • pub unsafe fn vst2_lane_p64(ptr: *mut p64, val: p64x1x2, lane: i) -> () // 2-element vector store (A64)
  • pub unsafe fn vst2q_lane_p64(ptr: *mut p64, val: p64x2x2, lane: i) -> () // 2-element vector store (A64)
  • pub unsafe fn vst2_lane_f64(ptr: *mut f64, val: f64x1x2, lane: i) -> () // 2-element vector store (A64)
  • pub unsafe fn vst2q_lane_f64(ptr: *mut f64, val: f64x2x2, lane: i) -> () // 2-element vector store (A64)
  • pub unsafe fn vst3_lane_s16(ptr: *mut i16, val: i16x4x3, lane: i) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3q_lane_s16(ptr: *mut i16, val: i16x8x3, lane: i) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3_lane_s32(ptr: *mut i32, val: i32x2x3, lane: i) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3q_lane_s32(ptr: *mut i32, val: i32x4x3, lane: i) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3_lane_u16(ptr: *mut u16, val: u16x4x3, lane: i) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3q_lane_u16(ptr: *mut u16, val: u16x8x3, lane: i) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3_lane_u32(ptr: *mut u32, val: u32x2x3, lane: i) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3q_lane_u32(ptr: *mut u32, val: u32x4x3, lane: i) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3_lane_f16(ptr: *mut f16, val: f16x4x3, lane: i) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3q_lane_f16(ptr: *mut f16, val: f16x8x3, lane: i) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3_lane_f32(ptr: *mut f32, val: f32x2x3, lane: i) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3q_lane_f32(ptr: *mut f32, val: f32x4x3, lane: i) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3_lane_p16(ptr: *mut p16, val: p16x4x3, lane: i) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3q_lane_p16(ptr: *mut p16, val: p16x8x3, lane: i) -> () // 3-element vector store (v7/A32/A64)
  • pub unsafe fn vst3q_lane_s8(ptr: *mut i8, val: i8x16x3, lane: i) -> () // 3-element vector store (A64)
  • pub unsafe fn vst3q_lane_u8(ptr: *mut u8, val: u8x16x3, lane: i) -> () // 3-element vector store (A64)
  • pub unsafe fn vst3q_lane_p8(ptr: *mut p8, val: p8x16x3, lane: i) -> () // 3-element vector store (A64)
  • pub unsafe fn vst3_lane_s64(ptr: *mut i64, val: i64x1x3, lane: i) -> () // 3-element vector store (A64)
  • pub unsafe fn vst3q_lane_s64(ptr: *mut i64, val: i64x2x3, lane: i) -> () // 3-element vector store (A64)
  • pub unsafe fn vst3_lane_u64(ptr: *mut u64, val: u64x1x3, lane: i) -> () // 3-element vector store (A64)
  • pub unsafe fn vst3q_lane_u64(ptr: *mut u64, val: u64x2x3, lane: i) -> () // 3-element vector store (A64)
  • pub unsafe fn vst3_lane_p64(ptr: *mut p64, val: p64x1x3, lane: i) -> () // 3-element vector store (A64)
  • pub unsafe fn vst3q_lane_p64(ptr: *mut p64, val: p64x2x3, lane: i) -> () // 3-element vector store (A64)
  • pub unsafe fn vst3_lane_f64(ptr: *mut f64, val: f64x1x3, lane: i) -> () // 3-element vector store (A64)
  • pub unsafe fn vst3q_lane_f64(ptr: *mut f64, val: f64x2x3, lane: i) -> () // 3-element vector store (A64)
  • pub unsafe fn vst4_lane_s16(ptr: *mut i16, val: i16x4x4, lane: i) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4q_lane_s16(ptr: *mut i16, val: i16x8x4, lane: i) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4_lane_s32(ptr: *mut i32, val: i32x2x4, lane: i) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4q_lane_s32(ptr: *mut i32, val: i32x4x4, lane: i) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4_lane_u16(ptr: *mut u16, val: u16x4x4, lane: i) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4q_lane_u16(ptr: *mut u16, val: u16x8x4, lane: i) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4_lane_u32(ptr: *mut u32, val: u32x2x4, lane: i) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4q_lane_u32(ptr: *mut u32, val: u32x4x4, lane: i) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4_lane_f16(ptr: *mut f16, val: f16x4x4, lane: i) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4q_lane_f16(ptr: *mut f16, val: f16x8x4, lane: i) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4_lane_f32(ptr: *mut f32, val: f32x2x4, lane: i) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4q_lane_f32(ptr: *mut f32, val: f32x4x4, lane: i) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4_lane_p16(ptr: *mut p16, val: p16x4x4, lane: i) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4q_lane_p16(ptr: *mut p16, val: p16x8x4, lane: i) -> () // 4-element vector store (v7/A32/A64)
  • pub unsafe fn vst4q_lane_s8(ptr: *mut i8, val: i8x16x4, lane: i) -> () // 4-element vector store (A64)
  • pub unsafe fn vst4q_lane_u8(ptr: *mut u8, val: u8x16x4, lane: i) -> () // 4-element vector store (A64)
  • pub unsafe fn vst4q_lane_p8(ptr: *mut p8, val: p8x16x4, lane: i) -> () // 4-element vector store (A64)
  • pub unsafe fn vst4_lane_s64(ptr: *mut i64, val: i64x1x4, lane: i) -> () // 4-element vector store (A64)
  • pub unsafe fn vst4q_lane_s64(ptr: *mut i64, val: i64x2x4, lane: i) -> () // 4-element vector store (A64)
  • pub unsafe fn vst4_lane_u64(ptr: *mut u64, val: u64x1x4, lane: i) -> () // 4-element vector store (A64)
  • pub unsafe fn vst4q_lane_u64(ptr: *mut u64, val: u64x2x4, lane: i) -> () // 4-element vector store (A64)
  • pub unsafe fn vst4_lane_p64(ptr: *mut p64, val: p64x1x4, lane: i) -> () // 4-element vector store (A64)
  • pub unsafe fn vst4q_lane_p64(ptr: *mut p64, val: p64x2x4, lane: i) -> () // 4-element vector store (A64)
  • pub unsafe fn vst4_lane_f64(ptr: *mut f64, val: f64x1x4, lane: i) -> () // 4-element vector store (A64)
  • pub unsafe fn vst4q_lane_f64(ptr: *mut f64, val: f64x2x4, lane: i) -> () // 4-element vector store (A64)
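The `vstN_lane` stores are the inverse of the lane loads: extract lane `lane` from each of the N vectors and write those N elements to consecutive memory locations. A scalar sketch (the name `vst2_lane_model` is illustrative, not the stdarch API):

```rust
// Scalar model of the vst2_lane family (e.g. vst2_lane_s16): writes lane
// `lane` of each of the 2 vectors to 2 consecutive memory elements.
fn vst2_lane_model(dst: &mut [i16; 2], val: [[i16; 4]; 2], lane: usize) {
    for v in 0..2 {
        dst[v] = val[v][lane]; // vector v supplies memory element v
    }
}

fn main() {
    let mut dst = [0i16; 2];
    vst2_lane_model(&mut dst, [[1, 2, 3, 4], [5, 6, 7, 8]], 1);
    assert_eq!(dst, [2, 6]);
    println!("ok");
}
```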
  • pub unsafe fn vst1_s8_x2(ptr: *mut i8, val: i8x8x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_s8_x2(ptr: *mut i8, val: i8x16x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_s16_x2(ptr: *mut i16, val: i16x4x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_s16_x2(ptr: *mut i16, val: i16x8x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_s32_x2(ptr: *mut i32, val: i32x2x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_s32_x2(ptr: *mut i32, val: i32x4x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_u8_x2(ptr: *mut u8, val: u8x8x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_u8_x2(ptr: *mut u8, val: u8x16x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_u16_x2(ptr: *mut u16, val: u16x4x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_u16_x2(ptr: *mut u16, val: u16x8x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_u32_x2(ptr: *mut u32, val: u32x2x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_u32_x2(ptr: *mut u32, val: u32x4x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_f16_x2(ptr: *mut f16, val: f16x4x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_f16_x2(ptr: *mut f16, val: f16x8x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_f32_x2(ptr: *mut f32, val: f32x2x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_f32_x2(ptr: *mut f32, val: f32x4x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_p8_x2(ptr: *mut p8, val: p8x8x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_p8_x2(ptr: *mut p8, val: p8x16x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_p16_x2(ptr: *mut p16, val: p16x4x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_p16_x2(ptr: *mut p16, val: p16x8x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_s64_x2(ptr: *mut i64, val: i64x1x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_u64_x2(ptr: *mut u64, val: u64x1x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_p64_x2(ptr: *mut p64, val: p64x1x2) -> () // Vector store (A32/A64)
  • pub unsafe fn vst1q_s64_x2(ptr: *mut i64, val: i64x2x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_u64_x2(ptr: *mut u64, val: u64x2x2) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_p64_x2(ptr: *mut p64, val: p64x2x2) -> () // Vector store (A32/A64)
  • pub unsafe fn vst1_f64_x2(ptr: *mut f64, val: f64x1x2) -> () // Vector store (A64)
  • pub unsafe fn vst1q_f64_x2(ptr: *mut f64, val: f64x2x2) -> () // Vector store (A64)
  • pub unsafe fn vst1_s8_x3(ptr: *mut i8, val: i8x8x3) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_s8_x3(ptr: *mut i8, val: i8x16x3) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_s16_x3(ptr: *mut i16, val: i16x4x3) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_s16_x3(ptr: *mut i16, val: i16x8x3) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_s32_x3(ptr: *mut i32, val: i32x2x3) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_s32_x3(ptr: *mut i32, val: i32x4x3) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_u8_x3(ptr: *mut u8, val: u8x8x3) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_u8_x3(ptr: *mut u8, val: u8x16x3) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_u16_x3(ptr: *mut u16, val: u16x4x3) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_u16_x3(ptr: *mut u16, val: u16x8x3) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_u32_x3(ptr: *mut u32, val: u32x2x3) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_u32_x3(ptr: *mut u32, val: u32x4x3) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_f16_x3(ptr: *mut f16, val: f16x4x3) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_f16_x3(ptr: *mut f16, val: f16x8x3) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_f32_x3(ptr: *mut f32, val: f32x2x3) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_f32_x3(ptr: *mut f32, val: f32x4x3) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_p8_x3(ptr: *mut p8, val: p8x8x3) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_p8_x3(ptr: *mut p8, val: p8x16x3) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_p16_x3(ptr: *mut p16, val: p16x4x3) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_p16_x3(ptr: *mut p16, val: p16x8x3) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_s64_x3(ptr: *mut i64, val: i64x1x3) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_u64_x3(ptr: *mut u64, val: u64x1x3) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_p64_x3(ptr: *mut p64, val: p64x1x3) -> () // Vector store (A32/A64)
  • pub unsafe fn vst1q_s64_x3(ptr: *mut i64, val: i64x2x3) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_u64_x3(ptr: *mut u64, val: u64x2x3) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_p64_x3(ptr: *mut p64, val: p64x2x3) -> () // Vector store (A32/A64)
  • pub unsafe fn vst1_f64_x3(ptr: *mut f64, val: f64x1x3) -> () // Vector store (A64)
  • pub unsafe fn vst1q_f64_x3(ptr: *mut f64, val: f64x2x3) -> () // Vector store (A64)
  • pub unsafe fn vst1_s8_x4(ptr: *mut i8, val: i8x8x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_s8_x4(ptr: *mut i8, val: i8x16x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_s16_x4(ptr: *mut i16, val: i16x4x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_s16_x4(ptr: *mut i16, val: i16x8x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_s32_x4(ptr: *mut i32, val: i32x2x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_s32_x4(ptr: *mut i32, val: i32x4x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_u8_x4(ptr: *mut u8, val: u8x8x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_u8_x4(ptr: *mut u8, val: u8x16x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_u16_x4(ptr: *mut u16, val: u16x4x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_u16_x4(ptr: *mut u16, val: u16x8x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_u32_x4(ptr: *mut u32, val: u32x2x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_u32_x4(ptr: *mut u32, val: u32x4x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_f16_x4(ptr: *mut f16, val: f16x4x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_f16_x4(ptr: *mut f16, val: f16x8x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_f32_x4(ptr: *mut f32, val: f32x2x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_f32_x4(ptr: *mut f32, val: f32x4x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_p8_x4(ptr: *mut p8, val: p8x8x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_p8_x4(ptr: *mut p8, val: p8x16x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_p16_x4(ptr: *mut p16, val: p16x4x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_p16_x4(ptr: *mut p16, val: p16x8x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_s64_x4(ptr: *mut i64, val: i64x1x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_u64_x4(ptr: *mut u64, val: u64x1x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1_p64_x4(ptr: *mut p64, val: p64x1x4) -> () // Vector store (A32/A64)
  • pub unsafe fn vst1q_s64_x4(ptr: *mut i64, val: i64x2x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_u64_x4(ptr: *mut u64, val: u64x2x4) -> () // Vector store (v7/A32/A64)
  • pub unsafe fn vst1q_p64_x4(ptr: *mut p64, val: p64x2x4) -> () // Vector store (A32/A64)
  • pub unsafe fn vst1_f64_x4(ptr: *mut f64, val: f64x1x4) -> () // Vector store (A64)
  • pub unsafe fn vst1q_f64_x4(ptr: *mut f64, val: f64x2x4) -> () // Vector store (A64)
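Unlike the interleaving `vstN` stores, the `vst1_*_xN` family simply writes N whole vectors to consecutive memory, back to back. A scalar sketch of the x2 case (the name `vst1_s16_x2_model` is illustrative, not the stdarch API):

```rust
// Scalar model of vst1_s16_x2: stores 2 four-lane vectors contiguously,
// vector 0 first, then vector 1 (no interleaving).
fn vst1_s16_x2_model(dst: &mut [i16; 8], val: [[i16; 4]; 2]) {
    for (i, v) in val.iter().enumerate() {
        dst[i * 4..(i + 1) * 4].copy_from_slice(v);
    }
}

fn main() {
    let mut dst = [0i16; 8];
    vst1_s16_x2_model(&mut dst, [[1, 2, 3, 4], [5, 6, 7, 8]]);
    assert_eq!(dst, [1, 2, 3, 4, 5, 6, 7, 8]);
    println!("ok");
}
```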
  • pub unsafe fn vld1_s8_x2(ptr: *const i8) -> i8x8x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_s8_x2(ptr: *const i8) -> i8x16x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_s16_x2(ptr: *const i16) -> i16x4x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_s16_x2(ptr: *const i16) -> i16x8x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_s32_x2(ptr: *const i32) -> i32x2x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_s32_x2(ptr: *const i32) -> i32x4x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_u8_x2(ptr: *const u8) -> u8x8x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_u8_x2(ptr: *const u8) -> u8x16x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_u16_x2(ptr: *const u16) -> u16x4x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_u16_x2(ptr: *const u16) -> u16x8x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_u32_x2(ptr: *const u32) -> u32x2x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_u32_x2(ptr: *const u32) -> u32x4x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_f16_x2(ptr: *const f16) -> f16x4x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_f16_x2(ptr: *const f16) -> f16x8x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_f32_x2(ptr: *const f32) -> f32x2x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_f32_x2(ptr: *const f32) -> f32x4x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_p8_x2(ptr: *const p8) -> p8x8x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_p8_x2(ptr: *const p8) -> p8x16x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_p16_x2(ptr: *const p16) -> p16x4x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_p16_x2(ptr: *const p16) -> p16x8x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_s64_x2(ptr: *const i64) -> i64x1x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_u64_x2(ptr: *const u64) -> u64x1x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_p64_x2(ptr: *const p64) -> p64x1x2 // Vector load (A32/A64)
  • pub unsafe fn vld1q_s64_x2(ptr: *const i64) -> i64x2x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_u64_x2(ptr: *const u64) -> u64x2x2 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_p64_x2(ptr: *const p64) -> p64x2x2 // Vector load (A32/A64)
  • pub unsafe fn vld1_f64_x2(ptr: *const f64) -> f64x1x2 // Vector load (A64)
  • pub unsafe fn vld1q_f64_x2(ptr: *const f64) -> f64x2x2 // Vector load (A64)
  • pub unsafe fn vld1_s8_x3(ptr: *const i8) -> i8x8x3 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_s8_x3(ptr: *const i8) -> i8x16x3 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_s16_x3(ptr: *const i16) -> i16x4x3 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_s16_x3(ptr: *const i16) -> i16x8x3 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_s32_x3(ptr: *const i32) -> i32x2x3 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_s32_x3(ptr: *const i32) -> i32x4x3 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_u8_x3(ptr: *const u8) -> u8x8x3 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_u8_x3(ptr: *const u8) -> u8x16x3 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_u16_x3(ptr: *const u16) -> u16x4x3 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_u16_x3(ptr: *const u16) -> u16x8x3 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_u32_x3(ptr: *const u32) -> u32x2x3 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_u32_x3(ptr: *const u32) -> u32x4x3 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_f16_x3(ptr: *const f16) -> f16x4x3 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_f16_x3(ptr: *const f16) -> f16x8x3 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_f32_x3(ptr: *const f32) -> f32x2x3 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_f32_x3(ptr: *const f32) -> f32x4x3 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_p8_x3(ptr: *const p8) -> p8x8x3 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_p8_x3(ptr: *const p8) -> p8x16x3 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_p16_x3(ptr: *const p16) -> p16x4x3 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_p16_x3(ptr: *const p16) -> p16x8x3 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_s64_x3(ptr: *const i64) -> i64x1x3 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_u64_x3(ptr: *const u64) -> u64x1x3 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_p64_x3(ptr: *const p64) -> p64x1x3 // Vector load (A32/A64)
  • pub unsafe fn vld1q_s64_x3(ptr: *const i64) -> i64x2x3 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_u64_x3(ptr: *const u64) -> u64x2x3 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_p64_x3(ptr: *const p64) -> p64x2x3 // Vector load (A32/A64)
  • pub unsafe fn vld1_f64_x3(ptr: *const f64) -> f64x1x3 // Vector load (A64)
  • pub unsafe fn vld1q_f64_x3(ptr: *const f64) -> f64x2x3 // Vector load (A64)
  • pub unsafe fn vld1_s8_x4(ptr: *const i8) -> i8x8x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_s8_x4(ptr: *const i8) -> i8x16x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_s16_x4(ptr: *const i16) -> i16x4x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_s16_x4(ptr: *const i16) -> i16x8x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_s32_x4(ptr: *const i32) -> i32x2x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_s32_x4(ptr: *const i32) -> i32x4x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_u8_x4(ptr: *const u8) -> u8x8x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_u8_x4(ptr: *const u8) -> u8x16x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_u16_x4(ptr: *const u16) -> u16x4x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_u16_x4(ptr: *const u16) -> u16x8x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_u32_x4(ptr: *const u32) -> u32x2x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_u32_x4(ptr: *const u32) -> u32x4x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_f16_x4(ptr: *const f16) -> f16x4x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_f16_x4(ptr: *const f16) -> f16x8x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_f32_x4(ptr: *const f32) -> f32x2x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_f32_x4(ptr: *const f32) -> f32x4x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_p8_x4(ptr: *const p8) -> p8x8x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_p8_x4(ptr: *const p8) -> p8x16x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_p16_x4(ptr: *const p16) -> p16x4x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_p16_x4(ptr: *const p16) -> p16x8x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_s64_x4(ptr: *const i64) -> i64x1x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_u64_x4(ptr: *const u64) -> u64x1x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1_p64_x4(ptr: *const p64) -> p64x1x4 // Vector load (A32/A64)
  • pub unsafe fn vld1q_s64_x4(ptr: *const i64) -> i64x2x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_u64_x4(ptr: *const u64) -> u64x2x4 // Vector load (v7/A32/A64)
  • pub unsafe fn vld1q_p64_x4(ptr: *const p64) -> p64x2x4 // Vector load (A32/A64)
  • pub unsafe fn vld1_f64_x4(ptr: *const f64) -> f64x1x4 // Vector load (A64)
  • pub unsafe fn vld1q_f64_x4(ptr: *const f64) -> f64x2x4 // Vector load (A64)
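The `vld1_*_xN` loads mirror the `vst1_*_xN` stores: read N whole vectors from consecutive memory with no deinterleaving. A scalar sketch (the name `vld1_s16_x2_model` is illustrative, not the stdarch API):

```rust
// Scalar model of vld1_s16_x2: loads 8 consecutive i16 elements into
// 2 four-lane vectors, in order.
fn vld1_s16_x2_model(src: &[i16; 8]) -> [[i16; 4]; 2] {
    let mut out = [[0i16; 4]; 2];
    for v in 0..2 {
        out[v].copy_from_slice(&src[v * 4..v * 4 + 4]);
    }
    out
}

fn main() {
    let mem = [1, 2, 3, 4, 5, 6, 7, 8];
    assert_eq!(vld1_s16_x2_model(&mem), [[1, 2, 3, 4], [5, 6, 7, 8]]);
    println!("ok");
}
```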
  • pub unsafe fn vpadd_s8(a: i8x8, b: i8x8) -> i8x8 // Pairwise add (v7/A32/A64)
  • pub unsafe fn vpadd_s16(a: i16x4, b: i16x4) -> i16x4 // Pairwise add (v7/A32/A64)
  • pub unsafe fn vpadd_s32(a: i32x2, b: i32x2) -> i32x2 // Pairwise add (v7/A32/A64)
  • pub unsafe fn vpadd_u8(a: u8x8, b: u8x8) -> u8x8 // Pairwise add (v7/A32/A64)
  • pub unsafe fn vpadd_u16(a: u16x4, b: u16x4) -> u16x4 // Pairwise add (v7/A32/A64)
  • pub unsafe fn vpadd_u32(a: u32x2, b: u32x2) -> u32x2 // Pairwise add (v7/A32/A64)
  • pub unsafe fn vpadd_f32(a: f32x2, b: f32x2) -> f32x2 // Pairwise add (v7/A32/A64)
  • pub unsafe fn vpaddq_s8(a: i8x16, b: i8x16) -> i8x16 // Pairwise add (A64)
  • pub unsafe fn vpaddq_s16(a: i16x8, b: i16x8) -> i16x8 // Pairwise add (A64)
  • pub unsafe fn vpaddq_s32(a: i32x4, b: i32x4) -> i32x4 // Pairwise add (A64)
  • pub unsafe fn vpaddq_s64(a: i64x2, b: i64x2) -> i64x2 // Pairwise add (A64)
  • pub unsafe fn vpaddq_u8(a: u8x16, b: u8x16) -> u8x16 // Pairwise add (A64)
  • pub unsafe fn vpaddq_u16(a: u16x8, b: u16x8) -> u16x8 // Pairwise add (A64)
  • pub unsafe fn vpaddq_u32(a: u32x4, b: u32x4) -> u32x4 // Pairwise add (A64)
  • pub unsafe fn vpaddq_u64(a: u64x2, b: u64x2) -> u64x2 // Pairwise add (A64)
  • pub unsafe fn vpaddq_f32(a: f32x4, b: f32x4) -> f32x4 // Pairwise add (A64)
  • pub unsafe fn vpaddq_f64(a: f64x2, b: f64x2) -> f64x2 // Pairwise add (A64)
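`vpadd` sums adjacent element pairs: the sums of `a`'s pairs fill the low half of the result and the sums of `b`'s pairs fill the high half. A scalar sketch (the name `vpadd_s16_model` is illustrative, not the stdarch API):

```rust
// Scalar model of vpadd_s16: pairwise add across the concatenation of
// a and b, with wrapping two's-complement arithmetic as on hardware.
fn vpadd_s16_model(a: [i16; 4], b: [i16; 4]) -> [i16; 4] {
    [
        a[0].wrapping_add(a[1]), // pairs from a -> low half
        a[2].wrapping_add(a[3]),
        b[0].wrapping_add(b[1]), // pairs from b -> high half
        b[2].wrapping_add(b[3]),
    ]
}

fn main() {
    assert_eq!(vpadd_s16_model([1, 2, 3, 4], [5, 6, 7, 8]), [3, 7, 11, 15]);
    println!("ok");
}
```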
  • pub unsafe fn vpaddl_s8(a: i8x8) -> i16x4 // Long pairwise add (v7/A32/A64)
  • pub unsafe fn vpaddlq_s8(a: i8x16) -> i16x8 // Long pairwise add (v7/A32/A64)
  • pub unsafe fn vpaddl_s16(a: i16x4) -> i32x2 // Long pairwise add (v7/A32/A64)
  • pub unsafe fn vpaddlq_s16(a: i16x8) -> i32x4 // Long pairwise add (v7/A32/A64)
  • pub unsafe fn vpaddl_s32(a: i32x2) -> i64x1 // Long pairwise add (v7/A32/A64)
  • pub unsafe fn vpaddlq_s32(a: i32x4) -> i64x2 // Long pairwise add (v7/A32/A64)
  • pub unsafe fn vpaddl_u8(a: u8x8) -> u16x4 // Long pairwise add (v7/A32/A64)
  • pub unsafe fn vpaddlq_u8(a: u8x16) -> u16x8 // Long pairwise add (v7/A32/A64)
  • pub unsafe fn vpaddl_u16(a: u16x4) -> u32x2 // Long pairwise add (v7/A32/A64)
  • pub unsafe fn vpaddlq_u16(a: u16x8) -> u32x4 // Long pairwise add (v7/A32/A64)
  • pub unsafe fn vpaddl_u32(a: u32x2) -> u64x1 // Long pairwise add (v7/A32/A64)
  • pub unsafe fn vpaddlq_u32(a: u32x4) -> u64x2 // Long pairwise add (v7/A32/A64)
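`vpaddl` also sums adjacent pairs, but widens each element first, so the result has half as many lanes at twice the width and pair sums cannot overflow. A scalar sketch (the name `vpaddl_s8_model` is illustrative, not the stdarch API):

```rust
// Scalar model of vpaddl_s8: widen each i8 to i16, then sum adjacent
// pairs, producing 4 i16 lanes from 8 i8 lanes.
fn vpaddl_s8_model(a: [i8; 8]) -> [i16; 4] {
    let mut out = [0i16; 4];
    for i in 0..4 {
        out[i] = a[2 * i] as i16 + a[2 * i + 1] as i16;
    }
    out
}

fn main() {
    assert_eq!(vpaddl_s8_model([1, 2, 3, 4, 5, 6, 7, 8]), [3, 7, 11, 15]);
    // Widening keeps 127 + 127 representable, unlike an i8 pairwise add.
    assert_eq!(vpaddl_s8_model([127, 127, 0, 0, 0, 0, 0, 0])[0], 254);
    println!("ok");
}
```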
  • pub unsafe fn vpadal_s8(a: i16x4, b: i8x8) -> i16x4 // Long pairwise add and accumulate (v7/A32/A64)
  • pub unsafe fn vpadalq_s8(a: i16x8, b: i8x16) -> i16x8 // Long pairwise add and accumulate (v7/A32/A64)
  • pub unsafe fn vpadal_s16(a: i32x2, b: i16x4) -> i32x2 // Long pairwise add and accumulate (v7/A32/A64)
  • pub unsafe fn vpadalq_s16(a: i32x4, b: i16x8) -> i32x4 // Long pairwise add and accumulate (v7/A32/A64)
  • pub unsafe fn vpadal_s32(a: i64x1, b: i32x2) -> i64x1 // Long pairwise add and accumulate (v7/A32/A64)
  • pub unsafe fn vpadalq_s32(a: i64x2, b: i32x4) -> i64x2 // Long pairwise add and accumulate (v7/A32/A64)
  • pub unsafe fn vpadal_u8(a: u16x4, b: u8x8) -> u16x4 // Long pairwise add and accumulate (v7/A32/A64)
  • pub unsafe fn vpadalq_u8(a: u16x8, b: u8x16) -> u16x8 // Long pairwise add and accumulate (v7/A32/A64)
  • pub unsafe fn vpadal_u16(a: u32x2, b: u16x4) -> u32x2 // Long pairwise add and accumulate (v7/A32/A64)
  • pub unsafe fn vpadalq_u16(a: u32x4, b: u16x8) -> u32x4 // Long pairwise add and accumulate (v7/A32/A64)
  • pub unsafe fn vpadal_u32(a: u64x1, b: u32x2) -> u64x1 // Long pairwise add and accumulate (v7/A32/A64)
  • pub unsafe fn vpadalq_u32(a: u64x2, b: u32x4) -> u64x2 // Long pairwise add and accumulate (v7/A32/A64)
  • pub unsafe fn vpmax_s8(a: i8x8, b: i8x8) -> i8x8 // Folding maximum of adjacent pairs (v7/A32/A64)
  • pub unsafe fn vpmax_s16(a: i16x4, b: i16x4) -> i16x4 // Folding maximum of adjacent pairs (v7/A32/A64)
  • pub unsafe fn vpmax_s32(a: i32x2, b: i32x2) -> i32x2 // Folding maximum of adjacent pairs (v7/A32/A64)
  • pub unsafe fn vpmax_u8(a: u8x8, b: u8x8) -> u8x8 // Folding maximum of adjacent pairs (v7/A32/A64)
  • pub unsafe fn vpmax_u16(a: u16x4, b: u16x4) -> u16x4 // Folding maximum of adjacent pairs (v7/A32/A64)
  • pub unsafe fn vpmax_u32(a: u32x2, b: u32x2) -> u32x2 // Folding maximum of adjacent pairs (v7/A32/A64)
  • pub unsafe fn vpmax_f32(a: f32x2, b: f32x2) -> f32x2 // Folding maximum of adjacent pairs (v7/A32/A64)
  • pub unsafe fn vpmaxq_s8(a: i8x16, b: i8x16) -> i8x16 // Folding maximum of adjacent pairs (A64)
  • pub unsafe fn vpmaxq_s16(a: i16x8, b: i16x8) -> i16x8 // Folding maximum of adjacent pairs (A64)
  • pub unsafe fn vpmaxq_s32(a: i32x4, b: i32x4) -> i32x4 // Folding maximum of adjacent pairs (A64)
  • pub unsafe fn vpmaxq_u8(a: u8x16, b: u8x16) -> u8x16 // Folding maximum of adjacent pairs (A64)
  • pub unsafe fn vpmaxq_u16(a: u16x8, b: u16x8) -> u16x8 // Folding maximum of adjacent pairs (A64)
  • pub unsafe fn vpmaxq_u32(a: u32x4, b: u32x4) -> u32x4 // Folding maximum of adjacent pairs (A64)
  • pub unsafe fn vpmaxq_f32(a: f32x4, b: f32x4) -> f32x4 // Folding maximum of adjacent pairs (A64)
  • pub unsafe fn vpmaxq_f64(a: f64x2, b: f64x2) -> f64x2 // Folding maximum of adjacent pairs (A64)
  • pub unsafe fn vpmin_s8(a: i8x8, b: i8x8) -> i8x8 // Folding minimum of adjacent pairs (v7/A32/A64)
  • pub unsafe fn vpmin_s16(a: i16x4, b: i16x4) -> i16x4 // Folding minimum of adjacent pairs (v7/A32/A64)
  • pub unsafe fn vpmin_s32(a: i32x2, b: i32x2) -> i32x2 // Folding minimum of adjacent pairs (v7/A32/A64)
  • pub unsafe fn vpmin_u8(a: u8x8, b: u8x8) -> u8x8 // Folding minimum of adjacent pairs (v7/A32/A64)
  • pub unsafe fn vpmin_u16(a: u16x4, b: u16x4) -> u16x4 // Folding minimum of adjacent pairs (v7/A32/A64)
  • pub unsafe fn vpmin_u32(a: u32x2, b: u32x2) -> u32x2 // Folding minimum of adjacent pairs (v7/A32/A64)
  • pub unsafe fn vpmin_f32(a: f32x2, b: f32x2) -> f32x2 // Folding minimum of adjacent pairs (v7/A32/A64)
  • pub unsafe fn vpminq_s8(a: i8x16, b: i8x16) -> i8x16 // Folding minimum of adjacent pairs (A64)
  • pub unsafe fn vpminq_s16(a: i16x8, b: i16x8) -> i16x8 // Folding minimum of adjacent pairs (A64)
  • pub unsafe fn vpminq_s32(a: i32x4, b: i32x4) -> i32x4 // Folding minimum of adjacent pairs (A64)
  • pub unsafe fn vpminq_u8(a: u8x16, b: u8x16) -> u8x16 // Folding minimum of adjacent pairs (A64)
  • pub unsafe fn vpminq_u16(a: u16x8, b: u16x8) -> u16x8 // Folding minimum of adjacent pairs (A64)
  • pub unsafe fn vpminq_u32(a: u32x4, b: u32x4) -> u32x4 // Folding minimum of adjacent pairs (A64)
  • pub unsafe fn vpminq_f32(a: f32x4, b: f32x4) -> f32x4 // Folding minimum of adjacent pairs (A64)
  • pub unsafe fn vpminq_f64(a: f64x2, b: f64x2) -> f64x2 // Folding minimum of adjacent pairs (A64)
  • pub unsafe fn vpmaxnm_f32(a: f32x2, b: f32x2) -> f32x2 // Folding maximum of adjacent pairs (A64)
  • pub unsafe fn vpmaxnmq_f32(a: f32x4, b: f32x4) -> f32x4 // Folding maximum of adjacent pairs (A64)
  • pub unsafe fn vpmaxnmq_f64(a: f64x2, b: f64x2) -> f64x2 // Folding maximum of adjacent pairs (A64)
  • pub unsafe fn vpminnm_f32(a: f32x2, b: f32x2) -> f32x2 // Folding minimum of adjacent pairs (A64)
  • pub unsafe fn vpminnmq_f32(a: f32x4, b: f32x4) -> f32x4 // Folding minimum of adjacent pairs (A64)
  • pub unsafe fn vpminnmq_f64(a: f64x2, b: f64x2) -> f64x2 // Folding minimum of adjacent pairs (A64)
  • pub unsafe fn vpaddd_s64(a: i64x2) -> i64 // Pairwise add (A64)
  • pub unsafe fn vpaddd_u64(a: u64x2) -> u64 // Pairwise add (A64)
  • pub unsafe fn vpadds_f32(a: f32x2) -> f32 // Pairwise add (A64)
  • pub unsafe fn vpaddd_f64(a: f64x2) -> f64 // Pairwise add (A64)
  • pub unsafe fn vpmaxs_f32(a: f32x2) -> f32 // Folding maximum of adjacent pairs (A64)
  • pub unsafe fn vpmaxqd_f64(a: f64x2) -> f64 // Folding maximum of adjacent pairs (A64)
  • pub unsafe fn vpmins_f32(a: f32x2) -> f32 // Folding minimum of adjacent pairs (A64)
  • pub unsafe fn vpminqd_f64(a: f64x2) -> f64 // Folding minimum of adjacent pairs (A64)
  • pub unsafe fn vpmaxnms_f32(a: f32x2) -> f32 // Folding maximum of adjacent pairs (A64)
  • pub unsafe fn vpmaxnmqd_f64(a: f64x2) -> f64 // Folding maximum of adjacent pairs (A64)
  • pub unsafe fn vpminnms_f32(a: f32x2) -> f32 // Folding minimum of adjacent pairs (A64)
  • pub unsafe fn vpminnmqd_f64(a: f64x2) -> f64 // Folding minimum of adjacent pairs (A64)
  • pub unsafe fn vaddv_s8(a: i8x8) -> i8 // Add across vector (A64)
  • pub unsafe fn vaddvq_s8(a: i8x16) -> i8 // Add across vector (A64)
  • pub unsafe fn vaddv_s16(a: i16x4) -> i16 // Add across vector (A64)
  • pub unsafe fn vaddvq_s16(a: i16x8) -> i16 // Add across vector (A64)
  • pub unsafe fn vaddv_s32(a: i32x2) -> i32 // Add across vector (A64)
  • pub unsafe fn vaddvq_s32(a: i32x4) -> i32 // Add across vector (A64)
  • pub unsafe fn vaddvq_s64(a: i64x2) -> i64 // Add across vector (A64)
  • pub unsafe fn vaddv_u8(a: u8x8) -> u8 // Add across vector (A64)
  • pub unsafe fn vaddvq_u8(a: u8x16) -> u8 // Add across vector (A64)
  • pub unsafe fn vaddv_u16(a: u16x4) -> u16 // Add across vector (A64)
  • pub unsafe fn vaddvq_u16(a: u16x8) -> u16 // Add across vector (A64)
  • pub unsafe fn vaddv_u32(a: u32x2) -> u32 // Add across vector (A64)
  • pub unsafe fn vaddvq_u32(a: u32x4) -> u32 // Add across vector (A64)
  • pub unsafe fn vaddvq_u64(a: u64x2) -> u64 // Add across vector (A64)
  • pub unsafe fn vaddv_f32(a: f32x2) -> f32 // Add across vector (A64)
  • pub unsafe fn vaddvq_f32(a: f32x4) -> f32 // Add across vector (A64)
  • pub unsafe fn vaddvq_f64(a: f64x2) -> f64 // Add across vector (A64)
  • pub unsafe fn vaddlv_s8(a: i8x8) -> i16 // Long add across vector (A64)
  • pub unsafe fn vaddlvq_s8(a: i8x16) -> i16 // Long add across vector (A64)
  • pub unsafe fn vaddlv_s16(a: i16x4) -> i32 // Long add across vector (A64)
  • pub unsafe fn vaddlvq_s16(a: i16x8) -> i32 // Long add across vector (A64)
  • pub unsafe fn vaddlv_s32(a: i32x2) -> i64 // Long add across vector (A64)
  • pub unsafe fn vaddlvq_s32(a: i32x4) -> i64 // Long add across vector (A64)
  • pub unsafe fn vaddlv_u8(a: u8x8) -> u16 // Long add across vector (A64)
  • pub unsafe fn vaddlvq_u8(a: u8x16) -> u16 // Long add across vector (A64)
  • pub unsafe fn vaddlv_u16(a: u16x4) -> u32 // Long add across vector (A64)
  • pub unsafe fn vaddlvq_u16(a: u16x8) -> u32 // Long add across vector (A64)
  • pub unsafe fn vaddlv_u32(a: u32x2) -> u64 // Long add across vector (A64)
  • pub unsafe fn vaddlvq_u32(a: u32x4) -> u64 // Long add across vector (A64)
  • pub unsafe fn vmaxv_s8(a: i8x8) -> i8 // Maximum across vector (A64)
  • pub unsafe fn vmaxvq_s8(a: i8x16) -> i8 // Maximum across vector (A64)
  • pub unsafe fn vmaxv_s16(a: i16x4) -> i16 // Maximum across vector (A64)
  • pub unsafe fn vmaxvq_s16(a: i16x8) -> i16 // Maximum across vector (A64)
  • pub unsafe fn vmaxv_s32(a: i32x2) -> i32 // Maximum across vector (A64)
  • pub unsafe fn vmaxvq_s32(a: i32x4) -> i32 // Maximum across vector (A64)
  • pub unsafe fn vmaxv_u8(a: u8x8) -> u8 // Maximum across vector (A64)
  • pub unsafe fn vmaxvq_u8(a: u8x16) -> u8 // Maximum across vector (A64)
  • pub unsafe fn vmaxv_u16(a: u16x4) -> u16 // Maximum across vector (A64)
  • pub unsafe fn vmaxvq_u16(a: u16x8) -> u16 // Maximum across vector (A64)
  • pub unsafe fn vmaxv_u32(a: u32x2) -> u32 // Maximum across vector (A64)
  • pub unsafe fn vmaxvq_u32(a: u32x4) -> u32 // Maximum across vector (A64)
  • pub unsafe fn vmaxv_f32(a: f32x2) -> f32 // Maximum across vector (A64)
  • pub unsafe fn vmaxvq_f32(a: f32x4) -> f32 // Maximum across vector (A64)
  • pub unsafe fn vmaxvq_f64(a: f64x2) -> f64 // Maximum across vector (A64)
  • pub unsafe fn vminv_s8(a: i8x8) -> i8 // Minimum across vector (A64)
  • pub unsafe fn vminvq_s8(a: i8x16) -> i8 // Minimum across vector (A64)
  • pub unsafe fn vminv_s16(a: i16x4) -> i16 // Minimum across vector (A64)
  • pub unsafe fn vminvq_s16(a: i16x8) -> i16 // Minimum across vector (A64)
  • pub unsafe fn vminv_s32(a: i32x2) -> i32 // Minimum across vector (A64)
  • pub unsafe fn vminvq_s32(a: i32x4) -> i32 // Minimum across vector (A64)
  • pub unsafe fn vminv_u8(a: u8x8) -> u8 // Minimum across vector (A64)
  • pub unsafe fn vminvq_u8(a: u8x16) -> u8 // Minimum across vector (A64)
  • pub unsafe fn vminv_u16(a: u16x4) -> u16 // Minimum across vector (A64)
  • pub unsafe fn vminvq_u16(a: u16x8) -> u16 // Minimum across vector (A64)
  • pub unsafe fn vminv_u32(a: u32x2) -> u32 // Minimum across vector (A64)
  • pub unsafe fn vminvq_u32(a: u32x4) -> u32 // Minimum across vector (A64)
  • pub unsafe fn vminv_f32(a: f32x2) -> f32 // Minimum across vector (A64)
  • pub unsafe fn vminvq_f32(a: f32x4) -> f32 // Minimum across vector (A64)
  • pub unsafe fn vminvq_f64(a: f64x2) -> f64 // Minimum across vector (A64)
  • pub unsafe fn vmaxnmv_f32(a: f32x2) -> f32 // Maximum across vector (A64)
  • pub unsafe fn vmaxnmvq_f32(a: f32x4) -> f32 // Maximum across vector (A64)
  • pub unsafe fn vmaxnmvq_f64(a: f64x2) -> f64 // Maximum across vector (A64)
  • pub unsafe fn vminnmv_f32(a: f32x2) -> f32 // Minimum across vector (A64)
  • pub unsafe fn vminnmvq_f32(a: f32x4) -> f32 // Minimum across vector (A64)
  • pub unsafe fn vminnmvq_f64(a: f64x2) -> f64 // Minimum across vector (A64)
  • pub unsafe fn vext_s8(a: i8x8, b: i8x8, n: i) -> i8x8 // Vector extract (v7/A32/A64)
  • pub unsafe fn vextq_s8(a: i8x16, b: i8x16, n: i) -> i8x16 // Vector extract (v7/A32/A64)
  • pub unsafe fn vext_s16(a: i16x4, b: i16x4, n: i) -> i16x4 // Vector extract (v7/A32/A64)
  • pub unsafe fn vextq_s16(a: i16x8, b: i16x8, n: i) -> i16x8 // Vector extract (v7/A32/A64)
  • pub unsafe fn vext_s32(a: i32x2, b: i32x2, n: i) -> i32x2 // Vector extract (v7/A32/A64)
  • pub unsafe fn vextq_s32(a: i32x4, b: i32x4, n: i) -> i32x4 // Vector extract (v7/A32/A64)
  • pub unsafe fn vext_s64(a: i64x1, b: i64x1, n: i) -> i64x1 // Vector extract (v7/A32/A64)
  • pub unsafe fn vextq_s64(a: i64x2, b: i64x2, n: i) -> i64x2 // Vector extract (v7/A32/A64)
  • pub unsafe fn vext_u8(a: u8x8, b: u8x8, n: i) -> u8x8 // Vector extract (v7/A32/A64)
  • pub unsafe fn vextq_u8(a: u8x16, b: u8x16, n: i) -> u8x16 // Vector extract (v7/A32/A64)
  • pub unsafe fn vext_u16(a: u16x4, b: u16x4, n: i) -> u16x4 // Vector extract (v7/A32/A64)
  • pub unsafe fn vextq_u16(a: u16x8, b: u16x8, n: i) -> u16x8 // Vector extract (v7/A32/A64)
  • pub unsafe fn vext_u32(a: u32x2, b: u32x2, n: i) -> u32x2 // Vector extract (v7/A32/A64)
  • pub unsafe fn vextq_u32(a: u32x4, b: u32x4, n: i) -> u32x4 // Vector extract (v7/A32/A64)
  • pub unsafe fn vext_u64(a: u64x1, b: u64x1, n: i) -> u64x1 // Vector extract (v7/A32/A64)
  • pub unsafe fn vextq_u64(a: u64x2, b: u64x2, n: i) -> u64x2 // Vector extract (v7/A32/A64)
  • pub unsafe fn vext_p64(a: p64x1, b: p64x1, n: i) -> p64x1 // Vector extract (A32/A64)
  • pub unsafe fn vextq_p64(a: p64x2, b: p64x2, n: i) -> p64x2 // Vector extract (A32/A64)
  • pub unsafe fn vext_f32(a: f32x2, b: f32x2, n: i) -> f32x2 // Vector extract (v7/A32/A64)
  • pub unsafe fn vextq_f32(a: f32x4, b: f32x4, n: i) -> f32x4 // Vector extract (v7/A32/A64)
  • pub unsafe fn vext_f64(a: f64x1, b: f64x1, n: i) -> f64x1 // Vector extract (A64)
  • pub unsafe fn vextq_f64(a: f64x2, b: f64x2, n: i) -> f64x2 // Vector extract (A64)
  • pub unsafe fn vext_p8(a: p8x8, b: p8x8, n: i) -> p8x8 // Vector extract (v7/A32/A64)
  • pub unsafe fn vextq_p8(a: p8x16, b: p8x16, n: i) -> p8x16 // Vector extract (v7/A32/A64)
  • pub unsafe fn vext_p16(a: p16x4, b: p16x4, n: i) -> p16x4 // Vector extract (v7/A32/A64)
  • pub unsafe fn vextq_p16(a: p16x8, b: p16x8, n: i) -> p16x8 // Vector extract (v7/A32/A64)
  • pub unsafe fn vrev64_s8(vec: i8x8) -> i8x8 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev64q_s8(vec: i8x16) -> i8x16 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev64_s16(vec: i16x4) -> i16x4 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev64q_s16(vec: i16x8) -> i16x8 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev64_s32(vec: i32x2) -> i32x2 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev64q_s32(vec: i32x4) -> i32x4 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev64_u8(vec: u8x8) -> u8x8 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev64q_u8(vec: u8x16) -> u8x16 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev64_u16(vec: u16x4) -> u16x4 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev64q_u16(vec: u16x8) -> u16x8 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev64_u32(vec: u32x2) -> u32x2 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev64q_u32(vec: u32x4) -> u32x4 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev64_f32(vec: f32x2) -> f32x2 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev64q_f32(vec: f32x4) -> f32x4 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev64_p8(vec: p8x8) -> p8x8 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev64q_p8(vec: p8x16) -> p8x16 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev64_p16(vec: p16x4) -> p16x4 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev64q_p16(vec: p16x8) -> p16x8 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev32_s8(vec: i8x8) -> i8x8 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev32q_s8(vec: i8x16) -> i8x16 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev32_s16(vec: i16x4) -> i16x4 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev32q_s16(vec: i16x8) -> i16x8 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev32_u8(vec: u8x8) -> u8x8 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev32q_u8(vec: u8x16) -> u8x16 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev32_u16(vec: u16x4) -> u16x4 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev32q_u16(vec: u16x8) -> u16x8 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev32_p8(vec: p8x8) -> p8x8 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev32q_p8(vec: p8x16) -> p8x16 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev32_p16(vec: p16x4) -> p16x4 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev32q_p16(vec: p16x8) -> p16x8 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev16_s8(vec: i8x8) -> i8x8 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev16q_s8(vec: i8x16) -> i8x16 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev16_u8(vec: u8x8) -> u8x8 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev16q_u8(vec: u8x16) -> u8x16 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev16_p8(vec: p8x8) -> p8x8 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev16q_p8(vec: p8x16) -> p8x16 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vzip1_s8(a: i8x8, b: i8x8) -> i8x8 // Zip vectors (A64)
  • pub unsafe fn vzip1q_s8(a: i8x16, b: i8x16) -> i8x16 // Zip vectors (A64)
  • pub unsafe fn vzip1_s16(a: i16x4, b: i16x4) -> i16x4 // Zip vectors (A64)
  • pub unsafe fn vzip1q_s16(a: i16x8, b: i16x8) -> i16x8 // Zip vectors (A64)
  • pub unsafe fn vzip1_s32(a: i32x2, b: i32x2) -> i32x2 // Zip vectors (A64)
  • pub unsafe fn vzip1q_s32(a: i32x4, b: i32x4) -> i32x4 // Zip vectors (A64)
  • pub unsafe fn vzip1q_s64(a: i64x2, b: i64x2) -> i64x2 // Zip vectors (A64)
  • pub unsafe fn vzip1_u8(a: u8x8, b: u8x8) -> u8x8 // Zip vectors (A64)
  • pub unsafe fn vzip1q_u8(a: u8x16, b: u8x16) -> u8x16 // Zip vectors (A64)
  • pub unsafe fn vzip1_u16(a: u16x4, b: u16x4) -> u16x4 // Zip vectors (A64)
  • pub unsafe fn vzip1q_u16(a: u16x8, b: u16x8) -> u16x8 // Zip vectors (A64)
  • pub unsafe fn vzip1_u32(a: u32x2, b: u32x2) -> u32x2 // Zip vectors (A64)
  • pub unsafe fn vzip1q_u32(a: u32x4, b: u32x4) -> u32x4 // Zip vectors (A64)
  • pub unsafe fn vzip1q_u64(a: u64x2, b: u64x2) -> u64x2 // Zip vectors (A64)
  • pub unsafe fn vzip1q_p64(a: p64x2, b: p64x2) -> p64x2 // Zip vectors (A64)
  • pub unsafe fn vzip1_f32(a: f32x2, b: f32x2) -> f32x2 // Zip vectors (A64)
  • pub unsafe fn vzip1q_f32(a: f32x4, b: f32x4) -> f32x4 // Zip vectors (A64)
  • pub unsafe fn vzip1q_f64(a: f64x2, b: f64x2) -> f64x2 // Zip vectors (A64)
  • pub unsafe fn vzip1_p8(a: p8x8, b: p8x8) -> p8x8 // Zip vectors (A64)
  • pub unsafe fn vzip1q_p8(a: p8x16, b: p8x16) -> p8x16 // Zip vectors (A64)
  • pub unsafe fn vzip1_p16(a: p16x4, b: p16x4) -> p16x4 // Zip vectors (A64)
  • pub unsafe fn vzip1q_p16(a: p16x8, b: p16x8) -> p16x8 // Zip vectors (A64)
  • pub unsafe fn vzip2_s8(a: i8x8, b: i8x8) -> i8x8 // Zip vectors (A64)
  • pub unsafe fn vzip2q_s8(a: i8x16, b: i8x16) -> i8x16 // Zip vectors (A64)
  • pub unsafe fn vzip2_s16(a: i16x4, b: i16x4) -> i16x4 // Zip vectors (A64)
  • pub unsafe fn vzip2q_s16(a: i16x8, b: i16x8) -> i16x8 // Zip vectors (A64)
  • pub unsafe fn vzip2_s32(a: i32x2, b: i32x2) -> i32x2 // Zip vectors (A64)
  • pub unsafe fn vzip2q_s32(a: i32x4, b: i32x4) -> i32x4 // Zip vectors (A64)
  • pub unsafe fn vzip2q_s64(a: i64x2, b: i64x2) -> i64x2 // Zip vectors (A64)
  • pub unsafe fn vzip2_u8(a: u8x8, b: u8x8) -> u8x8 // Zip vectors (A64)
  • pub unsafe fn vzip2q_u8(a: u8x16, b: u8x16) -> u8x16 // Zip vectors (A64)
  • pub unsafe fn vzip2_u16(a: u16x4, b: u16x4) -> u16x4 // Zip vectors (A64)
  • pub unsafe fn vzip2q_u16(a: u16x8, b: u16x8) -> u16x8 // Zip vectors (A64)
  • pub unsafe fn vzip2_u32(a: u32x2, b: u32x2) -> u32x2 // Zip vectors (A64)
  • pub unsafe fn vzip2q_u32(a: u32x4, b: u32x4) -> u32x4 // Zip vectors (A64)
  • pub unsafe fn vzip2q_u64(a: u64x2, b: u64x2) -> u64x2 // Zip vectors (A64)
  • pub unsafe fn vzip2q_p64(a: p64x2, b: p64x2) -> p64x2 // Zip vectors (A64)
  • pub unsafe fn vzip2_f32(a: f32x2, b: f32x2) -> f32x2 // Zip vectors (A64)
  • pub unsafe fn vzip2q_f32(a: f32x4, b: f32x4) -> f32x4 // Zip vectors (A64)
  • pub unsafe fn vzip2q_f64(a: f64x2, b: f64x2) -> f64x2 // Zip vectors (A64)
  • pub unsafe fn vzip2_p8(a: p8x8, b: p8x8) -> p8x8 // Zip vectors (A64)
  • pub unsafe fn vzip2q_p8(a: p8x16, b: p8x16) -> p8x16 // Zip vectors (A64)
  • pub unsafe fn vzip2_p16(a: p16x4, b: p16x4) -> p16x4 // Zip vectors (A64)
  • pub unsafe fn vzip2q_p16(a: p16x8, b: p16x8) -> p16x8 // Zip vectors (A64)
  • pub unsafe fn vuzp1_s8(a: i8x8, b: i8x8) -> i8x8 // Unzip vectors (A64)
  • pub unsafe fn vuzp1q_s8(a: i8x16, b: i8x16) -> i8x16 // Unzip vectors (A64)
  • pub unsafe fn vuzp1_s16(a: i16x4, b: i16x4) -> i16x4 // Unzip vectors (A64)
  • pub unsafe fn vuzp1q_s16(a: i16x8, b: i16x8) -> i16x8 // Unzip vectors (A64)
  • pub unsafe fn vuzp1_s32(a: i32x2, b: i32x2) -> i32x2 // Unzip vectors (A64)
  • pub unsafe fn vuzp1q_s32(a: i32x4, b: i32x4) -> i32x4 // Unzip vectors (A64)
  • pub unsafe fn vuzp1q_s64(a: i64x2, b: i64x2) -> i64x2 // Unzip vectors (A64)
  • pub unsafe fn vuzp1_u8(a: u8x8, b: u8x8) -> u8x8 // Unzip vectors (A64)
  • pub unsafe fn vuzp1q_u8(a: u8x16, b: u8x16) -> u8x16 // Unzip vectors (A64)
  • pub unsafe fn vuzp1_u16(a: u16x4, b: u16x4) -> u16x4 // Unzip vectors (A64)
  • pub unsafe fn vuzp1q_u16(a: u16x8, b: u16x8) -> u16x8 // Unzip vectors (A64)
  • pub unsafe fn vuzp1_u32(a: u32x2, b: u32x2) -> u32x2 // Unzip vectors (A64)
  • pub unsafe fn vuzp1q_u32(a: u32x4, b: u32x4) -> u32x4 // Unzip vectors (A64)
  • pub unsafe fn vuzp1q_u64(a: u64x2, b: u64x2) -> u64x2 // Unzip vectors (A64)
  • pub unsafe fn vuzp1q_p64(a: p64x2, b: p64x2) -> p64x2 // Unzip vectors (A64)
  • pub unsafe fn vuzp1_f32(a: f32x2, b: f32x2) -> f32x2 // Unzip vectors (A64)
  • pub unsafe fn vuzp1q_f32(a: f32x4, b: f32x4) -> f32x4 // Unzip vectors (A64)
  • pub unsafe fn vuzp1q_f64(a: f64x2, b: f64x2) -> f64x2 // Unzip vectors (A64)
  • pub unsafe fn vuzp1_p8(a: p8x8, b: p8x8) -> p8x8 // Unzip vectors (A64)
  • pub unsafe fn vuzp1q_p8(a: p8x16, b: p8x16) -> p8x16 // Unzip vectors (A64)
  • pub unsafe fn vuzp1_p16(a: p16x4, b: p16x4) -> p16x4 // Unzip vectors (A64)
  • pub unsafe fn vuzp1q_p16(a: p16x8, b: p16x8) -> p16x8 // Unzip vectors (A64)
  • pub unsafe fn vuzp2_s8(a: i8x8, b: i8x8) -> i8x8 // Unzip vectors (A64)
  • pub unsafe fn vuzp2q_s8(a: i8x16, b: i8x16) -> i8x16 // Unzip vectors (A64)
  • pub unsafe fn vuzp2_s16(a: i16x4, b: i16x4) -> i16x4 // Unzip vectors (A64)
  • pub unsafe fn vuzp2q_s16(a: i16x8, b: i16x8) -> i16x8 // Unzip vectors (A64)
  • pub unsafe fn vuzp2_s32(a: i32x2, b: i32x2) -> i32x2 // Unzip vectors (A64)
  • pub unsafe fn vuzp2q_s32(a: i32x4, b: i32x4) -> i32x4 // Unzip vectors (A64)
  • pub unsafe fn vuzp2q_s64(a: i64x2, b: i64x2) -> i64x2 // Unzip vectors (A64)
  • pub unsafe fn vuzp2_u8(a: u8x8, b: u8x8) -> u8x8 // Unzip vectors (A64)
  • pub unsafe fn vuzp2q_u8(a: u8x16, b: u8x16) -> u8x16 // Unzip vectors (A64)
  • pub unsafe fn vuzp2_u16(a: u16x4, b: u16x4) -> u16x4 // Unzip vectors (A64)
  • pub unsafe fn vuzp2q_u16(a: u16x8, b: u16x8) -> u16x8 // Unzip vectors (A64)
  • pub unsafe fn vuzp2_u32(a: u32x2, b: u32x2) -> u32x2 // Unzip vectors (A64)
  • pub unsafe fn vuzp2q_u32(a: u32x4, b: u32x4) -> u32x4 // Unzip vectors (A64)
  • pub unsafe fn vuzp2q_u64(a: u64x2, b: u64x2) -> u64x2 // Unzip vectors (A64)
  • pub unsafe fn vuzp2q_p64(a: p64x2, b: p64x2) -> p64x2 // Unzip vectors (A64)
  • pub unsafe fn vuzp2_f32(a: f32x2, b: f32x2) -> f32x2 // Unzip vectors (A64)
  • pub unsafe fn vuzp2q_f32(a: f32x4, b: f32x4) -> f32x4 // Unzip vectors (A64)
  • pub unsafe fn vuzp2q_f64(a: f64x2, b: f64x2) -> f64x2 // Unzip vectors (A64)
  • pub unsafe fn vuzp2_p8(a: p8x8, b: p8x8) -> p8x8 // Unzip vectors (A64)
  • pub unsafe fn vuzp2q_p8(a: p8x16, b: p8x16) -> p8x16 // Unzip vectors (A64)
  • pub unsafe fn vuzp2_p16(a: p16x4, b: p16x4) -> p16x4 // Unzip vectors (A64)
  • pub unsafe fn vuzp2q_p16(a: p16x8, b: p16x8) -> p16x8 // Unzip vectors (A64)
  • pub unsafe fn vtrn1_s8(a: i8x8, b: i8x8) -> i8x8 // Transpose elements (A64)
  • pub unsafe fn vtrn1q_s8(a: i8x16, b: i8x16) -> i8x16 // Transpose elements (A64)
  • pub unsafe fn vtrn1_s16(a: i16x4, b: i16x4) -> i16x4 // Transpose elements (A64)
  • pub unsafe fn vtrn1q_s16(a: i16x8, b: i16x8) -> i16x8 // Transpose elements (A64)
  • pub unsafe fn vtrn1_s32(a: i32x2, b: i32x2) -> i32x2 // Transpose elements (A64)
  • pub unsafe fn vtrn1q_s32(a: i32x4, b: i32x4) -> i32x4 // Transpose elements (A64)
  • pub unsafe fn vtrn1q_s64(a: i64x2, b: i64x2) -> i64x2 // Transpose elements (A64)
  • pub unsafe fn vtrn1_u8(a: u8x8, b: u8x8) -> u8x8 // Transpose elements (A64)
  • pub unsafe fn vtrn1q_u8(a: u8x16, b: u8x16) -> u8x16 // Transpose elements (A64)
  • pub unsafe fn vtrn1_u16(a: u16x4, b: u16x4) -> u16x4 // Transpose elements (A64)
  • pub unsafe fn vtrn1q_u16(a: u16x8, b: u16x8) -> u16x8 // Transpose elements (A64)
  • pub unsafe fn vtrn1_u32(a: u32x2, b: u32x2) -> u32x2 // Transpose elements (A64)
  • pub unsafe fn vtrn1q_u32(a: u32x4, b: u32x4) -> u32x4 // Transpose elements (A64)
  • pub unsafe fn vtrn1q_u64(a: u64x2, b: u64x2) -> u64x2 // Transpose elements (A64)
  • pub unsafe fn vtrn1q_p64(a: p64x2, b: p64x2) -> p64x2 // Transpose elements (A64)
  • pub unsafe fn vtrn1_f32(a: f32x2, b: f32x2) -> f32x2 // Transpose elements (A64)
  • pub unsafe fn vtrn1q_f32(a: f32x4, b: f32x4) -> f32x4 // Transpose elements (A64)
  • pub unsafe fn vtrn1q_f64(a: f64x2, b: f64x2) -> f64x2 // Transpose elements (A64)
  • pub unsafe fn vtrn1_p8(a: p8x8, b: p8x8) -> p8x8 // Transpose elements (A64)
  • pub unsafe fn vtrn1q_p8(a: p8x16, b: p8x16) -> p8x16 // Transpose elements (A64)
  • pub unsafe fn vtrn1_p16(a: p16x4, b: p16x4) -> p16x4 // Transpose elements (A64)
  • pub unsafe fn vtrn1q_p16(a: p16x8, b: p16x8) -> p16x8 // Transpose elements (A64)
  • pub unsafe fn vtrn2_s8(a: i8x8, b: i8x8) -> i8x8 // Transpose elements (A64)
  • pub unsafe fn vtrn2q_s8(a: i8x16, b: i8x16) -> i8x16 // Transpose elements (A64)
  • pub unsafe fn vtrn2_s16(a: i16x4, b: i16x4) -> i16x4 // Transpose elements (A64)
  • pub unsafe fn vtrn2q_s16(a: i16x8, b: i16x8) -> i16x8 // Transpose elements (A64)
  • pub unsafe fn vtrn2_s32(a: i32x2, b: i32x2) -> i32x2 // Transpose elements (A64)
  • pub unsafe fn vtrn2q_s32(a: i32x4, b: i32x4) -> i32x4 // Transpose elements (A64)
  • pub unsafe fn vtrn2q_s64(a: i64x2, b: i64x2) -> i64x2 // Transpose elements (A64)
  • pub unsafe fn vtrn2_u8(a: u8x8, b: u8x8) -> u8x8 // Transpose elements (A64)
  • pub unsafe fn vtrn2q_u8(a: u8x16, b: u8x16) -> u8x16 // Transpose elements (A64)
  • pub unsafe fn vtrn2_u16(a: u16x4, b: u16x4) -> u16x4 // Transpose elements (A64)
  • pub unsafe fn vtrn2q_u16(a: u16x8, b: u16x8) -> u16x8 // Transpose elements (A64)
  • pub unsafe fn vtrn2_u32(a: u32x2, b: u32x2) -> u32x2 // Transpose elements (A64)
  • pub unsafe fn vtrn2q_u32(a: u32x4, b: u32x4) -> u32x4 // Transpose elements (A64)
  • pub unsafe fn vtrn2q_u64(a: u64x2, b: u64x2) -> u64x2 // Transpose elements (A64)
  • pub unsafe fn vtrn2q_p64(a: p64x2, b: p64x2) -> p64x2 // Transpose elements (A64)
  • pub unsafe fn vtrn2_f32(a: f32x2, b: f32x2) -> f32x2 // Transpose elements (A64)
  • pub unsafe fn vtrn2q_f32(a: f32x4, b: f32x4) -> f32x4 // Transpose elements (A64)
  • pub unsafe fn vtrn2q_f64(a: f64x2, b: f64x2) -> f64x2 // Transpose elements (A64)
  • pub unsafe fn vtrn2_p8(a: p8x8, b: p8x8) -> p8x8 // Transpose elements (A64)
  • pub unsafe fn vtrn2q_p8(a: p8x16, b: p8x16) -> p8x16 // Transpose elements (A64)
  • pub unsafe fn vtrn2_p16(a: p16x4, b: p16x4) -> p16x4 // Transpose elements (A64)
  • pub unsafe fn vtrn2q_p16(a: p16x8, b: p16x8) -> p16x8 // Transpose elements (A64)
  • pub unsafe fn vtbl1_s8(a: i8x8, b: i8x8) -> i8x8 // Table look-up (v7/A32/A64)
  • pub unsafe fn vtbl1_u8(a: u8x8, b: u8x8) -> u8x8 // Table look-up (v7/A32/A64)
  • pub unsafe fn vtbl1_p8(a: p8x8, b: u8x8) -> p8x8 // Table look-up (v7/A32/A64)
  • pub unsafe fn vtbx1_s8(a: i8x8, b: i8x8, c: i8x8) -> i8x8 // Extended table look-up (v7/A32/A64)
  • pub unsafe fn vtbx1_u8(a: u8x8, b: u8x8, c: u8x8) -> u8x8 // Extended table look-up (v7/A32/A64)
  • pub unsafe fn vtbx1_p8(a: p8x8, b: p8x8, c: u8x8) -> p8x8 // Extended table look-up (v7/A32/A64)
  • pub unsafe fn vtbl2_s8(a: i8x8x2, b: i8x8) -> i8x8 // Table look-up (v7/A32/A64)
  • pub unsafe fn vtbl2_u8(a: u8x8x2, b: u8x8) -> u8x8 // Table look-up (v7/A32/A64)
  • pub unsafe fn vtbl2_p8(a: p8x8x2, b: u8x8) -> p8x8 // Table look-up (v7/A32/A64)
  • pub unsafe fn vtbl3_s8(a: i8x8x3, b: i8x8) -> i8x8 // Table look-up (v7/A32/A64)
  • pub unsafe fn vtbl3_u8(a: u8x8x3, b: u8x8) -> u8x8 // Table look-up (v7/A32/A64)
  • pub unsafe fn vtbl3_p8(a: p8x8x3, b: u8x8) -> p8x8 // Table look-up (v7/A32/A64)
  • pub unsafe fn vtbl4_s8(a: i8x8x4, b: i8x8) -> i8x8 // Table look-up (v7/A32/A64)
  • pub unsafe fn vtbl4_u8(a: u8x8x4, b: u8x8) -> u8x8 // Table look-up (v7/A32/A64)
  • pub unsafe fn vtbl4_p8(a: p8x8x4, b: u8x8) -> p8x8 // Table look-up (v7/A32/A64)
  • pub unsafe fn vtbx2_s8(a: i8x8, b: i8x8x2, c: i8x8) -> i8x8 // Extended table look-up (v7/A32/A64)
  • pub unsafe fn vtbx2_u8(a: u8x8, b: u8x8x2, c: u8x8) -> u8x8 // Extended table look-up (v7/A32/A64)
  • pub unsafe fn vtbx2_p8(a: p8x8, b: p8x8x2, c: u8x8) -> p8x8 // Extended table look-up (v7/A32/A64)
  • pub unsafe fn vtbx3_s8(a: i8x8, b: i8x8x3, c: i8x8) -> i8x8 // Extended table look-up (v7/A32/A64)
  • pub unsafe fn vtbx3_u8(a: u8x8, b: u8x8x3, c: u8x8) -> u8x8 // Extended table look-up (v7/A32/A64)
  • pub unsafe fn vtbx3_p8(a: p8x8, b: p8x8x3, c: u8x8) -> p8x8 // Extended table look-up (v7/A32/A64)
  • pub unsafe fn vtbx4_s8(a: i8x8, b: i8x8x4, c: i8x8) -> i8x8 // Extended table look-up (v7/A32/A64)
  • pub unsafe fn vtbx4_u8(a: u8x8, b: u8x8x4, c: u8x8) -> u8x8 // Extended table look-up (v7/A32/A64)
  • pub unsafe fn vtbx4_p8(a: p8x8, b: p8x8x4, c: u8x8) -> p8x8 // Extended table look-up (v7/A32/A64)
  • pub unsafe fn vqtbl1_s8(t: i8x16, idx: u8x8) -> i8x8 // Table look-up (A64)
  • pub unsafe fn vqtbl1q_s8(t: i8x16, idx: u8x16) -> i8x16 // Table look-up (A64)
  • pub unsafe fn vqtbl1_u8(t: u8x16, idx: u8x8) -> u8x8 // Table look-up (A64)
  • pub unsafe fn vqtbl1q_u8(t: u8x16, idx: u8x16) -> u8x16 // Table look-up (A64)
  • pub unsafe fn vqtbl1_p8(t: p8x16, idx: u8x8) -> p8x8 // Table look-up (A64)
  • pub unsafe fn vqtbl1q_p8(t: p8x16, idx: u8x16) -> p8x16 // Table look-up (A64)
  • pub unsafe fn vqtbx1_s8(a: i8x8, t: i8x16, idx: u8x8) -> i8x8 // Extended table look-up (A64)
  • pub unsafe fn vqtbx1q_s8(a: i8x16, t: i8x16, idx: u8x16) -> i8x16 // Extended table look-up (A64)
  • pub unsafe fn vqtbx1_u8(a: u8x8, t: u8x16, idx: u8x8) -> u8x8 // Extended table look-up (A64)
  • pub unsafe fn vqtbx1q_u8(a: u8x16, t: u8x16, idx: u8x16) -> u8x16 // Extended table look-up (A64)
  • pub unsafe fn vqtbx1_p8(a: p8x8, t: p8x16, idx: u8x8) -> p8x8 // Extended table look-up (A64)
  • pub unsafe fn vqtbx1q_p8(a: p8x16, t: p8x16, idx: u8x16) -> p8x16 // Extended table look-up (A64)
  • pub unsafe fn vqtbl2_s8(t: i8x16x2, idx: u8x8) -> i8x8 // Table look-up (A64)
  • pub unsafe fn vqtbl2q_s8(t: i8x16x2, idx: u8x16) -> i8x16 // Table look-up (A64)
  • pub unsafe fn vqtbl2_u8(t: u8x16x2, idx: u8x8) -> u8x8 // Table look-up (A64)
  • pub unsafe fn vqtbl2q_u8(t: u8x16x2, idx: u8x16) -> u8x16 // Table look-up (A64)
  • pub unsafe fn vqtbl2_p8(t: p8x16x2, idx: u8x8) -> p8x8 // Table look-up (A64)
  • pub unsafe fn vqtbl2q_p8(t: p8x16x2, idx: u8x16) -> p8x16 // Table look-up (A64)
  • pub unsafe fn vqtbl3_s8(t: i8x16x3, idx: u8x8) -> i8x8 // Table look-up (A64)
  • pub unsafe fn vqtbl3q_s8(t: i8x16x3, idx: u8x16) -> i8x16 // Table look-up (A64)
  • pub unsafe fn vqtbl3_u8(t: u8x16x3, idx: u8x8) -> u8x8 // Table look-up (A64)
  • pub unsafe fn vqtbl3q_u8(t: u8x16x3, idx: u8x16) -> u8x16 // Table look-up (A64)
  • pub unsafe fn vqtbl3_p8(t: p8x16x3, idx: u8x8) -> p8x8 // Table look-up (A64)
  • pub unsafe fn vqtbl3q_p8(t: p8x16x3, idx: u8x16) -> p8x16 // Table look-up (A64)
  • pub unsafe fn vqtbl4_s8(t: i8x16x4, idx: u8x8) -> i8x8 // Table look-up (A64)
  • pub unsafe fn vqtbl4q_s8(t: i8x16x4, idx: u8x16) -> i8x16 // Table look-up (A64)
  • pub unsafe fn vqtbl4_u8(t: u8x16x4, idx: u8x8) -> u8x8 // Table look-up (A64)
  • pub unsafe fn vqtbl4q_u8(t: u8x16x4, idx: u8x16) -> u8x16 // Table look-up (A64)
  • pub unsafe fn vqtbl4_p8(t: p8x16x4, idx: u8x8) -> p8x8 // Table look-up (A64)
  • pub unsafe fn vqtbl4q_p8(t: p8x16x4, idx: u8x16) -> p8x16 // Table look-up (A64)
  • pub unsafe fn vqtbx2_s8(a: i8x8, t: i8x16x2, idx: u8x8) -> i8x8 // Extended table look-up (A64)
  • pub unsafe fn vqtbx2q_s8(a: i8x16, t: i8x16x2, idx: u8x16) -> i8x16 // Extended table look-up (A64)
  • pub unsafe fn vqtbx2_u8(a: u8x8, t: u8x16x2, idx: u8x8) -> u8x8 // Extended table look-up (A64)
  • pub unsafe fn vqtbx2q_u8(a: u8x16, t: u8x16x2, idx: u8x16) -> u8x16 // Extended table look-up (A64)
  • pub unsafe fn vqtbx2_p8(a: p8x8, t: p8x16x2, idx: u8x8) -> p8x8 // Extended table look-up (A64)
  • pub unsafe fn vqtbx2q_p8(a: p8x16, t: p8x16x2, idx: u8x16) -> p8x16 // Extended table look-up (A64)
  • pub unsafe fn vqtbx3_s8(a: i8x8, t: i8x16x3, idx: u8x8) -> i8x8 // Extended table look-up (A64)
  • pub unsafe fn vqtbx3q_s8(a: i8x16, t: i8x16x3, idx: u8x16) -> i8x16 // Extended table look-up (A64)
  • pub unsafe fn vqtbx3_u8(a: u8x8, t: u8x16x3, idx: u8x8) -> u8x8 // Extended table look-up (A64)
  • pub unsafe fn vqtbx3q_u8(a: u8x16, t: u8x16x3, idx: u8x16) -> u8x16 // Extended table look-up (A64)
  • pub unsafe fn vqtbx3_p8(a: p8x8, t: p8x16x3, idx: u8x8) -> p8x8 // Extended table look-up (A64)
  • pub unsafe fn vqtbx3q_p8(a: p8x16, t: p8x16x3, idx: u8x16) -> p8x16 // Extended table look-up (A64)
  • pub unsafe fn vqtbx4_s8(a: i8x8, t: i8x16x4, idx: u8x8) -> i8x8 // Extended table look-up (A64)
  • pub unsafe fn vqtbx4q_s8(a: i8x16, t: i8x16x4, idx: u8x16) -> i8x16 // Extended table look-up (A64)
  • pub unsafe fn vqtbx4_u8(a: u8x8, t: u8x16x4, idx: u8x8) -> u8x8 // Extended table look-up (A64)
  • pub unsafe fn vqtbx4q_u8(a: u8x16, t: u8x16x4, idx: u8x16) -> u8x16 // Extended table look-up (A64)
  • pub unsafe fn vqtbx4_p8(a: p8x8, t: p8x16x4, idx: u8x8) -> p8x8 // Extended table look-up (A64)
  • pub unsafe fn vqtbx4q_p8(a: p8x16, t: p8x16x4, idx: u8x16) -> p8x16 // Extended table look-up (A64)
  • pub unsafe fn vget_lane_u8(v: u8x8, lane: i) -> u8 // Extract lanes from a vector (v7/A32/A64)
  • pub unsafe fn vget_lane_u16(v: u16x4, lane: i) -> u16 // Extract lanes from a vector (v7/A32/A64)
  • pub unsafe fn vget_lane_u32(v: u32x2, lane: i) -> u32 // Extract lanes from a vector (v7/A32/A64)
  • pub unsafe fn vget_lane_u64(v: u64x1, lane: i) -> u64 // Extract lanes from a vector (v7/A32/A64)
  • pub unsafe fn vget_lane_p64(v: p64x1, lane: i) -> p64 // Extract lanes from a vector (A32/A64)
  • pub unsafe fn vget_lane_s8(v: i8x8, lane: i) -> i8 // Extract lanes from a vector (v7/A32/A64)
  • pub unsafe fn vget_lane_s16(v: i16x4, lane: i) -> i16 // Extract lanes from a vector (v7/A32/A64)
  • pub unsafe fn vget_lane_s32(v: i32x2, lane: i) -> i32 // Extract lanes from a vector (v7/A32/A64)
  • pub unsafe fn vget_lane_s64(v: i64x1, lane: i) -> i64 // Extract lanes from a vector (v7/A32/A64)
  • pub unsafe fn vget_lane_p8(v: p8x8, lane: i) -> p8 // Extract lanes from a vector (v7/A32/A64)
  • pub unsafe fn vget_lane_p16(v: p16x4, lane: i) -> p16 // Extract lanes from a vector (v7/A32/A64)
  • pub unsafe fn vget_lane_f32(v: f32x2, lane: i) -> f32 // Extract lanes from a vector (v7/A32/A64)
  • pub unsafe fn vget_lane_f64(v: f64x1, lane: i) -> f64 // Extract lanes from a vector (A64)
  • pub unsafe fn vgetq_lane_u8(v: u8x16, lane: i) -> u8 // Extract lanes from a vector (v7/A32/A64)
  • pub unsafe fn vgetq_lane_u16(v: u16x8, lane: i) -> u16 // Extract lanes from a vector (v7/A32/A64)
  • pub unsafe fn vgetq_lane_u32(v: u32x4, lane: i) -> u32 // Extract lanes from a vector (v7/A32/A64)
  • pub unsafe fn vgetq_lane_u64(v: u64x2, lane: i) -> u64 // Extract lanes from a vector (v7/A32/A64)
  • pub unsafe fn vgetq_lane_p64(v: p64x2, lane: i) -> p64 // Extract lanes from a vector (A32/A64)
  • pub unsafe fn vgetq_lane_s8(v: i8x16, lane: i) -> i8 // Extract lanes from a vector (v7/A32/A64)
  • pub unsafe fn vgetq_lane_s16(v: i16x8, lane: i) -> i16 // Extract lanes from a vector (v7/A32/A64)
  • pub unsafe fn vgetq_lane_s32(v: i32x4, lane: i) -> i32 // Extract lanes from a vector (v7/A32/A64)
  • pub unsafe fn vgetq_lane_s64(v: i64x2, lane: i) -> i64 // Extract lanes from a vector (v7/A32/A64)
  • pub unsafe fn vgetq_lane_p8(v: p8x16, lane: i) -> p8 // Extract lanes from a vector (v7/A32/A64)
  • pub unsafe fn vgetq_lane_p16(v: p16x8, lane: i) -> p16 // Extract lanes from a vector (v7/A32/A64)
  • pub unsafe fn vget_lane_f16(v: f16x4, lane: i) -> f16 // Extract lanes from a vector (v7/A32/A64)
  • pub unsafe fn vgetq_lane_f16(v: f16x8, lane: i) -> f16 // Extract lanes from a vector (v7/A32/A64)
  • pub unsafe fn vgetq_lane_f32(v: f32x4, lane: i) -> f32 // Extract lanes from a vector (v7/A32/A64)
  • pub unsafe fn vgetq_lane_f64(v: f64x2, lane: i) -> f64 // Extract lanes from a vector (A64)
  • pub unsafe fn vset_lane_u8(a: u8, v: u8x8, lane: i) -> u8x8 // Set lanes within a vector (v7/A32/A64)
  • pub unsafe fn vset_lane_u16(a: u16, v: u16x4, lane: i) -> u16x4 // Set lanes within a vector (v7/A32/A64)
  • pub unsafe fn vset_lane_u32(a: u32, v: u32x2, lane: i) -> u32x2 // Set lanes within a vector (v7/A32/A64)
  • pub unsafe fn vset_lane_u64(a: u64, v: u64x1, lane: i) -> u64x1 // Set lanes within a vector (v7/A32/A64)
  • pub unsafe fn vset_lane_p64(a: p64, v: p64x1, lane: i) -> p64x1 // Set lanes within a vector (A32/A64)
  • pub unsafe fn vset_lane_s8(a: i8, v: i8x8, lane: i) -> i8x8 // Set lanes within a vector (v7/A32/A64)
  • pub unsafe fn vset_lane_s16(a: i16, v: i16x4, lane: i) -> i16x4 // Set lanes within a vector (v7/A32/A64)
  • pub unsafe fn vset_lane_s32(a: i32, v: i32x2, lane: i) -> i32x2 // Set lanes within a vector (v7/A32/A64)
  • pub unsafe fn vset_lane_s64(a: i64, v: i64x1, lane: i) -> i64x1 // Set lanes within a vector (v7/A32/A64)
  • pub unsafe fn vset_lane_p8(a: p8, v: p8x8, lane: i) -> p8x8 // Set lanes within a vector (v7/A32/A64)
  • pub unsafe fn vset_lane_p16(a: p16, v: p16x4, lane: i) -> p16x4 // Set lanes within a vector (v7/A32/A64)
  • pub unsafe fn vset_lane_f16(a: f16, v: f16x4, lane: i) -> f16x4 // Set lanes within a vector (v7/A32/A64)
  • pub unsafe fn vsetq_lane_f16(a: f16, v: f16x8, lane: i) -> f16x8 // Set lanes within a vector (v7/A32/A64)
  • pub unsafe fn vset_lane_f32(a: f32, v: f32x2, lane: i) -> f32x2 // Set lanes within a vector (v7/A32/A64)
  • pub unsafe fn vset_lane_f64(a: f64, v: f64x1, lane: i) -> f64x1 // Set lanes within a vector (A64)
  • pub unsafe fn vsetq_lane_u8(a: u8, v: u8x16, lane: i) -> u8x16 // Set lanes within a vector (v7/A32/A64)
  • pub unsafe fn vsetq_lane_u16(a: u16, v: u16x8, lane: i) -> u16x8 // Set lanes within a vector (v7/A32/A64)
  • pub unsafe fn vsetq_lane_u32(a: u32, v: u32x4, lane: i) -> u32x4 // Set lanes within a vector (v7/A32/A64)
  • pub unsafe fn vsetq_lane_u64(a: u64, v: u64x2, lane: i) -> u64x2 // Set lanes within a vector (v7/A32/A64)
  • pub unsafe fn vsetq_lane_p64(a: p64, v: p64x2, lane: i) -> p64x2 // Set lanes within a vector (A32/A64)
  • pub unsafe fn vsetq_lane_s8(a: i8, v: i8x16, lane: i) -> i8x16 // Set lanes within a vector (v7/A32/A64)
  • pub unsafe fn vsetq_lane_s16(a: i16, v: i16x8, lane: i) -> i16x8 // Set lanes within a vector (v7/A32/A64)
  • pub unsafe fn vsetq_lane_s32(a: i32, v: i32x4, lane: i) -> i32x4 // Set lanes within a vector (v7/A32/A64)
  • pub unsafe fn vsetq_lane_s64(a: i64, v: i64x2, lane: i) -> i64x2 // Set lanes within a vector (v7/A32/A64)
  • pub unsafe fn vsetq_lane_p8(a: p8, v: p8x16, lane: i) -> p8x16 // Set lanes within a vector (v7/A32/A64)
  • pub unsafe fn vsetq_lane_p16(a: p16, v: p16x8, lane: i) -> p16x8 // Set lanes within a vector (v7/A32/A64)
  • pub unsafe fn vsetq_lane_f32(a: f32, v: f32x4, lane: i) -> f32x4 // Set lanes within a vector (v7/A32/A64)
  • pub unsafe fn vsetq_lane_f64(a: f64, v: f64x2, lane: i) -> f64x2 // Set lanes within a vector (A64)
  • pub unsafe fn vrecpxs_f32(a: f32) -> f32 // Reciprocal estimate/step and 1/sqrt estimate/step (A64)
  • pub unsafe fn vrecpxd_f64(a: f64) -> f64 // Reciprocal estimate/step and 1/sqrt estimate/step (A64)
  • pub unsafe fn vfma_n_f32(a: f32x2, b: f32x2, n: f32) -> f32x2 // Vector fused multiply accumulate (v7/A32/A64)
  • pub unsafe fn vfmaq_n_f32(a: f32x4, b: f32x4, n: f32) -> f32x4 // Vector fused multiply accumulate (v7/A32/A64)
  • pub unsafe fn vfms_n_f32(a: f32x2, b: f32x2, n: f32) -> f32x2 // Vector fused multiply subtract (A64)
  • pub unsafe fn vfmsq_n_f32(a: f32x4, b: f32x4, n: f32) -> f32x4 // Vector fused multiply subtract (A64)
  • pub unsafe fn vfma_n_f64(a: f64x1, b: f64x1, n: f64) -> f64x1 // Vector fused multiply accumulate (A64)
  • pub unsafe fn vfmaq_n_f64(a: f64x2, b: f64x2, n: f64) -> f64x2 // Vector fused multiply accumulate (A64)
  • pub unsafe fn vfms_n_f64(a: f64x1, b: f64x1, n: f64) -> f64x1 // Vector fused multiply subtract (A64)
  • pub unsafe fn vfmsq_n_f64(a: f64x2, b: f64x2, n: f64) -> f64x2 // Vector fused multiply subtract (A64)
  • pub unsafe fn vtrn_s8(a: i8x8, b: i8x8) -> i8x8x2 // Transpose elements (v7/A32/A64)
  • pub unsafe fn vtrn_s16(a: i16x4, b: i16x4) -> i16x4x2 // Transpose elements (v7/A32/A64)
  • pub unsafe fn vtrn_u8(a: u8x8, b: u8x8) -> u8x8x2 // Transpose elements (v7/A32/A64)
  • pub unsafe fn vtrn_u16(a: u16x4, b: u16x4) -> u16x4x2 // Transpose elements (v7/A32/A64)
  • pub unsafe fn vtrn_p8(a: p8x8, b: p8x8) -> p8x8x2 // Transpose elements (v7/A32/A64)
  • pub unsafe fn vtrn_p16(a: p16x4, b: p16x4) -> p16x4x2 // Transpose elements (v7/A32/A64)
  • pub unsafe fn vtrn_s32(a: i32x2, b: i32x2) -> i32x2x2 // Transpose elements (v7/A32/A64)
  • pub unsafe fn vtrn_f32(a: f32x2, b: f32x2) -> f32x2x2 // Transpose elements (v7/A32/A64)
  • pub unsafe fn vtrn_u32(a: u32x2, b: u32x2) -> u32x2x2 // Transpose elements (v7/A32/A64)
  • pub unsafe fn vtrnq_s8(a: i8x16, b: i8x16) -> i8x16x2 // Transpose elements (v7/A32/A64)
  • pub unsafe fn vtrnq_s16(a: i16x8, b: i16x8) -> i16x8x2 // Transpose elements (v7/A32/A64)
  • pub unsafe fn vtrnq_s32(a: i32x4, b: i32x4) -> i32x4x2 // Transpose elements (v7/A32/A64)
  • pub unsafe fn vtrnq_f32(a: f32x4, b: f32x4) -> f32x4x2 // Transpose elements (v7/A32/A64)
  • pub unsafe fn vtrnq_u8(a: u8x16, b: u8x16) -> u8x16x2 // Transpose elements (v7/A32/A64)
  • pub unsafe fn vtrnq_u16(a: u16x8, b: u16x8) -> u16x8x2 // Transpose elements (v7/A32/A64)
  • pub unsafe fn vtrnq_u32(a: u32x4, b: u32x4) -> u32x4x2 // Transpose elements (v7/A32/A64)
  • pub unsafe fn vtrnq_p8(a: p8x16, b: p8x16) -> p8x16x2 // Transpose elements (v7/A32/A64)
  • pub unsafe fn vtrnq_p16(a: p16x8, b: p16x8) -> p16x8x2 // Transpose elements (v7/A32/A64)
  • pub unsafe fn vzip_s8(a: i8x8, b: i8x8) -> i8x8x2 // Zip vectors (v7/A32/A64)
  • pub unsafe fn vzip_s16(a: i16x4, b: i16x4) -> i16x4x2 // Zip vectors (v7/A32/A64)
  • pub unsafe fn vzip_u8(a: u8x8, b: u8x8) -> u8x8x2 // Zip vectors (v7/A32/A64)
  • pub unsafe fn vzip_u16(a: u16x4, b: u16x4) -> u16x4x2 // Zip vectors (v7/A32/A64)
  • pub unsafe fn vzip_p8(a: p8x8, b: p8x8) -> p8x8x2 // Zip vectors (v7/A32/A64)
  • pub unsafe fn vzip_p16(a: p16x4, b: p16x4) -> p16x4x2 // Zip vectors (v7/A32/A64)
  • pub unsafe fn vzip_s32(a: i32x2, b: i32x2) -> i32x2x2 // Zip vectors (v7/A32/A64)
  • pub unsafe fn vzip_f32(a: f32x2, b: f32x2) -> f32x2x2 // Zip vectors (v7/A32/A64)
  • pub unsafe fn vzip_u32(a: u32x2, b: u32x2) -> u32x2x2 // Zip vectors (v7/A32/A64)
  • pub unsafe fn vzipq_s8(a: i8x16, b: i8x16) -> i8x16x2 // Zip vectors (v7/A32/A64)
  • pub unsafe fn vzipq_s16(a: i16x8, b: i16x8) -> i16x8x2 // Zip vectors (v7/A32/A64)
  • pub unsafe fn vzipq_s32(a: i32x4, b: i32x4) -> i32x4x2 // Zip vectors (v7/A32/A64)
  • pub unsafe fn vzipq_f32(a: f32x4, b: f32x4) -> f32x4x2 // Zip vectors (v7/A32/A64)
  • pub unsafe fn vzipq_u8(a: u8x16, b: u8x16) -> u8x16x2 // Zip vectors (v7/A32/A64)
  • pub unsafe fn vzipq_u16(a: u16x8, b: u16x8) -> u16x8x2 // Zip vectors (v7/A32/A64)
  • pub unsafe fn vzipq_u32(a: u32x4, b: u32x4) -> u32x4x2 // Zip vectors (v7/A32/A64)
  • pub unsafe fn vzipq_p8(a: p8x16, b: p8x16) -> p8x16x2 // Zip vectors (v7/A32/A64)
  • pub unsafe fn vzipq_p16(a: p16x8, b: p16x8) -> p16x8x2 // Zip vectors (v7/A32/A64)
  • pub unsafe fn vuzp_s8(a: i8x8, b: i8x8) -> i8x8x2 // Unzip vectors (v7/A32/A64)
  • pub unsafe fn vuzp_s16(a: i16x4, b: i16x4) -> i16x4x2 // Unzip vectors (v7/A32/A64)
  • pub unsafe fn vuzp_s32(a: i32x2, b: i32x2) -> i32x2x2 // Unzip vectors (v7/A32/A64)
  • pub unsafe fn vuzp_f32(a: f32x2, b: f32x2) -> f32x2x2 // Unzip vectors (v7/A32/A64)
  • pub unsafe fn vuzp_u8(a: u8x8, b: u8x8) -> u8x8x2 // Unzip vectors (v7/A32/A64)
  • pub unsafe fn vuzp_u16(a: u16x4, b: u16x4) -> u16x4x2 // Unzip vectors (v7/A32/A64)
  • pub unsafe fn vuzp_u32(a: u32x2, b: u32x2) -> u32x2x2 // Unzip vectors (v7/A32/A64)
  • pub unsafe fn vuzp_p8(a: p8x8, b: p8x8) -> p8x8x2 // Unzip vectors (v7/A32/A64)
  • pub unsafe fn vuzp_p16(a: p16x4, b: p16x4) -> p16x4x2 // Unzip vectors (v7/A32/A64)
  • pub unsafe fn vuzpq_s8(a: i8x16, b: i8x16) -> i8x16x2 // Unzip vectors (v7/A32/A64)
  • pub unsafe fn vuzpq_s16(a: i16x8, b: i16x8) -> i16x8x2 // Unzip vectors (v7/A32/A64)
  • pub unsafe fn vuzpq_s32(a: i32x4, b: i32x4) -> i32x4x2 // Unzip vectors (v7/A32/A64)
  • pub unsafe fn vuzpq_f32(a: f32x4, b: f32x4) -> f32x4x2 // Unzip vectors (v7/A32/A64)
  • pub unsafe fn vuzpq_u8(a: u8x16, b: u8x16) -> u8x16x2 // Unzip vectors (v7/A32/A64)
  • pub unsafe fn vuzpq_u16(a: u16x8, b: u16x8) -> u16x8x2 // Unzip vectors (v7/A32/A64)
  • pub unsafe fn vuzpq_u32(a: u32x4, b: u32x4) -> u32x4x2 // Unzip vectors (v7/A32/A64)
  • pub unsafe fn vuzpq_p8(a: p8x16, b: p8x16) -> p8x16x2 // Unzip vectors (v7/A32/A64)
  • pub unsafe fn vuzpq_p16(a: p16x8, b: p16x8) -> p16x8x2 // Unzip vectors (v7/A32/A64)
  • pub unsafe fn vreinterpret_s16_s8(a: i8x8) -> i16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s32_s8(a: i8x8) -> i32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_f32_s8(a: i8x8) -> f32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u8_s8(a: i8x8) -> u8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u16_s8(a: i8x8) -> u16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u32_s8(a: i8x8) -> u32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_p8_s8(a: i8x8) -> p8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_p16_s8(a: i8x8) -> p16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u64_s8(a: i8x8) -> u64x1 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s64_s8(a: i8x8) -> i64x1 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_f64_s8(a: i8x8) -> f64x1 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_p64_s8(a: i8x8) -> p64x1 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpret_f16_s8(a: i8x8) -> f16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s8_s16(a: i16x4) -> i8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s32_s16(a: i16x4) -> i32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_f32_s16(a: i16x4) -> f32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u8_s16(a: i16x4) -> u8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u16_s16(a: i16x4) -> u16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u32_s16(a: i16x4) -> u32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_p8_s16(a: i16x4) -> p8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_p16_s16(a: i16x4) -> p16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u64_s16(a: i16x4) -> u64x1 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s64_s16(a: i16x4) -> i64x1 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_f64_s16(a: i16x4) -> f64x1 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_p64_s16(a: i16x4) -> p64x1 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpret_f16_s16(a: i16x4) -> f16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s8_s32(a: i32x2) -> i8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s16_s32(a: i32x2) -> i16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_f32_s32(a: i32x2) -> f32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u8_s32(a: i32x2) -> u8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u16_s32(a: i32x2) -> u16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u32_s32(a: i32x2) -> u32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_p8_s32(a: i32x2) -> p8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_p16_s32(a: i32x2) -> p16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u64_s32(a: i32x2) -> u64x1 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s64_s32(a: i32x2) -> i64x1 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_f64_s32(a: i32x2) -> f64x1 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_p64_s32(a: i32x2) -> p64x1 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpret_f16_s32(a: i32x2) -> f16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s8_f32(a: f32x2) -> i8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s16_f32(a: f32x2) -> i16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s32_f32(a: f32x2) -> i32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u8_f32(a: f32x2) -> u8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u16_f32(a: f32x2) -> u16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u32_f32(a: f32x2) -> u32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_p8_f32(a: f32x2) -> p8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_p16_f32(a: f32x2) -> p16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u64_f32(a: f32x2) -> u64x1 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s64_f32(a: f32x2) -> i64x1 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_f64_f32(a: f32x2) -> f64x1 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_p64_f32(a: f32x2) -> p64x1 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpret_p64_f64(a: f64x1) -> p64x1 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_f16_f32(a: f32x2) -> f16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s8_u8(a: u8x8) -> i8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s16_u8(a: u8x8) -> i16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s32_u8(a: u8x8) -> i32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_f32_u8(a: u8x8) -> f32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u16_u8(a: u8x8) -> u16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u32_u8(a: u8x8) -> u32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_p8_u8(a: u8x8) -> p8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_p16_u8(a: u8x8) -> p16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u64_u8(a: u8x8) -> u64x1 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s64_u8(a: u8x8) -> i64x1 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_f64_u8(a: u8x8) -> f64x1 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_p64_u8(a: u8x8) -> p64x1 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpret_f16_u8(a: u8x8) -> f16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s8_u16(a: u16x4) -> i8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s16_u16(a: u16x4) -> i16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s32_u16(a: u16x4) -> i32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_f32_u16(a: u16x4) -> f32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u8_u16(a: u16x4) -> u8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u32_u16(a: u16x4) -> u32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_p8_u16(a: u16x4) -> p8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_p16_u16(a: u16x4) -> p16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u64_u16(a: u16x4) -> u64x1 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s64_u16(a: u16x4) -> i64x1 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_f64_u16(a: u16x4) -> f64x1 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_p64_u16(a: u16x4) -> p64x1 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpret_f16_u16(a: u16x4) -> f16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s8_u32(a: u32x2) -> i8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s16_u32(a: u32x2) -> i16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s32_u32(a: u32x2) -> i32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_f32_u32(a: u32x2) -> f32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u8_u32(a: u32x2) -> u8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u16_u32(a: u32x2) -> u16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_p8_u32(a: u32x2) -> p8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_p16_u32(a: u32x2) -> p16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u64_u32(a: u32x2) -> u64x1 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s64_u32(a: u32x2) -> i64x1 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_f64_u32(a: u32x2) -> f64x1 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_p64_u32(a: u32x2) -> p64x1 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpret_f16_u32(a: u32x2) -> f16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s8_p8(a: p8x8) -> i8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s16_p8(a: p8x8) -> i16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s32_p8(a: p8x8) -> i32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_f32_p8(a: p8x8) -> f32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u8_p8(a: p8x8) -> u8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u16_p8(a: p8x8) -> u16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u32_p8(a: p8x8) -> u32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_p16_p8(a: p8x8) -> p16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u64_p8(a: p8x8) -> u64x1 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s64_p8(a: p8x8) -> i64x1 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_f64_p8(a: p8x8) -> f64x1 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_p64_p8(a: p8x8) -> p64x1 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpret_f16_p8(a: p8x8) -> f16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s8_p16(a: p16x4) -> i8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s16_p16(a: p16x4) -> i16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s32_p16(a: p16x4) -> i32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_f32_p16(a: p16x4) -> f32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u8_p16(a: p16x4) -> u8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u16_p16(a: p16x4) -> u16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u32_p16(a: p16x4) -> u32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_p8_p16(a: p16x4) -> p8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u64_p16(a: p16x4) -> u64x1 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s64_p16(a: p16x4) -> i64x1 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_f64_p16(a: p16x4) -> f64x1 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_p64_p16(a: p16x4) -> p64x1 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpret_f16_p16(a: p16x4) -> f16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s8_u64(a: u64x1) -> i8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s16_u64(a: u64x1) -> i16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s32_u64(a: u64x1) -> i32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_f32_u64(a: u64x1) -> f32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u8_u64(a: u64x1) -> u8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u16_u64(a: u64x1) -> u16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u32_u64(a: u64x1) -> u32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_p8_u64(a: u64x1) -> p8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_p16_u64(a: u64x1) -> p16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s64_u64(a: u64x1) -> i64x1 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_f64_u64(a: u64x1) -> f64x1 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_p64_u64(a: u64x1) -> p64x1 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpret_f16_u64(a: u64x1) -> f16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s8_s64(a: i64x1) -> i8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s16_s64(a: i64x1) -> i16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s32_s64(a: i64x1) -> i32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_f32_s64(a: i64x1) -> f32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u8_s64(a: i64x1) -> u8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u16_s64(a: i64x1) -> u16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u32_s64(a: i64x1) -> u32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_p8_s64(a: i64x1) -> p8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_p16_s64(a: i64x1) -> p16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u64_s64(a: i64x1) -> u64x1 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_f64_s64(a: i64x1) -> f64x1 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_u64_p64(a: p64x1) -> u64x1 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpret_f16_s64(a: i64x1) -> f16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s8_f16(a: f16x4) -> i8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s16_f16(a: f16x4) -> i16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s32_f16(a: f16x4) -> i32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_f32_f16(a: f16x4) -> f32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u8_f16(a: f16x4) -> u8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u16_f16(a: f16x4) -> u16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u32_f16(a: f16x4) -> u32x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_p8_f16(a: f16x4) -> p8x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_p16_f16(a: f16x4) -> p16x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_u64_f16(a: f16x4) -> u64x1 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_s64_f16(a: f16x4) -> i64x1 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpret_f64_f16(a: f16x4) -> f64x1 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_p64_f16(a: f16x4) -> p64x1 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_s16_s8(a: i8x16) -> i16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s32_s8(a: i8x16) -> i32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_f32_s8(a: i8x16) -> f32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u8_s8(a: i8x16) -> u8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u16_s8(a: i8x16) -> u16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u32_s8(a: i8x16) -> u32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_p8_s8(a: i8x16) -> p8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_p16_s8(a: i8x16) -> p16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u64_s8(a: i8x16) -> u64x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s64_s8(a: i8x16) -> i64x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_f64_s8(a: i8x16) -> f64x2 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_p64_s8(a: i8x16) -> p64x2 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_p128_s8(a: i8x16) -> p128 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_f16_s8(a: i8x16) -> f16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s8_s16(a: i16x8) -> i8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s32_s16(a: i16x8) -> i32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_f32_s16(a: i16x8) -> f32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u8_s16(a: i16x8) -> u8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u16_s16(a: i16x8) -> u16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u32_s16(a: i16x8) -> u32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_p8_s16(a: i16x8) -> p8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_p16_s16(a: i16x8) -> p16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u64_s16(a: i16x8) -> u64x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s64_s16(a: i16x8) -> i64x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_f64_s16(a: i16x8) -> f64x2 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_p64_s16(a: i16x8) -> p64x2 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_p128_s16(a: i16x8) -> p128 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_f16_s16(a: i16x8) -> f16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s8_s32(a: i32x4) -> i8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s16_s32(a: i32x4) -> i16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_f32_s32(a: i32x4) -> f32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u8_s32(a: i32x4) -> u8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u16_s32(a: i32x4) -> u16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u32_s32(a: i32x4) -> u32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_p8_s32(a: i32x4) -> p8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_p16_s32(a: i32x4) -> p16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u64_s32(a: i32x4) -> u64x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s64_s32(a: i32x4) -> i64x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_f64_s32(a: i32x4) -> f64x2 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_p64_s32(a: i32x4) -> p64x2 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_p128_s32(a: i32x4) -> p128 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_f16_s32(a: i32x4) -> f16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s8_f32(a: f32x4) -> i8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s16_f32(a: f32x4) -> i16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s32_f32(a: f32x4) -> i32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u8_f32(a: f32x4) -> u8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u16_f32(a: f32x4) -> u16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u32_f32(a: f32x4) -> u32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_p8_f32(a: f32x4) -> p8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_p16_f32(a: f32x4) -> p16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u64_f32(a: f32x4) -> u64x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s64_f32(a: f32x4) -> i64x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_f64_f32(a: f32x4) -> f64x2 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_p64_f32(a: f32x4) -> p64x2 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_p128_f32(a: f32x4) -> p128 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_p64_f64(a: f64x2) -> p64x2 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_p128_f64(a: f64x2) -> p128 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_f16_f32(a: f32x4) -> f16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s8_u8(a: u8x16) -> i8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s16_u8(a: u8x16) -> i16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s32_u8(a: u8x16) -> i32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_f32_u8(a: u8x16) -> f32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u16_u8(a: u8x16) -> u16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u32_u8(a: u8x16) -> u32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_p8_u8(a: u8x16) -> p8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_p16_u8(a: u8x16) -> p16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u64_u8(a: u8x16) -> u64x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s64_u8(a: u8x16) -> i64x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_f64_u8(a: u8x16) -> f64x2 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_p64_u8(a: u8x16) -> p64x2 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_p128_u8(a: u8x16) -> p128 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_f16_u8(a: u8x16) -> f16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s8_u16(a: u16x8) -> i8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s16_u16(a: u16x8) -> i16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s32_u16(a: u16x8) -> i32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_f32_u16(a: u16x8) -> f32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u8_u16(a: u16x8) -> u8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u32_u16(a: u16x8) -> u32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_p8_u16(a: u16x8) -> p8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_p16_u16(a: u16x8) -> p16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u64_u16(a: u16x8) -> u64x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s64_u16(a: u16x8) -> i64x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_f64_u16(a: u16x8) -> f64x2 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_p64_u16(a: u16x8) -> p64x2 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_p128_u16(a: u16x8) -> p128 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_f16_u16(a: u16x8) -> f16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s8_u32(a: u32x4) -> i8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s16_u32(a: u32x4) -> i16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s32_u32(a: u32x4) -> i32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_f32_u32(a: u32x4) -> f32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u8_u32(a: u32x4) -> u8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u16_u32(a: u32x4) -> u16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_p8_u32(a: u32x4) -> p8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_p16_u32(a: u32x4) -> p16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u64_u32(a: u32x4) -> u64x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s64_u32(a: u32x4) -> i64x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_f64_u32(a: u32x4) -> f64x2 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_p64_u32(a: u32x4) -> p64x2 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_p128_u32(a: u32x4) -> p128 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_f16_u32(a: u32x4) -> f16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s8_p8(a: p8x16) -> i8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s16_p8(a: p8x16) -> i16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s32_p8(a: p8x16) -> i32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_f32_p8(a: p8x16) -> f32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u8_p8(a: p8x16) -> u8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u16_p8(a: p8x16) -> u16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u32_p8(a: p8x16) -> u32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_p16_p8(a: p8x16) -> p16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u64_p8(a: p8x16) -> u64x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s64_p8(a: p8x16) -> i64x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_f64_p8(a: p8x16) -> f64x2 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_p64_p8(a: p8x16) -> p64x2 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_p128_p8(a: p8x16) -> p128 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_f16_p8(a: p8x16) -> f16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s8_p16(a: p16x8) -> i8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s16_p16(a: p16x8) -> i16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s32_p16(a: p16x8) -> i32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_f32_p16(a: p16x8) -> f32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u8_p16(a: p16x8) -> u8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u16_p16(a: p16x8) -> u16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u32_p16(a: p16x8) -> u32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_p8_p16(a: p16x8) -> p8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u64_p16(a: p16x8) -> u64x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s64_p16(a: p16x8) -> i64x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_f64_p16(a: p16x8) -> f64x2 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_p64_p16(a: p16x8) -> p64x2 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_p128_p16(a: p16x8) -> p128 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_f16_p16(a: p16x8) -> f16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s8_u64(a: u64x2) -> i8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s16_u64(a: u64x2) -> i16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s32_u64(a: u64x2) -> i32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_f32_u64(a: u64x2) -> f32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u8_u64(a: u64x2) -> u8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u16_u64(a: u64x2) -> u16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u32_u64(a: u64x2) -> u32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_p8_u64(a: u64x2) -> p8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_p16_u64(a: u64x2) -> p16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s64_u64(a: u64x2) -> i64x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_f64_u64(a: u64x2) -> f64x2 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_f64_s64(a: i64x2) -> f64x2 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_p64_s64(a: i64x2) -> p64x2 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_p128_s64(a: i64x2) -> p128 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_p64_u64(a: u64x2) -> p64x2 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_p128_u64(a: u64x2) -> p128 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_f16_u64(a: u64x2) -> f16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s8_s64(a: i64x2) -> i8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s16_s64(a: i64x2) -> i16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s32_s64(a: i64x2) -> i32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_f32_s64(a: i64x2) -> f32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u8_s64(a: i64x2) -> u8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u16_s64(a: i64x2) -> u16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u32_s64(a: i64x2) -> u32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_p8_s64(a: i64x2) -> p8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_p16_s64(a: i64x2) -> p16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u64_s64(a: i64x2) -> u64x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u64_p64(a: p64x2) -> u64x2 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_f16_s64(a: i64x2) -> f16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s8_f16(a: f16x8) -> i8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s16_f16(a: f16x8) -> i16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s32_f16(a: f16x8) -> i32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_f32_f16(a: f16x8) -> f32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u8_f16(a: f16x8) -> u8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u16_f16(a: f16x8) -> u16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u32_f16(a: f16x8) -> u32x4 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_p8_f16(a: f16x8) -> p8x16 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_p16_f16(a: f16x8) -> p16x8 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_u64_f16(a: f16x8) -> u64x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_s64_f16(a: f16x8) -> i64x2 // Vector reinterpret cast operations (v7/A32/A64)
  • pub unsafe fn vreinterpretq_f64_f16(a: f16x8) -> f64x2 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_p64_f16(a: f16x8) -> p64x2 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_p128_f16(a: f16x8) -> p128 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpret_s8_f64(a: f64x1) -> i8x8 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_s16_f64(a: f64x1) -> i16x4 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_s32_f64(a: f64x1) -> i32x2 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_u8_f64(a: f64x1) -> u8x8 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_u16_f64(a: f64x1) -> u16x4 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_u32_f64(a: f64x1) -> u32x2 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_p8_f64(a: f64x1) -> p8x8 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_p16_f64(a: f64x1) -> p16x4 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_u64_f64(a: f64x1) -> u64x1 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_s64_f64(a: f64x1) -> i64x1 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_f16_f64(a: f64x1) -> f16x4 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_f32_f64(a: f64x1) -> f32x2 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_s8_f64(a: f64x2) -> i8x16 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_s16_f64(a: f64x2) -> i16x8 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_s32_f64(a: f64x2) -> i32x4 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_u8_f64(a: f64x2) -> u8x16 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_u16_f64(a: f64x2) -> u16x8 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_u32_f64(a: f64x2) -> u32x4 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_p8_f64(a: f64x2) -> p8x16 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_p16_f64(a: f64x2) -> p16x8 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_u64_f64(a: f64x2) -> u64x2 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_s64_f64(a: f64x2) -> i64x2 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_f16_f64(a: f64x2) -> f16x8 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_f32_f64(a: f64x2) -> f32x4 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_s8_p64(a: p64x1) -> i8x8 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpret_s16_p64(a: p64x1) -> i16x4 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpret_s32_p64(a: p64x1) -> i32x2 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpret_u8_p64(a: p64x1) -> u8x8 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpret_u16_p64(a: p64x1) -> u16x4 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpret_u32_p64(a: p64x1) -> u32x2 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpret_p8_p64(a: p64x1) -> p8x8 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpret_p16_p64(a: p64x1) -> p16x4 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpret_u64_p64(a: p64x1) -> u64x1 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpret_s64_p64(a: p64x1) -> i64x1 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpret_f64_p64(a: p64x1) -> f64x1 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpret_f16_p64(a: p64x1) -> f16x4 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_s8_p64(a: p64x2) -> i8x16 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_s16_p64(a: p64x2) -> i16x8 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_s32_p64(a: p64x2) -> i32x4 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_u8_p64(a: p64x2) -> u8x16 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_u16_p64(a: p64x2) -> u16x8 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_u32_p64(a: p64x2) -> u32x4 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_p8_p64(a: p64x2) -> p8x16 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_p16_p64(a: p64x2) -> p16x8 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_u64_p64(a: p64x2) -> u64x2 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_s64_p64(a: p64x2) -> i64x2 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_f64_p64(a: p64x2) -> f64x2 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_f16_p64(a: p64x2) -> f16x8 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_s8_p128(a: p128) -> i8x16 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_s16_p128(a: p128) -> i16x8 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_s32_p128(a: p128) -> i32x4 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_u8_p128(a: p128) -> u8x16 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_u16_p128(a: p128) -> u16x8 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_u32_p128(a: p128) -> u32x4 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_p8_p128(a: p128) -> p8x16 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_p16_p128(a: p128) -> p16x8 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_u64_p128(a: p128) -> u64x2 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_s64_p128(a: p128) -> i64x2 // Vector reinterpret cast operations (A32/A64)
  • pub unsafe fn vreinterpretq_f64_p128(a: p128) -> f64x2 // Vector reinterpret cast operations (A64)
  • pub unsafe fn vreinterpretq_f16_p128(a: p128) -> f16x8 // Vector reinterpret cast operations (A32/A64)
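The reinterpret intrinsics above are all zero-cost bit casts: the 128 bits are unchanged, only the lane view differs, so the usual implementation is a `transmute` between two same-sized vector types. A minimal sketch (using hypothetical stand-in structs `I8x16`/`U16x8`, not the real coresimd types):

```rust
use std::mem;

// Stand-in 128-bit vector types for illustration only.
#[repr(C)]
#[derive(Copy, Clone, Debug, PartialEq)]
struct I8x16([i8; 16]);

#[repr(C)]
#[derive(Copy, Clone, Debug, PartialEq)]
struct U16x8([u16; 8]);

// Sketch of vreinterpretq_u16_s8: same 128 bits, new lane view.
unsafe fn vreinterpretq_u16_s8(a: I8x16) -> U16x8 {
    mem::transmute(a)
}

fn main() {
    // Byte pairs [1,1], [2,2], ... give the same u16 lane value on
    // either endianness: 0x0101 = 257, 0x0202 = 514, and so on.
    let a = I8x16([1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8]);
    let b = unsafe { vreinterpretq_u16_s8(a) };
    assert_eq!(b.0[0], 257);
    assert_eq!(b.0[7], 0x0808);
    println!("{:?}", b);
}
```

Since no lanes move, the codegen for these should be a no-op (at most a register rename); that is worth checking on godbolt when writing the `assert_instr` annotations.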
  • pub unsafe fn vldrq_p128(ptr: *const p128) -> p128 // Vector load (A32/A64)
  • pub unsafe fn vstrq_p128(ptr: *mut p128, val: p128) -> () // Vector store (A32/A64)
  • pub unsafe fn vaeseq_u8(data: u8x16, key: u8x16) -> u8x16 // AES cryptography (A32/A64)
  • pub unsafe fn vaesdq_u8(data: u8x16, key: u8x16) -> u8x16 // AES cryptography (A32/A64)
  • pub unsafe fn vaesmcq_u8(data: u8x16) -> u8x16 // AES cryptography (A32/A64)
  • pub unsafe fn vaesimcq_u8(data: u8x16) -> u8x16 // AES cryptography (A32/A64)
  • pub unsafe fn vsha1cq_u32(hash_abcd: u32x4, hash_e: u32, wk: u32x4) -> u32x4 // SHA1 cryptography (A32/A64)
  • pub unsafe fn vsha1pq_u32(hash_abcd: u32x4, hash_e: u32, wk: u32x4) -> u32x4 // SHA1 cryptography (A32/A64)
  • pub unsafe fn vsha1mq_u32(hash_abcd: u32x4, hash_e: u32, wk: u32x4) -> u32x4 // SHA1 cryptography (A32/A64)
  • pub unsafe fn vsha1h_u32(hash_e: u32) -> u32 // SHA1 cryptography (A32/A64)
  • pub unsafe fn vsha1su0q_u32(w0_3: u32x4, w4_7: u32x4, w8_11: u32x4) -> u32x4 // SHA1 cryptography (A32/A64)
  • pub unsafe fn vsha1su1q_u32(tw0_3: u32x4, w12_15: u32x4) -> u32x4 // SHA1 cryptography (A32/A64)
  • pub unsafe fn vsha256hq_u32(hash_abcd: u32x4, hash_efgh: u32x4, wk: u32x4) -> u32x4 // SHA2-256 cryptography (A32/A64)
  • pub unsafe fn vsha256h2q_u32(hash_efgh: u32x4, hash_abcd: u32x4, wk: u32x4) -> u32x4 // SHA2-256 cryptography (A32/A64)
  • pub unsafe fn vsha256su0q_u32(w0_3: u32x4, w4_7: u32x4) -> u32x4 // SHA2-256 cryptography (A32/A64)
  • pub unsafe fn vsha256su1q_u32(tw0_3: u32x4, w8_11: u32x4, w12_15: u32x4) -> u32x4 // SHA2-256 cryptography (A32/A64)
  • pub unsafe fn vmull_p64(a: p64, b: p64) -> p128 // Vector long multiply (A32/A64)
  • pub unsafe fn vmull_high_p64(a: p64x2, b: p64x2) -> p128 // Vector long multiply (A32/A64)
  • pub unsafe fn __crc32b(a: u32, b: u8) -> u32 // CRC-32 checksum (A32/A64)
  • pub unsafe fn __crc32h(a: u32, b: u16) -> u32 // CRC-32 checksum (A32/A64)
  • pub unsafe fn __crc32w(a: u32, b: u32) -> u32 // CRC-32 checksum (A32/A64)
  • pub unsafe fn __crc32d(a: u32, b: u64) -> u32 // CRC-32 checksum (A32/A64)
  • pub unsafe fn __crc32cb(a: u32, b: u8) -> u32 // CRC-32 checksum (A32/A64)
  • pub unsafe fn __crc32ch(a: u32, b: u16) -> u32 // CRC-32 checksum (A32/A64)
  • pub unsafe fn __crc32cw(a: u32, b: u32) -> u32 // CRC-32 checksum (A32/A64)
  • pub unsafe fn __crc32cd(a: u32, b: u64) -> u32 // CRC-32 checksum (A32/A64)
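When testing the CRC-32 intrinsics above, a software model is handy as a reference. To my reading of the ARM documentation, the `__crc32*` instructions implement the reflected form of CRC-32 (polynomial 0xEDB88320) and `__crc32c*` the reflected CRC-32C (0x82F63B78), with no initial or final inversion inside the instruction itself; the model below reflects that assumption:

```rust
// Bitwise reference model for one-byte CRC steps, reflected polynomial,
// no init/xorout (the caller supplies those, as with the instructions).
fn crc32_byte(poly: u32, mut crc: u32, byte: u8) -> u32 {
    crc ^= byte as u32;
    for _ in 0..8 {
        crc = if crc & 1 != 0 { (crc >> 1) ^ poly } else { crc >> 1 };
    }
    crc
}

// Hypothetical software models of __crc32b and __crc32cb.
fn model_crc32b(a: u32, b: u8) -> u32 {
    crc32_byte(0xEDB8_8320, a, b)
}
fn model_crc32cb(a: u32, b: u8) -> u32 {
    crc32_byte(0x82F6_3B78, a, b)
}

fn main() {
    // Standard check values: CRC-32("123456789") = 0xCBF43926,
    // CRC-32C("123456789") = 0xE3069283, using init !0 and final xor !0.
    let crc = b"123456789".iter().fold(!0u32, |c, &b| model_crc32b(c, b)) ^ !0;
    assert_eq!(crc, 0xCBF4_3926);
    let crcc = b"123456789".iter().fold(!0u32, |c, &b| model_crc32cb(c, b)) ^ !0;
    assert_eq!(crcc, 0xE306_9283);
    println!("ok");
}
```

The wider variants (`__crc32h`/`__crc32w`/`__crc32d`) should agree with folding this byte model over the value's bytes in little-endian order, which gives an easy cross-check for their tests.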
  • pub unsafe fn vqrdmlah_s16(a: i16x4, b: i16x4, c: i16x4) -> i16x4 // Vector saturating rounding multiply accumulate (A64)
  • pub unsafe fn vqrdmlah_s32(a: i32x2, b: i32x2, c: i32x2) -> i32x2 // Vector saturating rounding multiply accumulate (A64)
  • pub unsafe fn vqrdmlahq_s16(a: i16x8, b: i16x8, c: i16x8) -> i16x8 // Vector saturating rounding multiply accumulate (A64)
  • pub unsafe fn vqrdmlahq_s32(a: i32x4, b: i32x4, c: i32x4) -> i32x4 // Vector saturating rounding multiply accumulate (A64)
  • pub unsafe fn vqrdmlsh_s16(a: i16x4, b: i16x4, c: i16x4) -> i16x4 // Vector saturating rounding multiply subtract (A64)
  • pub unsafe fn vqrdmlsh_s32(a: i32x2, b: i32x2, c: i32x2) -> i32x2 // Vector saturating rounding multiply subtract (A64)
  • pub unsafe fn vqrdmlshq_s16(a: i16x8, b: i16x8, c: i16x8) -> i16x8 // Vector saturating rounding multiply subtract (A64)
  • pub unsafe fn vqrdmlshq_s32(a: i32x4, b: i32x4, c: i32x4) -> i32x4 // Vector saturating rounding multiply subtract (A64)
  • pub unsafe fn vqrdmlah_lane_s16(a: i16x4, b: i16x4, v: i16x4, lane: i) -> i16x4 // Vector saturating rounding multiply accumulate (A64)
  • pub unsafe fn vqrdmlahq_lane_s16(a: i16x8, b: i16x8, v: i16x4, lane: i) -> i16x8 // Vector saturating rounding multiply accumulate (A64)
  • pub unsafe fn vqrdmlah_laneq_s16(a: i16x4, b: i16x4, v: i16x8, lane: i) -> i16x4 // Vector saturating rounding multiply accumulate (A64)
  • pub unsafe fn vqrdmlahq_laneq_s16(a: i16x8, b: i16x8, v: i16x8, lane: i) -> i16x8 // Vector saturating rounding multiply accumulate (A64)
  • pub unsafe fn vqrdmlah_lane_s32(a: i32x2, b: i32x2, v: i32x2, lane: i) -> i32x2 // Vector saturating rounding multiply accumulate (A64)
  • pub unsafe fn vqrdmlahq_lane_s32(a: i32x4, b: i32x4, v: i32x2, lane: i) -> i32x4 // Vector saturating rounding multiply accumulate (A64)
  • pub unsafe fn vqrdmlah_laneq_s32(a: i32x2, b: i32x2, v: i32x4, lane: i) -> i32x2 // Vector saturating rounding multiply accumulate (A64)
  • pub unsafe fn vqrdmlahq_laneq_s32(a: i32x4, b: i32x4, v: i32x4, lane: i) -> i32x4 // Vector saturating rounding multiply accumulate (A64)
  • pub unsafe fn vqrdmlsh_lane_s16(a: i16x4, b: i16x4, v: i16x4, lane: i) -> i16x4 // Vector saturating rounding multiply subtract (A64)
  • pub unsafe fn vqrdmlshq_lane_s16(a: i16x8, b: i16x8, v: i16x4, lane: i) -> i16x8 // Vector saturating rounding multiply subtract (A64)
  • pub unsafe fn vqrdmlsh_laneq_s16(a: i16x4, b: i16x4, v: i16x8, lane: i) -> i16x4 // Vector saturating rounding multiply subtract (A64)
  • pub unsafe fn vqrdmlshq_laneq_s16(a: i16x8, b: i16x8, v: i16x8, lane: i) -> i16x8 // Vector saturating rounding multiply subtract (A64)
  • pub unsafe fn vqrdmlsh_lane_s32(a: i32x2, b: i32x2, v: i32x2, lane: i) -> i32x2 // Vector saturating rounding multiply subtract (A64)
  • pub unsafe fn vqrdmlshq_lane_s32(a: i32x4, b: i32x4, v: i32x2, lane: i) -> i32x4 // Vector saturating rounding multiply subtract (A64)
  • pub unsafe fn vqrdmlsh_laneq_s32(a: i32x2, b: i32x2, v: i32x4, lane: i) -> i32x2 // Vector saturating rounding multiply subtract (A64)
  • pub unsafe fn vqrdmlshq_laneq_s32(a: i32x4, b: i32x4, v: i32x4, lane: i) -> i32x4 // Vector saturating rounding multiply subtract (A64)
  • pub unsafe fn vqrdmlahh_s16(a: i16, b: i16, c: i16) -> i16 // Vector saturating rounding multiply accumulate (A64)
  • pub unsafe fn vqrdmlahs_s32(a: i32, b: i32, c: i32) -> i32 // Vector saturating rounding multiply accumulate (A64)
  • pub unsafe fn vqrdmlshh_s16(a: i16, b: i16, c: i16) -> i16 // Vector saturating rounding multiply subtract (A64)
  • pub unsafe fn vqrdmlshs_s32(a: i32, b: i32, c: i32) -> i32 // Vector saturating rounding multiply subtract (A64)
  • pub unsafe fn vqrdmlahh_lane_s16(a: i16, b: i16, v: i16x4, lane: i) -> i16 // Vector saturating rounding multiply accumulate (A64)
  • pub unsafe fn vqrdmlahh_laneq_s16(a: i16, b: i16, v: i16x8, lane: i) -> i16 // Vector saturating rounding multiply accumulate (A64)
  • pub unsafe fn vqrdmlahs_lane_s32(a: i32, b: i32, v: i32x4, lane: i) -> i32 // Vector saturating rounding multiply accumulate (A64)
  • pub unsafe fn vqrdmlahs_laneq_s32(a: i32, b: i32, v: i32x4, lane: i) -> i32 // Vector saturating rounding multiply accumulate (A64)
  • pub unsafe fn vqrdmlshh_lane_s16(a: i16, b: i16, v: i16x4, lane: i) -> i16 // Vector saturating rounding multiply subtract (A64)
  • pub unsafe fn vqrdmlshh_laneq_s16(a: i16, b: i16, v: i16x8, lane: i) -> i16 // Vector saturating rounding multiply subtract (A64)
  • pub unsafe fn vqrdmlshs_lane_s32(a: i32, b: i32, v: i32x4, lane: i) -> i32 // Vector saturating rounding multiply subtract (A64)
  • pub unsafe fn vqrdmlshs_laneq_s32(a: i32, b: i32, v: i32x4, lane: i) -> i32 // Vector saturating rounding multiply subtract (A64)
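For testing the `vqrdmlah*`/`vqrdmlsh*` family, a scalar software model helps. Under my reading of the ARMv8.1 SQRDMLAH/SQRDMLSH pseudocode (worth confirming against the Architecture Reference Manual): the accumulator is widened, the doubled product is added or subtracted at full precision, a rounding constant of 2^(esize-1) is added, the result is shifted right by esize and then saturated. A sketch for the 16-bit scalar forms:

```rust
// Hypothetical software model of vqrdmlahh_s16 (SQRDMLAH, 16-bit):
// saturate(((a << 16) + 2*b*c + 2^15) >> 16).
fn vqrdmlahh_s16_model(a: i16, b: i16, c: i16) -> i16 {
    let acc = ((a as i64) << 16) + 2 * (b as i64) * (c as i64) + (1i64 << 15);
    (acc >> 16).clamp(i16::MIN as i64, i16::MAX as i64) as i16
}

// Hypothetical software model of vqrdmlshh_s16 (SQRDMLSH, 16-bit):
// same shape with the doubled product subtracted.
fn vqrdmlshh_s16_model(a: i16, b: i16, c: i16) -> i16 {
    let acc = ((a as i64) << 16) - 2 * (b as i64) * (c as i64) + (1i64 << 15);
    (acc >> 16).clamp(i16::MIN as i64, i16::MAX as i64) as i16
}

fn main() {
    // Small values: 2*2*3 = 12 rounds away, leaving the accumulator.
    assert_eq!(vqrdmlahh_s16_model(1, 2, 3), 1);
    assert_eq!(vqrdmlshh_s16_model(1, 2, 3), 1);
    // Overflow must saturate, not wrap.
    assert_eq!(
        vqrdmlahh_s16_model(i16::MAX, i16::MAX, i16::MAX),
        i16::MAX
    );
    println!("ok");
}
```

The 32-bit forms are the same with esize = 32 and an i128 accumulator; the vector forms apply this per lane.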
  • pub unsafe fn vabsh_f16(a: f16) -> f16 // Absolute (A32/A64)
  • pub unsafe fn vceqzh_f16(a: f16) -> u16 // Vector compare equal (A64)
  • pub unsafe fn vcgezh_f16(a: f16) -> u16 // Vector compare greater-than or equal (A64)
  • pub unsafe fn vcgtzh_f16(a: f16) -> u16 // Vector compare greater-than (A64)
  • pub unsafe fn vclezh_f16(a: f16) -> u16 // Vector compare less-than or equal (A64)
  • pub unsafe fn vcltzh_f16(a: f16) -> u16 // Vector compare less-than (A64)
  • pub unsafe fn vcvth_f16_s16(a: i16) -> f16 // Vector convert (A64)
  • pub unsafe fn vcvth_f16_s32(a: i32) -> f16 // Vector convert (A32/A64)
  • pub unsafe fn vcvth_f16_s64(a: i64) -> f16 // Vector convert (A64)
  • pub unsafe fn vcvth_f16_u16(a: u16) -> f16 // Vector convert (A64)
  • pub unsafe fn vcvth_f16_u32(a: u32) -> f16 // Vector convert (A32/A64)
  • pub unsafe fn vcvth_f16_u64(a: u64) -> f16 // Vector convert (A64)
  • pub unsafe fn vcvth_s16_f16(a: f16) -> i16 // Vector convert (A64)
  • pub unsafe fn vcvth_s32_f16(a: f16) -> i32 // Vector convert (A32/A64)
  • pub unsafe fn vcvth_s64_f16(a: f16) -> i64 // Vector convert (A64)
  • pub unsafe fn vcvth_u16_f16(a: f16) -> u16 // Vector convert (A64)
  • pub unsafe fn vcvth_u32_f16(a: f16) -> u32 // Vector convert (A32/A64)
  • pub unsafe fn vcvth_u64_f16(a: f16) -> u64 // Vector convert (A64)
  • pub unsafe fn vcvtah_s16_f16(a: f16) -> i16 // Vector convert (A64)
  • pub unsafe fn vcvtah_s32_f16(a: f16) -> i32 // Vector convert (A32/A64)
  • pub unsafe fn vcvtah_s64_f16(a: f16) -> i64 // Vector convert (A64)
  • pub unsafe fn vcvtah_u16_f16(a: f16) -> u16 // Vector convert (A64)
  • pub unsafe fn vcvtah_u32_f16(a: f16) -> u32 // Vector convert (A32/A64)
  • pub unsafe fn vcvtah_u64_f16(a: f16) -> u64 // Vector convert (A64)
  • pub unsafe fn vcvtmh_s16_f16(a: f16) -> i16 // Vector convert (A64)
  • pub unsafe fn vcvtmh_s32_f16(a: f16) -> i32 // Vector convert (A32/A64)
  • pub unsafe fn vcvtmh_s64_f16(a: f16) -> i64 // Vector convert (A64)
  • pub unsafe fn vcvtmh_u16_f16(a: f16) -> u16 // Vector convert (A64)
  • pub unsafe fn vcvtmh_u32_f16(a: f16) -> u32 // Vector convert (A32/A64)
  • pub unsafe fn vcvtmh_u64_f16(a: f16) -> u64 // Vector convert (A64)
  • pub unsafe fn vcvtnh_s16_f16(a: f16) -> i16 // Vector convert (A64)
  • pub unsafe fn vcvtnh_s32_f16(a: f16) -> i32 // Vector convert (A32/A64)
  • pub unsafe fn vcvtnh_s64_f16(a: f16) -> i64 // Vector convert (A64)
  • pub unsafe fn vcvtnh_u16_f16(a: f16) -> u16 // Vector convert (A64)
  • pub unsafe fn vcvtnh_u32_f16(a: f16) -> u32 // Vector convert (A32/A64)
  • pub unsafe fn vcvtnh_u64_f16(a: f16) -> u64 // Vector convert (A64)
  • pub unsafe fn vcvtph_s16_f16(a: f16) -> i16 // Vector convert (A64)
  • pub unsafe fn vcvtph_s32_f16(a: f16) -> i32 // Vector convert (A32/A64)
  • pub unsafe fn vcvtph_s64_f16(a: f16) -> i64 // Vector convert (A64)
  • pub unsafe fn vcvtph_u16_f16(a: f16) -> u16 // Vector convert (A64)
  • pub unsafe fn vcvtph_u32_f16(a: f16) -> u32 // Vector convert (A32/A64)
  • pub unsafe fn vcvtph_u64_f16(a: f16) -> u64 // Vector convert (A64)
  • pub unsafe fn vnegh_f16(a: f16) -> f16 // Negate (A32/A64)
  • pub unsafe fn vrecpeh_f16(a: f16) -> f16 // Reciprocal estimate (A64)
  • pub unsafe fn vrecpxh_f16(a: f16) -> f16 // Reciprocal exponent (A64)
  • pub unsafe fn vrndh_f16(a: f16) -> f16 // Vector round (A32/A64)
  • pub unsafe fn vrndah_f16(a: f16) -> f16 // Vector round (A32/A64)
  • pub unsafe fn vrndih_f16(a: f16) -> f16 // Vector round (A32/A64)
  • pub unsafe fn vrndmh_f16(a: f16) -> f16 // Vector round (A32/A64)
  • pub unsafe fn vrndnh_f16(a: f16) -> f16 // Vector round (A32/A64)
  • pub unsafe fn vrndph_f16(a: f16) -> f16 // Vector round (A32/A64)
  • pub unsafe fn vrndxh_f16(a: f16) -> f16 // Vector round (A32/A64)
  • pub unsafe fn vrsqrteh_f16(a: f16) -> f16 // Reciprocal square root estimate (A64)
  • pub unsafe fn vsqrth_f16(a: f16) -> f16 // Vector square root (A32/A64)
  • pub unsafe fn vaddh_f16(a: f16, b: f16) -> f16 // Vector add (A32/A64)
  • pub unsafe fn vabdh_f16(a: f16, b: f16) -> f16 // Absolute difference (A64)
  • pub unsafe fn vcageh_f16(a: f16, b: f16) -> u16 // Vector compare absolute greater-than or equal (A64)
  • pub unsafe fn vcagth_f16(a: f16, b: f16) -> u16 // Vector compare absolute greater-than (A64)
  • pub unsafe fn vcaleh_f16(a: f16, b: f16) -> u16 // Vector compare absolute less-than or equal (A64)
  • pub unsafe fn vcalth_f16(a: f16, b: f16) -> u16 // Vector compare absolute less-than (A64)
  • pub unsafe fn vceqh_f16(a: f16, b: f16) -> u16 // Vector compare equal (A64)
  • pub unsafe fn vcgeh_f16(a: f16, b: f16) -> u16 // Vector compare greater-than or equal (A64)
  • pub unsafe fn vcgth_f16(a: f16, b: f16) -> u16 // Vector compare greater-than (A64)
  • pub unsafe fn vcleh_f16(a: f16, b: f16) -> u16 // Vector compare less-than or equal (A64)
  • pub unsafe fn vclth_f16(a: f16, b: f16) -> u16 // Vector compare less-than (A64)
  • pub unsafe fn vcvth_n_f16_s16(a: i16, n: i) -> f16 // Vector convert (A64)
  • pub unsafe fn vcvth_n_f16_s32(a: i32, n: i) -> f16 // Vector convert (A32/A64)
  • pub unsafe fn vcvth_n_f16_s64(a: i64, n: i) -> f16 // Vector convert (A64)
  • pub unsafe fn vcvth_n_f16_u16(a: u16, n: i) -> f16 // Vector convert (A64)
  • pub unsafe fn vcvth_n_f16_u32(a: u32, n: i) -> f16 // Vector convert (A32/A64)
  • pub unsafe fn vcvth_n_f16_u64(a: u64, n: i) -> f16 // Vector convert (A64)
  • pub unsafe fn vcvth_n_s16_f16(a: f16, n: i) -> i16 // Vector convert (A64)
  • pub unsafe fn vcvth_n_s32_f16(a: f16, n: i) -> i32 // Vector convert (A32/A64)
  • pub unsafe fn vcvth_n_s64_f16(a: f16, n: i) -> i64 // Vector convert (A64)
  • pub unsafe fn vcvth_n_u16_f16(a: f16, n: i) -> u16 // Vector convert (A64)
  • pub unsafe fn vcvth_n_u32_f16(a: f16, n: i) -> u32 // Vector convert (A32/A64)
  • pub unsafe fn vcvth_n_u64_f16(a: f16, n: i) -> u64 // Vector convert (A64)
  • pub unsafe fn vdivh_f16(a: f16, b: f16) -> f16 // Vector divide (A32/A64)
  • pub unsafe fn vmaxh_f16(a: f16, b: f16) -> f16 // Maximum (A64)
  • pub unsafe fn vmaxnmh_f16(a: f16, b: f16) -> f16 // Maximum (A32/A64)
  • pub unsafe fn vminh_f16(a: f16, b: f16) -> f16 // Minimum (A64)
  • pub unsafe fn vminnmh_f16(a: f16, b: f16) -> f16 // Minimum (A32/A64)
  • pub unsafe fn vmulh_f16(a: f16, b: f16) -> f16 // Vector multiply (A32/A64)
  • pub unsafe fn vmulxh_f16(a: f16, b: f16) -> f16 // Vector multiply extended (A64)
  • pub unsafe fn vrecpsh_f16(a: f16, b: f16) -> f16 // Reciprocal estimate/step and 1/sqrt estimate/step (A64)
  • pub unsafe fn vrsqrtsh_f16(a: f16, b: f16) -> f16 // Reciprocal square root step (A64)
  • pub unsafe fn vsubh_f16(a: f16, b: f16) -> f16 // Vector subtract (A32/A64)
  • pub unsafe fn vfmah_f16(a: f16, b: f16, c: f16) -> f16 // Vector fused multiply accumulate (A32/A64)
  • pub unsafe fn vfmsh_f16(a: f16, b: f16, c: f16) -> f16 // Vector fused multiply subtract (A32/A64)
  • pub unsafe fn vabs_f16(a: f16x4) -> f16x4 // Absolute (A32/A64)
  • pub unsafe fn vabsq_f16(a: f16x8) -> f16x8 // Absolute (A32/A64)
  • pub unsafe fn vceqz_f16(a: f16x4) -> u16x4 // Vector compare equal (A32/A64)
  • pub unsafe fn vceqzq_f16(a: f16x8) -> u16x8 // Vector compare equal (A32/A64)
  • pub unsafe fn vcgez_f16(a: f16x4) -> u16x4 // Vector compare greater-than or equal (A32/A64)
  • pub unsafe fn vcgezq_f16(a: f16x8) -> u16x8 // Vector compare greater-than or equal (A32/A64)
  • pub unsafe fn vcgtz_f16(a: f16x4) -> u16x4 // Vector compare greater-than (A32/A64)
  • pub unsafe fn vcgtzq_f16(a: f16x8) -> u16x8 // Vector compare greater-than (A32/A64)
  • pub unsafe fn vclez_f16(a: f16x4) -> u16x4 // Vector compare less-than or equal (A32/A64)
  • pub unsafe fn vclezq_f16(a: f16x8) -> u16x8 // Vector compare less-than or equal (A32/A64)
  • pub unsafe fn vcltz_f16(a: f16x4) -> u16x4 // Vector compare less-than (A32/A64)
  • pub unsafe fn vcltzq_f16(a: f16x8) -> u16x8 // Vector compare less-than (A32/A64)
  • pub unsafe fn vcvt_f16_s16(a: i16x4) -> f16x4 // Vector convert (A32/A64)
  • pub unsafe fn vcvtq_f16_s16(a: i16x8) -> f16x8 // Vector convert (A32/A64)
  • pub unsafe fn vcvt_f16_u16(a: u16x4) -> f16x4 // Vector convert (A32/A64)
  • pub unsafe fn vcvtq_f16_u16(a: u16x8) -> f16x8 // Vector convert (A32/A64)
  • pub unsafe fn vcvt_s16_f16(a: f16x4) -> i16x4 // Vector convert (A32/A64)
  • pub unsafe fn vcvtq_s16_f16(a: f16x8) -> i16x8 // Vector convert (A32/A64)
  • pub unsafe fn vcvt_u16_f16(a: f16x4) -> u16x4 // Vector convert (A32/A64)
  • pub unsafe fn vcvtq_u16_f16(a: f16x8) -> u16x8 // Vector convert (A32/A64)
  • pub unsafe fn vcvta_s16_f16(a: f16x4) -> i16x4 // Vector convert (A32/A64)
  • pub unsafe fn vcvtaq_s16_f16(a: f16x8) -> i16x8 // Vector convert (A32/A64)
  • pub unsafe fn vcvta_u16_f16(a: f16x4) -> u16x4 // Vector convert (A32/A64)
  • pub unsafe fn vcvtaq_u16_f16(a: f16x8) -> u16x8 // Vector convert (A32/A64)
  • pub unsafe fn vcvtm_s16_f16(a: f16x4) -> i16x4 // Vector convert (A32/A64)
  • pub unsafe fn vcvtmq_s16_f16(a: f16x8) -> i16x8 // Vector convert (A32/A64)
  • pub unsafe fn vcvtm_u16_f16(a: f16x4) -> u16x4 // Vector convert (A32/A64)
  • pub unsafe fn vcvtmq_u16_f16(a: f16x8) -> u16x8 // Vector convert (A32/A64)
  • pub unsafe fn vcvtn_s16_f16(a: f16x4) -> i16x4 // Vector convert (A32/A64)
  • pub unsafe fn vcvtnq_s16_f16(a: f16x8) -> i16x8 // Vector convert (A32/A64)
  • pub unsafe fn vcvtn_u16_f16(a: f16x4) -> u16x4 // Vector convert (A32/A64)
  • pub unsafe fn vcvtnq_u16_f16(a: f16x8) -> u16x8 // Vector convert (A32/A64)
  • pub unsafe fn vcvtp_s16_f16(a: f16x4) -> i16x4 // Vector convert (A32/A64)
  • pub unsafe fn vcvtpq_s16_f16(a: f16x8) -> i16x8 // Vector convert (A32/A64)
  • pub unsafe fn vcvtp_u16_f16(a: f16x4) -> u16x4 // Vector convert (A32/A64)
  • pub unsafe fn vcvtpq_u16_f16(a: f16x8) -> u16x8 // Vector convert (A32/A64)
  • pub unsafe fn vneg_f16(a: f16x4) -> f16x4 // Negate (A32/A64)
  • pub unsafe fn vnegq_f16(a: f16x8) -> f16x8 // Negate (A32/A64)
  • pub unsafe fn vrecpe_f16(a: f16x4) -> f16x4 // Reciprocal estimate (A32/A64)
  • pub unsafe fn vrecpeq_f16(a: f16x8) -> f16x8 // Reciprocal estimate (A32/A64)
  • pub unsafe fn vrnd_f16(a: f16x4) -> f16x4 // Vector round (A32/A64)
  • pub unsafe fn vrndq_f16(a: f16x8) -> f16x8 // Vector round (A32/A64)
  • pub unsafe fn vrnda_f16(a: f16x4) -> f16x4 // Vector round (A32/A64)
  • pub unsafe fn vrndaq_f16(a: f16x8) -> f16x8 // Vector round (A32/A64)
  • pub unsafe fn vrndi_f16(a: f16x4) -> f16x4 // Vector round (A64)
  • pub unsafe fn vrndiq_f16(a: f16x8) -> f16x8 // Vector round (A64)
  • pub unsafe fn vrndm_f16(a: f16x4) -> f16x4 // Vector round (A32/A64)
  • pub unsafe fn vrndmq_f16(a: f16x8) -> f16x8 // Vector round (A32/A64)
  • pub unsafe fn vrndn_f16(a: f16x4) -> f16x4 // Vector round (A32/A64)
  • pub unsafe fn vrndnq_f16(a: f16x8) -> f16x8 // Vector round (A32/A64)
  • pub unsafe fn vrndp_f16(a: f16x4) -> f16x4 // Vector round (A32/A64)
  • pub unsafe fn vrndpq_f16(a: f16x8) -> f16x8 // Vector round (A32/A64)
  • pub unsafe fn vrndx_f16(a: f16x4) -> f16x4 // Vector round (A32/A64)
  • pub unsafe fn vrndxq_f16(a: f16x8) -> f16x8 // Vector round (A32/A64)
  • pub unsafe fn vrsqrte_f16(a: f16x4) -> f16x4 // Reciprocal square root estimate (A32/A64)
  • pub unsafe fn vrsqrteq_f16(a: f16x8) -> f16x8 // Reciprocal square root estimate (A32/A64)
  • pub unsafe fn vsqrt_f16(a: f16x4) -> f16x4 // Vector square root (A64)
  • pub unsafe fn vsqrtq_f16(a: f16x8) -> f16x8 // Vector square root (A64)
  • pub unsafe fn vadd_f16(a: f16x4, b: f16x4) -> f16x4 // Vector add (A32/A64)
  • pub unsafe fn vaddq_f16(a: f16x8, b: f16x8) -> f16x8 // Vector add (A32/A64)
  • pub unsafe fn vabd_f16(a: f16x4, b: f16x4) -> f16x4 // Absolute difference (A32/A64)
  • pub unsafe fn vabdq_f16(a: f16x8, b: f16x8) -> f16x8 // Absolute difference (A32/A64)
  • pub unsafe fn vcage_f16(a: f16x4, b: f16x4) -> u16x4 // Vector compare absolute greater-than or equal (A32/A64)
  • pub unsafe fn vcageq_f16(a: f16x8, b: f16x8) -> u16x8 // Vector compare absolute greater-than or equal (A32/A64)
  • pub unsafe fn vcagt_f16(a: f16x4, b: f16x4) -> u16x4 // Vector compare absolute greater-than (A32/A64)
  • pub unsafe fn vcagtq_f16(a: f16x8, b: f16x8) -> u16x8 // Vector compare absolute greater-than (A32/A64)
  • pub unsafe fn vcale_f16(a: f16x4, b: f16x4) -> u16x4 // Vector compare absolute less-than or equal (A32/A64)
  • pub unsafe fn vcaleq_f16(a: f16x8, b: f16x8) -> u16x8 // Vector compare absolute less-than or equal (A32/A64)
  • pub unsafe fn vcalt_f16(a: f16x4, b: f16x4) -> u16x4 // Vector compare absolute less-than (A32/A64)
  • pub unsafe fn vcaltq_f16(a: f16x8, b: f16x8) -> u16x8 // Vector compare absolute less-than (A32/A64)
  • pub unsafe fn vceq_f16(a: f16x4, b: f16x4) -> u16x4 // Vector compare equal (A32/A64)
  • pub unsafe fn vceqq_f16(a: f16x8, b: f16x8) -> u16x8 // Vector compare equal (A32/A64)
  • pub unsafe fn vcge_f16(a: f16x4, b: f16x4) -> u16x4 // Vector compare greater-than or equal (A32/A64)
  • pub unsafe fn vcgeq_f16(a: f16x8, b: f16x8) -> u16x8 // Vector compare greater-than or equal (A32/A64)
  • pub unsafe fn vcgt_f16(a: f16x4, b: f16x4) -> u16x4 // Vector compare greater-than (A32/A64)
  • pub unsafe fn vcgtq_f16(a: f16x8, b: f16x8) -> u16x8 // Vector compare greater-than (A32/A64)
  • pub unsafe fn vcle_f16(a: f16x4, b: f16x4) -> u16x4 // Vector compare less-than or equal (A32/A64)
  • pub unsafe fn vcleq_f16(a: f16x8, b: f16x8) -> u16x8 // Vector compare less-than or equal (A32/A64)
  • pub unsafe fn vclt_f16(a: f16x4, b: f16x4) -> u16x4 // Vector compare less-than (A32/A64)
  • pub unsafe fn vcltq_f16(a: f16x8, b: f16x8) -> u16x8 // Vector compare less-than (A32/A64)
  • pub unsafe fn vcvt_n_f16_s16(a: i16x4, n: i) -> f16x4 // Vector convert (A32/A64)
  • pub unsafe fn vcvtq_n_f16_s16(a: i16x8, n: i) -> f16x8 // Vector convert (A32/A64)
  • pub unsafe fn vcvt_n_f16_u16(a: u16x4, n: i) -> f16x4 // Vector convert (A32/A64)
  • pub unsafe fn vcvtq_n_f16_u16(a: u16x8, n: i) -> f16x8 // Vector convert (A32/A64)
  • pub unsafe fn vcvt_n_s16_f16(a: f16x4, n: i) -> i16x4 // Vector convert (A32/A64)
  • pub unsafe fn vcvtq_n_s16_f16(a: f16x8, n: i) -> i16x8 // Vector convert (A32/A64)
  • pub unsafe fn vcvt_n_u16_f16(a: f16x4, n: i) -> u16x4 // Vector convert (A32/A64)
  • pub unsafe fn vcvtq_n_u16_f16(a: f16x8, n: i) -> u16x8 // Vector convert (A32/A64)
  • pub unsafe fn vdiv_f16(a: f16x4, b: f16x4) -> f16x4 // Vector divide (A64)
  • pub unsafe fn vdivq_f16(a: f16x8, b: f16x8) -> f16x8 // Vector divide (A64)
  • pub unsafe fn vmax_f16(a: f16x4, b: f16x4) -> f16x4 // Maximum (A32/A64)
  • pub unsafe fn vmaxq_f16(a: f16x8, b: f16x8) -> f16x8 // Maximum (A32/A64)
  • pub unsafe fn vmaxnm_f16(a: f16x4, b: f16x4) -> f16x4 // Maximum (A32/A64)
  • pub unsafe fn vmaxnmq_f16(a: f16x8, b: f16x8) -> f16x8 // Maximum (A32/A64)
  • pub unsafe fn vmin_f16(a: f16x4, b: f16x4) -> f16x4 // Minimum (A32/A64)
  • pub unsafe fn vminq_f16(a: f16x8, b: f16x8) -> f16x8 // Minimum (A32/A64)
  • pub unsafe fn vminnm_f16(a: f16x4, b: f16x4) -> f16x4 // Minimum (A32/A64)
  • pub unsafe fn vminnmq_f16(a: f16x8, b: f16x8) -> f16x8 // Minimum (A32/A64)
  • pub unsafe fn vmul_f16(a: f16x4, b: f16x4) -> f16x4 // Vector multiply (A32/A64)
  • pub unsafe fn vmulq_f16(a: f16x8, b: f16x8) -> f16x8 // Vector multiply (A32/A64)
  • pub unsafe fn vmulx_f16(a: f16x4, b: f16x4) -> f16x4 // Vector multiply extended (A64)
  • pub unsafe fn vmulxq_f16(a: f16x8, b: f16x8) -> f16x8 // Vector multiply extended (A64)
  • pub unsafe fn vpadd_f16(a: f16x4, b: f16x4) -> f16x4 // Pairwise add (A32/A64)
  • pub unsafe fn vpaddq_f16(a: f16x8, b: f16x8) -> f16x8 // Pairwise add (A64)
  • pub unsafe fn vpmax_f16(a: f16x4, b: f16x4) -> f16x4 // Folding maximum of adjacent pairs (A32/A64)
  • pub unsafe fn vpmaxq_f16(a: f16x8, b: f16x8) -> f16x8 // Folding maximum of adjacent pairs (A64)
  • pub unsafe fn vpmaxnm_f16(a: f16x4, b: f16x4) -> f16x4 // Folding maximum of adjacent pairs (A64)
  • pub unsafe fn vpmaxnmq_f16(a: f16x8, b: f16x8) -> f16x8 // Folding maximum of adjacent pairs (A64)
  • pub unsafe fn vpmin_f16(a: f16x4, b: f16x4) -> f16x4 // Folding minimum of adjacent pairs (A32/A64)
  • pub unsafe fn vpminq_f16(a: f16x8, b: f16x8) -> f16x8 // Folding minimum of adjacent pairs (A64)
  • pub unsafe fn vpminnm_f16(a: f16x4, b: f16x4) -> f16x4 // Folding minimum of adjacent pairs (A64)
  • pub unsafe fn vpminnmq_f16(a: f16x8, b: f16x8) -> f16x8 // Folding minimum of adjacent pairs (A64)
  • pub unsafe fn vrecps_f16(a: f16x4, b: f16x4) -> f16x4 // Reciprocal estimate/step and 1/sqrt estimate/step (A32/A64)
  • pub unsafe fn vrecpsq_f16(a: f16x8, b: f16x8) -> f16x8 // Reciprocal estimate/step and 1/sqrt estimate/step (A32/A64)
  • pub unsafe fn vrsqrts_f16(a: f16x4, b: f16x4) -> f16x4 // Reciprocal square root step (A32/A64)
  • pub unsafe fn vrsqrtsq_f16(a: f16x8, b: f16x8) -> f16x8 // Reciprocal square root step (A32/A64)
  • pub unsafe fn vsub_f16(a: f16x4, b: f16x4) -> f16x4 // Vector subtract (A32/A64)
  • pub unsafe fn vsubq_f16(a: f16x8, b: f16x8) -> f16x8 // Vector subtract (A32/A64)
  • pub unsafe fn vfma_f16(a: f16x4, b: f16x4, c: f16x4) -> f16x4 // Vector fused multiply accumulate (A32/A64)
  • pub unsafe fn vfmaq_f16(a: f16x8, b: f16x8, c: f16x8) -> f16x8 // Vector fused multiply accumulate (A32/A64)
  • pub unsafe fn vfms_f16(a: f16x4, b: f16x4, c: f16x4) -> f16x4 // Vector fused multiply subtract (A32/A64)
  • pub unsafe fn vfmsq_f16(a: f16x8, b: f16x8, c: f16x8) -> f16x8 // Vector fused multiply subtract (A32/A64)
  • pub unsafe fn vfma_lane_f16(a: f16x4, b: f16x4, v: f16x4, lane: i) -> f16x4 // Vector fused multiply accumulate (A64)
  • pub unsafe fn vfmaq_lane_f16(a: f16x8, b: f16x8, v: f16x4, lane: i) -> f16x8 // Vector fused multiply accumulate (A64)
  • pub unsafe fn vfma_laneq_f16(a: f16x4, b: f16x4, v: f16x8, lane: i) -> f16x4 // Vector fused multiply accumulate (A64)
  • pub unsafe fn vfmaq_laneq_f16(a: f16x8, b: f16x8, v: f16x8, lane: i) -> f16x8 // Vector fused multiply accumulate (A64)
  • pub unsafe fn vfma_n_f16(a: f16x4, b: f16x4, n: f16) -> f16x4 // Vector fused multiply accumulate (A64)
  • pub unsafe fn vfmaq_n_f16(a: f16x8, b: f16x8, n: f16) -> f16x8 // Vector fused multiply accumulate (A64)
  • pub unsafe fn vfmah_lane_f16(a: f16, b: f16, v: f16x4, lane: i) -> f16 // Vector fused multiply accumulate (A64)
  • pub unsafe fn vfmah_laneq_f16(a: f16, b: f16, v: f16x8, lane: i) -> f16 // Vector fused multiply accumulate (A64)
  • pub unsafe fn vfms_lane_f16(a: f16x4, b: f16x4, v: f16x4, lane: i) -> f16x4 // Vector fused multiply subtract (A64)
  • pub unsafe fn vfmsq_lane_f16(a: f16x8, b: f16x8, v: f16x4, lane: i) -> f16x8 // Vector fused multiply subtract (A64)
  • pub unsafe fn vfms_laneq_f16(a: f16x4, b: f16x4, v: f16x8, lane: i) -> f16x4 // Vector fused multiply subtract (A64)
  • pub unsafe fn vfmsq_laneq_f16(a: f16x8, b: f16x8, v: f16x8, lane: i) -> f16x8 // Vector fused multiply subtract (A64)
  • pub unsafe fn vfms_n_f16(a: f16x4, b: f16x4, n: f16) -> f16x4 // Vector fused multiply subtract (A64)
  • pub unsafe fn vfmsq_n_f16(a: f16x8, b: f16x8, n: f16) -> f16x8 // Vector fused multiply subtract (A64)
  • pub unsafe fn vfmsh_lane_f16(a: f16, b: f16, v: f16x4, lane: i) -> f16 // Vector fused multiply subtract (A64)
  • pub unsafe fn vfmsh_laneq_f16(a: f16, b: f16, v: f16x8, lane: i) -> f16 // Vector fused multiply subtract (A64)
  • pub unsafe fn vmul_lane_f16(a: f16x4, v: f16x4, lane: i) -> f16x4 // Vector multiply (A32/A64)
  • pub unsafe fn vmulq_lane_f16(a: f16x8, v: f16x4, lane: i) -> f16x8 // Vector multiply (A32/A64)
  • pub unsafe fn vmul_laneq_f16(a: f16x4, v: f16x8, lane: i) -> f16x4 // Vector multiply (A64)
  • pub unsafe fn vmulq_laneq_f16(a: f16x8, v: f16x8, lane: i) -> f16x8 // Vector multiply (A64)
  • pub unsafe fn vmul_n_f16(a: f16x4, n: f16) -> f16x4 // Vector multiply by scalar (A32/A64)
  • pub unsafe fn vmulq_n_f16(a: f16x8, n: f16) -> f16x8 // Vector multiply by scalar (A32/A64)
  • pub unsafe fn vmulh_lane_f16(a: f16, v: f16x4, lane: i) -> f16 // Vector multiply (A64)
  • pub unsafe fn vmulh_laneq_f16(a: f16, v: f16x8, lane: i) -> f16 // Vector multiply (A64)
  • pub unsafe fn vmulx_lane_f16(a: f16x4, v: f16x4, lane: i) -> f16x4 // Vector multiply extended (A64)
  • pub unsafe fn vmulxq_lane_f16(a: f16x8, v: f16x4, lane: i) -> f16x8 // Vector multiply extended (A64)
  • pub unsafe fn vmulx_laneq_f16(a: f16x4, v: f16x8, lane: i) -> f16x4 // Vector multiply extended (A64)
  • pub unsafe fn vmulxq_laneq_f16(a: f16x8, v: f16x8, lane: i) -> f16x8 // Vector multiply extended (A64)
  • pub unsafe fn vmulx_n_f16(a: f16x4, n: f16) -> f16x4 // Vector multiply extended (A64)
  • pub unsafe fn vmulxq_n_f16(a: f16x8, n: f16) -> f16x8 // Vector multiply extended (A64)
  • pub unsafe fn vmulxh_lane_f16(a: f16, v: f16x4, lane: i) -> f16 // Vector multiply extended (A64)
  • pub unsafe fn vmulxh_laneq_f16(a: f16, v: f16x8, lane: i) -> f16 // Vector multiply extended (A64)
  • pub unsafe fn vmaxv_f16(a: f16x4) -> f16 // Maximum (A64)
  • pub unsafe fn vmaxvq_f16(a: f16x8) -> f16 // Maximum (A64)
  • pub unsafe fn vminv_f16(a: f16x4) -> f16 // Minimum (A64)
  • pub unsafe fn vminvq_f16(a: f16x8) -> f16 // Minimum (A64)
  • pub unsafe fn vmaxnmv_f16(a: f16x4) -> f16 // Maximum (A64)
  • pub unsafe fn vmaxnmvq_f16(a: f16x8) -> f16 // Maximum (A64)
  • pub unsafe fn vminnmv_f16(a: f16x4) -> f16 // Minimum (A64)
  • pub unsafe fn vminnmvq_f16(a: f16x8) -> f16 // Minimum (A64)
  • pub unsafe fn vbsl_f16(a: u16x4, b: f16x4, c: f16x4) -> f16x4 // Bitwise select (v7/A32/A64)
  • pub unsafe fn vbslq_f16(a: u16x8, b: f16x8, c: f16x8) -> f16x8 // Bitwise select (v7/A32/A64)
  • pub unsafe fn vzip_f16(a: f16x4, b: f16x4) -> f16x4x2 // Zip vectors (v7/A32/A64)
  • pub unsafe fn vzipq_f16(a: f16x8, b: f16x8) -> f16x8x2 // Zip vectors (v7/A32/A64)
  • pub unsafe fn vuzp_f16(a: f16x4, b: f16x4) -> f16x4x2 // Unzip vectors (v7/A32/A64)
  • pub unsafe fn vuzpq_f16(a: f16x8, b: f16x8) -> f16x8x2 // Unzip vectors (v7/A32/A64)
  • pub unsafe fn vtrn_f16(a: f16x4, b: f16x4) -> f16x4x2 // Transpose elements (v7/A32/A64)
  • pub unsafe fn vtrnq_f16(a: f16x8, b: f16x8) -> f16x8x2 // Transpose elements (v7/A32/A64)
  • pub unsafe fn vmov_n_f16(value: f16) -> f16x4 // Vector move (v7/A32/A64)
  • pub unsafe fn vmovq_n_f16(value: f16) -> f16x8 // Vector move (v7/A32/A64)
  • pub unsafe fn vdup_n_f16(value: f16) -> f16x4 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdupq_n_f16(value: f16) -> f16x8 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdup_lane_f16(vec: f16x4, lane: i) -> f16x4 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vdupq_lane_f16(vec: f16x4, lane: i) -> f16x8 // Vector duplicate (v7/A32/A64)
  • pub unsafe fn vext_f16(a: f16x4, b: f16x4, n: i) -> f16x4 // Vector extract (v7/A32/A64)
  • pub unsafe fn vextq_f16(a: f16x8, b: f16x8, n: i) -> f16x8 // Vector extract (v7/A32/A64)
  • pub unsafe fn vrev64_f16(vec: f16x4) -> f16x4 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vrev64q_f16(vec: f16x8) -> f16x8 // Reverse vector elements (swap endianness) (v7/A32/A64)
  • pub unsafe fn vzip1_f16(a: f16x4, b: f16x4) -> f16x4 // Zip vectors (A64)
  • pub unsafe fn vzip1q_f16(a: f16x8, b: f16x8) -> f16x8 // Zip vectors (A64)
  • pub unsafe fn vzip2_f16(a: f16x4, b: f16x4) -> f16x4 // Zip vectors (A64)
  • pub unsafe fn vzip2q_f16(a: f16x8, b: f16x8) -> f16x8 // Zip vectors (A64)
  • pub unsafe fn vuzp1_f16(a: f16x4, b: f16x4) -> f16x4 // Unzip vectors (A64)
  • pub unsafe fn vuzp1q_f16(a: f16x8, b: f16x8) -> f16x8 // Unzip vectors (A64)
  • pub unsafe fn vuzp2_f16(a: f16x4, b: f16x4) -> f16x4 // Unzip vectors (A64)
  • pub unsafe fn vuzp2q_f16(a: f16x8, b: f16x8) -> f16x8 // Unzip vectors (A64)
  • pub unsafe fn vtrn1_f16(a: f16x4, b: f16x4) -> f16x4 // Transpose elements (A64)
  • pub unsafe fn vtrn1q_f16(a: f16x8, b: f16x8) -> f16x8 // Transpose elements (A64)
  • pub unsafe fn vtrn2_f16(a: f16x4, b: f16x4) -> f16x4 // Transpose elements (A64)
  • pub unsafe fn vtrn2q_f16(a: f16x8, b: f16x8) -> f16x8 // Transpose elements (A64)
  • pub unsafe fn vdup_laneq_f16(vec: f16x8, lane: i) -> f16x4 // Vector duplicate (A64)
  • pub unsafe fn vdupq_laneq_f16(vec: f16x8, lane: i) -> f16x8 // Vector duplicate (A64)
  • pub unsafe fn vduph_lane_f16(vec: f16x4, lane: i) -> f16 // Vector duplicate (A64)
  • pub unsafe fn vduph_laneq_f16(vec: f16x8, lane: i) -> f16 // Vector duplicate (A64)

Is there a blocker for these, or is it just finding time to do it? I'd like to help, but I'd need a more experienced compiler/SIMD person to point me in the right direction.

I can mentor. Start by taking a look at some of the intrinsics in the coresimd/aarch64/neon.rs module :)

Is there some upstream source that these all get copied from, or are they actually written by hand?

I am not sure I understand the question? The neon modules in this repository are written by hand, although @Amanieu has expressed interest in generating some parts of them automatically.

Ah, I see, that would be the ARM NEON spec: https://developer.arm.com/technologies/neon/intrinsics

Now might be a great time to help make some more progress on this! We've got tons of intrinsics already implemented (thanks @gnzlbg!), and I've just implemented automatic verification of all added intrinsics, so we know if they're added they've got the correct signature at least!

I've updated the OP of this issue with more detailed instructions about how to bind NEON intrinsics. Hopefully it's not too bad any more!

We'll probably want to reorganize modules so they're a bit smaller and more manageable over time, but for now if anyone's interested to add more intrinsics and needs some help let me know!

more manageable

I have a proposal for this: using a macro to make definitions one-line, e.g.:

neon_op!(binary vadd_s8 : int8x8_t == simd_add, assert vadd / add, doc "Vector add");
neon_op!(binary vaddl_s8 : int8x8_t -> int16x8_t == simd_add, assert vaddl / saddl, doc "Vector long add");
neon_op!(unary vmovn_s16 : int16x8_t -> int8x8_t == simd_cast, assert vmovn / xtn, doc "Vector narrow integer");

This will make adding new ones easier (scrolling through a boilerplate-filled file just feels awful), and I'll add a lot more simd_sub, simd_mul, simd_lt, etc. ones. Would this be accepted?

macro definition I currently have
macro_rules! neon_op {
    (binary $name:ident : $type:ident == $op:ident, assert $instr32:ident / $instr64:ident, doc $doc:literal) => {
        #[inline]
        #[doc = $doc]
        #[target_feature(enable = "neon")]
        #[cfg_attr(target_arch = "arm", target_feature(enable = "v7"))]
        #[cfg_attr(all(test, target_arch = "arm"), assert_instr($instr32))]
        #[cfg_attr(all(test, target_arch = "aarch64"), assert_instr($instr64))]
        pub unsafe fn $name(a: $type, b: $type) -> $type {
            $op(a, b)
        }
    };
    (binary $name:ident : $type:ident -> $result_type:ident == $op:ident, assert $instr32:ident / $instr64:ident, doc $doc:literal) => {
        #[inline]
        #[doc = $doc]
        #[target_feature(enable = "neon")]
        #[cfg_attr(target_arch = "arm", target_feature(enable = "v7"))]
        #[cfg_attr(all(test, target_arch = "arm"), assert_instr($instr32))]
        #[cfg_attr(all(test, target_arch = "aarch64"), assert_instr($instr64))]
        pub unsafe fn $name(a: $type, b: $type) -> $result_type {
            let a: $result_type = simd_cast(a);
            let b: $result_type = simd_cast(b);
            $op(a, b)
        }
    };
    (unary $name:ident : $type:ident -> $result_type:ident == $op:ident, assert $instr32:ident / $instr64:ident, doc $doc:literal) => {
        #[inline]
        #[doc = $doc]
        #[target_feature(enable = "neon")]
        #[cfg_attr(target_arch = "arm", target_feature(enable = "v7"))]
        #[cfg_attr(all(test, target_arch = "arm"), assert_instr($instr32))]
        #[cfg_attr(all(test, target_arch = "aarch64"), assert_instr($instr64))]
        pub unsafe fn $name(a: $type) -> $result_type {
            $op(a)
        }
    };
}

For the definitions, I think that using macros is ok.

I am not sure I follow how macros would generate run-time tests for the intrinsics; that's usually the bulk of the work.

What is the reasoning behind some intrinsics linking in the LLVM intrinsic directly while others are using the generic simd_XXX functions?

For example:

/// Halving add
#[inline]
#[target_feature(enable = "neon")]
#[cfg_attr(target_arch = "arm", target_feature(enable = "v7"))]
#[cfg_attr(all(test, target_arch = "arm"), assert_instr("vhadd.u16"))]
#[cfg_attr(all(test, target_arch = "aarch64"), assert_instr(uhadd))]
pub unsafe fn vhadd_u16(a: uint16x4_t, b: uint16x4_t) -> uint16x4_t {
    #[allow(improper_ctypes)]
    extern "C" {
        #[cfg_attr(target_arch = "arm", link_name = "llvm.arm.neon.vhaddu.v4i16")]
        #[cfg_attr(target_arch = "aarch64", link_name = "llvm.aarch64.neon.uhadd.v4i16")]
        fn vhadd_u16_(a: uint16x4_t, b: uint16x4_t) -> uint16x4_t;
    }
    vhadd_u16_(a, b)
}

Versus:

/// Compare bitwise Equal (vector)
#[inline]
#[target_feature(enable = "neon")]
#[cfg_attr(test, assert_instr(cmeq))]
pub unsafe fn vceq_u64(a: uint64x1_t, b: uint64x1_t) -> uint64x1_t {
    simd_eq(a, b)
}

Given the sheer volume of NEON intrinsics, it seems rather daunting to implement them all by hand using the guide in the first post. I'm wondering if there's a deterministic data driven way to generate all of them using #[link_name = "llvm.*"] as done in the first example. Maybe the llvm c headers could be useful?

What is the reasoning behind some intrinsics linking in the LLVM intrinsic directly while others are using the generic simd_XXX functions?

Not all intrinsics have a corresponding simd_* platform-intrinsic.

I'm wondering if there's a deterministic data driven way to generate all of them using #[link_name = "llvm.*"] as done in the first example. Maybe the llvm c headers could be useful?

Please don't. The simd_* platform intrinsics are much easier to implement in alternative codegen backends than the llvm intrinsics, as they are generic over vector types and they are backend independent.

@aloucks most of the intrinsics (AFAIK) have been added piecemeal over time, so it's sort of expected that they're not 100% consistent. Otherwise though I'd imagine that whatever works best would be fine to add to this repository. Auto-generation sounds pretty reasonable to me, and for an implementation we strive to match what Clang does in its implementation of these intrinsics.

Also, to be clear, this library is not designed for ease of implementation in alternate codegen backends. The purpose of this crate is to get the LLVM backend up and running with SIMD. Discussions and design constraints for alternate backends should be discussed in a separate issue.

Hey all, some friends and I have made a google sheet of all the Neon intrinsics, their inputs, output, and the ARM summary comment.

There could easily have been errors when copying around and manipulating thousands of entries of text, but I think that it's got all the bugs sorted out.

If you want to try some auto-generation, this is a good place to start. There's even a column where I've marked what we have in nightly so far, so if you just auto-gen all the functions that aren't checked you shouldn't hit any duplicate definitions.

I hope to find the time to actually contribute some functions, but for now this will have to do.

EDIT: also I just subscribed to the entire repo, so if there's any PRs that add more functions I'll try to check those boxes on the sheet and keep it up to date.

Working with @Shnatsel, I described the "godbolt process" and they were kind enough to make it a bash script that you can run locally:

#!/bin/bash
set -e
INTRINSIC_NAME="$1"
TEMP_DIR="$(mktemp -d)"
cleanup() {
    rm -r "$TEMP_DIR"
}
trap cleanup EXIT
(
cd "$TEMP_DIR"
echo "#include <arm_neon.h>
int test() {
  return (int) $INTRINSIC_NAME;
}" > ./in.c

clang -emit-llvm -O2 -S -target armv7-unknown-linux-gnueabihf -g0 in.c
ARM_NAME=$(grep --only-matching '@llvm.arm.neon.[A-Za-z0-9.]\+' ./*.ll | tr -d '@' | head -n 1)

clang -emit-llvm -O2 -S -target aarch64-unknown-linux-gnu -g0 in.c
AARCH64_NAME=$(grep --only-matching '@llvm.aarch64.neon.[A-Za-z0-9.]\+' ./*.ll | tr -d '@' | head -n 1)

echo "$INTRINSIC_NAME, $ARM_NAME, $AARCH64_NAME"
)

You will probably need the gcc-multilib package or similar installed so that the correct headers are available.

Note that many functions don't have an associated llvm intrinsic that can be as easily scraped out this way, but maybe 1/4th or so of them do.

@Lokathor Several instructions have been added recently: vaddhn, vbic, vorn, vceqz, vtst, vabd, vaba. Though some of them are not fully supported (like vceqzd). If you don’t have time to maintain this google sheet, I think I can help

@Lokathor Several instructions have been added recently: vaddhn, vbic, vorn, vceqz, vtst, vabd, vaba. Though some of them are not fully supported (like vceqzd). If you don’t have time to maintain this google sheet, I think I can help

Awesome, looking forward to this!

Any updates after a long time...? Thanks

If you look at the pull request list you can see that there has been activity on this quite recently. For example #1224 was opened yesterday.

@bjorn3 Thanks! Indeed I mostly want to know when can we see it in stable version.
By the way, do you suggest using nightly in a production environment? If so I can use it now.

CryZe commented

@SparrowLii You marked the following instructions as completed (same for min):

https://i.imgur.com/OipjDCy.png

It doesn't seem like those instructions are actually part of your recent PR (nor were they on the master branch before that) so I unmarked them again.

CryZe commented

Welp, I'll mark them again then. Somehow the GitHub Pull Request UI doesn't show them as diffs at all: https://i.imgur.com/BsHR5in.gif

GitHub's comparison tool will always have problems when changing a large amount of code XD

As in #1230, except for the instructions listed below and those that use 16-bit floating-point, all other instructions have been implemented:

  1. The following instructions are only available in aarch64 now, because the corresponding target_feature cannot be found in the available features of arm:
    vcadd_rot, vcmla, vdot

  2. The feature i8mm is not valid:
    vmmla, vusmmla: https://rust.godbolt.org/z/8GbKW5ef4

  3. LLVM ERROR (can be reproduced in godbolt):
    vsm4e: https://rust.godbolt.org/z/xhT1xvGTP

  4. LLVM ERROR (normal in godbolt, but LLVM ERROR: Cannot select: intrinsic raises at runtime):
    vsudot, vusdot: https://rust.godbolt.org/z/aMnEvab3n
    vqshlu: https://rust.godbolt.org/z/hvGhrhdMT

  5. Not implemented in LLVM and cannot be implemented manually:
    vmull_p64 (for arm), vsm3, vrax1q_u64, vxarq_u64, vrnd32, vrnd64, vsha512

As in #1230, all instructions have now been implemented except for the following and those using 16-bit floating-point:

1. The following instructions are only available on aarch64 for now, because the corresponding `target_feature` cannot be found among the available features of arm:
   `vcadd_rot`, `vcmla`, `vdot`

On LLVM's ARM backend, vcadd_rot and vcmla are gated behind the v8.3a feature, and vdot behind the dotprod feature. I got this information from llvm-project/llvm/lib/Target/ARM/ARMInstrNEON.td.

2. The feature `i8mm` is not valid:
   `vmmla`, `vusmmla`: [rust.godbolt.org/z/8GbKW5ef4](https://rust.godbolt.org/z/8GbKW5ef4)

Already discussed in rust-lang/rust#90079.

3. LLVM ERROR (can be reproduced in godbolt):
   `vsm4e`: [rust.godbolt.org/z/xhT1xvGTP](https://rust.godbolt.org/z/xhT1xvGTP)

Use llvm.aarch64.crypto.sm4ekey instead of llvm.aarch64.sve.sm4ekey.

4. LLVM ERROR (builds fine in godbolt, but `LLVM ERROR: Cannot select: intrinsic` is raised at runtime):
   `vsudot`, `vusdot`: [rust.godbolt.org/z/aMnEvab3n](https://rust.godbolt.org/z/aMnEvab3n)
   `vqshlu`: [rust.godbolt.org/z/hvGhrhdMT](https://rust.godbolt.org/z/hvGhrhdMT)

You need to make your test function `pub` in godbolt, otherwise rustc optimizes it away as unreachable before it reaches LLVM.

vsudot/vusdot require the i8mm target feature. vqshlu seems to work fine in godbolt after making the function `pub`.
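To make the `pub` point concrete, here is a minimal sketch of a godbolt probe (the function name `probe_vqshlu` is made up for the example; `vqshlu_n_s8` is the `core::arch::aarch64` spelling of one member of the vqshlu family). The wrapper is `pub` and `#[no_mangle]` so rustc keeps it alive through dead-code elimination and LLVM actually has to select the underlying intrinsic; it is compile-gated so the file still builds on non-AArch64 hosts:

```rust
// AArch64-only probe; on other targets this whole module compiles away.
#[cfg(target_arch = "aarch64")]
pub mod probe {
    use core::arch::aarch64::*;

    // `pub` + `#[no_mangle]` keep the symbol through rustc's dead-code
    // elimination, so LLVM must codegen (and possibly fail to select)
    // the intrinsic instead of silently discarding it.
    #[no_mangle]
    pub unsafe fn probe_vqshlu(a: int8x8_t) -> uint8x8_t {
        vqshlu_n_s8::<3>(a)
    }
}

fn main() {
    println!("probe compiled");
}
```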

5. Not implemented in LLVM and cannot be implemented manually:
   `vmull_p64` (for arm), `vsm3`, `vrax1q_u64`, `vxarq_u64`, `vrnd32`, `vrnd64`, `vsha512`

These all seem to exist in LLVM, at least for AArch64. For ARM we can just leave them out for now.

Hope someone can help implement the remaining instructions.

@Amanieu The v8.5a feature is not runtime-detectable, so we can't use #[simd_test(enable = "neon,v8.5a")]. How do we add tests for instructions that need v8.5a, like vrnd32x and vrnd64x?

@SparrowLii Shouldn't that work with the frintts feature?

Looks useful: https://rust.godbolt.org/z/894W8cndG

LLVM only supports frintts on AArch64, so it's fine to not support this intrinsic on ARM.
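As an aside for whoever writes those tests: the expected lane semantics can be modeled in plain scalar Rust. The sketch below reflects my reading of the FRINT32X description (round to nearest with ties to even under the default rounding mode; NaN and out-of-range results become -2^31), so double-check it against the Arm ARM before relying on it as a test oracle:

```rust
/// Scalar model (sketch) of one FRINT32X lane under the default
/// round-to-nearest rounding mode.
fn frint32x_model(x: f32) -> f32 {
    const BOUND: f32 = 2_147_483_648.0; // 2^31, exactly representable in f32
    let r = x.round_ties_even();
    // Valid results lie in the signed 32-bit range [-2^31, 2^31 - 1];
    // NaN and out-of-range inputs map to -2^31 (real hardware also sets
    // the FPSR invalid-operation flag, which this model ignores).
    if r >= -BOUND && r < BOUND { r } else { -BOUND }
}

fn main() {
    assert_eq!(frint32x_model(1.5), 2.0); // tie rounds to even
    assert_eq!(frint32x_model(2.5), 2.0); // tie rounds to even, not 3.0
    assert_eq!(frint32x_model(-2.5), -2.0);
    assert_eq!(frint32x_model(3.0e9), -2_147_483_648.0); // out of i32 range
    println!("model ok");
}
```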