<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:svg="http://www.w3.org/2000/svg" xmlns:x86="http://www.felixcloutier.com/x86"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"><link rel="stylesheet" type="text/css" href="style.css"></link><title>GF2P8AFFINEQB
— Galois Field Affine Transformation</title></head><body><header><nav><ul><li><a href='index.html'>Index</a></li><li>December 2023</li></ul></nav></header><h1>GF2P8AFFINEQB
— Galois Field Affine Transformation</h1>
<table>
<tr>
<th>Opcode/Instruction</th>
<th>Op/En</th>
<th>64/32 bit Mode Support</th>
<th>CPUID Feature Flag</th>
<th>Description</th></tr>
<tr>
<td>66 0F3A CE /r /ib GF2P8AFFINEQB xmm1, xmm2/m128, imm8</td>
<td>A</td>
<td>V/V</td>
<td>GFNI</td>
<td>Computes affine transformation in the finite field GF(2^8).</td></tr>
<tr>
<td>VEX.128.66.0F3A.W1 CE /r /ib VGF2P8AFFINEQB xmm1, xmm2, xmm3/m128, imm8</td>
<td>B</td>
<td>V/V</td>
<td>AVX GFNI</td>
<td>Computes affine transformation in the finite field GF(2^8).</td></tr>
<tr>
<td>VEX.256.66.0F3A.W1 CE /r /ib VGF2P8AFFINEQB ymm1, ymm2, ymm3/m256, imm8</td>
<td>B</td>
<td>V/V</td>
<td>AVX GFNI</td>
<td>Computes affine transformation in the finite field GF(2^8).</td></tr>
<tr>
<td>EVEX.128.66.0F3A.W1 CE /r /ib VGF2P8AFFINEQB xmm1{k1}{z}, xmm2, xmm3/m128/m64bcst, imm8</td>
<td>C</td>
<td>V/V</td>
<td>AVX512VL GFNI</td>
<td>Computes affine transformation in the finite field GF(2^8).</td></tr>
<tr>
<td>EVEX.256.66.0F3A.W1 CE /r /ib VGF2P8AFFINEQB ymm1{k1}{z}, ymm2, ymm3/m256/m64bcst, imm8</td>
<td>C</td>
<td>V/V</td>
<td>AVX512VL GFNI</td>
<td>Computes affine transformation in the finite field GF(2^8).</td></tr>
<tr>
<td>EVEX.512.66.0F3A.W1 CE /r /ib VGF2P8AFFINEQB zmm1{k1}{z}, zmm2, zmm3/m512/m64bcst, imm8</td>
<td>C</td>
<td>V/V</td>
<td>AVX512F GFNI</td>
<td>Computes affine transformation in the finite field GF(2^8).</td></tr></table>
<h2 id="instruction-operand-encoding">Instruction Operand Encoding<a class="anchor" href="#instruction-operand-encoding">
</a></h2>
<table>
<tr>
<th>Op/En</th>
<th>Tuple</th>
<th>Operand 1</th>
<th>Operand 2</th>
<th>Operand 3</th>
<th>Operand 4</th></tr>
<tr>
<td>A</td>
<td>N/A</td>
<td>ModRM:reg (r, w)</td>
<td>ModRM:r/m (r)</td>
<td>imm8 (r)</td>
<td>N/A</td></tr>
<tr>
<td>B</td>
<td>N/A</td>
<td>ModRM:reg (w)</td>
<td>VEX.vvvv (r)</td>
<td>ModRM:r/m (r)</td>
<td>imm8 (r)</td></tr>
<tr>
<td>C</td>
<td>Full</td>
<td>ModRM:reg (w)</td>
<td>EVEX.vvvv (r)</td>
<td>ModRM:r/m (r)</td>
<td>imm8 (r)</td></tr></table>
<h3 id="description">Description<a class="anchor" href="#description">
</a></h3>
<p>The GF2P8AFFINEQB instruction computes an affine transformation in the Galois Field GF(2<sup>8</sup>). For this instruction, an affine transformation is defined by A * x + b, where “A” is an 8-by-8 bit matrix and “x” and “b” are 8-bit vectors. One SIMD register (operand 1) holds “x” as 16, 32, or 64 8-bit vectors. A second SIMD register or memory operand (operand 2) contains 2, 4, or 8 “A” values, each of which is applied to the correspondingly aligned 8 “x” values in the first register. The “b” vector is constant for all calculations and is contained in the immediate byte.</p>
<p>The EVEX-encoded form of this instruction does not support memory fault suppression. The SSE-encoded forms of the instruction require 16-byte alignment on their memory operations.</p>
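<p>As an illustration of the matrix layout (a minimal sketch, not part of the manual text): because row <em>i</em> of “A” is byte 7−<em>i</em> of the 64-bit matrix operand (see the Operation section below), the constant 0x8040201008040201 encodes the bit-reversal permutation, so the instruction with that matrix and imm8 = 0 reverses the bits of every byte. The intrinsic is taken from the list later on this page; the build flags are assumptions for GCC/Clang on x86-64.</p>
<pre>// Sketch: reverse the bits of each byte via an affine transform.
// Assumed build: gcc -O2 -mgfni reverse.c
#include &lt;immintrin.h&gt;
#include &lt;stdint.h&gt;
#include &lt;stdio.h&gt;

int main(void) {
    __m128i x = _mm_set1_epi8(0x01);                  // 0000_0001 in each byte
    __m128i A = _mm_set1_epi64x((long long)0x8040201008040201ULL); // bit-reversal matrix
    __m128i y = _mm_gf2p8affine_epi64_epi8(x, A, 0);  // y = A*x + 0 over GF(2)
    uint8_t out[16];
    _mm_storeu_si128((__m128i *)out, y);
    printf("0x%02x\n", out[0]);                       // prints 0x80
    return 0;
}
</pre>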
<h3 id="operation">Operation<a class="anchor" href="#operation">
</a></h3>
<pre>define parity(x):
    // parity(x) = 1 if x has an odd number of 1s in it, and 0 otherwise.
    t := 0    // single bit
    FOR i := 0 to 7:
        t := t XOR x.bit[i]
    return t

define affine_byte(tsrc2qw, src1byte, imm8):
    FOR i := 0 to 7:
        retbyte.bit[i] := parity(tsrc2qw.byte[7-i] AND src1byte) XOR imm8.bit[i]
    return retbyte
</pre>
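<p>For reference, a scalar C model of affine_byte (a sketch that mirrors the pseudocode above, not the hardware implementation; __builtin_parity is a GCC/Clang builtin and an assumption here):</p>
<pre>#include &lt;stdint.h&gt;

// One output byte: ret.bit[i] = parity(row_i AND x) XOR b.bit[i],
// where row i of the matrix is byte 7-i of the 64-bit operand.
static uint8_t affine_byte(uint64_t A_qword, uint8_t x, uint8_t b) {
    uint8_t ret = 0;
    for (int i = 0; i &lt; 8; i++) {
        uint8_t row = (uint8_t)(A_qword &gt;&gt; (8 * (7 - i)));
        uint8_t bit = (uint8_t)(__builtin_parity(row &amp; x) ^ ((b &gt;&gt; i) &amp; 1));
        ret |= (uint8_t)(bit &lt;&lt; i);
    }
    return ret;
}
</pre>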
<h4 id="vgf2p8affineqb-dest--src1--src2--imm8--evex-encoded-version-">VGF2P8AFFINEQB dest, src1, src2, imm8 (EVEX Encoded Version)<a class="anchor" href="#vgf2p8affineqb-dest--src1--src2--imm8--evex-encoded-version-">
</a></h4>
<pre>(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1:
    IF SRC2 is memory and EVEX.b == 1:
        tsrc2 := SRC2.qword[0]
    ELSE:
        tsrc2 := SRC2.qword[j]
    FOR b := 0 to 7:
        IF k1[j*8+b] OR *no writemask*:
            DEST.qword[j].byte[b] := affine_byte(tsrc2, SRC1.qword[j].byte[b], imm8)
        ELSE IF *zeroing*:
            DEST.qword[j].byte[b] := 0
        ELSE:
            *DEST.qword[j].byte[b] remains unchanged*
DEST[MAX_VL-1:VL] := 0
</pre>
<h4 id="vgf2p8affineqb-dest--src1--src2--imm8--128b-and-256b-vex-encoded-versions-">VGF2P8AFFINEQB dest, src1, src2, imm8 (128b and 256b VEX Encoded Versions)<a class="anchor" href="#vgf2p8affineqb-dest--src1--src2--imm8--128b-and-256b-vex-encoded-versions-">
</a></h4>
<pre>(KL, VL) = (2, 128), (4, 256)
FOR j := 0 TO KL-1:
    FOR b := 0 to 7:
        DEST.qword[j].byte[b] := affine_byte(SRC2.qword[j], SRC1.qword[j].byte[b], imm8)
DEST[MAX_VL-1:VL] := 0
</pre>
<h4 id="gf2p8affineqb-srcdest--src1--imm8--128b-sse-encoded-version-">GF2P8AFFINEQB srcdest, src1, imm8 (128b SSE Encoded Version)<a class="anchor" href="#gf2p8affineqb-srcdest--src1--imm8--128b-sse-encoded-version-">
</a></h4>
<pre>FOR j := 0 TO 1:
    FOR b := 0 to 7:
        SRCDEST.qword[j].byte[b] := affine_byte(SRC1.qword[j], SRCDEST.qword[j].byte[b], imm8)
</pre>
<h3 id="intel-c-c++-compiler-intrinsic-equivalent">Intel C/C++ Compiler Intrinsic Equivalent<a class="anchor" href="#intel-c-c++-compiler-intrinsic-equivalent">
</a></h3>
<pre>(V)GF2P8AFFINEQB __m128i _mm_gf2p8affine_epi64_epi8(__m128i, __m128i, int);
(V)GF2P8AFFINEQB __m128i _mm_mask_gf2p8affine_epi64_epi8(__m128i, __mmask16, __m128i, __m128i, int);
(V)GF2P8AFFINEQB __m128i _mm_maskz_gf2p8affine_epi64_epi8(__mmask16, __m128i, __m128i, int);
VGF2P8AFFINEQB __m256i _mm256_gf2p8affine_epi64_epi8(__m256i, __m256i, int);
VGF2P8AFFINEQB __m256i _mm256_mask_gf2p8affine_epi64_epi8(__m256i, __mmask32, __m256i, __m256i, int);
VGF2P8AFFINEQB __m256i _mm256_maskz_gf2p8affine_epi64_epi8(__mmask32, __m256i, __m256i, int);
VGF2P8AFFINEQB __m512i _mm512_gf2p8affine_epi64_epi8(__m512i, __m512i, int);
VGF2P8AFFINEQB __m512i _mm512_mask_gf2p8affine_epi64_epi8(__m512i, __mmask64, __m512i, __m512i, int);
VGF2P8AFFINEQB __m512i _mm512_maskz_gf2p8affine_epi64_epi8(__mmask64, __m512i, __m512i, int);
</pre>
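<p>A hedged sketch of the zero-masking form, using the _mm_maskz_gf2p8affine_epi64_epi8 prototype listed above (assumes AVX512VL+GFNI hardware and, e.g., -mgfni -mavx512vl; the mask value and function name are illustrative): bytes whose mask bit is 0 are zeroed rather than transformed, matching the *zeroing* branch of the EVEX pseudocode.</p>
<pre>#include &lt;immintrin.h&gt;

// Reverse the bits of the low 8 bytes only; the upper 8 bytes are zeroed.
__m128i reverse_low8(__m128i x) {
    const __m128i A = _mm_set1_epi64x((long long)0x8040201008040201ULL);
    return _mm_maskz_gf2p8affine_epi64_epi8((__mmask16)0x00FF, x, A, 0);
}
</pre>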
<h3 class="exceptions" id="simd-floating-point-exceptions">SIMD Floating-Point Exceptions<a class="anchor" href="#simd-floating-point-exceptions">
</a></h3>
<p>None.</p>
<h3 class="exceptions" id="other-exceptions">Other Exceptions<a class="anchor" href="#other-exceptions">
</a></h3>
<p>Legacy-encoded and VEX-encoded: See <span class="not-imported">Table 2-21</span>, “Type 4 Class Exception Conditions.”</p>
<p>EVEX-encoded: See <span class="not-imported">Table 2-50</span>, “Type E4NF Class Exception Conditions.”</p><footer><p>
This UNOFFICIAL, mechanically-separated, non-verified reference is provided for convenience, but it may be
incomplete or broken in various obvious or non-obvious
ways. Refer to the <a href="https://software.intel.com/en-us/download/intel-64-and-ia-32-architectures-sdm-combined-volumes-1-2a-2b-2c-2d-3a-3b-3c-3d-and-4">Intel® 64 and IA-32 Architectures Software Developer’s Manual</a> for anything serious.
</p></footer></body></html>