<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:svg="http://www.w3.org/2000/svg" xmlns:x86="http://www.felixcloutier.com/x86"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"><link rel="stylesheet" type="text/css" href="style.css"></link><title>PXOR
— Logical Exclusive OR</title></head><body><header><nav><ul><li><a href='index.html'>Index</a></li><li>December 2023</li></ul></nav></header><h1>PXOR
— Logical Exclusive OR</h1>
<table>
<tr>
<th>Opcode/Instruction</th>
<th>Op/En</th>
<th>64/32 bit Mode Support</th>
<th>CPUID Feature Flag</th>
<th>Description</th></tr>
<tr>
<td>NP 0F EF /r<sup>1</sup> PXOR mm, mm/m64</td>
<td>A</td>
<td>V/V</td>
<td>MMX</td>
<td>Bitwise XOR of mm/m64 and mm.</td></tr>
<tr>
<td>66 0F EF /r PXOR xmm1, xmm2/m128</td>
<td>A</td>
<td>V/V</td>
<td>SSE2</td>
<td>Bitwise XOR of xmm2/m128 and xmm1.</td></tr>
<tr>
<td>VEX.128.66.0F.WIG EF /r VPXOR xmm1, xmm2, xmm3/m128</td>
<td>B</td>
<td>V/V</td>
<td>AVX</td>
<td>Bitwise XOR of xmm3/m128 and xmm2.</td></tr>
<tr>
<td>VEX.256.66.0F.WIG EF /r VPXOR ymm1, ymm2, ymm3/m256</td>
<td>B</td>
<td>V/V</td>
<td>AVX2</td>
<td>Bitwise XOR of ymm3/m256 and ymm2.</td></tr>
<tr>
<td>EVEX.128.66.0F.W0 EF /r VPXORD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst</td>
<td>C</td>
<td>V/V</td>
<td>AVX512VL AVX512F</td>
<td>Bitwise XOR of packed doubleword integers in xmm2 and xmm3/m128 using writemask k1.</td></tr>
<tr>
<td>EVEX.256.66.0F.W0 EF /r VPXORD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst</td>
<td>C</td>
<td>V/V</td>
<td>AVX512VL AVX512F</td>
<td>Bitwise XOR of packed doubleword integers in ymm2 and ymm3/m256 using writemask k1.</td></tr>
<tr>
<td>EVEX.512.66.0F.W0 EF /r VPXORD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst</td>
<td>C</td>
<td>V/V</td>
<td>AVX512F</td>
<td>Bitwise XOR of packed doubleword integers in zmm2 and zmm3/m512/m32bcst using writemask k1.</td></tr>
<tr>
<td>EVEX.128.66.0F.W1 EF /r VPXORQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst</td>
<td>C</td>
<td>V/V</td>
<td>AVX512VL AVX512F</td>
<td>Bitwise XOR of packed quadword integers in xmm2 and xmm3/m128 using writemask k1.</td></tr>
<tr>
<td>EVEX.256.66.0F.W1 EF /r VPXORQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst</td>
<td>C</td>
<td>V/V</td>
<td>AVX512VL AVX512F</td>
<td>Bitwise XOR of packed quadword integers in ymm2 and ymm3/m256 using writemask k1.</td></tr>
<tr>
<td>EVEX.512.66.0F.W1 EF /r VPXORQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst</td>
<td>C</td>
<td>V/V</td>
<td>AVX512F</td>
<td>Bitwise XOR of packed quadword integers in zmm2 and zmm3/m512/m64bcst using writemask k1.</td></tr></table>
<blockquote>
<p>1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel<sup>®</sup> 64 and IA-32 Architectures Software Developer's Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel<sup>®</sup> 64 and IA-32 Architectures Software Developer's Manual, Volume 3B.</p></blockquote>
<h2 id="instruction-operand-encoding">Instruction Operand Encoding<a class="anchor" href="#instruction-operand-encoding">
</a></h2>
<table>
<tr>
<th>Op/En</th>
<th>Tuple Type</th>
<th>Operand 1</th>
<th>Operand 2</th>
<th>Operand 3</th>
<th>Operand 4</th></tr>
<tr>
<td>A</td>
<td>N/A</td>
<td>ModRM:reg (r, w)</td>
<td>ModRM:r/m (r)</td>
<td>N/A</td>
<td>N/A</td></tr>
<tr>
<td>B</td>
<td>N/A</td>
<td>ModRM:reg (w)</td>
<td>VEX.vvvv (r)</td>
<td>ModRM:r/m (r)</td>
<td>N/A</td></tr>
<tr>
<td>C</td>
<td>Full</td>
<td>ModRM:reg (w)</td>
<td>EVEX.vvvv (r)</td>
<td>ModRM:r/m (r)</td>
<td>N/A</td></tr></table>
<h2 id="description">Description<a class="anchor" href="#description">
</a></h2>
<p>Performs a bitwise logical exclusive-OR (XOR) operation on the source operand (second operand) and the destination operand (first operand) and stores the result in the destination operand. Each bit of the result is 1 if the corresponding bits of the two operands are different; each bit is 0 if the corresponding bits of the operands are the same.</p>
<p>In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).</p>
<p>Legacy SSE instructions 64-bit operand: The source operand can be an MMX technology register or a 64-bit memory location. The destination operand is an MMX technology register.</p>
<p>128-bit Legacy SSE version: The second source operand is an XMM register or a 128-bit memory location. The first source operand and destination operands are XMM registers. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.</p>
<p>VEX.128 encoded version: The second source operand is an XMM register or a 128-bit memory location. The first source operand and destination operands are XMM registers. Bits (MAXVL-1:128) of the destination YMM register are zeroed.</p>
<p>VEX.256 encoded version: The first source operand is a YMM register. The second source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the corresponding register destination are zeroed.</p>
<p>EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32/64-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with write-mask k1.</p>
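<p>The following C sketch is not part of the SDM text; it merely illustrates the legacy/VEX behavior through the SSE2 intrinsic listed further below, including the common register-zeroing idiom (XORing a register with itself). The function and variable names are illustrative only.</p>
<pre>#include &lt;emmintrin.h&gt;   /* SSE2: _mm_xor_si128 (PXOR / VPXOR on xmm registers) */

/* Bitwise XOR of two 128-bit vectors, plus the zero idiom: a XOR a == 0. */
__m128i xor_example(__m128i a, __m128i b)
{
    __m128i zero = _mm_xor_si128(a, a);               /* all bits cleared */
    return _mm_xor_si128(_mm_xor_si128(a, b), zero);  /* XOR with 0 leaves a XOR b unchanged */
}
</pre>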
<h2 id="operation">Operation<a class="anchor" href="#operation">
</a></h2>
<h3 id="pxor--64-bit-operand-">PXOR (64-bit Operand)<a class="anchor" href="#pxor--64-bit-operand-">
</a></h3>
<pre>DEST := DEST XOR SRC
</pre>
<h3 id="pxor--128-bit-legacy-sse-version-">PXOR (128-bit Legacy SSE Version)<a class="anchor" href="#pxor--128-bit-legacy-sse-version-">
</a></h3>
<pre>DEST := DEST XOR SRC
DEST[MAXVL-1:128] (Unmodified)
</pre>
<h3 id="vpxor--vex-128-encoded-version-">VPXOR (VEX.128 Encoded Version)<a class="anchor" href="#vpxor--vex-128-encoded-version-">
</a></h3>
<pre>DEST := SRC1 XOR SRC2
DEST[MAXVL-1:128] := 0
</pre>
<h3 id="vpxor--vex-256-encoded-version-">VPXOR (VEX.256 Encoded Version)<a class="anchor" href="#vpxor--vex-256-encoded-version-">
</a></h3>
<pre>DEST := SRC1 XOR SRC2
DEST[MAXVL-1:256] := 0
</pre>
<h3 id="vpxord--evex-encoded-versions-">VPXORD (EVEX Encoded Versions)<a class="anchor" href="#vpxord--evex-encoded-versions-">
</a></h3>
<pre>(KL, VL) = (4, 128), (8, 256), (16, 512)
FOR j := 0 TO KL-1
    i := j * 32
    IF k1[j] OR *no writemask* THEN
        IF (EVEX.b = 1) AND (SRC2 *is memory*)
            THEN DEST[i+31:i] := SRC1[i+31:i] BITWISE XOR SRC2[31:0]
            ELSE DEST[i+31:i] := SRC1[i+31:i] BITWISE XOR SRC2[i+31:i]
        FI;
    ELSE
        IF *merging-masking* ; merging-masking
            THEN *DEST[i+31:i] remains unchanged*
            ELSE DEST[i+31:i] := 0 ; zeroing-masking
        FI;
    FI;
ENDFOR;
DEST[MAXVL-1:VL] := 0
</pre>
<h3 id="vpxorq--evex-encoded-versions-">VPXORQ (EVEX Encoded Versions)<a class="anchor" href="#vpxorq--evex-encoded-versions-">
</a></h3>
<pre>(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 64
    IF k1[j] OR *no writemask* THEN
        IF (EVEX.b = 1) AND (SRC2 *is memory*)
            THEN DEST[i+63:i] := SRC1[i+63:i] BITWISE XOR SRC2[63:0]
            ELSE DEST[i+63:i] := SRC1[i+63:i] BITWISE XOR SRC2[i+63:i]
        FI;
    ELSE
        IF *merging-masking* ; merging-masking
            THEN *DEST[i+63:i] remains unchanged*
            ELSE DEST[i+63:i] := 0 ; zeroing-masking
        FI;
    FI;
ENDFOR;
DEST[MAXVL-1:VL] := 0
</pre>
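<p>As a hedged illustration of the masking and embedded-broadcast behavior described by the EVEX pseudocode above (not taken from the SDM), the sketch below performs a zeroing-masked doubleword XOR against a value replicated from one 32-bit scalar; whether the compiler actually emits the m32bcst form of VPXORD is up to the code generator.</p>
<pre>#include &lt;immintrin.h&gt;   /* AVX-512F: _mm512_maskz_xor_epi32, _mm512_set1_epi32 */

/* Zeroing-masked XOR: elements whose k bit is 0 are set to 0, matching the
   zeroing-masking branch of the VPXORD pseudocode. */
__m512i maskz_xor_bcst(__m512i src1, int scalar, __mmask16 k)
{
    __m512i bcst = _mm512_set1_epi32(scalar);      /* 16 copies of scalar */
    return _mm512_maskz_xor_epi32(k, src1, bcst);
}
</pre>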
<h2 id="intel-c-c++-compiler-intrinsic-equivalent">Intel C/C++ Compiler Intrinsic Equivalent<a class="anchor" href="#intel-c-c++-compiler-intrinsic-equivalent">
</a></h2>
<pre>VPXORD __m512i _mm512_xor_epi32(__m512i a, __m512i b)
</pre>
<pre>VPXORD __m512i _mm512_mask_xor_epi32(__m512i s, __mmask16 m, __m512i a, __m512i b)
</pre>
<pre>VPXORD __m512i _mm512_maskz_xor_epi32( __mmask16 m, __m512i a, __m512i b)
</pre>
<pre>VPXORD __m256i _mm256_xor_epi32(__m256i a, __m256i b)
</pre>
<pre>VPXORD __m256i _mm256_mask_xor_epi32(__m256i s, __mmask8 m, __m256i a, __m256i b)
</pre>
<pre>VPXORD __m256i _mm256_maskz_xor_epi32( __mmask8 m, __m256i a, __m256i b)
</pre>
<pre>VPXORD __m128i _mm_xor_epi32(__m128i a, __m128i b)
</pre>
<pre>VPXORD __m128i _mm_mask_xor_epi32(__m128i s, __mmask8 m, __m128i a, __m128i b)
</pre>
<pre>VPXORD __m128i _mm_maskz_xor_epi32(__mmask8 m, __m128i a, __m128i b)
</pre>
<pre>VPXORQ __m512i _mm512_xor_epi64( __m512i a, __m512i b);
</pre>
<pre>VPXORQ __m512i _mm512_mask_xor_epi64(__m512i s, __mmask8 m, __m512i a, __m512i b);
</pre>
<pre>VPXORQ __m512i _mm512_maskz_xor_epi64(__mmask8 m, __m512i a, __m512i b);
</pre>
<pre>VPXORQ __m256i _mm256_xor_epi64( __m256i a, __m256i b);
</pre>
<pre>VPXORQ __m256i _mm256_mask_xor_epi64(__m256i s, __mmask8 m, __m256i a, __m256i b);
</pre>
<pre>VPXORQ __m256i _mm256_maskz_xor_epi64(__mmask8 m, __m256i a, __m256i b);
</pre>
<pre>VPXORQ __m128i _mm_xor_epi64( __m128i a, __m128i b);
</pre>
<pre>VPXORQ __m128i _mm_mask_xor_epi64(__m128i s, __mmask8 m, __m128i a, __m128i b);
</pre>
<pre>VPXORQ __m128i _mm_maskz_xor_epi64(__mmask8 m, __m128i a, __m128i b);
</pre>
<pre>PXOR: __m64 _mm_xor_si64 (__m64 m1, __m64 m2)
</pre>
<pre>(V)PXOR: __m128i _mm_xor_si128 (__m128i a, __m128i b)
</pre>
<pre>VPXOR: __m256i _mm256_xor_si256 (__m256i a, __m256i b)
</pre>
<h2 id="flags-affected">Flags Affected<a class="anchor" href="#flags-affected">
</a></h2>
<p>None.</p>
<h2 class="exceptions" id="numeric-exceptions">Numeric Exceptions<a class="anchor" href="#numeric-exceptions">
</a></h2>
<p>None.</p>
<h2 class="exceptions" id="other-exceptions">Other Exceptions<a class="anchor" href="#other-exceptions">
</a></h2>
<p>Non-EVEX-encoded instruction, see <span class="not-imported">Table 2-21</span>, “Type 4 Class Exception Conditions.”</p>
<p>EVEX-encoded instruction, see <span class="not-imported">Table 2-49</span>, “Type E4 Class Exception Conditions.”</p><footer><p>
This UNOFFICIAL, mechanically-separated, non-verified reference is provided for convenience, but it may be
incomplete or broken in various obvious or non-obvious
ways. Refer to <a href="https://software.intel.com/en-us/download/intel-64-and-ia-32-architectures-sdm-combined-volumes-1-2a-2b-2c-2d-3a-3b-3c-3d-and-4">Intel® 64 and IA-32 Architectures Software Developers Manual</a> for anything serious.
</p></footer></body></html>