<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:svg="http://www.w3.org/2000/svg" xmlns:x86="http://www.felixcloutier.com/x86"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"><link rel="stylesheet" type="text/css" href="style.css"></link><title>UNPCKLPD
— Unpack and Interleave Low Packed Double Precision Floating-Point Values</title></head><body><header><nav><ul><li><a href='index.html'>Index</a></li><li>December 2023</li></ul></nav></header><h1>UNPCKLPD
— Unpack and Interleave Low Packed Double Precision Floating-Point Values</h1>
<table>
<tr>
<th>Opcode/Instruction</th>
<th>Op / En</th>
<th>64/32 bit Mode Support</th>
<th>CPUID Feature Flag</th>
<th>Description</th></tr>
<tr>
<td>66 0F 14 /r UNPCKLPD xmm1, xmm2/m128</td>
<td>A</td>
<td>V/V</td>
<td>SSE2</td>
<td>Unpacks and interleaves double precision floating-point values from low quadwords of xmm1 and xmm2/m128.</td></tr>
<tr>
<td>VEX.128.66.0F.WIG 14 /r VUNPCKLPD xmm1,xmm2, xmm3/m128</td>
<td>B</td>
<td>V/V</td>
<td>AVX</td>
<td>Unpacks and interleaves double precision floating-point values from low quadwords of xmm2 and xmm3/m128.</td></tr>
<tr>
<td>VEX.256.66.0F.WIG 14 /r VUNPCKLPD ymm1,ymm2, ymm3/m256</td>
<td>B</td>
<td>V/V</td>
<td>AVX</td>
<td>Unpacks and interleaves double precision floating-point values from low quadwords of ymm2 and ymm3/m256.</td></tr>
<tr>
<td>EVEX.128.66.0F.W1 14 /r VUNPCKLPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst</td>
<td>C</td>
<td>V/V</td>
<td>AVX512VL AVX512F</td>
<td>Unpacks and interleaves double precision floating-point values from low quadwords of xmm2 and xmm3/m128/m64bcst subject to write mask k1.</td></tr>
<tr>
<td>EVEX.256.66.0F.W1 14 /r VUNPCKLPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst</td>
<td>C</td>
<td>V/V</td>
<td>AVX512VL AVX512F</td>
<td>Unpacks and interleaves double precision floating-point values from low quadwords of ymm2 and ymm3/m256/m64bcst subject to write mask k1.</td></tr>
<tr>
<td>EVEX.512.66.0F.W1 14 /r VUNPCKLPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst</td>
<td>C</td>
<td>V/V</td>
<td>AVX512F</td>
<td>Unpacks and interleaves double precision floating-point values from low quadwords of zmm2 and zmm3/m512/m64bcst subject to write mask k1.</td></tr></table>
<h2 id="instruction-operand-encoding">Instruction Operand Encoding<a class="anchor" href="#instruction-operand-encoding">
</a></h2>
<table>
<tr>
<th>Op/En</th>
<th>Tuple Type</th>
<th>Operand 1</th>
<th>Operand 2</th>
<th>Operand 3</th>
<th>Operand 4</th></tr>
<tr>
<td>A</td>
<td>N/A</td>
<td>ModRM:reg (r, w)</td>
<td>ModRM:r/m (r)</td>
<td>N/A</td>
<td>N/A</td></tr>
<tr>
<td>B</td>
<td>N/A</td>
<td>ModRM:reg (w)</td>
<td>VEX.vvvv (r)</td>
<td>ModRM:r/m (r)</td>
<td>N/A</td></tr>
<tr>
<td>C</td>
<td>Full</td>
<td>ModRM:reg (w)</td>
<td>EVEX.vvvv (r)</td>
<td>ModRM:r/m (r)</td>
<td>N/A</td></tr></table>
<h2 id="description">Description<a class="anchor" href="#description">
</a></h2>
<p>Performs an interleaved unpack of the low double precision floating-point values from the first source operand and the second source operand.</p>
<p>128-bit Legacy SSE version: The second source can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified. When unpacking from a memory operand, an implementation may fetch only the appropriate 64 bits; however, alignment to a 16-byte boundary and normal segment checking will still be enforced.</p>
<p>VEX.128 encoded version: The first source operand is an XMM register. The second source operand can be an XMM register or a 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.</p>
<p>VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register.</p>
<p>EVEX.512 encoded version: The first source operand is a ZMM register. The second source operand is a ZMM register, a 512-bit memory location, or a 512-bit vector broadcast from a 64-bit memory location. The destination operand is a ZMM register, conditionally updated using writemask k1.</p>
<p>EVEX.256 encoded version: The first source operand is a YMM register. The second source operand is a YMM register, a 256-bit memory location, or a 256-bit vector broadcast from a 64-bit memory location. The destination operand is a YMM register, conditionally updated using writemask k1.</p>
<p>EVEX.128 encoded version: The first source operand is an XMM register. The second source operand is an XMM register, a 128-bit memory location, or a 128-bit vector broadcast from a 64-bit memory location. The destination operand is an XMM register, conditionally updated using writemask k1.</p>
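<p>As a concrete illustration of the interleave pattern described above, the following C sketch uses the SSE2 intrinsic _mm_unpacklo_pd, which compilers typically lower to UNPCKLPD. The function name show_unpacklo and the example values are illustrative only and are not part of this reference.</p>
<pre>#include &lt;immintrin.h&gt;
#include &lt;stdio.h&gt;

/* Illustrative only: interleave the low doubles of two XMM values.
   With a = {1.0, 2.0} and b = {3.0, 4.0} (element 0 listed first),
   the result is {1.0, 3.0}: DEST[63:0] = a[63:0], DEST[127:64] = b[63:0]. */
static void show_unpacklo(void)
{
    __m128d a = _mm_set_pd(2.0, 1.0);   /* a[0] = 1.0, a[1] = 2.0 */
    __m128d b = _mm_set_pd(4.0, 3.0);   /* b[0] = 3.0, b[1] = 4.0 */
    __m128d r = _mm_unpacklo_pd(a, b);  /* r = {1.0, 3.0} */
    double out[2];
    _mm_storeu_pd(out, r);
    printf("%f %f\n", out[0], out[1]);  /* prints 1.000000 3.000000 */
}
</pre>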
<h2 id="operation">Operation<a class="anchor" href="#operation">
</a></h2>
<h3 id="vunpcklpd--evex-encoded-versions-when-src2-is-a-register-">VUNPCKLPD (EVEX Encoded Versions When SRC2 is a Register)<a class="anchor" href="#vunpcklpd--evex-encoded-versions-when-src2-is-a-register-">
</a></h3>
<pre>(KL, VL) = (2, 128), (4, 256), (8, 512)
IF VL &gt;= 128
    TMP_DEST[63:0] := SRC1[63:0]
    TMP_DEST[127:64] := SRC2[63:0]
FI;
IF VL &gt;= 256
    TMP_DEST[191:128] := SRC1[191:128]
    TMP_DEST[255:192] := SRC2[191:128]
FI;
IF VL &gt;= 512
    TMP_DEST[319:256] := SRC1[319:256]
    TMP_DEST[383:320] := SRC2[319:256]
    TMP_DEST[447:384] := SRC1[447:384]
    TMP_DEST[511:448] := SRC2[447:384]
FI;
FOR j := 0 TO KL-1
    i := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
        ELSE
            IF *merging-masking* ; merging-masking
                THEN *DEST[i+63:i] remains unchanged*
                ELSE *zeroing-masking* ; zeroing-masking
                    DEST[i+63:i] := 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0
</pre>
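<p>The writemask loop above corresponds to the masked intrinsic forms listed later on this page. The sketch below contrasts merging and zeroing masking for the 512-bit form; it assumes an AVX-512F target, and the function name masked_unpack_demo is illustrative only.</p>
<pre>#include &lt;immintrin.h&gt;

/* Illustrative sketch of merging vs. zeroing masking for VUNPCKLPD.
   Assumes compilation for an AVX-512F target (e.g., -mavx512f). */
static void masked_unpack_demo(__m512d src, __mmask8 k, __m512d a, __m512d b,
                               __m512d *merged, __m512d *zeroed)
{
    /* Merging: elements whose mask bit k[j] is 0 keep the value from src. */
    *merged = _mm512_mask_unpacklo_pd(src, k, a, b);
    /* Zeroing: elements whose mask bit k[j] is 0 are set to 0.0. */
    *zeroed = _mm512_maskz_unpacklo_pd(k, a, b);
}
</pre>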
<h3 id="vunpcklpd--evex-encoded-version-when-src2-is-memory-">VUNPCKLPD (EVEX Encoded Version When SRC2 is Memory)<a class="anchor" href="#vunpcklpd--evex-encoded-version-when-src2-is-memory-">
</a></h3>
<pre>(KL, VL) = (2, 128), (4, 256), (8, 512)
FOR j := 0 TO KL-1
    i := j * 64
    IF (EVEX.b = 1)
        THEN TMP_SRC2[i+63:i] := SRC2[63:0]
        ELSE TMP_SRC2[i+63:i] := SRC2[i+63:i]
    FI;
ENDFOR;
IF VL &gt;= 128
    TMP_DEST[63:0] := SRC1[63:0]
    TMP_DEST[127:64] := TMP_SRC2[63:0]
FI;
IF VL &gt;= 256
    TMP_DEST[191:128] := SRC1[191:128]
    TMP_DEST[255:192] := TMP_SRC2[191:128]
FI;
IF VL &gt;= 512
    TMP_DEST[319:256] := SRC1[319:256]
    TMP_DEST[383:320] := TMP_SRC2[319:256]
    TMP_DEST[447:384] := SRC1[447:384]
    TMP_DEST[511:448] := TMP_SRC2[447:384]
FI;
FOR j := 0 TO KL-1
    i := j * 64
    IF k1[j] OR *no writemask*
        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
        ELSE
            IF *merging-masking* ; merging-masking
                THEN *DEST[i+63:i] remains unchanged*
                ELSE *zeroing-masking* ; zeroing-masking
                    DEST[i+63:i] := 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0
</pre>
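<p>When the second source is the same 64-bit value broadcast to every element, a compiler may choose the EVEX m64bcst encoding modeled above; this is an optimization decision, not something the intrinsics guarantee. A minimal sketch, with the function name unpacklo_with_scalar chosen only for illustration:</p>
<pre>#include &lt;immintrin.h&gt;

/* Illustrative: interleaving against a broadcast scalar. The compiler may
   encode the second source as a 64-bit memory broadcast (m64bcst), but it
   is free to use a full 512-bit source instead. */
static __m512d unpacklo_with_scalar(__m512d a, const double *p)
{
    return _mm512_unpacklo_pd(a, _mm512_set1_pd(*p));
}
</pre>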
<h3 id="vunpcklpd--vex-256-encoded-version-">VUNPCKLPD (VEX.256 Encoded Version)<a class="anchor" href="#vunpcklpd--vex-256-encoded-version-">
</a></h3>
<pre>DEST[63:0] := SRC1[63:0]
DEST[127:64] := SRC2[63:0]
DEST[191:128] := SRC1[191:128]
DEST[255:192] := SRC2[191:128]
DEST[MAXVL-1:256] := 0
</pre>
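<p>As the assignments above show, the 256-bit form interleaves within each 128-bit lane rather than across the full register. The sketch below makes the lane-wise result explicit; the function name lanewise_unpacklo is illustrative only.</p>
<pre>#include &lt;immintrin.h&gt;

/* Illustrative: with a = {a0, a1, a2, a3} and b = {b0, b1, b2, b3}
   (element 0 first), the result is {a0, b0, a2, b2} -- the unpack is
   performed independently in each 128-bit lane. */
static __m256d lanewise_unpacklo(__m256d a, __m256d b)
{
    return _mm256_unpacklo_pd(a, b);
}
</pre>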
<h3 id="vunpcklpd--vex-128-encoded-version-">VUNPCKLPD (VEX.128 Encoded Version)<a class="anchor" href="#vunpcklpd--vex-128-encoded-version-">
</a></h3>
<pre>DEST[63:0] := SRC1[63:0]
DEST[127:64] := SRC2[63:0]
DEST[MAXVL-1:128] := 0
</pre>
<h3 id="unpcklpd--128-bit-legacy-sse-version-">UNPCKLPD (128-bit Legacy SSE Version)<a class="anchor" href="#unpcklpd--128-bit-legacy-sse-version-">
</a></h3>
<pre>DEST[63:0] := SRC1[63:0]
DEST[127:64] := SRC2[63:0]
DEST[MAXVL-1:128] (Unmodified)
</pre>
<h2 id="intel-c-c++-compiler-intrinsic-equivalent">Intel C/C++ Compiler Intrinsic Equivalent<a class="anchor" href="#intel-c-c++-compiler-intrinsic-equivalent">
</a></h2>
<pre>VUNPCKLPD __m512d _mm512_unpacklo_pd( __m512d a, __m512d b);
</pre>
<pre>VUNPCKLPD __m512d _mm512_mask_unpacklo_pd(__m512d s, __mmask8 k, __m512d a, __m512d b);
</pre>
<pre>VUNPCKLPD __m512d _mm512_maskz_unpacklo_pd(__mmask8 k, __m512d a, __m512d b);
</pre>
<pre>VUNPCKLPD __m256d _mm256_unpacklo_pd(__m256d a, __m256d b);
</pre>
<pre>VUNPCKLPD __m256d _mm256_mask_unpacklo_pd(__m256d s, __mmask8 k, __m256d a, __m256d b);
</pre>
<pre>VUNPCKLPD __m256d _mm256_maskz_unpacklo_pd(__mmask8 k, __m256d a, __m256d b);
</pre>
<pre>UNPCKLPD __m128d _mm_unpacklo_pd(__m128d a, __m128d b);
</pre>
<pre>VUNPCKLPD __m128d _mm_mask_unpacklo_pd(__m128d s, __mmask8 k, __m128d a, __m128d b);
</pre>
<pre>VUNPCKLPD __m128d _mm_maskz_unpacklo_pd(__mmask8 k, __m128d a, __m128d b);
</pre>
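<p>As a usage sketch, the 128-bit intrinsics can interleave two separate arrays into adjacent pairs; the function name interleave_low and the pairing with _mm_unpackhi_pd (UNPCKHPD) are illustrative additions, not taken from this page.</p>
<pre>#include &lt;immintrin.h&gt;
#include &lt;stddef.h&gt;

/* Illustrative: write {x[i], y[i]} pairs into out[2*i], out[2*i+1].
   Any odd tail element is ignored to keep the sketch short. */
static void interleave_low(const double *x, const double *y,
                           double *out, size_t n)
{
    for (size_t i = 0; i + 1 &lt; n; i += 2) {
        __m128d vx = _mm_loadu_pd(&amp;x[i]);       /* {x[i], x[i+1]} */
        __m128d vy = _mm_loadu_pd(&amp;y[i]);       /* {y[i], y[i+1]} */
        __m128d lo = _mm_unpacklo_pd(vx, vy);   /* {x[i],   y[i]}   */
        __m128d hi = _mm_unpackhi_pd(vx, vy);   /* {x[i+1], y[i+1]} */
        _mm_storeu_pd(&amp;out[2*i],     lo);
        _mm_storeu_pd(&amp;out[2*i + 2], hi);
    }
}
</pre>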
<h2 class="exceptions" id="simd-floating-point-exceptions">SIMD Floating-Point Exceptions<a class="anchor" href="#simd-floating-point-exceptions">
</a></h2>
<p>None.</p>
<h2 class="exceptions" id="other-exceptions">Other Exceptions<a class="anchor" href="#other-exceptions">
</a></h2>
<p>Non-EVEX-encoded instructions, see <span class="not-imported">Table 2-21</span>, “Type 4 Class Exception Conditions.”</p>
<p>EVEX-encoded instructions, see <span class="not-imported">Table 2-50</span>, “Type E4NF Class Exception Conditions.”</p><footer><p>
This UNOFFICIAL, mechanically-separated, non-verified reference is provided for convenience, but it may be
incomplete or broken in various obvious or non-obvious
ways. Refer to <a href="https://software.intel.com/en-us/download/intel-64-and-ia-32-architectures-sdm-combined-volumes-1-2a-2b-2c-2d-3a-3b-3c-3d-and-4">Intel® 64 and IA-32 Architectures Software Developer's Manual</a> for anything serious.
</p></footer></body></html>