commit 92622c37d6b972d2595393b6f3f46cf83bd3a836 Author: NRZ Code Date: Tue Jul 8 02:23:29 2025 -0300 Initial commit diff --git a/LICENSE b/LICENSE new file mode 100644 index 0000000..f288702 --- /dev/null +++ b/LICENSE @@ -0,0 +1,674 @@ + GNU GENERAL PUBLIC LICENSE + Version 3, 29 June 2007 + + Copyright (C) 2007 Free Software Foundation, Inc. + Everyone is permitted to copy and distribute verbatim copies + of this license document, but changing it is not allowed. + + Preamble + + The GNU General Public License is a free, copyleft license for +software and other kinds of works. + + The licenses for most software and other practical works are designed +to take away your freedom to share and change the works. By contrast, +the GNU General Public License is intended to guarantee your freedom to +share and change all versions of a program--to make sure it remains free +software for all its users. We, the Free Software Foundation, use the +GNU General Public License for most of our software; it applies also to +any other work released this way by its authors. You can apply it to +your programs, too. + + When we speak of free software, we are referring to freedom, not +price. Our General Public Licenses are designed to make sure that you +have the freedom to distribute copies of free software (and charge for +them if you wish), that you receive source code or can get it if you +want it, that you can change the software or use pieces of it in new +free programs, and that you know you can do these things. + + To protect your rights, we need to prevent others from denying you +these rights or asking you to surrender the rights. Therefore, you have +certain responsibilities if you distribute copies of the software, or if +you modify it: responsibilities to respect the freedom of others. + + For example, if you distribute copies of such a program, whether +gratis or for a fee, you must pass on to the recipients the same +freedoms that you received. You must make sure that they, too, receive +or can get the source code. And you must show them these terms so they +know their rights. + + Developers that use the GNU GPL protect your rights with two steps: +(1) assert copyright on the software, and (2) offer you this License +giving you legal permission to copy, distribute and/or modify it. + + For the developers' and authors' protection, the GPL clearly explains +that there is no warranty for this free software. For both users' and +authors' sake, the GPL requires that modified versions be marked as +changed, so that their problems will not be attributed erroneously to +authors of previous versions. + + Some devices are designed to deny users access to install or run +modified versions of the software inside them, although the manufacturer +can do so. This is fundamentally incompatible with the aim of +protecting users' freedom to change the software. The systematic +pattern of such abuse occurs in the area of products for individuals to +use, which is precisely where it is most unacceptable. Therefore, we +have designed this version of the GPL to prohibit the practice for those +products. If such problems arise substantially in other domains, we +stand ready to extend this provision to those domains in future versions +of the GPL, as needed to protect the freedom of users. + + Finally, every program is threatened constantly by software patents. 
+States should not allow patents to restrict development and use of +software on general-purpose computers, but in those that do, we wish to +avoid the special danger that patents applied to a free program could +make it effectively proprietary. To prevent this, the GPL assures that +patents cannot be used to render the program non-free. + + The precise terms and conditions for copying, distribution and +modification follow. + + TERMS AND CONDITIONS + + 0. Definitions. + + "This License" refers to version 3 of the GNU General Public License. + + "Copyright" also means copyright-like laws that apply to other kinds of +works, such as semiconductor masks. + + "The Program" refers to any copyrightable work licensed under this +License. Each licensee is addressed as "you". "Licensees" and +"recipients" may be individuals or organizations. + + To "modify" a work means to copy from or adapt all or part of the work +in a fashion requiring copyright permission, other than the making of an +exact copy. The resulting work is called a "modified version" of the +earlier work or a work "based on" the earlier work. + + A "covered work" means either the unmodified Program or a work based +on the Program. + + To "propagate" a work means to do anything with it that, without +permission, would make you directly or secondarily liable for +infringement under applicable copyright law, except executing it on a +computer or modifying a private copy. Propagation includes copying, +distribution (with or without modification), making available to the +public, and in some countries other activities as well. + + To "convey" a work means any kind of propagation that enables other +parties to make or receive copies. Mere interaction with a user through +a computer network, with no transfer of a copy, is not conveying. + + An interactive user interface displays "Appropriate Legal Notices" +to the extent that it includes a convenient and prominently visible +feature that (1) displays an appropriate copyright notice, and (2) +tells the user that there is no warranty for the work (except to the +extent that warranties are provided), that licensees may convey the +work under this License, and how to view a copy of this License. If +the interface presents a list of user commands or options, such as a +menu, a prominent item in the list meets this criterion. + + 1. Source Code. + + The "source code" for a work means the preferred form of the work +for making modifications to it. "Object code" means any non-source +form of a work. + + A "Standard Interface" means an interface that either is an official +standard defined by a recognized standards body, or, in the case of +interfaces specified for a particular programming language, one that +is widely used among developers working in that language. + + The "System Libraries" of an executable work include anything, other +than the work as a whole, that (a) is included in the normal form of +packaging a Major Component, but which is not part of that Major +Component, and (b) serves only to enable use of the work with that +Major Component, or to implement a Standard Interface for which an +implementation is available to the public in source code form. A +"Major Component", in this context, means a major essential component +(kernel, window system, and so on) of the specific operating system +(if any) on which the executable work runs, or a compiler used to +produce the work, or an object code interpreter used to run it. 
+ + The "Corresponding Source" for a work in object code form means all +the source code needed to generate, install, and (for an executable +work) run the object code and to modify the work, including scripts to +control those activities. However, it does not include the work's +System Libraries, or general-purpose tools or generally available free +programs which are used unmodified in performing those activities but +which are not part of the work. For example, Corresponding Source +includes interface definition files associated with source files for +the work, and the source code for shared libraries and dynamically +linked subprograms that the work is specifically designed to require, +such as by intimate data communication or control flow between those +subprograms and other parts of the work. + + The Corresponding Source need not include anything that users +can regenerate automatically from other parts of the Corresponding +Source. + + The Corresponding Source for a work in source code form is that +same work. + + 2. Basic Permissions. + + All rights granted under this License are granted for the term of +copyright on the Program, and are irrevocable provided the stated +conditions are met. This License explicitly affirms your unlimited +permission to run the unmodified Program. The output from running a +covered work is covered by this License only if the output, given its +content, constitutes a covered work. This License acknowledges your +rights of fair use or other equivalent, as provided by copyright law. + + You may make, run and propagate covered works that you do not +convey, without conditions so long as your license otherwise remains +in force. You may convey covered works to others for the sole purpose +of having them make modifications exclusively for you, or provide you +with facilities for running those works, provided that you comply with +the terms of this License in conveying all material for which you do +not control copyright. Those thus making or running the covered works +for you must do so exclusively on your behalf, under your direction +and control, on terms that prohibit them from making any copies of +your copyrighted material outside their relationship with you. + + Conveying under any other circumstances is permitted solely under +the conditions stated below. Sublicensing is not allowed; section 10 +makes it unnecessary. + + 3. Protecting Users' Legal Rights From Anti-Circumvention Law. + + No covered work shall be deemed part of an effective technological +measure under any applicable law fulfilling obligations under article +11 of the WIPO copyright treaty adopted on 20 December 1996, or +similar laws prohibiting or restricting circumvention of such +measures. + + When you convey a covered work, you waive any legal power to forbid +circumvention of technological measures to the extent such circumvention +is effected by exercising rights under this License with respect to +the covered work, and you disclaim any intention to limit operation or +modification of the work as a means of enforcing, against the work's +users, your or third parties' legal rights to forbid circumvention of +technological measures. + + 4. Conveying Verbatim Copies. 
+ + You may convey verbatim copies of the Program's source code as you +receive it, in any medium, provided that you conspicuously and +appropriately publish on each copy an appropriate copyright notice; +keep intact all notices stating that this License and any +non-permissive terms added in accord with section 7 apply to the code; +keep intact all notices of the absence of any warranty; and give all +recipients a copy of this License along with the Program. + + You may charge any price or no price for each copy that you convey, +and you may offer support or warranty protection for a fee. + + 5. Conveying Modified Source Versions. + + You may convey a work based on the Program, or the modifications to +produce it from the Program, in the form of source code under the +terms of section 4, provided that you also meet all of these conditions: + + a) The work must carry prominent notices stating that you modified + it, and giving a relevant date. + + b) The work must carry prominent notices stating that it is + released under this License and any conditions added under section + 7. This requirement modifies the requirement in section 4 to + "keep intact all notices". + + c) You must license the entire work, as a whole, under this + License to anyone who comes into possession of a copy. This + License will therefore apply, along with any applicable section 7 + additional terms, to the whole of the work, and all its parts, + regardless of how they are packaged. This License gives no + permission to license the work in any other way, but it does not + invalidate such permission if you have separately received it. + + d) If the work has interactive user interfaces, each must display + Appropriate Legal Notices; however, if the Program has interactive + interfaces that do not display Appropriate Legal Notices, your + work need not make them do so. + + A compilation of a covered work with other separate and independent +works, which are not by their nature extensions of the covered work, +and which are not combined with it such as to form a larger program, +in or on a volume of a storage or distribution medium, is called an +"aggregate" if the compilation and its resulting copyright are not +used to limit the access or legal rights of the compilation's users +beyond what the individual works permit. Inclusion of a covered work +in an aggregate does not cause this License to apply to the other +parts of the aggregate. + + 6. Conveying Non-Source Forms. + + You may convey a covered work in object code form under the terms +of sections 4 and 5, provided that you also convey the +machine-readable Corresponding Source under the terms of this License, +in one of these ways: + + a) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by the + Corresponding Source fixed on a durable physical medium + customarily used for software interchange. 
+ + b) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by a + written offer, valid for at least three years and valid for as + long as you offer spare parts or customer support for that product + model, to give anyone who possesses the object code either (1) a + copy of the Corresponding Source for all the software in the + product that is covered by this License, on a durable physical + medium customarily used for software interchange, for a price no + more than your reasonable cost of physically performing this + conveying of source, or (2) access to copy the + Corresponding Source from a network server at no charge. + + c) Convey individual copies of the object code with a copy of the + written offer to provide the Corresponding Source. This + alternative is allowed only occasionally and noncommercially, and + only if you received the object code with such an offer, in accord + with subsection 6b. + + d) Convey the object code by offering access from a designated + place (gratis or for a charge), and offer equivalent access to the + Corresponding Source in the same way through the same place at no + further charge. You need not require recipients to copy the + Corresponding Source along with the object code. If the place to + copy the object code is a network server, the Corresponding Source + may be on a different server (operated by you or a third party) + that supports equivalent copying facilities, provided you maintain + clear directions next to the object code saying where to find the + Corresponding Source. Regardless of what server hosts the + Corresponding Source, you remain obligated to ensure that it is + available for as long as needed to satisfy these requirements. + + e) Convey the object code using peer-to-peer transmission, provided + you inform other peers where the object code and Corresponding + Source of the work are being offered to the general public at no + charge under subsection 6d. + + A separable portion of the object code, whose source code is excluded +from the Corresponding Source as a System Library, need not be +included in conveying the object code work. + + A "User Product" is either (1) a "consumer product", which means any +tangible personal property which is normally used for personal, family, +or household purposes, or (2) anything designed or sold for incorporation +into a dwelling. In determining whether a product is a consumer product, +doubtful cases shall be resolved in favor of coverage. For a particular +product received by a particular user, "normally used" refers to a +typical or common use of that class of product, regardless of the status +of the particular user or of the way in which the particular user +actually uses, or expects or is expected to use, the product. A product +is a consumer product regardless of whether the product has substantial +commercial, industrial or non-consumer uses, unless such uses represent +the only significant mode of use of the product. + + "Installation Information" for a User Product means any methods, +procedures, authorization keys, or other information required to install +and execute modified versions of a covered work in that User Product from +a modified version of its Corresponding Source. The information must +suffice to ensure that the continued functioning of the modified object +code is in no case prevented or interfered with solely because +modification has been made. 
+ + If you convey an object code work under this section in, or with, or +specifically for use in, a User Product, and the conveying occurs as +part of a transaction in which the right of possession and use of the +User Product is transferred to the recipient in perpetuity or for a +fixed term (regardless of how the transaction is characterized), the +Corresponding Source conveyed under this section must be accompanied +by the Installation Information. But this requirement does not apply +if neither you nor any third party retains the ability to install +modified object code on the User Product (for example, the work has +been installed in ROM). + + The requirement to provide Installation Information does not include a +requirement to continue to provide support service, warranty, or updates +for a work that has been modified or installed by the recipient, or for +the User Product in which it has been modified or installed. Access to a +network may be denied when the modification itself materially and +adversely affects the operation of the network or violates the rules and +protocols for communication across the network. + + Corresponding Source conveyed, and Installation Information provided, +in accord with this section must be in a format that is publicly +documented (and with an implementation available to the public in +source code form), and must require no special password or key for +unpacking, reading or copying. + + 7. Additional Terms. + + "Additional permissions" are terms that supplement the terms of this +License by making exceptions from one or more of its conditions. +Additional permissions that are applicable to the entire Program shall +be treated as though they were included in this License, to the extent +that they are valid under applicable law. If additional permissions +apply only to part of the Program, that part may be used separately +under those permissions, but the entire Program remains governed by +this License without regard to the additional permissions. + + When you convey a copy of a covered work, you may at your option +remove any additional permissions from that copy, or from any part of +it. (Additional permissions may be written to require their own +removal in certain cases when you modify the work.) You may place +additional permissions on material, added by you to a covered work, +for which you have or can give appropriate copyright permission. 
+ + Notwithstanding any other provision of this License, for material you +add to a covered work, you may (if authorized by the copyright holders of +that material) supplement the terms of this License with terms: + + a) Disclaiming warranty or limiting liability differently from the + terms of sections 15 and 16 of this License; or + + b) Requiring preservation of specified reasonable legal notices or + author attributions in that material or in the Appropriate Legal + Notices displayed by works containing it; or + + c) Prohibiting misrepresentation of the origin of that material, or + requiring that modified versions of such material be marked in + reasonable ways as different from the original version; or + + d) Limiting the use for publicity purposes of names of licensors or + authors of the material; or + + e) Declining to grant rights under trademark law for use of some + trade names, trademarks, or service marks; or + + f) Requiring indemnification of licensors and authors of that + material by anyone who conveys the material (or modified versions of + it) with contractual assumptions of liability to the recipient, for + any liability that these contractual assumptions directly impose on + those licensors and authors. + + All other non-permissive additional terms are considered "further +restrictions" within the meaning of section 10. If the Program as you +received it, or any part of it, contains a notice stating that it is +governed by this License along with a term that is a further +restriction, you may remove that term. If a license document contains +a further restriction but permits relicensing or conveying under this +License, you may add to a covered work material governed by the terms +of that license document, provided that the further restriction does +not survive such relicensing or conveying. + + If you add terms to a covered work in accord with this section, you +must place, in the relevant source files, a statement of the +additional terms that apply to those files, or a notice indicating +where to find the applicable terms. + + Additional terms, permissive or non-permissive, may be stated in the +form of a separately written license, or stated as exceptions; +the above requirements apply either way. + + 8. Termination. + + You may not propagate or modify a covered work except as expressly +provided under this License. Any attempt otherwise to propagate or +modify it is void, and will automatically terminate your rights under +this License (including any patent licenses granted under the third +paragraph of section 11). + + However, if you cease all violation of this License, then your +license from a particular copyright holder is reinstated (a) +provisionally, unless and until the copyright holder explicitly and +finally terminates your license, and (b) permanently, if the copyright +holder fails to notify you of the violation by some reasonable means +prior to 60 days after the cessation. + + Moreover, your license from a particular copyright holder is +reinstated permanently if the copyright holder notifies you of the +violation by some reasonable means, this is the first time you have +received notice of violation of this License (for any work) from that +copyright holder, and you cure the violation prior to 30 days after +your receipt of the notice. + + Termination of your rights under this section does not terminate the +licenses of parties who have received copies or rights from you under +this License. 
If your rights have been terminated and not permanently +reinstated, you do not qualify to receive new licenses for the same +material under section 10. + + 9. Acceptance Not Required for Having Copies. + + You are not required to accept this License in order to receive or +run a copy of the Program. Ancillary propagation of a covered work +occurring solely as a consequence of using peer-to-peer transmission +to receive a copy likewise does not require acceptance. However, +nothing other than this License grants you permission to propagate or +modify any covered work. These actions infringe copyright if you do +not accept this License. Therefore, by modifying or propagating a +covered work, you indicate your acceptance of this License to do so. + + 10. Automatic Licensing of Downstream Recipients. + + Each time you convey a covered work, the recipient automatically +receives a license from the original licensors, to run, modify and +propagate that work, subject to this License. You are not responsible +for enforcing compliance by third parties with this License. + + An "entity transaction" is a transaction transferring control of an +organization, or substantially all assets of one, or subdividing an +organization, or merging organizations. If propagation of a covered +work results from an entity transaction, each party to that +transaction who receives a copy of the work also receives whatever +licenses to the work the party's predecessor in interest had or could +give under the previous paragraph, plus a right to possession of the +Corresponding Source of the work from the predecessor in interest, if +the predecessor has it or can get it with reasonable efforts. + + You may not impose any further restrictions on the exercise of the +rights granted or affirmed under this License. For example, you may +not impose a license fee, royalty, or other charge for exercise of +rights granted under this License, and you may not initiate litigation +(including a cross-claim or counterclaim in a lawsuit) alleging that +any patent claim is infringed by making, using, selling, offering for +sale, or importing the Program or any portion of it. + + 11. Patents. + + A "contributor" is a copyright holder who authorizes use under this +License of the Program or a work on which the Program is based. The +work thus licensed is called the contributor's "contributor version". + + A contributor's "essential patent claims" are all patent claims +owned or controlled by the contributor, whether already acquired or +hereafter acquired, that would be infringed by some manner, permitted +by this License, of making, using, or selling its contributor version, +but do not include claims that would be infringed only as a +consequence of further modification of the contributor version. For +purposes of this definition, "control" includes the right to grant +patent sublicenses in a manner consistent with the requirements of +this License. + + Each contributor grants you a non-exclusive, worldwide, royalty-free +patent license under the contributor's essential patent claims, to +make, use, sell, offer for sale, import and otherwise run, modify and +propagate the contents of its contributor version. + + In the following three paragraphs, a "patent license" is any express +agreement or commitment, however denominated, not to enforce a patent +(such as an express permission to practice a patent or covenant not to +sue for patent infringement). 
To "grant" such a patent license to a +party means to make such an agreement or commitment not to enforce a +patent against the party. + + If you convey a covered work, knowingly relying on a patent license, +and the Corresponding Source of the work is not available for anyone +to copy, free of charge and under the terms of this License, through a +publicly available network server or other readily accessible means, +then you must either (1) cause the Corresponding Source to be so +available, or (2) arrange to deprive yourself of the benefit of the +patent license for this particular work, or (3) arrange, in a manner +consistent with the requirements of this License, to extend the patent +license to downstream recipients. "Knowingly relying" means you have +actual knowledge that, but for the patent license, your conveying the +covered work in a country, or your recipient's use of the covered work +in a country, would infringe one or more identifiable patents in that +country that you have reason to believe are valid. + + If, pursuant to or in connection with a single transaction or +arrangement, you convey, or propagate by procuring conveyance of, a +covered work, and grant a patent license to some of the parties +receiving the covered work authorizing them to use, propagate, modify +or convey a specific copy of the covered work, then the patent license +you grant is automatically extended to all recipients of the covered +work and works based on it. + + A patent license is "discriminatory" if it does not include within +the scope of its coverage, prohibits the exercise of, or is +conditioned on the non-exercise of one or more of the rights that are +specifically granted under this License. You may not convey a covered +work if you are a party to an arrangement with a third party that is +in the business of distributing software, under which you make payment +to the third party based on the extent of your activity of conveying +the work, and under which the third party grants, to any of the +parties who would receive the covered work from you, a discriminatory +patent license (a) in connection with copies of the covered work +conveyed by you (or copies made from those copies), or (b) primarily +for and in connection with specific products or compilations that +contain the covered work, unless you entered into that arrangement, +or that patent license was granted, prior to 28 March 2007. + + Nothing in this License shall be construed as excluding or limiting +any implied license or other defenses to infringement that may +otherwise be available to you under applicable patent law. + + 12. No Surrender of Others' Freedom. + + If conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. If you cannot convey a +covered work so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you may +not convey it at all. For example, if you agree to terms that obligate you +to collect a royalty for further conveying from those to whom you convey +the Program, the only way you could satisfy both those terms and this +License would be to refrain entirely from conveying the Program. + + 13. Use with the GNU Affero General Public License. 
+ + Notwithstanding any other provision of this License, you have +permission to link or combine any covered work with a work licensed +under version 3 of the GNU Affero General Public License into a single +combined work, and to convey the resulting work. The terms of this +License will continue to apply to the part which is the covered work, +but the special requirements of the GNU Affero General Public License, +section 13, concerning interaction through a network will apply to the +combination as such. + + 14. Revised Versions of this License. + + The Free Software Foundation may publish revised and/or new versions of +the GNU General Public License from time to time. Such new versions will +be similar in spirit to the present version, but may differ in detail to +address new problems or concerns. + + Each version is given a distinguishing version number. If the +Program specifies that a certain numbered version of the GNU General +Public License "or any later version" applies to it, you have the +option of following the terms and conditions either of that numbered +version or of any later version published by the Free Software +Foundation. If the Program does not specify a version number of the +GNU General Public License, you may choose any version ever published +by the Free Software Foundation. + + If the Program specifies that a proxy can decide which future +versions of the GNU General Public License can be used, that proxy's +public statement of acceptance of a version permanently authorizes you +to choose that version for the Program. + + Later license versions may give you additional or different +permissions. However, no additional obligations are imposed on any +author or copyright holder as a result of your choosing to follow a +later version. + + 15. Disclaimer of Warranty. + + THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY +APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT +HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY +OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, +THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM +IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF +ALL NECESSARY SERVICING, REPAIR OR CORRECTION. + + 16. Limitation of Liability. + + IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING +WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS +THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY +GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE +USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF +DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD +PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), +EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF +SUCH DAMAGES. + + 17. Interpretation of Sections 15 and 16. + + If the disclaimer of warranty and limitation of liability provided +above cannot be given local legal effect according to their terms, +reviewing courts shall apply local law that most closely approximates +an absolute waiver of all civil liability in connection with the +Program, unless a warranty or assumption of liability accompanies a +copy of the Program in return for a fee. 
+ + END OF TERMS AND CONDITIONS + + How to Apply These Terms to Your New Programs + + If you develop a new program, and you want it to be of the greatest +possible use to the public, the best way to achieve this is to make it +free software which everyone can redistribute and change under these terms. + + To do so, attach the following notices to the program. It is safest +to attach them to the start of each source file to most effectively +state the exclusion of warranty; and each file should have at least +the "copyright" line and a pointer to where the full notice is found. + + + Copyright (C) + + This program is free software: you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation, either version 3 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program. If not, see . + +Also add information on how to contact you by electronic and paper mail. + + If the program does terminal interaction, make it output a short +notice like this when it starts in an interactive mode: + + Copyright (C) + This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. + This is free software, and you are welcome to redistribute it + under certain conditions; type `show c' for details. + +The hypothetical commands `show w' and `show c' should show the appropriate +parts of the General Public License. Of course, your program's commands +might be different; for a GUI interface, you would use an "about box". + + You should also get your employer (if you work as a programmer) or school, +if any, to sign a "copyright disclaimer" for the program, if necessary. +For more information on this, and how to apply and follow the GNU GPL, see +. + + The GNU General Public License does not permit incorporating your program +into proprietary programs. If your program is a subroutine library, you +may consider it more useful to permit linking proprietary applications with +the library. If this is what you want to do, use the GNU Lesser General +Public License instead of this License. But first, please read +. diff --git a/README.md b/README.md new file mode 100644 index 0000000..befd2e7 --- /dev/null +++ b/README.md @@ -0,0 +1,30 @@ +# Intel® 64 and IA-32 Instruction Set Reference + +Intel® 64 and IA-32 Instruction Set Reference using FZF ([Fuzzy Finder](https://github.com/junegunn/fzf)) interface. + +This UNOFFICIAL reference was generated from the official [Intel® 64 and IA-32 Architectures Software Developer’s Manual](https://www.intel.com/content/www/us/en/developer/articles/technical/intel-sdm.html) by a dumb script. + +There is no guarantee that some parts aren't mangled or broken, and it is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. + +[![ia32-64](./img/ia32-64.png "IA32-64")](./img/ia32-64.png) + +Please report issues [here](https://bolha.dev/nrzcode/ia32-64) in case you run into any. + +Read [here](https://github.com/junegunn/fzf) to install FZF, or check out the project and run its install script for the shell you use.
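+
+fzf is also packaged by many distributions, so a package-manager install may be all you need (the package names below are the usual ones, but they can vary by system):
+
+```sh
+sudo apt install fzf    # Debian/Ubuntu
+sudo dnf install fzf    # Fedora
+brew install fzf        # macOS (Homebrew)
+```
+
+Otherwise, clone the project and run the bundled installer: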
+ +```sh +git clone {https://github.com,~/.local/vendor}/junegunn/fzf +~/.local/vendor/junegunn/fzf/install --all +``` + +You can also browse the online x86 reference [here](https://www.felixcloutier.com/x86). diff --git a/bin/build.sh b/bin/build.sh new file mode 100755 index 0000000..1064f71 --- /dev/null +++ b/bin/build.sh @@ -0,0 +1,21 @@ +#!/bin/bash +# Mirror https://www.felixcloutier.com and rewrite its x86 pages for offline browsing. +workdir=$(mktemp -d) +cd "$workdir" || exit 1 +wget -r https://www.felixcloutier.com +workdir+=/www.felixcloutier.com/x86 +for f in $workdir/*; do + # Multi-instruction pages (e.g., movs:movsb:...) carry ':' in their names; rename them to use '.' + if [[ $f == *:* ]]; then + mv "$f" "${f//:/.}" + # ':' is a no-op that leaves the old basename in $_ for the sed below + : "${f##*/}" + sed -i "s|$_|${_//:/.}|g" $workdir/* + f=${f//:/.} + fi + [[ $f == *html ]] || mv "$f" "$f.html" +done +# Rewrite absolute /x86/ links to point at the local .html files. +sed -i -E "/href='\/x86[^#']+#/s|href='/x86/([^'#]+)#|href='\1.html#|g" $workdir/* +sed -i -E "/href='\/x86[^']+'/s|href='/x86/([^']+)'|href='\1.html'|g" $workdir/* +sed -i "/href='\/x86\/'/s|href='/x86/'|href='index.html'|g" $workdir/* diff --git a/ia32-64.sh b/ia32-64.sh new file mode 100755 index 0000000..7155c07 --- /dev/null +++ b/ia32-64.sh @@ -0,0 +1,90 @@ +#!/usr/bin/env bash +# ------------------------------------------- +# @script: ia32-64.sh +# @link: https://bolha.dev/nrzcode/ia32-64 +# @description: Intel® 64 and IA-32 Instruction Set Reference using FZF interface. +# @license: GNU/GPL v3.0 +# @version: 0.0.1 +# @author: NRZ Code +# @created: 08/07/2025 01:24 +# +# @requirements: --- +# @bugs: --- +# @notes: --- +# @revision: --- +# ------------------------------------------- +# Copyright (C) 2025 NRZ Code +# +# This program is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see . +# ------------------------------------------- +# USAGE +# ia32-64.sh [OPTIONS] +# ------------------------------------------- +version='0.0.1' +usage='Usage: ia32-64.sh [OPTIONS] + +DESCRIPTION + Intel® 64 and IA-32 Instruction Set Reference using FZF interface + +OPTIONS + General options + -h, --help Print this help usage and exit + -v, --version Display version information and exit +' +copy='Copyright (C) 2025 NRZ Code . +License: GNU GPL version 3 or later . +This is free software: you are free to change and redistribute it. +There is NO WARRANTY, to the extent permitted by law. +' + +# functions --------------------------------- +check_dependencies() { + local code=0 + for dep; do + if ! type -p $dep &>/dev/null; then + echo "ERROR: ${BASH_SOURCE##*/}: $dep dependency is missing."
+ code=1 + fi + done + return $code +} + +main() { + fzf_opt=( + --info inline-right + --pointer '█' + --color hl+:red:bold,hl:red:bold + --color label:gray + --no-scrollbar + --border + --border-label $'┤ \e[1;34mINSTRUÇÕES INTEL 32 E 64 BITS\e[0m ├' + -e -i + --reverse + --prompt ': ' + --tiebreak begin + --tabstop 24 + -d '|' + --with-nth $'{1} \t- {2}' + --preview 'w3m -cols $(tput cols) -o display_border=1 {3}.html' + --preview-window bottom,70%,border-top + --bind 'enter:become(w3m -o display_border=1 {3}.html),esc:become(true)' + ) + fzf "${fzf_opt[@]}" < ../inst.list +} + +# main -------------------------------------- +check_dependencies fzf w3m || exit 1 +workdir=$(dirname $BASH_SOURCE) +cd $workdir/x86 +[[ $BASH_SOURCE == $0 ]] && main "$@" diff --git a/img/ia32-64.png b/img/ia32-64.png new file mode 100644 index 0000000..ccbbe12 Binary files /dev/null and b/img/ia32-64.png differ diff --git a/inst.list b/inst.list new file mode 100644 index 0000000..94689dc --- /dev/null +++ b/inst.list @@ -0,0 +1,1222 @@ +AAA|ASCII Adjust After Addition|aaa +AAD|ASCII Adjust AX Before Division|aad +AAM|ASCII Adjust AX After Multiply|aam +AAS|ASCII Adjust AL After Subtraction|aas +ADC|Add With Carry|adc +ADCX|Unsigned Integer Addition of Two Operands With Carry Flag|adcx +ADD|Add|add +ADDPD|Add Packed Double Precision Floating-Point Values|addpd +ADDPS|Add Packed Single Precision Floating-Point Values|addps +ADDSD|Add Scalar Double Precision Floating-Point Values|addsd +ADDSS|Add Scalar Single Precision Floating-Point Values|addss +ADDSUBPD|Packed Double Precision Floating-Point Add/Subtract|addsubpd +ADDSUBPS|Packed Single Precision Floating-Point Add/Subtract|addsubps +ADOX|Unsigned Integer Addition of Two Operands With Overflow Flag|adox +AESDEC|Perform One Round of an AES Decryption Flow|aesdec +AESDEC128KL|Perform Ten Rounds of AES Decryption Flow With Key Locker Using 128-BitKey|aesdec128kl +AESDEC256KL|Perform 14 Rounds of AES Decryption Flow With Key Locker Using 256-Bit Key|aesdec256kl +AESDECLAST|Perform Last Round of an AES Decryption Flow|aesdeclast +AESDECWIDE128KL|Perform Ten Rounds of AES Decryption Flow With Key Locker on 8 BlocksUsing 128-Bit Key|aesdecwide128kl +AESDECWIDE256KL|Perform 14 Rounds of AES Decryption Flow With Key Locker on 8 BlocksUsing 256-Bit Key|aesdecwide256kl +AESENC|Perform One Round of an AES Encryption Flow|aesenc +AESENC128KL|Perform Ten Rounds of AES Encryption Flow With Key Locker Using 128-Bit Key|aesenc128kl +AESENC256KL|Perform 14 Rounds of AES Encryption Flow With Key Locker Using 256-Bit Key|aesenc256kl +AESENCLAST|Perform Last Round of an AES Encryption Flow|aesenclast +AESENCWIDE128KL|Perform Ten Rounds of AES Encryption Flow With Key Locker on 8 BlocksUsing 128-Bit Key|aesencwide128kl +AESENCWIDE256KL|Perform 14 Rounds of AES Encryption Flow With Key Locker on 8 BlocksUsing 256-Bit Key|aesencwide256kl +AESIMC|Perform the AES InvMixColumn Transformation|aesimc +AESKEYGENASSIST|AES Round Key Generation Assist|aeskeygenassist +AND|Logical AND|and +ANDN|Logical AND NOT|andn +ANDNPD|Bitwise Logical AND NOT of Packed Double Precision Floating-Point Values|andnpd +ANDNPS|Bitwise Logical AND NOT of Packed Single Precision Floating-Point Values|andnps +ANDPD|Bitwise Logical AND of Packed Double Precision Floating-Point Values|andpd +ANDPS|Bitwise Logical AND of Packed Single Precision Floating-Point Values|andps +ARPL|Adjust RPL Field of Segment Selector|arpl +BEXTR|Bit Field Extract|bextr +BLENDPD|Blend Packed Double Precision Floating-Point 
Values|blendpd +BLENDPS|Blend Packed Single Precision Floating-Point Values|blendps +BLENDVPD|Variable Blend Packed Double Precision Floating-Point Values|blendvpd +BLENDVPS|Variable Blend Packed Single Precision Floating-Point Values|blendvps +BLSI|Extract Lowest Set Isolated Bit|blsi +BLSMSK|Get Mask Up to Lowest Set Bit|blsmsk +BLSR|Reset Lowest Set Bit|blsr +BNDCL|Check Lower Bound|bndcl +BNDCN|Check Upper Bound|bndcu.bndcn +BNDCU|Check Upper Bound|bndcu.bndcn +BNDLDX|Load Extended Bounds Using Address Translation|bndldx +BNDMK|Make Bounds|bndmk +BNDMOV|Move Bounds|bndmov +BNDSTX|Store Extended Bounds Using Address Translation|bndstx +BOUND|Check Array Index Against Bounds|bound +BSF|Bit Scan Forward|bsf +BSR|Bit Scan Reverse|bsr +BSWAP|Byte Swap|bswap +BT|Bit Test|bt +BTC|Bit Test and Complement|btc +BTR|Bit Test and Reset|btr +BTS|Bit Test and Set|bts +BZHI|Zero High Bits Starting with Specified Bit Position|bzhi +CALL|Call Procedure|call +CBW|Convert Byte to Word/Convert Word to Doubleword/Convert Doubleword toQuadword|cbw.cwde.cdqe +CDQ|Convert Word to Doubleword/Convert Doubleword to Quadword|cwd.cdq.cqo +CDQE|Convert Byte to Word/Convert Word to Doubleword/Convert Doubleword toQuadword|cbw.cwde.cdqe +CLAC|Clear AC Flag in EFLAGS Register|clac +CLC|Clear Carry Flag|clc +CLD|Clear Direction Flag|cld +CLDEMOTE|Cache Line Demote|cldemote +CLFLUSH|Flush Cache Line|clflush +CLFLUSHOPT|Flush Cache Line Optimized|clflushopt +CLI|Clear Interrupt Flag|cli +CLRSSBSY|Clear Busy Flag in a Supervisor Shadow Stack Token|clrssbsy +CLTS|Clear Task-Switched Flag in CR0|clts +CLUI|Clear User Interrupt Flag|clui +CLWB|Cache Line Write Back|clwb +CMC|Complement Carry Flag|cmc +CMOVcc|Conditional Move|cmovcc +CMP|Compare Two Operands|cmp +CMPPD|Compare Packed Double Precision Floating-Point Values|cmppd +CMPPS|Compare Packed Single Precision Floating-Point Values|cmpps +CMPS|Compare String Operands|cmps.cmpsb.cmpsw.cmpsd.cmpsq +CMPSB|Compare String Operands|cmps.cmpsb.cmpsw.cmpsd.cmpsq +CMPSD|Compare String Operands|cmps.cmpsb.cmpsw.cmpsd.cmpsq +CMPSD|Compare Scalar Double Precision Floating-Point Value|cmpsd +CMPSQ|Compare String Operands|cmps.cmpsb.cmpsw.cmpsd.cmpsq +CMPSS|Compare Scalar Single Precision Floating-Point Value|cmpss +CMPSW|Compare String Operands|cmps.cmpsb.cmpsw.cmpsd.cmpsq +CMPXCHG|Compare and Exchange|cmpxchg +CMPXCHG16B|Compare and Exchange Bytes|cmpxchg8b.cmpxchg16b +CMPXCHG8B|Compare and Exchange Bytes|cmpxchg8b.cmpxchg16b +COMISD|Compare Scalar Ordered Double Precision Floating-Point Values and Set EFLAGS|comisd +COMISS|Compare Scalar Ordered Single Precision Floating-Point Values and Set EFLAGS|comiss +CPUID|CPU Identification|cpuid +CQO|Convert Word to Doubleword/Convert Doubleword to Quadword|cwd.cdq.cqo +CRC32|Accumulate CRC32 Value|crc32 +CVTDQ2PD|Convert Packed Doubleword Integers to Packed Double Precision Floating-PointValues|cvtdq2pd +CVTDQ2PS|Convert Packed Doubleword Integers to Packed Single Precision Floating-PointValues|cvtdq2ps +CVTPD2DQ|Convert Packed Double Precision Floating-Point Values to Packed DoublewordIntegers|cvtpd2dq +CVTPD2PI|Convert Packed Double Precision Floating-Point Values to Packed Dword Integers|cvtpd2pi +CVTPD2PS|Convert Packed Double Precision Floating-Point Values to Packed Single PrecisionFloating-Point Values|cvtpd2ps +CVTPI2PD|Convert Packed Dword Integers to Packed Double Precision Floating-Point Values|cvtpi2pd +CVTPI2PS|Convert Packed Dword Integers to Packed Single Precision Floating-Point Values|cvtpi2ps +CVTPS2DQ|Convert Packed 
Single Precision Floating-Point Values to Packed SignedDoubleword Integer Values|cvtps2dq +CVTPS2PD|Convert Packed Single Precision Floating-Point Values to Packed Double PrecisionFloating-Point Values|cvtps2pd +CVTPS2PI|Convert Packed Single Precision Floating-Point Values to Packed Dword Integers|cvtps2pi +CVTSD2SI|Convert Scalar Double Precision Floating-Point Value to Doubleword Integer|cvtsd2si +CVTSD2SS|Convert Scalar Double Precision Floating-Point Value to Scalar Single PrecisionFloating-Point Value|cvtsd2ss +CVTSI2SD|Convert Doubleword Integer to Scalar Double Precision Floating-Point Value|cvtsi2sd +CVTSI2SS|Convert Doubleword Integer to Scalar Single Precision Floating-Point Value|cvtsi2ss +CVTSS2SD|Convert Scalar Single Precision Floating-Point Value to Scalar Double PrecisionFloating-Point Value|cvtss2sd +CVTSS2SI|Convert Scalar Single Precision Floating-Point Value to Doubleword Integer|cvtss2si +CVTTPD2DQ|Convert with Truncation Packed Double Precision Floating-Point Values toPacked Doubleword Integers|cvttpd2dq +CVTTPD2PI|Convert With Truncation Packed Double Precision Floating-Point Values to PackedDword Integers|cvttpd2pi +CVTTPS2DQ|Convert With Truncation Packed Single Precision Floating-Point Values to PackedSigned Doubleword Integer Values|cvttps2dq +CVTTPS2PI|Convert With Truncation Packed Single Precision Floating-Point Values to PackedDword Integers|cvttps2pi +CVTTSD2SI|Convert With Truncation Scalar Double Precision Floating-Point Value to SignedInteger|cvttsd2si +CVTTSS2SI|Convert With Truncation Scalar Single Precision Floating-Point Value to Integer|cvttss2si +CWD|Convert Word to Doubleword/Convert Doubleword to Quadword|cwd.cdq.cqo +CWDE|Convert Byte to Word/Convert Word to Doubleword/Convert Doubleword toQuadword|cbw.cwde.cdqe +DAA|Decimal Adjust AL After Addition|daa +DAS|Decimal Adjust AL After Subtraction|das +DEC|Decrement by 1|dec +DIV|Unsigned Divide|div +DIVPD|Divide Packed Double Precision Floating-Point Values|divpd +DIVPS|Divide Packed Single Precision Floating-Point Values|divps +DIVSD|Divide Scalar Double Precision Floating-Point Value|divsd +DIVSS|Divide Scalar Single Precision Floating-Point Values|divss +DPPD|Dot Product of Packed Double Precision Floating-Point Values|dppd +DPPS|Dot Product of Packed Single Precision Floating-Point Values|dpps +EMMS|Empty MMX Technology State|emms +ENCODEKEY128|Encode 128-Bit Key With Key Locker|encodekey128 +ENCODEKEY256|Encode 256-Bit Key With Key Locker|encodekey256 +ENDBR32|Terminate an Indirect Branch in 32-bit and Compatibility Mode|endbr32 +ENDBR64|Terminate an Indirect Branch in 64-bit Mode|endbr64 +ENQCMD|Enqueue Command|enqcmd +ENQCMDS|Enqueue Command Supervisor|enqcmds +ENTER|Make Stack Frame for Procedure Parameters|enter +EXTRACTPS|Extract Packed Floating-Point Values|extractps +F2XM1|Compute 2x–1|f2xm1 +FABS|Absolute Value|fabs +FADD|Add|fadd.faddp.fiadd +FADDP|Add|fadd.faddp.fiadd +FBLD|Load Binary Coded Decimal|fbld +FBSTP|Store BCD Integer and Pop|fbstp +FCHS|Change Sign|fchs +FCLEX|Clear Exceptions|fclex.fnclex +FCMOVcc|Floating-Point Conditional Move|fcmovcc +FCOM|Compare Floating-Point Values|fcom.fcomp.fcompp +FCOMI|Compare Floating-Point Values and Set EFLAGS|fcomi.fcomip.fucomi.fucomip +FCOMIP|Compare Floating-Point Values and Set EFLAGS|fcomi.fcomip.fucomi.fucomip +FCOMP|Compare Floating-Point Values|fcom.fcomp.fcompp +FCOMPP|Compare Floating-Point Values|fcom.fcomp.fcompp +FCOS|Cosine|fcos +FDECSTP|Decrement Stack-Top Pointer|fdecstp +FDIV|Divide|fdiv.fdivp.fidiv 
+FDIVP|Divide|fdiv.fdivp.fidiv +FDIVR|Reverse Divide|fdivr.fdivrp.fidivr +FDIVRP|Reverse Divide|fdivr.fdivrp.fidivr +FFREE|Free Floating-Point Register|ffree +FIADD|Add|fadd.faddp.fiadd +FICOM|Compare Integer|ficom.ficomp +FICOMP|Compare Integer|ficom.ficomp +FIDIV|Divide|fdiv.fdivp.fidiv +FIDIVR|Reverse Divide|fdivr.fdivrp.fidivr +FILD|Load Integer|fild +FIMUL|Multiply|fmul.fmulp.fimul +FINCSTP|Increment Stack-Top Pointer|fincstp +FINIT|Initialize Floating-Point Unit|finit.fninit +FIST|Store Integer|fist.fistp +FISTP|Store Integer|fist.fistp +FISTTP|Store Integer With Truncation|fisttp +FISUB|Subtract|fsub.fsubp.fisub +FISUBR|Reverse Subtract|fsubr.fsubrp.fisubr +FLD|Load Floating-Point Value|fld +FLD1|Load Constant|fld1.fldl2t.fldl2e.fldpi.fldlg2.fldln2.fldz +FLDCW|Load x87 FPU Control Word|fldcw +FLDENV|Load x87 FPU Environment|fldenv +FLDL2E|Load Constant|fld1.fldl2t.fldl2e.fldpi.fldlg2.fldln2.fldz +FLDL2T|Load Constant|fld1.fldl2t.fldl2e.fldpi.fldlg2.fldln2.fldz +FLDLG2|Load Constant|fld1.fldl2t.fldl2e.fldpi.fldlg2.fldln2.fldz +FLDLN2|Load Constant|fld1.fldl2t.fldl2e.fldpi.fldlg2.fldln2.fldz +FLDPI|Load Constant|fld1.fldl2t.fldl2e.fldpi.fldlg2.fldln2.fldz +FLDZ|Load Constant|fld1.fldl2t.fldl2e.fldpi.fldlg2.fldln2.fldz +FMUL|Multiply|fmul.fmulp.fimul +FMULP|Multiply|fmul.fmulp.fimul +FNCLEX|Clear Exceptions|fclex.fnclex +FNINIT|Initialize Floating-Point Unit|finit.fninit +FNOP|No Operation|fnop +FNSAVE|Store x87 FPU State|fsave.fnsave +FNSTCW|Store x87 FPU Control Word|fstcw.fnstcw +FNSTENV|Store x87 FPU Environment|fstenv.fnstenv +FNSTSW|Store x87 FPU Status Word|fstsw.fnstsw +FPATAN|Partial Arctangent|fpatan +FPREM|Partial Remainder|fprem +FPREM1|Partial Remainder|fprem1 +FPTAN|Partial Tangent|fptan +FRNDINT|Round to Integer|frndint +FRSTOR|Restore x87 FPU State|frstor +FSAVE|Store x87 FPU State|fsave.fnsave +FSCALE|Scale|fscale +FSIN|Sine|fsin +FSINCOS|Sine and Cosine|fsincos +FSQRT|Square Root|fsqrt +FST|Store Floating-Point Value|fst.fstp +FSTCW|Store x87 FPU Control Word|fstcw.fnstcw +FSTENV|Store x87 FPU Environment|fstenv.fnstenv +FSTP|Store Floating-Point Value|fst.fstp +FSTSW|Store x87 FPU Status Word|fstsw.fnstsw +FSUB|Subtract|fsub.fsubp.fisub +FSUBP|Subtract|fsub.fsubp.fisub +FSUBR|Reverse Subtract|fsubr.fsubrp.fisubr +FSUBRP|Reverse Subtract|fsubr.fsubrp.fisubr +FTST|TEST|ftst +FUCOM|Unordered Compare Floating-Point Values|fucom.fucomp.fucompp +FUCOMI|Compare Floating-Point Values and Set EFLAGS|fcomi.fcomip.fucomi.fucomip +FUCOMIP|Compare Floating-Point Values and Set EFLAGS|fcomi.fcomip.fucomi.fucomip +FUCOMP|Unordered Compare Floating-Point Values|fucom.fucomp.fucompp +FUCOMPP|Unordered Compare Floating-Point Values|fucom.fucomp.fucompp +FWAIT|Wait|wait.fwait +FXAM|Examine Floating-Point|fxam +FXCH|Exchange Register Contents|fxch +FXRSTOR|Restore x87 FPU, MMX, XMM, and MXCSR State|fxrstor +FXSAVE|Save x87 FPU, MMX Technology, and SSE State|fxsave +FXTRACT|Extract Exponent and Significand|fxtract +FYL2X|Compute y ∗ log2x|fyl2x +FYL2XP1|Compute y ∗ log2(x +1)|fyl2xp1 +GF2P8AFFINEINVQB|Galois Field Affine Transformation Inverse|gf2p8affineinvqb +GF2P8AFFINEQB|Galois Field Affine Transformation|gf2p8affineqb +GF2P8MULB|Galois Field Multiply Bytes|gf2p8mulb +HADDPD|Packed Double Precision Floating-Point Horizontal Add|haddpd +HADDPS|Packed Single Precision Floating-Point Horizontal Add|haddps +HLT|Halt|hlt +HRESET|History Reset|hreset +HSUBPD|Packed Double Precision Floating-Point Horizontal Subtract|hsubpd +HSUBPS|Packed Single Precision Floating-Point Horizontal 
Subtract|hsubps +IDIV|Signed Divide|idiv +IMUL|Signed Multiply|imul +IN|Input From Port|in +INC|Increment by 1|inc +INCSSPD|Increment Shadow Stack Pointer|incsspd.incsspq +INCSSPQ|Increment Shadow Stack Pointer|incsspd.incsspq +INS|Input from Port to String|ins.insb.insw.insd +INSB|Input from Port to String|ins.insb.insw.insd +INSD|Input from Port to String|ins.insb.insw.insd +INSERTPS|Insert Scalar Single Precision Floating-Point Value|insertps +INSW|Input from Port to String|ins.insb.insw.insd +INT n|Call to Interrupt Procedure|intn.into.int3.int1 +INT1|Call to Interrupt Procedure|intn.into.int3.int1 +INT3|Call to Interrupt Procedure|intn.into.int3.int1 +INTO|Call to Interrupt Procedure|intn.into.int3.int1 +INVD|Invalidate Internal Caches|invd +INVLPG|Invalidate TLB Entries|invlpg +INVPCID|Invalidate Process-Context Identifier|invpcid +IRET|Interrupt Return|iret.iretd.iretq +IRETD|Interrupt Return|iret.iretd.iretq +IRETQ|Interrupt Return|iret.iretd.iretq +JMP|Jump|jmp +Jcc|Jump if Condition Is Met|jcc +KADDB|ADD Two Masks|kaddw.kaddb.kaddq.kaddd +KADDD|ADD Two Masks|kaddw.kaddb.kaddq.kaddd +KADDQ|ADD Two Masks|kaddw.kaddb.kaddq.kaddd +KADDW|ADD Two Masks|kaddw.kaddb.kaddq.kaddd +KANDB|Bitwise Logical AND Masks|kandw.kandb.kandq.kandd +KANDD|Bitwise Logical AND Masks|kandw.kandb.kandq.kandd +KANDNB|Bitwise Logical AND NOT Masks|kandnw.kandnb.kandnq.kandnd +KANDND|Bitwise Logical AND NOT Masks|kandnw.kandnb.kandnq.kandnd +KANDNQ|Bitwise Logical AND NOT Masks|kandnw.kandnb.kandnq.kandnd +KANDNW|Bitwise Logical AND NOT Masks|kandnw.kandnb.kandnq.kandnd +KANDQ|Bitwise Logical AND Masks|kandw.kandb.kandq.kandd +KANDW|Bitwise Logical AND Masks|kandw.kandb.kandq.kandd +KMOVB|Move From and to Mask Registers|kmovw.kmovb.kmovq.kmovd +KMOVD|Move From and to Mask Registers|kmovw.kmovb.kmovq.kmovd +KMOVQ|Move From and to Mask Registers|kmovw.kmovb.kmovq.kmovd +KMOVW|Move From and to Mask Registers|kmovw.kmovb.kmovq.kmovd +KNOTB|NOT Mask Register|knotw.knotb.knotq.knotd +KNOTD|NOT Mask Register|knotw.knotb.knotq.knotd +KNOTQ|NOT Mask Register|knotw.knotb.knotq.knotd +KNOTW|NOT Mask Register|knotw.knotb.knotq.knotd +KORB|Bitwise Logical OR Masks|korw.korb.korq.kord +KORD|Bitwise Logical OR Masks|korw.korb.korq.kord +KORQ|Bitwise Logical OR Masks|korw.korb.korq.kord +KORTESTB|OR Masks and Set Flags|kortestw.kortestb.kortestq.kortestd +KORTESTD|OR Masks and Set Flags|kortestw.kortestb.kortestq.kortestd +KORTESTQ|OR Masks and Set Flags|kortestw.kortestb.kortestq.kortestd +KORTESTW|OR Masks and Set Flags|kortestw.kortestb.kortestq.kortestd +KORW|Bitwise Logical OR Masks|korw.korb.korq.kord +KSHIFTLB|Shift Left Mask Registers|kshiftlw.kshiftlb.kshiftlq.kshiftld +KSHIFTLD|Shift Left Mask Registers|kshiftlw.kshiftlb.kshiftlq.kshiftld +KSHIFTLQ|Shift Left Mask Registers|kshiftlw.kshiftlb.kshiftlq.kshiftld +KSHIFTLW|Shift Left Mask Registers|kshiftlw.kshiftlb.kshiftlq.kshiftld +KSHIFTRB|Shift Right Mask Registers|kshiftrw.kshiftrb.kshiftrq.kshiftrd +KSHIFTRD|Shift Right Mask Registers|kshiftrw.kshiftrb.kshiftrq.kshiftrd +KSHIFTRQ|Shift Right Mask Registers|kshiftrw.kshiftrb.kshiftrq.kshiftrd +KSHIFTRW|Shift Right Mask Registers|kshiftrw.kshiftrb.kshiftrq.kshiftrd +KTESTB|Packed Bit Test Masks and Set Flags|ktestw.ktestb.ktestq.ktestd +KTESTD|Packed Bit Test Masks and Set Flags|ktestw.ktestb.ktestq.ktestd +KTESTQ|Packed Bit Test Masks and Set Flags|ktestw.ktestb.ktestq.ktestd +KTESTW|Packed Bit Test Masks and Set Flags|ktestw.ktestb.ktestq.ktestd +KUNPCKBW|Unpack for Mask Registers|kunpckbw.kunpckwd.kunpckdq 
+KUNPCKDQ|Unpack for Mask Registers|kunpckbw.kunpckwd.kunpckdq +KUNPCKWD|Unpack for Mask Registers|kunpckbw.kunpckwd.kunpckdq +KXNORB|Bitwise Logical XNOR Masks|kxnorw.kxnorb.kxnorq.kxnord +KXNORD|Bitwise Logical XNOR Masks|kxnorw.kxnorb.kxnorq.kxnord +KXNORQ|Bitwise Logical XNOR Masks|kxnorw.kxnorb.kxnorq.kxnord +KXNORW|Bitwise Logical XNOR Masks|kxnorw.kxnorb.kxnorq.kxnord +KXORB|Bitwise Logical XOR Masks|kxorw.kxorb.kxorq.kxord +KXORD|Bitwise Logical XOR Masks|kxorw.kxorb.kxorq.kxord +KXORQ|Bitwise Logical XOR Masks|kxorw.kxorb.kxorq.kxord +KXORW|Bitwise Logical XOR Masks|kxorw.kxorb.kxorq.kxord +LAHF|Load Status Flags Into AH Register|lahf +LAR|Load Access Rights Byte|lar +LDDQU|Load Unaligned Integer 128 Bits|lddqu +LDMXCSR|Load MXCSR Register|ldmxcsr +LDS|Load Far Pointer|lds.les.lfs.lgs.lss +LDTILECFG|Load Tile Configuration|ldtilecfg +LEA|Load Effective Address|lea +LEAVE|High Level Procedure Exit|leave +LES|Load Far Pointer|lds.les.lfs.lgs.lss +LFENCE|Load Fence|lfence +LFS|Load Far Pointer|lds.les.lfs.lgs.lss +LGDT|Load Global/Interrupt Descriptor Table Register|lgdt.lidt +LGS|Load Far Pointer|lds.les.lfs.lgs.lss +LIDT|Load Global/Interrupt Descriptor Table Register|lgdt.lidt +LLDT|Load Local Descriptor Table Register|lldt +LMSW|Load Machine Status Word|lmsw +LOADIWKEY|Load Internal Wrapping Key With Key Locker|loadiwkey +LOCK|Assert LOCK# Signal Prefix|lock +LODS|Load String|lods.lodsb.lodsw.lodsd.lodsq +LODSB|Load String|lods.lodsb.lodsw.lodsd.lodsq +LODSD|Load String|lods.lodsb.lodsw.lodsd.lodsq +LODSQ|Load String|lods.lodsb.lodsw.lodsd.lodsq +LODSW|Load String|lods.lodsb.lodsw.lodsd.lodsq +LOOP|Loop According to ECX Counter|loop.loopcc +LOOPcc|Loop According to ECX Counter|loop.loopcc +LSL|Load Segment Limit|lsl +LSS|Load Far Pointer|lds.les.lfs.lgs.lss +LTR|Load Task Register|ltr +LZCNT|Count the Number of Leading Zero Bits|lzcnt +MASKMOVDQU|Store Selected Bytes of Double Quadword|maskmovdqu +MASKMOVQ|Store Selected Bytes of Quadword|maskmovq +MAXPD|Maximum of Packed Double Precision Floating-Point Values|maxpd +MAXPS|Maximum of Packed Single Precision Floating-Point Values|maxps +MAXSD|Return Maximum Scalar Double Precision Floating-Point Value|maxsd +MAXSS|Return Maximum Scalar Single Precision Floating-Point Value|maxss +MFENCE|Memory Fence|mfence +MINPD|Minimum of Packed Double Precision Floating-Point Values|minpd +MINPS|Minimum of Packed Single Precision Floating-Point Values|minps +MINSD|Return Minimum Scalar Double Precision Floating-Point Value|minsd +MINSS|Return Minimum Scalar Single Precision Floating-Point Value|minss +MONITOR|Set Up Monitor Address|monitor +MOV|Move|mov +MOV|Move to/from Control Registers|mov-1 +MOV|Move to/from Debug Registers|mov-2 +MOVAPD|Move Aligned Packed Double Precision Floating-Point Values|movapd +MOVAPS|Move Aligned Packed Single Precision Floating-Point Values|movaps +MOVBE|Move Data After Swapping Bytes|movbe +MOVD|Move Doubleword/Move Quadword|movd.movq +MOVDDUP|Replicate Double Precision Floating-Point Values|movddup +MOVDIR64B|Move 64 Bytes as Direct Store|movdir64b +MOVDIRI|Move Doubleword as Direct Store|movdiri +MOVDQ2Q|Move Quadword from XMM to MMX Technology Register|movdq2q +MOVDQA|Move Aligned Packed Integer Values|movdqa.vmovdqa32.vmovdqa64 +MOVDQU|Move Unaligned Packed Integer Values|movdqu.vmovdqu8.vmovdqu16.vmovdqu32.vmovdqu64 +MOVHLPS|Move Packed Single Precision Floating-Point Values High to Low|movhlps +MOVHPD|Move High Packed Double Precision Floating-Point Value|movhpd +MOVHPS|Move High Packed Single Precision 
Floating-Point Values|movhps +MOVLHPS|Move Packed Single Precision Floating-Point Values Low to High|movlhps +MOVLPD|Move Low Packed Double Precision Floating-Point Value|movlpd +MOVLPS|Move Low Packed Single Precision Floating-Point Values|movlps +MOVMSKPD|Extract Packed Double Precision Floating-Point Sign Mask|movmskpd +MOVMSKPS|Extract Packed Single Precision Floating-Point Sign Mask|movmskps +MOVNTDQ|Store Packed Integers Using Non-Temporal Hint|movntdq +MOVNTDQA|Load Double Quadword Non-Temporal Aligned Hint|movntdqa +MOVNTI|Store Doubleword Using Non-Temporal Hint|movnti +MOVNTPD|Store Packed Double Precision Floating-Point Values Using Non-Temporal Hint|movntpd +MOVNTPS|Store Packed Single Precision Floating-Point Values Using Non-Temporal Hint|movntps +MOVNTQ|Store of Quadword Using Non-Temporal Hint|movntq +MOVQ|Move Doubleword/Move Quadword|movd.movq +MOVQ|Move Quadword|movq +MOVQ2DQ|Move Quadword from MMX Technology to XMM Register|movq2dq +MOVS|Move Data From String to String|movs.movsb.movsw.movsd.movsq +MOVSB|Move Data From String to String|movs.movsb.movsw.movsd.movsq +MOVSD|Move Data From String to String|movs.movsb.movsw.movsd.movsq +MOVSD|Move or Merge Scalar Double Precision Floating-Point Value|movsd +MOVSHDUP|Replicate Single Precision Floating-Point Values|movshdup +MOVSLDUP|Replicate Single Precision Floating-Point Values|movsldup +MOVSQ|Move Data From String to String|movs.movsb.movsw.movsd.movsq +MOVSS|Move or Merge Scalar Single Precision Floating-Point Value|movss +MOVSW|Move Data From String to String|movs.movsb.movsw.movsd.movsq +MOVSX|Move With Sign-Extension|movsx.movsxd +MOVSXD|Move With Sign-Extension|movsx.movsxd +MOVUPD|Move Unaligned Packed Double Precision Floating-Point Values|movupd +MOVUPS|Move Unaligned Packed Single Precision Floating-Point Values|movups +MOVZX|Move With Zero-Extend|movzx +MPSADBW|Compute Multiple Packed Sums of Absolute Difference|mpsadbw +MUL|Unsigned Multiply|mul +MULPD|Multiply Packed Double Precision Floating-Point Values|mulpd +MULPS|Multiply Packed Single Precision Floating-Point Values|mulps +MULSD|Multiply Scalar Double Precision Floating-Point Value|mulsd +MULSS|Multiply Scalar Single Precision Floating-Point Values|mulss +MULX|Unsigned Multiply Without Affecting Flags|mulx +MWAIT|Monitor Wait|mwait +NEG|Two's Complement Negation|neg +NOP|No Operation|nop +NOT|One's Complement Negation|not +OR|Logical Inclusive OR|or +ORPD|Bitwise Logical OR of Packed Double Precision Floating-Point Values|orpd +ORPS|Bitwise Logical OR of Packed Single Precision Floating-Point Values|orps +OUT|Output to Port|out +OUTS|Output String to Port|outs.outsb.outsw.outsd +OUTSB|Output String to Port|outs.outsb.outsw.outsd +OUTSD|Output String to Port|outs.outsb.outsw.outsd +OUTSW|Output String to Port|outs.outsb.outsw.outsd +PABSB|Packed Absolute Value|pabsb.pabsw.pabsd.pabsq +PABSD|Packed Absolute Value|pabsb.pabsw.pabsd.pabsq +PABSQ|Packed Absolute Value|pabsb.pabsw.pabsd.pabsq +PABSW|Packed Absolute Value|pabsb.pabsw.pabsd.pabsq +PACKSSDW|Pack With Signed Saturation|packsswb.packssdw +PACKSSWB|Pack With Signed Saturation|packsswb.packssdw +PACKUSDW|Pack With Unsigned Saturation|packusdw +PACKUSWB|Pack With Unsigned Saturation|packuswb +PADDB|Add Packed Integers|paddb.paddw.paddd.paddq +PADDD|Add Packed Integers|paddb.paddw.paddd.paddq +PADDQ|Add Packed Integers|paddb.paddw.paddd.paddq +PADDSB|Add Packed Signed Integers with Signed Saturation|paddsb.paddsw +PADDSW|Add Packed Signed Integers with Signed Saturation|paddsb.paddsw +PADDUSB|Add 
Packed Unsigned Integers With Unsigned Saturation|paddusb.paddusw +PADDUSW|Add Packed Unsigned Integers With Unsigned Saturation|paddusb.paddusw +PADDW|Add Packed Integers|paddb.paddw.paddd.paddq +PALIGNR|Packed Align Right|palignr +PAND|Logical AND|pand +PANDN|Logical AND NOT|pandn +PAUSE|Spin Loop Hint|pause +PAVGB|Average Packed Integers|pavgb.pavgw +PAVGW|Average Packed Integers|pavgb.pavgw +PBLENDVB|Variable Blend Packed Bytes|pblendvb +PBLENDW|Blend Packed Words|pblendw +PCLMULQDQ|Carry-Less Multiplication Quadword|pclmulqdq +PCMPEQB|Compare Packed Data for Equal|pcmpeqb.pcmpeqw.pcmpeqd +PCMPEQD|Compare Packed Data for Equal|pcmpeqb.pcmpeqw.pcmpeqd +PCMPEQQ|Compare Packed Qword Data for Equal|pcmpeqq +PCMPEQW|Compare Packed Data for Equal|pcmpeqb.pcmpeqw.pcmpeqd +PCMPESTRI|Packed Compare Explicit Length Strings, Return Index|pcmpestri +PCMPESTRM|Packed Compare Explicit Length Strings, Return Mask|pcmpestrm +PCMPGTB|Compare Packed Signed Integers for Greater Than|pcmpgtb.pcmpgtw.pcmpgtd +PCMPGTD|Compare Packed Signed Integers for Greater Than|pcmpgtb.pcmpgtw.pcmpgtd +PCMPGTQ|Compare Packed Data for Greater Than|pcmpgtq +PCMPGTW|Compare Packed Signed Integers for Greater Than|pcmpgtb.pcmpgtw.pcmpgtd +PCMPISTRI|Packed Compare Implicit Length Strings, Return Index|pcmpistri +PCMPISTRM|Packed Compare Implicit Length Strings, Return Mask|pcmpistrm +PCONFIG|Platform Configuration|pconfig +PDEP|Parallel Bits Deposit|pdep +PEXT|Parallel Bits Extract|pext +PEXTRB|Extract Byte/Dword/Qword|pextrb.pextrd.pextrq +PEXTRD|Extract Byte/Dword/Qword|pextrb.pextrd.pextrq +PEXTRQ|Extract Byte/Dword/Qword|pextrb.pextrd.pextrq +PEXTRW|Extract Word|pextrw +PHADDD|Packed Horizontal Add|phaddw.phaddd +PHADDSW|Packed Horizontal Add and Saturate|phaddsw +PHADDW|Packed Horizontal Add|phaddw.phaddd +PHMINPOSUW|Packed Horizontal Word Minimum|phminposuw +PHSUBD|Packed Horizontal Subtract|phsubw.phsubd +PHSUBSW|Packed Horizontal Subtract and Saturate|phsubsw +PHSUBW|Packed Horizontal Subtract|phsubw.phsubd +PINSRB|Insert Byte/Dword/Qword|pinsrb.pinsrd.pinsrq +PINSRD|Insert Byte/Dword/Qword|pinsrb.pinsrd.pinsrq +PINSRQ|Insert Byte/Dword/Qword|pinsrb.pinsrd.pinsrq +PINSRW|Insert Word|pinsrw +PMADDUBSW|Multiply and Add Packed Signed and Unsigned Bytes|pmaddubsw +PMADDWD|Multiply and Add Packed Integers|pmaddwd +PMAXSB|Maximum of Packed Signed Integers|pmaxsb.pmaxsw.pmaxsd.pmaxsq +PMAXSD|Maximum of Packed Signed Integers|pmaxsb.pmaxsw.pmaxsd.pmaxsq +PMAXSQ|Maximum of Packed Signed Integers|pmaxsb.pmaxsw.pmaxsd.pmaxsq +PMAXSW|Maximum of Packed Signed Integers|pmaxsb.pmaxsw.pmaxsd.pmaxsq +PMAXUB|Maximum of Packed Unsigned Integers|pmaxub.pmaxuw +PMAXUD|Maximum of Packed Unsigned Integers|pmaxud.pmaxuq +PMAXUQ|Maximum of Packed Unsigned Integers|pmaxud.pmaxuq +PMAXUW|Maximum of Packed Unsigned Integers|pmaxub.pmaxuw +PMINSB|Minimum of Packed Signed Integers|pminsb.pminsw +PMINSD|Minimum of Packed Signed Integers|pminsd.pminsq +PMINSQ|Minimum of Packed Signed Integers|pminsd.pminsq +PMINSW|Minimum of Packed Signed Integers|pminsb.pminsw +PMINUB|Minimum of Packed Unsigned Integers|pminub.pminuw +PMINUD|Minimum of Packed Unsigned Integers|pminud.pminuq +PMINUQ|Minimum of Packed Unsigned Integers|pminud.pminuq +PMINUW|Minimum of Packed Unsigned Integers|pminub.pminuw +PMOVMSKB|Move Byte Mask|pmovmskb +PMOVSX|Packed Move With Sign Extend|pmovsx +PMOVZX|Packed Move With Zero Extend|pmovzx +PMULDQ|Multiply Packed Doubleword Integers|pmuldq +PMULHRSW|Packed Multiply High With Round and Scale|pmulhrsw +PMULHUW|Multiply Packed 
Unsigned Integers and Store High Result|pmulhuw +PMULHW|Multiply Packed Signed Integers and Store High Result|pmulhw +PMULLD|Multiply Packed Integers and Store Low Result|pmulld.pmullq +PMULLQ|Multiply Packed Integers and Store Low Result|pmulld.pmullq +PMULLW|Multiply Packed Signed Integers and Store Low Result|pmullw +PMULUDQ|Multiply Packed Unsigned Doubleword Integers|pmuludq +POP|Pop a Value From the Stack|pop +POPA|Pop All General-Purpose Registers|popa.popad +POPAD|Pop All General-Purpose Registers|popa.popad +POPCNT|Return the Count of Number of Bits Set to 1|popcnt +POPF|Pop Stack Into EFLAGS Register|popf.popfd.popfq +POPFD|Pop Stack Into EFLAGS Register|popf.popfd.popfq +POPFQ|Pop Stack Into EFLAGS Register|popf.popfd.popfq +POR|Bitwise Logical OR|por +PREFETCHW|Prefetch Data Into Caches in Anticipation of a Write|prefetchw +PREFETCHh|Prefetch Data Into Caches|prefetchh +PSADBW|Compute Sum of Absolute Differences|psadbw +PSHUFB|Packed Shuffle Bytes|pshufb +PSHUFD|Shuffle Packed Doublewords|pshufd +PSHUFHW|Shuffle Packed High Words|pshufhw +PSHUFLW|Shuffle Packed Low Words|pshuflw +PSHUFW|Shuffle Packed Words|pshufw +PSIGNB|Packed SIGN|psignb.psignw.psignd +PSIGND|Packed SIGN|psignb.psignw.psignd +PSIGNW|Packed SIGN|psignb.psignw.psignd +PSLLD|Shift Packed Data Left Logical|psllw.pslld.psllq +PSLLDQ|Shift Double Quadword Left Logical|pslldq +PSLLQ|Shift Packed Data Left Logical|psllw.pslld.psllq +PSLLW|Shift Packed Data Left Logical|psllw.pslld.psllq +PSRAD|Shift Packed Data Right Arithmetic|psraw.psrad.psraq +PSRAQ|Shift Packed Data Right Arithmetic|psraw.psrad.psraq +PSRAW|Shift Packed Data Right Arithmetic|psraw.psrad.psraq +PSRLD|Shift Packed Data Right Logical|psrlw.psrld.psrlq +PSRLDQ|Shift Double Quadword Right Logical|psrldq +PSRLQ|Shift Packed Data Right Logical|psrlw.psrld.psrlq +PSRLW|Shift Packed Data Right Logical|psrlw.psrld.psrlq +PSUBB|Subtract Packed Integers|psubb.psubw.psubd +PSUBD|Subtract Packed Integers|psubb.psubw.psubd +PSUBQ|Subtract Packed Quadword Integers|psubq +PSUBSB|Subtract Packed Signed Integers With Signed Saturation|psubsb.psubsw +PSUBSW|Subtract Packed Signed Integers With Signed Saturation|psubsb.psubsw +PSUBUSB|Subtract Packed Unsigned Integers With Unsigned Saturation|psubusb.psubusw +PSUBUSW|Subtract Packed Unsigned Integers With Unsigned Saturation|psubusb.psubusw +PSUBW|Subtract Packed Integers|psubb.psubw.psubd +PTEST|Logical Compare|ptest +PTWRITE|Write Data to a Processor Trace Packet|ptwrite +PUNPCKHBW|Unpack High Data|punpckhbw.punpckhwd.punpckhdq.punpckhqdq +PUNPCKHDQ|Unpack High Data|punpckhbw.punpckhwd.punpckhdq.punpckhqdq +PUNPCKHQDQ|Unpack High Data|punpckhbw.punpckhwd.punpckhdq.punpckhqdq +PUNPCKHWD|Unpack High Data|punpckhbw.punpckhwd.punpckhdq.punpckhqdq +PUNPCKLBW|Unpack Low Data|punpcklbw.punpcklwd.punpckldq.punpcklqdq +PUNPCKLDQ|Unpack Low Data|punpcklbw.punpcklwd.punpckldq.punpcklqdq +PUNPCKLQDQ|Unpack Low Data|punpcklbw.punpcklwd.punpckldq.punpcklqdq +PUNPCKLWD|Unpack Low Data|punpcklbw.punpcklwd.punpckldq.punpcklqdq +PUSH|Push Word, Doubleword, or Quadword Onto the Stack|push +PUSHA|Push All General-Purpose Registers|pusha.pushad +PUSHAD|Push All General-Purpose Registers|pusha.pushad +PUSHF|Push EFLAGS Register Onto the Stack|pushf.pushfd.pushfq +PUSHFD|Push EFLAGS Register Onto the Stack|pushf.pushfd.pushfq +PUSHFQ|Push EFLAGS Register Onto the Stack|pushf.pushfd.pushfq +PXOR|Logical Exclusive OR|pxor +RCL|Rotate|rcl.rcr.rol.ror +RCPPS|Compute Reciprocals of Packed Single Precision Floating-Point Values|rcpps 
+RCPSS|Compute Reciprocal of Scalar Single Precision Floating-Point Values|rcpss +RCR|Rotate|rcl.rcr.rol.ror +RDFSBASE|Read FS/GS Segment Base|rdfsbase.rdgsbase +RDGSBASE|Read FS/GS Segment Base|rdfsbase.rdgsbase +RDMSR|Read From Model Specific Register|rdmsr +RDPID|Read Processor ID|rdpid +RDPKRU|Read Protection Key Rights for User Pages|rdpkru +RDPMC|Read Performance-Monitoring Counters|rdpmc +RDRAND|Read Random Number|rdrand +RDSEED|Read Random SEED|rdseed +RDSSPD|Read Shadow Stack Pointer|rdsspd.rdsspq +RDSSPQ|Read Shadow Stack Pointer|rdsspd.rdsspq +RDTSC|Read Time-Stamp Counter|rdtsc +RDTSCP|Read Time-Stamp Counter and Processor ID|rdtscp +REP|Repeat String Operation Prefix|rep.repe.repz.repne.repnz +REPE|Repeat String Operation Prefix|rep.repe.repz.repne.repnz +REPNE|Repeat String Operation Prefix|rep.repe.repz.repne.repnz +REPNZ|Repeat String Operation Prefix|rep.repe.repz.repne.repnz +REPZ|Repeat String Operation Prefix|rep.repe.repz.repne.repnz +RET|Return From Procedure|ret +ROL|Rotate|rcl.rcr.rol.ror +ROR|Rotate|rcl.rcr.rol.ror +RORX|Rotate Right Logical Without Affecting Flags|rorx +ROUNDPD|Round Packed Double Precision Floating-Point Values|roundpd +ROUNDPS|Round Packed Single Precision Floating-Point Values|roundps +ROUNDSD|Round Scalar Double Precision Floating-Point Values|roundsd +ROUNDSS|Round Scalar Single Precision Floating-Point Values|roundss +RSM|Resume From System Management Mode|rsm +RSQRTPS|Compute Reciprocals of Square Roots of Packed Single Precision Floating-PointValues|rsqrtps +RSQRTSS|Compute Reciprocal of Square Root of Scalar Single Precision Floating-Point Value|rsqrtss +RSTORSSP|Restore Saved Shadow Stack Pointer|rstorssp +SAHF|Store AH Into Flags|sahf +SAL|Shift|sal.sar.shl.shr +SAR|Shift|sal.sar.shl.shr +SARX|Shift Without Affecting Flags|sarx.shlx.shrx +SAVEPREVSSP|Save Previous Shadow Stack Pointer|saveprevssp +SBB|Integer Subtraction With Borrow|sbb +SCAS|Scan String|scas.scasb.scasw.scasd +SCASB|Scan String|scas.scasb.scasw.scasd +SCASD|Scan String|scas.scasb.scasw.scasd +SCASW|Scan String|scas.scasb.scasw.scasd +SENDUIPI|Send User Interprocessor Interrupt|senduipi +SERIALIZE|Serialize Instruction Execution|serialize +SETSSBSY|Mark Shadow Stack Busy|setssbsy +SETcc|Set Byte on Condition|setcc +SFENCE|Store Fence|sfence +SGDT|Store Global Descriptor Table Register|sgdt +SHA1MSG1|Perform an Intermediate Calculation for the Next Four SHA1 Message Dwords|sha1msg1 +SHA1MSG2|Perform a Final Calculation for the Next Four SHA1 Message Dwords|sha1msg2 +SHA1NEXTE|Calculate SHA1 State Variable E After Four Rounds|sha1nexte +SHA1RNDS4|Perform Four Rounds of SHA1 Operation|sha1rnds4 +SHA256MSG1|Perform an Intermediate Calculation for the Next Four SHA256 MessageDwords|sha256msg1 +SHA256MSG2|Perform a Final Calculation for the Next Four SHA256 Message Dwords|sha256msg2 +SHA256RNDS2|Perform Two Rounds of SHA256 Operation|sha256rnds2 +SHL|Shift|sal.sar.shl.shr +SHLD|Double Precision Shift Left|shld +SHLX|Shift Without Affecting Flags|sarx.shlx.shrx +SHR|Shift|sal.sar.shl.shr +SHRD|Double Precision Shift Right|shrd +SHRX|Shift Without Affecting Flags|sarx.shlx.shrx +SHUFPD|Packed Interleave Shuffle of Pairs of Double Precision Floating-Point Values|shufpd +SHUFPS|Packed Interleave Shuffle of Quadruplets of Single Precision Floating-Point Values|shufps +SIDT|Store Interrupt Descriptor Table Register|sidt +SLDT|Store Local Descriptor Table Register|sldt +SMSW|Store Machine Status Word|smsw +SQRTPD|Square Root of Double Precision Floating-Point Values|sqrtpd 
+SQRTPS|Square Root of Single Precision Floating-Point Values|sqrtps +SQRTSD|Compute Square Root of Scalar Double Precision Floating-Point Value|sqrtsd +SQRTSS|Compute Square Root of Scalar Single Precision Value|sqrtss +STAC|Set AC Flag in EFLAGS Register|stac +STC|Set Carry Flag|stc +STD|Set Direction Flag|std +STI|Set Interrupt Flag|sti +STMXCSR|Store MXCSR Register State|stmxcsr +STOS|Store String|stos.stosb.stosw.stosd.stosq +STOSB|Store String|stos.stosb.stosw.stosd.stosq +STOSD|Store String|stos.stosb.stosw.stosd.stosq +STOSQ|Store String|stos.stosb.stosw.stosd.stosq +STOSW|Store String|stos.stosb.stosw.stosd.stosq +STR|Store Task Register|str +STTILECFG|Store Tile Configuration|sttilecfg +STUI|Set User Interrupt Flag|stui +SUB|Subtract|sub +SUBPD|Subtract Packed Double Precision Floating-Point Values|subpd +SUBPS|Subtract Packed Single Precision Floating-Point Values|subps +SUBSD|Subtract Scalar Double Precision Floating-Point Value|subsd +SUBSS|Subtract Scalar Single Precision Floating-Point Value|subss +SWAPGS|Swap GS Base Register|swapgs +SYSCALL|Fast System Call|syscall +SYSENTER|Fast System Call|sysenter +SYSEXIT|Fast Return from Fast System Call|sysexit +SYSRET|Return From Fast System Call|sysret +TDPBF16PS|Dot Product of BF16 Tiles Accumulated into Packed Single Precision Tile|tdpbf16ps +TDPBSSD|Dot Product of Signed/Unsigned Bytes with Dword Accumulation|tdpbssd.tdpbsud.tdpbusd.tdpbuud +TDPBSUD|Dot Product of Signed/Unsigned Bytes with Dword Accumulation|tdpbssd.tdpbsud.tdpbusd.tdpbuud +TDPBUSD|Dot Product of Signed/Unsigned Bytes with Dword Accumulation|tdpbssd.tdpbsud.tdpbusd.tdpbuud +TDPBUUD|Dot Product of Signed/Unsigned Bytes with Dword Accumulation|tdpbssd.tdpbsud.tdpbusd.tdpbuud +TEST|Logical Compare|test +TESTUI|Determine User Interrupt Flag|testui +TILELOADD|Load Tile|tileloadd.tileloaddt1 +TILELOADDT1|Load Tile|tileloadd.tileloaddt1 +TILERELEASE|Release Tile|tilerelease +TILESTORED|Store Tile|tilestored +TILEZERO|Zero Tile|tilezero +TPAUSE|Timed PAUSE|tpause +TZCNT|Count the Number of Trailing Zero Bits|tzcnt +UCOMISD|Unordered Compare Scalar Double Precision Floating-Point Values and Set EFLAGS|ucomisd +UCOMISS|Unordered Compare Scalar Single Precision Floating-Point Values and Set EFLAGS|ucomiss +UD|Undefined Instruction|ud +UIRET|User-Interrupt Return|uiret +UMONITOR|User Level Set Up Monitor Address|umonitor +UMWAIT|User Level Monitor Wait|umwait +UNPCKHPD|Unpack and Interleave High Packed Double Precision Floating-Point Values|unpckhpd +UNPCKHPS|Unpack and Interleave High Packed Single Precision Floating-Point Values|unpckhps +UNPCKLPD|Unpack and Interleave Low Packed Double Precision Floating-Point Values|unpcklpd +UNPCKLPS|Unpack and Interleave Low Packed Single Precision Floating-Point Values|unpcklps +VADDPH|Add Packed FP16 Values|vaddph +VADDSH|Add Scalar FP16 Values|vaddsh +VALIGND|Align Doubleword/Quadword Vectors|valignd.valignq +VALIGNQ|Align Doubleword/Quadword Vectors|valignd.valignq +VBLENDMPD|Blend Float64/Float32 Vectors Using an OpMask Control|vblendmpd.vblendmps +VBLENDMPS|Blend Float64/Float32 Vectors Using an OpMask Control|vblendmpd.vblendmps +VBROADCAST|Load with Broadcast Floating-Point Data|vbroadcast +VCMPPH|Compare Packed FP16 Values|vcmpph +VCMPSH|Compare Scalar FP16 Values|vcmpsh +VCOMISH|Compare Scalar Ordered FP16 Values and Set EFLAGS|vcomish +VCOMPRESSPD|Store Sparse Packed Double Precision Floating-Point Values Into Dense Memory|vcompresspd +VCOMPRESSPS|Store Sparse Packed Single Precision Floating-Point Values Into Dense
Memory|vcompressps +VCOMPRESSW|Store Sparse Packed Byte/Word Integer Values Into Dense Memory/Register|vpcompressb.vcompressw +VCVTDQ2PH|Convert Packed Signed Doubleword Integers to Packed FP16 Values|vcvtdq2ph +VCVTNE2PS2BF16|Convert Two Packed Single Data to One Packed BF16 Data|vcvtne2ps2bf16 +VCVTNEPS2BF16|Convert Packed Single Data to Packed BF16 Data|vcvtneps2bf16 +VCVTPD2PH|Convert Packed Double Precision FP Values to Packed FP16 Values|vcvtpd2ph +VCVTPD2QQ|Convert Packed Double Precision Floating-Point Values to Packed Quadword Integers|vcvtpd2qq +VCVTPD2UDQ|Convert Packed Double Precision Floating-Point Values to Packed Unsigned Doubleword Integers|vcvtpd2udq +VCVTPD2UQQ|Convert Packed Double Precision Floating-Point Values to Packed Unsigned Quadword Integers|vcvtpd2uqq +VCVTPH2DQ|Convert Packed FP16 Values to Signed Doubleword Integers|vcvtph2dq +VCVTPH2PD|Convert Packed FP16 Values to FP64 Values|vcvtph2pd +VCVTPH2PS|Convert Packed FP16 Values to Single Precision Floating-Point Values|vcvtph2ps.vcvtph2psx +VCVTPH2PSX|Convert Packed FP16 Values to Single Precision Floating-Point Values|vcvtph2ps.vcvtph2psx +VCVTPH2QQ|Convert Packed FP16 Values to Signed Quadword Integer Values|vcvtph2qq +VCVTPH2UDQ|Convert Packed FP16 Values to Unsigned Doubleword Integers|vcvtph2udq +VCVTPH2UQQ|Convert Packed FP16 Values to Unsigned Quadword Integers|vcvtph2uqq +VCVTPH2UW|Convert Packed FP16 Values to Unsigned Word Integers|vcvtph2uw +VCVTPH2W|Convert Packed FP16 Values to Signed Word Integers|vcvtph2w +VCVTPS2PH|Convert Single-Precision FP Value to 16-bit FP Value|vcvtps2ph +VCVTPS2PHX|Convert Packed Single Precision Floating-Point Values to Packed FP16 Values|vcvtps2phx +VCVTPS2QQ|Convert Packed Single Precision Floating-Point Values to Packed Signed Quadword Integer Values|vcvtps2qq +VCVTPS2UDQ|Convert Packed Single Precision Floating-Point Values to Packed Unsigned Doubleword Integer Values|vcvtps2udq +VCVTPS2UQQ|Convert Packed Single Precision Floating-Point Values to Packed Unsigned Quadword Integer Values|vcvtps2uqq +VCVTQQ2PD|Convert Packed Quadword Integers to Packed Double Precision Floating-Point Values|vcvtqq2pd +VCVTQQ2PH|Convert Packed Signed Quadword Integers to Packed FP16 Values|vcvtqq2ph +VCVTQQ2PS|Convert Packed Quadword Integers to Packed Single Precision Floating-Point Values|vcvtqq2ps +VCVTSD2SH|Convert Low FP64 Value to an FP16 Value|vcvtsd2sh +VCVTSD2USI|Convert Scalar Double Precision Floating-Point Value to Unsigned Doubleword Integer|vcvtsd2usi +VCVTSH2SD|Convert Low FP16 Value to an FP64 Value|vcvtsh2sd +VCVTSH2SI|Convert Low FP16 Value to Signed Integer|vcvtsh2si +VCVTSH2SS|Convert Low FP16 Value to FP32 Value|vcvtsh2ss +VCVTSH2USI|Convert Low FP16 Value to Unsigned Integer|vcvtsh2usi +VCVTSI2SH|Convert a Signed Doubleword/Quadword Integer to an FP16 Value|vcvtsi2sh +VCVTSS2SH|Convert Low FP32 Value to an FP16 Value|vcvtss2sh +VCVTSS2USI|Convert Scalar Single Precision Floating-Point Value to Unsigned Doubleword Integer|vcvtss2usi +VCVTTPD2QQ|Convert With Truncation Packed Double Precision Floating-Point Values to Packed Quadword Integers|vcvttpd2qq +VCVTTPD2UDQ|Convert With Truncation Packed Double Precision Floating-Point Values to Packed Unsigned Doubleword Integers|vcvttpd2udq +VCVTTPD2UQQ|Convert With Truncation Packed Double Precision Floating-Point Values to Packed Unsigned Quadword Integers|vcvttpd2uqq +VCVTTPH2DQ|Convert with Truncation Packed FP16 Values to Signed Doubleword Integers|vcvttph2dq +VCVTTPH2QQ|Convert with Truncation Packed FP16 Values to Signed Quadword
Integers|vcvttph2qq +VCVTTPH2UDQ|Convert with Truncation Packed FP16 Values to Unsigned Doubleword Integers|vcvttph2udq +VCVTTPH2UQQ|Convert with Truncation Packed FP16 Values to Unsigned Quadword Integers|vcvttph2uqq +VCVTTPH2UW|Convert Packed FP16 Values to Unsigned Word Integers|vcvttph2uw +VCVTTPH2W|Convert Packed FP16 Values to Signed Word Integers|vcvttph2w +VCVTTPS2QQ|Convert With Truncation Packed Single Precision Floating-Point Values to Packed Signed Quadword Integer Values|vcvttps2qq +VCVTTPS2UDQ|Convert With Truncation Packed Single Precision Floating-Point Values to Packed Unsigned Doubleword Integer Values|vcvttps2udq +VCVTTPS2UQQ|Convert With Truncation Packed Single Precision Floating-Point Values to Packed Unsigned Quadword Integer Values|vcvttps2uqq +VCVTTSD2USI|Convert With Truncation Scalar Double Precision Floating-Point Value to Unsigned Integer|vcvttsd2usi +VCVTTSH2SI|Convert with Truncation Low FP16 Value to a Signed Integer|vcvttsh2si +VCVTTSH2USI|Convert with Truncation Low FP16 Value to an Unsigned Integer|vcvttsh2usi +VCVTTSS2USI|Convert With Truncation Scalar Single Precision Floating-Point Value to Unsigned Integer|vcvttss2usi +VCVTUDQ2PD|Convert Packed Unsigned Doubleword Integers to Packed Double Precision Floating-Point Values|vcvtudq2pd +VCVTUDQ2PH|Convert Packed Unsigned Doubleword Integers to Packed FP16 Values|vcvtudq2ph +VCVTUDQ2PS|Convert Packed Unsigned Doubleword Integers to Packed Single Precision Floating-Point Values|vcvtudq2ps +VCVTUQQ2PD|Convert Packed Unsigned Quadword Integers to Packed Double Precision Floating-Point Values|vcvtuqq2pd +VCVTUQQ2PH|Convert Packed Unsigned Quadword Integers to Packed FP16 Values|vcvtuqq2ph +VCVTUQQ2PS|Convert Packed Unsigned Quadword Integers to Packed Single Precision Floating-Point Values|vcvtuqq2ps +VCVTUSI2SD|Convert Unsigned Integer to Scalar Double Precision Floating-Point Value|vcvtusi2sd +VCVTUSI2SH|Convert Unsigned Doubleword Integer to an FP16 Value|vcvtusi2sh +VCVTUSI2SS|Convert Unsigned Integer to Scalar Single Precision Floating-Point Value|vcvtusi2ss +VCVTUW2PH|Convert Packed Unsigned Word Integers to FP16 Values|vcvtuw2ph +VCVTW2PH|Convert Packed Signed Word Integers to FP16 Values|vcvtw2ph +VDBPSADBW|Double Block Packed Sum-Absolute-Differences (SAD) on Unsigned Bytes|vdbpsadbw +VDIVPH|Divide Packed FP16 Values|vdivph +VDIVSH|Divide Scalar FP16 Values|vdivsh +VDPBF16PS|Dot Product of BF16 Pairs Accumulated Into Packed Single Precision|vdpbf16ps +VERR|Verify a Segment for Reading or Writing|verr.verw +VERW|Verify a Segment for Reading or Writing|verr.verw +VEXPANDPD|Load Sparse Packed Double Precision Floating-Point Values From Dense Memory|vexpandpd +VEXPANDPS|Load Sparse Packed Single Precision Floating-Point Values From Dense Memory|vexpandps +VEXTRACTF128|Extract Packed Floating-Point Values|vextractf128.vextractf32x4.vextractf64x2.vextractf32x8.vextractf64x4 +VEXTRACTF32x4|Extract Packed Floating-Point Values|vextractf128.vextractf32x4.vextractf64x2.vextractf32x8.vextractf64x4 +VEXTRACTF32x8|Extract Packed Floating-Point Values|vextractf128.vextractf32x4.vextractf64x2.vextractf32x8.vextractf64x4 +VEXTRACTF64x2|Extract Packed Floating-Point Values|vextractf128.vextractf32x4.vextractf64x2.vextractf32x8.vextractf64x4 +VEXTRACTF64x4|Extract Packed Floating-Point Values|vextractf128.vextractf32x4.vextractf64x2.vextractf32x8.vextractf64x4 +VEXTRACTI128|Extract Packed Integer Values|vextracti128.vextracti32x4.vextracti64x2.vextracti32x8.vextracti64x4 +VEXTRACTI32x4|Extract Packed Integer
Values|vextracti128.vextracti32x4.vextracti64x2.vextracti32x8.vextracti64x4 +VEXTRACTI32x8|ExtractPacked Integer Values|vextracti128.vextracti32x4.vextracti64x2.vextracti32x8.vextracti64x4 +VEXTRACTI64x2|ExtractPacked Integer Values|vextracti128.vextracti32x4.vextracti64x2.vextracti32x8.vextracti64x4 +VEXTRACTI64x4|ExtractPacked Integer Values|vextracti128.vextracti32x4.vextracti64x2.vextracti32x8.vextracti64x4 +VFCMADDCPH|Complex Multiply and Accumulate FP16 Values|vfcmaddcph.vfmaddcph +VFCMADDCSH|Complex Multiply and Accumulate Scalar FP16 Values|vfcmaddcsh.vfmaddcsh +VFCMULCPH|Complex Multiply FP16 Values|vfcmulcph.vfmulcph +VFCMULCSH|Complex Multiply Scalar FP16 Values|vfcmulcsh.vfmulcsh +VFIXUPIMMPD|Fix Up Special Packed Float64 Values|vfixupimmpd +VFIXUPIMMPS|Fix Up Special Packed Float32 Values|vfixupimmps +VFIXUPIMMSD|Fix Up Special Scalar Float64 Value|vfixupimmsd +VFIXUPIMMSS|Fix Up Special Scalar Float32 Value|vfixupimmss +VFMADD132PD|Fused Multiply-Add of Packed DoublePrecision Floating-Point Values|vfmadd132pd.vfmadd213pd.vfmadd231pd +VFMADD132PH|Fused Multiply-Add of Packed FP16 Values|vfmadd132ph.vfnmadd132ph.vfmadd213ph.vfnmadd213ph.vfmadd231ph.vfnmadd231ph +VFMADD132PS|Fused Multiply-Add of Packed SinglePrecision Floating-Point Values|vfmadd132ps.vfmadd213ps.vfmadd231ps +VFMADD132SD|Fused Multiply-Add of Scalar DoublePrecision Floating-Point Values|vfmadd132sd.vfmadd213sd.vfmadd231sd +VFMADD132SH|Fused Multiply-Add of Scalar FP16 Values|vfmadd132sh.vfnmadd132sh.vfmadd213sh.vfnmadd213sh.vfmadd231sh.vfnmadd231sh +VFMADD132SS|Fused Multiply-Add of Scalar Single PrecisionFloating-Point Values|vfmadd132ss.vfmadd213ss.vfmadd231ss +VFMADD213PD|Fused Multiply-Add of Packed DoublePrecision Floating-Point Values|vfmadd132pd.vfmadd213pd.vfmadd231pd +VFMADD213PH|Fused Multiply-Add of Packed FP16 Values|vfmadd132ph.vfnmadd132ph.vfmadd213ph.vfnmadd213ph.vfmadd231ph.vfnmadd231ph +VFMADD213PS|Fused Multiply-Add of Packed SinglePrecision Floating-Point Values|vfmadd132ps.vfmadd213ps.vfmadd231ps +VFMADD213SD|Fused Multiply-Add of Scalar DoublePrecision Floating-Point Values|vfmadd132sd.vfmadd213sd.vfmadd231sd +VFMADD213SH|Fused Multiply-Add of Scalar FP16 Values|vfmadd132sh.vfnmadd132sh.vfmadd213sh.vfnmadd213sh.vfmadd231sh.vfnmadd231sh +VFMADD213SS|Fused Multiply-Add of Scalar Single PrecisionFloating-Point Values|vfmadd132ss.vfmadd213ss.vfmadd231ss +VFMADD231PD|Fused Multiply-Add of Packed DoublePrecision Floating-Point Values|vfmadd132pd.vfmadd213pd.vfmadd231pd +VFMADD231PH|Fused Multiply-Add of Packed FP16 Values|vfmadd132ph.vfnmadd132ph.vfmadd213ph.vfnmadd213ph.vfmadd231ph.vfnmadd231ph +VFMADD231PS|Fused Multiply-Add of Packed SinglePrecision Floating-Point Values|vfmadd132ps.vfmadd213ps.vfmadd231ps +VFMADD231SD|Fused Multiply-Add of Scalar DoublePrecision Floating-Point Values|vfmadd132sd.vfmadd213sd.vfmadd231sd +VFMADD231SH|Fused Multiply-Add of Scalar FP16 Values|vfmadd132sh.vfnmadd132sh.vfmadd213sh.vfnmadd213sh.vfmadd231sh.vfnmadd231sh +VFMADD231SS|Fused Multiply-Add of Scalar Single PrecisionFloating-Point Values|vfmadd132ss.vfmadd213ss.vfmadd231ss +VFMADDCPH|Complex Multiply and Accumulate FP16 Values|vfcmaddcph.vfmaddcph +VFMADDCSH|Complex Multiply and Accumulate Scalar FP16 Values|vfcmaddcsh.vfmaddcsh +VFMADDRND231PD|Fused Multiply-Add of Packed Double-Precision Floating-Point Valueswith rounding control|vfmaddrnd231pd +VFMADDSUB132PD|Fused Multiply-AlternatingAdd/Subtract of Packed Double Precision Floating-Point Values|vfmaddsub132pd.vfmaddsub213pd.vfmaddsub231pd 
+VFMADDSUB132PH|Fused Multiply-AlternatingAdd/Subtract of Packed FP16 Values|vfmaddsub132ph.vfmaddsub213ph.vfmaddsub231ph +VFMADDSUB132PS|Fused Multiply-AlternatingAdd/Subtract of Packed Single Precision Floating-Point Values|vfmaddsub132ps.vfmaddsub213ps.vfmaddsub231ps +VFMADDSUB213PD|Fused Multiply-AlternatingAdd/Subtract of Packed Double Precision Floating-Point Values|vfmaddsub132pd.vfmaddsub213pd.vfmaddsub231pd +VFMADDSUB213PH|Fused Multiply-AlternatingAdd/Subtract of Packed FP16 Values|vfmaddsub132ph.vfmaddsub213ph.vfmaddsub231ph +VFMADDSUB213PS|Fused Multiply-AlternatingAdd/Subtract of Packed Single Precision Floating-Point Values|vfmaddsub132ps.vfmaddsub213ps.vfmaddsub231ps +VFMADDSUB231PD|Fused Multiply-AlternatingAdd/Subtract of Packed Double Precision Floating-Point Values|vfmaddsub132pd.vfmaddsub213pd.vfmaddsub231pd +VFMADDSUB231PH|Fused Multiply-AlternatingAdd/Subtract of Packed FP16 Values|vfmaddsub132ph.vfmaddsub213ph.vfmaddsub231ph +VFMADDSUB231PS|Fused Multiply-AlternatingAdd/Subtract of Packed Single Precision Floating-Point Values|vfmaddsub132ps.vfmaddsub213ps.vfmaddsub231ps +VFMSUB132PD|Fused Multiply-Subtract of Packed DoublePrecision Floating-Point Values|vfmsub132pd.vfmsub213pd.vfmsub231pd +VFMSUB132PH|Fused Multiply-Subtract of Packed FP16 Values|vfmsub132ph.vfnmsub132ph.vfmsub213ph.vfnmsub213ph.vfmsub231ph.vfnmsub231ph +VFMSUB132PS|Fused Multiply-Subtract of Packed SinglePrecision Floating-Point Values|vfmsub132ps.vfmsub213ps.vfmsub231ps +VFMSUB132SD|Fused Multiply-Subtract of Scalar DoublePrecision Floating-Point Values|vfmsub132sd.vfmsub213sd.vfmsub231sd +VFMSUB132SH|Fused Multiply-Subtract of Scalar FP16 Values|vfmsub132sh.vfnmsub132sh.vfmsub213sh.vfnmsub213sh.vfmsub231sh.vfnmsub231sh +VFMSUB132SS|Fused Multiply-Subtract of Scalar SinglePrecision Floating-Point Values|vfmsub132ss.vfmsub213ss.vfmsub231ss +VFMSUB213PD|Fused Multiply-Subtract of Packed DoublePrecision Floating-Point Values|vfmsub132pd.vfmsub213pd.vfmsub231pd +VFMSUB213PH|Fused Multiply-Subtract of Packed FP16 Values|vfmsub132ph.vfnmsub132ph.vfmsub213ph.vfnmsub213ph.vfmsub231ph.vfnmsub231ph +VFMSUB213PS|Fused Multiply-Subtract of Packed SinglePrecision Floating-Point Values|vfmsub132ps.vfmsub213ps.vfmsub231ps +VFMSUB213SD|Fused Multiply-Subtract of Scalar DoublePrecision Floating-Point Values|vfmsub132sd.vfmsub213sd.vfmsub231sd +VFMSUB213SH|Fused Multiply-Subtract of Scalar FP16 Values|vfmsub132sh.vfnmsub132sh.vfmsub213sh.vfnmsub213sh.vfmsub231sh.vfnmsub231sh +VFMSUB213SS|Fused Multiply-Subtract of Scalar SinglePrecision Floating-Point Values|vfmsub132ss.vfmsub213ss.vfmsub231ss +VFMSUB231PD|Fused Multiply-Subtract of Packed DoublePrecision Floating-Point Values|vfmsub132pd.vfmsub213pd.vfmsub231pd +VFMSUB231PH|Fused Multiply-Subtract of Packed FP16 Values|vfmsub132ph.vfnmsub132ph.vfmsub213ph.vfnmsub213ph.vfmsub231ph.vfnmsub231ph +VFMSUB231PS|Fused Multiply-Subtract of Packed SinglePrecision Floating-Point Values|vfmsub132ps.vfmsub213ps.vfmsub231ps +VFMSUB231SD|Fused Multiply-Subtract of Scalar DoublePrecision Floating-Point Values|vfmsub132sd.vfmsub213sd.vfmsub231sd +VFMSUB231SH|Fused Multiply-Subtract of Scalar FP16 Values|vfmsub132sh.vfnmsub132sh.vfmsub213sh.vfnmsub213sh.vfmsub231sh.vfnmsub231sh +VFMSUB231SS|Fused Multiply-Subtract of Scalar SinglePrecision Floating-Point Values|vfmsub132ss.vfmsub213ss.vfmsub231ss +VFMSUBADD132PD|Fused Multiply-AlternatingSubtract/Add of Packed Double Precision Floating-Point Values|vfmsubadd132pd.vfmsubadd213pd.vfmsubadd231pd +VFMSUBADD132PH|Fused 
Multiply-AlternatingSubtract/Add of Packed FP16 Values|vfmsubadd132ph.vfmsubadd213ph.vfmsubadd231ph +VFMSUBADD132PS|Fused Multiply-AlternatingSubtract/Add of Packed Single Precision Floating-Point Values|vfmsubadd132ps.vfmsubadd213ps.vfmsubadd231ps +VFMSUBADD213PD|Fused Multiply-AlternatingSubtract/Add of Packed Double Precision Floating-Point Values|vfmsubadd132pd.vfmsubadd213pd.vfmsubadd231pd +VFMSUBADD213PH|Fused Multiply-AlternatingSubtract/Add of Packed FP16 Values|vfmsubadd132ph.vfmsubadd213ph.vfmsubadd231ph +VFMSUBADD213PS|Fused Multiply-AlternatingSubtract/Add of Packed Single Precision Floating-Point Values|vfmsubadd132ps.vfmsubadd213ps.vfmsubadd231ps +VFMSUBADD231PD|Fused Multiply-AlternatingSubtract/Add of Packed Double Precision Floating-Point Values|vfmsubadd132pd.vfmsubadd213pd.vfmsubadd231pd +VFMSUBADD231PH|Fused Multiply-AlternatingSubtract/Add of Packed FP16 Values|vfmsubadd132ph.vfmsubadd213ph.vfmsubadd231ph +VFMSUBADD231PS|Fused Multiply-AlternatingSubtract/Add of Packed Single Precision Floating-Point Values|vfmsubadd132ps.vfmsubadd213ps.vfmsubadd231ps +VFMULCPH|Complex Multiply FP16 Values|vfcmulcph.vfmulcph +VFMULCSH|Complex Multiply Scalar FP16 Values|vfcmulcsh.vfmulcsh +VFNMADD132PD|Fused Negative Multiply-Add of PackedDouble Precision Floating-Point Values|vfnmadd132pd.vfnmadd213pd.vfnmadd231pd +VFNMADD132PH|Fused Multiply-Add of Packed FP16 Values|vfmadd132ph.vfnmadd132ph.vfmadd213ph.vfnmadd213ph.vfmadd231ph.vfnmadd231ph +VFNMADD132PS|Fused Negative Multiply-Add of PackedSingle Precision Floating-Point Values|vfnmadd132ps.vfnmadd213ps.vfnmadd231ps +VFNMADD132SD|Fused Negative Multiply-Add of ScalarDouble Precision Floating-Point Values|vfnmadd132sd.vfnmadd213sd.vfnmadd231sd +VFNMADD132SH|Fused Multiply-Add of Scalar FP16 Values|vfmadd132sh.vfnmadd132sh.vfmadd213sh.vfnmadd213sh.vfmadd231sh.vfnmadd231sh +VFNMADD132SS|Fused Negative Multiply-Add of ScalarSingle Precision Floating-Point Values|vfnmadd132ss.vfnmadd213ss.vfnmadd231ss +VFNMADD213PD|Fused Negative Multiply-Add of PackedDouble Precision Floating-Point Values|vfnmadd132pd.vfnmadd213pd.vfnmadd231pd +VFNMADD213PH|Fused Multiply-Add of Packed FP16 Values|vfmadd132ph.vfnmadd132ph.vfmadd213ph.vfnmadd213ph.vfmadd231ph.vfnmadd231ph +VFNMADD213PS|Fused Negative Multiply-Add of PackedSingle Precision Floating-Point Values|vfnmadd132ps.vfnmadd213ps.vfnmadd231ps +VFNMADD213SD|Fused Negative Multiply-Add of ScalarDouble Precision Floating-Point Values|vfnmadd132sd.vfnmadd213sd.vfnmadd231sd +VFNMADD213SH|Fused Multiply-Add of Scalar FP16 Values|vfmadd132sh.vfnmadd132sh.vfmadd213sh.vfnmadd213sh.vfmadd231sh.vfnmadd231sh +VFNMADD213SS|Fused Negative Multiply-Add of ScalarSingle Precision Floating-Point Values|vfnmadd132ss.vfnmadd213ss.vfnmadd231ss +VFNMADD231PD|Fused Negative Multiply-Add of PackedDouble Precision Floating-Point Values|vfnmadd132pd.vfnmadd213pd.vfnmadd231pd +VFNMADD231PH|Fused Multiply-Add of Packed FP16 Values|vfmadd132ph.vfnmadd132ph.vfmadd213ph.vfnmadd213ph.vfmadd231ph.vfnmadd231ph +VFNMADD231PS|Fused Negative Multiply-Add of PackedSingle Precision Floating-Point Values|vfnmadd132ps.vfnmadd213ps.vfnmadd231ps +VFNMADD231SD|Fused Negative Multiply-Add of ScalarDouble Precision Floating-Point Values|vfnmadd132sd.vfnmadd213sd.vfnmadd231sd +VFNMADD231SH|Fused Multiply-Add of Scalar FP16 Values|vfmadd132sh.vfnmadd132sh.vfmadd213sh.vfnmadd213sh.vfmadd231sh.vfnmadd231sh +VFNMADD231SS|Fused Negative Multiply-Add of ScalarSingle Precision Floating-Point Values|vfnmadd132ss.vfnmadd213ss.vfnmadd231ss 
+VFNMSUB132PD|Fused Negative Multiply-Subtract ofPacked Double Precision Floating-Point Values|vfnmsub132pd.vfnmsub213pd.vfnmsub231pd +VFNMSUB132PH|Fused Multiply-Subtract of Packed FP16 Values|vfmsub132ph.vfnmsub132ph.vfmsub213ph.vfnmsub213ph.vfmsub231ph.vfnmsub231ph +VFNMSUB132PS|Fused Negative Multiply-Subtract ofPacked Single Precision Floating-Point Values|vfnmsub132ps.vfnmsub213ps.vfnmsub231ps +VFNMSUB132SD|Fused Negative Multiply-Subtract ofScalar Double Precision Floating-Point Values|vfnmsub132sd.vfnmsub213sd.vfnmsub231sd +VFNMSUB132SH|Fused Multiply-Subtract of Scalar FP16 Values|vfmsub132sh.vfnmsub132sh.vfmsub213sh.vfnmsub213sh.vfmsub231sh.vfnmsub231sh +VFNMSUB132SS|Fused Negative Multiply-Subtract ofScalar Single Precision Floating-Point Values|vfnmsub132ss.vfnmsub213ss.vfnmsub231ss +VFNMSUB213PD|Fused Negative Multiply-Subtract ofPacked Double Precision Floating-Point Values|vfnmsub132pd.vfnmsub213pd.vfnmsub231pd +VFNMSUB213PH|Fused Multiply-Subtract of Packed FP16 Values|vfmsub132ph.vfnmsub132ph.vfmsub213ph.vfnmsub213ph.vfmsub231ph.vfnmsub231ph +VFNMSUB213PS|Fused Negative Multiply-Subtract ofPacked Single Precision Floating-Point Values|vfnmsub132ps.vfnmsub213ps.vfnmsub231ps +VFNMSUB213SD|Fused Negative Multiply-Subtract ofScalar Double Precision Floating-Point Values|vfnmsub132sd.vfnmsub213sd.vfnmsub231sd +VFNMSUB213SH|Fused Multiply-Subtract of Scalar FP16 Values|vfmsub132sh.vfnmsub132sh.vfmsub213sh.vfnmsub213sh.vfmsub231sh.vfnmsub231sh +VFNMSUB213SS|Fused Negative Multiply-Subtract ofScalar Single Precision Floating-Point Values|vfnmsub132ss.vfnmsub213ss.vfnmsub231ss +VFNMSUB231PD|Fused Negative Multiply-Subtract ofPacked Double Precision Floating-Point Values|vfnmsub132pd.vfnmsub213pd.vfnmsub231pd +VFNMSUB231PH|Fused Multiply-Subtract of Packed FP16 Values|vfmsub132ph.vfnmsub132ph.vfmsub213ph.vfnmsub213ph.vfmsub231ph.vfnmsub231ph +VFNMSUB231PS|Fused Negative Multiply-Subtract ofPacked Single Precision Floating-Point Values|vfnmsub132ps.vfnmsub213ps.vfnmsub231ps +VFNMSUB231SD|Fused Negative Multiply-Subtract ofScalar Double Precision Floating-Point Values|vfnmsub132sd.vfnmsub213sd.vfnmsub231sd +VFNMSUB231SH|Fused Multiply-Subtract of Scalar FP16 Values|vfmsub132sh.vfnmsub132sh.vfmsub213sh.vfnmsub213sh.vfmsub231sh.vfnmsub231sh +VFNMSUB231SS|Fused Negative Multiply-Subtract ofScalar Single Precision Floating-Point Values|vfnmsub132ss.vfnmsub213ss.vfnmsub231ss +VFPCLASSPD|Tests Types of Packed Float64 Values|vfpclasspd +VFPCLASSPH|Test Types of Packed FP16 Values|vfpclassph +VFPCLASSPS|Tests Types of Packed Float32 Values|vfpclassps +VFPCLASSSD|Tests Type of a Scalar Float64 Value|vfpclasssd +VFPCLASSSH|Test Types of Scalar FP16 Values|vfpclasssh +VFPCLASSSS|Tests Type of a Scalar Float32 Value|vfpclassss +VGATHERDPD|Gather Packed Double Precision Floating-Point Values UsingSigned Dword/Qword Indices|vgatherdpd.vgatherqpd +VGATHERDPD|Gather Packed Single, Packed Double with Signed Dword Indices|vgatherdps.vgatherdpd +VGATHERDPS|Gather Packed Single Precision Floating-Point Values UsingSigned Dword/Qword Indices|vgatherdps.vgatherqps +VGATHERDPS|Gather Packed Single, Packed Double with Signed Dword Indices|vgatherdps.vgatherdpd +VGATHERQPD|Gather Packed Double Precision Floating-Point Values UsingSigned Dword/Qword Indices|vgatherdpd.vgatherqpd +VGATHERQPD|Gather Packed Single, Packed Double with Signed Qword Indices|vgatherqps.vgatherqpd +VGATHERQPS|Gather Packed Single Precision Floating-Point Values UsingSigned Dword/Qword Indices|vgatherdps.vgatherqps +VGATHERQPS|Gather 
Packed Single, Packed Double with Signed Qword Indices|vgatherqps.vgatherqpd +VGETEXPPD|Convert Exponents of Packed Double Precision Floating-Point Values to Double Precision Floating-Point Values|vgetexppd +VGETEXPPH|Convert Exponents of Packed FP16 Values to FP16 Values|vgetexpph +VGETEXPPS|Convert Exponents of Packed Single Precision Floating-Point Values to Single Precision Floating-Point Values|vgetexpps +VGETEXPSD|Convert Exponents of Scalar Double Precision Floating-Point Value to Double Precision Floating-Point Value|vgetexpsd +VGETEXPSH|Convert Exponents of Scalar FP16 Values to FP16 Values|vgetexpsh +VGETEXPSS|Convert Exponents of Scalar Single Precision Floating-Point Value to Single Precision Floating-Point Value|vgetexpss +VGETMANTPD|Extract Float64 Vector of Normalized Mantissas From Float64 Vector|vgetmantpd +VGETMANTPH|Extract FP16 Vector of Normalized Mantissas from FP16 Vector|vgetmantph +VGETMANTPS|Extract Float32 Vector of Normalized Mantissas From Float32 Vector|vgetmantps +VGETMANTSD|Extract Float64 of Normalized Mantissa From Float64 Scalar|vgetmantsd +VGETMANTSH|Extract FP16 of Normalized Mantissa from FP16 Scalar|vgetmantsh +VGETMANTSS|Extract Float32 Vector of Normalized Mantissa From Float32 Scalar|vgetmantss +VINSERTF128|Insert Packed Floating-Point Values|vinsertf128.vinsertf32x4.vinsertf64x2.vinsertf32x8.vinsertf64x4 +VINSERTF32x4|Insert Packed Floating-Point Values|vinsertf128.vinsertf32x4.vinsertf64x2.vinsertf32x8.vinsertf64x4 +VINSERTF32x8|Insert Packed Floating-Point Values|vinsertf128.vinsertf32x4.vinsertf64x2.vinsertf32x8.vinsertf64x4 +VINSERTF64x2|Insert Packed Floating-Point Values|vinsertf128.vinsertf32x4.vinsertf64x2.vinsertf32x8.vinsertf64x4 +VINSERTF64x4|Insert Packed Floating-Point Values|vinsertf128.vinsertf32x4.vinsertf64x2.vinsertf32x8.vinsertf64x4 +VINSERTI128|Insert Packed Integer Values|vinserti128.vinserti32x4.vinserti64x2.vinserti32x8.vinserti64x4 +VINSERTI32x4|Insert Packed Integer Values|vinserti128.vinserti32x4.vinserti64x2.vinserti32x8.vinserti64x4 +VINSERTI32x8|Insert Packed Integer Values|vinserti128.vinserti32x4.vinserti64x2.vinserti32x8.vinserti64x4 +VINSERTI64x2|Insert Packed Integer Values|vinserti128.vinserti32x4.vinserti64x2.vinserti32x8.vinserti64x4 +VINSERTI64x4|Insert Packed Integer Values|vinserti128.vinserti32x4.vinserti64x2.vinserti32x8.vinserti64x4 +VMASKMOV|Conditional SIMD Packed Loads and Stores|vmaskmov +VMAXPH|Return Maximum of Packed FP16 Values|vmaxph +VMAXSH|Return Maximum of Scalar FP16 Values|vmaxsh +VMINPH|Return Minimum of Packed FP16 Values|vminph +VMINSH|Return Minimum Scalar FP16 Value|vminsh +VMOVDQA32|Move Aligned Packed Integer Values|movdqa.vmovdqa32.vmovdqa64 +VMOVDQA64|Move Aligned Packed Integer Values|movdqa.vmovdqa32.vmovdqa64 +VMOVDQU16|Move Unaligned Packed Integer Values|movdqu.vmovdqu8.vmovdqu16.vmovdqu32.vmovdqu64 +VMOVDQU32|Move Unaligned Packed Integer Values|movdqu.vmovdqu8.vmovdqu16.vmovdqu32.vmovdqu64 +VMOVDQU64|Move Unaligned Packed Integer Values|movdqu.vmovdqu8.vmovdqu16.vmovdqu32.vmovdqu64 +VMOVDQU8|Move Unaligned Packed Integer Values|movdqu.vmovdqu8.vmovdqu16.vmovdqu32.vmovdqu64 +VMOVSH|Move Scalar FP16 Value|vmovsh +VMOVW|Move Word|vmovw +VMULPH|Multiply Packed FP16 Values|vmulph +VMULSH|Multiply Scalar FP16 Values|vmulsh +VP2INTERSECTD|Compute Intersection Between DWORDS/QUADWORDS to a Pair of Mask Registers|vp2intersectd.vp2intersectq +VP2INTERSECTQ|Compute Intersection Between DWORDS/QUADWORDS to a Pair of Mask Registers|vp2intersectd.vp2intersectq +VPBLENDD|Blend Packed Dwords|vpblendd
+VPBLENDMB|Blend Byte/Word Vectors Using an Opmask Control|vpblendmb.vpblendmw +VPBLENDMD|Blend Int32/Int64 Vectors Using an OpMask Control|vpblendmd.vpblendmq +VPBLENDMQ|Blend Int32/Int64 Vectors Using an OpMask Control|vpblendmd.vpblendmq +VPBLENDMW|Blend Byte/Word Vectors Using an Opmask Control|vpblendmb.vpblendmw +VPBROADCAST|Load Integer and Broadcast|vpbroadcast +VPBROADCASTB|Load With Broadcast Integer Data From General Purpose Register|vpbroadcastb.vpbroadcastw.vpbroadcastd.vpbroadcastq +VPBROADCASTD|Load With Broadcast Integer Data From General Purpose Register|vpbroadcastb.vpbroadcastw.vpbroadcastd.vpbroadcastq +VPBROADCASTM|Broadcast Mask to Vector Register|vpbroadcastm +VPBROADCASTQ|Load With Broadcast Integer Data From General Purpose Register|vpbroadcastb.vpbroadcastw.vpbroadcastd.vpbroadcastq +VPBROADCASTW|Load With Broadcast Integer Data From General Purpose Register|vpbroadcastb.vpbroadcastw.vpbroadcastd.vpbroadcastq +VPCMPB|Compare Packed Byte Values Into Mask|vpcmpb.vpcmpub +VPCMPD|Compare Packed Integer Values Into Mask|vpcmpd.vpcmpud +VPCMPQ|Compare Packed Integer Values Into Mask|vpcmpq.vpcmpuq +VPCMPUB|Compare Packed Byte Values Into Mask|vpcmpb.vpcmpub +VPCMPUD|Compare Packed Integer Values Into Mask|vpcmpd.vpcmpud +VPCMPUQ|Compare Packed Integer Values Into Mask|vpcmpq.vpcmpuq +VPCMPUW|Compare Packed Word Values Into Mask|vpcmpw.vpcmpuw +VPCMPW|Compare Packed Word Values Into Mask|vpcmpw.vpcmpuw +VPCOMPRESSB|Store Sparse Packed Byte/Word Integer Values Into DenseMemory/Register|vpcompressb.vcompressw +VPCOMPRESSD|Store Sparse Packed Doubleword Integer Values Into Dense Memory/Register|vpcompressd +VPCOMPRESSQ|Store Sparse Packed Quadword Integer Values Into Dense Memory/Register|vpcompressq +VPCONFLICTD|Detect Conflicts Within a Vector of Packed Dword/Qword Values Into DenseMemory/ Register|vpconflictd.vpconflictq +VPCONFLICTQ|Detect Conflicts Within a Vector of Packed Dword/Qword Values Into DenseMemory/ Register|vpconflictd.vpconflictq +VPDPBUSD|Multiply and Add Unsigned and Signed Bytes|vpdpbusd +VPDPBUSDS|Multiply and Add Unsigned and Signed Bytes With Saturation|vpdpbusds +VPDPWSSD|Multiply and Add Signed Word Integers|vpdpwssd +VPDPWSSDS|Multiply and Add Signed Word Integers With Saturation|vpdpwssds +VPERM2F128|Permute Floating-Point Values|vperm2f128 +VPERM2I128|Permute Integer Values|vperm2i128 +VPERMB|Permute Packed Bytes Elements|vpermb +VPERMD|Permute Packed Doubleword/Word Elements|vpermd.vpermw +VPERMI2B|Full Permute of Bytes From Two Tables Overwriting the Index|vpermi2b +VPERMI2D|Full Permute From Two Tables Overwriting the Index|vpermi2w.vpermi2d.vpermi2q.vpermi2ps.vpermi2pd +VPERMI2PD|Full Permute From Two Tables Overwriting the Index|vpermi2w.vpermi2d.vpermi2q.vpermi2ps.vpermi2pd +VPERMI2PS|Full Permute From Two Tables Overwriting the Index|vpermi2w.vpermi2d.vpermi2q.vpermi2ps.vpermi2pd +VPERMI2Q|Full Permute From Two Tables Overwriting the Index|vpermi2w.vpermi2d.vpermi2q.vpermi2ps.vpermi2pd +VPERMI2W|Full Permute From Two Tables Overwriting the Index|vpermi2w.vpermi2d.vpermi2q.vpermi2ps.vpermi2pd +VPERMILPD|Permute In-Lane of Pairs of Double Precision Floating-Point Values|vpermilpd +VPERMILPS|Permute In-Lane of Quadruples of Single Precision Floating-Point Values|vpermilps +VPERMPD|Permute Double Precision Floating-Point Elements|vpermpd +VPERMPS|Permute Single Precision Floating-Point Elements|vpermps +VPERMQ|Qwords Element Permutation|vpermq +VPERMT2B|Full Permute of Bytes From Two Tables Overwriting a Table|vpermt2b +VPERMT2D|Full Permute 
From Two Tables Overwriting One Table|vpermt2w.vpermt2d.vpermt2q.vpermt2ps.vpermt2pd +VPERMT2PD|Full Permute From Two Tables Overwriting One Table|vpermt2w.vpermt2d.vpermt2q.vpermt2ps.vpermt2pd +VPERMT2PS|Full Permute From Two Tables Overwriting One Table|vpermt2w.vpermt2d.vpermt2q.vpermt2ps.vpermt2pd +VPERMT2Q|Full Permute From Two Tables Overwriting One Table|vpermt2w.vpermt2d.vpermt2q.vpermt2ps.vpermt2pd +VPERMT2W|Full Permute From Two Tables Overwriting One Table|vpermt2w.vpermt2d.vpermt2q.vpermt2ps.vpermt2pd +VPERMW|Permute Packed Doubleword/Word Elements|vpermd.vpermw +VPEXPANDB|Expand Byte/Word Values|vpexpandb.vpexpandw +VPEXPANDD|Load Sparse Packed Doubleword Integer Values From Dense Memory/Register|vpexpandd +VPEXPANDQ|Load Sparse Packed Quadword Integer Values From Dense Memory/Register|vpexpandq +VPEXPANDW|Expand Byte/Word Values|vpexpandb.vpexpandw +VPGATHERDD|Gather Packed Dword Values Using Signed Dword/Qword Indices|vpgatherdd.vpgatherqd +VPGATHERDD|Gather Packed Dword, Packed Qword With Signed Dword Indices|vpgatherdd.vpgatherdq +VPGATHERDQ|Gather Packed Dword, Packed Qword With Signed Dword Indices|vpgatherdd.vpgatherdq +VPGATHERDQ|Gather Packed Qword Values Using Signed Dword/Qword Indices|vpgatherdq.vpgatherqq +VPGATHERQD|Gather Packed Dword Values Using Signed Dword/Qword Indices|vpgatherdd.vpgatherqd +VPGATHERQD|Gather Packed Dword, Packed Qword with Signed Qword Indices|vpgatherqd.vpgatherqq +VPGATHERQQ|Gather Packed Qword Values Using Signed Dword/Qword Indices|vpgatherdq.vpgatherqq +VPGATHERQQ|Gather Packed Dword, Packed Qword with Signed Qword Indices|vpgatherqd.vpgatherqq +VPLZCNTD|Count the Number of Leading Zero Bits for Packed Dword, Packed Qword Values|vplzcntd.vplzcntq +VPLZCNTQ|Count the Number of Leading Zero Bits for Packed Dword, Packed Qword Values|vplzcntd.vplzcntq +VPMADD52HUQ|Packed Multiply of Unsigned 52-Bit Unsigned Integers and Add High 52-BitProducts to 64-Bit Accumulators|vpmadd52huq +VPMADD52LUQ|Packed Multiply of Unsigned 52-Bit Integers and Add the Low 52-Bit Productsto Qword Accumulators|vpmadd52luq +VPMASKMOV|Conditional SIMD Integer Packed Loads and Stores|vpmaskmov +VPMOVB2M|Convert a Vector Register to a Mask|vpmovb2m.vpmovw2m.vpmovd2m.vpmovq2m +VPMOVD2M|Convert a Vector Register to a Mask|vpmovb2m.vpmovw2m.vpmovd2m.vpmovq2m +VPMOVDB|Down Convert DWord to Byte|vpmovdb.vpmovsdb.vpmovusdb +VPMOVDW|Down Convert DWord to Word|vpmovdw.vpmovsdw.vpmovusdw +VPMOVM2B|Convert a Mask Register to a VectorRegister|vpmovm2b.vpmovm2w.vpmovm2d.vpmovm2q +VPMOVM2D|Convert a Mask Register to a VectorRegister|vpmovm2b.vpmovm2w.vpmovm2d.vpmovm2q +VPMOVM2Q|Convert a Mask Register to a VectorRegister|vpmovm2b.vpmovm2w.vpmovm2d.vpmovm2q +VPMOVM2W|Convert a Mask Register to a VectorRegister|vpmovm2b.vpmovm2w.vpmovm2d.vpmovm2q +VPMOVQ2M|Convert a Vector Register to a Mask|vpmovb2m.vpmovw2m.vpmovd2m.vpmovq2m +VPMOVQB|Down Convert QWord to Byte|vpmovqb.vpmovsqb.vpmovusqb +VPMOVQD|Down Convert QWord to DWord|vpmovqd.vpmovsqd.vpmovusqd +VPMOVQW|Down Convert QWord to Word|vpmovqw.vpmovsqw.vpmovusqw +VPMOVSDB|Down Convert DWord to Byte|vpmovdb.vpmovsdb.vpmovusdb +VPMOVSDW|Down Convert DWord to Word|vpmovdw.vpmovsdw.vpmovusdw +VPMOVSQB|Down Convert QWord to Byte|vpmovqb.vpmovsqb.vpmovusqb +VPMOVSQD|Down Convert QWord to DWord|vpmovqd.vpmovsqd.vpmovusqd +VPMOVSQW|Down Convert QWord to Word|vpmovqw.vpmovsqw.vpmovusqw +VPMOVSWB|Down Convert Word to Byte|vpmovwb.vpmovswb.vpmovuswb +VPMOVUSDB|Down Convert DWord to Byte|vpmovdb.vpmovsdb.vpmovusdb +VPMOVUSDW|Down Convert 
DWord to Word|vpmovdw.vpmovsdw.vpmovusdw +VPMOVUSQB|Down Convert QWord to Byte|vpmovqb.vpmovsqb.vpmovusqb +VPMOVUSQD|Down Convert QWord to DWord|vpmovqd.vpmovsqd.vpmovusqd +VPMOVUSQW|Down Convert QWord to Word|vpmovqw.vpmovsqw.vpmovusqw +VPMOVUSWB|Down Convert Word to Byte|vpmovwb.vpmovswb.vpmovuswb +VPMOVW2M|Convert a Vector Register to a Mask|vpmovb2m.vpmovw2m.vpmovd2m.vpmovq2m +VPMOVWB|Down Convert Word to Byte|vpmovwb.vpmovswb.vpmovuswb +VPMULTISHIFTQB|Select Packed Unaligned Bytes From Quadword Sources|vpmultishiftqb +VPOPCNT|Return the Count of Number of Bits Set to 1 in BYTE/WORD/DWORD/QWORD|vpopcnt +VPROLD|Bit Rotate Left|vprold.vprolvd.vprolq.vprolvq +VPROLQ|Bit Rotate Left|vprold.vprolvd.vprolq.vprolvq +VPROLVD|Bit Rotate Left|vprold.vprolvd.vprolq.vprolvq +VPROLVQ|Bit Rotate Left|vprold.vprolvd.vprolq.vprolvq +VPRORD|Bit Rotate Right|vprord.vprorvd.vprorq.vprorvq +VPRORQ|Bit Rotate Right|vprord.vprorvd.vprorq.vprorvq +VPRORVD|Bit Rotate Right|vprord.vprorvd.vprorq.vprorvq +VPRORVQ|Bit Rotate Right|vprord.vprorvd.vprorq.vprorvq +VPSCATTERDD|Scatter Packed Dword, Packed Qword with Signed Dword, Signed Qword Indices|vpscatterdd.vpscatterdq.vpscatterqd.vpscatterqq +VPSCATTERDQ|Scatter Packed Dword, Packed Qword with Signed Dword, Signed Qword Indices|vpscatterdd.vpscatterdq.vpscatterqd.vpscatterqq +VPSCATTERQD|Scatter Packed Dword, Packed Qword with Signed Dword, Signed Qword Indices|vpscatterdd.vpscatterdq.vpscatterqd.vpscatterqq +VPSCATTERQQ|Scatter Packed Dword, Packed Qword with Signed Dword, Signed Qword Indices|vpscatterdd.vpscatterdq.vpscatterqd.vpscatterqq +VPSHLD|Concatenate and Shift Packed Data Left Logical|vpshld +VPSHLDV|Concatenate and Variable Shift Packed Data Left Logical|vpshldv +VPSHRD|Concatenate and Shift Packed Data Right Logical|vpshrd +VPSHRDV|Concatenate and Variable Shift Packed Data Right Logical|vpshrdv +VPSHUFBITQMB|Shuffle Bits From Quadword Elements Using Byte Indexes Into Mask|vpshufbitqmb +VPSLLVD|Variable Bit Shift Left Logical|vpsllvw.vpsllvd.vpsllvq +VPSLLVQ|Variable Bit Shift Left Logical|vpsllvw.vpsllvd.vpsllvq +VPSLLVW|Variable Bit Shift Left Logical|vpsllvw.vpsllvd.vpsllvq +VPSRAVD|Variable Bit Shift Right Arithmetic|vpsravw.vpsravd.vpsravq +VPSRAVQ|Variable Bit Shift Right Arithmetic|vpsravw.vpsravd.vpsravq +VPSRAVW|Variable Bit Shift Right Arithmetic|vpsravw.vpsravd.vpsravq +VPSRLVD|Variable Bit Shift Right Logical|vpsrlvw.vpsrlvd.vpsrlvq +VPSRLVQ|Variable Bit Shift Right Logical|vpsrlvw.vpsrlvd.vpsrlvq +VPSRLVW|Variable Bit Shift Right Logical|vpsrlvw.vpsrlvd.vpsrlvq +VPTERNLOGD|Bitwise Ternary Logic|vpternlogd.vpternlogq +VPTERNLOGQ|Bitwise Ternary Logic|vpternlogd.vpternlogq +VPTESTMB|Logical AND and Set Mask|vptestmb.vptestmw.vptestmd.vptestmq +VPTESTMD|Logical AND and Set Mask|vptestmb.vptestmw.vptestmd.vptestmq +VPTESTMQ|Logical AND and Set Mask|vptestmb.vptestmw.vptestmd.vptestmq +VPTESTMW|Logical AND and Set Mask|vptestmb.vptestmw.vptestmd.vptestmq +VPTESTNMB|Logical NAND and Set|vptestnmb.vptestnmw.vptestnmd.vptestnmq +VPTESTNMD|Logical NAND and Set|vptestnmb.vptestnmw.vptestnmd.vptestnmq +VPTESTNMQ|Logical NAND and Set|vptestnmb.vptestnmw.vptestnmd.vptestnmq +VPTESTNMW|Logical NAND and Set|vptestnmb.vptestnmw.vptestnmd.vptestnmq +VRANGEPD|Range Restriction Calculation for Packed Pairs of Float64 Values|vrangepd +VRANGEPS|Range Restriction Calculation for Packed Pairs of Float32 Values|vrangeps +VRANGESD|Range Restriction Calculation From a Pair of Scalar Float64 Values|vrangesd +VRANGESS|Range Restriction Calculation From a Pair of
Scalar Float32 Values|vrangess +VRCP14PD|Compute Approximate Reciprocals of Packed Float64 Values|vrcp14pd +VRCP14PS|Compute Approximate Reciprocals of Packed Float32 Values|vrcp14ps +VRCP14SD|Compute Approximate Reciprocal of Scalar Float64 Value|vrcp14sd +VRCP14SS|Compute Approximate Reciprocal of Scalar Float32 Value|vrcp14ss +VRCPPH|Compute Reciprocals of Packed FP16 Values|vrcpph +VRCPSH|Compute Reciprocal of Scalar FP16 Value|vrcpsh +VREDUCEPD|Perform Reduction Transformation on Packed Float64 Values|vreducepd +VREDUCEPH|Perform Reduction Transformation on Packed FP16 Values|vreduceph +VREDUCEPS|Perform Reduction Transformation on Packed Float32 Values|vreduceps +VREDUCESD|Perform a Reduction Transformation on a Scalar Float64 Value|vreducesd +VREDUCESH|Perform Reduction Transformation on Scalar FP16 Value|vreducesh +VREDUCESS|Perform a Reduction Transformation on a Scalar Float32 Value|vreducess +VRNDSCALEPD|Round Packed Float64 Values to Include a Given Number of Fraction Bits|vrndscalepd +VRNDSCALEPH|Round Packed FP16 Values to Include a Given Number of Fraction Bits|vrndscaleph +VRNDSCALEPS|Round Packed Float32 Values to Include a Given Number of Fraction Bits|vrndscaleps +VRNDSCALESD|Round Scalar Float64 Value to Include a Given Number of Fraction Bits|vrndscalesd +VRNDSCALESH|Round Scalar FP16 Value to Include a Given Number of Fraction Bits|vrndscalesh +VRNDSCALESS|Round Scalar Float32 Value to Include a Given Number of Fraction Bits|vrndscaless +VRSQRT14PD|Compute Approximate Reciprocals of Square Roots of Packed Float64 Values|vrsqrt14pd +VRSQRT14PS|Compute Approximate Reciprocals of Square Roots of Packed Float32 Values|vrsqrt14ps +VRSQRT14SD|Compute Approximate Reciprocal of Square Root of Scalar Float64 Value|vrsqrt14sd +VRSQRT14SS|Compute Approximate Reciprocal of Square Root of Scalar Float32 Value|vrsqrt14ss +VRSQRTPH|Compute Reciprocals of Square Roots of Packed FP16 Values|vrsqrtph +VRSQRTSH|Compute Approximate Reciprocal of Square Root of Scalar FP16 Value|vrsqrtsh +VSCALEFPD|Scale Packed Float64 Values With Float64 Values|vscalefpd +VSCALEFPH|Scale Packed FP16 Values with FP16 Values|vscalefph +VSCALEFPS|Scale Packed Float32 Values With Float32 Values|vscalefps +VSCALEFSD|Scale Scalar Float64 Values With Float64 Values|vscalefsd +VSCALEFSH|Scale Scalar FP16 Values with FP16 Values|vscalefsh +VSCALEFSS|Scale Scalar Float32 Value With Float32 Value|vscalefss +VSCATTERDPD|Scatter Packed Single, Packed Double with Signed Dword and Qword Indices|vscatterdps.vscatterdpd.vscatterqps.vscatterqpd +VSCATTERDPS|Scatter Packed Single, Packed Double with Signed Dword and Qword Indices|vscatterdps.vscatterdpd.vscatterqps.vscatterqpd +VSCATTERQPD|Scatter Packed Single, Packed Double with Signed Dword and Qword Indices|vscatterdps.vscatterdpd.vscatterqps.vscatterqpd +VSCATTERQPS|Scatter Packed Single, Packed Double with Signed Dword and Qword Indices|vscatterdps.vscatterdpd.vscatterqps.vscatterqpd +VSHUFF32x4|Shuffle Packed Values at 128-Bit Granularity|vshuff32x4.vshuff64x2.vshufi32x4.vshufi64x2 +VSHUFF64x2|Shuffle Packed Values at 128-Bit Granularity|vshuff32x4.vshuff64x2.vshufi32x4.vshufi64x2 +VSHUFI32x4|Shuffle Packed Values at 128-Bit Granularity|vshuff32x4.vshuff64x2.vshufi32x4.vshufi64x2 +VSHUFI64x2|Shuffle Packed Values at 128-Bit Granularity|vshuff32x4.vshuff64x2.vshufi32x4.vshufi64x2 +VSQRTPH|Compute Square Root of Packed FP16 Values|vsqrtph +VSQRTSH|Compute Square Root of Scalar FP16 Value|vsqrtsh +VSUBPH|Subtract Packed FP16 Values|vsubph +VSUBSH|Subtract Scalar FP16
Value|vsubsh +VTESTPD|Packed Bit Test|vtestpd.vtestps +VTESTPS|Packed Bit Test|vtestpd.vtestps +VUCOMISH|Unordered Compare Scalar FP16 Values and Set EFLAGS|vucomish +VZEROALL|Zero XMM, YMM, and ZMM Registers|vzeroall +VZEROUPPER|Zero Upper Bits of YMM and ZMM Registers|vzeroupper +WAIT|Wait|wait.fwait +WBINVD|Write Back and Invalidate Cache|wbinvd +WBNOINVD|Write Back and Do Not Invalidate Cache|wbnoinvd +WRFSBASE|Write FS/GS Segment Base|wrfsbase.wrgsbase +WRGSBASE|Write FS/GS Segment Base|wrfsbase.wrgsbase +WRMSR|Write to Model Specific Register|wrmsr +WRPKRU|Write Data to User Page Key Register|wrpkru +WRSSD|Write to Shadow Stack|wrssd.wrssq +WRSSQ|Write to Shadow Stack|wrssd.wrssq +WRUSSD|Write to User Shadow Stack|wrussd.wrussq +WRUSSQ|Write to User Shadow Stack|wrussd.wrussq +XABORT|Transactional Abort|xabort +XACQUIRE|Hardware Lock Elision Prefix Hints|xacquire.xrelease +XADD|Exchange and Add|xadd +XBEGIN|Transactional Begin|xbegin +XCHG|Exchange Register/Memory With Register|xchg +XEND|Transactional End|xend +XGETBV|Get Value of Extended Control Register|xgetbv +XLAT|Table Look-up Translation|xlat.xlatb +XLATB|Table Look-up Translation|xlat.xlatb +XOR|Logical Exclusive OR|xor +XORPD|Bitwise Logical XOR of Packed Double Precision Floating-Point Values|xorpd +XORPS|Bitwise Logical XOR of Packed Single Precision Floating-Point Values|xorps +XRELEASE|Hardware Lock Elision Prefix Hints|xacquire.xrelease +XRESLDTRK|Resume Tracking Load Addresses|xresldtrk +XRSTOR|Restore Processor Extended States|xrstor +XRSTORS|Restore Processor Extended States Supervisor|xrstors +XSAVE|Save Processor Extended States|xsave +XSAVEC|Save Processor Extended States With Compaction|xsavec +XSAVEOPT|Save Processor Extended States Optimized|xsaveopt +XSAVES|Save Processor Extended States Supervisor|xsaves +XSETBV|Set Extended Control Register|xsetbv +XSUSLDTRK|Suspend Tracking Load Addresses|xsusldtrk +XTEST|Test if in Transactional Execution|xtest +ENCLS|Execute an Enclave System Function of Specified Leaf Number|encls +ENCLS[EADD]|Add a Page to an Uninitialized Enclave|eadd +ENCLS[EAUG]|Add a Page to an Initialized Enclave|eaug +ENCLS[EBLOCK]|Mark a page in EPC as Blocked|eblock +ENCLS[ECREATE]|Create an SECS page in the Enclave Page Cache|ecreate +ENCLS[EDBGRD]|Read From a Debug Enclave|edbgrd +ENCLS[EDBGWR]|Write to a Debug Enclave|edbgwr +ENCLS[EEXTEND]|Extend Uninitialized Enclave Measurement by 256 Bytes|eextend +ENCLS[EINIT]|Initialize an Enclave for Execution|einit +ENCLS[ELDBC]|Load an EPC Page and Mark its State|eldb.eldu.eldbc.elduc +ENCLS[ELDB]|Load an EPC Page and Mark its State|eldb.eldu.eldbc.elduc +ENCLS[ELDUC]|Load an EPC Page and Mark its State|eldb.eldu.eldbc.elduc +ENCLS[ELDU]|Load an EPC Page and Mark its State|eldb.eldu.eldbc.elduc +ENCLS[EMODPR]|Restrict the Permissions of an EPC Page|emodpr +ENCLS[EMODT]|Change the Type of an EPC Page|emodt +ENCLS[EPA]|Add Version Array|epa +ENCLS[ERDINFO]|Read Type and Status Information About an EPC Page|erdinfo +ENCLS[EREMOVE]|Remove a page from the EPC|eremove +ENCLS[ETRACKC]|Activates EBLOCK Checks|etrackc +ENCLS[ETRACK]|Activates EBLOCK Checks|etrack +ENCLS[EWB]|Invalidate an EPC Page and Write out to Main Memory|ewb +ENCLU|Execute an Enclave User Function of Specified Leaf Number|enclu +ENCLU[EACCEPTCOPY]|Initialize a Pending Page|eacceptcopy +ENCLU[EACCEPT]|Accept Changes to an EPC Page|eaccept +ENCLU[EDECCSSA]|Decrements TCS.CSSA|edeccssa +ENCLU[EENTER]|Enters an Enclave|eenter +ENCLU[EEXIT]|Exits an Enclave|eexit +ENCLU[EGETKEY]|Retrieves a 
Cryptographic Key|egetkey +ENCLU[EMODPE]|Extend an EPC Page Permissions|emodpe +ENCLU[EREPORT]|Create a Cryptographic Report of the Enclave|ereport +ENCLU[ERESUME]|Re-Enters an Enclave|eresume +ENCLV|Execute an Enclave VMM Function of Specified Leaf Number|enclv +ENCLV[EDECVIRTCHILD]|Decrement VIRTCHILDCNT in SECS|edecvirtchild +ENCLV[EINCVIRTCHILD]|Increment VIRTCHILDCNT in SECS|eincvirtchild +ENCLV[ESETCONTEXT]|Set the ENCLAVECONTEXT Field in SECS|esetcontext +GETSEC[CAPABILITIES]|Report the SMX Capabilities|capabilities +GETSEC[ENTERACCS]|Execute Authenticated Chipset Code|enteraccs +GETSEC[EXITAC]|Exit Authenticated Code Execution Mode|exitac +GETSEC[PARAMETERS]|Report the SMX Parameters|parameters +GETSEC[SENTER]|Enter a Measured Environment|senter +GETSEC[SEXIT]|Exit Measured Environment|sexit +GETSEC[SMCTRL]|SMX Mode Control|smctrl +GETSEC[WAKEUP]|Wake Up Sleeping Processors in Measured Environment|wakeup +INVEPT|Invalidate Translations Derived from EPT|invept +INVVPID|Invalidate Translations Based on VPID|invvpid +VMCALL|Call to VM Monitor|vmcall +VMCLEAR|Clear Virtual-Machine Control Structure|vmclear +VMFUNC|Invoke VM function|vmfunc +VMLAUNCH|Launch/Resume Virtual Machine|vmlaunch.vmresume +VMPTRLD|Load Pointer to Virtual-Machine Control Structure|vmptrld +VMPTRST|Store Pointer to Virtual-Machine Control Structure|vmptrst +VMREAD|Read Field from Virtual-Machine Control Structure|vmread +VMRESUME|Launch/Resume Virtual Machine|vmlaunch.vmresume +VMRESUME|Resume Virtual Machine|vmresume +VMWRITE|Write Field to Virtual-Machine Control Structure|vmwrite +VMXOFF|Leave VMX Operation|vmxoff +VMXON|Enter VMX Operation|vmxon +PREFETCHWT1|Prefetch Vector Data Into Caches With Intent to Write and T1 Hint|prefetchwt1 +V4FMADDPS|Packed Single Precision Floating-Point Fused Multiply-Add(4-Iterations)|v4fmaddps.v4fnmaddps +V4FMADDSS|Scalar Single Precision Floating-Point Fused Multiply-Add(4-Iterations)|v4fmaddss.v4fnmaddss +V4FNMADDPS|Packed Single Precision Floating-Point Fused Multiply-Add(4-Iterations)|v4fmaddps.v4fnmaddps +V4FNMADDSS|Scalar Single Precision Floating-Point Fused Multiply-Add(4-Iterations)|v4fmaddss.v4fnmaddss +VEXP2PD|Approximation to the Exponential 2^x of Packed Double Precision Floating-PointValues With Less Than 2^-23 Relative Error|vexp2pd +VEXP2PS|Approximation to the Exponential 2^x of Packed Single Precision Floating-PointValues With Less Than 2^-23 Relative Error|vexp2ps +VGATHERPF0DPD|Sparse PrefetchPacked SP/DP Data Values With Signed Dword, Signed Qword Indices Using T0 Hint|vgatherpf0dps.vgatherpf0qps.vgatherpf0dpd.vgatherpf0qpd +VGATHERPF0DPS|Sparse PrefetchPacked SP/DP Data Values With Signed Dword, Signed Qword Indices Using T0 Hint|vgatherpf0dps.vgatherpf0qps.vgatherpf0dpd.vgatherpf0qpd +VGATHERPF0QPD|Sparse PrefetchPacked SP/DP Data Values With Signed Dword, Signed Qword Indices Using T0 Hint|vgatherpf0dps.vgatherpf0qps.vgatherpf0dpd.vgatherpf0qpd +VGATHERPF0QPS|Sparse PrefetchPacked SP/DP Data Values With Signed Dword, Signed Qword Indices Using T0 Hint|vgatherpf0dps.vgatherpf0qps.vgatherpf0dpd.vgatherpf0qpd +VGATHERPF1DPD|Sparse PrefetchPacked SP/DP Data Values With Signed Dword, Signed Qword Indices Using T1 Hint|vgatherpf1dps.vgatherpf1qps.vgatherpf1dpd.vgatherpf1qpd +VGATHERPF1DPS|Sparse PrefetchPacked SP/DP Data Values With Signed Dword, Signed Qword Indices Using T1 Hint|vgatherpf1dps.vgatherpf1qps.vgatherpf1dpd.vgatherpf1qpd +VGATHERPF1QPD|Sparse PrefetchPacked SP/DP Data Values With Signed Dword, Signed Qword Indices Using T1 
Hint|vgatherpf1dps.vgatherpf1qps.vgatherpf1dpd.vgatherpf1qpd +VGATHERPF1QPS|Sparse PrefetchPacked SP/DP Data Values With Signed Dword, Signed Qword Indices Using T1 Hint|vgatherpf1dps.vgatherpf1qps.vgatherpf1dpd.vgatherpf1qpd +VP4DPWSSD|Dot Product of Signed Words With Dword Accumulation (4-Iterations)|vp4dpwssd +VP4DPWSSDS|Dot Product of Signed Words With Dword Accumulation and Saturation(4-Iterations)|vp4dpwssds +VRCP28PD|Approximation to the Reciprocal of Packed Double Precision Floating-Point ValuesWith Less Than 2^-28 Relative Error|vrcp28pd +VRCP28PS|Approximation to the Reciprocal of Packed Single Precision Floating-Point ValuesWith Less Than 2^-28 Relative Error|vrcp28ps +VRCP28SD|Approximation to the Reciprocal of Scalar Double Precision Floating-Point ValueWith Less Than 2^-28 Relative Error|vrcp28sd +VRCP28SS|Approximation to the Reciprocal of Scalar Single Precision Floating-Point ValueWith Less Than 2^-28 Relative Error|vrcp28ss +VRSQRT28PD|Approximation to the Reciprocal Square Root of Packed Double PrecisionFloating-Point Values With Less Than 2^-28 Relative Error|vrsqrt28pd +VRSQRT28PS|Approximation to the Reciprocal Square Root of Packed Single PrecisionFloating-Point Values With Less Than 2^-28 Relative Error|vrsqrt28ps +VRSQRT28SD|Approximation to the Reciprocal Square Root of Scalar Double PrecisionFloating-Point Value With Less Than 2^-28 Relative Error|vrsqrt28sd +VRSQRT28SS|Approximation to the Reciprocal Square Root of Scalar Single Precision Floating-Point Value With Less Than 2^-28 Relative Error|vrsqrt28ss +VSCATTERPF0DPD|Sparse PrefetchPacked SP/DP Data Values with Signed Dword, Signed Qword Indices Using T0 Hint With Intentto Write|vscatterpf0dps.vscatterpf0qps.vscatterpf0dpd.vscatterpf0qpd +VSCATTERPF0DPS|Sparse PrefetchPacked SP/DP Data Values with Signed Dword, Signed Qword Indices Using T0 Hint With Intentto Write|vscatterpf0dps.vscatterpf0qps.vscatterpf0dpd.vscatterpf0qpd +VSCATTERPF0QPD|Sparse PrefetchPacked SP/DP Data Values with Signed Dword, Signed Qword Indices Using T0 Hint With Intentto Write|vscatterpf0dps.vscatterpf0qps.vscatterpf0dpd.vscatterpf0qpd +VSCATTERPF0QPS|Sparse PrefetchPacked SP/DP Data Values with Signed Dword, Signed Qword Indices Using T0 Hint With Intentto Write|vscatterpf0dps.vscatterpf0qps.vscatterpf0dpd.vscatterpf0qpd +VSCATTERPF1DPD|Sparse PrefetchPacked SP/DP Data Values With Signed Dword, Signed Qword Indices Using T1 Hint With Intentto Write|vscatterpf1dps.vscatterpf1qps.vscatterpf1dpd.vscatterpf1qpd +VSCATTERPF1DPS|Sparse PrefetchPacked SP/DP Data Values With Signed Dword, Signed Qword Indices Using T1 Hint With Intentto Write|vscatterpf1dps.vscatterpf1qps.vscatterpf1dpd.vscatterpf1qpd +VSCATTERPF1QPD|Sparse PrefetchPacked SP/DP Data Values With Signed Dword, Signed Qword Indices Using T1 Hint With Intentto Write|vscatterpf1dps.vscatterpf1qps.vscatterpf1dpd.vscatterpf1qpd +VSCATTERPF1QPS|Sparse PrefetchPacked SP/DP Data Values With Signed Dword, Signed Qword Indices Using T1 Hint With Intentto Write|vscatterpf1dps.vscatterpf1qps.vscatterpf1dpd.vscatterpf1qpd diff --git a/x86/aaa.html b/x86/aaa.html new file mode 100644 index 0000000..d86236a --- /dev/null +++ b/x86/aaa.html @@ -0,0 +1,95 @@ + +AAA + — ASCII Adjust After Addition

AAA + — ASCII Adjust After Addition

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-bit ModeCompat/Leg ModeDescription
37AAAZOInvalidValidASCII adjust AL after addition.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Adjusts the sum of two unpacked BCD values to create an unpacked BCD result. The AL register is the implied source and destination operand for this instruction. The AAA instruction is only useful when it follows an ADD instruction that adds (binary addition) two unpacked BCD values and stores a byte result in the AL register. The AAA instruction then adjusts the contents of the AL register to contain the correct 1-digit unpacked BCD result.

+

If the addition produces a decimal carry, the AH register increments by 1, and the CF and AF flags are set. If there was no decimal carry, the CF and AF flags are cleared and the AH register is unchanged. In either case, bits 4 through 7 of the AL register are set to 0.

+

This instruction executes as described in compatibility mode and legacy mode. It is not valid in 64-bit mode.

+

Operation + ¶ +

+
IF 64-Bit Mode
+    THEN
+        #UD;
+    ELSE
+        IF ((AL AND 0FH) > 9) or (AF = 1)
+            THEN
+                AX := AX + 106H;
+                AF := 1;
+                CF := 1;
+            ELSE
+                AF := 0;
+                CF := 0;
+        FI;
+        AL := AL AND 0FH;
+FI;
+
+
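Editor's illustration (not part of the original manual text): a minimal C model of the adjustment above, treating AX and the AF/CF flags as plain variables. The helper name aaa_model is invented for this sketch.

#include <stdint.h>

/* Software model of AAA: 'ax' holds AX; 'af'/'cf' model the AF and CF flags. */
static uint16_t aaa_model(uint16_t ax, int *af, int *cf)
{
    if (((ax & 0x0F) > 9) || *af) {
        ax += 0x106;                        /* AL := AL + 6, AH := AH + 1 */
        *af = *cf = 1;
    } else {
        *af = *cf = 0;
    }
    return (ax & 0xFF00) | (ax & 0x0F);     /* AL := AL AND 0FH */
}

For example, adding the unpacked BCD digits 8 and 5 with ADD leaves AX = 000DH; the adjustment then yields AX = 0103H, i.e., the unpacked BCD digits 1 and 3 ("13").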

Flags Affected + ¶ +

+

The AF and CF flags are set to 1 if the adjustment results in a decimal carry; otherwise they are set to 0. The OF, SF, ZF, and PF flags are undefined.

+

Protected Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + +
#UDIf in 64-bit mode.
diff --git a/x86/aad.html b/x86/aad.html new file mode 100644 index 0000000..bff2b9c --- /dev/null +++ b/x86/aad.html @@ -0,0 +1,99 @@ + +AAD + — ASCII Adjust AX Before Division

AAD + — ASCII Adjust AX Before Division

+ + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-bit ModeCompat/Leg ModeDescription
D5 0AAADZOInvalidValidASCII adjust AX before division.
D5 ibAAD imm8ZOInvalidValidAdjust AX before division to number base imm8.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Adjusts two unpacked BCD digits (the least-significant digit in the AL register and the most-significant digit in the AH register) so that a division operation performed on the result will yield a correct unpacked BCD value. The AAD instruction is only useful when it precedes a DIV instruction that divides (binary division) the adjusted value in the AX register by an unpacked BCD value.

+

The AAD instruction sets the value in the AL register to (AL + (10 * AH)), and then clears the AH register to 00H. The value in the AX register is then equal to the binary equivalent of the original unpacked two-digit (base 10) number in registers AH and AL.

+

The generalized version of this instruction allows adjustment of two unpacked digits of any number base (see the “Operation” section below), by setting the imm8 byte to the selected number base (for example, 08H for octal, 0AH for decimal, or 0CH for base 12 numbers). The AAD mnemonic is interpreted by all assemblers to mean adjust ASCII (base 10) values. To adjust values in another number base, the instruction must be hand coded in machine code (D5 imm8).

+

This instruction executes as described in compatibility mode and legacy mode. It is not valid in 64-bit mode.

+

Operation + ¶ +

+
IF 64-Bit Mode
+    THEN
+        #UD;
+    ELSE
+        tempAL := AL;
+        tempAH := AH;
+        AL := (tempAL + (tempAH ∗ imm8)) AND FFH;
+        (* imm8 is set to 0AH for the AAD mnemonic.*)
+        AH := 0;
+FI;
+The immediate value (imm8) is taken from the second byte of the instruction.
+
+
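Editor's illustration (assumes the default base 10, i.e., imm8 = 0AH; the helper name aad_model is invented): a C model of the adjustment above.

#include <stdint.h>

/* Software model of AAD with the default base of 10 (imm8 = 0AH). */
static uint16_t aad_model(uint16_t ax)
{
    uint8_t al = (uint8_t)(ax & 0xFF);
    uint8_t ah = (uint8_t)(ax >> 8);
    al = (uint8_t)(al + ah * 10);           /* AL := (AL + AH * 10) AND FFH */
    return al;                              /* AH := 0 */
}

For example, the unpacked BCD digits AH = 3, AL = 7 ("37") become AX = 0025H = 37 decimal, ready for a following DIV.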

Flags Affected + ¶ +

+

The SF, ZF, and PF flags are set according to the resulting binary value in the AL register; the OF, AF, and CF flags are undefined.

+

Protected Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + +
#UDIf in 64-bit mode.
diff --git a/x86/aam.html b/x86/aam.html new file mode 100644 index 0000000..d28de09 --- /dev/null +++ b/x86/aam.html @@ -0,0 +1,99 @@ + +AAM + — ASCII Adjust AX After Multiply

AAM + — ASCII Adjust AX After Multiply

+ + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-bit ModeCompat/Leg ModeDescription
D4 0AAAMZOInvalidValidASCII adjust AX after multiply.
D4 ibAAM imm8ZOInvalidValidAdjust AX after multiply to number base imm8.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Adjusts the result of the multiplication of two unpacked BCD values to create a pair of unpacked (base 10) BCD values. The AX register is the implied source and destination operand for this instruction. The AAM instruction is only useful when it follows a MUL instruction that multiplies (binary multiplication) two unpacked BCD values and stores a word result in the AX register. The AAM instruction then adjusts the contents of the AX register to contain the correct 2-digit unpacked (base 10) BCD result.

+

The generalized version of this instruction allows adjustment of the contents of the AX to create two unpacked digits of any number base (see the “Operation” section below). Here, the imm8 byte is set to the selected number base (for example, 08H for octal, 0AH for decimal, or 0CH for base 12 numbers). The AAM mnemonic is interpreted by all assemblers to mean adjust to ASCII (base 10) values. To adjust to values in another number base, the instruction must be hand coded in machine code (D4 imm8).

+

This instruction executes as described in compatibility mode and legacy mode. It is not valid in 64-bit mode.

+

Operation + ¶ +

+
IF 64-Bit Mode
+    THEN
+        #UD;
+    ELSE
+        tempAL := AL;
+        AH := tempAL / imm8; (* imm8 is set to 0AH for the AAM mnemonic *)
+        AL := tempAL MOD imm8;
+FI;
+The immediate value (imm8) is taken from the second byte of the instruction.
+
+
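Editor's illustration (assumes the default base 10, i.e., imm8 = 0AH; the helper name aam_model is invented): a C model of the adjustment above.

#include <stdint.h>

/* Software model of AAM with the default base of 10 (imm8 = 0AH). */
static uint16_t aam_model(uint8_t al)
{
    uint8_t ah = al / 10;                   /* tens digit goes to AH */
    al = al % 10;                           /* ones digit stays in AL */
    return (uint16_t)((ah << 8) | al);
}

For example, after MUL produces 7 * 9 = 63 (3FH) in AL, the adjustment yields AX = 0603H, the unpacked BCD digits 6 and 3.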

Flags Affected + ¶ +

+

The SF, ZF, and PF flags are set according to the resulting binary value in the AL register. The OF, AF, and CF flags are undefined.

+

Protected Mode Exceptions + ¶ +

+ + + + + + +
#DEIf an immediate value of 0 is used.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + +
#UDIf in 64-bit mode.
diff --git a/x86/aas.html b/x86/aas.html new file mode 100644 index 0000000..0cf43e9 --- /dev/null +++ b/x86/aas.html @@ -0,0 +1,97 @@ + +AAS + — ASCII Adjust AL After Subtraction

AAS + — ASCII Adjust AL After Subtraction

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-bit ModeCompat/Leg ModeDescription
3FAASZOInvalidValidASCII adjust AL after subtraction.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Adjusts the result of the subtraction of two unpacked BCD values to create an unpacked BCD result. The AL register is the implied source and destination operand for this instruction. The AAS instruction is only useful when it follows a SUB instruction that subtracts (binary subtraction) one unpacked BCD value from another and stores a byte result in the AL register. The AAS instruction then adjusts the contents of the AL register to contain the correct 1-digit unpacked BCD result.

+

If the subtraction produced a decimal carry, the AH register decrements by 1, and the CF and AF flags are set. If no decimal carry occurred, the CF and AF flags are cleared, and the AH register is unchanged. In either case, the AL register is left with its top four bits set to 0.

+

This instruction executes as described in compatibility mode and legacy mode. It is not valid in 64-bit mode.

+

Operation + ¶ +

+
IF 64-bit mode
+    THEN
+        #UD;
+    ELSE
+        IF ((AL AND 0FH) > 9) or (AF = 1)
+            THEN
+                AX := AX – 6;
+                AH := AH – 1;
+                AF := 1;
+                CF := 1;
+                AL := AL AND 0FH;
+            ELSE
+                CF := 0;
+                AF := 0;
+                AL := AL AND 0FH;
+        FI;
+FI;
+
+

Flags Affected + ¶ +

+

The AF and CF flags are set to 1 if there is a decimal borrow; otherwise, they are cleared to 0. The OF, SF, ZF, and PF flags are undefined.

+

Protected Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + +
#UDIf in 64-bit mode.
diff --git a/x86/adc.html b/x86/adc.html new file mode 100644 index 0000000..94c4a2c --- /dev/null +++ b/x86/adc.html @@ -0,0 +1,313 @@ + +ADC + — Add With Carry

ADC + — Add With Carry

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-bit ModeCompat/Leg ModeDescription
14 ibADC AL, imm8IValidValidAdd with carry imm8 to AL.
15 iwADC AX, imm16IValidValidAdd with carry imm16 to AX.
15 idADC EAX, imm32IValidValidAdd with carry imm32 to EAX.
REX.W + 15 idADC RAX, imm32IValidN.E.Add with carry imm32 sign extended to 64-bits to RAX.
80 /2 ibADC r/m8, imm8MIValidValidAdd with carry imm8 to r/m8.
REX + 80 /2 ibADC r/m8*, imm8MIValidN.E.Add with carry imm8 to r/m8.
81 /2 iwADC r/m16, imm16MIValidValidAdd with carry imm16 to r/m16.
81 /2 idADC r/m32, imm32MIValidValidAdd with CF imm32 to r/m32.
REX.W + 81 /2 idADC r/m64, imm32MIValidN.E.Add with CF imm32 sign extended to 64-bits to r/m64.
83 /2 ibADC r/m16, imm8MIValidValidAdd with CF sign-extended imm8 to r/m16.
83 /2 ibADC r/m32, imm8MIValidValidAdd with CF sign-extended imm8 into r/m32.
REX.W + 83 /2 ibADC r/m64, imm8MIValidN.E.Add with CF sign-extended imm8 into r/m64.
10 /rADC r/m8, r8MRValidValidAdd with carry byte register to r/m8.
REX + 10 /rADC r/m8*, r8*MRValidN.E.Add with carry byte register to r/m8.
11 /rADC r/m16, r16MRValidValidAdd with carry r16 to r/m16.
11 /rADC r/m32, r32MRValidValidAdd with CF r32 to r/m32.
REX.W + 11 /rADC r/m64, r64MRValidN.E.Add with CF r64 to r/m64.
12 /rADC r8, r/m8RMValidValidAdd with carry r/m8 to byte register.
REX + 12 /rADC r8*, r/m8*RMValidN.E.Add with carry r/m8 to byte register.
13 /rADC r16, r/m16RMValidValidAdd with carry r/m16 to r16.
13 /rADC r32, r/m32RMValidValidAdd with CF r/m32 to r32.
REX.W + 13 /rADC r64, r/m64RMValidN.E.Add with CF r/m64 to r64.
+
+

*In 64-bit mode, r/m8 can not be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r, w)ModRM:r/m (r)N/AN/A
MRModRM:r/m (r, w)ModRM:reg (r)N/AN/A
MIModRM:r/m (r, w)imm8/16/32N/AN/A
IAL/AX/EAX/RAXimm8/16/32N/AN/A
+

Description + ¶ +

+

Adds the destination operand (first operand), the source operand (second operand), and the carry (CF) flag and stores the result in the destination operand. The destination operand can be a register or a memory location; the source operand can be an immediate, a register, or a memory location. (However, two memory operands cannot be used in one instruction.) The state of the CF flag represents a carry from a previous addition. When an immediate value is used as an operand, it is sign-extended to the length of the destination operand format.

+

The ADC instruction does not distinguish between signed or unsigned operands. Instead, the processor evaluates the result for both data types and sets the OF and CF flags to indicate a carry in the signed or unsigned result, respectively. The SF flag indicates the sign of the signed result.

+

The ADC instruction is usually executed as part of a multibyte or multiword addition in which an ADD instruction is followed by an ADC instruction.

+

This instruction can be used with a LOCK prefix to allow the instruction to be executed atomically.

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). Using a REX prefix in the form of REX.W promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+

Operation + ¶ +

+
DEST := DEST + SRC + CF;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
ADC extern unsigned char _addcarry_u8(unsigned char c_in, unsigned char src1, unsigned char src2, unsigned char *sum_out);
+
+
ADC extern unsigned char _addcarry_u16(unsigned char c_in, unsigned short src1, unsigned short src2, unsigned short *sum_out);
+
+
ADC extern unsigned char _addcarry_u32(unsigned char c_in, unsigned int src1, unsigned int src2, unsigned int *sum_out);
+
+
ADC extern unsigned char _addcarry_u64(unsigned char c_in, unsigned __int64 src1, unsigned __int64 src2, unsigned __int64 *sum_out);
+
+
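Editor's usage sketch of the _addcarry_u32 intrinsic listed above, showing the multibyte/multiword addition pattern described in this section. Assumptions: a recent GCC/Clang with <immintrin.h> (MSVC exposes the same intrinsic via <intrin.h>); the function name add_u128 is invented.

#include <immintrin.h>

/* Adds two 128-bit integers stored as four 32-bit limbs, least significant
   limb first, propagating the carry exactly as an ADD/ADC chain would. */
static unsigned char add_u128(const unsigned int a[4], const unsigned int b[4],
                              unsigned int sum[4])
{
    unsigned char carry = 0;                /* CF is clear before the leading ADD */
    for (int i = 0; i < 4; i++)
        carry = _addcarry_u32(carry, a[i], b[i], &sum[i]);
    return carry;                           /* final carry-out (CF) of the chain */
}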

Flags Affected + ¶ +

+

The OF, SF, ZF, AF, CF, and PF flags are set according to the result.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
diff --git a/x86/adcx.html b/x86/adcx.html new file mode 100644 index 0000000..bf12cf3 --- /dev/null +++ b/x86/adcx.html @@ -0,0 +1,160 @@ + +ADCX + — Unsigned Integer Addition of Two Operands With Carry Flag

ADCX + — Unsigned Integer Addition of Two Operands With Carry Flag

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32bit Mode SupportCPUID Feature FlagDescription
66 0F 38 F6 /r ADCX r32, r/m32RMV/VADXUnsigned addition of r32 with CF, r/m32 to r32, writes CF.
66 REX.w 0F 38 F6 /r ADCX r64, r/m64RMV/N.E.ADXUnsigned addition of r64 with CF, r/m64 to r64, writes CF.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r, w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Performs an unsigned addition of the destination operand (first operand), the source operand (second operand) and the carry-flag (CF) and stores the result in the destination operand. The destination operand is a general-purpose register, whereas the source operand can be a general-purpose register or memory location. The state of CF can represent a carry from a previous addition. The instruction sets the CF flag with the carry generated by the unsigned addition of the operands.

+

The ADCX instruction is executed in the context of multi-precision addition, where we add a series of operands with a carry-chain. At the beginning of a chain of additions, we need to make sure the CF is in a desired initial state. Often, this initial state needs to be 0, which can be achieved with an instruction to zero the CF (e.g. XOR).

+

This instruction is supported in real mode and virtual-8086 mode. The operand size is always 32 bits if not in 64-bit mode.

+

In 64-bit mode, the default operation size is 32 bits. Using a REX Prefix in the form of REX.R permits access to additional registers (R8-15). Using REX Prefix in the form of REX.W promotes operation to 64 bits.

+

ADCX executes normally either inside or outside a transaction region.

+

Note: ADCX defines the OF flag differently than the ADD/ADC instructions as defined in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A.

+

Operation + ¶ +

+
IF OperandSize is 64-bit
+    THEN CF:DEST[63:0] := DEST[63:0] + SRC[63:0] + CF;
+    ELSE CF:DEST[31:0] := DEST[31:0] + SRC[31:0] + CF;
+FI;
+
+

Flags Affected + ¶ +

+

CF is updated based on result. OF, SF, ZF, AF, and PF flags are unmodified.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
unsigned char _addcarryx_u32 (unsigned char c_in, unsigned int src1, unsigned int src2, unsigned int *sum_out);
+
+
unsigned char _addcarryx_u64 (unsigned char c_in, unsigned __int64 src1, unsigned __int64 src2, unsigned __int64 *sum_out);
+
+
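Editor's sketch of the _addcarryx_u32 intrinsic listed above. The compiler may lower it to ADCX (or ADC) when ADX is enabled; only CF is consumed and produced, which is what allows independent carry chains to be interleaved in multi-precision code. Assumptions: <immintrin.h> and a compiler option enabling ADX (e.g., -madx); the function name add_limbs is invented.

#include <immintrin.h>

/* Carry-chain addition of n 32-bit limbs using _addcarryx_u32. */
static unsigned char add_limbs(const unsigned int *a, const unsigned int *b,
                               unsigned int *sum, int n)
{
    unsigned char carry = 0;                /* put CF in the desired initial state */
    for (int i = 0; i < n; i++)
        carry = _addcarryx_u32(carry, a[i], b[i], &sum[i]);
    return carry;
}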

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If CPUID.(EAX=07H, ECX=0H):EBX.ADX[bit 19] = 0.
#SS(0)For an illegal address in the SS segment.
#GP(0)For an illegal memory operand effective address in the CS, DS, ES, FS or GS segments.
If the DS, ES, FS, or GS register is used to access memory and it contains a null segment selector.
#PF(fault-code)For a page fault.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If CPUID.(EAX=07H, ECX=0H):EBX.ADX[bit 19] = 0.
#SS(0)For an illegal address in the SS segment.
#GP(0)If any part of the operand lies outside the effective address space from 0 to FFFFH.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If CPUID.(EAX=07H, ECX=0H):EBX.ADX[bit 19] = 0.
#SS(0)For an illegal address in the SS segment.
#GP(0)If any part of the operand lies outside the effective address space from 0 to FFFFH.
#PF(fault-code)For a page fault.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If CPUID.(EAX=07H, ECX=0H):EBX.ADX[bit 19] = 0.
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)For a page fault.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
diff --git a/x86/add.html b/x86/add.html new file mode 100644 index 0000000..9ee1e8d --- /dev/null +++ b/x86/add.html @@ -0,0 +1,301 @@ + +ADD + — Add

ADD + — Add

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-bit ModeCompat/Leg ModeDescription
04 ibADD AL, imm8IValidValidAdd imm8 to AL.
05 iwADD AX, imm16IValidValidAdd imm16 to AX.
05 idADD EAX, imm32IValidValidAdd imm32 to EAX.
REX.W + 05 idADD RAX, imm32IValidN.E.Add imm32 sign-extended to 64-bits to RAX.
80 /0 ibADD r/m8, imm8MIValidValidAdd imm8 to r/m8.
REX + 80 /0 ibADD r/m8*, imm8MIValidN.E.Add sign-extended imm8 to r/m8.
81 /0 iwADD r/m16, imm16MIValidValidAdd imm16 to r/m16.
81 /0 idADD r/m32, imm32MIValidValidAdd imm32 to r/m32.
REX.W + 81 /0 idADD r/m64, imm32MIValidN.E.Add imm32 sign-extended to 64-bits to r/m64.
83 /0 ibADD r/m16, imm8MIValidValidAdd sign-extended imm8 to r/m16.
83 /0 ibADD r/m32, imm8MIValidValidAdd sign-extended imm8 to r/m32.
REX.W + 83 /0 ibADD r/m64, imm8MIValidN.E.Add sign-extended imm8 to r/m64.
00 /rADD r/m8, r8MRValidValidAdd r8 to r/m8.
REX + 00 /rADD r/m8*, r8*MRValidN.E.Add r8 to r/m8.
01 /rADD r/m16, r16MRValidValidAdd r16 to r/m16.
01 /rADD r/m32, r32MRValidValidAdd r32 to r/m32.
REX.W + 01 /rADD r/m64, r64MRValidN.E.Add r64 to r/m64.
02 /rADD r8, r/m8RMValidValidAdd r/m8 to r8.
REX + 02 /rADD r8*, r/m8*RMValidN.E.Add r/m8 to r8.
03 /rADD r16, r/m16RMValidValidAdd r/m16 to r16.
03 /rADD r32, r/m32RMValidValidAdd r/m32 to r32.
REX.W + 03 /rADD r64, r/m64RMValidN.E.Add r/m64 to r64.
+
+

*In 64-bit mode, r/m8 can not be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r, w)ModRM:r/m (r)N/AN/A
MRModRM:r/m (r, w)ModRM:reg (r)N/AN/A
MIModRM:r/m (r, w)imm8/16/32N/AN/A
IAL/AX/EAX/RAXimm8/16/32N/AN/A
+

Description + ¶ +

+

Adds the destination operand (first operand) and the source operand (second operand) and then stores the result in the destination operand. The destination operand can be a register or a memory location; the source operand can be an immediate, a register, or a memory location. (However, two memory operands cannot be used in one instruction.) When an immediate value is used as an operand, it is sign-extended to the length of the destination operand format.

+

The ADD instruction performs integer addition. It evaluates the result for both signed and unsigned integer operands and sets the OF and CF flags to indicate a carry (overflow) in the signed or unsigned result, respectively. The SF flag indicates the sign of the signed result.

+

This instruction can be used with a LOCK prefix to allow the instruction to be executed atomically.

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). Using a REX prefix in the form of REX.W promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+

Operation + ¶ +

+
DEST := DEST + SRC;
+
+
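Editor's side note on the flag behavior described above: C code cannot read EFLAGS directly, but the carry (CF) and signed-overflow (OF) conditions that ADD reports can be reconstructed from the operands and the truncated sum, as in this sketch (the function name add32 is invented).

#include <stdint.h>

/* 32-bit add: 'carry' mirrors CF (unsigned carry-out), 'overflow' mirrors OF. */
static uint32_t add32(uint32_t a, uint32_t b, int *carry, int *overflow)
{
    uint32_t sum = a + b;                         /* wraps modulo 2^32, like ADD */
    *carry = (sum < a);                           /* CF: unsigned overflow */
    *overflow = (int)(((a ^ sum) & (b ^ sum)) >> 31);  /* OF: both operands differ in sign from the sum */
    return sum;
}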

Flags Affected + ¶ +

+

The OF, SF, ZF, AF, CF, and PF flags are set according to the result.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
diff --git a/x86/addpd.html b/x86/addpd.html new file mode 100644 index 0000000..1e3917d --- /dev/null +++ b/x86/addpd.html @@ -0,0 +1,203 @@ + +ADDPD + — Add Packed Double Precision Floating-Point Values

ADDPD + — Add Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 58 /r ADDPD xmm1, xmm2/m128AV/VSSE2Add packed double precision floating-point values from xmm2/mem to xmm1 and store result in xmm1.
VEX.128.66.0F.WIG 58 /r VADDPD xmm1,xmm2, xmm3/m128BV/VAVXAdd packed double precision floating-point values from xmm3/mem to xmm2 and store result in xmm1.
VEX.256.66.0F.WIG 58 /r VADDPD ymm1, ymm2, ymm3/m256BV/VAVXAdd packed double precision floating-point values from ymm3/mem to ymm2 and store result in ymm1.
EVEX.128.66.0F.W1 58 /r VADDPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512FAdd packed double precision floating-point values from xmm3/m128/m64bcst to xmm2 and store result in xmm1 with writemask k1.
EVEX.256.66.0F.W1 58 /r VADDPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FAdd packed double precision floating-point values from ymm3/m256/m64bcst to ymm2 and store result in ymm1 with writemask k1.
EVEX.512.66.0F.W1 58 /r VADDPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er}CV/VAVX512FAdd packed double precision floating-point values from zmm3/m512/m64bcst to zmm2 and store result in zmm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Adds two, four or eight packed double precision floating-point values from the first source operand to the second source operand, and stores the packed double precision floating-point result in the destination operand.

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the corresponding ZMM register destination are zeroed.

+

VEX.128 encoded version: the first source operand is a XMM register. The second source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The second source can be an XMM register or an 128-bit memory location. The destination is not distinct from the first source XMM register and the upper Bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified.

+

Operation + ¶ +

+

VADDPD (EVEX Encoded Versions) When SRC2 Operand is a Vector Register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := SRC1[i+63:i] + SRC2[i+63:i]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VADDPD (EVEX Encoded Versions) When SRC2 Operand is a Memory Source + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+63:i] := SRC1[i+63:i] + SRC2[63:0]
+                ELSE
+                    DEST[i+63:i] := SRC1[i+63:i] + SRC2[i+63:i]
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VADDPD (VEX.256 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0] + SRC2[63:0]
+DEST[127:64] := SRC1[127:64] + SRC2[127:64]
+DEST[191:128] := SRC1[191:128] + SRC2[191:128]
+DEST[255:192] := SRC1[255:192] + SRC2[255:192]
+DEST[MAXVL-1:256] := 0
+
+
+

VADDPD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0] + SRC2[63:0]
+DEST[127:64] := SRC1[127:64] + SRC2[127:64]
+DEST[MAXVL-1:128] := 0
+
+

ADDPD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := DEST[63:0] + SRC[63:0]
+DEST[127:64] := DEST[127:64] + SRC[127:64]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VADDPD __m512d _mm512_add_pd (__m512d a, __m512d b);
+
+
VADDPD __m512d _mm512_mask_add_pd (__m512d s, __mmask8 k, __m512d a, __m512d b);
+
+
VADDPD __m512d _mm512_maskz_add_pd (__mmask8 k, __m512d a, __m512d b);
+
+
VADDPD __m256d _mm256_mask_add_pd (__m256d s, __mmask8 k, __m256d a, __m256d b);
+
+
VADDPD __m256d _mm256_maskz_add_pd (__mmask8 k, __m256d a, __m256d b);
+
+
VADDPD __m128d _mm_mask_add_pd (__m128d s, __mmask8 k, __m128d a, __m128d b);
+
+
VADDPD __m128d _mm_maskz_add_pd (__mmask8 k, __m128d a, __m128d b);
+
+
VADDPD __m512d _mm512_add_round_pd (__m512d a, __m512d b, int);
+
+
VADDPD __m512d _mm512_mask_add_round_pd (__m512d s, __mmask8 k, __m512d a, __m512d b, int);
+
+
VADDPD __m512d _mm512_maskz_add_round_pd (__mmask8 k, __m512d a, __m512d b, int);
+
+
ADDPD __m256d _mm256_add_pd (__m256d a, __m256d b);
+
+
ADDPD __m128d _mm_add_pd (__m128d a, __m128d b);
+
+
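Editor's usage sketch of the _mm_add_pd intrinsic listed above (assumes SSE2 and <immintrin.h>; the values in the comments are what the element-wise addition produces).

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128d a = _mm_setr_pd(1.5, 2.5);      /* element 0 = 1.5, element 1 = 2.5 */
    __m128d b = _mm_setr_pd(0.25, 0.75);
    __m128d c = _mm_add_pd(a, b);           /* {1.75, 3.25} */

    double out[2];
    _mm_storeu_pd(out, c);
    printf("%f %f\n", out[0], out[1]);      /* 1.750000 3.250000 */
    return 0;
}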

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instruction, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/addps.html b/x86/addps.html new file mode 100644 index 0000000..8a90067 --- /dev/null +++ b/x86/addps.html @@ -0,0 +1,216 @@ + +ADDPS + — Add Packed Single Precision Floating-Point Values

ADDPS + — Add Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 58 /r ADDPS xmm1, xmm2/m128AV/VSSEAdd packed single precision floating-point values from xmm2/m128 to xmm1 and store result in xmm1.
VEX.128.0F.WIG 58 /r VADDPS xmm1,xmm2, xmm3/m128BV/VAVXAdd packed single precision floating-point values from xmm3/m128 to xmm2 and store result in xmm1.
VEX.256.0F.WIG 58 /r VADDPS ymm1, ymm2, ymm3/m256BV/VAVXAdd packed single precision floating-point values from ymm3/m256 to ymm2 and store result in ymm1.
EVEX.128.0F.W0 58 /r VADDPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512FAdd packed single precision floating-point values from xmm3/m128/m32bcst to xmm2 and store result in xmm1 with writemask k1.
EVEX.256.0F.W0 58 /r VADDPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512FAdd packed single precision floating-point values from ymm3/m256/m32bcst to ymm2 and store result in ymm1 with writemask k1.
EVEX.512.0F.W0 58 /r VADDPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst {er}CV/VAVX512FAdd packed single precision floating-point values from zmm3/m512/m32bcst to zmm2 and store result in zmm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Adds four, eight or sixteen packed single precision floating-point values from the first source operand with the second source operand, and stores the packed single precision floating-point result in the destination operand.

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the corresponding ZMM register destination are zeroed.

+

VEX.128 encoded version: the first source operand is a XMM register. The second source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The second source can be an XMM register or an 128-bit memory location. The destination is not distinct from the first source XMM register and the upper Bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified.

+

Operation + ¶ +

+

VADDPS (EVEX Encoded Versions) When SRC2 Operand is a Register + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SRC1[i+31:i] + SRC2[i+31:i]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

VADDPS (EVEX Encoded Versions) When SRC2 Operand is a Memory Source + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+31:i] := SRC1[i+31:i] + SRC2[31:0]
+                ELSE
+                    DEST[i+31:i] := SRC1[i+31:i] + SRC2[i+31:i]
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

VADDPS (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] + SRC2[31:0]
+DEST[63:32] := SRC1[63:32] + SRC2[63:32]
+DEST[95:64] := SRC1[95:64] + SRC2[95:64]
+DEST[127:96] := SRC1[127:96] + SRC2[127:96]
+DEST[159:128] := SRC1[159:128] + SRC2[159:128]
+DEST[191:160]:= SRC1[191:160] + SRC2[191:160]
+DEST[223:192] := SRC1[223:192] + SRC2[223:192]
+DEST[255:224] := SRC1[255:224] + SRC2[255:224].
+DEST[MAXVL-1:256] := 0
+
+

VADDPS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] + SRC2[31:0]
+DEST[63:32] := SRC1[63:32] + SRC2[63:32]
+DEST[95:64] := SRC1[95:64] + SRC2[95:64]
+DEST[127:96] := SRC1[127:96] + SRC2[127:96]
+DEST[MAXVL-1:128] := 0
+
+

ADDPS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] + SRC2[31:0]
+DEST[63:32] := SRC1[63:32] + SRC2[63:32]
+DEST[95:64] := SRC1[95:64] + SRC2[95:64]
+DEST[127:96] := SRC1[127:96] + SRC2[127:96]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VADDPS __m512 _mm512_add_ps (__m512 a, __m512 b);
+
+
VADDPS __m512 _mm512_mask_add_ps (__m512 s, __mmask16 k, __m512 a, __m512 b);
+
+
VADDPS __m512 _mm512_maskz_add_ps (__mmask16 k, __m512 a, __m512 b);
+
+
VADDPS __m256 _mm256_mask_add_ps (__m256 s, __mmask8 k, __m256 a, __m256 b);
+
+
VADDPS __m256 _mm256_maskz_add_ps (__mmask8 k, __m256 a, __m256 b);
+
+
VADDPS __m128 _mm_mask_add_ps (__m128d s, __mmask8 k, __m128 a, __m128 b);
+
+
VADDPS __m128 _mm_maskz_add_ps (__mmask8 k, __m128 a, __m128 b);
+
+
VADDPS __m512 _mm512_add_round_ps (__m512 a, __m512 b, int);
+
+
VADDPS __m512 _mm512_mask_add_round_ps (__m512 s, __mmask16 k, __m512 a, __m512 b, int);
+
+
VADDPS __m512 _mm512_maskz_add_round_ps (__mmask16 k, __m512 a, __m512 b, int);
+
+
ADDPS __m256 _mm256_add_ps (__m256 a, __m256 b);
+
+
ADDPS __m128 _mm_add_ps (__m128 a, __m128 b);
+
+
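Editor's sketch of the writemask behavior described above, using the _mm512_maskz_add_ps intrinsic from the list (assumes AVX-512F and <immintrin.h>; the function name masked_add is invented).

#include <immintrin.h>

/* Zeroing-masked packed add: lanes whose mask bit is 0 are written as 0.0f,
   matching the zeroing-masking branch of the operation above. */
static __m512 masked_add(__m512 a, __m512 b)
{
    __mmask16 k = 0x00FF;                   /* keep only the low 8 of 16 lanes */
    return _mm512_maskz_add_ps(k, a, b);    /* lanes 8..15 of the result are 0.0f */
}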

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instruction, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/addsd.html b/x86/addsd.html new file mode 100644 index 0000000..cdbc30b --- /dev/null +++ b/x86/addsd.html @@ -0,0 +1,136 @@ + +ADDSD + — Add Scalar Double Precision Floating-Point Values

ADDSD + — Add Scalar Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F2 0F 58 /r ADDSD xmm1, xmm2/m64AV/VSSE2Add the low double precision floating-point value from xmm2/mem to xmm1 and store the result in xmm1.
VEX.LIG.F2.0F.WIG 58 /r VADDSD xmm1, xmm2, xmm3/m64BV/VAVXAdd the low double precision floating-point value from xmm3/mem to xmm2 and store the result in xmm1.
EVEX.LLIG.F2.0F.W1 58 /r VADDSD xmm1 {k1}{z}, xmm2, xmm3/m64{er}CV/VAVX512FAdd the low double precision floating-point value from xmm3/m64 to xmm2 and store the result in xmm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CTuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Adds the low double precision floating-point values from the second source operand and the first source operand and stores the double precision floating-point result in the destination operand.

+

The second source operand can be an XMM register or a 64-bit memory location. The first source and destination operands are XMM registers.

+

128-bit Legacy SSE version: The first source and destination operands are the same. Bits (MAXVL-1:64) of the corresponding destination register remain unchanged.

+

EVEX and VEX.128 encoded version: The first source operand is encoded by EVEX.vvvv/VEX.vvvv. Bits (127:64) of the XMM register destination are copied from corresponding bits in the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.

+

EVEX version: The low quadword element of the destination is updated according to the writemask.

+

Software should ensure VADDSD is encoded with VEX.L=0. Encoding VADDSD with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+

VADDSD (EVEX Encoded Version) + ¶ +

+
IF (EVEX.b = 1) AND SRC2 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[63:0] := SRC1[63:0] + SRC2[63:0]
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[63:0] := 0
+        FI;
+FI;
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VADDSD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0] + SRC2[63:0]
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

ADDSD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := DEST[63:0] + SRC[63:0]
+DEST[MAXVL-1:64] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VADDSD __m128d _mm_mask_add_sd (__m128d s, __mmask8 k, __m128d a, __m128d b);
+
+
VADDSD __m128d _mm_maskz_add_sd (__mmask8 k, __m128d a, __m128d b);
+
+
VADDSD __m128d _mm_add_round_sd (__m128d a, __m128d b, int);
+
+
VADDSD __m128d _mm_mask_add_round_sd (__m128d s, __mmask8 k, __m128d a, __m128d b, int);
+
+
VADDSD __m128d _mm_maskz_add_round_sd (__mmask8 k, __m128d a, __m128d b, int);
+
+
ADDSD __m128d _mm_add_sd (__m128d a, __m128d b);
+
+
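Editor's sketch of the scalar behavior described above, using the _mm_add_sd intrinsic from the list (assumes SSE2 and <immintrin.h>): only the low doubles are added, and the high double of the result comes from the first operand.

#include <immintrin.h>

static __m128d addsd_demo(void)
{
    __m128d a = _mm_setr_pd(1.0, 10.0);     /* low = 1.0, high = 10.0 */
    __m128d b = _mm_setr_pd(2.0, 20.0);
    return _mm_add_sd(a, b);                /* result = {3.0, 10.0} */
}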

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instruction, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/addss.html b/x86/addss.html new file mode 100644 index 0000000..858462b --- /dev/null +++ b/x86/addss.html @@ -0,0 +1,136 @@ + +ADDSS + — Add Scalar Single Precision Floating-Point Values

ADDSS + — Add Scalar Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 58 /r ADDSS xmm1, xmm2/m32AV/VSSEAdd the low single precision floating-point value from xmm2/mem to xmm1 and store the result in xmm1.
VEX.LIG.F3.0F.WIG 58 /r VADDSS xmm1,xmm2, xmm3/m32BV/VAVXAdd the low single precision floating-point value from xmm3/mem to xmm2 and store the result in xmm1.
EVEX.LLIG.F3.0F.W0 58 /r VADDSS xmm1 {k1}{z}, xmm2, xmm3/m32{er}CV/VAVX512FAdd the low single precision floating-point value from xmm3/m32 to xmm2 and store the result in xmm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CTuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Adds the low single precision floating-point values from the second source operand and the first source operand, and stores the single precision floating-point result in the destination operand.

+

The second source operand can be an XMM register or a 32-bit memory location. The first source and destination operands are XMM registers.

+

128-bit Legacy SSE version: The first source and destination operands are the same. Bits (MAXVL-1:32) of the corresponding destination register remain unchanged.

+

EVEX and VEX.128 encoded version: The first source operand is encoded by EVEX.vvvv/VEX.vvvv. Bits (127:32) of the XMM register destination are copied from corresponding bits in the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.

+

EVEX version: The low doubleword element of the destination is updated according to the writemask.

+

Software should ensure VADDSS is encoded with VEX.L=0. Encoding VADDSS with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+

VADDSS (EVEX Encoded Versions) + ¶ +

+
IF (EVEX.b = 1) AND SRC2 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[31:0] := SRC1[31:0] + SRC2[31:0]
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[31:0] := 0
+        FI;
+FI;
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VADDSS DEST, SRC1, SRC2 (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] + SRC2[31:0]
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

ADDSS DEST, SRC (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := DEST[31:0] + SRC[31:0]
+DEST[MAXVL-1:32] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VADDSS __m128 _mm_mask_add_ss (__m128 s, __mmask8 k, __m128 a, __m128 b);
+
+
VADDSS __m128 _mm_maskz_add_ss (__mmask8 k, __m128 a, __m128 b);
+
+
VADDSS __m128 _mm_add_round_ss (__m128 a, __m128 b, int);
+
+
VADDSS __m128 _mm_mask_add_round_ss (__m128 s, __mmask8 k, __m128 a, __m128 b, int);
+
+
VADDSS __m128 _mm_maskz_add_round_ss (__mmask8 k, __m128 a, __m128 b, int);
+
+
ADDSS __m128 _mm_add_ss (__m128 a, __m128 b);
+
+
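For illustration, a short C sketch of the masked forms listed above (assuming AVX512F and <immintrin.h>; function names are illustrative):

#include <immintrin.h>

/* Merge-masked scalar add: if k[0] is set, dst[31:0] = a[31:0] + b[31:0];
   otherwise dst[31:0] is taken from src. Bits 127:32 are copied from a. */
__m128 add_ss_merge(__m128 src, __mmask8 k, __m128 a, __m128 b)
{
    return _mm_mask_add_ss(src, k, a, b);
}

/* Zero-masked scalar add: dst[31:0] is zeroed when k[0] is clear. */
__m128 add_ss_zero(__mmask8 k, __m128 a, __m128 b)
{
    return _mm_maskz_add_ss(k, a, b);
}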

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instruction, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/addsubpd.html b/x86/addsubpd.html new file mode 100644 index 0000000..3683177 --- /dev/null +++ b/x86/addsubpd.html @@ -0,0 +1,135 @@ + +ADDSUBPD + — Packed Double Precision Floating-Point Add/Subtract

ADDSUBPD + — Packed Double Precision Floating-Point Add/Subtract

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
66 0F D0 /r ADDSUBPD xmm1, xmm2/m128RMV/VSSE3Add/subtract double precision floating-point values from xmm2/m128 to xmm1.
VEX.128.66.0F.WIG D0 /r VADDSUBPD xmm1, xmm2, xmm3/m128RVMV/VAVXAdd/subtract packed double precision floating-point values from xmm3/mem to xmm2 and stores result in xmm1.
VEX.256.66.0F.WIG D0 /r VADDSUBPD ymm1, ymm2, ymm3/m256RVMV/VAVXAdd/subtract packed double precision floating-point values from ymm3/mem to ymm2 and stores result in ymm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r, w)ModRM:r/m (r)N/AN/A
RVMModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Adds the odd-numbered double precision floating-point values of the first source operand (second operand) to the corresponding double precision floating-point values of the second source operand (third operand) and stores the results in the odd-numbered elements of the destination operand (first operand). Subtracts the even-numbered double precision floating-point values of the second source operand from the corresponding double precision floating-point values of the first source operand and stores the results in the even-numbered elements of the destination operand.

+

In 64-bit mode, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

128-bit Legacy SSE version: The second source can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding YMM register destination are unmodified. See Figure 3-3.

+

VEX.128 encoded version: The second source operand is an XMM register or a 128-bit memory location. The first source and destination operands are XMM registers. The upper bits (MAXVL-1:128) of the corresponding YMM register destination are zeroed.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+
+ (Figure: RESULT[127:64] := xmm1[127:64] + xmm2/m128[127:64]; RESULT[63:0] := xmm1[63:0] - xmm2/m128[63:0])
Figure 3-3. ADDSUBPD—Packed Double Precision Floating-Point Add/Subtract
+

Operation + ¶ +

+

ADDSUBPD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := DEST[63:0] - SRC[63:0]
+DEST[127:64] := DEST[127:64] + SRC[127:64]
+DEST[MAXVL-1:128] (Unmodified)
+
+

VADDSUBPD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0] - SRC2[63:0]
+DEST[127:64] := SRC1[127:64] + SRC2[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VADDSUBPD (VEX.256 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0] - SRC2[63:0]
+DEST[127:64] := SRC1[127:64] + SRC2[127:64]
+DEST[191:128] := SRC1[191:128] - SRC2[191:128]
+DEST[255:192] := SRC1[255:192] + SRC2[255:192]
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
ADDSUBPD __m128d _mm_addsub_pd(__m128d a, __m128d b)
+
+
VADDSUBPD __m256d _mm256_addsub_pd (__m256d a, __m256d b)
+
+
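For illustration, a brief C sketch of the alternating subtract/add behavior (assuming SSE3 and <pmmintrin.h>; the values are only examples):

#include <pmmintrin.h>

/* r[0] = a[0] - b[0] (low element subtracted), r[1] = a[1] + b[1] (high element added). */
__m128d addsub_pd_demo(void)
{
    __m128d a = _mm_set_pd(4.0, 3.0);   /* a[1] = 4.0, a[0] = 3.0 */
    __m128d b = _mm_set_pd(1.0, 2.0);   /* b[1] = 1.0, b[0] = 2.0 */
    return _mm_addsub_pd(a, b);         /* r[1] = 5.0, r[0] = 1.0 */
}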

Exceptions + ¶ +

+

When the source operand is a memory operand, it must be aligned on a 16-byte boundary or a general-protection exception (#GP) will be generated.

+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

See Table 2-19, “Type 2 Class Exception Conditions.”

diff --git a/x86/addsubps.html b/x86/addsubps.html new file mode 100644 index 0000000..2aabecb --- /dev/null +++ b/x86/addsubps.html @@ -0,0 +1,166 @@ + +ADDSUBPS + — Packed Single Precision Floating-Point Add/Subtract

ADDSUBPS + — Packed Single Precision Floating-Point Add/Subtract

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
F2 0F D0 /r ADDSUBPS xmm1, xmm2/m128RMV/VSSE3Add/subtract single precision floating-point values from xmm2/m128 to xmm1.
VEX.128.F2.0F.WIG D0 /r VADDSUBPS xmm1, xmm2, xmm3/m128RVMV/VAVXAdd/subtract single precision floating-point values from xmm3/mem to xmm2 and stores result in xmm1.
VEX.256.F2.0F.WIG D0 /r VADDSUBPS ymm1, ymm2, ymm3/m256RVMV/VAVXAdd/subtract single precision floating-point values from ymm3/mem to ymm2 and stores result in ymm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r, w)ModRM:r/m (r)N/AN/A
RVMModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Adds the odd-numbered single precision floating-point values of the first source operand (second operand) to the corresponding single precision floating-point values of the second source operand (third operand) and stores the results in the odd-numbered elements of the destination operand (first operand). Subtracts the even-numbered single precision floating-point values of the second source operand from the corresponding single precision floating-point values of the first source operand and stores the results in the even-numbered elements of the destination operand.

+

In 64-bit mode, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

128-bit Legacy SSE version: The second source can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding YMM register destination are unmodified. See Figure 3-4.

+

VEX.128 encoded version: The second source operand is an XMM register or a 128-bit memory location. The first source and destination operands are XMM registers. The upper bits (MAXVL-1:128) of the corresponding YMM register destination are zeroed.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+
+ (Figure: RESULT[31:0] := xmm1[31:0] - xmm2/m128[31:0]; RESULT[63:32] := xmm1[63:32] + xmm2/m128[63:32]; RESULT[95:64] := xmm1[95:64] - xmm2/m128[95:64]; RESULT[127:96] := xmm1[127:96] + xmm2/m128[127:96])
Figure 3-4. ADDSUBPS—Packed Single Precision Floating-Point Add/Subtract
+

Operation + ¶ +

+

ADDSUBPS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := DEST[31:0] - SRC[31:0]
+DEST[63:32] := DEST[63:32] + SRC[63:32]
+DEST[95:64] := DEST[95:64] - SRC[95:64]
+DEST[127:96] := DEST[127:96] + SRC[127:96]
+DEST[MAXVL-1:128] (Unmodified)
+
+

VADDSUBPS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] - SRC2[31:0]
+DEST[63:32] := SRC1[63:32] + SRC2[63:32]
+DEST[95:64] := SRC1[95:64] - SRC2[95:64]
+DEST[127:96] := SRC1[127:96] + SRC2[127:96]
+DEST[MAXVL-1:128] := 0
+
+

VADDSUBPS (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] - SRC2[31:0]
+DEST[63:32] := SRC1[63:32] + SRC2[63:32]
+DEST[95:64] := SRC1[95:64] - SRC2[95:64]
+DEST[127:96] := SRC1[127:96] + SRC2[127:96]
+DEST[159:128] := SRC1[159:128] - SRC2[159:128]
+DEST[191:160] := SRC1[191:160] + SRC2[191:160]
+DEST[223:192] := SRC1[223:192] - SRC2[223:192]
+DEST[255:224] := SRC1[255:224] + SRC2[255:224]
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
ADDSUBPS __m128 _mm_addsub_ps(__m128 a, __m128 b)
+
+
VADDSUBPS __m256 _mm256_addsub_ps (__m256 a, __m256 b)
+
+
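The alternating subtract/add pattern is commonly used for packed complex multiplication; a C sketch under the usual interleaved {re, im} layout (assuming SSE3 and <pmmintrin.h>; the helper name is illustrative):

#include <pmmintrin.h>

/* Multiply two pairs of complex floats stored as {re0, im0, re1, im1}:
   result.re = a.re*b.re - a.im*b.im, result.im = a.re*b.im + a.im*b.re. */
__m128 complex_mul_ps(__m128 a, __m128 b)
{
    __m128 ar = _mm_moveldup_ps(a);           /* {a0.re, a0.re, a1.re, a1.re} */
    __m128 ai = _mm_movehdup_ps(a);           /* {a0.im, a0.im, a1.im, a1.im} */
    __m128 t1 = _mm_mul_ps(ar, b);            /* {re*re, re*im, ...} */
    __m128 bs = _mm_shuffle_ps(b, b, 0xB1);   /* swap re/im within each pair */
    __m128 t2 = _mm_mul_ps(ai, bs);           /* {im*im, im*re, ...} */
    return _mm_addsub_ps(t1, t2);             /* even lanes subtract, odd lanes add */
}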

Exceptions + ¶ +

+

When the source operand is a memory operand, the operand must be aligned on a 16-byte boundary or a general-protection exception (#GP) will be generated.

+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

See Table 2-19, “Type 2 Class Exception Conditions.”

diff --git a/x86/adox.html b/x86/adox.html new file mode 100644 index 0000000..4117cb3 --- /dev/null +++ b/x86/adox.html @@ -0,0 +1,160 @@ + +ADOX + — Unsigned Integer Addition of Two Operands With Overflow Flag

ADOX + — Unsigned Integer Addition of Two Operands With Overflow Flag

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32bit Mode SupportCPUID Feature FlagDescription
F3 0F 38 F6 /r ADOX r32, r/m32RMV/VADXUnsigned addition of r32 with OF, r/m32 to r32, writes OF.
F3 REX.w 0F 38 F6 /r ADOX r64, r/m64RMV/N.E.ADXUnsigned addition of r64 with OF, r/m64 to r64, writes OF.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r, w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Performs an unsigned addition of the destination operand (first operand), the source operand (second operand) and the overflow-flag (OF) and stores the result in the destination operand. The destination operand is a general-purpose register, whereas the source operand can be a general-purpose register or memory location. The state of OF represents a carry from a previous addition. The instruction sets the OF flag with the carry generated by the unsigned addition of the operands.

+

The ADOX instruction is executed in the context of multi-precision addition, where we add a series of operands with a carry-chain. At the beginning of a chain of additions, we execute an instruction to zero the OF (e.g. XOR).

+

This instruction is supported in real mode and virtual-8086 mode. The operand size is always 32 bits if not in 64-bit mode.

+

In 64-bit mode, the default operation size is 32 bits. Using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). Using a REX prefix in the form of REX.W promotes operation to 64 bits.

+

ADOX executes normally either inside or outside a transaction region.

+

Note: ADOX uses the CF and OF flags differently than the ADD/ADC instructions defined in Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A: it consumes and updates only OF, leaving CF and the other status flags unmodified.

+

Operation + ¶ +

+
IF OperandSize is 64-bit
+    THEN OF:DEST[63:0] := DEST[63:0] + SRC[63:0] + OF;
+    ELSE OF:DEST[31:0] := DEST[31:0] + SRC[31:0] + OF;
+FI;
+
+

Flags Affected + ¶ +

+

OF is updated based on result. CF, SF, ZF, AF, and PF flags are unmodified.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
unsigned char _addcarryx_u32 (unsigned char c_in, unsigned int src1, unsigned int src2, unsigned int *sum_out);
+
+
unsigned char _addcarryx_u64 (unsigned char c_in, unsigned __int64 src1, unsigned __int64 src2, unsigned __int64 *sum_out);
+
+
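For illustration, a minimal multi-precision addition sketch built on the intrinsics above (assuming an ADX-capable compiler target and <immintrin.h>; the 4-limb layout is only an example):

#include <immintrin.h>
#include <stdint.h>

/* sum = a + b over four 64-bit limbs, least-significant limb first.
   Each call consumes and produces a carry, which the compiler can keep in
   OF (ADOX) or CF (ADCX) so two independent chains can be interleaved. */
static unsigned char add256(const uint64_t a[4], const uint64_t b[4],
                            unsigned long long sum[4])
{
    unsigned char c = 0;
    c = _addcarryx_u64(c, a[0], b[0], &sum[0]);
    c = _addcarryx_u64(c, a[1], b[1], &sum[1]);
    c = _addcarryx_u64(c, a[2], b[2], &sum[2]);
    c = _addcarryx_u64(c, a[3], b[3], &sum[3]);
    return c;   /* final carry out */
}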

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If CPUID.(EAX=07H, ECX=0H):EBX.ADX[bit 19] = 0.
#SS(0)For an illegal address in the SS segment.
#GP(0)For an illegal memory operand effective address in the CS, DS, ES, FS or GS segments.
If the DS, ES, FS, or GS register is used to access memory and it contains a null segment selector.
#PF(fault-code)For a page fault.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If CPUID.(EAX=07H, ECX=0H):EBX.ADX[bit 19] = 0.
#SS(0)For an illegal address in the SS segment.
#GP(0)If any part of the operand lies outside the effective address space from 0 to FFFFH.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If CPUID.(EAX=07H, ECX=0H):EBX.ADX[bit 19] = 0.
#SS(0)For an illegal address in the SS segment.
#GP(0)If any part of the operand lies outside the effective address space from 0 to FFFFH.
#PF(fault-code)For a page fault.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If CPUID.(EAX=07H, ECX=0H):EBX.ADX[bit 19] = 0.
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)For a page fault.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
diff --git a/x86/aesdec.html b/x86/aesdec.html new file mode 100644 index 0000000..c3fc20a --- /dev/null +++ b/x86/aesdec.html @@ -0,0 +1,149 @@ + +AESDEC + — Perform One Round of an AES Decryption Flow

AESDEC + — Perform One Round of an AES Decryption Flow

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
66 0F 38 DE /r AESDEC xmm1, xmm2/m128AV/VAESPerform one round of an AES decryption flow, using the Equivalent Inverse Cipher, using one 128-bit data (state) from xmm1 with one 128-bit round key from xmm2/m128.
VEX.128.66.0F38.WIG DE /r VAESDEC xmm1, xmm2, xmm3/m128BV/VAES AVXPerform one round of an AES decryption flow, using the Equivalent Inverse Cipher, using one 128-bit data (state) from xmm2 with one 128-bit round key from xmm3/m128; store the result in xmm1.
VEX.256.66.0F38.WIG DE /r VAESDEC ymm1, ymm2, ymm3/m256BV/VVAESPerform one round of an AES decryption flow, using the Equivalent Inverse Cipher, using two 128-bit data (state) from ymm2 with two 128-bit round keys from ymm3/m256; store the result in ymm1.
EVEX.128.66.0F38.WIG DE /r VAESDEC xmm1, xmm2, xmm3/m128CV/VVAES AVX512VLPerform one round of an AES decryption flow, using the Equivalent Inverse Cipher, using one 128-bit data (state) from xmm2 with one 128-bit round key from xmm3/m128; store the result in xmm1.
EVEX.256.66.0F38.WIG DE /r VAESDEC ymm1, ymm2, ymm3/m256CV/VVAES AVX512VLPerform one round of an AES decryption flow, using the Equivalent Inverse Cipher, using two 128-bit data (state) from ymm2 with two 128-bit round keys from ymm3/m256; store the result in ymm1.
EVEX.512.66.0F38.WIG DE /r VAESDEC zmm1, zmm2, zmm3/m512CV/VVAES AVX512FPerform one round of an AES decryption flow, using the Equivalent Inverse Cipher, using four 128-bit data (state) from zmm2 with four 128-bit round keys from zmm3/m512; store the result in zmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction performs a single round of the AES decryption flow using the Equivalent Inverse Cipher, using one/two/four (depending on vector length) 128-bit data (state) from the first source operand with one/two/four (depending on vector length) round key(s) from the second source operand, and stores the result in the destination operand.

+

Use the AESDEC instruction for all but the last decryption round. For the last decryption round, use the AESDECLAST instruction.

+

VEX and EVEX encoded versions of the instruction allow 3-operand (non-destructive) operation. The legacy encoded versions of the instruction require that the first source operand and the destination operand are the same and must be an XMM register.

+

The EVEX encoded form of this instruction does not support memory fault suppression.

+

Operation + ¶ +

+

AESDEC + ¶ +

+
STATE := SRC1;
+RoundKey := SRC2;
+STATE := InvShiftRows( STATE );
+STATE := InvSubBytes( STATE );
+STATE := InvMixColumns( STATE );
+DEST[127:0] := STATE XOR RoundKey;
+DEST[MAXVL-1:128] (Unmodified)
+
+

VAESDEC (128b and 256b VEX Encoded Versions) + ¶ +

+
(KL,VL) = (1,128), (2,256)
+FOR i = 0 to KL-1:
+    STATE := SRC1.xmm[i]
+    RoundKey := SRC2.xmm[i]
+    STATE := InvShiftRows( STATE )
+    STATE := InvSubBytes( STATE )
+    STATE := InvMixColumns( STATE )
+    DEST.xmm[i] := STATE XOR RoundKey
+DEST[MAXVL-1:VL] := 0
+
+

VAESDEC (EVEX Encoded Version) + ¶ +

+
(KL,VL) = (1,128), (2,256), (4,512)
+FOR i = 0 to KL-1:
+    STATE := SRC1.xmm[i]
+    RoundKey := SRC2.xmm[i]
+    STATE := InvShiftRows( STATE )
+    STATE := InvSubBytes( STATE )
+    STATE := InvMixColumns( STATE )
+    DEST.xmm[i] := STATE XOR RoundKey
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
(V)AESDEC __m128i _mm_aesdec (__m128i, __m128i)
+
+
VAESDEC __m256i _mm256_aesdec_epi128(__m256i, __m256i);
+
+
VAESDEC __m512i _mm512_aesdec_epi128(__m512i, __m512i);
+
+
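For illustration, a minimal C sketch of a full AES-128 decryption flow using these intrinsics (assuming <wmmintrin.h> and that dk[0..10] already holds the Equivalent Inverse Cipher round keys described under AESIMC; the helper name is illustrative):

#include <wmmintrin.h>

/* Decrypt one 16-byte block. dk[0] is the last encryption round key,
   dk[1..9] are the AESIMC-transformed middle keys, dk[10] is the original key. */
static __m128i aes128_decrypt_block(__m128i block, const __m128i dk[11])
{
    block = _mm_xor_si128(block, dk[0]);           /* initial AddRoundKey */
    for (int i = 1; i < 10; i++)
        block = _mm_aesdec_si128(block, dk[i]);    /* rounds 1..9 */
    return _mm_aesdeclast_si128(block, dk[10]);    /* last round, no InvMixColumns */
}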

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded: See Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/aesdec128kl.html b/x86/aesdec128kl.html new file mode 100644 index 0000000..2f54bea --- /dev/null +++ b/x86/aesdec128kl.html @@ -0,0 +1,98 @@ + +AESDEC128KL + — Perform Ten Rounds of AES Decryption Flow With Key Locker Using 128-BitKey

AESDEC128KL + — Perform Ten Rounds of AES Decryption Flow With Key Locker Using 128-Bit Key

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
F3 0F 38 DD !(11):rrr:bbb AESDEC128KL xmm, m384AV/VAESKLEDecrypt xmm using 128-bit AES key indicated by handle at m384 and store result in xmm.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

The AESDEC128KL¹ instruction performs 10 rounds of AES to decrypt the first operand using the 128-bit key indicated by the handle from the second operand. It stores the result in the first operand if the operation succeeds (e.g., does not run into a handle violation failure).

+

Operation + ¶ +

+

AESDEC128KL + ¶ +

+
Handle := UnalignedLoad of 384 bit (SRC); // Load is not guaranteed to be atomic.
+Illegal Handle = (HandleReservedBitSet (Handle) ||
+                (Handle[0] AND (CPL > 0)) ||
+                Handle [2] ||
+                HandleKeyType (Handle) != HANDLE_KEY_TYPE_AES128);
+IF (Illegal Handle)
+    THEN RFLAGS.ZF := 1;
+    ELSE
+        (UnwrappedKey, Authentic) := UnwrapKeyAndAuthenticate384 (Handle[383:0], IWKey);
+        IF (Authentic == 0)
+            THEN RFLAGS.ZF := 1;
+            ELSE
+                    DEST := AES128Decrypt (DEST, UnwrappedKey) ;
+                    RFLAGS.ZF := 0;
+        FI;
+FI;
+RFLAGS.OF, SF, AF, PF, CF := 0;
+
+

Flags Affected + ¶ +

+

ZF is set to 0 if the operation succeeded and set to 1 if the operation failed due to a handle violation. The other arithmetic flags (OF, SF, AF, PF, CF) are cleared to 0.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
AESDEC128KL unsigned char _mm_aesdec128kl_u8(__m128i* odata, __m128i idata, const void* h);
+
+
1. Further details on Key Locker and usage of this instruction can be found here:
+
+

https://software.intel.com/content/www/us/en/develop/download/intel-key-locker-specification.html. + ¶ +

+

Exceptions (All Operating Modes) + ¶ +

+

#UD If the LOCK prefix is used.

+

If CPUID.07H:ECX.KL[bit 23] = 0.

+

If CR4.KL = 0.

+

If CPUID.19H:EBX.AESKLE[bit 0] = 0.

+

If CR0.EM = 1.

+

If CR4.OSFXSR = 0.

+

#NM If CR0.TS = 1.

+

#PF If a page fault occurs.

+

#GP(0) If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.

+

If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.

+

If the memory address is in a non-canonical form.

+

#SS(0) If a memory operand effective address is outside the SS segment limit.

+

If a memory address referencing the SS segment is in a non-canonical form.

diff --git a/x86/aesdec256kl.html b/x86/aesdec256kl.html new file mode 100644 index 0000000..790a80e --- /dev/null +++ b/x86/aesdec256kl.html @@ -0,0 +1,98 @@ + +AESDEC256KL + — Perform 14 Rounds of AES Decryption Flow With Key Locker Using 256-Bit Key

AESDEC256KL + — Perform 14 Rounds of AES Decryption Flow With Key Locker Using 256-Bit Key

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
F3 0F 38 DF !(11):rrr:bbb AESDEC256KL xmm, m512AV/VAESKLEDecrypt xmm using 256-bit AES key indicated by handle at m512 and store result in xmm.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

The AESDEC256KL¹ instruction performs 14 rounds of AES to decrypt the first operand using the 256-bit key indicated by the handle from the second operand. It stores the result in the first operand if the operation succeeds (e.g., does not run into a handle violation failure).

+

Operation + ¶ +

+

AESDEC256KL + ¶ +

+
Handle := UnalignedLoad of 512 bit (SRC); // Load is not guaranteed to be atomic.
+Illegal Handle = (HandleReservedBitSet (Handle) ||
+                (Handle[0] AND (CPL > 0)) ||
+                Handle [2] ||
+                HandleKeyType (Handle) != HANDLE_KEY_TYPE_AES256);
+IF (Illegal Handle)
+    THEN RFLAGS.ZF := 1;
+    ELSE
+        (UnwrappedKey, Authentic) := UnwrapKeyAndAuthenticate512 (Handle[511:0], IWKey);
+        IF (Authentic == 0)
+            THEN RFLAGS.ZF := 1;
+            ELSE
+                    DEST := AES256Decrypt (DEST, UnwrappedKey) ;
+                    RFLAGS.ZF := 0;
+        FI;
+FI;
+RFLAGS.OF, SF, AF, PF, CF := 0;
+
+

Flags Affected + ¶ +

+

ZF is set to 0 if the operation succeeded and set to 1 if the operation failed due to a handle violation. The other arithmetic flags (OF, SF, AF, PF, CF) are cleared to 0.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
AESDEC256KL unsigned char _mm_aesdec256kl_u8(__m128i* odata, __m128i idata, const void* h);
+
+
1. Further details on Key Locker and usage of this instruction can be found here:
+
+

https://software.intel.com/content/www/us/en/develop/download/intel-key-locker-specification.html. + ¶ +

+

Exceptions (All Operating Modes) + ¶ +

+

#UD If the LOCK prefix is used.

+

If CPUID.07H:ECX.KL[bit 23] = 0.

+

If CR4.KL = 0.

+

If CPUID.19H:EBX.AESKLE[bit 0] = 0.

+

If CR0.EM = 1.

+

If CR4.OSFXSR = 0.

+

#NM If CR0.TS = 1.

+

#PF If a page fault occurs.

+

#GP(0) If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.

+

If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.

+

If the memory address is in a non-canonical form.

+

#SS(0) If a memory operand effective address is outside the SS segment limit.

+

If a memory address referencing the SS segment is in a non-canonical form.

diff --git a/x86/aesdeclast.html b/x86/aesdeclast.html new file mode 100644 index 0000000..0904649 --- /dev/null +++ b/x86/aesdeclast.html @@ -0,0 +1,145 @@ + +AESDECLAST + — Perform Last Round of an AES Decryption Flow

AESDECLAST + — Perform Last Round of an AES Decryption Flow

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
66 0F 38 DF /r AESDECLAST xmm1, xmm2/m128AV/VAESPerform the last round of an AES decryption flow, using the Equivalent Inverse Cipher, using one 128-bit data (state) from xmm1 with one 128-bit round key from xmm2/m128.
VEX.128.66.0F38.WIG DF /r VAESDECLAST xmm1, xmm2, xmm3/m128BV/VAES AVXPerform the last round of an AES decryption flow, using the Equivalent Inverse Cipher, using one 128-bit data (state) from xmm2 with one 128-bit round key from xmm3/m128; store the result in xmm1.
VEX.256.66.0F38.WIG DF /r VAESDECLAST ymm1, ymm2, ymm3/m256BV/VVAESPerform the last round of an AES decryption flow, using the Equivalent Inverse Cipher, using two 128-bit data (state) from ymm2 with two 128-bit round keys from ymm3/m256; store the result in ymm1.
EVEX.128.66.0F38.WIG DF /r VAESDECLAST xmm1, xmm2, xmm3/m128CV/VVAES AVX512VLPerform the last round of an AES decryption flow, using the Equivalent Inverse Cipher, using one 128-bit data (state) from xmm2 with one 128-bit round key from xmm3/m128; store the result in xmm1.
EVEX.256.66.0F38.WIG DF /r VAESDECLAST ymm1, ymm2, ymm3/m256CV/VVAES AVX512VLPerform the last round of an AES decryption flow, using the Equivalent Inverse Cipher, using two 128-bit data (state) from ymm2 with two 128-bit round keys from ymm3/m256; store the result in ymm1.
EVEX.512.66.0F38.WIG DF /r VAESDECLAST zmm1, zmm2, zmm3/m512CV/VVAES AVX512FPerform the last round of an AES decryption flow, using the Equivalent Inverse Cipher, using four 128-bit data (state) from zmm2 with four 128-bit round keys from zmm3/m512; store the result in zmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction performs the last round of the AES decryption flow using the Equivalent Inverse Cipher, using one/two/four (depending on vector length) 128-bit data (state) from the first source operand with one/two/four (depending on vector length) round key(s) from the second source operand, and stores the result in the destination operand.

+

VEX and EVEX encoded versions of the instruction allow 3-operand (non-destructive) operation. The legacy encoded versions of the instruction require that the first source operand and the destination operand are the same and must be an XMM register.

+

The EVEX encoded form of this instruction does not support memory fault suppression.

+

Operation + ¶ +

+

AESDECLAST + ¶ +

+
STATE := SRC1;
+RoundKey := SRC2;
+STATE := InvShiftRows( STATE );
+STATE := InvSubBytes( STATE );
+DEST[127:0] := STATE XOR RoundKey;
+DEST[MAXVL-1:128] (Unmodified)
+
+

VAESDECLAST (128b and 256b VEX Encoded Versions) + ¶ +

+
(KL,VL) = (1,128), (2,256)
+FOR i = 0 to KL-1:
+    STATE := SRC1.xmm[i]
+    RoundKey := SRC2.xmm[i]
+    STATE := InvShiftRows( STATE )
+    STATE := InvSubBytes( STATE )
+    DEST.xmm[i] := STATE XOR RoundKey
+DEST[MAXVL-1:VL] := 0
+
+

VAESDECLAST (EVEX Encoded Version) + ¶ +

+
(KL,VL) = (1,128), (2,256), (4,512)
+FOR i = 0 to KL-1:
+    STATE := SRC1.xmm[i]
+    RoundKey := SRC2.xmm[i]
+    STATE := InvShiftRows( STATE )
+    STATE := InvSubBytes( STATE )
+    DEST.xmm[i] := STATE XOR RoundKey
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
(V)AESDECLAST __m128i _mm_aesdeclast (__m128i, __m128i)
+
+
VAESDECLAST __m256i _mm256_aesdeclast_epi128(__m256i, __m256i);
+
+
VAESDECLAST __m512i _mm512_aesdeclast_epi128(__m512i, __m512i);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded: See Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/aesdecwide128kl.html b/x86/aesdecwide128kl.html new file mode 100644 index 0000000..e099413 --- /dev/null +++ b/x86/aesdecwide128kl.html @@ -0,0 +1,101 @@ + +AESDECWIDE128KL + — Perform Ten Rounds of AES Decryption Flow With Key Locker on 8 BlocksUsing 128-Bit Key

AESDECWIDE128KL + — Perform Ten Rounds of AES Decryption Flow With Key Locker on 8 Blocks Using 128-Bit Key

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
F3 0F 38 D8 !(11):001:bbb AESDECWIDE128KL m384, <XMM0-7>AV/VAESKLEWIDE_KLDecrypt XMM0-7 using 128-bit AES key indicated by handle at m384 and store each resultant block back to its corresponding register.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnTupleOperand 1Operands 2—9
AN/AModRM:r/m (r)Implicit XMM0-7 (r, w)
+

Description + ¶ +

+

The AESDECWIDE128KL¹ instruction performs ten rounds of AES to decrypt each of the eight blocks in XMM0-7 using the 128-bit key indicated by the handle from the second operand. It replaces each input block in XMM0-7 with its corresponding decrypted block if the operation succeeds (e.g., does not run into a handle violation failure).

+

Operation + ¶ +

+

AESDECWIDE128KL + ¶ +

+
Handle := UnalignedLoad of 384 bit (SRC); // Load is not guaranteed to be atomic.
+Illegal Handle = (HandleReservedBitSet (Handle) ||
+                (Handle[0] AND (CPL > 0)) ||
+                Handle [2] ||
+                HandleKeyType (Handle) != HANDLE_KEY_TYPE_AES128);
+IF (Illegal Handle)
+    THEN RFLAGS.ZF := 1;
+    ELSE
+        (UnwrappedKey, Authentic) := UnwrapKeyAndAuthenticate384 (Handle[383:0], IWKey);
+        IF (Authentic == 0)
+            THEN RFLAGS.ZF := 1;
+            ELSE
+                    XMM0 := AES128Decrypt (XMM0, UnwrappedKey) ;
+                    XMM1 := AES128Decrypt (XMM1, UnwrappedKey) ;
+                    XMM2 := AES128Decrypt (XMM2, UnwrappedKey) ;
+                    XMM3 := AES128Decrypt (XMM3, UnwrappedKey) ;
+                    XMM4 := AES128Decrypt (XMM4, UnwrappedKey) ;
+                    XMM5 := AES128Decrypt (XMM5, UnwrappedKey) ;
+                    XMM6 := AES128Decrypt (XMM6, UnwrappedKey) ;
+                    XMM7 := AES128Decrypt (XMM7, UnwrappedKey) ;
+                    RFLAGS.ZF := 0;
+        FI;
+FI;
+RFLAGS.OF, SF, AF, PF, CF := 0;
+
+

Flags Affected + ¶ +

+

ZF is set to 0 if the operation succeeded and set to 1 if the operation failed due to a handle violation. The other arithmetic flags (OF, SF, AF, PF, CF) are cleared to 0.

+

1. Further details on Key Locker and usage of this instruction can be found here:

+

https://software.intel.com/content/www/us/en/develop/download/intel-key-locker-specification.html. + ¶ +

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
AESDECWIDE128KL unsigned char _mm_aesdecwide128kl_u8(__m128i odata[8], const __m128i idata[8], const void* h);
+
+

Exceptions (All Operating Modes) + ¶ +

+

#UD If the LOCK prefix is used.

+

If CPUID.07H:ECX.KL[bit 23] = 0.

+

If CR4.KL = 0.

+

If CPUID.19H:EBX.AESKLE[bit 0] = 0.

+

If CR0.EM = 1.

+

If CR4.OSFXSR = 0.

+

If CPUID.19H:EBX.WIDE_KL[bit 2] = 0.

+

#NM If CR0.TS = 1.

+

#PF If a page fault occurs.

+

#GP(0) If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.

+

If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.

+

If the memory address is in a non-canonical form.

+

#SS(0) If a memory operand effective address is outside the SS segment limit.

+

If a memory address referencing the SS segment is in a non-canonical form.

diff --git a/x86/aesdecwide256kl.html b/x86/aesdecwide256kl.html new file mode 100644 index 0000000..fa1bfd5 --- /dev/null +++ b/x86/aesdecwide256kl.html @@ -0,0 +1,101 @@ + +AESDECWIDE256KL + — Perform 14 Rounds of AES Decryption Flow With Key Locker on 8 BlocksUsing 256-Bit Key

AESDECWIDE256KL + — Perform 14 Rounds of AES Decryption Flow With Key Locker on 8 Blocks Using 256-Bit Key

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
F3 0F 38 D8 !(11):011:bbb AESDECWIDE256KL m512, <XMM0-7>AV/VAESKLEWIDE_KLDecrypt XMM0-7 using 256-bit AES key indicated by handle at m512 and store each resultant block back to its corresponding register.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnTupleOperand 1Operands 2—9
AN/AModRM:r/m (r)Implicit XMM0-7 (r, w)
+

Description + ¶ +

+

The AESDECWIDE256KL¹ instruction performs 14 rounds of AES to decrypt each of the eight blocks in XMM0-7 using the 256-bit key indicated by the handle from the second operand. It replaces each input block in XMM0-7 with its corresponding decrypted block if the operation succeeds (e.g., does not run into a handle violation failure).

+

Operation + ¶ +

+

AESDECWIDE256KL + ¶ +

+
Handle := UnalignedLoad of 512 bit (SRC); // Load is not guaranteed to be atomic.
+Illegal Handle = (HandleReservedBitSet (Handle) ||
+                (Handle[0] AND (CPL > 0)) ||
+                Handle [2] ||
+                HandleKeyType (Handle) != HANDLE_KEY_TYPE_AES256);
+IF (Illegal Handle)
+    THEN RFLAGS.ZF := 1;
+    ELSE
+        (UnwrappedKey, Authentic) := UnwrapKeyAndAuthenticate512 (Handle[511:0], IWKey);
+        IF (Authentic == 0)
+            THEN RFLAGS.ZF := 1;
+            ELSE
+                XMM0 := AES256Decrypt (XMM0, UnwrappedKey) ;
+                XMM1 := AES256Decrypt (XMM1, UnwrappedKey) ;
+                XMM2 := AES256Decrypt (XMM2, UnwrappedKey) ;
+                XMM3 := AES256Decrypt (XMM3, UnwrappedKey) ;
+                XMM4 := AES256Decrypt (XMM4, UnwrappedKey) ;
+                XMM5 := AES256Decrypt (XMM5, UnwrappedKey) ;
+                XMM6 := AES256Decrypt (XMM6, UnwrappedKey) ;
+                XMM7 := AES256Decrypt (XMM7, UnwrappedKey) ;
+                RFLAGS.ZF := 0;
+        FI;
+FI;
+RFLAGS.OF, SF, AF, PF, CF := 0;
+
+

Flags Affected + ¶ +

+

ZF is set to 0 if the operation succeeded and set to 1 if the operation failed due to a handle violation. The other arithmetic flags (OF, SF, AF, PF, CF) are cleared to 0.

+

1. Further details on Key Locker and usage of this instruction can be found here:

+

https://software.intel.com/content/www/us/en/develop/download/intel-key-locker-specification.html. + ¶ +

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
AESDECWIDE256KL unsigned char _mm_aesdecwide256kl_u8(__m128i odata[8], const __m128i idata[8], const void* h);
+
+

Exceptions (All Operating Modes) + ¶ +

+

#UD If the LOCK prefix is used.

+

If CPUID.07H:ECX.KL[bit 23] = 0.

+

If CR4.KL = 0.

+

If CPUID.19H:EBX.AESKLE[bit 0] = 0.

+

If CR0.EM = 1.

+

If CR4.OSFXSR = 0.

+

If CPUID.19H:EBX.WIDE_KL[bit 2] = 0.

+

#NM If CR0.TS = 1.

+

#PF If a page fault occurs.

+

#GP(0) If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.

+

If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.

+

If the memory address is in a non-canonical form.

+

#SS(0) If a memory operand effective address is outside the SS segment limit.

+

If a memory address referencing the SS segment is in a non-canonical form.

diff --git a/x86/aesenc.html b/x86/aesenc.html new file mode 100644 index 0000000..5c05a83 --- /dev/null +++ b/x86/aesenc.html @@ -0,0 +1,149 @@ + +AESENC + — Perform One Round of an AES Encryption Flow

AESENC + — Perform One Round of an AES Encryption Flow

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
66 0F 38 DC /r AESENC xmm1, xmm2/m128AV/VAESPerform one round of an AES encryption flow, using one 128-bit data (state) from xmm1 with one 128-bit round key from xmm2/m128.
VEX.128.66.0F38.WIG DC /r VAESENC xmm1, xmm2, xmm3/m128BV/VAES AVXPerform one round of an AES encryption flow, using one 128-bit data (state) from xmm2 with one 128-bit round key from the xmm3/m128; store the result in xmm1.
VEX.256.66.0F38.WIG DC /r VAESENC ymm1, ymm2, ymm3/m256BV/VVAESPerform one round of an AES encryption flow, using two 128-bit data (state) from ymm2 with two 128-bit round keys from the ymm3/m256; store the result in ymm1.
EVEX.128.66.0F38.WIG DC /r VAESENC xmm1, xmm2, xmm3/m128CV/VVAES AVX512VLPerform one round of an AES encryption flow, using one 128-bit data (state) from xmm2 with one 128-bit round key from the xmm3/m128; store the result in xmm1.
EVEX.256.66.0F38.WIG DC /r VAESENC ymm1, ymm2, ymm3/m256CV/VVAES AVX512VLPerform one round of an AES encryption flow, using two 128-bit data (state) from ymm2 with two 128-bit round keys from the ymm3/m256; store the result in ymm1.
EVEX.512.66.0F38.WIG DC /r VAESENC zmm1, zmm2, zmm3/m512CV/VVAES AVX512FPerform one round of an AES encryption flow, using four 128-bit data (state) from zmm2 with four 128-bit round keys from the zmm3/m512; store the result in zmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction performs a single round of an AES encryption flow using one/two/four (depending on vector length) 128-bit data (state) from the first source operand with one/two/four (depending on vector length) round key(s) from the second source operand, and stores the result in the destination operand.

+

Use the AESENC instruction for all but the last encryption round. For the last encryption round, use the AESENCLAST instruction.

+

VEX and EVEX encoded versions of the instruction allow 3-operand (non-destructive) operation. The legacy encoded versions of the instruction require that the first source operand and the destination operand are the same and must be an XMM register.

+

The EVEX encoded form of this instruction does not support memory fault suppression.

+

Operation + ¶ +

+

AESENC + ¶ +

+
STATE := SRC1;
+RoundKey := SRC2;
+STATE := ShiftRows( STATE );
+STATE := SubBytes( STATE );
+STATE := MixColumns( STATE );
+DEST[127:0] := STATE XOR RoundKey;
+DEST[MAXVL-1:128] (Unmodified)
+
+

VAESENC (128b and 256b VEX Encoded Versions) + ¶ +

+
(KL,VL) = (1,128), (2,256)
+FOR i := 0 to KL-1:
+    STATE := SRC1.xmm[i]
+    RoundKey := SRC2.xmm[i]
+    STATE := ShiftRows( STATE )
+    STATE := SubBytes( STATE )
+    STATE := MixColumns( STATE )
+    DEST.xmm[i] := STATE XOR RoundKey
+DEST[MAXVL-1:VL] := 0
+
+

VAESENC (EVEX Encoded Version) + ¶ +

+
(KL,VL) = (1,128), (2,256), (4,512)
+FOR i := 0 to KL-1:
+    STATE := SRC1.xmm[i] // xmm[i] is the i’th xmm word in the SIMD register
+    RoundKey := SRC2.xmm[i]
+    STATE := ShiftRows( STATE )
+    STATE := SubBytes( STATE )
+    STATE := MixColumns( STATE )
+    DEST.xmm[i] := STATE XOR RoundKey
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
(V)AESENC __m128i _mm_aesenc (__m128i, __m128i)
+
+
VAESENC __m256i _mm256_aesenc_epi128(__m256i, __m256i);
+
+
VAESENC __m512i _mm512_aesenc_epi128(__m512i, __m512i);
+
+
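For illustration, a minimal C sketch of a full AES-128 encryption flow using these intrinsics (assuming <wmmintrin.h> and a pre-expanded key schedule rk[0..10]; the helper name is illustrative):

#include <wmmintrin.h>

/* Encrypt one 16-byte block with AES-128. rk[0] is the whitening key,
   rk[1..9] are the middle round keys, rk[10] is the final round key. */
static __m128i aes128_encrypt_block(__m128i block, const __m128i rk[11])
{
    block = _mm_xor_si128(block, rk[0]);           /* initial AddRoundKey */
    for (int i = 1; i < 10; i++)
        block = _mm_aesenc_si128(block, rk[i]);    /* rounds 1..9 */
    return _mm_aesenclast_si128(block, rk[10]);    /* last round, no MixColumns */
}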

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded: See Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/aesenc128kl.html b/x86/aesenc128kl.html new file mode 100644 index 0000000..e163b4c --- /dev/null +++ b/x86/aesenc128kl.html @@ -0,0 +1,100 @@ + +AESENC128KL + — Perform Ten Rounds of AES Encryption Flow With Key Locker Using 128-Bit Key

AESENC128KL + — Perform Ten Rounds of AES Encryption Flow With Key Locker Using 128-Bit Key

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
F3 0F 38 DC !(11):rrr:bbb AESENC128KL xmm, m384AV/VAESKLEEncrypt xmm using 128-bit AES key indicated by handle at m384 and store result in xmm.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

The AESENC128KL¹ instruction performs ten rounds of AES to encrypt the first operand using the 128-bit key indicated by the handle from the second operand. It stores the result in the first operand if the operation succeeds (e.g., does not run into a handle violation failure).

+

Operation + ¶ +

+

AESENC128KL + ¶ +

+
Handle := UnalignedLoad of 384 bit (SRC); // Load is not guaranteed to be atomic.
+Illegal Handle = (
+                HandleReservedBitSet (Handle) ||
+                (Handle[0] AND (CPL > 0)) ||
+                Handle [1] ||
+                HandleKeyType (Handle) != HANDLE_KEY_TYPE_AES128
+                );
+IF (Illegal Handle)
+    THEN RFLAGS.ZF := 1;
+    ELSE
+        (UnwrappedKey, Authentic) := UnwrapKeyAndAuthenticate384 (Handle[383:0], IWKey);
+        IF (Authentic == 0)
+        THEN RFLAGS.ZF := 1;
+        ELSE
+            DEST := AES128Encrypt (DEST, UnwrappedKey) ;
+            RFLAGS.ZF := 0;
+        FI;
+FI;
+RFLAGS.OF, SF, AF, PF, CF := 0;
+
+

Flags Affected + ¶ +

+

ZF is set to 0 if the operation succeeded and set to 1 if the operation failed due to a handle violation. The other arithmetic flags (OF, SF, AF, PF, CF) are cleared to 0.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
AESENC128KL unsigned char _mm_aesenc128kl_u8(__m128i* odata, __m128i idata, const void* h);
+
+
1. Further details on Key Locker and usage of this instruction can be found here:
+
+

https://software.intel.com/content/www/us/en/develop/download/intel-key-locker-specification.html. + ¶ +

+

Exceptions (All Operating Modes) + ¶ +

+

#UD If the LOCK prefix is used.

+

If CPUID.07H:ECX.KL[bit 23] = 0.

+

If CR4.KL = 0.

+

If CPUID.19H:EBX.AESKLE[bit 0] = 0.

+

If CR0.EM = 1.

+

If CR4.OSFXSR = 0.

+

#NM If CR0.TS = 1.

+

#PF If a page fault occurs.

+

#GP(0) If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.

+

If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.

+

If the memory address is in a non-canonical form.

+

#SS(0) If a memory operand effective address is outside the SS segment limit.

+

If a memory address referencing the SS segment is in a non-canonical form.

diff --git a/x86/aesenc256kl.html b/x86/aesenc256kl.html new file mode 100644 index 0000000..0a2df49 --- /dev/null +++ b/x86/aesenc256kl.html @@ -0,0 +1,100 @@ + +AESENC256KL + — Perform 14 Rounds of AES Encryption Flow With Key Locker Using 256-Bit Key

AESENC256KL + — Perform 14 Rounds of AES Encryption Flow With Key Locker Using 256-Bit Key

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
F3 0F 38 DE !(11):rrr:bbb AESENC256KL xmm, m512AV/VAESKLEEncrypt xmm using 256-bit AES key indicated by handle at m512 and store result in xmm.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

The AESENC256KL¹ instruction performs 14 rounds of AES to encrypt the first operand using the 256-bit key indicated by the handle from the second operand. It stores the result in the first operand if the operation succeeds (e.g., does not run into a handle violation failure).

+

Operation + ¶ +

+

AESENC256KL + ¶ +

+
Handle := UnalignedLoad of 512 bit (SRC); // Load is not guaranteed to be atomic.
+Illegal Handle = (
+                HandleReservedBitSet (Handle) ||
+                (Handle[0] AND (CPL > 0)) ||
+                Handle [1] ||
+                HandleKeyType (Handle) != HANDLE_KEY_TYPE_AES256
+                );
+IF (Illegal Handle)
+    THEN RFLAGS.ZF := 1;
+    ELSE
+        (UnwrappedKey, Authentic) := UnwrapKeyAndAuthenticate512 (Handle[511:0], IWKey);
+        IF (Authentic == 0)
+            THEN RFLAGS.ZF := 1;
+            ELSE
+                    DEST := AES256Encrypt (DEST, UnwrappedKey) ;
+                    RFLAGS.ZF := 0;
+        FI;
+FI;
+RFLAGS.OF, SF, AF, PF, CF := 0;
+
+

Flags Affected + ¶ +

+

ZF is set to 0 if the operation succeeded and set to 1 if the operation failed due to a handle violation. The other arithmetic flags (OF, SF, AF, PF, CF) are cleared to 0.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
AESENC256KL unsigned char _mm_aesenc256kl_u8(__m128i* odata, __m128i idata, const void* h);
+
+
1. Further details on Key Locker and usage of this instruction can be found here:
+
+

https://software.intel.com/content/www/us/en/develop/download/intel-key-locker-specification.html. + ¶ +

+

Exceptions (All Operating Modes) + ¶ +

+

#UD If the LOCK prefix is used.

+

If CPUID.07H:ECX.KL[bit 23] = 0.

+

If CR4.KL = 0.

+

If CPUID.19H:EBX.AESKLE[bit 0] = 0.

+

If CR0.EM = 1.

+

If CR4.OSFXSR = 0.

+

#NM If CR0.TS = 1.

+

#PF If a page fault occurs.

+

#GP(0) If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.

+

If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.

+

If the memory address is in a non-canonical form.

+

#SS(0) If a memory operand effective address is outside the SS segment limit.

+

If a memory address referencing the SS segment is in a non-canonical form.

diff --git a/x86/aesenclast.html b/x86/aesenclast.html new file mode 100644 index 0000000..add5c1d --- /dev/null +++ b/x86/aesenclast.html @@ -0,0 +1,145 @@ + +AESENCLAST + — Perform Last Round of an AES Encryption Flow

AESENCLAST + — Perform Last Round of an AES Encryption Flow

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
66 0F 38 DD /r AESENCLAST xmm1, xmm2/m128AV/VAESPerform the last round of an AES encryption flow, using one 128-bit data (state) from xmm1 with one 128-bit round key from xmm2/m128.
VEX.128.66.0F38.WIG DD /r VAESENCLAST xmm1, xmm2, xmm3/m128BV/VAES AVXPerform the last round of an AES encryption flow, using one 128-bit data (state) from xmm2 with one 128-bit round key from xmm3/m128; store the result in xmm1.
VEX.256.66.0F38.WIG DD /r VAESENCLAST ymm1, ymm2, ymm3/m256BV/VVAESPerform the last round of an AES encryption flow, using two 128-bit data (state) from ymm2 with two 128-bit round keys from ymm3/m256; store the result in ymm1.
EVEX.128.66.0F38.WIG DD /r VAESENCLAST xmm1, xmm2, xmm3/m128CV/VVAES AVX512VLPerform the last round of an AES encryption flow, using one 128-bit data (state) from xmm2 with one 128-bit round key from xmm3/m128; store the result in xmm1.
EVEX.256.66.0F38.WIG DD /r VAESENCLAST ymm1, ymm2, ymm3/m256CV/VVAES AVX512VLPerform the last round of an AES encryption flow, using two 128-bit data (state) from ymm2 with two 128-bit round keys from ymm3/m256; store the result in ymm1.
EVEX.512.66.0F38.WIG DD /r VAESENCLAST zmm1, zmm2, zmm3/m512CV/VVAES AVX512FPerform the last round of an AES encryption flow, using four 128-bit data (state) from zmm2 with four 128-bit round keys from zmm3/m512; store the result in zmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction performs the last round of an AES encryption flow using one/two/four (depending on vector length) 128-bit data (state) from the first source operand with one/two/four (depending on vector length) round key(s) from the second source operand, and stores the result in the destination operand.

+

VEX and EVEX encoded versions of the instruction allow 3-operand (non-destructive) operation. The legacy encoded versions of the instruction require that the first source operand and the destination operand are the same and must be an XMM register.

+

The EVEX encoded form of this instruction does not support memory fault suppression.

+

Operation + ¶ +

+

AESENCLAST + ¶ +

+
STATE := SRC1;
+RoundKey := SRC2;
+STATE := ShiftRows( STATE );
+STATE := SubBytes( STATE );
+DEST[127:0] := STATE XOR RoundKey;
+DEST[MAXVL-1:128] (Unmodified)
+
+

VAESENCLAST (128b and 256b VEX Encoded Versions) + ¶ +

+
(KL, VL) = (1,128), (2,256)
+FOR i = 0 to KL-1:
+    STATE := SRC1.xmm[i]
+    RoundKey := SRC2.xmm[i]
+    STATE := ShiftRows( STATE )
+    STATE := SubBytes( STATE )
+    DEST.xmm[i] := STATE XOR RoundKey
+DEST[MAXVL-1:VL] := 0
+
+

VAESENCLAST (EVEX Encoded Version) + ¶ +

+
(KL,VL) = (1,128), (2,256), (4,512)
+FOR i = 0 to KL-1:
+    STATE := SRC1.xmm[i]
+    RoundKey := SRC2.xmm[i]
+    STATE := ShiftRows( STATE )
+    STATE := SubBytes( STATE )
+    DEST.xmm[i] := STATE XOR RoundKey
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
(V)AESENCLAST __m128i _mm_aesenclast (__m128i, __m128i)
+
+
VAESENCLAST __m256i _mm256_aesenclast_epi128(__m256i, __m256i);
+
+
VAESENCLAST __m512i _mm512_aesenclast_epi128(__m512i, __m512i);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded: See Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/aesencwide128kl.html b/x86/aesencwide128kl.html new file mode 100644 index 0000000..a5ec547 --- /dev/null +++ b/x86/aesencwide128kl.html @@ -0,0 +1,103 @@ + +AESENCWIDE128KL + — Perform Ten Rounds of AES Encryption Flow With Key Locker on 8 BlocksUsing 128-Bit Key

AESENCWIDE128KL + — Perform Ten Rounds of AES Encryption Flow With Key Locker on 8 Blocks Using 128-Bit Key

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
F3 0F 38 D8 !(11):000:bbb AESENCWIDE128KL m384, <XMM0-7>AV/VAESKLE WIDE_KLEncrypt XMM0-7 using 128-bit AES key indicated by handle at m384 and store each resultant block back to its corresponding register.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnTupleOperand 1Operands 2—9
AN/AModRM:r/m (r)Implicit XMM0-7 (r, w)
+

Description + ¶ +

+

The AESENCWIDE128KL¹ instruction performs ten rounds of AES to encrypt each of the eight blocks in XMM0-7 using the 128-bit key indicated by the handle from the second operand. It replaces each input block in XMM0-7 with its corresponding encrypted block if the operation succeeds (e.g., does not run into a handle violation failure).

+

Operation + ¶ +

+

AESENCWIDE128KL + ¶ +

+
Handle := UnalignedLoad of 384 bit (SRC); // Load is not guaranteed to be atomic.
+Illegal Handle = (
+                HandleReservedBitSet (Handle) ||
+                (Handle[0] AND (CPL > 0)) ||
+                Handle [1] ||
+                HandleKeyType (Handle) != HANDLE_KEY_TYPE_AES128
+                );
+IF (Illegal Handle)
+    THEN RFLAGS.ZF := 1;
+    ELSE
+        (UnwrappedKey, Authentic) := UnwrapKeyAndAuthenticate384 (Handle[383:0], IWKey);
+        IF Authentic == 0
+            THEN RFLAGS.ZF := 1;
+            ELSE
+                    XMM0 := AES128Encrypt (XMM0, UnwrappedKey) ;
+                    XMM1 := AES128Encrypt (XMM1, UnwrappedKey) ;
+                    XMM2 := AES128Encrypt (XMM2, UnwrappedKey) ;
+                    XMM3 := AES128Encrypt (XMM3, UnwrappedKey) ;
+                    XMM4 := AES128Encrypt (XMM4, UnwrappedKey) ;
+                    XMM5 := AES128Encrypt (XMM5, UnwrappedKey) ;
+                    XMM6 := AES128Encrypt (XMM6, UnwrappedKey) ;
+                    XMM7 := AES128Encrypt (XMM7, UnwrappedKey) ;
+                    RFLAGS.ZF := 0;
+        FI;
+FI;
+RFLAGS.OF, SF, AF, PF, CF := 0;
+1. Further details on Key Locker and usage of this instruction can be found here:
+
+

https://software.intel.com/content/www/us/en/develop/download/intel-key-locker-specification.html. + ¶ +

+

Flags Affected + ¶ +

+

ZF is set to 0 if the operation succeeded and set to 1 if the operation failed due to a handle violation. The other arithmetic flags (OF, SF, AF, PF, CF) are cleared to 0.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
AESENCWIDE128KL unsigned char _mm_aesencwide128kl_u8(__m128i odata[8], const __m128i idata[8], const void* h);
+
+

Exceptions (All Operating Modes) + ¶ +

+

#UD If the LOCK prefix is used.

+

If CPUID.07H:ECX.KL[bit 23] = 0.

+

If CR4.KL = 0.

+

If CPUID.19H:EBX.AESKLE[bit 0] = 0.

+

If CR0.EM = 1.

+

If CR4.OSFXSR = 0.

+

If CPUID.19H:EBX.WIDE_KL[bit 2] = 0.

+

#NM If CR0.TS = 1.

+

#PF If a page fault occurs.

+

#GP(0) If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.

+

If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.

+

If the memory address is in a non-canonical form.

+

#SS(0) If a memory operand effective address is outside the SS segment limit.

+

If a memory address referencing the SS segment is in a non-canonical form.

diff --git a/x86/aesencwide256kl.html b/x86/aesencwide256kl.html new file mode 100644 index 0000000..91a732b --- /dev/null +++ b/x86/aesencwide256kl.html @@ -0,0 +1,103 @@ + +AESENCWIDE256KL + — Perform 14 Rounds of AES Encryption Flow With Key Locker on 8 BlocksUsing 256-Bit Key

AESENCWIDE256KL + — Perform 14 Rounds of AES Encryption Flow With Key Locker on 8 Blocks Using 256-Bit Key

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
F3 0F 38 D8 !(11):010:bbb AESENCWIDE256KL m512, <XMM0-7>AV/VAESKLE WIDE_KLEncrypt XMM0-7 using 256-bit AES key indicated by handle at m512 and store each resultant block back to its corresponding register.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnTupleOperand 1Operands 2—9
AN/AModRM:r/m (r)Implicit XMM0-7 (r, w)
+

Description + ¶ +

+

The AESENCWIDE256KL¹ instruction performs 14 rounds of AES to encrypt each of the eight blocks in XMM0-7 using the 256-bit key indicated by the handle from the second operand. It replaces each input block in XMM0-7 with its corresponding encrypted block if the operation succeeds (e.g., does not run into a handle violation failure).

+

Operation + ¶ +

+

AESENCWIDE256KL + ¶ +

+
Handle := UnalignedLoad of 512 bit (SRC); // Load is not guaranteed to be atomic.
+Illegal Handle = (
+                HandleReservedBitSet (Handle) ||
+                (Handle[0] AND (CPL > 0)) ||
+                Handle [1] ||
+                HandleKeyType (Handle) != HANDLE_KEY_TYPE_AES256
+                );
+IF (Illegal Handle)
+    THEN RFLAGS.ZF := 1;
+    ELSE
+        (UnwrappedKey, Authentic) := UnwrapKeyAndAuthenticate512 (Handle[511:0], IWKey);
+        IF (Authentic == 0)
+            THEN RFLAGS.ZF := 1;
+            ELSE
+                    XMM0 := AES256Encrypt (XMM0, UnwrappedKey) ;
+                    XMM1 := AES256Encrypt (XMM1, UnwrappedKey) ;
+                    XMM2 := AES256Encrypt (XMM2, UnwrappedKey) ;
+                    XMM3 := AES256Encrypt (XMM3, UnwrappedKey) ;
+                    XMM4 := AES256Encrypt (XMM4, UnwrappedKey) ;
+                    XMM5 := AES256Encrypt (XMM5, UnwrappedKey) ;
+                    XMM6 := AES256Encrypt (XMM6, UnwrappedKey) ;
+                    XMM7 := AES256Encrypt (XMM7, UnwrappedKey) ;
+                    RFLAGS.ZF := 0;
+        FI;
+FI;
+RFLAGS.OF, SF, AF, PF, CF := 0;
+1. Further details on Key Locker and usage of this instruction can be found here:
+
+

https://software.intel.com/content/www/us/en/develop/download/intel-key-locker-specification.html. + ¶ +

+

Flags Affected + ¶ +

+

ZF is set to 0 if the operation succeeded and set to 1 if the operation failed due to a handle violation. The other arithmetic flags (OF, SF, AF, PF, CF) are cleared to 0.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
AESENCWIDE256KL unsigned char _mm_aesencwide256kl_u8(__m128i odata[8], const __m128i idata[8], const void* h);
+
+

Exceptions (All Operating Modes) + ¶ +

+

#UD If the LOCK prefix is used.

+

If CPUID.07H:ECX.KL[bit 23] = 0.

+

If CR4.KL = 0.

+

If CPUID.19H:EBX.AESKLE[bit 0] = 0.

+

If CR0.EM = 1.

+

If CR4.OSFXSR = 0.

+

If CPUID.19H:EBX.WIDE_KL[bit 2] = 0.

+

#NM If CR0.TS = 1.

+

#PF If a page fault occurs.

+

#GP(0) If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.

+

If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.

+

If the memory address is in a non-canonical form.

+

#SS(0) If a memory operand effective address is outside the SS segment limit.

+

If a memory address referencing the SS segment is in a non-canonical form.

diff --git a/x86/aesimc.html b/x86/aesimc.html new file mode 100644 index 0000000..c7b47ab --- /dev/null +++ b/x86/aesimc.html @@ -0,0 +1,84 @@ + +AESIMC + — Perform the AES InvMixColumn Transformation

AESIMC + — Perform the AES InvMixColumn Transformation

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
66 0F 38 DB /r AESIMC xmm1, xmm2/m128RMV/VAESPerform the InvMixColumn transformation on a 128-bit round key from xmm2/m128 and store the result in xmm1.
VEX.128.66.0F38.WIG DB /r VAESIMC xmm1, xmm2/m128RMV/VBoth AES and AVX flagsPerform the InvMixColumn transformation on a 128-bit round key from xmm2/m128 and store the result in xmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Perform the InvMixColumns transformation on the source operand and store the result in the destination operand. The destination operand is an XMM register. The source operand can be an XMM register or a 128-bit memory location.

+

Note: the AESIMC instruction should be applied to the expanded AES round keys (except for the first and last round key) in order to prepare them for decryption using the “Equivalent Inverse Cipher” (defined in FIPS 197).

+

128-bit Legacy SSE version: Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: Bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

Note: In VEX-encoded versions, VEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Operation + ¶ +

+

AESIMC + ¶ +

+
DEST[127:0] := InvMixColumns( SRC );
+DEST[MAXVL-1:128] (Unmodified)
+
+

VAESIMC + ¶ +

+
DEST[127:0] := InvMixColumns( SRC );
+DEST[MAXVL-1:128] := 0;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
(V)AESIMC __m128i _mm_aesimc (__m128i)
+
+
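A short sketch of the note above, assuming the AES-NI intrinsic is exposed as _mm_aesimc_si128 in <wmmintrin.h> and that enc[0..10] already holds an expanded AES-128 encryption key schedule; the helper name is illustrative.

#include <wmmintrin.h>

/* Prepare a decryption key schedule for the Equivalent Inverse Cipher:
 * reuse the first and last round keys unchanged and apply InvMixColumns
 * (AESIMC) to the nine middle round keys, taken in reverse order. */
static void aes128_dec_schedule(const __m128i enc[11], __m128i dec[11])
{
    dec[0] = enc[10];
    for (int i = 1; i < 10; i++)
        dec[i] = _mm_aesimc_si128(enc[10 - i]);
    dec[10] = enc[0];
}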

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.vvvv ≠ 1111B.
diff --git a/x86/aeskeygenassist.html b/x86/aeskeygenassist.html new file mode 100644 index 0000000..d28542b --- /dev/null +++ b/x86/aeskeygenassist.html @@ -0,0 +1,100 @@ + +AESKEYGENASSIST + — AES Round Key Generation Assist

AESKEYGENASSIST + — AES Round Key Generation Assist

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
66 0F 3A DF /r ib AESKEYGENASSIST xmm1, xmm2/m128, imm8RMIV/VAESAssist in AES round key generation using an 8-bit Round Constant (RCON) specified in the immediate byte, operating on 128 bits of data specified in xmm2/m128, and store the result in xmm1.
VEX.128.66.0F3A.WIG DF /r ib VAESKEYGENASSIST xmm1, xmm2/m128, imm8RMIV/VBoth AES and AVX flagsAssist in AES round key generation using an 8-bit Round Constant (RCON) specified in the immediate byte, operating on 128 bits of data specified in xmm2/m128, and store the result in xmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMIModRM:reg (w)ModRM:r/m (r)imm8N/A
+

Description + ¶ +

+

Assists in expanding the AES cipher key by computing steps towards generating a round key for encryption, using 128-bit data specified in the source operand and an 8-bit round constant specified as an immediate, and stores the result in the destination operand.

+

The destination operand is an XMM register. The source operand can be an XMM register or a 128-bit memory location.

+

128-bit Legacy SSE version: Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: Bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

Note: In VEX-encoded versions, VEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Operation + ¶ +

+

AESKEYGENASSIST + ¶ +

+
X3[31:0] := SRC [127: 96];
+X2[31:0] := SRC [95: 64];
+X1[31:0] := SRC [63: 32];
+X0[31:0] := SRC [31: 0];
+RCON[31:0] := ZeroExtend(imm8[7:0]);
+DEST[31:0] := SubWord(X1);
+DEST[63:32 ] := RotWord( SubWord(X1) ) XOR RCON;
+DEST[95:64] := SubWord(X3);
+DEST[127:96] := RotWord( SubWord(X3) ) XOR RCON;
+DEST[MAXVL-1:128] (Unmodified)
+
+

VAESKEYGENASSIST + ¶ +

+
X3[31:0] := SRC [127: 96];
+X2[31:0] := SRC [95: 64];
+X1[31:0] := SRC [63: 32];
+X0[31:0] := SRC [31: 0];
+RCON[31:0] := ZeroExtend(imm8[7:0]);
+DEST[31:0] := SubWord(X1);
+DEST[63:32 ] := RotWord( SubWord(X1) ) XOR RCON;
+DEST[95:64] := SubWord(X3);
+DEST[127:96] := RotWord( SubWord(X3) ) XOR RCON;
+DEST[MAXVL-1:128] := 0;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
(V)AESKEYGENASSIST __m128i _mm_aeskeygenassist (__m128i, const int)
+
+
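A sketch of one AES-128 key-expansion step built around this instruction, assuming the intrinsic is exposed as _mm_aeskeygenassist_si128 (the header form of the equivalent listed above); the shuffle/XOR folding is the conventional software side of the expansion, not part of the instruction itself.

#include <wmmintrin.h>

/* Derive round key i+1 from round key i. AESKEYGENASSIST supplies the
 * SubWord/RotWord/RCON words; the shifts and XORs fold them into the key. */
static __m128i aes128_expand_step(__m128i key, __m128i kga)
{
    kga = _mm_shuffle_epi32(kga, _MM_SHUFFLE(3, 3, 3, 3));  /* broadcast word 3 */
    key = _mm_xor_si128(key, _mm_slli_si128(key, 4));
    key = _mm_xor_si128(key, _mm_slli_si128(key, 4));
    key = _mm_xor_si128(key, _mm_slli_si128(key, 4));
    return _mm_xor_si128(key, kga);
}

/* Example: rk1 = aes128_expand_step(rk0, _mm_aeskeygenassist_si128(rk0, 0x01)); */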

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.vvvv ≠ 1111B.
diff --git a/x86/and.html b/x86/and.html new file mode 100644 index 0000000..60c8165 --- /dev/null +++ b/x86/and.html @@ -0,0 +1,300 @@ + +AND + — Logical AND

AND + — Logical AND

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-bit ModeCompat/Leg ModeDescription
24 ibAND AL, imm8IValidValidAL AND imm8.
25 iwAND AX, imm16IValidValidAX AND imm16.
25 idAND EAX, imm32IValidValidEAX AND imm32.
REX.W + 25 idAND RAX, imm32IValidN.E.RAX AND imm32 sign-extended to 64-bits.
80 /4 ibAND r/m8, imm8MIValidValidr/m8 AND imm8.
REX + 80 /4 ibAND r/m8*, imm8MIValidN.E.r/m8 AND imm8.
81 /4 iwAND r/m16, imm16MIValidValidr/m16 AND imm16.
81 /4 idAND r/m32, imm32MIValidValidr/m32 AND imm32.
REX.W + 81 /4 idAND r/m64, imm32MIValidN.E.r/m64 AND imm32 sign extended to 64-bits.
83 /4 ibAND r/m16, imm8MIValidValidr/m16 AND imm8 (sign-extended).
83 /4 ibAND r/m32, imm8MIValidValidr/m32 AND imm8 (sign-extended).
REX.W + 83 /4 ibAND r/m64, imm8MIValidN.E.r/m64 AND imm8 (sign-extended).
20 /rAND r/m8, r8MRValidValidr/m8 AND r8.
REX + 20 /rAND r/m8*, r8*MRValidN.E.r/m8 AND r8.
21 /rAND r/m16, r16MRValidValidr/m16 AND r16.
21 /rAND r/m32, r32MRValidValidr/m32 AND r32.
REX.W + 21 /rAND r/m64, r64MRValidN.E.r/m64 AND r64.
22 /rAND r8, r/m8RMValidValidr8 AND r/m8.
REX + 22 /rAND r8*, r/m8*RMValidN.E.r8 AND r/m8.
23 /rAND r16, r/m16RMValidValidr16 AND r/m16.
23 /rAND r32, r/m32RMValidValidr32 AND r/m32.
REX.W + 23 /rAND r64, r/m64RMValidN.E.r64 AND r/m64.
+
+

*In 64-bit mode, r/m8 can not be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r, w)ModRM:r/m (r)N/AN/A
MRModRM:r/m (r, w)ModRM:reg (r)N/AN/A
MIModRM:r/m (r, w)imm8/16/32N/AN/A
IAL/AX/EAX/RAXimm8/16/32N/AN/A
+

Description + ¶ +

+

Performs a bitwise AND operation on the destination (first) and source (second) operands and stores the result in the destination operand location. The source operand can be an immediate, a register, or a memory location; the destination operand can be a register or a memory location. (However, two memory operands cannot be used in one instruction.) Each bit of the result is set to 1 if both corresponding bits of the first and second operands are 1; otherwise, it is set to 0.

+

This instruction can be used with a LOCK prefix to allow it to be executed atomically.

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). Using a REX prefix in the form of REX.W promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+

Operation + ¶ +

+
DEST := DEST AND SRC;
+
+

Flags Affected + ¶ +

+

The OF and CF flags are cleared; the SF, ZF, and PF flags are set according to the result. The state of the AF flag is undefined.

+
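An illustrative C sketch (the helper names are mine): compilers normally lower the & operator to AND, and the flag behavior noted in the comments mirrors the description above.

#include <stdint.h>

/* AND r/m32, imm32: clear the low nibble of x. OF and CF are cleared;
 * SF, ZF, and PF reflect the masked result. */
static uint32_t clear_low_nibble(uint32_t x)
{
    return x & 0xFFFFFFF0u;
}

/* Bit test via AND: ZF is set when the masked result is zero. */
static int has_flag(uint32_t flags, uint32_t bit)
{
    return (flags & bit) != 0;
}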

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination operand points to a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
diff --git a/x86/andn.html b/x86/andn.html new file mode 100644 index 0000000..df67b5b --- /dev/null +++ b/x86/andn.html @@ -0,0 +1,74 @@ + +ANDN + — Logical AND NOT

ANDN + — Logical AND NOT

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
VEX.LZ.0F38.W0 F2 /r ANDN r32a, r32b, r/m32RVMV/VBMI1Bitwise AND of inverted r32b with r/m32, store result in r32a.
VEX.LZ. 0F38.W1 F2 /r ANDN r64a, r64b, r/m64RVMV/N.E.BMI1Bitwise AND of inverted r64b with r/m64, store result in r64a.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RVMModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a bitwise logical AND of the inverted second operand (the first source operand) with the third operand (the second source operand). The result is stored in the first operand (destination operand).

+

This instruction is not supported in real mode and virtual-8086 mode. The operand size is always 32 bits if not in 64-bit mode. In 64-bit mode operand size 64 requires VEX.W1. VEX.W1 is ignored in non-64-bit modes. An attempt to execute this instruction with VEX.L not equal to 0 will cause #UD.

+

Operation + ¶ +

+
DEST := (NOT SRC1) bitwiseAND SRC2;
+SF := DEST[OperandSize -1];
+ZF := (DEST = 0);
+
+

Flags Affected + ¶ +

+

SF and ZF are updated based on result. OF and CF flags are cleared. AF and PF flags are undefined.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
Auto-generated from high-level language.
+
+
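Since the intrinsic equivalent is listed as auto-generated, here is a hedged C sketch of the pattern a BMI1-enabled compiler (e.g., built with -mbmi, an assumed flag) typically lowers to ANDN.

#include <stdint.h>

/* (~mask) & value: clear a set of bits in one ANDN instead of NOT + AND. */
static uint32_t clear_bits(uint32_t value, uint32_t mask)
{
    return ~mask & value;
}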

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-29, “Type 13 Class Exception Conditions.”

diff --git a/x86/andnpd.html b/x86/andnpd.html new file mode 100644 index 0000000..98259e0 --- /dev/null +++ b/x86/andnpd.html @@ -0,0 +1,171 @@ + +ANDNPD + — Bitwise Logical AND NOT of Packed Double Precision Floating-Point Values

ANDNPD + — Bitwise Logical AND NOT of Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 55 /r ANDNPD xmm1, xmm2/m128AV/VSSE2Return the bitwise logical AND NOT of packed double precision floating-point values in xmm1 and xmm2/mem.
VEX.128.66.0F 55 /r VANDNPD xmm1, xmm2, xmm3/m128BV/VAVXReturn the bitwise logical AND NOT of packed double precision floating-point values in xmm2 and xmm3/mem.
VEX.256.66.0F 55/r VANDNPD ymm1, ymm2, ymm3/m256BV/VAVXReturn the bitwise logical AND NOT of packed double precision floating-point values in ymm2 and ymm3/mem.
EVEX.128.66.0F.W1 55 /r VANDNPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512DQReturn the bitwise logical AND NOT of packed double precision floating-point values in xmm2 and xmm3/m128/m64bcst subject to writemask k1.
EVEX.256.66.0F.W1 55 /r VANDNPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512DQReturn the bitwise logical AND NOT of packed double precision floating-point values in ymm2 and ymm3/m256/m64bcst subject to writemask k1.
EVEX.512.66.0F.W1 55 /r VANDNPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstCV/VAVX512DQReturn the bitwise logical AND NOT of packed double precision floating-point values in zmm2 and zmm3/m512/m64bcst subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a bitwise logical AND NOT of the two, four or eight packed double precision floating-point values from the first source operand and the second source operand, and stores the result in the destination operand.

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the corresponding ZMM register destination are zeroed.

+

VEX.128 encoded version: The first source operand is an XMM register. The second source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The second source can be an XMM register or an 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding register destination are unmodified.

+

Operation + ¶ +

+

VANDNPD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+            IF (EVEX.b == 1) AND (SRC2 *is memory*)
+                THEN
+                    DEST[i+63:i] := (NOT(SRC1[i+63:i])) BITWISE AND SRC2[63:0]
+                ELSE
+                    DEST[i+63:i] := (NOT(SRC1[i+63:i])) BITWISE AND SRC2[i+63:i]
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] = 0
+            FI;
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VANDNPD (VEX.256 Encoded Version) + ¶ +

+
DEST[63:0] := (NOT(SRC1[63:0])) BITWISE AND SRC2[63:0]
+DEST[127:64] := (NOT(SRC1[127:64])) BITWISE AND SRC2[127:64]
+DEST[191:128] := (NOT(SRC1[191:128])) BITWISE AND SRC2[191:128]
+DEST[255:192] := (NOT(SRC1[255:192])) BITWISE AND SRC2[255:192]
+DEST[MAXVL-1:256] := 0
+
+

VANDNPD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := (NOT(SRC1[63:0])) BITWISE AND SRC2[63:0]
+DEST[127:64] := (NOT(SRC1[127:64])) BITWISE AND SRC2[127:64]
+DEST[MAXVL-1:128] := 0
+
+

ANDNPD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := (NOT(DEST[63:0])) BITWISE AND SRC[63:0]
+DEST[127:64] := (NOT(DEST[127:64])) BITWISE AND SRC[127:64]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VANDNPD __m512d _mm512_andnot_pd (__m512d a, __m512d b);
+
+
VANDNPD __m512d _mm512_mask_andnot_pd (__m512d s, __mmask8 k, __m512d a, __m512d b);
+
+
VANDNPD __m512d _mm512_maskz_andnot_pd (__mmask8 k, __m512d a, __m512d b);
+
+
VANDNPD __m256d _mm256_mask_andnot_pd (__m256d s, __mmask8 k, __m256d a, __m256d b);
+
+
VANDNPD __m256d _mm256_maskz_andnot_pd (__mmask8 k, __m256d a, __m256d b);
+
+
VANDNPD __m128d _mm_mask_andnot_pd (__m128d s, __mmask8 k, __m128d a, __m128d b);
+
+
VANDNPD __m128d _mm_maskz_andnot_pd (__mmask8 k, __m128d a, __m128d b);
+
+
VANDNPD __m256d _mm256_andnot_pd (__m256d a, __m256d b);
+
+
ANDNPD __m128d _mm_andnot_pd (__m128d a, __m128d b);
+
+
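A common use of the AND NOT form is stripping sign bits; a minimal sketch with the listed _mm_andnot_pd intrinsic (the helper name is mine).

#include <emmintrin.h>

/* fabs on two packed doubles: -0.0 has only bit 63 set in each lane, so
 * (~sign_mask) AND x clears the sign bit of each element. */
static __m128d fabs_pd(__m128d x)
{
    const __m128d sign_mask = _mm_set1_pd(-0.0);
    return _mm_andnot_pd(sign_mask, x);
}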

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

VEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/andnps.html b/x86/andnps.html new file mode 100644 index 0000000..a789d71 --- /dev/null +++ b/x86/andnps.html @@ -0,0 +1,179 @@ + +ANDNPS + — Bitwise Logical AND NOT of Packed Single Precision Floating-Point Values

ANDNPS + — Bitwise Logical AND NOT of Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 55 /r ANDNPS xmm1, xmm2/m128AV/VSSEReturn the bitwise logical AND NOT of packed single precision floating-point values in xmm1 and xmm2/mem.
VEX.128.0F 55 /r VANDNPS xmm1, xmm2, xmm3/m128BV/VAVXReturn the bitwise logical AND NOT of packed single precision floating-point values in xmm2 and xmm3/mem.
VEX.256.0F 55 /r VANDNPS ymm1, ymm2, ymm3/m256BV/VAVXReturn the bitwise logical AND NOT of packed single precision floating-point values in ymm2 and ymm3/mem.
EVEX.128.0F.W0 55 /r VANDNPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512DQReturn the bitwise logical AND NOT of packed single precision floating-point values in xmm2 and xmm3/m128/m32bcst subject to writemask k1.
EVEX.256.0F.W0 55 /r VANDNPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512DQReturn the bitwise logical AND NOT of packed single precision floating-point values in ymm2 and ymm3/m256/m32bcst subject to writemask k1.
EVEX.512.0F.W0 55 /r VANDNPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstCV/VAVX512DQReturn the bitwise logical AND NOT of packed single precision floating-point values in zmm2 and zmm3/m512/m32bcst subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a bitwise logical AND NOT of the four, eight or sixteen packed single precision floating-point values from the first source operand and the second source operand, and stores the result in the destination operand.

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the corresponding ZMM register destination are zeroed.

+

VEX.128 encoded version: The first source operand is an XMM register. The second source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The second source can be an XMM register or an 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified.

+

Operation + ¶ +

+

VANDNPS (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+            IF (EVEX.b == 1) AND (SRC2 *is memory*)
+                THEN
+                    DEST[i+31:i] := (NOT(SRC1[i+31:i])) BITWISE AND SRC2[31:0]
+                ELSE
+                    DEST[i+31:i] := (NOT(SRC1[i+31:i])) BITWISE AND SRC2[i+31:i]
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] = 0
+            FI;
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VANDNPS (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := (NOT(SRC1[31:0])) BITWISE AND SRC2[31:0]
+DEST[63:32] := (NOT(SRC1[63:32])) BITWISE AND SRC2[63:32]
+DEST[95:64] := (NOT(SRC1[95:64])) BITWISE AND SRC2[95:64]
+DEST[127:96] := (NOT(SRC1[127:96])) BITWISE AND SRC2[127:96]
+DEST[159:128] := (NOT(SRC1[159:128])) BITWISE AND SRC2[159:128]
+DEST[191:160] := (NOT(SRC1[191:160])) BITWISE AND SRC2[191:160]
+DEST[223:192] := (NOT(SRC1[223:192])) BITWISE AND SRC2[223:192]
+DEST[255:224] := (NOT(SRC1[255:224])) BITWISE AND SRC2[255:224].
+DEST[MAXVL-1:256] := 0
+
+

VANDNPS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := (NOT(SRC1[31:0])) BITWISE AND SRC2[31:0]
+DEST[63:32] := (NOT(SRC1[63:32])) BITWISE AND SRC2[63:32]
+DEST[95:64] := (NOT(SRC1[95:64])) BITWISE AND SRC2[95:64]
+DEST[127:96] := (NOT(SRC1[127:96])) BITWISE AND SRC2[127:96]
+DEST[MAXVL-1:128] := 0
+
+

ANDNPS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := (NOT(DEST[31:0])) BITWISE AND SRC[31:0]
+DEST[63:32] := (NOT(DEST[63:32])) BITWISE AND SRC[63:32]
+DEST[95:64] := (NOT(DEST[95:64])) BITWISE AND SRC[95:64]
+DEST[127:96] := (NOT(DEST[127:96])) BITWISE AND SRC[127:96]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VANDNPS __m512 _mm512_andnot_ps (__m512 a, __m512 b);
+
+
VANDNPS __m512 _mm512_mask_andnot_ps (__m512 s, __mmask16 k, __m512 a, __m512 b);
+
+
VANDNPS __m512 _mm512_maskz_andnot_ps (__mmask16 k, __m512 a, __m512 b);
+
+
VANDNPS __m256 _mm256_mask_andnot_ps (__m256 s, __mmask8 k, __m256 a, __m256 b);
+
+
VANDNPS __m256 _mm256_maskz_andnot_ps (__mmask8 k, __m256 a, __m256 b);
+
+
VANDNPS __m128 _mm_mask_andnot_ps (__m128 s, __mmask8 k, __m128 a, __m128 b);
+
+
VANDNPS __m128 _mm_maskz_andnot_ps (__mmask8 k, __m128 a, __m128 b);
+
+
VANDNPS __m256 _mm256_andnot_ps (__m256 a, __m256 b);
+
+
ANDNPS __m128 _mm_andnot_ps (__m128 a, __m128 b);
+
+
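A copysign-style sketch combining ANDPS, ANDNPS, and ORPS through the listed intrinsics plus _mm_or_ps and _mm_set1_ps; the helper name is mine.

#include <xmmintrin.h>

/* copysign for four packed floats: magnitude from x, sign from y.
 * ANDNPS keeps everything but the sign of x; ANDPS keeps only the sign of y. */
static __m128 copysign_ps(__m128 x, __m128 y)
{
    const __m128 sign_mask = _mm_set1_ps(-0.0f);     /* bit 31 set in each lane */
    __m128 sign = _mm_and_ps(sign_mask, y);
    __m128 mag  = _mm_andnot_ps(sign_mask, x);
    return _mm_or_ps(mag, sign);
}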

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

VEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/andpd.html b/x86/andpd.html new file mode 100644 index 0000000..fe37155 --- /dev/null +++ b/x86/andpd.html @@ -0,0 +1,172 @@ + +ANDPD + — Bitwise Logical AND of Packed Double Precision Floating-Point Values

ANDPD + — Bitwise Logical AND of Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 54 /r ANDPD xmm1, xmm2/m128AV/VSSE2Return the bitwise logical AND of packed double precision floating-point values in xmm1 and xmm2/mem.
VEX.128.66.0F 54 /r VANDPD xmm1, xmm2, xmm3/m128BV/VAVXReturn the bitwise logical AND of packed double precision floating-point values in xmm2 and xmm3/mem.
VEX.256.66.0F 54 /r VANDPD ymm1, ymm2, ymm3/m256BV/VAVXReturn the bitwise logical AND of packed double precision floating-point values in ymm2 and ymm3/mem.
EVEX.128.66.0F.W1 54 /r VANDPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512DQReturn the bitwise logical AND of packed double precision floating-point values in xmm2 and xmm3/m128/m64bcst subject to writemask k1.
EVEX.256.66.0F.W1 54 /r VANDPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512DQReturn the bitwise logical AND of packed double precision floating-point values in ymm2 and ymm3/m256/m64bcst subject to writemask k1.
EVEX.512.66.0F.W1 54 /r VANDPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstCV/VAVX512DQReturn the bitwise logical AND of packed double precision floating-point values in zmm2 and zmm3/m512/m64bcst subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a bitwise logical AND of the two, four or eight packed double precision floating-point values from the first source operand and the second source operand, and stores the result in the destination operand.

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the corresponding ZMM register destination are zeroed.

+

VEX.128 encoded version: The first source operand is an XMM register. The second source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The second source can be an XMM register or an 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding register destination are unmodified.

+

Operation + ¶ +

+

VANDPD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b == 1) AND (SRC2 *is memory*)
+                THEN
+                    DEST[i+63:i] := SRC1[i+63:i] BITWISE AND SRC2[63:0]
+                ELSE
+                    DEST[i+63:i] := SRC1[i+63:i] BITWISE AND SRC2[i+63:i]
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] = 0
+            FI;
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VANDPD (VEX.256 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0] BITWISE AND SRC2[63:0]
+DEST[127:64] := SRC1[127:64] BITWISE AND SRC2[127:64]
+DEST[191:128] := SRC1[191:128] BITWISE AND SRC2[191:128]
+DEST[255:192] := SRC1[255:192] BITWISE AND SRC2[255:192]
+DEST[MAXVL-1:256] := 0
+
+

VANDPD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0] BITWISE AND SRC2[63:0]
+DEST[127:64] := SRC1[127:64] BITWISE AND SRC2[127:64]
+DEST[MAXVL-1:128] := 0
+
+

ANDPD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := DEST[63:0] BITWISE AND SRC[63:0]
+DEST[127:64] := DEST[127:64] BITWISE AND SRC[127:64]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VANDPD __m512d _mm512_and_pd (__m512d a, __m512d b);
+
+
VANDPD __m512d _mm512_mask_and_pd (__m512d s, __mmask8 k, __m512d a, __m512d b);
+
+
VANDPD __m512d _mm512_maskz_and_pd (__mmask8 k, __m512d a, __m512d b);
+
+
VANDPD __m256d _mm256_mask_and_pd (__m256d s, __mmask8 k, __m256d a, __m256d b);
+
+
VANDPD __m256d _mm256_maskz_and_pd (__mmask8 k, __m256d a, __m256d b);
+
+
VANDPD __m128d _mm_mask_and_pd (__m128d s, __mmask8 k, __m128d a, __m128d b);
+
+
VANDPD __m128d _mm_maskz_and_pd (__mmask8 k, __m128d a, __m128d b);
+
+
VANDPD __m256d _mm256_and_pd (__m256d a, __m256d b);
+
+
ANDPD __m128d _mm_and_pd (__m128d a, __m128d b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

VEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/andps.html b/x86/andps.html new file mode 100644 index 0000000..a38c19b --- /dev/null +++ b/x86/andps.html @@ -0,0 +1,179 @@ + +ANDPS + — Bitwise Logical AND of Packed Single Precision Floating-Point Values

ANDPS + — Bitwise Logical AND of Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 54 /r ANDPS xmm1, xmm2/m128AV/VSSEReturn the bitwise logical AND of packed single precision floating-point values in xmm1 and xmm2/mem.
VEX.128.0F 54 /r VANDPS xmm1,xmm2, xmm3/m128BV/VAVXReturn the bitwise logical AND of packed single precision floating-point values in xmm2 and xmm3/mem.
VEX.256.0F 54 /r VANDPS ymm1, ymm2, ymm3/m256BV/VAVXReturn the bitwise logical AND of packed single precision floating-point values in ymm2 and ymm3/mem.
EVEX.128.0F.W0 54 /r VANDPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512DQReturn the bitwise logical AND of packed single precision floating-point values in xmm2 and xmm3/m128/m32bcst subject to writemask k1.
EVEX.256.0F.W0 54 /r VANDPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512DQReturn the bitwise logical AND of packed single precision floating-point values in ymm2 and ymm3/m256/m32bcst subject to writemask k1.
EVEX.512.0F.W0 54 /r VANDPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstCV/VAVX512DQReturn the bitwise logical AND of packed single precision floating-point values in zmm2 and zmm3/m512/m32bcst subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a bitwise logical AND of the four, eight or sixteen packed single precision floating-point values from the first source operand and the second source operand, and stores the result in the destination operand.

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the corresponding ZMM register destination are zeroed.

+

VEX.128 encoded version: The first source operand is an XMM register. The second source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The second source can be an XMM register or an 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified.

+

Operation + ¶ +

+

VANDPS (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+            IF (EVEX.b == 1) AND (SRC2 *is memory*)
+                THEN
+                    DEST[i+31:i] := SRC1[i+31:i] BITWISE AND SRC2[31:0]
+                ELSE
+                    DEST[i+31:i] := SRC1[i+31:i] BITWISE AND SRC2[i+31:i]
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI;
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0;
+
+

VANDPS (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] BITWISE AND SRC2[31:0]
+DEST[63:32] := SRC1[63:32] BITWISE AND SRC2[63:32]
+DEST[95:64] := SRC1[95:64] BITWISE AND SRC2[95:64]
+DEST[127:96] := SRC1[127:96] BITWISE AND SRC2[127:96]
+DEST[159:128] := SRC1[159:128] BITWISE AND SRC2[159:128]
+DEST[191:160] := SRC1[191:160] BITWISE AND SRC2[191:160]
+DEST[223:192] := SRC1[223:192] BITWISE AND SRC2[223:192]
+DEST[255:224] := SRC1[255:224] BITWISE AND SRC2[255:224].
+DEST[MAXVL-1:256] := 0;
+
+

VANDPS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] BITWISE AND SRC2[31:0]
+DEST[63:32] := SRC1[63:32] BITWISE AND SRC2[63:32]
+DEST[95:64] := SRC1[95:64] BITWISE AND SRC2[95:64]
+DEST[127:96] := SRC1[127:96] BITWISE AND SRC2[127:96]
+DEST[MAXVL-1:128] := 0;
+
+

ANDPS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := DEST[31:0] BITWISE AND SRC[31:0]
+DEST[63:32] := DEST[63:32] BITWISE AND SRC[63:32]
+DEST[95:64] := DEST[95:64] BITWISE AND SRC[95:64]
+DEST[127:96] := DEST[127:96] BITWISE AND SRC[127:96]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VANDPS __m512 _mm512_and_ps (__m512 a, __m512 b);
+
+
VANDPS __m512 _mm512_mask_and_ps (__m512 s, __mmask16 k, __m512 a, __m512 b);
+
+
VANDPS __m512 _mm512_maskz_and_ps (__mmask16 k, __m512 a, __m512 b);
+
+
VANDPS __m256 _mm256_mask_and_ps (__m256 s, __mmask8 k, __m256 a, __m256 b);
+
+
VANDPS __m256 _mm256_maskz_and_ps (__mmask8 k, __m256 a, __m256 b);
+
+
VANDPS __m128 _mm_mask_and_ps (__m128 s, __mmask8 k, __m128 a, __m128 b);
+
+
VANDPS __m128 _mm_maskz_and_ps (__mmask8 k, __m128 a, __m128 b);
+
+
VANDPS __m256 _mm256_and_ps (__m256 a, __m256 b);
+
+
ANDPS __m128 _mm_and_ps (__m128 a, __m128 b);
+
+
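A branchless per-lane select is a typical use of ANDPS together with ANDNPS and ORPS; a minimal sketch assuming the mask lanes are all-ones or all-zeros, as produced by comparison intrinsics such as _mm_cmplt_ps.

#include <xmmintrin.h>

/* Where a mask lane is all-ones take a, otherwise take b:
 * result = (mask AND a) OR ((NOT mask) AND b). */
static __m128 select_ps(__m128 mask, __m128 a, __m128 b)
{
    return _mm_or_ps(_mm_and_ps(mask, a), _mm_andnot_ps(mask, b));
}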

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

VEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/arpl.html b/x86/arpl.html new file mode 100644 index 0000000..3b296de --- /dev/null +++ b/x86/arpl.html @@ -0,0 +1,116 @@ + +ARPL + — Adjust RPL Field of Segment Selector

ARPL + — Adjust RPL Field of Segment Selector

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-bit ModeCompat/Leg ModeDescription
63 /rARPL r/m16, r16MRN. E.ValidAdjust RPL of r/m16 to not less than RPL of r16.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MRModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Compares the RPL fields of two segment selectors. The first operand (the destination operand) contains one segment selector and the second operand (source operand) contains the other. (The RPL field is located in bits 0 and 1 of each operand.) If the RPL field of the destination operand is less than the RPL field of the source operand, the ZF flag is set and the RPL field of the destination operand is increased to match that of the source operand. Otherwise, the ZF flag is cleared and no change is made to the destination operand. (The destination operand can be a word register or a memory location; the source operand must be a word register.)

+

The ARPL instruction is provided for use by operating-system procedures (however, it can also be used by applications). It is generally used to adjust the RPL of a segment selector that has been passed to the operating system by an application program to match the privilege level of the application program. Here the segment selector passed to the operating system is placed in the destination operand and segment selector for the application program’s code segment is placed in the source operand. (The RPL field in the source operand represents the privilege level of the application program.) Execution of the ARPL instruction then ensures that the RPL of the segment selector received by the operating system is no lower (does not have a higher privilege) than the privilege level of the application program (the segment selector for the application program’s code segment can be read from the stack following a procedure call).

+

This instruction executes as described in compatibility mode and legacy mode. It is not encodable in 64-bit mode.

+

See “Checking Caller Access Privileges” in Chapter 3, “Protected-Mode Memory Management,” of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A, for more information about the use of this instruction.

+

Operation + ¶ +

+
IF 64-BIT MODE
+    THEN
+        See MOVSXD;
+    ELSE
+        IF DEST[RPL] < SRC[RPL]
+            THEN
+                ZF := 1;
+                DEST[RPL] := SRC[RPL];
+            ELSE
+                ZF := 0;
+        FI;
+FI;
+
+

Flags Affected + ¶ +

+

The ZF flag is set to 1 if the RPL field of the destination operand is less than that of the source operand; otherwise, it is set to 0.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + +
#UDThe ARPL instruction is not recognized in real-address mode.
If the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + +
#UDThe ARPL instruction is not recognized in virtual-8086 mode.
If the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Not applicable.

diff --git a/x86/bextr.html b/x86/bextr.html new file mode 100644 index 0000000..580d973 --- /dev/null +++ b/x86/bextr.html @@ -0,0 +1,81 @@ + +BEXTR + — Bit Field Extract

BEXTR + — Bit Field Extract

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
VEX.LZ.0F38.W0 F7 /r BEXTR r32a, r/m32, r32bRMVV/VBMI1Contiguous bitwise extract from r/m32 using r32b as control; store result in r32a.
VEX.LZ.0F38.W1 F7 /r BEXTR r64a, r/m64, r64bRMVV/N.E.BMI1Contiguous bitwise extract from r/m64 using r64b as control; store result in r64a.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMVModRM:reg (w)ModRM:r/m (r)VEX.vvvv (r)N/A
+

Description + ¶ +

+

Extracts contiguous bits from the first source operand (the second operand) using an index value and length value specified in the second source operand (the third operand). Bits 7:0 of the second source operand specify the starting bit position of bit extraction. A START value exceeding the operand size will not extract any bits from the second source operand. Bits 15:8 of the second source operand specify the maximum number of bits (LENGTH) beginning at the START position to extract. Only bit positions up to (OperandSize - 1) of the first source operand are extracted. The extracted bits are written to the destination register, starting from the least significant bit. All higher order bits in the destination operand (starting at bit position LENGTH) are zeroed. The destination register is cleared if no bits are extracted.

+

This instruction is not supported in real mode and virtual-8086 mode. The operand size is always 32 bits if not in 64-bit mode. In 64-bit mode operand size 64 requires VEX.W1. VEX.W1 is ignored in non-64-bit modes. An attempt to execute this instruction with VEX.L not equal to 0 will cause #UD.

+

Operation + ¶ +

+
START := SRC2[7:0];
+LEN := SRC2[15:8];
+TEMP := ZERO_EXTEND_TO_512 (SRC1 );
+DEST := ZERO_EXTEND(TEMP[START+LEN -1: START]);
+ZF := (DEST = 0);
+
+

Flags Affected + ¶ +

+

ZF is updated based on the result. AF, SF, and PF are undefined. All other flags are cleared.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
BEXTR unsigned __int32 _bextr_u32(unsigned __int32 src, unsigned __int32 start, unsigned __int32 len);
+
+
BEXTR unsigned __int64 _bextr_u64(unsigned __int64 src, unsigned __int32 start, unsigned __int32 len);
+
+
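A small usage sketch with the _bextr_u32 intrinsic above (BMI1 assumed enabled); the field positions are arbitrary example values.

#include <immintrin.h>
#include <stdint.h>

/* Extract bits 11:4 of x: START = 4, LEN = 8, so the result is an 8-bit field
 * right-aligned in the destination with all higher bits zeroed. */
static uint32_t field_11_4(uint32_t x)
{
    return _bextr_u32(x, 4, 8);
}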

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-29, “Type 13 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.W = 1.
diff --git a/x86/blendpd.html b/x86/blendpd.html new file mode 100644 index 0000000..0746b85 --- /dev/null +++ b/x86/blendpd.html @@ -0,0 +1,111 @@ + +BLENDPD + — Blend Packed Double Precision Floating-Point Values

BLENDPD + — Blend Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
66 0F 3A 0D /r ib BLENDPD xmm1, xmm2/m128, imm8RMIV/VSSE4_1Select packed double precision floating-point values from xmm1 and xmm2/m128 from mask specified in imm8 and store the values into xmm1.
VEX.128.66.0F3A.WIG 0D /r ib VBLENDPD xmm1, xmm2, xmm3/m128, imm8RVMIV/VAVXSelect packed double precision floating-point Values from xmm2 and xmm3/m128 from mask in imm8 and store the values in xmm1.
VEX.256.66.0F3A.WIG 0D /r ib VBLENDPD ymm1, ymm2, ymm3/m256, imm8RVMIV/VAVXSelect packed double precision floating-point Values from ymm2 and ymm3/m256 from mask in imm8 and store the values in ymm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMIModRM:reg (r, w)ModRM:r/m (r)imm8N/A
RVMIModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8[3:0]
+

Description + ¶ +

+

Double-precision floating-point values from the second source operand (third operand) are conditionally merged with values from the first source operand (second operand) and written to the destination operand (first operand). The immediate bits [3:0] determine whether the corresponding double precision floating-point value in the destination is copied from the second source or first source. If a bit in the mask, corresponding to a double precision element, is “1”, then the double precision floating-point value in the second source operand is copied, else the value in the first source operand is copied.

+

128-bit Legacy SSE version: The second source can be an XMM register or an 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding YMM register destination are unmodified.

+

VEX.128 encoded version: the first source operand is an XMM register. The second source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding YMM register destination are zeroed.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+

Operation + ¶ +

+

BLENDPD (128-bit Legacy SSE Version) + ¶ +

+
IF (IMM8[0] = 0)THEN DEST[63:0] := DEST[63:0]
+    ELSE DEST [63:0] := SRC[63:0] FI
+IF (IMM8[1] = 0) THEN DEST[127:64] := DEST[127:64]
+    ELSE DEST [127:64] := SRC[127:64] FI
+DEST[MAXVL-1:128] (Unmodified)
+
+

VBLENDPD (VEX.128 Encoded Version) + ¶ +

+
IF (IMM8[0] = 0)THEN DEST[63:0] := SRC1[63:0]
+    ELSE DEST [63:0] := SRC2[63:0] FI
+IF (IMM8[1] = 0) THEN DEST[127:64] := SRC1[127:64]
+    ELSE DEST [127:64] := SRC2[127:64] FI
+DEST[MAXVL-1:128] := 0
+
+

VBLENDPD (VEX.256 Encoded Version) + ¶ +

+
IF (IMM8[0] = 0)THEN DEST[63:0] := SRC1[63:0]
+    ELSE DEST [63:0] := SRC2[63:0] FI
+IF (IMM8[1] = 0) THEN DEST[127:64] := SRC1[127:64]
+    ELSE DEST [127:64] := SRC2[127:64] FI
+IF (IMM8[2] = 0) THEN DEST[191:128] := SRC1[191:128]
+    ELSE DEST [191:128] := SRC2[191:128] FI
+IF (IMM8[3] = 0) THEN DEST[255:192] := SRC1[255:192]
+    ELSE DEST [255:192] := SRC2[255:192] FI
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
BLENDPD __m128d _mm_blend_pd (__m128d v1, __m128d v2, const int mask);
+
+
VBLENDPD __m256d _mm256_blend_pd (__m256d a, __m256d b, const int mask);
+
+
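A tiny sketch of how the imm8 bits pick lanes, using the listed _mm_blend_pd intrinsic (the helper name is mine).

#include <smmintrin.h>

/* imm8 bit i selects lane i from the second source: 0x1 takes lane 0 from b
 * and lane 1 from a, i.e., the result is { b[0], a[1] }. */
static __m128d take_low_from_b(__m128d a, __m128d b)
{
    return _mm_blend_pd(a, b, 0x1);
}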

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions.”

diff --git a/x86/blendps.html b/x86/blendps.html new file mode 100644 index 0000000..86ccdd7 --- /dev/null +++ b/x86/blendps.html @@ -0,0 +1,127 @@ + +BLENDPS + — Blend Packed Single Precision Floating-Point Values

BLENDPS + — Blend Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
66 0F 3A 0C /r ib BLENDPS xmm1, xmm2/m128, imm8RMIV/VSSE4_1Select packed single precision floating-point values from xmm1 and xmm2/m128 from mask specified in imm8 and store the values into xmm1.
VEX.128.66.0F3A.WIG 0C /r ib VBLENDPS xmm1, xmm2, xmm3/m128, imm8RVMIV/VAVXSelect packed single precision floating-point values from xmm2 and xmm3/m128 from mask in imm8 and store the values in xmm1.
VEX.256.66.0F3A.WIG 0C /r ib VBLENDPS ymm1, ymm2, ymm3/m256, imm8RVMIV/VAVXSelect packed single precision floating-point values from ymm2 and ymm3/m256 from mask in imm8 and store the values in ymm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMIModRM:reg (r, w)ModRM:r/m (r)imm8N/A
RVMIModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

Packed single precision floating-point values from the second source operand (third operand) are conditionally merged with values from the first source operand (second operand) and written to the destination operand (first operand). The immediate bits [7:0] determine whether the corresponding single precision floating-point value in the destination is copied from the second source or first source. If a bit in the mask, corresponding to a single precision element, is “1”, then the single precision floating-point value in the second source operand is copied, else the value in the first source operand is copied.

+

128-bit Legacy SSE version: The second source can be an XMM register or an 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding YMM register destination are unmodified.

+

VEX.128 encoded version: The first source operand is an XMM register. The second source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding YMM register destination are zeroed.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+

Operation + ¶ +

+

BLENDPS (128-bit Legacy SSE Version) + ¶ +

+
IF (IMM8[0] = 0) THEN DEST[31:0] :=DEST[31:0]
+    ELSE DEST [31:0] := SRC[31:0] FI
+IF (IMM8[1] = 0) THEN DEST[63:32] := DEST[63:32]
+    ELSE DEST [63:32] := SRC[63:32] FI
+IF (IMM8[2] = 0) THEN DEST[95:64] := DEST[95:64]
+    ELSE DEST [95:64] := SRC[95:64] FI
+IF (IMM8[3] = 0) THEN DEST[127:96] := DEST[127:96]
+    ELSE DEST [127:96] := SRC[127:96] FI
+DEST[MAXVL-1:128] (Unmodified)
+
+

VBLENDPS (VEX.128 Encoded Version) + ¶ +

+
IF (IMM8[0] = 0) THEN DEST[31:0] :=SRC1[31:0]
+    ELSE DEST [31:0] := SRC2[31:0] FI
+IF (IMM8[1] = 0) THEN DEST[63:32] := SRC1[63:32]
+    ELSE DEST [63:32] := SRC2[63:32] FI
+IF (IMM8[2] = 0) THEN DEST[95:64] := SRC1[95:64]
+    ELSE DEST [95:64] := SRC2[95:64] FI
+IF (IMM8[3] = 0) THEN DEST[127:96] := SRC1[127:96]
+    ELSE DEST [127:96] := SRC2[127:96] FI
+DEST[MAXVL-1:128] := 0
+
+

VBLENDPS (VEX.256 Encoded Version) + ¶ +

+
IF (IMM8[0] = 0) THEN DEST[31:0] :=SRC1[31:0]
+    ELSE DEST [31:0] := SRC2[31:0] FI
+IF (IMM8[1] = 0) THEN DEST[63:32] := SRC1[63:32]
+    ELSE DEST [63:32] := SRC2[63:32] FI
+IF (IMM8[2] = 0) THEN DEST[95:64] := SRC1[95:64]
+    ELSE DEST [95:64] := SRC2[95:64] FI
+IF (IMM8[3] = 0) THEN DEST[127:96] := SRC1[127:96]
+    ELSE DEST [127:96] := SRC2[127:96] FI
+IF (IMM8[4] = 0) THEN DEST[159:128] := SRC1[159:128]
+    ELSE DEST [159:128] := SRC2[159:128] FI
+IF (IMM8[5] = 0) THEN DEST[191:160] := SRC1[191:160]
+    ELSE DEST [191:160] := SRC2[191:160] FI
+IF (IMM8[6] = 0) THEN DEST[223:192] := SRC1[223:192]
+    ELSE DEST [223:192] := SRC2[223:192] FI
+IF (IMM8[7] = 0) THEN DEST[255:224] := SRC1[255:224]
+    ELSE DEST [255:224] := SRC2[255:224] FI.
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
BLENDPS __m128 _mm_blend_ps (__m128 v1, __m128 v2, const int mask);
+
+
VBLENDPS __m256 _mm256_blend_ps (__m256 a, __m256 b, const int mask);
+
+
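A 256-bit sketch with the listed _mm256_blend_ps intrinsic (AVX assumed): an alternating mask interleaves lanes from the two sources.

#include <immintrin.h>

/* 0xAA = 0b10101010 takes the odd lanes (1, 3, 5, 7) from b and the even lanes
 * from a, producing { a0, b1, a2, b3, a4, b5, a6, b7 }. */
static __m256 interleave_odd_from_b(__m256 a, __m256 b)
{
    return _mm256_blend_ps(a, b, 0xAA);
}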

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions.”

diff --git a/x86/blendvpd.html b/x86/blendvpd.html new file mode 100644 index 0000000..2b4d474 --- /dev/null +++ b/x86/blendvpd.html @@ -0,0 +1,126 @@ + +BLENDVPD + — Variable Blend Packed Double Precision Floating-Point Values

BLENDVPD + — Variable Blend Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
66 0F 38 15 /r BLENDVPD xmm1, xmm2/m128 , <XMM0>RM0V/VSSE4_1Select packed double precision floating-point values from xmm1 and xmm2 from mask specified in XMM0 and store the values in xmm1.
VEX.128.66.0F3A.W0 4B /r /is4 VBLENDVPD xmm1, xmm2, xmm3/m128, xmm4RVMRV/VAVXConditionally copy double precision floating-point values from xmm2 or xmm3/m128 to xmm1, based on mask bits in the mask operand, xmm4.
VEX.256.66.0F3A.W0 4B /r /is4 VBLENDVPD ymm1, ymm2, ymm3/m256, ymm4RVMRV/VAVXConditionally copy double precision floating-point values from ymm2 or ymm3/m256 to ymm1, based on mask bits in the mask operand, ymm4.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RM0ModRM:reg (r, w)ModRM:r/m (r)implicit XMM0N/A
RVMRModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8[7:4]
+

Description + ¶ +

+

Conditionally copy each quadword data element of double precision floating-point value from the second source operand and the first source operand depending on mask bits defined in the mask register operand. The mask bits are the most significant bit in each quadword element of the mask register.

+

Each quadword element of the destination operand is copied from:

+
    +
  • the corresponding quadword element in the second source operand, if a mask bit is “1”; or
  • the corresponding quadword element in the first source operand, if a mask bit is “0”.
+

The register assignment of the implicit mask operand for BLENDVPD is defined to be the architectural register XMM0.

+

128-bit Legacy SSE version: The first source operand and the destination operand is the same. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged. The mask register operand is implicitly defined to be the architectural register XMM0. An attempt to execute BLENDVPD with a VEX prefix will cause #UD.

+

VEX.128 encoded version: The first source operand and the destination operand are XMM registers. The second source operand is an XMM register or 128-bit memory location. The mask operand is the third source register, and encoded in bits[7:4] of the immediate byte(imm8). The bits[3:0] of imm8 are ignored. In 32-bit mode, imm8[7] is ignored. The upper bits (MAXVL-1:128) of the corresponding YMM register (destination register) are zeroed. VEX.W must be 0, otherwise, the instruction will #UD.

+

VEX.256 encoded version: The first source operand and destination operand are YMM registers. The second source operand can be a YMM register or a 256-bit memory location. The mask operand is the third source register, and encoded in bits[7:4] of the immediate byte(imm8). The bits[3:0] of imm8 are ignored. In 32-bit mode, imm8[7] is ignored. VEX.W must be 0, otherwise, the instruction will #UD.

+

VBLENDVPD permits the mask to be any XMM or YMM register. In contrast, BLENDVPD treats XMM0 implicitly as the mask and does not support non-destructive destination operation.

+

Operation + ¶ +

+

BLENDVPD (128-bit Legacy SSE Version) + ¶ +

+
MASK := XMM0
+IF (MASK[63] = 0) THEN DEST[63:0] := DEST[63:0]
+    ELSE DEST [63:0] := SRC[63:0] FI
+IF (MASK[127] = 0) THEN DEST[127:64] := DEST[127:64]
+    ELSE DEST [127:64] := SRC[127:64] FI
+DEST[MAXVL-1:128] (Unmodified)
+
+

VBLENDVPD (VEX.128 Encoded Version) + ¶ +

+
MASK := SRC3
+IF (MASK[63] = 0) THEN DEST[63:0] := SRC1[63:0]
+    ELSE DEST [63:0] := SRC2[63:0] FI
+IF (MASK[127] = 0) THEN DEST[127:64] := SRC1[127:64]
+    ELSE DEST [127:64] := SRC2[127:64] FI
+DEST[MAXVL-1:128] := 0
+
+

VBLENDVPD (VEX.256 Encoded Version) + ¶ +

+
MASK := SRC3
+IF (MASK[63] = 0) THEN DEST[63:0] := SRC1[63:0]
+    ELSE DEST [63:0] := SRC2[63:0] FI
+IF (MASK[127] = 0) THEN DEST[127:64] := SRC1[127:64]
+    ELSE DEST [127:64] := SRC2[127:64] FI
+IF (MASK[191] = 0) THEN DEST[191:128] := SRC1[191:128]
+    ELSE DEST [191:128] := SRC2[191:128] FI
+IF (MASK[255] = 0) THEN DEST[255:192] := SRC1[255:192]
+    ELSE DEST [255:192] := SRC2[255:192] FI
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
BLENDVPD __m128d _mm_blendv_pd(__m128d v1, __m128d v2, __m128d v3);
+
+
VBLENDVPD __m128d _mm_blendv_pd (__m128d a, __m128d b, __m128d mask);
+
+
VBLENDVPD __m256d _mm256_blendv_pd (__m256d a, __m256d b, __m256d mask);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.W = 1.
diff --git a/x86/blendvps.html b/x86/blendvps.html new file mode 100644 index 0000000..3ace23d --- /dev/null +++ b/x86/blendvps.html @@ -0,0 +1,142 @@ + +BLENDVPS + — Variable Blend Packed Single Precision Floating-Point Values

BLENDVPS + — Variable Blend Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
66 0F 38 14 /r BLENDVPS xmm1, xmm2/m128, <XMM0>RM0V/VSSE4_1Select packed single precision floating-point values from xmm1 and xmm2/m128 from mask specified in XMM0 and store the values into xmm1.
VEX.128.66.0F3A.W0 4A /r /is4 VBLENDVPS xmm1, xmm2, xmm3/m128, xmm4RVMRV/VAVXConditionally copy single precision floating-point values from xmm2 or xmm3/m128 to xmm1, based on mask bits in the specified mask operand, xmm4.
VEX.256.66.0F3A.W0 4A /r /is4 VBLENDVPS ymm1, ymm2, ymm3/m256, ymm4RVMRV/VAVXConditionally copy single precision floating-point values from ymm2 or ymm3/m256 to ymm1, based on mask bits in the specified mask register, ymm4.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RM0ModRM:reg (r, w)ModRM:r/m (r)implicit XMM0N/A
RVMRModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8[7:4]
+

Description + ¶ +

+

Conditionally copy each dword data element of single precision floating-point value from the second source operand and the first source operand depending on mask bits defined in the mask register operand. The mask bits are the most significant bit in each dword element of the mask register.

+

Each dword element of the destination operand is copied from:

+
    +
  • the corresponding dword element in the second source operand, if a mask bit is “1”; or
  • the corresponding dword element in the first source operand, if a mask bit is “0”.
+

The register assignment of the implicit mask operand for BLENDVPS is defined to be the architectural register XMM0.

+

128-bit Legacy SSE version: The first source operand and the destination operand is the same. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged. The mask register operand is implicitly defined to be the architectural register XMM0. An attempt to execute BLENDVPS with a VEX prefix will cause #UD.

+

VEX.128 encoded version: The first source operand and the destination operand are XMM registers. The second source operand is an XMM register or 128-bit memory location. The mask operand is the third source register, and encoded in bits[7:4] of the immediate byte(imm8). The bits[3:0] of imm8 are ignored. In 32-bit mode, imm8[7] is ignored. The upper bits (MAXVL-1:128) of the corresponding YMM register (destination register) are zeroed. VEX.W must be 0, otherwise, the instruction will #UD.

+

VEX.256 encoded version: The first source operand and destination operand are YMM registers. The second source operand can be a YMM register or a 256-bit memory location. The mask operand is the third source register, and encoded in bits[7:4] of the immediate byte(imm8). The bits[3:0] of imm8 are ignored. In 32-bit mode, imm8[7] is ignored. VEX.W must be 0, otherwise, the instruction will #UD.

+

VBLENDVPS permits the mask to be any XMM or YMM register. In contrast, BLENDVPS treats XMM0 implicitly as the mask and does not support non-destructive destination operation.

+

Operation + ¶ +

+

BLENDVPS (128-bit Legacy SSE Version) + ¶ +

+
MASK := XMM0
+IF (MASK[31] = 0) THEN DEST[31:0] := DEST[31:0]
+    ELSE DEST [31:0] := SRC[31:0] FI
+IF (MASK[63] = 0) THEN DEST[63:32] := DEST[63:32]
+    ELSE DEST [63:32] := SRC[63:32] FI
+IF (MASK[95] = 0) THEN DEST[95:64] := DEST[95:64]
+    ELSE DEST [95:64] := SRC[95:64] FI
+IF (MASK[127] = 0) THEN DEST[127:96] := DEST[127:96]
+    ELSE DEST [127:96] := SRC[127:96] FI
+DEST[MAXVL-1:128] (Unmodified)
+
+

VBLENDVPS (VEX.128 Encoded Version) + ¶ +

+
MASK := SRC3
+IF (MASK[31] = 0) THEN DEST[31:0] := SRC1[31:0]
+    ELSE DEST [31:0] := SRC2[31:0] FI
+IF (MASK[63] = 0) THEN DEST[63:32] := SRC1[63:32]
+    ELSE DEST [63:32] := SRC2[63:32] FI
+IF (MASK[95] = 0) THEN DEST[95:64] := SRC1[95:64]
+    ELSE DEST [95:64] := SRC2[95:64] FI
+IF (MASK[127] = 0) THEN DEST[127:96] := SRC1[127:96]
+    ELSE DEST [127:96] := SRC2[127:96] FI
+DEST[MAXVL-1:128] := 0
+
+

VBLENDVPS (VEX.256 Encoded Version) + ¶ +

+
MASK := SRC3
+IF (MASK[31] = 0) THEN DEST[31:0] := SRC1[31:0]
+    ELSE DEST [31:0] := SRC2[31:0] FI
+IF (MASK[63] = 0) THEN DEST[63:32] := SRC1[63:32]
+    ELSE DEST [63:32] := SRC2[63:32] FI
+IF (MASK[95] = 0) THEN DEST[95:64] := SRC1[95:64]
+    ELSE DEST [95:64] := SRC2[95:64] FI
+IF (MASK[127] = 0) THEN DEST[127:96] := SRC1[127:96]
+    ELSE DEST [127:96] := SRC2[127:96] FI
+IF (MASK[159] = 0) THEN DEST[159:128] := SRC1[159:128]
+    ELSE DEST [159:128] := SRC2[159:128] FI
+IF (MASK[191] = 0) THEN DEST[191:160] := SRC1[191:160]
+    ELSE DEST [191:160] := SRC2[191:160] FI
+IF (MASK[223] = 0) THEN DEST[223:192] := SRC1[223:192]
+    ELSE DEST [223:192] := SRC2[223:192] FI
+IF (MASK[255] = 0) THEN DEST[255:224] := SRC1[255:224]
+    ELSE DEST [255:224] := SRC2[255:224] FI
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
BLENDVPS __m128 _mm_blendv_ps(__m128 v1, __m128 v2, __m128 v3);
+
+
VBLENDVPS __m128 _mm_blendv_ps (__m128 a, __m128 b, __m128 mask);
+
+
VBLENDVPS __m256 _mm256_blendv_ps (__m256 a, __m256 b, __m256 mask);
+
+
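A short hedged example for the 256-bit form, assuming AVX support (e.g. gcc -mavx) and <immintrin.h>; values are illustrative. Because VBLENDVPS consults only the sign bit of each mask dword, passing the input itself as the mask replaces exactly the negative lanes with zero:

#include <stdio.h>
#include <immintrin.h>

int main(void) {
    __m256 x    = _mm256_set_ps(8, -7, 6, -5, 4, -3, 2, -1);
    __m256 zero = _mm256_setzero_ps();
    /* Lanes whose sign bit is set (the negatives) take the value from 'zero'. */
    __m256 y = _mm256_blendv_ps(x, zero, x);
    float out[8];
    _mm256_storeu_ps(out, y);
    for (int i = 0; i < 8; i++)
        printf("%g ", out[i]);              /* expected: 0 2 0 4 0 6 0 8 */
    printf("\n");
    return 0;
}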

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.W = 1.
diff --git a/x86/blsi.html b/x86/blsi.html new file mode 100644 index 0000000..db9e1a0 --- /dev/null +++ b/x86/blsi.html @@ -0,0 +1,81 @@ + +BLSI + — Extract Lowest Set Isolated Bit

BLSI + — Extract Lowest Set Isolated Bit

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
VEX.LZ.0F38.W0 F3 /3 BLSI r32, r/m32VMV/VBMI1Extract lowest set bit from r/m32 and set that bit in r32.
VEX.LZ.0F38.W1 F3 /3 BLSI r64, r/m64VMV/N.E.BMI1Extract lowest set bit from r/m64, and set that bit in r64.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
VMVEX.vvvv (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Extracts the lowest set bit from the source operand and sets the corresponding bit in the destination register. All other bits in the destination operand are zeroed. If no bits are set in the source operand, BLSI sets all the bits in the destination to 0, sets ZF, and clears CF.

+

This instruction is not supported in real mode and virtual-8086 mode. The operand size is always 32 bits if not in 64-bit mode. In 64-bit mode operand size 64 requires VEX.W1. VEX.W1 is ignored in non-64-bit modes. An attempt to execute this instruction with VEX.L not equal to 0 will cause #UD.

+

Operation + ¶ +

+
temp := (-SRC) bitwiseAND (SRC);
+SF := temp[OperandSize -1];
+ZF := (temp = 0);
+IF SRC = 0
+    CF := 0;
+ELSE
+    CF := 1;
+FI
+DEST := temp;
+
+

Flags Affected + ¶ +

+

The ZF and SF flags are updated based on the result. CF is set if the source is not zero. The OF flag is cleared. The AF and PF flags are undefined.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
BLSI unsigned __int32 _blsi_u32(unsigned __int32 src);
+
+
BLSI unsigned __int64 _blsi_u64(unsigned __int64 src);
+
+
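A brief sketch, assuming a BMI1-capable CPU and a compiler switch such as gcc/clang -mbmi (the intrinsic is declared in <immintrin.h>); it checks the intrinsic against the plain-C identity (-x) & x from the Operation section:

#include <stdio.h>
#include <immintrin.h>

int main(void) {
    unsigned x = 0x5A00u;                   /* lowest set bit is bit 9 (0x200) */
    unsigned a = _blsi_u32(x);              /* compiles to BLSI r32, r/m32 */
    unsigned b = (0u - x) & x;              /* temp := (-SRC) bitwiseAND (SRC) */
    printf("%#x %#x\n", a, b);              /* expected: 0x200 0x200 */
    return 0;
}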

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-29, “Type 13 Class Exception Conditions.”

diff --git a/x86/blsmsk.html b/x86/blsmsk.html new file mode 100644 index 0000000..2c66ea5 --- /dev/null +++ b/x86/blsmsk.html @@ -0,0 +1,81 @@ + +BLSMSK + — Get Mask Up to Lowest Set Bit

BLSMSK + — Get Mask Up to Lowest Set Bit

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
VEX.LZ.0F38.W0 F3 /2 BLSMSK r32, r/m32VMV/VBMI1Set all lower bits in r32 to “1” starting from bit 0 to lowest set bit in r/m32.
VEX.LZ.0F38.W1 F3 /2 BLSMSK r64, r/m64VMV/N.E.BMI1Set all lower bits in r64 to “1” starting from bit 0 to lowest set bit in r/m64.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
VMVEX.vvvv (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Sets all the lower bits of the destination operand to “1”, up to and including the lowest set bit (=1) in the source operand. If the source operand is zero, BLSMSK sets all bits of the destination operand to 1 and also sets CF to 1.

+

This instruction is not supported in real mode and virtual-8086 mode. The operand size is always 32 bits if not in 64-bit mode. In 64-bit mode operand size 64 requires VEX.W1. VEX.W1 is ignored in non-64-bit modes. An attempt to execute this instruction with VEX.L not equal to 0 will cause #UD.

+

Operation + ¶ +

+
temp := (SRC-1) XOR (SRC) ;
+SF := temp[OperandSize -1];
+ZF := 0;
+IF SRC = 0
+    CF := 1;
+ELSE
+    CF := 0;
+FI
+DEST := temp;
+
+

Flags Affected + ¶ +

+

SF is updated based on the result. CF is set if the source is zero. The ZF and OF flags are cleared. The AF and PF flags are undefined.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
BLSMSK unsigned __int32 _blsmsk_u32(unsigned __int32 src);
+
+
BLSMSK unsigned __int64 _blsmsk_u64(unsigned __int64 src);
+
+
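A brief sketch under the same assumptions as the other BMI1 examples (BMI1 hardware, gcc/clang -mbmi, <immintrin.h>); the plain-C equivalent of the Operation section is x ^ (x - 1):

#include <stdio.h>
#include <immintrin.h>

int main(void) {
    unsigned x = 0x28u;                     /* lowest set bit is bit 3 */
    printf("%#x\n", _blsmsk_u32(x));        /* expected: 0xf (bits 3..0 set) */
    printf("%#x\n", x ^ (x - 1u));          /* same mask computed in plain C */
    return 0;
}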

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-29, “Type 13 Class Exception Conditions.”

diff --git a/x86/blsr.html b/x86/blsr.html new file mode 100644 index 0000000..b954bce --- /dev/null +++ b/x86/blsr.html @@ -0,0 +1,81 @@ + +BLSR + — Reset Lowest Set Bit

BLSR + — Reset Lowest Set Bit

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
VEX.LZ.0F38.W0 F3 /1 BLSR r32, r/m32VMV/VBMI1Reset lowest set bit of r/m32, keep all other bits of r/m32 and write result to r32.
VEX.LZ.0F38.W1 F3 /1 BLSR r64, r/m64VMV/N.E.BMI1Reset lowest set bit of r/m64, keep all other bits of r/m64 and write result to r64.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
VMVEX.vvvv (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Copies all bits from the source operand to the destination operand and resets (=0) the bit position in the destination operand that corresponds to the lowest set bit of the source operand. If the source operand is zero, BLSR sets CF.

+

This instruction is not supported in real mode and virtual-8086 mode. The operand size is always 32 bits if not in 64-bit mode. In 64-bit mode operand size 64 requires VEX.W1. VEX.W1 is ignored in non-64-bit modes. An attempt to execute this instruction with VEX.L not equal to 0 will cause #UD.

+

Operation + ¶ +

+
temp := (SRC-1) bitwiseAND ( SRC );
+SF := temp[OperandSize -1];
+ZF := (temp = 0);
+IF SRC = 0
+    CF := 1;
+ELSE
+    CF := 0;
+FI
+DEST := temp;
+
+

Flags Affected + ¶ +

+

ZF and SF flags are updated based on the result. CF is set if the source is zero. OF flag is cleared. AF and PF flags are undefined.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
BLSR unsigned __int32 _blsr_u32(unsigned __int32 src);
+
+
BLSR unsigned __int64 _blsr_u64(unsigned __int64 src);
+
+
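A brief sketch under the same assumptions (BMI1 hardware, gcc/clang -mbmi, <immintrin.h>); clearing the lowest set bit per iteration gives the classic set-bit counting loop, and _blsr_u32(x) is the plain-C x & (x - 1):

#include <stdio.h>
#include <immintrin.h>

static int count_set_bits(unsigned x) {
    int n = 0;
    while (x != 0) {
        x = _blsr_u32(x);                   /* same as x & (x - 1) */
        n++;
    }
    return n;
}

int main(void) {
    printf("%d\n", count_set_bits(0xF0F0u));    /* expected: 8 */
    return 0;
}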

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-29, “Type 13 Class Exception Conditions.”

diff --git a/x86/bndcl.html b/x86/bndcl.html new file mode 100644 index 0000000..4dd8641 --- /dev/null +++ b/x86/bndcl.html @@ -0,0 +1,132 @@ + +BNDCL + — Check Lower Bound

BNDCL + — Check Lower Bound

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 1A /r BNDCL bnd, r/m32RMN.E./VMPXGenerate a #BR if the address in r/m32 is lower than the lower bound in bnd.LB.
F3 0F 1A /r BNDCL bnd, r/m64RMV/N.E.MPXGenerate a #BR if the address in r/m64 is lower than the lower bound in bnd.LB.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3
RMModRM:reg (w)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Compares the address in the second operand with the lower bound in bnd. The second operand can be either a register or a memory operand. If the address is lower than the lower bound in bnd.LB, it will set BNDSTATUS to 01H and signal a #BR exception.

+

This instruction does not cause any memory access, and does not read or write any flags.

+

Operation + ¶ +

+

BNDCL BND, reg + ¶ +

+
IF reg < BND.LB Then
+    BNDSTATUS := 01H;
+    #BR;
+FI;
+
+

BNDCL BND, mem + ¶ +

+
TEMP := LEA(mem);
+IF TEMP < BND.LB Then
+    BNDSTATUS := 01H;
+    #BR;
+FI;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
BNDCL void _bnd_chk_ptr_lbounds(const void *q)
+
+

Flags Affected + ¶ +

+

None

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#BRIf lower bound check fails.
#UDIf the LOCK prefix is used.
If ModRM.r/m encodes BND4-BND7 when Intel MPX is enabled.
If 67H prefix is not used and CS.D=0.
If 67H prefix is used and CS.D=1.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + +
#BRIf lower bound check fails.
#UDIf the LOCK prefix is used.
If ModRM.r/m encodes BND4-BND7 when Intel MPX is enabled.
If 16-bit addressing is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + +
#BRIf lower bound check fails.
#UDIf the LOCK prefix is used.
If ModRM.r/m encodes BND4-BND7 when Intel MPX is enabled.
If 16-bit addressing is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + +
#UDIf ModRM.r/m and REX encodes BND4-BND15 when Intel MPX is enabled.
+

Same exceptions as in protected mode.

diff --git a/x86/bndcu.bndcn.html b/x86/bndcu.bndcn.html new file mode 100644 index 0000000..d47b960 --- /dev/null +++ b/x86/bndcu.bndcn.html @@ -0,0 +1,164 @@ + +BNDCU/BNDCN + — Check Upper Bound

BNDCU/BNDCN + — Check Upper Bound

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
F2 0F 1A /r BNDCU bnd, r/m32RMN.E./VMPXGenerate a #BR if the address in r/m32 is higher than the upper bound in bnd.UB (bnb.UB in 1's complement form).
F2 0F 1A /r BNDCU bnd, r/m64RMV/N.E.MPXGenerate a #BR if the address in r/m64 is higher than the upper bound in bnd.UB (bnb.UB in 1's complement form).
F2 0F 1B /r BNDCN bnd, r/m32RMN.E./VMPXGenerate a #BR if the address in r/m32 is higher than the upper bound in bnd.UB (bnb.UB not in 1's complement form).
F2 0F 1B /r BNDCN bnd, r/m64RMV/N.E.MPXGenerate a #BR if the address in r/m64 is higher than the upper bound in bnd.UB (bnb.UB not in 1's complement form).
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3
RMModRM:reg (w)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Compares the address in the second operand with the upper bound in bnd. The second operand can be either a register or a memory operand. If the address is higher than the upper bound in bnd.UB, it will set BNDSTATUS to 01H and signal a #BR exception.

+

BNDCU performs a 1’s complement operation on the upper bound of bnd first before proceeding with the address comparison. BNDCN performs the address comparison directly using the upper bound in bnd that is already reverted out of 1’s complement form.

+

This instruction does not cause any memory access, and does not read or write any flags.

+

Effective address computation of m32/64 has identical behavior to LEA.

+

Operation + ¶ +

+

BNDCU BND, reg + ¶ +

+
IF reg > NOT(BND.UB) Then
+    BNDSTATUS := 01H;
+    #BR;
+FI;
+
+

BNDCU BND, mem + ¶ +

+
TEMP := LEA(mem);
+IF TEMP > NOT(BND.UB) Then
+    BNDSTATUS := 01H;
+    #BR;
+FI;
+
+

BNDCN BND, reg + ¶ +

+
IF reg > BND.UB Then
+    BNDSTATUS := 01H;
+    #BR;
+FI;
+
+

BNDCN BND, mem + ¶ +

+
TEMP := LEA(mem);
+IF TEMP > BND.UB Then
+    BNDSTATUS := 01H;
+    #BR;
+FI;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
BNDCU void _bnd_chk_ptr_ubounds(const void *q)
+
+

Flags Affected + ¶ +

+

None

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#BRIf upper bound check fails.
#UDIf the LOCK prefix is used.
If ModRM.r/m encodes BND4-BND7 when Intel MPX is enabled.
If 67H prefix is not used and CS.D=0.
If 67H prefix is used and CS.D=1.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + +
#BRIf upper bound check fails.
#UDIf the LOCK prefix is used.
If ModRM.r/m encodes BND4-BND7 when Intel MPX is enabled.
If 16-bit addressing is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + +
#BRIf upper bound check fails.
#UDIf the LOCK prefix is used.
If ModRM.r/m encodes BND4-BND7 when Intel MPX is enabled.
If 16-bit addressing is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + +
#UDIf ModRM.r/m and REX encodes BND4-BND15 when Intel MPX is enabled.
+

Same exceptions as in protected mode.

diff --git a/x86/bndldx.html b/x86/bndldx.html new file mode 100644 index 0000000..e211d88 --- /dev/null +++ b/x86/bndldx.html @@ -0,0 +1,185 @@ + +BNDLDX + — Load Extended Bounds Using Address Translation

BNDLDX + — Load Extended Bounds Using Address Translation

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 1A /r BNDLDX bnd, mibRMV/VMPXLoad the bounds stored in a bound table entry (BTE) into bnd with address translation using the base of mib and conditional on the index of mib matching the pointer value in the BTE.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3
RMModRM:reg (w)SIB.base (r): Address of pointer SIB.index(r)N/A
+

Description + ¶ +

+

BNDLDX uses the linear address constructed from the base register and displacement of the SIB-addressing form of the memory operand (mib) to perform address translation to access a bound table entry and conditionally load the bounds in the BTE to the destination. The destination register is updated with the bounds in the BTE, if the content of the index register of mib matches the pointer value stored in the BTE.

+

If the pointer value comparison fails, the destination is updated with INIT bounds (lb = 0x0, ub = 0x0) (note: as articulated earlier, the upper bound is represented using 1's complement, therefore, the 0x0 value of upper bound allows for access to full memory).

+

This instruction does not cause memory access to the linear address of mib nor the effective address referenced by the base, and does not read or write any flags.

+

Segment overrides apply to the linear address computation with the base of mib, and are used during address translation to generate the address of the bound table entry. By default, the address of the BTE is assumed to be linear address. There are no segmentation checks performed on the base of mib.

+

The base of mib will not be checked for canonical address violation as it does not access memory.

+

Any encoding of this instruction that does not specify base or index register will treat those registers as zero (constant). The reg-reg form of this instruction will remain a NOP.

+

The scale field of the SIB byte has no effect on these instructions and is ignored.

+

The bound register may be partially updated on memory faults. The order in which memory operands are loaded is implementation specific.

+

Operation + ¶ +

+
base := mib.SIB.base ? mib.SIB.base + Disp: 0;
+ptr_value := mib.SIB.index ? mib.SIB.index : 0;
+
+

Outside 64-bit Mode + ¶ +

+
A_BDE[31:0] := (Zero_extend32(base[31:12] « 2) + (BNDCFG[31:12] «12 );
+A_BT[31:0] := LoadFrom(A_BDE );
+IF A_BT[0] equal 0 Then
+    BNDSTATUS := A_BDE | 02H;
+    #BR;
+FI;
+A_BTE[31:0] := (Zero_extend32(base[11:2] « 4) + (A_BT[31:2] « 2 );
+Temp_lb[31:0] := LoadFrom(A_BTE);
+Temp_ub[31:0] := LoadFrom(A_BTE + 4);
+Temp_ptr[31:0] := LoadFrom(A_BTE + 8);
+IF Temp_ptr equal ptr_value Then
+    BND.LB := Temp_lb;
+    BND.UB := Temp_ub;
+ELSE
+    BND.LB := 0;
+    BND.UB := 0;
+FI;
+
+

In 64-bit Mode + ¶ +

+
A_BDE[63:0] := (Zero_extend64(base[47+MAWA:20] « 3) + (BNDCFG[63:12] «12 );1
+A_BT[63:0] := LoadFrom(A_BDE);
+IF A_BT[0] equal 0 Then
+    BNDSTATUS := A_BDE | 02H;
+    #BR;
+FI;
+A_BTE[63:0] := (Zero_extend64(base[19:3] « 5) + (A_BT[63:3] « 3 );
+Temp_lb[63:0] := LoadFrom(A_BTE);
+Temp_ub[63:0] := LoadFrom(A_BTE + 8);
+Temp_ptr[63:0] := LoadFrom(A_BTE + 16);
+IF Temp_ptr equal ptr_value Then
+    BND.LB := Temp_lb;
+    BND.UB := Temp_ub;
+ELSE
+    BND.LB := 0;
+    BND.UB := 0;
+FI;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
BNDLDX: Generated by compiler as needed.
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#BRIf the bound directory entry is invalid.
#UDIf the LOCK prefix is used.
If ModRM.r/m encodes BND4-BND7 when Intel MPX is enabled.
If 67H prefix is not used and CS.D=0.
If 67H prefix is used and CS.D=1.
#GP(0)If a destination effective address of the Bound Table entry is outside the DS segment limit.
If DS register contains a NULL segment selector.
#PF(faultcode) If a page fault occurs.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + +
#UDIf the LOCK prefix is used.
If ModRM.r/m encodes BND4-BND7 when Intel MPX is enabled.
If 16-bit addressing is used.
#GP(0)If a destination effective address of the Bound Table entry is outside the DS segment limit.
+
+

1. If CPL < 3, the supervisor MAWA (MAWAS) is used; this value is 0. If CPL = 3, the user MAWA (MAWAU) is used; this value is enumerated in CPUID.(EAX=07H,ECX=0H):ECX.MAWAU[bits 21:17]. See Appendix E.3.1 of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1.

+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If ModRM.r/m encodes BND4-BND7 when Intel MPX is enabled.
If 16-bit addressing is used.
#GP(0)If a destination effective address of the Bound Table entry is outside the DS segment limit.
#PF(faultcode) If a page fault occurs.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#BRIf the bound directory entry is invalid.
#UDIf ModRM is RIP relative.
If the LOCK prefix is used.
If ModRM.r/m and REX encodes BND4-BND15 when Intel MPX is enabled.
#GP(0)If the memory address (A_BDE or A_BTE) is in a non-canonical form.
#PF(faultcode) If a page fault occurs.
diff --git a/x86/bndmk.html b/x86/bndmk.html new file mode 100644 index 0000000..542d410 --- /dev/null +++ b/x86/bndmk.html @@ -0,0 +1,125 @@ + +BNDMK + — Make Bounds

BNDMK + — Make Bounds

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 1B /r BNDMK bnd, m32RMN.E./VMPXMake lower and upper bounds from m32 and store them in bnd.
F3 0F 1B /r BNDMK bnd, m64RMV/N.E.MPXMake lower and upper bounds from m64 and store them in bnd.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3
RMModRM:reg (w)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Makes bounds from the second operand and stores the lower and upper bounds in the bound register bnd. The second operand must be a memory operand. The content of the base register from the memory operand is stored in the lower bound bnd.LB. The 1's complement of the effective address of m32/m64 is stored in the upper bound bnd.UB. Computation of m32/m64 has identical behavior to LEA.

+

This instruction does not cause any memory access, and does not read or write any flags.

+

If the instruction did not specify base register, the lower bound will be zero. The reg-reg form of this instruction retains legacy behavior (NOP).

+

The instruction causes an invalid-opcode exception (#UD) if executed in 64-bit mode with RIP-relative addressing.

+

Operation + ¶ +

+
BND.LB := SRCMEM.base;
+IF 64-bit mode Then
+    BND.UB := NOT(LEA.64_bits(SRCMEM));
+ELSE
+    BND.UB := Zero_Extend.64_bits(NOT(LEA.32_bits(SRCMEM)));
+FI;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
BNDMK void * _bnd_set_ptr_bounds(const void * q, size_t size);
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#UDIf the LOCK prefix is used.
If ModRM.r/m encodes BND4-BND7 when Intel MPX is enabled.
If 67H prefix is not used and CS.D=0.
If 67H prefix is used and CS.D=1.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + +
#UDIf the LOCK prefix is used.
If ModRM.r/m encodes BND4-BND7 when Intel MPX is enabled.
If 16-bit addressing is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + +
#UDIf the LOCK prefix is used.
If ModRM.r/m encodes BND4-BND7 when Intel MPX is enabled.
If 16-bit addressing is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If ModRM.r/m and REX encodes BND4-BND15 when Intel MPX is enabled.
If RIP-relative addressing is used.
#SS(0)If the memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
+

Same exceptions as in protected mode.

diff --git a/x86/bndmov.html b/x86/bndmov.html new file mode 100644 index 0000000..698afa5 --- /dev/null +++ b/x86/bndmov.html @@ -0,0 +1,274 @@ + +BNDMOV + — Move Bounds

BNDMOV + — Move Bounds

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 1A /r BNDMOV bnd1, bnd2/m64RMN.E./VMPXMove lower and upper bound from bnd2/m64 to bound register bnd1.
66 0F 1A /r BNDMOV bnd1, bnd2/m128RMV/N.E.MPXMove lower and upper bound from bnd2/m128 to bound register bnd1.
66 0F 1B /r BNDMOV bnd1/m64, bnd2MRN.E./VMPXMove lower and upper bound from bnd2 to bnd1/m64.
66 0F 1B /r BNDMOV bnd1/m128, bnd2MRV/N.E.MPXMove lower and upper bound from bnd2 to bound register bnd1/m128.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3
RMModRM:reg (w)ModRM:r/m (r)N/A
MRModRM:r/m (w)ModRM:reg (r)N/A
+

Description + ¶ +

+

BNDMOV moves a pair of lower and upper bound values from the source operand (the second operand) to the destination (the first operand). Each operation is a 128-bit move. The exceptions are the same as for the MOV instruction. The memory format for loading/storing bounds in 64-bit mode is shown in Figure 3-5.

+
[Figure 3-5 content: BNDMOV to memory in 64-bit mode stores the Lower Bound (LB) at byte offset 0 and the Upper Bound (UB) at byte offset 8, for 16 bytes total; in 32-bit mode the LB is at byte offset 0 and the UB at byte offset 4, for 8 bytes total.]
Figure 3-5. Memory Layout of BNDMOV to/from Memory
+

This instruction does not change flags.

+

Operation + ¶ +

+

BNDMOV register to register + ¶ +

+
DEST.LB := SRC.LB;
+DEST.UB := SRC.UB;
+
+

BNDMOV from memory + ¶ +

+
IF 64-bit mode THEN
+        DEST.LB := LOAD_QWORD(SRC);
+        DEST.UB := LOAD_QWORD(SRC+8);
+    ELSE
+        DEST.LB := LOAD_DWORD_ZERO_EXT(SRC);
+        DEST.UB := LOAD_DWORD_ZERO_EXT(SRC+4);
+FI;
+
+

BNDMOV to memory + ¶ +

+
IF 64-bit mode THEN
+        DEST[63:0] := SRC.LB;
+        DEST[127:64] := SRC.UB;
+    ELSE
+        DEST[31:0] := SRC.LB;
+        DEST[63:32] := SRC.UB;
+FI;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
BNDMOV void * _bnd_copy_ptr_bounds(const void *q, const void *r)
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used but the destination is not a memory operand.
If ModRM.r/m encodes BND4-BND7 when Intel MPX is enabled.
If 67H prefix is not used and CS.D=0.
If 67H prefix is used and CS.D=1.
#SS(0)If the memory operand effective address is outside the SS segment limit.
#GP(0)If the memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the destination operand points to a non-writable segment.
If the DS, ES, FS, or GS segment register contains a NULL segment selector.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while CPL is 3.
#PF(faultcode) If a page fault occurs.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + + +
#UDIf the LOCK prefix is used but the destination is not a memory operand.
If ModRM.r/m encodes BND4-BND7 when Intel MPX is enabled.
If 16-bit addressing is used.
#GP(0)If the memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf the memory operand effective address is outside the SS segment limit.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used but the destination is not a memory operand.
If ModRM.r/m encodes BND4-BND7 when Intel MPX is enabled.
If 16-bit addressing is used.
#GP(0)If the memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If the memory operand effective address is outside the SS segment limit.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while CPL is 3.
#PF(faultcode) If a page fault occurs.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used but the destination is not a memory operand.
If ModRM.r/m and REX encodes BND4-BND15 when Intel MPX is enabled.
#SS(0)If the memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while CPL is 3.
#PF(faultcode) If a page fault occurs.
diff --git a/x86/bndstx.html b/x86/bndstx.html new file mode 100644 index 0000000..c66dd7a --- /dev/null +++ b/x86/bndstx.html @@ -0,0 +1,174 @@ + +BNDSTX + — Store Extended Bounds Using Address Translation

BNDSTX + — Store Extended Bounds Using Address Translation

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 1B /r BNDSTX mib, bndMRV/VMPXStore the bounds in bnd and the pointer value in the index register of mib to a bound table entry (BTE) with address translation using the base of mib.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3
MRSIB.base (r): Address of pointer SIB.index(r)ModRM:reg (r)N/A
+

Description + ¶ +

+

BNDSTX uses the linear address constructed from the displacement and base register of the SIB-addressing form of the memory operand (mib) to perform address translation to store to a bound table entry. The bounds in the source operand bnd are written to the lower and upper bounds in the BTE. The content of the index register of mib is written to the pointer value field in the BTE.

+

This instruction does not cause memory access to the linear address of mib nor the effective address referenced by the base, and does not read or write any flags.

+

Segment overrides apply to the linear address computation with the base of mib, and are used during address translation to generate the address of the bound table entry. By default, the address of the BTE is assumed to be linear address. There are no segmentation checks performed on the base of mib.

+

The base of mib will not be checked for canonical address violation as it does not access memory.

+

Any encoding of this instruction that does not specify base or index register will treat those registers as zero (constant). The reg-reg form of this instruction will remain a NOP.

+

The scale field of the SIB byte has no effect on these instructions and is ignored.

+

The bound register may be partially updated on memory faults. The order in which memory operands are loaded is implementation specific.

+

Operation + ¶ +

+
base := mib.SIB.base ? mib.SIB.base + Disp: 0;
+ptr_value := mib.SIB.index ? mib.SIB.index : 0;
+
+

Outside 64-bit Mode + ¶ +

+
A_BDE[31:0] := (Zero_extend32(base[31:12] « 2) + (BNDCFG[31:12] «12 );
+A_BT[31:0] := LoadFrom(A_BDE);
+IF A_BT[0] equal 0 Then
+    BNDSTATUS := A_BDE | 02H;
+    #BR;
+FI;
+A_DEST[31:0] := (Zero_extend32(base[11:2] « 4) + (A_BT[31:2] « 2 ); // address of Bound table entry
+A_DEST[8][31:0] := ptr_value;
+A_DEST[0][31:0] := BND.LB;
+A_DEST[4][31:0] := BND.UB;
+
+

In 64-bit Mode + ¶ +

+
A_BDE[63:0] := (Zero_extend64(base[47+MAWA:20] « 3) + (BNDCFG[63:12] «12 );1
+A_BT[63:0] := LoadFrom(A_BDE);
+IF A_BT[0] equal 0 Then
+    BNDSTATUS := A_BDE | 02H;
+    #BR;
+FI;
+A_DEST[63:0] := (Zero_extend64(base[19:3] « 5) + (A_BT[63:3] « 3 ); // address of Bound table entry
+A_DEST[16][63:0] := ptr_value;
+A_DEST[0][63:0] := BND.LB;
+A_DEST[8][63:0] := BND.UB;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
BNDSTX: _bnd_store_ptr_bounds(const void **ptr_addr, const void *ptr_val);
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + +
#BRIf the bound directory entry is invalid.
#UDIf the LOCK prefix is used.
If ModRM.r/m encodes BND4-BND7 when Intel MPX is enabled.
If 67H prefix is not used and CS.D=0.
If 67H prefix is used and CS.D=1.
#GP(0)If a destination effective address of the Bound Table entry is outside the DS segment limit.
If DS register contains a NULL segment selector.
If the destination operand points to a non-writable segment.
#PF(faultcode) If a page fault occurs.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + +
#UDIf the LOCK prefix is used.
If ModRM.r/m encodes BND4-BND7 when Intel MPX is enabled.
If 16-bit addressing is used.
#GP(0)If a destination effective address of the Bound Table entry is outside the DS segment limit.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If ModRM.r/m encodes BND4-BND7 when Intel MPX is enabled.
If 16-bit addressing is used.
#GP(0)If a destination effective address of the Bound Table entry is outside the DS segment limit.
#PF(faultcode) If a page fault occurs.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+
+

1. If CPL < 3, the supervisor MAWA (MAWAS) is used; this value is 0. If CPL = 3, the user MAWA (MAWAU) is used; this value is enumerated in CPUID.(EAX=07H,ECX=0H):ECX.MAWAU[bits 21:17]. See Appendix E.3.1 of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#BRIf the bound directory entry is invalid.
#UDIf ModRM is RIP relative.
If the LOCK prefix is used.
If ModRM.r/m and REX encodes BND4-BND15 when Intel MPX is enabled.
#GP(0)If the memory address (A_BDE or A_BTE) is in a non-canonical form.
If the destination operand points to a non-writable segment.
#PF(faultcode) If a page fault occurs.
diff --git a/x86/bound.html b/x86/bound.html new file mode 100644 index 0000000..6019aaf --- /dev/null +++ b/x86/bound.html @@ -0,0 +1,150 @@ + +BOUND + — Check Array Index Against Bounds

BOUND + — Check Array Index Against Bounds

+ + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-bit ModeCompat/Leg ModeDescription
62 /rBOUND r16, m16&16RMInvalidValidCheck if r16 (array index) is within bounds specified by m16&16.
62 /rBOUND r32, m32&32RMInvalidValidCheck if r32 (array index) is within bounds specified by m32&32.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

BOUND determines if the first operand (array index) is within the bounds of an array specified by the second operand (bounds operand). The array index is a signed integer located in a register. The bounds operand is a memory location that contains a pair of signed doubleword-integers (when the operand-size attribute is 32) or a pair of signed word-integers (when the operand-size attribute is 16). The first doubleword (or word) is the lower bound of the array and the second doubleword (or word) is the upper bound of the array. The array index must be greater than or equal to the lower bound and less than or equal to the upper bound plus the operand size in bytes. If the index is not within bounds, a BOUND range exceeded exception (#BR) is signaled. When this exception is generated, the saved return instruction pointer points to the BOUND instruction.

+

The bounds limit data structure (two words or doublewords containing the lower and upper limits of the array) is usually placed just before the array itself, making the limits addressable via a constant offset from the beginning of the array. Because the address of the array already will be present in a register, this practice avoids extra bus cycles to obtain the effective address of the array bounds.

+

This instruction executes as described in compatibility mode and legacy mode. It is not valid in 64-bit mode.

+

Operation + ¶ +

+
IF 64bit Mode
+    THEN
+        #UD;
+    ELSE
+        IF (ArrayIndex < LowerBound OR ArrayIndex > UpperBound) THEN
+        (* Below lower bound or above upper bound *)
+            IF <equation for PL enabled> THEN BNDSTATUS := 0
+            #BR;
+        FI;
+FI;
+
+
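The same check written as plain C, for illustration only (the instruction raises #BR instead of returning a status; names and values here are made up). The bounds pair is laid out in memory as the instruction expects, lower bound first:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    int32_t bounds[2] = { 0, 9 };           /* {LowerBound, UpperBound} for a 10-element array */
    int32_t index = 12;
    if (index < bounds[0] || index > bounds[1])
        printf("index %d outside [%d, %d]: BOUND would signal #BR\n",
               index, bounds[0], bounds[1]);
    return 0;
}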

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + +
#BRIf the bounds test fails.
#UDIf second operand is not a memory location.
If the LOCK prefix is used.
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + +
#BRIf the bounds test fails.
#UDIf second operand is not a memory location.
If the LOCK prefix is used.
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#BRIf the bounds test fails.
#UDIf second operand is not a memory location.
If the LOCK prefix is used.
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + +
#UDIf in 64-bit mode.
diff --git a/x86/bsf.html b/x86/bsf.html new file mode 100644 index 0000000..4938a1e --- /dev/null +++ b/x86/bsf.html @@ -0,0 +1,156 @@ + +BSF + — Bit Scan Forward

BSF + — Bit Scan Forward

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-bit ModeCompat/Leg ModeDescription
0F BC /rBSF r16, r/m16RMValidValidBit scan forward on r/m16.
0F BC /rBSF r32, r/m32RMValidValidBit scan forward on r/m32.
REX.W + 0F BC /rBSF r64, r/m64RMValidN.E.Bit scan forward on r/m64.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Searches the source operand (second operand) for the least significant set bit (1 bit). If a least significant 1 bit is found, its bit index is stored in the destination operand (first operand). The source operand can be a register or a memory location; the destination operand is a register. The bit index is an unsigned offset from bit 0 of the source operand. If the content of the source operand is 0, the content of the destination operand is undefined.

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). Using a REX prefix in the form of REX.W promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+

Operation + ¶ +

+
IF SRC = 0
+    THEN
+        ZF := 1;
+        DEST is undefined;
+    ELSE
+        ZF := 0;
+        temp := 0;
+        WHILE Bit(SRC, temp) = 0
+        DO
+            temp := temp + 1;
+        OD;
+        DEST := temp;
+FI;
+
+
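For reference, a C-level sketch of the same behavior, assuming GCC or Clang: __builtin_ctz usually lowers to BSF (or TZCNT) and, like BSF with a zero source, has an undefined result for a zero input, so the call is guarded:

#include <stdio.h>

int main(void) {
    unsigned x = 0x48u;                     /* bits 3 and 6 set */
    if (x != 0)
        printf("lowest set bit index: %d\n", __builtin_ctz(x));     /* expected: 3 */
    return 0;
}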

Flags Affected + ¶ +

+

The ZF flag is set to 1 if the source operand is 0; otherwise, the ZF flag is cleared. The CF, OF, SF, AF, and PF flags are undefined.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/bsr.html b/x86/bsr.html new file mode 100644 index 0000000..9ed8edd --- /dev/null +++ b/x86/bsr.html @@ -0,0 +1,156 @@ + +BSR + — Bit Scan Reverse

BSR + — Bit Scan Reverse

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-bit ModeCompat/Leg ModeDescription
0F BD /rBSR r16, r/m16RMValidValidBit scan reverse on r/m16.
0F BD /rBSR r32, r/m32RMValidValidBit scan reverse on r/m32.
REX.W + 0F BD /rBSR r64, r/m64RMValidN.E.Bit scan reverse on r/m64.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Searches the source operand (second operand) for the most significant set bit (1 bit). If a most significant 1 bit is found, its bit index is stored in the destination operand (first operand). The source operand can be a register or a memory location; the destination operand is a register. The bit index is an unsigned offset from bit 0 of the source operand. If the content of the source operand is 0, the content of the destination operand is undefined.

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). Using a REX prefix in the form of REX.W promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+

Operation + ¶ +

+
IF SRC = 0
+    THEN
+        ZF := 1;
+        DEST is undefined;
+    ELSE
+        ZF := 0;
+        temp := OperandSize – 1;
+        WHILE Bit(SRC, temp) = 0
+        DO
+            temp := temp - 1;
+        OD;
+        DEST := temp;
+FI;
+
+
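For reference, a C-level sketch under the same assumptions (GCC/Clang builtins): the index of the most significant set bit of a 32-bit value is 31 - __builtin_clz(x), and the result is undefined for x = 0, matching BSR:

#include <stdio.h>

int main(void) {
    unsigned x = 0x48u;                     /* bits 3 and 6 set */
    if (x != 0)
        printf("highest set bit index: %d\n", 31 - __builtin_clz(x));   /* expected: 6 */
    return 0;
}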

Flags Affected + ¶ +

+

The ZF flag is set to 1 if the source operand is 0; otherwise, the ZF flag is cleared. The CF, OF, SF, AF, and PF flags are undefined.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/bswap.html b/x86/bswap.html new file mode 100644 index 0000000..eb48b2e --- /dev/null +++ b/x86/bswap.html @@ -0,0 +1,87 @@ + +BSWAP + — Byte Swap

BSWAP + — Byte Swap

+ + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-bit ModeCompat/Leg ModeDescription
0F C8+rdBSWAP r32OValid*ValidReverses the byte order of a 32-bit register.
REX.W + 0F C8+rdBSWAP r64OValidN.E.Reverses the byte order of a 64-bit register.
+
+

* See IA-32 Architecture Compatibility section below.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
Oopcode + rd (r, w)N/AN/AN/A
+

Description + ¶ +

+

Reverses the byte order of a 32-bit or 64-bit (destination) register. This instruction is provided for converting little-endian values to big-endian format and vice versa. To swap bytes in a word value (16-bit register), use the XCHG instruction. When the BSWAP instruction references a 16-bit register, the result is undefined.

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). Using a REX prefix in the form of REX.W promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+

IA-32 Architecture Legacy Compatibility + ¶ +

+

The BSWAP instruction is not supported on IA-32 processors earlier than the Intel486™ processor family. For compatibility with this instruction, software should include functionally equivalent code for execution on Intel processors earlier than the Intel486 processor family.

+

Operation + ¶ +

+
TEMP := DEST
+IF 64-bit mode AND OperandSize = 64
+    THEN
+        DEST[7:0] := TEMP[63:56];
+        DEST[15:8] := TEMP[55:48];
+        DEST[23:16] := TEMP[47:40];
+        DEST[31:24] := TEMP[39:32];
+        DEST[39:32] := TEMP[31:24];
+        DEST[47:40] := TEMP[23:16];
+        DEST[55:48] := TEMP[15:8];
+        DEST[63:56] := TEMP[7:0];
+    ELSE
+        DEST[7:0] := TEMP[31:24];
+        DEST[15:8] := TEMP[23:16];
+        DEST[23:16] := TEMP[15:8];
+        DEST[31:24] := TEMP[7:0];
+FI;
+
+
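A C-level sketch, assuming GCC or Clang: __builtin_bswap32 and __builtin_bswap64 normally compile down to BSWAP (exact code generation is compiler-dependent); the 16-bit case is deliberately omitted since BSWAP of a 16-bit register gives an undefined result:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t v32 = 0x11223344u;
    uint64_t v64 = 0x1122334455667788ull;
    printf("%#x\n", __builtin_bswap32(v32));                        /* 0x44332211 */
    printf("%#llx\n", (unsigned long long)__builtin_bswap64(v64));  /* 0x8877665544332211 */
    return 0;
}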

Flags Affected + ¶ +

+

None.

+

Exceptions (All Operating Modes) + ¶ +

+

#UD If the LOCK prefix is used.

diff --git a/x86/bt.html b/x86/bt.html new file mode 100644 index 0000000..089aacc --- /dev/null +++ b/x86/bt.html @@ -0,0 +1,181 @@ + +BT + — Bit Test

BT + — Bit Test

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-bit ModeCompat/Leg ModeDescription
0F A3 /rBT r/m16, r16MRValidValidStore selected bit in CF flag.
0F A3 /rBT r/m32, r32MRValidValidStore selected bit in CF flag.
REX.W + 0F A3 /rBT r/m64, r64MRValidN.E.Store selected bit in CF flag.
0F BA /4 ibBT r/m16, imm8MIValidValidStore selected bit in CF flag.
0F BA /4 ibBT r/m32, imm8MIValidValidStore selected bit in CF flag.
REX.W + 0F BA /4 ibBT r/m64, imm8MIValidN.E.Store selected bit in CF flag.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MRModRM:r/m (r)ModRM:reg (r)N/AN/A
MIModRM:r/m (r)imm8N/AN/A
+

Description + ¶ +

+

Selects the bit in a bit string (specified with the first operand, called the bit base) at the bit-position designated by the bit offset (specified by the second operand) and stores the value of the bit in the CF flag. The bit base operand can be a register or a memory location; the bit offset operand can be a register or an immediate value:

+
    +
  • If the bit base operand specifies a register, the instruction takes the modulo 16, 32, or 64 of the bit offset operand (modulo size depends on the mode and register size; 64-bit operands are available only in 64-bit mode).
  • +
  • If the bit base operand specifies a memory location, the operand represents the address of the byte in memory that contains the bit base (bit 0 of the specified byte) of the bit string. The range of the bit position that can be referenced by the offset operand depends on the operand size.
+

See also: Bit(BitBase, BitOffset) on page 3-11.

+

Some assemblers support immediate bit offsets larger than 31 by using the immediate bit offset field in combination with the displacement field of the memory operand. In this case, the low-order 3 or 5 bits (3 for 16-bit operands, 5 for 32-bit operands) of the immediate bit offset are stored in the immediate bit offset field, and the high-order bits are shifted and combined with the byte displacement in the addressing mode by the assembler. The processor will ignore the high order bits if they are not zero.

+

When accessing a bit in memory, the processor may access 4 bytes starting from the memory address for a 32-bit operand size, using the following relationship:

+

Effective Address + (4 ∗ (BitOffset DIV 32))

+

Or, it may access 2 bytes starting from the memory address for a 16-bit operand, using this relationship:

+

Effective Address + (2 ∗ (BitOffset DIV 16))

+

It may do so even when only a single byte needs to be accessed to reach the given bit. When using this bit addressing mechanism, software should avoid referencing areas of memory close to address space holes. In particular, it should avoid references to memory-mapped I/O registers. Instead, software should use the MOV instructions to load from or store to these addresses, and use the register form of these instructions to manipulate the data.

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). Using a REX prefix in the form of REX.W promotes operation to 64 bit operands. See the summary chart at the beginning of this section for encoding data and limits.

+

Operation + ¶ +

+
CF := Bit(BitBase, BitOffset);
+
+
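A plain-C sketch of the memory form of Bit(BitBase, BitOffset), using the dword relationship given above (the helper name is illustrative, not part of any API): the dword holding the bit sits at BitBase + 4 * (BitOffset DIV 32), and the bit within it is BitOffset MOD 32.

#include <stdio.h>
#include <stdint.h>

/* Returns the value BT would leave in CF for a bit string starting at 'base'. */
static int bit_test(const uint32_t *base, uint32_t bitoffset) {
    return (base[bitoffset / 32] >> (bitoffset % 32)) & 1u;
}

int main(void) {
    uint32_t bits[4] = { 0, 0, 0x80000000u, 0 };    /* bit 95 of the string is set */
    printf("%d %d\n", bit_test(bits, 95), bit_test(bits, 3));       /* expected: 1 0 */
    return 0;
}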

Flags Affected + ¶ +

+

The CF flag contains the value of the selected bit. The ZF flag is unaffected. The OF, SF, AF, and PF flags are undefined.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/btc.html b/x86/btc.html new file mode 100644 index 0000000..e0e1f07 --- /dev/null +++ b/x86/btc.html @@ -0,0 +1,180 @@ + +BTC + — Bit Test and Complement

BTC + — Bit Test and Complement

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-bit ModeCompat/Leg ModeDescription
0F BB /rBTC r/m16, r16MRValidValidStore selected bit in CF flag and complement.
0F BB /rBTC r/m32, r32MRValidValidStore selected bit in CF flag and complement.
REX.W + 0F BB /rBTC r/m64, r64MRValidN.E.Store selected bit in CF flag and complement.
0F BA /7 ibBTC r/m16, imm8MIValidValidStore selected bit in CF flag and complement.
0F BA /7 ibBTC r/m32, imm8MIValidValidStore selected bit in CF flag and complement.
REX.W + 0F BA /7 ibBTC r/m64, imm8MIValidN.E.Store selected bit in CF flag and complement.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MRModRM:r/m (r, w)ModRM:reg (r)N/AN/A
MIModRM:r/m (r, w)imm8N/AN/A
+

Description + ¶ +

+

Selects the bit in a bit string (specified with the first operand, called the bit base) at the bit-position designated by the bit offset operand (second operand), stores the value of the bit in the CF flag, and complements the selected bit in the bit string. The bit base operand can be a register or a memory location; the bit offset operand can be a register or an immediate value:

+
    +
  • If the bit base operand specifies a register, the instruction takes the modulo 16, 32, or 64 of the bit offset operand (modulo size depends on the mode and register size; 64-bit operands are available only in 64-bit mode). This allows any bit position to be selected.
  • +
  • If the bit base operand specifies a memory location, the operand represents the address of the byte in memory that contains the bit base (bit 0 of the specified byte) of the bit string. The range of the bit position that can be referenced by the offset operand depends on the operand size.
+

See also: Bit(BitBase, BitOffset) on page 3-11.

+

Some assemblers support immediate bit offsets larger than 31 by using the immediate bit offset field in combination with the displacement field of the memory operand. See “BT—Bit Test” in this chapter for more information on this addressing mechanism.

+

This instruction can be used with a LOCK prefix to allow the instruction to be executed atomically.

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). Using a REX prefix in the form of REX.W promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+

Operation + ¶ +

+
CF := Bit(BitBase, BitOffset);
+Bit(BitBase, BitOffset) := NOT Bit(BitBase, BitOffset);
+
+
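A plain-C sketch of the non-atomic semantics for a bit string in memory (the helper name is made up for illustration); the atomic form would correspond to LOCK BTC rather than this read-modify-write sequence:

#include <stdio.h>
#include <stdint.h>

/* Returns the old bit value (what BTC leaves in CF) and complements the bit in place. */
static int bit_test_complement(uint32_t *base, uint32_t bitoffset) {
    uint32_t *word = &base[bitoffset / 32];
    uint32_t  mask = 1u << (bitoffset % 32);
    int old = (*word & mask) != 0;
    *word ^= mask;
    return old;
}

int main(void) {
    uint32_t bits[2] = { 0x1u, 0 };
    printf("%d -> %#x\n", bit_test_complement(bits, 0), (unsigned)bits[0]);  /* 1 -> 0 */
    printf("%d -> %#x\n", bit_test_complement(bits, 4), (unsigned)bits[0]);  /* 0 -> 0x10 */
    return 0;
}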

Flags Affected + ¶ +

+

The CF flag contains the value of the selected bit before it is complemented. The ZF flag is unaffected. The OF, SF, AF, and PF flags are undefined.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination operand points to a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
diff --git a/x86/btr.html b/x86/btr.html new file mode 100644 index 0000000..65bb1a8 --- /dev/null +++ b/x86/btr.html @@ -0,0 +1,180 @@ + +BTR + — Bit Test and Reset

BTR + — Bit Test and Reset

Opcode | Instruction | Op/En | 64-bit Mode | Compat/Leg Mode | Description
0F B3 /r | BTR r/m16, r16 | MR | Valid | Valid | Store selected bit in CF flag and clear.
0F B3 /r | BTR r/m32, r32 | MR | Valid | Valid | Store selected bit in CF flag and clear.
REX.W + 0F B3 /r | BTR r/m64, r64 | MR | Valid | N.E. | Store selected bit in CF flag and clear.
0F BA /6 ib | BTR r/m16, imm8 | MI | Valid | Valid | Store selected bit in CF flag and clear.
0F BA /6 ib | BTR r/m32, imm8 | MI | Valid | Valid | Store selected bit in CF flag and clear.
REX.W + 0F BA /6 ib | BTR r/m64, imm8 | MI | Valid | N.E. | Store selected bit in CF flag and clear.
+

Instruction Operand Encoding + ¶ +

Op/En | Operand 1 | Operand 2 | Operand 3 | Operand 4
MR | ModRM:r/m (r, w) | ModRM:reg (r) | N/A | N/A
MI | ModRM:r/m (r, w) | imm8 | N/A | N/A
+

Description + ¶ +

+

Selects the bit in a bit string (specified with the first operand, called the bit base) at the bit-position designated by the bit offset operand (second operand), stores the value of the bit in the CF flag, and clears the selected bit in the bit string to 0. The bit base operand can be a register or a memory location; the bit offset operand can be a register or an immediate value:

+
    +
  • If the bit base operand specifies a register, the instruction takes the modulo 16, 32, or 64 of the bit offset operand (modulo size depends on the mode and register size; 64-bit operands are available only in 64-bit mode). This allows any bit position to be selected.
  • +
  • If the bit base operand specifies a memory location, the operand represents the address of the byte in memory that contains the bit base (bit 0 of the specified byte) of the bit string. The range of the bit position that can be referenced by the offset operand depends on the operand size.
+

See also: Bit(BitBase, BitOffset) on page 3-11.

+

Some assemblers support immediate bit offsets larger than 31 by using the immediate bit offset field in combination with the displacement field of the memory operand. See “BT—Bit Test” in this chapter for more information on this addressing mechanism.

+

This instruction can be used with a LOCK prefix to allow the instruction to be executed atomically.
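A minimal sketch of the same idea for BTR, again assuming GCC or Clang __atomic built-ins (an assumption, not part of this page): atomically clear one bit, for example to release a flag set earlier with a LOCK-prefixed BTS, and recover its previous value.

#include <stdint.h>

/* Atomically clear bit 'pos' of *word and return its previous value,
 * i.e., CF := Bit(word, pos); Bit(word, pos) := 0. */
static inline int atomic_clear_bit(volatile uint32_t *word, unsigned pos)
{
    uint32_t mask = UINT32_C(1) << (pos & 31);
    uint32_t old  = __atomic_fetch_and(word, ~mask, __ATOMIC_SEQ_CST);
    return (old & mask) != 0;
}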

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). Using a REX prefix in the form of REX.W promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+

Operation + ¶ +

+
CF := Bit(BitBase, BitOffset);
+Bit(BitBase, BitOffset) := 0;
+
+

Flags Affected + ¶ +

+

The CF flag contains the value of the selected bit before it is cleared. The ZF flag is unaffected. The OF, SF, AF, and PF flags are undefined.

+

Protected Mode Exceptions + ¶ +

#GP(0) If the destination operand points to a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0) If a memory operand effective address is outside the SS segment limit.
#PF(fault-code) If a page fault occurs.
#AC(0) If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UD If the LOCK prefix is used but the destination is not a memory operand.
+

Real-Address Mode Exceptions + ¶ +

#GP If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS If a memory operand effective address is outside the SS segment limit.
#UD If the LOCK prefix is used but the destination is not a memory operand.
+

Virtual-8086 Mode Exceptions + ¶ +

#GP(0) If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0) If a memory operand effective address is outside the SS segment limit.
#PF(fault-code) If a page fault occurs.
#AC(0) If alignment checking is enabled and an unaligned memory reference is made.
#UD If the LOCK prefix is used but the destination is not a memory operand.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

#SS(0) If a memory address referencing the SS segment is in a non-canonical form.
#GP(0) If the memory address is in a non-canonical form.
#PF(fault-code) If a page fault occurs.
#AC(0) If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UD If the LOCK prefix is used but the destination is not a memory operand.
diff --git a/x86/bts.html b/x86/bts.html
new file mode 100644
index 0000000..0c5d068
--- /dev/null
+++ b/x86/bts.html
@@ -0,0 +1,180 @@

BTS + — Bit Test and Set

Opcode | Instruction | Op/En | 64-bit Mode | Compat/Leg Mode | Description
0F AB /r | BTS r/m16, r16 | MR | Valid | Valid | Store selected bit in CF flag and set.
0F AB /r | BTS r/m32, r32 | MR | Valid | Valid | Store selected bit in CF flag and set.
REX.W + 0F AB /r | BTS r/m64, r64 | MR | Valid | N.E. | Store selected bit in CF flag and set.
0F BA /5 ib | BTS r/m16, imm8 | MI | Valid | Valid | Store selected bit in CF flag and set.
0F BA /5 ib | BTS r/m32, imm8 | MI | Valid | Valid | Store selected bit in CF flag and set.
REX.W + 0F BA /5 ib | BTS r/m64, imm8 | MI | Valid | N.E. | Store selected bit in CF flag and set.
+

Instruction Operand Encoding + ¶ +

Op/En | Operand 1 | Operand 2 | Operand 3 | Operand 4
MR | ModRM:r/m (r, w) | ModRM:reg (r) | N/A | N/A
MI | ModRM:r/m (r, w) | imm8 | N/A | N/A
+

Description + ¶ +

+

Selects the bit in a bit string (specified with the first operand, called the bit base) at the bit-position designated by the bit offset operand (second operand), stores the value of the bit in the CF flag, and sets the selected bit in the bit string to 1. The bit base operand can be a register or a memory location; the bit offset operand can be a register or an immediate value:

+
    +
  • If the bit base operand specifies a register, the instruction takes the modulo 16, 32, or 64 of the bit offset operand (modulo size depends on the mode and register size; 64-bit operands are available only in 64-bit mode). This allows any bit position to be selected.
  • +
  • If the bit base operand specifies a memory location, the operand represents the address of the byte in memory that contains the bit base (bit 0 of the specified byte) of the bit string. The range of the bit position that can be referenced by the offset operand depends on the operand size.
+

See also: Bit(BitBase, BitOffset) on page 3-11.

+

Some assemblers support immediate bit offsets larger than 31 by using the immediate bit offset field in combination with the displacement field of the memory operand. See “BT—Bit Test” in this chapter for more information on this addressing mechanism.

+

This instruction can be used with a LOCK prefix to allow the instruction to be executed atomically.
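A LOCK-prefixed BTS is the classic test-and-set primitive. The sketch below is hedged: it assumes GCC or Clang __atomic built-ins and a hypothetical lock word whose bit 0 is the ownership flag; compilers may lower the fetch-or to LOCK BTS or to a CMPXCHG loop.

#include <stdint.h>

/* Atomically set bit 'pos' of *word and return its previous value,
 * i.e., CF := Bit(word, pos); Bit(word, pos) := 1. */
static inline int atomic_test_and_set_bit(volatile uint32_t *word, unsigned pos)
{
    uint32_t mask = UINT32_C(1) << (pos & 31);
    uint32_t old  = __atomic_fetch_or(word, mask, __ATOMIC_SEQ_CST);
    return (old & mask) != 0;
}

/* Spin until bit 0 was previously clear: a simple test-and-set lock. */
static inline void spin_acquire(volatile uint32_t *lock_word)
{
    while (atomic_test_and_set_bit(lock_word, 0))
        ;   /* bit was already set, so another owner holds the lock */
}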

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). Using a REX prefix in the form of REX.W promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+

Operation + ¶ +

+
CF := Bit(BitBase, BitOffset);
+Bit(BitBase, BitOffset) := 1;
+
+

Flags Affected + ¶ +

+

The CF flag contains the value of the selected bit before it is set. The ZF flag is unaffected. The OF, SF, AF, and PF flags are undefined.

+

Protected Mode Exceptions + ¶ +

#GP(0) If the destination operand points to a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0) If a memory operand effective address is outside the SS segment limit.
#PF(fault-code) If a page fault occurs.
#AC(0) If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UD If the LOCK prefix is used but the destination is not a memory operand.
+

Real-Address Mode Exceptions + ¶ +

#GP If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS If a memory operand effective address is outside the SS segment limit.
#UD If the LOCK prefix is used but the destination is not a memory operand.
+

Virtual-8086 Mode Exceptions + ¶ +

#GP(0) If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0) If a memory operand effective address is outside the SS segment limit.
#PF(fault-code) If a page fault occurs.
#AC(0) If alignment checking is enabled and an unaligned memory reference is made.
#UD If the LOCK prefix is used but the destination is not a memory operand.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

#SS(0) If a memory address referencing the SS segment is in a non-canonical form.
#GP(0) If the memory address is in a non-canonical form.
#PF(fault-code) If a page fault occurs.
#AC(0) If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UD If the LOCK prefix is used but the destination is not a memory operand.
diff --git a/x86/bzhi.html b/x86/bzhi.html
new file mode 100644
index 0000000..c65b9d4
--- /dev/null
+++ b/x86/bzhi.html
@@ -0,0 +1,82 @@

BZHI + — Zero High Bits Starting with Specified Bit Position

Opcode/Instruction | Op/En | 64/32-bit Mode | CPUID Feature Flag | Description
VEX.LZ.0F38.W0 F5 /r BZHI r32a, r/m32, r32b | RMV | V/V | BMI2 | Zero bits in r/m32 starting with the position in r32b, write result to r32a.
VEX.LZ.0F38.W1 F5 /r BZHI r64a, r/m64, r64b | RMV | V/N.E. | BMI2 | Zero bits in r/m64 starting with the position in r64b, write result to r64a.
+

Instruction Operand Encoding + ¶ +

Op/En | Operand 1 | Operand 2 | Operand 3 | Operand 4
RMV | ModRM:reg (w) | ModRM:r/m (r) | VEX.vvvv (r) | N/A
+

Description + ¶ +

+

BZHI copies the bits of the first source operand (the second operand) into the destination operand (the first operand) and clears the higher bits in the destination according to the INDEX value specified by the second source operand (the third operand). The INDEX is specified by bits 7:0 of the second source operand. The INDEX value is saturated at the value of OperandSize - 1. CF is set if the number contained in the 8 low bits of the third operand is greater than OperandSize - 1.

+

This instruction is not supported in real mode and virtual-8086 mode. The operand size is always 32 bits if not in 64-bit mode. In 64-bit mode operand size 64 requires VEX.W1. VEX.W1 is ignored in non-64-bit modes. An attempt to execute this instruction with VEX.L not equal to 0 will cause #UD.
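A small usage sketch built on the _bzhi_u32 intrinsic listed under “Intel C/C++ Compiler Intrinsic Equivalent” below; the compile flag is an assumption (e.g., -mbmi2 on GCC/Clang), not something this page specifies.

#include <stdint.h>
#include <immintrin.h>

/* Keep only the low n bits of x; bits n and above are zeroed.
 * For n >= 32 the index saturates, x is returned unchanged, and CF is set. */
static inline uint32_t low_bits(uint32_t x, uint32_t n)
{
    return _bzhi_u32(x, n);
}

/* Example: low_bits(0xDEADBEEF, 8) == 0xEF; low_bits(0xDEADBEEF, 40) == 0xDEADBEEF. */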

+

Operation + ¶ +

+
N := SRC2[7:0]
+DEST := SRC1
+IF (N < OperandSize)
+    DEST[OperandSize-1:N] := 0
+FI
+IF (N > OperandSize - 1)
+    CF := 1
+ELSE
+    CF := 0
+FI
+
+

Flags Affected + ¶ +

+

ZF and SF flags are updated based on the result. CF flag is set as specified in the Operation section. OF flag is cleared. AF and PF flags are undefined.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
BZHI unsigned __int32 _bzhi_u32(unsigned __int32 src, unsigned __int32 index);
+
+
BZHI unsigned __int64 _bzhi_u64(unsigned __int64 src, unsigned __int32 index);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-29, “Type 13 Class Exception Conditions.”

diff --git a/x86/call.html b/x86/call.html
new file mode 100644
index 0000000..61157a5
--- /dev/null
+++ b/x86/call.html
@@ -0,0 +1,890 @@

CALL + — Call Procedure

Opcode | Instruction | Op/En | 64-bit Mode | Compat/Leg Mode | Description
E8 cw | CALL rel16 | D | N.S. | Valid | Call near, relative, displacement relative to next instruction.
E8 cd | CALL rel32 | D | Valid | Valid | Call near, relative, displacement relative to next instruction. 32-bit displacement sign extended to 64-bits in 64-bit mode.
FF /2 | CALL r/m16 | M | N.E. | Valid | Call near, absolute indirect, address given in r/m16.
FF /2 | CALL r/m32 | M | N.E. | Valid | Call near, absolute indirect, address given in r/m32.
FF /2 | CALL r/m64 | M | Valid | N.E. | Call near, absolute indirect, address given in r/m64.
9A cd | CALL ptr16:16 | D | Invalid | Valid | Call far, absolute, address given in operand.
9A cp | CALL ptr16:32 | D | Invalid | Valid | Call far, absolute, address given in operand.
FF /3 | CALL m16:16 | M | Valid | Valid | Call far, absolute indirect address given in m16:16. In 32-bit mode: if selector points to a gate, then RIP = 32-bit zero extended displacement taken from gate; else RIP = zero extended 16-bit offset from far pointer referenced in the instruction.
FF /3 | CALL m16:32 | M | Valid | Valid | In 64-bit mode: If selector points to a gate, then RIP = 64-bit displacement taken from gate; else RIP = zero extended 32-bit offset from far pointer referenced in the instruction.
REX.W FF /3 | CALL m16:64 | M | Valid | N.E. | In 64-bit mode: If selector points to a gate, then RIP = 64-bit displacement taken from gate; else RIP = 64-bit offset from far pointer referenced in the instruction.
+

Instruction Operand Encoding + ¶ +

Op/En | Operand 1 | Operand 2 | Operand 3 | Operand 4
D | Offset | N/A | N/A | N/A
M | ModRM:r/m (r) | N/A | N/A | N/A
+

Description + ¶ +

+

Saves procedure linking information on the stack and branches to the called procedure specified using the target operand. The target operand specifies the address of the first instruction in the called procedure. The operand can be an immediate value, a general-purpose register, or a memory location.

+

This instruction can be used to execute four types of calls:

+
    +
  • Near Call — A call to a procedure in the current code segment (the segment currently pointed to by the CS register), sometimes referred to as an intra-segment call.
  • +
  • Far Call — A call to a procedure located in a different segment than the current code segment, sometimes referred to as an inter-segment call.
  • +
  • Inter-privilege-level far call — A far call to a procedure in a segment at a different privilege level than that of the currently executing program or procedure.
  • +
  • Task switch — A call to a procedure located in a different task.
+

The latter two call types (inter-privilege-level call and task switch) can only be executed in protected mode. See “Calling Procedures Using Call and RET” in Chapter 6 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for additional information on near, far, and inter-privilege-level calls. See Chapter 8, “Task Management,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A, for information on performing task switches with the CALL instruction.

+

Near Call. When executing a near call, the processor pushes the value of the EIP register (which contains the offset of the instruction following the CALL instruction) on the stack (for use later as a return-instruction pointer). The processor then branches to the address in the current code segment specified by the target operand. The target operand specifies either an absolute offset in the code segment (an offset from the base of the code segment) or a relative offset (a signed displacement relative to the current value of the instruction pointer in the EIP register; this value points to the instruction following the CALL instruction). The CS register is not changed on near calls.
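A minimal illustration of the return-instruction pointer described above, assuming GCC or Clang (the __builtin_return_address built-in is a compiler feature, not something this page defines): the callee can observe the address that the caller’s near CALL pushed, which is the address of the instruction following that CALL.

#include <stdio.h>

/* The caller's near CALL pushed the address of its next instruction;
 * __builtin_return_address(0) reads that saved return address. */
void callee(void)
{
    printf("return address pushed by CALL: %p\n", __builtin_return_address(0));
}

int main(void)
{
    callee();   /* assembles to CALL rel32 (E8) or an indirect CALL (FF /2) */
    return 0;
}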

+

For a near call absolute, an absolute offset is specified indirectly in a general-purpose register or a memory location (r/m16, r/m32, or r/m64). The operand-size attribute determines the size of the target operand (16, 32 or 64 bits). When in 64-bit mode, the operand size for near call (and all near branches) is forced to 64-bits. Absolute offsets are loaded directly into the EIP(RIP) register. If the operand size attribute is 16, the upper two bytes of the EIP register are cleared, resulting in a maximum instruction pointer size of 16 bits. When accessing an absolute offset indirectly using the stack pointer [ESP] as the base register, the base value used is the value of the ESP before the instruction executes.

+

A relative offset (rel16 or rel32) is generally specified as a label in assembly code. But at the machine code level, it is encoded as a signed, 16- or 32-bit immediate value. This value is added to the value in the EIP(RIP) register. In 64-bit mode the relative offset is always a 32-bit immediate value which is sign extended to 64-bits before it is added to the value in the RIP register for the target calculation. As with absolute offsets, the operand-size attribute determines the size of the target operand (16, 32, or 64 bits). In 64-bit mode the target operand will always be 64-bits because the operand size is forced to 64-bits for near branches.
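For illustration only (the 5-byte length below assumes the plain E8 cd encoding with no prefixes): the displacement an assembler stores for CALL rel32 is the target address minus the address of the instruction that follows the CALL.

#include <stdint.h>

/* call_site is the address of the E8 opcode byte.  The instruction is 5 bytes
 * (1 opcode byte + 4 displacement bytes), so the displacement is relative to
 * call_site + 5.  At execution time: RIP := (call_site + 5) + rel32. */
static int32_t encode_call_rel32(uint64_t call_site, uint64_t target)
{
    return (int32_t)(target - (call_site + 5));
}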

+

Far Calls in Real-Address or Virtual-8086 Mode. When executing a far call in real-address or virtual-8086 mode, the processor pushes the current value of both the CS and EIP registers on the stack for use as a return-instruction pointer. The processor then performs a “far branch” to the code segment and offset specified with the target operand for the called procedure. The target operand specifies an absolute far address either directly with a pointer (ptr16:16 or ptr16:32) or indirectly with a memory location (m16:16 or m16:32). With the pointer method, the segment and offset of the called procedure are encoded in the instruction using a 4-byte (16-bit operand size) or 6-byte (32-bit operand size) far address immediate. With the indirect method, the target operand specifies a memory location that contains a 4-byte (16-bit operand size) or 6-byte (32-bit operand size) far address. The operand-size attribute determines the size of the offset (16 or 32 bits) in the far address. The far address is loaded directly into the CS and EIP registers. If the operand-size attribute is 16, the upper two bytes of the EIP register are cleared.

+

Far Calls in Protected Mode. When the processor is operating in protected mode, the CALL instruction can be used to perform the following types of far calls:

+
    +
  • Far call to the same privilege level
  • +
  • Far call to a different privilege level (inter-privilege level call)
  • +
  • Task switch (far call to another task)
+

In protected mode, the processor always uses the segment selector part of the far address to access the corresponding descriptor in the GDT or LDT. The descriptor type (code segment, call gate, task gate, or TSS) and access rights determine the type of call operation to be performed.

+

If the selected descriptor is for a code segment, a far call to a code segment at the same privilege level is performed. (If the selected code segment is at a different privilege level and the code segment is non-conforming, a general-protection exception is generated.) A far call to the same privilege level in protected mode is very similar to one carried out in real-address or virtual-8086 mode. The target operand specifies an absolute far address either directly with a pointer (ptr16:16 or ptr16:32) or indirectly with a memory location (m16:16 or m16:32). The operand-size attribute determines the size of the offset (16 or 32 bits) in the far address. The new code segment selector and its descriptor are loaded into the CS register; the offset from the instruction is loaded into the EIP register.

+

A call gate (described in the next paragraph) can also be used to perform a far call to a code segment at the same privilege level. Using this mechanism provides an extra level of indirection and is the preferred method of making calls between 16-bit and 32-bit code segments.

+

When executing an inter-privilege-level far call, the code segment for the procedure being called must be accessed through a call gate. The segment selector specified by the target operand identifies the call gate. The target operand can specify the call gate segment selector either directly with a pointer (ptr16:16 or ptr16:32) or indirectly with a memory location (m16:16 or m16:32). The processor obtains the segment selector for the new code segment and the new instruction pointer (offset) from the call gate descriptor. (The offset from the target operand is ignored when a call gate is used.)

+

On inter-privilege-level calls, the processor switches to the stack for the privilege level of the called procedure. The segment selector for the new stack segment is specified in the TSS for the currently running task. The branch to the new code segment occurs after the stack switch. (Note that when using a call gate to perform a far call to a segment at the same privilege level, no stack switch occurs.) On the new stack, the processor pushes the segment selector and stack pointer for the calling procedure’s stack, an optional set of parameters from the calling procedure’s stack, and the segment selector and instruction pointer for the calling procedure’s code segment. (A value in the call gate descriptor determines how many parameters to copy to the new stack.) Finally, the processor branches to the address of the procedure being called within the new code segment.

+

Executing a task switch with the CALL instruction is similar to executing a call through a call gate. The target operand specifies the segment selector of the task gate for the new task activated by the switch (the offset in the target operand is ignored). The task gate in turn points to the TSS for the new task, which contains the segment selectors for the task’s code and stack segments. Note that the TSS also contains the EIP value for the next instruction that was to be executed before the calling task was suspended. This instruction pointer value is loaded into the EIP register to re-start the calling task.

+

The CALL instruction can also specify the segment selector of the TSS directly, which eliminates the indirection of the task gate. See Chapter 8, “Task Management,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A, for information on the mechanics of a task switch.

+

When you execute a task switch with a CALL instruction, the nested task flag (NT) is set in the EFLAGS register and the new TSS’s previous task link field is loaded with the old task’s TSS selector. Code is expected to suspend this nested task by executing an IRET instruction which, because the NT flag is set, automatically uses the previous task link to return to the calling task. (See “Task Linking” in Chapter 8 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A, for information on nested tasks.) Switching tasks with the CALL instruction differs in this regard from the JMP instruction. JMP does not set the NT flag and therefore does not expect an IRET instruction to suspend the task.

+

Mixing 16-Bit and 32-Bit Calls. When making far calls between 16-bit and 32-bit code segments, use a call gate. If the far call is from a 32-bit code segment to a 16-bit code segment, the call should be made from the first 64 KBytes of the 32-bit code segment. This is because the operand-size attribute of the instruction is set to 16, so only a 16-bit return address offset can be saved. Also, the call should be made using a 16-bit call gate so that 16-bit values can be pushed on the stack. See Chapter 22, “Mixing 16-Bit and 32-Bit Code,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B, for more information.

+

Far Calls in Compatibility Mode. When the processor is operating in compatibility mode, the CALL instruction can be used to perform the following types of far calls:

+
    +
  • Far call to the same privilege level, remaining in compatibility mode
  • +
  • Far call to the same privilege level, transitioning to 64-bit mode
  • +
  • Far call to a different privilege level (inter-privilege level call), transitioning to 64-bit mode
+

Note that a CALL instruction cannot be used to cause a task switch in compatibility mode, since task switches are not supported in IA-32e mode.

+

In compatibility mode, the processor always uses the segment selector part of the far address to access the corresponding descriptor in the GDT or LDT. The descriptor type (code segment, call gate) and access rights determine the type of call operation to be performed.

+

If the selected descriptor is for a code segment, a far call to a code segment at the same privilege level is performed. (If the selected code segment is at a different privilege level and the code segment is non-conforming, a general-protection exception is generated.) A far call to the same privilege level in compatibility mode is very similar to one carried out in protected mode. The target operand specifies an absolute far address either directly with a pointer (ptr16:16 or ptr16:32) or indirectly with a memory location (m16:16 or m16:32). The operand-size attribute determines the size of the offset (16 or 32 bits) in the far address. The new code segment selector and its descriptor are loaded into the CS register and the offset from the instruction is loaded into the EIP register. The difference is that 64-bit mode may be entered. This is specified by the L bit in the new code segment descriptor.

+

Note that a 64-bit call gate (described in the next paragraph) can also be used to perform a far call to a code segment at the same privilege level. However, using this mechanism requires that the target code segment descriptor have the L bit set, causing an entry to 64-bit mode.

+

When executing an inter-privilege-level far call, the code segment for the procedure being called must be accessed through a 64-bit call gate. The segment selector specified by the target operand identifies the call gate. The target operand can specify the call gate segment selector either directly with a pointer (ptr16:16 or ptr16:32) or indirectly with a memory location (m16:16 or m16:32). The processor obtains the segment selector for the new code segment and the new instruction pointer (offset) from the 16-byte call gate descriptor. (The offset from the target operand is ignored when a call gate is used.)

+

On inter-privilege-level calls, the processor switches to the stack for the privilege level of the called procedure. The segment selector for the new stack segment is set to NULL. The new stack pointer is specified in the TSS for the currently running task. The branch to the new code segment occurs after the stack switch. (Note that when using a call gate to perform a far call to a segment at the same privilege level, an implicit stack switch occurs as a result of entering 64-bit mode. The SS selector is unchanged, but stack segment accesses use a segment base of 0x0, the limit is ignored, and the default stack size is 64-bits. The full value of RSP is used for the offset, of which the upper 32-bits are undefined.) On the new stack, the processor pushes the segment selector and stack pointer for the calling procedure’s stack and the segment selector and instruction pointer for the calling procedure’s code segment. (Parameter copy is not supported in IA-32e mode.) Finally, the processor branches to the address of the procedure being called within the new code segment.

+

Near/(Far) Calls in 64-bit Mode. When the processor is operating in 64-bit mode, the CALL instruction can be used to perform the following types of far calls:

+
    +
  • Far call to the same privilege level, transitioning to compatibility mode
  • +
  • Far call to the same privilege level, remaining in 64-bit mode
  • +
  • Far call to a different privilege level (inter-privilege level call), remaining in 64-bit mode
+

Note that the CALL instruction cannot be used to cause a task switch in 64-bit mode, since task switches are not supported in IA-32e mode.

+

In 64-bit mode, the processor always uses the segment selector part of the far address to access the corresponding descriptor in the GDT or LDT. The descriptor type (code segment, call gate) and access rights determine the type of call operation to be performed.

+

If the selected descriptor is for a code segment, a far call to a code segment at the same privilege level is performed. (If the selected code segment is at a different privilege level and the code segment is non-conforming, a general-protection exception is generated.) A far call to the same privilege level in 64-bit mode is very similar to one carried out in compatibility mode. The target operand specifies an absolute far address indirectly with a memory location (m16:16, m16:32 or m16:64). The form of CALL with a direct specification of absolute far address is not defined in 64-bit mode. The operand-size attribute determines the size of the offset (16, 32, or 64 bits) in the far address. The new code segment selector and its descriptor are loaded into the CS register; the offset from the instruction is loaded into the EIP register. The new code segment may specify entry either into compatibility or 64-bit mode, based on the L bit value.

+

A 64-bit call gate (described in the next paragraph) can also be used to perform a far call to a code segment at the same privilege level. However, using this mechanism requires that the target code segment descriptor have the L bit set.

+

When executing an inter-privilege-level far call, the code segment for the procedure being called must be accessed through a 64-bit call gate. The segment selector specified by the target operand identifies the call gate. The target operand can only specify the call gate segment selector indirectly with a memory location (m16:16, m16:32 or m16:64). The processor obtains the segment selector for the new code segment and the new instruction pointer (offset) from the 16-byte call gate descriptor. (The offset from the target operand is ignored when a call gate is used.)

+

On inter-privilege-level calls, the processor switches to the stack for the privilege level of the called procedure. The segment selector for the new stack segment is set to NULL. The new stack pointer is specified in the TSS for the currently running task. The branch to the new code segment occurs after the stack switch.

+

Note that when using a call gate to perform a far call to a segment at the same privilege level, an implicit stack switch occurs as a result of entering 64-bit mode. The SS selector is unchanged, but stack segment accesses use a segment base of 0x0, the limit is ignored, and the default stack size is 64-bits. (The full value of RSP is used for the offset.) On the new stack, the processor pushes the segment selector and stack pointer for the calling procedure’s stack and the segment selector and instruction pointer for the calling procedure’s code segment. (Parameter copy is not supported in IA-32e mode.) Finally, the processor branches to the address of the procedure being called within the new code segment.

+

Refer to Chapter 6, “Procedure Calls, Interrupts, and Exceptions‚” and Chapter 17, “Control-flow Enforcement Technology (CET)‚” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for CET details.

+

Instruction ordering. Instructions following a far call may be fetched from memory before earlier instructions complete execution, but they will not execute (even speculatively) until all instructions prior to the far call have completed execution (the later instructions may execute before data stored by the earlier instructions have become globally visible).

+

Instructions sequentially following a near indirect CALL instruction (i.e., those not at the target) may be executed speculatively. If software needs to prevent this (e.g., in order to prevent a speculative execution side channel), then an LFENCE instruction opcode can be placed after the near indirect CALL in order to block speculative execution.
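A minimal sketch of that guidance, assuming GCC or Clang, <immintrin.h>, and a hypothetical handler pointer (real deployments often prefer retpolines or other mitigations): a call through a function pointer compiles to a near indirect CALL, and issuing LFENCE immediately after the call site keeps the instructions that sequentially follow it from executing speculatively.

#include <immintrin.h>

/* 'handler' is a hypothetical function pointer used only for illustration. */
void dispatch(void (*handler)(void))
{
    handler();       /* compiles to a near indirect CALL (FF /2) */
    _mm_lfence();    /* LFENCE after the call site blocks speculative execution past it */
}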

+

Operation + ¶ +

+
IF near call
+    THEN IF near relative call
+        THEN
+            IF OperandSize = 64
+                THEN
+                    tempDEST := SignExtend(DEST); (* DEST is rel32 *)
+                    tempRIP := RIP + tempDEST;
+                    IF stack not large enough for a 8-byte return address
+                        THEN #SS(0); FI;
+                    Push(RIP);
+                    IF ShadowStackEnabled(CPL) AND DEST != 0
+                        ShadowStackPush8B(RIP);
+                    FI;
+                    RIP := tempRIP;
+            FI;
+            IF OperandSize = 32
+                THEN
+                    tempEIP := EIP + DEST; (* DEST is rel32 *)
+                    IF tempEIP is not within code segment limit THEN #GP(0); FI;
+                    IF stack not large enough for a 4-byte return address
+                        THEN #SS(0); FI;
+                    Push(EIP);
+                    IF ShadowStackEnabled(CPL) AND DEST != 0
+                        ShadowStackPush4B(EIP);
+                    FI;
+                    EIP := tempEIP;
+            FI;
+            IF OperandSize = 16
+                THEN
+                    tempEIP := (EIP + DEST) AND 0000FFFFH; (* DEST is rel16 *)
+                    IF tempEIP is not within code segment limit THEN #GP(0); FI;
+                    IF stack not large enough for a 2-byte return address
+                        THEN #SS(0); FI;
+                    Push(IP);
+                    IF ShadowStackEnabled(CPL) AND DEST != 0
+                        (* IP is zero extended and pushed as a 32 bit value on shadow stack *)
+                        ShadowStackPush4B(IP);
+                    FI;
+                    EIP := tempEIP;
+            FI;
+        ELSE (* Near absolute call *)
+            IF OperandSize = 64
+                THEN
+                    tempRIP := DEST; (* DEST is r/m64 *)
+                    IF stack not large enough for a 8-byte return address
+                        THEN #SS(0); FI;
+                    Push(RIP);
+                    IF ShadowStackEnabled(CPL)
+                        ShadowStackPush8B(RIP);
+                    FI;
+                    RIP := tempRIP;
+            FI;
+            IF OperandSize = 32
+                THEN
+                    tempEIP := DEST; (* DEST is r/m32 *)
+                    IF tempEIP is not within code segment limit THEN #GP(0); FI;
+                    IF stack not large enough for a 4-byte return address
+                        THEN #SS(0); FI;
+                    Push(EIP);
+                    IF ShadowStackEnabled(CPL)
+                        ShadowStackPush4B(EIP);
+                    FI;
+                    EIP := tempEIP;
+            FI;
+            IF OperandSize = 16
+                THEN
+                    tempEIP := DEST AND 0000FFFFH; (* DEST is r/m16 *)
+                    IF tempEIP is not within code segment limit THEN #GP(0); FI;
+                    IF stack not large enough for a 2-byte return address
+                        THEN #SS(0); FI;
+                    Push(IP);
+                    IF ShadowStackEnabled(CPL)
+                        (* IP is zero extended and pushed as a 32 bit value on shadow stack *)
+                        ShadowStackPush4B(IP);
+                    FI;
+                    EIP := tempEIP;
+            FI;
+    FI; (* rel/abs *)
+    IF (Call near indirect, absolute indirect)
+        IF EndbranchEnabledAndNotSuppressed(CPL)
+            IF CPL = 3
+                THEN
+                    IF ( no 3EH prefix OR IA32_U_CET.NO_TRACK_EN == 0 )
+                        THEN
+                            IA32_U_CET.TRACKER = WAIT_FOR_ENDBRANCH
+                    FI;
+                ELSE
+                    IF ( no 3EH prefix OR IA32_S_CET.NO_TRACK_EN == 0 )
+                        THEN
+                            IA32_S_CET.TRACKER = WAIT_FOR_ENDBRANCH
+                    FI;
+            FI;
+        FI;
+    FI;
+FI; near
+IF far call and (PE = 0 or (PE = 1 and VM = 1)) (* Real-address or virtual-8086 mode *)
+    THEN
+        IF OperandSize = 32
+            THEN
+                IF stack not large enough for a 6-byte return address
+                    THEN #SS(0); FI;
+                IF DEST[31:16] is not zero THEN #GP(0); FI;
+                Push(CS); (* Padded with 16 high-order bits *)
+                Push(EIP);
+                CS := DEST[47:32]; (* DEST is ptr16:32 or [m16:32] *)
+                EIP := DEST[31:0]; (* DEST is ptr16:32 or [m16:32] *)
+            ELSE (* OperandSize = 16 *)
+                IF stack not large enough for a 4-byte return address
+                    THEN #SS(0); FI;
+                Push(CS);
+                Push(IP);
+                CS := DEST[31:16]; (* DEST is ptr16:16 or [m16:16] *)
+                EIP := DEST[15:0]; (* DEST is ptr16:16 or [m16:16]; clear upper 16 bits *)
+        FI;
+FI;
+IF far call and (PE = 1 and VM = 0) (* Protected mode or IA-32e Mode, not virtual-8086 mode*)
+    THEN
+        IF segment selector in target operand NULL
+            THEN #GP(0); FI;
+        IF segment selector index not within descriptor table limits
+            THEN #GP(new code segment selector); FI;
+        Read type and access rights of selected segment descriptor;
+        IF IA32_EFER.LMA = 0
+            THEN
+                IF segment type is not a conforming or nonconforming code segment, call
+                gate, task gate, or TSS
+                    THEN #GP(segment selector); FI;
+            ELSE
+                IF segment type is not a conforming or nonconforming code segment or
+                64-bit call gate,
+                    THEN #GP(segment selector); FI;
+        FI;
+        Depending on type and access rights:
+            GO TO CONFORMING-CODE-SEGMENT;
+            GO TO NONCONFORMING-CODE-SEGMENT;
+            GO TO CALL-GATE;
+            GO TO TASK-GATE;
+            GO TO TASK-STATE-SEGMENT;
+FI;
+CONFORMING-CODE-SEGMENT:
+    IF L bit = 1 and D bit = 1 and IA32_EFER.LMA = 1
+        THEN #GP(new code segment selector); FI;
+    IF DPL > CPL
+        THEN #GP(new code segment selector); FI;
+    IF segment not present
+        THEN #NP(new code segment selector); FI;
+    IF stack not large enough for return address
+        THEN #SS(0); FI;
+    tempEIP := DEST(Offset);
+    IF target mode = Compatibility mode
+        THEN tempEIP := tempEIP AND 00000000_FFFFFFFFH; FI;
+    IF OperandSize = 16
+        THEN
+            tempEIP := tempEIP AND 0000FFFFH; FI; (* Clear upper 16 bits *)
+    IF (IA32_EFER.LMA = 0 or target mode = Compatibility mode) and (tempEIP outside new code segment limit)
+        THEN #GP(0); FI;
+    IF tempEIP is non-canonical
+        THEN #GP(0); FI;
+    IF ShadowStackEnabled(CPL)
+        IF OperandSize = 32
+            THEN
+                tempPushLIP = CSBASE + EIP;
+            ELSE
+                IF OperandSize = 16
+                    THEN
+                        tempPushLIP = CSBASE + IP;
+                    ELSE (* OperandSize = 64 *)
+                        tempPushLIP = RIP;
+                FI;
+        FI;
+        tempPushCS = CS;
+    FI;
+    IF OperandSize = 32
+        THEN
+            Push(CS); (* Padded with 16 high-order bits *)
+            Push(EIP);
+            CS := DEST(CodeSegmentSelector);
+            (* Segment descriptor information also loaded *)
+            CS(RPL) := CPL;
+            EIP := tempEIP;
+        ELSE
+            IF OperandSize = 16
+                THEN
+                    Push(CS);
+                    Push(IP);
+                    CS := DEST(CodeSegmentSelector);
+                    (* Segment descriptor information also loaded *)
+                    CS(RPL) := CPL;
+                    EIP := tempEIP;
+                ELSE (* OperandSize = 64 *)
+                    Push(CS); (* Padded with 48 high-order bits *)
+                    Push(RIP);
+                    CS := DEST(CodeSegmentSelector);
+                    (* Segment descriptor information also loaded *)
+                    CS(RPL) := CPL;
+                    RIP := tempEIP;
+            FI;
+    FI;
+    IF ShadowStackEnabled(CPL)
+        IF (IA32_EFER.LMA and DEST(CodeSegmentSelector).L) = 0
+            (* If target is legacy or compatibility mode then the SSP must be in low 4GB *)
+            IF (SSP & 0xFFFFFFFF00000000 != 0)
+                THEN #GP(0); FI;
+        FI;
+        (* align to 8 byte boundary if not already aligned *)
+        tempSSP = SSP;
+        Shadow_stack_store 4 bytes of 0 to (SSP – 4)
+        SSP = SSP & 0xFFFFFFFFFFFFFFF8H
+        ShadowStackPush8B(tempPushCS); (* Padded with 48 high-order bits of 0 *)
+        ShadowStackPush8B(tempPushLIP); (* Padded with 32 high-order bits of 0 for 32 bit LIP*)
+        ShadowStackPush8B(tempSSP);
+    FI;
+    IF EndbranchEnabled(CPL)
+        IF CPL = 3
+            THEN
+                IA32_U_CET.TRACKER = WAIT_FOR_ENDBRANCH
+                IA32_U_CET.SUPPRESS = 0
+            ELSE
+                IA32_S_CET.TRACKER = WAIT_FOR_ENDBRANCH
+                IA32_S_CET.SUPPRESS = 0
+        FI;
+    FI;
+END;
+NONCONFORMING-CODE-SEGMENT:
+    IF L-Bit = 1 and D-BIT = 1 and IA32_EFER.LMA = 1
+        THEN #GP(new code segment selector); FI;
+    IF (RPL > CPL) or (DPL ≠ CPL)
+        THEN #GP(new code segment selector); FI;
+    IF segment not present
+        THEN #NP(new code segment selector); FI;
+    IF stack not large enough for return address
+        THEN #SS(0); FI;
+    tempEIP := DEST(Offset);
+    IF target mode = Compatibility mode
+        THEN tempEIP := tempEIP AND 00000000_FFFFFFFFH; FI;
+    IF OperandSize = 16
+        THEN tempEIP := tempEIP AND 0000FFFFH; FI; (* Clear upper 16 bits *)
+    IF (IA32_EFER.LMA = 0 or target mode = Compatibility mode) and (tempEIP outside new code segment limit)
+        THEN #GP(0); FI;
+    IF tempEIP is non-canonical
+        THEN #GP(0); FI;
+    IF ShadowStackEnabled(CPL)
+        IF IA32_EFER.LMA & CS.L
+            tempPushLIP = RIP
+        ELSE
+            tempPushLIP = CSBASE + EIP;
+        FI;
+        tempPushCS = CS;
+    FI;
+    IF OperandSize = 32
+        THEN
+            Push(CS); (* Padded with 16 high-order bits *)
+            Push(EIP);
+            CS := DEST(CodeSegmentSelector);
+            (* Segment descriptor information also loaded *)
+            CS(RPL) := CPL;
+            EIP := tempEIP;
+        ELSE
+            IF OperandSize = 16
+                THEN
+                    Push(CS);
+                    Push(IP);
+                    CS := DEST(CodeSegmentSelector);
+                    (* Segment descriptor information also loaded *)
+                    CS(RPL) := CPL;
+                    EIP := tempEIP;
+                ELSE (* OperandSize = 64 *)
+                    Push(CS); (* Padded with 48 high-order bits *)
+                    Push(RIP);
+                    CS := DEST(CodeSegmentSelector);
+                    (* Segment descriptor information also loaded *)
+                    CS(RPL) := CPL;
+                    RIP := tempEIP;
+            FI;
+    FI;
+    IF ShadowStackEnabled(CPL)
+        IF (IA32_EFER.LMA and DEST(CodeSegmentSelector).L) = 0
+            (* If target is legacy or compatibility mode then the SSP must be in low 4GB *)
+            IF (SSP & 0xFFFFFFFF00000000 != 0)
+                THEN #GP(0); FI;
+        FI;
+    (* align to 8 byte boundary if not already aligned *)
+    tempSSP = SSP;
+    Shadow_stack_store 4 bytes of 0 to (SSP – 4)
+    SSP = SSP & 0xFFFFFFFFFFFFFFF8H
+    ShadowStackPush8B(tempPushCS); (* Padded with 48 high-order 0 bits *)
+    ShadowStackPush8B(tempPushLIP); (* Padded 32 high-order bits of 0 for 32 bit LIP*)
+    ShadowStackPush8B(tempSSP);
+    FI;
+    IF EndbranchEnabled(CPL)
+        IF CPL = 3
+            THEN
+                IA32_U_CET.TRACKER = WAIT_FOR_ENDBRANCH
+                IA32_U_CET.SUPPRESS = 0
+            ELSE
+                IA32_S_CET.TRACKER = WAIT_FOR_ENDBRANCH
+                IA32_S_CET.SUPPRESS = 0
+        FI;
+    FI;
+END;
+CALL-GATE:
+    IF call gate (DPL < CPL) or (RPL > DPL)
+        THEN #GP(call-gate selector); FI;
+    IF call gate not present
+        THEN #NP(call-gate selector); FI;
+    IF call-gate code-segment selector is NULL
+        THEN #GP(0); FI;
+    IF call-gate code-segment selector index is outside descriptor table limits
+        THEN #GP(call-gate code-segment selector); FI;
+    Read call-gate code-segment descriptor;
+    IF call-gate code-segment descriptor does not indicate a code segment
+    or call-gate code-segment descriptor DPL > CPL
+        THEN #GP(call-gate code-segment selector); FI;
+    IF IA32_EFER.LMA = 1 AND (call-gate code-segment descriptor is
+    not a 64-bit code segment or call-gate code-segment descriptor has both L-bit and D-bit set)
+        THEN #GP(call-gate code-segment selector); FI;
+    IF call-gate code segment not present
+        THEN #NP(call-gate code-segment selector); FI;
+    IF call-gate code segment is non-conforming and DPL < CPL
+        THEN go to MORE-PRIVILEGE;
+        ELSE go to SAME-PRIVILEGE;
+    FI;
+END;
+MORE-PRIVILEGE:
+    IF current TSS is 32-bit
+        THEN
+            TSSstackAddress := (new code-segment DPL ∗ 8) + 4;
+            IF (TSSstackAddress + 5) > current TSS limit
+                THEN #TS(current TSS selector); FI;
+            NewSS := 2 bytes loaded from (TSS base + TSSstackAddress + 4);
+            NewESP := 4 bytes loaded from (TSS base + TSSstackAddress);
+        ELSE
+            IF current TSS is 16-bit
+                THEN
+                    TSSstackAddress := (new code-segment DPL ∗ 4) + 2
+                    IF (TSSstackAddress + 3) > current TSS limit
+                        THEN #TS(current TSS selector); FI;
+                    NewSS := 2 bytes loaded from (TSS base + TSSstackAddress + 2);
+                    NewESP := 2 bytes loaded from (TSS base + TSSstackAddress);
+                ELSE (* current TSS is 64-bit *)
+                    TSSstackAddress := (new code-segment DPL ∗ 8) + 4;
+                    IF (TSSstackAddress + 7) > current TSS limit
+                        THEN #TS(current TSS selector); FI;
+                    NewSS := new code-segment DPL; (* NULL selector with RPL = new CPL *)
+                    NewRSP := 8 bytes loaded from (current TSS base + TSSstackAddress);
+            FI;
+    FI;
+    IF IA32_EFER.LMA = 0 and NewSS is NULL
+        THEN #TS(NewSS); FI;
+    Read new stack-segment descriptor;
+    IF IA32_EFER.LMA = 0 and (NewSS RPL ≠ new code-segment DPL
+    or new stack-segment DPL ≠ new code-segment DPL or new stack segment is not a
+    writable data segment)
+        THEN #TS(NewSS); FI
+    IF IA32_EFER.LMA = 0 and new stack segment not present
+        THEN #SS(NewSS); FI;
+    IF CallGateSize = 32
+        THEN
+            IF new stack does not have room for parameters plus 16 bytes
+                THEN #SS(NewSS); FI;
+            IF CallGate(InstructionPointer) not within new code-segment limit
+                THEN #GP(0); FI;
+            SS := newSS; (* Segment descriptor information also loaded *)
+            ESP := newESP;
+            CS:EIP := CallGate(CS:InstructionPointer);
+            (* Segment descriptor information also loaded *)
+            Push(oldSS:oldESP); (* From calling procedure *)
+            temp := parameter count from call gate, masked to 5 bits;
+            Push(parameters from calling procedure’s stack, temp)
+            Push(oldCS:oldEIP); (* Return address to calling procedure *)
+        ELSE
+            IF CallGateSize = 16
+                THEN
+                    IF new stack does not have room for parameters plus 8 bytes
+                        THEN #SS(NewSS); FI;
+                    IF (CallGate(InstructionPointer) AND FFFFH) not in new code-segment limit
+                        THEN #GP(0); FI;
+                    SS := newSS; (* Segment descriptor information also loaded *)
+                    ESP := newESP;
+                    CS:IP := CallGate(CS:InstructionPointer);
+                    (* Segment descriptor information also loaded *)
+                    Push(oldSS:oldESP); (* From calling procedure *)
+                    temp := parameter count from call gate, masked to 5 bits;
+                    Push(parameters from calling procedure’s stack, temp)
+                    Push(oldCS:oldEIP); (* Return address to calling procedure *)
+                ELSE (* CallGateSize = 64 *)
+                    IF pushing 32 bytes on the stack would use a non-canonical address
+                        THEN #SS(NewSS); FI;
+                    IF (CallGate(InstructionPointer) is non-canonical)
+                        THEN #GP(0); FI;
+                    SS := NewSS; (* NewSS is NULL *)
+                    RSP := NewESP;
+                    CS:IP := CallGate(CS:InstructionPointer);
+                    (* Segment descriptor information also loaded *)
+                    Push(oldSS:oldESP); (* From calling procedure *)
+                    Push(oldCS:oldEIP); (* Return address to calling procedure *)
+            FI;
+    FI;
+    IF ShadowStackEnabled(CPL) AND CPL = 3
+        THEN
+            IF IA32_EFER.LMA = 0
+                THEN IA32_PL3_SSP := SSP;
+                ELSE (* adjust so bits 63:N get the value of bit N–1, where N is the CPU’s maximum linear-address width *)
+                    IA32_PL3_SSP := LA_adjust(SSP);
+            FI;
+    FI;
+    CPL := CodeSegment(DPL)
+    CS(RPL) := CPL
+    IF ShadowStackEnabled(CPL)
+        oldSSP := SSP
+        SSP := IA32_PLi_SSP; (* where i is the CPL *)
+        IF SSP & 0x07 != 0 (* if SSP not aligned to 8 bytes then #GP *)
+            THEN #GP(0); FI;
+        (* Token and CS:LIP:oldSSP pushed on shadow stack must be contained in a naturally aligned 32-byte region*)
+        IF (SSP & ~0x1F) != ((SSP – 24) & ~0x1F)
+            #GP(0); FI;
+        IF ((IA32_EFER.LMA and CS.L) = 0 AND SSP[63:32] != 0)
+            THEN #GP(0); FI;
+        expected_token_value = SSP (* busy bit - bit position 0 - must be clear *)
+        new_token_value = SSP | BUSY_BIT (* Set the busy bit *)
+        IF shadow_stack_lock_cmpxchg8b(SSP, new_token_value, expected_token_value) != expected_token_value
+            THEN #GP(0); FI;
+        IF oldSS.DPL != 3
+            ShadowStackPush8B(oldCS); (* Padded with 48 high-order bits of 0 *)
+            ShadowStackPush8B(oldCSBASE+oldRIP); (* Padded with 32 high-order bits of 0 for 32 bit LIP*)
+            ShadowStackPush8B(oldSSP);
+        FI;
+    FI;
+    IF EndbranchEnabled (CPL)
+        IA32_S_CET.TRACKER = WAIT_FOR_ENDBRANCH
+        IA32_S_CET.SUPPRESS = 0
+    FI;
+END;
+SAME-PRIVILEGE:
+    IF CallGateSize = 32
+        THEN
+            IF stack does not have room for 8 bytes
+                THEN #SS(0); FI;
+            IF CallGate(InstructionPointer) not within code segment limit
+                THEN #GP(0); FI;
+            CS:EIP := CallGate(CS:EIP) (* Segment descriptor information also loaded *)
+            Push(oldCS:oldEIP); (* Return address to calling procedure *)
+        ELSE
+            IF CallGateSize = 16
+                THEN
+                    IF stack does not have room for 4 bytes
+                        THEN #SS(0); FI;
+                    IF CallGate(InstructionPointer) not within code segment limit
+                        THEN #GP(0); FI;
+                    CS:IP := CallGate(CS:instruction pointer);
+                    (* Segment descriptor information also loaded *)
+                    Push(oldCS:oldIP); (* Return address to calling procedure *)
+                ELSE (* CallGateSize = 64 *)
+                    IF pushing 16 bytes on the stack touches non-canonical addresses
+                        THEN #SS(0); FI;
+                    IF RIP non-canonical
+                        THEN #GP(0); FI;
+                    CS:IP := CallGate(CS:instruction pointer);
+                    (* Segment descriptor information also loaded *)
+                    Push(oldCS:oldIP); (* Return address to calling procedure *)
+            FI;
+    FI;
+    CS(RPL) := CPL
+    IF ShadowStackEnabled(CPL)
+        (* Align to next 8 byte boundary *)
+        tempSSP = SSP;
+        Shadow_stack_store 4 bytes of 0 to (SSP – 4)
+        SSP = SSP & 0xFFFFFFFFFFFFFFF8H;
+        (* push cs:lip:ssp on shadow stack *)
+        ShadowStackPush8B(oldCS); (* Padded with 48 high-order bits of 0 *)
+        ShadowStackPush8B(oldCSBASE + oldRIP); (* Padded with 32 high-order bits of 0 for 32 bit LIP*)
+        ShadowStackPush8B(tempSSP);
+    FI;
+    IF EndbranchEnabled (CPL)
+        IF CPL = 3
+            THEN
+                IA32_U_CET.TRACKER = WAIT_FOR_ENDBRANCH;
+                IA32_U_CET.SUPPRESS = 0
+            ELSE
+                IA32_S_CET.TRACKER = WAIT_FOR_ENDBRANCH;
+                IA32_S_CET.SUPPRESS = 0
+        FI;
+    FI;
+END;
+TASK-GATE:
+    IF task gate DPL < CPL or RPL
+        THEN #GP(task gate selector); FI;
+    IF task gate not present
+        THEN #NP(task gate selector); FI;
+    Read the TSS segment selector in the task-gate descriptor;
+    IF TSS segment selector local/global bit is set to local
+    or index not within GDT limits
+        THEN #GP(TSS selector); FI;
+    Access TSS descriptor in GDT;
+    IF descriptor is not a TSS segment
+        THEN #GP(TSS selector); FI;
+    IF TSS descriptor specifies that the TSS is busy
+        THEN #GP(TSS selector); FI;
+    IF TSS not present
+        THEN #NP(TSS selector); FI;
+    SWITCH-TASKS (with nesting) to TSS;
+    IF EIP not within code segment limit
+        THEN #GP(0); FI;
+END;
+TASK-STATE-SEGMENT:
+    IF TSS DPL < CPL or RPL
+    or TSS descriptor indicates TSS not available
+        THEN #GP(TSS selector); FI;
+    IF TSS is not present
+        THEN #NP(TSS selector); FI;
+    SWITCH-TASKS (with nesting) to TSS;
+    IF EIP not within code segment limit
+        THEN #GP(0); FI;
+END;
+
+

Flags Affected + ¶ +

+

All flags are affected if a task switch occurs; no flags are affected if a task switch does not occur.

+

Protected Mode Exceptions + ¶ +

#GP(0) If the target offset in destination operand is beyond the new code segment limit.
If the segment selector in the destination operand is NULL.
If the code segment selector in the gate is NULL.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
If target mode is compatibility mode and SSP is not in low 4GB.
If SSP in IA32_PLi_SSP (where i is the new CPL) is not 8 byte aligned.
If the token and the stack frame to be pushed on shadow stack are not contained in a naturally aligned 32-byte region of the shadow stack.
If “supervisor Shadow Stack” token on new shadow stack is marked busy.
If destination mode is 32-bit or compatibility mode, but SSP address in “supervisor shadow stack” token is beyond 4GB.
If SSP address in “supervisor shadow stack” token does not match SSP address in IA32_PLi_SSP (where i is the new CPL).
#GP(selector) If a code segment or gate or TSS selector index is outside descriptor table limits.
If the segment descriptor pointed to by the segment selector in the destination operand is not for a conforming-code segment, nonconforming-code segment, call gate, task gate, or task state segment.
If the DPL for a nonconforming-code segment is not equal to the CPL or the RPL for the segment’s segment selector is greater than the CPL.
If the DPL for a conforming-code segment is greater than the CPL.
If the DPL from a call-gate, task-gate, or TSS segment descriptor is less than the CPL or than the RPL of the call-gate, task-gate, or TSS’s segment selector.
If the segment descriptor for a segment selector from a call gate does not indicate it is a code segment.
If the segment selector from a call gate is beyond the descriptor table limits.
If the DPL for a code-segment obtained from a call gate is greater than the CPL.
If the segment selector for a TSS has its local/global bit set for local.
If a TSS segment descriptor specifies that the TSS is busy or not available.
#SS(0) If pushing the return address, parameters, or stack segment pointer onto the stack exceeds the bounds of the stack segment, when no stack switch occurs.
If a memory operand effective address is outside the SS segment limit.
#SS(selector) If pushing the return address, parameters, or stack segment pointer onto the stack exceeds the bounds of the stack segment, when a stack switch occurs.
If the SS register is being loaded as part of a stack switch and the segment pointed to is marked not present.
If stack segment does not have room for the return address, parameters, or stack segment pointer, when stack switch occurs.
#NP(selector)If a code segment, data segment, call gate, task gate, or TSS is not present.
#TS(selector)If the new stack segment selector and ESP are beyond the end of the TSS.
If the new stack segment selector is NULL.
If the RPL of the new stack segment selector in the TSS is not equal to the DPL of the code segment being accessed.
If DPL of the stack segment descriptor for the new stack segment is not equal to the DPL of the code segment descriptor.
If the new stack segment is not a writable data segment.
If segment-selector index for stack segment is outside descriptor table limits.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the target offset is beyond the code segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the target offset is beyond the code segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+ + + + + + +
#GP(selector)If a memory address accessed by the selector is in non-canonical space.
#GP(0)If the target offset in the destination operand is non-canonical.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory address is non-canonical.
If target offset in destination operand is non-canonical.
If the segment selector in the destination operand is NULL.
If the code segment selector in the 64-bit gate is NULL.
If target mode is compatibility mode and SSP is not in low 4GB.
If SSP in IA32_PLi_SSP (where i is the new CPL) is not 8 byte aligned.
If the token and the stack frame to be pushed on shadow stack are not contained in a naturally aligned 32-byte region of the shadow stack.
If “supervisor Shadow Stack” token on new shadow stack is marked busy.
If destination mode is 32-bit mode or compatibility mode, but SSP address in “supervisor shadow stack” token is beyond 4GB.
If SSP address in “supervisor shadow stack” token does not match SSP address in IA32_PLi_SSP (where i is the new CPL).
#GP(selector)If code segment or 64-bit call gate is outside descriptor table limits.
If code segment or 64-bit call gate overlaps non-canonical space.
If the segment descriptor pointed to by the segment selector in the destination operand is not for a conforming-code segment, nonconforming-code segment, or 64-bit call gate.
If the segment descriptor pointed to by the segment selector in the destination operand is a code segment and has both the D-bit and the L-bit set.
If the DPL for a nonconforming-code segment is not equal to the CPL, or the RPL for the segment’s segment selector is greater than the CPL.
If the DPL for a conforming-code segment is greater than the CPL.
If the DPL from a 64-bit call-gate is less than the CPL or than the RPL of the 64-bit call-gate.
If the upper type field of a 64-bit call gate is not 0x0.
If the segment selector from a 64-bit call gate is beyond the descriptor table limits.
If the DPL for a code-segment obtained from a 64-bit call gate is greater than the CPL.
If the code segment descriptor pointed to by the selector in the 64-bit gate doesn't have the L-bit set and the D-bit clear.
If the segment descriptor for a segment selector from the 64-bit call gate does not indicate it is a code segment.
#SS(0)If pushing the return offset or CS selector onto the stack exceeds the bounds of the stack segment when no stack switch occurs.
If a memory operand effective address is outside the SS segment limit.
If the stack address is in a non-canonical form.
#SS(selector)If pushing the old values of SS selector, stack pointer, EFLAGS, CS selector, offset, or error code onto the stack violates the canonical boundary when a stack switch occurs.
#NP(selector)If a code segment or 64-bit call gate is not present.
#TS(selector)If the load of the new RSP exceeds the limit of the TSS.
#UD(64-bit mode only) If a far call is direct to an absolute address in memory.
If the LOCK prefix is used.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
diff --git a/x86/capabilities.html b/x86/capabilities.html new file mode 100644 index 0000000..ced29d1 --- /dev/null +++ b/x86/capabilities.html @@ -0,0 +1,159 @@ + +GETSEC[CAPABILITIES] + — Report the SMX Capabilities

GETSEC[CAPABILITIES] + — Report the SMX Capabilities

+ + + + + + + + + +
OpcodeInstructionDescription
NP 0F 37 (EAX = 0)GETSEC[CAPABILITIES]Report the SMX capabilities. The capabilities index is input in EBX with the result returned in EAX.
+

Description + ¶ +

+

The GETSEC[CAPABILITIES] function returns a bit vector of supported GETSEC leaf functions. The CAPABILITIES leaf of GETSEC is selected with EAX set to 0 at entry. EBX is used as the selector for returning the bit vector field in EAX. GETSEC[CAPABILITIES] may be executed at all privilege levels, but the CR4.SMXE bit must be set or an undefined opcode exception (#UD) is returned.

+

With EBX = 0 upon execution of GETSEC[CAPABILITIES], EAX returns a bit vector representing the presence of an Intel® TXT-capable chipset and the availability of the first 30 GETSEC leaf functions. The format of the returned bit vector is provided in Table 7-3.

+

If bit 0 is set to 1, then an Intel® TXT-capable chipset has been sampled present by the processor. If bits in the range of 1-30 are set, then the corresponding GETSEC leaf function is available. If the bit value at a given bit index is 0, then the GETSEC leaf function corresponding to that index is unsupported and attempted execution results in a #UD.

+

Bit 31 of EAX indicates if further leaf indexes are supported. If the Extended Leafs bit 31 is set, then additional leaf functions are accessed by repeating GETSEC[CAPABILITIES] with EBX incremented by one. When the most significant bit of EAX is not set, then additional GETSEC leaf functions are not supported; indexing EBX to a higher value results in EAX returning zero.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Field | Bit position | Description
Chipset Present | 0 | Intel® TXT-capable chipset is present.
Undefined | 1 | Reserved
ENTERACCS | 2 | GETSEC[ENTERACCS] is available.
EXITAC | 3 | GETSEC[EXITAC] is available.
SENTER | 4 | GETSEC[SENTER] is available.
SEXIT | 5 | GETSEC[SEXIT] is available.
PARAMETERS | 6 | GETSEC[PARAMETERS] is available.
SMCTRL | 7 | GETSEC[SMCTRL] is available.
WAKEUP | 8 | GETSEC[WAKEUP] is available.
Undefined | 30:9 | Reserved
Extended Leafs | 31 | Reserved for extended information reporting of GETSEC capabilities.
+
Table 7-3. GETSEC Capability Result Encoding (EBX = 0)
+

Operation + ¶ +

+
IF (CR4.SMXE=0)
+    THEN #UD;
+ELSIF (in VMX non-root operation)
+    THEN VM Exit (reason=”GETSEC instruction”);
+IF (EBX=0) THEN
+        BitVector := 0;
+        IF (TXT chipset present)
+            BitVector[Chipset present] := 1;
+        IF (ENTERACCS Available)
+            THEN BitVector[ENTERACCS] := 1;
+        IF (EXITAC Available)
+            THEN BitVector[EXITAC] := 1;
+        IF (SENTER Available)
+            THEN BitVector[SENTER] := 1;
+        IF (SEXIT Available)
+            THEN BitVector[SEXIT] := 1;
+        IF (PARAMETERS Available)
+            THEN BitVector[PARAMETERS] := 1;
+        IF (SMCTRL Available)
+            THEN BitVector[SMCTRL] := 1;
+        IF (WAKEUP Available)
+            THEN BitVector[WAKEUP] := 1;
+        EAX := BitVector;
+ELSE
+    EAX := 0;
+END;
+
+
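As an illustration only (not part of this reference), the following C sketch shows one way system software might query the capability bit vector, assuming CR4.SMXE has already been set and a compiler with GCC-style inline assembly; the helper name getsec_capabilities is hypothetical.

#include <stdint.h>

/* Hypothetical helper: invoke GETSEC[CAPABILITIES] with EAX = 0 and the
 * capabilities index in EBX, returning the bit vector reported in EAX.
 * Assumes CR4.SMXE = 1; otherwise the instruction raises #UD. */
static inline uint32_t getsec_capabilities(uint32_t index)
{
    uint32_t caps;
    __asm__ __volatile__("getsec"
                         : "=a"(caps)
                         : "a"(0), "b"(index)
                         : "memory");
    return caps;
}

/* Usage sketch: bit 0 = TXT chipset present, bit 4 = SENTER available. */
/* uint32_t v = getsec_capabilities(0); int senter_ok = (v >> 4) & 1;   */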

Flags Affected + ¶ +

+

None.

+

Use of Prefixes + ¶ +

+

LOCK Causes #UD.

+

REP* Cause #UD (includes REPNE/REPNZ and REP/REPE/REPZ).

+

Operand size Causes #UD.

+

NP 66/F2/F3 prefixes are not allowed.

+

Segment overrides Ignored.

+

Address size Ignored.

+

REX Ignored.

+

Protected Mode Exceptions + ¶ +

+ + + +
#UDIf CR4.SMXE = 0.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDIf CR4.SMXE = 0.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDIf CR4.SMXE = 0.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UDIf CR4.SMXE = 0.
+

64-Bit Mode Exceptions + ¶ +

+ + + +
#UDIf CR4.SMXE = 0.
+

VM-exit Condition + ¶ +

+

Reason (GETSEC) If in VMX non-root operation.

diff --git a/x86/cbw.cwde.cdqe.html b/x86/cbw.cwde.cdqe.html new file mode 100644 index 0000000..99f406d --- /dev/null +++ b/x86/cbw.cwde.cdqe.html @@ -0,0 +1,82 @@ + +CBW/CWDE/CDQE + — Convert Byte to Word/Convert Word to Doubleword/Convert Doubleword toQuadword

CBW/CWDE/CDQE + — Convert Byte to Word/Convert Word to Doubleword/Convert Doubleword to Quadword

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-bit ModeCompat/Leg ModeDescription
98CBWZOValidValidAX := sign-extend of AL.
98CWDEZOValidValidEAX := sign-extend of AX.
REX.W + 98CDQEZOValidN.E.RAX := sign-extend of EAX.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Double the size of the source operand by means of sign extension. The CBW (convert byte to word) instruction copies the sign (bit 7) in the source operand into every bit in the AH register. The CWDE (convert word to doubleword) instruction copies the sign (bit 15) of the word in the AX register into the high 16 bits of the EAX register.

+

CBW and CWDE reference the same opcode. The CBW instruction is intended for use when the operand-size attribute is 16; CWDE is intended for use when the operand-size attribute is 32. Some assemblers may force the operand size. Others may treat these two mnemonics as synonyms (CBW/CWDE) and use the setting of the operand-size attribute to determine the size of values to be converted.

+

In 64-bit mode, the default operation size is the size of the destination register. Use of the REX.W prefix promotes this instruction (CDQE when promoted) to operate on 64-bit operands. In which case, CDQE copies the sign (bit 31) of the doubleword in the EAX register into the high 32 bits of RAX.
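For illustration only (not part of this reference), the effect of these instructions corresponds to ordinary signed widening conversions in C; the function names below are hypothetical.

#include <stdint.h>

/* CBW:  AX  := sign-extend(AL)  */
static inline int16_t cbw_effect(int8_t al)    { return (int16_t)al; }
/* CWDE: EAX := sign-extend(AX)  */
static inline int32_t cwde_effect(int16_t ax)  { return (int32_t)ax; }
/* CDQE: RAX := sign-extend(EAX) */
static inline int64_t cdqe_effect(int32_t eax) { return (int64_t)eax; }

/* Example: cbw_effect((int8_t)0x9C), i.e., -100, yields 0xFF9C (-100) in AX. */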

+

Operation + ¶ +

+
IF OperandSize = 16 (* Instruction = CBW *)
+    THEN
+        AX := SignExtend(AL);
+    ELSE IF (OperandSize = 32, Instruction = CWDE)
+        EAX := SignExtend(AX);
+    ELSE (* 64-Bit Mode, OperandSize = 64, Instruction = CDQE *)
+        RAX := SignExtend(EAX);
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Exceptions (All Operating Modes) + ¶ +

+

#UD If the LOCK prefix is used.

diff --git a/x86/clac.html b/x86/clac.html new file mode 100644 index 0000000..5daa849 --- /dev/null +++ b/x86/clac.html @@ -0,0 +1,101 @@ + +CLAC + — Clear AC Flag in EFLAGS Register

CLAC + — Clear AC Flag in EFLAGS Register

+ + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 01 CA CLACZOV/VSMAPClear the AC flag in the EFLAGS register.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Clears the AC flag bit in the EFLAGS register. This disables any alignment checking of user-mode data accesses. If the SMAP bit is set in the CR4 register, this disallows explicit supervisor-mode data accesses to user-mode pages.

+

This instruction's operation is the same in non-64-bit modes and 64-bit mode. Attempts to execute CLAC when CPL > 0 cause #UD.
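As a hedged illustration of typical use (not part of this reference), a SMAP-aware kernel might bracket an intentional access to user memory with STAC and CLAC; the sketch below assumes CPL 0, SMAP support, and GCC-style inline assembly, and the helper name is hypothetical.

#include <stdint.h>

/* Hypothetical CPL-0 helper: read one byte from a user-mode pointer while
 * SMAP is enabled. STAC sets EFLAGS.AC so the supervisor access is
 * permitted; CLAC clears it again immediately afterwards. */
static inline uint8_t read_user_byte(const uint8_t *user_ptr)
{
    uint8_t v;
    __asm__ __volatile__("stac" ::: "memory", "cc");
    v = *user_ptr;                 /* explicit supervisor access to a user page */
    __asm__ __volatile__("clac" ::: "memory", "cc");
    return v;
}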

+

Operation + ¶ +

+
EFLAGS.AC := 0;
+
+

Flags Affected + ¶ +

+

AC cleared. Other flags are unaffected.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + +
#UDIf the LOCK prefix is used.
If the CPL > 0.
If CPUID.(EAX=07H, ECX=0H):EBX.SMAP[bit 20] = 0.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + +
#UDIf the LOCK prefix is used.
If CPUID.(EAX=07H, ECX=0H):EBX.SMAP[bit 20] = 0.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe CLAC instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+ + + + + + + +
#UDIf the LOCK prefix is used.
If the CPL > 0.
If CPUID.(EAX=07H, ECX=0H):EBX.SMAP[bit 20] = 0.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + +
#UDIf the LOCK prefix is used.
If the CPL > 0.
If CPUID.(EAX=07H, ECX=0H):EBX.SMAP[bit 20] = 0.
diff --git a/x86/clc.html b/x86/clc.html new file mode 100644 index 0000000..5d4ddc3 --- /dev/null +++ b/x86/clc.html @@ -0,0 +1,57 @@ + +CLC + — Clear Carry Flag

CLC + — Clear Carry Flag

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-bit ModeCompat/Leg ModeDescription
F8CLCZOValidValidClear CF flag.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Clears the CF flag in the EFLAGS register. Operation is the same in all modes.

+

Operation + ¶ +

+
CF := 0;
+
+

Flags Affected + ¶ +

+

The CF flag is set to 0. The OF, ZF, SF, AF, and PF flags are unaffected.

+

Exceptions (All Operating Modes) + ¶ +

+

#UD If the LOCK prefix is used.

diff --git a/x86/cld.html b/x86/cld.html new file mode 100644 index 0000000..fcea47c --- /dev/null +++ b/x86/cld.html @@ -0,0 +1,57 @@ + +CLD + — Clear Direction Flag

CLD + — Clear Direction Flag

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-bit ModeCompat/Leg ModeDescription
FCCLDZOValidValidClear DF flag.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Clears the DF flag in the EFLAGS register. When the DF flag is set to 0, string operations increment the index registers (ESI and/or EDI). Operation is the same in all modes.
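As an illustrative sketch (not part of this reference), CLD is commonly executed before REP string instructions so that the index registers increment; the helper below is hypothetical and assumes GCC-style inline assembly.

#include <stddef.h>

/* Hypothetical forward copy built on REP MOVSB; CLD ensures DF = 0 so
 * RSI/RDI increment after each byte moved. */
static inline void copy_forward(void *dst, const void *src, size_t n)
{
    __asm__ __volatile__("cld\n\t"
                         "rep movsb"
                         : "+D"(dst), "+S"(src), "+c"(n)
                         :
                         : "memory", "cc");
}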

+

Operation + ¶ +

+
DF := 0;
+
+

Flags Affected + ¶ +

+

The DF flag is set to 0. The CF, OF, ZF, SF, AF, and PF flags are unaffected.

+

Exceptions (All Operating Modes) + ¶ +

+

#UD If the LOCK prefix is used.

diff --git a/x86/cldemote.html b/x86/cldemote.html new file mode 100644 index 0000000..d49f1cd --- /dev/null +++ b/x86/cldemote.html @@ -0,0 +1,96 @@ + +CLDEMOTE + — Cache Line Demote

CLDEMOTE + — Cache Line Demote

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 1C /0 CLDEMOTE m8AV/VCLDEMOTEHint to hardware to move the cache line containing m8 to a more distant level of the cache without writing back to memory.
+

Instruction Operand Encoding1 + ¶ +

+
+

1. The Mod field of the ModR/M byte cannot have value 11B.

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
AModRM:r/m (w)N/AN/AN/A
+

Description + ¶ +

+

Hints to hardware that the cache line that contains the linear address specified with the memory operand should be moved (“demoted”) from the cache(s) closest to the processor core to a level more distant from the processor core. This may accelerate subsequent accesses to the line by other cores in the same coherence domain, especially if the line was written by the core that demotes the line. Moving the line in such a manner is a performance optimization, i.e., it is a hint which does not modify architectural state. Hardware may choose which level in the cache hierarchy to retain the line (e.g., L3 in typical server designs). The source operand is a byte memory location.

+

The availability of the CLDEMOTE instruction is indicated by the presence of the CPUID feature flag CLDEMOTE (bit 25 of the ECX register in sub-leaf 07H, see “CPUID—CPU Identification”). On processors which do not support the CLDEMOTE instruction (including legacy hardware) the instruction will be treated as a NOP.

+

A CLDEMOTE instruction is ordered with respect to stores to the same cache line, but unordered with respect to other instructions including memory fences, CLDEMOTE, CLWB or CLFLUSHOPT instructions to a different cache line. Since CLDEMOTE will retire in order with respect to stores to the same cache line, software should ensure that after issuing CLDEMOTE the line is not accessed again immediately by the same core to avoid cache data movement penalties.

+

The effective memory type of the page containing the affected line determines the effect; cacheable types are likely to generate a data movement operation, while uncacheable types may cause the instruction to be ignored.

+

Speculative fetching can occur at any time and is not tied to instruction execution. The CLDEMOTE instruction is not ordered with respect to PREFETCHh instructions or any of the speculative fetching mechanisms. That is, data can be speculatively loaded into a cache line just before, during, or after the execution of a CLDEMOTE instruction that references the cache line.

+

Unlike CLFLUSH, CLFLUSHOPT, and CLWB instructions, CLDEMOTE is not guaranteed to write back modified data to memory.

+

The CLDEMOTE instruction may be ignored by hardware in certain cases and is not a guarantee.

+

The CLDEMOTE instruction can be used at all privilege levels. In certain processor implementations the CLDEMOTE instruction may set the A bit but not the D bit in the page tables.

+

If the line is not found in the cache, the instruction will be treated as a NOP.

+

In some implementations, the CLDEMOTE instruction may always cause a transactional abort with Transactional Synchronization Extensions (TSX). However, programmers must not rely on CLDEMOTE instruction to force a transactional abort.

+

Operation + ¶ +

+
Cache_Line_Demote(m8);
+
+

Flags Affected + ¶ +

+

None.

+

C/C++ Compiler Intrinsic Equivalent + ¶ +

+
CLDEMOTE void _cldemote(const void*);
+
+
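As a hedged usage sketch (not part of this reference), a producer might demote a freshly written line so that a consumer core in the same coherence domain can read it with lower latency; the example assumes a compiler that exposes the _cldemote intrinsic shown above (for example, via immintrin.h with CLDEMOTE support), and the function and variable names are hypothetical.

#include <immintrin.h>
#include <stdatomic.h>

/* Hypothetical producer: write a payload, demote its cache line toward a
 * more distant (shared) level, then publish a ready flag for the consumer. */
void publish(int *payload, atomic_int *ready)
{
    *payload = 42;                 /* store to the cache line                  */
    _cldemote(payload);            /* hint: move the line to a more distant level */
    atomic_store_explicit(ready, 1, memory_order_release);
}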

Protected Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in real address mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
diff --git a/x86/clflush.html b/x86/clflush.html new file mode 100644 index 0000000..40657d9 --- /dev/null +++ b/x86/clflush.html @@ -0,0 +1,121 @@ + +CLFLUSH + — Flush Cache Line

CLFLUSH + — Flush Cache Line

+ + + + + + + + + + + + + +
Opcode / InstructionOp/En64-bit ModeCompat/Leg ModeDescription
NP 0F AE /7 CLFLUSH m8MValidValidFlushes cache line containing m8.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (w)N/AN/AN/A
+

Description + ¶ +

+

Invalidates from every level of the cache hierarchy in the cache coherence domain the cache line that contains the linear address specified with the memory operand. If that cache line contains modified data at any level of the cache hierarchy, that data is written back to memory. The source operand is a byte memory location.

+

The availability of CLFLUSH is indicated by the presence of the CPUID feature flag CLFSH (CPUID.01H:EDX[bit 19]). The aligned cache line size affected is also indicated with the CPUID instruction (bits 8 through 15 of the EBX register when the initial value in the EAX register is 1).

+

The memory attribute of the page containing the affected line has no effect on the behavior of this instruction. It should be noted that processors are free to speculatively fetch and cache data from system memory regions assigned a memory-type allowing for speculative reads (such as, the WB, WC, and WT memory types). PREFETCHh instructions can be used to provide the processor with hints for this speculative behavior. Because this speculative fetching can occur at any time and is not tied to instruction execution, the CLFLUSH instruction is not ordered with respect to PREFETCHh instructions or any of the speculative fetching mechanisms (that is, data can be speculatively loaded into a cache line just before, during, or after the execution of a CLFLUSH instruction that references the cache line).

+

Executions of the CLFLUSH instruction are ordered with respect to each other and with respect to writes, locked read-modify-write instructions, and fence instructions.1 They are not ordered with respect to executions of CLFLUSHOPT and CLWB. Software can use the SFENCE instruction to order an execution of CLFLUSH relative to one of those operations.

+
+

1. Earlier versions of this manual specified that executions of the CLFLUSH instruction were ordered only by the MFENCE instruction. All processors implementing the CLFLUSH instruction also order it relative to the other operations enumerated above.

+

The CLFLUSH instruction can be used at all privilege levels and is subject to all permission checking and faults associated with a byte load (and in addition, a CLFLUSH instruction is allowed to flush a linear address in an execute-only segment). Like a load, the CLFLUSH instruction sets the A bit but not the D bit in the page tables.

+

In some implementations, the CLFLUSH instruction may always cause transactional abort with Transactional Synchronization Extensions (TSX). The CLFLUSH instruction is not expected to be commonly used inside typical transactional regions. However, programmers must not rely on CLFLUSH instruction to force a transactional abort, since whether they cause transactional abort is implementation dependent.

+

The CLFLUSH instruction was introduced with the SSE2 extensions; however, because it has its own CPUID feature flag, it can be implemented in IA-32 processors that do not include the SSE2 extensions. Also, detecting the presence of the SSE2 extensions with the CPUID instruction does not guarantee that the CLFLUSH instruction is implemented in the processor.

+

CLFLUSH operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
Flush_Cache_Line(SRC);
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
CLFLUSH void _mm_clflush(void const *p)
+
+
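As an illustration only (not part of this reference), the intrinsic above can be used to store a value and then evict its cache line; the function name is hypothetical and a compiler providing _mm_clflush (for example, via emmintrin.h) is assumed.

#include <emmintrin.h>   /* _mm_clflush */

/* Store a value, then flush the containing line from every level of the
 * cache hierarchy. Per the ordering rules above, CLFLUSH is ordered with
 * respect to writes, so no fence is needed between the store and the
 * flush in this simple case. */
void write_and_flush(int *p, int v)
{
    *p = v;
    _mm_clflush(p);
}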

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + +
#GP(0)For an illegal memory operand effective address in the CS, DS, ES, FS or GS segments.
#SS(0)For an illegal address in the SS segment.
#PF(fault-code)For a page fault.
#UDIf CPUID.01H:EDX.CLFSH[bit 19] = 0.
If the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + +
#GPIf any part of the operand lies outside the effective address space from 0 to FFFFH.
#UDIf CPUID.01H:EDX.CLFSH[bit 19] = 0.
If the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in real address mode.

+ + + +
#PF(fault-code)For a page fault.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)For a page fault.
#UDIf CPUID.01H:EDX.CLFSH[bit 19] = 0.
If the LOCK prefix is used.
diff --git a/x86/clflushopt.html b/x86/clflushopt.html new file mode 100644 index 0000000..8a3ddaf --- /dev/null +++ b/x86/clflushopt.html @@ -0,0 +1,124 @@ + +CLFLUSHOPT + — Flush Cache Line Optimized

CLFLUSHOPT + — Flush Cache Line Optimized

+ + + + + + + + + + + + + +
Opcode / InstructionOp/En64-bit ModeCompat/Leg ModeDescription
NFx 66 0F AE /7 CLFLUSHOPT m8MValidValidFlushes cache line containing m8.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (w)N/AN/AN/A
+

Description + ¶ +

+

Invalidates from every level of the cache hierarchy in the cache coherence domain the cache line that contains the linear address specified with the memory operand. If that cache line contains modified data at any level of the cache hierarchy, that data is written back to memory. The source operand is a byte memory location.

+

The availability of CLFLUSHOPT is indicated by the presence of the CPUID feature flag CLFLUSHOPT (CPUID.(EAX=07H,ECX=0H):EBX[bit 23]). The aligned cache line size affected is also indicated with the CPUID instruction (bits 8 through 15 of the EBX register when the initial value in the EAX register is 1).

+

The memory attribute of the page containing the affected line has no effect on the behavior of this instruction. It should be noted that processors are free to speculatively fetch and cache data from system memory regions assigned a memory-type allowing for speculative reads (such as, the WB, WC, and WT memory types). PREFETCHh instructions can be used to provide the processor with hints for this speculative behavior. Because this speculative fetching can occur at any time and is not tied to instruction execution, the CLFLUSHOPT instruction is not ordered with respect to PREFETCHh instructions or any of the speculative fetching mechanisms (that is, data can be speculatively loaded into a cache line just before, during, or after the execution of a CLFLUSHOPT instruction that references the cache line).

+

Executions of the CLFLUSHOPT instruction are ordered with respect to fence instructions and to locked read-modify-write instructions; they are also ordered with respect to older writes to the cache line being invalidated. They are not ordered with respect to other executions of CLFLUSHOPT, to executions of CLFLUSH and CLWB, or to younger writes to the cache line being invalidated. Software can use the SFENCE instruction to order an execution of CLFLUSHOPT relative to one of those operations.

+

The CLFLUSHOPT instruction can be used at all privilege levels and is subject to all permission checking and faults associated with a byte load (and in addition, a CLFLUSHOPT instruction is allowed to flush a linear address in an execute-only segment). Like a load, the CLFLUSHOPT instruction sets the A bit but not the D bit in the page tables.

+

In some implementations, the CLFLUSHOPT instruction may always cause transactional abort with Transactional Synchronization Extensions (TSX). The CLFLUSHOPT instruction is not expected to be commonly used inside typical transactional regions. However, programmers must not rely on CLFLUSHOPT instruction to force a transactional abort, since whether they cause transactional abort is implementation dependent.

+

CLFLUSHOPT operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
Flush_Cache_Line_Optimized(SRC);
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
CLFLUSHOPT void _mm_clflushopt(void const *p)
+
+
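As a hedged sketch (not part of this reference), the weaker ordering of CLFLUSHOPT makes it suitable for flushing a range of lines followed by a single SFENCE; the example assumes a 64-byte line size (real code should query CPUID) and a compiler exposing _mm_clflushopt and _mm_sfence, and the function name is hypothetical.

#include <immintrin.h>   /* _mm_clflushopt, _mm_sfence */
#include <stddef.h>
#include <stdint.h>

/* Flush every 64-byte line covering the buffer with the weakly ordered
 * CLFLUSHOPT, then fence once so all flushes are ordered before whatever
 * follows. */
void flush_buffer_opt(const void *buf, size_t len)
{
    uintptr_t p = (uintptr_t)buf & ~(uintptr_t)63;
    for (; p < (uintptr_t)buf + len; p += 64)
        _mm_clflushopt((const void *)p);
    _mm_sfence();
}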

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#GP(0)For an illegal memory operand effective address in the CS, DS, ES, FS or GS segments.
#SS(0)For an illegal address in the SS segment.
#PF(fault-code)For a page fault.
#UDIf CPUID.(EAX=07H,ECX=0H):EBX.CLFLUSHOPT[bit 23] = 0.
If the LOCK prefix is used.
If an instruction prefix F2H or F3H is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + +
#GPIf any part of the operand lies outside the effective address space from 0 to FFFFH.
#UDIf CPUID.(EAX=07H,ECX=0H):EBX.CLFLUSHOPT[bit 23] = 0.
If the LOCK prefix is used.
If an instruction prefix F2H or F3H is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in real address mode.

+ + + +
#PF(fault-code)For a page fault.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)For a page fault.
#UDIf CPUID.(EAX=07H,ECX=0H):EBX.CLFLUSHOPT[bit 23] = 0.
If the LOCK prefix is used.
If an instruction prefix F2H or F3H is used.
diff --git a/x86/cli.html b/x86/cli.html new file mode 100644 index 0000000..0a0b21e --- /dev/null +++ b/x86/cli.html @@ -0,0 +1,150 @@ + +CLI + — Clear Interrupt Flag

CLI + — Clear Interrupt Flag

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-bit ModeCompat/Leg ModeDescription
FACLIZOValidValidClear interrupt flag; interrupts disabled when interrupt flag cleared.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

In most cases, CLI clears the IF flag in the EFLAGS register and no other flags are affected. Clearing the IF flag causes the processor to ignore maskable external interrupts. The IF flag and the CLI and STI instructions have no effect on the generation of exceptions and NMI interrupts.

+

Operation is different in two modes defined as follows:

+
    +
  • PVI mode (protected-mode virtual interrupts): CR0.PE = 1, EFLAGS.VM = 0, CPL = 3, and CR4.PVI = 1;
  • +
  • VME mode (virtual-8086 mode extensions): CR0.PE = 1, EFLAGS.VM = 1, and CR4.VME = 1.
+

If IOPL < 3 and either VME mode or PVI mode is active, CLI clears the VIF flag in the EFLAGS register, leaving IF unaffected.

+

Table 3-7 indicates the action of the CLI instruction depending on the processor operating mode, IOPL, and CPL.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Mode | IOPL | CLI Result
Real-address | X¹ | IF = 0
Protected, not PVI² | ≥ CPL | IF = 0
Protected, not PVI² | < CPL | #GP fault
Protected, PVI³ | 3 | IF = 0
Protected, PVI³ | 0–2 | VIF = 0
Virtual-8086, not VME³ | 3 | IF = 0
Virtual-8086, not VME³ | 0–2 | #GP fault
Virtual-8086, VME³ | 3 | IF = 0
Virtual-8086, VME³ | 0–2 | VIF = 0
+
Table 3-7. Decision Table for CLI Results
+
+

1. X = This setting has no effect on instruction operation.

+

2. For this table, “protected mode” applies whenever CR0.PE = 1 and EFLAGS.VM = 0; it includes compatibility mode and 64-bit mode.

+

3. PVI mode and virtual-8086 mode each imply CPL = 3.
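As an illustration only (not part of this reference), a typical CPL-0 use is a short critical section bracketed by CLI and STI; the sketch below assumes sufficient privilege (IOPL ≥ CPL, otherwise #GP(0) is raised as shown in Table 3-7) and GCC-style inline assembly, and the helper name is hypothetical.

/* Hypothetical critical section: mask maskable external interrupts, do a
 * short piece of work, then set IF again with STI. */
static inline void with_interrupts_masked(void (*fn)(void))
{
    __asm__ __volatile__("cli" ::: "memory", "cc");
    fn();
    __asm__ __volatile__("sti" ::: "memory", "cc");
}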

+

Operation + ¶ +

+
IF CR0.PE = 0
+    THEN IF := 0; (* Reset Interrupt Flag *)
+    ELSE
+        IF IOPL ≥ CPL (* CPL = 3 if EFLAGS.VM = 1 *)
+            THEN IF := 0; (* Reset Interrupt Flag *)
+            ELSE
+                IF VME mode OR PVI mode
+                    THEN VIF := 0; (* Reset Virtual Interrupt Flag *)
+                    ELSE #GP(0);
+                FI;
+        FI;
+FI;
+
+

Flags Affected + ¶ +

+

Either the IF flag or the VIF flag is cleared to 0. Other flags are unaffected.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + +
#GP(0)If CPL is greater than IOPL and PVI mode is not active.
If CPL is greater than IOPL and less than 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + +
#GP(0)If IOPL is less than 3 and VME mode is not active.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/clrssbsy.html b/x86/clrssbsy.html new file mode 100644 index 0000000..66b27c2 --- /dev/null +++ b/x86/clrssbsy.html @@ -0,0 +1,150 @@ + +CLRSSBSY + — Clear Busy Flag in a Supervisor Shadow Stack Token

CLRSSBSY + — Clear Busy Flag in a Supervisor Shadow Stack Token

+ + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F AE /6 CLRSSBSY m64MV/VCET_SSClear busy flag in supervisor shadow stack token referenced by m64.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
MN/AModRM:r/m (r, w)N/AN/AN/A
+

Description + ¶ +

+

Clears the busy flag in the supervisor shadow stack token referenced by m64. After the shadow stack is marked as not busy, the SSP is loaded with the value 0.

+

Operation + ¶ +

+
IF (CR4.CET = 0)
+    THEN #UD; FI;
+IF (IA32_S_CET.SH_STK_EN = 0)
+    THEN #UD; FI;
+IF CPL > 0
+    THEN #GP(0); FI;
+SSP_LA := Linear_Address(mem operand);
+IF SSP_LA not aligned to 8 bytes
+    THEN #GP(0); FI;
+invalid_token := 0;
+expected_token_value := SSP_LA | BUSY_BIT; (* busy bit - bit position 0 - must be set *)
+new_token_value := SSP_LA; (* clear the busy bit *)
+IF shadow_stack_lock_cmpxchg8b(SSP_LA, new_token_value, expected_token_value) != expected_token_value
+    THEN invalid_token := 1; FI;
+(* Set the CF if an invalid token was detected *)
+RFLAGS.CF := (invalid_token = 1) ? 1 : 0;
+RFLAGS.ZF,PF,AF,OF,SF := 0;
+SSP := 0;
+
+

Flags Affected + ¶ +

+

CF is set if an invalid token was detected, else it is cleared. ZF, PF, AF, OF, and SF are cleared.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If CR4.CET = 0.
IF IA32_S_CET.SH_STK_EN = 0.
#GP(0)If memory operand linear address not aligned to 8 bytes.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If destination is located in a non-writeable segment.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
If CPL is not 0.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe CLRSSBSY instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe CLRSSBSY instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+ + + + + + + + + +
#UDSame exceptions as in protected mode.
#GP(0)Same exceptions as in protected mode.
#PF(fault-code)If a page fault occurs.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If CR4.CET = 0.
IF IA32_S_CET.SH_STK_EN = 0.
#GP(0)If memory operand linear address not aligned to 8 bytes.
If CPL is not 0.
If the memory address is in a non-canonical form.
If token is invalid.
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
diff --git a/x86/clts.html b/x86/clts.html new file mode 100644 index 0000000..2b0944f --- /dev/null +++ b/x86/clts.html @@ -0,0 +1,97 @@ + +CLTS + — Clear Task-Switched Flag in CR0

CLTS + — Clear Task-Switched Flag in CR0

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-bit ModeCompat/Leg ModeDescription
0F 06CLTSZOValidValidClears TS flag in CR0.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Clears the task-switched (TS) flag in the CR0 register. This instruction is intended for use in operating-system procedures. It is a privileged instruction that can only be executed at a CPL of 0. It is allowed to be executed in real-address mode to allow initialization for protected mode.

+

The processor sets the TS flag every time a task switch occurs. The flag is used to synchronize the saving of FPU context in multitasking applications. See the description of the TS flag in the section titled “Control Registers” in Chapter 2 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A, for more information about this flag.

+

CLTS operation is the same in non-64-bit modes and 64-bit mode.

+

See Chapter 26, “VMX Non-Root Operation,” of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3C, for more information about the behavior of this instruction in VMX non-root operation.

+

Operation + ¶ +

+
CR0.TS[bit 3] := 0;
+
+

Flags Affected + ¶ +

+

The TS flag in CR0 register is cleared.

+

Protected Mode Exceptions + ¶ +

+ + + + + + +
#GP(0)If the current privilege level is not 0.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + +
#GP(0)CLTS is not recognized in virtual-8086 mode.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + +
#GP(0)If the CPL is greater than 0.
#UDIf the LOCK prefix is used.
diff --git a/x86/clui.html b/x86/clui.html new file mode 100644 index 0000000..8b63384 --- /dev/null +++ b/x86/clui.html @@ -0,0 +1,95 @@ + +CLUI + — Clear User Interrupt Flag

CLUI + — Clear User Interrupt Flag

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 01 EE CLUIZOV/IUINTRClear user interrupt flag; user interrupts blocked when user interrupt flag cleared.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/AN/A
+

Description + ¶ +

+

CLUI clears the user interrupt flag (UIF). Its effect takes place immediately: a user interrupt cannot be delivered on the instruction boundary following CLUI.

+

An execution of CLUI inside a transactional region causes a transactional abort; the abort loads EAX as it would have been loaded had the abort been caused by an execution of CLI.

+

Operation + ¶ +

+
UIF := 0;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + +
#UDThe CLUI instruction is not recognized in protected mode.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe CLUI instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe CLUI instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UDThe CLUI instruction is not recognized in compatibility mode.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + +
#UDIf the LOCK prefix is used.
If executed inside an enclave.
If CR4.UINTR = 0.
If CPUID.07H.0H:EDX.UINTR[bit 5] = 0.
diff --git a/x86/clwb.html b/x86/clwb.html new file mode 100644 index 0000000..b1d4e59 --- /dev/null +++ b/x86/clwb.html @@ -0,0 +1,124 @@ + +CLWB + — Cache Line Write Back

CLWB + — Cache Line Write Back

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F AE /6 CLWB m8MV/VCLWBWrites back modified cache line containing m8, and may retain the line in cache hierarchy in non-modified state.
+

Instruction Operand Encoding1 + ¶ +

+
+

1. The Mod field of the ModR/M byte cannot have value 11B.

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (w)N/AN/AN/A
+

Description + ¶ +

+

Writes back to memory the cache line (if modified) that contains the linear address specified with the memory operand from any level of the cache hierarchy in the cache coherence domain. The line may be retained in the cache hierarchy in non-modified state. Retaining the line in the cache hierarchy is a performance optimization (treated as a hint by hardware) to reduce the possibility of cache miss on a subsequent access. Hardware may choose to retain the line at any of the levels in the cache hierarchy, and in some cases, may invalidate the line from the cache hierarchy. The source operand is a byte memory location.

+

The availability of CLWB instruction is indicated by the presence of the CPUID feature flag CLWB (bit 24 of the EBX register, see “CPUID — CPU Identification” in this chapter). The aligned cache line size affected is also indicated with the CPUID instruction (bits 8 through 15 of the EBX register when the initial value in the EAX register is 1).

+

The memory attribute of the page containing the affected line has no effect on the behavior of this instruction. It should be noted that processors are free to speculatively fetch and cache data from system memory regions that are assigned a memory-type allowing for speculative reads (such as, the WB, WC, and WT memory types). PREFETCHh instructions can be used to provide the processor with hints for this speculative behavior. Because this speculative fetching can occur at any time and is not tied to instruction execution, the CLWB instruction is not ordered with respect to PREFETCHh instructions or any of the speculative fetching mechanisms (that is, data can be speculatively loaded into a cache line just before, during, or after the execution of a CLWB instruction that references the cache line).

+

Executions of the CLWB instruction are ordered with respect to fence instructions and to locked read-modify-write instructions; they are also ordered with respect to older writes to the cache line being written back. They are not ordered with respect to other executions of CLWB, to executions of CLFLUSH and CLFLUSHOPT, or to younger writes to the cache line being written back. Software can use the SFENCE instruction to order an execution of CLWB relative to one of those operations.

+

For usages that require only writing back modified data from cache lines to memory (do not require the line to be invalidated), and expect to subsequently access the data, software is recommended to use CLWB (with appropriate fencing) instead of CLFLUSH or CLFLUSHOPT for improved performance.

+

The CLWB instruction can be used at all privilege levels and is subject to all permission checking and faults associated with a byte load. Like a load, the CLWB instruction sets the accessed flag but not the dirty flag in the page tables.

+

In some implementations, the CLWB instruction may always cause transactional abort with Transactional Synchronization Extensions (TSX). CLWB instruction is not expected to be commonly used inside typical transactional regions. However, programmers must not rely on CLWB instruction to force a transactional abort, since whether they cause transactional abort is implementation dependent.

+

Operation + ¶ +

+
Cache_Line_Write_Back(m8);
+
+

Flags Affected + ¶ +

+

None.

+

C/C++ Compiler Intrinsic Equivalent + ¶ +

+
CLWB void _mm_clwb(void const *p);
+
+
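As a hedged sketch (not part of this reference), CLWB is commonly paired with SFENCE when updating data that must reach memory (for example, persistent memory) before a subsequent marker store; the function and variable names are hypothetical and a compiler exposing _mm_clwb and _mm_sfence is assumed.

#include <immintrin.h>   /* _mm_clwb, _mm_sfence */
#include <stdint.h>

/* Hypothetical update sequence: write a record, write it back with CLWB
 * (the line may stay cached for later reads), fence, then publish a
 * "valid" flag and write that back as well. */
void persist_record(uint64_t *rec, uint64_t value, uint64_t *valid_flag)
{
    *rec = value;
    _mm_clwb(rec);
    _mm_sfence();
    *valid_flag = 1;
    _mm_clwb(valid_flag);
    _mm_sfence();
}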

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If CPUID.(EAX=07H, ECX=0H):EBX.CLWB[bit 24] = 0.
#GP(0)For an illegal memory operand effective address in the CS, DS, ES, FS or GS segments.
#SS(0)For an illegal address in the SS segment.
#PF(fault-code)For a page fault.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + +
#UDIf the LOCK prefix is used.
If CPUID.(EAX=07H, ECX=0H):EBX.CLWB[bit 24] = 0.
#GPIf any part of the operand lies outside the effective address space from 0 to FFFFH.
+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in real address mode.

+ + + +
#PF(fault-code)For a page fault.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If CPUID.(EAX=07H, ECX=0H):EBX.CLWB[bit 24] = 0.
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)For a page fault.
diff --git a/x86/cmc.html b/x86/cmc.html new file mode 100644 index 0000000..a7544c9 --- /dev/null +++ b/x86/cmc.html @@ -0,0 +1,57 @@ + +CMC + — Complement Carry Flag

CMC + — Complement Carry Flag

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-bit ModeCompat/Leg ModeDescription
F5CMCZOValidValidComplement CF flag.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Complements the CF flag in the EFLAGS register. CMC operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
EFLAGS.CF[bit 0] := NOT EFLAGS.CF[bit 0];
+
+

Flags Affected + ¶ +

+

The CF flag contains the complement of its original value. The OF, ZF, SF, AF, and PF flags are unaffected.

+

Exceptions (All Operating Modes) + ¶ +

+

#UD If the LOCK prefix is used.

diff --git a/x86/cmovcc.html b/x86/cmovcc.html new file mode 100644 index 0000000..46bb05c --- /dev/null +++ b/x86/cmovcc.html @@ -0,0 +1,763 @@ + +CMOVcc + — Conditional Move

CMOVcc + — Conditional Move

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 47 /rCMOVA r16, r/m16RMValidValidMove if above (CF=0 and ZF=0).
0F 47 /rCMOVA r32, r/m32RMValidValidMove if above (CF=0 and ZF=0).
REX.W + 0F 47 /rCMOVA r64, r/m64RMValidN.E.Move if above (CF=0 and ZF=0).
0F 43 /rCMOVAE r16, r/m16RMValidValidMove if above or equal (CF=0).
0F 43 /rCMOVAE r32, r/m32RMValidValidMove if above or equal (CF=0).
REX.W + 0F 43 /rCMOVAE r64, r/m64RMValidN.E.Move if above or equal (CF=0).
0F 42 /rCMOVB r16, r/m16RMValidValidMove if below (CF=1).
0F 42 /rCMOVB r32, r/m32RMValidValidMove if below (CF=1).
REX.W + 0F 42 /rCMOVB r64, r/m64RMValidN.E.Move if below (CF=1).
0F 46 /rCMOVBE r16, r/m16RMValidValidMove if below or equal (CF=1 or ZF=1).
0F 46 /rCMOVBE r32, r/m32RMValidValidMove if below or equal (CF=1 or ZF=1).
REX.W + 0F 46 /rCMOVBE r64, r/m64RMValidN.E.Move if below or equal (CF=1 or ZF=1).
0F 42 /rCMOVC r16, r/m16RMValidValidMove if carry (CF=1).
0F 42 /rCMOVC r32, r/m32RMValidValidMove if carry (CF=1).
REX.W + 0F 42 /rCMOVC r64, r/m64RMValidN.E.Move if carry (CF=1).
0F 44 /rCMOVE r16, r/m16RMValidValidMove if equal (ZF=1).
0F 44 /rCMOVE r32, r/m32RMValidValidMove if equal (ZF=1).
REX.W + 0F 44 /rCMOVE r64, r/m64RMValidN.E.Move if equal (ZF=1).
0F 4F /rCMOVG r16, r/m16RMValidValidMove if greater (ZF=0 and SF=OF).
0F 4F /rCMOVG r32, r/m32RMValidValidMove if greater (ZF=0 and SF=OF).
REX.W + 0F 4F /rCMOVG r64, r/m64RMV/N.E.N/AMove if greater (ZF=0 and SF=OF).
0F 4D /rCMOVGE r16, r/m16RMValidValidMove if greater or equal (SF=OF).
0F 4D /rCMOVGE r32, r/m32RMValidValidMove if greater or equal (SF=OF).
REX.W + 0F 4D /rCMOVGE r64, r/m64RMValidN.E.Move if greater or equal (SF=OF).
0F 4C /rCMOVL r16, r/m16RMValidValidMove if less (SF≠ OF).
0F 4C /rCMOVL r32, r/m32RMValidValidMove if less (SF≠ OF).
REX.W + 0F 4C /rCMOVL r64, r/m64RMValidN.E.Move if less (SF≠ OF).
0F 4E /rCMOVLE r16, r/m16RMValidValidMove if less or equal (ZF=1 or SF≠ OF).
0F 4E /rCMOVLE r32, r/m32RMValidValidMove if less or equal (ZF=1 or SF≠ OF).
REX.W + 0F 4E /rCMOVLE r64, r/m64RMValidN.E.Move if less or equal (ZF=1 or SF≠ OF).
0F 46 /rCMOVNA r16, r/m16RMValidValidMove if not above (CF=1 or ZF=1).
0F 46 /rCMOVNA r32, r/m32RMValidValidMove if not above (CF=1 or ZF=1).
REX.W + 0F 46 /rCMOVNA r64, r/m64RMValidN.E.Move if not above (CF=1 or ZF=1).
0F 42 /rCMOVNAE r16, r/m16RMValidValidMove if not above or equal (CF=1).
0F 42 /rCMOVNAE r32, r/m32RMValidValidMove if not above or equal (CF=1).
REX.W + 0F 42 /rCMOVNAE r64, r/m64RMValidN.E.Move if not above or equal (CF=1).
0F 43 /rCMOVNB r16, r/m16RMValidValidMove if not below (CF=0).
0F 43 /rCMOVNB r32, r/m32RMValidValidMove if not below (CF=0).
REX.W + 0F 43 /rCMOVNB r64, r/m64RMValidN.E.Move if not below (CF=0).
0F 47 /rCMOVNBE r16, r/m16RMValidValidMove if not below or equal (CF=0 and ZF=0).
0F 47 /rCMOVNBE r32, r/m32RMValidValidMove if not below or equal (CF=0 and ZF=0).
REX.W + 0F 47 /rCMOVNBE r64, r/m64RMValidN.E.Move if not below or equal (CF=0 and ZF=0).
0F 43 /rCMOVNC r16, r/m16RMValidValidMove if not carry (CF=0).
0F 43 /rCMOVNC r32, r/m32RMValidValidMove if not carry (CF=0).
REX.W + 0F 43 /rCMOVNC r64, r/m64RMValidN.E.Move if not carry (CF=0).
0F 45 /rCMOVNE r16, r/m16RMValidValidMove if not equal (ZF=0).
0F 45 /rCMOVNE r32, r/m32RMValidValidMove if not equal (ZF=0).
REX.W + 0F 45 /rCMOVNE r64, r/m64RMValidN.E.Move if not equal (ZF=0).
0F 4E /rCMOVNG r16, r/m16RMValidValidMove if not greater (ZF=1 or SF≠ OF).
0F 4E /rCMOVNG r32, r/m32RMValidValidMove if not greater (ZF=1 or SF≠ OF).
REX.W + 0F 4E /rCMOVNG r64, r/m64RMValidN.E.Move if not greater (ZF=1 or SF≠ OF).
0F 4C /rCMOVNGE r16, r/m16RMValidValidMove if not greater or equal (SF≠ OF).
0F 4C /rCMOVNGE r32, r/m32RMValidValidMove if not greater or equal (SF≠ OF).
REX.W + 0F 4C /rCMOVNGE r64, r/m64RMValidN.E.Move if not greater or equal (SF≠ OF).
0F 4D /rCMOVNL r16, r/m16RMValidValidMove if not less (SF=OF).
0F 4D /rCMOVNL r32, r/m32RMValidValidMove if not less (SF=OF).
REX.W + 0F 4D /rCMOVNL r64, r/m64RMValidN.E.Move if not less (SF=OF).
0F 4F /rCMOVNLE r16, r/m16RMValidValidMove if not less or equal (ZF=0 and SF=OF).
0F 4F /rCMOVNLE r32, r/m32RMValidValidMove if not less or equal (ZF=0 and SF=OF).
REX.W + 0F 4F /rCMOVNLE r64, r/m64RMValidN.E.Move if not less or equal (ZF=0 and SF=OF).
0F 41 /rCMOVNO r16, r/m16RMValidValidMove if not overflow (OF=0).
0F 41 /rCMOVNO r32, r/m32RMValidValidMove if not overflow (OF=0).
REX.W + 0F 41 /rCMOVNO r64, r/m64RMValidN.E.Move if not overflow (OF=0).
0F 4B /rCMOVNP r16, r/m16RMValidValidMove if not parity (PF=0).
0F 4B /rCMOVNP r32, r/m32RMValidValidMove if not parity (PF=0).
REX.W + 0F 4B /rCMOVNP r64, r/m64RMValidN.E.Move if not parity (PF=0).
0F 49 /rCMOVNS r16, r/m16RMValidValidMove if not sign (SF=0).
0F 49 /rCMOVNS r32, r/m32RMValidValidMove if not sign (SF=0).
REX.W + 0F 49 /rCMOVNS r64, r/m64RMValidN.E.Move if not sign (SF=0).
0F 45 /rCMOVNZ r16, r/m16RMValidValidMove if not zero (ZF=0).
0F 45 /rCMOVNZ r32, r/m32RMValidValidMove if not zero (ZF=0).
REX.W + 0F 45 /rCMOVNZ r64, r/m64RMValidN.E.Move if not zero (ZF=0).
0F 40 /rCMOVO r16, r/m16RMValidValidMove if overflow (OF=1).
0F 40 /rCMOVO r32, r/m32RMValidValidMove if overflow (OF=1).
REX.W + 0F 40 /rCMOVO r64, r/m64RMValidN.E.Move if overflow (OF=1).
0F 4A /rCMOVP r16, r/m16RMValidValidMove if parity (PF=1).
0F 4A /rCMOVP r32, r/m32RMValidValidMove if parity (PF=1).
REX.W + 0F 4A /rCMOVP r64, r/m64RMValidN.E.Move if parity (PF=1).
0F 4A /rCMOVPE r16, r/m16RMValidValidMove if parity even (PF=1).
0F 4A /rCMOVPE r32, r/m32RMValidValidMove if parity even (PF=1).
REX.W + 0F 4A /rCMOVPE r64, r/m64RMValidN.E.Move if parity even (PF=1).
0F 4B /rCMOVPO r16, r/m16RMValidValidMove if parity odd (PF=0).
0F 4B /rCMOVPO r32, r/m32RMValidValidMove if parity odd (PF=0).
REX.W + 0F 4B /rCMOVPO r64, r/m64RMValidN.E.Move if parity odd (PF=0).
0F 48 /rCMOVS r16, r/m16RMValidValidMove if sign (SF=1).
0F 48 /rCMOVS r32, r/m32RMValidValidMove if sign (SF=1).
REX.W + 0F 48 /rCMOVS r64, r/m64RMValidN.E.Move if sign (SF=1).
0F 44 /rCMOVZ r16, r/m16RMValidValidMove if zero (ZF=1).
0F 44 /rCMOVZ r32, r/m32RMValidValidMove if zero (ZF=1).
REX.W + 0F 44 /rCMOVZ r64, r/m64RMValidN.E.Move if zero (ZF=1).
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r, w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Each of the CMOVcc instructions performs a move operation if the status flags in the EFLAGS register (CF, OF, PF, SF, and ZF) are in a specified state (or condition). A condition code (cc) is associated with each instruction to indicate the condition being tested for. If the condition is not satisfied, a move is not performed and execution continues with the instruction following the CMOVcc instruction.

+

Specifically, CMOVcc loads data from its source operand into a temporary register unconditionally (regardless of the condition code and the status flags in the EFLAGS register). If the condition code associated with the instruction (cc) is satisfied, the data in the temporary register is then copied into the instruction's destination operand.

+

These instructions can move 16-bit, 32-bit or 64-bit values from memory to a general-purpose register or from one general-purpose register to another. Conditional moves of 8-bit register operands are not supported.

+

The condition for each CMOVcc mnemonic is given in the description column of the above table. The terms “less” and “greater” are used for comparisons of signed integers and the terms “above” and “below” are used for unsigned integers.

+

Because a particular state of the status flags can sometimes be interpreted in two ways, two mnemonics are defined for some opcodes. For example, the CMOVA (conditional move if above) instruction and the CMOVNBE (conditional move if not below or equal) instruction are alternate mnemonics for the opcode 0F 47H.

+

The CMOVcc instructions were introduced in P6 family processors; however, these instructions may not be supported by all IA-32 processors. Software can determine if the CMOVcc instructions are supported by checking the processor’s feature information with the CPUID instruction (see “CPUID—CPU Identification” in this chapter).

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Use of the REX.R prefix permits access to additional registers (R8-R15). Use of the REX.W prefix promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.
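As an illustration only (not part of this reference), a common use of CMOVcc is branchless selection, such as a signed maximum; the sketch below assumes GCC-style inline assembly and a hypothetical helper name.

#include <stdint.h>

/* Branchless signed maximum using CMOVL: compare the two values, then
 * conditionally move b into the result when r < b (SF != OF). */
static inline int32_t max_i32(int32_t a, int32_t b)
{
    int32_t r = a;
    __asm__("cmpl  %[b], %[r]\n\t"   /* set flags from r - b          */
            "cmovl %[b], %[r]"        /* r := b if r < b (signed less) */
            : [r] "+r"(r)
            : [b] "r"(b)
            : "cc");
    return r;
}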

+

Operation + ¶ +

+
temp := SRC
+IF condition TRUE
+    THEN DEST := temp;
+ELSE IF (OperandSize = 32 and IA-32e mode active)
+    THEN DEST[63:32] := 0;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/cmp.html b/x86/cmp.html new file mode 100644 index 0000000..b02c33c --- /dev/null +++ b/x86/cmp.html @@ -0,0 +1,296 @@ + +CMP + — Compare Two Operands

CMP + — Compare Two Operands

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
3C ibCMP AL, imm8IValidValidCompare imm8 with AL.
3D iwCMP AX, imm16IValidValidCompare imm16 with AX.
3D idCMP EAX, imm32IValidValidCompare imm32 with EAX.
REX.W + 3D idCMP RAX, imm32IValidN.E.Compare imm32 sign-extended to 64-bits with RAX.
80 /7 ibCMP r/m8, imm8MIValidValidCompare imm8 with r/m8.
REX + 80 /7 ibCMP r/m8*, imm8MIValidN.E.Compare imm8 with r/m8.
81 /7 iwCMP r/m16, imm16MIValidValidCompare imm16 with r/m16.
81 /7 idCMP r/m32, imm32MIValidValidCompare imm32 with r/m32.
REX.W + 81 /7 idCMP r/m64, imm32MIValidN.E.Compare imm32 sign-extended to 64-bits with r/m64.
83 /7 ibCMP r/m16, imm8MIValidValidCompare imm8 with r/m16.
83 /7 ibCMP r/m32, imm8MIValidValidCompare imm8 with r/m32.
REX.W + 83 /7 ibCMP r/m64, imm8MIValidN.E.Compare imm8 with r/m64.
38 /rCMP r/m8, r8MRValidValidCompare r8 with r/m8.
REX + 38 /rCMP r/m8*, r8*MRValidN.E.Compare r8 with r/m8.
39 /rCMP r/m16, r16MRValidValidCompare r16 with r/m16.
39 /rCMP r/m32, r32MRValidValidCompare r32 with r/m32.
REX.W + 39 /rCMP r/m64,r64MRValidN.E.Compare r64 with r/m64.
3A /rCMP r8, r/m8RMValidValidCompare r/m8 with r8.
REX + 3A /rCMP r8*, r/m8*RMValidN.E.Compare r/m8 with r8.
3B /rCMP r16, r/m16RMValidValidCompare r/m16 with r16.
3B /rCMP r32, r/m32RMValidValidCompare r/m32 with r32.
REX.W + 3B /rCMP r64, r/m64RMValidN.E.Compare r/m64 with r64.
+
+

* In 64-bit mode, r/m8 cannot be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r)ModRM:r/m (r)N/AN/A
MRModRM:r/m (r)ModRM:reg (r)N/AN/A
MIModRM:r/m (r)imm8/16/32N/AN/A
IAL/AX/EAX/RAX (r)imm8/16/32N/AN/A
+

Description + ¶ +

+

Compares the first source operand with the second source operand and sets the status flags in the EFLAGS register according to the results. The comparison is performed by subtracting the second operand from the first operand and then setting the status flags in the same manner as the SUB instruction. When an immediate value is used as an operand, it is sign-extended to the length of the first operand.

+

The condition codes used by the Jcc, CMOVcc, and SETcc instructions are based on the results of a CMP instruction. Appendix B, “EFLAGS Condition Codes,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, shows the relationship of the status flags and the condition codes.
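As a hedged sketch of that relationship (GCC/Clang extended asm assumed; the function name is illustrative), a CMP result can be captured through SETcc from C:

#include <stdbool.h>

/* True when a < b as unsigned integers: CMP sets CF, captured with SETB. */
bool below(unsigned int a, unsigned int b)
{
    bool r;
    __asm__ ("cmp %2, %1\n\t"      /* flags from a - b                  */
             "setb %0"             /* r := CF (below, unsigned a < b)   */
             : "=r"(r)
             : "r"(a), "r"(b)
             : "cc");
    return r;
}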

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Use of the REX.R prefix permits access to additional registers (R8-R15). Use of the REX.W prefix promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+

Operation + ¶ +

+
temp := SRC1 − SignExtend(SRC2);
+ModifyStatusFlags; (* Modify status flags in the same manner as the SUB instruction*)
+
+

Flags Affected + ¶ +

+

The CF, OF, SF, ZF, AF, and PF flags are set according to the result.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/cmppd.html b/x86/cmppd.html new file mode 100644 index 0000000..47d5c6e --- /dev/null +++ b/x86/cmppd.html @@ -0,0 +1,680 @@ + +CMPPD + — Compare Packed Double Precision Floating-Point Values

CMPPD + — Compare Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F C2 /r ib CMPPD xmm1, xmm2/m128, imm8AV/VSSE2Compare packed double precision floating-point values in xmm2/m128 and xmm1 using bits 2:0 of imm8 as a comparison predicate.
VEX.128.66.0F.WIG C2 /r ib VCMPPD xmm1, xmm2, xmm3/m128, imm8BV/VAVXCompare packed double precision floating-point values in xmm3/m128 and xmm2 using bits 4:0 of imm8 as a comparison predicate.
VEX.256.66.0F.WIG C2 /r ib VCMPPD ymm1, ymm2, ymm3/m256, imm8BV/VAVXCompare packed double precision floating-point values in ymm3/m256 and ymm2 using bits 4:0 of imm8 as a comparison predicate.
EVEX.128.66.0F.W1 C2 /r ib VCMPPD k1 {k2}, xmm2, xmm3/m128/m64bcst, imm8CV/VAVX512VL AVX512FCompare packed double precision floating-point values in xmm3/m128/m64bcst and xmm2 using bits 4:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
EVEX.256.66.0F.W1 C2 /r ib VCMPPD k1 {k2}, ymm2, ymm3/m256/m64bcst, imm8CV/VAVX512VL AVX512FCompare packed double precision floating-point values in ymm3/m256/m64bcst and ymm2 using bits 4:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
EVEX.512.66.0F.W1 C2 /r ib VCMPPD k1 {k2}, zmm2, zmm3/m512/m64bcst{sae}, imm8CV/VAVX512FCompare packed double precision floating-point values in zmm3/m512/m64bcst and zmm2 using bits 4:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)imm8N/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

Performs a SIMD compare of the packed double precision floating-point values in the second source operand and the first source operand and returns the result of the comparison to the destination operand. The comparison predicate operand (immediate byte) specifies the type of comparison performed on each pair of packed values in the two source operands.

+

EVEX encoded versions: The first source operand (second operand) is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand (first operand) is an opmask register. Comparison results are written to the destination operand under the writemask k2. Each comparison result is a single mask bit of 1 (comparison true) or 0 (comparison false).

+

VEX.256 encoded version: The first source operand (second operand) is a YMM register. The second source operand (third operand) can be a YMM register or a 256-bit memory location. The destination operand (first operand) is a YMM register. Four comparisons are performed with results written to the destination operand. The result of each comparison is a quadword mask of all 1s (comparison true) or all 0s (comparison false).

+

128-bit Legacy SSE version: The first source and destination operand (first operand) is an XMM register. The second source operand (second operand) can be an XMM register or 128-bit memory location. Bits (MAXVL-1:128) of the corresponding ZMM destination register remain unchanged. Two comparisons are performed with results written to bits 127:0 of the destination operand. The result of each comparison is a quadword mask of all 1s (comparison true) or all 0s (comparison false).

+

VEX.128 encoded version: The first source operand (second operand) is an XMM register. The second source operand (third operand) can be an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the destination ZMM register are zeroed. Two comparisons are performed with results written to bits 127:0 of the destination operand.

+

The comparison predicate operand is an 8-bit immediate:

+
    +
  • For instructions encoded using the VEX or EVEX prefix, bits 4:0 define the type of comparison to be performed (see Table 3-1). Bits 5 through 7 of the immediate are reserved.
  • +
  • For instruction encodings that do not use VEX prefix, bits 2:0 define the type of comparison to be made (see the first 8 rows of Table 3-1). Bits 3 through 7 of the immediate are reserved.
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Predicate | imm8 Value | Description | A > B | A < B | A = B | Unordered (1) | Signals #IA on QNAN
(Result columns: A is the 1st operand, B is the 2nd operand.)
EQ_OQ (EQ) | 0H | Equal (ordered, non-signaling) | False | False | True | False | No
LT_OS (LT) | 1H | Less-than (ordered, signaling) | False | True | False | False | Yes
LE_OS (LE) | 2H | Less-than-or-equal (ordered, signaling) | False | True | True | False | Yes
UNORD_Q (UNORD) | 3H | Unordered (non-signaling) | False | False | False | True | No
NEQ_UQ (NEQ) | 4H | Not-equal (unordered, non-signaling) | True | True | False | True | No
NLT_US (NLT) | 5H | Not-less-than (unordered, signaling) | True | False | True | True | Yes
NLE_US (NLE) | 6H | Not-less-than-or-equal (unordered, signaling) | True | False | False | True | Yes
ORD_Q (ORD) | 7H | Ordered (non-signaling) | True | True | True | False | No
EQ_UQ | 8H | Equal (unordered, non-signaling) | False | False | True | True | No
NGE_US (NGE) | 9H | Not-greater-than-or-equal (unordered, signaling) | False | True | False | True | Yes
NGT_US (NGT) | AH | Not-greater-than (unordered, signaling) | False | True | True | True | Yes
FALSE_OQ (FALSE) | BH | False (ordered, non-signaling) | False | False | False | False | No
NEQ_OQ | CH | Not-equal (ordered, non-signaling) | True | True | False | False | No
GE_OS (GE) | DH | Greater-than-or-equal (ordered, signaling) | True | False | True | False | Yes
GT_OS (GT) | EH | Greater-than (ordered, signaling) | True | False | False | False | Yes
TRUE_UQ (TRUE) | FH | True (unordered, non-signaling) | True | True | True | True | No
EQ_OS | 10H | Equal (ordered, signaling) | False | False | True | False | Yes
LT_OQ | 11H | Less-than (ordered, non-signaling) | False | True | False | False | No
LE_OQ | 12H | Less-than-or-equal (ordered, non-signaling) | False | True | True | False | No
UNORD_S | 13H | Unordered (signaling) | False | False | False | True | Yes
NEQ_US | 14H | Not-equal (unordered, signaling) | True | True | False | True | Yes
NLT_UQ | 15H | Not-less-than (unordered, non-signaling) | True | False | True | True | No
NLE_UQ | 16H | Not-less-than-or-equal (unordered, non-signaling) | True | False | False | True | No
ORD_S | 17H | Ordered (signaling) | True | True | True | False | Yes
EQ_US | 18H | Equal (unordered, signaling) | False | False | True | True | Yes
NGE_UQ | 19H | Not-greater-than-or-equal (unordered, non-signaling) | False | True | False | True | No
NGT_UQ | 1AH | Not-greater-than (unordered, non-signaling) | False | True | True | True | No
FALSE_OS | 1BH | False (ordered, signaling) | False | False | False | False | Yes
NEQ_OS | 1CH | Not-equal (ordered, signaling) | True | True | False | False | Yes
GE_OQ | 1DH | Greater-than-or-equal (ordered, non-signaling) | True | False | True | False | No
GT_OQ | 1EH | Greater-than (ordered, non-signaling) | True | False | False | False | No
TRUE_US | 1FH | True (unordered, signaling) | True | True | True | True | Yes
+
Table 3-1. Comparison Predicate for CMPPD and CMPPS Instructions
+
+

1. If either operand A or B is a NaN.

+

The unordered relationship is true when at least one of the two source operands being compared is a NaN; the ordered relationship is true when neither source operand is a NaN.

+

A subsequent computational instruction that uses the mask result in the destination operand as an input operand will not generate an exception, because a mask of all 0s corresponds to a floating-point value of +0.0 and a mask of all 1s corresponds to a QNaN.
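As a hedged illustration of feeding the all-0s/all-1s mask into subsequent logic (SSE2 intrinsics; a sketch with names of my own, not part of this reference):

#include <emmintrin.h>

/* For each lane: result = (a < b) ? a : b, built from the CMPPD mask
   with AND/ANDNOT/OR (NaN handling follows the LT_OS predicate).      */
__m128d select_lt_pd(__m128d a, __m128d b)
{
    __m128d lt = _mm_cmplt_pd(a, b);               /* CMPPD ..., 1 (LT_OS)  */
    return _mm_or_pd(_mm_and_pd(lt, a),            /* lanes where a < b     */
                     _mm_andnot_pd(lt, b));        /* remaining lanes       */
}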

+

Note that processors with “CPUID.1H:ECX.AVX = 0” do not implement the “greater-than”, “greater-than-or-equal”, “not-greater-than”, and “not-greater-than-or-equal” predicates. These comparisons can be made either by using the inverse relationship (that is, using “not-less-than-or-equal” to make a “greater-than” comparison) or by using software emulation. When using software emulation, the program must swap the operands (copying registers when necessary to protect the data that will now be in the destination) and then perform the compare using a different predicate. The predicate to use for these emulations is listed in the first 8 rows of Table 3-7 (Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A) under the heading Emulation.
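A minimal sketch of the operand-swap emulation just described (SSE2 intrinsics; the function name is mine, not from the manual):

#include <emmintrin.h>

/* Emulate "a > b" on SSE2-only hardware: swap the operands and use LT.
   (b < a) expresses the same ordered, signaling relation as (a > b).   */
__m128d cmpgt_pd_sse2(__m128d a, __m128d b)
{
    return _mm_cmplt_pd(b, a);                    /* CMPPD with operands swapped */
}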

+

Compilers and assemblers may implement the following two-operand pseudo-ops in addition to the three-operand CMPPD instruction, for processors with “CPUID.1H:ECX.AVX =0”. See Table 3-2. The compiler should treat reserved imm8 values as illegal syntax.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
Pseudo-OpCMPPD Implementation
CMPEQPD xmm1, xmm2CMPPD xmm1, xmm2, 0
CMPLTPD xmm1, xmm2CMPPD xmm1, xmm2, 1
CMPLEPD xmm1, xmm2CMPPD xmm1, xmm2, 2
CMPUNORDPD xmm1, xmm2CMPPD xmm1, xmm2, 3
CMPNEQPD xmm1, xmm2CMPPD xmm1, xmm2, 4
CMPNLTPD xmm1, xmm2CMPPD xmm1, xmm2, 5
CMPNLEPD xmm1, xmm2CMPPD xmm1, xmm2, 6
CMPORDPD xmm1, xmm2CMPPD xmm1, xmm2, 7
+
Table 3-2. Pseudo-Op and CMPPD Implementation
+

The greater-than relations that the processor does not implement require more than one instruction to emulate in software and therefore should not be implemented as pseudo-ops. (For these, the programmer should reverse the operands of the corresponding less than relations and use move instructions to ensure that the mask is moved to the correct destination register and that the source operand is left intact.)

+

Processors with “CPUID.1H:ECX.AVX = 1” implement the full complement of 32 predicates shown in Table 3-3, so software emulation is no longer needed. Compilers and assemblers may implement the following three-operand pseudo-ops in addition to the four-operand VCMPPD instruction. See Table 3-3, where the notations reg1, reg2, and reg3 represent either XMM or YMM registers. The compiler should treat reserved imm8 values as illegal syntax. Alternately, intrinsics can map the pseudo-ops to pre-defined constants to support a simpler intrinsic interface. Compilers and assemblers may implement three-operand pseudo-ops for EVEX-encoded VCMPPD instructions in a similar fashion by extending the syntax listed in Table 3-3.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Pseudo-OpCMPPD Implementation
VCMPEQPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 0
VCMPLTPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 1
VCMPLEPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 2
VCMPUNORDPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 3
VCMPNEQPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 4
VCMPNLTPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 5
VCMPNLEPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 6
VCMPORDPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 7
VCMPEQ_UQPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 8
VCMPNGEPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 9
VCMPNGTPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 0AH
VCMPFALSEPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 0BH
VCMPNEQ_OQPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 0CH
VCMPGEPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 0DH
VCMPGTPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 0EH
VCMPTRUEPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 0FH
VCMPEQ_OSPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 10H
VCMPLT_OQPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 11H
VCMPLE_OQPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 12H
VCMPUNORD_SPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 13H
VCMPNEQ_USPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 14H
VCMPNLT_UQPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 15H
VCMPNLE_UQPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 16H
VCMPORD_SPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 17H
VCMPEQ_USPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 18H
VCMPNGE_UQPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 19H
VCMPNGT_UQPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 1AH
VCMPFALSE_OSPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 1BH
VCMPNEQ_OSPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 1CH
VCMPGE_OQPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 1DH
VCMPGT_OQPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 1EH
VCMPTRUE_USPD reg1, reg2, reg3VCMPPD reg1, reg2, reg3, 1FH
+
Table 3-3. Pseudo-Op and VCMPPD Implementation
+

Operation + ¶ +

+
CASE (COMPARISON PREDICATE) OF
+    0: OP3 := EQ_OQ; OP5 := EQ_OQ;
+    1: OP3 := LT_OS; OP5 := LT_OS;
+    2: OP3 := LE_OS; OP5 := LE_OS;
+    3: OP3 := UNORD_Q; OP5 := UNORD_Q;
+    4: OP3 := NEQ_UQ; OP5 := NEQ_UQ;
+    5: OP3 := NLT_US; OP5 := NLT_US;
+    6: OP3 := NLE_US; OP5 := NLE_US;
+    7: OP3 := ORD_Q; OP5 := ORD_Q;
+    8: OP5 := EQ_UQ;
+    9: OP5 := NGE_US;
+    10: OP5 := NGT_US;
+    11: OP5 := FALSE_OQ;
+    12: OP5 := NEQ_OQ;
+    13: OP5 := GE_OS;
+    14: OP5 := GT_OS;
+    15: OP5 := TRUE_UQ;
+    16: OP5 := EQ_OS;
+    17: OP5 := LT_OQ;
+    18: OP5 := LE_OQ;
+    19: OP5 := UNORD_S;
+    20: OP5 := NEQ_US;
+    21: OP5 := NLT_UQ;
+    22: OP5 := NLE_UQ;
+    23: OP5 := ORD_S;
+    24: OP5 := EQ_US;
+    25: OP5 := NGE_UQ;
+    26: OP5 := NGT_UQ;
+    27: OP5 := FALSE_OS;
+    28: OP5 := NEQ_OS;
+    29: OP5 := GE_OQ;
+    30: OP5 := GT_OQ;
+    31: OP5 := TRUE_US;
+    DEFAULT: Reserved;
+ESAC;
+
+

VCMPPD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k2[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN
+                    CMP := SRC1[i+63:i] OP5 SRC2[63:0]
+                ELSE
+                    CMP := SRC1[i+63:i] OP5 SRC2[i+63:i]
+            FI;
+            IF CMP = TRUE
+                THEN DEST[j] := 1;
+                ELSE DEST[j] := 0; FI;
+        ELSE DEST[j] := 0
+                        ; zeroing-masking only
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

VCMPPD (VEX.256 Encoded Version) + ¶ +

+
CMP0 := SRC1[63:0] OP5 SRC2[63:0];
+CMP1 := SRC1[127:64] OP5 SRC2[127:64];
+CMP2 := SRC1[191:128] OP5 SRC2[191:128];
+CMP3 := SRC1[255:192] OP5 SRC2[255:192];
+IF CMP0 = TRUE
+    THEN DEST[63:0] := FFFFFFFFFFFFFFFFH;
+    ELSE DEST[63:0] := 0000000000000000H; FI;
+IF CMP1 = TRUE
+    THEN DEST[127:64] := FFFFFFFFFFFFFFFFH;
+    ELSE DEST[127:64] := 0000000000000000H; FI;
+IF CMP2 = TRUE
+    THEN DEST[191:128] := FFFFFFFFFFFFFFFFH;
+    ELSE DEST[191:128] := 0000000000000000H; FI;
+IF CMP3 = TRUE
+    THEN DEST[255:192] := FFFFFFFFFFFFFFFFH;
+    ELSE DEST[255:192] := 0000000000000000H; FI;
+DEST[MAXVL-1:256] := 0
+
+

VCMPPD (VEX.128 Encoded Version) + ¶ +

+
CMP0 := SRC1[63:0] OP5 SRC2[63:0];
+CMP1 := SRC1[127:64] OP5 SRC2[127:64];
+IF CMP0 = TRUE
+    THEN DEST[63:0] := FFFFFFFFFFFFFFFFH;
+    ELSE DEST[63:0] := 0000000000000000H; FI;
+IF CMP1 = TRUE
+    THEN DEST[127:64] := FFFFFFFFFFFFFFFFH;
+    ELSE DEST[127:64] := 0000000000000000H; FI;
+DEST[MAXVL-1:128] := 0
+
+

CMPPD (128-bit Legacy SSE Version) + ¶ +

+
CMP0 := SRC1[63:0] OP3 SRC2[63:0];
+CMP1 := SRC1[127:64] OP3 SRC2[127:64];
+IF CMP0 = TRUE
+    THEN DEST[63:0] := FFFFFFFFFFFFFFFFH;
+    ELSE DEST[63:0] := 0000000000000000H; FI;
+IF CMP1 = TRUE
+    THEN DEST[127:64] := FFFFFFFFFFFFFFFFH;
+    ELSE DEST[127:64] := 0000000000000000H; FI;
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCMPPD __mmask8 _mm512_cmp_pd_mask( __m512d a, __m512d b, int imm);
+
+
VCMPPD __mmask8 _mm512_cmp_round_pd_mask( __m512d a, __m512d b, int imm, int sae);
+
+
VCMPPD __mmask8 _mm512_mask_cmp_pd_mask( __mmask8 k1, __m512d a, __m512d b, int imm);
+
+
VCMPPD __mmask8 _mm512_mask_cmp_round_pd_mask( __mmask8 k1, __m512d a, __m512d b, int imm, int sae);
+
+
VCMPPD __mmask8 _mm256_cmp_pd_mask( __m256d a, __m256d b, int imm);
+
+
VCMPPD __mmask8 _mm256_mask_cmp_pd_mask( __mmask8 k1, __m256d a, __m256d b, int imm);
+
+
VCMPPD __mmask8 _mm_cmp_pd_mask( __m128d a, __m128d b, int imm);
+
+
VCMPPD __mmask8 _mm_mask_cmp_pd_mask( __mmask8 k1, __m128d a, __m128d b, int imm);
+
+
VCMPPD __m256 _mm256_cmp_pd(__m256d a, __m256d b, int imm)
+
+
(V)CMPPD __m128 _mm_cmp_pd(__m128d a, __m128d b, int imm)
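A short usage sketch of the opmask form (AVX-512F intrinsics; the clamp example and names are my own, illustrative only):

#include <immintrin.h>

/* Clamp each of 8 doubles to an upper bound: the compare produces a
   __mmask8, which then drives a masked blend.                         */
__m512d clamp_upper(__m512d v, __m512d bound)
{
    __mmask8 over = _mm512_cmp_pd_mask(v, bound, _CMP_GT_OQ);
    return _mm512_mask_blend_pd(over, v, bound);  /* bound where over = 1, else v */
}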
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid if SNaN operand and invalid if QNaN and predicate as listed in Table 3-1, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/cmpps.html b/x86/cmpps.html new file mode 100644 index 0000000..6838e83 --- /dev/null +++ b/x86/cmpps.html @@ -0,0 +1,408 @@ + +CMPPS + — Compare Packed Single Precision Floating-Point Values

CMPPS + — Compare Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F C2 /r ib CMPPS xmm1, xmm2/m128, imm8AV/VSSECompare packed single precision floating-point values in xmm2/m128 and xmm1 using bits 2:0 of imm8 as a comparison predicate.
VEX.128.0F.WIG C2 /r ib VCMPPS xmm1, xmm2, xmm3/m128, imm8BV/VAVXCompare packed single precision floating-point values in xmm3/m128 and xmm2 using bits 4:0 of imm8 as a comparison predicate.
VEX.256.0F.WIG C2 /r ib VCMPPS ymm1, ymm2, ymm3/m256, imm8BV/VAVXCompare packed single precision floating-point values in ymm3/m256 and ymm2 using bits 4:0 of imm8 as a comparison predicate.
EVEX.128.0F.W0 C2 /r ib VCMPPS k1 {k2}, xmm2, xmm3/m128/m32bcst, imm8CV/VAVX512VL AVX512FCompare packed single precision floating-point values in xmm3/m128/m32bcst and xmm2 using bits 4:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
EVEX.256.0F.W0 C2 /r ib VCMPPS k1 {k2}, ymm2, ymm3/m256/m32bcst, imm8CV/VAVX512VL AVX512FCompare packed single precision floating-point values in ymm3/m256/m32bcst and ymm2 using bits 4:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
EVEX.512.0F.W0 C2 /r ib VCMPPS k1 {k2}, zmm2, zmm3/m512/m32bcst{sae}, imm8CV/VAVX512FCompare packed single precision floating-point values in zmm3/m512/m32bcst and zmm2 using bits 4:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)imm8N/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

Performs a SIMD compare of the packed single precision floating-point values in the second source operand and the first source operand and returns the result of the comparison to the destination operand. The comparison predicate operand (immediate byte) specifies the type of comparison performed on each of the pairs of packed values.

+

EVEX encoded versions: The first source operand (second operand) is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand (first operand) is an opmask register. Comparison results are written to the destination operand under the writemask k2. Each comparison result is a single mask bit of 1 (comparison true) or 0 (comparison false).

+

VEX.256 encoded version: The first source operand (second operand) is a YMM register. The second source operand (third operand) can be a YMM register or a 256-bit memory location. The destination operand (first operand) is a YMM register. Eight comparisons are performed with results written to the destination operand. The result of each comparison is a doubleword mask of all 1s (comparison true) or all 0s (comparison false).

+

128-bit Legacy SSE version: The first source and destination operand (first operand) is an XMM register. The second source operand (second operand) can be an XMM register or 128-bit memory location. Bits (MAXVL-1:128) of the corresponding ZMM destination register remain unchanged. Four comparisons are performed with results written to bits 127:0 of the destination operand. The result of each comparison is a doubleword mask of all 1s (comparison true) or all 0s (comparison false).

+

VEX.128 encoded version: The first source operand (second operand) is an XMM register. The second source operand (third operand) can be an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the destination ZMM register are zeroed. Four comparisons are performed with results written to bits 127:0 of the destination operand.

+

The comparison predicate operand is an 8-bit immediate:

+
    +
  • For instructions encoded using the VEX prefix and EVEX prefix, bits 4:0 define the type of comparison to be performed (see Table 3-1). Bits 5 through 7 of the immediate are reserved.
  • +
  • For instruction encodings that do not use VEX prefix, bits 2:0 define the type of comparison to be made (see the first 8 rows of Table 3-1). Bits 3 through 7 of the immediate are reserved.
+

The unordered relationship is true when at least one of the two source operands being compared is a NaN; the ordered relationship is true when neither source operand is a NaN.

+

A subsequent computational instruction that uses the mask result in the destination operand as an input operand will not generate an exception, because a mask of all 0s corresponds to a floating-point value of +0.0 and a mask of all 1s corresponds to a QNaN.
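As another illustrative sketch (AVX intrinsics and a GCC/Clang builtin assumed; not drawn from this page), the per-lane mask can also be compressed to bits and counted:

#include <immintrin.h>

/* Count how many of 8 floats are >= a threshold using VCMPPS + VMOVMSKPS. */
int count_at_least(__m256 v, __m256 threshold)
{
    __m256 m = _mm256_cmp_ps(v, threshold, _CMP_GE_OQ);   /* all-1s / all-0s lanes */
    return __builtin_popcount(_mm256_movemask_ps(m));     /* one bit per lane      */
}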

+

Note that processors with “CPUID.1H:ECX.AVX = 0” do not implement the “greater-than”, “greater-than-or-equal”, “not-greater-than”, and “not-greater-than-or-equal” predicates. These comparisons can be made either by using the inverse relationship (that is, using “not-less-than-or-equal” to make a “greater-than” comparison) or by using software emulation. When using software emulation, the program must swap the operands (copying registers when necessary to protect the data that will now be in the destination) and then perform the compare using a different predicate. The predicate to use for these emulations is listed in the first 8 rows of Table 3-7 (Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A) under the heading Emulation.

+

Compilers and assemblers may implement the following two-operand pseudo-ops in addition to the three-operand CMPPS instruction, for processors with “CPUID.1H:ECX.AVX =0”. See Table 3-4. The compiler should treat reserved imm8 values as illegal syntax.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
Pseudo-OpCMPPS Implementation
CMPEQPS xmm1, xmm2CMPPS xmm1, xmm2, 0
CMPLTPS xmm1, xmm2CMPPS xmm1, xmm2, 1
CMPLEPS xmm1, xmm2CMPPS xmm1, xmm2, 2
CMPUNORDPS xmm1, xmm2CMPPS xmm1, xmm2, 3
CMPNEQPS xmm1, xmm2CMPPS xmm1, xmm2, 4
CMPNLTPS xmm1, xmm2CMPPS xmm1, xmm2, 5
CMPNLEPS xmm1, xmm2CMPPS xmm1, xmm2, 6
CMPORDPS xmm1, xmm2CMPPS xmm1, xmm2, 7
+
Table 3-4. Pseudo-Op and CMPPS Implementation
+

The greater-than relations that the processor does not implement require more than one instruction to emulate in software and therefore should not be implemented as pseudo-ops. (For these, the programmer should reverse the operands of the corresponding less than relations and use move instructions to ensure that the mask is moved to the correct destination register and that the source operand is left intact.)

+

Processors with “CPUID.1H:ECX.AVX = 1” implement the full complement of 32 predicates shown in Table 3-5, so software emulation is no longer needed. Compilers and assemblers may implement the following three-operand pseudo-ops in addition to the four-operand VCMPPS instruction. See Table 3-5, where the notations reg1, reg2, and reg3 represent either XMM or YMM registers. The compiler should treat reserved imm8 values as illegal syntax. Alternately, intrinsics can map the pseudo-ops to pre-defined constants to support a simpler intrinsic interface. Compilers and assemblers may implement three-operand pseudo-ops for EVEX-encoded VCMPPS instructions in a similar fashion by extending the syntax listed in Table 3-5.

+


+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Pseudo-OpCMPPS Implementation
VCMPEQPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 0
VCMPLTPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 1
VCMPLEPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 2
VCMPUNORDPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 3
VCMPNEQPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 4
VCMPNLTPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 5
VCMPNLEPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 6
VCMPORDPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 7
VCMPEQ_UQPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 8
VCMPNGEPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 9
VCMPNGTPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 0AH
VCMPFALSEPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 0BH
VCMPNEQ_OQPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 0CH
VCMPGEPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 0DH
VCMPGTPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 0EH
VCMPTRUEPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 0FH
VCMPEQ_OSPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 10H
VCMPLT_OQPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 11H
VCMPLE_OQPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 12H
VCMPUNORD_SPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 13H
VCMPNEQ_USPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 14H
VCMPNLT_UQPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 15H
VCMPNLE_UQPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 16H
VCMPORD_SPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 17H
VCMPEQ_USPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 18H
VCMPNGE_UQPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 19H
VCMPNGT_UQPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 1AH
VCMPFALSE_OSPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 1BH
VCMPNEQ_OSPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 1CH
VCMPGE_OQPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 1DH
VCMPGT_OQPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 1EH
VCMPTRUE_USPS reg1, reg2, reg3VCMPPS reg1, reg2, reg3, 1FH
+
Table 3-5. Pseudo-Op and VCMPPS Implementation
+

Operation + ¶ +

+
CASE (COMPARISON PREDICATE) OF
+    0: OP3 := EQ_OQ; OP5 := EQ_OQ;
+    1: OP3 := LT_OS; OP5 := LT_OS;
+    2: OP3 := LE_OS; OP5 := LE_OS;
+    3: OP3 := UNORD_Q; OP5 := UNORD_Q;
+    4: OP3 := NEQ_UQ; OP5 := NEQ_UQ;
+    5: OP3 := NLT_US; OP5 := NLT_US;
+    6: OP3 := NLE_US; OP5 := NLE_US;
+    7: OP3 := ORD_Q; OP5 := ORD_Q;
+    8: OP5 := EQ_UQ;
+    9: OP5 := NGE_US;
+    10: OP5 := NGT_US;
+    11: OP5 := FALSE_OQ;
+    12: OP5 := NEQ_OQ;
+    13: OP5 := GE_OS;
+    14: OP5 := GT_OS;
+    15: OP5 := TRUE_UQ;
+    16: OP5 := EQ_OS;
+    17: OP5 := LT_OQ;
+    18: OP5 := LE_OQ;
+    19: OP5 := UNORD_S;
+    20: OP5 := NEQ_US;
+    21: OP5 := NLT_UQ;
+    22: OP5 := NLE_UQ;
+    23: OP5 := ORD_S;
+    24: OP5 := EQ_US;
+    25: OP5 := NGE_UQ;
+    26: OP5 := NGT_UQ;
+    27: OP5 := FALSE_OS;
+    28: OP5 := NEQ_OS;
+    29: OP5 := GE_OQ;
+    30: OP5 := GT_OQ;
+    31: OP5 := TRUE_US;
+    DEFAULT: Reserved
+ESAC;
+
+

VCMPPS (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k2[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN
+                    CMP := SRC1[i+31:i] OP5 SRC2[31:0]
+                ELSE
+                    CMP := SRC1[i+31:i] OP5 SRC2[i+31:i]
+            FI;
+            IF CMP = TRUE
+                THEN DEST[j] := 1;
+                ELSE DEST[j] := 0; FI;
+        ELSE DEST[j] := 0
+                        ; zeroing-masking only
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

VCMPPS (VEX.256 Encoded Version) + ¶ +

+
CMP0 := SRC1[31:0] OP5 SRC2[31:0];
+CMP1 := SRC1[63:32] OP5 SRC2[63:32];
+CMP2 := SRC1[95:64] OP5 SRC2[95:64];
+CMP3 := SRC1[127:96] OP5 SRC2[127:96];
+CMP4 := SRC1[159:128] OP5 SRC2[159:128];
+CMP5 := SRC1[191:160] OP5 SRC2[191:160];
+CMP6 := SRC1[223:192] OP5 SRC2[223:192];
+CMP7 := SRC1[255:224] OP5 SRC2[255:224];
+IF CMP0 = TRUE
+    THEN DEST[31:0] := FFFFFFFFH;
+    ELSE DEST[31:0] := 00000000H; FI;
+IF CMP1 = TRUE
+    THEN DEST[63:32] := FFFFFFFFH;
+    ELSE DEST[63:32] := 00000000H; FI;
+IF CMP2 = TRUE
+    THEN DEST[95:64] := FFFFFFFFH;
+    ELSE DEST[95:64] := 00000000H; FI;
+IF CMP3 = TRUE
+    THEN DEST[127:96] := FFFFFFFFH;
+    ELSE DEST[127:96] := 00000000H; FI;
+IF CMP4 = TRUE
+    THEN DEST[159:128] := FFFFFFFFH;
+    ELSE DEST[159:128] := 00000000H; FI;
+IF CMP5 = TRUE
+    THEN DEST[191:160] := FFFFFFFFH;
+    ELSE DEST[191:160] := 00000000H; FI;
+IF CMP6 = TRUE
+    THEN DEST[223:192] := FFFFFFFFH;
+    ELSE DEST[223:192] := 00000000H; FI;
+IF CMP7 = TRUE
+    THEN DEST[255:224] := FFFFFFFFH;
+    ELSE DEST[255:224] := 00000000H; FI;
+DEST[MAXVL-1:256] := 0
+
+

VCMPPS (VEX.128 Encoded Version) + ¶ +

+
CMP0 := SRC1[31:0] OP5 SRC2[31:0];
+CMP1 := SRC1[63:32] OP5 SRC2[63:32];
+CMP2 := SRC1[95:64] OP5 SRC2[95:64];
+CMP3 := SRC1[127:96] OP5 SRC2[127:96];
+IF CMP0 = TRUE
+    THEN DEST[31:0] := FFFFFFFFH;
+    ELSE DEST[31:0] := 00000000H; FI;
+IF CMP1 = TRUE
+    THEN DEST[63:32] := FFFFFFFFH;
+    ELSE DEST[63:32] := 00000000H; FI;
+IF CMP2 = TRUE
+    THEN DEST[95:64] := FFFFFFFFH;
+    ELSE DEST[95:64] := 00000000H; FI;
+IF CMP3 = TRUE
+    THEN DEST[127:96] := FFFFFFFFH;
+    ELSE DEST[127:96] := 00000000H; FI;
+DEST[MAXVL-1:128] := 0
+
+

CMPPS (128-bit Legacy SSE Version) + ¶ +

+
CMP0 := SRC1[31:0] OP3 SRC2[31:0];
+CMP1 := SRC1[63:32] OP3 SRC2[63:32];
+CMP2 := SRC1[95:64] OP3 SRC2[95:64];
+CMP3 := SRC1[127:96] OP3 SRC2[127:96];
+IF CMP0 = TRUE
+    THEN DEST[31:0] := FFFFFFFFH;
+    ELSE DEST[31:0] := 00000000H; FI;
+IF CMP1 = TRUE
+    THEN DEST[63:32] := FFFFFFFFH;
+    ELSE DEST[63:32] := 00000000H; FI;
+IF CMP2 = TRUE
+    THEN DEST[95:64] := FFFFFFFFH;
+    ELSE DEST[95:64] := 00000000H; FI;
+IF CMP3 = TRUE
+    THEN DEST[127:96] := FFFFFFFFH;
+    ELSE DEST[127:96] := 00000000H; FI;
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCMPPS __mmask16 _mm512_cmp_ps_mask( __m512 a, __m512 b, int imm);
+
+
VCMPPS __mmask16 _mm512_cmp_round_ps_mask( __m512 a, __m512 b, int imm, int sae);
+
+
VCMPPS __mmask16 _mm512_mask_cmp_ps_mask( __mmask16 k1, __m512 a, __m512 b, int imm);
+
+
VCMPPS __mmask16 _mm512_mask_cmp_round_ps_mask( __mmask16 k1, __m512 a, __m512 b, int imm, int sae);
+
+
VCMPPS __mmask8 _mm256_cmp_ps_mask( __m256 a, __m256 b, int imm);
+
+
VCMPPS __mmask8 _mm256_mask_cmp_ps_mask( __mmask8 k1, __m256 a, __m256 b, int imm);
+
+
VCMPPS __mmask8 _mm_cmp_ps_mask( __m128 a, __m128 b, int imm);
+
+
VCMPPS __mmask8 _mm_mask_cmp_ps_mask( __mmask8 k1, __m128 a, __m128 b, int imm);
+
+
VCMPPS __m256 _mm256_cmp_ps(__m256 a, __m256 b, int imm)
+
+
CMPPS __m128 _mm_cmp_ps(__m128 a, __m128 b, int imm)
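As an illustrative usage sketch of the opmask result (AVX-512F intrinsics; the example and function name are assumptions, not from this page):

#include <immintrin.h>

/* Keep only non-NaN lanes: a value is "ordered" against itself exactly
   when it is not a NaN; unordered lanes are zeroed via a zeroing move.  */
__m512 drop_nans(__m512 v)
{
    __mmask16 ordered = _mm512_cmp_ps_mask(v, v, _CMP_ORD_Q);
    return _mm512_maskz_mov_ps(ordered, v);       /* zero where unordered */
}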
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid if SNaN operand and invalid if QNaN and predicate as listed in Table 3-1, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/cmps.cmpsb.cmpsw.cmpsd.cmpsq.html b/x86/cmps.cmpsb.cmpsw.cmpsd.cmpsq.html new file mode 100644 index 0000000..17a3119 --- /dev/null +++ b/x86/cmps.cmpsb.cmpsw.cmpsd.cmpsq.html @@ -0,0 +1,265 @@ + +CMPS/CMPSB/CMPSW/CMPSD/CMPSQ + — Compare String Operands

CMPS/CMPSB/CMPSW/CMPSD/CMPSQ + — Compare String Operands

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
A6CMPS m8, m8ZOValidValidFor legacy mode, compare byte at address DS:(E)SI with byte at address ES:(E)DI; For 64-bit mode compare byte at address (R|E)SI to byte at address (R|E)DI. The status flags are set accordingly.
A7CMPS m16, m16ZOValidValidFor legacy mode, compare word at address DS:(E)SI with word at address ES:(E)DI; For 64-bit mode compare word at address (R|E)SI with word at address (R|E)DI. The status flags are set accordingly.
A7CMPS m32, m32ZOValidValidFor legacy mode, compare dword at address DS:(E)SI with dword at address ES:(E)DI; For 64-bit mode compare dword at address (R|E)SI with dword at address (R|E)DI. The status flags are set accordingly.
REX.W + A7CMPS m64, m64ZOValidN.E.Compares quadword at address (R|E)SI with quadword at address (R|E)DI and sets the status flags accordingly.
A6CMPSBZOValidValidFor legacy mode, compare byte at address DS:(E)SI with byte at address ES:(E)DI; For 64-bit mode compare byte at address (R|E)SI with byte at address (R|E)DI. The status flags are set accordingly.
A7CMPSWZOValidValidFor legacy mode, compare word at address DS:(E)SI with word at address ES:(E)DI; For 64-bit mode compare word at address (R|E)SI with word at address (R|E)DI. The status flags are set accordingly.
A7CMPSDZOValidValidFor legacy mode, compare dword at address DS:(E)SI with dword at address ES:(E)DI; For 64-bit mode compare dword at address (R|E)SI with dword at address (R|E)DI. The status flags are set accordingly.
REX.W + A7CMPSQZOValidN.E.Compares quadword at address (R|E)SI with quadword at address (R|E)DI and sets the status flags accordingly.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Compares the byte, word, doubleword, or quadword specified with the first source operand with the byte, word, doubleword, or quadword specified with the second source operand and sets the status flags in the EFLAGS register according to the results.

+

Both source operands are located in memory. The address of the first source operand is read from DS:SI, DS:ESI, or RSI (depending on whether the address-size attribute of the instruction is 16, 32, or 64, respectively). The address of the second source operand is read from ES:DI, ES:EDI, or RDI (again depending on whether the address-size attribute of the instruction is 16, 32, or 64). The DS segment may be overridden with a segment override prefix, but the ES segment cannot be overridden.

+

At the assembly-code level, two forms of this instruction are allowed: the “explicit-operands” form and the “no-operands” form. The explicit-operands form (specified with the CMPS mnemonic) allows the two source operands to be specified explicitly. Here, the source operands should be symbols that indicate the size and location of the source values. This explicit-operand form is provided to allow documentation. However, note that the documentation provided by this form can be misleading. That is, the source operand symbols must specify the correct type (size) of the operands (bytes, words, doublewords, or quadwords), but they do not have to specify the correct location. Locations of the source operands are always specified by the DS:(E)SI (or RSI) and ES:(E)DI (or RDI) registers, which must be loaded correctly before the compare string instruction is executed.

+

The no-operands form provides “short forms” of the byte, word, and doubleword versions of the CMPS instructions. Here also the DS:(E)SI (or RSI) and ES:(E)DI (or RDI) registers are assumed by the processor to specify the location of the source operands. The size of the source operands is selected with the mnemonic: CMPSB (byte comparison), CMPSW (word comparison), CMPSD (doubleword comparison), or CMPSQ (quadword comparison using REX.W).

+

After the comparison, the (E/R)SI and (E/R)DI registers increment or decrement automatically according to the setting of the DF flag in the EFLAGS register. (If the DF flag is 0, the (E/R)SI and (E/R)DI registers increment; if the DF flag is 1, the registers decrement.) The registers increment or decrement by 1 for byte operations, by 2 for word operations, and by 4 for doubleword operations. If the operand size is 64, the RSI and RDI registers increment or decrement by 8 for quadword operations.

+

The CMPS, CMPSB, CMPSW, CMPSD, and CMPSQ instructions can be preceded by the REP prefix for block comparisons. More often, however, these instructions will be used in a LOOP construct that takes some action based on the setting of the status flags before the next comparison is made. See “REP/REPE/REPZ /REPNE/REPNZ—Repeat String Operation Prefix” in Chapter 4 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2B, for a description of the REP prefix.
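For illustration only (GCC/Clang extended asm on x86-64, with flag-output constraints assumed available and DF assumed clear per the ABI; not text from this manual), a REP-prefixed block comparison might look like this:

#include <stddef.h>
#include <stdbool.h>

/* Compare two buffers for equality with REPE CMPSB.
   Requires n > 0; with RCX = 0 the instruction does not execute
   and the flags would be stale.                                  */
bool buffers_equal(const void *a, const void *b, size_t n)
{
    const unsigned char *s = a;
    const unsigned char *d = b;
    bool zf;
    __asm__ volatile ("repe cmpsb"
                      : "+S"(s), "+D"(d), "+c"(n), "=@ccz"(zf)
                      :
                      : "memory");
    return zf;     /* ZF = 1 only if every byte matched */
}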

+

In 64-bit mode, the instruction’s default address size is 64 bits; 32-bit address size is supported using the prefix 67H. Use of the REX.W prefix promotes doubleword operation to 64 bits (see CMPSQ). See the summary chart at the beginning of this section for encoding data and limits.

+

Operation + ¶ +

+
temp := SRC1 - SRC2;
+SetStatusFlags(temp);
+IF (64-Bit Mode)
+    THEN
+        IF (Byte comparison)
+        THEN IF DF = 0
+            THEN
+                (R|E)SI := (R|E)SI + 1;
+                (R|E)DI := (R|E)DI + 1;
+            ELSE
+                (R|E)SI := (R|E)SI – 1;
+                (R|E)DI := (R|E)DI – 1;
+            FI;
+        ELSE IF (Word comparison)
+            THEN IF DF = 0
+                THEN
+                    (R|E)SI
+                        := (R|E)SI + 2;
+                    (R|E)DI
+                        := (R|E)DI + 2;
+                ELSE
+                    (R|E)SI
+                        := (R|E)SI – 2;
+                    (R|E)DI
+                        := (R|E)DI – 2;
+                FI;
+        ELSE IF (Doubleword
+                        comparison)
+            THEN IF DF = 0
+                THEN
+                    (R|E)SI
+                        := (R|E)SI + 4;
+                    (R|E)DI
+                        := (R|E)DI + 4;
+                ELSE
+                    (R|E)SI
+                        := (R|E)SI – 4;
+                    (R|E)DI
+                        := (R|E)DI – 4;
+                FI;
+        ELSE (* Quadword comparison *)
+            THEN IF DF = 0
+                (R|E)SI := (R|E)SI + 8;
+                (R|E)DI := (R|E)DI + 8;
+            ELSE
+                (R|E)SI := (R|E)SI – 8;
+                (R|E)DI := (R|E)DI – 8;
+            FI;
+        FI;
+    ELSE (* Non-64-bit Mode *)
+        IF (byte comparison)
+        THEN IF DF = 0
+            THEN
+                (E)SI := (E)SI + 1;
+                (E)DI := (E)DI + 1;
+            ELSE
+                (E)SI := (E)SI – 1;
+                (E)DI := (E)DI – 1;
+            FI;
+        ELSE IF (Word comparison)
+            THEN IF DF = 0
+                (E)SI := (E)SI + 2;
+                (E)DI := (E)DI + 2;
+            ELSE
+                (E)SI := (E)SI – 2;
+                (E)DI := (E)DI – 2;
+            FI;
+        ELSE (* Doubleword comparison *)
+            THEN IF DF = 0
+                (E)SI := (E)SI + 4;
+                (E)DI := (E)DI + 4;
+            ELSE
+                (E)SI := (E)SI – 4;
+                (E)DI := (E)DI – 4;
+            FI;
+        FI;
+FI;
+
+

Flags Affected + ¶ +

+

The CF, OF, SF, ZF, AF, and PF flags are set according to the temporary result of the comparison.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/cmpsd.html b/x86/cmpsd.html new file mode 100644 index 0000000..87bcf50 --- /dev/null +++ b/x86/cmpsd.html @@ -0,0 +1,316 @@ + +CMPSD + — Compare Scalar Double Precision Floating-Point Value

CMPSD + — Compare Scalar Double Precision Floating-Point Value

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F2 0F C2 /r ib CMPSD xmm1, xmm2/m64, imm8AV/VSSE2Compare low double precision floating-point value in xmm2/m64 and xmm1 using bits 2:0 of imm8 as comparison predicate.
VEX.LIG.F2.0F.WIG C2 /r ib VCMPSD xmm1, xmm2, xmm3/m64, imm8BV/VAVXCompare low double precision floating-point value in xmm3/m64 and xmm2 using bits 4:0 of imm8 as comparison predicate.
EVEX.LLIG.F2.0F.W1 C2 /r ib VCMPSD k1 {k2}, xmm2, xmm3/m64{sae}, imm8CV/VAVX512FCompare low double precision floating-point value in xmm3/m64 and xmm2 using bits 4:0 of imm8 as comparison predicate with writemask k2 and leave the result in mask register k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)imm8N/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8
CTuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

Compares the low double precision floating-point values in the second source operand and the first source operand and returns the result of the comparison to the destination operand. The comparison predicate operand (immediate operand) specifies the type of comparison performed.

+

128-bit Legacy SSE version: The first source and destination operand (first operand) is an XMM register. The second source operand (second operand) can be an XMM register or 64-bit memory location. Bits (MAXVL-1:64) of the corresponding YMM destination register remain unchanged. The comparison result is a quadword mask of all 1s (comparison true) or all 0s (comparison false).

+

VEX.128 encoded version: The first source operand (second operand) is an XMM register. The second source operand (third operand) can be an XMM register or a 64-bit memory location. The result is stored in the low quadword of the destination operand; the high quadword is filled with the contents of the high quadword of the first source operand. Bits (MAXVL-1:128) of the destination ZMM register are zeroed. The comparison result is a quadword mask of all 1s (comparison true) or all 0s (comparison false).

+

EVEX encoded version: The first source operand (second operand) is an XMM register. The second source operand can be an XMM register or a 64-bit memory location. The destination operand (first operand) is an opmask register. The comparison result is a single mask bit of 1 (comparison true) or 0 (comparison false), written to the destination starting from the LSB according to the writemask k2. Bits (MAX_KL-1:1) of the destination register are cleared.

+

The comparison predicate operand is an 8-bit immediate:

+
    +
  • For instructions encoded using the VEX prefix, bits 4:0 define the type of comparison to be performed (see Table 3-1). Bits 5 through 7 of the immediate are reserved.
  • +
  • For instruction encodings that do not use VEX prefix, bits 2:0 define the type of comparison to be made (see the first 8 rows of Table 3-1). Bits 3 through 7 of the immediate are reserved.
+

The unordered relationship is true when at least one of the two source operands being compared is a NaN; the ordered relationship is true when neither source operand is a NaN.

+

A subsequent computational instruction that uses the mask result in the destination operand as an input operand will not generate an exception, because a mask of all 0s corresponds to a floating-point value of +0.0 and a mask of all 1s corresponds to a QNaN.
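A minimal sketch of turning the scalar mask into a C boolean (SSE2 intrinsics; the function name is illustrative, not part of this reference):

#include <emmintrin.h>

/* 1 when x < y per the LT_OS predicate, 0 otherwise (0 also for NaNs). */
int less_than_sd(double x, double y)
{
    __m128d m = _mm_cmplt_sd(_mm_set_sd(x), _mm_set_sd(y));
    return _mm_movemask_pd(m) & 1;     /* bit 0 = sign bit of the low lane */
}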

+

Note that processors with “CPUID.1H:ECX.AVX = 0” do not implement the “greater-than”, “greater-than-or-equal”, “not-greater-than”, and “not-greater-than-or-equal” predicates. These comparisons can be made either by using the inverse relationship (that is, using “not-less-than-or-equal” to make a “greater-than” comparison) or by using software emulation. When using software emulation, the program must swap the operands (copying registers when necessary to protect the data that will now be in the destination) and then perform the compare using a different predicate. The predicate to use for these emulations is listed in the first 8 rows of Table 3-7 (Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A) under the heading Emulation.

+

Compilers and assemblers may implement the following two-operand pseudo-ops in addition to the three-operand CMPSD instruction, for processors with “CPUID.1H:ECX.AVX =0”. See Table 3-6. The compiler should treat reserved imm8 values as illegal syntax.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
Pseudo-OpCMPSD Implementation
CMPEQSD xmm1, xmm2CMPSD xmm1, xmm2, 0
CMPLTSD xmm1, xmm2CMPSD xmm1, xmm2, 1
CMPLESD xmm1, xmm2CMPSD xmm1, xmm2, 2
CMPUNORDSD xmm1, xmm2CMPSD xmm1, xmm2, 3
CMPNEQSD xmm1, xmm2CMPSD xmm1, xmm2, 4
CMPNLTSD xmm1, xmm2CMPSD xmm1, xmm2, 5
CMPNLESD xmm1, xmm2CMPSD xmm1, xmm2, 6
CMPORDSD xmm1, xmm2CMPSD xmm1, xmm2, 7
+
Table 3-6. Pseudo-Op and CMPSD Implementation
+

The greater-than relations that the processor does not implement require more than one instruction to emulate in software and therefore should not be implemented as pseudo-ops. (For these, the programmer should reverse the operands of the corresponding less than relations and use move instructions to ensure that the mask is moved to the correct destination register and that the source operand is left intact.)

+

Processors with “CPUID.1H:ECX.AVX = 1” implement the full complement of 32 predicates shown in Table 3-7, so software emulation is no longer needed. Compilers and assemblers may implement the following three-operand pseudo-ops in addition to the four-operand VCMPSD instruction. See Table 3-7, where the notations reg1, reg2, and reg3 represent either XMM or YMM registers. The compiler should treat reserved imm8 values as illegal syntax. Alternately, intrinsics can map the pseudo-ops to pre-defined constants to support a simpler intrinsic interface. Compilers and assemblers may implement three-operand pseudo-ops for EVEX encoded VCMPSD instructions in a similar fashion by extending the syntax listed in Table 3-7.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Pseudo-OpCMPSD Implementation
VCMPEQSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 0
VCMPLTSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 1
VCMPLESD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 2
VCMPUNORDSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 3
VCMPNEQSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 4
VCMPNLTSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 5
VCMPNLESD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 6
VCMPORDSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 7
VCMPEQ_UQSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 8
VCMPNGESD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 9
VCMPNGTSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 0AH
VCMPFALSESD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 0BH
VCMPNEQ_OQSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 0CH
VCMPGESD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 0DH
+
Table 3-7. Pseudo-Op and VCMPSD Implementation
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Pseudo-OpCMPSD Implementation
VCMPGTSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 0EH
VCMPTRUESD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 0FH
VCMPEQ_OSSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 10H
VCMPLT_OQSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 11H
VCMPLE_OQSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 12H
VCMPUNORD_SSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 13H
VCMPNEQ_USSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 14H
VCMPNLT_UQSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 15H
VCMPNLE_UQSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 16H
VCMPORD_SSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 17H
VCMPEQ_USSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 18H
VCMPNGE_UQSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 19H
VCMPNGT_UQSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 1AH
VCMPFALSE_OSSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 1BH
VCMPNEQ_OSSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 1CH
VCMPGE_OQSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 1DH
VCMPGT_OQSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 1EH
VCMPTRUE_USSD reg1, reg2, reg3VCMPSD reg1, reg2, reg3, 1FH
+
Table 3-7. Pseudo-Op and VCMPSD Implementation
+

Software should ensure VCMPSD is encoded with VEX.L=0. Encoding VCMPSD with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+
CASE (COMPARISON PREDICATE) OF
+    0: OP3 := EQ_OQ; OP5 := EQ_OQ;
+    1: OP3 := LT_OS; OP5 := LT_OS;
+    2: OP3 := LE_OS; OP5 := LE_OS;
+    3: OP3 := UNORD_Q; OP5 := UNORD_Q;
+    4: OP3 := NEQ_UQ; OP5 := NEQ_UQ;
+    5: OP3 := NLT_US; OP5 := NLT_US;
+    6: OP3 := NLE_US; OP5 := NLE_US;
+    7: OP3 := ORD_Q; OP5 := ORD_Q;
+    8: OP5 := EQ_UQ;
+    9: OP5 := NGE_US;
+    10: OP5 := NGT_US;
+    11: OP5 := FALSE_OQ;
+    12: OP5 := NEQ_OQ;
+    13: OP5 := GE_OS;
+    14: OP5 := GT_OS;
+    15: OP5 := TRUE_UQ;
+    16: OP5 := EQ_OS;
+    17: OP5 := LT_OQ;
+    18: OP5 := LE_OQ;
+    19: OP5 := UNORD_S;
+    20: OP5 := NEQ_US;
+    21: OP5 := NLT_UQ;
+    22: OP5 := NLE_UQ;
+    23: OP5 := ORD_S;
+    24: OP5 := EQ_US;
+    25: OP5 := NGE_UQ;
+    26: OP5 := NGT_UQ;
+    27: OP5 := FALSE_OS;
+    28: OP5 := NEQ_OS;
+    29: OP5 := GE_OQ;
+    30: OP5 := GT_OQ;
+    31: OP5 := TRUE_US;
+    DEFAULT: Reserved
+ESAC;
+
+

VCMPSD (EVEX Encoded Version) + ¶ +

+
CMP0 := SRC1[63:0] OP5 SRC2[63:0];
+IF k2[0] or *no writemask*
+    THEN IF CMP0 = TRUE
+        THEN DEST[0] := 1;
+        ELSE DEST[0] := 0; FI;
+    ELSE DEST[0] := 0
+            ; zeroing-masking only
+FI;
+DEST[MAX_KL-1:1] := 0
+
+

CMPSD (128-bit Legacy SSE Version) + ¶ +

+
CMP0 := DEST[63:0] OP3 SRC[63:0];
+IF CMP0 = TRUE
+THEN DEST[63:0] := FFFFFFFFFFFFFFFFH;
+ELSE DEST[63:0] := 0000000000000000H; FI;
+DEST[MAXVL-1:64] (Unmodified)
+
+

VCMPSD (VEX.128 Encoded Version) + ¶ +

+
CMP0 := SRC1[63:0] OP5 SRC2[63:0];
+IF CMP0 = TRUE
+THEN DEST[63:0] := FFFFFFFFFFFFFFFFH;
+ELSE DEST[63:0] := 0000000000000000H; FI;
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCMPSD __mmask8 _mm_cmp_sd_mask( __m128d a, __m128d b, int imm);
+
+
VCMPSD __mmask8 _mm_cmp_round_sd_mask( __m128d a, __m128d b, int imm, int sae);
+
+
VCMPSD __mmask8 _mm_mask_cmp_sd_mask( __mmask8 k1, __m128d a, __m128d b, int imm);
+
+
VCMPSD __mmask8 _mm_mask_cmp_round_sd_mask( __mmask8 k1, __m128d a, __m128d b, int imm, int sae);
+
+
(V)CMPSD __m128d _mm_cmp_sd(__m128d a, __m128d b, const int imm)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid if SNaN operand, Invalid if QNaN and predicate as listed in Table 3-1, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/cmpss.html b/x86/cmpss.html new file mode 100644 index 0000000..a468ccd --- /dev/null +++ b/x86/cmpss.html @@ -0,0 +1,315 @@ + +CMPSS + — Compare Scalar Single Precision Floating-Point Value

CMPSS + — Compare Scalar Single Precision Floating-Point Value

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F C2 /r ib CMPSS xmm1, xmm2/m32, imm8AV/VSSECompare low single precision floating-point value in xmm2/m32 and xmm1 using bits 2:0 of imm8 as comparison predicate.
VEX.LIG.F3.0F.WIG C2 /r ib VCMPSS xmm1, xmm2, xmm3/m32, imm8BV/VAVXCompare low single precision floating-point value in xmm3/m32 and xmm2 using bits 4:0 of imm8 as comparison predicate.
EVEX.LLIG.F3.0F.W0 C2 /r ib VCMPSS k1 {k2}, xmm2, xmm3/m32{sae}, imm8CV/VAVX512FCompare low single precision floating-point value in xmm3/m32 and xmm2 using bits 4:0 of imm8 as comparison predicate with writemask k2 and leave the result in mask register k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)imm8N/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8
CTuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

Compares the low single precision floating-point values in the second source operand and the first source operand and returns the result of the comparison to the destination operand. The comparison predicate operand (immediate operand) specifies the type of comparison performed.

+

128-bit Legacy SSE version: The first source and destination operand (first operand) is an XMM register. The second source operand (second operand) can be an XMM register or 32-bit memory location. Bits (MAXVL-1:32) of the corresponding YMM destination register remain unchanged. The comparison result is a doubleword mask of all 1s (comparison true) or all 0s (comparison false).

+

VEX.128 encoded version: The first source operand (second operand) is an XMM register. The second source operand (third operand) can be an XMM register or a 32-bit memory location. The result is stored in the low 32 bits of the destination operand; bits 127:32 of the destination operand are copied from the first source operand. Bits (MAXVL-1:128) of the destination ZMM register are zeroed. The comparison result is a doubleword mask of all 1s (comparison true) or all 0s (comparison false).

+

EVEX encoded version: The first source operand (second operand) is an XMM register. The second source operand can be a XMM register or a 32-bit memory location. The destination operand (first operand) is an opmask register. The comparison result is a single mask bit of 1 (comparison true) or 0 (comparison false), written to the destination starting from the LSB according to the writemask k2. Bits (MAX_KL-1:128) of the destination register are cleared.

+

The comparison predicate operand is an 8-bit immediate:

+
    +
  • For instructions encoded using the VEX prefix, bits 4:0 define the type of comparison to be performed (see Table 3-1). Bits 5 through 7 of the immediate are reserved.
  • +
  • For instruction encodings that do not use VEX prefix, bits 2:0 define the type of comparison to be made (see the first 8 rows of Table 3-1). Bits 3 through 7 of the immediate are reserved.
+

The unordered relationship is true when at least one of the two source operands being compared is a NaN; the ordered relationship is true when neither source operand is a NaN.

+

A subsequent computational instruction that uses the mask result in the destination operand as an input operand will not generate an exception, because a mask of all 0s corresponds to a floating-point value of +0.0 and a mask of all 1s corresponds to a QNaN.

+
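For illustration (a sketch, not from the SDM; the helper name is hypothetical), the all-0s/all-1s mask produced by (V)CMPSS can therefore be fed directly into a branchless select before further arithmetic:

#include <xmmintrin.h>

/* Low element of the result is x0 where mask is all-1s, y0 where it is all-0s. */
static __m128 select_ss(__m128 mask, __m128 x, __m128 y)
{
    return _mm_or_ps(_mm_and_ps(mask, x), _mm_andnot_ps(mask, y));  /* (mask & x) | (~mask & y) */
}

/* Usage: __m128 m = _mm_cmplt_ss(a, b); the low lane of select_ss(m, x, y) is x0 if a0 < b0, else y0. */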

Note that processors with “CPUID.1H:ECX.AVX =0” do not implement the “greater-than”, “greater-than-or-equal”, “not-greater-than”, and “not-greater-than-or-equal” predicates. These comparisons can be made either by using the inverse relationship (that is, use the “not-less-than-or-equal” to make a “greater-than” comparison) or by using software emulation, as sketched after this paragraph. When using software emulation, the program must swap the operands (copying registers when necessary to protect the data that will now be in the destination), and then perform the compare using a different predicate. The predicate to be used for these emulations is listed in the first 8 rows of Table 3-7 (Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A) under the heading Emulation.

+
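The operand-swap emulation described above can be sketched as follows (an illustrative helper assuming an SSE-only target, not the SDM's own listing): a greater-than compare becomes a swapped less-than compare followed by a move that restores the intended upper elements.

#include <xmmintrin.h>

/* a > b (low element) on parts without AVX. */
static __m128 cmpgt_ss_emulated(__m128 a, __m128 b)
{
    __m128 mask = _mm_cmplt_ss(b, a);   /* low element: (b < a) mask; upper elements come from b */
    return _mm_move_ss(a, mask);        /* result: [a3, a2, a1, mask0], preserving a's upper elements */
}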

Compilers and assemblers may implement the following two-operand pseudo-ops in addition to the three-operand CMPSS instruction, for processors with “CPUID.1H:ECX.AVX =0”. See Table 3-8. The compiler should treat reserved imm8 values as illegal syntax.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
Pseudo-OpCMPSS Implementation
CMPEQSS xmm1, xmm2CMPSS xmm1, xmm2, 0
CMPLTSS xmm1, xmm2CMPSS xmm1, xmm2, 1
CMPLESS xmm1, xmm2CMPSS xmm1, xmm2, 2
CMPUNORDSS xmm1, xmm2CMPSS xmm1, xmm2, 3
CMPNEQSS xmm1, xmm2CMPSS xmm1, xmm2, 4
CMPNLTSS xmm1, xmm2CMPSS xmm1, xmm2, 5
CMPNLESS xmm1, xmm2CMPSS xmm1, xmm2, 6
CMPORDSS xmm1, xmm2CMPSS xmm1, xmm2, 7
+
Table 3-8. Pseudo-Op and CMPSS Implementation
+

The greater-than relations that the processor does not implement require more than one instruction to emulate in software and therefore should not be implemented as pseudo-ops. (For these, the programmer should reverse the operands of the corresponding less than relations and use move instructions to ensure that the mask is moved to the correct destination register and that the source operand is left intact.)

+

Processors with “CPUID.1H:ECX.AVX =1” implement the full complement of 32 predicates shown in Table 3-7; software emulation is no longer needed. Compilers and assemblers may implement the following three-operand pseudo-ops in addition to the four-operand VCMPSS instruction. See Table 3-9, where the notations reg1, reg2, and reg3 represent either XMM registers or YMM registers. The compiler should treat reserved imm8 values as illegal syntax. Alternatively, intrinsics can map the pseudo-ops to pre-defined constants to support a simpler intrinsic interface. Compilers and assemblers may implement three-operand pseudo-ops for EVEX-encoded VCMPSS instructions in a similar fashion by extending the syntax listed in Table 3-9.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Pseudo-OpCMPSS Implementation
VCMPEQSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 0
VCMPLTSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 1
VCMPLESS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 2
VCMPUNORDSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 3
VCMPNEQSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 4
VCMPNLTSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 5
VCMPNLESS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 6
VCMPORDSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 7
VCMPEQ_UQSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 8
VCMPNGESS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 9
VCMPNGTSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 0AH
VCMPFALSESS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 0BH
+
Table 3-9. Pseudo-Op and VCMPSS Implementation
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Pseudo-OpCMPSS Implementation
VCMPNEQ_OQSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 0CH
VCMPGESS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 0DH
VCMPGTSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 0EH
VCMPTRUESS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 0FH
VCMPEQ_OSSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 10H
VCMPLT_OQSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 11H
VCMPLE_OQSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 12H
VCMPUNORD_SSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 13H
VCMPNEQ_USSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 14H
VCMPNLT_UQSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 15H
VCMPNLE_UQSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 16H
VCMPORD_SSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 17H
VCMPEQ_USSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 18H
VCMPNGE_UQSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 19H
VCMPNGT_UQSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 1AH
VCMPFALSE_OSSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 1BH
VCMPNEQ_OSSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 1CH
VCMPGE_OQSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 1DH
VCMPGT_OQSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 1EH
VCMPTRUE_USSS reg1, reg2, reg3VCMPSS reg1, reg2, reg3, 1FH
+
Table 3-9. Pseudo-Op and VCMPSS Implementation
+

Software should ensure VCMPSS is encoded with VEX.L=0. Encoding VCMPSS with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+
CASE (COMPARISON PREDICATE) OF
+    0: OP3 := EQ_OQ; OP5 := EQ_OQ;
+    1: OP3 := LT_OS; OP5 := LT_OS;
+    2: OP3 := LE_OS; OP5 := LE_OS;
+    3: OP3 := UNORD_Q; OP5 := UNORD_Q;
+    4: OP3 := NEQ_UQ; OP5 := NEQ_UQ;
+    5: OP3 := NLT_US; OP5 := NLT_US;
+    6: OP3 := NLE_US; OP5 := NLE_US;
+    7: OP3 := ORD_Q; OP5 := ORD_Q;
+    8: OP5 := EQ_UQ;
+    9: OP5 := NGE_US;
+    10: OP5 := NGT_US;
+    11: OP5 := FALSE_OQ;
+    12: OP5 := NEQ_OQ;
+    13: OP5 := GE_OS;
+    14: OP5 := GT_OS;
+    15: OP5 := TRUE_UQ;
+    16: OP5 := EQ_OS;
+    17: OP5 := LT_OQ;
+    18: OP5 := LE_OQ;
+    19: OP5 := UNORD_S;
+    20: OP5 := NEQ_US;
+    21: OP5 := NLT_UQ;
+    22: OP5 := NLE_UQ;
+    23: OP5 := ORD_S;
+    24: OP5 := EQ_US;
+    25: OP5 := NGE_UQ;
+    26: OP5 := NGT_UQ;
+    27: OP5 := FALSE_OS;
+    28: OP5 := NEQ_OS;
+    29: OP5 := GE_OQ;
+    30: OP5 := GT_OQ;
+    31: OP5 := TRUE_US;
+    DEFAULT: Reserved
+ESAC;
+
+

VCMPSS (EVEX Encoded Version) + ¶ +

+
CMP0 := SRC1[31:0] OP5 SRC2[31:0];
+IF k2[0] or *no writemask*
+    THEN IF CMP0 = TRUE
+        THEN DEST[0] := 1;
+        ELSE DEST[0] := 0; FI;
+    ELSE DEST[0] := 0
+            ; zeroing-masking only
+FI;
+DEST[MAX_KL-1:1] := 0
+
+

CMPSS (128-bit Legacy SSE Version) + ¶ +

+
CMP0 := DEST[31:0] OP3 SRC[31:0];
+IF CMP0 = TRUE
+THEN DEST[31:0] := FFFFFFFFH;
+ELSE DEST[31:0] := 00000000H; FI;
+DEST[MAXVL-1:32] (Unmodified)
+
+

VCMPSS (VEX.128 Encoded Version) + ¶ +

+
CMP0 := SRC1[31:0] OP5 SRC2[31:0];
+IF CMP0 = TRUE
+THEN DEST[31:0] := FFFFFFFFH;
+ELSE DEST[31:0] := 00000000H; FI;
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCMPSS __mmask8 _mm_cmp_ss_mask( __m128 a, __m128 b, int imm);
+
+
VCMPSS __mmask8 _mm_cmp_round_ss_mask( __m128 a, __m128 b, int imm, int sae);
+
+
VCMPSS __mmask8 _mm_mask_cmp_ss_mask( __mmask8 k1, __m128 a, __m128 b, int imm);
+
+
VCMPSS __mmask8 _mm_mask_cmp_round_ss_mask( __mmask8 k1, __m128 a, __m128 b, int imm, int sae);
+
+
(V)CMPSS __m128 _mm_cmp_ss(__m128 a, __m128 b, const int imm)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid if SNaN operand, Invalid if QNaN and predicate as listed in Table 3-1, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/cmpxchg.html b/x86/cmpxchg.html new file mode 100644 index 0000000..34c4edc --- /dev/null +++ b/x86/cmpxchg.html @@ -0,0 +1,172 @@ + +CMPXCHG + — Compare and Exchange

CMPXCHG + — Compare and Exchange

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F B0/r CMPXCHG r/m8, r8MRValidValid*Compare AL with r/m8. If equal, ZF is set and r8 is loaded into r/m8. Else, clear ZF and load r/m8 into AL.
REX + 0F B0/r CMPXCHG r/m8**,r8MRValidN.E.Compare AL with r/m8. If equal, ZF is set and r8 is loaded into r/m8. Else, clear ZF and load r/m8 into AL.
0F B1/r CMPXCHG r/m16, r16MRValidValid*Compare AX with r/m16. If equal, ZF is set and r16 is loaded into r/m16. Else, clear ZF and load r/m16 into AX.
0F B1/r CMPXCHG r/m32, r32MRValidValid*Compare EAX with r/m32. If equal, ZF is set and r32 is loaded into r/m32. Else, clear ZF and load r/m32 into EAX.
REX.W + 0F B1/r CMPXCHG r/m64, r64MRValidN.E.Compare RAX with r/m64. If equal, ZF is set and r64 is loaded into r/m64. Else, clear ZF and load r/m64 into RAX.
+
+

* See the IA-32 Architecture Compatibility section below.

+

** In 64-bit mode, r/m8 can not be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MRModRM:r/m (r, w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Compares the value in the AL, AX, EAX, or RAX register with the first operand (destination operand). If the two values are equal, the second operand (source operand) is loaded into the destination operand. Otherwise, the destination operand is loaded into the AL, AX, EAX or RAX register. RAX register is available only in 64-bit mode.

+

This instruction can be used with a LOCK prefix to allow the instruction to be executed atomically. To simplify the interface to the processor’s bus, the destination operand receives a write cycle without regard to the result of the comparison. The destination operand is written back if the comparison fails; otherwise, the source operand is written into the destination. (The processor never produces a locked read without also producing a locked write.)

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Use of the REX.R prefix permits access to additional registers (R8-R15). Use of the REX.W prefix promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+
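As a hedged illustration (not part of the SDM text; the helper name is hypothetical), the C11 compare-exchange operation below is typically lowered to LOCK CMPXCHG on x86; on failure the value observed in memory is returned through the accumulator, which is why the standard library rewrites the expected value:

#include <stdatomic.h>
#include <stdbool.h>

/* Atomically replace *slot with 'desired' only if it still holds 'expected'.
   Returns true on success, corresponding to ZF = 1 after the instruction. */
static bool try_claim(atomic_int *slot, int expected, int desired)
{
    return atomic_compare_exchange_strong(slot, &expected, desired);
}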

IA-32 Architecture Compatibility + ¶ +

+

This instruction is not supported on Intel processors earlier than the Intel486 processors.

+

Operation + ¶ +

+
(* Accumulator = AL, AX, EAX, or RAX depending on whether a byte, word, doubleword, or quadword comparison is being performed *)
+TEMP := DEST
+IF accumulator = TEMP
+    THEN
+        ZF := 1;
+        DEST := SRC;
+    ELSE
+        ZF := 0;
+        accumulator := TEMP;
+        DEST := TEMP;
+FI;
+
+

Flags Affected + ¶ +

+

The ZF flag is set if the values in the destination operand and register AL, AX, or EAX are equal; otherwise it is cleared. The CF, PF, AF, SF, and OF flags are set according to the results of the comparison operation.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
diff --git a/x86/cmpxchg8b.cmpxchg16b.html b/x86/cmpxchg8b.cmpxchg16b.html new file mode 100644 index 0000000..10cead9 --- /dev/null +++ b/x86/cmpxchg8b.cmpxchg16b.html @@ -0,0 +1,173 @@ + +CMPXCHG8B/CMPXCHG16B + — Compare and Exchange Bytes

CMPXCHG8B/CMPXCHG16B + — Compare and Exchange Bytes

+ + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F C7 /1 CMPXCHG8B m64MValidValid*Compare EDX:EAX with m64. If equal, set ZF and load ECX:EBX into m64. Else, clear ZF and load m64 into EDX:EAX.
REX.W + 0F C7 /1 CMPXCHG16B m128MValidN.E.Compare RDX:RAX with m128. If equal, set ZF and load RCX:RBX into m128. Else, clear ZF and load m128 into RDX:RAX.
+
+

* See the IA-32 Architecture Compatibility section below.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r, w)N/AN/AN/A
+

Description + ¶ +

+

Compares the 64-bit value in EDX:EAX (or 128-bit value in RDX:RAX if operand size is 128 bits) with the operand (destination operand). If the values are equal, the 64-bit value in ECX:EBX (or 128-bit value in RCX:RBX) is stored in the destination operand. Otherwise, the value in the destination operand is loaded into EDX:EAX (or RDX:RAX). The destination operand is an 8-byte memory location (or 16-byte memory location if operand size is 128 bits). For the EDX:EAX and ECX:EBX register pairs, EDX and ECX contain the high-order 32 bits and EAX and EBX contain the low-order 32 bits of a 64-bit value. For the RDX:RAX and RCX:RBX register pairs, RDX and RCX contain the high-order 64 bits and RAX and RBX contain the low-order 64 bits of a 128-bit value.

+

This instruction can be used with a LOCK prefix to allow the instruction to be executed atomically. To simplify the interface to the processor’s bus, the destination operand receives a write cycle without regard to the result of the comparison. The destination operand is written back if the comparison fails; otherwise, the source operand is written into the destination. (The processor never produces a locked read without also producing a locked write.)

+

In 64-bit mode, default operation size is 64 bits. Use of the REX.W prefix promotes operation to 128 bits. Note that CMPXCHG16B requires that the destination (memory) operand be 16-byte aligned. See the summary chart at the beginning of this section for encoding data and limits. For information on the CPUID flag that indicates CMPXCHG16B, see page 3-243.

+
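For illustration only (a sketch assuming GCC or Clang built with -mcx16 so the builtin is lowered inline to LOCK CMPXCHG16B rather than through libatomic; the helper name is hypothetical), a 16-byte compare-and-swap on a 16-byte-aligned object corresponds to this instruction:

#include <stdbool.h>

typedef unsigned __int128 u128;   /* GCC/Clang extension */

/* *dst must be 16-byte aligned, as required above.  On failure, *expected is
   updated with the value observed in memory, mirroring the RDX:RAX reload in
   the Operation section below. */
static bool cas16(u128 *dst, u128 *expected, u128 desired)
{
    return __atomic_compare_exchange_n(dst, expected, desired,
                                       false, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}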

IA-32 Architecture Compatibility + ¶ +

+

This instruction encoding is not supported on Intel processors earlier than the Pentium processors.

+

Operation + ¶ +

+
IF (64-Bit Mode and OperandSize = 64)
+    THEN
+        TEMP128 := DEST
+        IF (RDX:RAX = TEMP128)
+            THEN
+                ZF := 1;
+                DEST := RCX:RBX;
+            ELSE
+                ZF := 0;
+                RDX:RAX := TEMP128;
+                DEST := TEMP128;
+                FI;
+        FI
+    ELSE
+        TEMP64 := DEST;
+        IF (EDX:EAX = TEMP64)
+            THEN
+                ZF := 1;
+                DEST := ECX:EBX;
+            ELSE
+                ZF := 0;
+                EDX:EAX := TEMP64;
+                DEST := TEMP64;
+                FI;
+        FI;
+FI;
+
+

Flags Affected + ¶ +

+

The ZF flag is set if the destination operand and EDX:EAX are equal; otherwise it is cleared. The CF, PF, AF, SF, and OF flags are unaffected.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#UDIf the destination is not a memory operand.
#GP(0)If the destination is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#UDIf the destination operand is not a memory location.
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#UDIf the destination operand is not a memory location.
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
If memory operand for CMPXCHG16B is not aligned on a 16-byte boundary.
If CPUID.01H:ECX.CMPXCHG16B[bit 13] = 0.
#UDIf the destination operand is not a memory location.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
diff --git a/x86/comisd.html b/x86/comisd.html new file mode 100644 index 0000000..0ffe42e --- /dev/null +++ b/x86/comisd.html @@ -0,0 +1,113 @@ + +COMISD + — Compare Scalar Ordered Double Precision Floating-Point Values and Set EFLAGS

COMISD + — Compare Scalar Ordered Double Precision Floating-Point Values and Set EFLAGS

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 2F /r COMISD xmm1, xmm2/m64AV/VSSE2Compare low double precision floating-point values in xmm1 and xmm2/mem64 and set the EFLAGS flags accordingly.
VEX.LIG.66.0F.WIG 2F /r VCOMISD xmm1, xmm2/m64AV/VAVXCompare low double precision floating-point values in xmm1 and xmm2/mem64 and set the EFLAGS flags accordingly.
EVEX.LLIG.66.0F.W1 2F /r VCOMISD xmm1, xmm2/m64{sae}BV/VAVX512FCompare low double precision floating-point values in xmm1 and xmm2/mem64 and set the EFLAGS flags accordingly.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BTuple1 ScalarModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Compares the double precision floating-point values in the low quadwords of operand 1 (first operand) and operand 2 (second operand), and sets the ZF, PF, and CF flags in the EFLAGS register according to the result (unordered, greater than, less than, or equal). The OF, SF, and AF flags in the EFLAGS register are set to 0. The unordered result is returned if either source operand is a NaN (QNaN or SNaN).

+

Operand 1 is an XMM register; operand 2 can be an XMM register or a 64 bit memory location. The COMISD instruction differs from the UCOMISD instruction in that it signals a SIMD floating-point invalid operation exception (#I) when a source operand is either a QNaN or SNaN. The UCOMISD instruction signals an invalid operation exception only if a source operand is an SNaN.

+

The EFLAGS register is not updated if an unmasked SIMD floating-point exception is generated.

+

VEX.vvvv and EVEX.vvvv are reserved and must be 1111b, otherwise instructions will #UD.

+

Software should ensure VCOMISD is encoded with VEX.L=0. Encoding VCOMISD with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+

COMISD (All Versions) + ¶ +

+
RESULT := OrderedCompare(DEST[63:0] <> SRC[63:0]) {
+(* Set EFLAGS *) CASE (RESULT) OF
+    UNORDERED: ZF,PF,CF := 111;
+    GREATER_THAN: ZF,PF,CF := 000;
+    LESS_THAN: ZF,PF,CF := 001;
+    EQUAL: ZF,PF,CF := 100;
+ESAC;
+OF, AF, SF := 0; }
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCOMISD int _mm_comi_round_sd(__m128d a, __m128d b, int imm, int sae);
+
+
VCOMISD int _mm_comieq_sd (__m128d a, __m128d b)
+
+
VCOMISD int _mm_comilt_sd (__m128d a, __m128d b)
+
+
VCOMISD int _mm_comile_sd (__m128d a, __m128d b)
+
+
VCOMISD int _mm_comigt_sd (__m128d a, __m128d b)
+
+
VCOMISD int _mm_comige_sd (__m128d a, __m128d b)
+
+
VCOMISD int _mm_comineq_sd (__m128d a, __m128d b)
+
+
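As a brief illustration (a sketch, not from the SDM; the helper names are hypothetical), the intrinsics above return a scalar 0/1 result derived from EFLAGS, in contrast to the mask-returning CMPSD family:

#include <emmintrin.h>

/* COMISD-style compare: scalar int computed from ZF/PF/CF.
   CMPSD-style compare: 64-bit all-0s/all-1s mask in the low quadword. */
static int     equals_flag(__m128d a, __m128d b) { return _mm_comieq_sd(a, b); }
static __m128d equals_mask(__m128d a, __m128d b) { return _mm_cmpeq_sd(a, b); }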

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (if SNaN or QNaN operands), Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-48, “Type E3NF Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf VEX.vvvv != 1111B or EVEX.vvvv != 1111B.
diff --git a/x86/comiss.html b/x86/comiss.html new file mode 100644 index 0000000..28cdea1 --- /dev/null +++ b/x86/comiss.html @@ -0,0 +1,114 @@ + +COMISS + — Compare Scalar Ordered Single Precision Floating-Point Values and Set EFLAGS

COMISS + — Compare Scalar Ordered Single Precision Floating-Point Values and Set EFLAGS

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 2F /r COMISS xmm1, xmm2/m32AV/VSSECompare low single precision floating-point values in xmm1 and xmm2/mem32 and set the EFLAGS flags accordingly.
VEX.LIG.0F.WIG 2F /r VCOMISS xmm1, xmm2/m32AV/VAVXCompare low single precision floating-point values in xmm1 and xmm2/mem32 and set the EFLAGS flags accordingly.
EVEX.LLIG.0F.W0 2F /r VCOMISS xmm1, xmm2/m32{sae}BV/VAVX512FCompare low single precision floating-point values in xmm1 and xmm2/mem32 and set the EFLAGS flags accordingly.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BTuple1 ScalarModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Compares the single precision floating-point values in the low doublewords of operand 1 (first operand) and operand 2 (second operand), and sets the ZF, PF, and CF flags in the EFLAGS register according to the result (unordered, greater than, less than, or equal). The OF, SF, and AF flags in the EFLAGS register are set to 0. The unordered result is returned if either source operand is a NaN (QNaN or SNaN).

+

Operand 1 is an XMM register; operand 2 can be an XMM register or a 32 bit memory location.

+

The COMISS instruction differs from the UCOMISS instruction in that it signals a SIMD floating-point invalid operation exception (#I) when a source operand is either a QNaN or SNaN. The UCOMISS instruction signals an invalid operation exception only if a source operand is an SNaN.

+

The EFLAGS register is not updated if an unmasked SIMD floating-point exception is generated.

+

VEX.vvvv and EVEX.vvvv are reserved and must be 1111b, otherwise instructions will #UD.

+

Software should ensure VCOMISS is encoded with VEX.L=0. Encoding VCOMISS with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+

COMISS (All Versions) + ¶ +

+
RESULT := OrderedCompare(DEST[31:0] <> SRC[31:0]) {
+(* Set EFLAGS *) CASE (RESULT) OF
+    UNORDERED: ZF,PF,CF := 111;
+    GREATER_THAN: ZF,PF,CF := 000;
+    LESS_THAN: ZF,PF,CF := 001;
+    EQUAL: ZF,PF,CF := 100;
+ESAC;
+OF, AF, SF := 0; }
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCOMISS int _mm_comi_round_ss(__m128 a, __m128 b, int imm, int sae);
+
+
VCOMISS int _mm_comieq_ss (__m128 a, __m128 b)
+
+
VCOMISS int _mm_comilt_ss (__m128 a, __m128 b)
+
+
VCOMISS int _mm_comile_ss (__m128 a, __m128 b)
+
+
VCOMISS int _mm_comigt_ss (__m128 a, __m128 b)
+
+
VCOMISS int _mm_comige_ss (__m128 a, __m128 b)
+
+
VCOMISS int _mm_comineq_ss (__m128 a, __m128 b)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (if SNaN or QNaN operands), Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-48, “Type E3NF Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf VEX.vvvv != 1111B or EVEX.vvvv != 1111B.
diff --git a/x86/cpuid.html b/x86/cpuid.html new file mode 100644 index 0000000..d1a28ac --- /dev/null +++ b/x86/cpuid.html @@ -0,0 +1,2180 @@ + +CPUID + — CPU Identification

CPUID + — CPU Identification

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F A2CPUIDZOValidValidReturns processor identification and feature information to the EAX, EBX, ECX, and EDX registers, as determined by input entered in EAX (in some cases, ECX as well).
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

The ID flag (bit 21) in the EFLAGS register indicates support for the CPUID instruction. If a software procedure can set and clear this flag, the processor executing the procedure supports the CPUID instruction. This instruction operates the same in non-64-bit modes and 64-bit mode.

+

CPUID returns processor identification and feature information in the EAX, EBX, ECX, and EDX registers.1 The instruction’s output is dependent on the contents of the EAX register upon execution (in some cases, ECX as well). For example, the following pseudocode loads EAX with 00H and causes CPUID to return a Maximum Return Value and the Vendor Identification String in the appropriate registers:

+

MOV EAX, 00H

+

CPUID

+
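The same query can be issued from C. The sketch below assumes the GCC/Clang <cpuid.h> helper (other toolchains expose different wrappers) and reassembles the vendor identification string from EBX, EDX, and ECX in that order:

#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned eax, ebx, ecx, edx;
    char vendor[13] = {0};
    if (__get_cpuid(0, &eax, &ebx, &ecx, &edx)) {   /* leaf 0: EAX = 00H */
        memcpy(vendor + 0, &ebx, 4);                /* "Genu" */
        memcpy(vendor + 4, &edx, 4);                /* "ineI" */
        memcpy(vendor + 8, &ecx, 4);                /* "ntel" */
        printf("max basic leaf = %u, vendor = %s\n", eax, vendor);
    }
    return 0;
}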

Table 3-8 shows information returned, depending on the initial value loaded into the EAX register.

+

Two types of information are returned: basic and extended function information. If a value entered for CPUID.EAX is higher than the maximum input value for basic or extended function for that processor then the data for the highest basic information leaf is returned. For example, using some Intel processors, the following is true:

+

CPUID.EAX = 05H (* Returns MONITOR/MWAIT leaf. *)

+

CPUID.EAX = 0AH (* Returns Architectural Performance Monitoring leaf. *)
CPUID.EAX = 0BH (* Returns Extended Topology Enumeration leaf. *)2
CPUID.EAX = 1FH (* Returns V2 Extended Topology Enumeration leaf. *)2

+

CPUID.EAX = 80000008H (* Returns linear/physical address size data. *)

+

CPUID.EAX = 8000000AH (* INVALID: Returns same information as CPUID.EAX = 0BH. *)

+

If a value entered for CPUID.EAX is less than or equal to the maximum input value and the leaf is not supported on that processor then 0 is returned in all the registers.

+

When CPUID returns the highest basic leaf information as a result of an invalid input EAX value, any dependence on input ECX value in the basic leaf is honored.

+

CPUID can be executed at any privilege level to serialize instruction execution. Serializing instruction execution guarantees that any modifications to flags, registers, and memory for previous instructions are completed before the next instruction is fetched and executed.

+

See also:

+

“Serializing Instructions” in Chapter 9, “Multiple-Processor Management,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A.

+

“Caching Translation Information” in Chapter 4, “Paging,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A.

+
+

1. On Intel 64 processors, CPUID clears the high 32 bits of the RAX/RBX/RCX/RDX registers in all modes.

+

2. CPUID leaf 1FH is a preferred superset to leaf 0BH. Intel recommends first checking for the existence of CPUID leaf 1FH before using leaf 0BH.

+
+ + + + + + + + + + + + + + + + + + + + + + + + +
Initial EAX ValueInformation Provided about the Processor
Basic CPUID Information
0HEAX Maximum Input Value for Basic CPUID Information. EBX “Genu” ECX “ntel” EDX “ineI”
01HEAX Version Information: Type, Family, Model, and Stepping ID (see Figure 3-6). EBX Bits 07-00: Brand Index. Bits 15-08: CLFLUSH line size (Value ∗ 8 = cache line size in bytes; used also by CLFLUSHOPT). Bits 23-16: Maximum number of addressable IDs for logical processors in this physical package*. Bits 31-24: Initial APIC ID**. ECX Feature Information (see Figure 3-7 and Table 3-10). EDX Feature Information (see Figure 3-8 and Table 3-11). NOTES: * The nearest power-of-2 integer that is not smaller than EBX[23:16] is the number of unique initial APIC IDs reserved for addressing different logical processors in a physical package. This field is only valid if CPUID.1.EDX.HTT[bit 28] = 1. ** The 8-bit initial APIC ID in EBX[31:24] is replaced by the 32-bit x2APIC ID, available in Leaf 0BH and Leaf 1FH.
02HEAX Cache and TLB Information (see Table 3-12). EBX Cache and TLB Information. ECX Cache and TLB Information. EDX Cache and TLB Information.
03HEAX Reserved. EBX Reserved. ECX Bits 00-31 of 96-bit processor serial number. (Available in Pentium III processor only; otherwise, the value in this register is reserved.) EDX Bits 32-63 of 96-bit processor serial number. (Available in Pentium III processor only; otherwise, the value in this register is reserved.) NOTES: Processor serial number (PSN) is not supported in the Pentium 4 processor or later. On all models, use the PSN flag (returned using CPUID) to check for PSN support before accessing the feature.
CPUID leaves above 2 and below 80000000H are visible only when IA32_MISC_ENABLE[bit 22] has its default value of 0.
Deterministic Cache Parameters Leaf (Initial EAX Value = 04H)
04HNOTES: Leaf 04H output depends on the initial value in ECX.* See also: “INPUT EAX = 04H: Returns Deterministic Cache Parameters for Each Level” on page 251. EAX Bits 04-00: Cache Type Field. 0 = Null - No more caches. 1 = Data Cache. 2 = Instruction Cache. 3 = Unified Cache. 4-31 = Reserved.
+
Table 3-8. Information Returned by CPUID Instruction
+
+ + + + + + + + + + + +
Initial EAX ValueInformation Provided about the Processor
Bits 07-05: Cache Level (starts at 1). Bit 08: Self Initializing cache level (does not need SW initialization). Bit 09: Fully Associative cache. Bits 13-10: Reserved. Bits 25-14: Maximum number of addressable IDs for logical processors sharing this cache**, ***. Bits 31-26: Maximum number of addressable IDs for processor cores in the physical package**, ****, *****. EBX Bits 11-00: L = System Coherency Line Size**. Bits 21-12: P = Physical Line partitions**. Bits 31-22: W = Ways of associativity**. ECX Bits 31-00: S = Number of Sets**. EDX Bit 00: Write-Back Invalidate/Invalidate. 0 = WBINVD/INVD from threads sharing this cache acts upon lower level caches for threads sharing this cache. 1 = WBINVD/INVD is not guaranteed to act upon lower level caches of non-originating threads sharing this cache. Bit 01: Cache Inclusiveness. 0 = Cache is not inclusive of lower cache levels. 1 = Cache is inclusive of lower cache levels. Bit 02: Complex Cache Indexing. 0 = Direct mapped cache. 1 = A complex function is used to index the cache, potentially using all address bits. Bits 31-03: Reserved = 0. NOTES: * If ECX contains an invalid sub leaf index, EAX/EBX/ECX/EDX return 0. Sub-leaf index n+1 is invalid if sub-leaf n returns EAX[4:0] as 0. ** Add one to the return value to get the result. ***The nearest power-of-2 integer that is not smaller than (1 + EAX[25:14]) is the number of unique initial APIC IDs reserved for addressing different logical processors sharing this cache. **** The nearest power-of-2 integer that is not smaller than (1 + EAX[31:26]) is the number of unique Core_IDs reserved for addressing different processor cores in a physical package. Core ID is a subset of bits of the initial APIC ID. ***** The returned value is constant for valid initial values in ECX. Valid ECX values start from 0.
MONITOR/MWAIT Leaf (Initial EAX Value = 05H)
05HEAX Bits 15-00: Smallest monitor-line size in bytes (default is processor's monitor granularity). Bits 31-16: Reserved = 0. EBX Bits 15-00: Largest monitor-line size in bytes (default is processor's monitor granularity). Bits 31-16: Reserved = 0. ECX Bit 00: Enumeration of Monitor-Mwait extensions (beyond EAX and EBX registers) supported. Bit 01: Supports treating interrupts as break-event for MWAIT, even when interrupts disabled. Bits 31-02: Reserved. EDX Bits 03-00: Number of C0* sub C-states supported using MWAIT. Bits 07-04: Number of C1* sub C-states supported using MWAIT. Bits 11-08: Number of C2* sub C-states supported using MWAIT. Bits 15-12: Number of C3* sub C-states supported using MWAIT. Bits 19-16: Number of C4* sub C-states supported using MWAIT. Bits 23-20: Number of C5* sub C-states supported using MWAIT. Bits 27-24: Number of C6* sub C-states supported using MWAIT. Bits 31-28: Number of C7* sub C-states supported using MWAIT.
+
Table 3-8. Information Returned by CPUID Instruction (Contd.)
+
+ + + + + + + + + + + +
Initial EAX ValueInformation Provided about the Processor
NOTE: * The definition of C0 through C7 states for MWAIT extension are processor-specific C-states, not ACPI C-states.
Thermal and Power Management Leaf (Initial EAX Value = 06H)
06HEAX Bit 00: Digital temperature sensor is supported if set. Bit 01: Intel Turbo Boost Technology available (see description of IA32_MISC_ENABLE[38]). Bit 02: ARAT. APIC-Timer-always-running feature is supported if set. Bit 03: Reserved. Bit 04: PLN. Power limit notification controls are supported if set. Bit 05: ECMD. Clock modulation duty cycle extension is supported if set. Bit 06: PTM. Package thermal management is supported if set. Bit 07: HWP. HWP base registers (IA32_PM_ENABLE[bit 0], IA32_HWP_CAPABILITIES, IA32_HWP_RE-QUEST, IA32_HWP_STATUS) are supported if set. Bit 08: HWP_Notification. IA32_HWP_INTERRUPT MSR is supported if set. Bit 09: HWP_Activity_Window. IA32_HWP_REQUEST[bits 41:32] is supported if set. Bit 10: HWP_Energy_Performance_Preference. IA32_HWP_REQUEST[bits 31:24] is supported if set. Bit 11: HWP_Package_Level_Request. IA32_HWP_REQUEST_PKG MSR is supported if set. Bit 12: Reserved. Bit 13: HDC. HDC base registers IA32_PKG_HDC_CTL, IA32_PM_CTL1, IA32_THREAD_STALL MSRs are supported if set. Bit 14: Intel® Turbo Boost Max Technology 3.0 available. Bit 15: HWP Capabilities. Highest Performance change is supported if set. Bit 16: HWP PECI override is supported if set. Bit 17: Flexible HWP is supported if set. Bit 18: Fast access mode for the IA32_HWP_REQUEST MSR is supported if set. Bit 19: HW_FEEDBACK. IA32_HW_FEEDBACK_PTR MSR, IA32_HW_FEEDBACK_CONFIG MSR, IA32_PACK-AGE_THERM_STATUS MSR bit 26, and IA32_PACKAGE_THERM_INTERRUPT MSR bit 25 are supported if set. Bit 20: Ignoring Idle Logical Processor HWP request is supported if set. Bits 22-21: Reserved. Bit 23: Intel® Thread Director supported if set. IA32_HW_FEEDBACK_CHAR and IA32_HW_FEEDBACK_-THREAD_CONFIG MSRs are supported if set. Bit 24: IA32_THERM_INTERRUPT MSR bit 25 is supported if set. Bits 31-25: Reserved. EBX Bits 03-00: Number of Interrupt Thresholds in Digital Thermal Sensor. Bits 31-04: Reserved. ECX Bit 00: Hardware Coordination Feedback Capability (Presence of IA32_MPERF and IA32_APERF). The capability to provide a measure of delivered processor performance (since last reset of the counters), as a percentage of the expected processor performance when running at the TSC frequency. Bits 02-01: Reserved = 0. Bit 03: The processor supports performance-energy bias preference if CPUID.06H:ECX.SETBH[bit 3] is set and it also implies the presence of a new architectural MSR called IA32_ENERGY_PERF_BIAS (1B0H). Bits 07-04: Reserved = 0. Bits 15-08: Number of Intel® Thread Director classes supported by the processor. Information for that many classes is written into the Intel Thread Director Table by the hardware. Bits 31-16: Reserved = 0.
+
Table 3-8. Information Returned by CPUID Instruction (Contd.)
+
+ + + + + + + + + + + +
Initial EAX ValueInformation Provided about the Processor
EDX Bits 07-00: Bitmap of supported hardware feedback interface capabilities. 0 = When set to 1, indicates support for performance capability reporting. 1 = When set to 1, indicates support for energy efficiency capability reporting. 2-7 = Reserved Bits 11-08: Enumerates the size of the hardware feedback interface structure in number of 4 KB pages; add one to the return value to get the result. Bits 31-16: Index (starting at 0) of this logical processor's row in the hardware feedback interface structure. Note that on some parts the index may be same for multiple logical processors. On some parts the indices may not be contiguous, i.e., there may be unused rows in the hardware feedback interface structure. NOTE: Bits 0 and 1 will always be set together.
Structured Extended Feature Flags Enumeration Leaf (Initial EAX Value = 07H, ECX = 0)
07HEAX Bits 31-00: Reports the maximum input value for supported leaf 7 sub-leaves. EBX Bit 00: FSGSBASE. Supports RDFSBASE/RDGSBASE/WRFSBASE/WRGSBASE if 1. Bit 01: IA32_TSC_ADJUST MSR is supported if 1. Bit 02: SGX. Supports Intel® Software Guard Extensions (Intel® SGX Extensions) if 1. Bit 03: BMI1. Bit 04: HLE. Bit 05: AVX2. Supports Intel® Advanced Vector Extensions 2 (Intel® AVX2) if 1. Bit 06: FDP_EXCPTN_ONLY. x87 FPU Data Pointer updated only on x87 exceptions if 1. Bit 07: SMEP. Supports Supervisor-Mode Execution Prevention if 1. Bit 08: BMI2. Bit 09: Supports Enhanced REP MOVSB/STOSB if 1. Bit 10: INVPCID. If 1, supports INVPCID instruction for system software that manages process-context identifiers. Bit 11: RTM. Bit 12: RDT-M. Supports Intel® Resource Director Technology (Intel® RDT) Monitoring capability if 1. Bit 13: Deprecates FPU CS and FPU DS values if 1. Bit 14: MPX. Supports Intel® Memory Protection Extensions if 1. Bit 15: RDT-A. Supports Intel® Resource Director Technology (Intel® RDT) Allocation capability if 1. Bit 16: AVX512F. Bit 17: AVX512DQ. Bit 18: RDSEED. Bit 19: ADX. Bit 20: SMAP. Supports Supervisor-Mode Access Prevention (and the CLAC/STAC instructions) if 1. Bit 21: AVX512_IFMA. Bit 22: Reserved. Bit 23: CLFLUSHOPT. Bit 24: CLWB. Bit 25: Intel Processor Trace. Bit 26: AVX512PF. (Intel® Xeon PhiTM only.) Bit 27: AVX512ER. (Intel® Xeon PhiTM only.) Bit 28: AVX512CD. Bit 29: SHA. supports Intel® Secure Hash Algorithm Extensions (Intel® SHA Extensions) if 1. Bit 30: AVX512BW. Bit 31: AVX512VL.
+
Table 3-8. Information Returned by CPUID Instruction (Contd.)
+
+ + + + + + +
Initial EAX ValueInformation Provided about the Processor
ECX Bit 00: PREFETCHWT1. (Intel® Xeon PhiTM only.) Bit 01: AVX512_VBMI. Bit 02: UMIP. Supports user-mode instruction prevention if 1. Bit 03: PKU. Supports protection keys for user-mode pages if 1. Bit 04: OSPKE. If 1, OS has set CR4.PKE to enable protection keys (and the RDPKRU/WRPKRU instructions). Bit 05: WAITPKG. Bit 06: AVX512_VBMI2. Bit 07: CET_SS. Supports CET shadow stack features if 1. Processors that set this bit define bits 1:0 of the IA32_U_CET and IA32_S_CET MSRs. Enumerates support for the following MSRs: IA32_INTERRUPT_SPP_TABLE_ADDR, IA32_PL3_SSP, IA32_PL2_SSP, IA32_PL1_SSP, and IA32_PL0_SSP. Bit 08: GFNI. Bit 09: VAES. Bit 10: VPCLMULQDQ. Bit 11: AVX512_VNNI. Bit 12: AVX512_BITALG. Bits 13: TME_EN. If 1, the following MSRs are supported: IA32_TME_CAPABILITY, IA32_TME_ACTIVATE, IA32_TME_EXCLUDE_MASK, and IA32_TME_EXCLUDE_BASE. Bit 14: AVX512_VPOPCNTDQ. Bit 15: Reserved. Bit 16: LA57. Supports 57-bit linear addresses and five-level paging if 1. Bits 21-17: The value of MAWAU used by the BNDLDX and BNDSTX instructions in 64-bit mode. Bit 22: RDPID and IA32_TSC_AUX are available if 1. Bit 23: KL. Supports Key Locker if 1. Bit 24: BUS_LOCK_DETECT. If 1, indicates support for OS bus-lock detection. Bit 25: CLDEMOTE. Supports cache line demote if 1. Bit 26: Reserved. Bit 27: MOVDIRI. Supports MOVDIRI if 1. Bit 28: MOVDIR64B. Supports MOVDIR64B if 1. Bit 29: ENQCMD. Supports Enqueue Stores if 1. Bit 30: SGX_LC. Supports SGX Launch Configuration if 1. Bit 31: PKS. Supports protection keys for supervisor-mode pages if 1. EDX Bit 00: Reserved. Bit 01: SGX-KEYS. If 1, Attestation Services for Intel® SGX is supported. Bit 02: AVX512_4VNNIW. (Intel® Xeon PhiTM only.) Bit 03: AVX512_4FMAPS. (Intel® Xeon PhiTM only.) Bit 04: Fast Short REP MOV. Bit 05: UINTR. If 1, the processor supports user interrupts. Bits 07-06: Reserved. Bit 08: AVX512_VP2INTERSECT. Bit 09: SRBDS_CTRL. If 1, enumerates support for the IA32_MCU_OPT_CTRL MSR and indicates its bit 0 (RNGDS_MITG_DIS) is also supported. Bit 10: MD_CLEAR supported. Bit 11: RTM_ALWAYS_ABORT. If set, any execution of XBEGIN immediately aborts and transitions to the specified fallback address. Bit 12: Reserved. Bit 13: If 1, RTM_FORCE_ABORT supported. Processors that set this bit support the IA32_TSX_FORCE_ABORT MSR. They allow software to set IA32_TSX_FORCE_ABORT[0] (RTM_FORCE_ABORT). Bit 14: SERIALIZE. Bit 15: Hybrid. If 1, the processor is identified as a hybrid part. If CPUID.0.MAXLEAF 1AH and CPUID.1A.EAX ≠ 0, then the Native Model ID Enumeration Leaf 1AH exists. Bit 16: TSXLDTRK. If 1, the processor supports Intel TSX suspend/resume of load address tracking.
+
Table 3-8. Information Returned by CPUID Instruction (Contd.)
+
+ + + + + + + + + + + +
Initial EAX ValueInformation Provided about the Processor
Bit 17: Reserved. Bit 18: PCONFIG. Supports PCONFIG if 1. Bit 19: Architectural LBRs. If 1, indicates support for architectural LBRs. Bit 20: CET_IBT. Supports CET indirect branch tracking features if 1. Processors that set this bit define bits 5:2 and bits 63:10 of the IA32_U_CET and IA32_S_CET MSRs. Bit 21: Reserved. Bit 22: AMX-BF16. If 1, the processor supports tile computational operations on bfloat16 numbers. Bit 23: AVX512_FP16. Bit 24: AMX-TILE. If 1, the processor supports tile architecture. Bits 25: AMX-INT8. If 1, the processor supports tile computational operations on 8-bit integers. Bit 26: Enumerates support for indirect branch restricted speculation (IBRS) and the indirect branch predictor barrier (IBPB). Processors that set this bit support the IA32_SPEC_CTRL MSR and the IA32_PRED_CMD MSR. They allow software to set IA32_SPEC_CTRL[0] (IBRS) and IA32_PRED_CMD[0] (IBPB). Bit 27: Enumerates support for single thread indirect branch predictors (STIBP). Processors that set this bit support the IA32_SPEC_CTRL MSR. They allow software to set IA32_SPEC_CTRL[1] (STIBP). Bit 28: Enumerates support for L1D_FLUSH. Processors that set this bit support the IA32_FLUSH_CMD MSR. They allow software to set IA32_FLUSH_CMD[0] (L1D_FLUSH). Bit 29: Enumerates support for the IA32_ARCH_CAPABILITIES MSR. Bit 30: Enumerates support for the IA32_CORE_CAPABILITIES MSR. IA32_CORE_CAPABILITIES is an architectural MSR that enumerates model-specific features. A bit being set in this MSR indicates that a model specific feature is supported; software must still consult CPUID family/model/stepping to determine the behavior of the enumerated feature as features enumerated in IA32_CORE_CAPABILITIES may have different behavior on different processor models. Some of these features may have behavior that is consistent across processor models (and for which consultation of CPUID family/model/stepping is not necessary); such features are identified explicitly where they are documented in this manual. Bit 31: Enumerates support for Speculative Store Bypass Disable (SSBD). Processors that set this bit support the IA32_SPEC_CTRL MSR. They allow software to set IA32_SPEC_CTRL[2] (SSBD). NOTE: * If ECX contains an invalid sub-leaf index, EAX/EBX/ECX/EDX return 0. Sub-leaf index n is invalid if n exceeds the value that sub-leaf 0 returns in EAX.
Structured Extended Feature Enumeration Sub-leaf (Initial EAX Value = 07H, ECX = 1)
07HNOTES: Leaf 07H output depends on the initial value in ECX. If ECX contains an invalid sub leaf index, EAX/EBX/ECX/EDX return 0. EAX This field reports 0 if the sub-leaf index, 1, is invalid. Bits 03-00: Reserved. Bit 04: AVX-VNNI. AVX (VEX-encoded) versions of the Vector Neural Network Instructions. Bit 05: AVX512_BF16. Vector Neural Network Instructions supporting BFLOAT16 inputs and conversion instructions from IEEE single precision. Bits 09-06: Reserved. Bit 10: If 1, supports fast zero-length REP MOVSB. Bit 11: If 1, supports fast short REP STOSB. Bit 12: If 1, supports fast short REP CMPSB, REP SCASB. Bits 21-13: Reserved. Bit 22: HRESET. If 1, supports history reset via the HRESET instruction and the IA32_HRESET_ENABLE MSR. When set, indicates that the Processor History Reset Leaf (EAX = 20H) is valid. Bits 29-23: Reserved.
+
Table 3-8. Information Returned by CPUID Instruction (Contd.)
+
+ + + + + + + + + + + + + + + + +
Initial EAX ValueInformation Provided about the Processor
Bit 30: INVD_DISABLE_POST_BIOS_DONE. If 1, supports INVD execution prevention after BIOS Done. Bit 31: Reserved. EBX This field reports 0 if the sub-leaf index, 1, is invalid. Bit 00: Enumerates the presence of the IA32_PPIN and IA32_PPIN_CTL MSRs. If 1, these MSRs are supported. Bits 31-01: Reserved. ECX This field reports 0 if the sub-leaf index, 1, is invalid; otherwise it is reserved. EDX This field reports 0 if the sub-leaf index, 1, is invalid. Bits 17-00: Reserved. Bit 18: CET_SSS. If 1, indicates that an operating system can enable supervisor shadow stacks as long as it ensures that a supervisor shadow stack cannot become prematurely busy due to page faults (see Section 17.2.3 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1). When emulating the CPUID instruction, a virtual-machine monitor (VMM) should return this bit as 1 only if it ensures that VM exits cannot cause a guest supervisor shadow stack to appear to be prematurely busy. Such a VMM could set the “prematurely busy shadow stack” VM-exit control and use the additional information that it provides. Bits 31-19: Reserved.
Structured Extended Feature Enumeration Sub-leaf (Initial EAX Value = 07H, ECX = 2)
07HNOTES: Leaf 07H output depends on the initial value in ECX. If ECX contains an invalid sub leaf index, EAX/EBX/ECX/EDX return 0. EAX This field reports 0 if the sub-leaf index, 2, is invalid; otherwise it is reserved. EBX This field reports 0 if the sub-leaf index, 2, is invalid; otherwise it is reserved. ECX This field reports 0 if the sub-leaf index, 2, is invalid; otherwise it is reserved. EDX This field reports 0 if the sub-leaf index, 2, is invalid. Bit 00: PSFD. If 1, indicates bit 7 of the IA32_SPEC_CTRL MSR is supported. Bit 7 of this MSR disables Fast Store Forwarding Predictor without disabling Speculative Store Bypass. Bit 01: IPRED_CTRL. If 1, indicates bits 3 and 4 of the IA32_SPEC_CTRL MSR are supported. Bit 3 of this MSR enables IPRED_DIS control for CPL3. Bit 4 of this MSR enables IPRED_DIS control for CPL0/1/2. Bit 02: RRSBA_CTRL. If 1, indicates bits 5 and 6 of the IA32_SPEC_CTRL MSR are supported. Bit 5 of this MSR disables RRSBA behavior for CPL3. Bit 6 of this MSR disables RRSBA behavior for CPL0/1/2. Bit 03: DDPD_U. If 1, indicates bit 8 of the IA32_SPEC_CTRL MSR is supported. Bit 8 of this MSR disables Data Dependent Prefetcher. Bit 04: BHI_CTRL. If 1, indicates bit 10 of the IA32_SPEC_CTRL MSR is supported. Bit 10 of this MSR enables BHI_DIS_S behavior. Bit 05: MCDT_NO. Processors that enumerate this bit as 1 do not exhibit MXCSR Configuration Dependent Timing (MCDT) behavior and do not need to be mitigated to avoid data-dependent behavior for certain instructions. Bits 31-06: Reserved.
Direct Cache Access Information Leaf (Initial EAX Value = 09H)
09HEAX Value of bits [31:0] of IA32_PLATFORM_DCA_CAP MSR (address 1F8H). EBX Reserved. ECX Reserved. EDX Reserved.
+
Table 3-8. Information Returned by CPUID Instruction (Contd.)
+
+ + + + + + + + + + + + + +
Initial EAX ValueInformation Provided about the Processor
Architectural Performance Monitoring Leaf (Initial EAX Value = 0AH)
0AHEAX Bits 07-00: Version ID of architectural performance monitoring. Bits 15-08: Number of general-purpose performance monitoring counter per logical processor. Bits 23-16: Bit width of general-purpose, performance monitoring counter. Bits 31-24: Length of EBX bit vector to enumerate architectural performance monitoring events. Architectural event x is supported if EBX[x]=0 && EAX[31:24]>x. EBX Bit 00: Core cycle event not available if 1 or if EAX[31:24]<1. Bit 01: Instruction retired event not available if 1 or if EAX[31:24]<2. Bit 02: Reference cycles event not available if 1 or if EAX[31:24]<3. Bit 03: Last-level cache reference event not available if 1 or if EAX[31:24]<4. Bit 04: Last-level cache misses event not available if 1 or if EAX[31:24]<5. Bit 05: Branch instruction retired event not available if 1 or if EAX[31:24]<6. Bit 06: Branch mispredict retired event not available if 1 or if EAX[31:24]<7. Bit 07: Top-down slots event not available if 1 or if EAX[31:24]<8. Bits 31-08: Reserved = 0. ECX Bits 31-00: Supported fixed counters bit mask. Fixed-function performance counter 'i' is supported if bit ‘i’ is 1 (first counter index starts at zero). It is recommended to use the following logic to determine if a Fixed Counter is supported: FxCtr[i]_is_supported := ECX[i] || (EDX[4:0] > i); EDX Bits 04-00: Number of contiguous fixed-function performance counters starting from 0 (if Version ID > 1). Bits 12-05: Bit width of fixed-function performance counters (if Version ID > 1). Bits 14-13: Reserved = 0. Bit 15: AnyThread deprecation. Bits 31-16: Reserved = 0.
Extended Topology Enumeration Leaf (Initial EAX Value = 0BH)
0BHNOTES: CPUID leaf 1FH is a preferred superset to leaf 0BH. Intel recommends first checking for the existence of Leaf 1FH before using leaf 0BH. The sub-leaves of CPUID leaf 0BH describe an ordered hierarchy of logical processors starting from the smallest-scoped domain of a Logical Processor (sub-leaf index 0) to the Core domain (sub-leaf index 1) to the largest-scoped domain (the last valid sub-leaf index) that is implicitly subordinate to the unenumerated highest-scoped domain of the processor package (socket). The details of each valid domain is enumerated by a corresponding sub-leaf. Details for a domain include its type and how all instances of that domain determine the number of logical processors and x2 APIC ID partitioning at the next higher-scoped domain. The ordering of domains within the hierarchy is fixed architecturally as shown below. For a given processor, not all domains may be relevant or enumerated; however, the logical processor and core domains are always enumerated. For two valid sub-leaves N and N+1, sub-leaf N+1 represents the next immediate higher-scoped domain with respect to the domain of sub-leaf N for the given processor. If sub-leaf index “N” returns an invalid domain type in ECX[15:08] (00H), then all sub-leaves with an index greater than “N” shall also return an invalid domain type. A sub-leaf returning an invalid domain always returns 0 in EAX and EBX. EAX Bits 04-00: The number of bits that the x2APIC ID must be shifted to the right to address instances of the next higher-scoped domain. When logical processor is not supported by the processor, the value of this field at the Logical Processor domain sub-leaf may be returned as either 0 (no allocated bits in the x2APIC ID) or 1 (one allocated bit in the x2APIC ID); software should plan accordingly. Bits 31-05: Reserved.
+
Table 3-8. Information Returned by CPUID Instruction (Contd.)
+
+ + + + + + + + + + + +
Initial EAX ValueInformation Provided about the Processor
EBX Bits 15-00: The number of logical processors across all instances of this domain within the next higher-scoped domain. (For example, in a processor socket/package comprising “M” dies of “N” cores each, where each core has “L” logical processors, the “die” domain sub-leaf value of this field would be M*N*L.) This number reflects configuration as shipped by Intel. Note, software must not use this field to enumerate processor topology*. Bits 31-16: Reserved. ECX Bits 07-00: The input ECX sub-leaf index. Bits 15-08: Domain Type. This field provides an identification value which indicates the domain as shown below. Although domains are ordered, their assigned identification values are not and software should not depend on it. Domain Domain Type Identification Value Hierarchy Lowest Logical Processor 1 Highest Core 2 (Note that enumeration values of 0 and 3-255 are reserved.) Bits 31-16: Reserved. EDX Bits 31-00: x2APIC ID of the current logical processor. NOTES: * Software must not use the value of EBX[15:0] to enumerate processor topology of the system. The value is only intended for display and diagnostic purposes. The actual number of logical processors available to BIOS/OS/Applications may be different from the value of EBX[15:0], depending on software and platform hardware configurations.
Processor Extended State Enumeration Main Leaf (Initial EAX Value = 0DH, ECX = 0)
0DHNOTES: Leaf 0DH main leaf (ECX = 0). EAX Bits 31-00: Reports the supported bits of the lower 32 bits of XCR0. XCR0[n] can be set to 1 only if EAX[n] is 1. Bit 00: x87 state. Bit 01: SSE state. Bit 02: AVX state. Bits 04-03: MPX state. Bits 07-05: AVX-512 state. Bit 08: Used for IA32_XSS. Bit 09: PKRU state. Bits 16-10: Used for IA32_XSS. Bit 17: TILECFG state. Bit 18: TILEDATA state. Bits 31-19: Reserved. EBX Bits 31-00: Maximum size (bytes, from the beginning of the XSAVE/XRSTOR save area) required by enabled features in XCR0. May be different than ECX if some features at the end of the XSAVE save area are not enabled. ECX Bit 31-00: Maximum size (bytes, from the beginning of the XSAVE/XRSTOR save area) of the XSAVE/XRSTOR save area required by all supported features in the processor, i.e., all the valid bit fields in XCR0. EDX Bit 31-00: Reports the supported bits of the upper 32 bits of XCR0. XCR0[n+32] can be set to 1 only if EDX[n] is 1. Bits 31-00: Reserved.
Processor Extended State Enumeration Sub-leaf (Initial EAX Value = 0DH, ECX = 1)
0DHEAX Bit 00: XSAVEOPT is available. Bit 01: Supports XSAVEC and the compacted form of XRSTOR if set. Bit 02: Supports XGETBV with ECX = 1 if set. Bit 03: Supports XSAVES/XRSTORS and IA32_XSS if set. Bit 04: Supports extended feature disable (XFD) if set. Bits 31-05: Reserved. EBX Bits 31-00: The size in bytes of the XSAVE area containing all states enabled by XCR0 | IA32_XSS. NOTES: If EAX[3] is enumerated as 0 and EAX[1] is enumerated as 1, EBX enumerates the size of the XSAVE area containing all states enabled by XCR0. If EAX[1] and EAX[3] are both enumerated as 0, EBX enumerates zero. ECX Bits 31-00: Reports the supported bits of the lower 32 bits of the IA32_XSS MSR. IA32_XSS[n] can be set to 1 only if ECX[n] is 1. Bits 07-00: Used for XCR0. Bit 08: PT state. Bit 09: Used for XCR0. Bit 10: PASID state. Bit 11: CET user state. Bit 12: CET supervisor state. Bit 13: HDC state. Bit 14: UINTR state. Bit 15: LBR state (only for the architectural LBR feature). Bit 16: HWP state. Bits 18-17: Used for XCR0. Bits 31-19: Reserved. EDX Bits 31-00: Reports the supported bits of the upper 32 bits of the IA32_XSS MSR. IA32_XSS[n+32] can be set to 1 only if EDX[n] is 1. Bits 31-00: Reserved.
Processor Extended State Enumeration Sub-leaves (Initial EAX Value = 0DH, ECX = n, n > 1)
0DHNOTES: Leaf 0DH output depends on the initial value in ECX. Each sub-leaf index (starting at position 2) is supported if it corresponds to a supported bit in either the XCR0 register or the IA32_XSS MSR. * If ECX contains an invalid sub-leaf index, EAX/EBX/ECX/EDX return 0. Sub-leaf n (0 ≤ n ≤ 31) is invalid if sub-leaf 0 returns 0 in EAX[n] and sub-leaf 1 returns 0 in ECX[n]. Sub-leaf n (32 ≤ n ≤ 63) is invalid if sub-leaf 0 returns 0 in EDX[n-32] and sub-leaf 1 returns 0 in EDX[n-32]. EAX Bits 31-00: The size in bytes (from the offset specified in EBX) of the save area for an extended state feature associated with a valid sub-leaf index, n. EBX Bits 31-00: The offset in bytes of this extended state component’s save area from the beginning of the XSAVE/XRSTOR area. This field reports 0 if the sub-leaf index, n, does not map to a valid bit in the XCR0 register*. ECX Bit 00 is set if the bit n (corresponding to the sub-leaf index) is supported in the IA32_XSS MSR; it is clear if bit n is instead supported in XCR0. Bit 01 is set if, when the compacted format of an XSAVE area is used, this extended state component is located on the next 64-byte boundary following the preceding state component (otherwise, it is located immediately following the preceding state component). Bits 31-02 are reserved. This field reports 0 if the sub-leaf index, n, is invalid*.
EDX This field reports 0 if the sub-leaf index, n, is invalid*; otherwise it is reserved.
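The per-component sub-leaves can be walked as sketched below (again assuming the GCC/Clang <cpuid.h> helper __get_cpuid_count; a sketch, not Intel reference code). Validity of sub-leaf n is derived from sub-leaves 0 and 1 exactly as the note above describes.

#include <cpuid.h>
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid_count(0x0D, 0, &eax, &ebx, &ecx, &edx))
        return 1;
    uint64_t xcr0_bits = ((uint64_t)edx << 32) | eax;   /* supported in XCR0 */

    if (!__get_cpuid_count(0x0D, 1, &eax, &ebx, &ecx, &edx))
        return 1;
    uint64_t xss_bits = ((uint64_t)edx << 32) | ecx;    /* supported in IA32_XSS */

    for (unsigned int n = 2; n < 64; n++) {
        if (!(((xcr0_bits | xss_bits) >> n) & 1))
            continue;                                   /* sub-leaf n is invalid */
        __get_cpuid_count(0x0D, n, &eax, &ebx, &ecx, &edx);
        printf("component %2u: size %5u bytes, offset %5u, %s, %s\n",
               n, eax, ebx,
               (ecx & 1) ? "IA32_XSS-managed" : "XCR0-managed",
               (ecx & 2) ? "64-byte aligned (compacted)" : "packed (compacted)");
    }
    return 0;
}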
Intel® Resource Director Technology (Intel® RDT) Monitoring Enumeration Sub-leaf (Initial EAX Value = 0FH, ECX = 0)
0FHNOTES: Leaf 0FH output depends on the initial value in ECX. Sub-leaf index 0 reports valid resource type starting at bit position 1 of EDX. EAX Reserved. EBX Bits 31-00: Maximum range (zero-based) of RMID within this physical processor of all types. ECX Reserved. EDX Bit 00: Reserved. Bit 01: Supports L3 Cache Intel RDT Monitoring if 1. Bits 31-02: Reserved.
L3 Cache Intel® RDT Monitoring Capability Enumeration Sub-leaf (Initial EAX Value = 0FH, ECX = 1)
0FHNOTES: Leaf 0FH output depends on the initial value in ECX. EAX Bits 07-00:The counter width is encoded as an offset from 24b. A value of zero in this field indicates that 24-bit counters are supported. A value of 8 in this field indicates that 32-bit counters are supported. Bit 08: If 1, indicates the presence of an overflow bit in the IA32_QM_CTR MSR (bit 61). Bit 09: If 1, indicates the presence of non-CPU agent Intel RDT CMT support. Bit 10: If 1, indicates the presence of non-CPU agent Intel RDT MBM support. Bits 31-11: Reserved. EBX Bits 31-00: Conversion factor from reported IA32_QM_CTR value to occupancy metric (bytes) and Memory Bandwidth Monitoring (MBM) metrics. ECX Maximum range (zero-based) of RMID of this resource type. EDX Bit 00: Supports L3 occupancy monitoring if 1. Bit 01: Supports L3 Total Bandwidth monitoring if 1. Bit 02: Supports L3 Local Bandwidth monitoring if 1. Bits 31-03: Reserved.
Intel® Resource Director Technology (Intel® RDT) Allocation Enumeration Sub-leaf (Initial EAX Value = 10H, ECX = 0)
10HNOTES: Leaf 10H output depends on the initial value in ECX. Sub-leaf index 0 reports valid resource identification (ResID) starting at bit position 1 of EBX. EAX Reserved. EBX Bit 00: Reserved. Bit 01: Supports L3 Cache Allocation Technology if 1. Bit 02: Supports L2 Cache Allocation Technology if 1. Bit 03: Supports Memory Bandwidth Allocation if 1. Bits 31-04: Reserved. ECX Reserved. EDX Reserved.
L3 Cache Allocation Technology Enumeration Sub-leaf (Initial EAX Value = 10H, ECX = ResID =1)
10HNOTES: Leaf 10H output depends on the initial value in ECX.
EAX Bits 04-00: Length of the capacity bit mask for the corresponding ResID. Add one to the return value to get the result. Bits 31-05: Reserved. EBX Bits 31-00: Bit-granular map of isolation/contention of allocation units. ECX Bit 00: Reserved. Bit 01: If 1, indicates L3 CAT for non-CPU agents is supported. Bit 02: If 1, indicates L3 Code and Data Prioritization Technology is supported. Bit 03: If 1, indicates non-contiguous capacity bitmask is supported. The bits that are set in the various IA32_L3_MASK_n registers do not have to be contiguous. Bits 31-04: Reserved. EDX Bits 15-00: Highest Class of Service (COS) number supported for this ResID. Bits 31-16: Reserved.
L2 Cache Allocation Technology Enumeration Sub-leaf (Initial EAX Value = 10H, ECX = ResID =2)
10HNOTES: Leaf 10H output depends on the initial value in ECX. EAX Bits 04-00: Length of the capacity bit mask for the corresponding ResID. Add one to the return value to get the result. Bits 31-05: Reserved. EBX Bits 31-00: Bit-granular map of isolation/contention of allocation units. ECX Bits 01-00: Reserved. Bit 02: CDP. If 1, indicates L2 Code and Data Prioritization Technology is supported. Bit 03: If 1, indicates non-contiguous capacity bitmask is supported. The bits that are set in the various IA32_L2_MASK_n registers do not have to be contiguous. Bits 31-04: Reserved. EDX Bits 15-00: Highest COS number supported for this ResID. Bits 31-16: Reserved.
Memory Bandwidth Allocation Enumeration Sub-leaf (Initial EAX Value = 10H, ECX = ResID =3)
10HNOTES: Leaf 10H output depends on the initial value in ECX. EAX Bits 11-00: Reports the maximum MBA throttling value supported for the corresponding ResID. Add one to the return value to get the result. Bits 31-12: Reserved. EBX Bits 31-00: Reserved. ECX Bits 01-00: Reserved. Bit 02: Reports whether the response of the delay values is linear. Bits 31-03: Reserved. EDX Bits 15-00: Highest COS number supported for this ResID. Bits 31-16: Reserved.
Intel® SGX Capability Enumeration Leaf, Sub-leaf 0 (Initial EAX Value = 12H, ECX = 0)
12HNOTES: Leaf 12H sub-leaf 0 (ECX = 0) is supported if CPUID.(EAX=07H, ECX=0H):EBX[SGX] = 1.
EAX Bit 00: SGX1. If 1, Indicates Intel SGX supports the collection of SGX1 leaf functions. Bit 01: SGX2. If 1, Indicates Intel SGX supports the collection of SGX2 leaf functions. Bits 04-02: Reserved. Bit 05: If 1, indicates Intel SGX supports ENCLV instruction leaves EINCVIRTCHILD, EDECVIRTCHILD, and ESETCONTEXT. Bit 06: If 1, indicates Intel SGX supports ENCLS instruction leaves ETRACKC, ERDINFO, ELDBC, and ELDUC. Bit 07: If 1, indicates Intel SGX supports ENCLU instruction leaf EVERIFYREPORT2. Bits 09-08: Reserved. Bit 10: If 1, indicates Intel SGX supports ENCLS instruction leaf EUPDATESVN. Bit 11: If 1, indicates Intel SGX supports ENCLU instruction leaf EDECCSSA. Bits 31-12: Reserved. EBX Bits 31-00: MISCSELECT. Bit vector of supported extended SGX features. ECX Bits 31-00: Reserved. EDX Bits 07-00: MaxEnclaveSize_Not64. The maximum supported enclave size in non-64-bit mode is 2^(EDX[7:0]). Bits 15-08: MaxEnclaveSize_64. The maximum supported enclave size in 64-bit mode is 2^(EDX[15:8]). Bits 31-16: Reserved.
Intel SGX Attributes Enumeration Leaf, Sub-leaf 1 (Initial EAX Value = 12H, ECX = 1)
12HNOTES: Leaf 12H sub-leaf 1 (ECX = 1) is supported if CPUID.(EAX=07H, ECX=0H):EBX[SGX] = 1. EAX Bit 31-00: Reports the valid bits of SECS.ATTRIBUTES[31:0] that software can set with ECREATE. EBX Bit 31-00: Reports the valid bits of SECS.ATTRIBUTES[63:32] that software can set with ECREATE. ECX Bit 31-00: Reports the valid bits of SECS.ATTRIBUTES[95:64] that software can set with ECREATE. EDX Bit 31-00: Reports the valid bits of SECS.ATTRIBUTES[127:96] that software can set with ECREATE.
Intel® SGX EPC Enumeration Leaf, Sub-leaves (Initial EAX Value = 12H, ECX = 2 or higher)
12HNOTES: Leaf 12H sub-leaf 2 or higher (ECX >= 2) is supported if CPUID.(EAX=07H, ECX=0H):EBX[SGX] = 1. For sub-leaves (ECX = 2 or higher), definition of EDX,ECX,EBX,EAX[31:4] depends on the sub-leaf type listed below. EAX Bit 03-00: Sub-leaf Type 0000b: Indicates this sub-leaf is invalid. 0001b: This sub-leaf enumerates an EPC section. EBX:EAX and EDX:ECX provide information on the Enclave Page Cache (EPC) section. All other type encodings are reserved. Type 0000b. This sub-leaf is invalid. EDX:ECX:EBX:EAX return 0.
Type 0001b. This sub-leaf enumerates an EPC section with EDX:ECX, EBX:EAX defined as follows. EAX[11:04]: Reserved (enumerate 0). EAX[31:12]: Bits 31:12 of the physical address of the base of the EPC section. EBX[19:00]: Bits 51:32 of the physical address of the base of the EPC section. EBX[31:20]: Reserved. ECX[03:00]: EPC section property encoding defined as follows: If ECX[3:0] = 0000b, then all bits of the EDX:ECX pair are enumerated as 0. If ECX[3:0] = 0001b, then this section has confidentiality and integrity protection. If ECX[3:0] = 0010b, then this section has confidentiality protection only. All other encodings are reserved. ECX[11:04]: Reserved (enumerate 0). ECX[31:12]: Bits 31:12 of the size of the corresponding EPC section within the Processor Reserved Memory. EDX[19:00]: Bits 51:32 of the size of the corresponding EPC section within the Processor Reserved Memory. EDX[31:20]: Reserved.
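A hedged sketch of EPC-section enumeration follows (GCC/Clang <cpuid.h> assumed; not Intel reference code). It reassembles the split base and size fields described above; stopping at the first invalid sub-leaf is a simplification of this sketch, not a requirement of the leaf.

#include <cpuid.h>
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* SGX support is enumerated by CPUID.(EAX=07H,ECX=0):EBX[2]. */
    if (!__get_cpuid_count(0x07, 0, &eax, &ebx, &ecx, &edx) || !(ebx & (1u << 2)))
        return 1;

    for (unsigned int sub = 2; ; sub++) {
        __get_cpuid_count(0x12, sub, &eax, &ebx, &ecx, &edx);
        if ((eax & 0xF) != 1)                 /* type 0000b: invalid sub-leaf */
            break;
        uint64_t base = ((uint64_t)(ebx & 0xFFFFF) << 32) | (eax & 0xFFFFF000u);
        uint64_t size = ((uint64_t)(edx & 0xFFFFF) << 32) | (ecx & 0xFFFFF000u);
        printf("EPC section %u: base 0x%012llx, size 0x%012llx, property %u\n",
               sub - 2, (unsigned long long)base, (unsigned long long)size,
               ecx & 0xF);
    }
    return 0;
}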
Intel® Processor Trace Enumeration Main Leaf (Initial EAX Value = 14H, ECX = 0)
14HNOTES: Leaf 14H main leaf (ECX = 0). EAX Bits 31-00: Reports the maximum sub-leaf supported in leaf 14H. EBX Bit 00: If 1, indicates that IA32_RTIT_CTL.CR3Filter can be set to 1, and that IA32_RTIT_CR3_MATCH MSR can be accessed. Bit 01: If 1, indicates support of Configurable PSB and Cycle-Accurate Mode. Bit 02: If 1, indicates support of IP Filtering, TraceStop filtering, and preservation of Intel PT MSRs across warm reset. Bit 03: If 1, indicates support of MTC timing packet and suppression of COFI-based packets. Bit 04: If 1, indicates support of PTWRITE. Writes can set IA32_RTIT_CTL[12] (PTWEn) and IA32_RTIT_CTL[5] (FUPonPTW), and PTWRITE can generate packets. Bit 05: If 1, indicates support of Power Event Trace. Writes can set IA32_RTIT_CTL[4] (PwrEvtEn), enabling Power Event Trace packet generation. Bit 06: If 1, indicates support for PSB and PMI preservation. Writes can set IA32_RTIT_CTL[56] (InjectPsbPmiOnEnable), enabling the processor to set IA32_RTIT_STATUS[7] (PendTopaPMI) and/or IA32_R-TIT_STATUS[6] (PendPSB) in order to preserve ToPA PMIs and/or PSBs otherwise lost due to Intel PT disable. Writes can also set PendToPAPMI and PendPSB. Bit 07: If 1, writes can set IA32_RTIT_CTL[31] (EventEn), enabling Event Trace packet generation. Bit 08: If 1, writes can set IA32_RTIT_CTL[55] (DisTNT), disabling TNT packet generation. Bit 31-09: Reserved. ECX Bit 00: If 1, Tracing can be enabled with IA32_RTIT_CTL.ToPA = 1, hence utilizing the ToPA output scheme; IA32_RTIT_OUTPUT_BASE and IA32_RTIT_OUTPUT_MASK_PTRS MSRs can be accessed. Bit 01: If 1, ToPA tables can hold any number of output entries, up to the maximum allowed by the MaskOrTableOffset field of IA32_RTIT_OUTPUT_MASK_PTRS. Bit 02: If 1, indicates support of Single-Range Output scheme. Bit 03: If 1, indicates support of output to Trace Transport subsystem. Bit 30-04: Reserved. Bit 31: If 1, generated packets which contain IP payloads have LIP values, which include the CS base component. EDX Bits 31-00: Reserved.
Intel® Processor Trace Enumeration Sub-leaf (Initial EAX Value = 14H, ECX = 1)
14HEAX Bits 02-00: Number of configurable Address Ranges for filtering. Bits 15-03: Reserved. Bits 31-16: Bitmap of supported MTC period encodings. EBX Bits 15-00: Bitmap of supported Cycle Threshold value encodings. Bit 31-16: Bitmap of supported Configurable PSB frequency encodings. ECX Bits 31-00: Reserved. EDX Bits 31-00: Reserved.
Time Stamp Counter and Nominal Core Crystal Clock Information Leaf (Initial EAX Value = 15H)
15HNOTES: If EBX[31:0] is 0, the TSC/”core crystal clock” ratio is not enumerated. EBX[31:0]/EAX[31:0] indicates the ratio of the TSC frequency and the core crystal clock frequency. If ECX is 0, the nominal core crystal clock frequency is not enumerated. “TSC frequency” = “core crystal clock frequency” * EBX/EAX. The core crystal clock may differ from the reference clock, bus clock, or core clock frequencies. EAX Bits 31-00: An unsigned integer which is the denominator of the TSC/”core crystal clock” ratio. EBX Bits 31-00: An unsigned integer which is the numerator of the TSC/”core crystal clock” ratio. ECX Bits 31-00: An unsigned integer which is the nominal frequency of the core crystal clock in Hz. EDX Bits 31-00: Reserved = 0.
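When all three fields are enumerated, the nominal TSC frequency can be derived directly from the formula above. A minimal sketch follows, assuming the GCC/Clang <cpuid.h> helper __get_cpuid_count.

#include <cpuid.h>
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid_count(0x15, 0, &eax, &ebx, &ecx, &edx))
        return 1;                               /* leaf 15H not supported */
    if (eax == 0 || ebx == 0 || ecx == 0) {
        puts("TSC/crystal ratio or crystal frequency not enumerated");
        return 1;
    }
    /* "TSC frequency" = "core crystal clock frequency" * EBX / EAX */
    uint64_t tsc_hz = (uint64_t)ecx * ebx / eax;
    printf("core crystal clock: %u Hz\n", ecx);
    printf("TSC frequency     : %llu Hz\n", (unsigned long long)tsc_hz);
    return 0;
}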
Processor Frequency Information Leaf (Initial EAX Value = 16H)
16HEAX Bits 15-00: Processor Base Frequency (in MHz). Bits 31-16: Reserved =0. EBX Bits 15-00: Maximum Frequency (in MHz). Bits 31-16: Reserved = 0. ECX Bits 15-00: Bus (Reference) Frequency (in MHz). Bits 31-16: Reserved = 0. EDX Reserved. NOTES: * Data is returned from this interface in accordance with the processor's specification and does not reflect actual values. Suitable use of this data includes the display of processor information in like manner to the processor brand string and for determining the appropriate range to use when displaying processor information e.g. frequency history graphs. The returned information should not be used for any other purpose as the returned information does not accurately correlate to information / counters returned by other processor interfaces. While a processor may support the Processor Frequency Information leaf, fields that return a value of zero are not supported.
System-On-Chip Vendor Attribute Enumeration Main Leaf (Initial EAX Value = 17H, ECX = 0)
17HNOTES: Leaf 17H main leaf (ECX = 0). Leaf 17H output depends on the initial value in ECX. Leaf 17H sub-leaves 1 through 3 report the SOC Vendor Brand String. Leaf 17H is valid if MaxSOCID_Index >= 3. Leaf 17H sub-leaves 4 and above are reserved.
EAX Bits 31-00: MaxSOCID_Index. Reports the maximum input value of supported sub-leaf in leaf 17H. EBX Bits 15-00: SOC Vendor ID. Bit 16: IsVendorScheme. If 1, the SOC Vendor ID field is assigned via an industry standard enumeration scheme. Otherwise, the SOC Vendor ID field is assigned by Intel. Bits 31-17: Reserved = 0. ECX Bits 31-00: Project ID. A unique number an SOC vendor assigns to its SOC projects. EDX Bits 31-00: Stepping ID. A unique number within an SOC project that an SOC vendor assigns.
System-On-Chip Vendor Attribute Enumeration Sub-leaf (Initial EAX Value = 17H, ECX = 1..3)
17HEAX Bit 31-00: SOC Vendor Brand String. UTF-8 encoded string. EBX Bit 31-00: SOC Vendor Brand String. UTF-8 encoded string. ECX Bit 31-00: SOC Vendor Brand String. UTF-8 encoded string. EDX Bit 31-00: SOC Vendor Brand String. UTF-8 encoded string. NOTES: Leaf 17H output depends on the initial value in ECX. SOC Vendor Brand String is a UTF-8 encoded string padded with trailing bytes of 00H. The complete SOC Vendor Brand String is constructed by concatenating in ascending order of EAX:EBX:ECX:EDX and from the sub-leaf 1 fragment towards sub-leaf 3.
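The concatenation rule in the note above can be applied as in the following sketch (GCC/Clang <cpuid.h> assumed; not Intel reference code).

#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int regs[4];
    char brand[49] = {0};               /* 3 sub-leaves x 16 bytes + NUL */

    if (!__get_cpuid_count(0x17, 0, &regs[0], &regs[1], &regs[2], &regs[3])
        || regs[0] < 3)
        return 1;                       /* MaxSOCID_Index must be >= 3 */

    for (unsigned int sub = 1; sub <= 3; sub++) {
        __get_cpuid_count(0x17, sub, &regs[0], &regs[1], &regs[2], &regs[3]);
        /* Concatenate EAX:EBX:ECX:EDX of sub-leaves 1..3 in ascending order. */
        memcpy(brand + (sub - 1) * 16, regs, 16);
    }
    printf("SOC vendor brand string: %s\n", brand);
    return 0;
}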
System-On-Chip Vendor Attribute Enumeration Sub-leaves (Initial EAX Value = 17H, ECX > MaxSOCID_Index)
17HNOTES: Leaf 17H output depends on the initial value in ECX. EAX Bits 31-00: Reserved = 0. EBX Bits 31-00: Reserved = 0. ECX Bits 31-00: Reserved = 0. EDX Bits 31-00: Reserved = 0.
Deterministic Address Translation Parameters Main Leaf (Initial EAX Value = 18H, ECX = 0)
18HNOTES: Each sub-leaf enumerates a different address translation structure. If ECX contains an invalid sub-leaf index, EAX/EBX/ECX/EDX return 0. Sub-leaf index n is invalid if n exceeds the value that sub-leaf 0 returns in EAX. A sub-leaf index is also invalid if EDX[4:0] returns 0. Valid sub-leaves do not need to be contiguous or in any particular order. A valid sub-leaf may be in a higher input ECX value than an invalid sub-leaf or than a valid sub-leaf of a higher or lower-level structure. * Some unified TLBs will allow a single TLB entry to satisfy data read/write and instruction fetches. Others will require separate entries (e.g., one loaded on data read/write and another loaded on an instruction fetch). See the Intel® 64 and IA-32 Architectures Optimization Reference Manual for details of a particular product. ** Add one to the return value to get the result. EAX Bits 31-00: Reports the maximum input value of supported sub-leaf in leaf 18H.
EBX Bit 00: 4K page size entries supported by this structure. Bit 01: 2MB page size entries supported by this structure. Bit 02: 4MB page size entries supported by this structure. Bit 03: 1 GB page size entries supported by this structure. Bits 07-04: Reserved. Bits 10-08: Partitioning (0: Soft partitioning between the logical processors sharing this structure). Bits 15-11: Reserved. Bits 31-16: W = Ways of associativity. ECX Bits 31-00: S = Number of Sets. EDX Bits 04-00: Translation cache type field. 00000b: Null (indicates this sub-leaf is not valid). 00001b: Data TLB. 00010b: Instruction TLB. 00011b: Unified TLB*. 00100b: Load Only TLB. Hit on loads; fills on both loads and stores. 00101b: Store Only TLB. Hit on stores; fill on stores. All other encodings are reserved. Bits 07-05: Translation cache level (starts at 1). Bit 08: Fully associative structure. Bits 13-09: Reserved. Bits 25-14: Maximum number of addressable IDs for logical processors sharing this translation cache.** Bits 31-26: Reserved.
Deterministic Address Translation Parameters Sub-leaf (Initial EAX Value = 18H, ECX ≥ 1)
18HNOTES: Each sub-leaf enumerates a different address translation structure. If ECX contains an invalid sub-leaf index, EAX/EBX/ECX/EDX return 0. Sub-leaf index n is invalid if n exceeds the value that sub-leaf 0 returns in EAX. A sub-leaf index is also invalid if EDX[4:0] returns 0. Valid sub-leaves do not need to be contiguous or in any particular order. A valid sub-leaf may be in a higher input ECX value than an invalid sub-leaf or than a valid sub-leaf of a higher or lower-level structure. * Some unified TLBs will allow a single TLB entry to satisfy data read/write and instruction fetches. Others will require separate entries (e.g., one loaded on data read/write and another loaded on an instruction fetch. See the Intel® 64 and IA-32 Architectures Optimization Reference Manual for details of a particular product. ** Add one to the return value to get the result. EAX Bits 31-00: Reserved. EBX Bit 00: 4K page size entries supported by this structure. Bit 01: 2MB page size entries supported by this structure. Bit 02: 4MB page size entries supported by this structure. Bit 03: 1 GB page size entries supported by this structure. Bits 07-04: Reserved. Bits 10-08: Partitioning (0: Soft partitioning between the logical processors sharing this structure). Bits 15-11: Reserved. Bits 31-16: W = Ways of associativity. ECX Bits 31-00: S = Number of Sets.
EDX Bits 04-00: Translation cache type field. 0000b: Null (indicates this sub-leaf is not valid). 0001b: Data TLB. 0010b: Instruction TLB. 0011b: Unified TLB*. All other encodings are reserved. Bits 07-05: Translation cache level (starts at 1). Bit 08: Fully associative structure. Bits 13-09: Reserved. Bits 25-14: Maximum number of addressable IDs for logical processors sharing this translation cache** Bits 31-26: Reserved.
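The leaf 18H sub-leaves can be decoded as in the sketch below (GCC/Clang <cpuid.h> assumed; a sketch, not Intel reference code). Each valid sub-leaf is reported with its level, type, geometry, and supported page sizes as described above.

#include <cpuid.h>
#include <stdio.h>

static const char *tlb_type(unsigned int t)
{
    switch (t) {
    case 1: return "data TLB";
    case 2: return "instruction TLB";
    case 3: return "unified TLB";
    case 4: return "load-only TLB";
    case 5: return "store-only TLB";
    default: return "reserved";
    }
}

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid_count(0x18, 0, &eax, &ebx, &ecx, &edx))
        return 1;
    unsigned int max_sub = eax;          /* sub-leaf 0 EAX = highest sub-leaf */

    for (unsigned int sub = 0; sub <= max_sub; sub++) {
        __get_cpuid_count(0x18, sub, &eax, &ebx, &ecx, &edx);
        unsigned int type = edx & 0x1F;
        if (type == 0)
            continue;                    /* null type: sub-leaf not valid */
        printf("level-%u %s: %u ways x %u sets, pages:%s%s%s%s\n",
               (edx >> 5) & 0x7, tlb_type(type),
               ebx >> 16, ecx,
               (ebx & 1) ? " 4K" : "", (ebx & 2) ? " 2M" : "",
               (ebx & 4) ? " 4M" : "", (ebx & 8) ? " 1G" : "");
    }
    return 0;
}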
Key Locker Leaf (Initial EAX Value = 19H)
19HEAX Bit 00: Key Locker restriction of CPL0-only supported. Bit 01: Key Locker restriction of no-encrypt supported. Bit 02: Key Locker restriction of no-decrypt supported. Bits 31-03: Reserved. EBX Bit 00: AESKLE. If 1, the AES Key Locker instructions are fully enabled. Bit 01: Reserved. Bit 02: If 1, the AES wide Key Locker instructions are supported. Bit 03: Reserved. Bit 04: If 1, the platform supports the Key Locker MSRs (IA32_COPY_LOCAL_TO_PLATFORM, IA32_COPY_PLATFORM_TO_LOCAL, IA32_COPY_STATUS, and IA32_IWKEYBACKUP_STATUS) and backing up the internal wrapping key. Bits 31-05: Reserved. ECX Bit 00: If 1, the NoBackup parameter to LOADIWKEY is supported. Bit 01: If 1, KeySource encoding of 1 (randomization of the internal wrapping key) is supported. Bits 31-02: Reserved. EDX Reserved.
Native Model ID Enumeration Leaf (Initial EAX Value = 1AH, ECX = 0)
1AHNOTES: This leaf exists on all hybrid parts; however, this leaf is not only available on hybrid parts. The following algorithm is used for detection of this leaf: If CPUID.0.MAXLEAF ≥ 1AH and CPUID.1A.EAX ≠ 0, then the leaf exists. EAX Enumerates the native model ID and core type. Bits 31-24: Core type* 10H: Reserved 20H: Intel Atom® 30H: Reserved 40H: Intel® Core™ Bits 23-00: Native model ID of the core. The core-type and native model ID can be used to uniquely identify the microarchitecture of the core. This native model ID is not unique across core types, and not related to the model ID reported in CPUID leaf 01H, and does not identify the SOC. * The core type may only be used as an identification of the microarchitecture for this logical processor and its numeric value has no significance, neither large nor small. This field neither implies nor expresses any other attribute to this logical processor and software should not assume any. EBX Reserved. ECX Reserved. EDX Reserved.
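The detection algorithm and the core-type field can be exercised as in this sketch (GCC/Clang <cpuid.h> assumed). Note that the result describes the logical processor that executes CPUID, so thread affinity matters when a specific core is of interest.

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Leaf 1AH exists if CPUID.0.MAXLEAF >= 1AH and CPUID.1A.EAX != 0. */
    if (!__get_cpuid_count(0x1A, 0, &eax, &ebx, &ecx, &edx) || eax == 0) {
        puts("leaf 1AH not available");
        return 1;
    }
    unsigned int core_type       = eax >> 24;        /* EAX[31:24] */
    unsigned int native_model_id = eax & 0x00FFFFFF; /* EAX[23:0]  */

    printf("core type: %s (0x%02x), native model ID: 0x%06x\n",
           core_type == 0x20 ? "Intel Atom" :
           core_type == 0x40 ? "Intel Core" : "reserved",
           core_type, native_model_id);
    return 0;
}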
PCONFIG Information Sub-leaf (Initial EAX Value = 1BH, ECX ≥ 0)
1BHFor details on this sub-leaf, see “INPUT EAX = 1BH: Returns PCONFIG Information” on page 3-253. NOTE: Leaf 1BH is supported if CPUID.(EAX=07H, ECX=0H):EDX[18] = 1.
Last Branch Records Information Leaf (Initial EAX Value = 1CH)
1CHNOTE: This leaf pertains to the architectural feature. EAX Bits 07-00: Supported LBR Depth Values. For each bit n set in this field, the IA32_LBR_DEPTH.DEPTH value 8*(n+1) is supported. Bits 29-08: Reserved. Bit 30: Deep C-state Reset. If set, indicates that LBRs may be cleared on an MWAIT that requests a C-state numerically greater than C1. Bit 31: IP Values Contain LIP. If set, LBR IP values contain LIP. If clear, IP values contain Effective IP. EBX Bit 00: CPL Filtering Supported. If set, the processor supports setting IA32_LBR_CTL[2:1] to non-zero value. Bit 01: Branch Filtering Supported. If set, the processor supports setting IA32_LBR_CTL[22:16] to nonzero value. Bit 02: Call-stack Mode Supported. If set, the processor supports setting IA32_LBR_CTL[3] to 1. Bits 31-03: Reserved. ECX Bit 00: Mispredict Bit Supported. IA32_LBR_x_INFO[63] holds indication of branch misprediction (MISPRED). Bit 01: Timed LBRs Supported. IA32_LBR_x_INFO[15:0] holds CPU cycles since last LBR entry (CYC_CNT), and IA32_LBR_x_INFO[60] holds an indication of whether the value held there is valid (CYC_CNT_VALID). Bit 02: Branch Type Field Supported. IA32_LBR_INFO_x[59:56] holds indication of the recorded operation's branch type (BR_TYPE). Bits 31-03: Reserved. EDX Bits 31-00: Reserved.
Tile Information Main Leaf (Initial EAX Value = 1DH, ECX = 0)
1DHNOTES: Sub-leaves of leaf 1DH are indexed by the palette ID. Leaf 1DH sub-leaves 2 and above are reserved. EAX Bits 31-00: max_palette. Highest numbered palette sub-leaf. Value = 1. EBX Bits 31-00: Reserved = 0. ECX Bits 31-00: Reserved = 0. EDX Bits 31-00: Reserved = 0.
Tile Palette 1 Sub-leaf (Initial EAX Value = 1DH, ECX = 1)
1DHEAX Bits 15-00: Palette 1 total_tile_bytes. Value = 8192. Bits 31-16: Palette 1 bytes_per_tile. Value = 1024. EBX Bits 15-00: Palette 1 bytes_per_row. Value = 64. Bits 31-16: Palette 1 max_names (number of tile registers). Value = 8. ECX Bits 15-00: Palette 1 max_rows. Value = 16. Bits 31-16: Reserved = 0. EDX Bits 31-00: Reserved = 0.
TMUL Information Main Leaf (Initial EAX Value = 1EH, ECX = 0)
1EHNOTE: Leaf 1EH sub-leaves 1 and above are reserved. EAX Bits 31-00: Reserved = 0. EBX Bits 07-00: tmul_maxk (rows or columns). Value = 16. Bits 23-08: tmul_maxn (column bytes). Value = 64. Bits 31-24: Reserved = 0. ECX Bits 31-00: Reserved = 0. EDX Bits 31-00: Reserved = 0.
V2 Extended Topology Enumeration Leaf (Initial EAX Value = 1FH)
1FHNOTES: CPUID leaf 1FH is a preferred superset to leaf 0BH. Intel recommends using leaf 1FH when available rather than leaf 0BH and ensuring that any leaf 0BH algorithms are updated to support leaf 1FH. The sub-leaves of CPUID leaf 1FH describe an ordered hierarchy of logical processors starting from the smallest-scoped domain of a Logical Processor (sub-leaf index 0) to the Core domain (sub-leaf index 1) to the largest-scoped domain (the last valid sub-leaf index) that is implicitly subordinate to the unenumerated highest-scoped domain of the processor package (socket). The details of each valid domain is enumerated by a corresponding sub-leaf. Details for a domain include its type and how all instances of that domain determine the number of logical processors and x2 APIC ID partitioning at the next higher-scoped domain. The ordering of domains within the hierarchy is fixed architecturally as shown below. For a given processor, not all domains may be relevant or enumerated; however, the logical processor and core domains are always enumerated. As an example, a processor may report an ordered hierarchy consisting only of “Logical Processor,” “Core,” and “Die.” For two valid sub-leaves N and N+1, sub-leaf N+1 represents the next immediate higher-scoped domain with respect to the domain of sub-leaf N for the given processor. If sub-leaf index “N” returns an invalid domain type in ECX[15:08] (00H), then all sub-leaves with an index greater than “N” shall also return an invalid domain type. A sub-leaf returning an invalid domain always returns 0 in EAX and EBX. EAX Bits 04-00: The number of bits that the x2APIC ID must be shifted to the right to address instances of the next higher-scoped domain. When logical processor is not supported by the processor, the value of this field at the Logical Processor domain sub-leaf may be returned as either 0 (no allocated bits in the x2APIC ID) or 1 (one allocated bit in the x2APIC ID); software should plan accordingly. Bits 31-05: Reserved. EBX Bits 15-00: The number of logical processors across all instances of this domain within the next higher-scoped domain relative to this current logical processor. (For example, in a processor socket/package comprising “M” dies of “N” cores each, where each core has “L” logical processors, the “die” domain sub-leaf value of this field would be M*N*L. In an asymmetric topology this would be the summation of the value across the lower domain level instances to create each upper domain level instance.) This number reflects configuration as shipped by Intel. Note, software must not use this field to enumerate processor topology*. Bits 31-16: Reserved.
ECX Bits 07-00: The input ECX sub-leaf index. Bits 15-08: Domain Type. This field provides an identification value which indicates the domain as shown below. Although domains are ordered, as also shown below, their assigned identification values are not and software should not depend on them. (For example, if a new domain between core and module is specified, it will have an identification value higher than 5.)
Hierarchy    Domain               Domain Type Identification Value
Lowest       Logical Processor    1
...          Core                 2
...          Module               3
...          Tile                 4
...          Die                  5
...          DieGrp               6
Highest      Package/Socket       (implied)
(Note that enumeration values of 0 and 7-255 are reserved.) Bits 31-16: Reserved.
EDX Bits 31-00: x2APIC ID of the current logical processor. It is always valid and does not vary with the sub-leaf index in ECX.
NOTES: * Software must not use the value of EBX[15:0] to enumerate processor topology of the system. The value is only intended for display and diagnostic purposes. The actual number of logical processors available to BIOS/OS/Applications may be different from the value of EBX[15:0], depending on software and platform hardware configurations.
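The domain walk recommended above can be sketched as follows (GCC/Clang <cpuid.h> assumed; not Intel reference code). It prefers leaf 1FH and falls back to leaf 0BH, using the common validity check that sub-leaf 0 returns a non-zero EBX.

#include <cpuid.h>
#include <stdio.h>

static const char *domain_name(unsigned int t)
{
    static const char *names[] = { "invalid", "logical processor", "core",
                                   "module", "tile", "die", "die group" };
    return t < 7 ? names[t] : "unknown";
}

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    unsigned int leaf = 0x1F;

    /* Prefer leaf 1FH; fall back to 0BH if 1FH is absent or enumerates nothing. */
    if (!__get_cpuid_count(0x1F, 0, &eax, &ebx, &ecx, &edx) || ebx == 0)
        leaf = 0x0B;

    for (unsigned int sub = 0; ; sub++) {
        if (!__get_cpuid_count(leaf, sub, &eax, &ebx, &ecx, &edx))
            break;
        unsigned int type = (ecx >> 8) & 0xFF;      /* Domain Type in ECX[15:8] */
        if (type == 0)
            break;                                  /* invalid domain ends the list */
        printf("sub-leaf %u: %-17s shift %2u, x2APIC ID %u\n",
               sub, domain_name(type), eax & 0x1F, edx);
    }
    return 0;
}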
Processor History Reset Sub-leaf (Initial EAX Value = 20H, ECX = 0)
20HEAX Reports the maximum number of sub-leaves that are supported in leaf 20H. EBX Indicates which bits may be set in the IA32_HRESET_ENABLE MSR to enable reset of different components of hardware-maintained history. Bit 00: Indicates support for both HRESET’s EAX[0] parameter, and IA32_HRESET_ENABLE[0] set by the OS to enable reset of Intel® Thread Director history. Bits 31-01: Reserved = 0. ECX Reserved. EDX Reserved.
Unimplemented CPUID Leaf Functions
21HInvalid. No existing or future CPU will return processor identification or feature information if the initial EAX value is 21H. If the value returned by CPUID.0:EAX (the maximum input value for basic CPUID information) is at least 21H, 0 is returned in the registers EAX, EBX, ECX, and EDX. Otherwise, the data for the highest basic information leaf is returned.
40000000H − 4FFFFFFFHInvalid. No existing or future CPU will return processor identification or feature information if the initial EAX value is in the range 40000000H to 4FFFFFFFH.
Extended Function CPUID Information
80000000HEAX Maximum Input Value for Extended Function CPUID Information. EBX Reserved. ECX Reserved. EDX Reserved.
80000001HEAX Extended Processor Signature and Feature Bits. EBX Reserved. ECX Bit 00: LAHF/SAHF available in 64-bit mode.* Bits 04-01: Reserved. Bit 05: LZCNT. Bits 07-06: Reserved. Bit 08: PREFETCHW. Bits 31-09: Reserved. EDX Bits 10-00: Reserved. Bit 11: SYSCALL/SYSRET.** Bits 19-12: Reserved = 0. Bit 20: Execute Disable Bit available. Bits 25-21: Reserved = 0. Bit 26: 1-GByte pages are available if 1. Bit 27: RDTSCP and IA32_TSC_AUX are available if 1. Bit 28: Reserved = 0. Bit 29: Intel® 64 Architecture available if 1. Bits 31-30: Reserved = 0. NOTES: * LAHF and SAHF are always available in other modes, regardless of the enumeration of this feature flag. ** Intel processors support SYSCALL and SYSRET only in 64-bit mode. This feature flag is always enumerated as 0 outside 64-bit mode.
80000002HEAX Processor Brand String. EBX Processor Brand String Continued. ECX Processor Brand String Continued. EDX Processor Brand String Continued.
80000003HEAX Processor Brand String Continued. EBX Processor Brand String Continued. ECX Processor Brand String Continued. EDX Processor Brand String Continued.
80000004HEAX Processor Brand String Continued. EBX Processor Brand String Continued. ECX Processor Brand String Continued. EDX Processor Brand String Continued.
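Leaves 80000002H through 80000004H together return the 48-byte processor brand string. A sketch of reading it follows (GCC/Clang <cpuid.h> helpers __get_cpuid and __get_cpuid_max assumed).

#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int regs[4];
    char brand[49] = {0};

    /* Brand string support requires the maximum extended leaf >= 80000004H. */
    if (__get_cpuid_max(0x80000000u, NULL) < 0x80000004u)
        return 1;

    for (unsigned int i = 0; i < 3; i++) {
        __get_cpuid(0x80000002u + i, &regs[0], &regs[1], &regs[2], &regs[3]);
        memcpy(brand + i * 16, regs, 16);   /* EAX, EBX, ECX, EDX in order */
    }
    printf("brand string: %s\n", brand);
    return 0;
}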
80000005HEAX Reserved = 0. EBX Reserved = 0. ECX Reserved = 0. EDX Reserved = 0.
80000006HEAX Reserved = 0. EBX Reserved = 0. ECX Bits 07-00: Cache Line size in bytes. Bits 11-08: Reserved. Bits 15-12: L2 Associativity field *. Bits 31-16: Cache size in 1K units. EDX Reserved = 0.
NOTES: * L2 associativity field encodings:
00H - Disabled                             08H - 16 ways
01H - 1 way (direct mapped)                09H - Reserved
02H - 2 ways                               0AH - 32 ways
03H - Reserved                             0BH - 48 ways
04H - 4 ways                               0CH - 64 ways
05H - Reserved                             0DH - 96 ways
06H - 8 ways                               0EH - 128 ways
07H - See CPUID leaf 04H, sub-leaf 2**     0FH - Fully associative
** CPUID leaf 04H provides details of deterministic cache parameters, including the L2 cache in sub-leaf 2.
80000007HEAX Reserved = 0. EBX Reserved = 0. ECX Reserved = 0. EDX Bits 07-00: Reserved = 0. Bit 08: Invariant TSC available if 1. Bits 31-09: Reserved = 0.
80000008HEAX Linear/Physical Address size. Bits 07-00: #Physical Address Bits*. Bits 15-08: #Linear Address Bits. Bits 31-16: Reserved = 0. EBX Bits 08-00: Reserved = 0. Bit 09: WBNOINVD is available if 1. Bits 31-10: Reserved = 0. ECX Reserved = 0. EDX Reserved = 0. NOTES: * If CPUID.80000008H:EAX[7:0] is supported, the maximum physical address number supported should come from this field. If TME-MK is enabled, the number of bits that can be used to address physical memory is CPUID.80000008H:EAX[7:0] - IA32_TME_ACTIVATE[35:32].
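A short sketch of reading the address widths follows (GCC/Clang <cpuid.h> assumed). The TME-MK adjustment mentioned in the note requires reading IA32_TME_ACTIVATE, which is an MSR access and therefore outside the scope of a user-mode example.

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(0x80000008u, &eax, &ebx, &ecx, &edx))
        return 1;
    printf("physical address bits: %u\n", eax & 0xFF);         /* EAX[7:0]  */
    printf("linear address bits  : %u\n", (eax >> 8) & 0xFF);  /* EAX[15:8] */
    return 0;
}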

INPUT EAX = 0: Returns CPUID’s Highest Value for Basic Processor Information and the Vendor Identification String

+

When CPUID executes with EAX set to 0, the processor returns the highest value the CPUID recognizes for returning basic processor information. The value is returned in the EAX register and is processor specific.

+

A vendor identification string is also returned in EBX, EDX, and ECX. For Intel processors, the string is “GenuineIntel” and is expressed:

+

EBX := 756e6547h (* “Genu”, with G in the low eight bits of BL *)

+

EDX := 49656e69h (* “ineI”, with i in the low eight bits of DL *)

+

ECX := 6c65746eh (* “ntel”, with n in the low eight bits of CL *)
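The register-to-string packing above can be reproduced directly, as in this sketch (GCC/Clang <cpuid.h> assumed).

#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = {0};

    __get_cpuid(0, &eax, &ebx, &ecx, &edx);
    memcpy(vendor + 0, &ebx, 4);        /* "Genu" */
    memcpy(vendor + 4, &edx, 4);        /* "ineI" */
    memcpy(vendor + 8, &ecx, 4);        /* "ntel" */
    printf("max basic leaf: %02XH, vendor: %s\n", eax, vendor);
    return 0;
}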

+

INPUT EAX = 80000000H: Returns CPUID’s Highest Value for Extended Processor Information

+

When CPUID executes with EAX set to 80000000H, the processor returns the highest value the processor recognizes for returning extended processor information. The value is returned in the EAX register and is processor specific.

+

IA32_BIOS_SIGN_ID Returns Microcode Update Signature

+

For processors that support the microcode update facility, the IA32_BIOS_SIGN_ID MSR is loaded with the update signature whenever CPUID executes. The signature is returned in the upper DWORD. For details, see Chapter 10 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A.

+

INPUT EAX = 01H: Returns Model, Family, Stepping Information

+

When CPUID executes with EAX set to 01H, version information is returned in EAX (see Figure 3-6). For example, the model, family, and processor type for the Intel Xeon processor 5100 series are as follows:

+
  • Model — 1111B
  • Family — 0101B
  • Processor Type — 00B

See Table 3-9 for available processor type values. Stepping IDs are provided as needed.

+
[Figure 3-6 layout of CPUID.01H:EAX: bits 3:0 Stepping ID, bits 7:4 Model, bits 11:8 Family ID (0FH for the Pentium 4 processor family), bits 13:12 Processor Type, bits 19:16 Extended Model ID, bits 27:20 Extended Family ID; bits 15:14 and 31:28 are reserved.]
Figure 3-6. Version Information Returned by CPUID in EAX
+
Type                                                        Encoding
Original OEM Processor                                      00B
Intel OverDrive® Processor                                  01B
Dual processor (not applicable to Intel486 processors)      10B
Intel reserved                                              11B
+
Table 3-9. Processor Type Field
+
+

See Chapter 20 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for information on identifying earlier IA-32 processors.

+

The Extended Family ID needs to be examined only when the Family ID is 0FH. Integrate the fields into a display using the following rule:

+

IF Family_ID ≠ 0FH
    THEN DisplayFamily = Family_ID;
    ELSE DisplayFamily = Extended_Family_ID + Family_ID;
FI;
(* Show DisplayFamily as HEX field. *)

+

The Extended Model ID needs to be examined only when the Family ID is 06H or 0FH. Integrate the field into a display using the following rule:

+

IF (Family_ID = 06H or Family_ID = 0FH)
    THEN DisplayModel = (Extended_Model_ID << 4) + Model_ID;
    (* Right justify and zero-extend 4-bit field; display Model_ID as HEX field. *)
    ELSE DisplayModel = Model_ID;
FI;
(* Show DisplayModel as HEX field. *)
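The same computation in C, following the pseudocode above exactly (GCC/Clang <cpuid.h> assumed; a sketch, not Intel reference code):

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);

    unsigned int stepping   = eax & 0xF;
    unsigned int model      = (eax >> 4)  & 0xF;
    unsigned int family     = (eax >> 8)  & 0xF;
    unsigned int ext_model  = (eax >> 16) & 0xF;
    unsigned int ext_family = (eax >> 20) & 0xFF;

    unsigned int disp_family = (family != 0xF) ? family : family + ext_family;
    unsigned int disp_model  = (family == 0x6 || family == 0xF)
                               ? (ext_model << 4) + model : model;

    printf("DisplayFamily_DisplayModel: %02X_%02XH, stepping %u\n",
           disp_family, disp_model, stepping);
    return 0;
}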

+

INPUT EAX = 01H: Returns Additional Information in EBX

+

When CPUID executes with EAX set to 01H, additional information is returned to the EBX register:

+
  • Brand index (low byte of EBX) — this number provides an entry into a brand string table that contains brand strings for IA-32 processors. More information about this field is provided later in this section.
  • CLFLUSH instruction cache line size (second byte of EBX) — this number indicates the size of the cache line flushed by the CLFLUSH and CLFLUSHOPT instructions in 8-byte increments. This field was introduced in the Pentium 4 processor.
  • Local APIC ID (high byte of EBX) — this number is the 8-bit ID that is assigned to the local APIC on the processor during power up. This field was introduced in the Pentium 4 processor. A decoding sketch follows this list.
+
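A minimal sketch of decoding these EBX fields (GCC/Clang <cpuid.h> assumed):

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);

    printf("brand index          : %u\n", ebx & 0xFF);
    printf("CLFLUSH line size    : %u bytes\n", ((ebx >> 8) & 0xFF) * 8);
    printf("initial local APIC ID: %u\n", (ebx >> 24) & 0xFF);
    return 0;
}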

INPUT EAX = 01H: Returns Feature Information in ECX and EDX

+

When CPUID executes with EAX set to 01H, feature information is returned in ECX and EDX.

+ +

For all feature flags, a value of 1 indicates that the corresponding feature is supported. Refer to the tables below and to Intel documentation to properly interpret each feature flag.

+
+

Software must confirm that a processor feature is present using feature flags returned by CPUID prior to using the feature. Software should not depend on future offerings retaining all features.

+
[Figure 3-7 depicts the CPUID.01H ECX feature-flag bit layout; the individual flags are listed in Table 3-10 below.]
Figure 3-7. Feature Information Returned in the ECX Register
+
Bit #MnemonicDescription
0SSE3Streaming SIMD Extensions 3 (SSE3). A value of 1 indicates the processor supports this technology.
1PCLMULQDQPCLMULQDQ. A value of 1 indicates the processor supports the PCLMULQDQ instruction.
2DTES6464-bit DS Area. A value of 1 indicates the processor supports DS area using 64-bit layout.
3MONITORMONITOR/MWAIT. A value of 1 indicates the processor supports this feature.
4DS-CPLCPL Qualified Debug Store. A value of 1 indicates the processor supports the extensions to the Debug Store feature to allow for branch message storage qualified by CPL.
5VMXVirtual Machine Extensions. A value of 1 indicates that the processor supports this technology.
6SMXSafer Mode Extensions. A value of 1 indicates that the processor supports this technology. See Chapter 7, “Safer Mode Extensions Reference.”
7EISTEnhanced Intel SpeedStep® technology. A value of 1 indicates that the processor supports this technology.
8TM2Thermal Monitor 2. A value of 1 indicates whether the processor supports this technology.
9SSSE3A value of 1 indicates the presence of the Supplemental Streaming SIMD Extensions 3 (SSSE3). A value of 0 indicates the instruction extensions are not present in the processor.
10CNXT-IDL1 Context ID. A value of 1 indicates the L1 data cache mode can be set to either adaptive mode or shared mode. A value of 0 indicates this feature is not supported. See definition of the IA32_MISC_ENABLE MSR Bit 24 (L1 Data Cache Context Mode) for details.
11SDBGA value of 1 indicates the processor supports IA32_DEBUG_INTERFACE MSR for silicon debug.
12FMAA value of 1 indicates the processor supports FMA extensions using YMM state.
13CMPXCHG16BCMPXCHG16B Available. A value of 1 indicates that the feature is available. See the “CMPXCHG8B/CMPXCHG16B—Compare and Exchange Bytes” section in this chapter for a description.
14xTPR Update ControlxTPR Update Control. A value of 1 indicates that the processor supports changing IA32_MISC_ENABLE[bit 23].
15PDCMPerfmon and Debug Capability: A value of 1 indicates the processor supports the performance and debug feature indication MSR IA32_PERF_CAPABILITIES.
16ReservedReserved
17PCIDProcess-context identifiers. A value of 1 indicates that the processor supports PCIDs and that software may set CR4.PCIDE to 1.
18DCAA value of 1 indicates the processor supports the ability to prefetch data from a memory mapped device.
19SSE4_1A value of 1 indicates that the processor supports SSE4.1.
20SSE4_2A value of 1 indicates that the processor supports SSE4.2.
21x2APICA value of 1 indicates that the processor supports x2APIC feature.
22MOVBEA value of 1 indicates that the processor supports MOVBE instruction.
23POPCNTA value of 1 indicates that the processor supports the POPCNT instruction.
24TSC-DeadlineA value of 1 indicates that the processor’s local APIC timer supports one-shot operation using a TSC deadline value.
25AESNIA value of 1 indicates that the processor supports the AESNI instruction extensions.
26XSAVEA value of 1 indicates that the processor supports the XSAVE/XRSTOR processor extended states feature, the XSETBV/XGETBV instructions, and XCR0.
27OSXSAVEA value of 1 indicates that the OS has set CR4.OSXSAVE[bit 18] to enable XSETBV/XGETBV instructions to access XCR0 and to support processor extended state management using XSAVE/XRSTOR.
28AVXA value of 1 indicates the processor supports the AVX instruction extensions.
29F16CA value of 1 indicates that processor supports 16-bit floating-point conversion instructions.
30RDRANDA value of 1 indicates that processor supports RDRAND instruction.
31Not UsedAlways returns 0.
+
Table 3-10. Feature Information Returned in the ECX Register
+
[Figure 3-8 depicts the CPUID.01H EDX feature-flag bit layout; the individual flags are listed in Table 3-11 below.]
Figure 3-8. Feature Information Returned in the EDX Register
+
Bit #MnemonicDescription
0FPUFloating-Point Unit On-Chip. The processor contains an x87 FPU.
1VMEVirtual 8086 Mode Enhancements. Virtual 8086 mode enhancements, including CR4.VME for controlling the feature, CR4.PVI for protected mode virtual interrupts, software interrupt indirection, expansion of the TSS with the software indirection bitmap, and EFLAGS.VIF and EFLAGS.VIP flags.
2DEDebugging Extensions. Support for I/O breakpoints, including CR4.DE for controlling the feature, and optional trapping of accesses to DR4 and DR5.
3PSEPage Size Extension. Large pages of size 4 MByte are supported, including CR4.PSE for controlling the feature, the defined dirty bit in PDE (Page Directory Entries), optional reserved bit trapping in CR3, PDEs, and PTEs.
4TSCTime Stamp Counter. The RDTSC instruction is supported, including CR4.TSD for controlling privilege.
5MSRModel Specific Registers RDMSR and WRMSR Instructions. The RDMSR and WRMSR instructions are supported. Some of the MSRs are implementation dependent.
6PAEPhysical Address Extension. Physical addresses greater than 32 bits are supported: extended page table entry formats, an extra level in the page translation tables is defined, 2-MByte pages are supported instead of 4 Mbyte pages if PAE bit is 1.
7MCEMachine Check Exception. Exception 18 is defined for Machine Checks, including CR4.MCE for controlling the feature. This feature does not define the model-specific implementations of machine-check error logging, reporting, and processor shutdowns. Machine Check exception handlers may have to depend on processor version to do model specific processing of the exception, or test for the presence of the Machine Check feature.
8CX8CMPXCHG8B Instruction. The compare-and-exchange 8 bytes (64 bits) instruction is supported (implicitly locked and atomic).
9APICAPIC On-Chip. The processor contains an Advanced Programmable Interrupt Controller (APIC), responding to memory mapped commands in the physical address range FFFE0000H to FFFE0FFFH (by default - some processors permit the APIC to be relocated).
10ReservedReserved
11SEPSYSENTER and SYSEXIT Instructions. The SYSENTER and SYSEXIT and associated MSRs are supported.
12MTRRMemory Type Range Registers. MTRRs are supported. The MTRRcap MSR contains feature bits that describe what memory types are supported, how many variable MTRRs are supported, and whether fixed MTRRs are supported.
13PGEPage Global Bit. The global bit is supported in paging-structure entries that map a page, indicating TLB entries that are common to different processes and need not be flushed. The CR4.PGE bit controls this feature.
14MCAMachine Check Architecture. A value of 1 indicates the Machine Check Architecture of reporting machine errors is supported. The MCG_CAP MSR contains feature bits describing how many banks of error reporting MSRs are supported.
15CMOVConditional Move Instructions. The conditional move instruction CMOV is supported. In addition, if x87 FPU is present as indicated by the CPUID.FPU feature bit, then the FCOMI and FCMOV instructions are supported
16PATPage Attribute Table. Page Attribute Table is supported. This feature augments the Memory Type Range Registers (MTRRs), allowing an operating system to specify attributes of memory accessed through a linear address on a 4KB granularity.
17PSE-3636-Bit Page Size Extension. 4-MByte pages addressing physical memory beyond 4 GBytes are supported with 32-bit paging. This feature indicates that upper bits of the physical address of a 4-MByte page are encoded in bits 20:13 of the page-directory entry. Such physical addresses are limited by MAXPHYADDR and may be up to 40 bits in size.
18PSNProcessor Serial Number. The processor supports the 96-bit processor identification number feature and the feature is enabled.
19CLFSHCLFLUSH Instruction. CLFLUSH Instruction is supported.
20ReservedReserved
21DSDebug Store. The processor supports the ability to write debug information into a memory resident buffer. This feature is used by the branch trace store (BTS) and processor event-based sampling (PEBS) facilities (see Chapter 24, “Introduction to Virtual Machine Extensions,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3C).
22ACPIThermal Monitor and Software Controlled Clock Facilities. The processor implements internal MSRs that allow processor temperature to be monitored and processor performance to be modulated in predefined duty cycles under software control.
23MMXIntel MMX Technology. The processor supports the Intel MMX technology.
24FXSRFXSAVE and FXRSTOR Instructions. The FXSAVE and FXRSTOR instructions are supported for fast save and restore of the floating-point context. Presence of this bit also indicates that CR4.OSFXSR is available for an operating system to indicate that it supports the FXSAVE and FXRSTOR instructions.
25SSESSE. The processor supports the SSE extensions.
26SSE2SSE2. The processor supports the SSE2 extensions.
27SSSelf Snoop. The processor supports the management of conflicting memory types by performing a snoop of its own cache structure for transactions issued to the bus.
28HTTMax APIC IDs reserved field is Valid. A value of 0 for HTT indicates there is only a single logical processor in the package and software should assume only a single APIC ID is reserved. A value of 1 for HTT indicates the value in CPUID.1.EBX[23:16] (the Maximum number of addressable IDs for logical processors in this package) is valid for the package.
29TMThermal Monitor. The processor implements the thermal monitor automatic thermal control circuitry (TCC).
30ReservedReserved
31PBEPending Break Enable. The processor supports the use of the FERR#/PBE# pin when the processor is in the stop-clock state (STPCLK# is asserted) to signal the processor that an interrupt is pending and that the processor should return to normal operation to handle the interrupt.
+
Table 3-11. More on Feature Information Returned in the EDX Register
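A few of the flags from Tables 3-10 and 3-11 can be tested as in this sketch (GCC/Clang <cpuid.h> assumed). As noted above, the AVX flag alone is not sufficient for use; the OS must also have enabled XSAVE (OSXSAVE) and set XCR0 appropriately.

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);

    printf("SSE4.2  : %s\n", (ecx & (1u << 20)) ? "yes" : "no");
    printf("AESNI   : %s\n", (ecx & (1u << 25)) ? "yes" : "no");
    printf("RDRAND  : %s\n", (ecx & (1u << 30)) ? "yes" : "no");
    printf("AVX flag: %s, OSXSAVE: %s\n",
           (ecx & (1u << 28)) ? "yes" : "no",
           (ecx & (1u << 27)) ? "yes" : "no");
    printf("SSE2    : %s\n", (edx & (1u << 26)) ? "yes" : "no");
    return 0;
}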
+

INPUT EAX = 02H: TLB/Cache/Prefetch Information Returned in EAX, EBX, ECX, EDX

+

When CPUID executes with EAX set to 02H, the processor returns information about the processor’s internal TLBs, cache, and prefetch hardware in the EAX, EBX, ECX, and EDX registers. The information is reported in encoded form and falls into the following categories:

+
  • The least-significant byte in register EAX (register AL) will always return 01H. Software should ignore this value and not interpret it as an informational descriptor.
  • The most significant bit (bit 31) of each register indicates whether the register contains valid information (set to 0) or is reserved (set to 1).
  • If a register contains valid information, the information is contained in 1-byte descriptors. There are four types of encoding values for the byte descriptor; the encoding type is noted in the second column of Table 3-12, which lists the encoding of these descriptors. Note that the order of descriptors in the EAX, EBX, ECX, and EDX registers is not defined; that is, specific bytes are not designated to contain descriptors for specific cache, prefetch, or TLB types. The descriptors may appear in any order. Note also that a processor may report a general descriptor type (FFH) and not report any byte descriptor of “cache type” via CPUID leaf 2. A decoding sketch follows this list.
+
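The descriptor extraction described in the list above can be sketched as follows (GCC/Clang <cpuid.h> assumed; each reported descriptor value is then looked up in Table 3-12).

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int regs[4];
    __get_cpuid(2, &regs[0], &regs[1], &regs[2], &regs[3]);

    for (int r = 0; r < 4; r++) {
        if (regs[r] & 0x80000000u)
            continue;                    /* bit 31 set: register is reserved */
        for (int b = 0; b < 4; b++) {
            unsigned int desc = (regs[r] >> (8 * b)) & 0xFF;
            if (r == 0 && b == 0)
                continue;                /* AL always returns 01H; ignore it */
            if (desc == 0x00)
                continue;                /* null descriptor */
            printf("descriptor 0x%02X\n", desc);
        }
    }
    return 0;
}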
Descriptor ValueTypeCache or TLB Description
00HGeneralNull descriptor, this byte contains no information.
01H | TLB | Instruction TLB: 4 KByte pages, 4-way set associative, 32 entries.
02H | TLB | Instruction TLB: 4 MByte pages, fully associative, 2 entries.
03H | TLB | Data TLB: 4 KByte pages, 4-way set associative, 64 entries.
04H | TLB | Data TLB: 4 MByte pages, 4-way set associative, 8 entries.
05H | TLB | Data TLB1: 4 MByte pages, 4-way set associative, 32 entries.
06H | Cache | 1st-level instruction cache: 8 KBytes, 4-way set associative, 32 byte line size.
08H | Cache | 1st-level instruction cache: 16 KBytes, 4-way set associative, 32 byte line size.
09H | Cache | 1st-level instruction cache: 32 KBytes, 4-way set associative, 64 byte line size.
0AH | Cache | 1st-level data cache: 8 KBytes, 2-way set associative, 32 byte line size.
0BH | TLB | Instruction TLB: 4 MByte pages, 4-way set associative, 4 entries.
0CH | Cache | 1st-level data cache: 16 KBytes, 4-way set associative, 32 byte line size.
0DH | Cache | 1st-level data cache: 16 KBytes, 4-way set associative, 64 byte line size.
0EH | Cache | 1st-level data cache: 24 KBytes, 6-way set associative, 64 byte line size.
1DH | Cache | 2nd-level cache: 128 KBytes, 2-way set associative, 64 byte line size.
21H | Cache | 2nd-level cache: 256 KBytes, 8-way set associative, 64 byte line size.
22H | Cache | 3rd-level cache: 512 KBytes, 4-way set associative, 64 byte line size, 2 lines per sector.
23H | Cache | 3rd-level cache: 1 MBytes, 8-way set associative, 64 byte line size, 2 lines per sector.
24H | Cache | 2nd-level cache: 1 MBytes, 16-way set associative, 64 byte line size.
25H | Cache | 3rd-level cache: 2 MBytes, 8-way set associative, 64 byte line size, 2 lines per sector.
29H | Cache | 3rd-level cache: 4 MBytes, 8-way set associative, 64 byte line size, 2 lines per sector.
2CH | Cache | 1st-level data cache: 32 KBytes, 8-way set associative, 64 byte line size.
30H | Cache | 1st-level instruction cache: 32 KBytes, 8-way set associative, 64 byte line size.
40H | Cache | No 2nd-level cache or, if processor contains a valid 2nd-level cache, no 3rd-level cache.
41H | Cache | 2nd-level cache: 128 KBytes, 4-way set associative, 32 byte line size.
42H | Cache | 2nd-level cache: 256 KBytes, 4-way set associative, 32 byte line size.
43H | Cache | 2nd-level cache: 512 KBytes, 4-way set associative, 32 byte line size.
44H | Cache | 2nd-level cache: 1 MByte, 4-way set associative, 32 byte line size.
45H | Cache | 2nd-level cache: 2 MByte, 4-way set associative, 32 byte line size.
46H | Cache | 3rd-level cache: 4 MByte, 4-way set associative, 64 byte line size.
47H | Cache | 3rd-level cache: 8 MByte, 8-way set associative, 64 byte line size.
48H | Cache | 2nd-level cache: 3 MByte, 12-way set associative, 64 byte line size.
49H | Cache | 3rd-level cache: 4 MByte, 16-way set associative, 64-byte line size (Intel Xeon processor MP, Family 0FH, Model 06H); 2nd-level cache: 4 MByte, 16-way set associative, 64 byte line size.
4AH | Cache | 3rd-level cache: 6 MByte, 12-way set associative, 64 byte line size.
4BH | Cache | 3rd-level cache: 8 MByte, 16-way set associative, 64 byte line size.
4CH | Cache | 3rd-level cache: 12 MByte, 12-way set associative, 64 byte line size.
4DH | Cache | 3rd-level cache: 16 MByte, 16-way set associative, 64 byte line size.
4EH | Cache | 2nd-level cache: 6 MByte, 24-way set associative, 64 byte line size.
4FH | TLB | Instruction TLB: 4 KByte pages, 32 entries.
50H | TLB | Instruction TLB: 4 KByte and 2-MByte or 4-MByte pages, 64 entries.
51H | TLB | Instruction TLB: 4 KByte and 2-MByte or 4-MByte pages, 128 entries.
52H | TLB | Instruction TLB: 4 KByte and 2-MByte or 4-MByte pages, 256 entries.
55H | TLB | Instruction TLB: 2-MByte or 4-MByte pages, fully associative, 7 entries.
56H | TLB | Data TLB0: 4 MByte pages, 4-way set associative, 16 entries.
57H | TLB | Data TLB0: 4 KByte pages, 4-way associative, 16 entries.
59H | TLB | Data TLB0: 4 KByte pages, fully associative, 16 entries.
5AH | TLB | Data TLB0: 2 MByte or 4 MByte pages, 4-way set associative, 32 entries.
5BH | TLB | Data TLB: 4 KByte and 4 MByte pages, 64 entries.
5CH | TLB | Data TLB: 4 KByte and 4 MByte pages, 128 entries.
5DH | TLB | Data TLB: 4 KByte and 4 MByte pages, 256 entries.
60H | Cache | 1st-level data cache: 16 KByte, 8-way set associative, 64 byte line size.
61H | TLB | Instruction TLB: 4 KByte pages, fully associative, 48 entries.
63H | TLB | Data TLB: 2 MByte or 4 MByte pages, 4-way set associative, 32 entries and a separate array with 1 GByte pages, 4-way set associative, 4 entries.
64H | TLB | Data TLB: 4 KByte pages, 4-way set associative, 512 entries.
66H | Cache | 1st-level data cache: 8 KByte, 4-way set associative, 64 byte line size.
67H | Cache | 1st-level data cache: 16 KByte, 4-way set associative, 64 byte line size.
68H | Cache | 1st-level data cache: 32 KByte, 4-way set associative, 64 byte line size.
6AH | Cache | uTLB: 4 KByte pages, 8-way set associative, 64 entries.
6BH | Cache | DTLB: 4 KByte pages, 8-way set associative, 256 entries.
6CH | Cache | DTLB: 2M/4M pages, 8-way set associative, 128 entries.
6DH | Cache | DTLB: 1 GByte pages, fully associative, 16 entries.
70H | Cache | Trace cache: 12 K-μop, 8-way set associative.
71H | Cache | Trace cache: 16 K-μop, 8-way set associative.
72H | Cache | Trace cache: 32 K-μop, 8-way set associative.
76H | TLB | Instruction TLB: 2M/4M pages, fully associative, 8 entries.
78H | Cache | 2nd-level cache: 1 MByte, 4-way set associative, 64 byte line size.
79H | Cache | 2nd-level cache: 128 KByte, 8-way set associative, 64 byte line size, 2 lines per sector.
7AH | Cache | 2nd-level cache: 256 KByte, 8-way set associative, 64 byte line size, 2 lines per sector.
7BH | Cache | 2nd-level cache: 512 KByte, 8-way set associative, 64 byte line size, 2 lines per sector.
7CH | Cache | 2nd-level cache: 1 MByte, 8-way set associative, 64 byte line size, 2 lines per sector.
7DH | Cache | 2nd-level cache: 2 MByte, 8-way set associative, 64 byte line size.
7FH | Cache | 2nd-level cache: 512 KByte, 2-way set associative, 64-byte line size.
80H | Cache | 2nd-level cache: 512 KByte, 8-way set associative, 64-byte line size.
82H | Cache | 2nd-level cache: 256 KByte, 8-way set associative, 32 byte line size.
83H | Cache | 2nd-level cache: 512 KByte, 8-way set associative, 32 byte line size.
84H | Cache | 2nd-level cache: 1 MByte, 8-way set associative, 32 byte line size.
85H | Cache | 2nd-level cache: 2 MByte, 8-way set associative, 32 byte line size.
86H | Cache | 2nd-level cache: 512 KByte, 4-way set associative, 64 byte line size.
87H | Cache | 2nd-level cache: 1 MByte, 8-way set associative, 64 byte line size.
+
Table 3-12. Encoding of CPUID Leaf 2 Descriptors
+
Descriptor Value | Type | Cache or TLB Description
A0H | DTLB | DTLB: 4k pages, fully associative, 32 entries.
B0H | TLB | Instruction TLB: 4 KByte pages, 4-way set associative, 128 entries.
B1H | TLB | Instruction TLB: 2M pages, 4-way, 8 entries or 4M pages, 4-way, 4 entries.
B2H | TLB | Instruction TLB: 4 KByte pages, 4-way set associative, 64 entries.
B3H | TLB | Data TLB: 4 KByte pages, 4-way set associative, 128 entries.
B4H | TLB | Data TLB1: 4 KByte pages, 4-way associative, 256 entries.
B5H | TLB | Instruction TLB: 4 KByte pages, 8-way set associative, 64 entries.
B6H | TLB | Instruction TLB: 4 KByte pages, 8-way set associative, 128 entries.
BAH | TLB | Data TLB1: 4 KByte pages, 4-way associative, 64 entries.
C0H | TLB | Data TLB: 4 KByte and 4 MByte pages, 4-way associative, 8 entries.
C1H | STLB | Shared 2nd-Level TLB: 4 KByte/2 MByte pages, 8-way associative, 1024 entries.
C2H | DTLB | DTLB: 4 KByte/2 MByte pages, 4-way associative, 16 entries.
C3H | STLB | Shared 2nd-Level TLB: 4 KByte/2 MByte pages, 6-way associative, 1536 entries. Also 1 GByte pages, 4-way, 16 entries.
C4H | DTLB | DTLB: 2M/4M Byte pages, 4-way associative, 32 entries.
CAH | STLB | Shared 2nd-Level TLB: 4 KByte pages, 4-way associative, 512 entries.
D0H | Cache | 3rd-level cache: 512 KByte, 4-way set associative, 64 byte line size.
D1H | Cache | 3rd-level cache: 1 MByte, 4-way set associative, 64 byte line size.
D2H | Cache | 3rd-level cache: 2 MByte, 4-way set associative, 64 byte line size.
D6H | Cache | 3rd-level cache: 1 MByte, 8-way set associative, 64 byte line size.
D7H | Cache | 3rd-level cache: 2 MByte, 8-way set associative, 64 byte line size.
D8H | Cache | 3rd-level cache: 4 MByte, 8-way set associative, 64 byte line size.
DCH | Cache | 3rd-level cache: 1.5 MByte, 12-way set associative, 64 byte line size.
DDH | Cache | 3rd-level cache: 3 MByte, 12-way set associative, 64 byte line size.
DEH | Cache | 3rd-level cache: 6 MByte, 12-way set associative, 64 byte line size.
E2H | Cache | 3rd-level cache: 2 MByte, 16-way set associative, 64 byte line size.
E3H | Cache | 3rd-level cache: 4 MByte, 16-way set associative, 64 byte line size.
E4H | Cache | 3rd-level cache: 8 MByte, 16-way set associative, 64 byte line size.
EAH | Cache | 3rd-level cache: 12 MByte, 24-way set associative, 64 byte line size.
EBH | Cache | 3rd-level cache: 18 MByte, 24-way set associative, 64 byte line size.
ECH | Cache | 3rd-level cache: 24 MByte, 24-way set associative, 64 byte line size.
F0H | Prefetch | 64-Byte prefetching.
F1H | Prefetch | 128-Byte prefetching.
FEH | General | CPUID leaf 2 does not report TLB descriptor information; use CPUID leaf 18H to query TLB and other address translation parameters.
FFH | General | CPUID leaf 2 does not report cache descriptor information; use CPUID leaf 4 to query cache parameters.
+
Table 3-12. Encoding of CPUID Leaf 2 Descriptors (Contd.)
+

Example 3-1. Example of Cache and TLB Interpretation + ¶ +

+

The first member of the family of Pentium 4 processors returns the following information about caches and TLBs when the CPUID executes with an input value of 2:

+

EAX = 66 5B 50 01H
EBX = 0H
ECX = 0H
EDX = 00 7A 70 00H

+

Which means:

+
  • The least-significant byte (byte 0) of register EAX is set to 01H. This value should be ignored.
  • The most-significant bit of all four registers (EAX, EBX, ECX, and EDX) is set to 0, indicating that each register contains valid 1-byte descriptors.
  • Bytes 1, 2, and 3 of register EAX indicate that the processor has:
      • 50H - a 64-entry instruction TLB, for mapping 4-KByte and 2-MByte or 4-MByte pages.
      • 5BH - a 64-entry data TLB, for mapping 4-KByte and 4-MByte pages.
      • 66H - an 8-KByte 1st level data cache, 4-way set associative, with a 64-Byte cache line size.
  • The descriptors in registers EBX and ECX are valid, but contain NULL descriptors.
  • Bytes 0, 1, 2, and 3 of register EDX indicate that the processor has:
      • 00H - NULL descriptor.
      • 70H - Trace cache: 12 K-μop, 8-way set associative.
      • 7AH - a 256-KByte 2nd level cache, 8-way set associative, with a sectored, 64-byte cache line size.
      • 00H - NULL descriptor.
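The interpretation above can be automated. The following is a minimal sketch, assuming GCC/Clang's <cpuid.h> and a hypothetical helper describe_descriptor() that maps each byte to its Table 3-12 text (that helper is not part of any library):

#include <cpuid.h>
#include <stdio.h>

/* Hypothetical helper: returns the Table 3-12 text for one descriptor byte. */
extern const char *describe_descriptor(unsigned char d);

static void walk_leaf2_register(unsigned int reg, int skip_low_byte)
{
    if (reg & 0x80000000u)                    /* bit 31 set: no valid descriptors */
        return;
    for (int i = skip_low_byte ? 1 : 0; i < 4; i++) {
        unsigned char d = (reg >> (8 * i)) & 0xFF;
        if (d != 0)                           /* 00H is a NULL descriptor */
            printf("%02XH: %s\n", d, describe_descriptor(d));
    }
}

void dump_leaf2(void)
{
    unsigned int eax, ebx, ecx, edx;
    __cpuid(2, eax, ebx, ecx, edx);
    walk_leaf2_register(eax, 1);              /* byte 0 of EAX is not a descriptor */
    walk_leaf2_register(ebx, 0);
    walk_leaf2_register(ecx, 0);
    walk_leaf2_register(edx, 0);
}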

INPUT EAX = 04H: Returns Deterministic Cache Parameters for Each Level + ¶ +

+

When CPUID executes with EAX set to 04H and ECX contains an index value, the processor returns encoded data that describe a set of deterministic cache parameters (for the cache level associated with the input in ECX). Valid index values start from 0.

+

Software can enumerate the deterministic cache parameters for each level of the cache hierarchy starting with an index value of 0, until the reported cache type field is 0 (no more caches). The architecturally defined fields reported by the deterministic cache parameters leaf are documented in Table 3-8.

+

The cache size in bytes reported by a given sub-leaf is computed as:

Cache Size in Bytes = (Ways + 1) * (Partitions + 1) * (Line_Size + 1) * (Sets + 1)
                    = (EBX[31:22] + 1) * (EBX[21:12] + 1) * (EBX[11:0] + 1) * (ECX + 1)
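As one illustration, software could apply this formula while enumerating leaf 04H sub-leaves until the cache type field reads 0 (a minimal sketch, assuming GCC/Clang's <cpuid.h>; field positions follow the formula above and the leaf 04H layout in Table 3-8):

#include <cpuid.h>
#include <stdio.h>

void enumerate_caches(void)
{
    for (unsigned int index = 0; ; index++) {
        unsigned int eax, ebx, ecx, edx;
        __cpuid_count(4, index, eax, ebx, ecx, edx);
        unsigned int type = eax & 0x1F;                  /* EAX[4:0]: cache type, 0 = no more caches */
        if (type == 0)
            break;
        unsigned int level = (eax >> 5) & 0x7;           /* EAX[7:5]: cache level */
        unsigned long long ways       = ((ebx >> 22) & 0x3FF) + 1;
        unsigned long long partitions = ((ebx >> 12) & 0x3FF) + 1;
        unsigned long long line_size  = (ebx & 0xFFF) + 1;
        unsigned long long sets       = (unsigned long long)ecx + 1;
        unsigned long long size = ways * partitions * line_size * sets;
        printf("L%u cache (type %u): %llu bytes\n", level, type, size);
    }
}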

+

The CPUID leaf 04H also reports data that can be used to derive the topology of processor cores in a physical package. This information is constant for all valid index values. Software can query the raw data reported by executing CPUID with EAX=04H and ECX=0 and use it as part of the topology enumeration algorithm described in Chapter 9, “Multiple-Processor Management,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A.

+

INPUT EAX = 05H: Returns MONITOR and MWAIT Features + ¶ +

+

When CPUID executes with EAX set to 05H, the processor returns information about features available to MONITOR/MWAIT instructions. The MONITOR instruction is used for address-range monitoring in conjunction with MWAIT instruction. The MWAIT instruction optionally provides additional extensions for advanced power management. See Table 3-8.

+

INPUT EAX = 06H: Returns Thermal and Power Management Features + ¶ +

+

When CPUID executes with EAX set to 06H, the processor returns information about thermal and power management features. See Table 3-8.

+

INPUT EAX = 07H: Returns Structured Extended Feature Enumeration Information + ¶ +

+

When CPUID executes with EAX set to 07H and ECX = 0, the processor returns information about the maximum input value for sub-leaves that contain extended feature flags. See Table 3-8.

+

When CPUID executes with EAX set to 07H and the input value of ECX is invalid (see the leaf 07H entry in Table 3-8), the processor returns 0 in EAX, EBX, ECX, and EDX. In sub-leaf 0, EAX returns the maximum input value of the highest leaf 07H sub-leaf, and EBX, ECX, and EDX contain extended feature flag information.
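For example, an individual feature flag from sub-leaf 0 can be tested as follows (a minimal sketch, assuming GCC/Clang's <cpuid.h>; the AVX2 flag is CPUID.(EAX=07H, ECX=0):EBX[5]):

#include <cpuid.h>
#include <stdbool.h>

bool has_avx2(void)
{
    unsigned int eax, ebx, ecx, edx;
    if (__get_cpuid_max(0, 0) < 7)        /* leaf 07H must be within the basic range */
        return false;
    __cpuid_count(7, 0, eax, ebx, ecx, edx);
    return (ebx >> 5) & 1;                /* CPUID.(EAX=07H, ECX=0):EBX[5] = AVX2 */
}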

+

INPUT EAX = 09H: Returns Direct Cache Access Information + ¶ +

+

When CPUID executes with EAX set to 09H, the processor returns information about Direct Cache Access capabilities. See Table 3-8.

+

INPUT EAX = 0AH: Returns Architectural Performance Monitoring Features + ¶ +

+

When CPUID executes with EAX set to 0AH, the processor returns information about support for architectural performance monitoring capabilities. Architectural performance monitoring is supported if the version ID (see Table 3-8) is greater than Pn 0. See Table 3-8.

+

For each version of architectural performance monitoring capability, software must enumerate this leaf to discover the programming facilities and the architectural performance events available in the processor. The details are described in Chapter 24, “Introduction to Virtual Machine Extensions,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3C.

+

INPUT EAX = 0BH: Returns Extended Topology Information + ¶ +

+

CPUID leaf 1FH is a preferred superset to leaf 0BH. Intel recommends first checking for the existence of Leaf 1FH before using leaf 0BH.

+

When CPUID executes with EAX set to 0BH, the processor returns information about extended topology enumeration data. Software must detect the presence of CPUID leaf 0BH by verifying (a) the highest leaf index supported by CPUID is >= 0BH, and (b) CPUID.0BH:EBX[15:0] reports a non-zero value. See Table 3-8.
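A possible detection sequence, mirroring conditions (a) and (b) above (a minimal sketch, assuming GCC/Clang's <cpuid.h>):

#include <cpuid.h>
#include <stdbool.h>

bool leaf_0b_present(void)
{
    unsigned int eax, ebx, ecx, edx;
    if (__get_cpuid_max(0, 0) < 0x0B)            /* (a) highest basic leaf >= 0BH */
        return false;
    __cpuid_count(0x0B, 0, eax, ebx, ecx, edx);
    return (ebx & 0xFFFF) != 0;                  /* (b) CPUID.0BH:EBX[15:0] non-zero */
}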

+

INPUT EAX = 0DH: Returns Processor Extended States Enumeration Information + ¶ +

+

When CPUID executes with EAX set to 0DH and ECX = 0, the processor returns information about the bit-vector representation of all processor state extensions that are supported in the processor and storage size requirements of the XSAVE/XRSTOR area. See Table 3-8.

+

When CPUID executes with EAX set to 0DH and ECX = n (n > 1, and is a valid sub-leaf index), the processor returns information about the size and offset of each processor extended state save area within the XSAVE/XRSTOR area. See Table 3-8. Software can use the forward-extendable technique depicted below to query the valid sub-leaves and obtain size and offset information for each processor extended state save area:

+

FOR i := 2 to 62 // sub-leaf 1 is reserved
    IF (CPUID.(EAX=0DH, ECX=0H):VECTOR[i] = 1) // VECTOR is the 64-bit value of EDX:EAX
        Execute CPUID.(EAX=0DH, ECX = i) to examine size and offset for sub-leaf i;
    FI;
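Expressed in C, the same loop might look as follows (a minimal sketch, assuming GCC/Clang's <cpuid.h>; sub-leaf i reports the save-area size in EAX and its offset in EBX):

#include <cpuid.h>
#include <stdint.h>
#include <stdio.h>

void enumerate_xsave_components(void)
{
    unsigned int eax, ebx, ecx, edx;
    __cpuid_count(0x0D, 0, eax, ebx, ecx, edx);
    uint64_t vector = ((uint64_t)edx << 32) | eax;     /* supported state-component bit vector */
    for (unsigned int i = 2; i <= 62; i++) {           /* sub-leaf 1 is reserved */
        if (!((vector >> i) & 1))
            continue;
        unsigned int size, offset, c, d;
        __cpuid_count(0x0D, i, size, offset, c, d);    /* EAX = size, EBX = offset */
        printf("state component %u: %u bytes at offset %u\n", i, size, offset);
    }
}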

+

INPUT EAX = 0FH: Returns Intel Resource Director Technology (Intel RDT) Monitoring Enumeration Information + ¶ +

+

When CPUID executes with EAX set to 0FH and ECX = 0, the processor returns information about the bit-vector representation of QoS monitoring resource types that are supported in the processor and the maximum range of RMID values the processor can use to monitor any supported resource type. Each bit, starting from bit 1, corresponds to a specific resource type if the bit is set. The bit position corresponds to the sub-leaf index (or ResID) that software must use to query the QoS monitoring capability available for that type. See Table 3-8.

+

When CPUID executes with EAX set to 0FH and ECX = n (n >= 1, and is a valid ResID), the processor returns information software can use to program IA32_PQR_ASSOC, IA32_QM_EVTSEL MSRs before reading QoS data from the IA32_QM_CTR MSR.

+

INPUT EAX = 10H: Returns Intel Resource Director Technology (Intel RDT) Allocation Enumeration Information + ¶ +

+

When CPUID executes with EAX set to 10H and ECX = 0, the processor returns information about the bit-vector representation of QoS Enforcement resource types that are supported in the processor. Each bit, starting from bit 1, corresponds to a specific resource type if the bit is set. The bit position corresponds to the sub-leaf index (or ResID) that software must use to query QoS enforcement capability available for that type. See Table 3-8.

+

When CPUID executes with EAX set to 10H and ECX = n (n >= 1, and is a valid ResID), the processor returns information about the available classes of service and the range of QoS mask MSRs that software can use to configure each class of service using capability bit masks in the QoS Mask registers, IA32_resourceType_Mask_n.

+

INPUT EAX = 12H: Returns Intel SGX Enumeration Information + ¶ +

+

When CPUID executes with EAX set to 12H and ECX = 0H, the processor returns information about Intel SGX capabilities. See Table 3-8.

+

When CPUID executes with EAX set to 12H and ECX = 1H, the processor returns information about Intel SGX attributes. See Table 3-8.

+

When CPUID executes with EAX set to 12H and ECX = n (n > 1), the processor returns information about Intel SGX Enclave Page Cache. See Table 3-8.

+

INPUT EAX = 14H: Returns Intel Processor Trace Enumeration Information + ¶ +

+

When CPUID executes with EAX set to 14H and ECX = 0H, the processor returns information about Intel Processor Trace extensions. See Table 3-8.

+

When CPUID executes with EAX set to 14H and ECX = n (n > 0 and less than the number of non-zero bits in CPUID.(EAX=14H, ECX= 0H).EAX), the processor returns information about packet generation in Intel Processor Trace. See Table 3-8.

+

INPUT EAX = 15H: Returns Time Stamp Counter and Nominal Core Crystal Clock Information + ¶ +

+

When CPUID executes with EAX set to 15H and ECX = 0H, the processor returns information about Time Stamp Counter and Core Crystal Clock. See Table 3-8.

+

INPUT EAX = 16H: Returns Processor Frequency Information + ¶ +

+

When CPUID executes with EAX set to 16H, the processor returns information about Processor Frequency Information. See Table 3-8.

+

INPUT EAX = 17H: Returns System-On-Chip Information + ¶ +

+

When CPUID executes with EAX set to 17H, the processor returns information about the System-On-Chip Vendor Attribute Enumeration. See Table 3-8.

+

INPUT EAX = 18H: Returns Deterministic Address Translation Parameters Information + ¶ +

+

When CPUID executes with EAX set to 18H, the processor returns information about the Deterministic Address Translation Parameters. See Table 3-8.

+

INPUT EAX = 19H: Returns Key Locker Information + ¶ +

+

When CPUID executes with EAX set to 19H, the processor returns information about Key Locker. See Table 3-8.

+

INPUT EAX = 1AH: Returns Native Model ID Information + ¶ +

+

When CPUID executes with EAX set to 1AH, the processor returns information about Native Model Identification. See Table 3-8.

+

INPUT EAX = 1BH: Returns PCONFIG Information + ¶ +

+

When CPUID executes with EAX set to 1BH, the processor returns information about PCONFIG capabilities. This information is enumerated in sub-leaves selected by the value of ECX (starting with 0).

+

Each sub-leaf of CPUID function 1BH enumerates its sub-leaf type in EAX. If a sub-leaf type is 0, the sub-leaf is invalid and zero is returned in EBX, ECX, and EDX. In this case, all subsequent sub-leaves (selected by larger input values of ECX) are also invalid.

+

The only valid sub-leaf type currently defined is 1, indicating that the sub-leaf enumerates target identifiers for the PCONFIG instruction. Any non-zero value returned in EBX, ECX, or EDX indicates a valid target identifier of the PCONFIG instruction (any value of zero should be ignored). The only target identifier currently defined is 1, indicating TME-MK. See the “PCONFIG—Platform Configuration” instruction in Chapter 4 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2B, for more information.
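Enumeration of these sub-leaves might look as follows (a minimal sketch, assuming GCC/Clang's <cpuid.h>; it simply follows the rules stated above):

#include <cpuid.h>
#include <stdio.h>

void enumerate_pconfig_targets(void)
{
    for (unsigned int sub = 0; ; sub++) {
        unsigned int eax, ebx, ecx, edx;
        __cpuid_count(0x1B, sub, eax, ebx, ecx, edx);
        if (eax == 0)                  /* sub-leaf type 0: invalid; all later sub-leaves are too */
            break;
        if (eax == 1) {                /* type 1: EBX/ECX/EDX list PCONFIG target identifiers */
            unsigned int targets[3] = { ebx, ecx, edx };
            for (int i = 0; i < 3; i++)
                if (targets[i] != 0)   /* zero values are ignored */
                    printf("PCONFIG target identifier: %u\n", targets[i]);
        }
    }
}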

+

INPUT EAX = 1CH: Returns Last Branch Record Information + ¶ +

+

When CPUID executes with EAX set to 1CH, the processor returns information about LBRs (the architectural feature). See Table 3-8.

+

INPUT EAX = 1DH: Returns Tile Information + ¶ +

+

When CPUID executes with EAX set to 1DH and ECX = 0H, the processor returns information about tile architecture. See Table 3-8.

+

When CPUID executes with EAX set to 1DH and ECX = 1H, the processor returns information about tile palette 1. See Table 3-8.

+

INPUT EAX = 1EH: Returns TMUL Information + ¶ +

+

When CPUID executes with EAX set to 1EH and ECX = 0H, the processor returns information about TMUL capabilities. See Table 3-8.

+

INPUT EAX = 1FH: Returns V2 Extended Topology Information + ¶ +

+

When CPUID executes with EAX set to 1FH, the processor returns information about extended topology enumeration data. Software must detect the presence of CPUID leaf 1FH by verifying (a) the highest leaf index supported by CPUID is >= 1FH, and (b) CPUID.1FH:EBX[15:0] reports a non-zero value. See Table 3-8.

+

INPUT EAX = 20H: Returns History Reset Information + ¶ +

+

When CPUID executes with EAX set to 20H, the processor returns information about History Reset. See Table 3-8.

+

METHODS FOR RETURNING BRANDING INFORMATION + ¶ +

+

Use the following techniques to access branding information:

+

1. Processor brand string method.

+

2. Processor brand index; this method uses a software supplied brand string table.

+

These two methods are discussed in the following sections. For methods that are available in early processors, see Section: “Identification of Earlier IA-32 Processors” in Chapter 20 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1.

+

The Processor Brand String Method + ¶ +

+

Figure 3-9 describes the algorithm used for detection of the brand string. Processor brand identification software should execute this algorithm on all Intel 64 and IA-32 processors.

+

This method (introduced with Pentium 4 processors) returns an ASCII brand identification string and the Processor Base frequency of the processor to the EAX, EBX, ECX, and EDX registers.

+
Figure 3-9. Determination of Support for the Processor Brand String
+

How Brand Strings Work + ¶ +

+

To use the brand string method, execute CPUID with EAX input values of 80000002H through 80000004H. For each input value, CPUID returns 16 ASCII characters using EAX, EBX, ECX, and EDX. The returned string is NULL-terminated.
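A typical retrieval sequence (a minimal sketch, assuming GCC/Clang's <cpuid.h>; it first confirms that leaf 80000004H is supported, as required by the algorithm of Figure 3-9):

#include <cpuid.h>
#include <string.h>

/* Fills brand[] (at least 49 bytes) with the NULL-terminated brand string.
   Returns 0 if the brand string is not supported. */
int get_brand_string(char brand[49])
{
    unsigned int regs[4];
    if (__get_cpuid_max(0x80000000, 0) < 0x80000004)
        return 0;
    for (unsigned int leaf = 0; leaf < 3; leaf++) {
        __cpuid(0x80000002 + leaf, regs[0], regs[1], regs[2], regs[3]);
        memcpy(brand + 16 * leaf, regs, 16);   /* EAX, EBX, ECX, EDX in order */
    }
    brand[48] = '\0';
    return 1;
}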

+

Table 3-13 shows the brand string that is returned by the first processor in the Pentium 4 processor family.

+
EAX Input Value | Return Values | ASCII Equivalent
80000002H | EAX = 20202020H, EBX = 20202020H, ECX = 20202020H, EDX = 6E492020H | “    ” “    ” “    ” “nI  ”
80000003H | EAX = 286C6574H, EBX = 50202952H, ECX = 69746E65H, EDX = 52286D75H | “(let” “P )R” “itne” “R(mu”
80000004H | EAX = 20342029H, EBX = 20555043H, ECX = 30303531H, EDX = 007A484DH | “ 4 )” “ UPC” “0051” “\0zHM”
+
Table 3-13. Processor Brand String Returned with Pentium 4 Processor
+

Extracting the Processor Frequency from Brand Strings + ¶ +

+

Figure 3-10 provides an algorithm which software can use to extract the Processor Base frequency from the processor brand string.

+
Figure 3-10. Algorithm for Extracting Processor Frequency
+
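In code, the scan described in Figure 3-10 could be approximated as follows (a minimal sketch; it uses the brand string obtained via leaves 80000002H–80000004H and searches forward for the frequency substring rather than scanning in reverse byte order):

#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/* Returns the Processor Base frequency in Hz parsed from a brand string such as
   "Intel(R) Pentium(R) 4 CPU 1500MHz", or 0.0 if no frequency substring is found. */
double brand_string_frequency(const char *brand)
{
    static const struct { const char *unit; double mult; } units[] = {
        { "MHz", 1.0e6 }, { "GHz", 1.0e9 }, { "THz", 1.0e12 },
    };
    for (int u = 0; u < 3; u++) {
        const char *p = strstr(brand, units[u].unit);
        if (!p)
            continue;
        const char *start = p;            /* walk back over the "X.YZ" digits */
        while (start > brand && (isdigit((unsigned char)start[-1]) || start[-1] == '.'))
            start--;
        return atof(start) * units[u].mult;
    }
    return 0.0;
}

For the brand string shown in Table 3-13 this yields 1500 * 1e6 = 1.5 GHz.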

The Processor Brand Index Method + ¶ +

+

The brand index method (introduced with Pentium® III Xeon® processors) provides an entry point into a brand identification table that is maintained in memory by system software and is accessible from system- and user-level code. In this table, each brand index is associated with an ASCII brand identification string that identifies the official Intel family and model number of a processor.

+

When CPUID executes with EAX set to 1, the processor returns a brand index in the low byte of EBX. Software can then use this index to locate the brand identification string for the processor in the brand identification table. The first entry (brand index 0) in this table is reserved, allowing for backward compatibility with processors that do not support the brand identification feature. Starting with processor signature family ID = 0FH, model = 03H, the brand index method is no longer supported; use the brand string method instead.
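A sketch of the lookup (assuming GCC/Clang's <cpuid.h> and a hypothetical software-supplied table brand_table[] indexed as in Table 3-14; both names are illustrative):

#include <cpuid.h>

extern const char *brand_table[256];   /* hypothetical software-supplied table */

const char *lookup_brand(void)
{
    unsigned int eax, ebx, ecx, edx;
    __cpuid(1, eax, ebx, ecx, edx);
    unsigned int index = ebx & 0xFF;    /* brand index is in the low byte of EBX */
    return brand_table[index];          /* index 0 = feature not supported */
}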

+

Table 3-14 shows brand indices that have identification strings associated with them.

+
Brand Index | Brand String
00H | This processor does not support the brand identification feature
01H | Intel(R) Celeron(R) processor¹
02H | Intel(R) Pentium(R) III processor¹
03H | Intel(R) Pentium(R) III Xeon(R) processor; If processor signature = 000006B1h, then Intel(R) Celeron(R) processor
04H | Intel(R) Pentium(R) III processor
06H | Mobile Intel(R) Pentium(R) III processor-M
07H | Mobile Intel(R) Celeron(R) processor¹
08H | Intel(R) Pentium(R) 4 processor
09H | Intel(R) Pentium(R) 4 processor
0AH | Intel(R) Celeron(R) processor¹
0BH | Intel(R) Xeon(R) processor; If processor signature = 00000F13h, then Intel(R) Xeon(R) processor MP
0CH | Intel(R) Xeon(R) processor MP
0EH | Mobile Intel(R) Pentium(R) 4 processor-M; If processor signature = 00000F13h, then Intel(R) Xeon(R) processor
0FH | Mobile Intel(R) Celeron(R) processor¹
11H | Mobile Genuine Intel(R) processor
12H | Intel(R) Celeron(R) M processor
13H | Mobile Intel(R) Celeron(R) processor¹
14H | Intel(R) Celeron(R) processor
15H | Mobile Genuine Intel(R) processor
16H | Intel(R) Pentium(R) M processor
17H | Mobile Intel(R) Celeron(R) processor¹
18H – 0FFH | RESERVED
+
Table 3-14. Mapping of Brand Indices and Intel 64 and IA-32 Processor Brand Strings
+

NOTES:

+

1. Indicates versions of these processors that were introduced after the Pentium III.

+

IA-32 Architecture Compatibility + ¶ +

+

CPUID is not supported in early models of the Intel486 processor or in any IA-32 processor earlier than the Intel486 processor.

+

Operation + ¶ +

+
IA32_BIOS_SIGN_ID MSR := Update with installed microcode revision number;
+CASE (EAX) OF
+    EAX = 0:
+        EAX := Highest basic function input value understood by CPUID;
+        EBX := Vendor identification string;
+        EDX := Vendor identification string;
+        ECX := Vendor identification string;
+    BREAK;
+    EAX = 1H:
+        EAX[3:0] := Stepping ID;
+        EAX[7:4] := Model;
+        EAX[11:8] := Family;
+        EAX[13:12] := Processor type;
+        EAX[15:14] := Reserved;
+        EAX[19:16] := Extended Model;
+        EAX[27:20] := Extended Family;
+        EAX[31:28] := Reserved;
+        EBX[7:0] := Brand Index; (* Reserved if the value is zero. *)
+        EBX[15:8] := CLFLUSH Line Size;
+        EBX[23:16] := Reserved; (* Number of threads enabled = 2 if MT enable fuse set. *)
+        EBX[31:24] := Initial APIC ID;
+        ECX := Feature flags; (* See Figure 3-7. *)
+        EDX := Feature flags; (* See Figure 3-8. *)
+    BREAK;
+    EAX = 2H:
+        EAX := Cache and TLB information;
+        EBX := Cache and TLB information;
+        ECX := Cache and TLB information;
+        EDX := Cache and TLB information;
+    BREAK;
+    EAX = 3H:
+        EAX := Reserved;
+        EBX := Reserved;
+        ECX := ProcessorSerialNumber[31:0];
+        (* Pentium III processors only, otherwise reserved. *)
+        EDX := ProcessorSerialNumber[63:32];
+        (* Pentium III processors only, otherwise reserved. *)
+    BREAK;
+    EAX = 4H:
+        EAX := Deterministic Cache Parameters Leaf; (* See Table 3-8. *)
+        EBX := Deterministic Cache Parameters Leaf;
+        ECX := Deterministic Cache Parameters Leaf;
+        EDX := Deterministic Cache Parameters Leaf;
+    BREAK;
+    EAX = 5H:
+        EAX := MONITOR/MWAIT Leaf; (* See Table 3-8. *)
+        EBX := MONITOR/MWAIT Leaf;
+        ECX := MONITOR/MWAIT Leaf;
+        EDX := MONITOR/MWAIT Leaf;
+    BREAK;
+    EAX = 6H:
+        EAX := Thermal and Power Management Leaf; (* See Table 3-8. *)
+        EBX := Thermal and Power Management Leaf;
+        ECX := Thermal and Power Management Leaf;
+        EDX := Thermal and Power Management Leaf;
+    BREAK;
+    EAX = 7H:
+        EAX := Structured Extended Feature Flags Enumeration Leaf; (* See Table 3-8. *)
+        EBX := Structured Extended Feature Flags Enumeration Leaf;
+        ECX := Structured Extended Feature Flags Enumeration Leaf;
+        EDX := Structured Extended Feature Flags Enumeration Leaf;
+    BREAK;
+    EAX = 8H:
+        EAX := Reserved = 0;
+        EBX := Reserved = 0;
+        ECX := Reserved = 0;
+        EDX := Reserved = 0;
+    BREAK;
+    EAX = 9H:
+        EAX := Direct Cache Access Information Leaf; (* See Table 3-8. *)
+        EBX := Direct Cache Access Information Leaf;
+        ECX := Direct Cache Access Information Leaf;
+        EDX := Direct Cache Access Information Leaf;
+    BREAK;
+    EAX = AH:
+        EAX := Architectural Performance Monitoring Leaf; (* See Table 3-8. *)
+        EBX := Architectural Performance Monitoring Leaf;
+        ECX := Architectural Performance Monitoring Leaf;
+        EDX := Architectural Performance Monitoring Leaf;
+    BREAK;
+    EAX = BH:
+        EAX := Extended Topology Enumeration Leaf; (* See Table 3-8. *)
+        EBX := Extended Topology Enumeration Leaf;
+        ECX := Extended Topology Enumeration Leaf;
+        EDX := Extended Topology Enumeration Leaf;
+    BREAK;
+    EAX = CH:
+        EAX := Reserved = 0;
+        EBX := Reserved = 0;
+        ECX := Reserved = 0;
+        EDX := Reserved = 0;
+    BREAK;
+    EAX = DH:
+        EAX := Processor Extended State Enumeration Leaf; (* See Table 3-8. *)
+        EBX := Processor Extended State Enumeration Leaf;
+        ECX := Processor Extended State Enumeration Leaf;
+        EDX := Processor Extended State Enumeration Leaf;
+    BREAK;
+    EAX = EH:
+        EAX := Reserved = 0;
+        EBX := Reserved = 0;
+        ECX := Reserved = 0;
+        EDX := Reserved = 0;
+    BREAK;
+    EAX = FH:
+        EAX := Intel Resource Director Technology Monitoring Enumeration Leaf; (* See Table 3-8. *)
+        EBX := Intel Resource Director Technology Monitoring Enumeration Leaf;
+        ECX := Intel Resource Director Technology Monitoring Enumeration Leaf;
+        EDX := Intel Resource Director Technology Monitoring Enumeration Leaf;
+    BREAK;
+    EAX = 10H:
+        EAX := Intel Resource Director Technology Allocation Enumeration Leaf; (* See Table 3-8. *)
+        EBX := Intel Resource Director Technology Allocation Enumeration Leaf;
+        ECX := Intel Resource Director Technology Allocation Enumeration Leaf;
+        EDX := Intel Resource Director Technology Allocation Enumeration Leaf;
+    BREAK;
+    EAX = 12H:
+        EAX := Intel SGX Enumeration Leaf; (* See Table 3-8. *)
+        EBX := Intel SGX Enumeration Leaf;
+        ECX := Intel SGX Enumeration Leaf;
+        EDX := Intel SGX Enumeration Leaf;
+    BREAK;
+    EAX = 14H:
+        EAX := Intel Processor Trace Enumeration Leaf; (* See Table 3-8. *)
+        EBX := Intel Processor Trace Enumeration Leaf;
+        ECX := Intel Processor Trace Enumeration Leaf;
+        EDX := Intel Processor Trace Enumeration Leaf;
+    BREAK;
+    EAX = 15H:
+        EAX := Time Stamp Counter and Nominal Core Crystal Clock Information Leaf; (* See Table 3-8. *)
+        EBX := Time Stamp Counter and Nominal Core Crystal Clock Information Leaf;
+        ECX := Time Stamp Counter and Nominal Core Crystal Clock Information Leaf;
+        EDX := Time Stamp Counter and Nominal Core Crystal Clock Information Leaf;
+    BREAK;
+    EAX = 16H:
+        EAX := Processor Frequency Information Enumeration Leaf; (* See Table 3-8. *)
+        EBX := Processor Frequency Information Enumeration Leaf;
+        ECX := Processor Frequency Information Enumeration Leaf;
+        EDX := Processor Frequency Information Enumeration Leaf;
+    BREAK;
+    EAX = 17H:
+        EAX := System-On-Chip Vendor Attribute Enumeration Leaf; (* See Table 3-8. *)
+        EBX := System-On-Chip Vendor Attribute Enumeration Leaf;
+        ECX := System-On-Chip Vendor Attribute Enumeration Leaf;
+        EDX := System-On-Chip Vendor Attribute Enumeration Leaf;
+    BREAK;
+    EAX = 18H:
+        EAX := Deterministic Address Translation Parameters Enumeration Leaf; (* See Table 3-8. *)
+        EBX := Deterministic Address Translation Parameters Enumeration Leaf;
+        ECX := Deterministic Address Translation Parameters Enumeration Leaf;
+        EDX := Deterministic Address Translation Parameters Enumeration Leaf;
+    BREAK;
+    EAX = 19H:
+        EAX := Key Locker Enumeration Leaf; (* See Table 3-8. *)
+        EBX := Key Locker Enumeration Leaf;
+        ECX := Key Locker Enumeration Leaf;
+        EDX := Key Locker Enumeration Leaf;
+    BREAK;
+    EAX = 1AH:
+        EAX := Native Model ID Enumeration Leaf; (* See Table 3-8. *)
+        EBX := Native Model ID Enumeration Leaf;
+        ECX := Native Model ID Enumeration Leaf;
+        EDX := Native Model ID Enumeration Leaf;
+    BREAK;
+    EAX = 1BH:
+        EAX := PCONFIG Information Enumeration Leaf; (* See “INPUT EAX = 1BH: Returns PCONFIG Information” on page 3-253. *)
+        EBX := PCONFIG Information Enumeration Leaf;
+        ECX := PCONFIG Information Enumeration Leaf;
+        EDX := PCONFIG Information Enumeration Leaf;
+    BREAK;
+    EAX = 1CH:
+        EAX := Last Branch Record Information Enumeration Leaf; (* See Table 3-8. *)
+        EBX := Last Branch Record Information Enumeration Leaf;
+        ECX := Last Branch Record Information Enumeration Leaf;
+        EDX := Last Branch Record Information Enumeration Leaf;
+    BREAK;
+    EAX = 1DH:
+        EAX := Tile Information Enumeration Leaf; (* See Table 3-8. *)
+        EBX := Tile Information Enumeration Leaf;
+        ECX := Tile Information Enumeration Leaf;
+        EDX := Tile Information Enumeration Leaf;
+    BREAK;
+    EAX = 1EH:
+        EAX := TMUL Information Enumeration Leaf; (* See Table 3-8. *)
+        EBX := TMUL Information Enumeration Leaf;
+        ECX := TMUL Information Enumeration Leaf;
+        EDX := TMUL Information Enumeration Leaf;
+    BREAK;
+    EAX = 1FH:
+        EAX := V2 Extended Topology Enumeration Leaf; (* See Table 3-8. *)
+        EBX := V2 Extended Topology Enumeration Leaf;
+        ECX := V2 Extended Topology Enumeration Leaf;
+        EDX := V2 Extended Topology Enumeration Leaf;
+    BREAK;
+    EAX = 20H:
+        EAX := Processor History Reset Sub-leaf; (* See Table 3-8. *)
+        EBX := Processor History Reset Sub-leaf;
+        ECX := Processor History Reset Sub-leaf;
+        EDX := Processor History Reset Sub-leaf;
+    BREAK;
+    EAX = 80000000H:
+        EAX := Highest extended function input value understood by CPUID;
+        EBX := Reserved;
+        ECX := Reserved;
+        EDX := Reserved;
+    BREAK;
+    EAX = 80000001H:
+        EAX := Reserved;
+        EBX := Reserved;
+        ECX := Extended Feature Bits (* See Table 3-8.*);
+        EDX := Extended Feature Bits (* See Table 3-8. *);
+    BREAK;
+    EAX = 80000002H:
+        EAX := Processor Brand String;
+        EBX := Processor Brand String,
+            continued;
+        ECX := Processor Brand String,
+            continued;
+        EDX := Processor Brand String,
+            continued;
+    BREAK;
+    EAX = 80000003H:
+        EAX := Processor Brand String,
+            continued;
+        EBX := Processor Brand String,
+            continued;
+        ECX := Processor Brand String,
+            continued;
+        EDX := Processor Brand String,
+            continued;
+    BREAK;
+    EAX = 80000004H:
+        EAX := Processor Brand String,
+            continued;
+        EBX := Processor Brand String,
+            continued;
+        ECX := Processor Brand String,
+            continued;
+        EDX := Processor Brand String, continued;
+    BREAK;
+    EAX = 80000005H:
+        EAX := Reserved = 0;
+        EBX := Reserved = 0;
+        ECX := Reserved = 0;
+        EDX := Reserved = 0;
+    BREAK;
+    EAX = 80000006H:
+        EAX := Reserved = 0;
+        EBX := Reserved = 0;
+        ECX := Cache information;
+        EDX := Reserved = 0;
+    BREAK;
+    EAX = 80000007H:
+        EAX := Reserved = 0;
+        EBX := Reserved = 0;
+        ECX := Reserved = 0;
+        EDX := Reserved = Misc Feature Flags;
+    BREAK;
+    EAX = 80000008H:
+        EAX := Address Size Information;
+        EBX := Misc Feature Flags;
+        ECX := Reserved = 0;
+        EDX := Reserved = 0;
+    BREAK;
+    EAX >= 40000000H and EAX <= 4FFFFFFFH:
+    DEFAULT: (* EAX = Value outside of recognized range for CPUID. *)
+        (* If the highest basic information leaf data depend on ECX input value, ECX is honored.*)
+        EAX := Reserved; (* Information returned for highest basic information leaf. *)
+        EBX := Reserved; (* Information returned for highest basic information leaf. *)
+        ECX := Reserved; (* Information returned for highest basic information leaf. *)
+        EDX := Reserved; (* Information returned for highest basic information leaf. *)
+    BREAK;
+ESAC;
+
+

Flags Affected + ¶ +

+

None.

+

Exceptions (All Operating Modes) + ¶ +

+

#UD If the LOCK prefix is used.

+

In earlier IA-32 processors that do not support the CPUID instruction, execution of the instruction results in an invalid opcode (#UD) exception being generated.

diff --git a/x86/crc32.html b/x86/crc32.html new file mode 100644 index 0000000..03a75cd --- /dev/null +++ b/x86/crc32.html @@ -0,0 +1,228 @@ + +CRC32 + — Accumulate CRC32 Value

CRC32 + — Accumulate CRC32 Value

Opcode/Instruction | Op/En | 64-Bit Mode | Compat/Leg Mode | Description
F2 0F 38 F0 /r CRC32 r32, r/m8 | RM | Valid | Valid | Accumulate CRC32 on r/m8.
F2 REX 0F 38 F0 /r CRC32 r32, r/m8¹ | RM | Valid | N.E. | Accumulate CRC32 on r/m8.
F2 0F 38 F1 /r CRC32 r32, r/m16 | RM | Valid | Valid | Accumulate CRC32 on r/m16.
F2 0F 38 F1 /r CRC32 r32, r/m32 | RM | Valid | Valid | Accumulate CRC32 on r/m32.
F2 REX.W 0F 38 F0 /r CRC32 r64, r/m8 | RM | Valid | N.E. | Accumulate CRC32 on r/m8.
F2 REX.W 0F 38 F1 /r CRC32 r64, r/m64 | RM | Valid | N.E. | Accumulate CRC32 on r/m64.
+
+

1. In 64-bit mode, r/m8 can not be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

Instruction Operand Encoding + ¶ +

Op/En | Operand 1 | Operand 2 | Operand 3 | Operand 4
RM | ModRM:reg (r, w) | ModRM:r/m (r) | N/A | N/A
+

Description + ¶ +

+

Starting with an initial value in the first operand (destination operand), accumulates a CRC32 (polynomial 11EDC6F41H) value for the second operand (source operand) and stores the result in the destination operand. The source operand can be a register or a memory location. The destination operand must be an r32 or r64 register. If the destination is an r64 register, then the 32-bit result is stored in the least significant double word and 00000000H is stored in the most significant double word of the r64 register.

+

The initial value supplied in the destination operand is a double word integer stored in the r32 register or the least significant double word of the r64 register. To incrementally accumulate a CRC32 value, software retains the result of the previous CRC32 operation in the destination operand, then executes the CRC32 instruction again with new input data in the source operand. Data contained in the source operand is processed in reflected bit order. This means that the most significant bit of the source operand is treated as the least significant bit of the quotient, and so on, for all the bits of the source operand. Likewise, the result of the CRC operation is stored in the destination operand in reflected bit order. This means that the most significant bit of the resulting CRC (bit 31) is stored in the least significant bit of the destination operand (bit 0), and so on, for all the bits of the CRC.
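For instance, a running CRC can be accumulated over a byte buffer with the intrinsics listed under "Intel C/C++ Compiler Intrinsic Equivalent" below (a minimal sketch; it requires an SSE4.2-enabled build, e.g. -msse4.2, and leaves the initial value and any final XOR convention to the caller):

#include <nmmintrin.h>   /* SSE4.2 CRC32 intrinsics */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

uint32_t crc32c_update(uint32_t crc, const uint8_t *buf, size_t len)
{
    while (len >= 4) {               /* consume 4 bytes at a time */
        uint32_t word;
        memcpy(&word, buf, 4);       /* unaligned-safe load */
        crc = _mm_crc32_u32(crc, word);
        buf += 4;
        len -= 4;
    }
    while (len--)                    /* finish with single bytes */
        crc = _mm_crc32_u8(crc, *buf++);
    return crc;
}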

+

Operation + ¶ +

+
+

BIT_REFLECT64: DST[63-0] = SRC[0-63]

+

BIT_REFLECT32: DST[31-0] = SRC[0-31]

+

BIT_REFLECT16: DST[15-0] = SRC[0-15]

+

BIT_REFLECT8: DST[7-0] = SRC[0-7]

+

MOD2: Remainder from Polynomial division modulus 2

+
CRC32 instruction for 64-bit source operand and 64-bit destination operand:
+    TEMP1[63-0] := BIT_REFLECT64 (SRC[63-0])
+    TEMP2[31-0] := BIT_REFLECT32 (DEST[31-0])
+    TEMP3[95-0] := TEMP1[63-0] « 32
+    TEMP4[95-0] := TEMP2[31-0] « 64
+    TEMP5[95-0] := TEMP3[95-0] XOR TEMP4[95-0]
+    TEMP6[31-0] := TEMP5[95-0] MOD2 11EDC6F41H
+    DEST[31-0] := BIT_REFLECT (TEMP6[31-0])
+    DEST[63-32] := 00000000H
+CRC32 instruction for 32-bit source operand and 32-bit destination operand:
+    TEMP1[31-0] := BIT_REFLECT32 (SRC[31-0])
+    TEMP2[31-0] := BIT_REFLECT32 (DEST[31-0])
+    TEMP3[63-0] := TEMP1[31-0] « 32
+    TEMP4[63-0] := TEMP2[31-0] « 32
+    TEMP5[63-0] := TEMP3[63-0] XOR TEMP4[63-0]
+    TEMP6[31-0] := TEMP5[63-0] MOD2 11EDC6F41H
+    DEST[31-0] := BIT_REFLECT (TEMP6[31-0])
+CRC32 instruction for 16-bit source operand and 32-bit destination operand:
+    TEMP1[15-0] := BIT_REFLECT16 (SRC[15-0])
+    TEMP2[31-0] := BIT_REFLECT32 (DEST[31-0])
+    TEMP3[47-0] := TEMP1[15-0] « 32
+    TEMP4[47-0] := TEMP2[31-0] « 16
+    TEMP5[47-0] := TEMP3[47-0] XOR TEMP4[47-0]
+    TEMP6[31-0] := TEMP5[47-0] MOD2 11EDC6F41H
+    DEST[31-0] := BIT_REFLECT (TEMP6[31-0])
+CRC32 instruction for 8-bit source operand and 64-bit destination operand:
+    TEMP1[7-0] := BIT_REFLECT8(SRC[7-0])
+    TEMP2[31-0] := BIT_REFLECT32 (DEST[31-0])
+    TEMP3[39-0] := TEMP1[7-0] « 32
+    TEMP4[39-0] := TEMP2[31-0] « 8
+    TEMP5[39-0] := TEMP3[39-0] XOR TEMP4[39-0]
+    TEMP6[31-0] := TEMP5[39-0] MOD2 11EDC6F41H
+    DEST[31-0] := BIT_REFLECT (TEMP6[31-0])
+    DEST[63-32] := 00000000H
+CRC32 instruction for 8-bit source operand and 32-bit destination operand:
+    TEMP1[7-0] := BIT_REFLECT8(SRC[7-0])
+    TEMP2[31-0] := BIT_REFLECT32 (DEST[31-0])
+    TEMP3[39-0] := TEMP1[7-0] « 32
+    TEMP4[39-0] := TEMP2[31-0] « 8
+    TEMP5[39-0] := TEMP3[39-0] XOR TEMP4[39-0]
+    TEMP6[31-0] := TEMP5[39-0] MOD2 11EDC6F41H
+    DEST[31-0] := BIT_REFLECT (TEMP6[31-0])
+
+

Flags Affected + ¶ +

+

None.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
unsigned int _mm_crc32_u8( unsigned int crc, unsigned char data )
+
+
unsigned int _mm_crc32_u16( unsigned int crc, unsigned short data )
+
+
unsigned int _mm_crc32_u32( unsigned int crc, unsigned int data )
+
+
unsigned __int64 _mm_crc32_u64( unsigned __int64 crc, unsigned __int64 data )
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

#GP(0) | If a memory operand effective address is outside the CS, DS, ES, FS or GS segments.
#SS(0) | If a memory operand effective address is outside the SS segment limit.
#PF(fault-code) | For a page fault.
#AC(0) | If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UD | If CPUID.01H:ECX.SSE4_2[Bit 20] = 0.
If LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

#GP(0) | If any part of the operand lies outside of the effective address space from 0 to 0FFFFH.
#SS(0) | If a memory operand effective address is outside the SS segment limit.
#UD | If CPUID.01H:ECX.SSE4_2[Bit 20] = 0.
If LOCK prefix is used.
+

Virtual 8086 Mode Exceptions + ¶ +

#GP(0) | If any part of the operand lies outside of the effective address space from 0 to 0FFFFH.
#SS(0) | If a memory operand effective address is outside the SS segment limit.
#PF(fault-code) | For a page fault.
#AC(0) | If alignment checking is enabled and an unaligned memory reference is made.
#UD | If CPUID.01H:ECX.SSE4_2[Bit 20] = 0.
If LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in Protected Mode.

+

64-Bit Mode Exceptions + ¶ +

#GP(0) | If the memory address is in a non-canonical form.
#SS(0) | If a memory address referencing the SS segment is in a non-canonical form.
#PF(fault-code) | For a page fault.
#AC(0) | If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UD | If CPUID.01H:ECX.SSE4_2[Bit 20] = 0.
If LOCK prefix is used.
diff --git a/x86/cvtdq2pd.html b/x86/cvtdq2pd.html new file mode 100644 index 0000000..5bdbbb8 --- /dev/null +++ b/x86/cvtdq2pd.html @@ -0,0 +1,265 @@ + +CVTDQ2PD + — Convert Packed Doubleword Integers to Packed Double Precision Floating-PointValues

CVTDQ2PD + — Convert Packed Doubleword Integers to Packed Double Precision Floating-PointValues

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
F3 0F E6 /r CVTDQ2PD xmm1, xmm2/m64 | A | V/V | SSE2 | Convert two packed signed doubleword integers from xmm2/mem to two packed double precision floating-point values in xmm1.
VEX.128.F3.0F.WIG E6 /r VCVTDQ2PD xmm1, xmm2/m64 | A | V/V | AVX | Convert two packed signed doubleword integers from xmm2/mem to two packed double precision floating-point values in xmm1.
VEX.256.F3.0F.WIG E6 /r VCVTDQ2PD ymm1, xmm2/m128 | A | V/V | AVX | Convert four packed signed doubleword integers from xmm2/mem to four packed double precision floating-point values in ymm1.
EVEX.128.F3.0F.W0 E6 /r VCVTDQ2PD xmm1 {k1}{z}, xmm2/m64/m32bcst | B | V/V | AVX512VL AVX512F | Convert two packed signed doubleword integers from xmm2/m64/m32bcst to two packed double precision floating-point values in xmm1 with writemask k1.
EVEX.256.F3.0F.W0 E6 /r VCVTDQ2PD ymm1 {k1}{z}, xmm2/m128/m32bcst | B | V/V | AVX512VL AVX512F | Convert four packed signed doubleword integers from xmm2/m128/m32bcst to four packed double precision floating-point values in ymm1 with writemask k1.
EVEX.512.F3.0F.W0 E6 /r VCVTDQ2PD zmm1 {k1}{z}, ymm2/m256/m32bcst | B | V/V | AVX512F | Convert eight packed signed doubleword integers from ymm2/m256/m32bcst to eight packed double precision floating-point values in zmm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | N/A | ModRM:reg (w) | ModRM:r/m (r) | N/A | N/A
B | Half | ModRM:reg (w) | ModRM:r/m (r) | N/A | N/A
+

Description + ¶ +

+

Converts two, four or eight packed signed doubleword integers in the source operand (the second operand) to two, four or eight packed double precision floating-point values in the destination operand (the first operand).

+

EVEX encoded versions: The source operand can be a YMM/XMM/XMM (low 64 bits) register, a 256/128/64-bit memory location or a 256/128/64-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1. Attempt to encode this instruction with EVEX embedded rounding is ignored.

+

VEX.256 encoded version: The source operand is an XMM register or 128-bit memory location. The destination operand is a YMM register.

+

VEX.128 encoded version: The source operand is an XMM register or 64-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The source operand is an XMM register or 64-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified.

+

VEX.vvvv and EVEX.vvvv are reserved and must be 1111b, otherwise instructions will #UD.
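As a usage illustration of the intrinsic forms listed later in this entry (a minimal sketch, assuming an AVX-capable build, e.g. -mavx):

#include <immintrin.h>

/* Widen four signed doubleword integers to four double precision values
   using the VEX.256 form via _mm256_cvtepi32_pd. */
__m256d widen_to_double(const int src[4])
{
    __m128i v = _mm_loadu_si128((const __m128i *)src);   /* X3 X2 X1 X0 */
    return _mm256_cvtepi32_pd(v);                        /* four doubles */
}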

+
Figure 3-11. CVTDQ2PD (VEX.256 encoded version)
+

Operation + ¶ +

+

VCVTDQ2PD (EVEX Encoded Versions) When SRC Operand is a Register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] :=
+            Convert_Integer_To_Double_Precision_Floating_Point(SRC[k+31:k])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VCVTDQ2PD (EVEX Encoded Versions) When SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+63:i] :=
+            Convert_Integer_To_Double_Precision_Floating_Point(SRC[31:0])
+                ELSE
+                    DEST[i+63:i] :=
+            Convert_Integer_To_Double_Precision_Floating_Point(SRC[k+31:k])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VCVTDQ2PD (VEX.256 Encoded Version) + ¶ +

+
DEST[63:0] := Convert_Integer_To_Double_Precision_Floating_Point(SRC[31:0])
+DEST[127:64] := Convert_Integer_To_Double_Precision_Floating_Point(SRC[63:32])
+DEST[191:128] := Convert_Integer_To_Double_Precision_Floating_Point(SRC[95:64])
+DEST[255:192] := Convert_Integer_To_Double_Precision_Floating_Point(SRC[127:96])
+DEST[MAXVL-1:256] := 0
+
+

VCVTDQ2PD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := Convert_Integer_To_Double_Precision_Floating_Point(SRC[31:0])
+DEST[127:64] := Convert_Integer_To_Double_Precision_Floating_Point(SRC[63:32])
+DEST[MAXVL-1:128] := 0
+
+

CVTDQ2PD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := Convert_Integer_To_Double_Precision_Floating_Point(SRC[31:0])
+DEST[127:64] := Convert_Integer_To_Double_Precision_Floating_Point(SRC[63:32])
+DEST[MAXVL-1:128] (unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTDQ2PD __m512d _mm512_cvtepi32_pd( __m256i a);
+
+
VCVTDQ2PD __m512d _mm512_mask_cvtepi32_pd( __m512d s, __mmask8 k, __m256i a);
+
+
VCVTDQ2PD __m512d _mm512_maskz_cvtepi32_pd( __mmask8 k, __m256i a);
+
+
VCVTDQ2PD __m256d _mm256_cvtepi32_pd (__m128i src);
+
+
VCVTDQ2PD __m256d _mm256_mask_cvtepi32_pd( __m256d s, __mmask8 k, __m256i a);
+
+
VCVTDQ2PD __m256d _mm256_maskz_cvtepi32_pd( __mmask8 k, __m256i a);
+
+
VCVTDQ2PD __m128d _mm_mask_cvtepi32_pd( __m128d s, __mmask8 k, __m128i a);
+
+
VCVTDQ2PD __m128d _mm_maskz_cvtepi32_pd( __mmask8 k, __m128i a);
+
+
CVTDQ2PD __m128d _mm_cvtepi32_pd (__m128i src)
+
+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-22, “Type 5 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-51, “Type E5 Class Exception Conditions.”

+

Additionally:

#UD | If VEX.vvvv != 1111B or EVEX.vvvv != 1111B.
diff --git a/x86/cvtdq2ps.html b/x86/cvtdq2ps.html new file mode 100644 index 0000000..a0fd66c --- /dev/null +++ b/x86/cvtdq2ps.html @@ -0,0 +1,214 @@ + +CVTDQ2PS + — Convert Packed Doubleword Integers to Packed Single Precision Floating-PointValues

CVTDQ2PS + — Convert Packed Doubleword Integers to Packed Single Precision Floating-PointValues

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F 5B /r CVTDQ2PS xmm1, xmm2/m128 | A | V/V | SSE2 | Convert four packed signed doubleword integers from xmm2/mem to four packed single precision floating-point values in xmm1.
VEX.128.0F.WIG 5B /r VCVTDQ2PS xmm1, xmm2/m128 | A | V/V | AVX | Convert four packed signed doubleword integers from xmm2/mem to four packed single precision floating-point values in xmm1.
VEX.256.0F.WIG 5B /r VCVTDQ2PS ymm1, ymm2/m256 | A | V/V | AVX | Convert eight packed signed doubleword integers from ymm2/mem to eight packed single precision floating-point values in ymm1.
EVEX.128.0F.W0 5B /r VCVTDQ2PS xmm1 {k1}{z}, xmm2/m128/m32bcst | B | V/V | AVX512VL AVX512F | Convert four packed signed doubleword integers from xmm2/m128/m32bcst to four packed single precision floating-point values in xmm1 with writemask k1.
EVEX.256.0F.W0 5B /r VCVTDQ2PS ymm1 {k1}{z}, ymm2/m256/m32bcst | B | V/V | AVX512VL AVX512F | Convert eight packed signed doubleword integers from ymm2/m256/m32bcst to eight packed single precision floating-point values in ymm1 with writemask k1.
EVEX.512.0F.W0 5B /r VCVTDQ2PS zmm1 {k1}{z}, zmm2/m512/m32bcst{er} | B | V/V | AVX512F | Convert sixteen packed signed doubleword integers from zmm2/m512/m32bcst to sixteen packed single precision floating-point values in zmm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | N/A | ModRM:reg (w) | ModRM:r/m (r) | N/A | N/A
B | Full | ModRM:reg (w) | ModRM:r/m (r) | N/A | N/A
+

Description + ¶ +

+

Converts four, eight or sixteen packed signed doubleword integers in the source operand to four, eight or sixteen packed single precision floating-point values in the destination operand.

+

EVEX encoded versions: The source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

VEX.256 encoded version: The source operand is a YMM register or 256-bit memory location. The destination operand is a YMM register. Bits (MAXVL-1:256) of the corresponding register destination are zeroed.

+

VEX.128 encoded version: The source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding register destination are zeroed.

+

128-bit Legacy SSE version: The source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding register destination are unmodified.

+

VEX.vvvv and EVEX.vvvv are reserved and must be 1111b, otherwise instructions will #UD.
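A corresponding usage sketch for the intrinsic forms listed later in this entry (a minimal sketch, assuming an SSE2-capable build):

#include <emmintrin.h>

/* Convert four signed doubleword integers to four single precision values
   using CVTDQ2PS via _mm_cvtepi32_ps. */
__m128 to_float4(const int src[4])
{
    __m128i v = _mm_loadu_si128((const __m128i *)src);
    return _mm_cvtepi32_ps(v);
}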

+

Operation + ¶ +

+

VCVTDQ2PS (EVEX Encoded Versions) When SRC Operand is a Register + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC); ; refer to Table 15-4 in the Intel® 64 and IA-32 Architectures
+Software Developer’s Manual, Volume 1
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC); ; refer to Table 15-4 in the Intel® 64 and IA-32 Architectures
+Software Developer’s Manual, Volume 1
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] :=
+            Convert_Integer_To_Single_Precision_Floating_Point(SRC[i+31:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VCVTDQ2PS (EVEX Encoded Versions) When SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+31:i] :=
+            Convert_Integer_To_Single_Precision_Floating_Point(SRC[31:0])
+                ELSE
+                    DEST[i+31:i] :=
+            Convert_Integer_To_Single_Precision_Floating_Point(SRC[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VCVTDQ2PS (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := Convert_Integer_To_Single_Precision_Floating_Point(SRC[31:0])
+DEST[63:32] := Convert_Integer_To_Single_Precision_Floating_Point(SRC[63:32])
+DEST[95:64] := Convert_Integer_To_Single_Precision_Floating_Point(SRC[95:64])
+DEST[127:96] := Convert_Integer_To_Single_Precision_Floating_Point(SRC[127:96])
+DEST[159:128] := Convert_Integer_To_Single_Precision_Floating_Point(SRC[159:128])
+DEST[191:160] := Convert_Integer_To_Single_Precision_Floating_Point(SRC[191:160])
+DEST[223:192] := Convert_Integer_To_Single_Precision_Floating_Point(SRC[223:192])
+DEST[255:224] := Convert_Integer_To_Single_Precision_Floating_Point(SRC[255:224])
+DEST[MAXVL-1:256] := 0
+
+

VCVTDQ2PS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := Convert_Integer_To_Single_Precision_Floating_Point(SRC[31:0])
+DEST[63:32] := Convert_Integer_To_Single_Precision_Floating_Point(SRC[63:32])
+DEST[95:64] := Convert_Integer_To_Single_Precision_Floating_Point(SRC[95:64])
+DEST[127:96] := Convert_Integer_To_Single_Precision_Floating_Point(SRC[127:96])
+DEST[MAXVL-1:128] := 0
+
+

CVTDQ2PS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := Convert_Integer_To_Single_Precision_Floating_Point(SRC[31:0])
+DEST[63:32] := Convert_Integer_To_Single_Precision_Floating_Point(SRC[63:32])
+DEST[95:64] := Convert_Integer_To_Single_Precision_Floating_Point(SRC[95:64])
+DEST[127:96] := Convert_Integer_To_Single_Precision_Floating_Point(SRC[127:96])
+DEST[MAXVL-1:128] (unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTDQ2PS __m512 _mm512_cvtepi32_ps( __m512i a);
+
+
VCVTDQ2PS __m512 _mm512_mask_cvtepi32_ps( __m512 s, __mmask16 k, __m512i a);
+
+
VCVTDQ2PS __m512 _mm512_maskz_cvtepi32_ps( __mmask16 k, __m512i a);
+
+
VCVTDQ2PS __m512 _mm512_cvt_roundepi32_ps( __m512i a, int r);
+
+
VCVTDQ2PS __m512 _mm512_mask_cvt_roundepi_ps( __m512 s, __mmask16 k, __m512i a, int r);
+
+
VCVTDQ2PS __m512 _mm512_maskz_cvt_roundepi32_ps( __mmask16 k, __m512i a, int r);
+
+
VCVTDQ2PS __m256 _mm256_mask_cvtepi32_ps( __m256 s, __mmask8 k, __m256i a);
+
+
VCVTDQ2PS __m256 _mm256_maskz_cvtepi32_ps( __mmask8 k, __m256i a);
+
+
VCVTDQ2PS __m128 _mm_mask_cvtepi32_ps( __m128 s, __mmask8 k, __m128i a);
+
+
VCVTDQ2PS __m128 _mm_maskz_cvtepi32_ps( __mmask8 k, __m128i a);
+
+
CVTDQ2PS __m256 _mm256_cvtepi32_ps (__m256i src)
+
+
CVTDQ2PS __m128 _mm_cvtepi32_ps (__m128i src)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Precision.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + +
#UD If VEX.vvvv != 1111B or EVEX.vvvv != 1111B.
diff --git a/x86/cvtpd2dq.html b/x86/cvtpd2dq.html new file mode 100644 index 0000000..f346795 --- /dev/null +++ b/x86/cvtpd2dq.html @@ -0,0 +1,289 @@ + +CVTPD2DQ + — Convert Packed Double Precision Floating-Point Values to Packed DoublewordIntegers

CVTPD2DQ + — Convert Packed Double Precision Floating-Point Values to Packed Doubleword Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F2 0F E6 /r CVTPD2DQ xmm1, xmm2/m128AV/VSSE2Convert two packed double precision floating-point values in xmm2/mem to two signed doubleword integers in xmm1.
VEX.128.F2.0F.WIG E6 /r VCVTPD2DQ xmm1, xmm2/m128AV/VAVXConvert two packed double precision floating-point values in xmm2/mem to two signed doubleword integers in xmm1.
VEX.256.F2.0F.WIG E6 /r VCVTPD2DQ xmm1, ymm2/m256AV/VAVXConvert four packed double precision floating-point values in ymm2/mem to four signed doubleword integers in xmm1.
EVEX.128.F2.0F.W1 E6 /r VCVTPD2DQ xmm1 {k1}{z}, xmm2/m128/m64bcstBV/VAVX512VL AVX512FConvert two packed double precision floating-point values in xmm2/m128/m64bcst to two signed doubleword integers in xmm1 subject to writemask k1.
EVEX.256.F2.0F.W1 E6 /r VCVTPD2DQ xmm1 {k1}{z}, ymm2/m256/m64bcstBV/VAVX512VL AVX512FConvert four packed double precision floating-point values in ymm2/m256/m64bcst to four signed doubleword integers in xmm1 subject to writemask k1.
EVEX.512.F2.0F.W1 E6 /r VCVTPD2DQ ymm1 {k1}{z}, zmm2/m512/m64bcst{er}BV/VAVX512FConvert eight packed double precision floating-point values in zmm2/m512/m64bcst to eight signed doubleword integers in ymm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts packed double precision floating-point values in the source operand (second operand) to packed signed doubleword integers in the destination operand (first operand).

+

When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or the embedded rounding control bits. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the indefinite integer value (2^(w-1), where w represents the number of bits in the destination format) is returned.

+

EVEX encoded versions: The source operand is a ZMM/YMM/XMM register, a 512-bit memory location, or a 512-bit vector broadcasted from a 64-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1. The upper bits (MAXVL-1:256/128/64) of the corresponding destination are zeroed.

+

VEX.256 encoded version: The source operand is a YMM register or a 256-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

VEX.128 encoded version: The source operand is an XMM register or a 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:64) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The source operand is an XMM register or a 128-bit memory location. The destination operand is an XMM register. Bits[127:64] of the destination XMM register are zeroed. However, the upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified.

+

VEX.vvvv and EVEX.vvvv are reserved and must be 1111b, otherwise instructions will #UD.

+
Figure 3-12. VCVTPD2DQ (VEX.256 encoded version)
+

Operation + ¶ +

+

VCVTPD2DQ (EVEX Encoded Versions) When SRC Operand is a Register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    k := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] :=
+            Convert_Double_Precision_Floating_Point_To_Integer(SRC[k+63:k])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/2] := 0
+
+

VCVTPD2DQ (EVEX Encoded Versions) When SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    k := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+31:i] :=
+            Convert_Double_Precision_Floating_Point_To_Integer(SRC[63:0])
+                ELSE
+                    DEST[i+31:i] :=
+            Convert_Double_Precision_Floating_Point_To_Integer(SRC[k+63:k])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/2] := 0
+
+

VCVTPD2DQ (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := Convert_Double_Precision_Floating_Point_To_Integer(SRC[63:0])
+DEST[63:32] := Convert_Double_Precision_Floating_Point_To_Integer(SRC[127:64])
+DEST[95:64] := Convert_Double_Precision_Floating_Point_To_Integer(SRC[191:128])
+DEST[127:96] := Convert_Double_Precision_Floating_Point_To_Integer(SRC[255:192])
+DEST[MAXVL-1:128] := 0
+
+

VCVTPD2DQ (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := Convert_Double_Precision_Floating_Point_To_Integer(SRC[63:0])
+DEST[63:32] := Convert_Double_Precision_Floating_Point_To_Integer(SRC[127:64])
+DEST[MAXVL-1:64] := 0
+
+

CVTPD2DQ (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := Convert_Double_Precision_Floating_Point_To_Integer(SRC[63:0])
+DEST[63:32] := Convert_Double_Precision_Floating_Point_To_Integer(SRC[127:64])
+DEST[127:64] := 0
+DEST[MAXVL-1:128] (unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTPD2DQ __m256i _mm512_cvtpd_epi32( __m512d a);
+
+
VCVTPD2DQ __m256i _mm512_mask_cvtpd_epi32( __m256i s, __mmask8 k, __m512d a);
+
+
VCVTPD2DQ __m256i _mm512_maskz_cvtpd_epi32( __mmask8 k, __m512d a);
+
+
VCVTPD2DQ __m256i _mm512_cvt_roundpd_epi32( __m512d a, int r);
+
+
VCVTPD2DQ __m256i _mm512_mask_cvt_roundpd_epi32( __m256i s, __mmask8 k, __m512d a, int r);
+
+
VCVTPD2DQ __m256i _mm512_maskz_cvt_roundpd_epi32( __mmask8 k, __m512d a, int r);
+
+
VCVTPD2DQ __m128i _mm256_mask_cvtpd_epi32( __m128i s, __mmask8 k, __m256d a);
+
+
VCVTPD2DQ __m128i _mm256_maskz_cvtpd_epi32( __mmask8 k, __m256d a);
+
+
VCVTPD2DQ __m128i _mm_mask_cvtpd_epi32( __m128i s, __mmask8 k, __m128d a);
+
+
VCVTPD2DQ __m128i _mm_maskz_cvtpd_epi32( __mmask8 k, __m128d a);
+
+
VCVTPD2DQ __m128i _mm256_cvtpd_epi32 (__m256d src)
+
+
CVTPD2DQ __m128i _mm_cvtpd_epi32 (__m128d src)
+
+
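A short, hypothetical C usage sketch (assuming <immintrin.h> and AVX support for the 256-bit form; names are illustrative). The result is half the width of the source, so four 256-bit inputs land in a 128-bit integer register:

#include <immintrin.h>

/* VCVTPD2DQ (VEX.256): four doubles become four int32 elements in an
   XMM result; inexact values are rounded per MXCSR.RC. */
__m128i doubles_to_int32_avx(__m256d v)
{
    return _mm256_cvtpd_epi32(v);
}

/* Legacy CVTPD2DQ: two doubles to two int32 in the low quadword;
   bits 127:64 of the destination are zeroed. */
__m128i doubles_to_int32_sse2(__m128d v)
{
    return _mm_cvtpd_epi32(v);
}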

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

See Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + +
#UD If VEX.vvvv != 1111B or EVEX.vvvv != 1111B.
diff --git a/x86/cvtpd2pi.html b/x86/cvtpd2pi.html new file mode 100644 index 0000000..4be583f --- /dev/null +++ b/x86/cvtpd2pi.html @@ -0,0 +1,65 @@ + +CVTPD2PI + — Convert Packed Double Precision Floating-Point Values to Packed Dword Integers

CVTPD2PI + — Convert Packed Double Precision Floating-Point Values to Packed Dword Integers

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 2D /r CVTPD2PI mm, xmm/m128RMV/VSSE2Convert two packed double precision floating-point values from xmm/m128 to two packed signed doubleword integers in mm.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts two packed double precision floating-point values in the source operand (second operand) to two packed signed doubleword integers in the destination operand (first operand).

+

The source operand can be an XMM register or a 128-bit memory location. The destination operand is an MMX technology register.

+

When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register. If a converted result is larger than the maximum signed doubleword integer, the floating-point invalid exception is raised, and if this exception is masked, the indefinite integer value (80000000H) is returned.

+

This instruction causes a transition from x87 FPU to MMX technology operation (that is, the x87 FPU top-of-stack pointer is set to 0 and the x87 FPU tag word is set to all 0s [valid]). If this instruction is executed while an x87 FPU floating-point exception is pending, the exception is handled before the CVTPD2PI instruction is executed.

+

In 64-bit mode, use of the REX.R prefix permits this instruction to access additional registers (XMM8-XMM15).

+

Operation + ¶ +

+
DEST[31:0] := Convert_Double_Precision_Floating_Point_To_Integer32(SRC[63:0]);
+DEST[63:32] := Convert_Double_Precision_Floating_Point_To_Integer32(SRC[127:64]);
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
CVTPD2PI __m64 _mm_cvtpd_pi32(__m128d a)
+
+
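A brief, hypothetical C sketch (assuming a toolchain that still exposes the MMX intrinsics; 64-bit MSVC, for example, does not). Because the destination is an MMX register, MMX state should be cleared before returning to x87 code:

#include <emmintrin.h>

__m64 doubles_to_int32_mmx(__m128d v)
{
    __m64 r = _mm_cvtpd_pi32(v);   /* CVTPD2PI: two doubles -> two int32 */
    _mm_empty();                   /* leave MMX state before x87 use     */
    return r;
}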

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

See Table 23-4, “Exception Conditions for Legacy SIMD/MMX Instructions with FP Exception and 16-Byte Alignment” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

diff --git a/x86/cvtpd2ps.html b/x86/cvtpd2ps.html new file mode 100644 index 0000000..8c61945 --- /dev/null +++ b/x86/cvtpd2ps.html @@ -0,0 +1,286 @@ + +CVTPD2PS + — Convert Packed Double Precision Floating-Point Values to Packed Single PrecisionFloating-Point Values

CVTPD2PS + — Convert Packed Double Precision Floating-Point Values to Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 5A /r CVTPD2PS xmm1, xmm2/m128AV/VSSE2Convert two packed double precision floating-point values in xmm2/mem to two single precision floating-point values in xmm1.
VEX.128.66.0F.WIG 5A /r VCVTPD2PS xmm1, xmm2/m128AV/VAVXConvert two packed double precision floating-point values in xmm2/mem to two single precision floating-point values in xmm1.
VEX.256.66.0F.WIG 5A /r VCVTPD2PS xmm1, ymm2/m256AV/VAVXConvert four packed double precision floating-point values in ymm2/mem to four single precision floating-point values in xmm1.
EVEX.128.66.0F.W1 5A /r VCVTPD2PS xmm1 {k1}{z}, xmm2/m128/m64bcstBV/VAVX512VL AVX512FConvert two packed double precision floating-point values in xmm2/m128/m64bcst to two single precision floating-point values in xmm1with writemask k1.
EVEX.256.66.0F.W1 5A /r VCVTPD2PS xmm1 {k1}{z}, ymm2/m256/m64bcstBV/VAVX512VL AVX512FConvert four packed double precision floating-point values in ymm2/m256/m64bcst to four single precision floating-point values in xmm1with writemask k1.
EVEX.512.66.0F.W1 5A /r VCVTPD2PS ymm1 {k1}{z}, zmm2/m512/m64bcst{er}BV/VAVX512FConvert eight packed double precision floating-point values in zmm2/m512/m64bcst to eight single precision floating-point values in ymm1with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts two, four or eight packed double precision floating-point values in the source operand (second operand) to two, four or eight packed single precision floating-point values in the destination operand (first operand).

+

When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or the embedded rounding control bits.

+

EVEX encoded versions: The source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand is a YMM/XMM/XMM (low 64-bits) register conditionally updated with writemask k1. The upper bits (MAXVL-1:256/128/64) of the corresponding destination are zeroed.

+

VEX.256 encoded version: The source operand is a YMM register or a 256-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

VEX.128 encoded version: The source operand is an XMM register or a 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:64) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The source operand is an XMM register or a 128-bit memory location. The destination operand is an XMM register. Bits[127:64] of the destination XMM register are zeroed. However, the upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified.

+

VEX.vvvv and EVEX.vvvv are reserved and must be 1111b otherwise instructions will #UD.

+
Figure 3-13. VCVTPD2PS (VEX.256 encoded version)
+

Operation + ¶ +

+

VCVTPD2PS (EVEX Encoded Version) When SRC Operand is a Register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    k := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            DEST[i+31:i] := Convert_Double_Precision_Floating_Point_To_Single_Precision_Floating_Point(SRC[k+63:k])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/2] := 0
+
+

VCVTPD2PS (EVEX Encoded Version) When SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    k := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+31:i] :=Convert_Double_Precision_Floating_Point_To_Single_Precision_Floating_Point(SRC[63:0])
+                ELSE
+                    DEST[i+31:i] := Convert_Double_Precision_Floating_Point_To_Single_Precision_Floating_Point(SRC[k+63:k])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/2] := 0
+
+

VCVTPD2PS (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := Convert_Double_Precision_To_Single_Precision_Floating_Point(SRC[63:0])
+DEST[63:32] := Convert_Double_Precision_To_Single_Precision_Floating_Point(SRC[127:64])
+DEST[95:64] := Convert_Double_Precision_To_Single_Precision_Floating_Point(SRC[191:128])
+DEST[127:96] := Convert_Double_Precision_To_Single_Precision_Floating_Point(SRC[255:192])
+DEST[MAXVL-1:128] := 0
+
+

VCVTPD2PS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := Convert_Double_Precision_To_Single_Precision_Floating_Point(SRC[63:0])
+DEST[63:32] := Convert_Double_Precision_To_Single_Precision_Floating_Point(SRC[127:64])
+DEST[MAXVL-1:64] := 0
+
+

CVTPD2PS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := Convert_Double_Precision_To_Single_Precision_Floating_Point(SRC[63:0])
+DEST[63:32] := Convert_Double_Precision_To_Single_Precision_Floating_Point(SRC[127:64])
+DEST[127:64] := 0
+DEST[MAXVL-1:128] (unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTPD2PS __m256 _mm512_cvtpd_ps( __m512d a);
+
+
VCVTPD2PS __m256 _mm512_mask_cvtpd_ps( __m256 s, __mmask8 k, __m512d a);
+
+
VCVTPD2PS __m256 _mm512_maskz_cvtpd_ps( __mmask8 k, __m512d a);
+
+
VCVTPD2PS __m256 _mm512_cvt_roundpd_ps( __m512d a, int r);
+
+
VCVTPD2PS __m256 _mm512_mask_cvt_roundpd_ps( __m256 s, __mmask8 k, __m512d a, int r);
+
+
VCVTPD2PS __m256 _mm512_maskz_cvt_roundpd_ps( __mmask8 k, __m512d a, int r);
+
+
VCVTPD2PS __m128 _mm256_mask_cvtpd_ps( __m128 s, __mmask8 k, __m256d a);
+
+
VCVTPD2PS __m128 _mm256_maskz_cvtpd_ps( __mmask8 k, __m256d a);
+
+
VCVTPD2PS __m128 _mm_mask_cvtpd_ps( __m128 s, __mmask8 k, __m128d a);
+
+
VCVTPD2PS __m128 _mm_maskz_cvtpd_ps( __mmask8 k, __m128d a);
+
+
VCVTPD2PS __m128 _mm256_cvtpd_ps (__m256d a)
+
+
CVTPD2PS __m128 _mm_cvtpd_ps (__m128d a)
+
+
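A minimal, hypothetical C sketch of the narrowing conversion (assuming <immintrin.h>; AVX is needed for the 256-bit form, and function names are illustrative):

#include <immintrin.h>

/* Legacy CVTPD2PS: two doubles narrow to two floats in the low
   quadword; bits 127:64 of the destination are zeroed. */
__m128 doubles_to_floats_sse2(__m128d v)
{
    return _mm_cvtpd_ps(v);
}

/* VEX.256 form: four doubles narrow to four floats in an XMM result. */
__m128 doubles_to_floats_avx(__m256d v)
{
    return _mm256_cvtpd_ps(v);
}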

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision, Underflow, Overflow, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + +
#UD If VEX.vvvv != 1111B or EVEX.vvvv != 1111B.
diff --git a/x86/cvtpi2pd.html b/x86/cvtpi2pd.html new file mode 100644 index 0000000..3857282 --- /dev/null +++ b/x86/cvtpi2pd.html @@ -0,0 +1,68 @@ + +CVTPI2PD + — Convert Packed Dword Integers to Packed Double Precision Floating-Point Values

CVTPI2PD + — Convert Packed Dword Integers to Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
66 0F 2A /r CVTPI2PD xmm, mm/m641RMValidValidConvert two packed signed doubleword integers from mm/mem64 to two packed double precision floating-point values in xmm.
+
+

1. Operation is different for different operand sets; see the Description section.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts two packed signed doubleword integers in the source operand (second operand) to two packed double precision floating-point values in the destination operand (first operand).

+

The source operand can be an MMX technology register or a 64-bit memory location. The destination operand is an XMM register. In addition, depending on the operand configuration:

+
    +
  • For operands xmm, mm: the instruction causes a transition from x87 FPU to MMX technology operation (that is, the x87 FPU top-of-stack pointer is set to 0 and the x87 FPU tag word is set to all 0s [valid]). If this instruction is executed while an x87 FPU floating-point exception is pending, the exception is handled before the CVTPI2PD instruction is executed.
  • For operands xmm, m64: the instruction does not cause a transition to MMX technology and does not take x87 FPU exceptions.
+

In 64-bit mode, use of the REX.R prefix permits this instruction to access additional registers (XMM8-XMM15).

+

Operation + ¶ +

+
DEST[63:0] := Convert_Integer_To_Double_Precision_Floating_Point(SRC[31:0]);
+DEST[127:64] := Convert_Integer_To_Double_Precision_Floating_Point(SRC[63:32]);
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
CVTPI2PD __m128d _mm_cvtpi32_pd(__m64 a)
+
+
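A brief, hypothetical C sketch (assuming MMX intrinsics are available on the toolchain; the register form touches MMX state, so it is cleared before returning):

#include <emmintrin.h>

__m128d int32_to_doubles_mmx(__m64 v)
{
    __m128d r = _mm_cvtpi32_pd(v);   /* CVTPI2PD: two int32 -> two doubles */
    _mm_empty();                     /* clear MMX state for later x87 use  */
    return r;
}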

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 23-6, “Exception Conditions for Legacy SIMD/MMX Instructions with XMM and without FP Exception” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

diff --git a/x86/cvtpi2ps.html b/x86/cvtpi2ps.html new file mode 100644 index 0000000..067d65a --- /dev/null +++ b/x86/cvtpi2ps.html @@ -0,0 +1,65 @@ + +CVTPI2PS + — Convert Packed Dword Integers to Packed Single Precision Floating-Point Values

CVTPI2PS + — Convert Packed Dword Integers to Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
NP 0F 2A /r CVTPI2PS xmm, mm/m64RMValidValidConvert two signed doubleword integers from mm/m64 to two single precision floating-point values in xmm.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r, w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts two packed signed doubleword integers in the source operand (second operand) to two packed single precision floating-point values in the destination operand (first operand).

+

The source operand can be an MMX technology register or a 64-bit memory location. The destination operand is an XMM register. The results are stored in the low quadword of the destination operand, and the high quadword remains unchanged. When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register.

+

This instruction causes a transition from x87 FPU to MMX technology operation (that is, the x87 FPU top-of-stack pointer is set to 0 and the x87 FPU tag word is set to all 0s [valid]). If this instruction is executed while an x87 FPU floating-point exception is pending, the exception is handled before the CVTPI2PS instruction is executed.

+

In 64-bit mode, use of the REX.R prefix permits this instruction to access additional registers (XMM8-XMM15).

+

Operation + ¶ +

+
DEST[31:0] := Convert_Integer_To_Single_Precision_Floating_Point(SRC[31:0]);
+DEST[63:32] := Convert_Integer_To_Single_Precision_Floating_Point(SRC[63:32]);
+(* High quadword of destination unchanged *)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
CVTPI2PS __m128 _mm_cvtpi32_ps(__m128 a, __m64 b)
+
+
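A short, hypothetical C sketch (assuming MMX intrinsics are available on the toolchain; names are illustrative). Only the low two floats of the first operand are replaced:

#include <xmmintrin.h>

/* CVTPI2PS overwrites only the two low floats of 'dst'; the high
   quadword passes through unchanged. */
__m128 int32_to_floats_low(__m128 dst, __m64 v)
{
    __m128 r = _mm_cvtpi32_ps(dst, v);
    _mm_empty();                        /* clear MMX state afterwards */
    return r;
}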

SIMD Floating-Point Exceptions + ¶ +

+

Precision.

+

Other Exceptions + ¶ +

+

See Table 23-5, “Exception Conditions for Legacy SIMD/MMX Instructions with XMM and FP Exception” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

diff --git a/x86/cvtps2dq.html b/x86/cvtps2dq.html new file mode 100644 index 0000000..edf2d34 --- /dev/null +++ b/x86/cvtps2dq.html @@ -0,0 +1,212 @@ + +CVTPS2DQ + — Convert Packed Single Precision Floating-Point Values to Packed SignedDoubleword Integer Values

CVTPS2DQ + — Convert Packed Single Precision Floating-Point Values to Packed Signed Doubleword Integer Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 5B /r CVTPS2DQ xmm1, xmm2/m128AV/VSSE2Convert four packed single precision floating-point values from xmm2/mem to four packed signed doubleword values in xmm1.
VEX.128.66.0F.WIG 5B /r VCVTPS2DQ xmm1, xmm2/m128AV/VAVXConvert four packed single precision floating-point values from xmm2/mem to four packed signed doubleword values in xmm1.
VEX.256.66.0F.WIG 5B /r VCVTPS2DQ ymm1, ymm2/m256AV/VAVXConvert eight packed single precision floating-point values from ymm2/mem to eight packed signed doubleword values in ymm1.
EVEX.128.66.0F.W0 5B /r VCVTPS2DQ xmm1 {k1}{z}, xmm2/m128/m32bcstBV/VAVX512VL AVX512FConvert four packed single precision floating-point values from xmm2/m128/m32bcst to four packed signed doubleword values in xmm1 subject to writemask k1.
EVEX.256.66.0F.W0 5B /r VCVTPS2DQ ymm1 {k1}{z}, ymm2/m256/m32bcstBV/VAVX512VL AVX512FConvert eight packed single precision floating-point values from ymm2/m256/m32bcst to eight packed signed doubleword values in ymm1 subject to writemask k1.
EVEX.512.66.0F.W0 5B /r VCVTPS2DQ zmm1 {k1}{z}, zmm2/m512/m32bcst{er}BV/VAVX512FConvert sixteen packed single precision floating-point values from zmm2/m512/m32bcst to sixteen packed signed doubleword values in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts four, eight or sixteen packed single precision floating-point values in the source operand to four, eight or sixteen signed doubleword integers in the destination operand.

+

When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or the embedded rounding control bits. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the indefinite integer value (2^(w-1), where w represents the number of bits in the destination format) is returned.

+

EVEX encoded versions: The source operand is a ZMM register, a 512-bit memory location or a 512-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM register conditionally updated with writemask k1.

+

VEX.256 encoded version: The source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the corresponding ZMM register destination are zeroed.

+

VEX.128 encoded version: The source operand is an XMM register or a 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The source operand is an XMM register or a 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified.

+

VEX.vvvv and EVEX.vvvv are reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+

VCVTPS2DQ (EVEX Encoded Versions) When SRC Operand is a Register + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] :=
+            Convert_Single_Precision_Floating_Point_To_Integer(SRC[i+31:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VCVTPS2DQ (EVEX Encoded Versions) When SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+31:i] :=
+            Convert_Single_Precision_Floating_Point_To_Integer(SRC[31:0])
+                ELSE
+                    DEST[i+31:i] :=
+            Convert_Single_Precision_Floating_Point_To_Integer(SRC[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VCVTPS2DQ (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := Convert_Single_Precision_Floating_Point_To_Integer(SRC[31:0])
+DEST[63:32] := Convert_Single_Precision_Floating_Point_To_Integer(SRC[63:32])
+DEST[95:64] := Convert_Single_Precision_Floating_Point_To_Integer(SRC[95:64])
+DEST[127:96] := Convert_Single_Precision_Floating_Point_To_Integer(SRC[127:96])
+DEST[159:128] := Convert_Single_Precision_Floating_Point_To_Integer(SRC[159:128])
+DEST[191:160] := Convert_Single_Precision_Floating_Point_To_Integer(SRC[191:160])
+DEST[223:192] := Convert_Single_Precision_Floating_Point_To_Integer(SRC[223:192])
+DEST[255:224] := Convert_Single_Precision_Floating_Point_To_Integer(SRC[255:224])
+DEST[MAXVL-1:256] := 0
+
+

VCVTPS2DQ (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := Convert_Single_Precision_Floating_Point_To_Integer(SRC[31:0])
+DEST[63:32] := Convert_Single_Precision_Floating_Point_To_Integer(SRC[63:32])
+DEST[95:64] := Convert_Single_Precision_Floating_Point_To_Integer(SRC[95:64])
+DEST[127:96] := Convert_Single_Precision_Floating_Point_To_Integer(SRC[127:96])
+DEST[MAXVL-1:128] := 0
+
+

CVTPS2DQ (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := Convert_Single_Precision_Floating_Point_To_Integer(SRC[31:0])
+DEST[63:32] := Convert_Single_Precision_Floating_Point_To_Integer(SRC[63:32])
+DEST[95:64] := Convert_Single_Precision_Floating_Point_To_Integer(SRC[95:64])
+DEST[127:96] := Convert_Single_Precision_Floating_Point_To_Integer(SRC[127:96])
+DEST[MAXVL-1:128] (unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTPS2DQ __m512i _mm512_cvtps_epi32( __m512 a);
+
+
VCVTPS2DQ __m512i _mm512_mask_cvtps_epi32( __m512i s, __mmask16 k, __m512 a);
+
+
VCVTPS2DQ __m512i _mm512_maskz_cvtps_epi32( __mmask16 k, __m512 a);
+
+
VCVTPS2DQ __m512i _mm512_cvt_roundps_epi32( __m512 a, int r);
+
+
VCVTPS2DQ __m512i _mm512_mask_cvt_roundps_epi32( __m512i s, __mmask16 k, __m512 a, int r);
+
+
VCVTPS2DQ __m512i _mm512_maskz_cvt_roundps_epi32( __mmask16 k, __m512 a, int r);
+
+
VCVTPS2DQ __m256i _mm256_mask_cvtps_epi32( __m256i s, __mmask8 k, __m256 a);
+
+
VCVTPS2DQ __m256i _mm256_maskz_cvtps_epi32( __mmask8 k, __m256 a);
+
+
VCVTPS2DQ __m128i _mm_mask_cvtps_epi32( __m128i s, __mmask8 k, __m128 a);
+
+
VCVTPS2DQ __m128i _mm_maskz_cvtps_epi32( __mmask8 k, __m128 a);
+
+
VCVTPS2DQ __m256i _mm256_cvtps_epi32 (__m256 a)
+
+
CVTPS2DQ __m128i _mm_cvtps_epi32 (__m128 a)
+
+
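A minimal, hypothetical C sketch (assuming <immintrin.h>, with AVX-512F enabled for the masked form; function names are illustrative):

#include <immintrin.h>

/* Legacy CVTPS2DQ: four floats to four int32, rounded per MXCSR.RC
   (round-to-nearest-even unless the rounding mode was changed). */
__m128i floats_to_int32_sse2(__m128 v)
{
    return _mm_cvtps_epi32(v);
}

/* AVX-512 merging-masking form: lanes with a clear mask bit keep the
   corresponding element of 'src'. */
__m512i floats_to_int32_merge(__m512i src, __mmask16 k, __m512 v)
{
    return _mm512_mask_cvtps_epi32(src, k, v);
}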

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + +
#UD If VEX.vvvv != 1111B or EVEX.vvvv != 1111B.
diff --git a/x86/cvtps2pd.html b/x86/cvtps2pd.html new file mode 100644 index 0000000..8e1bdb7 --- /dev/null +++ b/x86/cvtps2pd.html @@ -0,0 +1,274 @@ + +CVTPS2PD + — Convert Packed Single Precision Floating-Point Values to Packed Double PrecisionFloating-Point Values

CVTPS2PD + — Convert Packed Single Precision Floating-Point Values to Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 5A /r CVTPS2PD xmm1, xmm2/m64AV/VSSE2Convert two packed single precision floating-point values in xmm2/m64 to two packed double precision floating-point values in xmm1.
VEX.128.0F.WIG 5A /r VCVTPS2PD xmm1, xmm2/m64AV/VAVXConvert two packed single precision floating-point values in xmm2/m64 to two packed double precision floating-point values in xmm1.
VEX.256.0F.WIG 5A /r VCVTPS2PD ymm1, xmm2/m128AV/VAVXConvert four packed single precision floating-point values in xmm2/m128 to four packed double precision floating-point values in ymm1.
EVEX.128.0F.W0 5A /r VCVTPS2PD xmm1 {k1}{z}, xmm2/m64/m32bcstBV/VAVX512VL AVX512FConvert two packed single precision floating-point values in xmm2/m64/m32bcst to packed double precision floating-point values in xmm1 with writemask k1.
EVEX.256.0F.W0 5A /r VCVTPS2PD ymm1 {k1}{z}, xmm2/m128/m32bcstBV/VAVX512VL AVX512FConvert four packed single precision floating-point values in xmm2/m128/m32bcst to packed double precision floating-point values in ymm1 with writemask k1.
EVEX.512.0F.W0 5A /r VCVTPS2PD zmm1 {k1}{z}, ymm2/m256/m32bcst{sae}BV/VAVX512FConvert eight packed single precision floating-point values in ymm2/m256/m32bcst to eight packed double precision floating-point values in zmm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BHalfModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts two, four or eight packed single precision floating-point values in the source operand (second operand) to two, four or eight packed double precision floating-point values in the destination operand (first operand).

+

EVEX encoded versions: The source operand is a YMM/XMM/XMM (low 64-bits) register, a 256/128/64-bit memory location or a 256/128/64-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

VEX.256 encoded version: The source operand is an XMM register or a 128-bit memory location. The destination operand is a YMM register. Bits (MAXVL-1:256) of the corresponding destination ZMM register are zeroed.

+

VEX.128 encoded version: The source operand is an XMM register or a 64-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The source operand is an XMM register or a 64-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified.

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b otherwise instructions will #UD.

+
Figure 3-14. CVTPS2PD (VEX.256 encoded version)
+

Operation + ¶ +

+

VCVTPS2PD (EVEX Encoded Versions) When SRC Operand is a Register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] :=
+            Convert_Single_Precision_To_Double_Precision_Floating_Point(SRC[k+31:k])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VCVTPS2PD (EVEX Encoded Versions) When SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+63:i] :=
+            Convert_Single_Precision_To_Double_Precision_Floating_Point(SRC[31:0])
+                ELSE
+                    DEST[i+63:i] :=
+            Convert_Single_Precision_To_Double_Precision_Floating_Point(SRC[k+31:k])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VCVTPS2PD (VEX.256 Encoded Version) + ¶ +

+
DEST[63:0] := Convert_Single_Precision_To_Double_Precision_Floating_Point(SRC[31:0])
+DEST[127:64] := Convert_Single_Precision_To_Double_Precision_Floating_Point(SRC[63:32])
+DEST[191:128] := Convert_Single_Precision_To_Double_Precision_Floating_Point(SRC[95:64])
+DEST[255:192] := Convert_Single_Precision_To_Double_Precision_Floating_Point(SRC[127:96])
+DEST[MAXVL-1:256] := 0
+
+

VCVTPS2PD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := Convert_Single_Precision_To_Double_Precision_Floating_Point(SRC[31:0])
+DEST[127:64] := Convert_Single_Precision_To_Double_Precision_Floating_Point(SRC[63:32])
+DEST[MAXVL-1:128] := 0
+
+

CVTPS2PD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := Convert_Single_Precision_To_Double_Precision_Floating_Point(SRC[31:0])
+DEST[127:64] := Convert_Single_Precision_To_Double_Precision_Floating_Point(SRC[63:32])
+DEST[MAXVL-1:128] (unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTPS2PD __m512d _mm512_cvtps_pd( __m256 a);
+
+
VCVTPS2PD __m512d _mm512_mask_cvtps_pd( __m512d s, __mmask8 k, __m256 a);
+
+
VCVTPS2PD __m512d _mm512_maskz_cvtps_pd( __mmask8 k, __m256 a);
+
+
VCVTPS2PD __m512d _mm512_cvt_roundps_pd( __m256 a, int sae);
+
+
VCVTPS2PD __m512d _mm512_mask_cvt_roundps_pd( __m512d s, __mmask8 k, __m256 a, int sae);
+
+
VCVTPS2PD __m512d _mm512_maskz_cvt_roundps_pd( __mmask8 k, __m256 a, int sae);
+
+
VCVTPS2PD __m256d _mm256_mask_cvtps_pd( __m256d s, __mmask8 k, __m128 a);
+
+
VCVTPS2PD __m256d _mm256_maskz_cvtps_pd( __mmask8 k, __m128 a);
+
+
VCVTPS2PD __m128d _mm_mask_cvtps_pd( __m128d s, __mmask8 k, __m128 a);
+
+
VCVTPS2PD __m128d _mm_maskz_cvtps_pd( __mmask8 k, __m128 a);
+
+
VCVTPS2PD __m256d _mm256_cvtps_pd (__m128 a)
+
+
CVTPS2PD __m128d _mm_cvtps_pd (__m128 a)
+
+
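A short, hypothetical C sketch of the widening conversion (assuming <immintrin.h>; AVX is needed for the 256-bit form, and names are illustrative):

#include <immintrin.h>

/* Legacy CVTPS2PD: the two low floats widen to two doubles. */
__m128d floats_to_doubles_sse2(__m128 v)
{
    return _mm_cvtps_pd(v);
}

/* VEX.256 form: four floats in an XMM source widen to four doubles. */
__m256d floats_to_doubles_avx(__m128 v)
{
    return _mm256_cvtps_pd(v);
}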

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

+

Additionally:

+ + + +
#UD If VEX.vvvv != 1111B or EVEX.vvvv != 1111B.
diff --git a/x86/cvtps2pi.html b/x86/cvtps2pi.html new file mode 100644 index 0000000..6e068c5 --- /dev/null +++ b/x86/cvtps2pi.html @@ -0,0 +1,64 @@ + +CVTPS2PI + — Convert Packed Single Precision Floating-Point Values to Packed Dword Integers

CVTPS2PI + — Convert Packed Single Precision Floating-Point Values to Packed Dword Integers

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
NP 0F 2D /r CVTPS2PI mm, xmm/m64RMValidValidConvert two packed single precision floating-point values from xmm/m64 to two packed signed doubleword integers in mm.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts two packed single precision floating-point values in the source operand (second operand) to two packed signed doubleword integers in the destination operand (first operand).

+

The source operand can be an XMM register or a 64-bit memory location. The destination operand is an MMX technology register. When the source operand is an XMM register, the two single precision floating-point values are contained in the low quadword of the register. When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register. If a converted result is larger than the maximum signed doubleword integer, the floating-point invalid exception is raised, and if this exception is masked, the indefinite integer value (80000000H) is returned.

+

CVTPS2PI causes a transition from x87 FPU to MMX technology operation (that is, the x87 FPU top-of-stack pointer is set to 0 and the x87 FPU tag word is set to all 0s [valid]). If this instruction is executed while an x87 FPU floating-point exception is pending, the exception is handled before the CVTPS2PI instruction is executed.

+

In 64-bit mode, use of the REX.R prefix permits this instruction to access additional registers (XMM8-XMM15).

+

Operation + ¶ +

+
DEST[31:0] := Convert_Single_Precision_Floating_Point_To_Integer(SRC[31:0]);
+DEST[63:32] := Convert_Single_Precision_Floating_Point_To_Integer(SRC[63:32]);
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
CVTPS2PI __m64 _mm_cvtps_pi32(__m128 a)
+
+
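A brief, hypothetical C sketch (assuming MMX intrinsics are available on the toolchain; names are illustrative):

#include <xmmintrin.h>

__m64 floats_to_int32_mmx(__m128 v)
{
    __m64 r = _mm_cvtps_pi32(v);   /* converts only the two low floats */
    _mm_empty();                   /* clear MMX state afterwards       */
    return r;
}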

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

See Table 23-5, “Exception Conditions for Legacy SIMD/MMX Instructions with XMM and FP Exception,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

diff --git a/x86/cvtsd2si.html b/x86/cvtsd2si.html new file mode 100644 index 0000000..7328db6 --- /dev/null +++ b/x86/cvtsd2si.html @@ -0,0 +1,146 @@ + +CVTSD2SI + — Convert Scalar Double Precision Floating-Point Value to Doubleword Integer

CVTSD2SI + — Convert Scalar Double Precision Floating-Point Value to Doubleword Integer

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F2 0F 2D /r CVTSD2SI r32, xmm1/m64AV/VSSE2Convert one double precision floating-point value from xmm1/m64 to one signed doubleword integer r32.
F2 REX.W 0F 2D /r CVTSD2SI r64, xmm1/m64AV/N.E.SSE2Convert one double precision floating-point value from xmm1/m64 to one signed quadword integer sign-extended into r64.
VEX.LIG.F2.0F.W0 2D /r 1 VCVTSD2SI r32, xmm1/m64AV/VAVXConvert one double precision floating-point value from xmm1/m64 to one signed doubleword integer r32.
VEX.LIG.F2.0F.W1 2D /r 1 VCVTSD2SI r64, xmm1/m64AV/N.E.2AVXConvert one double precision floating-point value from xmm1/m64 to one signed quadword integer sign-extended into r64.
EVEX.LLIG.F2.0F.W0 2D /r VCVTSD2SI r32, xmm1/m64{er}BV/VAVX512FConvert one double precision floating-point value from xmm1/m64 to one signed doubleword integer r32.
EVEX.LLIG.F2.0F.W1 2D /r VCVTSD2SI r64, xmm1/m64{er}BV/N.E.2AVX512FConvert one double precision floating-point value from xmm1/m64 to one signed quadword integer sign-extended into r64.
+
+

1. Software should ensure VCVTSD2SI is encoded with VEX.L=0. Encoding VCVTSD2SI with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

2. VEX.W1/EVEX.W1 in non-64-bit mode is ignored; the instruction behaves as if the W0 version is used.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BTuple1 FixedModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts a double precision floating-point value in the source operand (the second operand) to a signed double-word integer in the destination operand (first operand). The source operand can be an XMM register or a 64-bit memory location. The destination operand is a general-purpose register. When the source operand is an XMM register, the double precision floating-point value is contained in the low quadword of the register.

+

When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register.

+

If a converted result exceeds the range limits of signed doubleword integer (in non-64-bit modes or 64-bit mode with REX.W/VEX.W/EVEX.W=0), the floating-point invalid exception is raised, and if this exception is masked, the indefinite integer value (80000000H) is returned.

+

If a converted result exceeds the range limits of signed quadword integer (in 64-bit mode and REX.W/VEX.W/EVEX.W = 1), the floating-point invalid exception is raised, and if this exception is masked, the indefinite integer value (80000000_00000000H) is returned.

+

Legacy SSE instruction: Use of the REX.W prefix promotes the instruction to produce 64-bit data in 64-bit mode. See the summary chart at the beginning of this section for encoding data and limits.

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b, otherwise instructions will #UD.

+

Software should ensure VCVTSD2SI is encoded with VEX.L=0. Encoding VCVTSD2SI with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+

VCVTSD2SI (EVEX Encoded Version) + ¶ +

+
IF SRC *is register* AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF 64-Bit Mode and OperandSize = 64
+    THEN DEST[63:0] := Convert_Double_Precision_Floating_Point_To_Integer(SRC[63:0]);
+    ELSE DEST[31:0] := Convert_Double_Precision_Floating_Point_To_Integer(SRC[63:0]);
+FI
+
+

(V)CVTSD2SI + ¶ +

+
IF 64-Bit Mode and OperandSize = 64
+THEN
+    DEST[63:0] := Convert_Double_Precision_Floating_Point_To_Integer(SRC[63:0]);
+ELSE
+    DEST[31:0] := Convert_Double_Precision_Floating_Point_To_Integer(SRC[63:0]);
+FI;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTSD2SI int _mm_cvtsd_i32(__m128d);
+
+
VCVTSD2SI int _mm_cvt_roundsd_i32(__m128d, int r);
+
+
VCVTSD2SI __int64 _mm_cvtsd_i64(__m128d);
+
+
VCVTSD2SI __int64 _mm_cvt_roundsd_i64(__m128d, int r);
+
+
CVTSD2SI __int64 _mm_cvtsd_si64(__m128d);
+
+
CVTSD2SI int _mm_cvtsd_si32(__m128d a)
+
+
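A minimal, hypothetical C sketch (assuming <immintrin.h>; the 64-bit form is only available in 64-bit builds, and names are illustrative):

#include <immintrin.h>

/* CVTSD2SI: the low double of the source, rounded per MXCSR.RC. */
int low_double_to_i32(__m128d v)
{
    return _mm_cvtsd_si32(v);
}

/* REX.W form, 64-bit mode only. */
long long low_double_to_i64(__m128d v)
{
    return _mm_cvtsd_si64(v);
}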

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-48, “Type E3NF Class Exception Conditions.”

+

Additionally:

+ + + +
#UD If VEX.vvvv != 1111B or EVEX.vvvv != 1111B.
diff --git a/x86/cvtsd2ss.html b/x86/cvtsd2ss.html new file mode 100644 index 0000000..2644f0e --- /dev/null +++ b/x86/cvtsd2ss.html @@ -0,0 +1,136 @@ + +CVTSD2SS + — Convert Scalar Double Precision Floating-Point Value to Scalar Single PrecisionFloating-Point Value

CVTSD2SS + — Convert Scalar Double Precision Floating-Point Value to Scalar Single Precision Floating-Point Value

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F2 0F 5A /r CVTSD2SS xmm1, xmm2/m64AV/VSSE2Convert one double precision floating-point value in xmm2/m64 to one single precision floating-point value in xmm1.
VEX.LIG.F2.0F.WIG 5A /r VCVTSD2SS xmm1,xmm2, xmm3/m64BV/VAVXConvert one double precision floating-point value in xmm3/m64 to one single precision floating-point value and merge with high bits in xmm2.
EVEX.LLIG.F2.0F.W1 5A /r VCVTSD2SS xmm1 {k1}{z}, xmm2, xmm3/m64{er}CV/VAVX512FConvert one double precision floating-point value in xmm3/m64 to one single precision floating-point value and merge with high bits in xmm2 under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CTuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Converts a double precision floating-point value in the “convert-from” source operand (the second operand in SSE2 version, otherwise the third operand) to a single precision floating-point value in the destination operand.

+

When the “convert-from” operand is an XMM register, the double precision floating-point value is contained in the low quadword of the register. The result is stored in the low doubleword of the destination operand. When the conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register.

+

128-bit Legacy SSE version: The “convert-from” source operand (the second operand) is an XMM register or memory location. Bits (MAXVL-1:32) of the corresponding destination register remain unchanged. The destination operand is an XMM register.

+

VEX.128 and EVEX encoded versions: The “convert-from” source operand (the third operand) can be an XMM register or a 64-bit memory location. The first source and destination operands are XMM registers. Bits (127:32) of the XMM register destination are copied from the corresponding bits in the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.

+

EVEX encoded version: the converted result is written to the low doubleword element of the destination under the writemask.

+

Software should ensure VCVTSD2SS is encoded with VEX.L=0. Encoding VCVTSD2SS with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+

VCVTSD2SS (EVEX Encoded Version) + ¶ +

+
IF (SRC2 *is register*) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[31:0] := Convert_Double_Precision_To_Single_Precision_Floating_Point(SRC2[63:0]);
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[31:0] := 0
+        FI;
+FI;
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VCVTSD2SS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := Convert_Double_Precision_To_Single_Precision_Floating_Point(SRC2[63:0]);
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

CVTSD2SS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := Convert_Double_Precision_To_Single_Precision_Floating_Point(SRC[63:0]);
+(* DEST[MAXVL-1:32] Unmodified *)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTSD2SS __m128 _mm_mask_cvtsd_ss(__m128 s, __mmask8 k, __m128 a, __m128d b);
+
+
VCVTSD2SS __m128 _mm_maskz_cvtsd_ss( __mmask8 k, __m128 a,__m128d b);
+
+
VCVTSD2SS __m128 _mm_cvt_roundsd_ss(__m128 a, __m128d b, int r);
+
+
VCVTSD2SS __m128 _mm_mask_cvt_roundsd_ss(__m128 s, __mmask8 k, __m128 a, __m128d b, int r);
+
+
VCVTSD2SS __m128 _mm_maskz_cvt_roundsd_ss( __mmask8 k, __m128 a,__m128d b, int r);
+
+
CVTSD2SS __m128 _mm_cvtsd_ss(__m128 a, __m128d b)
+
+
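A short, hypothetical C sketch of the legacy form (assuming <immintrin.h>; the name is illustrative):

#include <immintrin.h>

/* CVTSD2SS: the low double of 'b' narrows into the low float of the
   result; the upper three floats are taken from 'a'. */
__m128 narrow_low_double(__m128 a, __m128d b)
{
    return _mm_cvtsd_ss(a, b);
}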

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/cvtsi2sd.html b/x86/cvtsi2sd.html new file mode 100644 index 0000000..a4f13a4 --- /dev/null +++ b/x86/cvtsi2sd.html @@ -0,0 +1,162 @@ + +CVTSI2SD + — Convert Doubleword Integer to Scalar Double Precision Floating-Point Value

CVTSI2SD + — Convert Doubleword Integer to Scalar Double Precision Floating-Point Value

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F2 0F 2A /r CVTSI2SD xmm1, r32/m32AV/VSSE2Convert one signed doubleword integer from r32/m32 to one double precision floating-point value in xmm1.
F2 REX.W 0F 2A /r CVTSI2SD xmm1, r/m64AV/N.E.SSE2Convert one signed quadword integer from r/m64 to one double precision floating-point value in xmm1.
VEX.LIG.F2.0F.W0 2A /r VCVTSI2SD xmm1, xmm2, r/m32BV/VAVXConvert one signed doubleword integer from r/m32 to one double precision floating-point value in xmm1.
VEX.LIG.F2.0F.W1 2A /r VCVTSI2SD xmm1, xmm2, r/m64BV/N.E.1AVXConvert one signed quadword integer from r/m64 to one double precision floating-point value in xmm1.
EVEX.LLIG.F2.0F.W0 2A /r VCVTSI2SD xmm1, xmm2, r/m32CV/VAVX512FConvert one signed doubleword integer from r/m32 to one double precision floating-point value in xmm1.
EVEX.LLIG.F2.0F.W1 2A /r VCVTSI2SD xmm1, xmm2, r/m64{er}CV/N.E.1AVX512FConvert one signed quadword integer from r/m64 to one double precision floating-point value in xmm1.
+
+

1. VEX.W1/EVEX.W1 in non-64-bit mode is ignored; the instruction behaves as if the W0 version is used.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CTuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Converts a signed doubleword integer (or signed quadword integer if operand size is 64 bits) in the “convert-from” source operand to a double precision floating-point value in the destination operand. The result is stored in the low quadword of the destination operand, and the high quadword left unchanged. When conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register.

+

The second source operand can be a general-purpose register or a 32/64-bit memory location. The first source and destination operands are XMM registers.

+

128-bit Legacy SSE version: Use of the REX.W prefix promotes the instruction to 64-bit operands. The “convert-from” source operand (the second operand) is a general-purpose register or memory location. The destination is an XMM register. Bits (MAXVL-1:64) of the corresponding destination register remain unchanged.

+

VEX.128 and EVEX encoded versions: The “convert-from” source operand (the third operand) can be a general-purpose register or a memory location. The first source and destination operands are XMM registers. Bits (127:64) of the XMM register destination are copied from the corresponding bits in the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.

+

EVEX.W0 version: attempt to encode this instruction with EVEX embedded rounding is ignored.

+

VEX.W1 and EVEX.W1 versions: promotes the instruction to use 64-bit input value in 64-bit mode.

+

Software should ensure VCVTSI2SD is encoded with VEX.L=0. Encoding VCVTSI2SD with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+

VCVTSI2SD (EVEX Encoded Version) + ¶ +

+
IF (SRC2 *is register*) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF 64-Bit Mode And OperandSize = 64
+THEN
+    DEST[63:0] := Convert_Integer_To_Double_Precision_Floating_Point(SRC2[63:0]);
+ELSE
+    DEST[63:0] := Convert_Integer_To_Double_Precision_Floating_Point(SRC2[31:0]);
+FI;
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VCVTSI2SD (VEX.128 Encoded Version) + ¶ +

+
IF 64-Bit Mode And OperandSize = 64
+THEN
+    DEST[63:0] := Convert_Integer_To_Double_Precision_Floating_Point(SRC2[63:0]);
+ELSE
+    DEST[63:0] := Convert_Integer_To_Double_Precision_Floating_Point(SRC2[31:0]);
+FI;
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

CVTSI2SD + ¶ +

+
IF 64-Bit Mode And OperandSize = 64
+THEN
+    DEST[63:0] := Convert_Integer_To_Double_Precision_Floating_Point(SRC[63:0]);
+ELSE
+    DEST[63:0] := Convert_Integer_To_Double_Precision_Floating_Point(SRC[31:0]);
+FI;
+DEST[MAXVL-1:64] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTSI2SD __m128d _mm_cvti32_sd(__m128d s, int a);
+
+
VCVTSI2SD __m128d _mm_cvti64_sd(__m128d s, __int64 a);
+
+
VCVTSI2SD __m128d _mm_cvt_roundi64_sd(__m128d s, __int64 a, int r);
+
+
CVTSI2SD __m128d _mm_cvtsi64_sd(__m128d s, __int64 a);
+
+
CVTSI2SD __m128d _mm_cvtsi32_sd(__m128d a, int b)
+
+
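A minimal, hypothetical C sketch of the legacy doubleword form (assuming <immintrin.h>; the name is illustrative):

#include <immintrin.h>

/* CVTSI2SD: a signed 32-bit integer becomes the low double of the
   result; the high quadword is taken from 'a'. */
__m128d i32_to_low_double(__m128d a, int x)
{
    return _mm_cvtsi32_sd(a, x);
}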

SIMD Floating-Point Exceptions + ¶ +

+

Precision.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions,” if W1; else see Table 2-22, “Type 5 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-48, “Type E3NF Class Exception Conditions,” if W1; else see Table 2-59, “Type E10NF Class Exception Conditions.”

diff --git a/x86/cvtsi2ss.html b/x86/cvtsi2ss.html new file mode 100644 index 0000000..5c6db61 --- /dev/null +++ b/x86/cvtsi2ss.html @@ -0,0 +1,162 @@ + +CVTSI2SS + — Convert Doubleword Integer to Scalar Single Precision Floating-Point Value

CVTSI2SS + — Convert Doubleword Integer to Scalar Single Precision Floating-Point Value

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 2A /r CVTSI2SS xmm1, r/m32AV/VSSEConvert one signed doubleword integer from r/m32 to one single precision floating-point value in xmm1.
F3 REX.W 0F 2A /r CVTSI2SS xmm1, r/m64AV/N.E.SSEConvert one signed quadword integer from r/m64 to one single precision floating-point value in xmm1.
VEX.LIG.F3.0F.W0 2A /r VCVTSI2SS xmm1, xmm2, r/m32BV/VAVXConvert one signed doubleword integer from r/m32 to one single precision floating-point value in xmm1.
VEX.LIG.F3.0F.W1 2A /r VCVTSI2SS xmm1, xmm2, r/m64BV/N.E.1AVXConvert one signed quadword integer from r/m64 to one single precision floating-point value in xmm1.
EVEX.LLIG.F3.0F.W0 2A /r VCVTSI2SS xmm1, xmm2, r/m32{er}CV/VAVX512FConvert one signed doubleword integer from r/m32 to one single precision floating-point value in xmm1.
EVEX.LLIG.F3.0F.W1 2A /r VCVTSI2SS xmm1, xmm2, r/m64{er}CV/N.E.1AVX512FConvert one signed quadword integer from r/m64 to one single precision floating-point value in xmm1.
+
+

1. VEX.W1/EVEX.W1 in non-64-bit mode is ignored; the instruction behaves as if the W0 version is used.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CTuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Converts a signed doubleword integer (or signed quadword integer if operand size is 64 bits) in the “convert-from” source operand to a single precision floating-point value in the destination operand (first operand). The “convert-from” source operand can be a general-purpose register or a memory location. The destination operand is an XMM register. The result is stored in the low doubleword of the destination operand, and the upper three doublewords are left unchanged. When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or the embedded rounding control bits.

+

128-bit Legacy SSE version: In 64-bit mode, use of the REX.W prefix promotes the instruction to use a 64-bit input value. The “convert-from” source operand (the second operand) is a general-purpose register or memory location. Bits (MAXVL-1:32) of the corresponding destination register remain unchanged.

+

VEX.128 and EVEX encoded versions: The “convert-from” source operand (the third operand) can be a general-purpose register or a memory location. The first source and destination operands are XMM registers. Bits (127:32) of the XMM register destination are copied from corresponding bits in the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.

+

EVEX encoded version: The converted result is written to the low doubleword element of the destination under the writemask.

+

Software should ensure VCVTSI2SS is encoded with VEX.L=0. Encoding VCVTSI2SS with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+

VCVTSI2SS (EVEX Encoded Version) + ¶ +

+
IF (SRC2 *is register*) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF 64-Bit Mode And OperandSize = 64
+THEN
+    DEST[31:0] := Convert_Integer_To_Single_Precision_Floating_Point(SRC2[63:0]);
+ELSE
+    DEST[31:0] := Convert_Integer_To_Single_Precision_Floating_Point(SRC2[31:0]);
+FI;
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VCVTSI2SS (VEX.128 Encoded Version) + ¶ +

+
IF 64-Bit Mode And OperandSize = 64
+THEN
+    DEST[31:0] := Convert_Integer_To_Single_Precision_Floating_Point(SRC2[63:0]);
+ELSE
+    DEST[31:0] := Convert_Integer_To_Single_Precision_Floating_Point(SRC2[31:0]);
+FI;
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

CVTSI2SS (128-bit Legacy SSE Version) + ¶ +

+
IF 64-Bit Mode And OperandSize = 64
+THEN
+    DEST[31:0] := Convert_Integer_To_Single_Precision_Floating_Point(SRC[63:0]);
+ELSE
+    DEST[31:0] := Convert_Integer_To_Single_Precision_Floating_Point(SRC[31:0]);
+FI;
+DEST[MAXVL-1:32] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTSI2SS __m128 _mm_cvti32_ss(__m128 s, int a);
+
+
VCVTSI2SS __m128 _mm_cvt_roundi32_ss(__m128 s, int a, int r);
+
+
VCVTSI2SS __m128 _mm_cvti64_ss(__m128 s, __int64 a);
+
+
VCVTSI2SS __m128 _mm_cvt_roundi64_ss(__m128 s, __int64 a, int r);
+
+
CVTSI2SS __m128 _mm_cvtsi64_ss(__m128 s, __int64 a);
+
+
CVTSI2SS __m128 _mm_cvtsi32_ss(__m128 a, int b);
+
+
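A short C sketch (assuming an SSE-capable compiler) of the legacy _mm_cvtsi32_ss form above; 16777217 (2^24 + 1) is not representable as a float, so the stored result reflects MXCSR rounding:

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128 dst = _mm_setzero_ps();
    dst = _mm_cvtsi32_ss(dst, 16777217);   /* low lane := (float)16777217, rounded per MXCSR.RC */
    printf("%.1f\n", _mm_cvtss_f32(dst));  /* 16777216.0 under the default round-to-nearest */
    return 0;
}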

SIMD Floating-Point Exceptions + ¶ +

+

Precision.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-48, “Type E3NF Class Exception Conditions.”

diff --git a/x86/cvtss2sd.html b/x86/cvtss2sd.html new file mode 100644 index 0000000..5ba249e --- /dev/null +++ b/x86/cvtss2sd.html @@ -0,0 +1,128 @@ + +CVTSS2SD + — Convert Scalar Single Precision Floating-Point Value to Scalar Double PrecisionFloating-Point Value

CVTSS2SD + — Convert Scalar Single Precision Floating-Point Value to Scalar Double Precision Floating-Point Value

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 5A /r CVTSS2SD xmm1, xmm2/m32AV/VSSE2Convert one single precision floating-point value in xmm2/m32 to one double precision floating-point value in xmm1.
VEX.LIG.F3.0F.WIG 5A /r VCVTSS2SD xmm1, xmm2, xmm3/m32BV/VAVXConvert one single precision floating-point value in xmm3/m32 to one double precision floating-point value and merge with high bits of xmm2.
EVEX.LLIG.F3.0F.W0 5A /r VCVTSS2SD xmm1 {k1}{z}, xmm2, xmm3/m32{sae}CV/VAVX512FConvert one single precision floating-point value in xmm3/m32 to one double precision floating-point value and merge with high bits of xmm2 under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CTuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Converts a single precision floating-point value in the “convert-from” source operand to a double precision floating-point value in the destination operand. When the “convert-from” source operand is an XMM register, the single precision floating-point value is contained in the low doubleword of the register. The result is stored in the low quadword of the destination operand.

+

128-bit Legacy SSE version: The “convert-from” source operand (the second operand) is an XMM register or memory location. Bits (MAXVL-1:64) of the corresponding destination register remain unchanged. The destination operand is an XMM register.

+

VEX.128 and EVEX encoded versions: The “convert-from” source operand (the third operand) can be an XMM register or a 32-bit memory location. The first source and destination operands are XMM registers. Bits (127:64) of the XMM register destination are copied from the corresponding bits in the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.

+

Software should ensure VCVTSS2SD is encoded with VEX.L=0. Encoding VCVTSS2SD with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+

VCVTSS2SD (EVEX Encoded Version) + ¶ +

+
IF k1[0] or *no writemask*
+    THEN DEST[63:0] := Convert_Single_Precision_To_Double_Precision_Floating_Point(SRC2[31:0]);
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[63:0] := 0
+        FI;
+FI;
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VCVTSS2SD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := Convert_Single_Precision_To_Double_Precision_Floating_Point(SRC2[31:0])
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

CVTSS2SD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := Convert_Single_Precision_To_Double_Precision_Floating_Point(SRC[31:0]);
+DEST[MAXVL-1:64] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTSS2SD __m128d _mm_cvt_roundss_sd(__m128d a, __m128 b, int r);
+
+
VCVTSS2SD __m128d _mm_mask_cvt_roundss_sd(__m128d s, __mmask8 m, __m128d a,__m128 b, int r);
+
+
VCVTSS2SD __m128d _mm_maskz_cvt_roundss_sd(__mmask8 k, __m128d a, __m128 b, int r);
+
+
VCVTSS2SD __m128d _mm_mask_cvtss_sd(__m128d s, __mmask8 m, __m128d a,__m128 b);
+
+
VCVTSS2SD __m128d _mm_maskz_cvtss_sd(__mmask8 m, __m128d a,__m128 b);
+
+
CVTSS2SD __m128d _mm_cvtss_sd(__m128d a, __m128 b);
+
+
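As a usage sketch (assuming SSE2), the legacy _mm_cvtss_sd intrinsic widens the low float of its second argument while the upper half of the result comes from the first argument:

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128d a = _mm_set_pd(2.0, 99.0);   /* the high lane (2.0) is merged into the result */
    __m128  b = _mm_set_ss(1.5f);        /* low float to be converted */
    __m128d r = _mm_cvtss_sd(a, b);
    printf("low=%f high=%f\n",
           _mm_cvtsd_f64(r),                       /* 1.500000 */
           _mm_cvtsd_f64(_mm_unpackhi_pd(r, r)));  /* 2.000000 */
    return 0;
}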

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/cvtss2si.html b/x86/cvtss2si.html new file mode 100644 index 0000000..dcf3de4 --- /dev/null +++ b/x86/cvtss2si.html @@ -0,0 +1,142 @@ + +CVTSS2SI + — Convert Scalar Single Precision Floating-Point Value to Doubleword Integer

CVTSS2SI + — Convert Scalar Single Precision Floating-Point Value to Doubleword Integer

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 2D /r CVTSS2SI r32, xmm1/m32AV/VSSEConvert one single precision floating-point value from xmm1/m32 to one signed doubleword integer in r32.
F3 REX.W 0F 2D /r CVTSS2SI r64, xmm1/m32AV/N.E.SSEConvert one single precision floating-point value from xmm1/m32 to one signed quadword integer in r64.
VEX.LIG.F3.0F.W0 2D /r 1 VCVTSS2SI r32, xmm1/m32AV/VAVXConvert one single precision floating-point value from xmm1/m32 to one signed doubleword integer in r32.
VEX.LIG.F3.0F.W1 2D /r 1 VCVTSS2SI r64, xmm1/m32AV/N.E.2AVXConvert one single precision floating-point value from xmm1/m32 to one signed quadword integer in r64.
EVEX.LLIG.F3.0F.W0 2D /r VCVTSS2SI r32, xmm1/m32{er}BV/VAVX512FConvert one single precision floating-point value from xmm1/m32 to one signed doubleword integer in r32.
EVEX.LLIG.F3.0F.W1 2D /r VCVTSS2SI r64, xmm1/m32{er}BV/N.E.2AVX512FConvert one single precision floating-point value from xmm1/m32 to one signed quadword integer in r64.
+
+

1. Software should ensure VCVTSS2SI is encoded with VEX.L=0. Encoding VCVTSS2SI with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

2. VEX.W1/EVEX.W1 in non-64-bit mode is ignored; the instruction behaves as if the W0 version is used.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BTuple1 FixedModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts a single precision floating-point value in the source operand (the second operand) to a signed doubleword integer (or signed quadword integer if operand size is 64 bits) in the destination operand (the first operand). The source operand can be an XMM register or a memory location. The destination operand is a general-purpose register. When the source operand is an XMM register, the single precision floating-point value is contained in the low doubleword of the register.

+

When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or the embedded rounding control bits. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the indefinite integer value (2^(w−1), where w represents the number of bits in the destination format) is returned.

+

Legacy SSE instructions: In 64-bit mode, use of the REX.W prefix promotes the instruction to produce 64-bit data. See the summary chart at the beginning of this section for encoding data and limits.

+

VEX.W1 and EVEX.W1 versions promote the instruction to produce 64-bit data in 64-bit mode.

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b, otherwise instructions will #UD.

+

Software should ensure VCVTSS2SI is encoded with VEX.L=0. Encoding VCVTSS2SI with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+

VCVTSS2SI (EVEX Encoded Version) + ¶ +

+
IF (SRC *is register*) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF 64-bit Mode and OperandSize = 64
+THEN
+    DEST[63:0] := Convert_Single_Precision_Floating_Point_To_Integer(SRC[31:0]);
+ELSE
+    DEST[31:0] := Convert_Single_Precision_Floating_Point_To_Integer(SRC[31:0]);
+FI;
+
+

(V)CVTSS2SI (Legacy and VEX.128 Encoded Version) + ¶ +

+
IF 64-bit Mode and OperandSize = 64
+THEN
+    DEST[63:0] := Convert_Single_Precision_Floating_Point_To_Integer(SRC[31:0]);
+ELSE
+    DEST[31:0] := Convert_Single_Precision_Floating_Point_To_Integer(SRC[31:0]);
+FI;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTSS2SI int _mm_cvtss_i32( __m128 a);
+
+
VCVTSS2SI int _mm_cvt_roundss_i32( __m128 a, int r);
+
+
VCVTSS2SI __int64 _mm_cvtss_i64( __m128 a);
+
+
VCVTSS2SI __int64 _mm_cvt_roundss_i64( __m128 a, int r);
+
+
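A brief C illustration of the MXCSR-controlled rounding described above, using the classic SSE intrinsic _mm_cvtss_si32 (not in the AVX-512 list above, but long available in the intrinsics headers):

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    printf("%d\n", _mm_cvtss_si32(_mm_set_ss(2.5f)));  /* 2: default round-to-nearest sends ties to even */
    printf("%d\n", _mm_cvtss_si32(_mm_set_ss(3.5f)));  /* 4 */
    return 0;
}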

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.vvvv != 1111B.
+

EVEX-encoded instructions, see Table 2-48, “Type E3NF Class Exception Conditions.”

diff --git a/x86/cvttpd2dq.html b/x86/cvttpd2dq.html new file mode 100644 index 0000000..1982358 --- /dev/null +++ b/x86/cvttpd2dq.html @@ -0,0 +1,282 @@ + +CVTTPD2DQ + — Convert with Truncation Packed Double Precision Floating-Point Values toPacked Doubleword Integers

CVTTPD2DQ + — Convert with Truncation Packed Double Precision Floating-Point Values to Packed Doubleword Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F E6 /r CVTTPD2DQ xmm1, xmm2/m128AV/VSSE2Convert two packed double precision floating-point values in xmm2/mem to two signed doubleword integers in xmm1 using truncation.
VEX.128.66.0F.WIG E6 /r VCVTTPD2DQ xmm1, xmm2/m128AV/VAVXConvert two packed double precision floating-point values in xmm2/mem to two signed doubleword integers in xmm1 using truncation.
VEX.256.66.0F.WIG E6 /r VCVTTPD2DQ xmm1, ymm2/m256AV/VAVXConvert four packed double precision floating-point values in ymm2/mem to four signed doubleword integers in xmm1 using truncation.
EVEX.128.66.0F.W1 E6 /r VCVTTPD2DQ xmm1 {k1}{z}, xmm2/m128/m64bcstBV/VAVX512VL AVX512FConvert two packed double precision floating-point values in xmm2/m128/m64bcst to two signed doubleword integers in xmm1 using truncation subject to writemask k1.
EVEX.256.66.0F.W1 E6 /r VCVTTPD2DQ xmm1 {k1}{z}, ymm2/m256/m64bcstBV/VAVX512VL AVX512FConvert four packed double precision floating-point values in ymm2/m256/m64bcst to four signed doubleword integers in xmm1 using truncation subject to writemask k1.
EVEX.512.66.0F.W1 E6 /r VCVTTPD2DQ ymm1 {k1}{z}, zmm2/m512/m64bcst{sae}BV/VAVX512FConvert eight packed double precision floating-point values in zmm2/m512/m64bcst to eight signed doubleword integers in ymm1 using truncation subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts two, four or eight packed double precision floating-point values in the source operand (second operand) to two, four or eight packed signed doubleword integers in the destination operand (first operand).

+

When a conversion is inexact, a truncated (round toward zero) value is returned. If a converted result is larger than the maximum signed doubleword integer, the floating-point invalid exception is raised, and if this exception is masked, the indefinite integer value (80000000H) is returned.

+

EVEX encoded versions: The source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand is a YMM/XMM/XMM (low 64 bits) register conditionally updated with writemask k1. The upper bits (MAXVL-1:256) of the corresponding destination are zeroed.

+

VEX.256 encoded version: The source operand is a YMM register or 256-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

VEX.128 encoded version: The source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:64) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified.

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b, otherwise instructions will #UD.

+
[Figure 3-15: the four source double precision elements X3..X0 are converted and packed into the low 128 bits of DEST; the upper bits of DEST are zeroed.]
Figure 3-15. VCVTTPD2DQ (VEX.256 encoded version)
+

Operation + ¶ +

+

VCVTTPD2DQ (EVEX Encoded Versions) When SRC Operand is a Register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    k := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] :=
+            Convert_Double_Precision_Floating_Point_To_Integer_Truncate(SRC[k+63:k])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/2] := 0
+
+

VCVTTPD2DQ (EVEX Encoded Versions) When SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    k := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+31:i] :=
+            Convert_Double_Precision_Floating_Point_To_Integer_Truncate(SRC[63:0])
+                ELSE
+                    DEST[i+31:i] :=
+            Convert_Double_Precision_Floating_Point_To_Integer_Truncate(SRC[k+63:k])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/2] := 0
+
+

VCVTTPD2DQ (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := Convert_Double_Precision_Floating_Point_To_Integer_Truncate(SRC[63:0])
+DEST[63:32] := Convert_Double_Precision_Floating_Point_To_Integer_Truncate(SRC[127:64])
+DEST[95:64] := Convert_Double_Precision_Floating_Point_To_Integer_Truncate(SRC[191:128])
+DEST[127:96] := Convert_Double_Precision_Floating_Point_To_Integer_Truncate(SRC[255:192])
+DEST[MAXVL-1:128] := 0
+
+

VCVTTPD2DQ (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := Convert_Double_Precision_Floating_Point_To_Integer_Truncate(SRC[63:0])
+DEST[63:32] := Convert_Double_Precision_Floating_Point_To_Integer_Truncate(SRC[127:64])
+DEST[MAXVL-1:64] := 0
+
+

CVTTPD2DQ (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := Convert_Double_Precision_Floating_Point_To_Integer_Truncate(SRC[63:0])
+DEST[63:32] := Convert_Double_Precision_Floating_Point_To_Integer_Truncate(SRC[127:64])
+DEST[127:64] := 0
+DEST[MAXVL-1:128] (unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTTPD2DQ __m256i _mm512_cvttpd_epi32( __m512d a);
+
+
VCVTTPD2DQ __m256i _mm512_mask_cvttpd_epi32( __m256i s, __mmask8 k, __m512d a);
+
+
VCVTTPD2DQ __m256i _mm512_maskz_cvttpd_epi32( __mmask8 k, __m512d a);
+
+
VCVTTPD2DQ __m256i _mm512_cvtt_roundpd_epi32( __m512d a, int sae);
+
+
VCVTTPD2DQ __m256i _mm512_mask_cvtt_roundpd_epi32( __m256i s, __mmask8 k, __m512d a, int sae);
+
+
VCVTTPD2DQ __m256i _mm512_maskz_cvtt_roundpd_epi32( __mmask8 k, __m512d a, int sae);
+
+
VCVTTPD2DQ __m128i _mm256_mask_cvttpd_epi32( __m128i s, __mmask8 k, __m256d a);
+
+
VCVTTPD2DQ __m128i _mm256_maskz_cvttpd_epi32( __mmask8 k, __m256d a);
+
+
VCVTTPD2DQ __m128i _mm_mask_cvttpd_epi32( __m128i s, __mmask8 k, __m128d a);
+
+
VCVTTPD2DQ __m128i _mm_maskz_cvttpd_epi32( __mmask8 k, __m128d a);
+
+
VCVTTPD2DQ __m128i _mm256_cvttpd_epi32 (__m256d src);
+
+
CVTTPD2DQ __m128i _mm_cvttpd_epi32 (__m128d src);
+
+
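A minimal sketch of the 128-bit legacy form via _mm_cvttpd_epi32 (assuming SSE2); the two doubles are truncated toward zero and the upper two result lanes are zero:

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    __m128d src = _mm_set_pd(-2.9, 7.8);        /* high lane -2.9, low lane 7.8 */
    int32_t out[4];
    _mm_storeu_si128((__m128i *)out, _mm_cvttpd_epi32(src));
    printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);  /* 7 -2 0 0 */
    return 0;
}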

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf VEX.vvvv != 1111B or EVEX.vvvv != 1111B.
diff --git a/x86/cvttpd2pi.html b/x86/cvttpd2pi.html new file mode 100644 index 0000000..186ffc5 --- /dev/null +++ b/x86/cvttpd2pi.html @@ -0,0 +1,64 @@ + +CVTTPD2PI + — Convert With Truncation Packed Double Precision Floating-Point Values to PackedDword Integers

CVTTPD2PI + — Convert With Truncation Packed Double Precision Floating-Point Values to Packed Dword Integers

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
66 0F 2C /r CVTTPD2PI mm, xmm/m128RMValidValidConvert two packed double precision floating-point values from xmm/m128 to two packed signed doubleword integers in mm using truncation.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts two packed double precision floating-point values in the source operand (second operand) to two packed signed doubleword integers in the destination operand (first operand). The source operand can be an XMM register or a 128-bit memory location. The destination operand is an MMX technology register.

+

When a conversion is inexact, a truncated (round toward zero) result is returned. If a converted result is larger than the maximum signed doubleword integer, the floating-point invalid exception is raised, and if this exception is masked, the indefinite integer value (80000000H) is returned.

+

This instruction causes a transition from x87 FPU to MMX technology operation (that is, the x87 FPU top-of-stack pointer is set to 0 and the x87 FPU tag word is set to all 0s [valid]). If this instruction is executed while an x87 FPU floating-point exception is pending, the exception is handled before the CVTTPD2PI instruction is executed.

+

In 64-bit mode, use of the REX.R prefix permits this instruction to access additional registers (XMM8-XMM15).

+

Operation + ¶ +

+
DEST[31:0] := Convert_Double_Precision_Floating_Point_To_Integer32_Truncate(SRC[63:0]);
+DEST[63:32] := Convert_Double_Precision_Floating_Point_To_Integer32_Truncate(SRC[127:64]);
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
+CVTTPD2PI __m64 _mm_cvttpd_pi32(__m128d a)
+
+
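A hedged sketch of the MMX-destination form, assuming a GCC/Clang-style toolchain that still exposes the __m64 intrinsics; _mm_empty() (EMMS) clears the MMX state afterwards, as the x87/MMX transition described above requires:

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    __m64 r = _mm_cvttpd_pi32(_mm_set_pd(-1.7, 3.9));  /* truncates to {low=3, high=-1} */
    int32_t out[2];
    memcpy(out, &r, sizeof out);
    _mm_empty();                        /* EMMS: leave MMX state before any later x87 use */
    printf("%d %d\n", out[0], out[1]);  /* 3 -1 */
    return 0;
}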

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Mode Exceptions + ¶ +

+

See Table 23-4, “Exception Conditions for Legacy SIMD/MMX Instructions with FP Exception and 16-Byte Alignment,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

diff --git a/x86/cvttps2dq.html b/x86/cvttps2dq.html new file mode 100644 index 0000000..2783e55 --- /dev/null +++ b/x86/cvttps2dq.html @@ -0,0 +1,206 @@ + +CVTTPS2DQ + — Convert With Truncation Packed Single Precision Floating-Point Values to PackedSigned Doubleword Integer Values

CVTTPS2DQ + — Convert With Truncation Packed Single Precision Floating-Point Values to Packed Signed Doubleword Integer Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 5B /r CVTTPS2DQ xmm1, xmm2/m128AV/VSSE2Convert four packed single precision floating-point values from xmm2/mem to four packed signed doubleword values in xmm1 using truncation.
VEX.128.F3.0F.WIG 5B /r VCVTTPS2DQ xmm1, xmm2/m128AV/VAVXConvert four packed single precision floating-point values from xmm2/mem to four packed signed doubleword values in xmm1 using truncation.
VEX.256.F3.0F.WIG 5B /r VCVTTPS2DQ ymm1, ymm2/m256AV/VAVXConvert eight packed single precision floating-point values from ymm2/mem to eight packed signed doubleword values in ymm1 using truncation.
EVEX.128.F3.0F.W0 5B /r VCVTTPS2DQ xmm1 {k1}{z}, xmm2/m128/m32bcstBV/VAVX512VL AVX512FConvert four packed single precision floating-point values from xmm2/m128/m32bcst to four packed signed doubleword values in xmm1 using truncation subject to writemask k1.
EVEX.256.F3.0F.W0 5B /r VCVTTPS2DQ ymm1 {k1}{z}, ymm2/m256/m32bcstBV/VAVX512VL AVX512FConvert eight packed single precision floating-point values from ymm2/m256/m32bcst to eight packed signed doubleword values in ymm1 using truncation subject to writemask k1.
EVEX.512.F3.0F.W0 5B /r VCVTTPS2DQ zmm1 {k1}{z}, zmm2/m512/m32bcst {sae}BV/VAVX512FConvert sixteen packed single precision floating-point values from zmm2/m512/m32bcst to sixteen packed signed doubleword values in zmm1 using truncation subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts four, eight or sixteen packed single precision floating-point values in the source operand to four, eight or sixteen signed doubleword integers in the destination operand.

+

When a conversion is inexact, a truncated (round toward zero) value is returned. If a converted result is larger than the maximum signed doubleword integer, the floating-point invalid exception is raised, and if this exception is masked, the indefinite integer value (80000000H) is returned.

+

EVEX encoded versions: The source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

VEX.256 encoded version: The source operand is a YMM register or 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the corresponding ZMM register destination are zeroed.

+

VEX.128 encoded version: The source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified.

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+

VCVTTPS2DQ (EVEX Encoded Versions) When SRC Operand is a Register + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] :=
+            Convert_Single_Precision_Floating_Point_To_Integer_Truncate(SRC[i+31:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VCVTTPS2DQ (EVEX Encoded Versions) When SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+31:i] :=
+            Convert_Single_Precision_Floating_Point_To_Integer_Truncate(SRC[31:0])
+                ELSE
+                    DEST[i+31:i] :=
+            Convert_Single_Precision_Floating_Point_To_Integer_Truncate(SRC[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VCVTTPS2DQ (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := Convert_Single_Precision_Floating_Point_To_Integer_Truncate(SRC[31:0])
+DEST[63:32] := Convert_Single_Precision_Floating_Point_To_Integer_Truncate(SRC[63:32])
+DEST[95:64] := Convert_Single_Precision_Floating_Point_To_Integer_Truncate(SRC[95:64])
+DEST[127:96] := Convert_Single_Precision_Floating_Point_To_Integer_Truncate(SRC[127:96])
+DEST[159:128] := Convert_Single_Precision_Floating_Point_To_Integer_Truncate(SRC[159:128])
+DEST[191:160] := Convert_Single_Precision_Floating_Point_To_Integer_Truncate(SRC[191:160])
+DEST[223:192] := Convert_Single_Precision_Floating_Point_To_Integer_Truncate(SRC[223:192])
+DEST[255:224] := Convert_Single_Precision_Floating_Point_To_Integer_Truncate(SRC[255:224])
+
+

VCVTTPS2DQ (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := Convert_Single_Precision_Floating_Point_To_Integer_Truncate(SRC[31:0])
+DEST[63:32] := Convert_Single_Precision_Floating_Point_To_Integer_Truncate(SRC[63:32])
+DEST[95:64] := Convert_Single_Precision_Floating_Point_To_Integer_Truncate(SRC[95:64])
+DEST[127:96] := Convert_Single_Precision_Floating_Point_To_Integer_Truncate(SRC[127:96])
+DEST[MAXVL-1:128] := 0
+
+

CVTTPS2DQ (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := Convert_Single_Precision_Floating_Point_To_Integer_Truncate(SRC[31:0])
+DEST[63:32] := Convert_Single_Precision_Floating_Point_To_Integer_Truncate(SRC[63:32])
+DEST[95:64] := Convert_Single_Precision_Floating_Point_To_Integer_Truncate(SRC[95:64])
+DEST[127:96] := Convert_Single_Precision_Floating_Point_To_Integer_Truncate(SRC[127:96])
+DEST[MAXVL-1:128] (unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTTPS2DQ __m512i _mm512_cvttps_epi32( __m512 a);
+
+
VCVTTPS2DQ __m512i _mm512_mask_cvttps_epi32( __m512i s, __mmask16 k, __m512 a);
+
+
VCVTTPS2DQ __m512i _mm512_maskz_cvttps_epi32( __mmask16 k, __m512 a);
+
+
VCVTTPS2DQ __m512i _mm512_cvtt_roundps_epi32( __m512 a, int sae);
+
+
VCVTTPS2DQ __m512i _mm512_mask_cvtt_roundps_epi32( __m512i s, __mmask16 k, __m512 a, int sae);
+
+
VCVTTPS2DQ __m512i _mm512_maskz_cvtt_roundps_epi32( __mmask16 k, __m512 a, int sae);
+
+
VCVTTPS2DQ __m256i _mm256_mask_cvttps_epi32( __m256i s, __mmask8 k, __m256 a);
+
+
VCVTTPS2DQ __m256i _mm256_maskz_cvttps_epi32( __mmask8 k, __m256 a);
+
+
VCVTTPS2DQ __m128i _mm_mask_cvttps_epi32( __m128i s, __mmask8 k, __m128 a);
+
+
VCVTTPS2DQ __m128i _mm_maskz_cvttps_epi32( __mmask8 k, __m128 a);
+
+
VCVTTPS2DQ __m256i _mm256_cvttps_epi32 (__m256 a)
+
+
CVTTPS2DQ __m128i _mm_cvttps_epi32 (__m128 a)
+
+
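A short C sketch (assuming SSE2) using the legacy _mm_cvttps_epi32 form; one lane is deliberately out of range to show the indefinite integer value:

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    __m128 src = _mm_set_ps(3.0e10f, -0.9f, 2.7f, -5.5f);  /* lanes 3..0 */
    int32_t out[4];
    _mm_storeu_si128((__m128i *)out, _mm_cvttps_epi32(src));
    printf("%d %d %d 0x%08x\n", out[0], out[1], out[2], (unsigned)out[3]);
    /* expected: -5 2 0 0x80000000 (lane 3 exceeds the signed doubleword range) */
    return 0;
}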

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf VEX.vvvv != 1111B or EVEX.vvvv != 1111B.
diff --git a/x86/cvttps2pi.html b/x86/cvttps2pi.html new file mode 100644 index 0000000..3f80aec --- /dev/null +++ b/x86/cvttps2pi.html @@ -0,0 +1,64 @@ + +CVTTPS2PI + — Convert With Truncation Packed Single Precision Floating-Point Values to PackedDword Integers

CVTTPS2PI + — Convert With Truncation Packed Single Precision Floating-Point Values to Packed Dword Integers

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
NP 0F 2C /r CVTTPS2PI mm, xmm/m64RMValidValidConvert two single precision floating-point values from xmm/m64 to two signed doubleword integers in mm using truncation.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts two packed single precision floating-point values in the source operand (second operand) to two packed signed doubleword integers in the destination operand (first operand). The source operand can be an XMM register or a 64-bit memory location. The destination operand is an MMX technology register. When the source operand is an XMM register, the two single precision floating-point values are contained in the low quadword of the register.

+

When a conversion is inexact, a truncated (round toward zero) result is returned. If a converted result is larger than the maximum signed doubleword integer, the floating-point invalid exception is raised, and if this exception is masked, the indefinite integer value (80000000H) is returned.

+

This instruction causes a transition from x87 FPU to MMX technology operation (that is, the x87 FPU top-of-stack pointer is set to 0 and the x87 FPU tag word is set to all 0s [valid]). If this instruction is executed while an x87 FPU floating-point exception is pending, the exception is handled before the CVTTPS2PI instruction is executed.

+

In 64-bit mode, use of the REX.R prefix permits this instruction to access additional registers (XMM8-XMM15).

+

Operation + ¶ +

+
DEST[31:0] := Convert_Single_Precision_Floating_Point_To_Integer_Truncate(SRC[31:0]);
+DEST[63:32] := Convert_Single_Precision_Floating_Point_To_Integer_Truncate(SRC[63:32]);
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
CVTTPS2PI __m64 _mm_cvttps_pi32(__m128 a)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

See Table 23-5, “Exception Conditions for Legacy SIMD/MMX Instructions with XMM and FP Exception,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

diff --git a/x86/cvttsd2si.html b/x86/cvttsd2si.html new file mode 100644 index 0000000..9a06392 --- /dev/null +++ b/x86/cvttsd2si.html @@ -0,0 +1,132 @@ + +CVTTSD2SI + — Convert With Truncation Scalar Double Precision Floating-Point Value to SignedInteger

CVTTSD2SI + — Convert With Truncation Scalar Double Precision Floating-Point Value to Signed Integer

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F2 0F 2C /r CVTTSD2SI r32, xmm1/m64AV/VSSE2Convert one double precision floating-point value from xmm1/m64 to one signed doubleword integer in r32 using truncation.
F2 REX.W 0F 2C /r CVTTSD2SI r64, xmm1/m64AV/N.E.SSE2Convert one double precision floating-point value from xmm1/m64 to one signed quadword integer in r64 using truncation.
VEX.LIG.F2.0F.W0 2C /r 1 VCVTTSD2SI r32, xmm1/m64AV/VAVXConvert one double precision floating-point value from xmm1/m64 to one signed doubleword integer in r32 using truncation.
VEX.LIG.F2.0F.W1 2C /r 1 VCVTTSD2SI r64, xmm1/m64BV/N.E.2AVXConvert one double precision floating-point value from xmm1/m64 to one signed quadword integer in r64 using truncation.
EVEX.LLIG.F2.0F.W0 2C /r VCVTTSD2SI r32, xmm1/m64{sae}BV/VAVX512FConvert one double precision floating-point value from xmm1/m64 to one signed doubleword integer in r32 using truncation.
EVEX.LLIG.F2.0F.W1 2C /r VCVTTSD2SI r64, xmm1/m64{sae}BV/N.E.2AVX512FConvert one double precision floating-point value from xmm1/m64 to one signed quadword integer in r64 using truncation.
+
+

1. Software should ensure VCVTTSD2SI is encoded with VEX.L=0. Encoding VCVTTSD2SI with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

2. For this specific instruction, VEX.W/EVEX.W in non-64-bit mode is ignored; the instruction behaves as if the W0 version is used.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BTuple1 FixedModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts a double precision floating-point value in the source operand (the second operand) to a signed double-word integer (or signed quadword integer if operand size is 64 bits) in the destination operand (the first operand). The source operand can be an XMM register or a 64-bit memory location. The destination operand is a general purpose register. When the source operand is an XMM register, the double precision floating-point value is contained in the low quadword of the register.

+

When a conversion is inexact, a truncated (round toward zero) result is returned.

+

If a converted result exceeds the range limits of signed doubleword integer (in non-64-bit modes or 64-bit mode with REX.W/VEX.W/EVEX.W=0), the floating-point invalid exception is raised, and if this exception is masked, the indefinite integer value (80000000H) is returned.

+

If a converted result exceeds the range limits of signed quadword integer (in 64-bit mode and REX.W/VEX.W/EVEX.W = 1), the floating-point invalid exception is raised, and if this exception is masked, the indefinite integer value (80000000_00000000H) is returned.

+

Legacy SSE instructions: In 64-bit mode, use of the REX.W prefix promotes the instruction to 64-bit operation. See the summary chart at the beginning of this section for encoding data and limits.

+

VEX.W1 and EVEX.W1 versions promote the instruction to produce 64-bit data in 64-bit mode.

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b, otherwise instructions will #UD.

+

Software should ensure VCVTTSD2SI is encoded with VEX.L=0. Encoding VCVTTSD2SI with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+

(V)CVTTSD2SI (All Versions) + ¶ +

+
IF 64-Bit Mode and OperandSize = 64
+THEN
+    DEST[63:0] := Convert_Double_Precision_Floating_Point_To_Integer_Truncate(SRC[63:0]);
+ELSE
+    DEST[31:0] := Convert_Double_Precision_Floating_Point_To_Integer_Truncate(SRC[63:0]);
+FI;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTTSD2SI int _mm_cvttsd_i32( __m128d a);
+
+
VCVTTSD2SI int _mm_cvtt_roundsd_i32( __m128d a, int sae);
+
+
VCVTTSD2SI __int64 _mm_cvttsd_i64( __m128d a);
+
+
VCVTTSD2SI __int64 _mm_cvtt_roundsd_i64( __m128d a, int sae);
+
+
CVTTSD2SI int _mm_cvttsd_si32( __m128d a);
+
+
CVTTSD2SI __int64 _mm_cvttsd_si64( __m128d a);
+
+
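A compact C comparison (assuming SSE2) of the truncating conversion against the MXCSR-rounded CVTSD2SI form, using the legacy intrinsics:

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128d x = _mm_set_sd(-3.7);
    printf("truncated=%d rounded=%d\n",
           _mm_cvttsd_si32(x),   /* -3: chopped toward zero */
           _mm_cvtsd_si32(x));   /* -4 under the default round-to-nearest mode */
    return 0;
}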

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.vvvv != 1111B.
+

EVEX-encoded instructions, see Table 2-48, “Type E3NF Class Exception Conditions.”

diff --git a/x86/cvttss2si.html b/x86/cvttss2si.html new file mode 100644 index 0000000..1d29133 --- /dev/null +++ b/x86/cvttss2si.html @@ -0,0 +1,130 @@ + +CVTTSS2SI + — Convert With Truncation Scalar Single Precision Floating-Point Value to Integer

CVTTSS2SI + — Convert With Truncation Scalar Single Precision Floating-Point Value to Integer

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 2C /r CVTTSS2SI r32, xmm1/m32AV/VSSEConvert one single precision floating-point value from xmm1/m32 to one signed doubleword integer in r32 using truncation.
F3 REX.W 0F 2C /r CVTTSS2SI r64, xmm1/m32AV/N.E.SSEConvert one single precision floating-point value from xmm1/m32 to one signed quadword integer in r64 using truncation.
VEX.LIG.F3.0F.W0 2C /r 1 VCVTTSS2SI r32, xmm1/m32AV/VAVXConvert one single precision floating-point value from xmm1/m32 to one signed doubleword integer in r32 using truncation.
VEX.LIG.F3.0F.W1 2C /r 1 VCVTTSS2SI r64, xmm1/m32AV/N.E.2AVXConvert one single precision floating-point value from xmm1/m32 to one signed quadword integer in r64 using truncation.
EVEX.LLIG.F3.0F.W0 2C /r VCVTTSS2SI r32, xmm1/m32{sae}BV/VAVX512FConvert one single precision floating-point value from xmm1/m32 to one signed doubleword integer in r32 using truncation.
EVEX.LLIG.F3.0F.W1 2C /r VCVTTSS2SI r64, xmm1/m32{sae}BV/N.E.2AVX512FConvert one single precision floating-point value from xmm1/m32 to one signed quadword integer in r64 using truncation.
+
+

1. Software should ensure VCVTTSS2SI is encoded with VEX.L=0. Encoding VCVTTSS2SI with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

2. For this specific instruction, VEX.W/EVEX.W in non-64-bit mode is ignored; the instruction behaves as if the W0 version is used.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BTuple1 FixedModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts a single precision floating-point value in the source operand (the second operand) to a signed doubleword integer (or signed quadword integer if operand size is 64 bits) in the destination operand (the first operand). The source operand can be an XMM register or a 32-bit memory location. The destination operand is a general purpose register. When the source operand is an XMM register, the single precision floating-point value is contained in the low doubleword of the register.

+

When a conversion is inexact, a truncated (round toward zero) result is returned. If a converted result is larger than the maximum signed doubleword integer, the floating-point invalid exception is raised. If this exception is masked, the indefinite integer value (80000000H or 80000000_00000000H if operand size is 64 bits) is returned.

+

Legacy SSE instructions: In 64-bit mode, use of the REX.W prefix promotes the instruction to 64-bit operation. See the summary chart at the beginning of this section for encoding data and limits.

+

VEX.W1 and EVEX.W1 versions promote the instruction to produce 64-bit data in 64-bit mode.

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b, otherwise instructions will #UD.

+

Software should ensure VCVTTSS2SI is encoded with VEX.L=0. Encoding VCVTTSS2SI with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+

(V)CVTTSS2SI (All Versions) + ¶ +

+
IF 64-Bit Mode and OperandSize = 64
+THEN
+    DEST[63:0] := Convert_Single_Precision_Floating_Point_To_Integer_Truncate(SRC[31:0]);
+ELSE
+    DEST[31:0] := Convert_Single_Precision_Floating_Point_To_Integer_Truncate(SRC[31:0]);
+FI;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTTSS2SI int _mm_cvttss_i32( __m128 a);
+
+
VCVTTSS2SI int _mm_cvtt_roundss_i32( __m128 a, int sae);
+
+
VCVTTSS2SI __int64 _mm_cvttss_i64( __m128 a);
+
+
VCVTTSS2SI __int64 _mm_cvtt_roundss_i64( __m128 a, int sae);
+
+
CVTTSS2SI int _mm_cvttss_si32( __m128 a);
+
+
CVTTSS2SI __int64 _mm_cvttss_si64( __m128 a);
+
+
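A small C sketch (assuming a 64-bit SSE target) contrasting the 32-bit and 64-bit truncating forms on the same value; 1e10 overflows a signed doubleword but fits in a signed quadword:

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128 x = _mm_set_ss(1e10f);                           /* exactly representable as a float */
    printf("r32=0x%08x\n", (unsigned)_mm_cvttss_si32(x));   /* 0x80000000: indefinite integer */
    printf("r64=%lld\n", (long long)_mm_cvttss_si64(x));    /* 10000000000 */
    return 0;
}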

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.vvvv != 1111B.
+

EVEX-encoded instructions, see Table 2-48, “Type E3NF Class Exception Conditions.”

diff --git a/x86/cwd.cdq.cqo.html b/x86/cwd.cdq.cqo.html new file mode 100644 index 0000000..107b38d --- /dev/null +++ b/x86/cwd.cdq.cqo.html @@ -0,0 +1,83 @@ + +CWD/CDQ/CQO + — Convert Word to Doubleword/Convert Doubleword to Quadword

CWD/CDQ/CQO + — Convert Word to Doubleword/Convert Doubleword to Quadword

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
99CWDZOValidValidDX:AX := sign-extend of AX.
99CDQZOValidValidEDX:EAX := sign-extend of EAX.
REX.W + 99CQOZOValidN.E.RDX:RAX := sign-extend of RAX.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Doubles the size of the operand in register AX, EAX, or RAX (depending on the operand size) by means of sign extension and stores the result in registers DX:AX, EDX:EAX, or RDX:RAX, respectively. The CWD instruction copies the sign (bit 15) of the value in the AX register into every bit position in the DX register. The CDQ instruction copies the sign (bit 31) of the value in the EAX register into every bit position in the EDX register. The CQO instruction (available in 64-bit mode only) copies the sign (bit 63) of the value in the RAX register into every bit position in the RDX register.

+

The CWD instruction can be used to produce a doubleword dividend from a word before word division. The CDQ instruction can be used to produce a quadword dividend from a doubleword before doubleword division. The CQO instruction can be used to produce a double quadword dividend from a quadword before a quadword division.
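For example (a GNU C inline-assembly sketch for an x86 target, not taken from the manual), CDQ prepares the EDX:EAX dividend expected by a following signed division; the constraints keep the divisor out of EAX and EDX:

#include <stdint.h>

static int32_t sdiv32(int32_t dividend, int32_t divisor, int32_t *remainder)
{
    int32_t quotient;
    /* #DE is raised if divisor is 0 or the quotient overflows 32 bits. */
    __asm__("cdq\n\t"            /* EDX:EAX := sign-extend of EAX */
            "idivl %[d]"         /* EDX:EAX / divisor -> EAX = quotient, EDX = remainder */
            : "=a"(quotient), "=&d"(*remainder)
            : "0"(dividend), [d] "r"(divisor)
            : "cc");
    return quotient;             /* e.g., sdiv32(-7, 2, &r) gives -3 with r = -1 */
}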

+

The CWD and CDQ mnemonics reference the same opcode. The CWD instruction is intended for use when the operand-size attribute is 16 and the CDQ instruction for when the operand-size attribute is 32. Some assemblers may force the operand size to 16 when CWD is used and to 32 when CDQ is used. Others may treat these mnemonics as synonyms (CWD/CDQ) and use the current setting of the operand-size attribute to determine the size of values to be converted, regardless of the mnemonic used.

+

In 64-bit mode, use of the REX.W prefix promotes operation to 64 bits. The CQO mnemonic references the same opcode as CWD/CDQ. See the summary chart at the beginning of this section for encoding data and limits.

+

Operation + ¶ +

+
IF OperandSize = 16 (* CWD instruction *)
+    THEN
+        DX := SignExtend(AX);
+    ELSE IF OperandSize = 32 (* CDQ instruction *)
+        EDX := SignExtend(EAX); FI;
+    ELSE IF 64-Bit Mode and OperandSize = 64 (* CQO instruction*)
+        RDX := SignExtend(RAX); FI;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Exceptions (All Operating Modes) + ¶ +

+

#UD If the LOCK prefix is used.

diff --git a/x86/daa.html b/x86/daa.html new file mode 100644 index 0000000..c118efa --- /dev/null +++ b/x86/daa.html @@ -0,0 +1,120 @@ + +DAA + — Decimal Adjust AL After Addition

DAA + — Decimal Adjust AL After Addition

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
27DAAZOInvalidValidDecimal adjust AL after addition.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Adjusts the sum of two packed BCD values to create a packed BCD result. The AL register is the implied source and destination operand. The DAA instruction is only useful when it follows an ADD instruction that adds (binary addition) two 2-digit, packed BCD values and stores a byte result in the AL register. The DAA instruction then adjusts the contents of the AL register to contain the correct 2-digit, packed BCD result. If a decimal carry is detected, the CF and AF flags are set accordingly.

+

This instruction executes as described above in compatibility mode and legacy mode. It is not valid in 64-bit mode.

+

Operation + ¶ +

+
IF 64-Bit Mode
+    THEN
+        #UD;
+    ELSE
+        old_AL := AL;
+        old_CF := CF;
+        CF := 0;
+        IF (((AL AND 0FH) > 9) or AF = 1)
+                THEN
+                    AL := AL + 6;
+                    CF := old_CF or (Carry from AL := AL + 6);
+                    AF := 1;
+                ELSE
+                    AF := 0;
+        FI;
+        IF ((old_AL > 99H) or (old_CF = 1))
+            THEN
+                    AL := AL + 60H;
+                    CF := 1;
+            ELSE
+                    CF := 0;
+        FI;
+FI;
+
+

Example + ¶ +

+

ADD AL, BL Before: AL=79H BL=35H EFLAGS(OSZAPC)=XXXXXX

+

After: AL=AEH BL=35H EFLAGS(OSZAPC)=110000

+

DAA Before: AL=AEH BL=35H EFLAGS(OSZAPC)=110000

+

After: AL=14H BL=35H EFLAGS(OSZAPC)=X00111

+

DAA Before: AL=2EH BL=35H EFLAGS(OSZAPC)=110000

+

After: AL=34H BL=35H EFLAGS(OSZAPC)=X00101

+

Flags Affected + ¶ +

+

The CF and AF flags are set if the adjustment of the value results in a decimal carry in either digit of the result (see the “Operation” section above). The SF, ZF, and PF flags are set according to the result. The OF flag is undefined.

+

Protected Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

64-Bit Mode Exceptions + ¶ +

+ + + +
#UDIf in 64-bit mode.
diff --git a/x86/das.html b/x86/das.html new file mode 100644 index 0000000..070ce3e --- /dev/null +++ b/x86/das.html @@ -0,0 +1,116 @@ + +DAS + — Decimal Adjust AL After Subtraction

DAS + — Decimal Adjust AL After Subtraction

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
2FDASZOInvalidValidDecimal adjust AL after subtraction.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Adjusts the result of the subtraction of two packed BCD values to create a packed BCD result. The AL register is the implied source and destination operand. The DAS instruction is only useful when it follows a SUB instruction that subtracts (binary subtraction) one 2-digit, packed BCD value from another and stores a byte result in the AL register. The DAS instruction then adjusts the contents of the AL register to contain the correct 2-digit, packed BCD result. If a decimal borrow is detected, the CF and AF flags are set accordingly.

+

This instruction executes as described above in compatibility mode and legacy mode. It is not valid in 64-bit mode.

+

Operation + ¶ +

+
IF 64-Bit Mode
+    THEN
+        #UD;
+    ELSE
+        old_AL := AL;
+        old_CF := CF;
+        CF := 0;
+        IF (((AL AND 0FH) > 9) or AF = 1)
+            THEN
+                    AL := AL − 6;
+                    CF := old_CF or (Borrow from AL := AL − 6);
+                    AF := 1;
+            ELSE
+                    AF := 0;
+        FI;
+        IF ((old_AL > 99H) or (old_CF = 1))
+                THEN
+                    AL := AL − 60H;
+                    CF := 1;
+        FI;
+FI;
+
+

Example + ¶ +

+

SUB AL, BL Before: AL = 35H, BL = 47H, EFLAGS(OSZAPC) = XXXXXX

+

After: AL = EEH, BL = 47H, EFLAGS(OSZAPC) = 010111

+

DAS Before: AL = EEH, BL = 47H, EFLAGS(OSZAPC) = 010111

+

After: AL = 88H, BL = 47H, EFLAGS(OSZAPC) = X10111

+

Flags Affected + ¶ +

+

The CF and AF flags are set if the adjustment of the value results in a decimal borrow in either digit of the result (see the “Operation” section above). The SF, ZF, and PF flags are set according to the result. The OF flag is undefined.

+

Protected Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

64-Bit Mode Exceptions + ¶ +

+ + + +
#UDIf in 64-bit mode.
diff --git a/x86/dec.html b/x86/dec.html new file mode 100644 index 0000000..c89948b --- /dev/null +++ b/x86/dec.html @@ -0,0 +1,184 @@ + +DEC + — Decrement by 1

DEC + — Decrement by 1

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
FE /1DEC r/m8MValidValidDecrement r/m8 by 1.
REX + FE /1DEC r/m8*MValidN.E.Decrement r/m8 by 1.
FF /1DEC r/m16MValidValidDecrement r/m16 by 1.
FF /1DEC r/m32MValidValidDecrement r/m32 by 1.
REX.W + FF /1DEC r/m64MValidN.E.Decrement r/m64 by 1.
48+rwDEC r16ON.E.ValidDecrement r16 by 1.
48+rdDEC r32ON.E.ValidDecrement r32 by 1.
+
+

* In 64-bit mode, r/m8 cannot be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r, w)N/AN/AN/A
Oopcode + rd (r, w)N/AN/AN/A
+

Description + ¶ +

+

Subtracts 1 from the destination operand, while preserving the state of the CF flag. The destination operand can be a register or a memory location. This instruction allows a loop counter to be updated without disturbing the CF flag. (To perform a decrement operation that updates the CF flag, use a SUB instruction with an immediate operand of 1.)

+

This instruction can be used with a LOCK prefix to allow the instruction to be executed atomically.

+

In 64-bit mode, DEC r16 and DEC r32 are not encodable (because opcodes 48H through 4FH are REX prefixes). Otherwise, the instruction’s 64-bit mode default operation size is 32 bits. Use of the REX.R prefix permits access to additional registers (R8-R15). Use of the REX.W prefix promotes operation to 64 bits.

+

See the summary chart at the beginning of this section for encoding data and limits.

+

Operation + ¶ +

+
DEST := DEST – 1;
+
+

Flags Affected + ¶ +

+

The CF flag is not affected. The OF, SF, ZF, AF, and PF flags are set according to the result.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination operand is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
diff --git a/x86/div.html b/x86/div.html new file mode 100644 index 0000000..f1f1b07 --- /dev/null +++ b/x86/div.html @@ -0,0 +1,263 @@ + +DIV + — Unsigned Divide

DIV + — Unsigned Divide

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
F6 /6DIV r/m8MValidValidUnsigned divide AX by r/m8, with result stored in AL := Quotient, AH := Remainder.
REX + F6 /6DIV r/m81MValidN.E.Unsigned divide AX by r/m8, with result stored in AL := Quotient, AH := Remainder.
F7 /6DIV r/m16MValidValidUnsigned divide DX:AX by r/m16, with result stored in AX := Quotient, DX := Remainder.
F7 /6DIV r/m32MValidValidUnsigned divide EDX:EAX by r/m32, with result stored in EAX := Quotient, EDX := Remainder.
REX.W + F7 /6DIV r/m64MValidN.E.Unsigned divide RDX:RAX by r/m64, with result stored in RAX := Quotient, RDX := Remainder.
+
+

1. In 64-bit mode, r/m8 can not be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r)N/AN/AN/A
+

Description + ¶ +

+

Divides (as an unsigned integer) the value in the AX, DX:AX, EDX:EAX, or RDX:RAX registers (dividend) by the source operand (divisor) and stores the result in the AX (AH:AL), DX:AX, EDX:EAX, or RDX:RAX registers. The source operand can be a general-purpose register or a memory location. The action of this instruction depends on the operand size (dividend/divisor). Division using a 64-bit operand is available only in 64-bit mode.

+

Non-integral results are truncated (chopped) towards 0. The remainder is always less than the divisor in magnitude. Overflow is indicated with the #DE (divide error) exception rather than with the CF flag.

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Use of the REX.R prefix permits access to additional registers (R8-R15). Use of the REX.W prefix promotes operation to 64 bits. In 64-bit mode when REX.W is applied, the instruction divides the unsigned value in RDX:RAX by the source operand and stores the quotient in RAX, the remainder in RDX.

+

See the summary chart at the beginning of this section for encoding data and limits. See Table 3-15.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Operand SizeDividendDivisorQuotientRemainderMaximum Quotient
Word/byteAXr/m8ALAH255
Doubleword/wordDX:AXr/m16AXDX65,535
Quadword/doublewordEDX:EAXr/m32EAXEDX2³² − 1
Doublequadword/quadwordRDX:RAXr/m64RAXRDX2⁶⁴ − 1
+
Table 3-15. DIV Action
+

Operation + ¶ +

+
IF SRC = 0
+    THEN #DE; FI; (* Divide Error *)
+IF OperandSize = 8 (* Word/Byte Operation *)
+    THEN
+        temp := AX / SRC;
+        IF temp > FFH
+            THEN #DE; (* Divide error *)
+            ELSE
+                AL := temp;
+                AH := AX MOD SRC;
+        FI;
+    ELSE IF OperandSize = 16 (* Doubleword/word operation *)
+        THEN
+            temp := DX:AX / SRC;
+            IF temp > FFFFH
+                THEN #DE; (* Divide error *)
+            ELSE
+                AX := temp;
+                DX := DX:AX MOD SRC;
+            FI;
+        FI;
+    ELSE IF OperandSize = 32 (* Quadword/doubleword operation *)
+        THEN
+            temp := EDX:EAX / SRC;
+            IF temp > FFFFFFFFH
+                THEN #DE; (* Divide error *)
+            ELSE
+                EAX := temp;
+                EDX := EDX:EAX MOD SRC;
+            FI;
+        FI;
+    ELSE IF 64-Bit Mode and OperandSize = 64 (* Doublequadword/quadword operation *)
+        THEN
+            temp := RDX:RAX / SRC;
+            IF temp > FFFFFFFFFFFFFFFFH
+                THEN #DE; (* Divide error *)
+            ELSE
+                RAX := temp;
+                RDX := RDX:RAX MOD SRC;
+            FI;
+        FI;
+FI;
+
+

Flags Affected + ¶ +

+

The CF, OF, SF, ZF, AF, and PF flags are undefined.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + +
#DEIf the source operand (divisor) is 0.
If the quotient is too large for the designated register.
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#DEIf the source operand (divisor) is 0.
If the quotient is too large for the designated register.
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#DEIf the source operand (divisor) is 0.
If the quotient is too large for the designated register.
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#DEIf the source operand (divisor) is 0.
If the quotient is too large for the designated register.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/divpd.html b/x86/divpd.html new file mode 100644 index 0000000..8d27057 --- /dev/null +++ b/x86/divpd.html @@ -0,0 +1,185 @@ + +DIVPD + — Divide Packed Double Precision Floating-Point Values

DIVPD + — Divide Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 5E /r DIVPD xmm1, xmm2/m128AV/VSSE2Divide packed double precision floating-point values in xmm1 by packed double precision floating-point values in xmm2/mem.
VEX.128.66.0F.WIG 5E /r VDIVPD xmm1, xmm2, xmm3/m128BV/VAVXDivide packed double precision floating-point values in xmm2 by packed double precision floating-point values in xmm3/mem.
VEX.256.66.0F.WIG 5E /r VDIVPD ymm1, ymm2, ymm3/m256BV/VAVXDivide packed double precision floating-point values in ymm2 by packed double precision floating-point values in ymm3/mem.
EVEX.128.66.0F.W1 5E /r VDIVPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512FDivide packed double precision floating-point values in xmm2 by packed double precision floating-point values in xmm3/m128/m64bcst and write results to xmm1 subject to writemask k1.
EVEX.256.66.0F.W1 5E /r VDIVPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FDivide packed double precision floating-point values in ymm2 by packed double precision floating-point values in ymm3/m256/m64bcst and write results to ymm1 subject to writemask k1.
EVEX.512.66.0F.W1 5E /r VDIVPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er}CV/VAVX512FDivide packed double precision floating-point values in zmm2 by packed double precision floating-point values in zmm3/m512/m64bcst and write results to zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD divide of the double precision floating-point values in the first source operand by the floating-point values in the second source operand (the third operand). Results are written to the destination operand (the first operand).

+

EVEX encoded versions: The first source operand (the second operand) is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

VEX.256 encoded version: The first source operand (the second operand) is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the corresponding destination are zeroed.

+

VEX.128 encoded version: The first source operand (the second operand) is a XMM register. The second source operand can be a XMM register or a 128-bit memory location. The destination operand is a XMM register. The upper bits (MAXVL-1:128) of the corresponding destination are zeroed.

+

128-bit Legacy SSE version: The second source operand can be an XMM register or a 128-bit memory location. The destination is the same as the first source operand. The upper bits (MAXVL-1:128) of the corresponding destination are unmodified.

+

Operation + ¶ +

+

VDIVPD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1) AND SRC2 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC); ; refer to Table 15-4 in the Intel® 64 and IA-32 Architectures
+Software Developer’s Manual, Volume 1
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN
+                    DEST[i+63:i] := SRC1[i+63:i] / SRC2[63:0]
+                ELSE
+                    DEST[i+63:i] := SRC1[i+63:i] / SRC2[i+63:i]
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VDIVPD (VEX.256 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0] / SRC2[63:0]
+DEST[127:64] := SRC1[127:64] / SRC2[127:64]
+DEST[191:128] := SRC1[191:128] / SRC2[191:128]
+DEST[255:192] := SRC1[255:192] / SRC2[255:192]
+DEST[MAXVL-1:256] := 0;
+
+

VDIVPD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0] / SRC2[63:0]
+DEST[127:64] := SRC1[127:64] / SRC2[127:64]
+DEST[MAXVL-1:128] := 0;
+
+

DIVPD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := SRC1[63:0] / SRC2[63:0]
+DEST[127:64] := SRC1[127:64] / SRC2[127:64]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VDIVPD __m512d _mm512_div_pd( __m512d a, __m512d b);
+
+
VDIVPD __m512d _mm512_mask_div_pd(__m512d s, __mmask8 k, __m512d a, __m512d b);
+
+
VDIVPD __m512d _mm512_maskz_div_pd( __mmask8 k, __m512d a, __m512d b);
+
+
VDIVPD __m256d _mm256_mask_div_pd(__m256d s, __mmask8 k, __m256d a, __m256d b);
+
+
VDIVPD __m256d _mm256_maskz_div_pd( __mmask8 k, __m256d a, __m256d b);
+
+
VDIVPD __m128d _mm_mask_div_pd(__m128d s, __mmask8 k, __m128d a, __m128d b);
+
+
VDIVPD __m128d _mm_maskz_div_pd( __mmask8 k, __m128d a, __m128d b);
+
+
VDIVPD __m512d _mm512_div_round_pd( __m512d a, __m512d b, int);
+
+
VDIVPD __m512d _mm512_mask_div_round_pd(__m512d s, __mmask8 k, __m512d a, __m512d b, int);
+
+
VDIVPD __m512d _mm512_maskz_div_round_pd( __mmask8 k, __m512d a, __m512d b, int);
+
+
VDIVPD __m256d _mm256_div_pd (__m256d a, __m256d b);
+
+
DIVPD __m128d _mm_div_pd (__m128d a, __m128d b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Divide-by-Zero, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/divps.html b/x86/divps.html new file mode 100644 index 0000000..1aa46fe --- /dev/null +++ b/x86/divps.html @@ -0,0 +1,192 @@ + +DIVPS + — Divide Packed Single Precision Floating-Point Values

DIVPS + — Divide Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 5E /r DIVPS xmm1, xmm2/m128AV/VSSEDivide packed single precision floating-point values in xmm1 by packed single precision floating-point values in xmm2/mem.
VEX.128.0F.WIG 5E /r VDIVPS xmm1, xmm2, xmm3/m128BV/VAVXDivide packed single precision floating-point values in xmm2 by packed single precision floating-point values in xmm3/mem.
VEX.256.0F.WIG 5E /r VDIVPS ymm1, ymm2, ymm3/m256BV/VAVXDivide packed single precision floating-point values in ymm2 by packed single precision floating-point values in ymm3/mem.
EVEX.128.0F.W0 5E /r VDIVPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512FDivide packed single precision floating-point values in xmm2 by packed single precision floating-point values in xmm3/m128/m32bcst and write results to xmm1 subject to writemask k1.
EVEX.256.0F.W0 5E /r VDIVPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512FDivide packed single precision floating-point values in ymm2 by packed single precision floating-point values in ymm3/m256/m32bcst and write results to ymm1 subject to writemask k1.
EVEX.512.0F.W0 5E /r VDIVPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{er}CV/VAVX512FDivide packed single precision floating-point values in zmm2 by packed single precision floating-point values in zmm3/m512/m32bcst and write results to zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD divide of the four, eight or sixteen packed single precision floating-point values in the first source operand (the second operand) by the four, eight or sixteen packed single precision floating-point values in the second source operand (the third operand). Results are written to the destination operand (the first operand).

+

EVEX encoded versions: The first source operand (the second operand) is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+

VEX.128 encoded version: The first source operand is a XMM register. The second source operand can be a XMM register or a 128-bit memory location. The destination operand is a XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The second source can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified.

+

Operation + ¶ +

+

VDIVPS (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1) AND SRC2 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN
+                    DEST[i+31:i] := SRC1[i+31:i] / SRC2[31:0]
+                ELSE
+                    DEST[i+31:i] := SRC1[i+31:i] / SRC2[i+31:i]
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VDIVPS (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] / SRC2[31:0]
+DEST[63:32] := SRC1[63:32] / SRC2[63:32]
+DEST[95:64] := SRC1[95:64] / SRC2[95:64]
+DEST[127:96] := SRC1[127:96] / SRC2[127:96]
+DEST[159:128] := SRC1[159:128] / SRC2[159:128]
+DEST[191:160] := SRC1[191:160] / SRC2[191:160]
+DEST[223:192] := SRC1[223:192] / SRC2[223:192]
+DEST[255:224] := SRC1[255:224] / SRC2[255:224].
+DEST[MAXVL-1:256] := 0;
+
+

VDIVPS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] / SRC2[31:0]
+DEST[63:32] := SRC1[63:32] / SRC2[63:32]
+DEST[95:64] := SRC1[95:64] / SRC2[95:64]
+DEST[127:96] := SRC1[127:96] / SRC2[127:96]
+DEST[MAXVL-1:128] := 0
+
+

DIVPS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] / SRC2[31:0]
+DEST[63:32] := SRC1[63:32] / SRC2[63:32]
+DEST[95:64] := SRC1[95:64] / SRC2[95:64]
+DEST[127:96] := SRC1[127:96] / SRC2[127:96]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VDIVPS __m512 _mm512_div_ps( __m512 a, __m512 b);
+
+
VDIVPS __m512 _mm512_mask_div_ps(__m512 s, __mmask16 k, __m512 a, __m512 b);
+
+
VDIVPS __m512 _mm512_maskz_div_ps(__mmask16 k, __m512 a, __m512 b);
+
+
VDIVPS __m256 _mm256_mask_div_ps(__m256 s, __mmask8 k, __m256 a, __m256 b);
+
+
VDIVPS __m256 _mm256_maskz_div_ps( __mmask8 k, __m256 a, __m256 b);
+
+
VDIVPS __m128 _mm_mask_div_ps(__m128 s, __mmask8 k, __m128 a, __m128 b);
+
+
VDIVPS __m128 _mm_maskz_div_ps( __mmask8 k, __m128 a, __m128 b);
+
+
VDIVPS __m512 _mm512_div_round_ps( __m512 a, __m512 b, int);
+
+
VDIVPS __m512 _mm512_mask_div_round_ps(__m512 s, __mmask16 k, __m512 a, __m512 b, int);
+
+
VDIVPS __m512 _mm512_maskz_div_round_ps(__mmask16 k, __m512 a, __m512 b, int);
+
+
VDIVPS __m256 _mm256_div_ps (__m256 a, __m256 b);
+
+
DIVPS __m128 _mm_div_ps (__m128 a, __m128 b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Divide-by-Zero, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/divsd.html b/x86/divsd.html new file mode 100644 index 0000000..f9b6bef --- /dev/null +++ b/x86/divsd.html @@ -0,0 +1,136 @@ + +DIVSD + — Divide Scalar Double Precision Floating-Point Value

DIVSD + — Divide Scalar Double Precision Floating-Point Value

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F2 0F 5E /r DIVSD xmm1, xmm2/m64AV/VSSE2Divide low double precision floating-point value in xmm1 by low double precision floating-point value in xmm2/m64.
VEX.LIG.F2.0F.WIG 5E /r VDIVSD xmm1, xmm2, xmm3/m64BV/VAVXDivide low double precision floating-point value in xmm2 by low double precision floating-point value in xmm3/m64.
EVEX.LLIG.F2.0F.W1 5E /r VDIVSD xmm1 {k1}{z}, xmm2, xmm3/m64{er}CV/VAVX512FDivide low double precision floating-point value in xmm2 by low double precision floating-point value in xmm3/m64.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CTuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Divides the low double precision floating-point value in the first source operand by the low double precision floating-point value in the second source operand, and stores the double precision floating-point result in the destination operand. The second source operand can be an XMM register or a 64-bit memory location. The first source and destination are XMM registers.

+

128-bit Legacy SSE version: The first source operand and the destination operand are the same. Bits (MAXVL-1:64) of the corresponding ZMM destination register remain unchanged.

+

VEX.128 encoded version: The first source operand is an xmm register encoded by VEX.vvvv. The quadword at bits 127:64 of the destination operand is copied from the corresponding quadword of the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.

+

EVEX.128 encoded version: The first source operand is an xmm register encoded by EVEX.vvvv. The quadword element of the destination operand at bits 127:64 are copied from the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.

+

EVEX version: The low quadword element of the destination is updated according to the writemask.

+

Software should ensure VDIVSD is encoded with VEX.L=0. Encoding VDIVSD with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+

VDIVSD (EVEX Encoded Version) + ¶ +

+
IF (EVEX.b = 1) AND SRC2 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[63:0] := SRC1[63:0] / SRC2[63:0]
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[63:0] := 0
+        FI;
+FI;
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VDIVSD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0] / SRC2[63:0]
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

DIVSD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := DEST[63:0] / SRC[63:0]
+DEST[MAXVL-1:64] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VDIVSD __m128d _mm_mask_div_sd(__m128d s, __mmask8 k, __m128d a, __m128d b);
+
+
VDIVSD __m128d _mm_maskz_div_sd( __mmask8 k, __m128d a, __m128d b);
+
+
VDIVSD __m128d _mm_div_round_sd( __m128d a, __m128d b, int);
+
+
VDIVSD __m128d _mm_mask_div_round_sd(__m128d s, __mmask8 k, __m128d a, __m128d b, int);
+
+
VDIVSD __m128d _mm_maskz_div_round_sd( __mmask8 k, __m128d a, __m128d b, int);
+
+
DIVSD __m128d _mm_div_sd (__m128d a, __m128d b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Divide-by-Zero, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/divss.html b/x86/divss.html new file mode 100644 index 0000000..1472f98 --- /dev/null +++ b/x86/divss.html @@ -0,0 +1,136 @@ + +DIVSS + — Divide Scalar Single Precision Floating-Point Values

DIVSS + — Divide Scalar Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 5E /r DIVSS xmm1, xmm2/m32AV/VSSEDivide low single precision floating-point value in xmm1 by low single precision floating-point value in xmm2/m32.
VEX.LIG.F3.0F.WIG 5E /r VDIVSS xmm1, xmm2, xmm3/m32BV/VAVXDivide low single precision floating-point value in xmm2 by low single precision floating-point value in xmm3/m32.
EVEX.LLIG.F3.0F.W0 5E /r VDIVSS xmm1 {k1}{z}, xmm2, xmm3/m32{er}CV/VAVX512FDivide low single precision floating-point value in xmm2 by low single precision floating-point value in xmm3/m32.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CTuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Divides the low single precision floating-point value in the first source operand by the low single precision floating-point value in the second source operand, and stores the single precision floating-point result in the destination operand. The second source operand can be an XMM register or a 32-bit memory location.

+

128-bit Legacy SSE version: The first source operand and the destination operand are the same. Bits (MAXVL-1:32) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The first source operand is an xmm register encoded by VEX.vvvv. The three high-order doublewords of the destination operand are copied from the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.

+

EVEX.128 encoded version: The first source operand is an xmm register encoded by EVEX.vvvv. The doubleword elements of the destination operand at bits 127:32 are copied from the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.

+

EVEX version: The low doubleword element of the destination is updated according to the writemask.

+

Software should ensure VDIVSS is encoded with VEX.L=0. Encoding VDIVSS with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+

VDIVSS (EVEX Encoded Version) + ¶ +

+
IF (EVEX.b = 1) AND SRC2 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[31:0] := SRC1[31:0] / SRC2[31:0]
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[31:0] := 0
+        FI;
+FI;
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VDIVSS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] / SRC2[31:0]
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

DIVSS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := DEST[31:0] / SRC[31:0]
+DEST[MAXVL-1:32] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VDIVSS __m128 _mm_mask_div_ss(__m128 s, __mmask8 k, __m128 a, __m128 b);
+
+
VDIVSS __m128 _mm_maskz_div_ss( __mmask8 k, __m128 a, __m128 b);
+
+
VDIVSS __m128 _mm_div_round_ss( __m128 a, __m128 b, int);
+
+
VDIVSS __m128 _mm_mask_div_round_ss(__m128 s, __mmask8 k, __m128 a, __m128 b, int);
+
+
VDIVSS __m128 _mm_maskz_div_round_ss( __mmask8 k, __m128 a, __m128 b, int);
+
+
DIVSS __m128 _mm_div_ss(__m128 a, __m128 b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Divide-by-Zero, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/dppd.html b/x86/dppd.html new file mode 100644 index 0000000..bd2be2a --- /dev/null +++ b/x86/dppd.html @@ -0,0 +1,116 @@ + +DPPD + — Dot Product of Packed Double Precision Floating-Point Values

DPPD + — Dot Product of Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
66 0F 3A 41 /r ib DPPD xmm1, xmm2/m128, imm8RMIV/VSSE4_1Selectively multiply packed double precision floating-point values from xmm1 with packed double precision floating-point values from xmm2, add and selectively store the packed double precision floating-point values to xmm1.
VEX.128.66.0F3A.WIG 41 /r ib VDPPD xmm1,xmm2, xmm3/m128, imm8RVMIV/VAVXSelectively multiply packed double precision floating-point values from xmm2 with packed double precision floating-point values from xmm3, add and selectively store the packed double precision floating-point values to xmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMIModRM:reg (r, w)ModRM:r/m (r)imm8N/A
RVMIModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

Conditionally multiplies the packed double precision floating-point values in the destination operand (first operand) with the packed double precision floating-point values in the source (second operand) depending on a mask extracted from bits [5:4] of the immediate operand (third operand). If a condition mask bit is zero, the corresponding multiplication is replaced by a value of 0.0 in the manner described by Section 12.8.4 of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1.

+

The two resulting double precision values are summed into an intermediate result. The intermediate result is conditionally broadcasted to the destination using a broadcast mask specified by bits [1:0] of the immediate byte.

+

If a broadcast mask bit is “1”, the intermediate result is copied to the corresponding qword element in the destination operand. If a broadcast mask bit is zero, the corresponding element in the destination is set to zero.

+

DPPD follows the NaN forwarding rules stated in the Software Developer’s Manual, vol. 1, table 4-7. These rules do not cover horizontal prioritization of NaNs. Horizontal propagation of NaNs to the destination and the positioning of those NaNs in the destination is implementation dependent. NaNs on the input sources or computationally generated NaNs will have at least one NaN propagated to the destination.

+

128-bit Legacy SSE version: The second source can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding YMM register destination are unmodified.

+

VEX.128 encoded version: the first source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding YMM register destination are zeroed.

+

An attempt to execute VDPPD encoded with VEX.L = 1 will cause an #UD exception.

+

Operation + ¶ +

+

DP_primitive (SRC1, SRC2) + ¶ +

+
IF (imm8[4] = 1)
+    THEN Temp1[63:0] := DEST[63:0] * SRC[63:0]; // update SIMD exception flags
+    ELSE Temp1[63:0] := +0.0; FI;
+IF (imm8[5] = 1)
+    THEN Temp1[127:64] := DEST[127:64] * SRC[127:64]; // update SIMD exception flags
+    ELSE Temp1[127:64] := +0.0; FI;
+/* if unmasked exception reported, execute exception handler*/
+Temp2[63:0] := Temp1[63:0] + Temp1[127:64]; // update SIMD exception flags
+/* if unmasked exception reported, execute exception handler*/
+IF (imm8[0] = 1)
+    THEN DEST[63:0] := Temp2[63:0];
+    ELSE DEST[63:0] := +0.0; FI;
+IF (imm8[1] = 1)
+    THEN DEST[127:64] := Temp2[63:0];
+    ELSE DEST[127:64] := +0.0; FI;
+
+

DPPD (128-bit Legacy SSE Version) + ¶ +

+
DEST[127:0] := DP_Primitive(SRC1[127:0], SRC2[127:0]);
+DEST[MAXVL-1:128] (Unmodified)
+
+

VDPPD (VEX.128 Encoded Version) + ¶ +

+
DEST[127:0] := DP_Primitive(SRC1[127:0], SRC2[127:0]);
+DEST[MAXVL-1:128] := 0
+
+

Flags Affected + ¶ +

+

None.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
DPPD __m128d _mm_dp_pd ( __m128d a, __m128d b, const int mask);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Exceptions are determined separately for each add and multiply operation. Unmasked exceptions will leave the destination untouched.

+

Other Exceptions + ¶ +

+

See Table 2-19, “Type 2 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.L= 1.
diff --git a/x86/dpps.html b/x86/dpps.html new file mode 100644 index 0000000..657f5f1 --- /dev/null +++ b/x86/dpps.html @@ -0,0 +1,141 @@ + +DPPS + — Dot Product of Packed Single Precision Floating-Point Values

DPPS + — Dot Product of Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
66 0F 3A 40 /r ib DPPS xmm1, xmm2/m128, imm8RMIV/VSSE4_1Selectively multiply packed single precision floating-point values from xmm1 with packed single precision floating-point values from xmm2, add and selectively store the packed single precision floating-point values or zero values to xmm1.
VEX.128.66.0F3A.WIG 40 /r ib VDPPS xmm1,xmm2, xmm3/m128, imm8RVMIV/VAVXMultiply packed single precision floating-point values from xmm1 with packed single precision floating-point values from xmm2/mem selectively add and store to xmm1.
VEX.256.66.0F3A.WIG 40 /r ib VDPPS ymm1, ymm2, ymm3/m256, imm8RVMIV/VAVXMultiply packed single precision floating-point values from ymm2 with packed single precision floating-point values from ymm3/mem, selectively add pairs of elements and store to ymm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMIModRM:reg (r, w)ModRM:r/m (r)imm8N/A
RVMIModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

Conditionally multiplies the packed single precision floating-point values in the destination operand (first operand) with the packed single precision floating-point values in the source (second operand) depending on a mask extracted from the high 4 bits of the immediate byte (third operand). If a condition mask bit in imm8[7:4] is zero, the corresponding multiplication is replaced by a value of 0.0 in the manner described by Section 12.8.4 of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1.

+

The four resulting single precision values are summed into an intermediate result. The intermediate result is conditionally broadcasted to the destination using a broadcast mask specified by bits [3:0] of the immediate byte.

+

If a broadcast mask bit is “1”, the intermediate result is copied to the corresponding dword element in the destination operand. If a broadcast mask bit is zero, the corresponding element in the destination is set to zero.

+

DPPS follows the NaN forwarding rules stated in the Software Developer’s Manual, vol. 1, table 4-7. These rules do not cover horizontal prioritization of NaNs. Horizontal propagation of NaNs to the destination and the positioning of those NaNs in the destination is implementation dependent. NaNs on the input sources or computationally generated NaNs will have at least one NaN propagated to the destination.

+

128-bit Legacy SSE version: The second source can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding YMM register destination are unmodified.

+

VEX.128 encoded version: the first source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding YMM register destination are zeroed.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+

Operation + ¶ +

+

DP_primitive (SRC1, SRC2) + ¶ +

+
IF (imm8[4] = 1)
+    THEN Temp1[31:0] := DEST[31:0] * SRC[31:0]; // update SIMD exception flags
+    ELSE Temp1[31:0] := +0.0; FI;
+IF (imm8[5] = 1)
+    THEN Temp1[63:32] := DEST[63:32] * SRC[63:32]; // update SIMD exception flags
+    ELSE Temp1[63:32] := +0.0; FI;
+IF (imm8[6] = 1)
+    THEN Temp1[95:64] := DEST[95:64] * SRC[95:64]; // update SIMD exception flags
+    ELSE Temp1[95:64] := +0.0; FI;
+IF (imm8[7] = 1)
+    THEN Temp1[127:96] := DEST[127:96] * SRC[127:96]; // update SIMD exception flags
+    ELSE Temp1[127:96] := +0.0; FI;
+Temp2[31:0] := Temp1[31:0] + Temp1[63:32]; // update SIMD exception flags
+/* if unmasked exception reported, execute exception handler*/
+Temp3[31:0] := Temp1[95:64] + Temp1[127:96]; // update SIMD exception flags
+/* if unmasked exception reported, execute exception handler*/
+Temp4[31:0] := Temp2[31:0] + Temp3[31:0]; // update SIMD exception flags
+/* if unmasked exception reported, execute exception handler*/
+IF (imm8[0] = 1)
+    THEN DEST[31:0] := Temp4[31:0];
+    ELSE DEST[31:0] := +0.0; FI;
+IF (imm8[1] = 1)
+    THEN DEST[63:32] := Temp4[31:0];
+    ELSE DEST[63:32] := +0.0; FI;
+IF (imm8[2] = 1)
+    THEN DEST[95:64] := Temp4[31:0];
+    ELSE DEST[95:64] := +0.0; FI;
+IF (imm8[3] = 1)
+    THEN DEST[127:96] := Temp4[31:0];
+    ELSE DEST[127:96] := +0.0; FI;
+
+

DPPS (128-bit Legacy SSE Version) + ¶ +

+
DEST[127:0] := DP_Primitive(SRC1[127:0], SRC2[127:0]);
+DEST[MAXVL-1:128] (Unmodified)
+
+

VDPPS (VEX.128 Encoded Version) + ¶ +

+
DEST[127:0] := DP_Primitive(SRC1[127:0], SRC2[127:0]);
+DEST[MAXVL-1:128] := 0
+
+

VDPPS (VEX.256 Encoded Version) + ¶ +

+
DEST[127:0] := DP_Primitive(SRC1[127:0], SRC2[127:0]);
+DEST[255:128] := DP_Primitive(SRC1[255:128], SRC2[255:128]);
+
+

Flags Affected + ¶ +

+

None.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
(V)DPPS __m128 _mm_dp_ps ( __m128 a, __m128 b, const int mask);
+
+
VDPPS __m256 _mm256_dp_ps ( __m256 a, __m256 b, const int mask);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Exceptions are determined separately for each add and multiply operation, in the order of their execution. Unmasked exceptions will leave the destination operands unchanged.

+

Other Exceptions + ¶ +

+

See Table 2-19, “Type 2 Class Exception Conditions.”

diff --git a/x86/eaccept.html b/x86/eaccept.html new file mode 100644 index 0000000..65b47c3 --- /dev/null +++ b/x86/eaccept.html @@ -0,0 +1,314 @@ + +EACCEPT + — Accept Changes to an EPC Page

EACCEPT + — Accept Changes to an EPC Page

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 05H ENCLU[EACCEPT]IRV/VSGX2This leaf function accepts changes made by system software to an EPC page in the running enclave.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + +
Op/En EAXRBXRCX
IREACCEPT (In)Return Error Code (Out)Address of a SECINFO (In)Address of the destination EPC page (In)
+

Description + ¶ +

+

This leaf function accepts changes to a page in the running enclave by verifying that the security attributes specified in the SECINFO match the security attributes of the page in the EPCM. This instruction leaf can only be executed when inside the enclave.

+

RBX contains the effective address of a SECINFO structure while RCX contains the effective address of an EPC page. The table below provides additional information on the memory parameter of the EACCEPT leaf function.

+

EACCEPT Memory Parameter Semantics + ¶ +

+ + + + + + +
SECINFOEPCPAGE (Destination)
Read access permitted by Non EnclaveRead access permitted by Enclave
+

The instruction faults if any of the following:

+

EACCEPT Faulting Conditions + ¶ +

+ + + + + + + + + + + + + + + +
The operands are not properly aligned.RBX does not contain an effective address in an EPC page in the running enclave.
The EPC page is locked by another thread.RCX does not contain an effective address of an EPC page in the running enclave.
The EPC page is not valid.Page type is PT_REG and MODIFIED bit is 0.
SECINFO contains an invalid request.Page type is PT_TCS or PT_TRIM and PENDING bit is 0 and MODIFIED bit is 1.
If security attributes of the SECINFO page make the page inaccessible.
+

The error codes are:

+
+ + + + + + + + + + + + +
Error Code (see Table 38-4)Description
No ErrorEACCEPT successful.
SGX_PAGE_ATTRIBUTES_MISMATCHThe attributes of the target EPC page do not match the expected values.
SGX_NOT_TRACKEDThe OS did not complete an ETRACK on the target page.
+
Table 38-54. EACCEPT Return Value in RAX
+

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn ConflictSGX_CONFLICT VM Exit Qualification
EACCEPTTarget [DS:RCX]Shared#GP
SECINFO [DS:RBX]Concurrent
+
Table 38-55. Base Concurrency Restrictions of EACCEPT
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
EACCEPTTarget [DS:RCX]Exclusive#GPConcurrentConcurrent
SECINFO [DS:RBX]ConcurrentConcurrentConcurrent
+
Table 38-56. Additional Concurrency Restrictions of EACCEPT
+

Operation + ¶ +

+

Temp Variables in EACCEPT Operational Flow + ¶ +

+ + + + + + + + + + + + + + + +
NameTypeSize (bits)Description
TMP_SECSEffective Address32/64Physical address of the SECS to which the EPC operand belongs.
SCRATCH_SECINFOSECINFO512Scratch storage for holding the contents of DS:RBX.
+

IF (DS:RBX is not 64Byte Aligned)

+

THEN #GP(0); FI;

+

IF (DS:RBX is not within CR_ELRANGE)

+

THEN #GP(0); FI;

+

IF (DS:RBX does not resolve within an EPC)

+

THEN #PF(DS:RBX); FI;

+

IF ( (EPCM(DS:RBX &~FFFH).VALID = 0) or (EPCM(DS:RBX &~FFFH).R = 0) or (EPCM(DS:RBX &~FFFH).PENDING ≠ 0) or

+

(EPCM(DS:RBX &~FFFH).MODIFIED ≠ 0) or (EPCM(DS:RBX &~FFFH).BLOCKED ≠ 0) or

+

(EPCM(DS:RBX &~FFFH).PT ≠ PT_REG) or (EPCM(DS:RBX &~FFFH).ENCLAVESECS ≠ CR_ACTIVE_SECS) or

+

(EPCM(DS:RBX &~FFFH).ENCLAVEADDRESS ≠ (DS:RBX & FFFH)) )

+

THEN #PF(DS:RBX); FI;

+

(* Copy 64 bytes of contents *)

+

SCRATCH_SECINFO := DS:RBX;

+

(* Check for misconfigured SECINFO flags*)

+

IF (SCRATCH_SECINFO reserved fields are not zero )

+

THEN #GP(0); FI;

+

IF (DS:RCX is not 4KByte Aligned)

+

THEN #GP(0); FI;

+

IF (DS:RCX is not within CR_ELRANGE)

+

THEN #GP(0); FI;

+

IF (DS:RCX does not resolve within an EPC)

+

THEN #PF(DS:RCX); FI;

+

(* Check that the combination of requested PT, PENDING, and MODIFIED is legal *)

+

IF (CPUID.(EAX=12H, ECX=1):EAX[6] = 0 )

+

THEN

+

IF (NOT (((SCRATCH_SECINFO.FLAGS.PT is PT_REG) and

+

((SCRATCH_SECINFO.FLAGS.PR is 1) or

+

(SCRATCH_SECINFO.FLAGS.PENDING is 1)) and

+

(SCRATCH_SECINFO.FLAGS.MODIFIED is 0)) or

+

((SCRATCH_SECINFO.FLAGS.PT is PT_TCS or PT_TRIM) and

+

(SCRATCH_SECINFO.FLAGS.PR is 0) and

+

(SCRATCH_SECINFO.FLAGS.PENDING is 0) and

+

(SCRATCH_SECINFO.FLAGS.MODIFIED is 1) )))

+

THEN #GP(0); FI

+

ELSE

+

IF (NOT (((SCRATCH_SECINFO.FLAGS.PT is PT_REG) AND

+

((SCRATCH_SECINFO.FLAGS.PR is 1) OR

+

(SCRATCH_SECINFO.FLAGS.PENDING is 1)) AND

+

(SCRATCH_SECINFO.FLAGS.MODIFIED is 0)) OR

+

((SCRATCH_SECINFO.FLAGS.PT is PT_TCS OR PT_TRIM) AND

+

(SCRATCH_SECINFO.FLAGS.PENDING is 0) AND

+

(SCRATCH_SECINFO.FLAGS.MODIFIED is 1) AND

+

(SCRATCH_SECINFO.FLAGS.PR is 0)) OR

+

((SCRATCH_SECINFO.FLAGS.PT is PT_SS_FIRST or PT_SS_REST) AND

+

(SCRATCH_SECINFO.FLAGS.PENDING is 1) AND

+

(SCRATCH_SECINFO.FLAGS.MODIFIED is 0) AND

+

(SCRATCH_SECINFO.FLAGS.PR is 0))))

+

THEN #GP(0); FI;

+

FI;

+

(* Check security attributes of the destination EPC page *)

+

IF ( (EPCM(DS:RCX).VALID is 0) or (EPCM(DS:RCX).BLOCKED is not 0) or

+

((EPCM(DS:RCX).PT is not PT_REG) and (EPCM(DS:RCX).PT is not PT_TCS) and (EPCM(DS:RCX).PT is not PT_TRIM)

+

and (EPCM(DS:RCX).PT is not PT_SS_FIRST) and (EPCM(DS:RCX).PT is not PT_SS_REST)) or

+

(EPCM(DS:RCX).ENCLAVESECS ≠ CR_ACTIVE_SECS))

+

THEN #PF(DS:RCX); FI;

+

(* Check the destination EPC page for concurrency *)

+

IF ( EPC page in use )

+

THEN #GP(0); FI;

+

(* Re-Check security attributes of the destination EPC page *)

+

IF ( (EPCM(DS:RCX).VALID is 0) or (EPCM(DS:RCX).ENCLAVESECS ≠ CR_ACTIVE_SECS) )

+

THEN #PF(DS:RCX); FI;

+

(* Verify that accept request matches current EPC page settings *)

+

IF ( (EPCM(DS:RCX).ENCLAVEADDRESS ≠ DS:RCX) or (EPCM(DS:RCX).PENDING ≠ SCRATCH_SECINFO.FLAGS.PENDING) or

+

(EPCM(DS:RCX).MODIFIED ≠ SCRATCH_SECINFO.FLAGS.MODIFIED) or (EPCM(DS:RCX).R ≠ SCRATCH_SECINFO.FLAGS.R) or

+

(EPCM(DS:RCX).W ≠ SCRATCH_SECINFO.FLAGS.W) or (EPCM(DS:RCX).X ≠ SCRATCH_SECINFO.FLAGS.X) or

+

(EPCM(DS:RCX).PT ≠ SCRATCH_SECINFO.FLAGS.PT) )

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_PAGE_ATTRIBUTES_MISMATCH;

+

GOTO DONE;

+

FI;

+

(* Check that all required threads have left enclave *)

+

IF (Tracking not correct)

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_NOT_TRACKED;

+

GOTO DONE;

+

FI;

+

(* Get pointer to the SECS to which the EPC page belongs *)

+

TMP_SECS = << Obtain physical address of SECS through EPCM(DS:RCX)>>

+

(* For TCS pages, perform additional checks *)

+

IF (SCRATCH_SECINFO.FLAGS.PT = PT_TCS)

+

THEN

+

IF (DS:RCX.RESERVED ≠ 0) #GP(0); FI;

+

(* Check that TCS.FLAGS.DBGOPTIN, TCS stack, and TCS status are correctly initialized *)

+

(* Check that TCS.PREVSSP is 0 *)
IF ( ((DS:RCX).FLAGS.DBGOPTIN is not 0) or ((DS:RCX).CSSA ≥ (DS:RCX).NSSA) or ((DS:RCX).AEP is not 0) or ((DS:RCX).STATE is not 0) or ((CPUID.(EAX=07H, ECX=0H):ECX[CET_SS] = 1) AND ((DS:RCX).PREVSSP != 0)))

+

THEN #GP(0); FI;

+

(* Check consistency of FS & GS Limit *)

+

IF ( (TMP_SECS.ATTRIBUTES.MODE64BIT is 0) and ((DS:RCX.FSLIMIT & FFFH ≠ FFFH) or (DS:RCX.GSLIMIT & FFFH ≠ FFFH)) )

+

THEN #GP(0); FI;

+

FI;

+

(* Clear PENDING/MODIFIED flags to mark accept operation complete *)

+

EPCM(DS:RCX).PENDING := 0;

+

EPCM(DS:RCX).MODIFIED := 0;

+

EPCM(DS:RCX).PR := 0;

+

(* Clear EAX and ZF to indicate successful completion *)

+

RFLAGS.ZF := 0;

+

RAX := 0;

+

DONE:

+

RFLAGS.CF,PF,AF,OF,SF := 0;

+

Flags Affected + ¶ +

+

Sets ZF if the page cannot be accepted, otherwise cleared. Clears CF, PF, AF, OF, SF.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#GP(0)If executed outside an enclave.
If a memory operand effective address is outside the DS segment limit.
If a memory operand is not properly aligned.
If a memory operand is locked.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If a memory operand is not an EPC page.
If EPC page has incorrect page type or security attributes.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#GP(0)If executed outside an enclave.
If a memory operand is in non-canonical form.
If a memory operand is not properly aligned.
If a memory operand is locked.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If a memory operand is not an EPC page.
If EPC page has incorrect page type or security attributes.
diff --git a/x86/eacceptcopy.html b/x86/eacceptcopy.html new file mode 100644 index 0000000..e9b9071 --- /dev/null +++ b/x86/eacceptcopy.html @@ -0,0 +1,280 @@ + +EACCEPTCOPY + — Initialize a Pending Page

EACCEPTCOPY + — Initialize a Pending Page

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 07H ENCLU[EACCEPTCOPY]IRV/VSGX2This leaf function initializes a dynamically allocated EPC page from another page in the EPC.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + +
Op/EnEAXRBXRCXRDX
IREACCEPTCOPY (In)Return Error Code (Out)Address of a SECINFO (In)Address of the destination EPC page (In)Address of the source EPC page (In)
+

Description + ¶ +

+

This leaf function copies the contents of an existing EPC page into an uninitialized EPC page (created by EAUG). After initialization, the instruction may also modify the access rights associated with the destination EPC page. This instruction leaf can only be executed when inside the enclave.

+

RBX contains the effective address of a SECINFO structure while RCX and RDX each contain the effective address of an EPC page. The table below provides additional information on the memory parameter of the EACCEPTCOPY leaf function.

+

EACCEPTCOPY Memory Parameter Semantics + ¶ +

+ + + + + + + + +
SECINFOEPCPAGE (Destination)EPCPAGE (Source)
Read access permitted by Non EnclaveRead/Write access permitted by EnclaveRead access permitted by Enclave
+

The instruction faults if any of the following:

+

EACCEPTCOPY Faulting Conditions + ¶ +

+ + + + + + + + + + + + +
The operands are not properly aligned.If security attributes of the SECINFO page make the page inaccessible.
The EPC page is locked by another thread.If security attributes of the source EPC page make the page inaccessible.
The EPC page is not valid.RBX does not contain an effective address in an EPC page in the running enclave.
SECINFO contains an invalid request.RCX/RDX does not contain an effective address of an EPC page in the running enclave.
+

The error codes are:

+
+ + + + + + + + + +
Error Code (see Table 38-4)Description
No ErrorEACCEPTCOPY successful.
SGX_PAGE_ATTRIBUTES_MISMATCHThe attributes of the target EPC page do not match the expected values.
+
Table 38-57. EACCEPTCOPY Return Value in RAX
+

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn ConflictSGX_CONFLICT VM Exit Qualification
EACCEPTCOPYTarget [DS:RCX]Concurrent
Source [DS:RDX]Concurrent
SECINFO [DS:RBX]Concurrent
+
Table 38-58. Base Concurrency Restrictions of EACCEPTCOPY
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
EACCEPTCOPYTarget [DS:RCX]Exclusive#GPConcurrentConcurrent
Source [DS:RDX]ConcurrentConcurrentConcurrent
SECINFO [DS:RBX]ConcurrentConcurrentConcurrent
+
Table 38-59. Additional Concurrency Restrictions of EACCEPTCOPY
+

Operation + ¶ +

+

Temp Variables in EACCEPTCOPY Operational Flow + ¶ +

+ + + + + + + + + + +
Name Type Size (bits) Description
SCRATCH_SECINFOSECINFO512Scratch storage for holding the contents of DS:RBX.
+

IF (DS:RBX is not 64Byte Aligned)

+

THEN #GP(0); FI;

+

IF ( (DS:RCX is not 4KByte Aligned) or (DS:RDX is not 4KByte Aligned) )

+

THEN #GP(0); FI;

+

IF ((DS:RBX is not within CR_ELRANGE) or (DS:RCX is not within CR_ELRANGE) or (DS:RDX is not within CR_ELRANGE))

+

THEN #GP(0); FI;

+

IF (DS:RBX does not resolve within an EPC)

+

THEN #PF(DS:RBX); FI;

+

IF (DS:RCX does not resolve within an EPC)

+

THEN #PF(DS:RCX); FI;

+

IF (DS:RDX does not resolve within an EPC)

+

THEN #PF(DS:RDX); FI;

+

IF ( (EPCM(DS:RBX &~FFFH).VALID = 0) or (EPCM(DS:RBX &~FFFH).R = 0) or (EPCM(DS:RBX &~FFFH).PENDING ≠ 0) or

+

(EPCM(DS:RBX &~FFFH).MODIFIED ≠ 0) or (EPCM(DS:RBX &~FFFH).BLOCKED ≠ 0) or (EPCM(DS:RBX &~FFFH).PT ≠ PT_REG) or

+

(EPCM(DS:RBX &~FFFH).ENCLAVESECS ≠ CR_ACTIVE_SECS) or

+

(EPCM(DS:RBX &~FFFH).ENCLAVEADDRESS ≠ DS:RBX) )

+

THEN #PF(DS:RBX); FI;

+

(* Copy 64 bytes of contents *)

+

SCRATCH_SECINFO := DS:RBX;

+

(* Check for misconfigured SECINFO flags*)

+

IF ( (SCRATCH_SECINFO reserved fields are not zero) or ((SCRATCH_SECINFO.FLAGS.R = 0) AND (SCRATCH_SECINFO.FLAGS.W ≠ 0)) or

+

(SCRATCH_SECINFO.FLAGS.PT is not PT_REG) )

+

THEN #GP(0); FI;

+

(* Check security attributes of the source EPC page *)

+

IF ( (EPCM(DS:RDX).VALID = 0) or (EPCM(DS:RCX).R = 0) or (EPCM(DS:RDX).PENDING ≠ 0) or (EPCM(DS:RDX).MODIFIED ≠ 0) or

+

(EPCM(DS:RDX).BLOCKED ≠ 0) or (EPCM(DS:RDX).PT ≠ PT_REG) or (EPCM(DS:RDX).ENCLAVESECS ≠ CR_ACTIVE_SECS) or

+

(EPCM(DS:RDX).ENCLAVEADDRESS ≠ DS:RDX))

+

THEN #PF(DS:RDX); FI;

+

(* Check security attributes of the destination EPC page *)

+

IF ( (EPCM(DS:RCX).VALID = 0) or (EPCM(DS:RCX).PENDING ≠ 1) or (EPCM(DS:RCX).MODIFIED ≠ 0) or

+

(EPCM(DS:RDX).BLOCKED ≠ 0) or (EPCM(DS:RCX).PT ≠ PT_REG) or (EPCM(DS:RCX).ENCLAVESECS ≠ CR_ACTIVE_SECS) )

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_PAGE_ATTRIBUTES_MISMATCH;

+

GOTO DONE;

+

FI;

+

(* Check the destination EPC page for concurrency *)

+

IF (destination EPC page in use )

+

THEN #GP(0); FI;

+

(* Re-Check security attributes of the destination EPC page *)

+

IF ( (EPCM(DS:RCX).VALID = 0) or (EPCM(DS:RCX).PENDING ≠ 1) or (EPCM(DS:RCX).MODIFIED ≠ 0) or

+

(EPCM(DS:RCX).R ≠ 1) or (EPCM(DS:RCX).W ≠ 1) or (EPCM(DS:RCX).X ≠ 0) or

+

(EPCM(DS:RCX).PT ≠ SCRATCH_SECINFO.FLAGS.PT) or (EPCM(DS:RCX).ENCLAVESECS ≠ CR_ACTIVE_SECS) or

+

(EPCM(DS:RCX).ENCLAVEADDRESS ≠ DS:RCX))

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_PAGE_ATTRIBUTES_MISMATCH;

+

GOTO DONE;

+

FI;

+

(* Copy 4 KBytes from the source to the destination EPC page *)

+

DS:RCX[32767:0] := DS:RDX[32767:0];

+

(* Update EPCM permissions *)

+

EPCM(DS:RCX).R := SCRATCH_SECINFO.FLAGS.R;

+

EPCM(DS:RCX).W := SCRATCH_SECINFO.FLAGS.W;

+

EPCM(DS:RCX).X := SCRATCH_SECINFO.FLAGS.X;

+

EPCM(DS:RCX).PENDING := 0;

+

RFLAGS.ZF := 0;

+

RAX := 0;

+

DONE:

+

RFLAGS.CF,PF,AF,OF,SF := 0;

+

Flags Affected + ¶ +

+

Sets ZF if page is not modifiable, otherwise cleared. Clears CF, PF, AF, OF, SF.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#GP(0)If executed outside an enclave.
If a memory operand effective address is outside the DS segment limit.
If a memory operand is not properly aligned.
If a memory operand is locked.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If a memory operand is not an EPC page.
If EPC page has incorrect page type or security attributes.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#GP(0)If executed outside an enclave.
If a memory operand is in non-canonical form.
If a memory operand is not properly aligned.
If a memory operand is locked.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If a memory operand is not an EPC page.
If EPC page has incorrect page type or security attributes.
diff --git a/x86/eadd.html b/x86/eadd.html new file mode 100644 index 0000000..ac36813 --- /dev/null +++ b/x86/eadd.html @@ -0,0 +1,369 @@ + +EADD + — Add a Page to an Uninitialized Enclave

EADD + — Add a Page to an Uninitialized Enclave

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 01H ENCLS[EADD]IRV/VSGX1This leaf function adds a page to an uninitialized enclave.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnEAXRBXRCX
IREADD (In)Address of a PAGEINFO (In)Address of the destination EPC page (In)
+

Description + ¶ +

+

This leaf function copies a source page from non-enclave memory into the EPC, associates the EPC page with an SECS page residing in the EPC, and stores the linear address and security attributes in the EPCM. As part of the association, the enclave offset and the security attributes are measured and extended into the SECS.MRENCLAVE. This instruction can only be executed when the current privilege level is 0.

+

RBX contains the effective address of a PAGEINFO structure while RCX contains the effective address of an EPC page. The table below provides additional information on the memory parameter of EADD leaf function.

+

EADD Memory Parameter Semantics + ¶ +

+ + + + + + + + + + + + +
PAGEINFOPAGEINFO.SECSPAGEINFO.SRCPGEPAGEINFO.SECINFOEPCPAGE
Read access permitted by Non EnclaveRead/Write access permitted by EnclaveRead access permitted by Non EnclaveRead access permitted by Non EnclaveWrite access permitted by Enclave
+

The instruction faults if any of the following:

+

EADD Faulting Conditions + ¶ +

+ + + + + + + + + + + + + + + +
The operands are not properly aligned.Unsupported security attributes are set.
Refers to an invalid SECS.Reference is made to an SECS that is locked by another thread.
The EPC page is locked by another thread.RCX does not contain an effective address of an EPC page.
The EPC page is already valid.If security attributes specifies a TCS and the source page specifies unsupported TCS values or fields.
The SECS has been initialized.The specified enclave offset is outside of the enclave address space.
+

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn ConflictSGX_CONFLICT VM Exit Qualification
EADDTarget [DS:RCX]Exclusive#GPEPC_PAGE_CONFLICT_EXCEPTION
SECS [DS:RBX]PAGEINFO.SECSShared#GP
+
Table 38-8. Base Concurrency Restrictions of EADD
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
EADDTarget [DS:RCX]ConcurrentConcurrentConcurrent
SECS [DS:RBX]PAGEINFO.SECSConcurrentExclusive#GPConcurrent
+
Table 38-9. Additional Concurrency Restrictions of EADD
+

Operation + ¶ +

+

Temp Variables in EADD Operational Flow + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeSize (bits)Description
TMP_SRCPGEEffective Address32/64Effective address of the source page.
TMP_SECSEffective Address32/64Effective address of the SECS destination page.
TMP_SECINFOEffective Address32/64Effective address of an SECINFO structure which contains security attributes of the page to be added.
SCRATCH_SECINFOSECINFO512Scratch storage for holding the contents of DS:TMP_SECINFO.
TMP_LINADDRUnsigned Integer64Holds the linear address to be stored in the EPCM and used to calculate TMP_ENCLAVEOFFSET.
TMP_ENCLAVEOFFSETEnclave Offset64The page displacement from the enclave base address.
TMPUPDATEFIELDSHA256 Buffer512Buffer used to hold data being added to TMP_SECS.MRENCLAVE.
+

IF (DS:RBX is not 32Byte Aligned)

+

THEN #GP(0); FI;

+

IF (DS:RCX is not 4KByte Aligned)

+

THEN #GP(0); FI;

+

IF (DS:RCX does not resolve within an EPC)

+

THEN #PF(DS:RCX); FI;

+

TMP_SRCPGE := DS:RBX.SRCPGE;

+

TMP_SECS := DS:RBX.SECS;

+

TMP_SECINFO := DS:RBX.SECINFO;

+

TMP_LINADDR := DS:RBX.LINADDR;

+

IF (DS:TMP_SRCPGE is not 4KByte aligned or DS:TMP_SECS is not 4KByte aligned or

+

DS:TMP_SECINFO is not 64Byte aligned or TMP_LINADDR is not 4KByte aligned)

+

THEN #GP(0); FI;

+

IF (DS:TMP_SECS does not resolve within an EPC)

+

THEN #PF(DS:TMP_SECS); FI;

+

SCRATCH_SECINFO := DS:TMP_SECINFO;

+

(* Check for misconfigured SECINFO flags*)

+

IF (SCRATCH_SECINFO reserved fields are not zero or

+

! (SCRATCH_SECINFO.FLAGS.PT is PT_REG or SCRATCH_SECINFO.FLAGS.PT is PT_TCS or

+

(SCRATCH_SECINFO.FLAGS.PT is PT_SS_FIRST and CPUID.(EAX=12H, ECX=1):EAX[6] = 1) or

+

(SCRATCH_SECINFO.FLAGS.PT is PT_SS_REST and CPUID.(EAX=12H, ECX=1):EAX[6] = 1)) )

+

THEN #GP(0); FI;

+

(* If PT_SS_FIRST/PT_SS_REST page types are requested then CR4.CET must be 1 *)

+

IF ( (SCRATCH_SECINFO.FLAGS.PT is PT_SS_FIRST OR

+

SCRATCH_SECINFO.FLAGS.PT is PT_SS_REST) AND CR4.CET == 0)

+

THEN #GP(0); FI;

+

(* Check the EPC page for concurrency *)

+

IF (EPC page is not available for EADD)

+

THEN

+

IF (<<VMX non-root operation>> AND <<ENABLE_EPC_VIRTUALIZATION_EXTENSIONS>>)

+

THEN

+

VMCS.Exit_reason := SGX_CONFLICT;

+

VMCS.Exit_qualification.code := EPC_PAGE_CONFLICT_EXCEPTION;

+

VMCS.Exit_qualification.error := 0;

+

VMCS.Guest-physical_address := << translation of DS:RCX produced by paging >>;

+

VMCS.Guest-linear_address := DS:RCX;

+

Deliver VMEXIT;

+

ELSE

+

#GP(0);

+

FI;

+

FI;

+

IF (EPCM(DS:RCX).VALID ≠ 0)

+

THEN #PF(DS:RCX); FI;

+

(* Check the SECS for concurrency *)

+

IF (SECS is not available for EADD)

+

THEN #GP(0); FI;

+

IF (EPCM(DS:TMP_SECS).VALID = 0 or EPCM(DS:TMP_SECS).PT ≠ PT_SECS)

+

THEN #PF(DS:TMP_SECS); FI;

+

(* Copy 4KBytes from source page to EPC page*)

+

DS:RCX[32767:0] := DS:TMP_SRCPGE[32767:0];

+

CASE (SCRATCH_SECINFO.FLAGS.PT)

+

PT_TCS:

+

IF (DS:RCX.RESERVED ≠ 0) #GP(0); FI;

+

IF ( (DS:TMP_SECS.ATTRIBUTES.MODE64BIT = 0) and

+

((DS:TCS.FSLIMIT & 0FFFH ≠ 0FFFH) or (DS:TCS.GSLIMIT & 0FFFH ≠ 0FFFH) )) #GP(0); FI;

+

(* Ensure TCS.PREVSSP is zero *)

+

IF (CPUID.(EAX=07H, ECX=00h):ECX[CET_SS] = 1) and (DS:RCX.PREVSSP != 0) #GP(0); FI;

+

BREAK;

+

PT_REG:

+

IF (SCRATCH_SECINFO.FLAGS.W = 1 and SCRATCH_SECINFO.FLAGS.R = 0) #GP(0); FI;

+

BREAK;

+

PT_SS_FIRST:

+

PT_SS_REST:

+

(* SS pages cannot be created on first or last page of ELRANGE *)

+

IF ( TMP_LINADDR = DS:TMP_SECS.BASEADDR or TMP_LINADDR = (DS:TMP_SECS.BASEADDR + DS:TMP_SECS.SIZE - 0x1000) )

+

THEN #GP(0); FI;

+

IF ( DS:RCX[4087:0] != 0 ) #GP(0); FI;

+

IF (SCRATCH_SECINFO.FLAGS.PT == PT_SS_FIRST)

+

THEN

+

(* Check that valid RSTORSSP token exists *)

+

IF ( DS:RCX[4095:4088] != ((TMP_LINADDR + 0x1000) | DS:TMP_SECS.ATTRIBUTES.MODE64BIT) ) #GP(0); FI;

+

ELSE

+

(* Check the 8 bytes are zero *)

+

IF ( DS:RCX[4095:4088] != 0 ) #GP(0); FI;

+

FI;

+

IF (SCRATCH_SECINFO.FLAGS.W = 0 OR SCRATCH_SECINFO.FLAGS.R = 0 OR

+

SCRATCH_SECINFO.FLAGS.X = 1) #GP(0); FI;

+

BREAK;

+

ESAC;

+

(* Check the enclave offset is within the enclave linear address space *)
IF (TMP_LINADDR < DS:TMP_SECS.BASEADDR or TMP_LINADDR ≥ DS:TMP_SECS.BASEADDR + DS:TMP_SECS.SIZE)
THEN #GP(0); FI;

+

(* Check concurrency of measurement resource*)

+

IF (Measurement being updated)

+

THEN #GP(0); FI;

+

(* Check if the enclave to which the page will be added is already in Initialized state *)

+

IF (DS:TMP_SECS already initialized)

+

THEN #GP(0); FI;

+

(* For TCS pages, force EPCM.rwx bits to 0 and no debug access *)

+

IF (SCRATCH_SECINFO.FLAGS.PT = PT_TCS)

+

THEN

+

SCRATCH_SECINFO.FLAGS.R := 0;

+

SCRATCH_SECINFO.FLAGS.W := 0;

+

SCRATCH_SECINFO.FLAGS.X := 0;

+

(DS:RCX).FLAGS.DBGOPTIN := 0; // force TCS.FLAGS.DBGOPTIN off

+

DS:RCX.CSSA := 0;

+

DS:RCX.AEP := 0;

+

DS:RCX.STATE := 0;

+

FI;

+

(* Add enclave offset and security attributes to MRENCLAVE *)

+

TMP_ENCLAVEOFFSET := TMP_LINADDR - DS:TMP_SECS.BASEADDR;

+

TMPUPDATEFIELD[63:0] := 0000000044444145H; // “EADD”

+

TMPUPDATEFIELD[127:64] := TMP_ENCLAVEOFFSET;

+

TMPUPDATEFIELD[511:128] := SCRATCH_SECINFO[375:0]; // 48 bytes

+

DS:TMP_SECS.MRENCLAVE := SHA256UPDATE(DS:TMP_SECS.MRENCLAVE, TMPUPDATEFIELD)

+

INC enclave’s MRENCLAVE update counter;

+

(* Set EPCM security attributes *)

+

EPCM(DS:RCX).R := SCRATCH_SECINFO.FLAGS.R;

+

EPCM(DS:RCX).W := SCRATCH_SECINFO.FLAGS.W;

+

EPCM(DS:RCX).X := SCRATCH_SECINFO.FLAGS.X;

+

EPCM(DS:RCX).PT := SCRATCH_SECINFO.FLAGS.PT;

+

EPCM(DS:RCX).ENCLAVEADDRESS := TMP_LINADDR;

+

(* associate the EPCPAGE with the SECS by storing the SECS identifier of DS:TMP_SECS *)

+

Update EPCM(DS:RCX) SECS identifier to reference DS:TMP_SECS identifier;

+

(* Set EPCM entry fields *)

+

EPCM(DS:RCX).BLOCKED := 0;

+

EPCM(DS:RCX).PENDING := 0;

+

EPCM(DS:RCX).MODIFIED := 0;

+

EPCM(DS:RCX).VALID := 1;

+
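As an informal illustration (not part of the instruction definition), the 64-byte block that EADD folds into MRENCLAVE can be rebuilt in plain C when software precomputes an expected enclave measurement. The sketch below only assembles the TMPUPDATEFIELD layout shown above; the SHA-256 update itself is left to whatever hash library the caller already uses, and the SECINFO bytes are assumed to be supplied in their in-memory layout.

#include <stdint.h>
#include <string.h>

/* Build the 64-byte block hashed into MRENCLAVE for one EADD:
 *   bytes  0..7  : 0000000044444145H, the ASCII tag "EADD"
 *   bytes  8..15 : page offset from the enclave base (TMP_LINADDR - BASEADDR)
 *   bytes 16..63 : first 48 bytes of the SECINFO used for the page
 * The caller feeds the result to an incremental SHA-256 update (not shown). */
static void eadd_update_field(uint8_t out[64], uint64_t enclave_offset,
                              const uint8_t secinfo[48])
{
    const uint64_t tag = 0x0000000044444145ULL; /* "EADD" */
    memcpy(out,      &tag,            8);       /* little-endian on x86 */
    memcpy(out + 8,  &enclave_offset, 8);
    memcpy(out + 16, secinfo,         48);
}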

Flags Affected + ¶ +

+

None

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the DS segment limit.
If a memory operand is not properly aligned.
If an enclave memory operand is outside of the EPC.
If an enclave memory operand is the wrong type.
If a memory operand is locked.
If the enclave is initialized.
If the enclave's MRENCLAVE is locked.
If the TCS page reserved bits are set.
If the TCS page PREVSSP field is not zero.
If the PT_SS_REST or PT_SS_REST page is the first or last page in the enclave.
If the PT_SS_FIRST or PT_SS_REST page is not initialized correctly.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If the EPC page is valid.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand is non-canonical form.
If a memory operand is not properly aligned.
If an enclave memory operand is outside of the EPC.
If an enclave memory operand is the wrong type.
If a memory operand is locked.
If the enclave is initialized.
If the enclave's MRENCLAVE is locked.
If the TCS page reserved bits are set.
If the TCS page PREVSSP field is not zero.
If the PT_SS_REST or PT_SS_REST page is the first or last page in the enclave.
If the PT_SS_FIRST or PT_SS_REST page is not initialized correctly.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If the EPC page is valid.
diff --git a/x86/eaug.html b/x86/eaug.html new file mode 100644 index 0000000..28d1234 --- /dev/null +++ b/x86/eaug.html @@ -0,0 +1,306 @@ + +EAUG + — Add a Page to an Initialized Enclave

EAUG + — Add a Page to an Initialized Enclave

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 0DH ENCLS[EAUG]IRV/VSGX2This leaf function adds a page to an initialized enclave.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnEAXRBXRCX
IREAUG (In)Address of a PAGEINFO (In)Address of the destination EPC page (In)
+

Description + ¶ +

+

This leaf function zeroes a page of EPC memory, associates the EPC page with an SECS page residing in the EPC, and stores the linear address and security attributes in the EPCM. As part of the association, the security attributes are configured to prevent access to the EPC page until a corresponding invocation of the EACCEPT leaf or EACCEPTCOPY leaf confirms the addition of the new page into the enclave. This instruction can only be executed when current privilege level is 0.

+

RBX contains the effective address of a PAGEINFO structure while RCX contains the effective address of an EPC page. The table below provides additional information on the memory parameter of the EAUG leaf function.

+
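As a hedged, illustrative sketch only: an OS kernel that manages the EPC would issue this leaf roughly as follows. The PAGEINFO layout (LINADDR, SRCPGE, SECINFO, SECS) and its 32-byte alignment are assumptions taken from the data-structure chapter rather than from this page, ENCLS is encoded directly as 0F 01 CF, and the code must run at CPL 0.

#include <stdint.h>

/* PAGEINFO layout assumed from the SGX data-structure definitions:
 * LINADDR, SRCPGE, SECINFO, SECS, 32-byte aligned. */
struct pageinfo {
    uint64_t linaddr;   /* enclave linear address of the new page        */
    uint64_t srcpge;    /* must be zero for EAUG                         */
    uint64_t secinfo;   /* 0, or pointer to a 64-byte aligned SECINFO    */
    uint64_t secs;      /* address of the SECS page in the EPC           */
} __attribute__((aligned(32)));

/* Ring-0 only: issue ENCLS[EAUG] (EAX = 0x0D). ENCLS is encoded 0F 01 CF. */
static inline void encls_eaug(const struct pageinfo *pi, void *epc_page)
{
    uint64_t leaf = 0x0D;
    asm volatile(".byte 0x0f, 0x01, 0xcf"
                 : "+a"(leaf)
                 : "b"(pi), "c"(epc_page)
                 : "memory");
}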

EAUG Memory Parameter Semantics + ¶ +

+ + + + + + + + + + + + +
PAGEINFOPAGEINFO.SECSPAGEINFO.SRCPGEPAGEINFO.SECINFOEPCPAGE
Read access permitted by Non EnclaveRead/Write access permitted by EnclaveMust be zeroRead access permitted by Non EnclaveWrite access permitted by Enclave
+

The instruction faults if any of the following:

+

EAUG Faulting Conditions + ¶ +

+ + + + + + + + + + + + + + + +
The operands are not properly aligned.Unsupported security attributes are set.
Refers to an invalid SECS.Reference is made to an SECS that is locked by another thread.
The EPC page is locked by another thread.RCX does not contain an effective address of an EPC page.
The EPC page is already valid.The specified enclave offset is outside of the enclave address space.
The SECS has been initialized.
+

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn ConflictSGX_CONFLICT VM Exit Qualification
EAUGTarget [DS:RCX]Exclusive#GPEPC_PAGE_CONFLICT_EXCEPTION
SECS [DS:RBX]PAGEINFO.SECSShared#GP
+
Table 38-10. Base Concurrency Restrictions of EAUG
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
EAUGTarget [DS:RCX]ConcurrentConcurrentConcurrent
SECS [DS:RBX]PAGEINFO.SECSConcurrentConcurrentConcurrent
+
Table 38-11. Additional Concurrency Restrictions of EAUG
+

Operation + ¶ +

+

Temp Variables in EAUG Operational Flow + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeSize (bits)Description
TMP_SECSEffective Address32/64Effective address of the SECS destination page.
TMP_SECINFOEffective Address32/64Effective address of an SECINFO structure which contains security attributes of the page to be added.
SCRATCH_SECINFOSECINFO512Scratch storage for holding the contents of DS:TMP_SECINFO.
TMP_LINADDRUnsigned Integer64Holds the linear address to be stored in the EPCM and used to calculate TMP_ENCLAVEOFFSET.
+

IF (DS:RBX is not 32Byte Aligned)

+

THEN #GP(0); FI;

+

IF (DS:RCX is not 4KByte Aligned)

+

THEN #GP(0); FI;

+

IF (DS:RCX does not resolve within an EPC)

+

THEN #PF(DS:RCX); FI;

+

TMP_SECS := DS:RBX.SECS;

+

TMP_SECINFO := DS:RBX.SECINFO;

+

IF (DS:RBX.SECINFO is not 0)

+

THEN

+

IF (DS:TMP_SECINFO is not 64B aligned)

+

THEN #GP(0); FI;

+

FI;

+

TMP_LINADDR := DS:RBX.LINADDR;

+

IF ( DS:TMP_SECS is not 4KByte aligned or TMP_LINADDR is not 4KByte aligned )

+

THEN #GP(0); FI;

+

IF (DS:RBX.SRCPGE is not 0)

+

THEN #GP(0); FI;

+

IF (DS:TMP_SECS does not resolve within an EPC)

+

THEN #PF(DS:TMP_SECS); FI;

+

(* Check the EPC page for concurrency *)

+

IF (EPC page in use)

+

THEN

+

IF (<<VMX non-root operation>> AND <<ENABLE_EPC_VIRTUALIZATION_EXTENSIONS>>)

+

THEN

+

VMCS.Exit_reason := SGX_CONFLICT;

+

VMCS.Exit_qualification.code := EPC_PAGE_CONFLICT_EXCEPTION;

+

VMCS.Exit_qualification.error := 0;

+

VMCS.Guest-physical_address := << translation of DS:RCX produced by paging >>;

+

VMCS.Guest-linear_address := DS:RCX;

+

Deliver VMEXIT;

+

ELSE

+

#GP(0);

+

FI;

+

FI;

+

IF (EPCM(DS:RCX).VALID ≠ 0)

+

THEN #PF(DS:RCX); FI;

+

(* copy SECINFO contents into a scratch SECINFO *)

+

IF (DS:RBX.SECINFO is 0)

+

THEN

+

(* allocate and initialize a new scratch SECINFO structure *)

+

SCRATCH_SECINFO.PT := PT_REG;

+

SCRATCH_SECINFO.R := 1;

+

SCRATCH_SECINFO.W := 1;

+

SCRATCH_SECINFO.X := 0;

+

<< zero out remaining fields of SCRATCH_SECINFO >>

+

ELSE

+

(* copy SECINFO contents into scratch SECINFO *)

+

SCRATCH_SECINFO := DS:TMP_SECINFO;

+

(* check SECINFO flags for misconfiguration *)

+

(* reserved flags must be zero *)

+

(* SECINFO.FLAGS.PT must either be PT_SS_FIRST, or PT_SS_REST *)

+

IF ( (SCRATCH_SECINFO reserved fields are not 0) or

+

(CPUID.(EAX=12H, ECX=1):EAX[6] is 0) OR

+

(SCRATCH_SECINFO.PT is not PT_SS_FIRST or PT_SS_REST) OR

+

( (SCRATCH_SECINFO.FLAGS.R is 0) OR (SCRATCH_SECINFO.FLAGS.W is 0) OR (SCRATCH_SECINFO.FLAGS.X is 1) ) )

+

THEN #GP(0); FI;

+

FI;

+

(* If PT_SS_FIRST/PT_SS_REST page types are requested then CR4.CET must be 1 *)

+

IF ( (SCRATCH_SECINFO.PT is PT_SS_FIRST OR SCRATCH_SECINFO.PT is PT_SS_REST) AND CR4.CET == 0 )

+

THEN #GP(0); FI;

+

(* Check the SECS for concurrency *)

+

IF (SECS is not available for EAUG)

+

THEN #GP(0); FI;

+

IF (EPCM(DS:TMP_SECS).VALID = 0 or EPCM(DS:TMP_SECS).PT ≠ PT_SECS)

+

THEN #PF(DS:TMP_SECS); FI;

+

(* Check if the enclave to which the page will be added is in the Initialized state *)

+

IF (DS:TMP_SECS is not initialized)

+

THEN #GP(0); FI;

+

(* Check the enclave offset is within the enclave linear address space *)
IF ( (TMP_LINADDR < DS:TMP_SECS.BASEADDR) or (TMP_LINADDR ≥ DS:TMP_SECS.BASEADDR + DS:TMP_SECS.SIZE) )
THEN #GP(0); FI;

+

IF ( (SCRATCH_SECINFO.PT is PT_SS_FIRST OR SCRATCH_SECINFO.PT is PT_SS_REST) )

+

THEN

+

(* SS pages cannot be created on the first or last page of ELRANGE *)

+

IF ( TMP_LINADDR == DS:TMP_SECS.BASEADDR OR

+

TMP_LINADDR == (DS:TMP_SECS.BASEADDR + DS:TMP_SECS.SIZE - 0x1000) )

+

THEN

+

#GP(0); FI;

+

FI;

+

(* Clear the content of EPC page*)

+

DS:RCX[32767:0] := 0;

+

IF (CPUID.(EAX=07H, ECX=0H):ECX[CET_SS] = 1)

+

THEN

+

(* set up shadow stack RSTORSSP token *)

+

IF (SCRATCH_SECINFO.PT is PT_SS_FIRST)

+

THEN

+

DS:RCX[0xFF8] := (TMP_LINADDR + 0x1000) | TMP_SECS.ATTRIBUTES.MODE64BIT; FI;

+

FI;

+

(* Set EPCM security attributes *)

+

EPCM(DS:RCX).R := SCRATCH_SECINFO.FLAGS.R;

+

EPCM(DS:RCX).W := SCRATCH_SECINFO.FLAGS.W;

+

EPCM(DS:RCX).X := SCRATCH_SECINFO.FLAGS.X;

+

EPCM(DS:RCX).PT := SCRATCH_SECINFO.FLAGS.PT;

+

EPCM(DS:RCX).ENCLAVEADDRESS := TMP_LINADDR;

+

EPCM(DS:RCX).BLOCKED := 0;

+

EPCM(DS:RCX).PENDING := 1;

+

EPCM(DS:RCX).MODIFIED := 0;

+

EPCM(DS:RCX).PR := 0;

+

(* associate the EPCPAGE with the SECS by storing the SECS identifier of DS:TMP_SECS *)

+

Update EPCM(DS:RCX) SECS identifier to reference DS:TMP_SECS identifier;

+

(* Set EPCM valid fields *)

+

EPCM(DS:RCX).VALID := 1;

+
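For reference, the shadow-stack restore token that EAUG writes into the last eight bytes of a PT_SS_FIRST page (DS:RCX[0xFF8] above) can be computed as in the following minimal C sketch; the same value is what the EADD flow earlier in this chapter checks for. This is an illustration, not SDM text.

#include <stdint.h>

/* Value EAUG stores at offset 0xFF8 of a PT_SS_FIRST shadow-stack page,
 * per the operational flow above: the address just past the page, with
 * bit 0 carrying SECS.ATTRIBUTES.MODE64BIT. */
static inline uint64_t rstorssp_token(uint64_t page_linaddr, int mode64bit)
{
    return (page_linaddr + 0x1000) | (mode64bit ? 1u : 0u);
}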

Flags Affected + ¶ +

+

None

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the DS segment limit.
If a memory operand is not properly aligned.
If a memory operand is locked.
If the enclave is not initialized.
#PF(errorcode) If a page fault occurs in accessing memory operands.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If a memory operand is non-canonical form.
If a memory operand is not properly aligned.
If a memory operand is locked.
If the enclave is not initialized.
#PF(errorcode) If a page fault occurs in accessing memory operands.
diff --git a/x86/eblock.html b/x86/eblock.html new file mode 100644 index 0000000..d9aacb8 --- /dev/null +++ b/x86/eblock.html @@ -0,0 +1,227 @@ + +EBLOCK + — Mark a page in EPC as Blocked

EBLOCK + — Mark a page in EPC as Blocked

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 09H ENCLS[EBLOCK]IRV/VSGX1This leaf function marks a page in the EPC as blocked.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + +
Op/EnEAXRCX
IREBLOCK (In)Return error code (Out)Effective address of the EPC page (In)
+

Description + ¶ +

+

This leaf function causes an EPC page to be marked as BLOCKED. This instruction can only be executed when current privilege level is 0.

+

The content of RCX is an effective address of an EPC page. The DS segment is used to create linear address. Segment override is not supported.

+

An error code is returned in RAX.

+

The table below provides additional information on the memory parameter of EBLOCK leaf function.

+

EBLOCK Memory Parameter Semantics + ¶ +

+ + + + +
EPCPAGE
Read/Write access permitted by Enclave
+

The error codes are:

+
+ + + + + + + + + + + + + + + + + + + + + +
Error Code (see Table 38-4)Description
No ErrorEBLOCK successful.
SGX_BLKSTATEPage already blocked. This value is used to indicate to a VMM that the page was already in BLOCKED state as a result of EBLOCK and thus will need to be restored to this state when it is eventually reloaded (using ELDB).
SGX_ENTRYEPOCH_LOCKEDSECS locked for Entry Epoch update. This value indicates that an ETRACK is currently executing on the SECS. The EBLOCK should be reattempted.
SGX_NOTBLOCKABLEPage type is not one which can be blocked.
SGX_PG_INVLDPage is not valid and cannot be blocked.
SGX_EPC_PAGE_CONFLICTPage is being written by EADD, EAUG, ECREATE, ELDU/B, EMODT, or EWB.
+
Table 38-12. EBLOCK Return Value in RAX
+
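Illustrative only: a VMM or OS paging out enclave memory might wrap this leaf as in the sketch below. It assumes a GCC-style compiler at CPL 0 and encodes ENCLS directly as 0F 01 CF; the numeric values behind the error names come from Table 38-4, which is not reproduced in this excerpt.

#include <stdint.h>

/* Ring-0 only: issue ENCLS[EBLOCK] (EAX = 0x09) on one EPC page and return
 * the RAX error code (0 on success; other values, e.g. SGX_BLKSTATE or
 * SGX_ENTRYEPOCH_LOCKED, are listed in Table 38-4). */
static inline uint64_t encls_eblock(void *epc_page)
{
    uint64_t rax = 0x09;               /* EBLOCK leaf */
    asm volatile(".byte 0x0f, 0x01, 0xcf"
                 : "+a"(rax)
                 : "c"(epc_page)
                 : "memory", "cc");
    return rax;
}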

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn ConflictSGX_CONFLICT VM Exit Qualification
EBLOCKTarget [DS:RCX]SharedSGX_EPC_PAGE_CONFLICT
+
Table 38-13. Base Concurrency Restrictions of EBLOCK
+
+ + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
EBLOCKTarget [DS:RCX]ConcurrentConcurrentConcurrent
+
Table 38-14. Additional Concurrency Restrictions of EBLOCK
+

Operation + ¶ +

+

Temp Variables in EBLOCK Operational Flow + ¶ +

+ + + + + + + + + + +
NameTypeSize (Bits)Description
TMP_BLKSTATEInteger64Page is already blocked.
+

IF (DS:RCX is not 4KByte Aligned)

+

THEN #GP(0); FI;

+

IF (DS:RCX does not resolve within an EPC)

+

THEN #PF(DS:RCX); FI;

+

RFLAGS.ZF,CF,PF,AF,OF,SF := 0;

+

RAX := 0;

+

(* Check the EPC page for concurrency*)

+

IF (EPC page in use)

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_EPC_PAGE_CONFLICT;

+

GOTO DONE;

+

FI;

+

IF (EPCM(DS:RCX). VALID = 0)

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_PG_INVLD;

+

GOTO DONE;

+

FI;

+

IF ( (EPCM(DS:RCX).PT ≠ PT_REG) and (EPCM(DS:RCX).PT ≠ PT_TCS) and (EPCM(DS:RCX).PT ≠ PT_TRIM)

+

and (EPCM(DS:RCX).PT ≠ PT_SS_FIRST) and (EPCM(DS:RCX).PT ≠ PT_SS_REST) )

+

THEN

+

RFLAGS.CF := 1;

+

IF (EPCM(DS:RCX).PT = PT_SECS)

+

THEN RAX := SGX_PG_IS_SECS;

+

ELSE RAX := SGX_NOTBLOCKABLE;

+

FI;

+

GOTO DONE;

+

FI;

+

(* Check if the page is already blocked and report blocked state *)

+

TMP_BLKSTATE := EPCM(DS:RCX).BLOCKED;

+

(* at this point, the page must be valid and PT_TCS or PT_REG or PT_TRIM*)

+

IF (TMP_BLKSTATE = 1)

+

THEN

+

RFLAGS.CF := 1;

+

RAX := SGX_BLKSTATE;

+

ELSE

+

EPCM(DS:RCX).BLOCKED := 1

+

FI;

+

DONE:

+

Flags Affected + ¶ +

+

Sets ZF if SECS is in use or invalid, otherwise cleared. Sets CF if page is BLOCKED or not blockable, otherwise cleared. Clears PF, AF, OF, SF.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the DS segment limit.
If a memory operand is not properly aligned.
If the specified EPC resource is in use.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If a memory operand is not an EPC page.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If a memory operand is non-canonical form.
If a memory operand is not properly aligned.
If the specified EPC resource is in use.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If a memory operand is not an EPC page.
diff --git a/x86/ecreate.html b/x86/ecreate.html new file mode 100644 index 0000000..0f457be --- /dev/null +++ b/x86/ecreate.html @@ -0,0 +1,344 @@ + +ECREATE + — Create an SECS page in the Enclave Page Cache

ECREATE + — Create an SECS page in the Enclave Page Cache

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 00H ENCLS[ECREATE]IRV/VSGX1This leaf function begins an enclave build by creating an SECS page in EPC.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnEAXRBXRCX
IRECREATE (In)Address of a PAGEINFO (In)Address of the destination SECS page (In)
+

Description + ¶ +

+

ENCLS[ECREATE] is the first instruction executed in the enclave build process. ECREATE copies an SECS structure outside the EPC into an SECS page inside the EPC. The internal structure of SECS is not accessible to software.

+

ECREATE will set up fields in the protected SECS and mark the page as valid inside the EPC. ECREATE initializes or checks unused fields.

+

Software sets the following fields in the source structure: SECS:BASEADDR, SECS:SIZE in bytes, ATTRIBUTES, CONFIGID, and CONFIGSVN. SECS:BASEADDR must be naturally aligned on an SECS.SIZE boundary. SECS.SIZE must be at least 2 pages (8192).

+

The source operand RBX contains an effective address of a PAGEINFO structure. PAGEINFO contains an effective address of a source SECS and an effective address of an SECINFO. The SECS field in PAGEINFO is not used.

+

The RCX register is the effective address of the destination SECS. It is an address of an empty slot in the EPC. The SECS structure must be page aligned. SECINFO flags must specify the page as an SECS page.

+

ECREATE Memory Parameter Semantics + ¶ +

+ + + + + + + + + + +
PAGEINFOPAGEINFO.SRCPGEPAGEINFO.SECINFOEPCPAGE
Read access permitted by Non EnclaveRead access permitted by Non EnclaveRead access permitted by Non EnclaveWrite access permitted by Enclave
+

ECREATE will fault if the SECS target page is in use, is already valid, or is outside the EPC. It will also fault if the addresses are not aligned or if unused PAGEINFO fields are not zero.

+

If the amount of space needed to store the SSA frame is greater than the amount specified in SECS.SSAFRAMESIZE, a #GP(0) results. The amount of space needed for an SSA frame is computed based on the size implied by DS:TMP_SECS.ATTRIBUTES.XFRM. Details of computing the size can be found in Section 39.7.

+
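A minimal, hedged pre-flight check of the software-settable SECS constraints described above (size at least 8192 bytes and a power of two, base address naturally aligned on the size, SSAFRAMESIZE large enough for the XSAVE, GPR, and MISC regions) might look like the following C sketch. The XSAVE, GPR, and MISC byte counts are taken as inputs here because their exact computation depends on XFRM, MISCSELECT, and CPUID data not shown on this page.

#include <stdint.h>
#include <stdbool.h>

/* Pre-flight checks mirroring the ECREATE faulting conditions above.
 * xsave_bytes stands in for compute_xsave_size(XFRM); gpr_bytes and
 * misc_bytes are likewise caller-supplied inputs, not values defined here. */
static bool secs_params_ok(uint64_t base, uint64_t size,
                           uint32_t ssaframesize_pages,
                           uint64_t xsave_bytes, uint64_t gpr_bytes,
                           uint64_t misc_bytes)
{
    if (size < 8192 || (size & (size - 1)) != 0)   /* >= 2 pages, power of 2 */
        return false;
    if (base & (size - 1))                         /* base aligned on size   */
        return false;
    if ((uint64_t)ssaframesize_pages * 4096 < xsave_bytes + gpr_bytes + misc_bytes)
        return false;                              /* SSA frame too small    */
    return true;
}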

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn ConflictSGX_CONFLICT VM Exit Qualification
ECREATESECS [DS:RCX]Exclusive#GP
+
Table 38-15. Base Concurrency Restrictions of ECREATE
+
+ + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
ECREATESECS [DS:RCX]ConcurrentConcurrentConcurrent
+
Table 38-16. Additional Concurrency Restrictions of ECREATE
+

Operation + ¶ +

+

Temp Variables in ECREATE Operational Flow + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeSize (Bits)Description
TMP_SRCPGEEffective Address32/64Effective address of the SECS source page.
TMP_SECSEffective Address32/64Effective address of the SECS destination page.
TMP_SECINFOEffective Address32/64Effective address of an SECINFO structure which contains security attributes of the SECS page to be added.
TMP_XSIZESSA Size64The size calculation of SSA frame.
TMP_MISC_SIZEMISC Field Size64Size of the selected MISC field components.
TMPUPDATEFIELDSHA256 Buffer512Buffer used to hold data being added to TMP_SECS.MRENCLAVE.
+

IF (DS:RBX is not 32Byte Aligned)

+

THEN #GP(0); FI;

+

IF (DS:RCX is not 4KByte Aligned)

+

THEN #GP(0); FI;

+

IF (DS:RCX does not resolve within an EPC)

+

THEN #PF(DS:RCX); FI;

+

TMP_SRCPGE := DS:RBX.SRCPGE;

+

TMP_SECINFO := DS:RBX.SECINFO;

+

IF (DS:TMP_SRCPGE is not 4KByte aligned or DS:TMP_SECINFO is not 64Byte aligned)

+

THEN #GP(0); FI;

+

IF (DS:RBX.LINADDR ≠ 0 or DS:RBX.SECS ≠ 0)

+

THEN #GP(0); FI;

+

(* Check for misconfigured SECINFO flags*)

+

IF (DS:TMP_SECINFO reserved fields are not zero or DS:TMP_SECINFO.FLAGS.PT ≠ PT_SECS)

+

THEN #GP(0); FI;

+

TMP_SECS := RCX;

+

IF (EPC entry in use)

+

THEN

+

IF (<<VMX non-root operation>> AND <<ENABLE_EPC_VIRTUALIZATION_EXTENSIONS>>)

+

THEN

+

VMCS.Exit_reason := SGX_CONFLICT;

+

VMCS.Exit_qualification.code := EPC_PAGE_CONFLICT_EXCEPTION;

+

VMCS.Exit_qualification.error := 0;

+

VMCS.Guest-physical_address :=

+

<< translation of DS:TMP_SECS produced by paging >>;

+

VMCS.Guest-linear_address := DS:TMP_SECS;

+

Deliver VMEXIT;

+

ELSE

+

#GP(0);

+

FI;

+

FI;

+

IF (EPC entry in use)

+

THEN #GP(0); FI;

+

IF (EPCM(DS:RCX).VALID = 1)

+

THEN #PF(DS:RCX); FI;

+

(* Copy 4KBytes from source page to EPC page*)

+

DS:RCX[32767:0] := DS:TMP_SRCPGE[32767:0];

+

(* Check lower 2 bits of XFRM are set *)

+

IF ( ( DS:TMP_SECS.ATTRIBUTES.XFRM BitwiseAND 03H) ≠ 03H)

+

THEN #GP(0); FI;

+

IF (XFRM is illegal)

+

THEN #GP(0); FI;

+

(* Check legality of CET_ATTRIBUTES *)

+

IF ((DS:TMP_SECS.ATTRIBUTES.CET = 0 and DS:TMP_SECS.CET_ATTRIBUTES ≠ 0) ||

+

(DS:TMP_SECS.ATTRIBUTES.CET = 0 and DS:TMP_SECS.CET_LEG_BITMAP_OFFSET ≠ 0) ||

+

(CPUID.(EAX=7, ECX=0):EDX[CET_IBT] = 0 and DS:TMP_SECS.CET_LEG_BITMAP_OFFSET ≠ 0) ||

+

(CPUID.(EAX=7, ECX=0):EDX[CET_IBT] = 0 and DS:TMP_SECS.CET_ATTRIBUTES[5:2] ≠ 0) ||

+

(CPUID.(EAX=7, ECX=0):ECX[CET_SS] = 0 and DS:TMP_SECS.CET_ATTRIBUTES[1:0] ≠ 0) ||

+

(DS:TMP_SECS.ATTRIBUTES.MODE64BIT = 1 and

+

(DS:TMP_SECS.BASEADDR + DS:TMP_SECS.CET_LEG_BITMAP_OFFSET) not canonical) ||

+

(DS:TMP_SECS.ATTRIBUTES.MODE64BIT = 0 and

+

(DS:TMP_SECS.BASEADDR + DS:TMP_SECS.CET_LEG_BITMAP_OFFSET) & 0xFFFFFFFF00000000) ||

+

(DS:TMP_SECS.CET_ATTRIBUTES.reserved fields not 0) or

+

(DS:TMP_SECS.CET_LEG_BITMAP_OFFSET) is not page aligned))

+

THEN

+

#GP(0);

+

FI;

+

(* Make sure that the SECS does not have any unsupported MISCSELECT options*)

+

IF ( !(CPUID.(EAX=12H, ECX=0):EBX[31:0] & DS:TMP_SECS.MISCSELECT[31:0]) )

+

THEN

+

EPCM(DS:TMP_SECS).EntryLock.Release();

+

#GP(0);

+

FI;

+

( * Compute size of MISC area *)

+

TMP_MISC_SIZE := compute_misc_region_size();

+

(* Compute the size required to save state of the enclave on async exit, see Section 39.7.2.2*)

+

TMP_XSIZE := compute_xsave_size(DS:TMP_SECS.ATTRIBUTES.XFRM) + GPR_SIZE + TMP_MISC_SIZE;

+

(* Ensure that the declared area is large enough to hold XSAVE and GPR stat *)

+

IF ( DS:TMP_SECS.SSAFRAMESIZE*4096 < TMP_XSIZE)

+

THEN #GP(0); FI;

+

IF ( (DS:TMP_SECS.ATTRIBUTES.MODE64BIT = 1) and (DS:TMP_SECS.BASEADDR is not canonical) )

+

THEN #GP(0); FI;

+

IF ( (DS:TMP_SECS.ATTRIBUTES.MODE64BIT = 0) and (DS:TMP_SECS.BASEADDR and 0FFFFFFFF00000000H) )

+

THEN #GP(0); FI;

+

IF ( (DS:TMP_SECS.ATTRIBUTES.MODE64BIT = 0) and (DS:TMP_SECS.SIZE ≥ 2 ^ (CPUID.(EAX=12H, ECX=0):EDX[7:0]) ) ) THEN #GP(0); FI;

+

IF ( (DS:TMP_SECS.ATTRIBUTES.MODE64BIT = 1) and (DS:TMP_SECS.SIZE ≥ 2 ^ (CPUID.(EAX=12H, ECX=0):EDX[15:8]) ) ) THEN #GP(0); FI;

+

(* Enclave size must be at least 8192 bytes and must be power of 2 in bytes*)

+

IF (DS:TMP_SECS.SIZE < 8192 or popcnt(DS:TMP_SECS.SIZE) > 1)

+

THEN #GP(0); FI;

+

(* Ensure base address of an enclave is aligned on size*)

+

IF ( ( DS:TMP_SECS.BASEADDR and (DS:TMP_SECS.SIZE-1) ) )

+

THEN #GP(0); FI;

+

(* Ensure the SECS does not have any unsupported attributes*)

+

IF ( DS:TMP_SECS.ATTRIBUTES and (~CR_SGX_ATTRIBUTES_MASK) )

+

THEN #GP(0); FI;

+

IF ( DS:TMP_SECS reserved fields are not zero)

+

THEN #GP(0); FI;

+

(* Verify that CONFIGID/CONFIGSVN are not set with attribute *)

+

IF ( ((DS:TMP_SECS.CONFIGID ≠ 0) or (DS:TMP_SECS.CONFIGSVN ≠0)) AND (DS:TMP_SECS.ATTRIBUTES.KSS == 0 ))

+

THEN #GP(0); FI;

+

Clear DS:TMP_SECS to Uninitialized;

+

DS:TMP_SECS.MRENCLAVE := SHA256INITIALIZE(DS:TMP_SECS.MRENCLAVE);

+

DS:TMP_SECS.ISVSVN := 0;

+

DS:TMP_SECS.ISVPRODID := 0;

+

(* Initialize hash updates etc*)

+

Initialize enclave’s MRENCLAVE update counter;

+

(* Add “ECREATE” string and SECS fields to MRENCLAVE *)

+

TMPUPDATEFIELD[63:0] := 0045544145524345H; // “ECREATE”

+

TMPUPDATEFIELD[95:64] := DS:TMP_SECS.SSAFRAMESIZE;

+

TMPUPDATEFIELD[159:96] := DS:TMP_SECS.SIZE;

+

IF (CPUID.(EAX=7, ECX=0):EDX[CET_IBT] = 1)

+

THEN

+

TMPUPDATEFIELD[223:160] := DS:TMP_SECS.CET_LEG_BITMAP_OFFSET;

+

ELSE

+

TMPUPDATEFIELD[223:160] := 0;

+

FI;

+

TMPUPDATEFIELD[511:224] := 0;

+

DS:TMP_SECS.MRENCLAVE := SHA256UPDATE(DS:TMP_SECS.MRENCLAVE, TMPUPDATEFIELD)

+

INC enclave’s MRENCLAVE update counter;

+

(* Set EID *)

+

DS:TMP_SECS.EID := LockedXAdd(CR_NEXT_EID, 1);

+

(* Initialize the virtual child count to zero *)

+

DS:TMP_SECS.VIRTCHILDCNT := 0;

+

(* Load ENCLAVECONTEXT with Address out of paging of SECS *)

+

<< store translation of DS:RCX produced by paging in SECS(DS:RCX).ENCLAVECONTEXT >>

+

(* Set the EPCM entry, first create SECS identifier and store the identifier in EPCM *)

+

EPCM(DS:TMP_SECS).PT := PT_SECS;

+

EPCM(DS:TMP_SECS).ENCLAVEADDRESS := 0;

+

EPCM(DS:TMP_SECS).R := 0;

+

EPCM(DS:TMP_SECS).W := 0;

+

EPCM(DS:TMP_SECS).X := 0;

+

(* Set EPCM entry fields *)

+

EPCM(DS:RCX).BLOCKED := 0;

+

EPCM(DS:RCX).PENDING := 0;

+

EPCM(DS:RCX).MODIFIED := 0;

+

EPCM(DS:RCX).PR := 0;

+

EPCM(DS:RCX).VALID := 1;

+

Flags Affected + ¶ +

+

None

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the DS segment limit.
If a memory operand is not properly aligned.
If the reserved fields are not zero.
If PAGEINFO.SECS is not zero.
If PAGEINFO.LINADDR is not zero.
If the SECS destination is locked.
If SECS.SSAFRAMESIZE is insufficient.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If the SECS destination is outside the EPC.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory address is non-canonical form.
If a memory operand is not properly aligned.
If the reserved fields are not zero.
If PAGEINFO.SECS is not zero.
If PAGEINFO.LINADDR is not zero.
If the SECS destination is locked.
If SECS.SSAFRAMESIZE is insufficient.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If the SECS destination is outside the EPC.
diff --git a/x86/edbgrd.html b/x86/edbgrd.html new file mode 100644 index 0000000..8ccb97e --- /dev/null +++ b/x86/edbgrd.html @@ -0,0 +1,270 @@ + +EDBGRD + — Read From a Debug Enclave

EDBGRD + — Read From a Debug Enclave

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 04H ENCLS[EDBGRD]IRV/VSGX1This leaf function reads a dword/quadword from a debug enclave.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + +
Op/EnEAXRBXRCX
IREDBGRD (In)Return error code (Out)Data read from a debug enclave (Out)Address of source memory in the EPC (In)
+

Description + ¶ +

+

This leaf function copies a quadword/doubleword from an EPC page belonging to a debug enclave into the RBX register. Eight bytes are read in 64-bit mode, four bytes are read in non-64-bit modes. The size of data read cannot be overridden.

+

The effective address of the source location inside the EPC is provided in the register RCX.

+

EDBGRD Memory Parameter Semantics + ¶ +

+ + + + +
EPCQW
Read access permitted by Enclave
+

The error codes are:

+
+ + + + + + + + + +
Error Code (see Table 38-4)Description
No ErrorEDBGRD successful.
SGX_PAGE_NOT_DEBUGGABLEThe EPC page cannot be accessed because it is in the PENDING or MODIFIED state.
+
Table 38-17. EDBGRD Return Value in RAX
+

The instruction faults if any of the following:

+

EDBGRD Faulting Conditions + ¶ +

+ + + + + + + + + + + + +
RCX points into a page that is an SECS.RCX does not resolve to a naturally aligned linear address.
RCX points to a page that does not belong to an enclave that is in debug mode.RCX points to a location inside a TCS that is beyond the architectural size of the TCS (SGX_TCS_LIMIT).
An operand causing any segment violation.May page fault.
CPL > 0.
+

This instruction ignores the EPCM RWX attributes on the enclave page. Consequently, violation of EPCM RWX attributes via EDBGRD does not result in a #GP.

+
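Illustrative only: a debugger's ring-0 helper for this leaf could look like the sketch below, which assumes 64-bit mode (so reads are 8 bytes and the source must be 8-byte aligned) and encodes ENCLS directly as 0F 01 CF. The numeric value of SGX_PAGE_NOT_DEBUGGABLE is defined in Table 38-4, not here.

#include <stdint.h>

/* Ring-0 only: read 8 bytes from a debug-enclave EPC page via ENCLS[EDBGRD]
 * (EAX = 0x04). Returns 0 and stores the data on success; returns the RAX
 * error code (e.g., SGX_PAGE_NOT_DEBUGGABLE) otherwise. */
static inline uint64_t encls_edbgrd(const void *epc_src, uint64_t *data)
{
    uint64_t rax = 0x04, rbx;
    asm volatile(".byte 0x0f, 0x01, 0xcf"
                 : "+a"(rax), "=b"(rbx)
                 : "c"(epc_src)
                 : "memory", "cc");
    if (rax == 0)
        *data = rbx;
    return rax;
}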

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn ConflictSGX_CONFLICT VM Exit Qualification
EDBGRDTarget [DS:RCX]Shared
+
Table 38-18. Base Concurrency Restrictions of EDBGRD
+
+ + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
EDBGRDTarget [DS:RCX]ConcurrentConcurrentConcurrent
+
Table 38-19. Additional Concurrency Restrictions of EDBGRD
+

Operation + ¶ +

+

Temp Variables in EDBGRD Operational Flow + ¶ +

+ + + + + + + + + + + + + + + +
NameTypeSize (Bits)Description
TMP_MODE64Binary1((IA32_EFER.LMA = 1) && (CS.L = 1))
TMP_SECS64Physical address of SECS of the enclave to which source operand belongs.
+

TMP_MODE64 := ((IA32_EFER.LMA = 1) && (CS.L = 1));

+

IF ( (TMP_MODE64 = 1) and (DS:RCX is not 8Byte Aligned) )

+

THEN #GP(0); FI;

+

IF ( (TMP_MODE64 = 0) and (DS:RCX is not 4Byte Aligned) )

+

THEN #GP(0); FI;

+

IF (DS:RCX does not resolve within an EPC)

+

THEN #PF(DS:RCX); FI;

+

(* make sure no other Intel SGX instruction is accessing the same EPCM entry *)

+

IF (Another instruction modifying the same EPCM entry is executing)

+

THEN #GP(0); FI;

+

IF (EPCM(DS:RCX).VALID = 0)

+

THEN #PF(DS:RCX); FI;

+

(* make sure that DS:RCX (SOURCE) is pointing to a PT_REG or PT_TCS or PT_VA or PT_SS_FIRST or PT_SS_REST *)

+

IF ( (EPCM(DS:RCX).PT ≠ PT_REG) and (EPCM(DS:RCX).PT ≠ PT_TCS) and (EPCM(DS:RCX).PT ≠ PT_VA)

+

and (EPCM(DS:RCX).PT ≠ PT_SS_FIRST) and (EPCM(DS:RCX).PT ≠ PT_SS_REST))

+

THEN #PF(DS:RCX); FI;

+

(* make sure that DS:RCX points to an accessible EPC page *)

+

IF ( (EPCM(DS:RCX).PENDING is not 0) or (EPCM(DS:RCX).MODIFIED is not 0) )

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_PAGE_NOT_DEBUGGABLE;

+

GOTO DONE;

+

FI;

+

(* If source is a TCS, then make sure that the offset into the page is not beyond the TCS size *)
IF ( (EPCM(DS:RCX).PT = PT_TCS) and ((DS:RCX) & 0FFFH ≥ SGX_TCS_LIMIT) )
THEN #GP(0); FI;

+

(* make sure the enclave owning the PT_REG or PT_TCS page allow debug *)

+

IF ( (EPCM(DS:RCX).PT = PT_REG) or (EPCM(DS:RCX).PT = PT_TCS) )

+

THEN

+

TMP_SECS := GET_SECS_ADDRESS;

+

IF (TMP_SECS.ATTRIBUTES.DEBUG = 0)

+

THEN #GP(0); FI;

+

IF ( (TMP_MODE64 = 1) )

+

THEN RBX[63:0] := (DS:RCX)[63:0];

+

ELSE EBX[31:0] := (DS:RCX)[31:0];

+

FI;

+

ELSE

+

TMP_64BIT_VAL[63:0] := (DS:RCX)[63:0] & (~07H); // Read contents from VA slot

+

IF (TMP_MODE64 = 1)

+

THEN

+

IF (TMP_64BIT_VAL ≠ 0H)

+

THEN RBX[63:0] := 0FFFFFFFFFFFFFFFFH;

+

ELSE RBX[63:0] := 0H;

+

FI;

+

ELSE

+

IF (TMP_64BIT_VAL ≠ 0H)

+

THEN EBX[31:0] := 0FFFFFFFFH;

+

ELSE EBX[31:0] := 0H;

+

FI;

+

FI;

+

(* clear EAX and ZF to indicate successful completion *)

+

RAX := 0;

+

RFLAGS.ZF := 0;

+

DONE:

+

(* clear flags *)

+

RFLAGS.CF,PF,AF,OF,SF := 0;

+

Flags Affected + ¶ +

+

ZF is set if the page is MODIFIED or PENDING; RAX contains the error code. Otherwise ZF is cleared and RAX is set to 0. CF, PF, AF, OF, SF are cleared.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#GP(0)If the address in RCX violates DS limit or access rights.
If DS segment is unusable.
If RCX points to a memory location not 4Byte-aligned.
If the address in RCX points to a page belonging to a non-debug enclave.
If the address in RCX points to a page which is not PT_TCS, PT_REG or PT_VA.
If the address in RCX points to a location inside TCS that is beyond SGX_TCS_LIMIT.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If the address in RCX points to a non-EPC page.
If the address in RCX points to an invalid EPC page.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If RCX is non-canonical form.
If RCX points to a memory location not 8Byte-aligned.
If the address in RCX points to a page belonging to a non-debug enclave.
If the address in RCX points to a page which is not PT_TCS, PT_REG or PT_VA.
If the address in RCX points to a location inside TCS that is beyond SGX_TCS_LIMIT.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If the address in RCX points to a non-EPC page.
If the address in RCX points to an invalid EPC page.
diff --git a/x86/edbgwr.html b/x86/edbgwr.html new file mode 100644 index 0000000..31b142a --- /dev/null +++ b/x86/edbgwr.html @@ -0,0 +1,257 @@ + +EDBGWR + — Write to a Debug Enclave

EDBGWR + — Write to a Debug Enclave

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 05H ENCLS[EDBGWR]IRV/VSGX1This leaf function writes a dword/quadword to a debug enclave.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + +
Op/EnEAXRBXRCX
IREDBGWR (In)Return error code (Out)Data to be written to a debug enclave (In)Address of Target memory in the EPC (In)
+

Description + ¶ +

+

This leaf function copies the content in EBX/RBX to an EPC page belonging to a debug enclave. Eight bytes are written in 64-bit mode, four bytes are written in non-64-bit modes. The size of data cannot be overridden.

+

The effective address of the target location inside the EPC is provided in the register RCX.

+

EDBGWR Memory Parameter Semantics + ¶ +

+ + + + +
EPCQW
Write access permitted by Enclave
+

The instruction faults if any of the following:

+

EDBGWR Faulting Conditions + ¶ +

+ + + + + + + + + + + + +
RCX points into a page that is an SECS.RCX does not resolve to a naturally aligned linear address.
RCX points to a page that does not belong to an enclave that is in debug mode.RCX points to a location inside a TCS that is not the FLAGS word.
An operand causing any segment violation.May page fault.
CPL > 0.
+

The error codes are:

+
+ + + + + + + + + +
Error Code (see Table 38-4)Description
No ErrorEDBGWR successful.
SGX_PAGE_NOT_DEBUGGABLEThe EPC page cannot be accessed because it is in the PENDING or MODIFIED state.
+
Table 38-20. EDBGWR Return Value in RAX
+

This instruction ignores the EPCM RWX attributes on the enclave page. Consequently, violation of EPCM RWX attributes via EDBGWR does not result in a #GP.

+
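As a hedged counterpart to the EDBGRD sketch, a ring-0 write helper might look like this; it assumes 64-bit mode and the direct ENCLS encoding, and it relies on the caller to respect the restriction that, inside a TCS, only the FLAGS word may be written.

#include <stdint.h>

/* Ring-0 only: write 8 bytes into a debug-enclave EPC page via ENCLS[EDBGWR]
 * (EAX = 0x05). The target must be 8-byte aligned in 64-bit mode. Returns
 * the RAX error code (0 on success). */
static inline uint64_t encls_edbgwr(void *epc_dst, uint64_t data)
{
    uint64_t rax = 0x05;
    asm volatile(".byte 0x0f, 0x01, 0xcf"
                 : "+a"(rax)
                 : "b"(data), "c"(epc_dst)
                 : "memory", "cc");
    return rax;
}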

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn ConflictSGX_CONFLICT VM Exit Qualification
EDBGWRTarget [DS:RCX]Shared
+
Table 38-21. Base Concurrency Restrictions of EDBGWR
+
+ + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
EDBGWRTarget [DS:RCX]ConcurrentConcurrentConcurrent
+
Table 38-22. Additional Concurrency Restrictions of EDBGWR
+

Operation + ¶ +

+

Temp Variables in EDBGWR Operational Flow + ¶ +

+ + + + + + + + + + + + + + + +
NameTypeSize (Bits)Description
TMP_MODE64Binary1((IA32_EFER.LMA = 1) && (CS.L = 1)).
TMP_SECS64Physical address of SECS of the enclave to which source operand belongs.
+

TMP_MODE64 := ((IA32_EFER.LMA = 1) && (CS.L = 1));

+

IF ( (TMP_MODE64 = 1) and (DS:RCX is not 8Byte Aligned) )

+

THEN #GP(0); FI;

+

IF ( (TMP_MODE64 = 0) and (DS:RCX is not 4Byte Aligned) )

+

THEN #GP(0); FI;

+

IF (DS:RCX does not resolve within an EPC)

+

THEN #PF(DS:RCX); FI;

+

(* make sure no other Intel SGX instruction is accessing the same EPCM entry *)

+

IF (Another instruction modifying the same EPCM entry is executing)

+

THEN #GP(0); FI;

+

IF (EPCM(DS:RCX).VALID = 0)

+

THEN #PF(DS:RCX); FI;

+

(* make sure that DS:RCX (DST) is pointing to a PT_REG or PT_TCS or PT_SS_FIRST or PT_SS_REST *)

+

IF ( (EPCM(DS:RCX).PT ≠ PT_REG) and (EPCM(DS:RCX).PT ≠ PT_TCS)

+

and (EPCM(DS:RCX).PT ≠ PT_SS_FIRST) and (EPCM(DS:RCX).PT ≠ PT_SS_REST))

+

THEN #PF(DS:RCX); FI;

+

(* make sure that DS:RCX points to an accessible EPC page *)

+

IF ( (EPCM(DS:RCX).PENDING is not 0) or (EPCM(DS:RCX).MODIFIED is not 0) )

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_PAGE_NOT_DEBUGGABLE;

+

GOTO DONE;

+

FI;

+

(* If destination is a TCS, then make sure that the offset into the page can only point to the FLAGS field*)

+

IF ( (EPCM(DS:RCX).PT = PT_TCS) and ((DS:RCX) & 0FF8H ≠ offset_of_FLAGS & 0FF8H) )

+

THEN #GP(0); FI;

+

(* Locate the SECS for the enclave to which the DS:RCX page belongs *)

+

TMP_SECS := GET_SECS_PHYS_ADDRESS(EPCM(DS:RCX).ENCLAVESECS);

+

(* make sure the enclave owning the PT_REG or PT_TCS page allow debug *)

+

IF (TMP_SECS.ATTRIBUTES.DEBUG = 0)

+

THEN #GP(0); FI;

+

IF ( (TMP_MODE64 = 1) )

+

THEN (DS:RCX)[63:0] := RBX[63:0];

+

ELSE (DS:RCX)[31:0] := EBX[31:0];

+

FI;

+

(* clear EAX and ZF to indicate successful completion *)

+

RAX := 0;

+

RFLAGS.ZF := 0;

+

DONE:

+

(* clear flags *)

+

RFLAGS.CF,PF,AF,OF,SF := 0

+

Flags Affected + ¶ +

+

ZF is set if the page is MODIFIED or PENDING; RAX contains the error code. Otherwise ZF is cleared and RAX is set to 0. CF, PF, AF, OF, SF are cleared.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#GP(0)If the address in RCX violates DS limit or access rights.
If DS segment is unusable.
If RCX points to a memory location not 4Byte-aligned.
If the address in RCX points to a page belonging to a non-debug enclave.
If the address in RCX points to a page which is not PT_TCS or PT_REG.
If the address in RCX points to a location inside TCS that is not the FLAGS word.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If the address in RCX points to a non-EPC page.
If the address in RCX points to an invalid EPC page.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If RCX is non-canonical form.
If RCX points to a memory location not 8Byte-aligned.
If the address in RCX points to a page belonging to a non-debug enclave.
If the address in RCX points to a page which is not PT_TCS or PT_REG.
If the address in RCX points to a location inside TCS that is not the FLAGS word.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If the address in RCX points to a non-EPC page.
If the address in RCX points to an invalid EPC page.
diff --git a/x86/edeccssa.html b/x86/edeccssa.html new file mode 100644 index 0000000..fadf344 --- /dev/null +++ b/x86/edeccssa.html @@ -0,0 +1,283 @@ + +EDECCSSA + — Decrements TCS.CSSA

EDECCSSA + — Decrements TCS.CSSA

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 09H ENCLU[EDECCSSA]IRV/VEDECCSSAThis leaf function decrements TCS.CSSA.
+

Instruction Operand Encoding + ¶ +

+ + + + + + +
Op/EnEAX
IREDECCSSA (In)
+

Description + ¶ +

+

This leaf function changes the current SSA frame by decrementing TCS.CSSA for the current enclave thread. If the enclave has enabled CET shadow stacks or indirect branch tracking, then EDECCSSA also changes the current CET state save frame. This instruction leaf can only be executed inside an enclave.

+

EDECCSSA Memory Parameter Semantics + ¶ +

+ + + + +
TCS
Read/Write access by Enclave
+

The instruction faults if any of the following:

+

EDECCSSA Faulting Conditions + ¶ +

+ + + + + + +
TCS.CSSA is 0.TCS is not valid or available or locked.
The SSA frame is not valid or in use.
+

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn ConflictSGX_CONFLICT VM Exit Qualification
EDECCSSATCS [CR_TCS_PA]Shared
+
Table 38-60. Base Concurrency Restrictions of EDECCSSA
+
+ + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
EDECCSSATCS [CR_TCS_PA]ConcurrentConcurrentConcurrent
+
Table 38-61. Additional Concurrency Restrictions of EDECCSSA
+

Operation + ¶ +

+

Temp Variables in EDECCSSA Operational Flow + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeSize (bits)Description
TMP_SSAEffective Address32/64Address of current SSA frame.
TMP_XSIZEInteger64Size of XSAVE area based on SECS.ATTRIBUTES.XFRM.
TMP_SSA_PAGEEffective Address32/64Pointer used to iterate over the SSA pages in the target frame.
TMP_GPREffective Address32/64Address of the GPR area within the target SSA frame.
TMP_XSAVE_PAGE_PA_nPhysical Address32/64Physical address of the nth page within the target SSA frame.
TMP_CET_SAVE_AREAEffective Address32/64Address of the current CET save area.
TMP_CET_SAVE_PAGEEffective Address32/64Address of the current CET save area page.
+

IF (CR_TCS_PA.CSSA = 0)

+

THEN #GP(0); FI;

+

(* Compute linear address of SSA frame *)

+

TMP_SSA := CR_TCS_PA.OSSA + CR_ACTIVE_SECS.BASEADDR + 4096 * CR_ACTIVE_SECS.SSAFRAMESIZE * (CR_TCS_PA.CSSA - 1);

+

TMP_XSIZE := compute_XSAVE_frame_size(CR_ACTIVE_SECS.ATTRIBUTES.XFRM);

+

FOR EACH TMP_SSA_PAGE = TMP_SSA to TMP_SSA + TMP_XSIZE

+

(* Check page is read/write accessible *)

+

Check that DS:TMP_SSA_PAGE is read/write accessible;

+

If a fault occurs, release locks, abort and deliver that fault;

+

IF (DS:TMP_SSA_PAGE does not resolve to EPC page)

+

THEN #PF(DS:TMP_SSA_PAGE); FI;

+

IF (EPCM(DS:TMP_SSA_PAGE).VALID = 0)

+

THEN #PF(DS:TMP_SSA_PAGE); FI;

+

IF (EPCM(DS:TMP_SSA_PAGE).BLOCKED = 1)

+

THEN #PF(DS:TMP_SSA_PAGE); FI;

+

IF ( (EPCM(DS:TMP_SSA_PAGE).PENDING = 1) or (EPCM(DS:TMP_SSA_PAGE).MODIFIED = 1) )

+

THEN #PF(DS:TMP_SSA_PAGE); FI;

+

IF ( ( EPCM(DS:TMP_SSA_PAGE).ENCLAVEADDRESS ≠ DS:TMP_SSA_PAGE) or

+

(EPCM(DS:TMP_SSA_PAGE).PT ≠ PT_REG) or

+

(EPCM(DS:TMP_SSA_PAGE).ENCLAVESECS ≠ EPCM(CR_TCS_PA).ENCLAVESECS) or

+

(EPCM(DS:TMP_SSA_PAGE).R = 0) or (EPCM(DS:TMP_SSA_PAGE).W = 0))

+

THEN #PF(DS:TMP_SSA_PAGE); FI;

+

TMP_XSAVE_PAGE_PA_n := Physical_Address(DS:TMP_SSA_PAGE);

+

ENDFOR

+

(* Compute address of GPR area*)

+

TMP_GPR := TMP_SSA + 4096 * CR_ACTIVE_SECS.SSAFRAMESIZE - sizeof(GPRSGX_AREA);

+

Check that DS:TMP_SSA_PAGE is read/write accessible;

+

If a fault occurs, release locks, abort and deliver that fault;

+

IF (DS:TMP_GPR does not resolve to EPC page)

+

THEN #PF(DS:TMP_GPR); FI;

+

IF (EPCM(DS:TMP_GPR).VALID = 0)

+

THEN #PF(DS:TMP_GPR); FI;

+

IF (EPCM(DS:TMP_GPR).BLOCKED = 1)

+

THEN #PF(DS:TMP_GPR); FI;

+

IF ((EPCM(DS:TMP_GPR).PENDING = 1) or (EPCM(DS:TMP_GPR).MODIFIED = 1))

+

THEN #PF(DS:TMP_GPR); FI;

+

IF ( ( EPCM(DS:TMP_GPR).ENCLAVEADDRESS ≠ DS:TMP_GPR) or

+

(EPCM(DS:TMP_GPR).PT ≠ PT_REG) or

+

(EPCM(DS:TMP_GPR).ENCLAVESECS ≠ EPCM(CR_TCS_PA).ENCLAVESECS) or

+

(EPCM(DS:TMP_GPR).R = 0) or (EPCM(DS:TMP_GPR).W = 0) )

+

THEN #PF(DS:TMP_GPR); FI;

+

IF (TMP_MODE64 = 0)

+

THEN

+

IF (TMP_GPR + (sizeof(GPRSGX_AREA) -1) is not in DS segment)

+

THEN #GP(0); FI;

+

FI;

+

IF (CPUID.(EAX=12H, ECX=1):EAX[6] = 1)

+

THEN

+

IF ((CR_ACTIVE_SECS.CET_ATTRIBUTES.SH_STK_EN == 1) OR (CR_ACTIVE_SECS.CET_ATTRIBUTES.ENDBR_EN == 1))

+

THEN

+

(* Compute linear address of what will become new CET state save area and cache its PA *)

+

TMP_CET_SAVE_AREA := CR_TCS_PA.OCETSSA + CR_ACTIVE_SECS.BASEADDR + (CR_TCS_PA.CSSA - 1) * 16;

+

TMP_CET_SAVE_PAGE := TMP_CET_SAVE_AREA & ~0xFFF;

+

Check the TMP_CET_SAVE_PAGE page is read/write accessible

+

If fault occurs release locks, abort and deliver fault

+

(* read the EPCM VALID, PENDING, MODIFIED, BLOCKED and PT fields atomically *)

+

IF ((DS:TMP_CET_SAVE_PAGE Does NOT RESOLVE TO EPC PAGE) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).VALID = 0) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).PENDING = 1) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).MODIFIED = 1) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).BLOCKED = 1) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).R = 0) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).W = 0) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).ENCLAVEADDRESS ≠ DS:TMP_CET_SAVE_PAGE) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).PT ≠ PT_SS_REST) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).ENCLAVESECS ≠ EPCM(CR_TCS_PA).ENCLAVESECS))

+

THEN #PF(DS:TMP_CET_SAVE_PAGE); FI;

+

FI;

+

FI;

+

(* At this point, the instruction is guaranteed to complete *)

+

CR_TCS_PA.CSSA := CR_TCS_PA.CSSA - 1;

+

CR_GPR_PA := Physical_Address(DS:TMP_GPR);

+

FOR EACH TMP_XSAVE_PAGE_n

+

CR_XSAVE_PAGE_n := TMP_XSAVE_PAGE_PA_n;

+

ENDFOR

+

IF (CPUID.(EAX=12H, ECX=1):EAX[6] = 1)

+

IF ((TMP_SECS.CET_ATTRIBUTES.SH_STK_EN == 1) OR

+

(TMP_SECS.CET_ATTRIBUTES.ENDBR_EN == 1))

+

THEN

+

CR_CET_SAVE_AREA_PA := Physical_Address(DS:TMP_CET_SAVE_AREA);

+

FI;

+

FI;

+
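For clarity, the two address computations used in the flow above can be restated as a small C sketch; the TCS fields (OSSA, OCETSSA, CSSA) and SECS fields (BASEADDR, SSAFRAMESIZE) are inputs, and nothing here adds behavior beyond what the pseudocode already states.

#include <stdint.h>

/* TMP_SSA = OSSA + BASEADDR + 4096 * SSAFRAMESIZE * (CSSA - 1) */
static inline uint64_t ssa_frame_addr(uint64_t base, uint64_t ossa,
                                      uint32_t ssaframesize_pages, uint32_t cssa)
{
    return base + ossa + 4096ull * ssaframesize_pages * (cssa - 1);
}

/* TMP_CET_SAVE_AREA = OCETSSA + BASEADDR + (CSSA - 1) * 16; only meaningful
 * when CET shadow stacks or indirect branch tracking are enabled. */
static inline uint64_t cet_save_area_addr(uint64_t base, uint64_t ocetssa,
                                          uint32_t cssa)
{
    return base + ocetssa + (uint64_t)(cssa - 1) * 16;
}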

Flags Affected + ¶ +

+

None

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If executed outside an enclave.
If CR_TCS_PA.CSSA = 0.
#PF(errorcode) If a page fault occurs in accessing memory.
If one or more pages of the target SSA frame are not readable/writable, or do not resolve to a valid PT_REG EPC page.
If CET is enabled for the enclave and the target CET SSA frame is not readable/writable, or does not resolve to a valid PT_REG EPC page.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If executed outside an enclave.
If CR_TCS_PA.CSSA = 0.
#PF(errorcode) If a page fault occurs in accessing memory.
If one or more pages of the target SSA frame are not readable/writable, or do not resolve to a valid PT_REG EPC page.
If CET is enabled for the enclave and the target CET SSA frame is not readable/writable, or does not resolve to a valid PT_REG EPC page.
diff --git a/x86/edecvirtchild.html b/x86/edecvirtchild.html new file mode 100644 index 0000000..b5c67a6 --- /dev/null +++ b/x86/edecvirtchild.html @@ -0,0 +1,265 @@ + +EDECVIRTCHILD + — Decrement VIRTCHILDCNT in SECS

EDECVIRTCHILD + — Decrement VIRTCHILDCNT in SECS

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 00H ENCLV[EDECVIRTCHILD]IRV/VEAX[5]This leaf function decrements the SECS VIRTCHILDCNT field.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + +
Op/EnEAXRBXRCX
IREDECVIRTCHILD (In)Return error code (Out)Address of an enclave page (In)Address of an SECS page (In)
+

Description + ¶ +

+

This instruction decrements the SECS VIRTCHILDCNT field. This instruction can only be executed when current privilege level is 0.

+

The content of RCX is an effective address of an EPC page. The DS segment is used to create linear address. Segment override is not supported.

+
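Illustrative only: a VMM tracking virtual children would issue this leaf roughly as follows. ENCLV is assumed here to be encoded as 0F 01 C0 (as for the other ENCLV leaves), the code must run at CPL 0, and the numeric error-code values come from the SGX error-code table rather than this page.

#include <stdint.h>

/* Ring-0/VMM only: issue ENCLV[EDECVIRTCHILD] (EAX = 0x00) for an enclave
 * page and its SECS. Returns the RAX error code (0 on success,
 * SGX_EPC_PAGE_CONFLICT, or SGX_INVALID_COUNTER; ZF mirrors failure). */
static inline uint64_t enclv_edecvirtchild(void *enclave_page, void *secs_page)
{
    uint64_t rax = 0x00;
    asm volatile(".byte 0x0f, 0x01, 0xc0"
                 : "+a"(rax)
                 : "b"(enclave_page), "c"(secs_page)
                 : "memory", "cc");
    return rax;
}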

EDECVIRTCHILD Memory Parameter Semantics + ¶ +

+ + + + + + +
EPCPAGESECS
Read/Write access permitted by Non EnclaveRead access permitted by Enclave
+

The instruction faults if any of the following:

+

EDECVIRTCHILD Faulting Conditions + ¶ +

+ + + + + + + + + + + + +
A memory operand effective address is outside the DS segment limit (32b mode).A page fault occurs in accessing memory operands.
DS segment is unusable (32b mode).RBX does not refer to an enclave page (REG, TCS, TRIM, SECS).
A memory address is in a non-canonical form (64b mode).RCX does not refer to an SECS page.
A memory operand is not properly aligned.RBX does not refer to an enclave page associated with SECS referenced in RCX.
+

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn ConflictSGX_CONFLICT VM Exit Qualification
EDECVIRTCHILDTarget [DS:RBX]SharedSGX_EPC_PAGE_CONFLICT
SECS [DS:RCX]Concurrent
+
Table 38-76. Base Concurrency Restrictions of EDECVIRTCHILD
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
EDECVIRTCHILDTarget [DS:RBX]ConcurrentConcurrentConcurrent
SECS [DS:RCX]ConcurrentConcurrentConcurrent
+
Table 38-77. Additional Concurrency Restrictions of EDECVIRTCHILD
+

Operation + ¶ +

+

Temp Variables in EDECVIRTCHILD Operational Flow + ¶ +

+ + + + + + + + + + + + + + + +
NameTypeSize (bits)Description
TMP_SECSPhysical Address64Physical address of the SECS of the page being modified.
TMP_VIRTCHILDCNTInteger64Number of virtual child pages.
+

EDECVIRTCHILD Return Value in RAX + ¶ +

+ + + + + + + + + + + + + + + + +
ErrorValueDescription
No Error0EDECVIRTCHILD Successful.
SGX_EPC_PAGE_CONFLICTFailure due to concurrent operation of another SGX instruction.
SGX_INVALID_COUNTERAttempt to decrement counter that is already zero.
+

(* check alignment of DS:RBX *)

+

IF (DS:RBX is not 4K aligned) THEN

+

#GP(0); FI;

+

(* check DS:RBX is an linear address of an EPC page *)

+

IF (DS:RBX does not resolve within an EPC) THEN

+

#PF(DS:RBX, PFEC.SGX); FI;

+

(* check DS:RCX is an linear address of an EPC page *)

+

IF (DS:RCX does not resolve within an EPC) THEN

+

#PF(DS:RCX, PFEC.SGX); FI;

+

(* Check the EPCPAGE for concurrency *)

+

IF (EPCPAGE is being modified) THEN

+

RFLAGS.ZF = 1;

+

RAX = SGX_EPC_PAGE_CONFLICT;

+

goto DONE;

+

FI;

+

(* check that the EPC page is valid *)

+

IF (EPCM(DS:RBX).VALID = 0) THEN

+

#PF(DS:RBX, PFEC.SGX); FI;

+

(* check that the EPC page has the correct type and that the back pointer matches the pointer passed as the pointer to parent *)

+

IF ((EPCM(DS:RBX).PAGE_TYPE = PT_REG) or

+

(EPCM(DS:RBX).PAGE_TYPE = PT_TCS) or

+

(EPCM(DS:RBX).PAGE_TYPE = PT_TRIM) or

+

(EPCM(DS:RBX).PAGE_TYPE = PT_SS_FIRST) or

+

(EPCM(DS:RBX).PAGE_TYPE = PT_SS_REST))

+

THEN

+

(* get the SECS of DS:RBX *)

+

TMP_SECS := Address of SECS for (DS:RBX);

+

ELSE IF (EPCM(DS:RBX).PAGE_TYPE = PT_SECS) THEN

+

(* get the physical address of DS:RBX *)

+

TMP_SECS := Physical_Address(DS:RBX);

+

ELSE

+

(* EDECVIRTCHILD called on page of incorrect type *)

+

#PF(DS:RBX, PFEC.SGX); FI;

+

IF (TMP_SECS ≠ Physical_Address(DS:RCX)) THEN

+

#GP(0); FI;

+

(* Atomically decrement virtchild counter and check for underflow *)

+

Locked_Decrement(SECS(TMP_SECS).VIRTCHILDCNT);

+

IF (There was an underflow) THEN

+

Locked_Increment(SECS(TMP_SECS).VIRTCHILDCNT);

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_COUNTER;

+

goto DONE;

+

FI;

+

RFLAGS.ZF := 0;

+

RAX := 0;

+

DONE:

+

(* clear flags *)

+

RFLAGS.CF := 0;

+

RFLAGS.PF := 0;

+

RFLAGS.AF := 0;

+

RFLAGS.OF := 0;

+

RFLAGS.SF := 0;

+

Flags Affected + ¶ +

+

ZF is set if EDECVIRTCHILD fails due to concurrent operation with another SGX instruction, or if there is a VIRTCHILDCNT underflow. Otherwise cleared.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the DS segment limit.
If DS segment is unusable.
If a memory operand is not properly aligned.
RBX does not refer to an enclave page associated with SECS referenced in RCX.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If RBX does not refer to an enclave page (REG, TCS, TRIM, SECS).
If RCX does not refer to an SECS page.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + +
#GP(0)If a memory address is in a non-canonical form.
If a memory operand is not properly aligned.
RBX does not refer to an enclave page associated with SECS referenced in RCX.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If RBX does not refer to an enclave page (REG, TCS, TRIM, SECS).
If RCX does not refer to an SECS page.
diff --git a/x86/eenter.html b/x86/eenter.html new file mode 100644 index 0000000..f1ac087 --- /dev/null +++ b/x86/eenter.html @@ -0,0 +1,545 @@ + +EENTER + — Enters an Enclave

EENTER + — Enters an Enclave

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 02H ENCLU[EENTER]IRV/VSGX1This leaf function is used to enter an enclave.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnEAXRBXRCX
IREENTER (In)Content of RBX.CSSA (Out)Address of a TCS (In)Address of AEP (In)Address of IP following EENTER (Out)
+

Description + ¶ +

+

The ENCLU[EENTER] instruction transfers execution to an enclave. At the end of the instruction, the logical processor is executing in enclave mode at the RIP computed as EnclaveBase + TCS.OENTRY. If the target address is not within the CS segment (32-bit) or is not canonical (64-bit), a #GP(0) results.

+

EENTER Memory Parameter Semantics + ¶ +

+ + + + +
TCS
Enclave access
+

EENTER is a serializing instruction. The instruction faults if any of the following occurs:

+ + + + + + + + + + + + + + + + + + +
Address in RBX is not properly aligned.Any TCS.FLAGS’s must-be-zero bit is not zero.
TCS pointed to by RBX is not valid or available or locked.Current 32/64 mode does not match the enclave mode in SECS.ATTRIBUTES.MODE64.
The SECS is in use.Either of the TCS-specified FS and GS segments is not a subset of the current DS segment.
Any one of DS, ES, CS, SS is not zero.If XSAVE available, CR4.OSXSAVE = 0, but SECS.ATTRIBUTES.XFRM ≠ 3.
CR4.OSFXSR ≠ 1.If CR4.OSXSAVE = 1, SECS.ATTRIBUTES.XFRM is not a subset of XCR0.
If SECS.ATTRIBUTES.AEXNOTIFY ≠ TCS.FLAGS.AEXNOTIFY and TCS.FLAGS.DBGOPTIN = 0.
+

The following operations are performed by EENTER:

+
    +
  • RSP and RBP are saved in the current SSA frame on EENTER and are automatically restored on EEXIT or interrupt.
  • The AEP contained in RCX is stored into the TCS for use by AEXs.
  • FS and GS (including hidden portions) are saved and new values are constructed using TCS.OFSBASE/GSBASE (32 and 64-bit mode) and TCS.OFSLIMIT/GSLIMIT (32-bit mode only). The resulting segments must be a subset of the DS segment.
  • If CR4.OSXSAVE == 1, XCR0 is saved and replaced by SECS.ATTRIBUTES.XFRM. The effect of RFLAGS.TF depends on whether the enclave entry is opt-in or opt-out (see Section 40.1.2):
    • On opt-out entry, TF is saved and cleared (it is restored on EEXIT or AEX). Any attempt to set TF via a POPF instruction while inside the enclave clears TF (see Section 40.2.5).
    • On opt-in entry, a single-step debug exception is pended on the instruction boundary immediately after EENTER (see Section 40.2.2).
  • All code breakpoints that do not overlap with ELRANGE are also suppressed. If the entry is an opt-out entry, all code and data breakpoints that overlap with the ELRANGE are suppressed.
  • On opt-out entry, a number of performance monitoring counters and behaviors are modified or suppressed (see Section 40.2.3):
    • All performance monitoring activity on the current thread is suppressed except for incrementing and firing of FIXED_CTR1 and FIXED_CTR2.
    • PEBS is suppressed.
    • AnyThread counting on other threads is demoted to MyThread mode and IA32_PERF_GLOBAL_STATUS[60] on that thread is set.
    • If the opt-out entry on a hardware thread results in suppression of any performance monitoring, then the processor sets IA32_PERF_GLOBAL_STATUS[60] and IA32_PERF_GLOBAL_STATUS[63].
+

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn Conflict
EENTERTCS [DS:RBX]Shared

Table 38-62. Base Concurrency Restrictions of EENTER

LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
EENTERTCS [DS:RBX]ConcurrentConcurrentConcurrent
+
Table 38-63. Additional Concurrency Restrictions of EENTER
+

Operation + ¶ +

+

Temp Variables in EENTER Operational Flow + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeSize (Bits)Description
TMP_FSBASEEffective Address32/64Proposed base address for FS segment.
TMP_GSBASEEffective Address32/64Proposed base address for GS segment.
TMP_FSLIMITEffective Address32/64Highest legal address in proposed FS segment.
TMP_GSLIMITEffective Address32/64Highest legal address in proposed GS segment.
TMP_XSIZEinteger64Size of XSAVE area based on SECS.ATTRIBUTES.XFRM.
TMP_SSA_PAGEEffective Address32/64Pointer used to iterate over the SSA pages in the current frame.
TMP_GPREffective Address32/64Address of the GPR area within the current SSA frame.
+

TMP_MODE64 := ((IA32_EFER.LMA = 1) && (CS.L = 1));

+

(* Make sure DS is usable, expand up *)

+

IF (TMP_MODE64 = 0 and (DS not usable or ( ( DS[S] = 1) and (DS[bit 11] = 0) and DS[bit 10] = 1) ) )

+

THEN #GP(0); FI;

+

(* Check that CS, SS, DS, ES.base is 0 *)

+

IF (TMP_MODE64 = 0)

+

THEN

+

IF(CS.base ≠ 0 or DS.base ≠ 0) #GP(0); FI;

+

IF(ES usable and ES.base ≠ 0) #GP(0); FI;

+

IF(SS usable and SS.base ≠ 0) #GP(0); FI;

+

IF(SS usable and SS.B = 0) #GP(0); FI;

+

FI;

+

IF (DS:RBX is not 4KByte Aligned)

+

THEN #GP(0); FI;

+

IF (DS:RBX does not resolve within an EPC)

+

THEN #PF(DS:RBX); FI;

+

(* Check AEP is canonical*)

+

IF (TMP_MODE64 = 1 and (CS:RCX is not canonical) )

+

THEN #GP(0); FI;

+

(* Check concurrency of TCS operation*)

+

IF (Other Intel SGX instructions are operating on TCS)

+

THEN #GP(0); FI;

+

(* TCS verification *)

+

IF (EPCM(DS:RBX).VALID = 0)

+

THEN #PF(DS:RBX); FI;

+

IF (EPCM(DS:RBX).BLOCKED = 1)

+

THEN #PF(DS:RBX); FI;

+

IF ( (EPCM(DS:RBX).ENCLAVEADDRESS ≠ DS:RBX) or (EPCM(DS:RBX).PT ≠ PT_TCS) )

+

THEN #PF(DS:RBX); FI;

+

IF ((EPCM(DS:RBX).PENDING = 1) or (EPCM(DS:RBX).MODIFIED = 1))

+

THEN #PF(DS:RBX); FI;

+

IF ( (DS:RBX).OSSA is not 4KByte Aligned)

+

THEN #GP(0); FI;

+

(* Check proposed FS and GS *)

+

IF ( ( (DS:RBX).OFSBASE is not 4KByte Aligned) or ( (DS:RBX).OGSBASE is not 4KByte Aligned) )

+

THEN #GP(0); FI;

+

(* Get the SECS for the enclave in which the TCS resides *)

+

TMP_SECS := Address of SECS for TCS;

+

(* Ensure that the FLAGS field in the TCS does not have any reserved bits set *)

+

IF ( ( (DS:RBX).FLAGS & FFFFFFFFFFFFFFFCH) ≠ 0)

+

THEN #GP(0); FI;

+

(* SECS must exist and enclave must have previously been EINITted *)

+

IF (the enclave is not already initialized)

+

THEN #GP(0); FI;

+

(* make sure the logical processor’s operating mode matches the enclave *)

+

IF ( (TMP_MODE64 ≠ TMP_SECS.ATTRIBUTES.MODE64BIT) )

+

THEN #GP(0); FI;

+

IF (CR4.OSFXSR = 0)

+

THEN #GP(0); FI;

+

(* Check for legal values of SECS.ATTRIBUTES.XFRM *)

+

IF (CR4.OSXSAVE = 0)

+

THEN

+

IF (TMP_SECS.ATTRIBUTES.XFRM ≠ 03H) THEN #GP(0); FI;

+

ELSE

+

IF ( (TMP_SECS.ATTRIBUTES.XFRM & XCR0) ≠ TMP_SECS.ATTRIBUTES.XFRM) THEN #GP(0); FI;

+

FI;

+

IF ( ((DS:RBX).FLAGS.DBGOPTIN = 0) and ((DS:RBX).FLAGS.AEXNOTIFY ≠ TMP_SECS.ATTRIBUTES.AEXNOTIFY) )

+

THEN #GP(0); FI;

+

(* Make sure the SSA contains at least one more frame *)

IF ( (DS:RBX).CSSA ≥ (DS:RBX).NSSA) THEN #GP(0); FI;

+

(* Compute linear address of SSA frame *)

+

TMP_SSA := (DS:RBX).OSSA + TMP_SECS.BASEADDR + 4096 * TMP_SECS.SSAFRAMESIZE * (DS:RBX).CSSA;

+

TMP_XSIZE := compute_XSAVE_frame_size(TMP_SECS.ATTRIBUTES.XFRM);

+

FOR EACH TMP_SSA_PAGE = TMP_SSA to TMP_SSA + TMP_XSIZE

+

(* Check page is read/write accessible *)

+

Check that DS:TMP_SSA_PAGE is read/write accessible;

+

If a fault occurs, release locks, abort, and deliver that fault;

+

IF (DS:TMP_SSA_PAGE does not resolve to EPC page)

+

THEN #PF(DS:TMP_SSA_PAGE); FI;

+

IF (EPCM(DS:TMP_SSA_PAGE).VALID = 0)

+

THEN #PF(DS:TMP_SSA_PAGE); FI;

+

IF (EPCM(DS:TMP_SSA_PAGE).BLOCKED = 1)

+

THEN #PF(DS:TMP_SSA_PAGE); FI;

+

IF ((EPCM(DS:TMP_SSA_PAGE).PENDING = 1) or (EPCM(DS:TMP_SSA_PAGE).MODIFIED = 1))

+

THEN #PF(DS:TMP_SSA_PAGE); FI;

+

IF ( ( EPCM(DS:TMP_SSA_PAGE).ENCLAVEADDRESS ≠ DS:TMP_SSA_PAGE) or (EPCM(DS:TMP_SSA_PAGE).PT ≠ PT_REG) or

+

(EPCM(DS:TMP_SSA_PAGE).ENCLAVESECS ≠ EPCM(DS:RBX).ENCLAVESECS) or

+

(EPCM(DS:TMP_SSA_PAGE).R = 0) or (EPCM(DS:TMP_SSA_PAGE).W = 0) )

+

THEN #PF(DS:TMP_SSA_PAGE); FI;

+

CR_XSAVE_PAGE_n := Physical_Address(DS:TMP_SSA_PAGE);

+

ENDFOR

+

(* Compute address of GPR area*)

+

TMP_GPR := TMP_SSA + 4096 * DS:TMP_SECS.SSAFRAMESIZE - sizeof(GPRSGX_AREA);

+

If a fault occurs; release locks, abort, and deliver that fault;

+

IF (DS:TMP_GPR does not resolve to EPC page)

+

THEN #PF(DS:TMP_GPR); FI;

+

IF (EPCM(DS:TMP_GPR).VALID = 0)

+

THEN #PF(DS:TMP_GPR); FI;

+

IF (EPCM(DS:TMP_GPR).BLOCKED = 1)

+

THEN #PF(DS:TMP_GPR); FI;

+

IF ((EPCM(DS:TMP_GPR).PENDING = 1) or (EPCM(DS:TMP_GPR).MODIFIED = 1))

+

THEN #PF(DS:TMP_GPR); FI;

+

IF ( ( EPCM(DS:TMP_GPR).ENCLAVEADDRESS ≠ DS:TMP_GPR) or (EPCM(DS:TMP_GPR).PT ≠ PT_REG) or

+

(EPCM(DS:TMP_GPR).ENCLAVESECS ≠ EPCM(DS:RBX).ENCLAVESECS) or

+

(EPCM(DS:TMP_GPR).R = 0) or (EPCM(DS:TMP_GPR).W = 0) )

+

THEN #PF(DS:TMP_GPR); FI;

+

IF (TMP_MODE64 = 0)

+

THEN

+

IF (TMP_GPR + (GPR_SIZE -1) is not in DS segment) THEN #GP(0); FI;

+

FI;

+

CR_GPR_PA := Physical_Address (DS: TMP_GPR);

+

(* Validate TCS.OENTRY *)

+

TMP_TARGET := (DS:RBX).OENTRY + TMP_SECS.BASEADDR;

+

IF (TMP_MODE64 = 1)

+

THEN

+

IF (TMP_TARGET is not canonical) THEN #GP(0); FI;

+

ELSE

+

IF (TMP_TARGET > CS limit) THEN #GP(0); FI;

+

FI;

+

(* Check proposed FS/GS segments fall within DS *)

+

IF (TMP_MODE64 = 0)

+

THEN

+

TMP_FSBASE := (DS:RBX).OFSBASE + TMP_SECS.BASEADDR;

+

TMP_FSLIMIT := (DS:RBX).OFSBASE + TMP_SECS.BASEADDR + (DS:RBX).FSLIMIT;

+

TMP_GSBASE := (DS:RBX).OGSBASE + TMP_SECS.BASEADDR;

+

TMP_GSLIMIT := (DS:RBX).OGSBASE + TMP_SECS.BASEADDR + (DS:RBX).GSLIMIT;

+

(* if FS wrap-around, make sure DS has no holes*)

+

IF (TMP_FSLIMIT < TMP_FSBASE)

+

THEN

+

IF (DS.limit < 4GB) THEN #GP(0); FI;

+

ELSE

+

IF (TMP_FSLIMIT > DS.limit) THEN #GP(0); FI;

+

FI;

+

(* if GS wrap-around, make sure DS has no holes*)

+

IF (TMP_GSLIMIT < TMP_GSBASE)

+

THEN

+

IF (DS.limit < 4GB) THEN #GP(0); FI;

+

ELSE

+

IF (TMP_GSLIMIT > DS.limit) THEN #GP(0); FI;

+

FI;

+

ELSE

+

TMP_FSBASE := (DS:RBX).OFSBASE + TMP_SECS.BASEADDR;

+

TMP_GSBASE := (DS:RBX).OGSBASE + TMP_SECS.BASEADDR;

+

IF ( (TMP_FSBASE is not canonical) or (TMP_GSBASE is not canonical))

+

THEN #GP(0); FI;

+

FI;

+

(* Ensure the enclave is not already active and this thread is the only one using the TCS*)

+

IF (DS:RBX.STATE = ACTIVE)

+

THEN #GP(0); FI;

+

TMP_IA32_U_CET := 0

+

TMP_SSP := 0

+

IF CPUID.(EAX=12H, ECX=1):EAX[6] = 1

+

THEN

+

IF ( CR4.CET = 0 )

+

THEN

+

(* If part does not support CET or CET has not been enabled and enclave requires CET then fail *)

+

IF ( TMP_SECS.CET_ATTRIBUTES ≠ 0 OR TMP_SECS.CET_LEG_BITMAP_OFFSET ≠ 0 ) #GP(0); FI;

+

FI;

+

(* If indirect branch tracking or shadow stacks enabled but CET state save area is not 16B aligned then fail EENTER *)

+

IF ( TMP_SECS.CET_ATTRIBUTES.SH_STK_EN = 1 OR TMP_SECS.CET_ATTRIBUTES.ENDBR_EN = 1 )

+

THEN

+

IF (DS:RBX.OCETSSA is not 16B aligned) #GP(0); FI;

+

FI;

+

IF (TMP_SECS.CET_ATTRIBUTES.SH_STK_EN OR TMP_SECS.CET_ATTRIBUTES.ENDBR_EN)

+

THEN

+

(* Setup CET state from SECS, note tracker goes to IDLE *)

+

TMP_IA32_U_CET = TMP_SECS.CET_ATTRIBUTES;

+

IF (TMP_IA32_U_CET.LEG_IW_EN = 1 AND TMP_IA32_U_CET.ENDBR_EN = 1 )

+

THEN

+

TMP_IA32_U_CET := TMP_IA32_U_CET + TMP_SECS.BASEADDR;

+

TMP_IA32_U_CET := TMP_IA32_U_CET + TMP_SECS.CET_LEG_BITMAP_BASE;

+

FI;

+

(* Compute linear address of what will become new CET state save area and cache its PA *)

+

TMP_CET_SAVE_AREA = DS:RBX.OCETSSA + TMP_SECS.BASEADDR + (DS:RBX.CSSA) * 16

+

TMP_CET_SAVE_PAGE = TMP_CET_SAVE_AREA & ~0xFFF;

+

Check the TMP_CET_SAVE_PAGE page is read/write accessible

+

If fault occurs release locks, abort, and deliver fault

+

(* Read the EPCM VALID, PENDING, MODIFIED, BLOCKED, and PT fields atomically *)

+

IF ((DS:TMP_CET_SAVE_PAGE Does NOT RESOLVE TO EPC PAGE) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).VALID = 0) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).PENDING = 1) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).MODIFIED = 1) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).BLOCKED = 1) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).R = 0) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).W = 0) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).ENCLAVEADDRESS ≠ DS:TMP_CET_SAVE_PAGE) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).PT ≠ PT_SS_REST) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).ENCLAVESECS ≠ EPCM(DS:RBX).ENCLAVESECS))

+

THEN

+

#PF(DS:TMP_CET_SAVE_PAGE);

+

FI;

+

CR_CET_SAVE_AREA_PA := Physical address(DS:TMP_CET_SAVE_AREA)

+

IF TMP_IA32_U_CET.SH_STK_EN = 1

+

THEN

+

TMP_SSP = TCS.PREVSSP;

+

FI;

+

FI;

+

CR_ENCLAVE_MODE := 1;

+

CR_ACTIVE_SECS := TMP_SECS;

+

CR_ELRANGE := (TMP_SECS.BASEADDR, TMP_SECS.SIZE);

+

(* Save state for possible AEXs *)

+

CR_TCS_PA := Physical_Address (DS:RBX);

+

CR_TCS_LA := RBX;

+

CR_TCS_LA.AEP := RCX;

+

(* Save the hidden portions of FS and GS *)

+

CR_SAVE_FS_selector := FS.selector;

+

CR_SAVE_FS_base := FS.base;

+

CR_SAVE_FS_limit := FS.limit;

+

CR_SAVE_FS_access_rights := FS.access_rights;

+

CR_SAVE_GS_selector := GS.selector;

+

CR_SAVE_GS_base := GS.base;

+

CR_SAVE_GS_limit := GS.limit;

+

CR_SAVE_GS_access_rights := GS.access_rights;

+

(* If XSAVE is enabled, save XCR0 and replace it with SECS.ATTRIBUTES.XFRM*)

+

IF (CR4.OSXSAVE = 1)

+

CR_SAVE_XCR0 := XCR0;

+

XCR0 := TMP_SECS.ATTRIBUTES.XFRM;

+

FI;

+

RCX := RIP;

+

RIP := TMP_TARGET;

+

RAX := (DS:RBX).CSSA;

+

(* Save the outside RSP and RBP so they can be restored on interrupt or EEXIT *)

+

DS:TMP_SSA.U_RSP := RSP;

+

DS:TMP_SSA.U_RBP := RBP;

+

(* Do the FS/GS swap *)

+

FS.base := TMP_FSBASE;

+

FS.limit := DS:RBX.FSLIMIT;

+

FS.type := 0001b;

+

FS.W := DS.W;

+

FS.S := 1;

+

FS.DPL := DS.DPL;

+

FS.G := 1;

+

FS.B := 1;

+

FS.P := 1;

+

FS.AVL := DS.AVL;

+

FS.L := DS.L;

+

FS.unusable := 0;

+

FS.selector := 0BH;

+

GS.base := TMP_GSBASE;

+

GS.limit := DS:RBX.GSLIMIT;

+

GS.type := 0001b;

+

GS.W := DS.W;

+

GS.S := 1;

+

GS.DPL := DS.DPL;

+

GS.G := 1;

+

GS.B := 1;

+

GS.P := 1;

+

GS.AVL := DS.AVL;

+

GS.L := DS.L;

+

GS.unusable := 0;

+

GS.selector := 0BH;

+

CR_DBGOPTIN := TCS.FLAGS.DBGOPTIN;

+

Suppress_all_code_breakpoints_that_are_outside_ELRANGE;

+

IF (CR_DBGOPTIN = 0)

+

THEN

+

Suppress_all_code_breakpoints_that_overlap_with_ELRANGE;

+

CR_SAVE_TF := RFLAGS.TF;

+

RFLAGS.TF := 0;

+

Suppress_monitor_trap_flag for the source of the execution of the enclave;

+

Suppress any pending debug exceptions;

+

Suppress any pending MTF VM exit;

+

ELSE

+

IF RFLAGS.TF = 1

+

THEN pend a single-step #DB at the end of EENTER; FI;

+

IF the “monitor trap flag” VM-execution control is set

+

THEN pend an MTF VM exit at the end of EENTER; FI;

+

FI;

+

IF ((CPUID.(EAX=7H, ECX=0):EDX[CET_IBT] = 1) OR (CPUID.(EAX=7H, ECX=0):ECX[CET_SS] = 1))

+

THEN

+

(* Save enclosing application CET state into save registers *)

+

CR_SAVE_IA32_U_CET := IA32_U_CET

+

(* Setup enclave CET state *)

+

IF CPUID.(EAX=07H, ECX=00h):ECX[CET_SS] = 1

+

THEN

+

CR_SAVE_SSP := SSP

+

SSP := TMP_SSP

+

FI;

+

IA32_U_CET := TMP_IA32_U_CET;

+

FI;

+

Flush_linear_context;

+

Allow_front_end_to_begin_fetch_at_new_RIP;

+

Flags Affected + ¶ +

+

RFLAGS.TF is cleared on opt-out entry.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If DS:RBX is not page aligned.
If the enclave is not initialized.
If part or all of the FS or GS segment specified by TCS is outside the DS segment or not properly aligned.
If the thread is not in the INACTIVE state.
If CS, DS, ES or SS bases are not all zero.
If executed in enclave mode.
If any reserved field in the TCS FLAG is set.
If the target address is not within the CS segment.
If CR4.OSFXSR = 0.
If CR4.OSXSAVE = 0 and SECS.ATTRIBUTES.XFRM ≠ 3.
If CR4.OSXSAVE = 1 and SECS.ATTRIBUTES.XFRM is not a subset of XCR0.
If SECS.ATTRIBUTES.AEXNOTIFY ≠ TCS.FLAGS.AEXNOTIFY and TCS.FLAGS.DBGOPTIN = 0.
#PF(errorcode) If a page fault occurs in accessing memory.
If DS:RBX does not point to a valid TCS.
If one or more pages of the current SSA frame are not readable/writable, or do not resolve to a valid PT_REG EPC page.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If DS:RBX is not page aligned.
If the enclave is not initialized.
If the thread is not in the INACTIVE state.
If CS, DS, ES or SS bases are not all zero.
If executed in enclave mode.
If part or all of the FS or GS segment specified by TCS is outside the DS segment or not properly aligned.
If the target address is not canonical.
If CR4.OSFXSR = 0.
If CR4.OSXSAVE = 0 and SECS.ATTRIBUTES.XFRM ≠ 3.
If CR4.OSXSAVE = 1 and SECS.ATTRIBUTES.XFRM is not a subset of XCR0.
If SECS.ATTRIBUTES.AEXNOTIFY ≠ TCS.FLAGS.AEXNOTIFY and TCS.FLAGS.DBGOPTIN = 0.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If DS:RBX does not point to a valid TCS.
If one or more pages of the current SSA frame are not readable/writable, or do not resolve to a valid PT_REG EPC page.
diff --git a/x86/eexit.html b/x86/eexit.html new file mode 100644 index 0000000..d3547fd --- /dev/null +++ b/x86/eexit.html @@ -0,0 +1,212 @@ + +EEXIT + — Exits an Enclave

EEXIT + — Exits an Enclave

+ + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 04H ENCLU[EEXIT]IRV/VSGX1This leaf function is used to exit an enclave.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnEAXRBXRCX
IREEXIT (In)Target address outside the enclave (In)Address of the current AEP (Out)
+

Description + ¶ +

+

The ENCLU[EEXIT] instruction exits the currently executing enclave and branches to the location specified in RBX. RCX receives the current AEP. If RBX is not within the CS (32-bit mode) or is not canonical (64-bit mode) a #GP(0) results.

+

EEXIT Memory Parameter Semantics + ¶ +

+ + + + +
Target Address
Non-Enclave read and execute access
+

If RBX specifies an address that is inside the enclave, the instruction will complete normally. The fetch of the next instruction will occur in non-enclave mode, but will attempt to fetch from inside the enclave. This fetch returns a fixed data pattern.

+

If secrets are contained in any registers, it is the responsibility of enclave software to clear those registers.
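As a hedged illustration of the point above, an enclave runtime might scrub registers that held secrets immediately before issuing ENCLU[EEXIT]. The sketch below assumes a GCC-style inline-assembly helper; the helper name and the particular registers scrubbed are illustrative, not the definitive exit path of any SDK.

#include <stdint.h>

/* Hypothetical enclave-side exit: scrub example registers that may hold
 * secrets, then issue ENCLU[EEXIT] (EAX = 4) with RBX = the untrusted
 * return address. Control does not return to this function. */
static inline void sgx_eexit(uint64_t untrusted_retaddr)
{
    __asm__ volatile(
        "xor %%rdx, %%rdx\n\t"            /* illustrative secret scrubbing */
        "xor %%rsi, %%rsi\n\t"
        "xor %%rdi, %%rdi\n\t"
        ".byte 0x0f, 0x01, 0xd7"          /* ENCLU */
        :
        : "a"((uint64_t)4), "b"(untrusted_retaddr)
        : "rdx", "rsi", "rdi", "memory");
    __builtin_unreachable();
}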

+

If XCR0 was modified on enclave entry, it is restored to the value it had at the time of the most recent EENTER or ERESUME.

+

If the enclave is opt-out, RFLAGS.TF is loaded from the value previously saved on EENTER.

+

Code and data breakpoints are unsuppressed.

+

Performance monitoring counters are unsuppressed.

+

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
On Conflict
EEXITConcurrent
+
Table 38-64. Base Concurrency Restrictions of EEXIT
+
+ + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, vs. EADD, EEXTEND, EINIT +vs. ETRACK, ETRACKC +Access vs. ETRACK, ETRACKC +Access On Conflict +Access vs. ETRACK, ETRACKC +Access On Conflict + EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINIT vs. EADD, EEXTEND, EINIT +vs. ETRACK, ETRACKC +vs. ETRACK, ETRACKC
Access On Conflict +Access On Conflict +Access Access On Conflict +Access On Conflict +
EEXITConcurrentConcurrentConcurrent
+
Table 38-65. Additional Concurrency Restrictions of EEXIT
+

Operation + ¶ +

+

Temp Variables in EEXIT Operational Flow + ¶ +

+ + + + + + + + + + +
NameTypeSize (Bits)Description
TMP_RIPEffective Address32/64Saved copy of CRIP for use when creating LBR.
+

TMP_MODE64 := ((IA32_EFER.LMA = 1) && (CS.L = 1));

+

IF (TMP_MODE64 = 1)

+

THEN

+

IF (RBX is not canonical) THEN #GP(0); FI;

+

ELSE

+

IF (RBX > CS limit) THEN #GP(0); FI;

+

FI;

+

TMP_RIP := CRIP;

+

RIP := RBX;

+

(* Return current AEP in RCX *)

+

RCX := CR_TCS_PA.AEP;

+

(* Do the FS/GS swap *)

+

FS.selector := CR_SAVE_FS.selector;

+

FS.base := CR_SAVE_FS.base;

+

FS.limit := CR_SAVE_FS.limit;

+

FS.access_rights := CR_SAVE_FS.access_rights;

+

GS.selector := CR_SAVE_GS.selector;

+

GS.base := CR_SAVE_GS.base;

+

GS.limit := CR_SAVE_GS.limit;

+

GS.access_rights := CR_SAVE_GS.access_rights;

+

(* Restore XCR0 if needed *)

+

IF (CR4.OSXSAVE = 1)

+

XCR0 := CR_SAVE_XCR0;

+

FI;

+

Unsuppress_all_code_breakpoints_that_are_outside_ELRANGE;

+

IF (CR_DBGOPTIN = 0)

+

THEN

+

UnSuppress_all_code_breakpoints_that_overlap_with_ELRANGE;

+

Restore suppressed breakpoint matches;

+

RFLAGS.TF := CR_SAVE_TF;

+

UnSuppress_monitor_trap_flag;

+

UnSuppress_LBR_Generation;

+

UnSuppress_performance_monitoring_activity;

+

Restore performance monitoring counter AnyThread demotion to MyThread in enclave back to AnyThread

+

FI;

+

IF RFLAGS.TF = 1

+

THEN Pend Single-Step #DB at the end of EEXIT;

+

FI;

+

IF the “monitor trap flag” VM-execution control is set

+

THEN pend a MTF VM exit at the end of EEXIT;

+

FI;

+

IF (CPUID.(EAX=12H, ECX=1):EAX[6] = 1)

+

THEN

+

(* Record PREVSSP *)

+

IF (IA32_U_CET.SH_STK_EN == 1)

+

THEN CR_TCS_PA.PREVSSP = SSP; FI;

+

FI;

+

IF ((CPUID.(EAX=7H, ECX=0):EDX[CET_IBT] = 1) OR (CPUID.(EAX=7H, ECX=0):ECX[CET_SS] = 1))

+

THEN

+

(* Restore enclosing app’s CET state from the save registers *)

+

IA32_U_CET := CR_SAVE_IA32_U_CET;

+

IF CPUID.(EAX=07H, ECX=00h):ECX[CET_SS] = 1

+

THEN SSP := CR_SAVE_SSP; FI;

+

(* Update enclosing app’s TRACKER if enclosing app has indirect branch tracking enabled *)

+

IF (CR4.CET = 1 AND IA32_U_CET.ENDBR_EN = 1)

+

THEN

+

IA32_U_CET.TRACKER := WAIT_FOR_ENDBRANCH;

+

IA32_U_CET.SUPPRESS := 0

+

FI;

+

FI;

+

CR_ENCLAVE_MODE := 0;

+

CR_TCS_PA.STATE := INACTIVE;

+

(* Assure consistent translations *)

+

Flush_linear_context;

+

Flags Affected + ¶ +

+

RFLAGS.TF is restored from the value previously saved in EENTER or ERESUME.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + +
#GP(0)If executed outside an enclave.
If RBX is outside the CS segment.
#PF(errorcode) If a page fault occurs in accessing memory.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + +
#GP(0)If executed outside an enclave.
If RBX is not canonical.
#PF(errorcode) If a page fault occurs in accessing memory operands.
diff --git a/x86/eextend.html b/x86/eextend.html new file mode 100644 index 0000000..421451d --- /dev/null +++ b/x86/eextend.html @@ -0,0 +1,263 @@ + +EEXTEND + — Extend Uninitialized Enclave Measurement by 256 Bytes

EEXTEND + — Extend Uninitialized Enclave Measurement by 256 Bytes

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 06H ENCLS[EEXTEND]IRV/VSGX1This leaf function measures 256 bytes of an uninitialized enclave page.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnEAXRBXRCX
IREEXTEND (In)Effective address of the SECS of the data chunk (In)Effective address of a 256-byte chunk in the EPC (In)
+

Description + ¶ +

+

This leaf function updates the MRENCLAVE measurement register of an SECS with the measurement of an EXTEND string comprising “EEXTEND” || ENCLAVEOFFSET || PADDING || 256 bytes of the enclave page. This instruction can only be executed when the current privilege level is 0 and the enclave is uninitialized.

+

RBX contains the effective address of the SECS of the region to be measured. The address must be the same as the one used to add the page into the enclave.

+

RCX contains the effective address of the 256 byte region of an EPC page to be measured. The DS segment is used to create linear addresses. Segment override is not supported.
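The update field described in the operation below can be modeled in software, for example by a signing tool that replays the enclave build to predict MRENCLAVE. The sketch below (C, using OpenSSL's legacy SHA-256 API as an assumed dependency) shows one plausible way to feed the 64-byte EEXTEND header and the 256-byte chunk into a running hash; the function name and the running-hash model are illustrative assumptions, since the architectural MRENCLAVE state is maintained by the processor.

#include <stdint.h>
#include <string.h>
#include <openssl/sha.h>   /* assumed dependency: OpenSSL legacy SHA-256 API */

/* Illustrative sketch of the data EEXTEND mixes into MRENCLAVE for one
 * 256-byte chunk: a 64-byte header ("EEXTEND" tag, 64-bit offset of the
 * chunk from the enclave base, zero padding) followed by the chunk itself.
 * 'ctx' is a running SHA-256 that models the MRENCLAVE update sequence. */
static void measure_eextend(SHA256_CTX *ctx, uint64_t enclave_offset,
                            const uint8_t chunk[256])
{
    uint8_t field[64] = {0};

    memcpy(field, "EEXTEND", 8);              /* 00444E4554584545H, stored little endian */
    memcpy(field + 8, &enclave_offset, 8);    /* ENCLAVEOFFSET (assumes little-endian host) */
    /* bytes 16..63 stay zero (PADDING) */

    SHA256_Update(ctx, field, sizeof(field));
    SHA256_Update(ctx, chunk, 256);
}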

+

EEXTEND Memory Parameter Semantics + ¶ +

+ + + + +
EPC[RCX]
Read access by Enclave
+

The instruction faults if any of the following:

+

EEXTEND Faulting Conditions + ¶ +

+ + + + + + + + + + + + + + + + + + +
RBX points to an address not 4KBytes aligned.RBX does not resolve to an SECS.
RBX does not point to an SECS page.RBX does not point to the SECS page of the data chunk.
RCX points to an address not 256B aligned.RCX points to an unused page or a SECS.
RCX does not resolve in an EPC page.If SECS is locked.
If the SECS is already initialized.May page fault.
CPL > 0.
+

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn ConflictSGX_CONFLICT VM Exit Qualification
EEXTENDTarget [DS:RCX]Shared#GP
SECS [DS:RBX]Concurrent
+
Table 38-23. Base Concurrency Restrictions of EEXTEND
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
EEXTENDTarget [DS:RCX]ConcurrentConcurrentConcurrent
SECS [DS:RBX]ConcurrentExclusive#GPConcurrent
+
Table 38-24. Additional Concurrency Restrictions of EEXTEND
+

Operation + ¶ +

+

Temp Variables in EEXTEND Operational Flow + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
NameTypeSize (Bits)Description
TMP_SECS64Physical address of SECS of the enclave to which source operand belongs.
TMP_ENCLAVEOFFSETEnclave Offset64The page displacement from the enclave base address.
TMPUPDATEFIELDSHA256 Buffer512Buffer used to hold data being added to TMP_SECS.MRENCLAVE.
+

TMP_MODE64 := ((IA32_EFER.LMA = 1) && (CS.L = 1));

+

IF (DS:RBX is not 4096 Byte Aligned)

+

THEN #GP(0); FI;

+

IF (DS:RBX does not resolve to an EPC page)

+

THEN #PF(DS:RBX); FI;

+

IF (DS:RCX is not 256Byte Aligned)

+

THEN #GP(0); FI;

+

IF (DS:RCX does not resolve within an EPC)

+

THEN #PF(DS:RCX); FI;

+

(* make sure no other Intel SGX instruction is accessing EPCM *)

+

IF (Other instructions accessing EPCM)

+

THEN #GP(0); FI;

+

IF (EPCM(DS:RCX). VALID = 0)

+

THEN #PF(DS:RCX); FI;

+

(* make sure that DS:RCX (DST) is pointing to a PT_REG or PT_TCS or PT_SS_FIRST or PT_SS_REST *)

+

IF ( (EPCM(DS:RCX).PT ≠ PT_REG) and (EPCM(DS:RCX).PT ≠ PT_TCS)

+

and (EPCM(DS:RCX).PT ≠ PT_SS_FIRST) and (EPCM(DS:RCX).PT ≠ PT_SS_REST))

+

THEN #PF(DS:RCX); FI;

+

TMP_SECS := Get_SECS_ADDRESS();

+

IF (DS:RBX does not resolve to TMP_SECS)

+

THEN #GP(0); FI;

+

(* make sure no other instruction is accessing MRENCLAVE or ATTRIBUTES.INIT *)

+

IF ( (Other instruction accessing MRENCLAVE) or (Other instructions checking or updating the initialized state of the SECS))

+

THEN #GP(0); FI;

+

(* Calculate enclave offset *)

+

TMP_ENCLAVEOFFSET := EPCM(DS:RCX).ENCLAVEADDRESS - TMP_SECS.BASEADDR;

+

TMP_ENCLAVEOFFSET := TMP_ENCLAVEOFFSET + (DS:RCX & 0FFFH)

+

(* Add EEXTEND message and offset to MRENCLAVE *)

+

TMPUPDATEFIELD[63:0] := 00444E4554584545H; // “EEXTEND”

+

TMPUPDATEFIELD[127:64] := TMP_ENCLAVEOFFSET;

+

TMPUPDATEFIELD[511:128] := 0; // 48 bytes

+

TMP_SECS.MRENCLAVE := SHA256UPDATE(TMP_SECS.MRENCLAVE, TMPUPDATEFIELD)

+

INC enclave’s MRENCLAVE update counter;

+

(*Add 256 bytes to MRENCLAVE, 64 byte at a time *)

+

TMP_SECS.MRENCLAVE := SHA256UPDATE(TMP_SECS.MRENCLAVE, DS:RCX[511:0] );

+

TMP_SECS.MRENCLAVE := SHA256UPDATE(TMP_SECS.MRENCLAVE, DS:RCX[1023: 512] );

+

TMP_SECS.MRENCLAVE := SHA256UPDATE(TMP_SECS.MRENCLAVE, DS:RCX[1535: 1024] );

+

TMP_SECS.MRENCLAVE := SHA256UPDATE(TMP_SECS.MRENCLAVE, DS:RCX[2047: 1536] );

+

INC enclave’s MRENCLAVE update counter by 4;

+

Flags Affected + ¶ +

+

None

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the address in RBX is outside the DS segment limit.
If RBX points to an SECS page which is not the SECS of the data chunk.
If the address in RCX is outside the DS segment limit.
If RCX points to a memory location not 256Byte-aligned.
If another instruction is accessing MRENCLAVE.
If another instruction is checking or updating the SECS.
If the enclave is already initialized.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If the address in RBX points to a non-EPC page.
If the address in RCX points to a page which is not PT_TCS or PT_REG.
If the address in RCX points to a non-EPC page.
If the address in RCX points to an invalid EPC page.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If RBX is non-canonical form.
If RBX points to an SECS page which is not the SECS of the data chunk.
If RCX is non-canonical form.
If RCX points to a memory location not 256 Byte-aligned.
If another instruction is accessing MRENCLAVE.
If another instruction is checking or updating the SECS.
If the enclave is already initialized.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If the address in RBX points to a non-EPC page.
If the address in RCX points to a page which is not PT_TCS or PT_REG.
If the address in RCX points to a non-EPC page.
If the address in RCX points to an invalid EPC page.
diff --git a/x86/egetkey.html b/x86/egetkey.html new file mode 100644 index 0000000..0c09560 --- /dev/null +++ b/x86/egetkey.html @@ -0,0 +1,687 @@ + +EGETKEY + — Retrieves a Cryptographic Key

EGETKEY + — Retrieves a Cryptographic Key

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 01H ENCLU[EGETKEY]IRV/VSGX1This leaf function retrieves a cryptographic key.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + +
Op/EnEAXRBXRCX
IREGETKEY (In)Return error code (Out)Address to a KEYREQUEST (In)Address of the OUTPUTDATA (In)
+

Description + ¶ +

+

The ENCLU[EGETKEY] instruction returns a 128-bit secret key from the processor specific key hierarchy. The register RBX contains the effective address of a KEYREQUEST structure, which the instruction interprets to determine the key being requested. The Requesting Keys section below provides a description of the keys that can be requested. The RCX register contains the effective address where the key will be returned. Both the addresses in RBX & RCX should be locations inside the enclave.

+

EGETKEY derives keys using a processor unique value to create a specific key based on a number of possible inputs. This instruction leaf can only be executed inside an enclave.

+

EGETKEY Memory Parameter Semantics + ¶ +

+ + + + + + +
KEYREQUESTOUTPUTDATA
Enclave read accessEnclave write access
+

After validating the operands, the instruction determines which key is to be produced and performs the following actions:

+
    +
  • The instruction assembles the derivation data for the key based on the Table 38-66.
  • +
  • Computes derived key using the derivation data and package specific value.
  • +
  • Outputs the calculated key to the address in RCX.
+

The instruction fails with #GP(0) if the operands are not properly aligned. Successful completion of the instruction will clear RFLAGS.{ZF, CF, AF, OF, SF, PF}. The instruction returns an error code if the user tries to request a key based on an invalid CPUSVN or ISVSVN (when the user request is accepted, see the table below), requests a key for which it has not been granted the attribute to request, or requests a key that is not supported by the hardware. These checks may be performed in any order. Thus, an indication by error number of one cause (for example, invalid attribute) does not imply that there are not also other errors. Different processors may thus give different error numbers for the same Enclave. The correctness of software should not rely on the order resulting from the checks documented in this section. In such cases the ZF flag is set and the corresponding error bit (SGX_INVALID_SVN, SGX_INVALID_ATTRIBUTE, SGX_INVALID_KEYNAME) is set in RAX and the data at the address specified by RCX is unmodified.
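A minimal, hypothetical enclave-side wrapper for this flow might look as follows. The alignment requirements come from the operation below (512-byte-aligned KEYREQUEST, 16-byte-aligned output); the helper name, the opaque buffer handling, and the calling convention are assumptions for illustration only.

#include <stdint.h>

/* Hypothetical enclave-side wrapper around ENCLU[EGETKEY] (EAX = 1).
 * 'keyrequest' must be a 512-byte-aligned KEYREQUEST structure and 'outkey'
 * a 16-byte-aligned 128-bit buffer, both inside the enclave. Returns 0 on
 * success or one of the SGX_INVALID_* codes (from RAX) on failure. */
static inline uint64_t sgx_egetkey(const void *keyrequest, void *outkey)
{
    uint64_t rax = 1;   /* ENCLU leaf: EGETKEY */

    __asm__ volatile(
        ".byte 0x0f, 0x01, 0xd7"   /* ENCLU */
        : "+a"(rax)
        : "b"(keyrequest), "c"(outkey)
        : "memory", "cc");

    return rax;         /* 0 on success; ZF is also set on failure */
}

/* Example buffers with the required alignment; keyreq must be filled in
 * according to the KEYREQUEST layout before the call. */
static uint8_t keyreq[512] __attribute__((aligned(512)));
static uint8_t key[16]     __attribute__((aligned(16)));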

+

Requesting Keys

+

The KEYREQUEST structure (see Section 35.18.1) identifies the key to be provided. The Keyrequest.KeyName field identifies which type of key is requested.

+

Deriving Keys

+

Key derivation is based on a combination of the enclave specific values (see Table 38-66) and a processor key. Depending on the key being requested, a field may either be included by definition or the value may be included from the KeyRequest. A “yes” in Table 38-66 indicates the value for the field is included from its default location, identified in the source row, and a “request” indicates the value for the field is included from its corresponding KeyRequest field.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Key NameAttributesOwner EpochCPU SVNISV SVNISV PRODIDISVEXT PRODIDISVFAM ILYIDMRENCLAVEMRSIGNERCONFIG IDCONFIGS VNRAND
SourceKey Dependent ConstantY := SECS.ATTRIBUTES and SECS.MISCSELECT and SECS.CET_ATTRIB UTES;CR_SGX OWNER EPOCHY := CPUSVN Register;R := Req.ISV SVN;SECS. ISVIDSECS.IS VEXTPR ODIDSECS.IS VFAMIL YIDSECS. MRENCLAVESECS. MRSIGNERSECS.CO NFIGIDSECS.CO NFIGSVNReq. KEYID
R := AttribMask & SECS.ATTRIBUTES and SECS.MISCSELECT and SECS.CET_ATTRIB UTES;R := Req.CPU SVN;
EINITTOKENYesRequestYesRequestRequestYesNoNoNoYesNoNoRequest
ReportYesYesYesYesNoNoNoNoYesNoYesYesRequest
SealYesRequestYesRequestRequestRequestRequestRequestRequestRequestRequestRequestRequest
ProvisioningYesRequestNoRequestRequestYesNoNoNoYesNoNoYes
Provisioning SealYesRequestNoRequestRequestRequestRequestRequestNoYesRequestRequestYes
+
Table 38-66. Key Derivation
+

Keys that permit specifying an SVN (of the CPU, of the ISV's code, or of the enclave configuration) have additional requirements: the caller may not request a key for an SVN beyond the current CPU, ISV, or enclave configuration SVN, respectively.

+

Several keys are access controlled. Access to the Provisioning Key and Provisioning Seal key requires the enclave's ATTRIBUTES.PROVISIONKEY be set. The EINITTOKEN Key requires ATTRIBUTES.EINITTOKEN_KEY be set and SECS.MRSIGNER equal IA32_SGXLEPUBKEYHASH.

+

Some keys are derived based on a hardcoded PKCS #1 v1.5 padding constant (a 352-byte string):

+

HARDCODED_PKCS1_5_PADDING[15:0] := 0100H;

+

HARDCODED_PKCS1_5_PADDING[2655:16] := SignExtend330Byte(-1); // 330 bytes of 0FFH

+

HARDCODED_PKCS1_5_PADDING[2815:2656] := 2004000501020403650148866009060D30313000H;
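As an illustration, the constant above can be materialized as a 352-byte buffer. The sketch below assumes the bit-range assignments are stored least-significant byte first, as with other SGX structures; the function name is hypothetical and this is not an authoritative encoding.

#include <stdint.h>
#include <string.h>

/* Illustrative construction of the 352-byte HARDCODED_PKCS1_5_PADDING
 * constant, interpreting the bit ranges above as little-endian byte order
 * (byte 0 holds bits 7:0). */
static void build_pkcs1_5_padding(uint8_t pad[352])
{
    /* [15:0] = 0100H */
    pad[0] = 0x00;
    pad[1] = 0x01;

    /* [2655:16] = 330 bytes of 0FFH */
    memset(pad + 2, 0xFF, 330);

    /* [2815:2656] = 2004000501020403650148866009060D30313000H (20 bytes) */
    static const uint8_t tail[20] = {
        0x00, 0x30, 0x31, 0x30, 0x0D, 0x06, 0x09, 0x60, 0x86, 0x48,
        0x01, 0x65, 0x03, 0x04, 0x02, 0x01, 0x05, 0x00, 0x04, 0x20
    };
    memcpy(pad + 332, tail, sizeof(tail));
}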

+

The error codes are:

+
+ + + + + + + + + + + + + + + + + + + + + + + + +
Error Code (see Table 38-4)ValueDescription
No Error0EGETKEY successful.
SGX_INVALID_ATTRIBUTEThe KEYREQUEST contains a KEYNAME for which the enclave is not authorized.
SGX_INVALID_CPUSVNIf KEYREQUEST.CPUSVN is an unsupported platforms CPUSVN value.
SGX_INVALID_ISVSVNIf KEYREQUEST software SVN (ISVSVN or CONFIGSVN) is greater than the enclave's corresponding SVN.
SGX_INVALID_KEYNAMEIf KEYREQUEST.KEYNAME is an unsupported value.
+
Table 38-67. EGETKEY Return Value in RAX
+

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn ConflictSGX_CONFLICT VM Exit Qualification
EGETKEYKEYREQUEST [DS:RBX]Concurrent
OUTPUTDATA [DS:RCX]Concurrent
+
Table 38-68. Base Concurrency Restrictions of EGETKEY
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
EGETKEYKEYREQUEST [DS:RBX]ConcurrentConcurrentConcurrent
OUTPUTDATA [DS:RCX]ConcurrentConcurrentConcurrent
+
Table 38-69. Additional Concurrency Restrictions of EGETKEY
+

Operation + ¶ +

+

Temp Variables in EGETKEY Operational Flow + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeSize (Bits)Description
TMP_CURRENTSECSAddress of the SECS for the currently executing enclave.
TMP_KEYDEPENDENCIESTemp space for key derivation.
TMP_ATTRIBUTES128Temp Space for the calculation of the sealable Attributes.
TMP_ISVEXTPRODID16 bytesTemp Space for ISVEXTPRODID.
TMP_ISVPRODID2 bytesTemp Space for ISVPRODID.
TMP_ISVFAMILYID16 bytesTemp Space for ISVFAMILYID.
TMP_CONFIGID64 bytesTemp Space for CONFIGID.
TMP_CONFIGSVN2 bytesTemp Space for CONFIGSVN.
TMP_OUTPUTKEY128Temp Space for the calculation of the key.
+

(* Make sure KEYREQUEST is properly aligned and inside the current enclave *)

+

IF ( (DS:RBX is not 512Byte aligned) or (DS:RBX is not within CR_ELRANGE) )

+

THEN #GP(0); FI;

+

(* Make sure DS:RBX is an EPC address and the EPC page is valid *)

+

IF ( (DS:RBX does not resolve to an EPC address) or (EPCM(DS:RBX).VALID = 0) )

+

THEN #PF(DS:RBX); FI;

+

IF (EPCM(DS:RBX).BLOCKED = 1)

+

THEN #PF(DS:RBX); FI;

+

(* Check page parameters for correctness *)

+

IF ( (EPCM(DS:RBX).PT ≠ PT_REG) or (EPCM(DS:RBX).ENCLAVESECS ≠ CR_ACTIVE_SECS) or (EPCM(DS:RBX).PENDING = 1) or

+

(EPCM(DS:RBX).MODIFIED = 1) or (EPCM(DS:RBX).ENCLAVEADDRESS ≠ (DS:RBX & ~0FFFH) ) or (EPCM(DS:RBX).R = 0) )

+

THEN #PF(DS:RBX);

+

FI;

+

(* Make sure OUTPUTDATA is properly aligned and inside the current enclave *)

+

IF ( (DS:RCX is not 16Byte aligned) or (DS:RCX is not within CR_ELRANGE) )

+

THEN #GP(0); FI;

+

(* Make sure DS:RCX is an EPC address and the EPC page is valid *)

+

IF ( (DS:RCX does not resolve to an EPC address) or (EPCM(DS:RCX).VALID = 0) )

+

THEN #PF(DS:RCX); FI;

+

IF (EPCM(DS:RCX).BLOCKED = 1)

+

THEN #PF(DS:RCX); FI;

+

(* Check page parameters for correctness *)

+

IF ( (EPCM(DS:RCX).PT ≠ PT_REG) or (EPCM(DS:RCX).ENCLAVESECS ≠ CR_ACTIVE_SECS) or (EPCM(DS:RCX).PENDING = 1) or

+

(EPCM(DS:RCX).MODIFIED = 1) or (EPCM(DS:RCX).ENCLAVEADDRESS ≠ (DS:RCX & ~0FFFH) ) or (EPCM(DS:RCX).W = 0) )

+

THEN #PF(DS:RCX);

+

FI;

+

(* Verify RESERVED spaces in KEYREQUEST are valid *)

+

IF ( ((DS:RBX).RESERVED ≠ 0) or (DS:RBX.KEYPOLICY.RESERVED ≠ 0) )

+

THEN #GP(0); FI;

+

TMP_CURRENTSECS := CR_ACTIVE_SECS;

+

(* Verify that CONFIGSVN & New Policy bits are not used if KSS is not enabled *)

+

IF ((TMP_CURRENTSECS.ATTRIBUTES.KSS == 0) AND ((DS:RBX.KEYPOLICY & 0x003C ≠ 0) OR (DS:RBX.CONFIGSVN > 0)))

+

THEN #GP(0); FI;

+

(* Determine which enclave attributes must be included in the key. Attributes that must always be included are INIT & DEBUG *)

+

REQUIRED_SEALING_MASK[127:0] := 00000000 00000000 00000000 00000003H;

+

TMP_ATTRIBUTES := (DS:RBX.ATTRIBUTEMASK | REQUIRED_SEALING_MASK) & TMP_CURRENTSECS.ATTRIBUTES;

+

(* Compute MISCSELECT fields to be included *)

+

TMP_MISCSELECT := DS:RBX.MISCMASK & TMP_CURRENTSECS.MISCSELECT

+

(* Compute CET_ATTRIBUTES fields to be included *)

+

IF (CPUID.(EAX=12H, ECX=1):EAX[6] = 1)

+

THEN TMP_CET_ATTRIBUTES := DS:RBX.CET_ATTRIBUTES_MASK & TMP_CURRENTSECS.CET_ATTRIBUTES; FI;

+

TMP_KEYDEPENDENCIES := 0;

+

CASE (DS:RBX.KEYNAME)

+

SEAL_KEY:

+

IF (DS:RBX.CPUSVN is beyond current CPU configuration)

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_CPUSVN;

+

GOTO EXIT;

+

FI;

+

IF (DS:RBX.ISVSVN > TMP_CURRENTSECS.ISVSVN)

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_ISVSVN;

+

GOTO EXIT;

+

FI;

+

IF (DS:RBX.CONFIGSVN > TMP_CURRENTSECS.CONFIGSVN)

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_ISVSVN;

+

GOTO EXIT;

+

FI;

+

(*Include enclave identity?*)

+

TMP_MRENCLAVE := 0;

+

IF (DS:RBX.KEYPOLICY.MRENCLAVE = 1)

+

THEN TMP_MRENCLAVE := TMP_CURRENTSECS.MRENCLAVE;

+

FI;

+

(*Include enclave author?*)

+

TMP_MRSIGNER := 0;

+

IF (DS:RBX.KEYPOLICY.MRSIGNER = 1)

+

THEN TMP_MRSIGNER := TMP_CURRENTSECS.MRSIGNER;

+

FI;

+

(* Include enclave product family ID? *)

+

TMP_ISVFAMILYID := 0;

+

IF (DS:RBX.KEYPOLICY.ISVFAMILYID = 1)

+

THEN TMP_ISVFAMILYID := TMP_CURRENTSECS.ISVFAMILYID;

+

FI;

+

(* Include enclave product ID? *)

+

TMP_ISVPRODID := 0;

+

IF (DS:RBX.KEYPOLICY.NOISVPRODID = 0)

+

TMP_ISVPRODID := TMP_CURRENTSECS.ISVPRODID;

+

FI;

+

(* Include enclave Config ID? *)

+

TMP_CONFIGID := 0;

+

TMP_CONFIGSVN := 0;

+

IF (DS:RBX.KEYPOLICY.CONFIGID = 1)

+

TMP_CONFIGID := TMP_CURRENTSECS.CONFIGID;

+

TMP_CONFIGSVN := DS:RBX.CONFIGSVN;

+

FI;

+

(* Include enclave extended product ID? *)

+

TMP_ISVEXTPRODID := 0;

+

IF (DS:RBX.KEYPOLICY.ISVEXTPRODID = 1 )

+

TMP_ISVEXTPRODID := TMP_CURRENTSECS.ISVEXTPRODID;

+

FI;

+

//Determine values key is based on

+

TMP_KEYDEPENDENCIES.KEYNAME := SEAL_KEY;

+

TMP_KEYDEPENDENCIES.ISVFAMILYID := TMP_ISVFAMILYID;

+

TMP_KEYDEPENDENCIES.ISVEXTPRODID := TMP_ISVEXTPRODID;

+

TMP_KEYDEPENDENCIES.ISVPRODID := TMP_ISVPRODID;

+

TMP_KEYDEPENDENCIES.ISVSVN := DS:RBX.ISVSVN;

+

TMP_KEYDEPENDENCIES.SGXOWNEREPOCH := CR_SGXOWNEREPOCH;

+

TMP_KEYDEPENDENCIES.ATTRIBUTES := TMP_ATTRIBUTES;

+

TMP_KEYDEPENDENCIES.ATTRIBUTESMASK := DS:RBX.ATTRIBUTEMASK;

+

TMP_KEYDEPENDENCIES.MRENCLAVE := TMP_MRENCLAVE;

+

TMP_KEYDEPENDENCIES.MRSIGNER := TMP_MRSIGNER;

+

TMP_KEYDEPENDENCIES.KEYID := DS:RBX.KEYID;

+

TMP_KEYDEPENDENCIES.SEAL_KEY_FUSES := CR_SEAL_FUSES;

+

TMP_KEYDEPENDENCIES.CPUSVN := DS:RBX.CPUSVN;

+

TMP_KEYDEPENDENCIES.PADDING := TMP_CURRENTSECS.PADDING;

+

TMP_KEYDEPENDENCIES.MISCSELECT := TMP_MISCSELECT;

+

TMP_KEYDEPENDENCIES.MISCMASK := ~DS:RBX.MISCMASK;

+

TMP_KEYDEPENDENCIES.KEYPOLICY := DS:RBX.KEYPOLICY;

+

TMP_KEYDEPENDENCIES.CONFIGID := TMP_CONFIGID;

+

TMP_KEYDEPENDENCIES.CONFIGSVN := TMP_CONFIGSVN;

+

IF CPUID.(EAX=12H, ECX=1):EAX[6] = 1

+

THEN

+

TMP_KEYDEPENDENCIES.CET_ATTRIBUTES := TMP_CET_ATTRIBUTES;

+

TMP_KEYDEPENDENCIES.CET_ATTRIBUTES_MASK := DS:RBX.CET_ATTRIBUTES_MASK;

+

FI;

+

BREAK;

+

REPORT_KEY:

+

//Determine values key is based on

+

TMP_KEYDEPENDENCIES.KEYNAME := REPORT_KEY;

+

TMP_KEYDEPENDENCIES.ISVFAMILYID := 0;

+

TMP_KEYDEPENDENCIES.ISVEXTPRODID := 0;

+

TMP_KEYDEPENDENCIES.ISVPRODID := 0;

+

TMP_KEYDEPENDENCIES.ISVSVN := 0;

+

TMP_KEYDEPENDENCIES.SGXOWNEREPOCH := CR_SGXOWNEREPOCH;

+

TMP_KEYDEPENDENCIES.ATTRIBUTES := TMP_CURRENTSECS.ATTRIBUTES;

+

TMP_KEYDEPENDENCIES.ATTRIBUTESMASK := 0;

+

TMP_KEYDEPENDENCIES.MRENCLAVE := TMP_CURRENTSECS.MRENCLAVE;

+

TMP_KEYDEPENDENCIES.MRSIGNER := 0;

+

TMP_KEYDEPENDENCIES.KEYID := DS:RBX.KEYID;

+

TMP_KEYDEPENDENCIES.SEAL_KEY_FUSES := CR_SEAL_FUSES;

+

TMP_KEYDEPENDENCIES.CPUSVN := CR_CPUSVN;

+

TMP_KEYDEPENDENCIES.PADDING := HARDCODED_PKCS1_5_PADDING;

+

TMP_KEYDEPENDENCIES.MISCSELECT := TMP_CURRENTSECS.MISCSELECT;

+

TMP_KEYDEPENDENCIES.MISCMASK := 0;

+

TMP_KEYDEPENDENCIES.KEYPOLICY := 0;

+

TMP_KEYDEPENDENCIES.CONFIGID := TMP_CURRENTSECS.CONFIGID;

+

TMP_KEYDEPENDENCIES.CONFIGSVN := TMP_CURRENTSECS.CONFIGSVN;

+

IF (CPUID.(EAX=12H, ECX=1):EAX[6] = 1)

+

THEN

+

TMP_KEYDEPENDENCIES.CET_ATTRIBUTES := TMP_CURRENTSECS.CET_ATTRIBUTES;

+

TMP_KEYDEPENDENCIES.CET_ATTRIBUTES_MASK := 0;

+

FI;

+

BREAK;

+

EINITTOKEN_KEY:

+

(* Check ENCLAVE has EINITTOKEN Key capability *)

+

IF (TMP_CURRENTSECS.ATTRIBUTES.EINITTOKEN_KEY = 0)

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_ATTRIBUTE;

+

GOTO EXIT;

+

FI;

+

IF (DS:RBX.CPUSVN is beyond current CPU configuration)

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_CPUSVN;

+

GOTO EXIT;

+

FI;

+

IF (DS:RBX.ISVSVN > TMP_CURRENTSECS.ISVSVN)

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_ISVSVN;

+

GOTO EXIT;

+

FI;

+

(* Determine values key is based on *)

+

TMP_KEYDEPENDENCIES.KEYNAME := EINITTOKEN_KEY;

+

TMP_KEYDEPENDENCIES.ISVFAMILYID := 0;

+

TMP_KEYDEPENDENCIES.ISVEXTPRODID := 0;

+

TMP_KEYDEPENDENCIES.ISVPRODID := TMP_CURRENTSECS.ISVPRODID;

+

TMP_KEYDEPENDENCIES.ISVSVN := DS:RBX.ISVSVN;

+

TMP_KEYDEPENDENCIES.SGXOWNEREPOCH := CR_SGXOWNEREPOCH;

+

TMP_KEYDEPENDENCIES.ATTRIBUTES := TMP_ATTRIBUTES;

+

TMP_KEYDEPENDENCIES.ATTRIBUTESMASK := 0;

+

TMP_KEYDEPENDENCIES.MRENCLAVE := 0;

+

TMP_KEYDEPENDENCIES.MRSIGNER := TMP_CURRENTSECS.MRSIGNER;

+

TMP_KEYDEPENDENCIES.KEYID := DS:RBX.KEYID;

+

TMP_KEYDEPENDENCIES.SEAL_KEY_FUSES := CR_SEAL_FUSES;

+

TMP_KEYDEPENDENCIES.CPUSVN := DS:RBX.CPUSVN;

+

TMP_KEYDEPENDENCIES.PADDING := TMP_CURRENTSECS.PADDING;

+

TMP_KEYDEPENDENCIES.MISCSELECT := TMP_MISCSELECT;

+

TMP_KEYDEPENDENCIES.MISCMASK := 0;

+

TMP_KEYDEPENDENCIES.KEYPOLICY := 0;

+

TMP_KEYDEPENDENCIES.CONFIGID := 0;

+

TMP_KEYDEPENDENCIES.CONFIGSVN := 0;

+

IF (CPUID.(EAX=12H, ECX=1):EAX[6] = 1)

+

THEN

+

TMP_KEYDEPENDENCIES.CET_ATTRIBUTES := TMP_CET_ATTRIBUTES;

+

TMP_KEYDEPENDENCIES.CET_ATTRIBUTES_MASK := 0;

+

FI;

+

BREAK;

+

PROVISION_KEY:

+

(* Check ENCLAVE has PROVISIONING capability *)

+

IF (TMP_CURRENTSECS.ATTRIBUTES.PROVISIONKEY = 0)

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_ATTRIBUTE;

+

GOTO EXIT;

+

FI;

+

IF (DS:RBX.CPUSVN is beyond current CPU configuration)

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_CPUSVN;

+

GOTO EXIT;

+

FI;

+

IF (DS:RBX.ISVSVN > TMP_CURRENTSECS.ISVSVN)

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_ISVSVN;

+

GOTO EXIT;

+

FI;

+

(* Determine values key is based on *)

+

TMP_KEYDEPENDENCIES.KEYNAME := PROVISION_KEY;

+

TMP_KEYDEPENDENCIES.ISVFAMILYID := 0;

+

TMP_KEYDEPENDENCIES.ISVEXTPRODID := 0;

+

TMP_KEYDEPENDENCIES.ISVPRODID := TMP_CURRENTSECS.ISVPRODID;

+

TMP_KEYDEPENDENCIES.ISVSVN := DS:RBX.ISVSVN;

+

TMP_KEYDEPENDENCIES.SGXOWNEREPOCH := 0;

+

TMP_KEYDEPENDENCIES.ATTRIBUTES := TMP_ATTRIBUTES;

+

TMP_KEYDEPENDENCIES.ATTRIBUTESMASK := DS:RBX.ATTRIBUTEMASK;

+

TMP_KEYDEPENDENCIES.MRENCLAVE := 0;

+

TMP_KEYDEPENDENCIES.MRSIGNER := TMP_CURRENTSECS.MRSIGNER;

+

TMP_KEYDEPENDENCIES.KEYID := 0;

+

TMP_KEYDEPENDENCIES.SEAL_KEY_FUSES := 0;

+

TMP_KEYDEPENDENCIES.CPUSVN := DS:RBX.CPUSVN;

+

TMP_KEYDEPENDENCIES.PADDING := TMP_CURRENTSECS.PADDING;

+

TMP_KEYDEPENDENCIES.MISCSELECT := TMP_MISCSELECT;

+

TMP_KEYDEPENDENCIES.MISCMASK := ~DS:RBX.MISCMASK;

+

TMP_KEYDEPENDENCIES.KEYPOLICY := 0;

+

TMP_KEYDEPENDENCIES.CONFIGID := 0;

+

IF (CPUID.(EAX=12H, ECX=1):EAX[6] = 1)

+

THEN

+

TMP_KEYDEPENDENCIES.CET_ATTRIBUTES := TMP_CET_ATTRIBUTES;

+

TMP_KEYDEPENDENCIES.CET_ATTRIBUTES_MASK := 0;

+

FI;

+

BREAK;

+

PROVISION_SEAL_KEY:

+

(* Check ENCLAVE has PROVISIONING capability *)

+

IF (TMP_CURRENTSECS.ATTRIBUTES.PROVISIONKEY = 0)

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_ATTRIBUTE;

+

GOTO EXIT;

+

FI;

+

IF (DS:RBX.CPUSVN is beyond current CPU configuration)

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_CPUSVN;

+

GOTO EXIT;

+

FI;

+

IF (DS:RBX.ISVSVN > TMP_CURRENTSECS.ISVSVN)

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_ISVSVN;

+

GOTO EXIT;

+

FI;

+

(* Include enclave product family ID? *)

+

TMP_ISVFAMILYID := 0;

+

IF (DS:RBX.KEYPOLICY.ISVFAMILYID = 1)

+

THEN TMP_ISVFAMILYID := TMP_CURRENTSECS.ISVFAMILYID;

+

FI;

+

(* Include enclave product ID? *)

+

TMP_ISVPRODID := 0;

+

IF (DS:RBX.KEYPOLICY.NOISVPRODID = 0)

+

TMP_ISVPRODID := TMP_CURRENTSECS.ISVPRODID;

+

FI;

+

(* Include enclave Config ID? *)

+

TMP_CONFIGID := 0;

+

TMP_CONFIGSVN := 0;

+

IF (DS:RBX.KEYPOLICY.CONFIGID = 1)

+

TMP_CONFIGID := TMP_CURRENTSECS.CONFIGID;

+

TMP_CONFIGSVN := DS:RBX.CONFIGSVN;

+

FI;

+

(* Include enclave extended product ID? *)

+

TMP_ISVEXTPRODID := 0;

+

IF (DS:RBX.KEYPOLICY.ISVEXTPRODID = 1)

+

TMP_ISVEXTPRODID := TMP_CURRENTSECS.ISVEXTPRODID;

+

FI;

+

(* Determine values key is based on *)

+

TMP_KEYDEPENDENCIES.KEYNAME := PROVISION_SEAL_KEY;

+

TMP_KEYDEPENDENCIES.ISVFAMILYID := TMP_ISVFAMILYID;

+

TMP_KEYDEPENDENCIES.ISVEXTPRODID := TMP_ISVEXTPRODID;

+

TMP_KEYDEPENDENCIES.ISVPRODID := TMP_ISVPRODID;

+

TMP_KEYDEPENDENCIES.ISVSVN := DS:RBX.ISVSVN;

+

TMP_KEYDEPENDENCIES.SGXOWNEREPOCH := 0;

+

TMP_KEYDEPENDENCIES.ATTRIBUTES := TMP_ATTRIBUTES;

+

TMP_KEYDEPENDENCIES.ATTRIBUTESMASK := DS:RBX.ATTRIBUTEMASK;

+

TMP_KEYDEPENDENCIES.MRENCLAVE := 0;

+

TMP_KEYDEPENDENCIES.MRSIGNER := TMP_CURRENTSECS.MRSIGNER;

+

TMP_KEYDEPENDENCIES.KEYID := 0;

+

TMP_KEYDEPENDENCIES.SEAL_KEY_FUSES := CR_SEAL_FUSES;

+

TMP_KEYDEPENDENCIES.CPUSVN := DS:RBX.CPUSVN;

+

TMP_KEYDEPENDENCIES.PADDING := TMP_CURRENTSECS.PADDING;

+

TMP_KEYDEPENDENCIES.MISCSELECT := TMP_MISCSELECT;

+

TMP_KEYDEPENDENCIES.MISCMASK := ~DS:RBX.MISCMASK;

+

TMP_KEYDEPENDENCIES.KEYPOLICY := DS:RBX.KEYPOLICY;

+

TMP_KEYDEPENDENCIES.CONFIGID := TMP_CONFIGID;

+

TMP_KEYDEPENDENCIES.CONFIGSVN := TMP_CONFIGSVN;

+

IF (CPUID.(EAX=12H, ECX=1):EAX[6] = 1)

+

THEN

+

TMP_KEYDEPENDENCIES.CET_ATTRIBUTES := TMP_CET_ATTRIBUTES;

+

TMP_KEYDEPENDENCIES.CET_ATTRIBUTES_MASK := 0;

+

FI;

+

BREAK;

+

DEFAULT:

+

(* The value of KEYNAME is invalid *)

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_KEYNAME;

+

GOTO EXIT;

+

ESAC;

+

(* Calculate the final derived key and output to the address in RCX *)

+

TMP_OUTPUTKEY := derivekey(TMP_KEYDEPENDENCIES);

+

DS:RCX[15:0] := TMP_OUTPUTKEY;

+

RAX := 0;

+

RFLAGS.ZF := 0;

+

EXIT:

+

RFLAGS.CF := 0;

+

RFLAGS.PF := 0;

+

RFLAGS.AF := 0;

+

RFLAGS.OF := 0;

+

RFLAGS.SF := 0;

+

Flags Affected + ¶ +

+

ZF is cleared if successful, otherwise ZF is set. CF, PF, AF, OF, SF are cleared.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + +
#GP(0)If executed outside an enclave.
If a memory operand effective address is outside the current enclave.
If an effective address is not properly aligned.
If an effective address is outside the DS segment limit.
If KEYREQUEST format is invalid.
#PF(errorcode) If a page fault occurs in accessing memory.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + +
#GP(0)If executed outside an enclave.
If a memory operand effective address is outside the current enclave.
If an effective address is not properly aligned.
If an effective address is not canonical.
If KEYREQUEST format is invalid.
#PF(errorcode) If a page fault occurs in accessing memory operands.
diff --git a/x86/eincvirtchild.html b/x86/eincvirtchild.html new file mode 100644 index 0000000..274bb24 --- /dev/null +++ b/x86/eincvirtchild.html @@ -0,0 +1,250 @@ + +EINCVIRTCHILD + — Increment VIRTCHILDCNT in SECS

EINCVIRTCHILD + — Increment VIRTCHILDCNT in SECS

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 01H ENCLV[EINCVIRTCHILD]IRV/VEAX[5]This leaf function increments the SECS VIRTCHILDCNT field.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + +
Op/EnEAXRBXRCX
IREINCVIRTCHILD (In)Return error code (Out)Address of an enclave page (In)Address of an SECS page (In)
+

Description + ¶ +

+

This instruction increments the SECS VIRTCHILDCNT field. This instruction can only be executed when the current privilege level is 0.

+

The content of RCX is an effective address of an EPC page. The DS segment is used to create a linear address. Segment override is not supported.
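A hypothetical VMM-side wrapper is sketched below. The leaf number and register usage come from the tables above; the ENCLV opcode bytes (0F 01 C0), the function name, and the pointer types are assumptions made for illustration.

#include <stdint.h>

/* Hypothetical VMM-side wrapper around ENCLV[EINCVIRTCHILD] (EAX = 1).
 * 'page' is the linear address of the enclave page being virtualized and
 * 'secs' the linear address of its SECS; both must be 4KByte aligned.
 * Returns 0 on success or SGX_EPC_PAGE_CONFLICT (from RAX) on conflict. */
static inline uint64_t sgx_eincvirtchild(void *page, void *secs)
{
    uint64_t rax = 1;   /* ENCLV leaf: EINCVIRTCHILD */

    __asm__ volatile(
        ".byte 0x0f, 0x01, 0xc0"   /* ENCLV (assumed encoding) */
        : "+a"(rax)
        : "b"(page), "c"(secs)
        : "memory", "cc");

    return rax;
}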

+

EINCVIRTCHILD Memory Parameter Semantics + ¶ +

+ + + + + + +
EPCPAGESECS
Read/Write access permitted by Non EnclaveRead access permitted by Enclave
+

The instruction faults if any of the following:

+

EINCVIRTCHILD Faulting Conditions + ¶ +

+ + + + + + + + + + + + +
A memory operand effective address is outside the DS segment limit (32b mode).A page fault occurs in accessing memory operands.
DS segment is unusable (32b mode).RBX does not refer to an enclave page (REG, TCS, TRIM, SECS).
A memory address is in a non-canonical form (64b mode).RCX does not refer to an SECS page.
A memory operand is not properly aligned.RBX does not refer to an enclave page associated with SECS referenced in RCX.
+

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn ConflictSGX_CONFLICT VM Exit Qualification
EINCVIRTCHILDTarget [DS:RBX]SharedSGX_EPC_PAGE_ CONFLICT
SECS [DS:RCX]Concurrent
+
Table 38-78. Base Concurrency Restrictions of EINCVIRTCHILD
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
EINCVIRTCHILDTarget [DS:RBX]ConcurrentConcurrentConcurrent
SECS [DS:RCX]ConcurrentConcurrentConcurrent
+
Table 38-79. Additional Concurrency Restrictions of EINCVIRTCHILD
+

Operation + ¶ +

+

Temp Variables in EINCVIRTCHILD Operational Flow + ¶ +

+ + + + + + + + + + +
NameTypeSize (bits)Description
TMP_SECSPhysical Address64Physical address of the SECS of the page being modified.
+

EINCVIRTCHILD Return Value in RAX + ¶ +

+ + + + + + + + + + + + +
ErrorValueDescription
No Error0EINCVIRTCHILD Successful.
SGX_EPC_PAGE_CONFLICTFailure due to concurrent operation of another SGX instruction.
+

(* check alignment of DS:RBX *)

+

IF (DS:RBX is not 4K aligned) THEN

+

#GP(0); FI;

+

(* check DS:RBX is a linear address of an EPC page *)

+

IF (DS:RBX does not resolve within an EPC) THEN

+

#PF(DS:RBX, PFEC.SGX); FI;

+

(* check DS:RCX is a linear address of an EPC page *)

+

IF (DS:RCX does not resolve within an EPC) THEN

+

#PF(DS:RCX, PFEC.SGX); FI;

+

(* Check the EPCPAGE for concurrency *)

+

IF (EPCPAGE is being modified) THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_EPC_PAGE_CONFLICT;

+

goto DONE;

+

FI;

+

(* check that the EPC page is valid *)

+

IF (EPCM(DS:RBX).VALID = 0) THEN

+

#PF(DS:RBX, PFEC.SGX); FI;

+

(* check that the EPC page has the correct type and that the back pointer matches the pointer passed as the pointer to parent *)

+

IF ((EPCM(DS:RBX).PAGE_TYPE = PT_REG) or

+

(EPCM(DS:RBX).PAGE_TYPE = PT_TCS) or

+

(EPCM(DS:RBX).PAGE_TYPE = PT_TRIM) or

+

(EPCM(DS:RBX).PAGE_TYPE = PT_SS_FIRST) or

+

(EPCM(DS:RBX).PAGE_TYPE = PT_SS_REST))

+

THEN

+

(* get the SECS of DS:RBX *)

+

TMP_SECS := Address of SECS for (DS:RBX);

+

ELSE IF (EPCM(DS:RBX).PAGE_TYPE = PT_SECS) THEN

+

(* get the physical address of DS:RBX *)

+

TMP_SECS := Physical_Address(DS:RBX);

+

ELSE

+

(* EINCVIRTCHILD called on page of incorrect type *)

+

#PF(DS:RBX, PFEC.SGX); FI;

+

IF (TMP_SECS ≠ Physical_Address(DS:RCX)) THEN

+

#GP(0); FI;

+

(* Atomically increment virtchild counter *)

+

Locked_Increment(SECS(TMP_SECS).VIRTCHILDCNT);

+

RFLAGS.ZF := 0;

+

RAX := 0;

+

DONE:

+

(* clear flags *)

+

RFLAGS.CF := 0;

+

RFLAGS.PF := 0;

+

RFLAGS.AF := 0;

+

RFLAGS.OF := 0;

+

RFLAGS.SF := 0;

+

Flags Affected + ¶ +

+

ZF is set if EINCVIRTCHILD fails due to concurrent operation with another SGX instruction; otherwise cleared.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the DS segment limit.
If DS segment is unusable.
If a memory operand is not properly aligned.
RBX does not refer to an enclave page associated with SECS referenced in RCX.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If RBX does not refer to an enclave page (REG, TCS, TRIM, SECS).
If RCX does not refer to an SECS page.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + +
#GP(0)If a memory address is in a non-canonical form.
If a memory operand is not properly aligned.
RBX does not refer to an enclave page associated with SECS referenced in RCX.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If RBX does not refer to an enclave page (REG, TCS, TRIM, SECS).
If RCX does not refer to an SECS page.
diff --git a/x86/einit.html b/x86/einit.html new file mode 100644 index 0000000..dfb50c7 --- /dev/null +++ b/x86/einit.html @@ -0,0 +1,695 @@ + +EINIT + — Initialize an Enclave for Execution

EINIT + — Initialize an Enclave for Execution

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 02H ENCLS[EINIT]IRV/VSGX1This leaf function initializes the enclave and makes it ready to execute enclave code.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + +
Op/EnEAXRBXRCXRDX
IREINIT (In)Error code (Out)Address of SIGSTRUCT (In)Address of SECS (In)Address of EINITTOKEN (In)
+

Description + ¶ +

+

This leaf function is the final instruction executed in the enclave build process. After EINIT, the MRENCLAVE measurement is complete, and the enclave is ready to start user code execution using the EENTER instruction.

+

EINIT takes the effective address of a SIGSTRUCT and EINITTOKEN. The SIGSTRUCT describes the enclave including MRENCLAVE, ATTRIBUTES, ISVSVN, a 3072 bit RSA key, and a signature using the included key. SIGSTRUCT must be populated with two values, q1 and q2. These are calculated using the formulas shown below:

+

q1 = floor(Signature^2 / Modulus);

+

q2 = floor((Signature^3 - q1 * Signature * Modulus) / Modulus);
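For example, the enclave signing tool (not the processor) can compute q1 and q2 with a big-integer library. The sketch below uses OpenSSL's BIGNUM API as an assumed dependency and the algebraic simplification s^3 - q1*s*m = s*(s^2 mod m); the function name is hypothetical.

#include <openssl/bn.h>   /* assumed dependency for the 3072-bit arithmetic */

/* Illustrative computation of the SIGSTRUCT q1/q2 values from the
 * signature s and modulus m, following the formulas above:
 *   q1 = floor(s^2 / m)
 *   q2 = floor((s^3 - q1*s*m) / m) = floor(s*(s^2 mod m) / m)   */
static int compute_q1_q2(const BIGNUM *s, const BIGNUM *m, BIGNUM *q1, BIGNUM *q2)
{
    int ok = 0;
    BN_CTX *ctx = BN_CTX_new();
    BIGNUM *s2 = BN_new(), *r1 = BN_new(), *t = BN_new();
    if (!ctx || !s2 || !r1 || !t)
        goto out;

    if (!BN_sqr(s2, s, ctx))          goto out;   /* s^2                    */
    if (!BN_div(q1, r1, s2, m, ctx))  goto out;   /* q1, and r1 = s^2 mod m */
    if (!BN_mul(t, s, r1, ctx))       goto out;   /* s * (s^2 mod m)        */
    if (!BN_div(q2, NULL, t, m, ctx)) goto out;   /* q2                     */
    ok = 1;
out:
    BN_free(s2); BN_free(r1); BN_free(t);
    BN_CTX_free(ctx);
    return ok;
}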

+

The EINITTOKEN contains the MRENCLAVE, MRSIGNER, and ATTRIBUTES. These values must match the corresponding values in the SECS. If the EINITTOKEN was created with a debug launch key, the enclave must be in debug mode as well.

+
Figure 38-1. Relationships Between SECS, SIGSTRUCT, and EINITTOKEN
+

EINIT Memory Parameter Semantics + ¶ +

+

SIGSTRUCTSECSEINITTOKEN
Access by non-EnclaveRead/Write access by EnclaveAccess by non-Enclave

+

EINIT performs the following steps, which can be seen in Figure 38-1:

+

1. Validates that SIGSTRUCT is signed using the enclosed public key.

+

2. Checks that the completed computation of SECS.MRENCLAVE equals SIGSTRUCT.ENCLAVEHASH.

+

3. Checks that no controlled ATTRIBUTES bits are set in SIGSTRUCT.ATTRIBUTES unless the SHA256 digest of SIGSTRUCT.MODULUS equals IA32_SGX_LEPUBKEYHASH.

+

4. Checks that the result of bitwise and-ing SIGSTRUCT.ATTRIBUTEMASK with SIGSTRUCT.ATTRIBUTES equals the result of bitwise and-ing SIGSTRUCT.ATTRIBUTEMASK with SECS.ATTRIBUTES.

+

5. If EINITTOKEN.VALID is 0, checks that the SHA256 digest of SIGSTRUCT.MODULUS equals IA32_SGX_LEPUBKEYHASH.

+

6. If EINITTOKEN.VALID is 1, checks the validity of EINITTOKEN.

+

7. If EINITTOKEN.VALID is 1, checks that EINITTOKEN.MRENCLAVE equals SECS.MRENCLAVE.

+

8. If EINITTOKEN.VALID is 1 and EINITTOKEN.ATTRIBUTES.DEBUG is 1, SECS.ATTRIBUTES.DEBUG must be 1.

+

9. Commits SECS.MRENCLAVE, and sets SECS.MRSIGNER, SECS.ISVSVN, and SECS.ISVPRODID based on SIGSTRUCT.

+

10. Updates the SECS as Initialized.

+

Periodically, EINIT polls for certain asynchronous events. If such an event is detected, it completes with a failure code (ZF=1 and RAX = SGX_UNMASKED_EVENT), and RIP is incremented to point to the next instruction. These events include external interrupts, non-maskable interrupts, system-management interrupts, machine checks, INIT signals, and the VMX-preemption timer. EINIT does not fail if the pending event is inhibited (e.g., external interrupts could be inhibited due to blocking by MOV SS or by STI).

+

The following bits in RFLAGS are cleared: CF, PF, AF, OF, and SF. When the instruction completes with an error, RFLAGS.ZF is set to 1, and the corresponding error bit is set in RAX. If no error occurs, RFLAGS.ZF is cleared and RAX is set to 0.

+

The error codes are:

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
Error Code (see Table 38-4)Description
No ErrorEINIT successful.
SGX_INVALID_SIG_STRUCTIf SIGSTRUCT contained an invalid value.
SGX_INVALID_ATTRIBUTEIf SIGSTRUCT contains an unauthorized attributes mask.
SGX_INVALID_MEASUREMENTIf SIGSTRUCT contains an incorrect measurement. If EINITTOKEN contains an incorrect measurement.
SGX_INVALID_SIGNATUREIf signature does not validate with enclosed public key.
SGX_INVALID_LICENSEIf license is invalid.
SGX_INVALID_CPUSVNIf license SVN is unsupported.
SGX_UNMASKED_EVENTIf an unmasked event is received before the instruction completes its operation.
+
Table 38-25. EINIT Return Value in RAX
+
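Because EINIT can abort with SGX_UNMASKED_EVENT when an asynchronous event is pending, system software typically just re-executes the leaf. A minimal sketch, assuming a ring-0 encls() helper like the one sketched under the ENCLS instruction later in this chapter; the numeric error value is an assumption taken from the SGX error code table and should be verified against Table 38-4.

#define ENCLS_EINIT        0x02   /* EAX leaf number for ENCLS[EINIT]           */
#define SGX_UNMASKED_EVENT 128    /* assumed value; verify against Table 38-4   */

long einit_retry(void *sigstruct, void *secs, void *einittoken)
{
    long ret;
    do {
        /* RBX = SIGSTRUCT, RCX = SECS, RDX = EINITTOKEN */
        ret = encls(ENCLS_EINIT, sigstruct, secs, einittoken);
    } while (ret == SGX_UNMASKED_EVENT);   /* interrupted by a pending event: retry */
    return ret;                            /* 0 on success, otherwise an error code */
}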

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn Conflict
EINITSECS [DS:RCX]Shared
+
Table 38-26. Base Concurrency Restrictions of EINIT
+
+ + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
EINITSECS [DS:RCX]ConcurrentConcurrent
+
Table 38-27. Additional Concurrency Restrictions of EINIT
+

Operation + ¶ +

+

Temp Variables in EINIT Operational Flow + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeSizeDescription
TMP_SIGSIGSTRUCT1808BytesTemp space for SIGSTRUCT.
TMP_TOKENEINITTOKEN304BytesTemp space for EINITTOKEN.
TMP_MRENCLAVE32BytesTemp space for calculating MRENCLAVE.
TMP_MRSIGNER32BytesTemp space for calculating MRSIGNER.
CONTROLLED_ATTRIBUTESATTRIBUTES16BytesConstant mask of all ATTRIBUTE bits that can only be set for authorized enclaves.
TMP_KEYDEPENDENCIESBuffer224BytesTemp space for key derivation.
TMP_EINITTOKENKEY16BytesTemp space for the derived EINITTOKEN Key.
TMP_SIG_PADDINGPKCS Padding Buffer352BytesThe value of the top 352 bytes from the computation of Signature^3 modulo MRSIGNER.
+

(* make sure SIGSTRUCT and SECS are aligned *)

+

IF ( (DS:RBX is not 4KByte Aligned) or (DS:RCX is not 4KByte Aligned) )

+

THEN #GP(0); FI;

+

(* make sure the EINITTOKEN is aligned *)

+

IF (DS:RDX is not 512Byte Aligned)

+

THEN #GP(0); FI;

+

(* make sure the SECS is inside the EPC *)

+

IF (DS:RCX does not resolve within an EPC)

+

THEN #PF(DS:RCX); FI;

+

TMP_SIG[14463:0] := DS:RBX[14463:0]; // 1808 bytes

+

TMP_TOKEN[2423:0] := DS:RDX[2423:0]; // 304 bytes

+

(* Verify SIGSTRUCT Header. *)

+

IF ( (TMP_SIG.HEADER ≠ 06000000E10000000000010000000000h) or

+

((TMP_SIG.VENDOR ≠ 0) and (TMP_SIG.VENDOR ≠ 00008086h) ) or

+

(TMP_SIG HEADER2 ≠ 01010000600000006000000001000000h) or

+

(TMP_SIG.EXPONENT ≠ 00000003h) or (Reserved space is not 0’s) )

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_SIG_STRUCT;

+

GOTO EXIT;

+

FI;

+

(* Open “Event Window” Check for Interrupts. Verify signature using embedded public key, q1, and q2. Save upper 352 bytes of the PKCS1.5 encoded message into the TMP_SIG_PADDING*)

+

IF (interrupt was pending) THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_UNMASKED_EVENT;

+

GOTO EXIT;

+

FI

+

IF (signature failed to verify) THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_SIGNATURE;

+

GOTO EXIT;

+

FI;

+

(*Close “Event Window” *)

+

(* make sure no other Intel SGX instruction is modifying SECS*)

+

IF (Other instructions modifying SECS)

+

THEN #GP(0); FI;

+

IF ( (EPCM(DS:RCX).VALID = 0) or (EPCM(DS:RCX).PT ≠ PT_SECS) )

+

THEN #PF(DS:RCX); FI;

+

(* Verify ISVFAMILYID is not used on an enclave with KSS disabled *)

+

IF ((TMP_SIG.ISVFAMILYID != 0) AND (DS:RCX.ATTRIBUTES.KSS == 0))

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_SIG_STRUCT;

+

GOTO EXIT;

+

FI;

+

(* make sure no other instruction is accessing MRENCLAVE or ATTRIBUTES.INIT *)

+

IF ( (Other instruction modifying MRENCLAVE) or (Other instructions modifying the SECS’s Initialized state))

+

THEN #GP(0); FI;

+

(* Calculate finalized version of MRENCLAVE *)

+

(* SHA256 algorithm requires one last update that compresses the length of the hashed message into the output SHA256 digest *)

+

TMP_MRENCLAVE := SHA256FINAL( (DS:RCX).MRENCLAVE, enclave’s MRENCLAVE update count *512);

+

(* Verify MRENCLAVE from SIGSTRUCT *)

+

IF (TMP_SIG.ENCLAVEHASH ≠ TMP_MRENCLAVE)

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_MEASUREMENT;

+

GOTO EXIT;

+

FI;

+

TMP_MRSIGNER := SHA256(TMP_SIG.MODULUS)

+

(* if controlled ATTRIBUTES are set, SIGSTRUCT must be signed using an authorized key *)

+

CONTROLLED_ATTRIBUTES := 0000000000000020H;

+

IF ( ( (DS:RCX.ATTRIBUTES & CONTROLLED_ATTRIBUTES) ≠ 0) and (TMP_MRSIGNER ≠ IA32_SGXLEPUBKEYHASH) )

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_ATTRIBUTE;

+

GOTO EXIT;

+

FI;

+

(* Verify SIGSTRUCT.ATTRIBUTE requirements are met *)

+

IF ( (DS:RCX.ATTRIBUTES & TMP_SIG.ATTRIBUTEMASK) ≠ (TMP_SIG.ATTRIBUTES & TMP_SIG.ATTRIBUTEMASK) )

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_ATTRIBUTE;

+

GOTO EXIT;

+

FI;

+

(* Verify SIGSTRUCT.MISCSELECT requirements are met *)

+

IF ( (DS:RCX.MISCSELECT & TMP_SIG.MISCMASK) ≠ (TMP_SIG.MISCSELECT & TMP_SIG.MISCMASK) )

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_ATTRIBUTE;

+

GOTO EXIT

+

FI;

+

IF (CPUID.(EAX=12H, ECX=1):EAX[6] = 1)

+

IF ( DS:RCX.CET_ATTRIBUTES & TMP_SIG.CET_ATTRIBUTES_MASK ≠ TMP_SIG.CET_ATTRIBUTES &

+

TMP_SIG.CET_ATTRIBUTES_MASK )

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_ATTRIBUTE;

+

GOTO EXIT

+

FI;

+

FI;

+

(* If EINITTOKEN.VALID[0] is 0, verify the enclave is signed by an authorized key *)

+

IF (TMP_TOKEN.VALID[0] = 0)

+

IF (TMP_MRSIGNER ≠ IA32_SGXLEPUBKEYHASH)

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_EINITTOKEN;

+

GOTO EXIT;

+

FI;

+

GOTO COMMIT;

+

FI;

+

(* Debug Launch Enclave cannot launch Production Enclaves *)

+

IF ( (DS:RDX.MASKEDATTRIBUTESLE.DEBUG = 1) and (DS:RCX.ATTRIBUTES.DEBUG = 0) )

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_EINITTOKEN;

+

GOTO EXIT;

+

FI;

+

(* Check reserve space in EINIT token includes reserved regions and upper bits in valid field *)

+

IF (TMP_TOKEN reserved space is not clear)

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_EINITTOKEN;

+

GOTO EXIT;

+

FI;

+

(* EINIT token must not have been created by a configuration beyond the current CPU configuration *)

+

IF (TMP_TOKEN.CPUSVN is a configuration beyond CR_CPUSVN)

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_CPUSVN;

+

GOTO EXIT;

+

FI;

+

(* Derive Launch key used to calculate EINITTOKEN.MAC *)

+

HARDCODED_PKCS1_5_PADDING[15:0] := 0100H;

+

HARDCODED_PKCS1_5_PADDING[2655:16] := SignExtend330Byte(-1); // 330 bytes of 0FFH

+

HARDCODED_PKCS1_5_PADDING[2815:2656] := 2004000501020403650148866009060D30313000H;

+

TMP_KEYDEPENDENCIES.KEYNAME := EINITTOKEN_KEY;

+

TMP_KEYDEPENDENCIES.ISVFAMILYID := 0;

+

TMP_KEYDEPENDENCIES.ISVEXTPRODID := 0;

+

TMP_KEYDEPENDENCIES.ISVPRODID := TMP_TOKEN.ISVPRODIDLE;

+

TMP_KEYDEPENDENCIES.ISVSVN := TMP_TOKEN.ISVSVNLE;

+

TMP_KEYDEPENDENCIES.SGXOWNEREPOCH := CR_SGXOWNEREPOCH;

+

TMP_KEYDEPENDENCIES.ATTRIBUTES := TMP_TOKEN.MASKEDATTRIBUTESLE;

+

TMP_KEYDEPENDENCIES.ATTRIBUTESMASK := 0;

+

TMP_KEYDEPENDENCIES.MRENCLAVE := 0;

+

TMP_KEYDEPENDENCIES.MRSIGNER := IA32_SGXLEPUBKEYHASH;

+

TMP_KEYDEPENDENCIES.KEYID := TMP_TOKEN.KEYID;

+

TMP_KEYDEPENDENCIES.SEAL_KEY_FUSES := CR_SEAL_FUSES;

+

TMP_KEYDEPENDENCIES.CPUSVN := TMP_TOKEN.CPUSVNLE;

+

TMP_KEYDEPENDENCIES.MISCSELECT := TMP_TOKEN.MASKEDMISCSELECTLE;

+

TMP_KEYDEPENDENCIES.MISCMASK := 0;

+

TMP_KEYDEPENDENCIES.PADDING := HARDCODED_PKCS1_5_PADDING;

+

TMP_KEYDEPENDENCIES.KEYPOLICY := 0;

+

TMP_KEYDEPENDENCIES.CONFIGID := 0;

+

TMP_KEYDEPENDENCIES.CONFIGSVN := 0;

+

IF (CPUID.(EAX=12H, ECX=1):EAX[6] = 1)

+

TMP_KEYDEPENDENCIES.CET_ATTRIBUTES := TMP_TOKEN.CET_MASKED_ATTRIBUTES_LE;

+

TMP_KEYDEPENDENCIES.CET_ATTRIBUTES_MASK := 0;

+

FI;

+

(* Calculate the derived key*)

+

TMP_EINITTOKENKEY := derivekey(TMP_KEYDEPENDENCIES);

+

(* Verify EINITTOKEN was generated using this CPU's Launch key and that it has not been modified since issuing by the Launch Enclave. Only 192 bytes of EINITTOKEN are CMACed *)

+

IF (TMP_TOKEN.MAC ≠ CMAC(TMP_EINITTOKENKEY, TMP_TOKEN[1535:0] ) )

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_EINITTOKEN;

+

GOTO EXIT;

+

FI;

+

(* Verify EINITTOKEN (RDX) is for this enclave *)

+

IF ( (TMP_TOKEN.MRENCLAVE ≠ TMP_MRENCLAVE) or (TMP_TOKEN.MRSIGNER ≠ TMP_MRSIGNER) )

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_MEASUREMENT;

+

GOTO EXIT;

+

FI;

+

(* Verify ATTRIBUTES in EINITTOKEN are the same as the enclave’s *)

+

IF (TMP_TOKEN.ATTRIBUTES ≠ DS:RCX.ATTRIBUTES)

+

RFLAGS.ZF := 1;

+

RAX := SGX_INVALID_EINIT_ATTRIBUTE;

+

GOTO EXIT;

+

FI;

+

COMMIT:

+

(* Commit changes to the SECS; Set ISVPRODID, ISVSVN, MRSIGNER, INIT ATTRIBUTE fields in SECS (RCX) *)

+

DS:RCX.MRENCLAVE := TMP_MRENCLAVE;

+

(* MRSIGNER stores a SHA256 in little endian implemented natively on x86 *)

+

DS:RCX.MRSIGNER := TMP_MRSIGNER;

+

DS:RCX.ISVEXTPRODID := TMP_SIG.ISVEXTPRODID;

+

DS:RCX.ISVPRODID := TMP_SIG.ISVPRODID;

+

DS:RCX.ISVSVN := TMP_SIG.ISVSVN;

+

DS:RCX.ISVFAMILYID := TMP_SIG.ISVFAMILYID;

+

DS:RCX.PADDING := TMP_SIG_PADDING;

+

(* Mark the SECS as initialized *)

+

Update DS:RCX to initialized;

+

(* Set RAX and ZF for success*)

+

RFLAGS.ZF := 0;

+

RAX := 0;

+

EXIT:

+

RFLAGS.CF,PF,AF,OF,SF := 0;

+

Flags Affected + ¶ +

+

ZF is cleared if successful, otherwise ZF is set and RAX contains the error code. CF, PF, AF, OF, SF are cleared.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#GP(0)If a memory operand is not properly aligned.
If another instruction is modifying the SECS.
If the enclave is already initialized.
If the SECS.MRENCLAVE is in use.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If RCX does not resolve in an EPC page.
If the memory address is not a valid, uninitialized SECS.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#GP(0)If a memory operand is not properly aligned.
If another instruction is modifying the SECS.
If the enclave is already initialized.
If the SECS.MRENCLAVE is in use.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If RCX does not resolve in an EPC page.
If the memory address is not a valid, uninitialized SECS.
diff --git a/x86/eldb.eldu.eldbc.elduc.html b/x86/eldb.eldu.eldbc.elduc.html new file mode 100644 index 0000000..d8956f0 --- /dev/null +++ b/x86/eldb.eldu.eldbc.elduc.html @@ -0,0 +1,459 @@ + +ELDB/ELDU/ELDBC/ELDUC + — Load an EPC Page and Mark its State

ELDB/ELDU/ELDBC/ELDUC + — Load an EPC Page and Mark its State

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 07H ENCLS[ELDB]IRV/VSGX1This leaf function loads and verifies an EPC page, and marks the page as blocked.
EAX = 08H ENCLS[ELDU]IRV/VSGX1This leaf function loads and verifies an EPC page, and marks the page as unblocked.
EAX = 12H ENCLS[ELDBC]IRV/VEAX[6]This leaf function behaves like ELDB but with improved conflict handling for oversubscription.
EAX = 13H ENCLS[ELDUC]IRV/VEAX[6]This leaf function behaves like ELDU but with improved conflict handling for oversubscription.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + +
Op/EnEAXRBXRCXRDX
IRELDB/ELDU/ELDBC/ELDUC (In), Return error code (Out)Address of the PAGEINFO (In)Address of the EPC page (In)Address of the version-array slot (In)
+

Description + ¶ +

+

This leaf function copies a page from regular main memory to the EPC. As part of the copying process, the page is cryptographically authenticated and decrypted. This instruction can only be executed when current privilege level is 0.

+

The ELDB leaf function sets the BLOCK bit in the EPCM entry for the destination page in the EPC after copying. The ELDU leaf function clears the BLOCK bit in the EPCM entry for the destination page in the EPC after copying.

+

RBX contains the effective address of a PAGEINFO structure; RCX contains the effective address of the destination EPC page; RDX holds the effective address of the version array slot that holds the version of the page.

+
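For orientation, a hedged C sketch of how system software might declare the 32-byte PAGEINFO structure referenced by RBX; the field names follow the SDM's PAGEINFO definition, but the exact offsets and attribute syntax are assumptions here.

#include <stdint.h>

struct pageinfo {
    uint64_t linaddr;   /* enclave linear address of the page           */
    uint64_t srcpge;    /* source page in regular main memory           */
    uint64_t pcmd;      /* PCMD pointer (overlaid on the SECINFO field) */
    uint64_t secs;      /* SECS of the enclave that owns the page       */
} __attribute__((aligned(32)));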

The ELDBC/ELDUC leaf functions are very similar to ELDB and ELDU. They return an error code on a concurrency conflict for any of the pages that need to acquire a lock. These include the destination, SECS, and VA slot.

+

The table below provides additional information on the memory parameter of ELDB/ELDU leaf functions.

+

ELDB/ELDU/ELDBC/ELDUC Memory Parameter Semantics + ¶ +

+ + + + + + + + + + + + + + +
PAGEINFOPAGEINFO.SRCPGEPAGEINFO.PCMDPAGEINFO.SECSEPCPAGEVersion-Array Slot
Non-enclave read accessNon-enclave read accessNon-enclave read accessEnclave read/write accessRead/Write access permitted by EnclaveRead/Write access permitted by Enclave
+

The error codes are:

+
+ + + + + + + + + +
Error Code (see Table 38-4)Description
No ErrorELDB/ELDU successful.
SGX_MAC_COMPARE_FAILIf the MAC check fails.
+
Table 38-28. ELDB/ELDU/ELDBC/ELDUC Return Value in RAX
+

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn ConflictSGX_CONFLICT VM Exit Qualification
ELDB/ELDUTarget [DS:RCX]Exclusive#GPEPC_PAGE_CONFLICT_EXCEPTION
VA [DS:RDX]Shared#GP
SECS [DS:RBX]PAGEINFO.SECSShared#GP
ELDBC/ELDUCTarget [DS:RCX]ExclusiveSGX_EPC_PAGE_CONFLICTEPC_PAGE_CONFLICT_ERROR
VA [DS:RDX]SharedSGX_EPC_PAGE_CONFLICT
SECS [DS:RBX]PAGEINFO.SECSSharedSGX_EPC_PAGE_CONFLICT
+
Table 38-29. Base Concurrency Restrictions of ELDB/ELDU/ELDBC/ELDUC
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
ELDB/ELDUTarget [DS:RCX]ConcurrentConcurrentConcurrent
VA [DS:RDX]ConcurrentConcurrentConcurrent
SECS [DS:RBX]PAGEINFO.SECSConcurrentConcurrentConcurrent
ELDBC/ELDUCTarget [DS:RCX]ConcurrentConcurrentConcurrent
VA [DS:RDX]ConcurrentConcurrentConcurrent
SECS [DS:RBX]PAGEINFO.SECSConcurrentConcurrentConcurrent
+
Table 38-30. Additional Concurrency Restrictions of ELDB/ELDU/ELDBC/ELDUC
+

Operation + ¶ +

+

Temp Variables in ELDB/ELDU/ELDBC/ELDUC Operational Flow + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeSize (Bits)Description
TMP_SRCPGEMemory page4KBytes
TMP_SECSMemory page4KBytes
TMP_PCMDPCMD128 Bytes
TMP_HEADERMACHEADER128 Bytes
TMP_VERUINT6464
TMP_MACUINT128128
TMP_PKUINT128128Page encryption/MAC key.
SCRATCH_PCMDPCMD128 Bytes
+

(* Check PAGEINFO and EPCPAGE alignment *)

+

IF ( (DS:RBX is not 32Byte Aligned) or (DS:RCX is not 4KByte Aligned) )

+

THEN #GP(0); FI;

+

IF (DS:RCX does not resolve within an EPC)

+

THEN #PF(DS:RCX); FI;

+

(* Check VASLOT alignment *)

+

IF (DS:RDX is not 8Byte aligned)

+

THEN #GP(0); FI;

+

IF (DS:RDX does not resolve within an EPC)

+

THEN #PF(DS:RDX); FI;

+

TMP_SRCPGE := DS:RBX.SRCPGE;

+

TMP_SECS := DS:RBX.SECS;

+

TMP_PCMD := DS:RBX.PCMD;

+

(* Check alignment of PAGEINFO (RBX) linked parameters. Note: PCMD pointer is overlaid on top of PAGEINFO.SECINFO field *)

+

IF ( (DS:TMP_PCMD is not 128Byte aligned) or (DS:TMP_SRCPGE is not 4KByte aligned) )

+

THEN #GP(0); FI;

+

(* Check concurrency of EPC by other Intel SGX instructions *)

+

IF (other instructions accessing EPC)

+

THEN

+

IF ((EAX==07h) OR (EAX==08h)) (* ELDB/ELDU *)

+

THEN

+

IF (<<VMX non-root operation>> AND

+

<<ENABLE_EPC_VIRTUALIZATION_EXTENSIONS>>)

+

THEN

+

VMCS.Exit_reason := SGX_CONFLICT;

+

VMCS.Exit_qualification.code := EPC_PAGE_CONFLICT_EXCEPTION;

+

VMCS.Exit_qualification.error := 0;

+

VMCS.Guest-physical_address :=

+

<< translation of DS:RCX produced by paging >>;

+

VMCS.Guest-linear_address := DS:RCX;

+

Deliver VMEXIT;

+

ELSE

+

#GP(0);

+

FI;

+

ELSE (* ELDBC/ELDUC *)

+

IF (<<VMX non-root operation>> AND

+

<<ENABLE_EPC_VIRTUALIZATION_EXTENSIONS>>)

+

THEN

+

VMCS.Exit_reason := SGX_CONFLICT;

+

VMCS.Exit_qualification.code := EPC_PAGE_CONFLICT_ERROR;

+

VMCS.Exit_qualification.error := SGX_EPC_PAGE_CONFLICT;

+

VMCS.Guest-physical_address :=

+

<< translation of DS:RCX produced by paging >>;

+

VMCS.Guest-linear_address := DS:RCX;

+

Deliver VMEXIT;

+

ELSE

+

RFLAGS.ZF := 1;

+

RFLAGS.CF := 0;

+

RAX := SGX_EPC_PAGE_CONFLICT;

+

GOTO ERROR_EXIT;

+

FI;

+

FI;

+

FI;

+

(* Check concurrency of EPC and VASLOT by other Intel SGX instructions *)

+

IF (Other instructions modifying VA slot) THEN

+

IF ((EAX==07h) OR (EAX==08h)) (* ELDB/ELDU *)

+

THEN #GP(0);

+

ELSE (* ELDBC/ELDUC *)

+

RFLAGS.ZF := 1;

+

RFLAGS.CF := 0;

+

RAX := SGX_EPC_PAGE_CONFLICT;

+

GOTO ERROR_EXIT;

+

FI;

+

FI;

+

(* Verify EPCM attributes of EPC page, VA, and SECS *)

+

IF (EPCM(DS:RCX).VALID = 1)

+

THEN #PF(DS:RCX); FI;

+

IF ( (EPCM(DS:RDX & ~0FFFH).VALID = 0) or (EPCM(DS:RDX & ~0FFFH).PT ≠ PT_VA) )

+

THEN #PF(DS:RDX); FI;

+

(* Copy PCMD into scratch buffer *)

+

SCRATCH_PCMD[1023: 0] := DS:TMP_PCMD[1023:0];

+

(* Zero out TMP_HEADER*)

+

TMP_HEADER[sizeof(TMP_HEADER)-1: 0] := 0;

+

TMP_HEADER.SECINFO := SCRATCH_PCMD.SECINFO;

+

TMP_HEADER.RSVD := SCRATCH_PCMD.RSVD;

+

TMP_HEADER.LINADDR := DS:RBX.LINADDR;

+

(* Verify various attributes of SECS parameter *)

+

IF ( (TMP_HEADER.SECINFO.FLAGS.PT = PT_REG) or (TMP_HEADER.SECINFO.FLAGS.PT = PT_TCS) or

+

(TMP_HEADER.SECINFO.FLAGS.PT = PT_TRIM) or

+

(TMP_HEADER.SECINFO.FLAGS.PT = PT_SS_FIRST and CPUID.(EAX=12H, ECX=1):EAX[6] = 1) or

+

(TMP_HEADER.SECINFO.FLAGS.PT = PT_SS_REST and CPUID.(EAX=12H, ECX=1):EAX[6] = 1))

+

THEN

+

IF ( DS:TMP_SECS is not 4KByte aligned)

+

THEN #GP(0) FI;

+

IF (DS:TMP_SECS does not resolve within an EPC)

+

THEN #PF(DS:TMP_SECS) FI;

+

IF ( Another instruction is currently modifying the SECS) THEN

+

IF ((EAX==07h) OR (EAX==08h)) (* ELDB/ELDU *)

+

THEN #GP(0);

+

ELSE (* ELDBC/ELDUC *)

+

RFLAGS.ZF := 1;

+

RFLAGS.CF := 0;

+

RAX := SGX_EPC_PAGE_CONFLICT;

+

GOTO ERROR_EXIT;

+

FI;

+

FI;

+

TMP_HEADER.EID := DS:TMP_SECS.EID;

+

ELSE

+

(* TMP_HEADER.SECINFO.FLAGS.PT is PT_SECS or PT_VA which do not have a parent SECS, and hence no EID binding *)

+

TMP_HEADER.EID := 0;

+

IF (DS:TMP_SECS ≠ 0)

+

THEN #GP(0) FI;

+

FI;

+

(* Copy 4KBytes SRCPGE to secure location *)

+

DS:RCX[32767: 0] := DS:TMP_SRCPGE[32767: 0];

+

TMP_VER := DS:RDX[63:0];

+

(* Decrypt and MAC page. AES_GCM_DEC has 2 outputs, {plain text, MAC} *)

+

(* Parameters for AES_GCM_DEC {Key, Counter, ..} *)

+

{DS:RCX, TMP_MAC} := AES_GCM_DEC(CR_BASE_PK, TMP_VER << 32, TMP_HEADER, 128, DS:RCX, 4096);

+

IF ( (TMP_MAC ≠ DS:TMP_PCMD.MAC) )

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_MAC_COMPARE_FAIL;

+

GOTO ERROR_EXIT;

+

FI;

+

(* Clear VA Slot *)

+

DS:RDX := 0

+

(* Commit EPCM changes *)

+

EPCM(DS:RCX).PT := TMP_HEADER.SECINFO.FLAGS.PT;

+

EPCM(DS:RCX).RWX := TMP_HEADER.SECINFO.FLAGS.RWX;

+

EPCM(DS:RCX).PENDING := TMP_HEADER.SECINFO.FLAGS.PENDING;

+

EPCM(DS:RCX).MODIFIED := TMP_HEADER.SECINFO.FLAGS.MODIFIED;

+

EPCM(DS:RCX).PR := TMP_HEADER.SECINFO.FLAGS.PR;

+

EPCM(DS:RCX).ENCLAVEADDRESS := TMP_HEADER.LINADDR;

+

IF ( ((EAX = 07H) or (EAX = 12H)) and (TMP_HEADER.SECINFO.FLAGS.PT is NOT PT_SECS or PT_VA))

+

THEN

+

EPCM(DS:RCX).BLOCKED := 1;

+

ELSE

+

EPCM(DS:RCX).BLOCKED := 0;

+

FI;

+

IF (TMP_HEADER.SECINFO.FLAGS.PT is PT_SECS)

+

<< store translation of DS:RCX produced by paging in SECS(DS:RCX).ENCLAVECONTEXT >>

+

FI;

+

EPCM(DS:RCX).VALID := 1;

+

RAX := 0;

+

RFLAGS.ZF := 0;

+

ERROR_EXIT:

+

RFLAGS.CF,PF,AF,OF,SF := 0;

+

Flags Affected + ¶ +

+

Sets ZF if unsuccessful; otherwise ZF is cleared. RAX returns the error code. Clears CF, PF, AF, OF, SF.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the DS segment limit.
If a memory operand is not properly aligned.
If the instruction’s EPC resource is in use by others.
If the instruction fails to verify MAC.
If the version-array slot is in use.
If the parameters fail consistency checks.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If a memory operand expected to be in EPC does not resolve to an EPC page.
If one of the EPC memory operands has incorrect page type.
If the destination EPC page is already valid.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand is non-canonical form.
If a memory operand is not properly aligned.
If the instruction’s EPC resource is in use by others.
If the instruction fails to verify MAC.
If the version-array slot is in use.
If the parameters fail consistency checks.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If a memory operand expected to be in EPC does not resolve to an EPC page.
If one of the EPC memory operands has incorrect page type.
If the destination EPC page is already valid.
diff --git a/x86/emms.html b/x86/emms.html new file mode 100644 index 0000000..ae9a4ac --- /dev/null +++ b/x86/emms.html @@ -0,0 +1,92 @@ + +EMMS + — Empty MMX Technology State

EMMS + — Empty MMX Technology State

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
NP 0F 77EMMSZOValidValidSet the x87 FPU tag word to empty.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Sets the values of all the tags in the x87 FPU tag word to empty (all 1s). This operation marks the x87 FPU data registers (which are aliased to the MMX technology registers) as available for use by x87 FPU floating-point instructions. (See Figure 8-7 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for the format of the x87 FPU tag word.) All other MMX instructions (other than the EMMS instruction) set all the tags in x87 FPU tag word to valid (all 0s).

+

The EMMS instruction must be used to clear the MMX technology state at the end of all MMX technology procedures or subroutines and before calling other procedures or subroutines that may execute x87 floating-point instructions. If a floating-point instruction loads one of the registers in the x87 FPU data register stack before the x87 FPU tag word has been reset by the EMMS instruction, an x87 floating-point register stack overflow can occur that will result in an x87 floating-point exception or incorrect result.

+

EMMS operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
x87FPUTagWord := FFFFH;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
void _mm_empty()
+
+
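A minimal usage sketch, assuming an MMX-capable x86 toolchain: the MMX work is done first, _mm_empty() executes EMMS, and only then is floating-point code run. The function and its arithmetic are illustrative only.

#include <mmintrin.h>

float add_low_then_scale(const int a[2], const int b[2])
{
    __m64 va = _mm_set_pi32(a[1], a[0]);   /* pack two 32-bit ints          */
    __m64 vb = _mm_set_pi32(b[1], b[0]);
    __m64 vs = _mm_add_pi32(va, vb);       /* packed 32-bit add (PADDD)     */
    int lo = _mm_cvtsi64_si32(vs);         /* extract the low element       */
    _mm_empty();                           /* EMMS: mark x87 tag word empty */
    return lo * 0.5f;                      /* safe to use floating point    */
}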

Flags Affected + ¶ +

+

None

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#UDIf CR0.EM[bit 2] = 1.
#NMIf CR0.TS[bit 3] = 1.
#MFIf there is a pending FPU exception.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/emodpe.html b/x86/emodpe.html new file mode 100644 index 0000000..b9ef329 --- /dev/null +++ b/x86/emodpe.html @@ -0,0 +1,217 @@ + +EMODPE + — Extend an EPC Page Permissions

EMODPE + — Extend an EPC Page Permissions

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 06H ENCLU[EMODPE]IRV/VSGX2This leaf function extends the access rights of an existing EPC page.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnEAXRBXRCX
IREMODPE (In)Address of a SECINFO (In)Address of the destination EPC page (In)
+

Description + ¶ +

+

This leaf function extends the access rights associated with an existing EPC page in the running enclave. The RWX bits of the SECINFO parameter are treated as a permissions mask; supplying a value that does not extend the page permissions will have no effect. This instruction leaf can only be executed when inside the enclave.

+

RBX contains the effective address of a SECINFO structure while RCX contains the effective address of an EPC page. The table below provides additional information on the memory parameter of the EMODPE leaf function.

+
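A hedged ring-3 sketch of issuing this leaf from inside an enclave with GCC-style inline assembly; the wrapper name is illustrative, while the leaf number and opcode come from this page and the ENCLU page.

#define ENCLU_EMODPE 0x06

/* secinfo must be a 64-byte aligned SECINFO whose R/W/X bits name the
 * permissions to add; epc_page is the 4KByte-aligned page address. */
static inline void enclu_emodpe(void *secinfo, void *epc_page)
{
    asm volatile(
        ".byte 0x0f, 0x01, 0xd7"      /* ENCLU (NP 0F 01 D7); EAX selects the leaf */
        :
        : "a"(ENCLU_EMODPE), "b"(secinfo), "c"(epc_page)
        : "memory", "cc");
}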

EMODPE Memory Parameter Semantics + ¶ +

+ + + + + + +
SECINFOEPCPAGE
Read access permitted by Non EnclaveRead access permitted by Enclave
+

The instruction faults if any of the following:

+

EMODPE Faulting Conditions + ¶ +

+ + + + + + + + + + + + +
The operands are not properly aligned.If security attributes of the SECINFO page make the page inaccessible.
The EPC page is locked by another thread.RBX does not contain an effective address in an EPC page in the running enclave.
The EPC page is not valid.RCX does not contain an effective address of an EPC page in the running enclave.
SECINFO contains an invalid request.
+

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn ConflictSGX_CONFLICT VM Exit Qualification
EMODPETarget [DS:RCX]Concurrent
SECINFO [DS:RBX]Concurrent
+
Table 38-70. Base Concurrency Restrictions of EMODPE
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
EMODPETarget [DS:RCX]Exclusive#GPConcurrentConcurrent
SECINFO [DS:RBX]ConcurrentConcurrentConcurrent
+
Table 38-71. Additional Concurrency Restrictions of EMODPE
+

Operation + ¶ +

+

Temp Variables in EMODPE Operational Flow + ¶ +

+ + + + + + + + + + +
NameTypeSize (bits)Description
SCRATCH_SECINFOSECINFO512Scratch storage for holding the contents of DS:RBX.
+

IF (DS:RBX is not 64Byte Aligned)

+

THEN #GP(0); FI;

+

IF (DS:RCX is not 4KByte Aligned)

+

THEN #GP(0); FI;

+

IF ((DS:RBX is not within CR_ELRANGE) or (DS:RCX is not within CR_ELRANGE) )

+

THEN #GP(0); FI;

+

IF (DS:RBX does not resolve within an EPC)

+

THEN #PF(DS:RBX); FI;

+

IF (DS:RCX does not resolve within an EPC)

+

THEN #PF(DS:RCX); FI;

+

IF ( (EPCM(DS:RBX).VALID = 0) or (EPCM(DS:RBX).R = 0) or (EPCM(DS:RBX).PENDING ≠ 0) or (EPCM(DS:RBX).MODIFIED ≠ 0) or

+

(EPCM(DS:RBX).BLOCKED ≠ 0) or (EPCM(DS:RBX).PT ≠ PT_REG) or (EPCM(DS:RBX).ENCLAVESECS ≠ CR_ACTIVE_SECS) or

+

(EPCM(DS:RBX).ENCLAVEADDRESS ≠ (DS:RBX & ~0xFFF)) )

+

THEN #PF(DS:RBX); FI;

+

SCRATCH_SECINFO := DS:RBX;

+

(* Check for misconfigured SECINFO flags*)

+

IF (SCRATCH_SECINFO reserved fields are not zero )

+

THEN #GP(0); FI;

+

(* Check security attributes of the EPC page *)

+

IF ( (EPCM(DS:RCX).VALID = 0) or (EPCM(DS:RCX).PENDING ≠ 0) or (EPCM(DS:RCX).MODIFIED ≠ 0) or

+

(EPCM(DS:RCX).BLOCKED ≠ 0) or (EPCM(DS:RCX).PT ≠ PT_REG) or (EPCM(DS:RCX).ENCLAVESECS ≠ CR_ACTIVE_SECS) )

+

THEN #PF(DS:RCX); FI;

+

(* Check the EPC page for concurrency *)

+

IF (EPC page in use by another SGX2 instruction)

+

THEN #GP(0); FI;

+

(* Re-Check security attributes of the EPC page *)

+

IF ( (EPCM(DS:RCX).VALID = 0) or (EPCM(DS:RCX).PENDING ≠ 0) or (EPCM(DS:RCX).MODIFIED ≠ 0) or

+

(EPCM(DS:RCX).PT ≠ PT_REG) or (EPCM(DS:RCX).ENCLAVESECS ≠ CR_ACTIVE_SECS) or

+

(EPCM(DS:RCX).ENCLAVEADDRESS ≠ DS:RCX))

+

THEN #PF(DS:RCX); FI;

+

(* Check for misconfigured SECINFO flags*)

+

IF ( (EPCM(DS:RCX).R = 0) and (SCRATCH_SECINFO.FLAGS.R = 0) and (SCRATCH_SECINFO.FLAGS.W ≠ 0) )

+

THEN #GP(0); FI;

+

(* Update EPCM permissions *)

+

EPCM(DS:RCX).R := EPCM(DS:RCX).R | SCRATCH_SECINFO.FLAGS.R;

+

EPCM(DS:RCX).W := EPCM(DS:RCX).W | SCRATCH_SECINFO.FLAGS.W;

+

EPCM(DS:RCX).X := EPCM(DS:RCX).X | SCRATCH_SECINFO.FLAGS.X;

+

Flags Affected + ¶ +

+

None

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If executed outside an enclave.
If a memory operand effective address is outside the DS segment limit.
If a memory operand is not properly aligned.
If a memory operand is locked.
#PF(errorcode) If a page fault occurs in accessing memory operands.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If executed outside an enclave.
If a memory operand is non-canonical form.
If a memory operand is not properly aligned.
If a memory operand is locked.
#PF(errorcode) If a page fault occurs in accessing memory operands.
diff --git a/x86/emodpr.html b/x86/emodpr.html new file mode 100644 index 0000000..93883ac --- /dev/null +++ b/x86/emodpr.html @@ -0,0 +1,238 @@ + +EMODPR + — Restrict the Permissions of an EPC Page

EMODPR + — Restrict the Permissions of an EPC Page

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 0EH ENCLS[EMODPR]IRV/VSGX2This leaf function restricts the access rights associated with an EPC page in an initialized enclave.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + +
Op/En EAXRBXRCX
IREMODPR (In)Return Error Code (Out)Address of a SECINFO (In)Address of the destination EPC page (In)
+

Description + ¶ +

+

This leaf function restricts the access rights associated with an EPC page in an initialized enclave. The RWX bits of the SECINFO parameter are treated as a permissions mask; supplying a value that does not restrict the page permissions will have no effect. This instruction can only be executed when current privilege level is 0.

+

RBX contains the effective address of a SECINFO structure while RCX contains the effective address of an EPC page. The table below provides additional information on the memory parameter of the EMODPR leaf function.

+

EMODPR Memory Parameter Semantics + ¶ +

+ + + + + + +
SECINFOEPCPAGE
Read access permitted by Non EnclaveRead/Write access permitted by Enclave
+

The instruction faults if any of the following:

+

EMODPR Faulting Conditions + ¶ +

+ + + + + + + + + + + + +
The operands are not properly aligned.If unsupported security attributes are set.
The Enclave is not initialized.SECS is locked by another thread.
The EPC page is locked by another thread.RCX does not contain an effective address of an EPC page in the running enclave.
The EPC page is not valid.
+

The error codes are:

+
+ + + + + + + + + + + + +
Error Code (see Table 38-4)Description
No ErrorEMODPR successful.
SGX_PAGE_NOT_MODIFIABLEThe EPC page cannot be modified because it is in the PENDING or MODIFIED state.
SGX_EPC_PAGE_CONFLICTPage is being written by EADD, EAUG, ECREATE, ELDU/B, EMODT, or EWB.
+
Table 38-31. EMODPR Return Value in RAX
+

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn Conflict
EMODPRTarget [DS:RCX]Shared
+
Table 38-32. Base Concurrency Restrictions of EMODPR
+
+ + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
EMODPRTarget [DS:RCX]ExclusiveSGX_EPC_PAGE _CONFLICTConcurrentConcurrent
+
Table 38-33. Additional Concurrency Restrictions of EMODPR
+

Operation + ¶ +

+

Temp Variables in EMODPR Operational Flow + ¶ +

+ + + + + + + + + + + + + + + +
NameTypeSize (bits)Description
TMP_SECSEffective Address32/64Physical address of SECS to which EPC operand belongs.
SCRATCH_SECINFOSECINFO512Scratch storage for holding the contents of DS:RBX.
+

IF (DS:RBX is not 64Byte Aligned)

+

THEN #GP(0); FI;

+

IF (DS:RCX is not 4KByte Aligned)

+

THEN #GP(0); FI;

+

IF (DS:RCX does not resolve within an EPC)

+

THEN #PF(DS:RCX); FI;

+

SCRATCH_SECINFO := DS:RBX;

+

(* Check for misconfigured SECINFO flags*)

+

IF ( (SCRATCH_SECINFO reserved fields are not zero ) or

+

(SCRATCH_SECINFO.FLAGS.R is 0 and SCRATCH_SECINFO.FLAGS.W is not 0) )

+

THEN #GP(0); FI;

+

(* Check concurrency with SGX1 or SGX2 instructions on the EPC page *)

+

IF (SGX1 or other SGX2 instructions accessing EPC page)

+

THEN #GP(0); FI;

+

IF (EPCM(DS:RCX).VALID is 0 )

+

THEN #PF(DS:RCX); FI;

+

(* Check the EPC page for concurrency *)

+

IF (EPC page in use by another SGX2 instruction)

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_EPC_PAGE_CONFLICT;

+

GOTO DONE;

+

FI;

+

IF (EPCM(DS:RCX).PENDING is not 0 or (EPCM(DS:RCX).MODIFIED is not 0) )

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_PAGE_NOT_MODIFIABLE;

+

GOTO DONE;

+

FI;

+

IF (EPCM(DS:RCX).PT is not PT_REG)

+

THEN #PF(DS:RCX); FI;

+

TMP_SECS := GET_SECS_ADDRESS

+

IF (TMP_SECS.ATTRIBUTES.INIT = 0)

+

THEN #GP(0); FI;

+

(* Set the PR bit to indicate that permission restriction is in progress *)

+

EPCM(DS:RCX).PR := 1;

+

(* Update EPCM permissions *)

+

EPCM(DS:RCX).R := EPCM(DS:RCX).R & SCRATCH_SECINFO.FLAGS.R;

+

EPCM(DS:RCX).W := EPCM(DS:RCX).W & SCRATCH_SECINFO.FLAGS.W;

+

EPCM(DS:RCX).X := EPCM(DS:RCX).X & SCRATCH_SECINFO.FLAGS.X;

+

RFLAGS.ZF := 0;

+

RAX := 0;

+

DONE:

+

RFLAGS.CF,PF,AF,OF,SF := 0;

+

Flags Affected + ¶ +

+

Sets ZF if page is not modifiable or if other SGX2 instructions are executing concurrently, otherwise cleared. Clears CF, PF, AF, OF, SF.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the DS segment limit.
If a memory operand is not properly aligned.
If a memory operand is locked.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If a memory operand is not an EPC page.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If a memory operand is non-canonical form.
If a memory operand is not properly aligned.
If a memory operand is locked.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If a memory operand is not an EPC page.
diff --git a/x86/emodt.html b/x86/emodt.html new file mode 100644 index 0000000..ef6d260 --- /dev/null +++ b/x86/emodt.html @@ -0,0 +1,240 @@ + +EMODT + — Change the Type of an EPC Page

EMODT + — Change the Type of an EPC Page

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 0FH ENCLS[EMODT]IRV/VSGX2This leaf function changes the type of an existing EPC page.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + +
Op/En EAXRBXRCX
IREMODT (In)Return Error Code (Out)Address of a SECINFO (In)Address of the destination EPC page (In)
+

Description + ¶ +

+

This leaf function modifies the type of an EPC page. The security attributes are configured to prevent access to the EPC page at its new type until a corresponding invocation of the EACCEPT leaf confirms the modification. This instruction can only be executed when current privilege level is 0.

+

RBX contains the effective address of a SECINFO structure while RCX contains the effective address of an EPC page. The table below provides additional information on the memory parameter of the EMODT leaf function.

+

EMODT Memory Parameter Semantics + ¶ +

+ + + + + + +
SECINFOEPCPAGE
Read access permitted by Non EnclaveRead/Write access permitted by Enclave
+

The instruction faults if any of the following:

+

EMODT Faulting Conditions + ¶ +

+ + + + + + + + + + + + +
The operands are not properly aligned.If unsupported security attributes are set.
The Enclave is not initialized.SECS is locked by another thread.
The EPC page is locked by another thread.RCX does not contain an effective address of an EPC page in the running enclave.
The EPC page is not valid.
+

The error codes are:

+
+ + + + + + + + + + + + +
Error Code (see Table 38-4)Description
No ErrorEMODT successful.
SGX_PAGE_NOT_MODIFIABLEThe EPC page cannot be modified because it is in the PENDING or MODIFIED state.
SGX_EPC_PAGE_CONFLICTPage is being written by EADD, EAUG, ECREATE, ELDU/B, EMODPR, or EWB.
+
Table 38-34. EMODT Return Value in RAX
+

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn ConflictSGX_CONFLICT VM Exit Qualification
EMODTTarget [DS:RCX]ExclusiveSGX_EPC_PAGE_ CONFLICTEPC_PAGE_CONFLICT_ERROR
+
Table 38-35. Base Concurrency Restrictions of EMODT
+
+ + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
EMODTTarget [DS:RCX]ExclusiveSGX_EPC_PAGE _CONFLICTConcurrentConcurrent
+
Table 38-36. Additional Concurrency Restrictions of EMODT
+

Operation + ¶ +

+

Temp Variables in EMODT Operational Flow + ¶ +

+ + + + + + + + + + + + + + + +
NameTypeSize (bits)Description
TMP_SECSEffective Address32/64Physical address of SECS to which EPC operand belongs.
SCRATCH_SECINFOSECINFO512Scratch storage for holding the contents of DS:RBX.
+

IF (DS:RBX is not 64Byte Aligned)

+

THEN #GP(0); FI;

+

IF (DS:RCX is not 4KByte Aligned)

+

THEN #GP(0); FI;

+

IF (DS:RCX does not resolve within an EPC)

+

THEN #PF(DS:RCX); FI;

+

SCRATCH_SECINFO := DS:RBX;

+

(* Check for misconfigured SECINFO flags*)

+

IF ( (SCRATCH_SECINFO reserved fields are not zero ) or

+

!(SCRATCH_SECINFO.FLAGS.PT is PT_TCS or SCRATCH_SECINFO.FLAGS.PT is PT_TRIM) )

+

THEN #GP(0); FI;

+

(* Check concurrency with SGX1 instructions on the EPC page *)

+

IF (other SGX1 instructions accessing EPC page)

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_EPC_PAGE_CONFLICT;

+

GOTO DONE;

+

FI;

+

IF (EPCM(DS:RCX).VALID is 0)

+

THEN #PF(DS:RCX); FI;

+

(* Check the EPC page for concurrency *)

+

IF (EPC page in use by another SGX2 instruction)

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_EPC_PAGE_CONFLICT;

+

GOTO DONE;

+

FI;

+

IF (!(EPCM(DS:RCX).PT is PT_REG or

+

((EPCM(DS:RCX).PT is PT_TCS or PT_SS_FIRST or PT_SS_REST) and SCRATCH_SECINFO.FLAGS.PT is PT_TRIM)))

+

THEN #PF(DS:RCX); FI;

+

IF (EPCM(DS:RCX).PENDING is not 0 or (EPCM(DS:RCX).MODIFIED is not 0) )

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_PAGE_NOT_MODIFIABLE;

+

GOTO DONE;

+

FI;

+

TMP_SECS := GET_SECS_ADDRESS

+

IF (TMP_SECS.ATTRIBUTES.INIT = 0)

+

THEN #GP(0); FI;

+

(* Update EPCM fields *)

+

EPCM(DS:RCX).PR := 0;

+

EPCM(DS:RCX).MODIFIED := 1;

+

EPCM(DS:RCX).R := 0;

+

EPCM(DS:RCX).W := 0;

+

EPCM(DS:RCX).X := 0;

+

EPCM(DS:RCX).PT := SCRATCH_SECINFO.FLAGS.PT;

+

RFLAGS.ZF := 0;

+

RAX := 0;

+

DONE:

+

RFLAGS.CF,PF,AF,OF,SF := 0;

+

Flags Affected + ¶ +

+

Sets ZF if page is not modifiable or if other SGX2 instructions are executing concurrently, otherwise cleared. Clears CF, PF, AF, OF, SF.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the DS segment limit.
If a memory operand is not properly aligned.
If a memory operand is locked.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If a memory operand is not an EPC page.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If a memory operand is non-canonical form.
If a memory operand is not properly aligned.
If a memory operand is locked.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If a memory operand is not an EPC page.
diff --git a/x86/encls.html b/x86/encls.html new file mode 100644 index 0000000..8777c8a --- /dev/null +++ b/x86/encls.html @@ -0,0 +1,139 @@ + +ENCLS + — Execute an Enclave System Function of Specified Leaf Number

ENCLS + — Execute an Enclave System Function of Specified Leaf Number

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 01 CF ENCLSZOV/VNAThis instruction is used to execute privileged Intel SGX leaf functions that are used for managing and debugging the enclaves.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Implicit Register Operands
ZONANANASee Section 38.3
+

Description + ¶ +

+

The ENCLS instruction invokes the specified privileged Intel SGX leaf function for managing and debugging enclaves. Software specifies the leaf function by setting the appropriate value in the register EAX as input. The registers RBX, RCX, and RDX have leaf-specific purpose, and may act as input, as output, or may be unused. In 64-bit mode, the instruction ignores upper 32 bits of the RAX register.

+

The ENCLS instruction produces an invalid-opcode exception (#UD) if CR0.PE = 0 or RFLAGS.VM = 1, or if it is executed in system-management mode (SMM). Additionally, any attempt to execute the instruction when CPL > 0 results in #UD. The instruction produces a general-protection exception (#GP) if CR0.PG = 0 or if an attempt is made to invoke an undefined leaf function.

+

In VMX non-root operation, execution of ENCLS may cause a VM exit if the “enable ENCLS exiting” VM-execution control is 1. In this case, execution of individual leaf functions of ENCLS is governed by the ENCLS-exiting bitmap field in the VMCS. Each bit in that field corresponds to the index of an ENCLS leaf function (as provided in EAX).

+

Software in VMX root operation can thus intercept the invocation of various ENCLS leaf functions in VMX non-root operation by setting the “enable ENCLS exiting” VM-execution control and setting the corresponding bits in the ENCLS-exiting bitmap.

+

Addresses and operands are 32 bits outside 64-bit mode (IA32_EFER.LMA = 0 || CS.L = 0) and are 64 bits in 64-bit mode (IA32_EFER.LMA = 1 || CS.L = 1). CS.D value has no impact on address calculation. The DS segment is used to create linear addresses.

+

Segment override prefixes and address-size override prefixes are ignored, as is the REX prefix in 64-bit mode.

+
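A minimal sketch of how ring-0 software might dispatch an ENCLS leaf from C with inline assembly; the helper name is illustrative, and only EAX is captured as an output even though some leaves also write RBX or RCX.

#include <stdint.h>

static inline long encls(uint32_t leaf, void *rbx, void *rcx, void *rdx)
{
    long ret;
    asm volatile(
        ".byte 0x0f, 0x01, 0xcf"      /* ENCLS opcode: NP 0F 01 CF        */
        : "=a"(ret)                   /* leaf-specific error code, if any */
        : "a"(leaf), "b"(rbx), "c"(rcx), "d"(rdx)
        : "memory", "cc");
    return ret;
}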

Operation + ¶ +

+
IF TSX_ACTIVE
+    THEN GOTO TSX_ABORT_PROCESSING; FI;
+IF CR0.PE = 0 or RFLAGS.VM = 1 or in SMM or CPUID.SGX_LEAF.0:EAX.SE1 = 0
+    THEN #UD; FI;
+IF (CPL > 0)
+    THEN #UD; FI;
+IF in VMX non-root operation and the “enable ENCLS exiting“ VM-execution control is 1
+    THEN
+        IF EAX < 63 and ENCLS_exiting_bitmap[EAX] = 1 or EAX> 62 and ENCLS_exiting_bitmap[63] = 1
+            THEN VM exit;
+        FI;
+FI;
+IF IA32_FEATURE_CONTROL.LOCK = 0 or IA32_FEATURE_CONTROL.SGX_ENABLE = 0
+    THEN #GP(0); FI;
+IF (EAX is an invalid leaf number)
+    THEN #GP(0); FI;
+IF CR0.PG = 0
+    THEN #GP(0); FI;
+(* DS must not be an expanded down segment *)
+IF not in 64-bit mode and DS.Type is expand-down data
+    THEN #GP(0); FI;
+Jump to leaf specific flow
+
+

Flags Affected + ¶ +

+

See individual leaf functions

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#UDIf any of the LOCK/66H/REP/VEX prefixes are used.
If current privilege level is not 0.
If CPUID.(EAX=12H,ECX=0):EAX.SGX1 [bit 0] = 0.
If logical processor is in SMM.
#GP(0)If IA32_FEATURE_CONTROL.LOCK = 0.
If IA32_FEATURE_CONTROL.SGX_ENABLE = 0.
If input value in EAX encodes an unsupported leaf.
If data segment expand down.
If CR0.PG=0.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDENCLS is not recognized in real mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDENCLS is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#UDIf any of the LOCK/66H/REP/VEX prefixes are used.
If current privilege level is not 0.
If CPUID.(EAX=12H,ECX=0):EAX.SGX1 [bit 0] = 0.
If logical processor is in SMM.
#GP(0)If IA32_FEATURE_CONTROL.LOCK = 0.
If IA32_FEATURE_CONTROL.SGX_ENABLE = 0.
If input value in EAX encodes an unsupported leaf.
diff --git a/x86/enclu.html b/x86/enclu.html new file mode 100644 index 0000000..efa45b6 --- /dev/null +++ b/x86/enclu.html @@ -0,0 +1,167 @@ + +ENCLU + — Execute an Enclave User Function of Specified Leaf Number

ENCLU + — Execute an Enclave User Function of Specified Leaf Number

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 01 D7 ENCLUZOV/VNAThis instruction is used to execute non-privileged Intel SGX leaf functions.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Implicit Register Operands
ZONANANASee Section 38.4
+

Description + ¶ +

+

The ENCLU instruction invokes the specified non-privileged Intel SGX leaf functions. Software specifies the leaf function by setting the appropriate value in the register EAX as input. The registers RBX, RCX, and RDX have leaf-specific purpose, and may act as input, as output, or may be unused. In 64-bit mode, the instruction ignores upper 32 bits of the RAX register.

+

The ENCLU instruction produces an invalid-opcode exception (#UD) if CR0.PE = 0 or RFLAGS.VM = 1, or if it is executed in system-management mode (SMM). Additionally, any attempt to execute this instruction when CPL < 3 results in #UD. The instruction produces a general-protection exception (#GP) if either CR0.PG or CR0.NE is 0, or if an attempt is made to invoke an undefined leaf function. The ENCLU instruction produces a device not available exception (#NM) if CR0.TS = 1.

+

Addresses and operands are 32 bits outside 64-bit mode (IA32_EFER.LMA = 0 or CS.L = 0) and are 64 bits in 64-bit mode (IA32_EFER.LMA = 1 and CS.L = 1). CS.D value has no impact on address calculation. The DS segment is used to create linear addresses.

+

Segment override prefixes and address-size override prefixes are ignored, as is the REX prefix in 64-bit mode.

+

Operation + ¶ +

+
IN_64BIT_MODE := 0;
+IF TSX_ACTIVE
+        THEN GOTO TSX_ABORT_PROCESSING; FI;
+(* If enclosing app has CET indirect branch tracking enabled then if it is not ERESUME leaf cause a #CP fault *)
+(* If the ERESUME is not successful it will leave tracker in WAIT_FOR_ENDBRANCH *)
+TRACKER = (CPL == 3) ? IA32_U_CET.TRACKER : IA32_S_CET.TRACKER
+IF EndbranchEnabledAndNotSuppressed(CPL) and TRACKER = WAIT_FOR_ENDBRANCH and
+    (EAX != ERESUME or CR0.TS or (in SMM) or (CPUID.SGX_LEAF.0:EAX.SE1 = 0) or (CPL < 3))
+        THEN
+            Handle CET State machine violation (* see Section 17.3.6, “Legacy Compatibility Treatment,” in the
+                Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1. *)
+FI;
+IF CR0.PE= 0 or RFLAGS.VM = 1 or in SMM or CPUID.SGX_LEAF.0:EAX.SE1 = 0
+        THEN #UD; FI;
+IF CR0.TS = 1
+        THEN #NM; FI;
+IF CPL < 3
+        THEN #UD; FI;
+IF IA32_FEATURE_CONTROL.LOCK = 0 or IA32_FEATURE_CONTROL.SGX_ENABLE = 0
+        THEN #GP(0); FI;
+IF EAX is invalid leaf number
+        THEN #GP(0); FI;
+IF CR0.PG = 0 or CR0.NE = 0
+        THEN #GP(0); FI;
+IN_64BIT_MODE := IA32_EFER.LMA AND CS.L ? 1 : 0;
+(* Check not in 16-bit mode and DS is not a 16-bit segment *)
+IF not in 64-bit mode and CS.D = 0
+        THEN #GP(0); FI;
+IF CR_ENCLAVE_MODE = 1 and (EAX = 2 or EAX = 3) (* EENTER or ERESUME *)
+        THEN #GP(0); FI;
+IF CR_ENCLAVE_MODE = 0 and (EAX = 0 or EAX = 1 or EAX = 4 or EAX = 5 or EAX = 6 or EAX = 7 or EAX = 9)
+(* EREPORT, EGETKEY, EEXIT, EACCEPT, EMODPE, EACCEPTCOPY, or EDECCSSA *)
+        THEN #GP(0); FI;
+Jump to leaf specific flow
+
+

Flags Affected + ¶ +

+

See individual leaf functions

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#UDIf any of the LOCK/66H/REP/VEX prefixes are used.
If current privilege level is not 3.
If CPUID.(EAX=12H,ECX=0):EAX.SGX1 [bit 0] = 0.
If logical processor is in SMM.
#GP(0)If IA32_FEATURE_CONTROL.LOCK = 0.
If IA32_FEATURE_CONTROL.SGX_ENABLE = 0.
If input value in EAX encodes an unsupported leaf.
If input value in EAX encodes EENTER/ERESUME and ENCLAVE_MODE = 1.
If input value in EAX encodes EGETKEY/EREPORT/EEXIT/EACCEPT/EACCEPTCOPY/EMODPE and ENCLAVE_MODE = 0.
If operating in 16-bit mode.
If data segment is in 16-bit mode.
If CR0.PG = 0 or CR0.NE= 0.
#NMIf CR0.TS = 1.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDENCLU is not recognized in real mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDENCLU is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + +
#UDIf any of the LOCK/66H/REP/VEX prefixes are used.
If current privilege level is not 3.
If CPUID.(EAX=12H,ECX=0):EAX.SGX1 [bit 0] = 0.
If logical processor is in SMM.
#GP(0)If IA32_FEATURE_CONTROL.LOCK = 0.
If IA32_FEATURE_CONTROL.SGX_ENABLE = 0.
If input value in EAX encodes an unsupported leaf.
If input value in EAX encodes EENTER/ERESUME and ENCLAVE_MODE = 1.
If input value in EAX encodes EGETKEY/EREPORT/EEXIT/EACCEPT/EACCEPTCOPY/EMODPE and ENCLAVE_MODE = 0.
If CR0.NE= 0.
#NMIf CR0.TS = 1.
diff --git a/x86/enclv.html b/x86/enclv.html new file mode 100644 index 0000000..dc25b32 --- /dev/null +++ b/x86/enclv.html @@ -0,0 +1,144 @@ + +ENCLV + — Execute an Enclave VMM Function of Specified Leaf Number

ENCLV + — Execute an Enclave VMM Function of Specified Leaf Number

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 01 C0 ENCLVZOV/VNAThis instruction is used to execute privileged SGX leaf functions that are reserved for VMM use. They are used for managing the enclaves.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Implicit Register Operands
ZONANANASee Section 38.3
+

Description + ¶ +

+

The ENCLV instruction invokes the virtualization SGX leaf functions for managing enclaves in a virtualized environment. Software specifies the leaf function by setting the appropriate value in the register EAX as input. The registers RBX, RCX, and RDX have leaf-specific purpose, and may act as input, as output, or may be unused. In non 64-bit mode, the instruction ignores upper 32 bits of the RAX register.

+

The ENCLV instruction produces an invalid-opcode exception (#UD) if CR0.PE = 0 or RFLAGS.VM = 1, if it is executed in system-management mode (SMM), or not in VMX operation. Additionally, any attempt to execute the instruction when CPL > 0 results in #UD. The instruction produces a general-protection exception (#GP) if CR0.PG = 0 or if an attempt is made to invoke an undefined leaf function.

+

Software in VMX root mode of operation can enable execution of the ENCLV instruction in VMX non-root mode by setting enable ENCLV execution control in the VMCS. If enable ENCLV execution control in the VMCS is clear, execution of the ENCLV instruction in VMX non-root mode results in #UD.

+

When execution of ENCLV instruction in VMX non-root mode is enabled, software in VMX root operation can intercept the invocation of various ENCLV leaf functions in VMX non-root operation by setting the corresponding bits in the ENCLV-exiting bitmap.

+

Addresses and operands are 32 bits in 32-bit mode (IA32_EFER.LMA == 0 || CS.L == 0) and are 64 bits in 64-bit mode (IA32_EFER.LMA == 1 && CS.L == 1). CS.D value has no impact on address calculation.

+

Segment override prefixes and address-size override prefixes are ignored, as is the REX prefix in 64-bit mode.

+

Operation + ¶ +

+
IF TSX_ACTIVE
+            THEN GOTO TSX_ABORT_PROCESSING; FI;
+IF CR0.PE = 0 or RFLAGS.VM = 1 or in SMM or CPUID.SGX_LEAF.0:EAX.OSS = 0
+            THEN #UD; FI;
+IF not in VMX Operation or (IA32_EFER.LMA = 1 and CS.L = 0)
+            THEN #UD; FI;
+IF (CPL > 0)
+            THEN #UD; FI;
+IF in VMX non-root operation
+    IF “enable ENCLV exiting“ VM-execution control is 1
+                THEN
+                    IF EAX < 63 and ENCLV_exiting_bitmap[EAX] = 1 or EAX> 62 and ENCLV_exiting_bitmap[63] = 1
+                        THEN VM exit;
+                    FI;
+        ELSE
+                #UD; FI;
+FI;
+IF IA32_FEATURE_CONTROL.LOCK = 0 or IA32_FEATURE_CONTROL.SGX_ENABLE = 0
+            THEN #GP(0); FI;
+IF (EAX is an invalid leaf number)
+            THEN #GP(0); FI;
+IF CR0.PG = 0
+            THEN #GP(0); FI;
+(* DS must not be an expanded down segment *)
+IF not in 64-bit mode and DS.Type is expand-down data
+            THEN #GP(0); FI;
+Jump to leaf specific flow
+
+

Flags Affected + ¶ +

+

See individual leaf functions.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#UDIf any of the LOCK/66H/REP/VEX prefixes are used.
If current privilege level is not 0.
If CPUID.(EAX=12H,ECX=0):EAX.OSS [bit 5] = 0.
If logical processor is in SMM.
#GP(0)If IA32_FEATURE_CONTROL.LOCK = 0.
If IA32_FEATURE_CONTROL.SGX_ENABLE = 0.
If input value in EAX encodes an unsupported leaf.
If data segment expand down.
If CR0.PG=0.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDENCLV is not recognized in real mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDENCLV is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#UDIf any of the LOCK/66H/REP/VEX prefixes are used.
If current privilege level is not 0.
If CPUID.(EAX=12H,ECX=0):EAX.OSS [bit 5] = 0.
If logical processor is in SMM.
#GP(0)If IA32_FEATURE_CONTROL.LOCK = 0.
If IA32_FEATURE_CONTROL.SGX_ENABLE = 0.
If input value in EAX encodes an unsupported leaf.
diff --git a/x86/encodekey128.html b/x86/encodekey128.html new file mode 100644 index 0000000..778ebf3 --- /dev/null +++ b/x86/encodekey128.html @@ -0,0 +1,216 @@ + +ENCODEKEY128 + — Encode 128-Bit Key With Key Locker

ENCODEKEY128 + — Encode 128-Bit Key With Key Locker

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
F3 0F 38 FA 11:rrr:bbb ENCODEKEY128 r32, r32, <XMM0-2>, <XMM4-6>AV/VAESKLEWrap a 128-bit AES key from XMM0 into a key handle and output handle in XMM0—2.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operands 4—5Operands 6—7
AN/AModRM:reg (w)ModRM:r/m (r)Implicit XMM0 (r, w)Implicit XMM1—2 (w)Implicit XMM4—6 (w)
+

Description + ¶ +

+

The ENCODEKEY128 instruction (see note 1 below) wraps a 128-bit AES key from the implicit operand XMM0 into a key handle that is then stored in the implicit destination operands XMM0-2.

+

The explicit source operand specifies handle restrictions, if any.

+

The explicit destination operand is populated with information on the source of the key and its attributes. XMM4 through XMM6 are reserved for future usages and software should not rely upon them being zeroed.

+

Operation + ¶ +

+

ENCODEKEY128 + ¶ +

+
#GP (0) if a reserved bit2 in SRC[31:0] is set
+InputKey[127:0] := XMM0;
+KeyMetadata[2:0] = SRC[2:0];
+KeyMetadata[23:3] = 0;
+    // Reserved for future usage
+KeyMetadata[27:24] = 0;
+    // KeyType is AES-128 (value of 0)
+KeyMetadata[127:28] = 0;
+    // Reserved for future usage
+// KeyMetadata is the AAD input and InputKey is the Plaintext input for WrapKey128
+Handle[383:0] := WrapKey128(InputKey[127:0], KeyMetadata[127:0], IWKey.Integrity Key[127:0], IWKey.Encryption Key[255:0]);
+DEST[0] := IWKey.NoBackup;
+DEST[4:1] := IWKey.KeySource[3:0];
+DEST[31:5] = 0;
+XMM0 := Handle[127:0]; // AAD
+XMM1 := Handle[255:128]; // Integrity Tag
+XMM2 := Handle[383:256]; // CipherText
+XMM4 := 0; // Reserved for future usage
+XMM5 := 0; // Reserved for future usage
+XMM6 := 0; // Reserved for future usage
+RFLAGS.OF, SF, ZF, AF, PF, CF := 0;
+
+
+

2. SRC[31:3] are currently reserved for future usages. SRC[2], which indicates a no-decrypt restriction, is reserved if CPUID.19H:EAX[2] is 0. SRC[1], which indicates a no-encrypt restriction, is reserved if CPUID.19H:EAX[1] is 0. SRC[0], which indicates a CPL0-only restriction, is reserved if CPUID.19H:EAX[0] is 0.

+
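Since each of these restriction bits is reserved unless the matching CPUID.19H:EAX bit is set, software that builds the source-operand value should mask its requested restrictions first. A minimal sketch using the GCC/Clang <cpuid.h> helper; the function name is illustrative and this is an assumption about typical usage, not SDM-mandated code:

#include <cpuid.h>
#include <stdint.h>

/* Keep only the handle-restriction bits (bit 0 CPL0-only, bit 1 no-encrypt,
   bit 2 no-decrypt) that CPUID.19H:EAX reports as supported; setting a
   reserved bit in the ENCODEKEY* source operand causes #GP(0). */
static uint32_t supported_restrictions(uint32_t requested)
{
    unsigned int eax, ebx, ecx, edx;
    __cpuid_count(0x19, 0, eax, ebx, ecx, edx);
    return requested & eax & 0x7;
}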

Flags Affected + ¶ +

+

All arithmetic flags (OF, SF, ZF, AF, PF, CF) are cleared to 0. Although they are cleared for the currently defined operations, future extensions may report information in the flags.

+

1. Further details on Key Locker and usage of this instruction can be found here:

+

https://software.intel.com/content/www/us/en/develop/download/intel-key-locker-specification.html. + ¶ +

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
ENCODEKEY128 unsigned int _mm_encodekey128_u32(unsigned int htype, __m128i key, void* h);
+
+
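A hedged usage sketch of this intrinsic follows; it assumes a toolchain and CPU with Key Locker support (e.g., -mkl on GCC/Clang) and uses the 48-byte (384-bit) handle size implied by the XMM0-2 output described above:

#include <immintrin.h>
#include <stdint.h>

/* Sketch: wrap a raw 128-bit AES key into a Key Locker handle.
   handle must point to at least 48 bytes (the 384-bit handle). */
static unsigned int wrap_key128(const uint8_t raw_key[16], void *handle)
{
    __m128i key = _mm_loadu_si128((const __m128i *)raw_key);
    /* htype = 0: no additional handle restrictions requested. */
    unsigned int info = _mm_encodekey128_u32(0, key, handle);
    return info;   /* bit 0 = NoBackup, bits 4:1 = KeySource, per DEST above */
}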

Exceptions (All Operating Modes) + ¶ +

+

#GP If reserved bit is set in source register value.

+

#UD If the LOCK prefix is used.

+

If CPUID.07H:ECX.KL[bit 23] = 0.

+

If CR4.KL = 0.

+

If CPUID.19H:EBX.AESKLE[bit 0] = 0.

+

If CR0.EM = 1.

+

If CR4.OSFXSR = 0.

+

#NM If CR0.TS = 1.

diff --git a/x86/encodekey256.html b/x86/encodekey256.html new file mode 100644 index 0000000..a4c1fe7 --- /dev/null +++ b/x86/encodekey256.html @@ -0,0 +1,100 @@ + +ENCODEKEY256 + — Encode 256-Bit Key With Key Locker

ENCODEKEY256 + — Encode 256-Bit Key With Key Locker

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
F3 0F 38 FB 11:rrr:bbb ENCODEKEY256 r32, r32 <XMM0-6>AV/VAESKLEWrap a 256-bit AES key from XMM1:XMM0 into a key handle and store it in XMM0—3.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operands 3—4Operands 5—9
AN/AModRM:reg (w)ModRM:r/m (r)Implicit XMM0—1 (r, w)Implicit XMM2—6 (w)
+

Description + ¶ +

+

The ENCODEKEY256 instruction (see footnote 1) wraps a 256-bit AES key from the implicit operand XMM1:XMM0 into a key handle that is then stored in the implicit destination operands XMM0-3.

+

The explicit source operand is a general-purpose register and specifies what handle restrictions should be built into the handle.

+

The explicit destination operand is populated with information on the source of the key and its attributes. XMM4 through XMM6 are reserved for future usages and software should not rely upon them being zeroed.

+

Operation + ¶ +

+

ENCODEKEY256 + ¶ +

+
#GP (0) if a reserved bit2 in SRC[31:0] is set
+InputKey[255:0] := XMM1:XMM0;
+KeyMetadata[2:0] = SRC[2:0];
+KeyMetadata[23:3] = 0; // Reserved for future usage
+KeyMetadata[27:24] = 1; // KeyType is AES-256 (value of 1)
+KeyMetadata[127:28] = 0; // Reserved for future usage
+// KeyMetadata is the AAD input and InputKey is the Plaintext input for WrapKey256
+Handle[511:0] := WrapKey256(InputKey[255:0], KeyMetadata[127:0], IWKey.Integrity Key[127:0], IWKey.Encryption Key[255:0]);
+DEST[0] := IWKey.NoBackup;
+DEST[4:1] := IWKey.KeySource[3:0];
+DEST[31:5] = 0;
+XMM0 := Handle[127:0]; // AAD
+XMM1 := Handle[255:128]; // Integrity Tag
+XMM2 := Handle[383:256]; // CipherText[127:0]
+XMM3 := Handle[511:384]; // CipherText[255:128]
+XMM4 := 0;
+    // Reserved for future usage
+XMM5 := 0;
+    // Reserved for future usage
+XMM6 := 0;
+    // Reserved for future usage
+RFLAGS.OF, SF, ZF, AF, PF, CF := 0;
+1. Further details on Key Locker and usage of this instruction can be found here:
+
+

https://software.intel.com/content/www/us/en/develop/download/intel-key-locker-specification.html. + ¶ +

+

2. SRC[31:3] are currently reserved for future usages. SRC[2], which indicates a no-decrypt restriction, is reserved if CPUID.19H:EAX[2] is 0. SRC[1], which indicates a no-encrypt restriction, is reserved if CPUID.19H:EAX[1] is 0. SRC[0], which indicates a CPL0-only restriction, is reserved if CPUID.19H:EAX[0] is 0.

+

Flags Affected + ¶ +

+

All arithmetic flags (OF, SF, ZF, AF, PF, CF) are cleared to 0. Although they are cleared for the currently defined operations, future extensions may report information in the flags.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
ENCODEKEY256 unsigned int _mm_encodekey256_u32(unsigned int htype, __m128i key_lo, __m128i key_hi, void* h);
+
+
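A hedged usage sketch, analogous to the ENCODEKEY128 example; it assumes Key Locker support in the toolchain and uses the 64-byte (512-bit) handle implied by the XMM0-3 output:

#include <immintrin.h>
#include <stdint.h>

/* Sketch: wrap a raw 256-bit AES key into a Key Locker handle.
   handle must point to at least 64 bytes (the 512-bit handle). */
static unsigned int wrap_key256(const uint8_t raw_key[32], void *handle)
{
    __m128i key_lo = _mm_loadu_si128((const __m128i *)raw_key);        /* key bits 127:0   */
    __m128i key_hi = _mm_loadu_si128((const __m128i *)(raw_key + 16)); /* key bits 255:128 */
    /* htype = 0: no additional handle restrictions requested. */
    return _mm_encodekey256_u32(0, key_lo, key_hi, handle);
}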

Exceptions (All Operating Modes) + ¶ +

+

#GP If reserved bit is set in source register value.

+

#UD If the LOCK prefix is used.

+

If CPUID.07H:ECX.KL[bit 23] = 0.

+

If CR4.KL = 0.

+

If CPUID.19H:EBX.AESKLE[bit 0] = 0.

+

If CR0.EM = 1.

+

If CR4.OSFXSR = 0.

+

#NM If CR0.TS = 1.

diff --git a/x86/endbr32.html b/x86/endbr32.html new file mode 100644 index 0000000..58d73fa --- /dev/null +++ b/x86/endbr32.html @@ -0,0 +1,66 @@ + +ENDBR32 + — Terminate an Indirect Branch in 32-bit and Compatibility Mode

ENDBR32 + — Terminate an Indirect Branch in 32-bit and Compatibility Mode

+ + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 1E FB ENDBR32ZOV/VCET_IBTTerminate indirect branch in 32-bit and compatibility mode.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/AN/A
+

Description + ¶ +

+

Terminate an indirect branch in 32-bit and compatibility mode.

+

Operation + ¶ +

+
IF EndbranchEnabled(CPL) & (IA32_EFER.LMA = 0 | (IA32_EFER.LMA = 1 & CS.L = 0))
+    IF CPL = 3
+        THEN
+            IA32_U_CET.TRACKER = IDLE
+            IA32_U_CET.SUPPRESS = 0
+        ELSE
+            IA32_S_CET.TRACKER = IDLE
+            IA32_S_CET.SUPPRESS = 0
+    FI;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Exceptions + ¶ +

+

None.

diff --git a/x86/endbr64.html b/x86/endbr64.html new file mode 100644 index 0000000..d418cc5 --- /dev/null +++ b/x86/endbr64.html @@ -0,0 +1,66 @@ + +ENDBR64 + — Terminate an Indirect Branch in 64-bit Mode

ENDBR64 + — Terminate an Indirect Branch in 64-bit Mode

+ + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 1E FA ENDBR64ZOV/VCET_IBTTerminate indirect branch in 64-bit mode.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/AN/A
+

Description + ¶ +

+

Terminate an indirect branch in 64-bit mode.

+
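ENDBR64 executes as a NOP on processors that do not support CET, so toolchains emit it unconditionally at indirect-branch targets when control-flow protection is requested. The following small C program is only an illustration of that convention (compile with, e.g., gcc -fcf-protection=branch); the behavior described in the comments is the usual compiler convention, not something this page mandates:

#include <stdio.h>

/* With -fcf-protection=branch the compiler places ENDBR64 (F3 0F 1E FA) as the
   first instruction of handler(), marking it a legal target for the indirect
   call below when CET indirect branch tracking is enabled. */
static void handler(void)
{
    puts("reached via indirect branch");
}

int main(void)
{
    void (*fp)(void) = handler;   /* indirect call site tracked by IBT */
    fp();
    return 0;
}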

Operation + ¶ +

+
IF EndbranchEnabled(CPL) & IA32_EFER.LMA = 1 & CS.L = 1
+    IF CPL = 3
+        THEN
+            IA32_U_CET.TRACKER = IDLE
+            IA32_U_CET.SUPPRESS = 0
+        ELSE
+            IA32_S_CET.TRACKER = IDLE
+            IA32_S_CET.SUPPRESS = 0
+    FI;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Exceptions + ¶ +

+

None.

diff --git a/x86/enqcmd.html b/x86/enqcmd.html new file mode 100644 index 0000000..e1b7477 --- /dev/null +++ b/x86/enqcmd.html @@ -0,0 +1,172 @@ + +ENQCMD + — Enqueue Command

ENQCMD + — Enqueue Command

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
F2 0F 38 F8 !(11):rrr:bbb ENQCMD r32/r64, m512AV/VENQCMDAtomically enqueue 64-byte user command from source memory operand to destination offset in ES segment specified in register operand as offset in ES segment.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

The ENQCMD instruction allows software to write commands to enqueue registers, which are special device registers accessed using memory-mapped I/O (MMIO).

+

Enqueue registers expect writes to have the following format:

+
+ + + + +
Bits 511:32: DEVICE SPECIFIC COMMAND; bit 31: PRIV; bits 30:20: RESERVED; bits 19:0: PASID
+
Figure 3-16. 64-Byte Data Written to Enqueue Registers
+

Bits 19:0 convey the process address space identifier (PASID), a value which system software may assign to individual software threads. Bit 31 contains privilege identification (0 = user; 1 = supervisor). Devices implementing enqueue registers may use these two values along with a device-specific command in the upper 60 bytes.

+

The ENQCMD instruction begins by reading 64 bytes of command data from its source memory operand. This is an ordinary load with cacheability and memory ordering implied normally by the memory type. The source operand need not be aligned, and there is no guarantee that all 64 bytes are loaded atomically. Bits 31:0 of the source operand must be zero.

+

The instruction then formats those 64 bytes into command data with a format consistent with that given in Figure 3-16:

+
    +
  • Command[19:0] get IA32_PASID[19:0].1
  • +
  • Command[30:20] are zero.
  • +
  • Command[31] is 0 (indicating user; this value is used regardless of CPL).
  • +
  • Command[511:32] get bits 511:32 of the source operand that was read from memory.
+

The ENQCMD instruction uses an enqueue store (defined below) to write this command data to the destination operand. The address of the destination operand is specified in a general-purpose register as an offset into the ES segment (the segment cannot be overridden).2 The destination linear address must be 64-byte aligned. The operation of an enqueue store disregards the memory type of the destination memory address.

+
+

1. It is expected that system software will load the IA32_PASID MSR so that bits 19:0 contain the PASID of the current software thread. The MSR’s valid bit, IA32_PASID[31], must be 1. For additional details on the IA32_PASID MSR, see the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 4.

+

2. In 64-bit mode, the width of the register operand is 64 bits (32 bits with a 67H prefix). Outside 64-bit mode when CS.D = 1, the width is 32 bits (16 bits with a 67H prefix). Outside 64-bit mode when CS.D = 0, the width is 16 bits (32 bits with a 67H prefix).

+

An enqueue store is not ordered relative to older stores to WB or WC memory (including non-temporal stores) or to executions of the CLFLUSHOPT or CLWB (when applied to addresses other than that of the enqueue store). Software can enforce such ordering by executing a fencing instruction such as SFENCE or MFENCE before the enqueue store.

+

An enqueue store does not write the data into the cache hierarchy, nor does it fetch any data into the cache hierarchy. An enqueue store’s command data is never combined with that of any other store to the same address.

+

Unlike other stores, an enqueue store returns a status, which the ENQCMD instruction loads into the ZF flag in the RFLAGS register:

+
    +
  • ZF = 0 (success) reports that the 64-byte command data was written atomically to a device’s enqueue register and has been accepted by the device. (It does not guarantee that the device has acted on the command; it may have queued it for later execution.)
  • +
  • ZF = 1 (retry) reports that the command data was not accepted. This status is returned if the destination address is an enqueue register but the command was not accepted due to capacity or other temporal reasons. This status is also returned if the destination address was not an enqueue register (including the case of a memory address); in these cases, the store is dropped and is written neither to MMIO nor to memory.
+

Availability of the ENQCMD instruction is indicated by the presence of the CPUID feature flag ENQCMD (CPUID.(EAX=07H, ECX=0H):ECX[bit 29]).

+

Operation + ¶ +

+
IF IA32_PASID[31] = 0
+    THEN #GP;
+ELSE
+    COMMAND := (SRC & ~FFFFFFFFH) | (IA32_PASID & FFFFFH);
+    DEST := COMMAND;
+FI;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
ENQCMD int_enqcmd(void *dst, const void *src)
+
+
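A hedged sketch of submitting work through this intrinsic follows. It assumes system software has already programmed IA32_PASID (see footnote 1 above), and portal_mmio plus the 60-byte device command are placeholders for a real device's definitions; only the requirement that bits 31:0 of the source stay zero and the ZF-based retry status come from this page:

#include <immintrin.h>
#include <stdint.h>
#include <string.h>

/* Sketch: submit a 64-byte descriptor to a device enqueue register.
   Returns 0 if the device accepted the command, non-zero for retry (ZF). */
static int submit_descriptor(volatile void *portal_mmio,  /* 64-byte aligned MMIO portal */
                             const void *device_cmd,      /* device-specific command     */
                             size_t device_cmd_len)       /* at most 60 bytes            */
{
    _Alignas(64) uint8_t desc[64] = {0};   /* bytes 3:0 stay zero; CPU inserts PASID/PRIV */
    memcpy(desc + 4, device_cmd, device_cmd_len > 60 ? 60 : device_cmd_len);
    _mm_sfence();                          /* order earlier WB/WC stores before the enqueue store */
    return _enqcmd((void *)portal_mmio, desc);
}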

Flags Affected + ¶ +

+

The ZF flag is set if the enqueue-store completion returns the retry status; otherwise it is cleared. All other flags are cleared.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#GP(0)For an illegal memory operand effective address in the CS, DS, ES, FS or GS segments.
If destination linear address is not aligned to a 64-byte boundary.
If the PASID Valid field (bit 31) is 0 in IA32_PASID MSR.
If bits 31:0 of the source operand are not all zero.
#SS(0)For an illegal address in the SS segment.
#PF(fault-code)For a page fault.
#UDIf CPUID.07H.0H:ECX.ENQCMD[bit 29] = 0.
If the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + +
#GPIf any part of the operand lies outside the effective address space from 0 to FFFFH.
If destination linear address is not aligned to a 64-byte boundary.
If the PASID Valid field (bit 31) is 0 in IA32_PASID MSR.
If bits 31:0 of the source operand are not all zero.
#UDIf CPUID.07H.0H:ECX.ENQCMD[bit 29] = 0.
If the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in real-address mode. Additionally:

+ + + +
#PF(fault-code)For a page fault.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in non-canonical form.
#GP(0)If the memory address is in non-canonical form.
If destination linear address is not aligned to a 64-byte boundary.
If the PASID Valid field (bit 31) is 0 in IA32_PASID MSR.
If bits 31:0 of the source operand are not all zero.
#PF(fault-code)For a page fault.
#UDIf CPUID.07H.0H:ECX.ENQCMD[bit 29] = 0.
If the LOCK prefix is used.
diff --git a/x86/enqcmds.html b/x86/enqcmds.html new file mode 100644 index 0000000..d273b2f --- /dev/null +++ b/x86/enqcmds.html @@ -0,0 +1,155 @@ + +ENQCMDS + — Enqueue Command Supervisor

ENQCMDS + — Enqueue Command Supervisor

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 38 F8 !(11):rrr:bbb ENQCMDS r32/r64, m512AV/VENQCMDAtomically enqueue 64-byte command with PASID from source memory operand to destination offset in ES segment specified in register operand as offset in ES segment.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

The ENQCMDS instruction allows system software to write commands to enqueue registers, which are special device registers accessed using memory-mapped I/O (MMIO).

+

Enqueue registers expect writes to have the format given in Figure 3-16 and explained in the section on “ENQCMD—Enqueue Command.”

+

The ENQCMDS instruction begins by reading 64 bytes of command data from its source memory operand. This is an ordinary load with cacheability and memory ordering implied normally by the memory type. The source operand need not be aligned, and there is no guarantee that all 64 bytes are loaded atomically. Bits 30:20 of the source operand must be zero.

+

ENQCMDS formats its source data differently from ENQCMD. Specifically, it formats them into command data as follows:

+
    +
  • Command[19:0] get bits 19:0 of the source operand that was read from memory. These 20 bits communicate a process address-space identifier (PASID).
  • +
  • Command[30:20] are zero.
  • +
  • Command[511:31] get bits 511:31 of the source operand that was read from memory. Bit 31 communicates a privilege identification (0 = user; 1 = supervisor).
+

The ENQCMDS instruction then uses an enqueue store (defined below) to write this command data to the destination operand. The address of the destination operand is specified in a general-purpose register as an offset into the ES segment (the segment cannot be overridden).1 The destination linear address must be 64-byte aligned. The operation of an enqueue store disregards the memory type of the destination memory address.

+
+

1. In 64-bit mode, the width of the register operand is 64 bits (32 bits with a 67H prefix). Outside 64-bit mode when CS.D = 1, the width is 32 bits (16 bits with a 67H prefix). Outside 64-bit mode when CS.D = 0, the width is 16 bits (32 bits with a 67H prefix).

+

An enqueue store is not ordered relative to older stores to WB or WC memory (including non-temporal stores) or to executions of the CLFLUSHOPT or CLWB (when applied to addresses other than that of the enqueue store). Software can enforce such ordering by executing a fencing instruction such as SFENCE or MFENCE before the enqueue store.

+

An enqueue store does not write the data into the cache hierarchy, nor does it fetch any data into the cache hierarchy. An enqueue store’s command data is never combined with that of any other store to the same address.

+

Unlike other stores, an enqueue store returns a status, which the ENQCMDS instruction loads into the ZF flag in the RFLAGS register:

+
    +
  • ZF = 0 (success) reports that the 64-byte command data was written atomically to a device’s enqueue register and has been accepted by the device. (It does not guarantee that the device has acted on the command; it may have queued it for later execution.)
  • +
  • ZF = 1 (retry) reports that the command data was not accepted. This status is returned if the destination address is an enqueue register but the command was not accepted due to capacity or other temporal reasons.
+

This status is also returned if the destination address was not an enqueue register (including the case of a memory address); in these cases, the store is dropped and is written neither to MMIO nor to memory.

+

The ENQCMDS instruction may be executed only if CPL = 0. Availability of the ENQCMDS instruction is indicated by the presence of the CPUID feature flag ENQCMD (CPUID.(EAX=07H, ECX=0H):ECX[bit 29]).

+

Operation + ¶ +

+
DEST := SRC;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
ENQCMDS int_enqcmds(void *dst, const void *src)
+
+
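A corresponding supervisor-side sketch; since ENQCMDS requires CPL = 0, code like this would live in a driver. The portal pointer, PASID value, and command layout are placeholders; the low-dword format (PASID in bits 19:0, bits 30:20 zero, privilege in bit 31) is taken from the description above:

#include <immintrin.h>
#include <stdint.h>
#include <string.h>

/* Sketch: CPL0 submission on behalf of a client with a known PASID.
   Returns 0 if accepted, non-zero for retry (ZF). */
static int submit_descriptor_supervisor(volatile void *portal_mmio,
                                        uint32_t pasid,          /* 20-bit PASID            */
                                        const void *device_cmd)  /* 60-byte device command  */
{
    _Alignas(64) uint8_t desc[64];
    uint32_t low = (pasid & 0xFFFFFu) | (1u << 31);   /* bit 31 = 1: supervisor privilege */
    memcpy(desc, &low, 4);
    memcpy(desc + 4, device_cmd, 60);
    return _enqcmds((void *)portal_mmio, desc);
}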

Flags Affected + ¶ +

+

The ZF flag is set if the enqueue-store completion returns the retry status; otherwise it is cleared. All other flags are cleared.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#GP(0)For an illegal memory operand effective address in the CS, DS, ES, FS or GS segments.
If destination linear address is not aligned to a 64-byte boundary.
If the current privilege level is not 0.
If bits 30:20 of the source operand are not all zero.
#SS(0)For an illegal address in the SS segment.
#PF(fault-code)For a page fault.
#UDIf CPUID.07H.0H:ECX.ENQCMD[bit 29] = 0.
If the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GPIf any part of the operand lies outside the effective address space from 0 to FFFFH.
If destination linear address is not aligned to a 64-byte boundary.
If bits 30:20 of the source operand are not all zero.
#UDIf CPUID.07H.0H:ECX.ENQCMD[bit 29] = 0.
If the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#GP(0)The ENQCMDS instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in non-canonical form.
#GP(0)If the memory address is in non-canonical form.
If destination linear address is not aligned to a 64-byte boundary.
If the current privilege level is not 0.
If bits 30:20 of the source operand are not all zero.
#PF(fault-code)For a page fault.
#UDIf CPUID.07H.0H:ECX.ENQCMD[bit 29] = 0.
If the LOCK prefix is used.
diff --git a/x86/enter.html b/x86/enter.html new file mode 100644 index 0000000..ce90510 --- /dev/null +++ b/x86/enter.html @@ -0,0 +1,201 @@ + +ENTER + — Make Stack Frame for Procedure Parameters

ENTER + — Make Stack Frame for Procedure Parameters

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
C8 iw 00ENTER imm16, 0IIValidValidCreate a stack frame for a procedure.
C8 iw 01ENTER imm16,1IIValidValidCreate a stack frame with a nested pointer for a procedure.
C8 iw ibENTER imm16, imm8IIValidValidCreate a stack frame with nested pointers for a procedure.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
IIiwimm8N/AN/A
+

Description + ¶ +

+

Creates a stack frame (comprising space for dynamic storage and 1-32 frame pointers) for a procedure. The first operand (imm16) specifies the size of the dynamic storage in the stack frame (that is, the number of bytes dynamically allocated on the stack for the procedure). The second operand (imm8) gives the lexical nesting level (0 to 31) of the procedure. The nesting level (imm8 mod 32) and the OperandSize attribute determine the size in bytes of the storage space for frame pointers.

+

The nesting level determines the number of frame pointers that are copied into the “display area” of the new stack frame from the preceding frame. The default size of the frame pointer is the StackAddrSize attribute, but can be overridden using the 66H prefix. Thus, the OperandSize attribute determines the size of each frame pointer that will be copied into the stack frame and the data being transferred from SP/ESP/RSP register into the BP/EBP/RBP register.

+

The ENTER and companion LEAVE instructions are provided to support block structured languages. The ENTER instruction (when used) is typically the first instruction in a procedure and is used to set up a new stack frame for a procedure. The LEAVE instruction is then used at the end of the procedure (just before the RET instruction) to release the stack frame.

+

If the nesting level is 0, the processor pushes the frame pointer from the BP/EBP/RBP register onto the stack, copies the current stack pointer from the SP/ESP/RSP register into the BP/EBP/RBP register, and loads the SP/ESP/RSP register with the current stack-pointer value minus the value in the size operand. For nesting levels of 1 or greater, the processor pushes additional frame pointers on the stack before adjusting the stack pointer. These additional frame pointers provide the called procedure with access points to other nested frames on the stack. See “Procedure Calls for Block-Structured Languages” in Chapter 6 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for more information about the actions of the ENTER instruction.

+
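For the common nesting level of 0 this reduces to the familiar three-step prologue (push rBP; copy rSP into rBP; subtract the allocation size from rSP), which LEAVE reverses. A simplified, informal C model of just that level-0 path, using a simulated stack rather than real memory:

#include <stdint.h>
#include <string.h>

/* Informal model of ENTER imm16, 0 with OperandSize = StackSize = 64.
   sim_stack stands in for real memory; this is not the architectural definition
   (see the Operation section below for the full behavior). */
static uint8_t sim_stack[4096];

static void enter_level0(uint64_t *rsp, uint64_t *rbp, uint16_t alloc_size)
{
    *rsp -= 8;                          /* Push(RBP): RSP decrements by 8 */
    memcpy(&sim_stack[*rsp], rbp, 8);   /* store the old frame pointer    */
    *rbp = *rsp;                        /* RBP := FrameTemp               */
    *rsp -= alloc_size;                 /* RSP := RSP - AllocSize         */
}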

The ENTER instruction causes a page fault whenever a write using the final value of the stack pointer (within the current stack segment) would do so.

+

In 64-bit mode, default operation size is 64 bits; 32-bit operation size cannot be encoded. Use of 66H prefix changes frame pointer operand size to 16 bits.

+

When the 66H prefix is used, causing the OperandSize attribute to be less than the StackAddrSize, software is responsible for the following:

+
    +
  • The companion LEAVE instruction must also use the 66H prefix,
  • +
  • The value in the RBP/EBP register prior to executing “66H ENTER” must be within the same 16KByte region of the current stack pointer (RSP/ESP), such that the value of RBP/EBP after “66H ENTER” remains a valid address in the stack. This ensures “66H LEAVE” can restore 16-bits of data from the stack.
+

Operation + ¶ +

+
AllocSize := imm16;
+NestingLevel := imm8 MOD 32;
+IF (OperandSize = 64)
+    THEN
+        Push(RBP); (* RSP decrements by 8 *)
+        FrameTemp := RSP;
+    ELSE IF OperandSize = 32
+        THEN
+            Push(EBP); (* (E)SP decrements by 4 *)
+            FrameTemp := ESP; FI;
+    ELSE (* OperandSize = 16 *)
+            Push(BP); (* RSP or (E)SP decrements by 2 *)
+            FrameTemp := SP;
+FI;
+IF NestingLevel = 0
+    THEN GOTO CONTINUE;
+FI;
+IF (NestingLevel > 1)
+    THEN FOR i := 1 to (NestingLevel - 1)
+        DO
+            IF (OperandSize = 64)
+                THEN
+                    RBP := RBP - 8;
+                    Push([RBP]); (* Quadword push *)
+                ELSE IF OperandSize = 32
+                    THEN
+                        IF StackSize = 32
+                            EBP := EBP - 4;
+                            Push([EBP]); (* Doubleword push *)
+                        ELSE (* StackSize = 16 *)
+                            BP := BP - 4;
+                            Push([BP]); (* Doubleword push *)
+                        FI;
+                    FI;
+                ELSE (* OperandSize = 16 *)
+                    IF StackSize = 64
+                        THEN
+                            RBP := RBP - 2;
+                            Push([RBP]); (* Word push *)
+                    ELSE IF StackSize = 32
+                        THEN
+                            EBP := EBP - 2;
+                            Push([EBP]); (* Word push *)
+                        ELSE (* StackSize = 16 *)
+                            BP := BP - 2;
+                            Push([BP]); (* Word push *)
+                    FI;
+                FI;
+    OD;
+FI;
+IF (OperandSize = 64) (* nestinglevel 1 *)
+    THEN
+        Push(FrameTemp); (* Quadword push and RSP decrements by 8 *)
+    ELSE IF OperandSize = 32
+        THEN
+            Push(FrameTemp); FI; (* Doubleword push and (E)SP decrements by 4 *)
+    ELSE (* OperandSize = 16 *)
+            Push(FrameTemp); (* Word push and RSP|ESP|SP decrements by 2 *)
+FI;
+CONTINUE:
+IF 64-Bit Mode (StackSize = 64)
+    THEN
+            RBP := FrameTemp;
+            RSP := RSP − AllocSize;
+    ELSE IF OperandSize = 32
+        THEN
+            EBP := FrameTemp;
+            ESP := ESP − AllocSize; FI;
+    ELSE (* OperandSize = 16 *)
+            BP := FrameTemp[15:0]; (* Bits 16 and above of applicable RBP/EBP are unmodified *)
+            SP := SP − AllocSize;
+FI;
+END;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#SS(0)If the new value of the SP or ESP register is outside the stack segment limit.
#PF(fault-code)If a page fault occurs or if a write using the final value of the stack pointer (within the current stack segment) would cause a page fault.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + +
#SSIf the new value of the SP or ESP register is outside the stack segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + +
#SS(0)If the new value of the SP or ESP register is outside the stack segment limit.
#PF(fault-code)If a page fault occurs or if a write using the final value of the stack pointer (within the current stack segment) would cause a page fault.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + +
#SS(0)If the stack address is in a non-canonical form.
#PF(fault-code)If a page fault occurs or if a write using the final value of the stack pointer (within the current stack segment) would cause a page fault.
#UDIf the LOCK prefix is used.
diff --git a/x86/enteraccs.html b/x86/enteraccs.html new file mode 100644 index 0000000..2ae303e --- /dev/null +++ b/x86/enteraccs.html @@ -0,0 +1,387 @@ + +GETSEC[ENTERACCS] + — Execute Authenticated Chipset Code

GETSEC[ENTERACCS] + — Execute Authenticated Chipset Code

+ + + + + + + + + +
OpcodeInstructionDescription
NP 0F 37 (EAX = 2)GETSEC[ENTERACCS]Enter authenticated code execution mode. EBX holds the authenticated code module physical base address. ECX holds the authenticated code module size (bytes).
+

Description + ¶ +

+

The GETSEC[ENTERACCS] function loads, authenticates, and executes an authenticated code module using an Intel® TXT platform chipset's public key. The ENTERACCS leaf of GETSEC is selected with EAX set to 2 at entry.

+

There are certain restrictions enforced by the processor for the execution of the GETSEC[ENTERACCS] instruction:

+
    +
  • Execution is not allowed unless the processor is in protected mode or IA-32e mode with CPL = 0 and EFLAGS.VM = 0.
  • +
  • Processor cache must be available and not disabled, that is, CR0.CD and CR0.NW bits must be 0.
  • +
  • For processor packages containing more than one logical processor, CR0.CD is checked to ensure consistency between enabled logical processors.
  • +
  • For enforcing consistency of operation with numeric exception reporting using Interrupt 16, CR0.NE must be set.
  • +
  • An Intel TXT-capable chipset must be present as communicated to the processor by sampling of the power-on configuration capability field after reset.
  • +
  • The processor can not already be in authenticated code execution mode as launched by a previous GETSEC[ENTERACCS] or GETSEC[SENTER] instruction without a subsequent exiting using GETSEC[EXITAC]).
  • +
  • To avoid potential operability conflicts between modes, the processor is not allowed to execute this instruction if it currently is in SMM or VMX operation.
  • +
  • To ensure consistent handling of SIPI messages, the processor executing the GETSEC[ENTERACCS] instruction must also be designated the BSP (boot-strap processor) as defined by IA32_APIC_BASE.BSP (Bit 8).
+

Failure to conform to the above conditions results in the processor signaling a general protection exception.

+

Prior to execution of the ENTERACCS leaf, other logical processors, i.e., RLPs, in the platform must be:

+
    +
  • Idle in a wait-for-SIPI state (as initiated by an INIT assertion or through reset for non-BSP designated processors), or
  • +
  • In the SENTER sleep state as initiated by a GETSEC[SENTER] from the initiating logical processor (ILP).
+

If other logical processor(s) in the same package are not idle in one of these states, execution of ENTERACCS signals a general protection exception. The same requirement and action applies if the other logical processor(s) of the same package do not have CR0.CD = 0.

+

A successful execution of ENTERACCS results in the ILP entering an authenticated code execution mode. Prior to reaching this point, the processor performs several checks. These include:

+
    +
  • Establish and check the location and size of the specified authenticated code module to be executed by the processor.
  • +
  • Inhibit the ILP’s response to the external events: INIT, A20M, NMI, and SMI.
  • +
  • Broadcast a message to enable protection of memory and I/O from other processor agents.
  • +
  • Load the designated code module into an authenticated code execution area.
  • +
  • Isolate the contents of the authenticated code execution area from further state modification by external agents.
  • +
  • Authenticate the authenticated code module.
  • +
  • Initialize the initiating logical processor state based on information contained in the authenticated code module header.
  • +
  • Unlock the Intel® TXT-capable chipset private configuration space and TPM locality 3 space.
  • +
  • Begin execution in the authenticated code module at the defined entry point.
+

The GETSEC[ENTERACCS] function requires two additional input parameters in the general purpose registers EBX and ECX. EBX holds the authenticated code (AC) module physical base address (the AC module must reside below 4 GBytes in physical address space) and ECX holds the AC module size (in bytes). The physical base address and size are used to retrieve the code module from system memory and load it into the internal authenticated code execution area. The base physical address is checked to verify it is on a modulo-4096 byte boundary. The size is verified to be a multiple of 64, that it does not exceed the internal authenticated code execution area capacity (as reported by GETSEC[CAPABILITIES]), and that the top address of the AC module does not exceed 32 bits. An error condition results in an abort of the authenticated code execution launch and the signaling of a general protection exception.

+
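Restated informally, the base/size constraints above amount to a handful of arithmetic checks. The sketch below is illustrative only; min_size and acram_capacity stand for the platform limits reported through GETSEC[PARAMETERS] and GETSEC[CAPABILITIES]:

#include <stdbool.h>
#include <stdint.h>

/* Sketch of the GETSEC[ENTERACCS] checks on the AC module physical base
   address (EBX) and byte size (ECX). */
static bool acm_params_ok(uint64_t ac_base, uint64_t ac_size,
                          uint64_t min_size, uint64_t acram_capacity)
{
    if (ac_base % 4096 != 0)               return false;  /* modulo-4096 base           */
    if (ac_size % 64 != 0)                 return false;  /* size multiple of 64        */
    if (ac_size < min_size)                return false;
    if (ac_size > acram_capacity)          return false;  /* fits internal ACRAM        */
    if (ac_base + ac_size > 0xFFFFFFFFULL) return false;  /* top address within 32 bits */
    return true;
}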

As an integrity check for proper processor hardware operation, execution of GETSEC[ENTERACCS] will also check the contents of all the machine check status registers (as reported by the MSRs IA32_MCi_STATUS) for any valid uncorrectable error condition. In addition, the global machine check status register IA32_MCG_STATUS MCIP bit must be cleared and the IERR processor package pin (or its equivalent) must not be asserted, indicating that no machine check exception processing is currently in progress. These checks are performed prior to initiating the load of the authenticated code module. Any outstanding valid uncorrectable machine check error condition present in these status registers at this point will result in the processor signaling a general protection violation.

+

The ILP masks the response to the assertion of the external signals INIT#, A20M, NMI#, and SMI#. This masking remains active until optionally unmasked by GETSEC[EXITAC] (this defined unmasking behavior assumes GETSEC[ENTERACCS] was not executed by a prior GETSEC[SENTER]). The purpose of this masking control is to prevent exposure to existing external event handlers that may not be under the control of the authenticated code module.

+

The ILP sets an internal flag to indicate it has entered authenticated code execution mode. The state of the A20M pin is likewise masked and forced internally to a de-asserted state so that any external assertion is not recognized during authenticated code execution mode.

+

To prevent other (logical) processors from interfering with the ILP operating in authenticated code execution mode, memory (excluding implicit write-back transactions) access and I/O originating from other processor agents are blocked. This protection starts when the ILP enters into authenticated code execution mode. Only memory and I/O transactions initiated from the ILP are allowed to proceed. Exiting authenticated code execution mode is done by executing GETSEC[EXITAC]. The protection of memory and I/O activities remains in effect until the ILP executes GETSEC[EXITAC].

+

Prior to launching the authenticated execution module using GETSEC[ENTERACCS] or GETSEC[SENTER], the processor’s MTRRs (Memory Type Range Registers) must first be initialized to map out the authenticated RAM addresses as WB (writeback). Failure to do so may affect the ability for the processor to maintain isolation of the loaded authenticated code module. If the processor detected this requirement is not met, it will signal an Intel® TXT reset condition with an error code during the loading of the authenticated code module.

+

While physical addresses within the load module must be mapped as WB, the memory type for locations outside of the module boundaries must be mapped to one of the supported memory types as returned by GETSEC[PARAMETERS] (or UC as default).

+

To conform to the minimum granularity of MTRR MSRs for specifying the memory type, authenticated code RAM (ACRAM) is allocated to the processor in 4096 byte granular blocks. If an AC module size as specified in ECX is not a multiple of 4096 then the processor will allocate up to the next 4096 byte boundary for mapping as ACRAM with indeterminate data. This pad area will not be visible to the authenticated code module as external memory nor can it depend on the value of the data used to fill the pad area.

+

At the successful completion of GETSEC[ENTERACCS], the architectural state of the processor is partially initialized from contents held in the header of the authenticated code module. The processor GDTR, CS, and DS selectors are initialized from fields within the authenticated code module. Since the authenticated code module must be relocatable, all address references must be relative to the authenticated code module base address in EBX. The processor GDTR base value is initialized to the AC module header field GDTBasePtr + module base address held in EBX and the GDTR limit is set to the value in the GDTLimit field. The CS selector is initialized to the AC module header SegSel field, while the DS selector is initialized to CS + 8. The segment descriptor fields are implicitly initialized to BASE=0, LIMIT=FFFFFh, G=1, D=1, P=1, S=1, read/write access for DS, and execute/read access for CS. The processor begins the authenticated code module execution with the EIP set to the AC module header EntryPoint field + module base address (EBX). The AC module based fields used for initializing the processor state are checked for consistency and any failure results in a shutdown condition.

+

A summary of the register state initialization after successful completion of GETSEC[ENTERACCS] is given for the processor in Table 7-4. The paging is disabled upon entry into authenticated code execution mode. The authenticated code module is loaded and initially executed using physical addresses. It is up to the system software after execution of GETSEC[ENTERACCS] to establish a new (or restore its previous) paging environment with an appropriate mapping to meet new protection requirements. EBP is initialized to the authenticated code module base physical address for initial execution in the authenticated environment. As a result, the authenticated code can reference EBP for relative address based references, given that the authenticated code module must be position independent.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Register StateInitialization StatusComment
CR0PG←0, AM←0, WP←0: Others unchangedPaging, Alignment Check, Write-protection are disabled.
CR4MCE←0, CET←0, PCIDE←0: Others unchangedMachine Check Exceptions, Control-flow Enforcement Technology, and Process-context Identifiers disabled.
EFLAGS00000002H
IA32_EFER0HIA-32e mode disabled.
EIPAC.base + EntryPointAC.base is in EBX as input to GETSEC[ENTERACCS].
[E|R]BXPre-ENTERACCS state: Next [E|R]IP prior to GETSEC[ENTERACCS]Carry forward 64-bit processor state across GETSEC[ENTERACCS].
ECXPre-ENTERACCS state: [31:16]=GDTR.limit; [15:0]=CS.selCarry forward processor state across GETSEC[ENTERACCS].
[E|R]DXPre-ENTERACCS state: GDTR baseCarry forward 64-bit processor state across GETSEC[ENTERACCS].
EBPAC.base
CSSel=[SegSel], base=0, limit=FFFFFh, G=1, D=1, AR=9BH
DSSel=[SegSel] +8, base=0, limit=FFFFFh, G=1, D=1, AR=93H
GDTRBase= AC.base (EBX) + [GDTBasePtr], Limit=[GDTLimit]
DR700000400H
IA32_DEBUGCTL0H
IA32_MISC_ENABLESee Table 7-5 for example.The number of initialized fields may change due to processor implementation.
Performance counters and counter control registers0H
+
Table 7-4. Register State Initialization After GETSEC[ENTERACCS]
+

The segmentation related processor state that has not been initialized by GETSEC[ENTERACCS] requires appropriate initialization before use. Since a new GDT context has been established, the previous state of the segment selector values held in ES, SS, FS, GS, TR, and LDTR might not be valid.

+

The MSR IA32_EFER is also unconditionally cleared as part of the processor state initialized by ENTERACCS. Since paging is disabled upon entering authenticated code execution mode, a new paging environment will have to be reestablished in order to establish IA-32e mode while operating in authenticated code execution mode.

+

Debug exception and trap related signaling is also disabled as part of GETSEC[ENTERACCS]. This is achieved by resetting DR7, TF in EFLAGs, and the MSR IA32_DEBUGCTL. These debug functions are free to be re-enabled once supporting exception handler(s), descriptor tables, and debug registers have been properly initialized following entry into authenticated code execution mode. Also, any pending single-step trap condition will have been cleared upon entry into this mode.

+

Performance related counters and counter control registers are cleared as part of execution of ENTERACCS. This implies any active performance counters at any time of ENTERACCS execution will be disabled. To reactivate the processor performance counters, this state must be re-initialized and re-enabled.

+

The IA32_MISC_ENABLE MSR is initialized upon entry into authenticated execution mode. Certain bits of this MSR are preserved because preserving these bits may be important to maintain previously established platform settings (See the footnote for Table 7-5.). The remaining bits are cleared for the purpose of establishing a more consistent environment for the execution of authenticated code modules. One of the impacts of initializing this MSR is any previous condition established by the MONITOR instruction will be cleared.

+

To support the possible return to the processor architectural state prior to execution of GETSEC[ENTERACCS], certain critical processor state is captured and stored in the general- purpose registers at instruction completion. [E|R]BX holds effective address ([E|R]IP) of the instruction that would execute next after GETSEC[ENTERACCS], ECX[15:0] holds the CS selector value, ECX[31:16] holds the GDTR limit field, and [E|R]DX holds the GDTR base field. The subsequent authenticated code can preserve the contents of these registers so that this state can be manually restored if needed, prior to exiting authenticated code execution mode with GETSEC[EXITAC]. For the processor state after exiting authenticated code execution mode, see the description of GETSEC[SEXIT].

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldBit positionDescription
Fast strings enable0Clear to 0.
FOPCODE compatibility mode enable2Clear to 0.
Thermal monitor enable3Set to 1 if other thermal monitor capability is not enabled.2
Split-lock disable4Clear to 0.
Bus lock on cache line splits disable8Clear to 0.
Hardware prefetch disable9Clear to 0.
GV1/2 legacy enable15Clear to 0.
MONITOR/MWAIT s/m enable18Clear to 0.
Adjacent sector prefetch disable19Clear to 0.
+
Table 7-5. IA32_MISC_ENABLE MSR Initialization1 by ENTERACCS and SENTER
+
+

1. The number of IA32_MISC_ENABLE fields that are initialized may vary due to processor implementations.

+

2. ENTERACCS (and SENTER) initialize the state of processor thermal throttling such that at least a minimum level is enabled. If thermal throttling is already enabled when executing one of these GETSEC leaves, then no change in the thermal throttling control settings will occur. If thermal throttling is disabled, then it will be enabled via setting of the thermal throttle control bit 3 as a result of executing these GETSEC leaves.

+

The IDTR will also require reloading with a new IDT context after entering authenticated code execution mode, before any exceptions or the external interrupts INTR and NMI can be handled. Since external interrupts are reenabled at the completion of authenticated code execution mode (as terminated with EXITAC), it is recommended

+

that a new IDT context be established before this point. Until such a new IDT context is established, the programmer must take care in not executing an INT n instruction or any other operation that would result in an exception or trap signaling.

+

Prior to completion of the GETSEC[ENTERACCS] instruction and after successful authentication of the AC module, the private configuration space of the Intel TXT chipset is unlocked. The authenticated code module alone can gain access to this normally restricted chipset state for the purpose of securing the platform.

+

Once the authenticated code module is launched at the completion of GETSEC[ENTERACCS], it is free to enable interrupts by setting EFLAGS.IF and enable NMI by execution of IRET. This presumes that it has re-established interrupt handling support through initialization of the IDT, GDT, and corresponding interrupt handling code.

+

Operation in a Uni-Processor Platform + ¶ +

+

(* The state of the internal flag ACMODEFLAG persists across instruction boundary *)

+

IF (CR4.SMXE=0)

+

THEN #UD;

+

ELSIF (in VMX non-root operation)

+

THEN VM Exit (reason=”GETSEC instruction”);

+

ELSIF (GETSEC leaf unsupported)

+

THEN #UD;

+

ELSIF ((in VMX operation) or

+

(CR0.PE=0) or (CR0.CD=1) or (CR0.NW=1) or (CR0.NE=0) or

+

(CPL>0) or (EFLAGS.VM=1) or

+

(IA32_APIC_BASE.BSP=0) or

+

(TXT chipset not present) or

+

(ACMODEFLAG=1) or (IN_SMM=1))

+

THEN #GP(0);

+

IF (GETSEC[PARAMETERS].Parameter_Type = 5, MCA_Handling (bit 6) = 0)

+

FOR I = 0 to IA32_MCG_CAP.COUNT-1 DO

+

IF (IA32_MC[I]_STATUS = uncorrectable error)

+

THEN #GP(0);

+

OD;

+

FI;

+

IF (IA32_MCG_STATUS.MCIP=1) or (IERR pin is asserted)

+

THEN #GP(0);

+

ACBASE := EBX;

+

ACSIZE := ECX;

+

IF (((ACBASE MOD 4096) ≠ 0) or ((ACSIZE MOD 64) ≠ 0) or (ACSIZE < minimum module size) or (ACSIZE > authenticated RAM capacity) or ((ACBASE+ACSIZE) > (2^32 - 1)))

+

THEN #GP(0);

+

IF (secondary thread(s) CR0.CD = 1) or ((secondary thread(s) NOT(wait-for-SIPI)) and

+

(secondary thread(s) not in SENTER sleep state)

+

THEN #GP(0);

+

Mask SMI, INIT, A20M, and NMI external pin events;

+

IA32_MISC_ENABLE := (IA32_MISC_ENABLE & MASK_CONST*)

+

(* The hexadecimal value of MASK_CONST may vary due to processor implementations *)

+

A20M := 0;

+

IA32_DEBUGCTL := 0;

+

Invalidate processor TLB(s);

+

Drain Outgoing Transactions;

+

ACMODEFLAG := 1;

+

SignalTXTMessage(ProcessorHold);

+

Load the internal ACRAM based on the AC module size;

+

(* Ensure that all ACRAM loads hit Write Back memory space *)

+

IF (ACRAM memory type ≠ WB)

+

THEN TXT-SHUTDOWN(#BadACMMType);

+

IF (AC module header version is not supported) OR (ACRAM[ModuleType] ≠ 2)

+

THEN TXT-SHUTDOWN(#UnsupportedACM);

+

(* Authenticate the AC Module and shutdown with an error if it fails *)

+

KEY := GETKEY(ACRAM, ACBASE);

+

KEYHASH := HASH(KEY);

+

CSKEYHASH := READ(TXT.PUBLIC.KEY);

+

IF (KEYHASH ≠ CSKEYHASH)

+

THEN TXT-SHUTDOWN(#AuthenticateFail);

+

SIGNATURE := DECRYPT(ACRAM, ACBASE, KEY);

+

(* The value of SIGNATURE_LEN_CONST is implementation-specific*)

+

FOR I=0 to SIGNATURE_LEN_CONST - 1 DO

+

ACRAM[SCRATCH.I] := SIGNATURE[I];

+

COMPUTEDSIGNATURE := HASH(ACRAM, ACBASE, ACSIZE);

+

FOR I=0 to SIGNATURE_LEN_CONST - 1 DO

+

ACRAM[SCRATCH.SIGNATURE_LEN_CONST+I] := COMPUTEDSIGNATURE[I];

+

IF (SIGNATURE ≠ COMPUTEDSIGNATURE)

+

THEN TXT-SHUTDOWN(#AuthenticateFail);

+

ACMCONTROL := ACRAM[CodeControl];

+

IF ((ACMCONTROL.0 = 0) and (ACMCONTROL.1 = 1) and (snoop hit to modified line detected on ACRAM load))

+

THEN TXT-SHUTDOWN(#UnexpectedHITM);

+

IF (ACMCONTROL reserved bits are set)

+

THEN TXT-SHUTDOWN(#BadACMFormat);

+

IF ((ACRAM[GDTBasePtr] < (ACRAM[HeaderLen] * 4 + Scratch_size)) OR

+

((ACRAM[GDTBasePtr] + ACRAM[GDTLimit]) >= ACSIZE))

+

THEN TXT-SHUTDOWN(#BadACMFormat);

+

IF ((ACMCONTROL.0 = 1) and (ACMCONTROL.1 = 1) and (snoop hit to modified line detected on ACRAM load))

+

THEN ACEntryPoint := ACBASE+ACRAM[ErrorEntryPoint];

+

ELSE

+

ACEntryPoint := ACBASE+ACRAM[EntryPoint];

+

IF ((ACEntryPoint >= ACSIZE) OR (ACEntryPoint < (ACRAM[HeaderLen] * 4 + Scratch_size))) THEN TXT-SHUTDOWN(#BadACMFormat);

+

IF (ACRAM[GDTLimit] & FFFF0000h)

+

THEN TXT-SHUTDOWN(#BadACMFormat);

+

IF ((ACRAM[SegSel] > (ACRAM[GDTLimit] - 15)) OR (ACRAM[SegSel] < 8))

+

THEN TXT-SHUTDOWN(#BadACMFormat);

+

IF ((ACRAM[SegSel].TI=1) OR (ACRAM[SegSel].RPL≠0))

+

THEN TXT-SHUTDOWN(#BadACMFormat);

+

CR0.[PG.AM.WP] := 0;

+

CR4.MCE := 0;

+

EFLAGS := 00000002h;

+

IA32_EFER := 0h;

+

[E|R]BX := [E|R]IP of the instruction after GETSEC[ENTERACCS];

+

ECX := Pre-GETSEC[ENTERACCS] GDT.limit:CS.sel;

+

[E|R]DX := Pre-GETSEC[ENTERACCS] GDT.base;

+

EBP := ACBASE;

+

GDTR.BASE := ACBASE+ACRAM[GDTBasePtr];

+

GDTR.LIMIT := ACRAM[GDTLimit];

+

CS.SEL := ACRAM[SegSel];

+

CS.BASE := 0;

+

CS.LIMIT := FFFFFh;

+

CS.G := 1;

+

CS.D := 1;

+

CS.AR := 9Bh;

+

DS.SEL := ACRAM[SegSel]+8;

+

DS.BASE := 0;

+

DS.LIMIT := FFFFFh;

+

DS.G := 1;

+

DS.D := 1;

+

DS.AR := 93h;

+

DR7 := 00000400h;

+

IA32_DEBUGCTL := 0;

+

SignalTXTMsg(OpenPrivate);

+

SignalTXTMsg(OpenLocality3);

+

EIP := ACEntryPoint;

+

END;

+

Flags Affected + ¶ +

+

All flags are cleared.

+

Use of Prefixes + ¶ +

+

LOCK Causes #UD.

+

REP* Cause #UD (includes REPNE/REPNZ and REP/REPE/REPZ).

+

Operand size Causes #UD.

+

NP 66/F2/F3 prefixes are not allowed.

+

Segment overrides Ignored.

+

Address size Ignored.

+

REX Ignored.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#UDIf CR4.SMXE = 0.
If GETSEC[ENTERACCS] is not reported as supported by GETSEC[CAPABILITIES].
#GP(0)If CR0.CD = 1 or CR0.NW = 1 or CR0.NE = 0 or CR0.PE = 0 or CPL > 0 or EFLAGS.VM = 1.
If an Intel® TXT-capable chipset is not present.
If in VMX root operation.
If the initiating processor is not designated as the bootstrap processor via the MSR bit IA32_APIC_BASE.BSP.
If the processor is already in authenticated code execution mode.
If the processor is in SMM.
If a valid uncorrectable machine check error is logged in IA32_MC[I]_STATUS.
If the authenticated code base is not on a 4096 byte boundary.
If the authenticated code size > processor internal authenticated code area capacity.
If the authenticated code size is not modulo 64.
If other enabled logical processor(s) of the same package CR0.CD = 1.
If other enabled logical processor(s) of the same package are not in the wait-for-SIPI or SENTER sleep state.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + +
#UDIf CR4.SMXE = 0.
If GETSEC[ENTERACCS] is not reported as supported by GETSEC[CAPABILITIES].
#GP(0)GETSEC[ENTERACCS] is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + +
#UDIf CR4.SMXE = 0.
If GETSEC[ENTERACCS] is not reported as supported by GETSEC[CAPABILITIES].
#GP(0)GETSEC[ENTERACCS] is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

All protected mode exceptions apply.

+ + + +
#GPIf AC code module does not reside in physical address below 2^32 -1.
+

64-Bit Mode Exceptions + ¶ +

+

All protected mode exceptions apply.

+ + + +
#GPIf AC code module does not reside in physical address below 2^32 -1.
+

VM-exit Condition + ¶ +

+

Reason (GETSEC) If in VMX non-root operation.

diff --git a/x86/epa.html b/x86/epa.html new file mode 100644 index 0000000..955a462 --- /dev/null +++ b/x86/epa.html @@ -0,0 +1,200 @@ + +EPA + — Add Version Array

EPA + — Add Version Array

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 0AH ENCLS[EPA]IRV/VSGX1This leaf function adds a Version Array to the EPC.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnEAXRBXRCX
IREPA (In)PT_VA (In, Constant)Effective address of the EPC page (In)
+

Description + ¶ +

+

This leaf function creates an empty version array in the EPC page whose logical address is given by DS:RCX, and sets up EPCM attributes for that page. At the time of execution of this instruction, the register RBX must be set to PT_VA.

+

The table below provides additional information on the memory parameter of EPA leaf function.

+

EPA Memory Parameter Semantics + ¶ +

+ + + + +
EPCPAGE
Write access permitted by Enclave
+

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + +
Leaf | Parameter | Base Concurrency Restrictions: Access | On Conflict
EPA | VA [DS:RCX] | Exclusive | #GP
+
Table 38-37. Base Concurrency Restrictions of EPA
+
+ + + + + + + + + + + + + + + + + + + + + + + + +
Leaf | Parameter | Additional Concurrency Restrictions: vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODT | vs. EADD, EEXTEND, EINIT | vs. ETRACK, ETRACKC
EPA | VA [DS:RCX] | Concurrent | Concurrent | Concurrent
+
Table 38-38. Additional Concurrency Restrictions of EPA
+

Operation + ¶ +

+
IF (RBX ≠ PT_VA or DS:RCX is not 4KByte Aligned)
+    THEN #GP(0); FI;
+IF (DS:RCX does not resolve within an EPC)
+    THEN #PF(DS:RCX); FI;
+(* Check concurrency with other Intel SGX instructions *)
+IF (Other Intel SGX instructions accessing the page)
+    THEN
+        IF (<<VMX non-root operation>> AND <<ENABLE_EPC_VIRTUALIZATION_EXTENSIONS>>)
+            THEN
+                VMCS.Exit_reason := SGX_CONFLICT;
+                VMCS.Exit_qualification.code := EPC_PAGE_CONFLICT_EXCEPTION;
+                VMCS.Exit_qualification.error := 0;
+                VMCS.Guest-physical_address := << translation of DS:RCX produced by paging >>;
+                VMCS.Guest-linear_address := DS:RCX;
+            Deliver VMEXIT;
+            ELSE
+                #GP(0);
+        FI;
+FI;
+(* Check EPC page must be empty *)
+IF (EPCM(DS:RCX). VALID ≠ 0)
+    THEN #PF(DS:RCX); FI;
+(* Clears EPC page *)
+DS:RCX[32767:0] := 0;
+EPCM(DS:RCX).PT := PT_VA;
+EPCM(DS:RCX).ENCLAVEADDRESS := 0;
+EPCM(DS:RCX).BLOCKED := 0;
+EPCM(DS:RCX).PENDING := 0;
+EPCM(DS:RCX).MODIFIED := 0;
+EPCM(DS:RCX).PR := 0;
+EPCM(DS:RCX).RWX := 0;
+EPCM(DS:RCX).VALID := 1;
+
+

Flags Affected + ¶ +

+

None

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the DS segment limit.
If a memory operand is not properly aligned.
If another Intel SGX instruction is accessing the EPC page.
If RBX is not set to PT_VA.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If a memory operand is not an EPC page.
If the EPC page is valid.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#GP(0)If a memory operand is non-canonical form.
If a memory operand is not properly aligned.
If another Intel SGX instruction is accessing the EPC page.
If RBX is not set to PT_VA.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If a memory operand is not an EPC page.
If the EPC page is valid.
diff --git a/x86/erdinfo.html b/x86/erdinfo.html new file mode 100644 index 0000000..7ad2735 --- /dev/null +++ b/x86/erdinfo.html @@ -0,0 +1,269 @@ + +ERDINFO + — Read Type and Status Information About an EPC Page

ERDINFO + — Read Type and Status Information About an EPC Page

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 10H ENCLS[ERDINFO]IRV/VEAX[6]This leaf function returns type and status information about an EPC page.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + +
Op/EnEAXRBXRCX
IRERDINFO (In)Return error code (Out)Address of a RDINFO structure (In)Address of the destination EPC page (In)
+

Description + ¶ +

+

This instruction reads type and status information about an EPC page and returns it in a RDINFO structure. The STATUS field of the structure describes the status of the page and determines the validity of the remaining fields. The FLAGS field returns the EPCM permissions of the page; the page type; and the BLOCKED, PENDING, MODIFIED, and PR status of the page. For enclave pages, the ENCLAVECONTEXT field of the structure returns the value of SECS.ENCLAVECONTEXT. For non-enclave pages (e.g., VA) ENCLAVECONTEXT returns 0.

+

For invalid or non-EPC pages, the instruction returns an information code indicating the page's status, in addition to populating the STATUS field.

+

ERDINFO returns an error code if the destination EPC page is being modified by a concurrent SGX instruction.

+

RBX contains the effective address of a RDINFO structure while RCX contains the effective address of an EPC page. The table below provides additional information on the memory parameter of ERDINFO leaf function.

+

ERDINFO Memory Parameter Semantics + ¶ +

+ + + + + + +
RDINFOEPCPAGE
Read/Write access permitted by Non EnclaveRead access permitted by Enclave
+

The instruction faults if any of the following:

+

ERDINFO Faulting Conditions + ¶ +

+ + + + + + + + + +
A memory operand effective address is outside the DS segment limit (32b mode).A memory operand is not properly aligned.
DS segment is unusable (32b mode).A page fault occurs in accessing memory operands.
A memory address is in a non-canonical form (64b mode).
+

The error codes are:

+
+ + + + + + + + + + + + + + + + + + + + +
Error CodeValueDescription
No Error0ERDINFO successful.
SGX_EPC_PAGE_CONFLICTFailure due to concurrent operation of another SGX instruction.
SGX_PG_INVLDTarget page is not a valid EPC page.
SGX_PG_NONEPCPage is not an EPC page.
+
Table 38-39. ERDINFO Return Value in RAX
+
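For concreteness, a CPL-0 driver or VMM might wrap the ERDINFO leaf as sketched below. This is not part of the SDM text: the rdinfo structure layout and field names are illustrative assumptions; only the register convention from the encoding above (EAX = 10H, RBX = RDINFO address, RCX = EPC page address) and the return codes from Table 38-39 are taken from this page.

/* Hedged sketch in C (GCC inline asm); requires CPL 0 on SGX-capable hardware.
   The struct layout is an assumption for illustration, not the architectural
   RDINFO definition. */
#include <stdint.h>

struct rdinfo {
    uint64_t status;          /* page status bits (e.g., CHILDPRESENT)      */
    uint64_t flags;           /* EPCM RWX, page type, BLOCKED/PENDING/...   */
    uint64_t enclavecontext;  /* SECS.ENCLAVECONTEXT for enclave pages      */
    uint64_t reserved;
} __attribute__((aligned(32)));   /* RBX must be 32-byte aligned */

static inline uint64_t encls_erdinfo(struct rdinfo *ri, void *epc_page)
{
    uint64_t rax = 0x10;                       /* ERDINFO leaf number */
    asm volatile(".byte 0x0f, 0x01, 0xcf"      /* ENCLS */
                 : "+a"(rax)
                 : "b"(ri), "c"(epc_page)      /* epc_page must be 4KB aligned */
                 : "memory", "cc");
    return rax;   /* 0, SGX_EPC_PAGE_CONFLICT, SGX_PG_INVLD, or SGX_PG_NONEPC */
}

A caller could equally test ZF/CF after the instruction, as described under Flags Affected below, instead of decoding RAX.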

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn ConflictSGX_CONFLICT VM Exit Qualification
ERDINFOTarget [DS:RCX]SharedSGX_EPC_PAGE_ CONFLICT
+
Table 38-40. Base Concurrency Restrictions of ERDINFO
+
+ + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
ERDINFOTarget [DS:RCX]ConcurrentConcurrentConcurrent
+
Table 38-41. Additional Concurrency Restrictions of ERDINFO
+

Operation + ¶ +

+

Temp Variables in ERDINFO Operational Flow + ¶ +

+ + + + + + +
Name Type Size (Bits) Description
TMP_SECS Physical Address 64 Physical address of the SECS of the page being modified.
TMP_RDINFO Linear Address 64 Address of the RDINFO structure.
+

(* check alignment of RDINFO structure (RBX) *)

+

IF (DS:RBX is not 32Byte Aligned) THEN

+

#GP(0); FI;

+

(* check alignment of the EPCPAGE (RCX) *)

+

IF (DS:RCX is not 4KByte Aligned) THEN

+

#GP(0); FI;

+

(* check that EPCPAGE (DS:RCX) is the address of an EPC page *)

+

IF (DS:RCX does not resolve within EPC) THEN

+

RFLAGS.CF := 1;

+

RFLAGS.ZF := 0;

+

RAX := SGX_PG_NONEPC;

+

goto DONE;

+

FI;

+

(* Check the EPC page for concurrency *)

+

IF (EPC page is being modified) THEN

+

RFLAGS.ZF := 1;

+

RFLAGS.CF := 0;

+

RAX := SGX_EPC_PAGE_CONFLICT;

+

goto DONE;

+

FI;

+

(* check page validity *)

+

IF (EPCM(DS:RCX).VALID = 0) THEN

+

RFLAGS.CF := 1;

+

RFLAGS.ZF := 0;

+

RAX := SGX_PG_INVLD;

+

goto DONE;

+

FI;

+

(* clear the fields of the RDINFO structure *)

+

TMP_RDINFO := DS:RBX;

+

TMP_RDINFO.STATUS := 0;

+

TMP_RDINFO.FLAGS := 0;

+

TMP_RDINFO.ENCLAVECONTEXT := 0;

+

(* store page info in RDINFO structure *)

+

TMP_RDINFO.FLAGS.RWX := EPCM(DS:RCX).RWX;

+

TMP_RDINFO.FLAGS.PENDING := EPCM(DS:RCX).PENDING;

+

TMP_RDINFO.FLAGS.MODIFIED := EPCM(DS:RCX).MODIFIED;

+

TMP_RDINFO.FLAGS.PR := EPCM(DS:RCX).PR;

+

TMP_RDINFO.FLAGS.PAGE_TYPE := EPCM(DS:RCX).PAGE_TYPE;

+

TMP_RDINFO.FLAGS.BLOCKED := EPCM(DS:RCX).BLOCKED;

+

(* read SECS.ENCLAVECONTEXT for enclave child pages *)

+

IF ((EPCM(DS:RCX).PAGE_TYPE = PT_REG) or

+

(EPCM(DS:RCX).PAGE_TYPE = PT_TCS) or

+

(EPCM(DS:RCX).PAGE_TYPE = PT_TRIM) or

+

(EPCM(DS:RCX).PAGE_TYPE = PT_SS_FIRST) or

+

(EPCM(DS:RCX).PAGE_TYPE = PT_SS_REST)

+

) THEN

+

TMP_SECS := Address of SECS for (DS:RCX);

+

TMP_RDINFO.ENCLAVECONTEXT := SECS(TMP_SECS).ENCLAVECONTEXT;

+

FI;

+

(* populate enclave information for SECS pages *)

+

IF (EPCM(DS:RCX).PAGE_TYPE = PT_SECS) THEN

+

IF ((VMX non-root mode) and

+

(ENABLE_EPC_VIRTUALIZATION_EXTENSIONS Execution Control = 1)

+

) THEN

+

TMP_RDINFO.STATUS.CHILDPRESENT :=

+

((SECS(DS:RCX).CHLDCNT ≠ 0) or

+

SECS(DS:RCX).VIRTCHILDCNT ≠ 0);

+

ELSE

+

TMP_RDINFO.STATUS.CHILDPRESENT := (SECS(DS:RCX).CHLDCNT ≠ 0);

+

TMP_RDINFO.STATUS.VIRTCHILDPRESENT :=

+

(SECS(DS:RCX).VIRTCHILDCNT ≠ 0);

+

TMP_RDINFO.ENCLAVECONTEXT := SECS(DS:RCX).ENCLAVECONTEXT;

+

FI;

+

FI;

+

RAX := 0;

+

RFLAGS.ZF := 0;

+

RFLAGS.CF := 0;

+

DONE:

+

(* clear flags *)

+

RFLAGS.PF := 0;

+

RFLAGS.AF := 0;

+

RFLAGS.OF := 0;

+

RFLAGS.SF := 0;

+

Flags Affected + ¶ +

+

ZF is set if ERDINFO fails due to concurrent operation with another SGX instruction; otherwise cleared.

+

CF is set if page is not a valid EPC page or not an EPC page; otherwise cleared.

+

PF, AF, OF, and SF are cleared.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the DS segment limit.
If DS segment is unusable.
If a memory operand is not properly aligned.
#PF(errorcode) If a page fault occurs in accessing memory operands.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + +
#GP(0)If the memory address is in a non-canonical form.
If a memory operand is not properly aligned.
#PF(errorcode) If a page fault occurs in accessing memory operands.
diff --git a/x86/eremove.html b/x86/eremove.html new file mode 100644 index 0000000..1507c83 --- /dev/null +++ b/x86/eremove.html @@ -0,0 +1,265 @@ + +EREMOVE + — Remove a page from the EPC

EREMOVE + — Remove a page from the EPC

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 03H ENCLS[EREMOVE]IRV/VSGX1This leaf function removes a page from the EPC.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + +
Op/EnEAXRCX
IREREMOVE (In)Return error code (Out)Effective address of the EPC page (In)
+

Description + ¶ +

+

This leaf function causes an EPC page to be un-associated with its SECS and be marked as unused. This instruction leaf can only be executed when the current privilege level is 0.

+

The content of RCX is an effective address of an EPC page. The DS segment is used to create linear address. Segment override is not supported.

+

The instruction fails if the operand is not properly aligned, does not refer to an EPC page, the page is in use by another thread, or other threads are running in the enclave to which the page belongs. In addition, the instruction fails if the operand refers to an SECS with associations.

+

EREMOVE Memory Parameter Semantics + ¶ +

+ + + + +
EPCPAGE
Write access permitted by Enclave
+

The instruction faults if any of the following:

+

EREMOVE Faulting Conditions + ¶ +

+ + + + + + + + + + + + +
The memory operand is not properly aligned.The memory operand does not resolve in an EPC page.
Refers to an invalid SECS.Refers to an EPC page that is locked by another thread.
Another Intel SGX instruction is accessing the EPC page.RCX does not contain an effective address of an EPC page.
the EPC page refers to an SECS with associations.
+

The error codes are:

+
+ + + + + + + + + + + + +
Error Code (see Table 38-4)Description
No ErrorEREMOVE successful.
SGX_CHILD_PRESENTIf the SECS still has enclave pages loaded into the EPC.
SGX_ENCLAVE_ACTIf there are still logical processors executing inside the enclave.
+
Table 38-42. EREMOVE Return Value in RAX
+
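As an informal illustration (not SDM text), a CPL-0 driver reclaiming an EPC page could issue the EREMOVE leaf as follows; only the register convention from the encoding above (EAX = 03H, RCX = EPC page address) is assumed, and the error codes referred to are the ones in Table 38-42.

/* Hedged sketch in C (GCC inline asm); requires CPL 0. The caller is expected
   to have torn down child pages and enclave threads first, otherwise the
   instruction reports SGX_CHILD_PRESENT or SGX_ENCLAVE_ACT in RAX. */
#include <stdint.h>

static inline uint64_t encls_eremove(void *epc_page)   /* 4KB aligned */
{
    uint64_t rax = 0x03;                      /* EREMOVE leaf number */
    asm volatile(".byte 0x0f, 0x01, 0xcf"     /* ENCLS */
                 : "+a"(rax)
                 : "c"(epc_page)
                 : "memory", "cc");
    return rax;   /* 0 on success; ZF is also set on failure */
}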

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn Conflict
EREMOVETarget [DS:RCX]Exclusive#GP
+
Table 38-43. Base Concurrency Restrictions of EREMOVE
+
+ + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
EREMOVETarget [DS:RCX]ConcurrentConcurrentConcurrent
+
Table 38-44. Additional Concurrency Restrictions of EREMOVE
+

Operation + ¶ +

+

Temp Variables in EREMOVE Operational Flow + ¶ +

+ + + + + + + + + + +
NameTypeSize (Bits)Description
TMP_SECSEffective Address32/64Effective address of the SECS destination page.
+

IF (DS:RCX is not 4KByte Aligned)

+

THEN #GP(0); FI;

+

IF (DS:RCX does not resolve to an EPC page)

+

THEN #PF(DS:RCX); FI;

+

TMP_SECS := Get_SECS_ADDRESS();

+

(* Check the EPC page for concurrency *)

+

IF (EPC page being referenced by another Intel SGX instruction)

+

THEN

+

IF (<<VMX non-root operation>> AND <<ENABLE_EPC_VIRTUALIZATION_EXTENSIONS>>)

+

THEN

+

VMCS.Exit_reason := SGX_CONFLICT;

+

VMCS.Exit_qualification.code := EPC_PAGE_CONFLICT_EXCEPTION;

+

VMCS.Exit_qualification.error := 0;

+

VMCS.Guest-physical_address := << translation of DS:RCX produced by paging >>;

+

VMCS.Guest-linear_address := DS:RCX;

+

Deliver VMEXIT;

+

ELSE

+

#GP(0);

+

FI;

+

FI;

+

(* if DS:RCX is already unused, nothing to do*)

+

IF ( (EPCM(DS:RCX).VALID = 0) or (EPCM(DS:RCX).PT = PT_TRIM AND EPCM(DS:RCX).MODIFIED = 0))

+

THEN GOTO DONE;

+

FI;

+

IF ( (EPCM(DS:RCX).PT = PT_VA) OR

+

((EPCM(DS:RCX).PT = PT_TRIM) AND (EPCM(DS:RCX).MODIFIED = 0)) )

+

THEN

+

EPCM(DS:RCX).VALID := 0;

+

GOTO DONE;

+

FI;

+

IF (EPCM(DS:RCX).PT = PT_SECS)

+

THEN

+

IF (DS:RCX has an EPC page associated with it)

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_CHILD_PRESENT;

+

GOTO ERROR_EXIT;

+

FI;

+

(* treat SECS as having a child page when VIRTCHILDCNT is non-zero *)

+

IF (<<in VMX non-root operation>> AND

+

<<ENABLE_EPC_VIRTUALIZATION_EXTENSIONS>> AND

+

(SECS(DS:RCX).VIRTCHILDCNT ≠ 0))

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_CHILD_PRESENT

+

GOTO ERROR_EXIT

+

FI;

+

EPCM(DS:RCX).VALID := 0;

+

GOTO DONE;

+

FI;

+

IF (Other threads active using SECS)

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_ENCLAVE_ACT;

+

GOTO ERROR_EXIT;

+

FI;

+

IF ( (EPCM(DS:RCX).PT is PT_REG) or (EPCM(DS:RCX).PT is PT_TCS) or (EPCM(DS:RCX).PT is PT_TRIM) or

+

(EPCM(DS:RCX).PT is PT_SS_FIRST) or (EPCM(DS:RCX).PT is PT_SS_REST))

+

THEN

+

EPCM(DS:RCX).VALID := 0;

+

GOTO DONE;

+

FI;

+

DONE:

+

RAX := 0;

+

RFLAGS.ZF := 0;

+

ERROR_EXIT:

+

RFLAGS.CF,PF,AF,OF,SF := 0;

+

Flags Affected + ¶ +

+

Sets ZF if unsuccessful, otherwise cleared and RAX returns error code. Clears CF, PF, AF, OF, SF.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the DS segment limit.
If a memory operand is not properly aligned.
If another Intel SGX instruction is accessing the page.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If the memory operand is not an EPC page.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If the memory operand is non-canonical form.
If a memory operand is not properly aligned.
If another Intel SGX instruction is accessing the page.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If the memory operand is not an EPC page.
diff --git a/x86/ereport.html b/x86/ereport.html new file mode 100644 index 0000000..eb0ac52 --- /dev/null +++ b/x86/ereport.html @@ -0,0 +1,317 @@ + +EREPORT + — Create a Cryptographic Report of the Enclave

EREPORT + — Create a Cryptographic Report of the Enclave

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 00H ENCLU[EREPORT]IRV/VSGX1This leaf function creates a cryptographic report of the enclave.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnEAXRBXRCXRDX
IREREPORT (In)Address of TARGETINFO (In)Address of REPORTDATA (In)Address where the REPORT is written to in an OUTPUTDATA (In)
+

Description + ¶ +

+

This leaf function creates a cryptographic REPORT that describes the contents of the enclave. This instruction leaf can only be executed when inside the enclave. The cryptographic report can be used by other enclaves to determine that the enclave is running on the same platform.

+

RBX contains the effective address of the MRENCLAVE value of the enclave that will authenticate the REPORT output, using the REPORT key delivered by EGETKEY command for that enclave. RCX contains the effective address of a 64-byte REPORTDATA structure, which allows the caller of the instruction to associate data with the enclave from which the instruction is called. RDX contains the address where the REPORT will be output by the instruction.

+

EREPORT Memory Parameter Semantics + ¶ +

+ + + + + + + + +
TARGETINFOREPORTDATAOUTPUTDATA
Read access by EnclaveRead access by EnclaveRead/Write access by Enclave
+

This instruction leaf performs the following:

+

1. Validate the 3 operands (RBX, RCX, RDX) are inside the enclave.

+

2. Compute a report key for the target enclave, as indicated by the value located in RBX(TARGETINFO).

+

3. Assemble the enclave SECS data to complete the REPORT structure (including the data provided using the RCX (REPORTDATA) operand).

+

4. Compute a cryptographic hash over the REPORT structure.

+

5. Add the computed hash to the REPORT structure.

+

6. Output the completed REPORT structure to the address in RDX (OUTPUTDATA).

+

The instruction fails if the operands are not properly aligned.

+

CR_REPORT_KEYID, used to provide key wearout protection, is populated with a statistically unique value on boot of the platform by a trusted entity within the SGX TCB.

+

The instruction faults if any of the following:

+

EREPORT Faulting Conditions + ¶ +

+ + + + + + + + + +
An effective address is not properly aligned.A memory address does not resolve to an EPC page.
If accessing an invalid EPC page.If the EPC page is blocked.
May page fault.
+
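To make the calling convention concrete, the following is an informal sketch (not SDM text) of how enclave code might execute the EREPORT leaf; the alignments mirror the faulting conditions on this page, while the wrapper name is an illustrative assumption.

/* Hedged sketch in C (GCC inline asm); must execute inside an enclave. */
#include <stdint.h>

static inline void enclu_ereport(const void *targetinfo,   /* 512-byte aligned TARGETINFO */
                                 const void *reportdata,   /* 128-byte aligned REPORTDATA */
                                 void *outputdata)         /* 512-byte aligned REPORT out */
{
    asm volatile(".byte 0x0f, 0x01, 0xd7"                  /* ENCLU */
                 :
                 : "a"(0x00),          /* EREPORT leaf number */
                   "b"(targetinfo),
                   "c"(reportdata),
                   "d"(outputdata)
                 : "memory");
}

The resulting REPORT is then typically handed to the target enclave, which re-derives the report key with EGETKEY and verifies the CMAC over the structure.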

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn ConflictSGX_CONFLICT VM Exit Qualification
EREPORTTARGETINFO [DS:RBX]Concurrent
REPORTDATA [DS:RCX]Concurrent
OUTPUTDATA [DS:RDX]Concurrent
+
Table 38-72. Base Concurrency Restrictions of EREPORT
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
EREPORTTARGETINFO [DS:RBX]ConcurrentConcurrentConcurrent
REPORTDATA [DS:RCX]ConcurrentConcurrentConcurrent
OUTPUTDATA [DS:RDX]ConcurrentConcurrentConcurrent
+
Table 38-73. Additional Concurrency Restrictions of EREPORT
+

Operation + ¶ +

+

Temp Variables in EREPORT Operational Flow + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeSize (bits)Description
TMP_ATTRIBUTES32Physical address of SECS of the enclave to which source operand belongs.
TMP_CURRENTSECSAddress of the SECS for the currently executing enclave.
TMP_KEYDEPENDENCIESTemp space for key derivation.
TMP_REPORTKEY128REPORTKEY generated by the instruction.
TMP_REPORT3712
+

TMP_MODE64 := ((IA32_EFER.LMA = 1) && (CS.L = 1));

+

(* Address verification for TARGETINFO (RBX) *)

+

IF ( (DS:RBX is not 512Byte Aligned) or (DS:RBX is not within CR_ELRANGE) )

+

THEN #GP(0); FI;

+

IF (DS:RBX does not resolve within an EPC)

+

THEN #PF(DS:RBX); FI;

+

IF (EPCM(DS:RBX).VALID = 0)

+

THEN #PF(DS:RBX); FI;

+

IF (EPCM(DS:RBX).BLOCKED = 1)

+

THEN #PF(DS:RBX); FI;

+

(* Check page parameters for correctness *)

+

IF ( (EPCM(DS:RBX).PT ≠ PT_REG) or (EPCM(DS:RBX).ENCLAVESECS ≠ CR_ACTIVE_SECS) or (EPCM(DS:RBX).PENDING = 1) or

+

(EPCM(DS:RBX).MODIFIED = 1) or (EPCM(DS:RBX).ENCLAVEADDRESS ≠ (DS:RBX & ~0FFFH) ) or (EPCM(DS:RBX).R = 0) )

+

THEN #PF(DS:RBX);

+

FI;

+

(* Verify RESERVED spaces in TARGETINFO are valid *)

+

IF (DS:RBX.RESERVED != 0)

+

THEN #GP(0); FI;

+

(* Address verification for REPORTDATA (RCX) *)

+

IF ( (DS:RCX is not 128Byte Aligned) or (DS:RCX is not within CR_ELRANGE) )

+

THEN #GP(0); FI;

+

IF (DS:RCX does not resolve within an EPC)

+

THEN #PF(DS:RCX); FI;

+

IF (EPCM(DS:RCX).VALID = 0)

+

THEN #PF(DS:RCX); FI;

+

IF (EPCM(DS:RCX).BLOCKED = 1)

+

THEN #PF(DS:RCX); FI;

+

(* Check page parameters for correctness *)

+

IF ( (EPCM(DS:RCX).PT ≠ PT_REG) or (EPCM(DS:RCX).ENCLAVESECS ≠ CR_ACTIVE_SECS) or (EPCM(DS:RCX).PENDING = 1) or

+

(EPCM(DS:RCX).MODIFIED = 1) or (EPCM(DS:RCX).ENCLAVEADDRESS ≠ (DS:RCX & ~0FFFH) ) or (EPCM(DS:RCX).R = 0) )

+

THEN #PF(DS:RCX);

+

FI;

+

(* Address verification for OUTPUTDATA (RDX) *)

+

IF ( (DS:RDX is not 512Byte Aligned) or (DS:RDX is not within CR_ELRANGE) )

+

THEN #GP(0); FI;

+

IF (DS:RDX does not resolve within an EPC)

+

THEN #PF(DS:RDX); FI;

+

IF (EPCM(DS:RDX).VALID = 0)

+

THEN #PF(DS:RDX); FI;

+

IF (EPCM(DS:RDX).BLOCKED = 1)

+

THEN #PF(DS:RDX); FI;

+

(* Check page parameters for correctness *)

+

IF ( (EPCM(DS:RDX).PT ≠ PT_REG) or (EPCM(DS:RDX).ENCLAVESECS ≠ CR_ACTIVE_SECS) or (EPCM(DS:RCX).PENDING = 1) or

+

(EPCM(DS:RCX).MODIFIED = 1) or (EPCM(DS:RDX).ENCLAVEADDRESS ≠ (DS:RDX & ~0FFFH) ) or (EPCM(DS:RDX).W = 0) )

+

THEN #PF(DS:RDX);

+

FI;

+

(* REPORT MAC needs to be computed over data which cannot be modified *)

+

TMP_REPORT.CPUSVN := CR_CPUSVN;

+

TMP_REPORT.ISVFAMILYID := TMP_CURRENTSECS.ISVFAMILYID;

+

TMP_REPORT.ISVEXTPRODID := TMP_CURRENTSECS.ISVEXTPRODID;

+

TMP_REPORT.ISVPRODID := TMP_CURRENTSECS.ISVPRODID;

+

TMP_REPORT.ISVSVN := TMP_CURRENTSECS.ISVSVN;

+

TMP_REPORT.ATTRIBUTES := TMP_CURRENTSECS.ATTRIBUTES;

+

TMP_REPORT.REPORTDATA := DS:RCX[511:0];

+

TMP_REPORT.MRENCLAVE := TMP_CURRENTSECS.MRENCLAVE;

+

TMP_REPORT.MRSIGNER := TMP_CURRENTSECS.MRSIGNER;

+

TMP_REPORT.MRRESERVED := 0;

+

TMP_REPORT.KEYID[255:0] := CR_REPORT_KEYID;

+

TMP_REPORT.MISCSELECT := TMP_CURRENTSECS.MISCSELECT;

+

TMP_REPORT.CONFIGID := TMP_CURRENTSECS.CONFIGID;

+

TMP_REPORT.CONFIGSVN := TMP_CURRENTSECS.CONFIGSVN;

+

IF (CPUID.(EAX=12H, ECX=1):EAX[6] = 1)

+

THEN TMP_REPORT.CET_ATTRIBUTES := TMP_CURRENTSECS.CET_ATTRIBUTES; FI;

+

(* Derive the report key *)

+

TMP_KEYDEPENDENCIES.KEYNAME := REPORT_KEY;

+

TMP_KEYDEPENDENCIES.ISVFAMILYID := 0;

+

TMP_KEYDEPENDENCIES.ISVEXTPRODID := 0;

+

TMP_KEYDEPENDENCIES.ISVPRODID := 0;

+

TMP_KEYDEPENDENCIES.ISVSVN := 0;

+

TMP_KEYDEPENDENCIES.SGXOWNEREPOCH := CR_SGXOWNEREPOCH;

+

TMP_KEYDEPENDENCIES.ATTRIBUTES := DS:RBX.ATTRIBUTES;

+

TMP_KEYDEPENDENCIES.ATTRIBUTESMASK := 0;

+

TMP_KEYDEPENDENCIES.MRENCLAVE := DS:RBX.MEASUREMENT;

+

TMP_KEYDEPENDENCIES.MRSIGNER := 0;

+

TMP_KEYDEPENDENCIES.KEYID := TMP_REPORT.KEYID;

+

TMP_KEYDEPENDENCIES.SEAL_KEY_FUSES := CR_SEAL_FUSES;

+

TMP_KEYDEPENDENCIES.CPUSVN := CR_CPUSVN;

+

TMP_KEYDEPENDENCIES.PADDING := TMP_CURRENTSECS.PADDING;

+

TMP_KEYDEPENDENCIES.MISCSELECT := DS:RBX.MISCSELECT;

+

TMP_KEYDEPENDENCIES.MISCMASK := 0;

+

TMP_KEYDEPENDENCIES.KEYPOLICY := 0;

+

TMP_KEYDEPENDENCIES.CONFIGID := DS:RBX.CONFIGID;

+

TMP_KEYDEPENDENCIES.CONFIGSVN := DS:RBX.CONFIGSVN;

+

IF (CPUID.(EAX=12H, ECX=1):EAX[6] = 1)

+

THEN

+

TMP_KEYDEPENDENCIES.CET_ATTRIBUTES := DS:RBX.CET_ATTRIBUTES;

+

TMP_KEYDEPENDENCIES.CET_ATTRIBUTES _MASK := 0;

+

FI;

+

(* Calculate the derived key*)

+

TMP_REPORTKEY := derivekey(TMP_KEYDEPENDENCIES);

+

(* call cryptographic CMAC function, CMAC data are not including MAC&KEYID *)

+

TMP_REPORT.MAC := cmac(TMP_REPORTKEY, TMP_REPORT[3071:0] );

+

DS:RDX[3455: 0] := TMP_REPORT;

+

Flags Affected + ¶ +

+

None

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If executed outside an enclave.
If the address in RCX is outside the DS segment limit.
If a memory operand is not properly aligned.
If a memory operand is not in the current enclave.
#PF(errorcode) If a page fault occurs in accessing memory operands.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If executed outside an enclave.
If RCX is non-canonical form.
If a memory operand is not properly aligned.
If a memory operand is not in the current enclave.
#PF(errorcode) If a page fault occurs in accessing memory operands.
diff --git a/x86/eresume.html b/x86/eresume.html new file mode 100644 index 0000000..81bb2b2 --- /dev/null +++ b/x86/eresume.html @@ -0,0 +1,707 @@ + +ERESUME + — Re-Enters an Enclave

ERESUME + — Re-Enters an Enclave

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 03H ENCLU[ERESUME]IRV/VSGX1This leaf function is used to re-enter an enclave after an interrupt.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnRAXRBXRCX
IRERESUME (In)Address of a TCS (In)Address of AEP (In)
+

Description + ¶ +

+

The ENCLU[ERESUME] instruction resumes execution of an enclave that was interrupted due to an exception or interrupt, using the machine state previously stored in the SSA.

+

ERESUME Memory Parameter Semantics + ¶ +

+ + + + +
TCS
Enclave read/write access
+

The instruction faults if any of the following occurs:

+ + + + + + + + + + + + + + + + + + + + + +
Address in RBX is not properly aligned.Any TCS.FLAGS’s must-be-zero bit is not zero.
TCS pointed to by RBX is not valid or available or locked.Current 32/64 mode does not match the enclave mode in SECS.ATTRIBUTES.MODE64.
The SECS is in use by another enclave.Either of TCS-specified FS and GS segment is not a subset of the current DS segment.
Any one of DS, ES, CS, SS is not zero.If XSAVE available, CR4.OSXSAVE = 0, but SECS.ATTRIBUTES.XFRM ≠ 3.
CR4.OSFXSR ≠ 1.If CR4.OSXSAVE = 1, SECS.ATTRIBUTES.XFRM is not a subset of XCR0.
Offsets 520-535 of the XSAVE area not 0.The bit vector stored at offset 512 of the XSAVE area must be a subset of SECS.ATTRIBUTES.XFRM.
The SSA frame is not valid or in use.If SECS.ATTRIBUTES.AEXNOTIFY ≠ TCS.FLAGS.AEXNOTIFY and TCS.FLAGS.DBGOPTIN = 0.
+

The following operations are performed by ERESUME:

+
    +
  • RSP and RBP are saved in the current SSA frame on EENTER and are automatically restored on EEXIT or an asynchronous exit due to any Interrupt event.
  • +
  • The AEP contained in RCX is stored into the TCS for use by AEXs.
  • +
  • FS and GS (including hidden portions) are saved and new values are constructed using TCS.OFSBASE/GSBASE (32 and 64-bit mode) and TCS.OFSLIMIT/GSLIMIT (32-bit mode only). The resulting segments must be a subset of the DS segment.
  • +
  • If CR4.OSXSAVE == 1, XCR0 is saved and replaced by SECS.ATTRIBUTES.XFRM. The effect of RFLAGS.TF depends on whether the enclave entry is opt-in or opt-out (see Section 40.1.2): +
      +
    • On opt-out entry, TF is saved and cleared (it is restored on EEXIT or AEX). Any attempt to set TF via a POPF instruction while inside the enclave clears TF (see Section 40.2.5).
    • +
    • On opt-in entry, a single-step debug exception is pended on the instruction boundary immediately after EENTER (see Section 40.2.3).
  • +
  • All code breakpoints that do not overlap with ELRANGE are also suppressed. If the entry is an opt-out entry, all code and data breakpoints that overlap with the ELRANGE are suppressed.
  • +
  • On opt-out entry, a number of performance monitoring counters and behaviors are modified or suppressed (see Section 40.2.3): +
      +
    • All performance monitoring activity on the current thread is suppressed except for incrementing and firing of FIXED_CTR1 and FIXED_CTR2.
    • +
    • PEBS is suppressed.
    • +
    • AnyThread counting on other threads is demoted to MyThread mode and IA32_PERF_GLOBAL_STATUS[60] on that thread is set.
    • +
    • If the opt-out entry on a hardware thread results in suppression of any performance monitoring, then the processor sets IA32_PERF_GLOBAL_STATUS[60] and IA32_PERF_GLOBAL_STATUS[63].
+
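In practice the AEP that RCX points to is usually a tiny untrusted-runtime stub that simply re-executes ERESUME, so an interrupted enclave thread resumes transparently. The sketch below is not SDM text; real runtimes implement the AEP and this call in assembly because control does not return past ERESUME on success. Only the register convention from the encoding above (EAX = 03H, RBX = TCS, RCX = AEP) is assumed.

/* Hedged sketch in C (GCC inline asm); executes outside the enclave. */
static inline void enclu_eresume(void *tcs, void *aep)
{
    asm volatile(".byte 0x0f, 0x01, 0xd7"      /* ENCLU */
                 :
                 : "a"(0x03),                  /* ERESUME leaf number       */
                   "b"(tcs),                   /* 4KB-aligned TCS           */
                   "c"(aep)                    /* asynchronous exit pointer */
                 : "memory");
    /* On success execution continues inside the enclave; this point is
       reached again only after another asynchronous exit or an EEXIT. */
}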

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn Conflict
ERESUMETCS [DS:RBX]Shared
+
Table 38-74. Base Concurrency Restrictions of ERESUME
+
+ + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
ERESUMETCS [DS:RBX]ConcurrentConcurrentConcurrent
+
Table 38-75. Additional Concurrency Restrictions of ERESUME
+

Operation + ¶ +

+

Temp Variables in ERESUME Operational Flow + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeSizeDescription
TMP_FSBASEEffective Address32/64Proposed base address for FS segment.
TMP_GSBASEEffective Address32/64Proposed base address for FS segment.
TMP_FSLIMITEffective Address32/64Highest legal address in proposed FS segment.
TMP_GSLIMITEffective Address32/64Highest legal address in proposed GS segment.
TMP_TARGETEffective Address32/64Address of first instruction inside enclave at which execution is to resume.
TMP_SECSEffective Address32/64Physical address of SECS for this enclave.
TMP_SSAEffective Address32/64Address of current SSA frame.
TMP_XSIZEinteger64Size of XSAVE area based on SECS.ATTRIBUTES.XFRM.
TMP_SSA_PAGEEffective Address32/64Pointer used to iterate over the SSA pages in the current frame.
TMP_GPREffective Address32/64Address of the GPR area within the current SSA frame.
TMP_BRANCH_RECORDLBR RecordFrom/to addresses to be pushed onto the LBR stack.
TMP_NOTIFYBoolean1When set to 1, deliver an AEX notification.
+

TMP_MODE64 := ((IA32_EFER.LMA = 1) && (CS.L = 1));

+

(* Make sure DS is usable, expand up *)

+

IF (TMP_MODE64 = 0 and (DS not usable or ((DS[S] = 1) and (DS[bit 11] = 0) and (DS[bit 10] = 1))))

+

THEN #GP(0); FI;

+

(* Check that CS, SS, DS, ES.base is 0 *)

+

IF (TMP_MODE64 = 0)

+

THEN

+

IF(CS.base ≠ 0 or DS.base ≠ 0) #GP(0); FI;

+

IF(ES usable and ES.base ≠ 0) #GP(0); FI;

+

IF(SS usable and SS.base ≠ 0) #GP(0); FI;

+

IF(SS usable and SS.B = 0) #GP(0); FI;

+

FI;

+

IF (DS:RBX is not 4KByte Aligned)

+

THEN #GP(0); FI;

+

IF (DS:RBX does not resolve within an EPC)

+

THEN #PF(DS:RBX); FI;

+

(* Check AEP is canonical*)

+

IF (TMP_MODE64 = 1 and (CS:RCX is not canonical))

+

THEN #GP(0); FI;

+

(* Check concurrency of TCS operation*)

+

IF (Other Intel SGX instructions are operating on TCS)

+

THEN #GP(0); FI;

+

(* TCS verification *)

+

IF (EPCM(DS:RBX).VALID = 0)

+

THEN #PF(DS:RBX); FI;

+

IF (EPCM(DS:RBX).BLOCKED = 1)

+

THEN #PF(DS:RBX); FI;

+

IF ((EPCM(DS:RBX).PENDING = 1) or (EPCM(DS:RBX).MODIFIED = 1))

+

THEN #PF(DS:RBX); FI;

+

IF ( (EPCM(DS:RBX).ENCLAVEADDRESS ≠ DS:RBX) or (EPCM(DS:RBX).PT ≠ PT_TCS))

+

THEN #PF(DS:RBX); FI;

+

IF ( (DS:RBX).OSSA is not 4KByte Aligned)

+

THEN #GP(0); FI;

+

(* Check proposed FS and GS *)

+

IF ( ( (DS:RBX).OFSBASE is not 4KByte Aligned) or ( (DS:RBX).OGSBASE is not 4KByte Aligned))

+

THEN #GP(0); FI;

+

(* Get the SECS for the enclave in which the TCS resides *)

+

TMP_SECS := Address of SECS for TCS;

+

(* Make sure that the FLAGS field in the TCS does not have any reserved bits set *)

+

IF ( ( (DS:RBX).FLAGS & FFFFFFFFFFFFFFFCH) ≠ 0)

+

THEN #GP(0); FI;

+

(* SECS must exist and enclave must have previously been EINITted *)

+

IF (the enclave is not already initialized)

+

THEN #GP(0); FI;

+

(* make sure the logical processor's operating mode matches the enclave *)

+

IF ( (TMP_MODE64 ≠ TMP_SECS.ATTRIBUTES.MODE64BIT))

+

THEN #GP(0); FI;

+

IF (CR4.OSFXSR = 0)

+

THEN #GP(0); FI;

+

(* Check for legal values of SECS.ATTRIBUTES.XFRM *)

+

IF (CR4.OSXSAVE = 0)

+

THEN

+

IF (TMP_SECS.ATTRIBUTES.XFRM ≠ 03H) THEN #GP(0); FI;

+

ELSE

+

IF ( (TMP_SECS.ATTRIBUTES.XFRM & XCR0) ≠ TMP_SECS.ATTRIBUTES.XFRM) THEN #GP(0); FI;

+

FI;

+

IF ( ((DS:RBX).FLAGS.DBGOPTIN = 0) and ((DS:RBX).FLAGS.AEXNOTIFY ≠ TMP_SECS.ATTRIBUTES.AEXNOTIFY) )

+

THEN #GP(0); FI;

+

(* Make sure the SSA contains at least one active frame *)

+

IF ( (DS:RBX).CSSA = 0)

+

THEN #GP(0); FI;

+

(* Compute linear address of SSA frame *)

+

TMP_SSA := (DS:RBX).OSSA + TMP_SECS.BASEADDR + 4096 * TMP_SECS.SSAFRAMESIZE * ( (DS:RBX).CSSA - 1);

+

TMP_XSIZE := compute_XSAVE_frame_size(TMP_SECS.ATTRIBUTES.XFRM);

+

FOR EACH TMP_SSA_PAGE = TMP_SSA to TMP_SSA + TMP_XSIZE

+

(* Check page is read/write accessible *)

+

Check that DS:TMP_SSA_PAGE is read/write accessible;

+

If a fault occurs, release locks, abort and deliver that fault;

+

IF (DS:TMP_SSA_PAGE does not resolve to EPC page)

+

THEN #PF(DS:TMP_SSA_PAGE); FI;

+

IF (EPCM(DS:TMP_SSA_PAGE).VALID = 0)

+

THEN #PF(DS:TMP_SSA_PAGE); FI;

+

IF (EPCM(DS:TMP_SSA_PAGE).BLOCKED = 1)

+

THEN #PF(DS:TMP_SSA_PAGE); FI;

+

IF ((EPCM(DS:TMP_SSA_PAGE).PENDING = 1) or (EPCM(DS:TMP_SSA_PAGE).MODIFIED = 1))

+

THEN #PF(DS:TMP_SSA_PAGE); FI;

+

IF ( ( EPCM(DS:TMP_SSA_PAGE).ENCLAVEADDRESS ≠ DS:TMP_SSA_PAGE) or (EPCM(DS:TMP_SSA_PAGE).PT ≠ PT_REG) or

+

(EPCM(DS:TMP_SSA_PAGE).ENCLAVESECS ≠ EPCM(DS:RBX).ENCLAVESECS) or

+

(EPCM(DS:TMP_SSA_PAGE).R = 0) or (EPCM(DS:TMP_SSA_PAGE).W = 0) )

+

THEN #PF(DS:TMP_SSA_PAGE); FI;

+

CR_XSAVE_PAGE_n := Physical_Address(DS:TMP_SSA_PAGE);

+

ENDFOR

+

(* Compute address of GPR area*)

+

TMP_GPR := TMP_SSA + 4096 * DS:TMP_SECS.SSAFRAMESIZE - sizeof(GPRSGX_AREA);

+

Check that DS:TMP_GPR is read/write accessible;

+

If a fault occurs, release locks, abort and deliver that fault;

+

IF (DS:TMP_GPR does not resolve to EPC page)

+

THEN #PF(DS:TMP_GPR); FI;

+

IF (EPCM(DS:TMP_GPR).VALID = 0)

+

THEN #PF(DS:TMP_GPR); FI;

+

IF (EPCM(DS:TMP_GPR).BLOCKED = 1)

+

THEN #PF(DS:TMP_GPR); FI;

+

IF ((EPCM(DS:TMP_GPR).PENDING = 1) or (EPCM(DS:TMP_GPR).MODIFIED = 1))

+

THEN #PF(DS:TMP_GPR); FI;

+

IF ( ( EPCM(DS:TMP_GPR).ENCLAVEADDRESS ≠ DS:TMP_GPR) or (EPCM(DS:TMP_GPR).PT ≠ PT_REG) or

+

(EPCM(DS:TMP_GPR).ENCLAVESECS ≠ EPCM(DS:RBX).ENCLAVESECS) or

+

(EPCM(DS:TMP_GPR).R = 0) or (EPCM(DS:TMP_GPR).W = 0))

+

THEN #PF(DS:TMP_GPR); FI;

+

IF (TMP_MODE64 = 0)

+

THEN

+

IF (TMP_GPR + (GPR_SIZE -1) is not in DS segment) THEN #GP(0); FI;

+

FI;

+

CR_GPR_PA := Physical_Address (DS: TMP_GPR);

+

IF (((DS:RBX).FLAGS.AEXNOTIFY = 1) and (DS:TMP_GPR.AEXNOTIFY[0] = 1))

+

THEN

+

TMP_NOTIFY := 1;

+

ELSE

+

TMP_NOTIFY := 0;

+

FI;

+

IF (TMP_NOTIFY = 1)

+

THEN

+

(* Make sure the SSA contains at least one more frame *)

+

IF ((DS:RBX).CSSA ≥ (DS:RBX).NSSA)

+

THEN #GP(0); FI;

+

TMP_SSA := TMP_SSA + 4096 * TMP_SECS.SSAFRAMESIZE;

+

TMP_XSIZE := compute_XSAVE_frame_size(TMP_SECS.ATTRIBUTES.XFRM);

+

FOR EACH TMP_SSA_PAGE = TMP_SSA to TMP_SSA + TMP_XSIZE

+

(* Check page is read/write accessible *)

+

Check that DS:TMP_SSA_PAGE is read/write accessible;

+

If a fault occurs, release locks, abort and deliver that fault;

+

IF (DS:TMP_SSA_PAGE does not resolve to EPC page)

+

THEN #PF(DS:TMP_SSA_PAGE); FI;

+

IF (EPCM(DS:TMP_SSA_PAGE).VALID = 0)

+

THEN #PF(DS:TMP_SSA_PAGE); FI;

+

IF (EPCM(DS:TMP_SSA_PAGE).BLOCKED = 1)

+

THEN #PF(DS:TMP_SSA_PAGE); FI;

+

IF ((EPCM(DS:TMP_SSA_PAGE).PENDING = 1) or

+

(EPCM(DS:TMP_SSA_PAGE).MODIFIED = 1))

+

THEN #PF(DS:TMP_SSA_PAGE); FI;

+

IF ((EPCM(DS:TMP_SSA_PAGE).ENCLAVEADDRESS ≠ DS:TMP_SSA_PAGE) or

+

(EPCM(DS:TMP_SSA_PAGE).PT ≠ PT_REG) or

+

(EPCM(DS:TMP_SSA_PAGE).ENCLAVESECS ≠ EPCM(DS:RBX).ENCLAVESECS) or

+

(EPCM(DS:TMP_SSA_PAGE).R = 0) or (EPCM(DS:TMP_SSA_PAGE).W = 0))

+

THEN #PF(DS:TMP_SSA_PAGE); FI;

+

CR_XSAVE_PAGE_n := Physical_Address(DS:TMP_SSA_PAGE);

+

ENDFOR

+

(* Compute address of GPR area*)

+

TMP_GPR := TMP_SSA + 4096 * DS:TMP_SECS.SSAFRAMESIZE - sizeof(GPRSGX_AREA);

+

If a fault occurs; release locks, abort and deliver that fault;

+

IF (DS:TMP_GPR does not resolve to EPC page)

+

THEN #PF(DS:TMP_GPR); FI;

+

IF (EPCM(DS:TMP_GPR).VALID = 0)

+

THEN #PF(DS:TMP_GPR); FI;

+

IF (EPCM(DS:TMP_GPR).BLOCKED = 1)

+

THEN #PF(DS:TMP_GPR); FI;

+

IF ((EPCM(DS:TMP_GPR).PENDING = 1) or (EPCM(DS:TMP_GPR).MODIFIED = 1))

+

THEN #PF(DS:TMP_GPR); FI;

+

IF ((EPCM(DS:TMP_GPR).ENCLAVEADDRESS ≠ DS:TMP_GPR) or

+

(EPCM(DS:TMP_GPR).PT ≠ PT_REG) or

+

(EPCM(DS:TMP_GPR).ENCLAVESECS ≠ EPCM(DS:RBX).ENCLAVESECS) or

+

(EPCM(DS:TMP_GPR).R = 0) or (EPCM(DS:TMP_GPR).W = 0))

+

THEN #PF(DS:TMP_GPR); FI;

+

IF (TMP_MODE64 = 0)

+

THEN

+

IF (TMP_GPR + (GPR_SIZE -1) is not in DS segment) THEN #GP(0); FI;

+

FI;

+

CR_GPR_PA := Physical_Address (DS: TMP_GPR);

+

TMP_TARGET := (DS:RBX).OENTRY + TMP_SECS.BASEADDR;

+

ELSE

+

TMP_TARGET := (DS:TMP_GPR).RIP;

+

FI;

+

IF (TMP_MODE64 = 1)

+

THEN

+

IF (TMP_TARGET is not canonical) THEN #GP(0); FI;

+

ELSE

+

IF (TMP_TARGET > CS limit) THEN #GP(0); FI;

+

FI;

+

(* Check proposed FS/GS segments fall within DS *)

+

IF (TMP_MODE64 = 0)

+

THEN

+

TMP_FSBASE := (DS:RBX).OFSBASE + TMP_SECS.BASEADDR;

+

TMP_FSLIMIT := (DS:RBX).OFSBASE + TMP_SECS.BASEADDR + (DS:RBX).FSLIMIT;

+

TMP_GSBASE := (DS:RBX).OGSBASE + TMP_SECS.BASEADDR;

+

TMP_GSLIMIT := (DS:RBX).OGSBASE + TMP_SECS.BASEADDR + (DS:RBX).GSLIMIT;

+

(* if FS wrap-around, make sure DS has no holes*)

+

IF (TMP_FSLIMIT < TMP_FSBASE)

+

THEN

+

IF (DS.limit < 4GB) THEN #GP(0); FI;

+

ELSE

+

IF (TMP_FSLIMIT > DS.limit) THEN #GP(0); FI;

+

FI;

+

(* if GS wrap-around, make sure DS has no holes*)

+

IF (TMP_GSLIMIT < TMP_GSBASE)

+

THEN

+

IF (DS.limit < 4GB) THEN #GP(0); FI;

+

ELSE

+

IF (TMP_GSLIMIT > DS.limit) THEN #GP(0); FI;

+

FI;

+

ELSE

+

IF (TMP_NOTIFY = 1)

+

THEN

+

TMP_FSBASE := (DS:RBX).OFSBASE + TMP_SECS.BASEADDR;

+

TMP_GSBASE := (DS:RBX).OGSBASE + TMP_SECS.BASEADDR;

+

ELSE

+

TMP_FSBASE := DS:TMP_GPR.FSBASE;

+

TMP_GSBASE := DS:TMP_GPR.GSBASE;

+

FI;

+

IF ((TMP_FSBASE is not canonical) or (TMP_GSBASE is not canonical))

+

THEN #GP(0); FI;

+

FI;

+

(* Ensure the enclave is not already active and this thread is the only one using the TCS*)

+

IF (DS:RBX.STATE = ACTIVE)

+

THEN #GP(0); FI;

+

TMP_IA32_U_CET := 0

+

TMP_SSP := 0

+

IF (CPUID.(EAX=12H, ECX=1):EAX[6] = 1)

+

THEN

+

IF ( CR4.CET = 0 )

+

THEN

+

(* If part does not support CET or CET has not been enabled and enclave requires CET then fail *)

+

IF (TMP_SECS.CET_ATTRIBUTES ≠ 0 OR TMP_SECS.CET_LEG_BITMAP_OFFSET ≠ 0) #GP(0); FI;

+

FI;

+

(* If indirect branch tracking or shadow stacks enabled but CET state save area is not 16B aligned then fail ERESUME *)

+

IF (TMP_SECS.CET_ATTRIBUTES.SH_STK_EN = 1 OR TMP_SECS.CET_ATTRIBUTES.ENDBR_EN = 1)

+

THEN

+

IF (DS:RBX.OCETSSA is not 16B aligned) #GP(0); FI;

+

FI;

+

IF (TMP_SECS.CET_ATTRIBUTES.SH_STK_EN OR TMP_SECS.CET_ATTRIBUTES.ENDBR_EN)

+

THEN

+

(* Setup CET state from SECS, note tracker goes to IDLE *)

+

TMP_IA32_U_CET = TMP_SECS.CET_ATTRIBUTES;

+

IF (TMP_IA32_U_CET.LEG_IW_EN = 1 AND TMP_IA32_U_CET.ENDBR_EN = 1)

+

THEN

+

TMP_IA32_U_CET := TMP_IA32_U_CET + TMP_SECS.BASEADDR;

+

TMP_IA32_U_CET := TMP_IA32_U_CET + TMP_SECS.CET_LEG_BITMAP_BASE;

+

FI;

+

(* Compute linear address of what will become new CET state save area and cache its PA *)

+

IF (TMP_NOTIFY = 1)

+

THEN

+

TMP_CET_SAVE_AREA = DS:RBX.OCETSSA + TMP_SECS.BASEADDR + (DS:RBX.CSSA) * 16;

+

ELSE

+

TMP_CET_SAVE_AREA = DS:RBX.OCETSSA + TMP_SECS.BASEADDR + (DS:RBX.CSSA - 1) * 16;

+

FI;

+

TMP_CET_SAVE_PAGE = TMP_CET_SAVE_AREA & ~0xFFF;

+

Check the TMP_CET_SAVE_PAGE page is read/write accessible

+

If fault occurs release locks, abort and deliver fault

+

(* read the EPCM VALID, PENDING, MODIFIED, BLOCKED and PT fields atomically *)

+

IF ((DS:TMP_CET_SAVE_PAGE Does NOT RESOLVE TO EPC PAGE) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).VALID = 0) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).PENDING = 1) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).MODIFIED = 1) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).BLOCKED = 1) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).R = 0) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).W = 0) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).ENCLAVEADDRESS ≠ DS:TMP_CET_SAVE_PAGE) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).PT ≠ PT_SS_REST) OR

+

(EPCM(DS:TMP_CET_SAVE_PAGE).ENCLAVESECS ≠ EPCM(DS:RBX).ENCLAVESECS))

+

THEN

+

#PF(DS:TMP_CET_SAVE_PAGE);

+

FI;

+

CR_CET_SAVE_AREA_PA := Physical address(DS:TMP_CET_SAVE_AREA)

+

IF (TMP_NOTIFY = 1)

+

THEN

+

IF TMP_IA32_U_CET.SH_STK_EN = 1

+

THEN TMP_SSP = TCS.PREVSSP; FI;

+

ELSE

+

TMP_SSP = CR_CET_SAVE_AREA_PA.SSP

+

TMP_IA32_U_CET.TRACKER = CR_CET_SAVE_AREA_PA.TRACKER;

+

TMP_IA32_U_CET.SUPPRESS = CR_CET_SAVE_AREA_PA.SUPPRESS;

+

IF ( (TMP_MODE64 = 1 AND TMP_SSP is not canonical) OR

+

(TMP_MODE64 = 0 AND (TMP_SSP & 0xFFFFFFFF00000000) ≠ 0) OR

+

(TMP_SSP is not 4 byte aligned) OR

+

(TMP_IA32_U_CET.TRACKER = WAIT_FOR_ENDBRANCH AND TMP_IA32_U_CET.SUPPRESS = 1) OR

+

(CR_CET_SAVE_AREA_PA.Reserved ≠ 0) ) #GP(0); FI;

+

FI;

+

FI;

+

FI;

+

IF (TMP_NOTIFY = 0)

+

THEN

+

(* SECS.ATTRIBUTES.XFRM selects the features to be saved. *)

+

(* CR_XSAVE_PAGE_n: A list of 1 or more physical address of pages that contain the XSAVE area. *)

+

XRSTOR(TMP_MODE64, SECS.ATTRIBUTES.XFRM, CR_XSAVE_PAGE_n);

+

IF (XRSTOR failed with #GP)

+

THEN

+

DS:RBX.STATE := INACTIVE;

+

#GP(0);

+

FI;

+

FI;

+

CR_ENCLAVE_MODE := 1;

+

CR_ACTIVE_SECS := TMP_SECS;

+

CR_ELRANGE := (TMP_SECS.BASEADDR, TMP_SECS.SIZE);

+

(* Save state for possible AEXs *)

+

CR_TCS_PA := Physical_Address (DS:RBX);

+

CR_TCS_LA := RBX;

+

CR_TCS_LA.AEP := RCX;

+

(* Save the hidden portions of FS and GS *)

+

CR_SAVE_FS_selector := FS.selector;

+

CR_SAVE_FS_base := FS.base;

+

CR_SAVE_FS_limit := FS.limit;

+

CR_SAVE_FS_access_rights := FS.access_rights;

+

CR_SAVE_GS_selector := GS.selector;

+

CR_SAVE_GS_base := GS.base;

+

CR_SAVE_GS_limit := GS.limit;

+

CR_SAVE_GS_access_rights := GS.access_rights;

+

IF (TMP_NOTIFY = 1)

+

THEN

+

(* If XSAVE is enabled, save XCR0 and replace it with SECS.ATTRIBUTES.XFRM*)

+

IF (CR4.OSXSAVE = 1)

+

THEN

+

CR_SAVE_XCR0 := XCR0;

+

XCR0 := TMP_SECS.ATTRIBUTES.XFRM;

+

FI;

+

FI;

+

RIP := TMP_TARGET;

+

IF (TMP_NOTIFY = 1)

+

THEN

+

RCX := RIP;

+

RAX := (DS:RBX).CSSA;

+

(* Save the outside RSP and RBP so they can be restored on interrupt or EEXIT *)

+

DS:TMP_SSA.U_RSP := RSP;

+

DS:TMP_SSA.U_RBP := RBP;

+

ELSE

+

Restore_GPRs from DS:TMP_GPR;

+

(*Restore the RFLAGS values from SSA*)

+

RFLAGS.CF := DS:TMP_GPR.RFLAGS.CF;

+

RFLAGS.PF := DS:TMP_GPR.RFLAGS.PF;

+

RFLAGS.AF := DS:TMP_GPR.RFLAGS.AF;

+

RFLAGS.ZF := DS:TMP_GPR.RFLAGS.ZF;

+

RFLAGS.SF := DS:TMP_GPR.RFLAGS.SF;

+

RFLAGS.DF := DS:TMP_GPR.RFLAGS.DF;

+

RFLAGS.OF := DS:TMP_GPR.RFLAGS.OF;

+

RFLAGS.NT := DS:TMP_GPR.RFLAGS.NT;

+

RFLAGS.AC := DS:TMP_GPR.RFLAGS.AC;

+

RFLAGS.ID := DS:TMP_GPR.RFLAGS.ID;

+

RFLAGS.RF := DS:TMP_GPR.RFLAGS.RF;

+

RFLAGS.VM := 0;

+

IF (RFLAGS.IOPL = 3)

+

THEN RFLAGS.IF := DS:TMP_GPR.RFLAGS.IF; FI;

+

IF (TCS.FLAGS.OPTIN = 0)

+

THEN RFLAGS.TF := 0; FI;

+

(* If XSAVE is enabled, save XCR0 and replace it with SECS.ATTRIBUTES.XFRM*)

+

IF (CR4.OSXSAVE = 1)

+

THEN

+

CR_SAVE_XCR0 := XCR0;

+

XCR0 := TMP_SECS.ATTRIBUTES.XFRM;

+

FI;

+

(* Pop the SSA stack*)

+

(DS:RBX).CSSA := (DS:RBX).CSSA -1;

+

FI;

+

(* Do the FS/GS swap *)

+

FS.base := TMP_FSBASE;

+

FS.limit := DS:RBX.FSLIMIT;

+

FS.type := 0001b;

+

FS.W := DS.W;

+

FS.S := 1;

+

FS.DPL := DS.DPL;

+

FS.G := 1;

+

FS.B := 1;

+

FS.P := 1;

+

FS.AVL := DS.AVL;

+

FS.L := DS.L;

+

FS.unusable := 0;

+

FS.selector := 0BH;

+

GS.base := TMP_GSBASE;

+

GS.limit := DS:RBX.GSLIMIT;

+

GS.type := 0001b;

+

GS.W := DS.W;

+

GS.S := 1;

+

GS.DPL := DS.DPL;

+

GS.G := 1;

+

GS.B := 1;

+

GS.P := 1;

+

GS.AVL := DS.AVL;

+

GS.L := DS.L;

+

GS.unusable := 0;

+

GS.selector := 0BH;

+

CR_DBGOPTIN := TCS.FLAGS.DBGOPTIN;

+

Suppress all code breakpoints that are outside ELRANGE;

+

IF (CR_DBGOPTIN = 0)

+

THEN

+

Suppress all code breakpoints that overlap with ELRANGE;

+

CR_SAVE_TF := RFLAGS.TF;

+

RFLAGS.TF := 0;

+

Suppress any MTF VM exits during execution of the enclave;

+

Clear all pending debug exceptions;

+

Clear any pending MTF VM exit;

+

ELSE

+

IF (TMP_NOTIFY = 1)

+

THEN

+

IF RFLAGS.TF = 1

+

THEN pend a single-step #DB at the end of ERESUME; FI;

+

IF the “monitor trap flag” VM-execution control is set

+

THEN pend an MTF VM exit at the end of ERESUME; FI;

+

ELSE

+

Clear all pending debug exceptions;

+

Clear pending MTF VM exits;

+

FI;

+

FI;

+

IF ((CPUID.(EAX=7H, ECX=0):EDX[CET_IBT] = 1) OR (CPUID.(EAX=7H, ECX=0):ECX[CET_SS] = 1))

+

THEN

+

(* Save enclosing application CET state into save registers *)

+

CR_SAVE_IA32_U_CET := IA32_U_CET

+

(* Setup enclave CET state *)

+

IF CPUID.(EAX=07H, ECX=00h):ECX[CET_SS] = 1

+

THEN

+

CR_SAVE_SSP := SSP

+

SSP := TMP_SSP;

+

FI;

+

IA32_U_CET := TMP_IA32_U_CET;

+

FI;

+

(* Assure consistent translations *)

+

Flush_linear_context;

+

Clear_Monitor_FSM;

+

Allow_front_end_to_begin_fetch_at_new_RIP;

+

Flags Affected + ¶ +

+

RFLAGS.TF is cleared on opt-out entry.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If DS:RBX is not page aligned.
If the enclave is not initialized.
If the thread is not in the INACTIVE state.
If CS, DS, ES or SS bases are not all zero.
If executed in enclave mode.
If part or all of the FS or GS segment specified by TCS is outside the DS segment.
If any reserved field in the TCS FLAG is set.
If the target address is not within the CS segment.
If CR4.OSFXSR = 0.
If CR4.OSXSAVE = 0 and SECS.ATTRIBUTES.XFRM ≠ 3.
If CR4.OSXSAVE = 1 and SECS.ATTRIBUTES.XFRM is not a subset of XCR0.
If SECS.ATTRIBUTES.AEXNOTIFY ≠ TCS.FLAGS.AEXNOTIFY and TCS.FLAGS.DBGOPTIN = 0.
#PF(errorcode) If a page fault occurs in accessing memory.
If DS:RBX does not point to a valid TCS.
If one or more pages of the current SSA frame are not readable/writable, or do not resolve to a valid PT_REG EPC page.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If DS:RBX is not page aligned.
If the enclave is not initialized.
If the thread is not in the INACTIVE state.
If CS, DS, ES or SS bases are not all zero.
If executed in enclave mode.
If part or all of the FS or GS segment specified by TCS is outside the DS segment.
If any reserved field in the TCS FLAG is set.
If the target address is not canonical.
If CR4.OSFXSR = 0.
If CR4.OSXSAVE = 0 and SECS.ATTRIBUTES.XFRM ≠ 3.
If CR4.OSXSAVE = 1 and SECS.ATTRIBUTES.XFRM is not a subset of XCR0.
If SECS.ATTRIBUTES.AEXNOTIFY ≠ TCS.FLAGS.AEXNOTIFY and TCS.FLAGS.DBGOPTIN = 0.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If DS:RBX does not point to a valid TCS.
If one or more pages of the current SSA frame are not readable/writable, or do not resolve to a valid PT_REG EPC page.
diff --git a/x86/esetcontext.html b/x86/esetcontext.html new file mode 100644 index 0000000..c87f53b --- /dev/null +++ b/x86/esetcontext.html @@ -0,0 +1,231 @@ + +ESETCONTEXT + — Set the ENCLAVECONTEXT Field in SECS

ESETCONTEXT + — Set the ENCLAVECONTEXT Field in SECS

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 02H ENCLV[ESETCONTEXT]IRV/VEAX[5]This leaf function sets the ENCLAVECONTEXT field in SECS.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + +
Op/EnEAXRCXRDX
IRESETCONTEXT (In)Return error code (Out)Address of the destination EPC page (In, EA)Context Value (In, EA)
+

Description + ¶ +

+

The ESETCONTEXT leaf overwrites the ENCLAVECONTEXT field in the SECS. ECREATE and ELD of an SECS set the ENCLAVECONTEXT field in the SECS to the address of the SECS (for access later in ERDINFO). The ESETCONTEXT instruction allows a VMM to overwrite the default context value if necessary, for example, if the VMM is emulating ECREATE or ELD on behalf of the guest.

+

The content of RCX is an effective address of the SECS page to be updated, RDX contains the address pointing to the value to be stored in the SECS. The DS segment is used to create linear address. Segment override is not supported.

+

The instruction fails if:

+
    +
  • The operand is not properly aligned.
  • +
  • RCX does not refer to an SECS page.
+
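A minimal sketch of how a VMM might issue this leaf is shown below (not SDM text); only the register convention from the encoding above (EAX = 02H, RCX = SECS page, RDX = pointer to the 8-byte context value) is assumed.

/* Hedged sketch in C (GCC inline asm); ENCLV leaves are executed by a VMM
   in VMX root operation. */
#include <stdint.h>

static inline uint64_t enclv_esetcontext(void *secs_page,       /* 4KB aligned    */
                                         const uint64_t *ctx)   /* 8-byte aligned */
{
    uint64_t rax = 0x02;                      /* ESETCONTEXT leaf number */
    asm volatile(".byte 0x0f, 0x01, 0xc0"     /* ENCLV */
                 : "+a"(rax)
                 : "c"(secs_page), "d"(ctx)
                 : "memory", "cc");
    return rax;   /* 0 on success, SGX_EPC_PAGE_CONFLICT on a concurrency conflict */
}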

ESETCONTEXT Memory Parameter Semantics + ¶ +

+ + + + + + +
EPCPAGECONTEXT
Read access permitted by EnclaveRead/Write access permitted by Non Enclave
+

The instruction faults if any of the following:

+

ESETCONTEXT Faulting Conditions + ¶ +

+ + + + + + + + + +
A memory operand effective address is outside the DS segment limit (32b mode).A memory operand is not properly aligned.
DS segment is unusable (32b mode).A page fault occurs in accessing memory operands.
A memory address is in a non-canonical form (64b mode).
+

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn ConflictSGX_CONFLICT VM Exit Qualification
ESETCONTEXTSECS [DS:RCX]SharedSGX_EPC_PAGE_ CONFLICT
+
Table 38-80. Base Concurrency Restrictions of ESETCONTEXT
+
+ + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
ESETCONTEXTSECS [DS:RCX]ConcurrentConcurrentConcurrent
+
Table 38-81. Additional Concurrency Restrictions of ESETCONTEXT
+

Operation + ¶ +

+

Temp Variables in ESETCONTEXT Operational Flow + ¶ +

+ + + + + + + + + + + + + + + +
NameTypeSize (bits)Description
TMP_SECSPhysical Address64Physical address of the SECS of the page being modified.
TMP_CONTEXTCONTEXT64Data Value of CONTEXT.
+

ESETCONTEXT Return Value in RAX + ¶ +

+ + + + + + + + + + + + +
ErrorValueDescription
No Error0ESETCONTEXT Successful.
SGX_EPC_PAGE_CONFLICTFailure due to concurrent operation of another SGX instruction.
+

(* check alignment of the EPCPAGE (RCX) *)

+

IF (DS:RCX is not 4KByte Aligned) THEN

+

#GP(0); FI;

+

(* check that EPCPAGE (DS:RCX) is the address of an EPC page *)

+

IF (DS:RCX does not resolve within an EPC)THEN

+

#PF(DS:RCX, PFEC.SGX); FI;

+

(* check alignment of the CONTEXT field (RDX) *)

+

IF (DS:RDX is not 8Byte Aligned) THEN

+

#GP(0); FI;

+

(* Load CONTEXT into local variable *)

+

TMP_CONTEXT := DS:RDX

+

(* Check the EPC page for concurrency *)

+

IF (EPC page is being modified) THEN

+

RFLAGS.ZF := 1;

+

RFLAGS.CF := 0;

+

RAX := SGX_EPC_PAGE_CONFLICT;

+

goto DONE;

+

FI;

+

(* check page validity *)

+

IF (EPCM(DS:RCX).VALID = 0) THEN

+

#PF(DS:RCX, PFEC.SGX);

+

FI;

+

(* check EPC page is an SECS page *)

+

IF (EPCM(DS:RCX).PT is not PT_SECS) THEN

+

#PF(DS:RCX, PFEC.SGX);

+

FI;

+

(* load the context value into SECS(DS:RCX).ENCLAVECONTEXT *)

+

SECS(DS:RCX).ENCLAVECONTEXT := TMP_CONTEXT;

+

RAX := 0;

+

RFLAGS.ZF := 0;

+

DONE:

+

(* clear flags *)

+

RFLAGS.CF,PF,AF,OF,SF := 0;

+

Flags Affected + ¶ +

+

ZF is set if ESETCONTEXT fails due to concurrent operation with another SGX instruction; otherwise cleared.

+

CF, PF, AF, OF, and SF are cleared.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the DS segment limit.
If DS segment is unusable.
If a memory operand is not properly aligned.
#PF(errorcode) If a page fault occurs in accessing memory operands.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + +
#GP(0)If a memory address is in a non-canonical form.
If a memory operand is not properly aligned.
#PF(errorcode) If a page fault occurs in accessing memory operands.
diff --git a/x86/etrack.html b/x86/etrack.html new file mode 100644 index 0000000..f679802 --- /dev/null +++ b/x86/etrack.html @@ -0,0 +1,197 @@ + +ETRACK + — Activates EBLOCK Checks

ETRACK + — Activates EBLOCK Checks

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 0CH ENCLS[ETRACK]IRV/VSGX1This leaf function activates EBLOCK checks.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + +
Op/EnEAXRCX
IRETRACK (In)Return error code (Out)Pointer to the SECS of the EPC page (In)
+

Description + ¶ +

+

This leaf function provides the mechanism for hardware to track that software has completed the required TLB address clears successfully. The instruction can only be executed when the current privilege level is 0.

+

The content of RCX is an effective address of an EPC page.

+

The table below provides additional information on the memory parameter of ETRACK leaf function.

+

ETRACK Memory Parameter Semantics + ¶ +

+ + + + +
EPCPAGE
Read/Write access permitted by Enclave
+

The error codes are:

+
+ + + + + + + + + +
Error Code (see Table 38-4)Description
No ErrorETRACK successful.
SGX_PREV_TRK_INCMPLAll processors did not complete the previous shoot-down sequence.
+
Table 38-45. ETRACK Return Value in RAX
+
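For orientation, ETRACK is normally issued as part of the EPC paging flow: the OS first EBLOCKs the pages it intends to evict, then executes ETRACK on the owning SECS, then forces every logical processor that may be executing in the enclave to exit (flushing stale TLB entries), and only then issues EWB. The wrapper below is an informal sketch (not SDM text) assuming only the encoding above (EAX = 0CH, RCX = SECS address).

/* Hedged sketch in C (GCC inline asm); CPL 0 only. */
#include <stdint.h>

static inline uint64_t encls_etrack(void *secs_page)    /* 4KB aligned */
{
    uint64_t rax = 0x0c;                      /* ETRACK leaf number */
    asm volatile(".byte 0x0f, 0x01, 0xcf"     /* ENCLS */
                 : "+a"(rax)
                 : "c"(secs_page)
                 : "memory", "cc");
    return rax;   /* 0, or SGX_PREV_TRK_INCMPL if the previous cycle is unfinished */
}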

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn Conflict
ETRACKSECS [DS:RCX]Shared
+
Table 38-46. Base Concurrency Restrictions of ETRACK
+
+ + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
ETRACKSECS [DS:RCX]ConcurrentConcurrentExclusiveSGX_EPC_PAGE _CONFLICT
+
Table 38-47. Additional Concurrency Restrictions of ETRACK
+

Operation + ¶ +

+
IF (DS:RCX is not 4KByte Aligned)
+    THEN #GP(0); FI;
+IF (DS:RCX does not resolve within an EPC)
+    THEN #PF(DS:RCX); FI;
+(* Check concurrency with other Intel SGX instructions *)
+IF (Other Intel SGX instructions using tracking facility on this SECS)
+    THEN
+        IF (<<VMX non-root operation>> AND <<ENABLE_EPC_VIRTUALIZATION_EXTENSIONS>>)
+            THEN
+                VMCS.Exit_reason := SGX_CONFLICT;
+                VMCS.Exit_qualification.code := TRACKING_RESOURCE_CONFLICT;
+                VMCS.Exit_qualification.error := 0;
+                VMCS.Guest-physical_address := SECS(TMP_SECS).ENCLAVECONTEXT;
+                VMCS.Guest-linear_address := 0;
+            Deliver VMEXIT;
+            ELSE
+                #GP(0);
+        FI;
+FI;
+IF (EPCM(DS:RCX).VALID = 0)
+    THEN #PF(DS:RCX); FI;
+IF (EPCM(DS:RCX).PT ≠ PT_SECS)
+    THEN #PF(DS:RCX); FI;
+(* All processors must have completed the previous tracking cycle*)
+IF ((DS:RCX).TRACKING ≠ 0)
+    THEN
+        IF (<<VMX non-root operation>> AND <<ENABLE_EPC_VIRTUALIZATION_EXTENSIONS>>)
+            THEN
+                VMCS.Exit_reason := SGX_CONFLICT;
+                VMCS.Exit_qualification.code := TRACKING_REFERENCE_CONFLICT;
+                VMCS.Exit_qualification.error := 0;
+                VMCS.Guest-physical_address := SECS(TMP_SECS).ENCLAVECONTEXT;
+                VMCS.Guest-linear_address := 0;
+            Deliver VMEXIT;
+        FI;
+    RFLAGS.ZF := 1;
+        RAX := SGX_PREV_TRK_INCMPL;
+        GOTO DONE;
+    ELSE
+        RAX := 0;
+        RFLAGS.ZF := 0;
+FI;
+DONE:
+RFLAGS.CF,PF,AF,OF,SF := 0;
+
+

Flags Affected + ¶ +

+

Sets ZF if SECS is in use or invalid, otherwise cleared. Clears CF, PF, AF, OF, SF.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the DS segment limit.
If a memory operand is not properly aligned.
If another thread is concurrently using the tracking facility on this SECS.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If a memory operand is not an EPC page.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If a memory operand is non-canonical form.
If a memory operand is not properly aligned.
If the specified EPC resource is in use.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If a memory operand is not an EPC page.
diff --git a/x86/etrackc.html b/x86/etrackc.html new file mode 100644 index 0000000..6076fac --- /dev/null +++ b/x86/etrackc.html @@ -0,0 +1,264 @@ + +ETRACKC + — Activates EBLOCK Checks

ETRACKC + — Activates EBLOCK Checks

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 11H ENCLS[ETRACKC]IRV/VEAX[6]This leaf function activates EBLOCK checks.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnEAXRCX
IRETRACKC (In)Return error code (Out)Address of the destination EPC page (In, EA)Address of the SECS page (In, EA)
+

Description + ¶ +

+

The ETRACKC instruction is a thread-safe variant of the ETRACK leaf and can be executed concurrently with other CPU threads operating on the same SECS.

+

This leaf function provides the mechanism for hardware to track that software has completed the required TLB address clears successfully. The instruction can only be executed when the current privilege level is 0.

+

The content of RCX is an effective address of an EPC page.

+

The table below provides additional information on the memory parameter of the ETRACKC leaf function.

+

ETRACKC Memory Parameter Semantics + ¶ +

+ + + + +
EPCPAGE
Read/Write access permitted by Enclave
+

The error codes are:

+
+ + + + + + + + + + + + + + + + + + + + + + + + +
Error CodeValueDescription
No Error0ETRACKC successful.
SGX_EPC_PAGE_CONFLICT7Failure due to concurrent operation of another SGX instruction.
SGX_PG_INVLD6Target page is not a VALID EPC page.
SGX_PREV_TRK_INCMPL17All processors did not complete the previous tracking sequence.
SGX_TRACK_NOT_REQUIRED27Target page type does not require tracking.
+
Table 38-48. ETRACKC Return Value in RAX
+
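Because ETRACKC tolerates concurrent callers, software commonly just retries on the transient conflict code above. The loop below is an informal sketch (not SDM text) assuming only the encoding above (EAX = 11H, RCX = EPC page address); the error-code values are taken from Table 38-48, and the retry policy is an illustrative assumption.

/* Hedged sketch in C (GCC inline asm); CPL 0 only. */
#include <stdint.h>

#define SGX_EPC_PAGE_CONFLICT  7    /* value from Table 38-48 */

static inline uint64_t encls_etrackc(void *epc_page)    /* 4KB aligned */
{
    uint64_t rax = 0x11;                      /* ETRACKC leaf number */
    asm volatile(".byte 0x0f, 0x01, 0xcf"     /* ENCLS */
                 : "+a"(rax)
                 : "c"(epc_page)
                 : "memory", "cc");
    return rax;
}

static uint64_t track_page(void *epc_page)
{
    uint64_t err;
    do {
        err = encls_etrackc(epc_page);        /* another thread may hold the page */
    } while (err == SGX_EPC_PAGE_CONFLICT);
    return err;   /* 0, SGX_PG_INVLD, SGX_PREV_TRK_INCMPL, or SGX_TRACK_NOT_REQUIRED */
}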

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn ConflictSGX_CONFLICT VM Exit Qualification
ETRACKCTarget [DS:RCX]SharedSGX_EPC_PAGE_ CONFLICT
SECS implicitConcurrent
+
Table 38-49. Base Concurrency Restrictions of ETRACKC
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
ETRACKCTarget [DS:RCX]ConcurrentConcurrentConcurrent
SECS implicitConcurrentConcurrentExclusiveSGX_EPC_PAGE _CONFLICT
+
Table 38-50. Additional Concurrency Restrictions of ETRACKC
+

Operation + ¶ +

+

Temp Variables in ETRACKC Operational Flow + ¶ +

+ + + + + + + + + + +
NameTypeSize (Bits)Description
TMP_SECSPhysical Address64Physical address of the SECS of the page being modified.
+

(* check alignment of EPCPAGE (RCX) *)

+

IF (DS:RCX is not 4KByte Aligned) THEN

+

#GP(0); FI;

+

(* check that EPCPAGE (DS:RCX) is the address of an EPC page *)

+

IF (DS:RCX does not resolve within an EPC) THEN

+

#PF(DS:RCX, PFEC.SGX); FI;

+

(* Check the EPC page for concurrency *)

+

IF (EPC page is being modified) THEN

+

RFLAGS.ZF := 1;

+

RFLAGS.CF := 0;

+

RAX := SGX_EPC_PAGE_CONFLICT;

+

goto DONE_POST_LOCK_RELEASE;

+

FI;

+

(* check to make sure the page is valid *)

+

IF (EPCM(DS:RCX).VALID = 0) THEN

+

RFLAGS.ZF := 1;

+

RFLAGS.CF := 0;

+

RAX := SGX_PG_INVLD;

+

GOTO DONE;

+

FI;

+

(* find out the target SECS page *)

+

IF (EPCM(DS:RCX).PT is PT_REG or PT_TCS or PT_TRIM or PT_SS_FIRST or PT_SS_REST) THEN

+

TMP_SECS := Obtain SECS through EPCM(DS:RCX).ENCLAVESECS;

+

ELSE IF (EPCM(DS:RCX).PT is PT_SECS) THEN

+

TMP_SECS := Obtain SECS through (DS:RCX);

+

ELSE

+

RFLAGS.ZF := 0;

+

RFLAGS.CF := 1;

+

RAX := SGX_TRACK_NOT_REQUIRED;

+

GOTO DONE;

+

FI;

+

(* Check concurrency with other Intel SGX instructions *)

+

IF (Other Intel SGX instructions using tracking facility on this SECS) THEN

+

IF ((VMX non-root mode) and

+

(ENABLE_EPC_VIRTUALIZATION_EXTENSIONS Execution Control = 1)) THEN

+

VMCS.Exit_reason := SGX_CONFLICT;

+

VMCS.Exit_qualification.code := TRACKING_RESOURCE_CONFLICT;

+

VMCS.Exit_qualification.error := 0;

+

VMCS.Guest-physical_address :=

+

SECS(TMP_SECS).ENCLAVECONTEXT;

+

VMCS.Guest-linear_address := 0;

+

Deliver VMEXIT;

+

FI;

+

RFLAGS.ZF := 1;

+

RFLAGS.CF := 0;

+

RAX := SGX_EPC_PAGE_CONFLICT;

+

GOTO DONE;

+

FI;

+

(* All processors must have completed the previous tracking cycle*)

+

IF ((TMP_SECS).TRACKING ≠ 0)

+

THEN

+

IF ((VMX non-root mode) and

+

(ENABLE_EPC_VIRTUALIZATION_EXTENSIONS Execution Control = 1)) THEN

+

VMCS.Exit_reason := SGX_CONFLICT;

+

VMCS.Exit_qualification.code := TRACKING_REFERENCE_CONFLICT;

+

VMCS.Exit_qualification.error := 0;

+

VMCS.Guest-physical_address :=

+

SECS(TMP_SECS).ENCLAVECONTEXT;

+

VMCS.Guest-linear_address := 0;

+

Deliver VMEXIT;

+

FI;

+

RFLAGS.ZF := 1;

+

RFLAGS.CF := 0;

+

RAX := SGX_PREV_TRK_INCMPL;

+

GOTO DONE;

+

FI;

+

RFLAGS.ZF := 0;

+

RFLAGS.CF := 0;

+

RAX := 0;

+

DONE:

+

(* clear flags *)

+

RFLAGS.PF,AF,OF,SF := 0;

+

Flags Affected + ¶ +

+

ZF is set if ETRACKC fails due to a concurrent operation with another SGX instruction, the target page is an invalid EPC page, or tracking is not completed on the SECS page; otherwise cleared.

+

CF is set if target page is not of a type that requires tracking; otherwise cleared.

+

PF, AF, OF, and SF are cleared.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If the memory operand violates access-control policies of DS segment.
If DS segment is unusable.
If the memory operand is not properly aligned.
#PF(errorcode) If the memory operand expected to be in EPC does not resolve to an EPC page.
If a page fault occurs in accessing a memory operand.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + +
#GP(0)If a memory address is in a non-canonical form.
If a memory operand is not properly aligned.
#PF(errorcode) If the memory operand expected to be in EPC does not resolve to an EPC page.
If a page fault occurs in accessing a memory operand.
diff --git a/x86/ewb.html b/x86/ewb.html new file mode 100644 index 0000000..e1c37ab --- /dev/null +++ b/x86/ewb.html @@ -0,0 +1,388 @@ + +EWB + — Invalidate an EPC Page and Write out to Main Memory

EWB + — Invalidate an EPC Page and Write out to Main Memory

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EAX = 0BH ENCLS[EWB]IRV/VSGX1This leaf function invalidates an EPC page and writes it out to main memory.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + +
Op/En EAXRBXRCXRDX
IREWB (In)Error code (Out)Address of a PAGEINFO (In)Address of the EPC page (In)Address of a VA slot (In)
+

Description + ¶ +

+

This leaf function copies a page from the EPC to regular main memory. As part of the copying process, the page is cryptographically protected. This instruction can only be executed when current privilege level is 0.

+

The table below provides additional information on the memory parameter of the EWB leaf function.

+

EWB Memory Parameter Semantics + ¶ +

+ + + + + + + + + + + + +
PAGEINFOPAGEINFO.SRCPGEPAGEINFO.PCMDEPCPAGEVASLOT
Non-EPC R/W accessNon-EPC R/W accessNon-EPC R/W accessEPC R/W accessEPC R/W access
+

The error codes are:

+
+ + + + + + + + + + + + + + + + + + +
Error Code (see Table 38-4)Description
No ErrorEWB successful.
SGX_PAGE_NOT_BLOCKEDIf page is not marked as blocked.
SGX_NOT_TRACKEDIf EWB is racing with ETRACK instruction.
SGX_VA_SLOT_OCCUPIEDVersion array slot contained valid entry.
SGX_CHILD_PRESENTChild page present while attempting to page out enclave.
+
Table 38-51. EWB Return Value in RAX
+

Concurrency Restrictions + ¶ +

+
+ + + + + + + + + + + + + + + + + + + +
LeafParameterBase Concurrency Restrictions
AccessOn ConflictSGX_CONFLICT VM Exit Qualification
EWBSource [DS:RCX]Exclusive#GPEPC_PAGE_CONFLICT_EXCEPTION
VA [DS:RDX]Shared#GP
+
Table 38-52. Base Concurrency Restrictions of EWB
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
LeafParameterAdditional Concurrency Restrictions
vs. EACCEPT, EACCEPTCOPY, EMODPE, EMODPR, EMODTvs. EADD, EEXTEND, EINITvs. ETRACK, ETRACKC
AccessOn ConflictAccessOn ConflictAccessOn Conflict
EWBSource [DS:RCX]ConcurrentConcurrentConcurrent
VA [DS:RDX]ConcurrentConcurrentExclusive
+
Table 38-53. Additional Concurrency Restrictions of EWB
+

Operation + ¶ +

+

Temp Variables in EWB Operational Flow + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameTypeSize (Bytes)Description
TMP_SRCPGEMemory page4096
TMP_PCMDPCMD128
TMP_SECSSECS4096
TMP_BPEPOCHUINT648
TMP_BPREFCOUNTUINT648
TMP_HEADERMAC Header128
TMP_PCMD_ENCLAVEIDUINT648
TMP_VERUINT648
TMP_PKUINT12816
+

IF ( (DS:RBX is not 32Byte Aligned) or (DS:RCX is not 4KByte Aligned) )

+

THEN #GP(0); FI;

+

IF (DS:RCX does not resolve within an EPC)

+

THEN #PF(DS:RCX); FI;

+

IF (DS:RDX is not 8Byte Aligned)

+

THEN #GP(0); FI;

+

IF (DS:RDX does not resolve within an EPC)

+

THEN #PF(DS:RDX); FI;

+

(* EPCPAGE and VASLOT should not resolve to the same EPC page*)

+

IF (DS:RCX and DS:RDX resolve to the same EPC page)

+

THEN #GP(0); FI;

+

TMP_SRCPGE := DS:RBX.SRCPGE;

+

(* Note PAGEINFO.PCMD is overlaid on top of PAGEINFO.SECINFO *)

+

TMP_PCMD := DS:RBX.PCMD;

+

If (DS:RBX.LINADDR ≠ 0) OR (DS:RBX.SECS ≠ 0)

+

THEN #GP(0); FI;

+

IF ( (DS:TMP_PCMD is not 128Byte Aligned) or (DS:TMP_SRCPGE is not 4KByte Aligned) )

+

THEN #GP(0); FI;

+

(* Check for concurrent Intel SGX instruction access to the page *)

+

IF (Other Intel SGX instruction is accessing page)

+

THEN

+

IF (<<VMX non-root operation>> AND <<ENABLE_EPC_VIRTUALIZATION_EXTENSIONS>>)

+

THEN

+

VMCS.Exit_reason := SGX_CONFLICT;

+

VMCS.Exit_qualification.code := EPC_PAGE_CONFLICT_EXCEPTION;

+

VMCS.Exit_qualification.error := 0;

+

VMCS.Guest-physical_address := << translation of DS:RCX produced by paging >>;

+

VMCS.Guest-linear_address := DS:RCX;

+

Deliver VMEXIT;

+

ELSE

+

#GP(0);

+

FI;

+

FI;

+

(*Check if the VA Page is being removed or changed*)

+

IF (VA Page is being modified)

+

THEN #GP(0); FI;

+

(* Verify that EPCPAGE and VASLOT page are valid EPC pages and DS:RDX is VA *)

+

IF (EPCM(DS:RCX).VALID = 0)

+

THEN #PF(DS:RCX); FI;

+

IF ( (EPCM(DS:RDX & ~0FFFH).VALID = 0) or (EPCM(DS:RDX & ~0FFFH).PT is not PT_VA) )

+

THEN #PF(DS:RDX); FI;

+

(* Perform page-type-specific exception checks *)

+

IF ( (EPCM(DS:RCX).PT is PT_REG) or (EPCM(DS:RCX).PT is PT_TCS) or (EPCM(DS:RCX).PT is PT_TRIM ) or

+

(EPCM(DS:RCX).PT is PT_SS_FIRST ) or (EPCM(DS:RCX).PT is PT_SS_REST))

+

THEN

+

TMP_SECS := Obtain SECS through EPCM(DS:RCX);

+

(* Check that EBLOCK has occurred correctly *)

+

IF (EBLOCK is not correct)

+

THEN #GP(0); FI;

+

FI;

+

RFLAGS.ZF,CF,PF,AF,OF,SF := 0;

+

RAX := 0;

+

(* Zero out TMP_HEADER*)

+

TMP_HEADER[ sizeof(TMP_HEADER) - 1 : 0] := 0;

+

(* Perform page-type-specific checks *)

+

IF ( (EPCM(DS:RCX).PT is PT_REG) or (EPCM(DS:RCX).PT is PT_TCS) or (EPCM(DS:RCX).PT is PT_TRIM )or

+

(EPCM(DS:RCX).PT is PT_SS_FIRST ) or (EPCM(DS:RCX).PT is PT_SS_REST))

+

THEN

+

(* check to see if the page is evictable *)

+

IF (EPCM(DS:RCX).BLOCKED = 0)

+

THEN

+

RAX := SGX_PAGE_NOT_BLOCKED;

+

RFLAGS.ZF := 1;

+

GOTO ERROR_EXIT;

+

FI;

+

(* Check if tracking done correctly *)

+

IF (Tracking not correct)

+

THEN

+

RAX := SGX_NOT_TRACKED;

+

RFLAGS.ZF := 1;

+

GOTO ERROR_EXIT;

+

FI;

+

(* Obtain EID to establish cryptographic binding between the paged-out page and the enclave *)

+

TMP_HEADER.EID := TMP_SECS.EID;

+

(* Obtain EID as an enclave handle for software *)

+

TMP_PCMD_ENCLAVEID := TMP_SECS.EID;

+

ELSE IF (EPCM(DS:RCX).PT is PT_SECS)

+

(*check that there are no child pages inside the enclave *)

+

IF (DS:RCX has an EPC page associated with it)

+

THEN

+

RAX := SGX_CHILD_PRESENT;

+

RFLAGS.ZF := 1;

+

GOTO ERROR_EXIT;

+

FI;

+

(* treat SECS as having a child page when VIRTCHILDCNT is non-zero *)

+

IF (<<in VMX non-root operation>> AND

+

<<ENABLE_EPC_VIRTUALIZATION_EXTENSIONS>> AND

+

(SECS(DS:RCX).VIRTCHILDCNT ≠ 0))

+

THEN

+

RFLAGS.ZF := 1;

+

RAX := SGX_CHILD_PRESENT;

+

GOTO ERROR_EXIT;

+

FI;

+

TMP_HEADER.EID := 0;

+

(* Obtain EID as an enclave handle for software *)

+

TMP_PCMD_ENCLAVEID := (DS:RCX).EID;

+

ELSE IF (EPCM(DS:RCX).PT is PT_VA)

+

TMP_HEADER.EID := 0; // Zero is not a special value

+

(* No enclave handle for VA pages*)

+

TMP_PCMD_ENCLAVEID := 0;

+

FI;

+

TMP_HEADER.LINADDR := EPCM(DS:RCX).ENCLAVEADDRESS;

+

TMP_HEADER.SECINFO.FLAGS.PT := EPCM(DS:RCX).PT;

+

TMP_HEADER.SECINFO.FLAGS.RWX := EPCM(DS:RCX).RWX;

+

TMP_HEADER.SECINFO.FLAGS.PENDING := EPCM(DS:RCX).PENDING;

+

TMP_HEADER.SECINFO.FLAGS.MODIFIED := EPCM(DS:RCX).MODIFIED;

+

TMP_HEADER.SECINFO.FLAGS.PR := EPCM(DS:RCX).PR;

+

(* Encrypt the page, DS:RCX could be encrypted in place. AES-GCM produces 2 values, {ciphertext, MAC}. *)

+

(* AES-GCM input parameters: key, GCM Counter, MAC_HDR, MAC_HDR_SIZE, SRC, SRC_SIZE)*)

+

{DS:TMP_SRCPGE, DS:TMP_PCMD.MAC} := AES_GCM_ENC(CR_BASE_PK, (TMP_VER << 32),

+

TMP_HEADER, 128, DS:RCX, 4096);

+

(* Write the output *)

+

Zero out DS:TMP_PCMD.SECINFO

+

DS:TMP_PCMD.SECINFO.FLAGS.PT := EPCM(DS:RCX).PT;

+

DS:TMP_PCMD.SECINFO.FLAGS.RWX := EPCM(DS:RCX).RWX;

+

DS:TMP_PCMD.SECINFO.FLAGS.PENDING := EPCM(DS:RCX).PENDING;

+

DS:TMP_PCMD.SECINFO.FLAGS.MODIFIED := EPCM(DS:RCX).MODIFIED;

+

DS:TMP_PCMD.SECINFO.FLAGS.PR := EPCM(DS:RCX).PR;

+

DS:TMP_PCMD.RESERVED := 0;

+

DS:TMP_PCMD.ENCLAVEID := TMP_PCMD_ENCLAVEID;

+

DS:RBX.LINADDR := EPCM(DS:RCX).ENCLAVEADDRESS;

+

(*Check if version array slot was empty *)

+

IF ([DS:RDX] ≠ 0)

+

THEN

+

RAX := SGX_VA_SLOT_OCCUPIED;

+

RFLAGS.CF := 1;

+

FI;

+

(* Write version to Version Array slot *)

+

[DS:RDX] := TMP_VER;

+

(* Free up EPCM Entry *)

+

EPCM(DS:RCX).VALID := 0;

+

ERROR_EXIT:

+

Flags Affected + ¶ +

+

ZF is set if the page is not blocked, not tracked, or a child page is present; otherwise cleared.

+

CF is set if the VA slot was previously occupied; otherwise cleared.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the DS segment limit.
If a memory operand is not properly aligned.
If the EPC page and VASLOT resolve to the same EPC page.
If another Intel SGX instruction is concurrently accessing either the target EPC, VA, or SECS pages.
If the tracking resource is in use.
If the EPC page or the version array page is invalid.
If the parameters fail consistency checks.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If a memory operand is not an EPC page.
If one of the EPC memory operands has incorrect page type.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand is in non-canonical form.
If a memory operand is not properly aligned.
If the EPC page and VASLOT resolve to the same EPC page.
If another Intel SGX instruction is concurrently accessing either the target EPC, VA, or SECS pages.
If the tracking resource is in use.
If the EPC page or the version array page is invalid.
If the parameters fail consistency checks.
#PF(errorcode) If a page fault occurs in accessing memory operands.
If a memory operand is not an EPC page.
If one of the EPC memory operands has incorrect page type.
diff --git a/x86/exitac.html b/x86/exitac.html new file mode 100644 index 0000000..0f5ddb8 --- /dev/null +++ b/x86/exitac.html @@ -0,0 +1,149 @@ + +GETSEC[EXITAC] + — Exit Authenticated Code Execution Mode

GETSEC[EXITAC] + — Exit Authenticated Code Execution Mode

+ + + + + + + + + +
OpcodeInstructionDescription
NP 0F 37 (EAX=3)GETSEC[EXITAC]Exit authenticated code execution mode. RBX holds the Near Absolute Indirect jump target and EDX holds the exit parameter flags.
+

Description + ¶ +

+

The GETSEC[EXITAC] leaf function exits the ILP out of authenticated code execution mode established by GETSEC[ENTERACCS] or GETSEC[SENTER]. The EXITAC leaf of GETSEC is selected with EAX set to 3 at entry. EBX (or RBX, if in 64-bit mode) holds the near jump target offset for where the processor execution resumes upon exiting authenticated code execution mode. EDX contains additional parameter control information. Currently only an input value of 0 in EDX is supported. All other EDX settings are considered reserved and result in a general protection violation.

+

GETSEC[EXITAC] can only be executed if the processor is in protected mode with CPL = 0 and EFLAGS.VM = 0. The processor must also be in authenticated code execution mode. To avoid potential operability conflicts between modes, the processor is not allowed to execute this instruction if it is in SMM or in VMX operation. A violation of these conditions results in a general protection violation.

+

Upon completion of the GETSEC[EXITAC] operation, the processor unmasks responses to external event signals INIT#, NMI#, and SMI#. This unmasking is performed conditionally, based on whether the authenticated code execution mode was entered via execution of GETSEC[SENTER] or GETSEC[ENTERACCS]. If the processor is in authenticated code execution mode due to the execution of GETSEC[SENTER], then these external event signals will remain masked. In this case, A20M is kept disabled in the measured environment until the measured environment executes GETSEC[SEXIT]. INIT# is unconditionally unmasked by EXITAC. Note that any events that are pending, but have been blocked while in authenticated code execution mode, will be recognized at the completion of the GETSEC[EXITAC] instruction if the pin event is unmasked.

+

The intent of providing the ability to optionally leave the pin events SMI#, and NMI# masked is to support the completion of a measured environment bring-up that makes use of VMX. In this envisioned security usage scenario, these events will remain masked until an appropriate virtual machine has been established in order to field servicing of these events in a safer manner. Details on when and how events are masked and unmasked in VMX operation are described in Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3C. It should be cautioned that if no VMX environment is to be activated following GETSEC[EXITAC], that these events will remain masked until the measured environment is exited with GETSEC[SEXIT]. If this is not desired then the GETSEC function SMCTRL(0) can be used for unmasking SMI# in this context. NMI# can be correspondingly unmasked by execution of IRET.

+

A successful exit of the authenticated code execution mode requires the ILP to perform additional steps as outlined below:

+
    +
  • Invalidate the contents of the internal authenticated code execution area.
  • +
  • Invalidate processor TLBs.
  • +
  • Clear the internal processor AC Mode indicator flag.
  • +
  • Re-lock the TPM locality 3 space.
  • +
  • Unlock the Intel® TXT-capable chipset memory and I/O protections to allow memory and I/O activity by other processor agents.
  • +
  • Perform a near absolute indirect jump to the designated instruction location.
+

The content of the authenticated code execution area is invalidated by hardware in order to protect it from further use or visibility. This internal processor storage area can no longer be used or relied upon after GETSEC[EXITAC]. Data structures need to be re-established outside of the authenticated code execution area if they are to be referenced after EXITAC. Since addressed memory content formerly mapped to the authenticated code execution area may no longer be coherent with external system memory after EXITAC, processor TLBs in support of linear to physical address translation are also invalidated.

+

Upon completion of GETSEC[EXITAC] a near absolute indirect transfer is performed with EIP loaded with the contents of EBX (based on the current operating mode size). In 64-bit mode, all 64 bits of RBX are loaded into RIP if REX.W precedes GETSEC[EXITAC]. Otherwise RBX is treated as 32 bits even while in 64-bit mode. Conventional CS limit checking is performed as part of this control transfer. Any exception conditions generated as part of this control transfer will be directed to the existing IDT; thus it is recommended that an IDTR should also be established prior to execution of the EXITAC function if there is a need for fault handling. In addition, any segmentation related (and paging) data structures to be used after EXITAC should be re-established or validated by the authenticated code prior to EXITAC.

+

In addition, any segmentation related (and paging) data structures to be used after EXITAC need to be re-established and mapped outside of the authenticated RAM designated area by the authenticated code prior to EXITAC. Any data structure held within the authenticated RAM allocated area will no longer be accessible after completion by EXITAC.

+

Operation + ¶ +

+
(* The state of the internal flag ACMODEFLAG and SENTERFLAG persist across instruction boundary *)
+IF (CR4.SMXE=0)
+    THEN #UD;
+ELSIF ( in VMX non-root operation)
+    THEN VM Exit (reason=”GETSEC instruction”);
+ELSIF (GETSEC leaf unsupported)
+    THEN #UD;
+ELSIF ((in VMX operation) or ((in 64-bit mode) and (RBX is non-canonical)) or
+    (CR0.PE=0) or (CPL>0) or (EFLAGS.VM=1) or
+    (ACMODEFLAG=0) or (IN_SMM=1) or (EDX ≠ 0))
+    THEN #GP(0);
+IF (OperandSize = 32)
+    THEN tempEIP := EBX;
+ELSIF (OperandSize = 64)
+    THEN tempEIP := RBX;
+ELSE
+    tempEIP := EBX AND 0000FFFFH;
+IF (tempEIP > code segment limit)
+    THEN #GP(0);
+Invalidate ACRAM contents;
+Invalidate processor TLB(s);
+Drain outgoing messages;
+SignalTXTMsg(CloseLocality3);
+SignalTXTMsg(LockSMRAM);
+SignalTXTMsg(ProcessorRelease);
+Unmask INIT;
+IF (SENTERFLAG=0)
+    THEN Unmask SMI, INIT, NMI, and A20M pin event;
+ELSEIF (IA32_SMM_MONITOR_CTL[0] = 0)
+    THEN Unmask SMI pin event;
+ACMODEFLAG := 0;
+IF IA32_EFER.LMA == 1
+    THEN CR3 := R8;
+EIP := tempEIP;
+END;
+
+

Flags Affected + ¶ +

+

None.

+

Use of Prefixes + ¶ +

+

LOCK Causes #UD.

+

REP* Cause #UD (includes REPNE/REPNZ and REP/REPE/REPZ).

+

Operand size Causes #UD.

+

NP 66/F2/F3 prefixes are not allowed.

+

Segment overrides Ignored.

+

Address size Ignored.

+

REX.W Sets 64-bit mode Operand size attribute.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#UDIf CR4.SMXE = 0.
If GETSEC[EXITAC] is not reported as supported by GETSEC[CAPABILITIES].
#GP(0)If CR0.PE = 0 or CPL>0 or EFLAGS.VM =1.
If in VMX root operation.
If the processor is not currently in authenticated code execution mode.
If the processor is in SMM.
If any reserved bit position is set in the EDX parameter register.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + +
#UDIf CR4.SMXE = 0.
If GETSEC[EXITAC] is not reported as supported by GETSEC[CAPABILITIES].
#GP(0)GETSEC[EXITAC] is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + +
#UDIf CR4.SMXE = 0.
If GETSEC[EXITAC] is not reported as supported by GETSEC[CAPABILITIES].
#GP(0)GETSEC[EXITAC] is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

All protected mode exceptions apply.

+

64-Bit Mode Exceptions + ¶ +

+

All protected mode exceptions apply.

+ + + +
#GP(0)If the target address in RBX is not in a canonical form.
+

VM-Exit Condition + ¶ +

+

Reason (GETSEC) If in VMX non-root operation.

diff --git a/x86/extractps.html b/x86/extractps.html new file mode 100644 index 0000000..ad6c7f8 --- /dev/null +++ b/x86/extractps.html @@ -0,0 +1,117 @@ + +EXTRACTPS + — Extract Packed Floating-Point Values

EXTRACTPS + — Extract Packed Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 3A 17 /r ib EXTRACTPS reg/m32, xmm1, imm8AV/VSSE4_1Extract one single precision floating-point value from xmm1 at the offset specified by imm8 and store the result in reg or m32. Zero extend the results in 64-bit register if applicable.
VEX.128.66.0F3A.WIG 17 /r ib VEXTRACTPS reg/m32, xmm1, imm8AV/VAVXExtract one single precision floating-point value from xmm1 at the offset specified by imm8 and store the result in reg or m32. Zero extend the results in 64-bit register if applicable.
EVEX.128.66.0F3A.WIG 17 /r ib VEXTRACTPS reg/m32, xmm1, imm8BV/VAVX512FExtract one single precision floating-point value from xmm1 at the offset specified by imm8 and store the result in reg or m32. Zero extend the results in 64-bit register if applicable.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:r/m (w)ModRM:reg (r)imm8N/A
BTuple1 ScalarModRM:r/m (w)ModRM:reg (r)imm8N/A
+

Description + ¶ +

+

Extracts a single precision floating-point value from the source operand (second operand) at the 32-bit offset specified from imm8. Immediate bits higher than the most significant offset for the vector length are ignored.

+

The extracted single precision floating-point value is stored in the low 32-bits of the destination operand

+

In 64-bit mode, destination register operand has default operand size of 64 bits. The upper 32-bits of the register are filled with zero. REX.W is ignored.

+

VEX.128 and EVEX encoded version: When VEX.W1 or EVEX.W1 form is used in 64-bit mode with a general purpose register (GPR) as a destination operand, the packed single quantity is zero extended to 64 bits.

+

VEX.vvvv/EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

128-bit Legacy SSE version: When a REX.W prefix is used in 64-bit mode with a general purpose register (GPR) as a destination operand, the packed single quantity is zero extended to 64 bits.

+

The source register is an XMM register. Imm8[1:0] determine the starting DWORD offset from which to extract the 32-bit floating-point value.

+

An attempt to execute VEXTRACTPS encoded with VEX.L = 1 will cause a #UD exception.

+

Operation + ¶ +

+

VEXTRACTPS (EVEX and VEX.128 Encoded Version) + ¶ +

+
SRC_OFFSET := IMM8[1:0]
+IF (64-Bit Mode and DEST is register)
+    DEST[31:0] := (SRC[127:0] >> (SRC_OFFSET*32)) AND 0FFFFFFFFh
+    DEST[63:32] := 0
+ELSE
+    DEST[31:0] := (SRC[127:0] >> (SRC_OFFSET*32)) AND 0FFFFFFFFh
+FI
+
+

EXTRACTPS (128-bit Legacy SSE Version) + ¶ +

+
SRC_OFFSET := IMM8[1:0]
+IF (64-Bit Mode and DEST is register)
+    DEST[31:0] := (SRC[127:0] >> (SRC_OFFSET*32)) AND 0FFFFFFFFh
+    DEST[63:32] := 0
+ELSE
+    DEST[31:0] := (SRC[127:0] >> (SRC_OFFSET*32)) AND 0FFFFFFFFh
+FI
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
EXTRACTPS int _mm_extract_ps (__m128 a, const int nidx);
+
+
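A minimal usage sketch of the intrinsic above (GCC/Clang/ICC, compiled with SSE4.1 enabled, e.g. -msse4.1). Note that _mm_extract_ps returns the selected element as a 32-bit integer bit pattern, so the float has to be reinterpreted:

#include <smmintrin.h>   /* SSE4.1 intrinsics */
#include <stdio.h>
#include <string.h>

int main(void) {
    __m128 v = _mm_setr_ps(1.0f, 2.0f, 3.0f, 4.0f);
    int bits = _mm_extract_ps(v, 2);      /* imm8[1:0] = 2 selects the third dword (3.0f) */
    float f;
    memcpy(&f, &bits, sizeof f);          /* reinterpret the returned bit pattern */
    printf("%f\n", f);                    /* prints 3.000000 */
    return 0;
}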

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-22, “Type 5 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-57, “Type E9NF Class Exception Conditions.”

+

Additionally:

+ + + + + + +
#UDIf VEX.L = 1.
#UDIf VEX.vvvv != 1111B or EVEX.vvvv != 1111B.
diff --git a/x86/f2xm1.html b/x86/f2xm1.html new file mode 100644 index 0000000..7125bab --- /dev/null +++ b/x86/f2xm1.html @@ -0,0 +1,110 @@ + +F2XM1 + — Compute 2x–1

F2XM1 + — Compute 2^x – 1

+ + + + + + + + + + + + + +
Opcode ModeLeg ModeDescription
D9 F0 Replace ST(0) with (2^ST(0) – 1).
+

Description + ¶ +

+

Computes the exponential value of 2 to the power of the source operand minus 1. The source operand is located in register ST(0) and the result is also stored in ST(0). The value of the source operand must lie in the range –1.0 to +1.0. If the source value is outside this range, the result is undefined.

+

The following table shows the results obtained when computing the exponential value of various classes of numbers, assuming that neither overflow nor underflow occurs.

+
+ + + + + + + + + + + + + + + +
ST(0) SRCST(0) DEST
− 1.0 to −0− 0.5 to − 0
−0−0
+0+0
+ 0 to +1.0+ 0 to 1.0
+
Table 3-16. Results Obtained from F2XM1
+

Values other than 2 can be exponentiated using the following formula:

+

x^y := 2^(y ∗ log2(x))

+
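As a sketch of this identity in C (exp2 and log2 from <math.h> stand in for the x87 FYL2X/F2XM1/FSCALE sequence a compiler or math library would typically use; link with -lm):

#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 3.0, y = 2.5;
    double via_identity = exp2(y * log2(x));      /* x^y = 2^(y * log2(x)) */
    printf("%f %f\n", via_identity, pow(x, y));   /* both print 15.588457 */
    return 0;
}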

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
ST(0) := (2^ST(0) − 1);
+
+

FPU Flags Affected + ¶ +

+ + + + + + + + +
C1Set to 0 if stack underflow occurred.
Set if result was rounded up; cleared otherwise.
C0, C2, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#ISStack underflow occurred.
#IASource operand is an SNaN value or unsupported format.
#DSource is a denormal value.
#UResult is too small for destination format.
#PValue cannot be represented exactly in destination format.
+

Protected Mode Exceptions + ¶ +

+ + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/fabs.html b/x86/fabs.html new file mode 100644 index 0000000..a57389b --- /dev/null +++ b/x86/fabs.html @@ -0,0 +1,104 @@ + +FABS + — Absolute Value

FABS + — Absolute Value

+ + + + + + + + + + + + + +
Opcode ModeLeg ModeDescription
D9 E1 Replace ST with its absolute value.
+

Description + ¶ +

+

Clears the sign bit of ST(0) to create the absolute value of the operand. The following table shows the results obtained when creating the absolute value of various classes of numbers.

+
+ + + + + + + + + + + + + + + + + + + + + + + + +
ST(0) SRCST(0) DEST
−∞+∞
−F+F
−0+0
+0+0
+F+F
+∞+∞
NaNNaN
+
Table 3-17. Results Obtained from FABS
+
+

F Means finite floating-point value.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+
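The same sign-bit clearing can be illustrated on a C double (a sketch only; FABS itself operates on the 80-bit value in ST(0)):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    double x = -3.5;
    uint64_t bits;
    memcpy(&bits, &x, sizeof bits);
    bits &= ~(UINT64_C(1) << 63);   /* clear the sign bit, as FABS does for ST(0) */
    memcpy(&x, &bits, sizeof x);
    printf("%f\n", x);              /* prints 3.500000 */
    return 0;
}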

Operation + ¶ +

+
ST(0) := |ST(0)|;
+
+

FPU Flags Affected + ¶ +

+ + + + + + +
C1Set to 0.
C0, C2, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + +
#ISStack underflow occurred.
+

Protected Mode Exceptions + ¶ +

+ + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/fadd.faddp.fiadd.html b/x86/fadd.faddp.fiadd.html new file mode 100644 index 0000000..cac1281 --- /dev/null +++ b/x86/fadd.faddp.fiadd.html @@ -0,0 +1,301 @@ + +FADD/FADDP/FIADD + — Add

FADD/FADDP/FIADD + — Add

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D8 /0FADD m32fpValidValidAdd m32fp to ST(0) and store result in ST(0).
DC /0FADD m64fpValidValidAdd m64fp to ST(0) and store result in ST(0).
D8 C0+iFADD ST(0), ST(i)ValidValidAdd ST(0) to ST(i) and store result in ST(0).
DC C0+iFADD ST(i), ST(0)ValidValidAdd ST(i) to ST(0) and store result in ST(i).
DE C0+iFADDP ST(i), ST(0)ValidValidAdd ST(0) to ST(i), store result in ST(i), and pop the register stack.
DE C1FADDPValidValidAdd ST(0) to ST(1), store result in ST(1), and pop the register stack.
DA /0FIADD m32intValidValidAdd m32int to ST(0) and store result in ST(0).
DE /0FIADD m16intValidValidAdd m16int to ST(0) and store result in ST(0).
+

Description + ¶ +

+

Adds the destination and source operands and stores the sum in the destination location. The destination operand is always an FPU register; the source operand can be a register or a memory location. Source operands in memory can be in single precision or double precision floating-point format or in word or doubleword integer format.

+

The no-operand version of the instruction adds the contents of the ST(0) register to the ST(1) register. The one-operand version adds the contents of a memory location (either a floating-point or an integer value) to the contents of the ST(0) register. The two-operand version, adds the contents of the ST(0) register to the ST(i) register or vice versa. The value in ST(0) can be doubled by coding:

+

FADD ST(0), ST(0);

+
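A minimal sketch of that doubling idiom using GCC/Clang extended inline assembly (assumes an x86 target where the x87 is available):

#include <stdio.h>

static double double_via_fadd(double v) {
    __asm__ ("fldl  %0\n\t"            /* push v onto the x87 stack     */
             "fadd  %%st(0), %%st\n\t" /* ST(0) := ST(0) + ST(0)        */
             "fstpl %0"                /* store the result back and pop */
             : "+m"(v));
    return v;
}

int main(void) {
    printf("%f\n", double_via_fadd(1.5));   /* prints 3.000000 */
    return 0;
}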

The FADDP instructions perform the additional operation of popping the FPU register stack after storing the result. To pop the register stack, the processor marks the ST(0) register as empty and increments the stack pointer (TOP) by 1. (The no-operand version of the floating-point add instructions always results in the register stack being popped. In some assemblers, the mnemonic for this instruction is FADD rather than FADDP.)

+

The FIADD instructions convert an integer source operand to double extended-precision floating-point format before performing the addition.

+

The table on the following page shows the results obtained when adding various classes of numbers, assuming that neither overflow nor underflow occurs.

+

When the sum of two operands with opposite signs is 0, the result is +0, except for the round toward −∞ mode, in which case the result is −0. When the source operand is an integer 0, it is treated as a +0.

+

When both operands are infinities of the same sign, the result is ∞ of the expected sign. If both operands are infinities of opposite signs, an invalid-operation exception is generated. See Table 3-18.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
DEST
SRC−∞−F−0+0+F+∞NaN
−∞−∞−∞−∞−∞−∞*NaN
− F or − I−∞−FSRCSRC± F or ± 0+∞NaN
−0−∞DEST−0±0DEST+∞NaN
+0−∞DEST±0+0DEST+∞NaN
+ F or + I−∞± F or ± 0SRCSRC+F+∞NaN
+∞*+∞+∞+∞+∞+∞NaN
NaNNaNNaNNaNNaNNaNNaNNaN
+
Table 3-18. FADD/FADDP/FIADD Results
+
+

F Means finite floating-point value.

+

I Means integer.

+

* Indicates floating-point invalid-arithmetic-operand (#IA) exception.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
IF Instruction = FIADD
+    THEN
+        DEST := DEST + ConvertToDoubleExtendedPrecisionFP(SRC);
+    ELSE (* Source operand is floating-point value *)
+        DEST := DEST + SRC;
+FI;
+IF Instruction = FADDP
+    THEN
+        PopRegisterStack;
+FI;
+
+

FPU Flags Affected + ¶ +

+ + + + + + + + +
C1Set to 0 if stack underflow occurred.
Set if result was rounded up; cleared otherwise.
C0, C2, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#ISStack underflow occurred.
#IAOperand is an SNaN value or unsupported format.
Operands are infinities of unlike sign.
#DSource operand is a denormal value.
#UResult is too small for destination format.
#OResult is too large for destination format.
#PValue cannot be represented exactly in destination format.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/fbld.html b/x86/fbld.html new file mode 100644 index 0000000..998dfcf --- /dev/null +++ b/x86/fbld.html @@ -0,0 +1,142 @@ + +FBLD + — Load Binary Coded Decimal

FBLD + — Load Binary Coded Decimal

+ + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
DF /4FBLD m80bcdValidValidConvert BCD value to floating-point and push onto the FPU stack.
+

Description + ¶ +

+

Converts the BCD source operand into double extended-precision floating-point format and pushes the value onto the FPU stack. The source operand is loaded without rounding errors. The sign of the source operand is preserved, including that of −0.

+

The packed BCD digits are assumed to be in the range 0 through 9; the instruction does not check for invalid digits (AH through FH). Attempting to load an invalid encoding produces an undefined result.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+
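A sketch of loading a hand-built packed BCD value with FBLD (GCC/Clang extended asm on an x86 target; digit pairs are stored least-significant byte first, and byte 9 holds the sign):

#include <stdio.h>

int main(void) {
    /* +1235 encoded as packed BCD: byte 0 = 0x35, byte 1 = 0x12, sign byte = 0 */
    struct { unsigned char b[10]; } bcd = { {0x35, 0x12} };
    long double v;
    __asm__ ("fbld  %1\n\t"   /* convert the BCD operand and push it */
             "fstpt %0"       /* store as 80-bit extended and pop    */
             : "=m"(v) : "m"(bcd));
    printf("%Lf\n", v);       /* prints 1235.000000 */
    return 0;
}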

Operation + ¶ +

+
TOP := TOP − 1;
+ST(0) := ConvertToDoubleExtendedPrecisionFP(SRC);
+
+

FPU Flags Affected + ¶ +

+ + + + + + +
C1Set to 1 if stack overflow occurred; otherwise, set to 0.
C0, C2, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + +
#ISStack overflow occurred.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/fbstp.html b/x86/fbstp.html new file mode 100644 index 0000000..d10a6e8 --- /dev/null +++ b/x86/fbstp.html @@ -0,0 +1,194 @@ + +FBSTP + — Store BCD Integer and Pop

FBSTP + — Store BCD Integer and Pop

+ + + + + + + + + + + + + +
Opcode ModeLeg ModeDescription
DF /6 Store ST(0) in m80bcd and pop ST(0).
+

Description + ¶ +

+

Converts the value in the ST(0) register to an 18-digit packed BCD integer, stores the result in the destination operand, and pops the register stack. If the source value is a non-integral value, it is rounded to an integer value, according to rounding mode specified by the RC field of the FPU control word. To pop the register stack, the processor marks the ST(0) register as empty and increments the stack pointer (TOP) by 1.

+

The destination operand specifies the address where the first byte destination value is to be stored. The BCD value (including its sign bit) requires 10 bytes of space in memory.

+

The following table shows the results obtained when storing various classes of numbers in packed BCD format.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ST(0)DEST
− ∞ or Value Too Large for DEST Format*
F≤−1−D
−1 < F < -0**
−0−0
+0+0
+ 0 < F < +1**
F ≥ +1+D
+ ∞ or Value Too Large for DEST Format*
NaN*
+
Table 3-19. FBSTP Results
+
+

F Means finite floating-point value.

+

D Means packed-BCD number.

+

* Indicates floating-point invalid-operation (#IA) exception.

+

** ±0 or ±1, depending on the rounding mode.

+

If the converted value is too large for the destination format, or if the source operand is an ∞, SNaN, QNAN, or is in an unsupported format, an invalid-arithmetic-operand condition is signaled. If the invalid-operation exception is not masked, an invalid-arithmetic-operand exception (#IA) is generated and no value is stored in the destination operand. If the invalid-operation exception is masked, the packed BCD indefinite value is stored in memory.

+
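Conversely, a sketch of storing a value with FBSTP and decoding the 10-byte result (GCC/Clang extended asm on an x86 target; rounding follows the current RC setting, round-to-nearest by default):

#include <stdio.h>

int main(void) {
    double v = 1234.56;                 /* rounds to 1235 under round-to-nearest */
    struct { unsigned char b[10]; } bcd;
    __asm__ ("fldl  %1\n\t"
             "fbstp %0"                 /* convert ST(0) to packed BCD and pop */
             : "=m"(bcd) : "m"(v));
    printf("sign=%d digits=", bcd.b[9] >> 7);
    for (int i = 8; i >= 0; i--)        /* bytes hold digit pairs, low digits first */
        printf("%d%d", bcd.b[i] >> 4, bcd.b[i] & 0x0F);
    printf("\n");                       /* prints sign=0 digits=000000000000001235 */
    return 0;
}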

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
DEST := BCD(ST(0));
+PopRegisterStack;
+
+

FPU Flags Affected + ¶ +

+ + + + + + + + +
C1Set to 0 if stack underflow occurred.
Set if result was rounded up; cleared otherwise.
C0, C2, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + + + +
#ISStack underflow occurred.
#IAConverted value that exceeds 18 BCD digits in length.
Source operand is an SNaN, QNaN, ±∞, or in an unsupported format.
#PValue cannot be represented exactly in destination format.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If a segment register is being loaded with a segment selector that points to a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/fchs.html b/x86/fchs.html new file mode 100644 index 0000000..6d5b243 --- /dev/null +++ b/x86/fchs.html @@ -0,0 +1,104 @@ + +FCHS + — Change Sign

FCHS + — Change Sign

+ + + + + + + + + + + + + +
Opcode ModeLeg ModeDescription
D9 E0 Complements sign of ST(0).
+

Description + ¶ +

+

Complements the sign bit of ST(0). This operation changes a positive value into a negative value of equal magnitude or vice versa. The following table shows the results obtained when changing the sign of various classes of numbers.

+
+ + + + + + + + + + + + + + + + + + + + + + + + +
ST(0) SRCST(0) DEST
−∞+∞
−F+F
−0+0
+0−0
+F−F
+∞−∞
NaNNaN
+
Table 3-20. FCHS Results
+
+

* F means finite floating-point value.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
SignBit(ST(0)) := NOT (SignBit(ST(0)));
+
+

FPU Flags Affected + ¶ +

+ + + + + + +
C1Set to 0.
C0, C2, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + +
#ISStack underflow occurred.
+

Protected Mode Exceptions + ¶ +

+ + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/fclex.fnclex.html b/x86/fclex.fnclex.html new file mode 100644 index 0000000..02bfb92 --- /dev/null +++ b/x86/fclex.fnclex.html @@ -0,0 +1,83 @@ + +FCLEX/FNCLEX + — Clear Exceptions

FCLEX/FNCLEX + — Clear Exceptions

+ + +

Opcode1

+ + + + + + + + + + + + + + + + + + +
Instruction64-Bit ModeCompat/Leg ModeDescription
9B DB E2FCLEXValidValidClear floating-point exception flags after checking for pending unmasked floating-point exceptions.
DB E2FNCLEX1ValidValidClear floating-point exception flags without checking for pending unmasked floating-point exceptions.
+
+

1. See IA-32 Architecture Compatibility section below.

+

Description + ¶ +

+

Clears the floating-point exception flags (PE, UE, OE, ZE, DE, and IE), the exception summary status flag (ES), the stack fault flag (SF), and the busy flag (B) in the FPU status word. The FCLEX instruction checks for and handles any pending unmasked floating-point exceptions before clearing the exception flags; the FNCLEX instruction does not.

+

The assembler issues two instructions for the FCLEX instruction (an FWAIT instruction followed by an FNCLEX instruction), and the processor executes each of these instructions separately. If an exception is generated for either of these instructions, the saved EIP points to the instruction that caused the exception.

+

IA-32 Architecture Compatibility + ¶ +

+

When operating a Pentium or Intel486 processor in MS-DOS* compatibility mode, it is possible (under unusual circumstances) for an FNCLEX instruction to be interrupted prior to being executed to handle a pending FPU exception. See the section titled “No-Wait FPU Instructions Can Get FPU Interrupt in Window” in Appendix D of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for a description of these circumstances. An FNCLEX instruction cannot be interrupted in this way on later Intel processors, except for the Intel QuarkTM X1000 processor.

+

This instruction affects only the x87 FPU floating-point exception flags. It does not affect the SIMD floating-point exception flags in the MXCSR register.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+
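A sketch of observing and clearing the x87 exception flags (GCC/Clang extended asm; long double arithmetic is used so the division actually goes through the x87 on typical x86-64 compilers):

#include <stdio.h>

int main(void) {
    volatile long double zero = 0.0L, one = 1.0L;
    volatile long double r = one / zero;   /* masked #Z: sets ZE in the status word */
    unsigned short before, after;
    __asm__ volatile ("fnstsw %0" : "=m"(before));
    __asm__ volatile ("fnclex");           /* clear exception flags, ES, SF, and B */
    __asm__ volatile ("fnstsw %0" : "=m"(after));
    printf("status before=%#x after=%#x (r=%Lf)\n", before, after, r);
    return 0;
}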

Operation + ¶ +

+
FPUStatusWord[0:7] := 0;
+FPUStatusWord[15] := 0;
+
+

FPU Flags Affected + ¶ +

+

The PE, UE, OE, ZE, DE, IE, ES, SF, and B flags in the FPU status word are cleared. The C0, C1, C2, and C3 flags are undefined.

+

Floating-Point Exceptions + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/fcmovcc.html b/x86/fcmovcc.html new file mode 100644 index 0000000..7cb6939 --- /dev/null +++ b/x86/fcmovcc.html @@ -0,0 +1,132 @@ + +FCMOVcc + — Floating-Point Conditional Move

FCMOVcc + — Floating-Point Conditional Move

+ +

Opcode1

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Instruction64-Bit ModeCompat/ 1 Leg ModeDescription
DA C0+iFCMOVB ST(0), ST(i)ValidValidMove if below (CF=1).
DA C8+iFCMOVE ST(0), ST(i)ValidValidMove if equal (ZF=1).
DA D0+iFCMOVBE ST(0), ST(i)ValidValidMove if below or equal (CF=1 or ZF=1).
DA D8+iFCMOVU ST(0), ST(i)ValidValidMove if unordered (PF=1).
DB C0+iFCMOVNB ST(0), ST(i)ValidValidMove if not below (CF=0).
DB C8+iFCMOVNE ST(0), ST(i)ValidValidMove if not equal (ZF=0).
DB D0+iFCMOVNBE ST(0), ST(i)ValidValidMove if not below or equal (CF=0 and ZF=0).
DB D8+iFCMOVNU ST(0), ST(i)ValidValidMove if not unordered (PF=0).
+
+

1. See IA-32 Architecture Compatibility section below.

+

Description + ¶ +

+

Tests the status flags in the EFLAGS register and moves the source operand (second operand) to the destination operand (first operand) if the given test condition is true. The condition for each mnemonic is given in the Description column above and in Chapter 8 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1. The source operand is always in the ST(i) register and the destination operand is always ST(0).

+

The FCMOVcc instructions are useful for optimizing small IF constructions. They also help eliminate branching overhead for IF operations and the possibility of branch mispredictions by the processor.

+

A processor may not support the FCMOVcc instructions. Software can check if the FCMOVcc instructions are supported by checking the processor’s feature information with the CPUID instruction (see “CPUID—CPU Identification” in this chapter). If both the CMOV and FPU feature bits are set, the FCMOVcc instructions are supported.

+
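A sketch of that feature check with the GCC/Clang <cpuid.h> helper (FCMOVcc requires both the FPU and CMOV bits of CPUID leaf 01H):

#include <cpuid.h>
#include <stdio.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        int fpu  = (edx >> 0)  & 1;   /* CPUID.01H:EDX.FPU[bit 0]   */
        int cmov = (edx >> 15) & 1;   /* CPUID.01H:EDX.CMOV[bit 15] */
        printf("FCMOVcc supported: %s\n", (fpu && cmov) ? "yes" : "no");
    }
    return 0;
}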

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

IA-32 Architecture Compatibility + ¶ +

+

The FCMOVcc instructions were introduced to the IA-32 Architecture in the P6 family processors and are not available in earlier IA-32 processors.

+

Operation + ¶ +

+
IF condition TRUE
+    THEN ST(0) := ST(i);
+FI;
+
+

FPU Flags Affected + ¶ +

+ + + + + + +
C1Set to 0 if stack underflow occurred.
C0, C2, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + +
#ISStack underflow occurred.
+

Integer Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/fcom.fcomp.fcompp.html b/x86/fcom.fcomp.fcompp.html new file mode 100644 index 0000000..93fa162 --- /dev/null +++ b/x86/fcom.fcomp.fcompp.html @@ -0,0 +1,255 @@ + +FCOM/FCOMP/FCOMPP + — Compare Floating-Point Values

FCOM/FCOMP/FCOMPP + — Compare Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D8 /2FCOM m32fpValidValidCompare ST(0) with m32fp.
DC /2FCOM m64fpValidValidCompare ST(0) with m64fp.
D8 D0+iFCOM ST(i)ValidValidCompare ST(0) with ST(i).
D8 D1FCOMValidValidCompare ST(0) with ST(1).
D8 /3FCOMP m32fpValidValidCompare ST(0) with m32fp and pop register stack.
DC /3FCOMP m64fpValidValidCompare ST(0) with m64fp and pop register stack.
D8 D8+iFCOMP ST(i)ValidValidCompare ST(0) with ST(i) and pop register stack.
D8 D9FCOMPValidValidCompare ST(0) with ST(1) and pop register stack.
DE D9FCOMPPValidValidCompare ST(0) with ST(1) and pop register stack twice.
+

Description + ¶ +

+

Compares the contents of register ST(0) and source value and sets condition code flags C0, C2, and C3 in the FPU status word according to the results (see the table below). The source operand can be a data register or a memory location. If no source operand is given, the value in ST(0) is compared with the value in ST(1). The sign of zero is ignored, so that –0.0 is equal to +0.0.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + +
ConditionC3C2C0
ST(0) > SRC000
ST(0) < SRC001
ST(0) = SRC100
Unordered*111
+
Table 3-21. FCOM/FCOMP/FCOMPP Results
+
+

* Flags not set if unmasked invalid-arithmetic-operand (#IA) exception is generated.

+

This instruction checks the class of the numbers being compared (see “FXAM—Examine Floating-Point” in this chapter). If either operand is a NaN or is in an unsupported format, an invalid-arithmetic-operand exception (#IA) is raised and, if the exception is masked, the condition flags are set to “unordered.” If the invalid-arithmetic-operand exception is unmasked, the condition code flags are not set.

+

The FCOMP instruction pops the register stack following the comparison operation and the FCOMPP instruction pops the register stack twice following the comparison operation. To pop the register stack, the processor marks the ST(0) register as empty and increments the stack pointer (TOP) by 1.

+

The FCOM instructions perform the same operation as the FUCOM instructions. The only difference is how they handle QNaN operands. The FCOM instructions raise an invalid-arithmetic-operand exception (#IA) when either or both of the operands is a NaN value or is in an unsupported format. The FUCOM instructions perform the same operation as the FCOM instructions, except that they do not generate an invalid-arithmetic-operand exception for QNaNs.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+
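A sketch of the classic FCOMPP + FNSTSW pattern for reading C3/C2/C0 back into software (GCC/Clang extended asm; the status-word bit positions are C0 = 8, C2 = 10, C3 = 14):

#include <stdio.h>

static unsigned compare_x87(double a, double b) {
    unsigned short sw;
    __asm__ ("fldl   %2\n\t"    /* ST(0) = b            */
             "fldl   %1\n\t"    /* ST(0) = a, ST(1) = b */
             "fcompp\n\t"       /* compare ST(0) with ST(1), pop twice */
             "fnstsw %0"
             : "=m"(sw) : "m"(a), "m"(b));
    return (((sw >> 14) & 1) << 2) | (((sw >> 10) & 1) << 1) | ((sw >> 8) & 1);
}

int main(void) {
    /* packed as C3 C2 C0: 0 = greater, 1 = less, 4 = equal (Table 3-21) */
    printf("%u %u %u\n", compare_x87(2.0, 1.0), compare_x87(1.0, 2.0), compare_x87(1.0, 1.0));
    return 0;
}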

Operation + ¶ +

+
CASE (relation of operands) OF
+    ST > SRC:
+                    C3, C2, C0 := 000;
+    ST < SRC:
+                    C3, C2, C0 := 001;
+    ST = SRC:
+                    C3, C2, C0 := 100;
+ESAC;
+IF ST(0) or SRC = NaN or unsupported format
+    THEN
+        #IA
+        IF FPUControlWord.IM = 1
+            THEN
+                C3, C2, C0 := 111;
+        FI;
+FI;
+IF Instruction = FCOMP
+    THEN
+        PopRegisterStack;
+FI;
+IF Instruction = FCOMPP
+    THEN
+        PopRegisterStack;
+        PopRegisterStack;
+FI;
+
+

FPU Flags Affected + ¶ +

+ + + + + + +
C1Set to 0.
C0, C2, C3See table on previous page.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + + + +
#ISStack underflow occurred.
#IAOne or both operands are NaN values or have unsupported formats.
Register is marked empty.
#DOne or both operands are denormal values.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/fcomi.fcomip.fucomi.fucomip.html b/x86/fcomi.fcomip.fucomi.fucomip.html new file mode 100644 index 0000000..912a260 --- /dev/null +++ b/x86/fcomi.fcomip.fucomi.fucomip.html @@ -0,0 +1,178 @@ + +FCOMI/FCOMIP/FUCOMI/FUCOMIP + — Compare Floating-Point Values and Set EFLAGS

FCOMI/FCOMIP/FUCOMI/FUCOMIP + — Compare Floating-Point Values and Set EFLAGS

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
DB F0+iFCOMI ST, ST(i)ValidValidCompare ST(0) with ST(i) and set status flags accordingly.
DF F0+iFCOMIP ST, ST(i)ValidValidCompare ST(0) with ST(i), set status flags accordingly, and pop register stack.
DB E8+iFUCOMI ST, ST(i)ValidValidCompare ST(0) with ST(i), check for ordered values, and set status flags accordingly.
DF E8+iFUCOMIP ST, ST(i)ValidValidCompare ST(0) with ST(i), check for ordered values, set status flags accordingly, and pop register stack.
+

Description + ¶ +

+

Performs an unordered comparison of the contents of registers ST(0) and ST(i) and sets the status flags ZF, PF, and CF in the EFLAGS register according to the results (see the table below). The sign of zero is ignored for comparisons, so that –0.0 is equal to +0.0.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + +
Comparison Results*ZFPFCF
ST0 > ST(i)000
ST0 < ST(i)001
ST0 = ST(i)100
Unordered**111
+
Table 3-22. FCOMI/FCOMIP/ FUCOMI/FUCOMIP Results
+
+

* See the IA-32 Architecture Compatibility section below.

+

** Flags not set if unmasked invalid-arithmetic-operand (#IA) exception is generated.

+

An unordered comparison checks the class of the numbers being compared (see “FXAM—Examine Floating-Point” in this chapter). The FUCOMI/FUCOMIP instructions perform the same operations as the FCOMI/FCOMIP instructions. The only difference is that the FUCOMI/FUCOMIP instructions raise the invalid-arithmetic-operand exception (#IA) only when either or both operands are an SNaN or are in an unsupported format; QNaNs cause the condition code flags to be set to unordered, but do not cause an exception to be generated. The FCOMI/FCOMIP instructions raise an invalid-operation exception when either or both of the operands are a NaN value of any kind or are in an unsupported format.

+

If the operation results in an invalid-arithmetic-operand exception being raised, the status flags in the EFLAGS register are set only if the exception is masked.

+

The FCOMI/FCOMIP and FUCOMI/FUCOMIP instructions set the OF, SF, and AF flags to zero in the EFLAGS register (regardless of whether an invalid-operation exception is detected).

+

The FCOMIP and FUCOMIP instructions also pop the register stack following the comparison operation. To pop the register stack, the processor marks the ST(0) register as empty and increments the stack pointer (TOP) by 1.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.
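Because FCOMI/FUCOMI place the comparison result directly in CF, ZF, and PF, the ordinary unsigned-compare condition codes (A/AE/B/BE/E/P) can be used immediately after the instruction. A minimal C sketch, assuming a GCC-style compiler with inline assembly on an x86 target; the helper name is illustrative:

#include <stdio.h>

/* Returns 1 if a > b, using FUCOMIP so that CF/ZF reflect the comparison
 * and SETA can test them directly (CF = 0 and ZF = 0 means "above"). */
static int x87_greater(double a, double b)
{
    int above;
    __asm__ ("fldl %2\n\t"                /* push b                          */
             "fldl %1\n\t"                /* push a; ST(0)=a, ST(1)=b        */
             "fucomip %%st(1), %%st\n\t"  /* compare ST(0) with ST(1), pop   */
             "fstp %%st(0)\n\t"           /* discard the remaining value     */
             "seta %b0\n\t"               /* above: CF = 0 and ZF = 0        */
             "movzbl %b0, %0"
             : "=q" (above)
             : "m" (a), "m" (b)
             : "cc", "st", "st(1)");
    return above;
}

int main(void)
{
    printf("%d %d %d\n", x87_greater(2.0, 1.0),   /* 1 */
                         x87_greater(1.0, 2.0),   /* 0 */
                         x87_greater(1.0, 1.0));  /* 0 */
    return 0;
}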

+

IA-32 Architecture Compatibility + ¶ +

+

The FCOMI/FCOMIP/FUCOMI/FUCOMIP instructions were introduced to the IA-32 Architecture in the P6 family processors and are not available in earlier IA-32 processors.

+

Operation + ¶ +

+
CASE (relation of operands) OF
+    ST(0) > ST(i):
+                        ZF, PF, CF := 000;
+    ST(0) < ST(i):
+                        ZF, PF, CF := 001;
+    ST(0) = ST(i):
+                        ZF, PF, CF := 100;
+ESAC;
+IF Instruction is FCOMI or FCOMIP
+    THEN
+        IF ST(0) or ST(i) = NaN or unsupported format
+            THEN
+                #IA
+                IF FPUControlWord.IM = 1
+                        THEN
+                            ZF, PF, CF := 111;
+                FI;
+        FI;
+FI;
+IF Instruction is FUCOMI or FUCOMIP
+    THEN
+        IF ST(0) or ST(i) = QNaN, but not SNaN or unsupported format
+            THEN
+                ZF, PF, CF := 111;
+            ELSE (* ST(0) or ST(i) is SNaN or unsupported format *)
+                    #IA;
+                IF FPUControlWord.IM = 1
+                        THEN
+                            ZF, PF, CF := 111;
+                FI;
+        FI;
+FI;
+IF Instruction is FCOMIP or FUCOMIP
+    THEN
+        PopRegisterStack;
+FI;
+
+

FPU Flags Affected + ¶ +

+ + + + + + +
C1Set to 0.
C0, C2, C3Not affected.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + +
#ISStack underflow occurred.
#IA(FCOMI or FCOMIP instruction) One or both operands are NaN values or have unsupported formats.
(FUCOMI or FUCOMIP instruction) One or both operands are SNaN values (but not QNaNs) or have undefined formats. Detection of a QNaN value does not raise an invalid-operand exception.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/fcos.html b/x86/fcos.html new file mode 100644 index 0000000..08729c6 --- /dev/null +++ b/x86/fcos.html @@ -0,0 +1,132 @@ + +FCOS + — Cosine

FCOS + — Cosine

+ + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D9 FFFCOSValidValidReplace ST(0) with its approximate cosine.
+

Description + ¶ +

+

Computes the approximate cosine of the source operand in register ST(0) and stores the result in ST(0). The source operand must be given in radians and must be within the range −2⁶³ to +2⁶³. The following table shows the results obtained when taking the cosine of various classes of numbers.

+
+ + + + + + + + + + + + + + + + + + + + + + + + +
ST(0) SRCST(0) DEST
−∞*
−F−1 to +1
−0+1
+0+1
+F−1 to +1
+∞*
NaNNaN
+
Table 3-23. FCOS Results
+
+

F Means finite floating-point value.

+

* Indicates floating-point invalid-arithmetic-operand (#IA) exception.

+

If the source operand is outside the acceptable range, the C2 flag in the FPU status word is set, and the value in register ST(0) remains unchanged. The instruction does not raise an exception when the source operand is out of range. It is up to the program to check the C2 flag for out-of-range conditions. Source values outside the range −2⁶³ to +2⁶³ can be reduced to the range of the instruction by subtracting an appropriate integer multiple of 2π. However, even within the range −2⁶³ to +2⁶³, inaccurate results can occur because the finite approximation of π used internally for argument reduction is not sufficient in all cases. Therefore, for accurate results it is safe to apply FCOS only to arguments reduced accurately in software, to a value smaller in absolute value than 3π/8. See the sections titled “Approximation of Pi” and “Transcendental Instruction Accuracy” in Chapter 8 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for a discussion of the proper value to use for π in performing such reductions.
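Because the out-of-range case is reported only through C2, a caller has to read the status word back to detect it. A minimal C sketch, assuming a GCC-style compiler with inline assembly on an x86 target (C2 is bit 10 of the FPU status word); the helper name is illustrative:

#include <stdio.h>

/* Executes FCOS on x and stores the (possibly unchanged) top of stack in
 * *out. Returns the C2 flag: 1 means the operand was out of range and no
 * cosine was computed. */
static int fcos_checked(double x, double *out)
{
    unsigned short sw;
    double r;
    __asm__ ("fldl %2\n\t"
             "fcos\n\t"
             "fnstsw %1\n\t"
             "fstpl %0"
             : "=m" (r), "=m" (sw)
             : "m" (x)
             : "st");
    *out = r;
    return (sw >> 10) & 1;
}

int main(void)
{
    double c;
    int out_of_range = fcos_checked(1.0, &c);
    printf("%d %f\n", out_of_range, c);   /* 0 0.540302 */
    return 0;
}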

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
IF |ST(0)| < 2⁶³
+THEN
+    C2 := 0;
+    ST(0) := FCOS(ST(0)); // approximation of cosine
+ELSE (* Source operand is out-of-range *)
+    C2 := 1;
+FI;
+
+

FPU Flags Affected + ¶ +

+ + + + + + + + + + + + + + +
C1Set to 0 if stack underflow occurred.
Set if result was rounded up; cleared otherwise.
Undefined if C2 is 1.
C2Set to 1 if outside range (−2⁶³ < source operand < +2⁶³); otherwise, set to 0.
C0, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + + + + +
#ISStack underflow occurred.
#IASource operand is an SNaN value, ∞, or unsupported format.
#DSource is a denormal value.
#PValue cannot be represented exactly in destination format.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/fdecstp.html b/x86/fdecstp.html new file mode 100644 index 0000000..725b5f1 --- /dev/null +++ b/x86/fdecstp.html @@ -0,0 +1,72 @@ + +FDECSTP + — Decrement Stack-Top Pointer

FDECSTP + — Decrement Stack-Top Pointer

+ + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D9 F6FDECSTPValidValidDecrement TOP field in FPU status word.
+

Description + ¶ +

+

Subtracts one from the TOP field of the FPU status word (decrements the top-of-stack pointer). If the TOP field contains a 0, it is set to 7. The effect of this instruction is to rotate the stack by one position. The contents of the FPU data registers and tag register are not affected.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
IF TOP = 0
+    THEN TOP := 7;
+    ELSE TOP := TOP – 1;
+FI;
+
+

FPU Flags Affected + ¶ +

+

The C1 flag is set to 0. The C0, C2, and C3 flags are undefined.

+

Floating-Point Exceptions + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/fdiv.fdivp.fidiv.html b/x86/fdiv.fdivp.fidiv.html new file mode 100644 index 0000000..bcbb79d --- /dev/null +++ b/x86/fdiv.fdivp.fidiv.html @@ -0,0 +1,326 @@ + +FDIV/FDIVP/FIDIV + — Divide

FDIV/FDIVP/FIDIV + — Divide

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D8 /6FDIV m32fpValidValidDivide ST(0) by m32fp and store result in ST(0).
DC /6FDIV m64fpValidValidDivide ST(0) by m64fp and store result in ST(0).
D8 F0+iFDIV ST(0), ST(i)ValidValidDivide ST(0) by ST(i) and store result in ST(0).
DC F8+iFDIV ST(i), ST(0)ValidValidDivide ST(i) by ST(0) and store result in ST(i).
DE F8+iFDIVP ST(i), ST(0)ValidValidDivide ST(i) by ST(0), store result in ST(i), and pop the register stack.
DE F9FDIVPValidValidDivide ST(1) by ST(0), store result in ST(1), and pop the register stack.
DA /6FIDIV m32intValidValidDivide ST(0) by m32int and store result in ST(0).
DE /6FIDIV m16intValidValidDivide ST(0) by m16int and store result in ST(0).
+

Description + ¶ +

+

Divides the destination operand by the source operand and stores the result in the destination location. The destination operand (dividend) is always in an FPU register; the source operand (divisor) can be a register or a memory location. Source operands in memory can be in single precision or double precision floating-point format, or in word or doubleword integer format.

+

The no-operand version of the instruction divides the contents of the ST(1) register by the contents of the ST(0) register. The one-operand version divides the contents of the ST(0) register by the contents of a memory location (either a floating-point or an integer value). The two-operand version divides the contents of the ST(0) register by the contents of the ST(i) register or vice versa.

+

The FDIVP instructions perform the additional operation of popping the FPU register stack after storing the result. To pop the register stack, the processor marks the ST(0) register as empty and increments the stack pointer (TOP) by 1. The no-operand version of the floating-point divide instructions always results in the register stack being popped. In some assemblers, the mnemonic for this instruction is FDIV rather than FDIVP.

+

The FIDIV instructions convert an integer source operand to double extended-precision floating-point format before performing the division. When the source operand is an integer 0, it is treated as a +0.

+

If an unmasked divide-by-zero exception (#Z) is generated, no result is stored; if the exception is masked, an ∞ of the appropriate sign is stored in the destination operand.
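The masked response described above is ordinary IEEE 754 behavior and can be observed from plain C, where a finite non-zero value divided by a signed zero yields an infinity whose sign is the exclusive-OR of the operand signs:

#include <stdio.h>

int main(void)
{
    printf("%g\n",  1.0 /  0.0);   /* inf  */
    printf("%g\n",  1.0 / -0.0);   /* -inf */
    printf("%g\n", -1.0 /  0.0);   /* -inf */
    printf("%g\n", -1.0 / -0.0);   /* inf  */
    return 0;
}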

+

The following table shows the results obtained when dividing various classes of numbers, assuming that neither overflow nor underflow occurs.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
DEST
SRC−∞−F−0+0+F+∞NaN
−∞*+0+0−0−0*NaN
−F+∞+F+0−0−F−∞NaN
−I+∞+F+0−0−F−∞NaN
−0+∞******−∞NaN
+0−∞******+∞NaN
+I−∞−F−0+0+F+∞NaN
+F−∞−F−0+0+F+∞NaN
+∞*−0−0+0+0*NaN
NaNNaNNaNNaNNaNNaNNaNNaN
+
Table 3-24. FDIV/FDIVP/FIDIV Results
+
+

F Means finite floating-point value.

+

I Means integer.

+

* Indicates floating-point invalid-arithmetic-operand (#IA) exception.

+

** Indicates floating-point zero-divide (#Z) exception.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
IF SRC = 0
+    THEN
+        #Z;
+    ELSE
+        IF Instruction is FIDIV
+            THEN
+                DEST := DEST / ConvertToDoubleExtendedPrecisionFP(SRC);
+            ELSE (* Source operand is floating-point value *)
+                DEST := DEST / SRC;
+        FI;
+FI;
+IF Instruction = FDIVP
+    THEN
+        PopRegisterStack;
+FI;
+
+

FPU Flags Affected + ¶ +

+ + + + + + + + +
C1Set to 0 if stack underflow occurred.
Set if result was rounded up; cleared otherwise.
C0, C2, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + +
#ISStack underflow occurred.
#IAOperand is an SNaN value or unsupported format.
±∞ / ±∞; ±0 / ±0
#DSource is a denormal value.
#ZDEST / ±0, where DEST is not equal to ±0.
#UResult is too small for destination format.
#OResult is too large for destination format.
#PValue cannot be represented exactly in destination format.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/fdivr.fdivrp.fidivr.html b/x86/fdivr.fdivrp.fidivr.html new file mode 100644 index 0000000..3843067 --- /dev/null +++ b/x86/fdivr.fdivrp.fidivr.html @@ -0,0 +1,327 @@ + +FDIVR/FDIVRP/FIDIVR + — Reverse Divide

FDIVR/FDIVRP/FIDIVR + — Reverse Divide

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D8 /7FDIVR m32fpValidValidDivide m32fp by ST(0) and store result in ST(0).
DC /7FDIVR m64fpValidValidDivide m64fp by ST(0) and store result in ST(0).
D8 F8+iFDIVR ST(0), ST(i)ValidValidDivide ST(i) by ST(0) and store result in ST(0).
DC F0+iFDIVR ST(i), ST(0)ValidValidDivide ST(0) by ST(i) and store result in ST(i).
DE F0+iFDIVRP ST(i), ST(0)ValidValidDivide ST(0) by ST(i), store result in ST(i), and pop the register stack.
DE F1FDIVRPValidValidDivide ST(0) by ST(1), store result in ST(1), and pop the register stack.
DA /7FIDIVR m32intValidValidDivide m32int by ST(0) and store result in ST(0).
DE /7FIDIVR m16intValidValidDivide m16int by ST(0) and store result in ST(0).
+

Description + ¶ +

+

Divides the source operand by the destination operand and stores the result in the destination location. The destination operand (divisor) is always in an FPU register; the source operand (dividend) can be a register or a memory location. Source operands in memory can be in single precision or double precision floating-point format, or in word or doubleword integer format.

+

These instructions perform the reverse operations of the FDIV, FDIVP, and FIDIV instructions. They are provided to support more efficient coding.
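For example, a reciprocal can be formed with a single memory-operand FDIVR, with no need to reorder the stack first. A minimal C sketch, assuming a GCC-style compiler with inline assembly on an x86 target; the helper name is illustrative:

/* ST(0) := 1.0 / ST(0) via FDIVR m64fp (the memory operand is the dividend). */
static double x87_recip(double x)
{
    static const double one = 1.0;
    double r;
    __asm__ ("fldl %1\n\t"       /* ST(0) = x             */
             "fdivrl %2\n\t"     /* ST(0) = 1.0 / ST(0)   */
             "fstpl %0"
             : "=m" (r)
             : "m" (x), "m" (one)
             : "st");
    return r;
}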

+

The no-operand version of the instruction divides the contents of the ST(0) register by the contents of the ST(1) register. The one-operand version divides the contents of a memory location (either a floating-point or an integer value) by the contents of the ST(0) register. The two-operand version divides the contents of the ST(i) register by the contents of the ST(0) register or vice versa.

+

The FDIVRP instructions perform the additional operation of popping the FPU register stack after storing the result. To pop the register stack, the processor marks the ST(0) register as empty and increments the stack pointer (TOP) by 1. The no-operand version of the floating-point divide instructions always results in the register stack being popped. In some assemblers, the mnemonic for this instruction is FDIVR rather than FDIVRP.

+

The FIDIVR instructions convert an integer source operand to double extended-precision floating-point format before performing the division.

+

If an unmasked divide-by-zero exception (#Z) is generated, no result is stored; if the exception is masked, an ∞ of the appropriate sign is stored in the destination operand.

+

The following table shows the results obtained when dividing various classes of numbers, assuming that neither overflow nor underflow occurs.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
DEST
SRC−∞−F−0+0+F+∞NaN
−∞*+∞+∞−∞−∞*NaN
−F+0+F****−F−0NaN
−I+0+F****−F−0NaN
−0+0+0**−0−0NaN
+0−0−0**+0+0NaN
+I−0−F****+F+0NaN
+F−0−F****+F+0NaN
+∞*−∞−∞+∞+∞*NaN
NaNNaNNaNNaNNaNNaNNaNNaN
+
Table 3-25. FDIVR/FDIVRP/FIDIVR Results
+
+

F Means finite floating-point value.

+

I Means integer.

+

* Indicates floating-point invalid-arithmetic-operand (#IA) exception.

+

** Indicates floating-point zero-divide (#Z) exception.

+

When the source operand is an integer 0, it is treated as a +0. This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
IF DEST = 0
+    THEN
+        #Z;
+    ELSE
+        IF Instruction = FIDIVR
+            THEN
+                DEST := ConvertToDoubleExtendedPrecisionFP(SRC) / DEST;
+            ELSE (* Source operand is floating-point value *)
+                DEST := SRC / DEST;
+        FI;
+FI;
+IF Instruction = FDIVRP
+    THEN
+        PopRegisterStack;
+FI;
+
+

FPU Flags Affected + ¶ +

+ + + + + + + + +
C1Set to 0 if stack underflow occurred.
Set if result was rounded up; cleared otherwise.
C0, C2, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + +
#ISStack underflow occurred.
#IAOperand is an SNaN value or unsupported format.
±∞ / ±∞; ±0 / ±0
#DSource is a denormal value.
#ZSRC / ±0, where SRC is not equal to ±0.
#UResult is too small for destination format.
#OResult is too large for destination format.
#PValue cannot be represented exactly in destination format.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/ffree.html b/x86/ffree.html new file mode 100644 index 0000000..1d63079 --- /dev/null +++ b/x86/ffree.html @@ -0,0 +1,72 @@ + +FFREE + — Free Floating-Point Register

FFREE + — Free Floating-Point Register

+ + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
DD C0+iFFREE ST(i)ValidValidSets tag for ST(i) to empty.
+

Description + ¶ +

+

Sets the tag in the FPU tag register associated with register ST(i) to empty (11B). The contents of ST(i) and the FPU stack-top pointer (TOP) are not affected.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
TAG(i) := 11B;
+
+

FPU Flags Affected + ¶ +

+ + + +
C0, C1, C2, C3undefined.
+

Floating-Point Exceptions + ¶ +

+

None

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/ficom.ficomp.html b/x86/ficom.ficomp.html new file mode 100644 index 0000000..3604585 --- /dev/null +++ b/x86/ficom.ficomp.html @@ -0,0 +1,209 @@ + +FICOM/FICOMP + — Compare Integer

FICOM/FICOMP + — Compare Integer

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
DE /2FICOM m16intValidValidCompare ST(0) with m16int.
DA /2FICOM m32intValidValidCompare ST(0) with m32int.
DE /3FICOMP m16intValidValidCompare ST(0) with m16int and pop stack register.
DA /3FICOMP m32intValidValidCompare ST(0) with m32int and pop stack register.
+

Description + ¶ +

+

Compares the value in ST(0) with an integer source operand and sets the condition code flags C0, C2, and C3 in the FPU status word according to the results (see table below). The integer value is converted to double extended-precision floating-point format before the comparison is made.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + +
ConditionC3C2C0
ST(0) > SRC000
ST(0) < SRC001
ST(0) = SRC100
Unordered111
+
Table 3-26. FICOM/FICOMP Results
+

These instructions perform an “unordered comparison.” An unordered comparison also checks the class of the numbers being compared (see “FXAM—Examine Floating-Point” in this chapter). If either operand is a NaN or is in an undefined format, the condition flags are set to “unordered.”

+

The sign of zero is ignored, so that –0.0 = +0.0.

+

The FICOMP instructions pop the register stack following the comparison. To pop the register stack, the processor marks the ST(0) register empty and increments the stack pointer (TOP) by 1.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.
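Since FICOM reports its result only in the FPU condition codes, the classic pattern is to copy them into EFLAGS with FNSTSW AX followed by SAHF (C0 maps to CF, C2 to PF, C3 to ZF) and then branch or SETcc as after an ordinary compare. A minimal C sketch, assuming a GCC-style compiler with inline assembly on an x86 target where SAHF is available; the helper name is illustrative:

/* Returns 1 if a < b, comparing the x87 value against a 32-bit integer in
 * memory with FICOMP and testing C0 (copied into CF by SAHF). */
static int x87_less_than_int(double a, int b)
{
    int below;
    __asm__ ("fldl %1\n\t"        /* ST(0) = a                       */
             "ficompl %2\n\t"     /* compare ST(0) with b, pop       */
             "fnstsw %%ax\n\t"    /* AX = FPU status word            */
             "sahf\n\t"           /* CF=C0, PF=C2, ZF=C3             */
             "setb %b0\n\t"       /* CF=1 means ST(0) < SRC          */
             "movzbl %b0, %0"
             : "=q" (below)
             : "m" (a), "m" (b)
             : "cc", "eax", "st");
    return below;
}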

+

Operation + ¶ +

+
CASE (relation of operands) OF
+    ST(0) > SRC:
+            C3, C2, C0 := 000;
+    ST(0) < SRC:
+            C3, C2, C0 := 001;
+    ST(0) = SRC:
+            C3, C2, C0 := 100;
+    Unordered:
+            C3, C2, C0 := 111;
+ESAC;
+IF Instruction = FICOMP
+    THEN
+        PopRegisterStack;
+FI;
+
+

FPU Flags Affected + ¶ +

+ + + + + + +
C1Set to 0.
C0, C2, C3See table on previous page.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + +
#ISStack underflow occurred.
#IAOne or both operands are NaN values or have unsupported formats.
#DOne or both operands are denormal values.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/fild.html b/x86/fild.html new file mode 100644 index 0000000..af4a096 --- /dev/null +++ b/x86/fild.html @@ -0,0 +1,153 @@ + +FILD + — Load Integer

FILD + — Load Integer

+ + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
DF /0FILD m16intValidValidPush m16int onto the FPU register stack.
DB /0FILD m32intValidValidPush m32int onto the FPU register stack.
DF /5FILD m64intValidValidPush m64int onto the FPU register stack.
+

Description + ¶ +

+

Converts the signed-integer source operand into double extended-precision floating-point format and pushes the value onto the FPU register stack. The source operand can be a word, doubleword, or quadword integer. It is loaded without rounding errors. The sign of the source operand is preserved.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.
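The exactness of the conversion can be seen from plain C on implementations where long double is the x87 double extended format (a plausible but compiler-dependent assumption): a 64-bit integer that does not fit in a double's 53-bit significand still survives a round trip through long double.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int64_t v = (INT64_C(1) << 62) + 1;  /* needs 63 significant bits    */
    double d = (double)v;                /* rounded: 53-bit significand  */
    long double x = (long double)v;      /* FILD-like: exact on the x87  */
    printf("%d %d\n", (int64_t)d == v, (int64_t)x == v);   /* prints: 0 1 */
    return 0;
}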

+

Operation + ¶ +

+
TOP := TOP − 1;
+ST(0) := ConvertToDoubleExtendedPrecisionFP(SRC);
+
+

FPU Flags Affected + ¶ +

+ + + + + + +
C1Set to 1 if stack overflow occurred; set to 0 otherwise.
C0, C2, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + +
#ISStack overflow occurred.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/fincstp.html b/x86/fincstp.html new file mode 100644 index 0000000..b387536 --- /dev/null +++ b/x86/fincstp.html @@ -0,0 +1,72 @@ + +FINCSTP + — Increment Stack-Top Pointer

FINCSTP + — Increment Stack-Top Pointer

+ + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D9 F7FINCSTPValidValidIncrement the TOP field in the FPU status register.
+

Description + ¶ +

+

Adds one to the TOP field of the FPU status word (increments the top-of-stack pointer). If the TOP field contains a 7, it is set to 0. The effect of this instruction is to rotate the stack by one position. The contents of the FPU data registers and tag register are not affected. This operation is not equivalent to popping the stack, because the tag for the previous top-of-stack register is not marked empty.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
IF TOP = 7
+    THEN TOP := 0;
+    ELSE TOP := TOP + 1;
+FI;
+
+

FPU Flags Affected + ¶ +

+

The C1 flag is set to 0. The C0, C2, and C3 flags are undefined.

+

Floating-Point Exceptions + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/finit.fninit.html b/x86/finit.fninit.html new file mode 100644 index 0000000..6733bcb --- /dev/null +++ b/x86/finit.fninit.html @@ -0,0 +1,94 @@ + +FINIT/FNINIT + — Initialize Floating-Point Unit

FINIT/FNINIT + — Initialize Floating-Point Unit

+ + + + + + + + + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
9B DB E3FINITValidValidInitialize FPU after checking for pending unmasked floating-point exceptions.
DB E3FNINIT1ValidValidInitialize FPU without checking for pending unmasked floating-point exceptions.
+
+

1. See IA-32 Architecture Compatibility section below.

+

Description + ¶ +

+

Sets the FPU control, status, tag, instruction pointer, and data pointer registers to their default states. The FPU control word is set to 037FH (round to nearest, all exceptions masked, 64-bit precision). The status word is cleared (no exception flags set, TOP is set to 0). The data registers in the register stack are left unchanged, but they are all tagged as empty (11B). Both the instruction and data pointers are cleared.

+

The FINIT instruction checks for and handles any pending unmasked floating-point exceptions before performing the initialization; the FNINIT instruction does not.
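The default state described above can be checked directly after an FNINIT. A minimal C sketch, assuming a GCC-style compiler with inline assembly on an x86 target:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t cw;
    __asm__ volatile ("fninit\n\t"        /* reset the x87 FPU            */
                      "fnstcw %0"         /* read back the control word   */
                      : "=m" (cw));
    printf("%#06x\n", cw);                /* expected: 0x037f             */
    return 0;
}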

+

The assembler issues two instructions for the FINIT instruction (an FWAIT instruction followed by an FNINIT instruction), and the processor executes each of these instructions separately. If an exception is generated for either of these instructions, the saved EIP points to the instruction that caused the exception.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

IA-32 Architecture Compatibility + ¶ +

+

When operating a Pentium or Intel486 processor in MS-DOS compatibility mode, it is possible (under unusual circumstances) for an FNINIT instruction to be interrupted prior to being executed to handle a pending FPU exception. See the section titled “No-Wait FPU Instructions Can Get FPU Interrupt in Window” in Appendix D of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for a description of these circumstances. An FNINIT instruction cannot be interrupted in this way on later Intel processors, except for the Intel Quark™ X1000 processor.

+

In the Intel387 math coprocessor, the FINIT/FNINIT instruction does not clear the instruction and data pointers.

+

This instruction affects only the x87 FPU. It does not affect the XMM and MXCSR registers.

+

Operation + ¶ +

+
FPUControlWord := 037FH;
+FPUStatusWord := 0;
+FPUTagWord := FFFFH;
+FPUDataPointer := 0;
+FPUInstructionPointer := 0;
+FPULastInstructionOpcode := 0;
+
+

FPU Flags Affected + ¶ +

+ + + +
C0, C1, C2, C3set to 0.
+

Floating-Point Exceptions + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/fist.fistp.html b/x86/fist.fistp.html new file mode 100644 index 0000000..d1c771d --- /dev/null +++ b/x86/fist.fistp.html @@ -0,0 +1,222 @@ + +FIST/FISTP + — Store Integer

FIST/FISTP + — Store Integer

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
DF /2FIST m16intValidValidStore ST(0) in m16int.
DB /2FIST m32intValidValidStore ST(0) in m32int.
DF /3FISTP m16intValidValidStore ST(0) in m16int and pop register stack.
DB /3FISTP m32intValidValidStore ST(0) in m32int and pop register stack.
DF /7FISTP m64intValidValidStore ST(0) in m64int and pop register stack.
+

Description + ¶ +

+

The FIST instruction converts the value in the ST(0) register to a signed integer and stores the result in the destination operand. Values can be stored in word or doubleword integer format. The destination operand specifies the address where the first byte of the destination value is to be stored.

+

The FISTP instruction performs the same operation as the FIST instruction and then pops the register stack. To pop the register stack, the processor marks the ST(0) register as empty and increments the stack pointer (TOP) by 1. The FISTP instruction also stores values in quadword integer format.

+

The following table shows the results obtained when storing various classes of numbers in integer format.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ST(0)DEST
− ∞ or Value Too Large for DEST Format*
F ≤ −1−I
−1 < F < −0**
−00
+00
+0 < F < +1**
F ≥ +1+I
+ ∞ or Value Too Large for DEST Format*
NaN*
NOTES: F Means finite floating-point value. I Means integer. * Indicates floating-point invalid-operation (#IA) exception. ** 0 or ±1, depending on the rounding mode.
+
Table 3-27. FIST/FISTP Results
+

If the source value is a non-integral value, it is rounded to an integer value, according to the rounding mode specified by the RC field of the FPU control word.
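The same RC-controlled rounding is visible from plain C through lrint(), which converts using the current rounding direction (on x87 builds, fesetround() updates the RC field; strictly, fenv access without the FENV_ACCESS pragma is implementation-defined, so this is a sketch rather than portable code). Link with -lm.

#include <fenv.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    double v = 2.5;
    fesetround(FE_TONEAREST);  printf("%ld\n", lrint(v));   /* 2 (ties to even) */
    fesetround(FE_DOWNWARD);   printf("%ld\n", lrint(v));   /* 2 */
    fesetround(FE_UPWARD);     printf("%ld\n", lrint(v));   /* 3 */
    fesetround(FE_TOWARDZERO); printf("%ld\n", lrint(v));   /* 2 */
    return 0;
}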

+

If the converted value is too large for the destination format, or if the source operand is an ∞, SNaN, QNAN, or is in an unsupported format, an invalid-arithmetic-operand condition is signaled. If the invalid-operation exception is not masked, an invalid-arithmetic-operand exception (#IA) is generated and no value is stored in the destination operand. If the invalid-operation exception is masked, the integer indefinite value is stored in memory.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
DEST := Integer(ST(0));
+IF Instruction = FISTP
+    THEN
+        PopRegisterStack;
+FI;
+
+

FPU Flags Affected + ¶ +

+ + + + + + + + + + +
C1Set to 0 if stack underflow occurred.
Indicates rounding direction if the inexact exception (#P) is generated: 0 := not roundup; 1 := roundup.
Set to 0 otherwise.
C0, C2, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + + + +
#ISStack underflow occurred.
#IAConverted value is too large for the destination format.
Source operand is an SNaN, QNaN, ±∞, or unsupported format.
#PValue cannot be represented exactly in destination format.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/fisttp.html b/x86/fisttp.html new file mode 100644 index 0000000..49ba999 --- /dev/null +++ b/x86/fisttp.html @@ -0,0 +1,174 @@ + +FISTTP + — Store Integer With Truncation

FISTTP + — Store Integer With Truncation

+ + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
DF /1FISTTP m16intValidValidStore ST(0) in m16int with truncation.
DB /1FISTTP m32intValidValidStore ST(0) in m32int with truncation.
DD /1FISTTP m64intValidValidStore ST(0) in m64int with truncation.
+

Description + ¶ +

+

FISTTP converts the value in ST into a signed integer using truncation (chop) as rounding mode, transfers the result to the destination, and pops ST. FISTTP accepts word, short integer, and long integer destinations.

+

The following table shows the results obtained when storing various classes of numbers in integer format.

+
+ + + + + + + + + + + + + + + + + + + + + +
ST(0)DEST
− ∞ or Value Too Large for DEST Format*
F ≤ −1−I
−1 < F < +10
F ≥ +1+I
+ ∞ or Value Too Large for DEST Format*
NaN*
+
Table 3-28. FISTTP Results
+
+

F Means finite floating-point value.

+

I Means integer.

+

∗ Indicates floating-point invalid-operation (#IA) exception.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.
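The behavior summarized above is the truncating conversion a C cast performs regardless of the current rounding mode, which is why compilers generating x87 code may use FISTTP for float-to-integer casts when SSE3 is available. A plain C illustration:

#include <stdio.h>

int main(void)
{
    printf("%lld %lld\n", (long long)  2.9, (long long) -2.9);   /*  2 -2 */
    return 0;
}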

+

Operation + ¶ +

+
DEST := ST;
+pop ST;
+
+

Flags Affected + ¶ +

+

C1 is cleared; C0, C2, C3 undefined.

+

Numeric Exceptions + ¶ +

+

Invalid, Stack Invalid (stack underflow), Precision.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination is in a nonwritable segment.
For an illegal memory operand effective address in the CS, DS, ES, FS or GS segments.
#SS(0)For an illegal address in the SS segment.
#PF(fault-code)For a page fault.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#NMIf CR0.EM[bit 2] = 1.
If CR0.TS[bit 3] = 1.
#UDIf CPUID.01H:ECX.SSE3[bit 0] = 0.
If the LOCK prefix is used.
+

Real Address Mode Exceptions + ¶ +

+

#GP(0) If any part of the operand would lie outside of the effective address space from 0 to 0FFFFH.

+ + + + + + + + + + +
#NMIf CR0.EM[bit 2] = 1.
If CR0.TS[bit 3] = 1.
#UDIf CPUID.01H:ECX.SSE3[bit 0] = 0.
If the LOCK prefix is used.
+

Virtual 8086 Mode Exceptions + ¶ +

+

#GP(0) If any part of the operand would lie outside of the effective address space from 0 to 0FFFFH.

+ + + + + + + + + + + + + + + + +
#NMIf CR0.EM[bit 2] = 1.
If CR0.TS[bit 3] = 1.
#UDIf CPUID.01H:ECX.SSE3[bit 0] = 0.
If the LOCK prefix is used.
#PF(fault-code)For a page fault.
#AC(0)For unaligned memory reference if the current privilege is 3.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/fld.html b/x86/fld.html new file mode 100644 index 0000000..28fbce2 --- /dev/null +++ b/x86/fld.html @@ -0,0 +1,179 @@ + +FLD + — Load Floating-Point Value

FLD + — Load Floating-Point Value

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D9 /0FLD m32fpValidValidPush m32fp onto the FPU register stack.
DD /0FLD m64fpValidValidPush m64fp onto the FPU register stack.
DB /5FLD m80fpValidValidPush m80fp onto the FPU register stack.
D9 C0+iFLD ST(i)ValidValidPush ST(i) onto the FPU register stack.
+

Description + ¶ +

+

Pushes the source operand onto the FPU register stack. The source operand can be in single precision, double precision, or double extended-precision floating-point format. If the source operand is in single precision or double precision floating-point format, it is automatically converted to the double extended-precision floating-point format before being pushed on the stack.

+

The FLD instruction can also push the value in a selected FPU register [ST(i)] onto the stack. Here, pushing register ST(0) duplicates the stack top.
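Duplicating the top of the stack with FLD ST(0) is a common way to reuse a value, for example to square it without another memory load. A minimal C sketch, assuming a GCC-style compiler with inline assembly on an x86 target; the helper name is illustrative:

static double x87_square(double x)
{
    double r;
    __asm__ ("fldl %1\n\t"       /* ST(0) = x                     */
             "fld %%st(0)\n\t"   /* duplicate: ST(0) = ST(1) = x  */
             "fmulp\n\t"         /* ST(0) = x * x (and pop)       */
             "fstpl %0"
             : "=m" (r)
             : "m" (x)
             : "st", "st(1)");
    return r;
}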

+
+

When the FLD instruction loads a denormal value and the denormal-operand exception is unmasked (the DM bit in the FPU control word is clear), the exception is flagged but the value is still pushed onto the x87 stack.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
IF SRC is ST(i)
+    THEN
+        temp := ST(i);
+FI;
+TOP := TOP − 1;
+IF SRC is memory-operand
+    THEN
+        ST(0) := ConvertToDoubleExtendedPrecisionFP(SRC);
+    ELSE (* SRC is ST(i) *)
+        ST(0) := temp;
+FI;
+
+

FPU Flags Affected + ¶ +

+ + + + + + +
C1Set to 1 if stack overflow occurred; otherwise, set to 0.
C0, C2, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + +
#ISStack underflow or overflow occurred.
#IASource operand is an SNaN. Does not occur if the source operand is in double extended-precision floating-point format (FLD m80fp or FLD ST(i)).
#DSource operand is a denormal value. Does not occur if the source operand is in double extended-precision floating-point format.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If destination is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/fld1.fldl2t.fldl2e.fldpi.fldlg2.fldln2.fldz.html b/x86/fld1.fldl2t.fldl2e.fldpi.fldlg2.fldln2.fldz.html new file mode 100644 index 0000000..592cbdb --- /dev/null +++ b/x86/fld1.fldl2t.fldl2e.fldpi.fldlg2.fldln2.fldz.html @@ -0,0 +1,132 @@ + +FLD1/FLDL2T/FLDL2E/FLDPI/FLDLG2/FLDLN2/FLDZ + — Load Constant

FLD1/FLDL2T/FLDL2E/FLDPI/FLDLG2/FLDLN2/FLDZ + — Load Constant

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode*Instruction64-Bit ModeCompat/Leg ModeDescription
D9 E8FLD1ValidValidPush +1.0 onto the FPU register stack.
D9 E9FLDL2TValidValidPush log₂10 onto the FPU register stack.
D9 EAFLDL2EValidValidPush log₂e onto the FPU register stack.
D9 EBFLDPIValidValidPush π onto the FPU register stack.
D9 ECFLDLG2ValidValidPush log₁₀2 onto the FPU register stack.
D9 EDFLDLN2ValidValidPush logₑ2 onto the FPU register stack.
D9 EEFLDZValidValidPush +0.0 onto the FPU register stack.
+
+

* See IA-32 Architecture Compatibility section below.

+

Description + ¶ +

+

Push one of seven commonly used constants (in double extended-precision floating-point format) onto the FPU register stack. The constants that can be loaded with these instructions include +1.0, +0.0, log₂10, log₂e, π, log₁₀2, and logₑ2. For each constant, an internal 66-bit constant is rounded (as specified by the RC field in the FPU control word) to double extended-precision floating-point format. The inexact-result exception (#P) is not generated as a result of the rounding, nor is the C1 flag set in the x87 FPU status word if the value is rounded up.

+

See the section titled “Approximation of Pi” in Chapter 8 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for a description of the π constant.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.
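The rounded constant can be read back at full width. A minimal C sketch, assuming a GCC-style compiler with inline assembly and an 80-bit long double; the helper name is illustrative:

#include <stdio.h>

static long double x87_pi(void)
{
    long double r;
    __asm__ ("fldpi\n\t"     /* push the internal pi constant          */
             "fstpt %0"      /* store it as an 80-bit value and pop    */
             : "=m" (r));
    return r;
}

int main(void)
{
    printf("%.20Lf\n", x87_pi());
    return 0;
}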

+

IA-32 Architecture Compatibility + ¶ +

+

When the RC field is set to round-to-nearest, the FPU produces the same constants that are produced by the Intel 8087 and Intel 287 math coprocessors.

+

Operation + ¶ +

+
TOP := TOP − 1;
+ST(0) := CONSTANT;
+
+

FPU Flags Affected + ¶ +

+ + + + + + +
C1Set to 1 if stack overflow occurred; otherwise, set to 0.
C0, C2, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + +
#ISStack overflow occurred.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/fldcw.html b/x86/fldcw.html new file mode 100644 index 0000000..4ea2da4 --- /dev/null +++ b/x86/fldcw.html @@ -0,0 +1,135 @@ + +FLDCW + — Load x87 FPU Control Word

FLDCW + — Load x87 FPU Control Word

+ + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D9 /5FLDCW m2byteValidValidLoad FPU control word from m2byte.
+

Description + ¶ +

+

Loads the 16-bit source operand into the FPU control word. The source operand is a memory location. This instruction is typically used to establish or change the FPU’s mode of operation.
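A typical use is switching the rounding-control (RC) field, bits 10-11 of the control word. A minimal C sketch, assuming a GCC-style compiler with inline assembly on an x86 target; the helper names are illustrative and the caller is expected to restore the returned word afterwards:

#include <stdint.h>

/* Select round-toward-zero (RC = 11B) and return the previous control word. */
static uint16_t x87_set_round_to_zero(void)
{
    uint16_t old_cw, new_cw;
    __asm__ volatile ("fnstcw %0" : "=m" (old_cw));   /* read current CW  */
    new_cw = (uint16_t)(old_cw | 0x0C00);             /* set both RC bits */
    __asm__ volatile ("fldcw %0" : : "m" (new_cw));   /* load new CW      */
    return old_cw;
}

static void x87_restore_cw(uint16_t cw)
{
    __asm__ volatile ("fldcw %0" : : "m" (cw));
}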

+

If one or more exception flags are set in the FPU status word prior to loading a new FPU control word and the new control word unmasks one or more of those exceptions, a floating-point exception will be generated upon execution of the next floating-point instruction (except for the no-wait floating-point instructions, see the section titled “Software Exception Handling” in Chapter 8 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1). To avoid raising exceptions when changing FPU operating modes, clear any pending exceptions (using the FCLEX or FNCLEX instruction) before loading the new control word.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
FPUControlWord := SRC;
+
+

FPU Flags Affected + ¶ +

+ + + +
C0, C1, C2, C3undefined.
+

Floating-Point Exceptions + ¶ +

+

None; however, this operation might unmask a pending exception in the FPU status word. That exception is then generated upon execution of the next “waiting” floating-point instruction.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/fldenv.html b/x86/fldenv.html new file mode 100644 index 0000000..217fddf --- /dev/null +++ b/x86/fldenv.html @@ -0,0 +1,140 @@ + +FLDENV + — Load x87 FPU Environment

FLDENV + — Load x87 FPU Environment

+ + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D9 /4FLDENV m14/28byteValidValidLoad FPU environment from m14byte or m28byte.
+

Description + ¶ +

+

Loads the complete x87 FPU operating environment from memory into the FPU registers. The source operand specifies the first byte of the operating-environment data in memory. This data is typically written to the specified memory location by a FSTENV or FNSTENV instruction.

+

The FPU operating environment consists of the FPU control word, status word, tag word, instruction pointer, data pointer, and last opcode. Figures 8-9 through 8-12 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, show the layout in memory of the loaded environment, depending on the operating mode of the processor (protected or real) and the current operand-size attribute (16-bit or 32-bit). In virtual-8086 mode, the real mode layouts are used.

+

The FLDENV instruction should be executed in the same operating mode as the corresponding FSTENV/FNSTENV instruction.
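A matched FNSTENV/FLDENV pair can be used to save and restore the environment around code that changes masks or flags. A minimal C sketch, assuming a GCC-style compiler with inline assembly and the 28-byte 32-bit protected-mode layout; the type and helper names are illustrative:

#include <stdint.h>

typedef struct { uint8_t bytes[28]; } fpu_env_t;   /* 28-byte protected-mode layout */

static void fpu_save_env(fpu_env_t *e)
{
    /* Note: FNSTENV also masks all FPU exceptions until the environment
     * is reloaded, so restore it promptly. */
    __asm__ volatile ("fnstenv %0" : "=m" (*e));
}

static void fpu_restore_env(const fpu_env_t *e)
{
    __asm__ volatile ("fldenv %0" : : "m" (*e));
}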

+

If one or more unmasked exception flags are set in the new FPU status word, a floating-point exception will be generated upon execution of the next floating-point instruction (except for the no-wait floating-point instructions, see the section titled “Software Exception Handling” in Chapter 8 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1). To avoid generating exceptions when loading a new environment, clear all the exception flags in the FPU status word that is being loaded.

+

If a page or limit fault occurs during the execution of this instruction, the state of the x87 FPU registers as seen by the fault handler may be different than the state being loaded from memory. In such situations, the fault handler should ignore the status of the x87 FPU registers, handle the fault, and return. The FLDENV instruction will then complete the loading of the x87 FPU registers with no resulting context inconsistency.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
FPUControlWord := SRC[FPUControlWord];
+FPUStatusWord := SRC[FPUStatusWord];
+FPUTagWord := SRC[FPUTagWord];
+FPUDataPointer := SRC[FPUDataPointer];
+FPUInstructionPointer := SRC[FPUInstructionPointer];
+FPULastInstructionOpcode := SRC[FPULastInstructionOpcode];
+
+

FPU Flags Affected + ¶ +

+

The C0, C1, C2, C3 flags are loaded.

+

Floating-Point Exceptions + ¶ +

+

None; however, if an unmasked exception is loaded in the status word, it is generated upon execution of the next “waiting” floating-point instruction.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/fmul.fmulp.fimul.html b/x86/fmul.fmulp.fimul.html new file mode 100644 index 0000000..d9c7500 --- /dev/null +++ b/x86/fmul.fmulp.fimul.html @@ -0,0 +1,317 @@ + +FMUL/FMULP/FIMUL + — Multiply

FMUL/FMULP/FIMUL + — Multiply

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D8 /1FMUL m32fpValidValidMultiply ST(0) by m32fp and store result in ST(0).
DC /1FMUL m64fpValidValidMultiply ST(0) by m64fp and store result in ST(0).
D8 C8+iFMUL ST(0), ST(i)ValidValidMultiply ST(0) by ST(i) and store result in ST(0).
DC C8+iFMUL ST(i), ST(0)ValidValidMultiply ST(i) by ST(0) and store result in ST(i).
DE C8+iFMULP ST(i), ST(0)ValidValidMultiply ST(i) by ST(0), store result in ST(i), and pop the register stack.
DE C9FMULPValidValidMultiply ST(1) by ST(0), store result in ST(1), and pop the register stack.
DA /1FIMUL m32intValidValidMultiply ST(0) by m32int and store result in ST(0).
DE /1FIMUL m16intValidValidMultiply ST(0) by m16int and store result in ST(0).
+

Description + ¶ +

+

Multiplies the destination and source operands and stores the product in the destination location. The destination operand is always an FPU data register; the source operand can be an FPU data register or a memory location. Source operands in memory can be in single precision or double precision floating-point format or in word or doubleword integer format.

+

The no-operand version of the instruction multiplies the contents of the ST(1) register by the contents of the ST(0) register and stores the product in the ST(1) register. The one-operand version multiplies the contents of the ST(0) register by the contents of a memory location (either a floating-point or an integer value) and stores the product in the ST(0) register. The two-operand version multiplies the contents of the ST(0) register by the contents of the ST(i) register, or vice versa, with the result being stored in the register specified with the first operand (the destination operand).

+

The FMULP instructions perform the additional operation of popping the FPU register stack after storing the product. To pop the register stack, the processor marks the ST(0) register as empty and increments the stack pointer (TOP) by 1. The no-operand version of the floating-point multiply instructions always results in the register stack being popped. In some assemblers, the mnemonic for this instruction is FMUL rather than FMULP.
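
For instance, a minimal sketch (the memory operand names a, b, and prod are illustrative assumptions) showing how FMULP leaves a single product on the stack:

FLD QWORD PTR [a]      ; ST(0) = a
FLD QWORD PTR [b]      ; ST(0) = b, ST(1) = a
FMULP ST(1), ST(0)     ; ST(1) := ST(1) * ST(0), then pop; the product a*b is now in ST(0)
FSTP QWORD PTR [prod]  ; store the product and pop, restoring the original stack depth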

+

The FIMUL instructions convert an integer source operand to double extended-precision floating-point format before performing the multiplication.
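
A brief sketch of this form (the operand names x, n, and y are illustrative assumptions):

FLD QWORD PTR [x]      ; ST(0) = x (double precision value)
FIMUL WORD PTR [n]     ; n is converted to double extended precision, then ST(0) := ST(0) * n
FSTP QWORD PTR [y]     ; store the product and pop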

+

The sign of the result is always the exclusive-OR of the source signs, even if one or more of the values being multiplied is 0 or ∞. When the source operand is an integer 0, it is treated as a +0.

+

The following table shows the results obtained when multiplying various classes of numbers, assuming that neither overflow nor underflow occurs.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
DEST
SRC−∞−F−0+0+F+∞NaN
−∞+∞+∞**−∞−∞NaN
−F+∞+F+0−0−F−∞NaN
−I+∞+F+0−0−F−∞NaN
−0*+0+0−0−0*NaN
+0*−0−0+0+0*NaN
+I−∞−F−0+0+F+∞NaN
+F−∞−F−0+0+F+∞NaN
+∞−∞−∞**+∞+∞NaN
NaNNaNNaNNaNNaNNaNNaNNaN
+
Table 3-29. FMUL/FMULP/FIMUL Results
+
+

F Means finite floating-point value.

+

I Means Integer.

+

* Indicates invalid-arithmetic-operand (#IA) exception.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
IF Instruction = FIMUL
+    THEN
+        DEST := DEST ∗ ConvertToDoubleExtendedPrecisionFP(SRC);
+    ELSE (* Source operand is floating-point value *)
+        DEST := DEST ∗ SRC;
+FI;
+IF Instruction = FMULP
+    THEN
+        PopRegisterStack;
+FI;
+
+

FPU Flags Affected + ¶ +

+ + + + + + + + +
C1Set to 0 if stack underflow occurred.
Set if result was rounded up; cleared otherwise.
C0, C2, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#ISStack underflow occurred.
#IAOperand is an SNaN value or unsupported format.
One operand is ±0 and the other is ±∞.
#DSource operand is a denormal value.
#UResult is too small for destination format.
#OResult is too large for destination format.
#PValue cannot be represented exactly in destination format.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/fnop.html b/x86/fnop.html new file mode 100644 index 0000000..2ea05db --- /dev/null +++ b/x86/fnop.html @@ -0,0 +1,67 @@ + +FNOP + — No Operation

FNOP + — No Operation

+ + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D9 D0FNOPValidValidNo operation is performed.
+

Description + ¶ +

+

Performs no FPU operation. This instruction takes up space in the instruction stream but does not affect the FPU or machine context, except the EIP register and the FPU Instruction Pointer.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

FPU Flags Affected + ¶ +

+ + + +
C0, C1, C2, C3undefined.
+

Floating-Point Exceptions + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/fpatan.html b/x86/fpatan.html new file mode 100644 index 0000000..1a75156 --- /dev/null +++ b/x86/fpatan.html @@ -0,0 +1,183 @@ + +FPATAN + — Partial Arctangent

FPATAN + — Partial Arctangent

+ +

Opcode1

+ + + + + + + + + + + + +
Instruction64-Bit ModeCompat/Leg ModeDescription
D9 F3FPATANValidValidReplace ST(1) with arctan(ST(1)/ST(0)) and pop the register stack.
+

1. See IA-32 Architecture Compatibility section below.

+

Description + ¶ +

+

Computes the arctangent of the source operand in register ST(1) divided by the source operand in register ST(0), stores the result in ST(1), and pops the FPU register stack. The result in register ST(0) has the same sign as the source operand ST(1) and a magnitude less than +π.

+

The FPATAN instruction returns the angle between the X axis and the line from the origin to the point (X,Y), where Y (the ordinate) is ST(1) and X (the abscissa) is ST(0). The angle depends on the sign of X and Y independently, not just on the sign of the ratio Y/X. This is because a point (−X,Y) is in the second quadrant, resulting in an angle between π/2 and π, while a point (X,−Y) is in the fourth quadrant, resulting in an angle between 0 and −π/2. A point (−X,−Y) is in the third quadrant, giving an angle between −π/2 and −π.
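
A minimal sketch of the usual two-argument arctangent idiom (operand names are illustrative); because FPATAN considers the signs of both inputs, the full range of angles from −π to +π is produced:

FLD QWORD PTR [y]      ; ST(0) = Y (ordinate)
FLD QWORD PTR [x]      ; ST(0) = X (abscissa), ST(1) = Y
FPATAN                 ; ST(1) := arctan(Y/X), then pop; the angle is left in ST(0)
FSTP QWORD PTR [angle] ; store the result and pop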

+

The following table shows the results obtained when computing the arctangent of various classes of numbers, assuming that underflow does not occur.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ST(0)
ST(1)−∞−F−0+0+F+∞NaN
−∞− 3π/4*− π/2− π/2− π/2− π/2− π/4*NaN
−F−π−π to −π/2−π/2−π/2−π/2 to −0−0NaN
−0−π−π−π*−0*−0−0NaN
+0+π+π+π*+0*+0+0NaN
+F+π+π to +π/2+π/2+π/2+π/2 to +0+0NaN
+∞+3π/4*+π/2+π/2+π/2+ π/2+ π/4*NaN
NaNNaNNaNNaNNaNNaNNaNNaN
+
Table 3-30. FPATAN Results
+
+

F Means finite floating-point value. * Table 8-10 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, specifies that the ratios 0/0 and ∞/∞ generate the floating-point invalid arithmetic-operation exception and, if this exception is masked, the floating-point QNaN indefinite value is returned. With the FPATAN instruction, the 0/0 or ∞/∞ value is actually not calculated using division. Instead, the arctangent of the two variables is derived from a standard mathematical formulation that is generalized to allow complex numbers as arguments. In this complex variable formulation, arctangent(0,0) etc. has well-defined values. These values are needed to develop a library to compute transcendental functions with complex arguments, based on the FPU functions that only allow floating-point values as arguments.

+

There is no restriction on the range of source operands that FPATAN can accept.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

IA-32 Architecture Compatibility + ¶ +

+

The source operands for this instruction are restricted for the 80287 math coprocessor to the following range:

+

0 ≤ |ST(1)| < |ST(0)| < +∞

+

Operation + ¶ +

+
ST(1) := arctan(ST(1) / ST(0));
+PopRegisterStack;
+
+

FPU Flags Affected + ¶ +

+ + + + + + + + +
C1Set to 0 if stack underflow occurred.
Set if result was rounded up; cleared otherwise.
C0, C2, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#ISStack underflow occurred.
#IASource operand is an SNaN value or unsupported format.
#DSource operand is a denormal value.
#UResult is too small for destination format.
#PValue cannot be represented exactly in destination format.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/fprem.html b/x86/fprem.html new file mode 100644 index 0000000..8515460 --- /dev/null +++ b/x86/fprem.html @@ -0,0 +1,194 @@ + +FPREM + — Partial Remainder

FPREM + — Partial Remainder

+ + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D9 F8FPREMValidValidReplace ST(0) with the remainder obtained from dividing ST(0) by ST(1).
+

Description + ¶ +

+

Computes the remainder obtained from dividing the value in the ST(0) register (the dividend) by the value in the ST(1) register (the divisor or modulus), and stores the result in ST(0). The remainder represents the following value:

+

Remainder := ST(0) − (Q ∗ ST(1))

+

Here, Q is an integer value that is obtained by truncating the floating-point number quotient of [ST(0) / ST(1)] toward zero. The sign of the remainder is the same as the sign of the dividend. The magnitude of the remainder is less than that of the modulus, unless a partial remainder was computed (as described below).

+

This instruction produces an exact result; the inexact-result exception does not occur and the rounding control has no effect. The following table shows the results obtained when computing the remainder of various classes of numbers, assuming that underflow does not occur.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ST(1)
ST(0)-∞-F-0+0+F+∞NaN
-∞******NaN
-FST(0)-F or -0**-F or -0ST(0)NaN
-0-0-0**-0-0NaN
+0+0+0**+0+0NaN
+FST(0)+F or +0**+F or +0ST(0)NaN
+∞******NaN
NaNNaNNaNNaNNaNNaNNaNNaN
+
Table 3-31. FPREM Results
+
+

F Means finite floating-point value.

+

* Indicates floating-point invalid-arithmetic-operand (#IA) exception.

+

When the result is 0, its sign is the same as that of the dividend. When the modulus is ∞, the result is equal to the value in ST(0).

+

The FPREM instruction does not compute the remainder specified in IEEE Std 754. The IEEE specified remainder can be computed with the FPREM1 instruction. The FPREM instruction is provided for compatibility with the Intel 8087 and Intel287 math coprocessors.

+

The FPREM instruction gets its name “partial remainder” because of the way it computes the remainder. This instruction arrives at a remainder through iterative subtraction. It can, however, reduce the exponent of ST(0) by no more than 63 in one execution of the instruction. If the instruction succeeds in producing a remainder that is less than the modulus, the operation is complete and the C2 flag in the FPU status word is cleared. Otherwise, C2 is set, and the result in ST(0) is called the partial remainder. The exponent of the partial remainder will be less than the exponent of the original dividend by at least 32. Software can re-execute the instruction (using the partial remainder in ST(0) as the dividend) until C2 is cleared. (Note that while executing such a remainder-computation loop, a higher-priority interrupting routine that needs the FPU can force a context switch in-between the instructions in the loop.)
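
A sketch of such a reduction loop, assuming the dividend is in ST(0) and the modulus in ST(1) (C2 is bit 10 of the status word):

reduce:
    FPREM              ; ST(0) := partial remainder of ST(0) / ST(1)
    FNSTSW AX          ; copy the FPU status word into AX
    TEST AX, 0400H     ; test C2 (bit 10)
    JNZ reduce         ; repeat until the reduction is complete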

+

An important use of the FPREM instruction is to reduce the arguments of periodic functions. When reduction is complete, the instruction stores the three least-significant bits of the quotient in the C3, C1, and C0 flags of the FPU status word. This information is important in argument reduction for the tangent function (using a modulus of π/4), because it locates the original angle in the correct one of eight sectors of the unit circle.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
D := exponent(ST(0)) – exponent(ST(1));
+IF D < 64
+    THEN
+        Q := Integer(TruncateTowardZero(ST(0) / ST(1)));
+        ST(0) := ST(0) – (ST(1) ∗ Q);
+        C2 := 0;
+        C0, C3, C1 := LeastSignificantBits(Q); (* Q2, Q1, Q0 *)
+    ELSE
+        C2 := 1;
+        N := An implementation-dependent number between 32 and 63;
+        QQ := Integer(TruncateTowardZero((ST(0) / ST(1)) / 2^(D − N)));
+        ST(0) := ST(0) – (ST(1) ∗ QQ ∗ 2^(D − N));
+FI;
+
+

FPU Flags Affected + ¶ +

+ + + + + + + + + + + + +
C0Set to bit 2 (Q2) of the quotient.
C1Set to 0 if stack underflow occurred; otherwise, set to least significant bit of quotient (Q0).
C2Set to 0 if reduction complete; set to 1 if incomplete.
C3Set to bit 1 (Q1) of the quotient.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + + + + +
#ISStack underflow occurred.
#IASource operand is an SNaN value, modulus is 0, dividend is ∞, or unsupported format.
#DSource operand is a denormal value.
#UResult is too small for destination format.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/fprem1.html b/x86/fprem1.html new file mode 100644 index 0000000..8cdebb6 --- /dev/null +++ b/x86/fprem1.html @@ -0,0 +1,193 @@ + +FPREM1 + — Partial Remainder

FPREM1 + — Partial Remainder

+ + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D9 F5FPREM1ValidValidReplace ST(0) with the IEEE remainder obtained from dividing ST(0) by ST(1).
+

Description + ¶ +

+

Computes the IEEE remainder obtained from dividing the value in the ST(0) register (the dividend) by the value in the ST(1) register (the divisor or modulus), and stores the result in ST(0). The remainder represents the following value:

+

Remainder := ST(0) − (Q ∗ ST(1))

+

Here, Q is an integer value that is obtained by rounding the floating-point number quotient of [ST(0) / ST(1)] toward the nearest integer value. The magnitude of the remainder is less than or equal to half the magnitude of the modulus, unless a partial remainder was computed (as described below).
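
For example (an illustrative case, not drawn from this reference): with ST(0) = 5.0 and ST(1) = 3.0, the quotient 5/3 ≈ 1.67 rounds to the nearest integer 2, so FPREM1 returns 5 − 2 × 3 = −1, whereas FPREM truncates the quotient to 1 and returns 5 − 1 × 3 = +2.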

+

This instruction produces an exact result; the precision (inexact) exception does not occur and the rounding control has no effect. The following table shows the results obtained when computing the remainder of various classes of numbers, assuming that underflow does not occur.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ST(1)
ST(0)−∞−F−0+0+F+∞NaN
−∞******NaN
−FST(0)±F or −0**± F or − 0ST(0)NaN
−0−0−0**−0-0NaN
+0+0+0**+0+0NaN
+FST(0)± F or + 0**± F or + 0ST(0)NaN
+∞******NaN
NaNNaNNaNNaNNaNNaNNaNNaN
+
Table 3-32. FPREM1 Results
+
+

F Means finite floating-point value.

+

* Indicates floating-point invalid-arithmetic-operand (#IA) exception.

+

When the result is 0, its sign is the same as that of the dividend. When the modulus is ∞, the result is equal to the value in ST(0).

+

The FPREM1 instruction computes the remainder specified in IEEE Standard 754. This instruction operates differently from the FPREM instruction in the way that it rounds the quotient of ST(0) divided by ST(1) to an integer (see the “Operation” section below).

+

Like the FPREM instruction, FPREM1 computes the remainder through iterative subtraction, but can reduce the exponent of ST(0) by no more than 63 in one execution of the instruction. If the instruction succeeds in producing a remainder that is less than one half the modulus, the operation is complete and the C2 flag in the FPU status word is cleared. Otherwise, C2 is set, and the result in ST(0) is called the partial remainder. The exponent of the partial remainder will be less than the exponent of the original dividend by at least 32. Software can re-execute the instruction (using the partial remainder in ST(0) as the dividend) until C2 is cleared. (Note that while executing such a remainder-computation loop, a higher-priority interrupting routine that needs the FPU can force a context switch in-between the instructions in the loop.)

+

An important use of the FPREM1 instruction is to reduce the arguments of periodic functions. When reduction is complete, the instruction stores the three least-significant bits of the quotient in the C3, C1, and C0 flags of the FPU status word. This information is important in argument reduction for the tangent function (using a modulus of π/4), because it locates the original angle in the correct one of eight sectors of the unit circle.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
D := exponent(ST(0)) – exponent(ST(1));
+IF D < 64
+    THEN
+        Q := Integer(RoundTowardNearestInteger(ST(0) / ST(1)));
+        ST(0) := ST(0) – (ST(1) ∗ Q);
+        C2 := 0;
+        C0, C3, C1 := LeastSignificantBits(Q); (* Q2, Q1, Q0 *)
+    ELSE
+        C2 := 1;
+        N := An implementation-dependent number between 32 and 63;
+        QQ := Integer(TruncateTowardZero((ST(0) / ST(1)) / 2^(D − N)));
+        ST(0) := ST(0) – (ST(1) ∗ QQ ∗ 2^(D − N));
+FI;
+
+

FPU Flags Affected + ¶ +

+ + + + + + + + + + + + +
C0Set to bit 2 (Q2) of the quotient.
C1Set to 0 if stack underflow occurred; otherwise, set to least significant bit of quotient (Q0).
C2Set to 0 if reduction complete; set to 1 if incomplete.
C3Set to bit 1 (Q1) of the quotient.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + + + + +
#ISStack underflow occurred.
#IASource operand is an SNaN value, modulus (divisor) is 0, dividend is ∞, or unsupported format.
#DSource operand is a denormal value.
#UResult is too small for destination format.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/fptan.html b/x86/fptan.html new file mode 100644 index 0000000..30da84f --- /dev/null +++ b/x86/fptan.html @@ -0,0 +1,136 @@ + +FPTAN + — Partial Tangent

FPTAN + — Partial Tangent

+ + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D9 F2FPTANValidValidReplace ST(0) with its approximate tangent and push 1 onto the FPU stack.
+

Description + ¶ +

+

Computes the approximate tangent of the source operand in register ST(0), stores the result in ST(0), and pushes a 1.0 onto the FPU register stack. The source operand must be given in radians and must be less than ±2^63. The following table shows the unmasked results obtained when computing the partial tangent of various classes of numbers, assuming that underflow does not occur.

+
+ + + + + + + + + + + + + + + + + + + + + + + + +
ST(0) SRCST(0) DEST
−∞*
−F− F to + F
−0-0
+0+0
+F− F to + F
+∞*
NaNNaN
+
Table 3-33. FPTAN Results
+
+

F Means finite floating-point value.

+

* Indicates floating-point invalid-arithmetic-operand (#IA) exception.

+

If the source operand is outside the acceptable range, the C2 flag in the FPU status word is set, and the value in register ST(0) remains unchanged. The instruction does not raise an exception when the source operand is out of range. It is up to the program to check the C2 flag for out-of-range conditions. Source values outside the range −2^63 to +2^63 can be reduced to the range of the instruction by subtracting an appropriate integer multiple of 2π. However, even within the range −2^63 to +2^63, inaccurate results can occur because the finite approximation of π used internally for argument reduction is not sufficient in all cases. Therefore, for accurate results it is safe to apply FPTAN only to arguments reduced accurately in software, to a value smaller in absolute value than 3π/8. See the sections titled “Approximation of Pi” and “Transcendental Instruction Accuracy” in Chapter 8 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for a discussion of the proper value to use for π in performing such reductions.

+

The value 1.0 is pushed onto the register stack after the tangent has been computed to maintain compatibility with the Intel 8087 and Intel287 math coprocessors. This operation also simplifies the calculation of other trigonometric functions. For instance, the cotangent (which is the reciprocal of the tangent) can be computed by executing a FDIVR instruction after the FPTAN instruction.
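
A minimal sketch of that cotangent idiom, assuming the angle is already in ST(0) and within range (the pop form FDIVRP is shown here so the stack depth is restored):

FPTAN                  ; ST(1) = tan(angle), ST(0) = 1.0
FDIVRP ST(1), ST(0)    ; ST(1) := ST(0) / ST(1) = 1/tan, then pop; cot(angle) is left in ST(0)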

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
IF ST(0) < 2^63
+    THEN
+        C2 := 0;
+        ST(0) := fptan(ST(0)); // approximation of tan
+        TOP := TOP − 1;
+        ST(0) := 1.0;
+    ELSE (* Source operand is out-of-range *)
+        C2 := 1;
+FI;
+
+

FPU Flags Affected + ¶ +

+ + + + + + + + + + + + +
C1Set to 0 if stack underflow occurred; set to 1 if stack overflow occurred.
Set if result was rounded up; cleared otherwise.
C2Set to 1 if outside range (−2^63 < source operand < +2^63); otherwise, set to 0.
C0, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#ISStack underflow or overflow occurred.
#IASource operand is an SNaN value, ∞, or unsupported format.
#DSource operand is a denormal value.
#UResult is too small for destination format.
#PValue cannot be represented exactly in destination format.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/frndint.html b/x86/frndint.html new file mode 100644 index 0000000..d74a7dd --- /dev/null +++ b/x86/frndint.html @@ -0,0 +1,90 @@ + +FRNDINT + — Round to Integer

FRNDINT + — Round to Integer

+ + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D9 FCFRNDINTValidValidRound ST(0) to an integer.
+

Description + ¶ +

+

Rounds the source value in the ST(0) register to the nearest integral value, depending on the current rounding mode (setting of the RC field of the FPU control word), and stores the result in ST(0).

+

If the source value is ∞, the value is not changed. If the source value is not an integral value, the floating-point inexact-result exception (#P) is generated.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
ST(0) := RoundToIntegralValue(ST(0));
+
+

FPU Flags Affected + ¶ +

+ + + + + + + + +
C1Set to 0 if stack underflow occurred.
Set if result was rounded up; cleared otherwise.
C0, C2, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + + + + +
#ISStack underflow occurred.
#IASource operand is an SNaN value or unsupported format.
#DSource operand is a denormal value.
#PSource operand is not an integral value.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/frstor.html b/x86/frstor.html new file mode 100644 index 0000000..349baf4 --- /dev/null +++ b/x86/frstor.html @@ -0,0 +1,144 @@ + +FRSTOR + — Restore x87 FPU State

FRSTOR + — Restore x87 FPU State

+ + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
DD /4FRSTOR m94/108byteValidValidLoad FPU state from m94byte or m108byte.
+

Description + ¶ +

+

Loads the FPU state (operating environment and register stack) from the memory area specified with the source operand. This state data is typically written to the specified memory location by a previous FSAVE/FNSAVE instruction.

+

The FPU operating environment consists of the FPU control word, status word, tag word, instruction pointer, data pointer, and last opcode. Figures 8-9 through 8-12 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, show the layout in memory of the stored environment, depending on the operating mode of the processor (protected or real) and the current operand-size attribute (16-bit or 32-bit). In virtual-8086 mode, the real mode layouts are used. The contents of the FPU register stack are stored in the 80 bytes immediately following the operating environment image.

+

The FRSTOR instruction should be executed in the same operating mode as the corresponding FSAVE/FNSAVE instruction.

+

If one or more unmasked exception bits are set in the new FPU status word, a floating-point exception will be generated upon execution of the next floating-point instruction (except for the no-wait floating-point instructions, see the section titled “Software Exception Handling” in Chapter 8 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1). To avoid raising exceptions when loading a new operating environment, clear all the exception flags in the FPU status word that is being loaded.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
FPUControlWord := SRC[FPUControlWord];
+FPUStatusWord := SRC[FPUStatusWord];
+FPUTagWord := SRC[FPUTagWord];
+FPUDataPointer := SRC[FPUDataPointer];
+FPUInstructionPointer := SRC[FPUInstructionPointer];
+FPULastInstructionOpcode := SRC[FPULastInstructionOpcode];
+ST(0) := SRC[ST(0)];
+ST(1) := SRC[ST(1)];
+ST(2) := SRC[ST(2)];
+ST(3) := SRC[ST(3)];
+ST(4) := SRC[ST(4)];
+ST(5) := SRC[ST(5)];
+ST(6) := SRC[ST(6)];
+ST(7) := SRC[ST(7)];
+
+

FPU Flags Affected + ¶ +

+

The C0, C1, C2, C3 flags are loaded.

+

Floating-Point Exceptions + ¶ +

+

None; however, if an unmasked exception is loaded in the status word, it is generated upon execution of the next “waiting” floating-point instruction.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/fsave.fnsave.html b/x86/fsave.fnsave.html new file mode 100644 index 0000000..46e6f93 --- /dev/null +++ b/x86/fsave.fnsave.html @@ -0,0 +1,170 @@ + +FSAVE/FNSAVE + — Store x87 FPU State

FSAVE/FNSAVE + — Store x87 FPU State

+ + + + + + + + + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
9B DD /6FSAVE m94/108byteValidValidStore FPU state to m94byte or m108byte after checking for pending unmasked floating-point exceptions. Then re-initialize the FPU.
DD /6FNSAVE1 m94/108byteValidValidStore FPU environment to m94byte or m108byte without checking for pending unmasked floating-point exceptions. Then re-initialize the FPU.
+
+

1. See IA-32 Architecture Compatibility section below.

+

Description + ¶ +

+

Stores the current FPU state (operating environment and register stack) at the specified destination in memory, and then re-initializes the FPU. The FSAVE instruction checks for and handles pending unmasked floating-point exceptions before storing the FPU state; the FNSAVE instruction does not.

+

The FPU operating environment consists of the FPU control word, status word, tag word, instruction pointer, data pointer, and last opcode. Figures 8-9 through 8-12 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, show the layout in memory of the stored environment, depending on the operating mode of the processor (protected or real) and the current operand-size attribute (16-bit or 32-bit). In virtual-8086 mode, the real mode layouts are used. The contents of the FPU register stack are stored in the 80 bytes immediately follow the operating environment image.

+

The saved image reflects the state of the FPU after all floating-point instructions preceding the FSAVE/FNSAVE instruction in the instruction stream have been executed.

+

After the FPU state has been saved, the FPU is reset to the same default values it is set to with the FINIT/FNINIT instructions (see “FINIT/FNINIT—Initialize Floating-Point Unit” in this chapter).

+

The FSAVE/FNSAVE instructions are typically used when the operating system needs to perform a context switch, an exception handler needs to use the FPU, or an application program needs to pass a “clean” FPU to a procedure.
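
A hedged sketch of the “clean FPU” use case (the buffer and procedure names fpu_image and uses_fpu are illustrative assumptions):

FNSAVE [fpu_image]     ; store the full FPU state and re-initialize the FPU
CALL uses_fpu          ; the callee starts from FINIT-like default state
FRSTOR [fpu_image]     ; restore the caller's complete FPU state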

+

The assembler issues two instructions for the FSAVE instruction (an FWAIT instruction followed by an FNSAVE instruction), and the processor executes each of these instructions separately. If an exception is generated for either of these instructions, the saved EIP points to the instruction that caused the exception.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

IA-32 Architecture Compatibility + ¶ +

+

For Intel math coprocessors and FPUs prior to the Intel Pentium processor, an FWAIT instruction should be executed before attempting to read from the memory image stored with a prior FSAVE/FNSAVE instruction. This FWAIT instruction helps ensure that the storage operation has been completed.

+

When operating a Pentium or Intel486 processor in MS-DOS compatibility mode, it is possible (under unusual circumstances) for an FNSAVE instruction to be interrupted prior to being executed to handle a pending FPU exception. See the section titled “No-Wait FPU Instructions Can Get FPU Interrupt in Window” in Appendix D of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for a description of these circumstances. An FNSAVE instruction cannot be interrupted in this way on later Intel processors, except for the Intel QuarkTM X1000 processor.

+

Operation + ¶ +

+
(* Save FPU State and Registers *)
+DEST[FPUControlWord] := FPUControlWord;
+DEST[FPUStatusWord] := FPUStatusWord;
+DEST[FPUTagWord] := FPUTagWord;
+DEST[FPUDataPointer] := FPUDataPointer;
+DEST[FPUInstructionPointer] := FPUInstructionPointer;
+DEST[FPULastInstructionOpcode] := FPULastInstructionOpcode;
+DEST[ST(0)] := ST(0);
+DEST[ST(1)] := ST(1);
+DEST[ST(2)] := ST(2);
+DEST[ST(3)] := ST(3);
+DEST[ST(4)]:= ST(4);
+DEST[ST(5)] := ST(5);
+DEST[ST(6)] := ST(6);
+DEST[ST(7)] := ST(7);
+(* Initialize FPU *)
+FPUControlWord := 037FH;
+FPUStatusWord := 0;
+FPUTagWord := FFFFH;
+FPUDataPointer := 0;
+FPUInstructionPointer := 0;
+FPULastInstructionOpcode := 0;
+
+

FPU Flags Affected + ¶ +

+

The C0, C1, C2, and C3 flags are saved and then cleared.

+

Floating-Point Exceptions + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If destination is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
diff --git a/x86/fscale.html b/x86/fscale.html new file mode 100644 index 0000000..c3fc181 --- /dev/null +++ b/x86/fscale.html @@ -0,0 +1,181 @@ + +FSCALE + — Scale

FSCALE + — Scale

+ + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D9 FDFSCALEValidValidScale ST(0) by ST(1).
+

Description + ¶ +

+

Truncates the value in the source operand (toward 0) to an integral value and adds that value to the exponent of the destination operand. The destination and source operands are floating-point values located in registers ST(0) and ST(1), respectively. This instruction provides rapid multiplication or division by integral powers of 2. The following table shows the results obtained when scaling various classes of numbers, assuming that neither overflow nor underflow occurs.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ST(1)
ST(0)−∞−F−0+0+F+∞NaN
−∞NaN−∞−∞−∞−∞−∞NaN
−F−0−F−F−F−F−∞NaN
−0−0−0−0−0−0NaNNaN
+0+0+0+0+0+0NaNNaN
+F+0+F+F+F+F+∞NaN
+∞NaN+∞+∞+∞+∞+∞NaN
NaNNaNNaNNaNNaNNaNNaNNaN
+
Table 3-34. FSCALE Results
+
+

F Means finite floating-point value.

+

In most cases, only the exponent is changed and the mantissa (significand) remains unchanged. However, when the value being scaled in ST(0) is a denormal value, the mantissa is also changed and the result may turn out to be a normalized number. Similarly, if overflow or underflow results from a scale operation, the resulting mantissa will differ from the source’s mantissa.

+

The FSCALE instruction can also be used to reverse the action of the FXTRACT instruction, as shown in the following example:

+

FXTRACT;

+

FSCALE;

+

FSTP ST(1);

+

In this example, the FXTRACT instruction extracts the significand and exponent from the value in ST(0) and stores them in ST(0) and ST(1) respectively. The FSCALE then scales the significand in ST(0) by the exponent in ST(1), recreating the original value before the FXTRACT operation was performed. The FSTP ST(1) instruction overwrites the exponent (extracted by the FXTRACT instruction) with the recreated value, which returns the stack to its original state with only one register [ST(0)] occupied.
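
Another common use, shown as a sketch (the operand names x and n are illustrative assumptions), is multiplying a value by an integral power of 2 without disturbing the significand:

FLD QWORD PTR [x]      ; ST(0) = x
FILD DWORD PTR [n]     ; ST(0) = n (already integral), ST(1) = x
FXCH                   ; ST(0) = x, ST(1) = n
FSCALE                 ; ST(0) := x * 2^n
FSTP ST(1)             ; remove n, leaving the scaled value in ST(0)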

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
ST(0) := ST(0) ∗ 2^RoundTowardZero(ST(1));
+
+

FPU Flags Affected + ¶ +

+ + + + + + + + +
C1Set to 0 if stack underflow occurred.
Set if result was rounded up; cleared otherwise.
C0, C2, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#ISStack underflow occurred.
#IASource operand is an SNaN value or unsupported format.
#DSource operand is a denormal value.
#UResult is too small for destination format.
#OResult is too large for destination format.
#PValue cannot be represented exactly in destination format.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/fsin.html b/x86/fsin.html new file mode 100644 index 0000000..eabd5c2 --- /dev/null +++ b/x86/fsin.html @@ -0,0 +1,130 @@ + +FSIN + — Sine

FSIN + — Sine

+ + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D9 FEFSINValidValidReplace ST(0) with an approximation of its sine.
+

Description + ¶ +

+

Computes an approximation of the sine of the source operand in register ST(0) and stores the result in ST(0). The source operand must be given in radians and must be within the range −2^63 to +2^63. The following table shows the results obtained when taking the sine of various classes of numbers, assuming that underflow does not occur.

+
+ + + + + + + + + + + + + + + + + + + + + + + + +
SRC (ST(0))DEST (ST(0))
−∞*
−F− 1 to + 1
−0−0
+0+0
+F− 1 to +1
+∞*
NaNNaN
+
Table 3-35. FSIN Results
+
+

F Means finite floating-point value.

+

* Indicates floating-point invalid-arithmetic-operand (#IA) exception.

+

If the source operand is outside the acceptable range, the C2 flag in the FPU status word is set, and the value in register ST(0) remains unchanged. The instruction does not raise an exception when the source operand is out of range. It is up to the program to check the C2 flag for out-of-range conditions. Source values outside the range −2^63 to +2^63 can be reduced to the range of the instruction by subtracting an appropriate integer multiple of 2π. However, even within the range −2^63 to +2^63, inaccurate results can occur because the finite approximation of π used internally for argument reduction is not sufficient in all cases. Therefore, for accurate results it is safe to apply FSIN only to arguments reduced accurately in software, to a value smaller in absolute value than 3π/4. See the sections titled “Approximation of Pi” and “Transcendental Instruction Accuracy” in Chapter 8 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for a discussion of the proper value to use for π in performing such reductions.
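
A structural sketch of a software reduction followed by FSIN (the constant two_pi and the other operand names are illustrative assumptions; as noted above, an ordinary double-precision value of 2π is generally not precise enough for accurate reduction, so this only illustrates the control flow):

FLD QWORD PTR [two_pi] ; ST(0) = 2π (precision caveat applies)
FLD QWORD PTR [x]      ; ST(0) = x, ST(1) = 2π
reduce:
    FPREM              ; partial remainder of x modulo 2π
    FNSTSW AX          ; status word into AX
    TEST AX, 0400H     ; C2 set means the reduction is incomplete
    JNZ reduce
FSTP ST(1)             ; discard 2π; the reduced argument is in ST(0)
FSIN                   ; ST(0) := sin(reduced x)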

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
IF −2^63 < ST(0) < 2^63
+    THEN
+        C2 := 0;
+        ST(0) := fsin(ST(0)); // approximation of the mathematical sin function
+    ELSE (* Source operand out of range *)
+        C2 := 1;
+FI;
+
+

FPU Flags Affected + ¶ +

+ + + + + + + + + + + + +
C1Set to 0 if stack underflow occurred.
Set if result was rounded up; cleared otherwise.
C2Set to 1 if outside range (−2^63 < source operand < +2^63); otherwise, set to 0.
C0, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + + + + +
#ISStack underflow occurred.
#IASource operand is an SNaN value, ∞, or unsupported format.
#DSource operand is a denormal value.
#PValue cannot be represented exactly in destination format.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/fsincos.html b/x86/fsincos.html new file mode 100644 index 0000000..b98568d --- /dev/null +++ b/x86/fsincos.html @@ -0,0 +1,148 @@ + +FSINCOS + — Sine and Cosine

FSINCOS + — Sine and Cosine

+ + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D9 FBFSINCOSValidValidCompute the sine and cosine of ST(0); replace ST(0) with the approximate sine, and push the approximate cosine onto the register stack.
+

Description + ¶ +

+

Computes both the approximate sine and the cosine of the source operand in register ST(0), stores the sine in ST(0), and pushes the cosine onto the top of the FPU register stack. (This instruction is faster than executing the FSIN and FCOS instructions in succession.)
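
A minimal sketch (operand names are illustrative), assuming the argument is already within range:

FLD QWORD PTR [x]      ; ST(0) = x
FSINCOS                ; ST(0) = cos(x), ST(1) = sin(x)
FSTP QWORD PTR [c]     ; store the cosine and pop
FSTP QWORD PTR [s]     ; store the sine and pop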

+

The source operand must be given in radians and must be within the range −2^63 to +2^63. The following table shows the results obtained when taking the sine and cosine of various classes of numbers, assuming that underflow does not occur.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
SRCDEST
ST(0)ST(1) CosineST(0) Sine
−∞**
−F− 1 to + 1− 1 to + 1
−0+1−0
+0+1+0
+F− 1 to + 1− 1 to + 1
+∞**
NaNNaNNaN
+
Table 3-36. FSINCOS Results
+
+

F Means finite floating-point value.

+

* Indicates floating-point invalid-arithmetic-operand (#IA) exception.

+

If the source operand is outside the acceptable range, the C2 flag in the FPU status word is set, and the value in register ST(0) remains unchanged. The instruction does not raise an exception when the source operand is out of range. It is up to the program to check the C2 flag for out-of-range conditions. Source values outside the range −2^63 to +2^63 can be reduced to the range of the instruction by subtracting an appropriate integer multiple of 2π. However, even within the range −2^63 to +2^63, inaccurate results can occur because the finite approximation of π used internally for argument reduction is not sufficient in all cases. Therefore, for accurate results it is safe to apply FSINCOS only to arguments reduced accurately in software, to a value smaller in absolute value than 3π/8. See the sections titled “Approximation of Pi” and “Transcendental Instruction Accuracy” in Chapter 8 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for a discussion of the proper value to use for π in performing such reductions.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
IF ST(0) < 2^63
+    THEN
+        C2 := 0;
+        TEMP := fcos(ST(0)); // approximation of cosine
+        ST(0) := fsin(ST(0)); // approximation of sine
+        TOP := TOP − 1;
+        ST(0) := TEMP;
+    ELSE (* Source operand out of range *)
+        C2 := 1;
+FI;
+
+

FPU Flags Affected + ¶ +

+ + + + + + + + + + + + +
C1Set to 0 if stack underflow occurred; set to 1 if stack overflow occurred.
Set if result was rounded up; cleared otherwise.
C2Set to 1 if outside range (−2^63 < source operand < +2^63); otherwise, set to 0.
C0, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#ISStack underflow or overflow occurred.
#IASource operand is an SNaN value, ∞, or unsupported format.
#DSource operand is a denormal value.
#UResult is too small for destination format.
#PValue cannot be represented exactly in destination format.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/fsqrt.html b/x86/fsqrt.html new file mode 100644 index 0000000..0842f49 --- /dev/null +++ b/x86/fsqrt.html @@ -0,0 +1,122 @@ + +FSQRT + — Square Root

FSQRT + — Square Root

+ + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D9 FAFSQRTValidValidComputes square root of ST(0) and stores the result in ST(0).
+

Description + ¶ +

+

Computes the square root of the source value in the ST(0) register and stores the result in ST(0).

+

The following table shows the results obtained when taking the square root of various classes of numbers, assuming that neither overflow nor underflow occurs.

+
+ + + + + + + + + + + + + + + + + + + + + + + + +
SRC (ST(0))DEST (ST(0))
−∞*
−F*
−0−0
+0+0
+F+F
+∞+∞
NaNNaN
+
Table 3-37. FSQRT Results
+
+

F Means finite floating-point value.

+

* Indicates floating-point invalid-arithmetic-operand (#IA) exception.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
ST(0) := SquareRoot(ST(0));
+
+

FPU Flags Affected + ¶ +

+ + + + + + + + +
C1Set to 0 if stack underflow occurred.
Set if result was rounded up; cleared otherwise.
C0, C2, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + + + + + + +
#ISStack underflow occurred.
#IASource operand is an SNaN value or unsupported format.
Source operand is a negative value (except for −0).
#DSource operand is a denormal value.
#PValue cannot be represented exactly in destination format.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/fst.fstp.html b/x86/fst.fstp.html new file mode 100644 index 0000000..8232fec --- /dev/null +++ b/x86/fst.fstp.html @@ -0,0 +1,202 @@ + +FST/FSTP + — Store Floating-Point Value

FST/FSTP + — Store Floating-Point Value

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D9 /2FST m32fpValidValidCopy ST(0) to m32fp.
DD /2FST m64fpValidValidCopy ST(0) to m64fp.
DD D0+iFST ST(i)ValidValidCopy ST(0) to ST(i).
D9 /3FSTP m32fpValidValidCopy ST(0) to m32fp and pop register stack.
DD /3FSTP m64fpValidValidCopy ST(0) to m64fp and pop register stack.
DB /7FSTP m80fpValidValidCopy ST(0) to m80fp and pop register stack.
DD D8+iFSTP ST(i)ValidValidCopy ST(0) to ST(i) and pop register stack.
+

Description + ¶ +

+

The FST instruction copies the value in the ST(0) register to the destination operand, which can be a memory location or another register in the FPU register stack. When storing the value in memory, the value is converted to single precision or double precision floating-point format.

+

The FSTP instruction performs the same operation as the FST instruction and then pops the register stack. To pop the register stack, the processor marks the ST(0) register as empty and increments the stack pointer (TOP) by 1. The FSTP instruction can also store values in memory in double extended-precision floating-point format.

+

If the destination operand is a memory location, the operand specifies the address where the first byte of the destination value is to be stored. If the destination operand is a register, the operand specifies a register in the register stack relative to the top of the stack.

+

If the destination size is single precision or double precision, the significand of the value being stored is rounded to the width of the destination (according to the rounding mode specified by the RC field of the FPU control word), and the exponent is converted to the width and bias of the destination format. If the value being stored is too large for the destination format, a numeric overflow exception (#O) is generated and, if the exception is unmasked, no value is stored in the destination operand. If the value being stored is a denormal value, the denormal exception (#D) is not generated. This condition is simply signaled as a numeric underflow exception (#U) condition.
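
For example, a sketch storing the same ST(0) value at three widths (the memory operand names are illustrative assumptions):

FLD TBYTE PTR [x80]    ; load an 80-bit double extended-precision value
FST DWORD PTR [f32]    ; rounded to single precision; ST(0) is unchanged
FST QWORD PTR [f64]    ; rounded to double precision; ST(0) is unchanged
FSTP TBYTE PTR [f80]   ; exact 80-bit copy, then pop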

+

If the value being stored is ±0, ±∞, or a NaN, the least-significant bits of the significand and the exponent are truncated to fit the destination format. This operation preserves the value’s identity as a 0, ∞, or NaN.

+

If the destination operand is a non-empty register, the invalid-operation exception is not generated.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
DEST := ST(0);
+IF Instruction = FSTP
+    THEN
+        PopRegisterStack;
+FI;
+
+

FPU Flags Affected + ¶ +

+ + + + + + + + +
C1Set to 0 if stack underflow occurred.
Indicates rounding direction if the floating-point inexact exception (#P) is generated: 0 := not roundup; 1 := roundup.
C0, C2, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#ISStack underflow occurred.
#IAIf destination result is an SNaN value or unsupported format, except when the destination format is in double extended-precision floating-point format.
#UResult is too small for the destination format.
#OResult is too large for the destination format.
#PValue cannot be represented exactly in destination format.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/fstcw.fnstcw.html b/x86/fstcw.fnstcw.html new file mode 100644 index 0000000..d584b21 --- /dev/null +++ b/x86/fstcw.fnstcw.html @@ -0,0 +1,147 @@ + +FSTCW/FNSTCW + — Store x87 FPU Control Word

FSTCW/FNSTCW + — Store x87 FPU Control Word

+ + + + + + + + + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
9B D9 /7FSTCW m2byteValidValidStore FPU control word to m2byte after checking for pending unmasked floating-point exceptions.
D9 /7FNSTCW1 m2byteValidValidStore FPU control word to m2byte without checking for pending unmasked floating-point exceptions.
+
+

1. See IA-32 Architecture Compatibility section below.

+

Description + ¶ +

+

Stores the current value of the FPU control word at the specified destination in memory. The FSTCW instruction checks for and handles pending unmasked floating-point exceptions before storing the control word; the FNSTCW instruction does not.

+

The assembler issues two instructions for the FSTCW instruction (an FWAIT instruction followed by an FNSTCW instruction), and the processor executes each of these instructions separately. If an exception is generated for either of these instructions, the saved EIP points to the instruction that caused the exception.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.
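As an illustration, the stored control word can be modified and reloaded with FLDCW, for example to change the rounding mode. The following is a minimal sketch assuming GCC-style inline assembly on x86; the variable names are illustrative only.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t cw;

    __asm__ volatile ("fnstcw %0" : "=m"(cw));   /* store control word (no exception check) */
    printf("FPU control word = 0x%04x\n", cw);

    cw |= 0x0C00;                                /* RC field (bits 10-11) = 11B: round toward zero */
    __asm__ volatile ("fldcw %0" : : "m"(cw));   /* reload modified control word */
    return 0;
}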

+

IA-32 Architecture Compatibility + ¶ +

+

When operating a Pentium or Intel486 processor in MS-DOS compatibility mode, it is possible (under unusual circumstances) for an FNSTCW instruction to be interrupted prior to being executed to handle a pending FPU exception. See the section titled “No-Wait FPU Instructions Can Get FPU Interrupt in Window” in Appendix D of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for a description of these circumstances. An FNSTCW instruction cannot be interrupted in this way on later Intel processors, except for the Intel Quark™ X1000 processor.

+

Operation + ¶ +

+
DEST := FPUControlWord;
+
+

FPU Flags Affected + ¶ +

+

The C0, C1, C2, and C3 flags are undefined.

+

Floating-Point Exceptions + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/fstenv.fnstenv.html b/x86/fstenv.fnstenv.html new file mode 100644 index 0000000..13c941b --- /dev/null +++ b/x86/fstenv.fnstenv.html @@ -0,0 +1,154 @@ + +FSTENV/FNSTENV + — Store x87 FPU Environment

FSTENV/FNSTENV + — Store x87 FPU Environment

+ + + + + + + + + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
9B D9 /6FSTENV m14/28byteValidValidStore FPU environment to m14byte or m28byte after checking for pending unmasked floating-point exceptions. Then mask all floating-point exceptions.
D9 /6FNSTENV1 m14/28byteValidValidStore FPU environment to m14byte or m28byte without checking for pending unmasked floating-point exceptions. Then mask all floating-point exceptions.
+
+

1. See IA-32 Architecture Compatibility section below.

+

Description + ¶ +

+

Saves the current FPU operating environment at the memory location specified with the destination operand, and then masks all floating-point exceptions. The FPU operating environment consists of the FPU control word, status word, tag word, instruction pointer, data pointer, and last opcode. Figures 8-9 through 8-12 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, show the layout in memory of the stored environment, depending on the operating mode of the processor (protected or real) and the current operand-size attribute (16-bit or 32-bit). In virtual-8086 mode, the real mode layouts are used.

+

The FSTENV instruction checks for and handles any pending unmasked floating-point exceptions before storing the FPU environment; the FNSTENV instruction does not. The saved image reflects the state of the FPU after all floating-point instructions preceding the FSTENV/FNSTENV instruction in the instruction stream have been executed.

+

These instructions are often used by exception handlers because they provide access to the FPU instruction and data pointers. The environment is typically saved in the stack. Masking all exceptions after saving the environment prevents floating-point exceptions from interrupting the exception handler.

+

The assembler issues two instructions for the FSTENV instruction (an FWAIT instruction followed by an FNSTENV instruction), and the processor executes each of these instructions separately. If an exception is generated for either of these instructions, the saved EIP points to the instruction that caused the exception.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.
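For illustration, the 28-byte protected-mode environment image can be captured with FNSTENV and examined as raw 16-bit words. This is a minimal sketch assuming GCC-style inline assembly on x86; decoding the individual fields is left to the layouts in Figures 8-9 through 8-12.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t env[14];                              /* 28 bytes: 32-bit protected-mode format */

    __asm__ volatile ("fnstenv %0" : "=m"(env));   /* store environment; also masks all FPU exceptions */
    for (int i = 0; i < 14; i++)
        printf("word %2d = 0x%04x\n", i, env[i]);
    return 0;
}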

+

IA-32 Architecture Compatibility + ¶ +

+

When operating a Pentium or Intel486 processor in MS-DOS compatibility mode, it is possible (under unusual circumstances) for an FNSTENV instruction to be interrupted prior to being executed to handle a pending FPU exception. See the section titled “No-Wait FPU Instructions Can Get FPU Interrupt in Window” in Appendix D of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for a description of these circumstances. An FNSTENV instruction cannot be interrupted in this way on later Intel processors, except for the Intel Quark™ X1000 processor.

+

Operation + ¶ +

+
DEST[FPUControlWord] := FPUControlWord;
+DEST[FPUStatusWord] := FPUStatusWord;
+DEST[FPUTagWord] := FPUTagWord;
+DEST[FPUDataPointer] := FPUDataPointer;
+DEST[FPUInstructionPointer] := FPUInstructionPointer;
+DEST[FPULastInstructionOpcode] := FPULastInstructionOpcode;
+
+

FPU Flags Affected + ¶ +

+

The C0, C1, C2, and C3 flags are undefined.

+

Floating-Point Exceptions + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/fstsw.fnstsw.html b/x86/fstsw.fnstsw.html new file mode 100644 index 0000000..6faa41f --- /dev/null +++ b/x86/fstsw.fnstsw.html @@ -0,0 +1,157 @@ + +FSTSW/FNSTSW + — Store x87 FPU Status Word

FSTSW/FNSTSW + — Store x87 FPU Status Word

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
9B DD /7FSTSW m2byteValidValidStore FPU status word at m2byte after checking for pending unmasked floating-point exceptions.
9B DF E0FSTSW AXValidValidStore FPU status word in AX register after checking for pending unmasked floating-point exceptions.
DD /7FNSTSW1 m2byteValidValidStore FPU status word at m2byte without checking for pending unmasked floating-point exceptions.
DF E0FNSTSW1 AXValidValidStore FPU status word in AX register without checking for pending unmasked floating-point exceptions.
+
+

1. See IA-32 Architecture Compatibility section below.

+

Description + ¶ +

+

Stores the current value of the x87 FPU status word in the destination location. The destination operand can be either a two-byte memory location or the AX register. The FSTSW instruction checks for and handles pending unmasked floating-point exceptions before storing the status word; the FNSTSW instruction does not.

+

The FNSTSW AX form of the instruction is used primarily in conditional branching (for instance, after an FPU comparison instruction or an FPREM, FPREM1, or FXAM instruction), where the direction of the branch depends on the state of the FPU condition code flags. (See the section titled “Branching and Conditional Moves on FPU Condition Codes” in Chapter 8 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1.) This instruction can also be used to invoke exception handlers (by examining the exception flags) in environments that do not use interrupts. When the FNSTSW AX instruction is executed, the AX register is updated before the processor executes any further instructions. The status stored in the AX register is thus guaranteed to be from the completion of the prior FPU instruction.

+

The assembler issues two instructions for the FSTSW instruction (an FWAIT instruction followed by an FNSTSW instruction), and the processor executes each of these instructions separately. If an exception is generated for either of these instructions, the saved EIP points to the instruction that caused the exception.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.
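A minimal sketch of the FNSTSW AX branching idiom, assuming GCC-style inline assembly on x86 (the helper name is illustrative): the status word is copied to AX after an FCOMPP comparison and the C0 flag (bit 8) is tested directly.

static int x87_less_than(double a, double b)
{
    unsigned short sw;

    __asm__ volatile (
        "fldl   %2      \n\t"    /* ST(0) = b                           */
        "fldl   %1      \n\t"    /* ST(0) = a, ST(1) = b                */
        "fcompp         \n\t"    /* compare ST(0) with ST(1), pop twice */
        "fnstsw %%ax    \n\t"    /* status word -> AX, no WAIT          */
        : "=a"(sw)
        : "m"(a), "m"(b)
        : "cc");
    return (sw & 0x0100) != 0;   /* C0 (bit 8) set means a < b; also set if unordered */
}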

+

IA-32 Architecture Compatibility + ¶ +

+

When operating a Pentium or Intel486 processor in MS-DOS compatibility mode, it is possible (under unusual circumstances) for an FNSTSW instruction to be interrupted prior to being executed to handle a pending FPU exception. See the section titled “No-Wait FPU Instructions Can Get FPU Interrupt in Window” in Appendix D of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for a description of these circumstances. An FNSTSW instruction cannot be interrupted in this way on later Intel processors, except for the Intel Quark™ X1000 processor.

+

Operation + ¶ +

+
DEST := FPUStatusWord;
+
+

FPU Flags Affected + ¶ +

+

The C0, C1, C2, and C3 flags are undefined.

+

Floating-Point Exceptions + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/fsub.fsubp.fisub.html b/x86/fsub.fsubp.fisub.html new file mode 100644 index 0000000..fc0aad0 --- /dev/null +++ b/x86/fsub.fsubp.fisub.html @@ -0,0 +1,300 @@ + +FSUB/FSUBP/FISUB + — Subtract

FSUB/FSUBP/FISUB + — Subtract

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D8 /4FSUB m32fpValidValidSubtract m32fp from ST(0) and store result in ST(0).
DC /4FSUB m64fpValidValidSubtract m64fp from ST(0) and store result in ST(0).
D8 E0+iFSUB ST(0), ST(i)ValidValidSubtract ST(i) from ST(0) and store result in ST(0).
DC E8+iFSUB ST(i), ST(0)ValidValidSubtract ST(0) from ST(i) and store result in ST(i).
DE E8+iFSUBP ST(i), ST(0)ValidValidSubtract ST(0) from ST(i), store result in ST(i), and pop register stack.
DE E9FSUBPValidValidSubtract ST(0) from ST(1), store result in ST(1), and pop register stack.
DA /4FISUB m32intValidValidSubtract m32int from ST(0) and store result in ST(0).
DE /4FISUB m16intValidValidSubtract m16int from ST(0) and store result in ST(0).
+

Description + ¶ +

+

Subtracts the source operand from the destination operand and stores the difference in the destination location. The destination operand is always an FPU data register; the source operand can be a register or a memory location. Source operands in memory can be in single precision or double precision floating-point format or in word or doubleword integer format.

+

The no-operand version of the instruction subtracts the contents of the ST(0) register from the ST(1) register and stores the result in ST(1). The one-operand version subtracts the contents of a memory location (either a floating-point or an integer value) from the contents of the ST(0) register and stores the result in ST(0). The two-operand version subtracts the contents of the ST(0) register from the ST(i) register or vice versa.

+

The FSUBP instructions perform the additional operation of popping the FPU register stack following the subtraction. To pop the register stack, the processor marks the ST(0) register as empty and increments the stack pointer (TOP) by 1. The no-operand version of the floating-point subtract instructions always results in the register stack being popped. In some assemblers, the mnemonic for this instruction is FSUB rather than FSUBP.

+

The FISUB instructions convert an integer source operand to double extended-precision floating-point format before performing the subtraction.

+

Table 3-38 shows the results obtained when subtracting various classes of numbers from one another, assuming that neither overflow nor underflow occurs. Here, the SRC value is subtracted from the DEST value (DEST − SRC = result).

+

When the difference between two operands of like sign is 0, the result is +0, except for the round toward −∞ mode, in which case the result is −0. This instruction also guarantees that +0 − (−0) = +0, and that −0 − (+0) = −0. When the source operand is an integer 0, it is treated as a +0.

+

When one operand is ∞, the result is ∞ of the expected sign. If both operands are ∞ of the same sign, an invalid-operation exception is generated.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
SRC
DEST−∞− F or − I−0+0+ F or + I+∞NaN
−∞*−∞−∞−∞−∞−∞NaN
−F+∞±F or ±0DESTDEST−F−∞NaN
−0+∞−SRC±0−0− SRC−∞NaN
+0+∞−SRC+0±0− SRC−∞NaN
+F+∞+FDESTDEST±F or ±0−∞NaN
+∞+∞+∞+∞+∞+∞*NaN
NaNNaNNaNNaNNaNNaNNaNNaN
+
Table 3-38. FSUB/FSUBP/FISUB Results
+
+

F Means finite floating-point value.

+

I Means integer.

+

* Indicates floating-point invalid-arithmetic-operand (#IA) exception.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.
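A minimal sketch of the FISUB m16int form, assuming GCC-style inline assembly on x86 with AT&T mnemonics (the helper name is illustrative): the word integer is converted to double extended precision and subtracted from ST(0).

static double fisub16(double x, short n)
{
    double r;

    __asm__ volatile (
        "fldl    %1    \n\t"     /* ST(0) = x                        */
        "fisubs  %2    \n\t"     /* ST(0) = ST(0) - m16int  (DE /4)  */
        "fstpl   %0    \n\t"     /* store result and pop             */
        : "=m"(r)
        : "m"(x), "m"(n));
    return r;
}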

+

Operation + ¶ +

+
IF Instruction = FISUB
+    THEN
+        DEST := DEST − ConvertToDoubleExtendedPrecisionFP(SRC);
+    ELSE (* Source operand is floating-point value *)
+        DEST := DEST − SRC;
+FI;
+IF Instruction = FSUBP
+    THEN
+        PopRegisterStack;
+FI;
+
+

FPU Flags Affected + ¶ +

+ + + + + + + + +
C1Set to 0 if stack underflow occurred.
Set if result was rounded up; cleared otherwise.
C0, C2, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#ISStack underflow occurred.
#IAOperand is an SNaN value or unsupported format.
Operands are infinities of like sign.
#DSource operand is a denormal value.
#UResult is too small for destination format.
#OResult is too large for destination format.
#PValue cannot be represented exactly in destination format.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/fsubr.fsubrp.fisubr.html b/x86/fsubr.fsubrp.fisubr.html new file mode 100644 index 0000000..9b22bab --- /dev/null +++ b/x86/fsubr.fsubrp.fisubr.html @@ -0,0 +1,299 @@ + +FSUBR/FSUBRP/FISUBR + — Reverse Subtract

FSUBR/FSUBRP/FISUBR + — Reverse Subtract

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D8 /5FSUBR m32fpValidValidSubtract ST(0) from m32fp and store result in ST(0).
DC /5FSUBR m64fpValidValidSubtract ST(0) from m64fp and store result in ST(0).
D8 E8+iFSUBR ST(0), ST(i)ValidValidSubtract ST(0) from ST(i) and store result in ST(0).
DC E0+iFSUBR ST(i), ST(0)ValidValidSubtract ST(i) from ST(0) and store result in ST(i).
DE E0+iFSUBRP ST(i), ST(0)ValidValidSubtract ST(i) from ST(0), store result in ST(i), and pop register stack.
DE E1FSUBRPValidValidSubtract ST(1) from ST(0), store result in ST(1), and pop register stack.
DA /5FISUBR m32intValidValidSubtract ST(0) from m32int and store result in ST(0).
DE /5FISUBR m16intValidValidSubtract ST(0) from m16int and store result in ST(0).
+

Description + ¶ +

+

Subtracts the destination operand from the source operand and stores the difference in the destination location. The destination operand is always an FPU register; the source operand can be a register or a memory location. Source operands in memory can be in single precision or double precision floating-point format or in word or doubleword integer format.

+

These instructions perform the reverse operations of the FSUB, FSUBP, and FISUB instructions. They are provided to support more efficient coding.

+

The no-operand version of the instruction subtracts the contents of the ST(1) register from the ST(0) register and stores the result in ST(1). The one-operand version subtracts the contents of the ST(0) register from the contents of a memory location (either a floating-point or an integer value) and stores the result in ST(0). The two-operand version subtracts the contents of the ST(i) register from the ST(0) register or vice versa.

+

The FSUBRP instructions perform the additional operation of popping the FPU register stack following the subtraction. To pop the register stack, the processor marks the ST(0) register as empty and increments the stack pointer (TOP) by 1. The no-operand version of the floating-point reverse subtract instructions always results in the register stack being popped. In some assemblers, the mnemonic for this instruction is FSUBR rather than FSUBRP.

+

The FISUBR instructions convert an integer source operand to double extended-precision floating-point format before performing the subtraction.

+

The following table shows the results obtained when subtracting various classes of numbers from one another, assuming that neither overflow nor underflow occurs. Here, the DEST value is subtracted from the SRC value (SRC − DEST = result).

+

When the difference between two operands of like sign is 0, the result is +0, except for the round toward −∞ mode, in which case the result is −0. This instruction also guarantees that +0 − (−0) = +0, and that −0 − (+0) = −0. When the source operand is an integer 0, it is treated as a +0.

+

When one operand is ∞, the result is ∞ of the expected sign. If both operands are ∞ of the same sign, an invalid-operation exception is generated.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
SRC
DEST−∞−F or −I−0+0+F or +I+∞NaN
−∞*+∞+∞+∞+∞+∞NaN
−F−∞±F or ±0−DEST−DEST+F+∞NaN
−0−∞SRC±0+0SRC+∞NaN
+0−∞SRC−0±0SRC+∞NaN
+F−∞−F−DEST−DEST±F or ±0+∞NaN
+∞−∞−∞−∞−∞−∞*NaN
NaNNaNNaNNaNNaNNaNNaNNaN
+
Table 3-39. FSUBR/FSUBRP/FISUBR Results
+
+

F Means finite floating-point value.

+

I Means integer.

+

* Indicates floating-point invalid-arithmetic-operand (#IA) exception.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.
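A minimal sketch of the FSUBR m32fp form, assuming GCC-style inline assembly on x86 with AT&T mnemonics (the helper name is illustrative): the memory operand minus ST(0) is computed directly, without first reordering the stack.

static double fsubr32(float m, double st0)
{
    double r;

    __asm__ volatile (
        "fldl    %1    \n\t"     /* ST(0) = st0                      */
        "fsubrs  %2    \n\t"     /* ST(0) = m32fp - ST(0)   (D8 /5)  */
        "fstpl   %0    \n\t"     /* store result and pop             */
        : "=m"(r)
        : "m"(st0), "m"(m));
    return r;
}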

+

Operation + ¶ +

+
IF Instruction = FISUBR
+    THEN
+        DEST := ConvertToDoubleExtendedPrecisionFP(SRC) − DEST;
+    ELSE (* Source operand is floating-point value *)
+        DEST := SRC − DEST; FI;
+IF Instruction = FSUBRP
+    THEN
+        PopRegisterStack; FI;
+
+

FPU Flags Affected + ¶ +

+ + + + + + + + +
C1Set to 0 if stack underflow occurred.
Set if result was rounded up; cleared otherwise.
C0, C2, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#ISStack underflow occurred.
#IAOperand is an SNaN value or unsupported format.
Operands are infinities of like sign.
#DSource operand is a denormal value.
#UResult is too small for destination format.
#OResult is too large for destination format.
#PValue cannot be represented exactly in destination format.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/ftst.html b/x86/ftst.html new file mode 100644 index 0000000..18b8328 --- /dev/null +++ b/x86/ftst.html @@ -0,0 +1,123 @@ + +FTST + — TEST

FTST + — TEST

+ + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D9 E4FTSTValidValidCompare ST(0) with 0.0.
+

Description + ¶ +

+

Compares the value in the ST(0) register with 0.0 and sets the condition code flags C0, C2, and C3 in the FPU status word according to the results (see table below).

+
+ + + + + + + + + + + + + + + + + + + + + + + + + +
ConditionC3C2C0
ST(0) > 0.0000
ST(0) < 0.0001
ST(0) = 0.0100
Unordered111
+
Table 3-40. FTST Results
+

This instruction performs an “unordered comparison.” An unordered comparison also checks the class of the numbers being compared (see “FXAM—Examine Floating-Point” in this chapter). If the value in register ST(0) is a NaN or is in an undefined format, the condition flags are set to “unordered” and the invalid operation exception is generated.

+

The sign of zero is ignored, so that –0.0 = +0.0.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.
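A minimal sketch of testing a value against +0.0 with FTST and reading the condition codes through FNSTSW AX, assuming GCC-style inline assembly on x86 (the helper name is illustrative). C3, C2, and C0 are bits 14, 10, and 8 of the status word.

static const char *ftst_sign(double x)
{
    unsigned short sw;

    __asm__ volatile (
        "fldl    %1      \n\t"
        "ftst            \n\t"   /* compare ST(0) with 0.0            */
        "fnstsw  %%ax    \n\t"
        "fstp    %%st(0) \n\t"   /* discard the loaded value          */
        : "=a"(sw)
        : "m"(x)
        : "cc");
    if ((sw & 0x4500) == 0x4500) return "unordered";   /* C3=C2=C0=1 (NaN) */
    if (sw & 0x4000)             return "zero";        /* C3               */
    if (sw & 0x0100)             return "negative";    /* C0               */
    return "positive";
}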

+

Operation + ¶ +

+
CASE (relation of operands) OF
+    Not comparable:
+        C3, C2, C0 := 111;
+    ST(0) > 0.0:
+        C3, C2, C0 := 000;
+    ST(0) < 0.0:
+        C3, C2, C0 := 001;
+    ST(0) = 0.0:
+        C3, C2, C0 := 100;
+ESAC;
+
+

FPU Flags Affected + ¶ +

+ + + + + + +
C1Set to 0.
C0, C2, C3See Table 3-40.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + +
#ISStack underflow occurred.
#IAThe source operand is a NaN value or is in an unsupported format.
#DThe source operand is a denormal value.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/fucom.fucomp.fucompp.html b/x86/fucom.fucomp.fucompp.html new file mode 100644 index 0000000..f27e3b8 --- /dev/null +++ b/x86/fucom.fucomp.fucompp.html @@ -0,0 +1,168 @@ + +FUCOM/FUCOMP/FUCOMPP + — Unordered Compare Floating-Point Values

FUCOM/FUCOMP/FUCOMPP + — Unordered Compare Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
DD E0+iFUCOM ST(i)ValidValidCompare ST(0) with ST(i).
DD E1FUCOMValidValidCompare ST(0) with ST(1).
DD E8+iFUCOMP ST(i)ValidValidCompare ST(0) with ST(i) and pop register stack.
DD E9FUCOMPValidValidCompare ST(0) with ST(1) and pop register stack.
DA E9FUCOMPPValidValidCompare ST(0) with ST(1) and pop register stack twice.
+

Description + ¶ +

+

Performs an unordered comparison of the contents of register ST(0) and ST(i) and sets condition code flags C0, C2, and C3 in the FPU status word according to the results (see the table below). If no operand is specified, the contents of registers ST(0) and ST(1) are compared. The sign of zero is ignored, so that –0.0 is equal to +0.0.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + +
Comparison Results*C3C2C0
ST(0) > ST(i)000
ST(0) < ST(i)001
ST(0) = ST(i)100
Unordered111
+
Table 3-41. FUCOM/FUCOMP/FUCOMPP Results
+
+

* Flags not set if unmasked invalid-arithmetic-operand (#IA) exception is generated.

+

An unordered comparison checks the class of the numbers being compared (see “FXAM—Examine Floating-Point” in this chapter). The FUCOM/FUCOMP/FUCOMPP instructions perform the same operations as the FCOM/FCOMP/FCOMPP instructions. The only difference is that the FUCOM/FUCOMP/FUCOMPP instructions raise the invalid-arithmetic-operand exception (#IA) only when either or both operands are an SNaN or are in an unsupported format; QNaNs cause the condition code flags to be set to unordered, but do not cause an exception to be generated. The FCOM/FCOMP/FCOMPP instructions raise an invalid-operation exception when either or both of the operands are a NaN value of any kind or are in an unsupported format.

+

As with the FCOM/FCOMP/FCOMPP instructions, if the operation results in an invalid-arithmetic-operand exception being raised, the condition code flags are set only if the exception is masked.

+

The FUCOMP instruction pops the register stack following the comparison operation and the FUCOMPP instruction pops the register stack twice following the comparison operation. To pop the register stack, the processor marks the ST(0) register as empty and increments the stack pointer (TOP) by 1.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
CASE (relation of operands) OF
+    ST > SRC:
+                        C3, C2, C0 := 000;
+    ST < SRC:
+                        C3, C2, C0 := 001;
+    ST = SRC:
+                        C3, C2, C0 := 100;
+ESAC;
+IF ST(0) or SRC = QNaN, but not SNaN or unsupported format
+    THEN
+        C3, C2, C0 := 111;
+    ELSE (* ST(0) or SRC is SNaN or unsupported format *)
+            #IA;
+        IF FPUControlWord.IM = 1
+                THEN
+                    C3, C2, C0 := 111;
+        FI;
+FI;
+IF Instruction = FUCOMP
+    THEN
+        PopRegisterStack;
+FI;
+IF Instruction = FUCOMPP
+    THEN
+        PopRegisterStack;
+FI;
+
+

FPU Flags Affected + ¶ +

+ + + + + + +
C1Set to 0 if stack underflow occurred.
C0, C2, C3See Table 3-41.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + +
#ISStack underflow occurred.
#IAOne or both operands are SNaN values or have unsupported formats. Detection of a QNaN value in and of itself does not raise an invalid-operand exception.
#DOne or both operands are denormal values.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/fxam.html b/x86/fxam.html new file mode 100644 index 0000000..6ee699b --- /dev/null +++ b/x86/fxam.html @@ -0,0 +1,134 @@ + +FXAM + — Examine Floating-Point

FXAM + — Examine Floating-Point

+ + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D9 E5FXAMValidValidClassify value or number in ST(0).
+

Description + ¶ +

+

Examines the contents of the ST(0) register and sets the condition code flags C0, C2, and C3 in the FPU status word to indicate the class of value or number in the register (see the table below).

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ClassC3C2C0
Unsupported000
NaN001
Normal finite number010
Infinity011
Zero100
Empty101
Denormal number110
+
Table 3-42. FXAM Results
+

The C1 flag is set to the sign of the value in ST(0), regardless of whether the register is empty or full.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.
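A minimal sketch decoding the FXAM condition codes from the status word per Table 3-42, assuming GCC-style inline assembly on x86 (the helper name is illustrative). Note that FLD normalizes single and double precision denormals when loading them into ST(0), so the denormal class is reported only for values that are denormal in the extended format.

static const char *fxam_class(double x)
{
    unsigned short sw;

    __asm__ volatile (
        "fldl    %1      \n\t"
        "fxam            \n\t"
        "fnstsw  %%ax    \n\t"
        "fstp    %%st(0) \n\t"
        : "=a"(sw)
        : "m"(x));
    /* Pack C3 (bit 14), C2 (bit 10), C0 (bit 8) into a 3-bit index. */
    unsigned idx = ((sw >> 12) & 4) | ((sw >> 9) & 2) | ((sw >> 8) & 1);
    static const char *const names[8] = {
        "unsupported", "NaN", "normal finite", "infinity",
        "zero", "empty", "denormal", "(reserved)" };
    return names[idx];
}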

+

Operation + ¶ +

+
C1 := sign bit of ST; (* 0 for positive, 1 for negative *)
+CASE (class of value or number in ST(0)) OF
+    Unsupported:C3, C2, C0 := 000;
+    NaN:
+        C3, C2, C0 := 001;
+    Normal:
+        C3, C2, C0 := 010;
+    Infinity:
+        C3, C2, C0 := 011;
+    Zero:
+        C3, C2, C0 := 100;
+    Empty:
+        C3, C2, C0 := 101;
+    Denormal:
+        C3, C2, C0 := 110;
+ESAC;
+
+

FPU Flags Affected + ¶ +

+ + + + + + +
C1Sign of value in ST(0).
C0, C2, C3See Table 3-42.
+

Floating-Point Exceptions + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/fxch.html b/x86/fxch.html new file mode 100644 index 0000000..57e5c5a --- /dev/null +++ b/x86/fxch.html @@ -0,0 +1,97 @@ + +FXCH + — Exchange Register Contents

FXCH + — Exchange Register Contents

+ + + + + + + + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D9 C8+iFXCH ST(i)ValidValidExchange the contents of ST(0) and ST(i).
D9 C9FXCHValidValidExchange the contents of ST(0) and ST(1).
+

Description + ¶ +

+

Exchanges the contents of registers ST(0) and ST(i). If no source operand is specified, the contents of ST(0) and ST(1) are exchanged.

+

This instruction provides a simple means of moving values in the FPU register stack to the top of the stack [ST(0)], so that they can be operated on by those floating-point instructions that can only operate on values in ST(0). For example, the following instruction sequence takes the square root of the third register from the top of the register stack:

+

FXCH ST(3);

+

FSQRT;

+

FXCH ST(3);

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
IF (Number-of-operands) is 1
+    THEN
+        temp := ST(0);
+        ST(0) := SRC;
+        SRC := temp;
+    ELSE
+        temp := ST(0);
+        ST(0) := ST(1);
+        ST(1) := temp;
+FI;
+
+

FPU Flags Affected + ¶ +

+ + + + + + +
C1Set to 0.
C0, C2, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + +
#ISStack underflow occurred.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/fxrstor.html b/x86/fxrstor.html new file mode 100644 index 0000000..e40ed0c --- /dev/null +++ b/x86/fxrstor.html @@ -0,0 +1,171 @@ + +FXRSTOR + — Restore x87 FPU, MMX, XMM, and MXCSR State

FXRSTOR + — Restore x87 FPU, MMX, XMM, and MXCSR State

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
NP 0F AE /1 FXRSTOR m512byteMValidValidRestore the x87 FPU, MMX, XMM, and MXCSR register state from m512byte.
NP REX.W + 0F AE /1 FXRSTOR64 m512byteMValidN.E.Restore the x87 FPU, MMX, XMM, and MXCSR register state from m512byte.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r)N/AN/AN/A
+

Description + ¶ +

+

Reloads the x87 FPU, MMX technology, XMM, and MXCSR registers from the 512-byte memory image specified in the source operand. This data should have been written to memory previously using the FXSAVE instruction, and in the same format as required by the operating modes. The first byte of the data should be located on a 16-byte boundary. There are three distinct layouts of the FXSAVE state map: one for legacy and compatibility mode, a second format for 64-bit mode FXSAVE/FXRSTOR with REX.W=0, and the third format is for 64-bit mode with FXSAVE64/FXRSTOR64. Table 3-43 shows the layout of the legacy/compatibility mode state information in memory and describes the fields in the memory image for the FXRSTOR and FXSAVE instructions. Table 3-46 shows the layout of the 64-bit mode state information when REX.W is set (FXSAVE64/FXRSTOR64). Table 3-47 shows the layout of the 64-bit mode state information when REX.W is clear (FXSAVE/FXRSTOR).

+

The state image referenced with an FXRSTOR instruction must have been saved using an FXSAVE instruction or be in the same format as required by Table 3-43, Table 3-46, or Table 3-47. Referencing a state image saved with an FSAVE or FNSAVE instruction, or one with an incompatible field layout, will result in an incorrect state restoration.

+

The FXRSTOR instruction does not flush pending x87 FPU exceptions. To check and raise exceptions when loading x87 FPU state information with the FXRSTOR instruction, use an FWAIT instruction after the FXRSTOR instruction.

+

If the OSFXSR bit in control register CR4 is not set, the FXRSTOR instruction may not restore the states of the XMM and MXCSR registers. This behavior is implementation dependent.

+

If the MXCSR state contains an unmasked exception with a corresponding status flag also set, loading the register with the FXRSTOR instruction will not result in a SIMD floating-point error condition being generated. Only the next occurrence of this unmasked exception will result in the exception being generated.

+

Bits 16 through 31 of the MXCSR register are defined as reserved and should be set to 0. Attempting to write a 1 in any of these bits from the saved state image will result in a general protection exception (#GP) being generated.

+

Bytes 464:511 of an FXSAVE image are available for software use. FXRSTOR ignores the content of bytes 464:511 in an FXSAVE state image.
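As a minimal illustration, a save/restore round trip through a 16-byte-aligned 512-byte area might look as follows, assuming GCC-style inline assembly on x86, CR4.OSFXSR set, and CPUID.01H:EDX.FXSR = 1 (the names are illustrative).

static unsigned char fxarea[512] __attribute__((aligned(16)));

void save_fpu_sse_state(void)
{
    __asm__ volatile ("fxsave %0" : "=m"(fxarea));
}

void restore_fpu_sse_state(void)
{
    __asm__ volatile ("fxrstor %0" : : "m"(fxarea));
    /* FXRSTOR does not flush pending x87 exceptions; follow with FWAIT to raise them. */
}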

+

Operation + ¶ +

+
IF 64-Bit Mode
+    THEN
+        (x87 FPU, MMX, XMM15-XMM0, MXCSR)
+                Load(SRC);
+    ELSE
+            (x87 FPU, MMX, XMM7-XMM0, MXCSR) := Load(SRC);
+FI;
+
+

x87 FPU and SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)For an illegal memory operand effective address in the CS, DS, ES, FS or GS segments.
If a memory operand is not aligned on a 16-byte boundary, regardless of segment. (See alignment check exception [#AC] below.)
For an attempt to set reserved bits in MXCSR.
#SS(0)For an illegal address in the SS segment.
#PF(fault-code)For a page fault.
#NMIf CR0.TS[bit 3] = 1.
If CR0.EM[bit 2] = 1.
#UDIf CPUID.01H:EDX.FXSR[bit 24] = 0.
If instruction is preceded by a LOCK prefix.
#ACIf this exception is disabled a general protection exception (#GP) is signaled if the memory operand is not aligned on a 16-byte boundary, as described above. If the alignment check exception (#AC) is enabled (and the CPL is 3), signaling of #AC is not guaranteed and may vary with implementation, as follows. In all implementations where #AC is not signaled, a general protection exception is signaled in its place. In addition, the width of the alignment check may also vary with implementation. For instance, for a given implementation, an alignment check exception might be signaled for a 2-byte misalignment, whereas a general protection exception might be signaled for all other misalignments (4-, 8-, or 16-byte misalignments).
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GPIf a memory operand is not aligned on a 16-byte boundary, regardless of segment.
If any part of the operand lies outside the effective address space from 0 to FFFFH.
For an attempt to set reserved bits in MXCSR.
#NMIf CR0.TS[bit 3] = 1.
If CR0.EM[bit 2] = 1.
#UDIf CPUID.01H:EDX.FXSR[bit 24] = 0.
If the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in real address mode.

+ + + + + + + + + +
#PF(fault-code)For a page fault.
#ACFor unaligned memory reference.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
If memory operand is not aligned on a 16-byte boundary, regardless of segment.
For an attempt to set reserved bits in MXCSR.
#PF(fault-code)For a page fault.
#NMIf CR0.TS[bit 3] = 1.
If CR0.EM[bit 2] = 1.
#UDIf CPUID.01H:EDX.FXSR[bit 24] = 0.
If instruction is preceded by a LOCK prefix.
#ACIf this exception is disabled a general protection exception (#GP) is signaled if the memory operand is not aligned on a 16-byte boundary, as described above. If the alignment check exception (#AC) is enabled (and the CPL is 3), signaling of #AC is not guaranteed and may vary with implementation, as follows. In all implementations where #AC is not signaled, a general protection exception is signaled in its place. In addition, the width of the alignment check may also vary with implementation. For instance, for a given implementation, an alignment check exception might be signaled for a 2-byte misalignment, whereas a general protection exception might be signaled for all other misalignments (4-, 8-, or 16-byte misalignments).
diff --git a/x86/fxsave.html b/x86/fxsave.html new file mode 100644 index 0000000..9796023 --- /dev/null +++ b/x86/fxsave.html @@ -0,0 +1,691 @@ + +FXSAVE + — Save x87 FPU, MMX Technology, and SSE State

FXSAVE + — Save x87 FPU, MMX Technology, and SSE State

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
NP 0F AE /0 FXSAVE m512byteMValidValidSave the x87 FPU, MMX, XMM, and MXCSR register state to m512byte.
NP REX.W + 0F AE /0 FXSAVE64 m512byteMValidN.E.Save the x87 FPU, MMX, XMM, and MXCSR register state to m512byte.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (w)N/AN/AN/A
+

Description + ¶ +

+

Saves the current state of the x87 FPU, MMX technology, XMM, and MXCSR registers to a 512-byte memory location specified in the destination operand. The content layout of the 512 byte region depends on whether the processor is operating in non-64-bit operating modes or 64-bit sub-mode of IA-32e mode.

+

Bytes 464:511 are available for software use. The processor does not write to bytes 464:511 of an FXSAVE area.

+

The operation of FXSAVE in non-64-bit modes is described first.

+

Non-64-Bit Mode Operation + ¶ +

+

Table 3-43 shows the layout of the state information in memory when the processor is operating in legacy modes.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
15 1413 1211109876543210
RsvdFCSFIP[31:0]FOPRsvdFTWFSWFCW0
MXCSR_MASKMXCSRRsrvdFDSFDP[31:0]16
ReservedST0/MM032
ReservedST1/MM148
ReservedST2/MM264
ReservedST3/MM380
ReservedST4/MM496
ReservedST5/MM5112
ReservedST6/MM6128
ReservedST7/MM7144
XMM0160
XMM1176
XMM2192
XMM3208
XMM4224
XMM5240
XMM6256
XMM7272
Reserved288
+
Table 3-43. Non-64-Bit-Mode Layout of FXSAVE and FXRSTOR Memory Region
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
15 1413 1211 109 87 6543 21 0
Reserved304
Reserved320
Reserved336
Reserved352
Reserved368
Reserved384
Reserved400
Reserved416
Reserved432
Reserved448
Available464
Available480
Available496
+
Table 3-43. Non-64-Bit-Mode Layout of FXSAVE and FXRSTOR Memory Region (Contd.)
+

The destination operand contains the first byte of the memory image, and it must be aligned on a 16-byte boundary. A misaligned destination operand will result in a general-protection (#GP) exception being generated (or in some cases, an alignment check exception [#AC]).

+

The FXSAVE instruction is used when an operating system needs to perform a context switch or when an exception handler needs to save and examine the current state of the x87 FPU, MMX technology, and/or XMM and MXCSR registers.

+

The fields in Table 3-43 are defined in Table 3-44.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDefinition
FCWx87 FPU Control Word (16 bits). See Figure 8-6 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for the layout of the x87 FPU control word.
FSWx87 FPU Status Word (16 bits). See Figure 8-4 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for the layout of the x87 FPU status word.
Abridged FTWx87 FPU Tag Word (8 bits). The tag information saved here is abridged, as described in the following paragraphs.
FOPx87 FPU Opcode (16 bits). The lower 11 bits of this field contain the opcode, upper 5 bits are reserved. See Figure 8-8 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for the layout of the x87 FPU opcode field.
FIPx87 FPU Instruction Pointer Offset (64 bits). The contents of this field differ depending on the current addressing mode (32-bit, 16-bit, or 64-bit) of the processor when the FXSAVE instruction was executed: 32-bit mode — 32-bit IP offset. 16-bit mode — low 16 bits are IP offset; high 16 bits are reserved. 64-bit mode with REX.W — 64-bit IP offset. 64-bit mode without REX.W — 32-bit IP offset. See “x87 FPU Instruction and Operand (Data) Pointers” in Chapter 8 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for a description of the x87 FPU instruction pointer.
FCSx87 FPU Instruction Pointer Selector (16 bits). If CPUID.(EAX=07H,ECX=0H):EBX[bit 13] = 1, the processor deprecates FCS and FDS, and this field is saved as 0000H.
FDPx87 FPU Instruction Operand (Data) Pointer Offset (64 bits). The contents of this field differ depending on the current addressing mode (32-bit, 16-bit, or 64-bit) of the processor when the FXSAVE instruction was executed: 32-bit mode — 32-bit DP offset. 16-bit mode — low 16 bits are DP offset; high 16 bits are reserved. 64-bit mode with REX.W — 64-bit DP offset. 64-bit mode without REX.W — 32-bit DP offset. See “x87 FPU Instruction and Operand (Data) Pointers” in Chapter 8 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for a description of the x87 FPU operand pointer.
FDSx87 FPU Instruction Operand (Data) Pointer Selector (16 bits). If CPUID.(EAX=07H,ECX=0H):EBX[bit 13] = 1, the processor deprecates FCS and FDS, and this field is saved as 0000H.
MXCSRMXCSR Register State (32 bits). See Figure 10-3 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for the layout of the MXCSR register. If the OSFXSR bit in control register CR4 is not set, the FXSAVE instruction may not save this register. This behavior is implementation dependent.
MXCSR_ MASKMXCSR_MASK (32 bits). This mask can be used to adjust values written to the MXCSR register, ensuring that reserved bits are set to 0. Set the mask bits and flags in MXCSR to the mode of operation desired for SSE and SSE2 SIMD floating-point instructions. See “Guidelines for Writing to the MXCSR Register” in Chapter 11 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for instructions for how to determine and use the MXCSR_MASK value.
ST0/MM0 through ST7/MM7x87 FPU or MMX technology registers. These 80-bit fields contain the x87 FPU data registers or the MMX technology registers, depending on the state of the processor prior to the execution of the FXSAVE instruction. If the processor had been executing x87 FPU instruction prior to the FXSAVE instruction, the x87 FPU data registers are saved; if it had been executing MMX instructions (or SSE or SSE2 instructions that operated on the MMX technology registers), the MMX technology registers are saved. When the MMX technology registers are saved, the high 16 bits of the field are reserved.
XMM0 through XMM7XMM registers (128 bits per field). If the OSFXSR bit in control register CR4 is not set, the FXSAVE instruction may not save these registers. This behavior is implementation dependent.
+
Table 3-44. Field Definitions
+
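A minimal sketch of the MXCSR_MASK guideline described above (the mask is at byte offset 28 of the FXSAVE image): AND the desired MXCSR value with the stored mask, substituting the documented default mask 0000FFBFH when the stored mask is zero. The function name is illustrative.

#include <stdint.h>
#include <string.h>

static uint32_t sanitize_mxcsr(const unsigned char *fxsave_image, uint32_t wanted)
{
    uint32_t mask;

    memcpy(&mask, fxsave_image + 28, sizeof mask);   /* MXCSR_MASK field              */
    if (mask == 0)
        mask = 0x0000FFBF;                           /* default mask when field is 0  */
    return wanted & mask;                            /* clear reserved/unsupported bits */
}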

The FXSAVE instruction saves an abridged version of the x87 FPU tag word in the FTW field (unlike the FSAVE instruction, which saves the complete tag word). The tag information is saved in physical register order (R0 through R7), rather than in top-of-stack (TOS) order. With the FXSAVE instruction, however, only a single bit (1 for valid or 0 for empty) is saved for each tag. For example, assume that the tag word is currently set as follows:

+

R7 R6 R5 R4 R3 R2 R1 R0

+

11 xx xx xx 11 11 11 11

+

Here, 11B indicates empty stack elements and “xx” indicates valid (00B), zero (01B), or special (10B).

+

For this example, the FXSAVE instruction saves only the following 8 bits of information:

+

R7 R6 R5 R4 R3 R2 R1 R0

+

01110000

+

Here, a 1 is saved for any valid, zero, or special tag, and a 0 is saved for any empty tag.

+

The operation of the FXSAVE instruction differs from that of the FSAVE instruction as follows:

+
    +
  • The FXSAVE instruction does not check for pending unmasked floating-point exceptions. (The FXSAVE operation in this regard is similar to the operation of the FNSAVE instruction.)
  • +
  • After the FXSAVE instruction has saved the state of the x87 FPU, MMX technology, XMM, and MXCSR registers, the processor retains the contents of the registers. Because of this behavior, the FXSAVE instruction cannot be used by an application program to pass a “clean” x87 FPU state to a procedure, since it retains the current state. To clean the x87 FPU state, an application must explicitly execute an FINIT instruction after an FXSAVE instruction to reinitialize the x87 FPU state.
  • +
  • The format of the memory image saved with the FXSAVE instruction is the same regardless of the current addressing mode (32-bit or 16-bit) and operating mode (protected, real address, or system management). This behavior differs from the FSAVE instructions, where the memory image format is different depending on the addressing mode and operating mode. Because of the different image formats, the memory image saved with the FXSAVE instruction cannot be restored correctly with the FRSTOR instruction, and likewise the state saved with the FSAVE instruction cannot be restored correctly with the FXRSTOR instruction.
+

The FSAVE format for FTW can be recreated from the FTW valid bits and the stored 80-bit floating-point data (assuming the stored data was not the contents of MMX technology registers) using Table 3-45.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Exponent all 1’sExponent all 0’sFraction all 0’sJ and M bitsFTW valid bitx87 FTW
0 0 0 0x 1 Special 10
0 0 0 1x 1 Valid 00
0 0 1 00 1 Special 10
0 0 1 10 1 Valid 00
0 1 0 0x 1 Special 10
0 1 0 1x 1 Special 10
0 1 1 00 1 Zero 01
0 1 1 10 1 Special 10
1 0 0 0x 1 Special 10
1 0 0 1x 1 Special 10
1 0 1 00 1 Special 10
1 0 1 10 1 Special 10
For all legal combinations above. 0 Empty 11
+
Table 3-45. Recreating FSAVE Format
+

The J-bit is defined to be the 1-bit binary integer to the left of the decimal place in the significand. The M-bit is defined to be the most significant bit of the fractional portion of the significand (i.e., the bit immediately to the right of the decimal place).

+

When the M-bit is the most significant bit of the fractional portion of the significand, it must be 0 if the fraction is all 0’s.
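A minimal sketch of the per-register classification in Table 3-45, assuming the saved 10-byte image holds x87 data rather than MMX state (the names are illustrative). Mapping abridged FTW bits, which are in physical register order, onto the ST0 through ST7 save slots additionally requires TOP from the saved status word.

#include <stdint.h>
#include <string.h>

/* Return the 2-bit FSAVE tag (00 valid, 01 zero, 10 special, 11 empty)
 * for one saved 80-bit register image and its abridged-FTW valid bit. */
static unsigned x87_tag(const unsigned char st[10], int valid_bit)
{
    if (!valid_bit)
        return 3;                                   /* empty                               */

    uint64_t significand;                           /* J bit (bit 63) plus 63-bit fraction */
    memcpy(&significand, st, 8);                    /* little-endian image                 */
    unsigned exponent = (((unsigned)st[9] << 8) | st[8]) & 0x7FFF;

    if (exponent == 0)
        return significand == 0 ? 1 : 2;            /* zero, else denormal/pseudo-denormal */
    if (exponent == 0x7FFF)
        return 2;                                   /* infinity or NaN                     */
    return (significand >> 63) ? 0 : 2;             /* valid, else unnormal                */
}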

+

IA-32e Mode Operation + ¶ +

+

In compatibility sub-mode of IA-32e mode, legacy SSE registers, XMM0 through XMM7, are saved according to the legacy FXSAVE map. In 64-bit mode, all of the SSE registers, XMM0 through XMM15, are saved. Additionally, there are two different layouts of the FXSAVE map in 64-bit mode, corresponding to FXSAVE64 (which requires REX.W=1) and FXSAVE (REX.W=0). In the FXSAVE64 map (Table 3-46), the FPU IP and FPU DP pointers are 64 bits wide. In the FXSAVE map for 64-bit mode (Table 3-47), the FPU IP and FPU DP pointers are 32 bits wide.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
15 1413 1211 109 876543210
FIPFOPReservedFTWFSWFCW0
MXCSR_MASKMXCSRFDP16
ReservedST0/MM032
ReservedST1/MM148
ReservedST2/MM264
ReservedST3/MM380
ReservedST4/MM496
ReservedST5/MM5112
ReservedST6/MM6128
ReservedST7/MM7144
XMM0160
XMM1176
XMM2192
XMM3208
XMM4224
XMM5240
XMM6256
XMM7272
XMM8288
XMM9304
XMM10320
XMM11336
XMM12352
XMM13368
XMM14384
XMM15400
Reserved416
Reserved432
Reserved448
Available464
Available480
Available496
+
Table 3-46. Layout of the 64-Bit Mode FXSAVE64 Map (Requires REX.W = 1)
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
15 1413 1211109876543210
ReservedFCSFIP[31:0]FOPReservedFTWFSWFCW0
MXCSR_MASKMXCSRReservedFDSFDP[31:0]16
ReservedST0/MM032
ReservedST1/MM148
ReservedST2/MM264
ReservedST3/MM380
ReservedST4/MM496
ReservedST5/MM5112
ReservedST6/MM6128
ReservedST7/MM7144
XMM0160
XMM1176
XMM2192
XMM3208
XMM4224
XMM5240
XMM6256
XMM7272
XMM8288
XMM9304
XMM10320
XMM11336
XMM12352
XMM13368
XMM14384
XMM15400
Reserved416
Reserved432
Reserved448
Available464
Available480
Available496
+
Table 3-47. Layout of the 64-Bit Mode FXSAVE Map (REX.W = 0)
+

Operation + ¶ +

+
IF 64-Bit Mode
+    THEN
+        IF REX.W = 1
+            THEN
+                DEST := Save64BitPromotedFxsave(x87 FPU, MMX, XMM15-XMM0,
+                MXCSR);
+            ELSE
+                DEST := Save64BitDefaultFxsave(x87 FPU, MMX, XMM15-XMM0, MXCSR);
+        FI;
+    ELSE
+        DEST := SaveLegacyFxsave(x87 FPU, MMX, XMM7-XMM0, MXCSR);
+FI;
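For reference, a minimal user-level usage sketch using the _fxsave64 and _fxrstor64 compiler intrinsics (an assumption of this sketch: a 64-bit target with FXSR intrinsic support, e.g., GCC/Clang with -mfxsr); as required above, the save area must be 512 bytes and 16-byte aligned:

#include <immintrin.h>
#include <stdint.h>

/* 512-byte, 16-byte-aligned save area as required by FXSAVE/FXSAVE64. */
static _Alignas(16) uint8_t fxarea[512];

void save_and_restore_x87_sse_state(void)
{
    _fxsave64(fxarea);    /* FXSAVE64 (REX.W = 1): 64-bit FPU IP/DP pointers */
    /* ... the x87/MMX/XMM registers may be used for other work here ... */
    _fxrstor64(fxarea);   /* companion restore; the image format must match the save */
}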
+
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)For an illegal memory operand effective address in the CS, DS, ES, FS or GS segments.
If a memory operand is not aligned on a 16-byte boundary, regardless of segment. (See the description of the alignment check exception [#AC] below.)
#SS(0)For an illegal address in the SS segment.
#PF(fault-code)For a page fault.
#NMIf CR0.TS[bit 3] = 1.
If CR0.EM[bit 2] = 1.
#UDIf CPUID.01H:EDX.FXSR[bit 24] = 0.
#UDIf the LOCK prefix is used.
#ACIf this exception is disabled a general protection exception (#GP) is signaled if the memory operand is not aligned on a 16-byte boundary, as described above. If the alignment check exception (#AC) is enabled (and the CPL is 3), signaling of #AC is not guaranteed and may vary with implementation, as follows. In all implementations where #AC is not signaled, a general protection exception is signaled in its place. In addition, the width of the alignment check may also vary with implementation. For instance, for a given implementation, an alignment check exception might be signaled for a 2-byte misalignment, whereas a general protection exception might be signaled for all other misalignments (4-, 8-, or 16-byte misalignments).
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GPIf a memory operand is not aligned on a 16-byte boundary, regardless of segment.
If any part of the operand lies outside the effective address space from 0 to FFFFH.
#NMIf CR0.TS[bit 3] = 1.
If CR0.EM[bit 2] = 1.
#UDIf CPUID.01H:EDX.FXSR[bit 24] = 0.
If the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in real address mode.

+ + + + + + + + + +
#PF(fault-code)For a page fault.
#ACFor unaligned memory reference.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
If memory operand is not aligned on a 16-byte boundary, regardless of segment.
#PF(fault-code)For a page fault.
#NMIf CR0.TS[bit 3] = 1.
If CR0.EM[bit 2] = 1.
#UDIf CPUID.01H:EDX.FXSR[bit 24] = 0.
If the LOCK prefix is used.
#ACIf this exception is disabled a general protection exception (#GP) is signaled if the memory operand is not aligned on a 16-byte boundary, as described above. If the alignment check exception (#AC) is enabled (and the CPL is 3), signaling of #AC is not guaranteed and may vary with implementation, as follows. In all implementations where #AC is not signaled, a general protection exception is signaled in its place. In addition, the width of the alignment check may also vary with implementation. For instance, for a given implementation, an alignment check exception might be signaled for a 2-byte misalignment, whereas a general protection exception might be signaled for all other misalignments (4-, 8-, or 16-byte misalignments).
+

Implementation Note + ¶ +

+

The order in which the processor signals general-protection (#GP) and page-fault (#PF) exceptions when they both occur on an instruction boundary is given in Table 5-2 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B. This order may vary for FXSAVE across different processor implementations.

diff --git a/x86/fxtract.html b/x86/fxtract.html new file mode 100644 index 0000000..560e918 --- /dev/null +++ b/x86/fxtract.html @@ -0,0 +1,90 @@ + +FXTRACT + — Extract Exponent and Significand

FXTRACT + — Extract Exponent and Significand

+ + + + + + + + + + + +
Opcode/Instruction64-Bit ModeCompat/Leg ModeDescription
D9 F4 FXTRACTValidValidSeparate value in ST(0) into exponent and significand, store exponent in ST(0), and push the significand onto the register stack.
+

Description + ¶ +

+

Separates the source value in the ST(0) register into its exponent and significand, stores the exponent in ST(0), and pushes the significand onto the register stack. Following this operation, the new top-of-stack register ST(0) contains the value of the original significand expressed as a floating-point value. The sign and significand of this value are the same as those found in the source operand, and the exponent is 3FFFH (biased value for a true exponent of zero). The ST(1) register contains the value of the original operand’s true (unbiased) exponent expressed as a floating-point value. (The operation performed by this instruction is a superset of the IEEE-recommended logb(x) function.)

+

This instruction and the F2XM1 instruction are useful for performing power and range scaling operations. The FXTRACT instruction is also useful for converting numbers in double extended-precision floating-point format to decimal representations (e.g., for printing or displaying).

+

If the floating-point zero-divide exception (#Z) is masked and the source operand is zero, an exponent value of –∞ is stored in register ST(1) and 0 with the sign of the source operand is stored in register ST(0).

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
TEMP := Significand(ST(0));
+ST(0) := Exponent(ST(0));
+TOP := TOP − 1;
+ST(0) := TEMP;
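The decomposition FXTRACT performs can be illustrated in C with frexp (a sketch of the arithmetic, not the x87 code path). Note the convention difference: frexp returns a significand in [0.5, 1), whereas FXTRACT leaves a significand in [1, 2) in ST(0) and an exponent one smaller in ST(1):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 12.5;
    int e;
    double m = frexp(x, &e);       /* x = m * 2^e with m in [0.5, 1) */

    double significand = m * 2.0;  /* what FXTRACT leaves in ST(0) */
    int    exponent    = e - 1;    /* what FXTRACT leaves in ST(1) */

    printf("%g = %g * 2^%d\n", x, significand, exponent);   /* 12.5 = 1.5625 * 2^3 */
    return 0;
}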
+
+

FPU Flags Affected + ¶ +

+ + + + + + +
C1Set to 0 if stack underflow occurred; set to 1 if stack overflow occurred.
C0, C2, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + + + + +
#ISStack underflow or overflow occurred.
#IASource operand is an SNaN value or unsupported format.
#ZST(0) operand is ±0.
#DSource operand is a denormal value.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/fyl2x.html b/x86/fyl2x.html new file mode 100644 index 0000000..54e6e38 --- /dev/null +++ b/x86/fyl2x.html @@ -0,0 +1,195 @@ + +FYL2X + — Compute y ∗ log2x

FYL2X + — Compute y ∗ log2x

+ + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D9 F1FYL2XValidValidReplace ST(1) with (ST(1) ∗ log2ST(0)) and pop the register stack.
+

Description + ¶ +

+

Computes (ST(1) ∗ log2 (ST(0))), stores the result in register ST(1), and pops the FPU register stack. The source operand in ST(0) must be a non-zero positive number.

+

The following table shows the results obtained when taking the log of various classes of numbers, assuming that neither overflow nor underflow occurs.

+
+
ST(1) \ ST(0) | −∞ | −F | ±0 | +0<+F<+1 | +1 | +F>+1 | +∞ | NaN
−∞ | * | * | +∞ | +∞ | * | −∞ | −∞ | NaN
−F | * | * | ** | +F | −0 | −F | −∞ | NaN
−0 | * | * | * | +0 | −0 | −0 | * | NaN
+0 | * | * | * | −0 | +0 | +0 | * | NaN
+F | * | * | ** | −F | +0 | +F | +∞ | NaN
+∞ | * | * | −∞ | −∞ | * | +∞ | +∞ | NaN
NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN
+
Table 3-48. FYL2X Results
+
+

F Means finite floating-point value.

+

* Indicates floating-point invalid-operation (#IA) exception.

+

** Indicates floating-point zero-divide (#Z) exception.

+

If the divide-by-zero exception is masked and register ST(0) contains ±0, the instruction returns ∞ with a sign that is the opposite of the sign of the source operand in register ST(1).

+

The FYL2X instruction is designed with a built-in multiplication to optimize the calculation of logarithms with an arbitrary positive base (b):

+

logb(x) := (log2(b))^(−1) ∗ log2(x)

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
ST(1) := ST(1) ∗ log2ST(0);
+PopRegisterStack;
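As an illustration of the base-conversion identity above (a C sketch of the arithmetic, not the x87 sequence): ST(1) would be loaded with the constant (log2(b))^(−1) before executing FYL2X.

#include <math.h>
#include <stdio.h>

int main(void)
{
    double b = 10.0, x = 1000.0;

    /* ST(1) would hold 1/log2(b); FYL2X multiplies it by log2(ST(0)). */
    double scale   = 1.0 / log2(b);
    double log_b_x = scale * log2(x);

    printf("log_%g(%g) = %g\n", b, x, log_b_x);   /* prints approximately 3 */
    return 0;
}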
+
+

FPU Flags Affected + ¶ +

+ + + + + + + + +
C1Set to 0 if stack underflow occurred.
Set if result was rounded up; cleared otherwise.
C0, C2, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + +
#ISStack underflow occurred.
#IAEither operand is an SNaN or unsupported format.
Source operand in register ST(0) is a negative finite value (not -0).
#ZSource operand in register ST(0) is ±0.
#DSource operand is a denormal value.
#UResult is too small for destination format.
#OResult is too large for destination format.
#PValue cannot be represented exactly in destination format.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/fyl2xp1.html b/x86/fyl2xp1.html new file mode 100644 index 0000000..408308f --- /dev/null +++ b/x86/fyl2xp1.html @@ -0,0 +1,198 @@ + +FYL2XP1 + — Compute y ∗ log2(x +1)

FYL2XP1 + — Compute y ∗ log2(x +1)

+ + + + + + + + + + + + + +
OpcodeInstruction64-Bit ModeCompat/Leg ModeDescription
D9 F9FYL2XP1ValidValidReplace ST(1) with ST(1) ∗ log2(ST(0) + 1.0) and pop the register stack.
+

Description + ¶ +

+

Computes (ST(1) ∗ log2(ST(0) + 1.0)), stores the result in register ST(1), and pops the FPU register stack. The source operand in ST(0) must be in the range:

+ −(1 − (√2 ⁄ 2)) to +(1 − (√2 ⁄ 2))

The source operand in ST(1) can range from −∞ to +∞. If the ST(0) operand is outside of its acceptable range, the result is undefined and software should not rely on an exception being generated. Under some circumstances exceptions may be generated when ST(0) is out of range, but this behavior is implementation specific and not guaranteed.

+

The following table shows the results obtained when taking the log epsilon of various classes of numbers, assuming that underflow does not occur.

+
+
ST(1) \ ST(0) | −(1−(√2⁄2)) to −0 | −0 | +0 | +0 to +(1−(√2⁄2)) | NaN
−∞ | +∞ | * | * | −∞ | NaN
−F | +F | +0 | −0 | −F | NaN
−0 | +0 | +0 | −0 | −0 | NaN
+0 | −0 | −0 | +0 | +0 | NaN
+F | −F | −0 | +0 | +F | NaN
+∞ | −∞ | * | * | +∞ | NaN
NaN | NaN | NaN | NaN | NaN | NaN
Table 3-49. FYL2XP1 Results
+
+

F Means finite floating-point value.

+

* Indicates floating-point invalid-operation (#IA) exception.

+

This instruction provides optimal accuracy for values of epsilon [the value in register ST(0)] that are close to 0. For small epsilon (ε) values, more significant digits can be retained by using the FYL2XP1 instruction than by using (ε+1) as an argument to the FYL2X instruction. The (ε+1) expression is commonly found in compound interest and annuity calculations. The result can be simply converted into a value in another logarithm base by including a scale factor in the ST(1) source operand. The following equation is used to calculate the scale factor for a particular logarithm base, where n is the logarithm base desired for the result of the FYL2XP1 instruction:

+

scale factor := logn 2

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.
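The accuracy point about small ε above can be illustrated in C with log1p (a sketch of the numerical issue, not the x87 sequence): forming 1 + ε first discards the low-order bits of ε that FYL2XP1, like log1p, preserves.

#include <math.h>
#include <stdio.h>

int main(void)
{
    double eps = 1e-18;                       /* tiny interest rate / epsilon */

    double naive  = log2(1.0 + eps);          /* 1.0 + eps rounds to 1.0, so this is 0 */
    double better = log1p(eps) / log(2.0);    /* keeps eps's significant digits */

    printf("log2(1+eps): naive = %g, accurate = %g\n", naive, better);
    return 0;
}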

+

Operation + ¶ +

+
ST(1) := ST(1) ∗ log2(ST(0) + 1.0);
+PopRegisterStack;
+
+

FPU Flags Affected + ¶ +

+ + + + + + + + +
C1Set to 0 if stack underflow occurred.
Set if result was rounded up; cleared otherwise.
C0, C2, C3Undefined.
+

Floating-Point Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#ISStack underflow occurred.
#IAEither operand is an SNaN value or unsupported format.
#DSource operand is a denormal value.
#UResult is too small for destination format.
#OResult is too large for destination format.
#PValue cannot be represented exactly in destination format.
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#NMCR0.EM[bit 2] or CR0.TS[bit 3] = 1.
#MFIf there is a pending x87 FPU exception.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/gf2p8affineinvqb.html b/x86/gf2p8affineinvqb.html new file mode 100644 index 0000000..ce516e2 --- /dev/null +++ b/x86/gf2p8affineinvqb.html @@ -0,0 +1,474 @@ + +GF2P8AFFINEINVQB + — Galois Field Affine Transformation Inverse

GF2P8AFFINEINVQB + — Galois Field Affine Transformation Inverse

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F3A CF /r /ib GF2P8AFFINEINVQB xmm1, xmm2/m128, imm8AV/VGFNIComputes inverse affine transformation in the finite field GF(2^8).
VEX.128.66.0F3A.W1 CF /r /ib VGF2P8AFFINEINVQB xmm1, xmm2, xmm3/m128, imm8BV/VAVX GFNIComputes inverse affine transformation in the finite field GF(2^8).
VEX.256.66.0F3A.W1 CF /r /ib VGF2P8AFFINEINVQB ymm1, ymm2, ymm3/m256, imm8BV/VAVX GFNIComputes inverse affine transformation in the finite field GF(2^8).
EVEX.128.66.0F3A.W1 CF /r /ib VGF2P8AFFINEINVQB xmm1{k1}{z}, xmm2, xmm3/m128/m64bcst, imm8CV/VAVX512VL GFNIComputes inverse affine transformation in the finite field GF(2^8).
EVEX.256.66.0F3A.W1 CF /r /ib VGF2P8AFFINEINVQB ymm1{k1}{z}, ymm2, ymm3/m256/m64bcst, imm8CV/VAVX512VL GFNIComputes inverse affine transformation in the finite field GF(2^8).
EVEX.512.66.0F3A.W1 CF /r /ib VGF2P8AFFINEINVQB zmm1{k1}{z}, zmm2, zmm3/m512/m64bcst, imm8CV/VAVX512F GFNIComputes inverse affine transformation in the finite field GF(2^8).
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)imm8 (r)N/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8 (r)
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8 (r)
+

Description + ¶ +

+

The AFFINEINVB instruction computes an affine transformation in the Galois field GF(2^8). For this instruction, an affine transformation is defined by A * inv(x) + b where “A” is an 8 by 8 bit matrix, and “x” and “b” are 8-bit vectors. The inverse of the bytes in x is defined with respect to the reduction polynomial x^8 + x^4 + x^3 + x + 1.

+

One SIMD register (operand 1) holds “x” as either 16, 32 or 64 8-bit vectors. A second SIMD (operand 2) register or memory operand contains 2, 4, or 8 “A” values, which are operated upon by the correspondingly aligned 8 “x” values in the first register. The “b” vector is constant for all calculations and contained in the immediate byte.

+

The EVEX encoded form of this instruction does not support memory fault suppression. The SSE encoded forms of the instruction require 16B alignment on their memory operations.

+

The inverse of each byte is given by the following table. The upper nibble is on the vertical axis and the lower nibble is on the horizontal axis. For example, the inverse of 0x95 is 0x8A.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
-0123456789ABCDEF
0018DF6CB527BD1E84F29C0B0E1E5C7
174B4AA4B992B605F583FFDCCFF40EEB2
23A6E5AF1554DA8C9C1A98153044A2C2
32C45926CF3396642F235206F77BB5919
41DFE37672D31F569A764AB135425E99
5ED5C5CA4C2487BF183E22F051EC6117
6165EAFD349A63643F44791DF3393213B
779B7978510B5BA3CB670D06A1FA8182
8837E7F809673BE569B9E95D9F72B9A4
9DE6A326DD88A84722A149F88F9DC899A
AFB7C2EC38FB8654826C8124ACEE7D262
BCE01FEF11757871A58E763DBDBC8657
CB282FA3DAD4E4FA9275341BFCACE6
D7A7AE63C5DBE2EA948BC4D59DF8906B
EB1DD6EBC6ECFAD84ED7E35D501EB3
F5B233834684638CDD9C7DA0CD1A411C
+
Table 3-50. Inverse Byte Listings
+

Operation + ¶ +

+
define affine_inverse_byte(tsrc2qw, src1byte, imm):
+    FOR i := 0 to 7:
+        * parity(x) = 1 if x has an odd number of 1s in it, and 0 otherwise.*
+        * inverse(x) is defined in the table above *
+        retbyte.bit[i] := parity(tsrc2qw.byte[7-i] AND inverse(src1byte)) XOR imm8.bit[i]
+    return retbyte
+
+

VGF2P8AFFINEINVQB dest, src1, src2, imm8 (EVEX Encoded Version) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1:
+    IF SRC2 is memory and EVEX.b==1:
+        tsrc2 := SRC2.qword[0]
+    ELSE:
+        tsrc2 := SRC2.qword[j]
+    FOR b := 0 to 7:
+        IF k1[j*8+b] OR *no writemask*:
+            FOR i := 0 to 7:
+                DEST.qword[j].byte[b] := affine_inverse_byte(tsrc2, SRC1.qword[j].byte[b], imm8)
+        ELSE IF *zeroing*:
+            DEST.qword[j].byte[b] := 0
+        *ELSE DEST.qword[j].byte[b] remains unchanged*
+DEST[MAX_VL-1:VL] := 0
+
+

VGF2P8AFFINEINVQB dest, src1, src2, imm8 (128b and 256b VEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256)
+FOR j := 0 TO KL-1:
+    FOR b := 0 to 7:
+        DEST.qword[j].byte[b] := affine_inverse_byte(SRC2.qword[j], SRC1.qword[j].byte[b], imm8)
+DEST[MAX_VL-1:VL] := 0
+
+

GF2P8AFFINEINVQB srcdest, src1, imm8 (128b SSE Encoded Version) + ¶ +

+
FOR j := 0 TO 1:
+    FOR b := 0 to 7:
+        SRCDEST.qword[j].byte[b] := affine_inverse_byte(SRC1.qword[j], SRCDEST.qword[j].byte[b], imm8)
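For illustration, a self-contained C model of affine_inverse_byte (an assumed software sketch mirroring the pseudocode above, not the hardware implementation); the GF(2^8) inverse is found here by brute-force search using a multiply reduced by 0x11B:

#include <stdint.h>

/* GF(2^8) multiply with reduction polynomial x^8 + x^4 + x^3 + x + 1 (0x11B). */
static uint8_t gf_mul(uint8_t a, uint8_t b)
{
    uint16_t t = 0;
    for (int i = 0; i < 8; i++)
        if (b & (1u << i))
            t ^= (uint16_t)a << i;
    for (int i = 14; i >= 8; i--)
        if (t & (1u << i))
            t ^= (uint16_t)(0x11B << (i - 8));
    return (uint8_t)t;
}

/* Multiplicative inverse in GF(2^8); inverse(0) is taken to be 0, matching Table 3-50. */
static uint8_t gf_inverse(uint8_t x)
{
    if (x == 0)
        return 0;
    for (unsigned c = 1; c < 256; c++)
        if (gf_mul(x, (uint8_t)c) == 1)
            return (uint8_t)c;
    return 0;   /* not reached for a field */
}

static int parity8(uint8_t v)
{
    v ^= v >> 4; v ^= v >> 2; v ^= v >> 1;
    return v & 1;
}

/* bit i of the result = parity(A.byte[7-i] AND inverse(x)) XOR b.bit[i] */
static uint8_t affine_inverse_byte(uint64_t A, uint8_t x, uint8_t b)
{
    uint8_t inv = gf_inverse(x), r = 0;
    for (int i = 0; i < 8; i++) {
        uint8_t row = (uint8_t)(A >> (8 * (7 - i)));   /* A.byte[7-i], byte 0 = LSB */
        r |= (uint8_t)((parity8(row & inv) ^ ((b >> i) & 1)) << i);
    }
    return r;
}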
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
(V)GF2P8AFFINEINVQB __m128i _mm_gf2p8affineinv_epi64_epi8(__m128i, __m128i, int);
+
+
(V)GF2P8AFFINEINVQB __m128i _mm_mask_gf2p8affineinv_epi64_epi8(__m128i, __mmask16, __m128i, __m128i, int);
+
+
(V)GF2P8AFFINEINVQB __m128i _mm_maskz_gf2p8affineinv_epi64_epi8(__mmask16, __m128i, __m128i, int);
+
+
VGF2P8AFFINEINVQB __m256i _mm256_gf2p8affineinv_epi64_epi8(__m256i, __m256i, int);
+
+
VGF2P8AFFINEINVQB __m256i _mm256_mask_gf2p8affineinv_epi64_epi8(__m256i, __mmask32, __m256i, __m256i, int);
+
+
VGF2P8AFFINEINVQB __m256i _mm256_maskz_gf2p8affineinv_epi64_epi8(__mmask32, __m256i, __m256i, int);
+
+
VGF2P8AFFINEINVQB __m512i _mm512_gf2p8affineinv_epi64_epi8(__m512i, __m512i, int);
+
+
VGF2P8AFFINEINVQB __m512i _mm512_mask_gf2p8affineinv_epi64_epi8(__m512i, __mmask64, __m512i, __m512i, int);
+
+
VGF2P8AFFINEINVQB __m512i _mm512_maskz_gf2p8affineinv_epi64_epi8(__mmask64, __m512i, __m512i, int);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Legacy-encoded and VEX-encoded: See Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded: See Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/gf2p8affineqb.html b/x86/gf2p8affineqb.html new file mode 100644 index 0000000..7ecb533 --- /dev/null +++ b/x86/gf2p8affineqb.html @@ -0,0 +1,166 @@ + +GF2P8AFFINEQB + — Galois Field Affine Transformation

GF2P8AFFINEQB + — Galois Field Affine Transformation

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F3A CE /r /ib GF2P8AFFINEQB xmm1, xmm2/m128, imm8AV/VGFNIComputes affine transformation in the finite field GF(2^8).
VEX.128.66.0F3A.W1 CE /r /ib VGF2P8AFFINEQB xmm1, xmm2, xmm3/m128, imm8BV/VAVX GFNIComputes affine transformation in the finite field GF(2^8).
VEX.256.66.0F3A.W1 CE /r /ib VGF2P8AFFINEQB ymm1, ymm2, ymm3/m256, imm8BV/VAVX GFNIComputes affine transformation in the finite field GF(2^8).
EVEX.128.66.0F3A.W1 CE /r /ib VGF2P8AFFINEQB xmm1{k1}{z}, xmm2, xmm3/m128/m64bcst, imm8CV/VAVX512VL GFNIComputes affine transformation in the finite field GF(2^8).
EVEX.256.66.0F3A.W1 CE /r /ib VGF2P8AFFINEQB ymm1{k1}{z}, ymm2, ymm3/m256/m64bcst, imm8CV/VAVX512VL GFNIComputes affine transformation in the finite field GF(2^8).
EVEX.512.66.0F3A.W1 CE /r /ib VGF2P8AFFINEQB zmm1{k1}{z}, zmm2, zmm3/m512/m64bcst, imm8CV/VAVX512F GFNIComputes affine transformation in the finite field GF(2^8).
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)imm8 (r)N/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8 (r)
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8 (r)
+

Description + ¶ +

+

The AFFINEB instruction computes an affine transformation in the Galois field GF(2^8). For this instruction, an affine transformation is defined by A * x + b where “A” is an 8 by 8 bit matrix, and “x” and “b” are 8-bit vectors. One SIMD register (operand 1) holds “x” as either 16, 32 or 64 8-bit vectors. A second SIMD (operand 2) register or memory operand contains 2, 4, or 8 “A” values, which are operated upon by the correspondingly aligned 8 “x” values in the first register. The “b” vector is constant for all calculations and contained in the immediate byte.

+

The EVEX encoded form of this instruction does not support memory fault suppression. The SSE encoded forms of the instruction require 16B alignment on their memory operations.

+

Operation + ¶ +

+
define parity(x):
+    t := 0 // single bit
+    FOR i := 0 to 7:
+        t = t xor x.bit[i]
+    return t
+define affine_byte(tsrc2qw, src1byte, imm):
+    FOR i := 0 to 7:
+        * parity(x) = 1 if x has an odd number of 1s in it, and 0 otherwise.*
+        retbyte.bit[i] := parity(tsrc2qw.byte[7-i] AND src1byte) XOR imm8.bit[i]
+    return retbyte
+
+

VGF2P8AFFINEQB dest, src1, src2, imm8 (EVEX Encoded Version) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1:
+    IF SRC2 is memory and EVEX.b==1:
+        tsrc2 := SRC2.qword[0]
+    ELSE:
+        tsrc2 := SRC2.qword[j]
+    FOR b := 0 to 7:
+        IF k1[j*8+b] OR *no writemask*:
+            DEST.qword[j].byte[b] := affine_byte(tsrc2, SRC1.qword[j].byte[b], imm8)
+        ELSE IF *zeroing*:
+            DEST.qword[j].byte[b] := 0
+        *ELSE DEST.qword[j].byte[b] remains unchanged*
+DEST[MAX_VL-1:VL] := 0
+
+

VGF2P8AFFINEQB dest, src1, src2, imm8 (128b and 256b VEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256)
+FOR j := 0 TO KL-1:
+    FOR b := 0 to 7:
+        DEST.qword[j].byte[b] := affine_byte(SRC2.qword[j], SRC1.qword[j].byte[b], imm8)
+DEST[MAX_VL-1:VL] := 0
+
+

GF2P8AFFINEQB srcdest, src1, imm8 (128b SSE Encoded Version) + ¶ +

+
FOR j := 0 TO 1:
+    FOR b := 0 to 7:
+        SRCDEST.qword[j].byte[b] := affine_byte(SRC1.qword[j], SRCDEST.qword[j].byte[b], imm8)
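For illustration, a self-contained C model of affine_byte (an assumed software sketch mirroring the pseudocode above, not the hardware implementation). Under this bit/byte numbering the matrix constant 0x0102040810204080 encodes the identity transformation, which the check below exercises:

#include <assert.h>
#include <stdint.h>

static int parity8(uint8_t v)
{
    v ^= v >> 4; v ^= v >> 2; v ^= v >> 1;
    return v & 1;
}

/* bit i of the result = parity(A.byte[7-i] AND x) XOR b.bit[i] */
static uint8_t affine_byte(uint64_t A, uint8_t x, uint8_t b)
{
    uint8_t r = 0;
    for (int i = 0; i < 8; i++) {
        uint8_t row = (uint8_t)(A >> (8 * (7 - i)));   /* A.byte[7-i], byte 0 = LSB */
        r |= (uint8_t)((parity8(row & x) ^ ((b >> i) & 1)) << i);
    }
    return r;
}

int main(void)
{
    /* byte[7-i] = (1 << i): the identity matrix, so A*x + 0 should return x. */
    const uint64_t identity = 0x0102040810204080ull;
    for (unsigned x = 0; x < 256; x++)
        assert(affine_byte(identity, (uint8_t)x, 0) == (uint8_t)x);
    return 0;
}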
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
(V)GF2P8AFFINEQB __m128i _mm_gf2p8affine_epi64_epi8(__m128i, __m128i, int);
+
+
(V)GF2P8AFFINEQB __m128i _mm_mask_gf2p8affine_epi64_epi8(__m128i, __mmask16, __m128i, __m128i, int);
+
+
(V)GF2P8AFFINEQB __m128i _mm_maskz_gf2p8affine_epi64_epi8(__mmask16, __m128i, __m128i, int);
+
+
VGF2P8AFFINEQB __m256i _mm256_gf2p8affine_epi64_epi8(__m256i, __m256i, int);
+
+
VGF2P8AFFINEQB __m256i _mm256_mask_gf2p8affine_epi64_epi8(__m256i, __mmask32, __m256i, __m256i, int);
+
+
VGF2P8AFFINEQB __m256i _mm256_maskz_gf2p8affine_epi64_epi8(__mmask32, __m256i, __m256i, int);
+
+
VGF2P8AFFINEQB __m512i _mm512_gf2p8affine_epi64_epi8(__m512i, __m512i, int);
+
+
VGF2P8AFFINEQB __m512i _mm512_mask_gf2p8affine_epi64_epi8(__m512i, __mmask64, __m512i, __m512i, int);
+
+
VGF2P8AFFINEQB __m512i _mm512_maskz_gf2p8affine_epi64_epi8(__mmask64, __m512i, __m512i, int);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Legacy-encoded and VEX-encoded: See Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded: See Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/gf2p8mulb.html b/x86/gf2p8mulb.html new file mode 100644 index 0000000..38a338a --- /dev/null +++ b/x86/gf2p8mulb.html @@ -0,0 +1,162 @@ + +GF2P8MULB + — Galois Field Multiply Bytes

GF2P8MULB + — Galois Field Multiply Bytes

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F38 CF /r GF2P8MULB xmm1, xmm2/m128AV/VGFNIMultiplies elements in the finite field GF(2^8).
VEX.128.66.0F38.W0 CF /r VGF2P8MULB xmm1, xmm2, xmm3/m128BV/VAVX GFNIMultiplies elements in the finite field GF(2^8).
VEX.256.66.0F38.W0 CF /r VGF2P8MULB ymm1, ymm2, ymm3/m256BV/VAVX GFNIMultiplies elements in the finite field GF(2^8).
EVEX.128.66.0F38.W0 CF /r VGF2P8MULB xmm1{k1}{z}, xmm2, xmm3/m128CV/VAVX512VL GFNIMultiplies elements in the finite field GF(2^8).
EVEX.256.66.0F38.W0 CF /r VGF2P8MULB ymm1{k1}{z}, ymm2, ymm3/m256CV/VAVX512VL GFNIMultiplies elements in the finite field GF(2^8).
EVEX.512.66.0F38.W0 CF /r VGF2P8MULB zmm1{k1}{z}, zmm2, zmm3/m512CV/VAVX512F GFNIMultiplies elements in the finite field GF(2^8).
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

The instruction multiplies elements in the finite field GF(2^8), operating on a byte (field element) in the first source operand and the corresponding byte in a second source operand. The field GF(2^8) is represented in polynomial representation with the reduction polynomial x^8 + x^4 + x^3 + x + 1.

+

This instruction does not support broadcasting.

+

The EVEX encoded form of this instruction supports memory fault suppression. The SSE encoded forms of the instruction require 16B alignment on their memory operations.

+

Operation + ¶ +

+
define gf2p8mul_byte(src1byte, src2byte):
+    tword := 0
+    FOR i := 0 to 7:
+        IF src2byte.bit[i]:
+            tword := tword XOR (src1byte<< i)
+        * carry out polynomial reduction by the characteristic polynomial p*
+    FOR i := 14 downto 8:
+        p := 0x11B << (i-8)
+                *0x11B = 0000_0001_0001_1011 in binary*
+        IF tword.bit[i]:
+            tword := tword XOR p
+return tword.byte[0]
+
+

VGF2P8MULB dest, src1, src2 (EVEX Encoded Version) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        DEST.byte[j] := gf2p8mul_byte(SRC1.byte[j], SRC2.byte[j])
+    ELSE IF *zeroing*:
+        DEST.byte[j] := 0
+    * ELSE DEST.byte[j] remains unchanged*
+DEST[MAX_VL-1:VL] := 0
+
+

VGF2P8MULB dest, src1, src2 (128b and 256b VEX Encoded Versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256)
+FOR j := 0 TO KL-1:
+    DEST.byte[j] := gf2p8mul_byte(SRC1.byte[j], SRC2.byte[j])
+DEST[MAX_VL-1:VL] := 0
+
+

GF2P8MULB srcdest, src1 (128b SSE Encoded Version) + ¶ +

+
FOR j := 0 TO 15:
+    SRCDEST.byte[j] := gf2p8mul_byte(SRCDEST.byte[j], SRC1.byte[j])
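For illustration, the same reduction written in C (an assumed software sketch of gf2p8mul_byte, not the hardware implementation); the check uses the known GF(2^8) pair 0x53 · 0xCA = 0x01, consistent with the inverse listing on the GF2P8AFFINEINVQB page:

#include <assert.h>
#include <stdint.h>

/* Carry-less multiply of two bytes, then reduce by x^8 + x^4 + x^3 + x + 1 (0x11B). */
static uint8_t gf2p8mul_byte(uint8_t a, uint8_t b)
{
    uint16_t t = 0;
    for (int i = 0; i < 8; i++)
        if (b & (1u << i))
            t ^= (uint16_t)a << i;
    for (int i = 14; i >= 8; i--)
        if (t & (1u << i))
            t ^= (uint16_t)(0x11B << (i - 8));
    return (uint8_t)t;
}

int main(void)
{
    assert(gf2p8mul_byte(0x53, 0xCA) == 0x01);   /* 0x53 and 0xCA are inverses */
    assert(gf2p8mul_byte(0x00, 0xFF) == 0x00);   /* multiplication by zero */
    return 0;
}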
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
(V)GF2P8MULB __m128i _mm_gf2p8mul_epi8(__m128i, __m128i);
+
+
(V)GF2P8MULB __m128i _mm_mask_gf2p8mul_epi8(__m128i, __mmask16, __m128i, __m128i);
+
+
(V)GF2P8MULB __m128i _mm_maskz_gf2p8mul_epi8(__mmask16, __m128i, __m128i);
+
+
VGF2P8MULB __m256i _mm256_gf2p8mul_epi8(__m256i, __m256i);
+
+
VGF2P8MULB __m256i _mm256_mask_gf2p8mul_epi8(__m256i, __mmask32, __m256i, __m256i);
+
+
VGF2P8MULB __m256i _mm256_maskz_gf2p8mul_epi8(__mmask32, __m256i, __m256i);
+
+
VGF2P8MULB __m512i _mm512_gf2p8mul_epi8(__m512i, __m512i);
+
+
VGF2P8MULB __m512i _mm512_mask_gf2p8mul_epi8(__m512i, __mmask64, __m512i, __m512i);
+
+
VGF2P8MULB __m512i _mm512_maskz_gf2p8mul_epi8(__mmask64, __m512i, __m512i);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Legacy-encoded and VEX-encoded: See Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded: See Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/haddpd.html b/x86/haddpd.html new file mode 100644 index 0000000..923f60c --- /dev/null +++ b/x86/haddpd.html @@ -0,0 +1,267 @@ + +HADDPD + — Packed Double Precision Floating-Point Horizontal Add

HADDPD + — Packed Double Precision Floating-Point Horizontal Add

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
66 0F 7C /r HADDPD xmm1, xmm2/m128RMV/VSSE3Horizontal add packed double precision floating-point values from xmm2/m128 to xmm1.
VEX.128.66.0F.WIG 7C /r VHADDPD xmm1,xmm2, xmm3/m128RVMV/VAVXHorizontal add packed double precision floating-point values from xmm2 and xmm3/mem.
VEX.256.66.0F.WIG 7C /r VHADDPD ymm1, ymm2, ymm3/m256RVMV/VAVXHorizontal add packed double precision floating-point values from ymm2 and ymm3/mem.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r, w)ModRM:r/m (r)N/AN/A
RVMModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Adds the double precision floating-point values in the high and low quadwords of the destination operand and stores the result in the low quadword of the destination operand.

+

Adds the double precision floating-point values in the high and low quadwords of the source operand and stores the result in the high quadword of the destination operand.

+

In 64-bit mode, use of the REX.R prefix permits this instruction to access additional registers (XMM8-XMM15).

+

See Figure 3-17 for HADDPD; see Figure 3-18 for VHADDPD.

+
+
Figure 3-17. HADDPD—Packed Double Precision Floating-Point Horizontal Add
+
+
Figure 3-18. VHADDPD Operation
+

128-bit Legacy SSE version: The second source can be an XMM register or an 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding YMM register destination are unmodified.

+

VEX.128 encoded version: the first source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding YMM register destination are zeroed.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+

Operation + ¶ +

+

HADDPD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := SRC1[127:64] + SRC1[63:0]
+DEST[127:64] := SRC2[127:64] + SRC2[63:0]
+DEST[MAXVL-1:128] (Unmodified)
+
+

VHADDPD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[127:64] + SRC1[63:0]
+DEST[127:64] := SRC2[127:64] + SRC2[63:0]
+DEST[MAXVL-1:128] := 0
+
+

VHADDPD (VEX.256 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[127:64] + SRC1[63:0]
+DEST[127:64] := SRC2[127:64] + SRC2[63:0]
+DEST[191:128] := SRC1[255:192] + SRC1[191:128]
+DEST[255:192] := SRC2[255:192] + SRC2[191:128]
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VHADDPD __m256d _mm256_hadd_pd (__m256d a, __m256d b);
+
+
HADDPD __m128d _mm_hadd_pd (__m128d a, __m128d b);
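A minimal usage sketch of the _mm_hadd_pd intrinsic listed above (this sketch assumes a compiler targeting SSE3, e.g., -msse3), checking the lane arithmetic given in the Operation section:

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128d a = _mm_set_pd(2.0, 1.0);    /* a[0] = 1.0,  a[1] = 2.0  */
    __m128d b = _mm_set_pd(40.0, 30.0);  /* b[0] = 30.0, b[1] = 40.0 */

    __m128d r = _mm_hadd_pd(a, b);       /* r[0] = a[1]+a[0], r[1] = b[1]+b[0] */

    double out[2];
    _mm_storeu_pd(out, r);
    printf("%g %g\n", out[0], out[1]);   /* prints: 3 70 */
    return 0;
}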
+
+

Exceptions + ¶ +

+

When the source operand is a memory operand, the operand must be aligned on a 16-byte boundary or a general-protection exception (#GP) will be generated.

+

Numeric Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

See Table 2-19, “Type 2 Class Exception Conditions.”

diff --git a/x86/haddps.html b/x86/haddps.html new file mode 100644 index 0000000..394b05c --- /dev/null +++ b/x86/haddps.html @@ -0,0 +1,399 @@ + +HADDPS + — Packed Single Precision Floating-Point Horizontal Add

HADDPS + — Packed Single Precision Floating-Point Horizontal Add

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
F2 0F 7C /r HADDPS xmm1, xmm2/m128RMV/VSSE3Horizontal add packed single precision floating-point values from xmm2/m128 to xmm1.
VEX.128.F2.0F.WIG 7C /r VHADDPS xmm1, xmm2, xmm3/m128RVMV/VAVXHorizontal add packed single precision floating-point values from xmm2 and xmm3/mem.
VEX.256.F2.0F.WIG 7C /r VHADDPS ymm1, ymm2, ymm3/m256RVMV/VAVXHorizontal add packed single precision floating-point values from ymm2 and ymm3/mem.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r, w)ModRM:r/m (r)N/AN/A
RVMModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Adds the single precision floating-point values in the first and second dwords of the destination operand and stores the result in the first dword of the destination operand.

+

Adds single precision floating-point values in the third and fourth dword of the destination operand and stores the result in the second dword of the destination operand.

+

Adds single precision floating-point values in the first and second dword of the source operand and stores the result in the third dword of the destination operand.

+

Adds single precision floating-point values in the third and fourth dword of the source operand and stores the result in the fourth dword of the destination operand.

+

In 64-bit mode, use of the REX.R prefix permits this instruction to access additional registers (XMM8-XMM15).

+

See Figure 3-19 for HADDPS; see Figure 3-20 for VHADDPS.

+
+
Figure 3-19. HADDPS—Packed Single Precision Floating-Point Horizontal Add
+
+
Figure 3-20. VHADDPS Operation
+

128-bit Legacy SSE version: The second source can be an XMM register or an 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding YMM register destination are unmodified.

+

VEX.128 encoded version: the first source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding YMM register destination are zeroed.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+

Operation + ¶ +

+

HADDPS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := SRC1[63:32] + SRC1[31:0]
+DEST[63:32] := SRC1[127:96] + SRC1[95:64]
+DEST[95:64] := SRC2[63:32] + SRC2[31:0]
+DEST[127:96] := SRC2[127:96] + SRC2[95:64]
+DEST[MAXVL-1:128] (Unmodified)
+
+

VHADDPS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[63:32] + SRC1[31:0]
+DEST[63:32] := SRC1[127:96] + SRC1[95:64]
+DEST[95:64] := SRC2[63:32] + SRC2[31:0]
+DEST[127:96] := SRC2[127:96] + SRC2[95:64]
+DEST[MAXVL-1:128] := 0
+
+

VHADDPS (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[63:32] + SRC1[31:0]
+DEST[63:32] := SRC1[127:96] + SRC1[95:64]
+DEST[95:64] := SRC2[63:32] + SRC2[31:0]
+DEST[127:96] := SRC2[127:96] + SRC2[95:64]
+DEST[159:128] := SRC1[191:160] + SRC1[159:128]
+DEST[191:160] := SRC1[255:224] + SRC1[223:192]
+DEST[223:192] := SRC2[191:160] + SRC2[159:128]
+DEST[255:224] := SRC2[255:224] + SRC2[223:192]
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
HADDPS __m128 _mm_hadd_ps (__m128 a, __m128 b);
+
+
VHADDPS __m256 _mm256_hadd_ps (__m256 a, __m256 b);
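A minimal usage sketch of the _mm_hadd_ps intrinsic listed above (this sketch assumes a compiler targeting SSE3), following the dword pairing from the Operation section:

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);     /* a = {1, 2, 3, 4}     */
    __m128 b = _mm_set_ps(80.0f, 60.0f, 40.0f, 20.0f); /* b = {20, 40, 60, 80} */

    __m128 r = _mm_hadd_ps(a, b);        /* {a0+a1, a2+a3, b0+b1, b2+b3} */

    float out[4];
    _mm_storeu_ps(out, r);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);   /* 3 7 60 140 */
    return 0;
}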
+
+

Exceptions + ¶ +

+

When the source operand is a memory operand, the operand must be aligned on a 16-byte boundary or a general-protection exception (#GP) will be generated.

+

Numeric Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

See Table 2-19, “Type 2 Class Exception Conditions.”

diff --git a/x86/hlt.html b/x86/hlt.html new file mode 100644 index 0000000..5efcdbd --- /dev/null +++ b/x86/hlt.html @@ -0,0 +1,82 @@ + +HLT + — Halt

HLT + — Halt

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
F4HLTZOValidValidHalt
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Stops instruction execution and places the processor in a HALT state. An enabled interrupt (including NMI and SMI), a debug exception, the BINIT# signal, the INIT# signal, or the RESET# signal will resume execution. If an interrupt (including NMI) is used to resume execution after a HLT instruction, the saved instruction pointer (CS:EIP) points to the instruction following the HLT instruction.

+

When a HLT instruction is executed on an Intel 64 or IA-32 processor supporting Intel Hyper-Threading Technology, only the logical processor that executes the instruction is halted. The other logical processors in the physical processor remain active, unless they are each individually halted by executing a HLT instruction.

+

The HLT instruction is a privileged instruction. When the processor is running in protected or virtual-8086 mode, the privilege level of a program or procedure must be 0 to execute the HLT instruction.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
Enter Halt state;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + +
#GP(0)If the current privilege level is not 0.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

None.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/hreset.html b/x86/hreset.html new file mode 100644 index 0000000..9835ab4 --- /dev/null +++ b/x86/hreset.html @@ -0,0 +1,92 @@ + +HRESET + — History Reset

HRESET + — History Reset

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 3A F0 C0 /ib HRESET imm8, <EAX>AV/VHRESETProcessor history reset request. Controlled by the EAX implicit operand.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:r/m (r)N/AN/AN/A
+

Description + ¶ +

+

Requests the processor to selectively reset selected components of hardware history maintained by the current logical processor. HRESET operation is controlled by the implicit EAX operand. The value of the explicit imm8 operand is ignored. This instruction can only be executed at privilege level 0.

+

The HRESET instruction can be used to request reset of multiple components of hardware history. Prior to the execution of HRESET, the system software must take the following steps:

+

1. Enumerate the HRESET capabilities via CPUID.20H.0H:EBX, which indicates what components of hardware history can be reset.

+

2. Only the bits enumerated by CPUID.20H.0H:EBX can be set in the IA32_HRESET_ENABLE MSR.

+

HRESET causes a general-protection exception (#GP) if EAX sets any bits that are not set in the IA32_HRESET_ENABLE MSR.

+

Any attempt to execute the HRESET instruction inside a transactional region will result in a transaction abort.

+

Operation + ¶ +

+
IF EAX = 0
+    THEN NOP
+    ELSE
+        FOREACH i such that EAX[i] = 1
+            Reset prediction history for feature i
+FI
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + +
#GP(0)If CPL > 0 or (EAX AND NOT IA32_HRESET_ENABLE) ≠0.
#UDIf CPUID.07H.01H:EAX.HRESET[bit 22] = 0.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#GP(0)HRESET instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/hsubpd.html b/x86/hsubpd.html new file mode 100644 index 0000000..315b6bd --- /dev/null +++ b/x86/hsubpd.html @@ -0,0 +1,268 @@ + +HSUBPD + — Packed Double Precision Floating-Point Horizontal Subtract

HSUBPD + — Packed Double Precision Floating-Point Horizontal Subtract

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
66 0F 7D /r HSUBPD xmm1, xmm2/m128RMV/VSSE3Horizontal subtract packed double precision floating-point values from xmm2/m128 to xmm1.
VEX.128.66.0F.WIG 7D /r VHSUBPD xmm1,xmm2, xmm3/m128RVMV/VAVXHorizontal subtract packed double precision floating-point values from xmm2 and xmm3/mem.
VEX.256.66.0F.WIG 7D /r VHSUBPD ymm1, ymm2, ymm3/m256RVMV/VAVXHorizontal subtract packed double precision floating-point values from ymm2 and ymm3/mem.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r, w)ModRM:r/m (r)N/AN/A
RVMModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

The HSUBPD instruction subtracts horizontally the packed double precision floating-point numbers of both operands.

+

Subtracts the double precision floating-point value in the high quadword of the destination operand from the low quadword of the destination operand and stores the result in the low quadword of the destination operand.

+

Subtracts the double precision floating-point value in the high quadword of the source operand from the low quadword of the source operand and stores the result in the high quadword of the destination operand.

+

In 64-bit mode, use of the REX.R prefix permits this instruction to access additional registers (XMM8-XMM15).

+

See Figure 3-21 for HSUBPD; see Figure 3-22 for VHSUBPD.

+
+
Figure 3-21. HSUBPD—Packed Double Precision Floating-Point Horizontal Subtract
+
+
Figure 3-22. VHSUBPD operation
+

128-bit Legacy SSE version: The second source can be an XMM register or an 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding YMM register destination are unmodified.

+

VEX.128 encoded version: the first source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding YMM register destination are zeroed.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+

Operation + ¶ +

+

HSUBPD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := SRC1[63:0] - SRC1[127:64]
+DEST[127:64] := SRC2[63:0] - SRC2[127:64]
+DEST[MAXVL-1:128] (Unmodified)
+
+

VHSUBPD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0] - SRC1[127:64]
+DEST[127:64] := SRC2[63:0] - SRC2[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VHSUBPD (VEX.256 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0] - SRC1[127:64]
+DEST[127:64] := SRC2[63:0] - SRC2[127:64]
+DEST[191:128] := SRC1[191:128] - SRC1[255:192]
+DEST[255:192] := SRC2[191:128] - SRC2[255:192]
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
HSUBPD __m128d _mm_hsub_pd(__m128d a, __m128d b)
+
+
VHSUBPD __m256d _mm256_hsub_pd (__m256d a, __m256d b);
+
+

Exceptions + ¶ +

+

When the source operand is a memory operand, the operand must be aligned on a 16-byte boundary or a general-protection exception (#GP) will be generated.

+

Numeric Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

See Table 2-19, “Type 2 Class Exception Conditions.”

diff --git a/x86/hsubps.html b/x86/hsubps.html new file mode 100644 index 0000000..ee463c2 --- /dev/null +++ b/x86/hsubps.html @@ -0,0 +1,399 @@ + +HSUBPS + — Packed Single Precision Floating-Point Horizontal Subtract

HSUBPS + — Packed Single Precision Floating-Point Horizontal Subtract

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
F2 0F 7D /r HSUBPS xmm1, xmm2/m128RMV/VSSE3Horizontal subtract packed single precision floating-point values from xmm2/m128 to xmm1.
VEX.128.F2.0F.WIG 7D /r VHSUBPS xmm1, xmm2, xmm3/m128RVMV/VAVXHorizontal subtract packed single precision floating-point values from xmm2 and xmm3/mem.
VEX.256.F2.0F.WIG 7D /r VHSUBPS ymm1, ymm2, ymm3/m256RVMV/VAVXHorizontal subtract packed single precision floating-point values from ymm2 and ymm3/mem.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r, w)ModRM:r/m (r)N/AN/A
RVMModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Subtracts the single precision floating-point value in the second dword of the destination operand from the first dword of the destination operand and stores the result in the first dword of the destination operand.

+

Subtracts the single precision floating-point value in the fourth dword of the destination operand from the third dword of the destination operand and stores the result in the second dword of the destination operand.

+

Subtracts the single precision floating-point value in the second dword of the source operand from the first dword of the source operand and stores the result in the third dword of the destination operand.

+

Subtracts the single precision floating-point value in the fourth dword of the source operand from the third dword of the source operand and stores the result in the fourth dword of the destination operand.

+

In 64-bit mode, use of the REX.R prefix permits this instruction to access additional registers (XMM8-XMM15).

+

See Figure 3-23 for HSUBPS; see Figure 3-24 for VHSUBPS.

+
+
Figure 3-23. HSUBPS—Packed Single Precision Floating-Point Horizontal Subtract
+
+
Figure 3-24. VHSUBPS Operation
+

128-bit Legacy SSE version: The second source can be an XMM register or an 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding YMM register destination are unmodified.

+

VEX.128 encoded version: the first source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding YMM register destination are zeroed.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+

Operation + ¶ +

+

HSUBPS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] - SRC1[63:32]
+DEST[63:32] := SRC1[95:64] - SRC1[127:96]
+DEST[95:64] := SRC2[31:0] - SRC2[63:32]
+DEST[127:96] := SRC2[95:64] - SRC2[127:96]
+DEST[MAXVL-1:128] (Unmodified)
+
+

VHSUBPS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] - SRC1[63:32]
+DEST[63:32] := SRC1[95:64] - SRC1[127:96]
+DEST[95:64] := SRC2[31:0] - SRC2[63:32]
+DEST[127:96] := SRC2[95:64] - SRC2[127:96]
+DEST[MAXVL-1:128] := 0
+
+

VHSUBPS (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] - SRC1[63:32]
+DEST[63:32] := SRC1[95:64] - SRC1[127:96]
+DEST[95:64] := SRC2[31:0] - SRC2[63:32]
+DEST[127:96] := SRC2[95:64] - SRC2[127:96]
+DEST[159:128] := SRC1[159:128] - SRC1[191:160]
+DEST[191:160] := SRC1[223:192] - SRC1[255:224]
+DEST[223:192] := SRC2[159:128] - SRC2[191:160]
+DEST[255:224] := SRC2[223:192] - SRC2[255:224]
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
HSUBPS __m128 _mm_hsub_ps(__m128 a, __m128 b);
+
+
VHSUBPS __m256 _mm256_hsub_ps (__m256 a, __m256 b);
+
+

Exceptions + ¶ +

+

When the source operand is a memory operand, the operand must be aligned on a 16-byte boundary or a general-protection exception (#GP) will be generated.

+

Numeric Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

See Table 2-19, “Type 2 Class Exception Conditions.”

diff --git a/x86/idiv.html b/x86/idiv.html new file mode 100644 index 0000000..77c737e --- /dev/null +++ b/x86/idiv.html @@ -0,0 +1,249 @@ + +IDIV + — Signed Divide

IDIV + — Signed Divide

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
F6 /7IDIV r/m8MValidValidSigned divide AX by r/m8, with result stored in: AL := Quotient, AH := Remainder.
REX + F6 /7IDIV r/m81MValidN.E.Signed divide AX by r/m8, with result stored in AL := Quotient, AH := Remainder.
F7 /7IDIV r/m16MValidValidSigned divide DX:AX by r/m16, with result stored in AX := Quotient, DX := Remainder.
F7 /7IDIV r/m32MValidValidSigned divide EDX:EAX by r/m32, with result stored in EAX := Quotient, EDX := Remainder.
REX.W + F7 /7IDIV r/m64MValidN.E.Signed divide RDX:RAX by r/m64, with result stored in RAX := Quotient, RDX := Remainder.
+
+

1. In 64-bit mode, r/m8 can not be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r)N/AN/AN/A
+

Description + ¶ +

+

Divides the (signed) value in the AX, DX:AX, or EDX:EAX (dividend) by the source operand (divisor) and stores the result in the AX (AH:AL), DX:AX, or EDX:EAX registers. The source operand can be a general-purpose register or a memory location. The action of this instruction depends on the operand size (dividend/divisor).

+

Non-integral results are truncated (chopped) towards 0. The remainder is always less than the divisor in magnitude. Overflow is indicated with the #DE (divide error) exception rather than with the CF flag.

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Use of the REX.R prefix permits access to additional registers (R8-R15). Use of the REX.W prefix promotes operation to 64 bits. In 64-bit mode when REX.W is applied, the instruction divides the signed value in RDX:RAX by the source operand. RAX contains a 64-bit quotient; RDX contains a 64-bit remainder.

+

See the summary chart at the beginning of this section for encoding data and limits. See Table 3-51.

+
+
Operand Size | Dividend | Divisor | Quotient | Remainder | Quotient Range
Word/byte | AX | r/m8 | AL | AH | −128 to +127
Doubleword/word | DX:AX | r/m16 | AX | DX | −32,768 to +32,767
Quadword/doubleword | EDX:EAX | r/m32 | EAX | EDX | −2^31 to 2^31 − 1
Doublequadword/quadword | RDX:RAX | r/m64 | RAX | RDX | −2^63 to 2^63 − 1
+
Table 3-51. IDIV Results
+

Operation + ¶ +

+
IF SRC = 0
+    THEN #DE; (* Divide error *)
+FI;
+IF OperandSize = 8 (* Word/byte operation *)
+    THEN
+        temp := AX / SRC; (* Signed division *)
+        IF (temp > 7FH) or (temp < 80H)
+        (* If a positive result is greater than 7FH or a negative result is less than 80H *)
+            THEN #DE; (* Divide error *)
+            ELSE
+                AL := temp;
+                AH := AX SignedModulus SRC;
+        FI;
+    ELSE IF OperandSize = 16 (* Doubleword/word operation *)
+        THEN
+            temp := DX:AX / SRC; (* Signed division *)
+            IF (temp > 7FFFH) or (temp < 8000H)
+            (* If a positive result is greater than 7FFFH
+            or a negative result is less than 8000H *)
+                THEN
+                    #DE; (* Divide error *)
+                ELSE
+                    AX := temp;
+                    DX := DX:AX SignedModulus SRC;
+            FI;
+        FI;
+    ELSE IF OperandSize = 32 (* Quadword/doubleword operation *)
+            temp := EDX:EAX / SRC; (* Signed division *)
+            IF (temp > 7FFFFFFFH) or (temp < 80000000H)
+            (* If a positive result is greater than 7FFFFFFFH
+            or a negative result is less than 80000000H *)
+                THEN
+                    #DE; (* Divide error *)
+                ELSE
+                    EAX := temp;
+                    EDX := EDX:EAX SignedModulus SRC;
+            FI;
+        FI;
+    ELSE IF OperandSize = 64 (* Doublequadword/quadword operation *)
+            temp := RDX:RAX / SRC; (* Signed division *)
+            IF (temp > 7FFFFFFFFFFFFFFFH) or (temp < 8000000000000000H)
+            (* If a positive result is greater than 7FFFFFFFFFFFFFFFH
+            or a negative result is less than 8000000000000000H *)
+                THEN
+                    #DE; (* Divide error *)
+                ELSE
+                    RAX := temp;
+                    RDX := RDX:RAX SignedModulus SRC;
+            FI;
+        FI;
+FI;
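The truncation and remainder rules above match C's / and % operators (a sketch for illustration; the two #DE cases are undefined behavior in C and typically surface as a divide-error fault, e.g., SIGFPE on Unix-like systems):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Quotients truncate toward zero; the remainder takes the dividend's sign. */
    printf("-7 / 2  = %d, -7 %% 2  = %d\n", -7 / 2, -7 % 2);    /* -3, -1 */
    printf(" 7 / -2 = %d,  7 %% -2 = %d\n", 7 / -2, 7 % -2);    /* -3,  1 */

    /* Both #DE cases, shown but not executed: division by zero, and a quotient
       that does not fit the destination (e.g., INT_MIN / -1 on a 32-bit divide). */
    /* int de1 = 1 / 0;        */
    /* int de2 = INT_MIN / -1; */
    return 0;
}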
+
+

Flags Affected + ¶ +

+

The CF, OF, SF, ZF, AF, and PF flags are undefined.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + +
#DEIf the source operand (divisor) is 0.
The signed result (quotient) is too large for the destination.
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + +
#DEIf the source operand (divisor) is 0.
The signed result (quotient) is too large for the destination.
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#DEIf the source operand (divisor) is 0.
The signed result (quotient) is too large for the destination.
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#DEIf the source operand (divisor) is 0.
If the quotient is too large for the designated register.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/imul.html b/x86/imul.html new file mode 100644 index 0000000..3de4e07 --- /dev/null +++ b/x86/imul.html @@ -0,0 +1,284 @@ + +IMUL + — Signed Multiply

IMUL + — Signed Multiply

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
F6 /5IMUL r/m81MValidValidAX:= AL ∗ r/m byte.
F7 /5IMUL r/m16MValidValidDX:AX := AX ∗ r/m word.
F7 /5IMUL r/m32MValidValidEDX:EAX := EAX ∗ r/m32.
REX.W + F7 /5IMUL r/m64MValidN.E.RDX:RAX := RAX ∗ r/m64.
0F AF /rIMUL r16, r/m16RMValidValidword register := word register ∗ r/m16.
0F AF /rIMUL r32, r/m32RMValidValiddoubleword register := doubleword register ∗ r/m32.
REX.W + 0F AF /rIMUL r64, r/m64RMValidN.E.Quadword register := Quadword register ∗ r/m64.
6B /r ibIMUL r16, r/m16, imm8RMIValidValidword register := r/m16 ∗ sign-extended immediate byte.
6B /r ibIMUL r32, r/m32, imm8RMIValidValiddoubleword register := r/m32 ∗ sign-extended immediate byte.
REX.W + 6B /r ibIMUL r64, r/m64, imm8RMIValidN.E.Quadword register := r/m64 ∗ sign-extended immediate byte.
69 /r iwIMUL r16, r/m16, imm16RMIValidValidword register := r/m16 ∗ immediate word.
69 /r idIMUL r32, r/m32, imm32RMIValidValiddoubleword register := r/m32 ∗ immediate doubleword.
REX.W + 69 /r idIMUL r64, r/m64, imm32RMIValidN.E.Quadword register := r/m64 ∗ immediate doubleword.
+
+

1. In 64-bit mode, r/m8 can not be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r, w)N/AN/AN/A
RMModRM:reg (r, w)ModRM:r/m (r)N/AN/A
RMIModRM:reg (r, w)ModRM:r/m (r)imm8/16/32N/A
+

Description + ¶ +

+

Performs a signed multiplication of two operands. This instruction has three forms, depending on the number of operands.

+
    +
  • One-operand form — This form is identical to that used by the MUL instruction. Here, the source operand (in a general-purpose register or memory location) is multiplied by the value in the AL, AX, EAX, or RAX register (depending on the operand size) and the product (twice the size of the input operand) is stored in the AX, DX:AX, EDX:EAX, or RDX:RAX registers, respectively.
  • +
  • Two-operand form — With this form the destination operand (the first operand) is multiplied by the source operand (second operand). The destination operand is a general-purpose register and the source operand is an immediate value, a general-purpose register, or a memory location. The intermediate product (twice the size of the input operand) is truncated and stored in the destination operand location.
  • +
  • Three-operand form — This form requires a destination operand (the first operand) and two source operands (the second and the third operands). Here, the first source operand (which can be a general-purpose register or a memory location) is multiplied by the second source operand (an immediate value). The intermediate product (twice the size of the first source operand) is truncated and stored in the destination operand (a general-purpose register).
+

When an immediate value is used as an operand, it is sign-extended to the length of the destination operand format.

+

The CF and OF flags are set when the signed integer value of the intermediate product differs from the sign extended operand-size-truncated product, otherwise the CF and OF flags are cleared.

+
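As a sketch (not taken from the manual), the flag condition above can be reproduced for the 32-bit two-operand form: compute the double-width product, truncate it to the destination size, and compare the sign-extended truncated value against the full product.

/* Illustrative: 32-bit two-operand IMUL semantics, including the CF/OF indication. */
#include <stdint.h>
#include <stdbool.h>

int32_t imul32(int32_t dest, int32_t src, bool *cf_of)
{
    int64_t full = (int64_t)dest * (int64_t)src;  /* intermediate product, twice the operand width */
    int32_t truncated = (int32_t)full;            /* what is actually stored in the destination */
    *cf_of = ((int64_t)truncated != full);        /* set when significant bits were discarded */
    return truncated;
}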

The three forms of the IMUL instruction are similar in that the length of the product is calculated to twice the length of the operands. With the one-operand form, the product is stored exactly in the destination. With the two- and three- operand forms, however, the result is truncated to the length of the destination before it is stored in the destination register. Because of this truncation, the CF or OF flag should be tested to ensure that no significant bits are lost.

+

The two- and three-operand forms may also be used with unsigned operands because the lower half of the product is the same regardless if the operands are signed or unsigned. The CF and OF flags, however, cannot be used to determine if the upper half of the result is non-zero.

+
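A small sketch of the point above (illustrative only): for any two 32-bit inputs, the low half of the product is bit-identical whether the operands are interpreted as signed or unsigned.

/* Illustrative: the low 32 bits of a 32x32 multiply do not depend on signedness. */
#include <stdint.h>
#include <assert.h>

void low_half_is_sign_agnostic(int32_t a, int32_t b)
{
    uint32_t lo_signed   = (uint32_t)((int64_t)a * (int64_t)b);
    uint32_t lo_unsigned = (uint32_t)((uint32_t)a * (uint64_t)(uint32_t)b);
    assert(lo_signed == lo_unsigned);   /* only the upper half (and hence CF/OF) can differ */
}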

In 64-bit mode, the instruction’s default operation size is 32 bits. Use of the REX.R prefix permits access to additional registers (R8-R15). Use of the REX.W prefix promotes operation to 64 bits. Use of REX.W modifies the three forms of the instruction as follows.

+
    +
  • One-operand form —The source operand (in a 64-bit general-purpose register or memory location) is multiplied by the value in the RAX register and the product is stored in the RDX:RAX registers.
  • +
  • Two-operand form — The source operand is promoted to 64 bits if it is a register or a memory location. The destination operand is promoted to 64 bits.
  • +
  • Three-operand form — The first source operand (either a register or a memory location) and destination operand are promoted to 64 bits. If the source operand is an immediate, it is sign extended to 64 bits.
+

Operation + ¶ +

+
IF (NumberOfOperands = 1)
+    THEN IF (OperandSize = 8)
+        THEN
+            TMP_XP := AL ∗ SRC (* Signed multiplication; TMP_XP is a signed integer at twice the width of the SRC *);
+            AX := TMP_XP[15:0];
+            IF SignExtend(TMP_XP[7:0]) = TMP_XP
+                THEN CF := 0; OF := 0;
+                ELSE CF := 1; OF := 1; FI;
+        ELSE IF OperandSize = 16
+            THEN
+                TMP_XP := AX ∗ SRC (* Signed multiplication; TMP_XP is a signed integer at twice the width of the SRC *)
+                DX:AX := TMP_XP[31:0];
+                IF SignExtend(TMP_XP[15:0]) = TMP_XP
+                    THEN CF := 0; OF := 0;
+                    ELSE CF := 1; OF := 1; FI;
+            ELSE IF OperandSize = 32
+                THEN
+                    TMP_XP := EAX ∗ SRC (* Signed multiplication; TMP_XP is a signed integer at twice the width of the SRC*)
+                    EDX:EAX := TMP_XP[63:0];
+                    IF SignExtend(TMP_XP[31:0]) = TMP_XP
+                        THEN CF := 0; OF := 0;
+                        ELSE CF := 1; OF := 1; FI;
+                ELSE (* OperandSize = 64 *)
+                    TMP_XP := RAX ∗ SRC (* Signed multiplication; TMP_XP is a signed integer at twice the width of the SRC *)
+                    RDX:RAX := TMP_XP[127:0];
+                    IF SignExtend(TMP_XP[63:0]) = TMP_XP
+                        THEN CF := 0; OF := 0;
+                        ELSE CF := 1; OF := 1; FI;
+                FI;
+        FI;
+    ELSE IF (NumberOfOperands = 2)
+        THEN
+            TMP_XP := DEST ∗ SRC (* Signed multiplication; TMP_XP is a signed integer at twice the width of the SRC *)
+            DEST := TruncateToOperandSize(TMP_XP);
+            IF SignExtend(DEST) ≠ TMP_XP
+                THEN CF := 1; OF := 1;
+                ELSE CF := 0; OF := 0; FI;
+        ELSE (* NumberOfOperands = 3 *)
+            TMP_XP := SRC1 ∗ SRC2 (* Signed multiplication; TMP_XP is a signed integer at twice the width of the SRC1 *)
+            DEST := TruncateToOperandSize(TMP_XP);
+            IF SignExtend(DEST) ≠ TMP_XP
+                THEN CF := 1; OF := 1;
+                ELSE CF := 0; OF := 0; FI;
+    FI;
+FI;
+
+

Flags Affected + ¶ +

+

For the one operand form of the instruction, the CF and OF flags are set when significant bits are carried into the upper half of the result and cleared when the result fits exactly in the lower half of the result. For the two- and three-operand forms of the instruction, the CF and OF flags are set when the result must be truncated to fit in the destination operand size and cleared when the result fits exactly in the destination operand size. The SF, ZF, AF, and PF flags are undefined.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/in.html b/x86/in.html new file mode 100644 index 0000000..78aaf96 --- /dev/null +++ b/x86/in.html @@ -0,0 +1,157 @@ + +IN + — Input From Port

IN + — Input From Port

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
E4 ibIN AL, imm8IValidValidInput byte from imm8 I/O port address into AL.
E5 ibIN AX, imm8IValidValidInput word from imm8 I/O port address into AX.
E5 ibIN EAX, imm8IValidValidInput dword from imm8 I/O port address into EAX.
ECIN AL,DXZOValidValidInput byte from I/O port in DX into AL.
EDIN AX,DXZOValidValidInput word from I/O port in DX into AX.
EDIN EAX,DXZOValidValidInput doubleword from I/O port in DX into EAX.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
Iimm8N/AN/AN/A
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Copies the value from the I/O port specified with the second operand (source operand) to the destination operand (first operand). The source operand can be a byte-immediate or the DX register; the destination operand can be register AL, AX, or EAX, depending on the size of the port being accessed (8, 16, or 32 bits, respectively). Using the DX register as a source operand allows I/O port addresses from 0 to 65,535 to be accessed; using a byte immediate allows I/O port addresses 0 to 255 to be accessed.

+

When accessing an 8-bit I/O port, the opcode determines the port size; when accessing a 16- and 32-bit I/O port, the operand-size attribute determines the port size. At the machine code level, I/O instructions are shorter when accessing 8-bit I/O ports. Here, the upper eight bits of the port address will be 0.

+

This instruction is only useful for accessing I/O ports located in the processor’s I/O address space. See Chapter 19, “Input/Output,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for more information on accessing I/O ports in the I/O address space.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+
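Port I/O cannot be expressed in portable C. On x86 with GCC or Clang and sufficient I/O privilege (ring 0, or ioperm/iopl on Linux), the byte form of IN is commonly wrapped in inline assembly; the following is an illustrative sketch, not part of the manual:

/* Illustrative: read one byte from an I/O port with IN AL, DX (or IN AL, imm8). */
#include <stdint.h>

static inline uint8_t inb(uint16_t port)
{
    uint8_t value;
    __asm__ volatile("inb %1, %0"
                     : "=a"(value)   /* the result always arrives in AL */
                     : "Nd"(port));  /* port in DX, or as an 8-bit immediate when it fits */
    return value;
}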

Operation + ¶ +

+
IF ((PE = 1) and ((CPL > IOPL) or (VM = 1)))
+    THEN (* Protected mode with CPL > IOPL or virtual-8086 mode *)
+        IF (Any I/O Permission Bit for I/O port being accessed = 1)
+            THEN (* I/O operation is not allowed *)
+                #GP(0);
+            ELSE (* I/O operation is allowed *)
+                DEST := SRC; (* Read from selected I/O port *)
+        FI;
+    ELSE (* Real Mode or Protected Mode with CPL ≤ IOPL *)
+        DEST := SRC; (* Read from selected I/O port *)
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + +
#GP(0)If the CPL is greater than (has less privilege) the I/O privilege level (IOPL) and any of the corresponding I/O permission bits in TSS for the I/O port being accessed is 1.
#PF(fault-code)If a page fault occurs.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + +
#GP(0)If any of the I/O permission bits in the TSS for the I/O port being accessed is 1.
#PF(fault-code)If a page fault occurs.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + +
#GP(0)If the CPL is greater than (has less privilege) the I/O privilege level (IOPL) and any of the corresponding I/O permission bits in TSS for the I/O port being accessed is 1.
#PF(fault-code)If a page fault occurs.
#UDIf the LOCK prefix is used.
diff --git a/x86/inc.html b/x86/inc.html new file mode 100644 index 0000000..c641be0 --- /dev/null +++ b/x86/inc.html @@ -0,0 +1,184 @@ + +INC + — Increment by 1

INC + — Increment by 1

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
FE /0INC r/m8MValidValidIncrement r/m byte by 1.
REX + FE /0INC r/m81MValidN.E.Increment r/m byte by 1.
FF /0INC r/m16MValidValidIncrement r/m word by 1.
FF /0INC r/m32MValidValidIncrement r/m doubleword by 1.
REX.W + FF /0INC r/m64MValidN.E.Increment r/m quadword by 1.
40+ rw2INC r16ON.E.ValidIncrement word register by 1.
40+ rdINC r32ON.E.ValidIncrement doubleword register by 1.
+
+

1. In 64-bit mode, r/m8 can not be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

2. 40H through 47H are REX prefixes in 64-bit mode.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r, w)N/AN/AN/A
Oopcode + rd (r, w)N/AN/AN/A
+

Description + ¶ +

+

Adds 1 to the destination operand, while preserving the state of the CF flag. The destination operand can be a register or a memory location. This instruction allows a loop counter to be updated without disturbing the CF flag. (Use an ADD instruction with an immediate operand of 1 to perform an increment operation that does update the CF flag.)

+
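To make the CF-preserving behavior visible, the following sketch (GCC/Clang inline assembly on x86-64, illustrative only) sets CF with STC and then compares what INC and ADD 1 leave behind:

/* Illustrative: INC leaves CF as it was, ADD recomputes it. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    unsigned long x = 0;
    uint8_t cf_after_inc, cf_after_add;

    /* STC sets CF = 1; INC must not touch it. */
    __asm__ volatile("stc; incq %0; setc %1"
                     : "+r"(x), "=r"(cf_after_inc) : : "cc");

    x = 0;
    /* STC sets CF = 1; ADD recomputes it (0 + 1 carries nothing, so CF becomes 0). */
    __asm__ volatile("stc; addq $1, %0; setc %1"
                     : "+r"(x), "=r"(cf_after_add) : : "cc");

    printf("CF after INC: %u, CF after ADD: %u\n", cf_after_inc, cf_after_add);
    return 0;
}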

This instruction can be used with a LOCK prefix to allow the instruction to be executed atomically.

+
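As an illustrative aside (not from the manual), a LOCK-prefixed INC on a memory operand is a minimal atomic increment; real code should prefer C11 atomics, and this only shows the instruction form:

/* Illustrative: atomic increment via LOCK INC (x86-64, GCC/Clang inline assembly). */
#include <stdint.h>

static inline void atomic_inc_u64(volatile uint64_t *counter)
{
    __asm__ volatile("lock incq %0"
                     : "+m"(*counter)   /* LOCK requires a memory destination */
                     : : "cc");
}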

In 64-bit mode, INC r16 and INC r32 are not encodable (because opcodes 40H through 47H are REX prefixes). Otherwise, the instruction’s 64-bit mode default operation size is 32 bits. Use of the REX.R prefix permits access to additional registers (R8-R15). Use of the REX.W prefix promotes operation to 64 bits.

+

Operation + ¶ +

+
DEST := DEST + 1;
+
+

Flags Affected + ¶ +

+

The CF flag is not affected. The OF, SF, ZF, AF, and PF flags are set according to the result.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination operand is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
diff --git a/x86/incsspd.incsspq.html b/x86/incsspd.incsspq.html new file mode 100644 index 0000000..7dada1f --- /dev/null +++ b/x86/incsspd.incsspq.html @@ -0,0 +1,127 @@ + +INCSSPD/INCSSPQ + — Increment Shadow Stack Pointer

INCSSPD/INCSSPQ + — Increment Shadow Stack Pointer

+ + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F AE /05 INCSSPD r32RV/VCET_SSIncrement SSP by 4 * r32[7:0].
F3 REX.W 0F AE /05 INCSSPQ r64RV/N.E.CET_SSIncrement SSP by 8 * r64[7:0].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
RN/AModRM:r/m (r)N/AN/AN/A
+

Description + ¶ +

+

This instruction can be used to increment the current shadow stack pointer by the operand size of the instruction times the unsigned 8-bit value specified by bits 7:0 in the source operand. The instruction performs a pop and discard of the first and last element on the shadow stack in the range specified by the unsigned 8-bit value in bits 7:0 of the source operand.

+

Operation + ¶ +

+
IF CPL = 3
+    IF (CR4.CET & IA32_U_CET.SH_STK_EN) = 0
+        THEN #UD; FI;
+ELSE
+    IF (CR4.CET & IA32_S_CET.SH_STK_EN) = 0
+        THEN #UD; FI;
+FI;
+IF (operand size is 64-bit)
+    THEN
+        Range := R64[7:0];
+        shadow_stack_load 8 bytes from SSP;
+        IF Range > 0
+            THEN shadow_stack_load 8 bytes from SSP + 8 * (Range - 1);
+        FI;
+        SSP := SSP + Range * 8;
+    ELSE
+        Range := R32[7:0];
+        shadow_stack_load 4 bytes from SSP;
+        IF Range > 0
+            THEN shadow_stack_load 4 bytes from SSP + 4 * (Range - 1);
+        FI;
+        SSP := SSP + Range * 4;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
INCSSPD void _incsspd(int);
+
+
INCSSPQ void _incsspq(int);
+
+
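Assuming a compiler that exposes these intrinsics (for example GCC or Clang with -mshstk, via <immintrin.h>) and a system with CET shadow stacks enabled, discarding n shadow-stack entries might be sketched as below; the helper name is invented for illustration, and each call can only pop the 255 entries encodable in bits 7:0 of the operand:

/* Illustrative sketch only; requires CET_SS hardware and shadow stacks enabled. */
#include <immintrin.h>

static inline void discard_shadow_stack_entries(unsigned n)
{
    while (n > 255) {            /* only bits 7:0 of the source operand are used */
        _incsspq(255);
        n -= 255;
    }
    if (n != 0)
        _incsspq((int)n);
}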

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If CR4.CET = 0.
IF CPL = 3 and IA32_U_CET.SH_STK_EN = 0.
IF CPL < 3 and IA32_S_CET.SH_STK_EN = 0.
#PF(fault-code)If a page fault occurs.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe INCSSP instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe INCSSP instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/index.html b/x86/index.html new file mode 100644 index 0000000..12d2751 --- /dev/null +++ b/x86/index.html @@ -0,0 +1,9 @@ + +x86 and amd64 instruction reference

x86 and amd64 instruction reference

Derived from the December 2023 version of the Intel® 64 and IA-32 Architectures Software Developer’s Manual. Last updated 2024-02-18.

THIS REFERENCE IS NOT PERFECT. It's been mechanically separated into distinct files by a dumb script. It may be enough to replace the official documentation on your weekend reverse engineering project, but for anything where money is at stake, go get the official and freely available documentation.

Core Instructions

MnemonicSummary
AAAASCII Adjust After Addition
AADASCII Adjust AX Before Division
AAMASCII Adjust AX After Multiply
AASASCII Adjust AL After Subtraction
ADCAdd With Carry
ADCXUnsigned Integer Addition of Two Operands With Carry Flag
ADDAdd
ADDPDAdd Packed Double Precision Floating-Point Values
ADDPSAdd Packed Single Precision Floating-Point Values
ADDSDAdd Scalar Double Precision Floating-Point Values
ADDSSAdd Scalar Single Precision Floating-Point Values
ADDSUBPDPacked Double Precision Floating-Point Add/Subtract
ADDSUBPSPacked Single Precision Floating-Point Add/Subtract
ADOXUnsigned Integer Addition of Two Operands With Overflow Flag
AESDECPerform One Round of an AES Decryption Flow
AESDEC128KLPerform Ten Rounds of AES Decryption Flow With Key Locker Using 128-Bit Key
AESDEC256KLPerform 14 Rounds of AES Decryption Flow With Key Locker Using 256-Bit Key
AESDECLASTPerform Last Round of an AES Decryption Flow
AESDECWIDE128KLPerform Ten Rounds of AES Decryption Flow With Key Locker on 8 Blocks Using 128-Bit Key
AESDECWIDE256KLPerform 14 Rounds of AES Decryption Flow With Key Locker on 8 Blocks Using 256-Bit Key
AESENCPerform One Round of an AES Encryption Flow
AESENC128KLPerform Ten Rounds of AES Encryption Flow With Key Locker Using 128-Bit Key
AESENC256KLPerform 14 Rounds of AES Encryption Flow With Key Locker Using 256-Bit Key
AESENCLASTPerform Last Round of an AES Encryption Flow
AESENCWIDE128KLPerform Ten Rounds of AES Encryption Flow With Key Locker on 8 Blocks Using 128-Bit Key
AESENCWIDE256KLPerform 14 Rounds of AES Encryption Flow With Key Locker on 8 Blocks Using 256-Bit Key
AESIMCPerform the AES InvMixColumn Transformation
AESKEYGENASSISTAES Round Key Generation Assist
ANDLogical AND
ANDNLogical AND NOT
ANDNPDBitwise Logical AND NOT of Packed Double Precision Floating-Point Values
ANDNPSBitwise Logical AND NOT of Packed Single Precision Floating-Point Values
ANDPDBitwise Logical AND of Packed Double Precision Floating-Point Values
ANDPSBitwise Logical AND of Packed Single Precision Floating-Point Values
ARPLAdjust RPL Field of Segment Selector
BEXTRBit Field Extract
BLENDPDBlend Packed Double Precision Floating-Point Values
BLENDPSBlend Packed Single Precision Floating-Point Values
BLENDVPDVariable Blend Packed Double Precision Floating-Point Values
BLENDVPSVariable Blend Packed Single Precision Floating-Point Values
BLSIExtract Lowest Set Isolated Bit
BLSMSKGet Mask Up to Lowest Set Bit
BLSRReset Lowest Set Bit
BNDCLCheck Lower Bound
BNDCNCheck Upper Bound
BNDCUCheck Upper Bound
BNDLDXLoad Extended Bounds Using Address Translation
BNDMKMake Bounds
BNDMOVMove Bounds
BNDSTXStore Extended Bounds Using Address Translation
BOUNDCheck Array Index Against Bounds
BSFBit Scan Forward
BSRBit Scan Reverse
BSWAPByte Swap
BTBit Test
BTCBit Test and Complement
BTRBit Test and Reset
BTSBit Test and Set
BZHIZero High Bits Starting with Specified Bit Position
CALLCall Procedure
CBWConvert Byte to Word/Convert Word to Doubleword/Convert Doubleword to Quadword
CDQConvert Word to Doubleword/Convert Doubleword to Quadword
CDQEConvert Byte to Word/Convert Word to Doubleword/Convert Doubleword to Quadword
CLACClear AC Flag in EFLAGS Register
CLCClear Carry Flag
CLDClear Direction Flag
CLDEMOTECache Line Demote
CLFLUSHFlush Cache Line
CLFLUSHOPTFlush Cache Line Optimized
CLIClear Interrupt Flag
CLRSSBSYClear Busy Flag in a Supervisor Shadow Stack Token
CLTSClear Task-Switched Flag in CR0
CLUIClear User Interrupt Flag
CLWBCache Line Write Back
CMCComplement Carry Flag
CMOVccConditional Move
CMPCompare Two Operands
CMPPDCompare Packed Double Precision Floating-Point Values
CMPPSCompare Packed Single Precision Floating-Point Values
CMPSCompare String Operands
CMPSBCompare String Operands
CMPSDCompare String Operands
CMPSD (1)Compare Scalar Double Precision Floating-Point Value
CMPSQCompare String Operands
CMPSSCompare Scalar Single Precision Floating-Point Value
CMPSWCompare String Operands
CMPXCHGCompare and Exchange
CMPXCHG16BCompare and Exchange Bytes
CMPXCHG8BCompare and Exchange Bytes
COMISDCompare Scalar Ordered Double Precision Floating-Point Values and Set EFLAGS
COMISSCompare Scalar Ordered Single Precision Floating-Point Values and Set EFLAGS
CPUIDCPU Identification
CQOConvert Word to Doubleword/Convert Doubleword to Quadword
CRC32Accumulate CRC32 Value
CVTDQ2PDConvert Packed Doubleword Integers to Packed Double Precision Floating-Point Values
CVTDQ2PSConvert Packed Doubleword Integers to Packed Single Precision Floating-Point Values
CVTPD2DQConvert Packed Double Precision Floating-Point Values to Packed Doubleword Integers
CVTPD2PIConvert Packed Double Precision Floating-Point Values to Packed Dword Integers
CVTPD2PSConvert Packed Double Precision Floating-Point Values to Packed Single Precision Floating-Point Values
CVTPI2PDConvert Packed Dword Integers to Packed Double Precision Floating-Point Values
CVTPI2PSConvert Packed Dword Integers to Packed Single Precision Floating-Point Values
CVTPS2DQConvert Packed Single Precision Floating-Point Values to Packed Signed Doubleword Integer Values
CVTPS2PDConvert Packed Single Precision Floating-Point Values to Packed Double Precision Floating-Point Values
CVTPS2PIConvert Packed Single Precision Floating-Point Values to Packed Dword Integers
CVTSD2SIConvert Scalar Double Precision Floating-Point Value to Doubleword Integer
CVTSD2SSConvert Scalar Double Precision Floating-Point Value to Scalar Single Precision Floating-Point Value
CVTSI2SDConvert Doubleword Integer to Scalar Double Precision Floating-Point Value
CVTSI2SSConvert Doubleword Integer to Scalar Single Precision Floating-Point Value
CVTSS2SDConvert Scalar Single Precision Floating-Point Value to Scalar Double Precision Floating-Point Value
CVTSS2SIConvert Scalar Single Precision Floating-Point Value to Doubleword Integer
CVTTPD2DQConvert with Truncation Packed Double Precision Floating-Point Values to Packed Doubleword Integers
CVTTPD2PIConvert With Truncation Packed Double Precision Floating-Point Values to Packed Dword Integers
CVTTPS2DQConvert With Truncation Packed Single Precision Floating-Point Values to Packed Signed Doubleword Integer Values
CVTTPS2PIConvert With Truncation Packed Single Precision Floating-Point Values to Packed Dword Integers
CVTTSD2SIConvert With Truncation Scalar Double Precision Floating-Point Value to Signed Integer
CVTTSS2SIConvert With Truncation Scalar Single Precision Floating-Point Value to Integer
CWDConvert Word to Doubleword/Convert Doubleword to Quadword
CWDEConvert Byte to Word/Convert Word to Doubleword/Convert Doubleword to Quadword
DAADecimal Adjust AL After Addition
DASDecimal Adjust AL After Subtraction
DECDecrement by 1
DIVUnsigned Divide
DIVPDDivide Packed Double Precision Floating-Point Values
DIVPSDivide Packed Single Precision Floating-Point Values
DIVSDDivide Scalar Double Precision Floating-Point Value
DIVSSDivide Scalar Single Precision Floating-Point Values
DPPDDot Product of Packed Double Precision Floating-Point Values
DPPSDot Product of Packed Single Precision Floating-Point Values
EMMSEmpty MMX Technology State
ENCODEKEY128Encode 128-Bit Key With Key Locker
ENCODEKEY256Encode 256-Bit Key With Key Locker
ENDBR32Terminate an Indirect Branch in 32-bit and Compatibility Mode
ENDBR64Terminate an Indirect Branch in 64-bit Mode
ENQCMDEnqueue Command
ENQCMDSEnqueue Command Supervisor
ENTERMake Stack Frame for Procedure Parameters
EXTRACTPSExtract Packed Floating-Point Values
F2XM1Compute 2x–1
FABSAbsolute Value
FADDAdd
FADDPAdd
FBLDLoad Binary Coded Decimal
FBSTPStore BCD Integer and Pop
FCHSChange Sign
FCLEXClear Exceptions
FCMOVccFloating-Point Conditional Move
FCOMCompare Floating-Point Values
FCOMICompare Floating-Point Values and Set EFLAGS
FCOMIPCompare Floating-Point Values and Set EFLAGS
FCOMPCompare Floating-Point Values
FCOMPPCompare Floating-Point Values
FCOSCosine
FDECSTPDecrement Stack-Top Pointer
FDIVDivide
FDIVPDivide
FDIVRReverse Divide
FDIVRPReverse Divide
FFREEFree Floating-Point Register
FIADDAdd
FICOMCompare Integer
FICOMPCompare Integer
FIDIVDivide
FIDIVRReverse Divide
FILDLoad Integer
FIMULMultiply
FINCSTPIncrement Stack-Top Pointer
FINITInitialize Floating-Point Unit
FISTStore Integer
FISTPStore Integer
FISTTPStore Integer With Truncation
FISUBSubtract
FISUBRReverse Subtract
FLDLoad Floating-Point Value
FLD1Load Constant
FLDCWLoad x87 FPU Control Word
FLDENVLoad x87 FPU Environment
FLDL2ELoad Constant
FLDL2TLoad Constant
FLDLG2Load Constant
FLDLN2Load Constant
FLDPILoad Constant
FLDZLoad Constant
FMULMultiply
FMULPMultiply
FNCLEXClear Exceptions
FNINITInitialize Floating-Point Unit
FNOPNo Operation
FNSAVEStore x87 FPU State
FNSTCWStore x87 FPU Control Word
FNSTENVStore x87 FPU Environment
FNSTSWStore x87 FPU Status Word
FPATANPartial Arctangent
FPREMPartial Remainder
FPREM1Partial Remainder
FPTANPartial Tangent
FRNDINTRound to Integer
FRSTORRestore x87 FPU State
FSAVEStore x87 FPU State
FSCALEScale
FSINSine
FSINCOSSine and Cosine
FSQRTSquare Root
FSTStore Floating-Point Value
FSTCWStore x87 FPU Control Word
FSTENVStore x87 FPU Environment
FSTPStore Floating-Point Value
FSTSWStore x87 FPU Status Word
FSUBSubtract
FSUBPSubtract
FSUBRReverse Subtract
FSUBRPReverse Subtract
FTSTTEST
FUCOMUnordered Compare Floating-Point Values
FUCOMICompare Floating-Point Values and Set EFLAGS
FUCOMIPCompare Floating-Point Values and Set EFLAGS
FUCOMPUnordered Compare Floating-Point Values
FUCOMPPUnordered Compare Floating-Point Values
FWAITWait
FXAMExamine Floating-Point
FXCHExchange Register Contents
FXRSTORRestore x87 FPU, MMX, XMM, and MXCSR State
FXSAVESave x87 FPU, MMX Technology, and SSE State
FXTRACTExtract Exponent and Significand
FYL2XCompute y ∗ log2x
FYL2XP1Compute y ∗ log2(x +1)
GF2P8AFFINEINVQBGalois Field Affine Transformation Inverse
GF2P8AFFINEQBGalois Field Affine Transformation
GF2P8MULBGalois Field Multiply Bytes
HADDPDPacked Double Precision Floating-Point Horizontal Add
HADDPSPacked Single Precision Floating-Point Horizontal Add
HLTHalt
HRESETHistory Reset
HSUBPDPacked Double Precision Floating-Point Horizontal Subtract
HSUBPSPacked Single Precision Floating-Point Horizontal Subtract
IDIVSigned Divide
IMULSigned Multiply
INInput From Port
INCIncrement by 1
INCSSPDIncrement Shadow Stack Pointer
INCSSPQIncrement Shadow Stack Pointer
INSInput from Port to String
INSBInput from Port to String
INSDInput from Port to String
INSERTPSInsert Scalar Single Precision Floating-Point Value
INSWInput from Port to String
INT nCall to Interrupt Procedure
INT1Call to Interrupt Procedure
INT3Call to Interrupt Procedure
INTOCall to Interrupt Procedure
INVDInvalidate Internal Caches
INVLPGInvalidate TLB Entries
INVPCIDInvalidate Process-Context Identifier
IRETInterrupt Return
IRETDInterrupt Return
IRETQInterrupt Return
JMPJump
JccJump if Condition Is Met
KADDBADD Two Masks
KADDDADD Two Masks
KADDQADD Two Masks
KADDWADD Two Masks
KANDBBitwise Logical AND Masks
KANDDBitwise Logical AND Masks
KANDNBBitwise Logical AND NOT Masks
KANDNDBitwise Logical AND NOT Masks
KANDNQBitwise Logical AND NOT Masks
KANDNWBitwise Logical AND NOT Masks
KANDQBitwise Logical AND Masks
KANDWBitwise Logical AND Masks
KMOVBMove From and to Mask Registers
KMOVDMove From and to Mask Registers
KMOVQMove From and to Mask Registers
KMOVWMove From and to Mask Registers
KNOTBNOT Mask Register
KNOTDNOT Mask Register
KNOTQNOT Mask Register
KNOTWNOT Mask Register
KORBBitwise Logical OR Masks
KORDBitwise Logical OR Masks
KORQBitwise Logical OR Masks
KORTESTBOR Masks and Set Flags
KORTESTDOR Masks and Set Flags
KORTESTQOR Masks and Set Flags
KORTESTWOR Masks and Set Flags
KORWBitwise Logical OR Masks
KSHIFTLBShift Left Mask Registers
KSHIFTLDShift Left Mask Registers
KSHIFTLQShift Left Mask Registers
KSHIFTLWShift Left Mask Registers
KSHIFTRBShift Right Mask Registers
KSHIFTRDShift Right Mask Registers
KSHIFTRQShift Right Mask Registers
KSHIFTRWShift Right Mask Registers
KTESTBPacked Bit Test Masks and Set Flags
KTESTDPacked Bit Test Masks and Set Flags
KTESTQPacked Bit Test Masks and Set Flags
KTESTWPacked Bit Test Masks and Set Flags
KUNPCKBWUnpack for Mask Registers
KUNPCKDQUnpack for Mask Registers
KUNPCKWDUnpack for Mask Registers
KXNORBBitwise Logical XNOR Masks
KXNORDBitwise Logical XNOR Masks
KXNORQBitwise Logical XNOR Masks
KXNORWBitwise Logical XNOR Masks
KXORBBitwise Logical XOR Masks
KXORDBitwise Logical XOR Masks
KXORQBitwise Logical XOR Masks
KXORWBitwise Logical XOR Masks
LAHFLoad Status Flags Into AH Register
LARLoad Access Rights Byte
LDDQULoad Unaligned Integer 128 Bits
LDMXCSRLoad MXCSR Register
LDSLoad Far Pointer
LDTILECFGLoad Tile Configuration
LEALoad Effective Address
LEAVEHigh Level Procedure Exit
LESLoad Far Pointer
LFENCELoad Fence
LFSLoad Far Pointer
LGDTLoad Global/Interrupt Descriptor Table Register
LGSLoad Far Pointer
LIDTLoad Global/Interrupt Descriptor Table Register
LLDTLoad Local Descriptor Table Register
LMSWLoad Machine Status Word
LOADIWKEYLoad Internal Wrapping Key With Key Locker
LOCKAssert LOCK# Signal Prefix
LODSLoad String
LODSBLoad String
LODSDLoad String
LODSQLoad String
LODSWLoad String
LOOPLoop According to ECX Counter
LOOPccLoop According to ECX Counter
LSLLoad Segment Limit
LSSLoad Far Pointer
LTRLoad Task Register
LZCNTCount the Number of Leading Zero Bits
MASKMOVDQUStore Selected Bytes of Double Quadword
MASKMOVQStore Selected Bytes of Quadword
MAXPDMaximum of Packed Double Precision Floating-Point Values
MAXPSMaximum of Packed Single Precision Floating-Point Values
MAXSDReturn Maximum Scalar Double Precision Floating-Point Value
MAXSSReturn Maximum Scalar Single Precision Floating-Point Value
MFENCEMemory Fence
MINPDMinimum of Packed Double Precision Floating-Point Values
MINPSMinimum of Packed Single Precision Floating-Point Values
MINSDReturn Minimum Scalar Double Precision Floating-Point Value
MINSSReturn Minimum Scalar Single Precision Floating-Point Value
MONITORSet Up Monitor Address
MOVMove
MOV (1)Move to/from Control Registers
MOV (2)Move to/from Debug Registers
MOVAPDMove Aligned Packed Double Precision Floating-Point Values
MOVAPSMove Aligned Packed Single Precision Floating-Point Values
MOVBEMove Data After Swapping Bytes
MOVDMove Doubleword/Move Quadword
MOVDDUPReplicate Double Precision Floating-Point Values
MOVDIR64BMove 64 Bytes as Direct Store
MOVDIRIMove Doubleword as Direct Store
MOVDQ2QMove Quadword from XMM to MMX Technology Register
MOVDQAMove Aligned Packed Integer Values
MOVDQUMove Unaligned Packed Integer Values
MOVHLPSMove Packed Single Precision Floating-Point Values High to Low
MOVHPDMove High Packed Double Precision Floating-Point Value
MOVHPSMove High Packed Single Precision Floating-Point Values
MOVLHPSMove Packed Single Precision Floating-Point Values Low to High
MOVLPDMove Low Packed Double Precision Floating-Point Value
MOVLPSMove Low Packed Single Precision Floating-Point Values
MOVMSKPDExtract Packed Double Precision Floating-Point Sign Mask
MOVMSKPSExtract Packed Single Precision Floating-Point Sign Mask
MOVNTDQStore Packed Integers Using Non-Temporal Hint
MOVNTDQALoad Double Quadword Non-Temporal Aligned Hint
MOVNTIStore Doubleword Using Non-Temporal Hint
MOVNTPDStore Packed Double Precision Floating-Point Values Using Non-Temporal Hint
MOVNTPSStore Packed Single Precision Floating-Point Values Using Non-Temporal Hint
MOVNTQStore of Quadword Using Non-Temporal Hint
MOVQMove Doubleword/Move Quadword
MOVQ (1)Move Quadword
MOVQ2DQMove Quadword from MMX Technology to XMM Register
MOVSMove Data From String to String
MOVSBMove Data From String to String
MOVSDMove Data From String to String
MOVSD (1)Move or Merge Scalar Double Precision Floating-Point Value
MOVSHDUPReplicate Single Precision Floating-Point Values
MOVSLDUPReplicate Single Precision Floating-Point Values
MOVSQMove Data From String to String
MOVSSMove or Merge Scalar Single Precision Floating-Point Value
MOVSWMove Data From String to String
MOVSXMove With Sign-Extension
MOVSXDMove With Sign-Extension
MOVUPDMove Unaligned Packed Double Precision Floating-Point Values
MOVUPSMove Unaligned Packed Single Precision Floating-Point Values
MOVZXMove With Zero-Extend
MPSADBWCompute Multiple Packed Sums of Absolute Difference
MULUnsigned Multiply
MULPDMultiply Packed Double Precision Floating-Point Values
MULPSMultiply Packed Single Precision Floating-Point Values
MULSDMultiply Scalar Double Precision Floating-Point Value
MULSSMultiply Scalar Single Precision Floating-Point Values
MULXUnsigned Multiply Without Affecting Flags
MWAITMonitor Wait
NEGTwo's Complement Negation
NOPNo Operation
NOTOne's Complement Negation
ORLogical Inclusive OR
ORPDBitwise Logical OR of Packed Double Precision Floating-Point Values
ORPSBitwise Logical OR of Packed Single Precision Floating-Point Values
OUTOutput to Port
OUTSOutput String to Port
OUTSBOutput String to Port
OUTSDOutput String to Port
OUTSWOutput String to Port
PABSBPacked Absolute Value
PABSDPacked Absolute Value
PABSQPacked Absolute Value
PABSWPacked Absolute Value
PACKSSDWPack With Signed Saturation
PACKSSWBPack With Signed Saturation
PACKUSDWPack With Unsigned Saturation
PACKUSWBPack With Unsigned Saturation
PADDBAdd Packed Integers
PADDDAdd Packed Integers
PADDQAdd Packed Integers
PADDSBAdd Packed Signed Integers with Signed Saturation
PADDSWAdd Packed Signed Integers with Signed Saturation
PADDUSBAdd Packed Unsigned Integers With Unsigned Saturation
PADDUSWAdd Packed Unsigned Integers With Unsigned Saturation
PADDWAdd Packed Integers
PALIGNRPacked Align Right
PANDLogical AND
PANDNLogical AND NOT
PAUSESpin Loop Hint
PAVGBAverage Packed Integers
PAVGWAverage Packed Integers
PBLENDVBVariable Blend Packed Bytes
PBLENDWBlend Packed Words
PCLMULQDQCarry-Less Multiplication Quadword
PCMPEQBCompare Packed Data for Equal
PCMPEQDCompare Packed Data for Equal
PCMPEQQCompare Packed Qword Data for Equal
PCMPEQWCompare Packed Data for Equal
PCMPESTRIPacked Compare Explicit Length Strings, Return Index
PCMPESTRMPacked Compare Explicit Length Strings, Return Mask
PCMPGTBCompare Packed Signed Integers for Greater Than
PCMPGTDCompare Packed Signed Integers for Greater Than
PCMPGTQCompare Packed Data for Greater Than
PCMPGTWCompare Packed Signed Integers for Greater Than
PCMPISTRIPacked Compare Implicit Length Strings, Return Index
PCMPISTRMPacked Compare Implicit Length Strings, Return Mask
PCONFIGPlatform Configuration
PDEPParallel Bits Deposit
PEXTParallel Bits Extract
PEXTRBExtract Byte/Dword/Qword
PEXTRDExtract Byte/Dword/Qword
PEXTRQExtract Byte/Dword/Qword
PEXTRWExtract Word
PHADDDPacked Horizontal Add
PHADDSWPacked Horizontal Add and Saturate
PHADDWPacked Horizontal Add
PHMINPOSUWPacked Horizontal Word Minimum
PHSUBDPacked Horizontal Subtract
PHSUBSWPacked Horizontal Subtract and Saturate
PHSUBWPacked Horizontal Subtract
PINSRBInsert Byte/Dword/Qword
PINSRDInsert Byte/Dword/Qword
PINSRQInsert Byte/Dword/Qword
PINSRWInsert Word
PMADDUBSWMultiply and Add Packed Signed and Unsigned Bytes
PMADDWDMultiply and Add Packed Integers
PMAXSBMaximum of Packed Signed Integers
PMAXSDMaximum of Packed Signed Integers
PMAXSQMaximum of Packed Signed Integers
PMAXSWMaximum of Packed Signed Integers
PMAXUBMaximum of Packed Unsigned Integers
PMAXUDMaximum of Packed Unsigned Integers
PMAXUQMaximum of Packed Unsigned Integers
PMAXUWMaximum of Packed Unsigned Integers
PMINSBMinimum of Packed Signed Integers
PMINSDMinimum of Packed Signed Integers
PMINSQMinimum of Packed Signed Integers
PMINSWMinimum of Packed Signed Integers
PMINUBMinimum of Packed Unsigned Integers
PMINUDMinimum of Packed Unsigned Integers
PMINUQMinimum of Packed Unsigned Integers
PMINUWMinimum of Packed Unsigned Integers
PMOVMSKBMove Byte Mask
PMOVSXPacked Move With Sign Extend
PMOVZXPacked Move With Zero Extend
PMULDQMultiply Packed Doubleword Integers
PMULHRSWPacked Multiply High With Round and Scale
PMULHUWMultiply Packed Unsigned Integers and Store High Result
PMULHWMultiply Packed Signed Integers and Store High Result
PMULLDMultiply Packed Integers and Store Low Result
PMULLQMultiply Packed Integers and Store Low Result
PMULLWMultiply Packed Signed Integers and Store Low Result
PMULUDQMultiply Packed Unsigned Doubleword Integers
POPPop a Value From the Stack
POPAPop All General-Purpose Registers
POPADPop All General-Purpose Registers
POPCNTReturn the Count of Number of Bits Set to 1
POPFPop Stack Into EFLAGS Register
POPFDPop Stack Into EFLAGS Register
POPFQPop Stack Into EFLAGS Register
PORBitwise Logical OR
PREFETCHWPrefetch Data Into Caches in Anticipation of a Write
PREFETCHhPrefetch Data Into Caches
PSADBWCompute Sum of Absolute Differences
PSHUFBPacked Shuffle Bytes
PSHUFDShuffle Packed Doublewords
PSHUFHWShuffle Packed High Words
PSHUFLWShuffle Packed Low Words
PSHUFWShuffle Packed Words
PSIGNBPacked SIGN
PSIGNDPacked SIGN
PSIGNWPacked SIGN
PSLLDShift Packed Data Left Logical
PSLLDQShift Double Quadword Left Logical
PSLLQShift Packed Data Left Logical
PSLLWShift Packed Data Left Logical
PSRADShift Packed Data Right Arithmetic
PSRAQShift Packed Data Right Arithmetic
PSRAWShift Packed Data Right Arithmetic
PSRLDShift Packed Data Right Logical
PSRLDQShift Double Quadword Right Logical
PSRLQShift Packed Data Right Logical
PSRLWShift Packed Data Right Logical
PSUBBSubtract Packed Integers
PSUBDSubtract Packed Integers
PSUBQSubtract Packed Quadword Integers
PSUBSBSubtract Packed Signed Integers With Signed Saturation
PSUBSWSubtract Packed Signed Integers With Signed Saturation
PSUBUSBSubtract Packed Unsigned Integers With Unsigned Saturation
PSUBUSWSubtract Packed Unsigned Integers With Unsigned Saturation
PSUBWSubtract Packed Integers
PTESTLogical Compare
PTWRITEWrite Data to a Processor Trace Packet
PUNPCKHBWUnpack High Data
PUNPCKHDQUnpack High Data
PUNPCKHQDQUnpack High Data
PUNPCKHWDUnpack High Data
PUNPCKLBWUnpack Low Data
PUNPCKLDQUnpack Low Data
PUNPCKLQDQUnpack Low Data
PUNPCKLWDUnpack Low Data
PUSHPush Word, Doubleword, or Quadword Onto the Stack
PUSHAPush All General-Purpose Registers
PUSHADPush All General-Purpose Registers
PUSHFPush EFLAGS Register Onto the Stack
PUSHFDPush EFLAGS Register Onto the Stack
PUSHFQPush EFLAGS Register Onto the Stack
PXORLogical Exclusive OR
RCLRotate
RCPPSCompute Reciprocals of Packed Single Precision Floating-Point Values
RCPSSCompute Reciprocal of Scalar Single Precision Floating-Point Values
RCRRotate
RDFSBASERead FS/GS Segment Base
RDGSBASERead FS/GS Segment Base
RDMSRRead From Model Specific Register
RDPIDRead Processor ID
RDPKRURead Protection Key Rights for User Pages
RDPMCRead Performance-Monitoring Counters
RDRANDRead Random Number
RDSEEDRead Random SEED
RDSSPDRead Shadow Stack Pointer
RDSSPQRead Shadow Stack Pointer
RDTSCRead Time-Stamp Counter
RDTSCPRead Time-Stamp Counter and Processor ID
REPRepeat String Operation Prefix
REPERepeat String Operation Prefix
REPNERepeat String Operation Prefix
REPNZRepeat String Operation Prefix
REPZRepeat String Operation Prefix
RETReturn From Procedure
ROLRotate
RORRotate
RORXRotate Right Logical Without Affecting Flags
ROUNDPDRound Packed Double Precision Floating-Point Values
ROUNDPSRound Packed Single Precision Floating-Point Values
ROUNDSDRound Scalar Double Precision Floating-Point Values
ROUNDSSRound Scalar Single Precision Floating-Point Values
RSMResume From System Management Mode
RSQRTPSCompute Reciprocals of Square Roots of Packed Single Precision Floating-PointValues
RSQRTSSCompute Reciprocal of Square Root of Scalar Single Precision Floating-Point Value
RSTORSSPRestore Saved Shadow Stack Pointer
SAHFStore AH Into Flags
SALShift
SARShift
SARXShift Without Affecting Flags
SAVEPREVSSPSave Previous Shadow Stack Pointer
SBBInteger Subtraction With Borrow
SCASScan String
SCASBScan String
SCASDScan String
SCASWScan String
SENDUIPISend User Interprocessor Interrupt
SERIALIZESerialize Instruction Execution
SETSSBSYMark Shadow Stack Busy
SETccSet Byte on Condition
SFENCEStore Fence
SGDTStore Global Descriptor Table Register
SHA1MSG1Perform an Intermediate Calculation for the Next Four SHA1 Message Dwords
SHA1MSG2Perform a Final Calculation for the Next Four SHA1 Message Dwords
SHA1NEXTECalculate SHA1 State Variable E After Four Rounds
SHA1RNDS4Perform Four Rounds of SHA1 Operation
SHA256MSG1Perform an Intermediate Calculation for the Next Four SHA256 MessageDwords
SHA256MSG2Perform a Final Calculation for the Next Four SHA256 Message Dwords
SHA256RNDS2Perform Two Rounds of SHA256 Operation
SHLShift
SHLDDouble Precision Shift Left
SHLXShift Without Affecting Flags
SHRShift
SHRDDouble Precision Shift Right
SHRXShift Without Affecting Flags
SHUFPDPacked Interleave Shuffle of Pairs of Double Precision Floating-Point Values
SHUFPSPacked Interleave Shuffle of Quadruplets of Single Precision Floating-Point Values
SIDTStore Interrupt Descriptor Table Register
SLDTStore Local Descriptor Table Register
SMSWStore Machine Status Word
SQRTPDSquare Root of Double Precision Floating-Point Values
SQRTPSSquare Root of Single Precision Floating-Point Values
SQRTSDCompute Square Root of Scalar Double Precision Floating-Point Value
SQRTSSCompute Square Root of Scalar Single Precision Value
STACSet AC Flag in EFLAGS Register
STCSet Carry Flag
STDSet Direction Flag
STISet Interrupt Flag
STMXCSRStore MXCSR Register State
STOSStore String
STOSBStore String
STOSDStore String
STOSQStore String
STOSWStore String
STRStore Task Register
STTILECFGStore Tile Configuration
STUISet User Interrupt Flag
SUBSubtract
SUBPDSubtract Packed Double Precision Floating-Point Values
SUBPSSubtract Packed Single Precision Floating-Point Values
SUBSDSubtract Scalar Double Precision Floating-Point Value
SUBSSSubtract Scalar Single Precision Floating-Point Value
SWAPGSSwap GS Base Register
SYSCALLFast System Call
SYSENTERFast System Call
SYSEXITFast Return from Fast System Call
SYSRETReturn From Fast System Call
TDPBF16PSDot Product of BF16 Tiles Accumulated into Packed Single Precision Tile
TDPBSSDDot Product of Signed/Unsigned Bytes with Dword Accumulation
TDPBSUDDot Product of Signed/Unsigned Bytes with Dword Accumulation
TDPBUSDDot Product of Signed/Unsigned Bytes with Dword Accumulation
TDPBUUDDot Product of Signed/Unsigned Bytes with Dword Accumulation
TESTLogical Compare
TESTUIDetermine User Interrupt Flag
TILELOADDLoad Tile
TILELOADDT1Load Tile
TILERELEASERelease Tile
TILESTOREDStore Tile
TILEZEROZero Tile
TPAUSETimed PAUSE
TZCNTCount the Number of Trailing Zero Bits
UCOMISDUnordered Compare Scalar Double Precision Floating-Point Values and Set EFLAGS
UCOMISSUnordered Compare Scalar Single Precision Floating-Point Values and Set EFLAGS
UDUndefined Instruction
UIRETUser-Interrupt Return
UMONITORUser Level Set Up Monitor Address
UMWAITUser Level Monitor Wait
UNPCKHPDUnpack and Interleave High Packed Double Precision Floating-Point Values
UNPCKHPSUnpack and Interleave High Packed Single Precision Floating-Point Values
UNPCKLPDUnpack and Interleave Low Packed Double Precision Floating-Point Values
UNPCKLPSUnpack and Interleave Low Packed Single Precision Floating-Point Values
VADDPHAdd Packed FP16 Values
VADDSHAdd Scalar FP16 Values
VALIGNDAlign Doubleword/Quadword Vectors
VALIGNQAlign Doubleword/Quadword Vectors
VBLENDMPDBlend Float64/Float32 Vectors Using an OpMask Control
VBLENDMPSBlend Float64/Float32 Vectors Using an OpMask Control
VBROADCASTLoad with Broadcast Floating-Point Data
VCMPPHCompare Packed FP16 Values
VCMPSHCompare Scalar FP16 Values
VCOMISHCompare Scalar Ordered FP16 Values and Set EFLAGS
VCOMPRESSPDStore Sparse Packed Double Precision Floating-Point Values Into Dense Memory
VCOMPRESSPSStore Sparse Packed Single Precision Floating-Point Values Into Dense Memory
VCOMPRESSWStore Sparse Packed Byte/Word Integer Values Into Dense Memory/Register
VCVTDQ2PHConvert Packed Signed Doubleword Integers to Packed FP16 Values
VCVTNE2PS2BF16Convert Two Packed Single Data to One Packed BF16 Data
VCVTNEPS2BF16Convert Packed Single Data to Packed BF16 Data
VCVTPD2PHConvert Packed Double Precision FP Values to Packed FP16 Values
VCVTPD2QQConvert Packed Double Precision Floating-Point Values to Packed QuadwordIntegers
VCVTPD2UDQConvert Packed Double Precision Floating-Point Values to Packed UnsignedDoubleword Integers
VCVTPD2UQQConvert Packed Double Precision Floating-Point Values to Packed UnsignedQuadword Integers
VCVTPH2DQConvert Packed FP16 Values to Signed Doubleword Integers
VCVTPH2PDConvert Packed FP16 Values to FP64 Values
VCVTPH2PSConvert Packed FP16 Values to Single Precision Floating-PointValues
VCVTPH2PSXConvert Packed FP16 Values to Single Precision Floating-PointValues
VCVTPH2QQConvert Packed FP16 Values to Signed Quadword Integer Values
VCVTPH2UDQConvert Packed FP16 Values to Unsigned Doubleword Integers
VCVTPH2UQQConvert Packed FP16 Values to Unsigned Quadword Integers
VCVTPH2UWConvert Packed FP16 Values to Unsigned Word Integers
VCVTPH2WConvert Packed FP16 Values to Signed Word Integers
VCVTPS2PHConvert Single-Precision FP Value to 16-bit FP Value
VCVTPS2PHXConvert Packed Single Precision Floating-Point Values to Packed FP16 Values
VCVTPS2QQConvert Packed Single Precision Floating-Point Values to Packed SignedQuadword Integer Values
VCVTPS2UDQConvert Packed Single Precision Floating-Point Values to Packed UnsignedDoubleword Integer Values
VCVTPS2UQQConvert Packed Single Precision Floating-Point Values to Packed UnsignedQuadword Integer Values
VCVTQQ2PDConvert Packed Quadword Integers to Packed Double Precision Floating-PointValues
VCVTQQ2PHConvert Packed Signed Quadword Integers to Packed FP16 Values
VCVTQQ2PSConvert Packed Quadword Integers to Packed Single Precision Floating-PointValues
VCVTSD2SHConvert Low FP64 Value to an FP16 Value
VCVTSD2USIConvert Scalar Double Precision Floating-Point Value to Unsigned DoublewordInteger
VCVTSH2SDConvert Low FP16 Value to an FP64 Value
VCVTSH2SIConvert Low FP16 Value to Signed Integer
VCVTSH2SSConvert Low FP16 Value to FP32 Value
VCVTSH2USIConvert Low FP16 Value to Unsigned Integer
VCVTSI2SHConvert a Signed Doubleword/Quadword Integer to an FP16 Value
VCVTSS2SHConvert Low FP32 Value to an FP16 Value
VCVTSS2USIConvert Scalar Single Precision Floating-Point Value to Unsigned DoublewordInteger
VCVTTPD2QQConvert With Truncation Packed Double Precision Floating-Point Values toPacked Quadword Integers
VCVTTPD2UDQConvert With Truncation Packed Double Precision Floating-Point Values toPacked Unsigned Doubleword Integers
VCVTTPD2UQQConvert With Truncation Packed Double Precision Floating-Point Values toPacked Unsigned Quadword Integers
VCVTTPH2DQConvert with Truncation Packed FP16 Values to Signed Doubleword Integers
VCVTTPH2QQConvert with Truncation Packed FP16 Values to Signed Quadword Integers
VCVTTPH2UDQConvert with Truncation Packed FP16 Values to Unsigned DoublewordIntegers
VCVTTPH2UQQConvert with Truncation Packed FP16 Values to Unsigned Quadword Integers
VCVTTPH2UWConvert Packed FP16 Values to Unsigned Word Integers
VCVTTPH2WConvert Packed FP16 Values to Signed Word Integers
VCVTTPS2QQConvert With Truncation Packed Single Precision Floating-Point Values toPacked Signed Quadword Integer Values
VCVTTPS2UDQConvert With Truncation Packed Single Precision Floating-Point Values toPacked Unsigned Doubleword Integer Values
VCVTTPS2UQQConvert With Truncation Packed Single Precision Floating-Point Values toPacked Unsigned Quadword Integer Values
VCVTTSD2USIConvert With Truncation Scalar Double Precision Floating-Point Value toUnsigned Integer
VCVTTSH2SIConvert with Truncation Low FP16 Value to a Signed Integer
VCVTTSH2USIConvert with Truncation Low FP16 Value to an Unsigned Integer
VCVTTSS2USIConvert With Truncation Scalar Single Precision Floating-Point Value toUnsigned Integer
VCVTUDQ2PDConvert Packed Unsigned Doubleword Integers to Packed Double PrecisionFloating-Point Values
VCVTUDQ2PHConvert Packed Unsigned Doubleword Integers to Packed FP16 Values
VCVTUDQ2PSConvert Packed Unsigned Doubleword Integers to Packed Single PrecisionFloating-Point Values
VCVTUQQ2PDConvert Packed Unsigned Quadword Integers to Packed Double PrecisionFloating-Point Values
VCVTUQQ2PHConvert Packed Unsigned Quadword Integers to Packed FP16 Values
VCVTUQQ2PSConvert Packed Unsigned Quadword Integers to Packed Single PrecisionFloating-Point Values
VCVTUSI2SDConvert Unsigned Integer to Scalar Double Precision Floating-Point Value
VCVTUSI2SHConvert Unsigned Doubleword Integer to an FP16 Value
VCVTUSI2SSConvert Unsigned Integer to Scalar Single Precision Floating-Point Value
VCVTUW2PHConvert Packed Unsigned Word Integers to FP16 Values
VCVTW2PHConvert Packed Signed Word Integers to FP16 Values
VDBPSADBWDouble Block Packed Sum-Absolute-Differences (SAD) on Unsigned Bytes
VDIVPHDivide Packed FP16 Values
VDIVSHDivide Scalar FP16 Values
VDPBF16PSDot Product of BF16 Pairs Accumulated Into Packed Single Precision
VERRVerify a Segment for Reading or Writing
VERWVerify a Segment for Reading or Writing
VEXPANDPDLoad Sparse Packed Double Precision Floating-Point Values From Dense Memory
VEXPANDPSLoad Sparse Packed Single Precision Floating-Point Values From Dense Memory
VEXTRACTF128Extract Packed Floating-Point Values
VEXTRACTF32x4Extract Packed Floating-Point Values
VEXTRACTF32x8Extract Packed Floating-Point Values
VEXTRACTF64x2Extract Packed Floating-Point Values
VEXTRACTF64x4Extract Packed Floating-Point Values
VEXTRACTI128Extract Packed Integer Values
VEXTRACTI32x4Extract Packed Integer Values
VEXTRACTI32x8Extract Packed Integer Values
VEXTRACTI64x2Extract Packed Integer Values
VEXTRACTI64x4Extract Packed Integer Values
VFCMADDCPHComplex Multiply and Accumulate FP16 Values
VFCMADDCSHComplex Multiply and Accumulate Scalar FP16 Values
VFCMULCPHComplex Multiply FP16 Values
VFCMULCSHComplex Multiply Scalar FP16 Values
VFIXUPIMMPDFix Up Special Packed Float64 Values
VFIXUPIMMPSFix Up Special Packed Float32 Values
VFIXUPIMMSDFix Up Special Scalar Float64 Value
VFIXUPIMMSSFix Up Special Scalar Float32 Value
VFMADD132PDFused Multiply-Add of Packed DoublePrecision Floating-Point Values
VFMADD132PHFused Multiply-Add of Packed FP16 Values
VFMADD132PSFused Multiply-Add of Packed SinglePrecision Floating-Point Values
VFMADD132SDFused Multiply-Add of Scalar DoublePrecision Floating-Point Values
VFMADD132SHFused Multiply-Add of Scalar FP16 Values
VFMADD132SSFused Multiply-Add of Scalar Single PrecisionFloating-Point Values
VFMADD213PDFused Multiply-Add of Packed DoublePrecision Floating-Point Values
VFMADD213PHFused Multiply-Add of Packed FP16 Values
VFMADD213PSFused Multiply-Add of Packed SinglePrecision Floating-Point Values
VFMADD213SDFused Multiply-Add of Scalar DoublePrecision Floating-Point Values
VFMADD213SHFused Multiply-Add of Scalar FP16 Values
VFMADD213SSFused Multiply-Add of Scalar Single PrecisionFloating-Point Values
VFMADD231PDFused Multiply-Add of Packed DoublePrecision Floating-Point Values
VFMADD231PHFused Multiply-Add of Packed FP16 Values
VFMADD231PSFused Multiply-Add of Packed SinglePrecision Floating-Point Values
VFMADD231SDFused Multiply-Add of Scalar DoublePrecision Floating-Point Values
VFMADD231SHFused Multiply-Add of Scalar FP16 Values
VFMADD231SSFused Multiply-Add of Scalar Single PrecisionFloating-Point Values
VFMADDCPHComplex Multiply and Accumulate FP16 Values
VFMADDCSHComplex Multiply and Accumulate Scalar FP16 Values
VFMADDRND231PDFused Multiply-Add of Packed Double-Precision Floating-Point Values with rounding control
VFMADDSUB132PDFused Multiply-Alternating Add/Subtract of Packed Double Precision Floating-Point Values
VFMADDSUB132PHFused Multiply-Alternating Add/Subtract of Packed FP16 Values
VFMADDSUB132PSFused Multiply-Alternating Add/Subtract of Packed Single Precision Floating-Point Values
VFMADDSUB213PDFused Multiply-Alternating Add/Subtract of Packed Double Precision Floating-Point Values
VFMADDSUB213PHFused Multiply-Alternating Add/Subtract of Packed FP16 Values
VFMADDSUB213PSFused Multiply-Alternating Add/Subtract of Packed Single Precision Floating-Point Values
VFMADDSUB231PDFused Multiply-Alternating Add/Subtract of Packed Double Precision Floating-Point Values
VFMADDSUB231PHFused Multiply-Alternating Add/Subtract of Packed FP16 Values
VFMADDSUB231PSFused Multiply-Alternating Add/Subtract of Packed Single Precision Floating-Point Values
VFMSUB132PDFused Multiply-Subtract of Packed Double Precision Floating-Point Values
VFMSUB132PHFused Multiply-Subtract of Packed FP16 Values
VFMSUB132PSFused Multiply-Subtract of Packed Single Precision Floating-Point Values
VFMSUB132SDFused Multiply-Subtract of Scalar Double Precision Floating-Point Values
VFMSUB132SHFused Multiply-Subtract of Scalar FP16 Values
VFMSUB132SSFused Multiply-Subtract of Scalar Single Precision Floating-Point Values
VFMSUB213PDFused Multiply-Subtract of Packed Double Precision Floating-Point Values
VFMSUB213PHFused Multiply-Subtract of Packed FP16 Values
VFMSUB213PSFused Multiply-Subtract of Packed Single Precision Floating-Point Values
VFMSUB213SDFused Multiply-Subtract of Scalar Double Precision Floating-Point Values
VFMSUB213SHFused Multiply-Subtract of Scalar FP16 Values
VFMSUB213SSFused Multiply-Subtract of Scalar Single Precision Floating-Point Values
VFMSUB231PDFused Multiply-Subtract of Packed Double Precision Floating-Point Values
VFMSUB231PHFused Multiply-Subtract of Packed FP16 Values
VFMSUB231PSFused Multiply-Subtract of Packed Single Precision Floating-Point Values
VFMSUB231SDFused Multiply-Subtract of Scalar Double Precision Floating-Point Values
VFMSUB231SHFused Multiply-Subtract of Scalar FP16 Values
VFMSUB231SSFused Multiply-Subtract of Scalar Single Precision Floating-Point Values
VFMSUBADD132PDFused Multiply-Alternating Subtract/Add of Packed Double Precision Floating-Point Values
VFMSUBADD132PHFused Multiply-Alternating Subtract/Add of Packed FP16 Values
VFMSUBADD132PSFused Multiply-Alternating Subtract/Add of Packed Single Precision Floating-Point Values
VFMSUBADD213PDFused Multiply-Alternating Subtract/Add of Packed Double Precision Floating-Point Values
VFMSUBADD213PHFused Multiply-Alternating Subtract/Add of Packed FP16 Values
VFMSUBADD213PSFused Multiply-Alternating Subtract/Add of Packed Single Precision Floating-Point Values
VFMSUBADD231PDFused Multiply-Alternating Subtract/Add of Packed Double Precision Floating-Point Values
VFMSUBADD231PHFused Multiply-Alternating Subtract/Add of Packed FP16 Values
VFMSUBADD231PSFused Multiply-Alternating Subtract/Add of Packed Single Precision Floating-Point Values
VFMULCPHComplex Multiply FP16 Values
VFMULCSHComplex Multiply Scalar FP16 Values
VFNMADD132PDFused Negative Multiply-Add of Packed Double Precision Floating-Point Values
VFNMADD132PHFused Multiply-Add of Packed FP16 Values
VFNMADD132PSFused Negative Multiply-Add of Packed Single Precision Floating-Point Values
VFNMADD132SDFused Negative Multiply-Add of Scalar Double Precision Floating-Point Values
VFNMADD132SHFused Multiply-Add of Scalar FP16 Values
VFNMADD132SSFused Negative Multiply-Add of Scalar Single Precision Floating-Point Values
VFNMADD213PDFused Negative Multiply-Add of Packed Double Precision Floating-Point Values
VFNMADD213PHFused Multiply-Add of Packed FP16 Values
VFNMADD213PSFused Negative Multiply-Add of Packed Single Precision Floating-Point Values
VFNMADD213SDFused Negative Multiply-Add of Scalar Double Precision Floating-Point Values
VFNMADD213SHFused Multiply-Add of Scalar FP16 Values
VFNMADD213SSFused Negative Multiply-Add of Scalar Single Precision Floating-Point Values
VFNMADD231PDFused Negative Multiply-Add of Packed Double Precision Floating-Point Values
VFNMADD231PHFused Multiply-Add of Packed FP16 Values
VFNMADD231PSFused Negative Multiply-Add of Packed Single Precision Floating-Point Values
VFNMADD231SDFused Negative Multiply-Add of Scalar Double Precision Floating-Point Values
VFNMADD231SHFused Multiply-Add of Scalar FP16 Values
VFNMADD231SSFused Negative Multiply-Add of Scalar Single Precision Floating-Point Values
VFNMSUB132PDFused Negative Multiply-Subtract of Packed Double Precision Floating-Point Values
VFNMSUB132PHFused Multiply-Subtract of Packed FP16 Values
VFNMSUB132PSFused Negative Multiply-Subtract of Packed Single Precision Floating-Point Values
VFNMSUB132SDFused Negative Multiply-Subtract of Scalar Double Precision Floating-Point Values
VFNMSUB132SHFused Multiply-Subtract of Scalar FP16 Values
VFNMSUB132SSFused Negative Multiply-Subtract of Scalar Single Precision Floating-Point Values
VFNMSUB213PDFused Negative Multiply-Subtract of Packed Double Precision Floating-Point Values
VFNMSUB213PHFused Multiply-Subtract of Packed FP16 Values
VFNMSUB213PSFused Negative Multiply-Subtract of Packed Single Precision Floating-Point Values
VFNMSUB213SDFused Negative Multiply-Subtract of Scalar Double Precision Floating-Point Values
VFNMSUB213SHFused Multiply-Subtract of Scalar FP16 Values
VFNMSUB213SSFused Negative Multiply-Subtract of Scalar Single Precision Floating-Point Values
VFNMSUB231PDFused Negative Multiply-Subtract of Packed Double Precision Floating-Point Values
VFNMSUB231PHFused Multiply-Subtract of Packed FP16 Values
VFNMSUB231PSFused Negative Multiply-Subtract of Packed Single Precision Floating-Point Values
VFNMSUB231SDFused Negative Multiply-Subtract of Scalar Double Precision Floating-Point Values
VFNMSUB231SHFused Multiply-Subtract of Scalar FP16 Values
VFNMSUB231SSFused Negative Multiply-Subtract of Scalar Single Precision Floating-Point Values
VFPCLASSPDTests Types of Packed Float64 Values
VFPCLASSPHTest Types of Packed FP16 Values
VFPCLASSPSTests Types of Packed Float32 Values
VFPCLASSSDTests Type of a Scalar Float64 Value
VFPCLASSSHTest Types of Scalar FP16 Values
VFPCLASSSSTests Type of a Scalar Float32 Value
VGATHERDPDGather Packed Double Precision Floating-Point Values Using Signed Dword/Qword Indices
VGATHERDPD (1)Gather Packed Single, Packed Double with Signed Dword Indices
VGATHERDPSGather Packed Single Precision Floating-Point Values Using Signed Dword/Qword Indices
VGATHERDPS (1)Gather Packed Single, Packed Double with Signed Dword Indices
VGATHERQPDGather Packed Double Precision Floating-Point Values Using Signed Dword/Qword Indices
VGATHERQPD (1)Gather Packed Single, Packed Double with Signed Qword Indices
VGATHERQPSGather Packed Single Precision Floating-Point Values Using Signed Dword/Qword Indices
VGATHERQPS (1)Gather Packed Single, Packed Double with Signed Qword Indices
VGETEXPPDConvert Exponents of Packed Double Precision Floating-Point Values to Double Precision Floating-Point Values
VGETEXPPHConvert Exponents of Packed FP16 Values to FP16 Values
VGETEXPPSConvert Exponents of Packed Single Precision Floating-Point Values to Single Precision Floating-Point Values
VGETEXPSDConvert Exponents of Scalar Double Precision Floating-Point Value to Double Precision Floating-Point Value
VGETEXPSHConvert Exponents of Scalar FP16 Values to FP16 Values
VGETEXPSSConvert Exponents of Scalar Single Precision Floating-Point Value to Single Precision Floating-Point Value
VGETMANTPDExtract Float64 Vector of Normalized Mantissas From Float64 Vector
VGETMANTPHExtract FP16 Vector of Normalized Mantissas from FP16 Vector
VGETMANTPSExtract Float32 Vector of Normalized Mantissas From Float32 Vector
VGETMANTSDExtract Float64 of Normalized Mantissa From Float64 Scalar
VGETMANTSHExtract FP16 of Normalized Mantissa from FP16 Scalar
VGETMANTSSExtract Float32 Vector of Normalized Mantissa From Float32 Scalar
VINSERTF128Insert Packed Floating-Point Values
VINSERTF32x4Insert Packed Floating-Point Values
VINSERTF32x8Insert Packed Floating-Point Values
VINSERTF64x2Insert Packed Floating-Point Values
VINSERTF64x4Insert Packed Floating-Point Values
VINSERTI128Insert Packed Integer Values
VINSERTI32x4Insert Packed Integer Values
VINSERTI32x8Insert Packed Integer Values
VINSERTI64x2Insert Packed Integer Values
VINSERTI64x4Insert Packed Integer Values
VMASKMOVConditional SIMD Packed Loads and Stores
VMAXPHReturn Maximum of Packed FP16 Values
VMAXSHReturn Maximum of Scalar FP16 Values
VMINPHReturn Minimum of Packed FP16 Values
VMINSHReturn Minimum Scalar FP16 Value
VMOVDQA32Move Aligned Packed Integer Values
VMOVDQA64Move Aligned Packed Integer Values
VMOVDQU16Move Unaligned Packed Integer Values
VMOVDQU32Move Unaligned Packed Integer Values
VMOVDQU64Move Unaligned Packed Integer Values
VMOVDQU8Move Unaligned Packed Integer Values
VMOVSHMove Scalar FP16 Value
VMOVWMove Word
VMULPHMultiply Packed FP16 Values
VMULSHMultiply Scalar FP16 Values
VP2INTERSECTDCompute Intersection Between DWORDS/QUADWORDS to a Pair of Mask Registers
VP2INTERSECTQCompute Intersection Between DWORDS/QUADWORDS to a Pair of Mask Registers
VPBLENDDBlend Packed Dwords
VPBLENDMBBlend Byte/Word Vectors Using an Opmask Control
VPBLENDMDBlend Int32/Int64 Vectors Using an OpMask Control
VPBLENDMQBlend Int32/Int64 Vectors Using an OpMask Control
VPBLENDMWBlend Byte/Word Vectors Using an Opmask Control
VPBROADCASTLoad Integer and Broadcast
VPBROADCASTBLoad With Broadcast Integer Data From General Purpose Register
VPBROADCASTDLoad With Broadcast Integer Data From General Purpose Register
VPBROADCASTMBroadcast Mask to Vector Register
VPBROADCASTQLoad With Broadcast Integer Data From General Purpose Register
VPBROADCASTWLoad With Broadcast Integer Data From General Purpose Register
VPCMPBCompare Packed Byte Values Into Mask
VPCMPDCompare Packed Integer Values Into Mask
VPCMPQCompare Packed Integer Values Into Mask
VPCMPUBCompare Packed Byte Values Into Mask
VPCMPUDCompare Packed Integer Values Into Mask
VPCMPUQCompare Packed Integer Values Into Mask
VPCMPUWCompare Packed Word Values Into Mask
VPCMPWCompare Packed Word Values Into Mask
VPCOMPRESSBStore Sparse Packed Byte/Word Integer Values Into Dense Memory/Register
VPCOMPRESSDStore Sparse Packed Doubleword Integer Values Into Dense Memory/Register
VPCOMPRESSQStore Sparse Packed Quadword Integer Values Into Dense Memory/Register
VPCONFLICTDDetect Conflicts Within a Vector of Packed Dword/Qword Values Into Dense Memory/Register
VPCONFLICTQDetect Conflicts Within a Vector of Packed Dword/Qword Values Into Dense Memory/Register
VPDPBUSDMultiply and Add Unsigned and Signed Bytes
VPDPBUSDSMultiply and Add Unsigned and Signed Bytes With Saturation
VPDPWSSDMultiply and Add Signed Word Integers
VPDPWSSDSMultiply and Add Signed Word Integers With Saturation
VPERM2F128Permute Floating-Point Values
VPERM2I128Permute Integer Values
VPERMBPermute Packed Bytes Elements
VPERMDPermute Packed Doubleword/Word Elements
VPERMI2BFull Permute of Bytes From Two Tables Overwriting the Index
VPERMI2DFull Permute From Two Tables Overwriting the Index
VPERMI2PDFull Permute From Two Tables Overwriting the Index
VPERMI2PSFull Permute From Two Tables Overwriting the Index
VPERMI2QFull Permute From Two Tables Overwriting the Index
VPERMI2WFull Permute From Two Tables Overwriting the Index
VPERMILPDPermute In-Lane of Pairs of Double Precision Floating-Point Values
VPERMILPSPermute In-Lane of Quadruples of Single Precision Floating-Point Values
VPERMPDPermute Double Precision Floating-Point Elements
VPERMPSPermute Single Precision Floating-Point Elements
VPERMQQwords Element Permutation
VPERMT2BFull Permute of Bytes From Two Tables Overwriting a Table
VPERMT2DFull Permute From Two Tables Overwriting One Table
VPERMT2PDFull Permute From Two Tables Overwriting One Table
VPERMT2PSFull Permute From Two Tables Overwriting One Table
VPERMT2QFull Permute From Two Tables Overwriting One Table
VPERMT2WFull Permute From Two Tables Overwriting One Table
VPERMWPermute Packed Doubleword/Word Elements
VPEXPANDBExpand Byte/Word Values
VPEXPANDDLoad Sparse Packed Doubleword Integer Values From Dense Memory/Register
VPEXPANDQLoad Sparse Packed Quadword Integer Values From Dense Memory/Register
VPEXPANDWExpand Byte/Word Values
VPGATHERDDGather Packed Dword Values Using Signed Dword/Qword Indices
VPGATHERDD (1)Gather Packed Dword, Packed Qword With Signed Dword Indices
VPGATHERDQGather Packed Dword, Packed Qword With Signed Dword Indices
VPGATHERDQ (1)Gather Packed Qword Values Using Signed Dword/Qword Indices
VPGATHERQDGather Packed Dword Values Using Signed Dword/Qword Indices
VPGATHERQD (1)Gather Packed Dword, Packed Qword with Signed Qword Indices
VPGATHERQQGather Packed Qword Values Using Signed Dword/Qword Indices
VPGATHERQQ (1)Gather Packed Dword, Packed Qword with Signed Qword Indices
VPLZCNTDCount the Number of Leading Zero Bits for Packed Dword, Packed Qword Values
VPLZCNTQCount the Number of Leading Zero Bits for Packed Dword, Packed Qword Values
VPMADD52HUQPacked Multiply of Unsigned 52-Bit Unsigned Integers and Add High 52-Bit Products to 64-Bit Accumulators
VPMADD52LUQPacked Multiply of Unsigned 52-Bit Integers and Add the Low 52-Bit Products to Qword Accumulators
VPMASKMOVConditional SIMD Integer Packed Loads and Stores
VPMOVB2MConvert a Vector Register to a Mask
VPMOVD2MConvert a Vector Register to a Mask
VPMOVDBDown Convert DWord to Byte
VPMOVDWDown Convert DWord to Word
VPMOVM2BConvert a Mask Register to a Vector Register
VPMOVM2DConvert a Mask Register to a Vector Register
VPMOVM2QConvert a Mask Register to a Vector Register
VPMOVM2WConvert a Mask Register to a Vector Register
VPMOVQ2MConvert a Vector Register to a Mask
VPMOVQBDown Convert QWord to Byte
VPMOVQDDown Convert QWord to DWord
VPMOVQWDown Convert QWord to Word
VPMOVSDBDown Convert DWord to Byte
VPMOVSDWDown Convert DWord to Word
VPMOVSQBDown Convert QWord to Byte
VPMOVSQDDown Convert QWord to DWord
VPMOVSQWDown Convert QWord to Word
VPMOVSWBDown Convert Word to Byte
VPMOVUSDBDown Convert DWord to Byte
VPMOVUSDWDown Convert DWord to Word
VPMOVUSQBDown Convert QWord to Byte
VPMOVUSQDDown Convert QWord to DWord
VPMOVUSQWDown Convert QWord to Word
VPMOVUSWBDown Convert Word to Byte
VPMOVW2MConvert a Vector Register to a Mask
VPMOVWBDown Convert Word to Byte
VPMULTISHIFTQBSelect Packed Unaligned Bytes From Quadword Sources
VPOPCNTReturn the Count of Number of Bits Set to 1 in BYTE/WORD/DWORD/QWORD
VPROLDBit Rotate Left
VPROLQBit Rotate Left
VPROLVDBit Rotate Left
VPROLVQBit Rotate Left
VPRORDBit Rotate Right
VPRORQBit Rotate Right
VPRORVDBit Rotate Right
VPRORVQBit Rotate Right
VPSCATTERDDScatter Packed Dword, Packed Qword with Signed Dword, Signed Qword Indices
VPSCATTERDQScatter Packed Dword, Packed Qword with Signed Dword, Signed Qword Indices
VPSCATTERQDScatter Packed Dword, Packed Qword with Signed Dword, Signed Qword Indices
VPSCATTERQQScatter Packed Dword, Packed Qword with Signed Dword, Signed Qword Indices
VPSHLDConcatenate and Shift Packed Data Left Logical
VPSHLDVConcatenate and Variable Shift Packed Data Left Logical
VPSHRDConcatenate and Shift Packed Data Right Logical
VPSHRDVConcatenate and Variable Shift Packed Data Right Logical
VPSHUFBITQMBShuffle Bits From Quadword Elements Using Byte Indexes Into Mask
VPSLLVDVariable Bit Shift Left Logical
VPSLLVQVariable Bit Shift Left Logical
VPSLLVWVariable Bit Shift Left Logical
VPSRAVDVariable Bit Shift Right Arithmetic
VPSRAVQVariable Bit Shift Right Arithmetic
VPSRAVWVariable Bit Shift Right Arithmetic
VPSRLVDVariable Bit Shift Right Logical
VPSRLVQVariable Bit Shift Right Logical
VPSRLVWVariable Bit Shift Right Logical
VPTERNLOGDBitwise Ternary Logic
VPTERNLOGQBitwise Ternary Logic
VPTESTMBLogical AND and Set Mask
VPTESTMDLogical AND and Set Mask
VPTESTMQLogical AND and Set Mask
VPTESTMWLogical AND and Set Mask
VPTESTNMBLogical NAND and Set
VPTESTNMDLogical NAND and Set
VPTESTNMQLogical NAND and Set
VPTESTNMWLogical NAND and Set
VRANGEPDRange Restriction Calculation for Packed Pairs of Float64 Values
VRANGEPSRange Restriction Calculation for Packed Pairs of Float32 Values
VRANGESDRange Restriction Calculation From a Pair of Scalar Float64 Values
VRANGESSRange Restriction Calculation From a Pair of Scalar Float32 Values
VRCP14PDCompute Approximate Reciprocals of Packed Float64 Values
VRCP14PSCompute Approximate Reciprocals of Packed Float32 Values
VRCP14SDCompute Approximate Reciprocal of Scalar Float64 Value
VRCP14SSCompute Approximate Reciprocal of Scalar Float32 Value
VRCPPHCompute Reciprocals of Packed FP16 Values
VRCPSHCompute Reciprocal of Scalar FP16 Value
VREDUCEPDPerform Reduction Transformation on Packed Float64 Values
VREDUCEPHPerform Reduction Transformation on Packed FP16 Values
VREDUCEPSPerform Reduction Transformation on Packed Float32 Values
VREDUCESDPerform a Reduction Transformation on a Scalar Float64 Value
VREDUCESHPerform Reduction Transformation on Scalar FP16 Value
VREDUCESSPerform a Reduction Transformation on a Scalar Float32 Value
VRNDSCALEPDRound Packed Float64 Values to Include a Given Number of Fraction Bits
VRNDSCALEPHRound Packed FP16 Values to Include a Given Number of Fraction Bits
VRNDSCALEPSRound Packed Float32 Values to Include a Given Number of Fraction Bits
VRNDSCALESDRound Scalar Float64 Value to Include a Given Number of Fraction Bits
VRNDSCALESHRound Scalar FP16 Value to Include a Given Number of Fraction Bits
VRNDSCALESSRound Scalar Float32 Value to Include a Given Number of Fraction Bits
VRSQRT14PDCompute Approximate Reciprocals of Square Roots of Packed Float64 Values
VRSQRT14PSCompute Approximate Reciprocals of Square Roots of Packed Float32 Values
VRSQRT14SDCompute Approximate Reciprocal of Square Root of Scalar Float64 Value
VRSQRT14SSCompute Approximate Reciprocal of Square Root of Scalar Float32 Value
VRSQRTPHCompute Reciprocals of Square Roots of Packed FP16 Values
VRSQRTSHCompute Approximate Reciprocal of Square Root of Scalar FP16 Value
VSCALEFPDScale Packed Float64 Values With Float64 Values
VSCALEFPHScale Packed FP16 Values with FP16 Values
VSCALEFPSScale Packed Float32 Values With Float32 Values
VSCALEFSDScale Scalar Float64 Values With Float64 Values
VSCALEFSHScale Scalar FP16 Values with FP16 Values
VSCALEFSSScale Scalar Float32 Value With Float32 Value
VSCATTERDPDScatter Packed Single, Packed Double with Signed Dword and Qword Indices
VSCATTERDPSScatter Packed Single, Packed Double with Signed Dword and Qword Indices
VSCATTERQPDScatter Packed Single, Packed Double with Signed Dword and Qword Indices
VSCATTERQPSScatter Packed Single, Packed Double with Signed Dword and Qword Indices
VSHUFF32x4Shuffle Packed Values at 128-Bit Granularity
VSHUFF64x2Shuffle Packed Values at 128-Bit Granularity
VSHUFI32x4Shuffle Packed Values at 128-Bit Granularity
VSHUFI64x2Shuffle Packed Values at 128-Bit Granularity
VSQRTPHCompute Square Root of Packed FP16 Values
VSQRTSHCompute Square Root of Scalar FP16 Value
VSUBPHSubtract Packed FP16 Values
VSUBSHSubtract Scalar FP16 Value
VTESTPDPacked Bit Test
VTESTPSPacked Bit Test
VUCOMISHUnordered Compare Scalar FP16 Values and Set EFLAGS
VZEROALLZero XMM, YMM, and ZMM Registers
VZEROUPPERZero Upper Bits of YMM and ZMM Registers
WAITWait
WBINVDWrite Back and Invalidate Cache
WBNOINVDWrite Back and Do Not Invalidate Cache
WRFSBASEWrite FS/GS Segment Base
WRGSBASEWrite FS/GS Segment Base
WRMSRWrite to Model Specific Register
WRPKRUWrite Data to User Page Key Register
WRSSDWrite to Shadow Stack
WRSSQWrite to Shadow Stack
WRUSSDWrite to User Shadow Stack
WRUSSQWrite to User Shadow Stack
XABORTTransactional Abort
XACQUIREHardware Lock Elision Prefix Hints
XADDExchange and Add
XBEGINTransactional Begin
XCHGExchange Register/Memory With Register
XENDTransactional End
XGETBVGet Value of Extended Control Register
XLATTable Look-up Translation
XLATBTable Look-up Translation
XORLogical Exclusive OR
XORPDBitwise Logical XOR of Packed Double Precision Floating-Point Values
XORPSBitwise Logical XOR of Packed Single Precision Floating-Point Values
XRELEASEHardware Lock Elision Prefix Hints
XRESLDTRKResume Tracking Load Addresses
XRSTORRestore Processor Extended States
XRSTORSRestore Processor Extended States Supervisor
XSAVESave Processor Extended States
XSAVECSave Processor Extended States With Compaction
XSAVEOPTSave Processor Extended States Optimized
XSAVESSave Processor Extended States Supervisor
XSETBVSet Extended Control Register
XSUSLDTRKSuspend Tracking Load Addresses
XTESTTest if in Transactional Execution

SGX Instructions

MnemonicSummary
ENCLSExecute an Enclave System Function of Specified Leaf Number
ENCLS[EADD]Add a Page to an Uninitialized Enclave
ENCLS[EAUG]Add a Page to an Initialized Enclave
ENCLS[EBLOCK]Mark a page in EPC as Blocked
ENCLS[ECREATE]Create an SECS page in the Enclave Page Cache
ENCLS[EDBGRD]Read From a Debug Enclave
ENCLS[EDBGWR]Write to a Debug Enclave
ENCLS[EEXTEND]Extend Uninitialized Enclave Measurement by 256 Bytes
ENCLS[EINIT]Initialize an Enclave for Execution
ENCLS[ELDBC]Load an EPC Page and Mark its State
ENCLS[ELDB]Load an EPC Page and Mark its State
ENCLS[ELDUC]Load an EPC Page and Mark its State
ENCLS[ELDU]Load an EPC Page and Mark its State
ENCLS[EMODPR]Restrict the Permissions of an EPC Page
ENCLS[EMODT]Change the Type of an EPC Page
ENCLS[EPA]Add Version Array
ENCLS[ERDINFO]Read Type and Status Information About an EPC Page
ENCLS[EREMOVE]Remove a page from the EPC
ENCLS[ETRACKC]Activates EBLOCK Checks
ENCLS[ETRACK]Activates EBLOCK Checks
ENCLS[EWB]Invalidate an EPC Page and Write out to Main Memory
ENCLUExecute an Enclave User Function of Specified Leaf Number
ENCLU[EACCEPTCOPY]Initialize a Pending Page
ENCLU[EACCEPT]Accept Changes to an EPC Page
ENCLU[EDECCSSA]Decrements TCS.CSSA
ENCLU[EENTER]Enters an Enclave
ENCLU[EEXIT]Exits an Enclave
ENCLU[EGETKEY]Retrieves a Cryptographic Key
ENCLU[EMODPE]Extend an EPC Page Permissions
ENCLU[EREPORT]Create a Cryptographic Report of the Enclave
ENCLU[ERESUME]Re-Enters an Enclave
ENCLVExecute an Enclave VMM Function of Specified Leaf Number
ENCLV[EDECVIRTCHILD]Decrement VIRTCHILDCNT in SECS
ENCLV[EINCVIRTCHILD]Increment VIRTCHILDCNT in SECS
ENCLV[ESETCONTEXT]Set the ENCLAVECONTEXT Field in SECS

SMX Instructions

MnemonicSummary
GETSEC[CAPABILITIES]Report the SMX Capabilities
GETSEC[ENTERACCS]Execute Authenticated Chipset Code
GETSEC[EXITAC]Exit Authenticated Code Execution Mode
GETSEC[PARAMETERS]Report the SMX Parameters
GETSEC[SENTER]Enter a Measured Environment
GETSEC[SEXIT]Exit Measured Environment
GETSEC[SMCTRL]SMX Mode Control
GETSEC[WAKEUP]Wake Up Sleeping Processors in Measured Environment

VMX Instructions

MnemonicSummary
INVEPTInvalidate Translations Derived from EPT
INVVPIDInvalidate Translations Based on VPID
VMCALLCall to VM Monitor
VMCLEARClear Virtual-Machine Control Structure
VMFUNCInvoke VM function
VMLAUNCHLaunch/Resume Virtual Machine
VMPTRLDLoad Pointer to Virtual-Machine Control Structure
VMPTRSTStore Pointer to Virtual-Machine Control Structure
VMREADRead Field from Virtual-Machine Control Structure
VMRESUMELaunch/Resume Virtual Machine
VMRESUME (1)Resume Virtual Machine
VMWRITEWrite Field to Virtual-Machine Control Structure
VMXOFFLeave VMX Operation
VMXONEnter VMX Operation

Xeon Phi™ Instructions

MnemonicSummary
PREFETCHWT1Prefetch Vector Data Into Caches With Intent to Write and T1 Hint
V4FMADDPSPacked Single Precision Floating-Point Fused Multiply-Add (4-Iterations)
V4FMADDSSScalar Single Precision Floating-Point Fused Multiply-Add (4-Iterations)
V4FNMADDPSPacked Single Precision Floating-Point Fused Multiply-Add (4-Iterations)
V4FNMADDSSScalar Single Precision Floating-Point Fused Multiply-Add (4-Iterations)
VEXP2PDApproximation to the Exponential 2^x of Packed Double Precision Floating-Point Values With Less Than 2^-23 Relative Error
VEXP2PSApproximation to the Exponential 2^x of Packed Single Precision Floating-Point Values With Less Than 2^-23 Relative Error
VGATHERPF0DPDSparse Prefetch Packed SP/DP Data Values With Signed Dword, Signed Qword Indices Using T0 Hint
VGATHERPF0DPSSparse Prefetch Packed SP/DP Data Values With Signed Dword, Signed Qword Indices Using T0 Hint
VGATHERPF0QPDSparse Prefetch Packed SP/DP Data Values With Signed Dword, Signed Qword Indices Using T0 Hint
VGATHERPF0QPSSparse Prefetch Packed SP/DP Data Values With Signed Dword, Signed Qword Indices Using T0 Hint
VGATHERPF1DPDSparse Prefetch Packed SP/DP Data Values With Signed Dword, Signed Qword Indices Using T1 Hint
VGATHERPF1DPSSparse Prefetch Packed SP/DP Data Values With Signed Dword, Signed Qword Indices Using T1 Hint
VGATHERPF1QPDSparse Prefetch Packed SP/DP Data Values With Signed Dword, Signed Qword Indices Using T1 Hint
VGATHERPF1QPSSparse Prefetch Packed SP/DP Data Values With Signed Dword, Signed Qword Indices Using T1 Hint
VP4DPWSSDDot Product of Signed Words With Dword Accumulation (4-Iterations)
VP4DPWSSDSDot Product of Signed Words With Dword Accumulation and Saturation (4-Iterations)
VRCP28PDApproximation to the Reciprocal of Packed Double Precision Floating-Point Values With Less Than 2^-28 Relative Error
VRCP28PSApproximation to the Reciprocal of Packed Single Precision Floating-Point Values With Less Than 2^-28 Relative Error
VRCP28SDApproximation to the Reciprocal of Scalar Double Precision Floating-Point Value With Less Than 2^-28 Relative Error
VRCP28SSApproximation to the Reciprocal of Scalar Single Precision Floating-Point Value With Less Than 2^-28 Relative Error
VRSQRT28PDApproximation to the Reciprocal Square Root of Packed Double Precision Floating-Point Values With Less Than 2^-28 Relative Error
VRSQRT28PSApproximation to the Reciprocal Square Root of Packed Single Precision Floating-Point Values With Less Than 2^-28 Relative Error
VRSQRT28SDApproximation to the Reciprocal Square Root of Scalar Double Precision Floating-Point Value With Less Than 2^-28 Relative Error
VRSQRT28SSApproximation to the Reciprocal Square Root of Scalar Single Precision Floating-Point Value With Less Than 2^-28 Relative Error
VSCATTERPF0DPDSparse Prefetch Packed SP/DP Data Values with Signed Dword, Signed Qword Indices Using T0 Hint With Intent to Write
VSCATTERPF0DPSSparse Prefetch Packed SP/DP Data Values with Signed Dword, Signed Qword Indices Using T0 Hint With Intent to Write
VSCATTERPF0QPDSparse Prefetch Packed SP/DP Data Values with Signed Dword, Signed Qword Indices Using T0 Hint With Intent to Write
VSCATTERPF0QPSSparse Prefetch Packed SP/DP Data Values with Signed Dword, Signed Qword Indices Using T0 Hint With Intent to Write
VSCATTERPF1DPDSparse Prefetch Packed SP/DP Data Values With Signed Dword, Signed Qword Indices Using T1 Hint With Intent to Write
VSCATTERPF1DPSSparse Prefetch Packed SP/DP Data Values With Signed Dword, Signed Qword Indices Using T1 Hint With Intent to Write
VSCATTERPF1QPDSparse Prefetch Packed SP/DP Data Values With Signed Dword, Signed Qword Indices Using T1 Hint With Intent to Write
VSCATTERPF1QPSSparse Prefetch Packed SP/DP Data Values With Signed Dword, Signed Qword Indices Using T1 Hint With Intent to Write
diff --git a/x86/ins.insb.insw.insd.html b/x86/ins.insb.insw.insd.html new file mode 100644 index 0000000..8cbf6d0 --- /dev/null +++ b/x86/ins.insb.insw.insd.html @@ -0,0 +1,214 @@ + +INS/INSB/INSW/INSD + — Input from Port to String

INS/INSB/INSW/INSD — Input from Port to String

OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
6CINS m8, DXZOValidValidInput byte from I/O port specified in DX into memory location specified in ES:(E)DI or RDI.1
6DINS m16, DXZOValidValidInput word from I/O port specified in DX into memory location specified in ES:(E)DI or RDI.1
6DINS m32, DXZOValidValidInput doubleword from I/O port specified in DX into memory location specified in ES:(E)DI or RDI.1
6CINSBZOValidValidInput byte from I/O port specified in DX into memory location specified with ES:(E)DI or RDI.1
6DINSWZOValidValidInput word from I/O port specified in DX into memory location specified in ES:(E)DI or RDI.1
6DINSDZOValidValidInput doubleword from I/O port specified in DX into memory location specified in ES:(E)DI or RDI.1
+
+

1. In 64-bit mode, only 64-bit (RDI) and 32-bit (EDI) address sizes are supported. In non-64-bit mode, only 32-bit (EDI) and 16-bit (DI) address sizes are supported.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Copies the data from the I/O port specified with the source operand (second operand) to the destination operand (first operand). The source operand is an I/O port address (from 0 to 65,535) that is read from the DX register. The destination operand is a memory location, the address of which is read from either the ES:DI, ES:EDI or the RDI registers (depending on the address-size attribute of the instruction, 16, 32 or 64, respectively). (The ES segment cannot be overridden with a segment override prefix.) The size of the I/O port being accessed (that is, the size of the source and destination operands) is determined by the opcode for an 8-bit I/O port or by the operand-size attribute of the instruction for a 16- or 32-bit I/O port.

+

At the assembly-code level, two forms of this instruction are allowed: the “explicit-operands” form and the “no-operands” form. The explicit-operands form (specified with the INS mnemonic) allows the source and destination operands to be specified explicitly. Here, the source operand must be “DX,” and the destination operand should be a symbol that indicates the size of the I/O port and the destination address. This explicit-operands form is provided to allow documentation; however, note that the documentation provided by this form can be misleading. That is, the destination operand symbol must specify the correct type (size) of the operand (byte, word, or doubleword), but it does not have to specify the correct location. The location is always specified by the ES:(E)DI registers, which must be loaded correctly before the INS instruction is executed.

+

The no-operands form provides “short forms” of the byte, word, and doubleword versions of the INS instructions. Here also DX is assumed by the processor to be the source operand and ES:(E)DI is assumed to be the destination operand. The size of the I/O port is specified with the choice of mnemonic: INSB (byte), INSW (word), or INSD (doubleword).

+

After the byte, word, or doubleword is transferred from the I/O port to the memory location, the DI/EDI/RDI register is incremented or decremented automatically according to the setting of the DF flag in the EFLAGS register. (If the DF flag is 0, the (E)DI register is incremented; if the DF flag is 1, the (E)DI register is decremented.) The (E)DI register is incremented or decremented by 1 for byte operations, by 2 for word operations, or by 4 for doubleword operations.

+

The INS, INSB, INSW, and INSD instructions can be preceded by the REP prefix for block input of ECX bytes, words, or doublewords. See “REP/REPE/REPZ /REPNE/REPNZ—Repeat String Operation Prefix” in Chapter 4 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2B, for a description of the REP prefix.

+
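As a minimal usage sketch (ours, not part of the reference text): on Linux/x86 with glibc, the helpers in <sys/io.h> compile down to the REP-prefixed INS forms described above. The port number 0x170, the word count, and the need for ioperm()-granted I/O privilege are assumptions made only for this illustration.
#include <stdio.h>
#include <stdint.h>
#include <sys/io.h>
int main(void)
{
    uint16_t buf[256];                 /* destination buffer: the ES:(E)DI / RDI side of INS */
    if (ioperm(0x170, 8, 1) != 0) {    /* request access to I/O ports 0x170..0x177 (needs privilege) */
        perror("ioperm");
        return 1;
    }
    insw(0x170, buf, 256);             /* roughly: MOV DX,0x170 / MOV ECX,256 / REP INSW */
    printf("first word read: 0x%04x\n", buf[0]);
    return 0;
}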

These instructions are only useful for accessing I/O ports located in the processor’s I/O address space. See Chapter 19, “Input/Output,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for more information on accessing I/O ports in the I/O address space.

+

In 64-bit mode, the default address size is 64 bits; 32-bit address size is supported using the prefix 67H. The address of the memory destination is specified by RDI or EDI. 16-bit address size is not supported in 64-bit mode. The operand size is not promoted.

+

These instructions may read from the I/O port without writing to the memory location if an exception or VM exit occurs due to the write (e.g. #PF). If this would be problematic, for example because the I/O port read has side-effects, software should ensure the write to the memory location does not cause an exception or VM exit.

+

Operation + ¶ +

+
IF ((PE = 1) and ((CPL > IOPL) or (VM = 1)))
+    THEN (* Protected mode with CPL > IOPL or virtual-8086 mode *)
+        IF (Any I/O Permission Bit for I/O port being accessed = 1)
+            THEN (* I/O operation is not allowed *)
+                #GP(0);
+            ELSE (* I/O operation is allowed *)
+                DEST := SRC; (* Read from I/O port *)
+        FI;
+    ELSE (* Real Mode or Protected Mode with CPL ≤ IOPL *)
+        DEST := SRC; (* Read from I/O port *)
+FI;
+Non-64-bit Mode:
+IF (Byte transfer)
+    THEN IF DF = 0
+        THEN (E)DI := (E)DI + 1;
+        ELSE (E)DI := (E)DI – 1; FI;
+    ELSE IF (Word transfer)
+        THEN IF DF = 0
+            THEN (E)DI := (E)DI + 2;
+            ELSE (E)DI := (E)DI – 2; FI;
+        ELSE (* Doubleword transfer *)
+            THEN IF DF = 0
+                THEN (E)DI := (E)DI + 4;
+                ELSE (E)DI := (E)DI – 4; FI;
+        FI;
+FI;
+FI;
+64-bit Mode:
+IF (Byte transfer)
+    THEN IF DF = 0
+        THEN (E|R)DI := (E|R)DI + 1;
+        ELSE (E|R)DI := (E|R)DI – 1; FI;
+    ELSE IF (Word transfer)
+        THEN IF DF = 0
+            THEN (E|R)DI := (E|R)DI + 2;
+            ELSE (E|R)DI := (E|R)DI – 2; FI;
+        ELSE (* Doubleword transfer *)
+            THEN IF DF = 0
+                THEN (E|R)DI := (E|R)DI + 4;
+                ELSE (E|R)DI := (E|R)DI – 4; FI;
+        FI;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#GP(0)If the CPL is greater than (has less privilege) the I/O privilege level (IOPL) and any of the corresponding I/O permission bits in TSS for the I/O port being accessed is 1.
If the destination is located in a non-writable segment.
If an illegal memory operand effective address in the ES segments is given.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If any of the I/O permission bits in the TSS for the I/O port being accessed is 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the CPL is greater than (has less privilege) the I/O privilege level (IOPL) and any of the corresponding I/O permission bits in TSS for the I/O port being accessed is 1.
If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/insertps.html b/x86/insertps.html new file mode 100644 index 0000000..7342185 --- /dev/null +++ b/x86/insertps.html @@ -0,0 +1,169 @@ + +INSERTPS + — Insert Scalar Single Precision Floating-Point Value

INSERTPS — Insert Scalar Single Precision Floating-Point Value

Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 3A 21 /r ib INSERTPS xmm1, xmm2/m32, imm8AV/VSSE4_1Insert a single precision floating-point value selected by imm8 from xmm2/m32 into xmm1 at the specified destination element specified by imm8 and zero out destination elements in xmm1 as indicated in imm8.
VEX.128.66.0F3A.WIG 21 /r ib VINSERTPS xmm1, xmm2, xmm3/m32, imm8BV/VAVXInsert a single precision floating-point value selected by imm8 from xmm3/m32 and merge with values in xmm2 at the specified destination element specified by imm8 and write out the result and zero out destination elements in xmm1 as indicated in imm8.
EVEX.128.66.0F3A.W0 21 /r ib VINSERTPS xmm1, xmm2, xmm3/m32, imm8CV/VAVX512FInsert a single precision floating-point value selected by imm8 from xmm3/m32 and merge with values in xmm2 at the specified destination element specified by imm8 and write out the result and zero out destination elements in xmm1 as indicated in imm8.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)imm8N/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8
CTuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

(register source form)

+

Copy a single precision scalar floating-point element into a 128-bit vector register. The immediate operand has three fields, where the ZMask bits specify which elements of the destination will be set to zero, the Count_D bits specify which element of the destination will be overwritten with the scalar value, and for vector register sources the Count_S bits specify which element of the source will be copied. When the scalar source is a memory operand the Count_S bits are ignored.

+
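The imm8 field decoding can be illustrated with a small scalar model (ours, not Intel's; the function name and the separate src_is_mem flag are assumptions made for the sketch):
#include <stdint.h>
/* Scalar model of the imm8 decoding described above: Count_S selects the source
 * element (forced to 0 for a memory source), Count_D selects the destination
 * element to overwrite, and ZMask zeroes the flagged destination elements. */
static void insertps_model(float dst[4], const float src[4],
                           uint8_t imm8, int src_is_mem)
{
    unsigned count_s = src_is_mem ? 0 : (imm8 >> 6) & 3;
    unsigned count_d = (imm8 >> 4) & 3;
    unsigned zmask   = imm8 & 0xF;
    dst[count_d] = src[count_s];
    for (unsigned i = 0; i < 4; i++)
        if (zmask & (1u << i))
            dst[i] = 0.0f;
}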

(memory source form)

+

Load a floating-point element from a 32-bit memory location and insert it into the first source at the location indicated by the Count_D bits of the immediate operand. Store the result in the destination and zero out destination elements based on the ZMask bits of the immediate operand.

+

128-bit Legacy SSE version: The first source register is an XMM register. The second source operand is either an XMM register or a 32-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding register destination are unmodified.

+

VEX.128 and EVEX encoded version: The destination and first source register is an XMM register. The second source operand is either an XMM register or a 32-bit memory location. The upper bits (MAXVL-1:128) of the corresponding register destination are zeroed.

+

An attempt to execute VINSERTPS encoded with VEX.L = 1 will cause an #UD exception.

+

Operation + ¶ +

+

VINSERTPS (VEX.128 and EVEX Encoded Version) + ¶ +

+
IF (SRC = REG) THEN COUNT_S := imm8[7:6]
+    ELSE COUNT_S := 0
+COUNT_D := imm8[5:4]
+ZMASK := imm8[3:0]
+CASE (COUNT_S) OF
+    0: TMP := SRC2[31:0]
+    1: TMP := SRC2[63:32]
+    2: TMP := SRC2[95:64]
+    3: TMP := SRC2[127:96]
+ESAC;
+CASE (COUNT_D) OF
+    0: TMP2[31:0] := TMP
+        TMP2[127:32] := SRC1[127:32]
+    1: TMP2[63:32] := TMP
+        TMP2[31:0] := SRC1[31:0]
+        TMP2[127:64] := SRC1[127:64]
+    2: TMP2[95:64] := TMP
+        TMP2[63:0] := SRC1[63:0]
+        TMP2[127:96] := SRC1[127:96]
+    3: TMP2[127:96] := TMP
+        TMP2[95:0] := SRC1[95:0]
+ESAC;
+IF (ZMASK[0] = 1) THEN DEST[31:0] := 00000000H
+    ELSE DEST[31:0] := TMP2[31:0]
+IF (ZMASK[1] = 1) THEN DEST[63:32] := 00000000H
+    ELSE DEST[63:32] := TMP2[63:32]
+IF (ZMASK[2] = 1) THEN DEST[95:64] := 00000000H
+    ELSE DEST[95:64] := TMP2[95:64]
+IF (ZMASK[3] = 1) THEN DEST[127:96] := 00000000H
+    ELSE DEST[127:96] := TMP2[127:96]
+DEST[MAXVL-1:128] := 0
+
+

INSERTPS (128-bit Legacy SSE Version) + ¶ +

+
IF (SRC = REG) THEN COUNT_S :=imm8[7:6]
+    ELSE COUNT_S :=0
+COUNT_D := imm8[5:4]
+ZMASK := imm8[3:0]
+CASE (COUNT_S) OF
+    0: TMP := SRC[31:0]
+    1: TMP := SRC[63:32]
+    2: TMP := SRC[95:64]
+    3: TMP := SRC[127:96]
+ESAC;
+CASE (COUNT_D) OF
+    0: TMP2[31:0] := TMP
+        TMP2[127:32] := DEST[127:32]
+    1: TMP2[63:32] := TMP
+        TMP2[31:0] := DEST[31:0]
+        TMP2[127:64] := DEST[127:64]
+    2: TMP2[95:64] := TMP
+        TMP2[63:0] := DEST[63:0]
+        TMP2[127:96] := DEST[127:96]
+    3: TMP2[127:96] := TMP
+        TMP2[95:0] := DEST[95:0]
+ESAC;
+IF (ZMASK[0] = 1) THEN DEST[31:0] := 00000000H
+    ELSE DEST[31:0] := TMP2[31:0]
+IF (ZMASK[1] = 1) THEN DEST[63:32] := 00000000H
+    ELSE DEST[63:32] := TMP2[63:32]
+IF (ZMASK[2] = 1) THEN DEST[95:64] := 00000000H
+    ELSE DEST[95:64] := TMP2[95:64]
+IF (ZMASK[3] = 1) THEN DEST[127:96] := 00000000H
+    ELSE DEST[127:96] := TMP2[127:96]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VINSERTPS __m128 _mm_insert_ps(__m128 dst, __m128 src, const int nidx);
+
+
INSERTPS __m128 _mm_insert_ps(__m128 dst, __m128 src, const int nidx);
+
+
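A possible usage sketch (assuming SSE4.1 support and compilation with -msse4.1): the helper macro _MM_MK_INSERTPS_NDX from <smmintrin.h> packs the srcField, dstField, and zeroMask fields into the imm8 described above.
#include <smmintrin.h>
/* Insert element 2 of b into element 1 of a and zero element 3 of a.
 * imm8 = (2 << 6) | (1 << 4) | 0x8. */
__m128 insert_example(__m128 a, __m128 b)
{
    return _mm_insert_ps(a, b, _MM_MK_INSERTPS_NDX(2, 1, 0x8));
}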

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-22, “Type 5 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.L = 0.
+

EVEX-encoded instruction, see Table 2-57, “Type E9NF Class Exception Conditions.”

diff --git a/x86/intn.into.int3.int1.html b/x86/intn.into.int3.int1.html new file mode 100644 index 0000000..f2bfa2e --- /dev/null +++ b/x86/intn.into.int3.int1.html @@ -0,0 +1,995 @@ + +INT n/INTO/INT3/INT1 + — Call to Interrupt Procedure

INT n/INTO/INT3/INT1 — Call to Interrupt Procedure

OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
CCINT3ZOValidValidGenerate breakpoint trap.
CD ibINT imm8IValidValidGenerate software interrupt with vector specified by immediate byte.
CEINTOZOInvalidValidGenerate overflow trap if overflow flag is 1.
F1INT1ZOValidValidGenerate debug trap.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
Iimm8N/AN/AN/A
+

Description + ¶ +

+

The INT n instruction generates a call to the interrupt or exception handler specified with the destination operand (see the section titled “Interrupts and Exceptions” in Chapter 6 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1). The destination operand specifies a vector from 0 to 255, encoded as an 8-bit unsigned immediate value. Each vector provides an index to a gate descriptor in the IDT. The first 32 vectors are reserved by Intel for system use. Some of these vectors are used for internally generated exceptions.

+

The INT n instruction is the general mnemonic for executing a software-generated call to an interrupt handler. The INTO instruction is a special mnemonic for calling the overflow exception (#OF) handler, exception 4. The overflow interrupt checks the OF flag in the EFLAGS register and calls the overflow interrupt handler if the OF flag is set to 1. (The INTO instruction cannot be used in 64-bit mode.)

+

The INT3 instruction uses a one-byte opcode (CC) and is intended for calling the debug exception handler with a breakpoint exception (#BP). (This one-byte form is useful because it can replace the first byte of any instruction at which a breakpoint is desired, including other one-byte instructions, without overwriting other instructions.)

+
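To illustrate why the one-byte encoding matters, a schematic sketch (ours; code[] stands in for writable code bytes, and a real debugger must also handle page permissions and re-execution of the patched instruction):
#include <stdint.h>
#include <stddef.h>
/* Plant an INT3 software breakpoint by patching a single byte (0xCC). */
static uint8_t set_breakpoint(uint8_t *code, size_t off)
{
    uint8_t saved = code[off];
    code[off] = 0xCC;   /* INT3 is one byte, so no following instruction is clobbered */
    return saved;       /* caller restores this byte before resuming the patched site */
}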

The INT1 instruction also uses a one-byte opcode (F1) and generates a debug exception (#DB) without setting any bits in DR6.1 Hardware vendors may use the INT1 instruction for hardware debug. For that reason, Intel recommends software vendors instead use the INT3 instruction for software breakpoints.

+
+

1. The mnemonic ICEBP has also been used for the instruction with opcode F1.

+

An interrupt generated by the INTO, INT3, or INT1 instruction differs from one generated by INT n in the following ways:

+
    +
  • The normal IOPL checks do not occur in virtual-8086 mode. The interrupt is taken (without fault) with any IOPL value.
  • The interrupt redirection enabled by the virtual-8086 mode extensions (VME) does not occur. The interrupt is always handled by a protected-mode handler.
+

(These features do not pertain to CD03, the “normal” 2-byte opcode for INT 3. Intel and Microsoft assemblers will not generate the CD03 opcode from any mnemonic, but this opcode can be created by direct numeric code definition or by self-modifying code.)

+

The action of the INT n instruction (including the INTO, INT3, and INT1 instructions) is similar to that of a far call made with the CALL instruction. The primary difference is that with the INT n instruction, the EFLAGS register is pushed onto the stack before the return address. (The return address is a far address consisting of the current values of the CS and EIP registers.) Returns from interrupt procedures are handled with the IRET instruction, which pops the EFLAGS information and return address from the stack.

+
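For orientation, a hedged sketch (ours, not SDM text): with a 32-bit operand size and no privilege-level change, the frame INT n leaves on the stack can be modeled by the struct below. The type and field names are invented for the sketch, and exceptions that push an error code would add a slot below eip.
#include <stdint.h>
/* Stack image after INT n (32-bit operand size, no privilege change, no error code).
 * Fields appear at increasing addresses starting at the new ESP. */
struct int_frame32 {
    uint32_t eip;     /* return EIP: pushed last, so at the lowest address */
    uint32_t cs;      /* return CS selector, zero-extended to 32 bits */
    uint32_t eflags;  /* EFLAGS image: pushed before the return address */
};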

Each of the INT n, INTO, and INT3 instructions generates a general-protection exception (#GP) if the CPL is greater than the DPL value in the selected gate descriptor in the IDT. In contrast, the INT1 instruction can deliver a #DB even if the CPL is greater than the DPL of descriptor 1 in the IDT. (This behavior supports the use of INT1 by hardware vendors performing hardware debug.)

+

The vector specifies an interrupt descriptor in the interrupt descriptor table (IDT); that is, it provides index into the IDT. The selected interrupt descriptor in turn contains a pointer to an interrupt or exception handler procedure. In protected mode, the IDT contains an array of 8-byte descriptors, each of which is an interrupt gate, trap gate, or task gate. In real-address mode, the IDT is an array of 4-byte far pointers (2-byte code segment selector and a 2-byte instruction pointer), each of which point directly to a procedure in the selected segment. (Note that in real-address mode, the IDT is called the interrupt vector table, and its pointers are called interrupt vectors.)

+
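As a rough sketch of that indexing (ours; idt_base, vector, and entry_size are illustrative names standing for IDTR.base, the interrupt vector, and the per-mode descriptor size):
#include <stdint.h>
/* Descriptor entries are 4 bytes in real-address mode, 8 bytes in protected mode,
 * and 16 bytes in IA-32e mode, matching the <<2 / <<3 / <<4 shifts used in the
 * Operation section below. */
static uint64_t idt_entry_addr(uint64_t idt_base, unsigned vector, unsigned entry_size)
{
    return idt_base + (uint64_t)vector * entry_size;
}
/* Example: vector 21H in protected mode is read from IDTR.base + (21H << 3) = base + 108H,
 * and only if (21H << 3) + 7 does not exceed IDTR.limit. */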

The following decision table indicates which action in the lower portion of the table is taken given the conditions in the upper portion of the table. Each Y in the lower section of the decision table represents a procedure defined in the “Operation” section for this instruction (except #GP).

+
PE: 0 1 1 1 1 1 1 1
VM: 0 1 1
IOPL: <3 =3
DPL/CPL RELATIONSHIP: DPL < CPL, DPL > CPL, DPL = CPL or C, DPL < CPL & NC
INTERRUPT TYPE: S/W
GATE TYPE: Task, Trap or Interrupt, Trap or Interrupt, Trap or Interrupt, Trap or Interrupt, Trap or Interrupt
REAL-ADDRESS-MODE: Y
PROTECTED-MODE: Y Y Y Y Y Y Y
TRAP-OR-INTERRUPT-GATE: Y Y Y Y Y
INTER-PRIVILEGE-LEVEL-INTERRUPT: Y
INTRA-PRIVILEGE-LEVEL-INTERRUPT: Y
INTERRUPT-FROM-VIRTUAL-8086-MODE: Y
TASK-GATE: Y
#GP: Y Y Y
+
Table 3-52. Decision Table
+
+

− Don't Care.

+

Y Yes, action taken.

+

Blank Action not taken.

+

S/W Applies to INT n, INT3, and INTO, but not to INT1.

+

When the processor is executing in virtual-8086 mode, the IOPL determines the action of the INT n instruction. If the IOPL is less than 3, the processor generates a #GP(selector) exception; if the IOPL is 3, the processor executes a protected mode interrupt to privilege level 0. The interrupt gate's DPL must be set to 3 and the target CPL of the interrupt handler procedure must be 0 to execute the protected mode interrupt to privilege level 0.

+

The interrupt descriptor table register (IDTR) specifies the base linear address and limit of the IDT. The initial base address value of the IDTR after the processor is powered up or reset is 0.

+

Refer to Chapter 6, “Procedure Calls, Interrupts, and Exceptions” and Chapter 17, “Control-flow Enforcement Technology (CET)” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for CET details.

+

Instruction ordering. Instructions following an INT n may be fetched from memory before earlier instructions complete execution, but they will not execute (even speculatively) until all instructions prior to the INT n have completed execution (the later instructions may execute before data stored by the earlier instructions have become globally visible). This applies also to the INTO, INT3, and INT1 instructions, but not to executions of INTO when EFLAGS.OF = 0.

+

Operation + ¶ +

+
The following operational description applies not only to the INT n, INTO, INT3, or INT1 instructions, but also to
+external interrupts, nonmaskable interrupts (NMIs), and exceptions. Some of these events push onto the stack an
+error code.
+The operational description specifies numerous checks whose failure may result in delivery of a nested exception.
+In these cases, the original event is not delivered.
+The operational description specifies the error code delivered by any nested exception. In some cases, the error
+code is specified with a pseudofunction error_code(num,idt,ext), where idt and ext are bit values. The pseudofunc-
+tion produces an error code as follows: (1) if idt is 0, the error code is (num & FCH) | ext; (2) if idt is 1, the error
+code is (num « 3) | 2 | ext.
+In many cases, the pseudofunction error_code is invoked with a pseudovariable EXT. The value of EXT depends on
+the nature of the event whose delivery encountered a nested exception: if that event is a software interrupt (INT n,
+INT3, or INTO), EXT is 0; otherwise (including INT1), EXT is 1.
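+(* Worked example (added for illustration, not SDM text): if a #GP is encountered while
+delivering INT 80H, then idt = 1 and EXT = 0 (software interrupt), so the pushed error
+code is (80H « 3) | 2 | 0 = 402H. *)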
+IF PE = 0
+    THEN
+        GOTO REAL-ADDRESS-MODE;
+    ELSE (* PE = 1 *)
+        IF (EFLAGS.VM = 1 AND CR4.VME = 0 AND IOPL < 3 AND INT n)
+            THEN
+                    #GP(0); (* Bit 0 of error code is 0 because INT n *)
+            ELSE
+                IF (EFLAGS.VM = 1 AND CR4.VME = 1 AND INT n)
+                        THEN
+                            Consult bit n of the software interrupt redirection bit map in the TSS;
+                            IF bit n is clear
+                                THEN (* redirect interrupt to 8086 program interrupt handler *)
+                                    Push EFLAGS[15:0]; (* if IOPL < 3, save VIF in IF position and save IOPL position as 3 *)
+                                    Push CS;
+                                    Push IP;
+                                    IF IOPL = 3
+                                        THEN IF := 0; (* Clear interrupt flag *)
+                                        ELSE VIF := 0; (* Clear virtual interrupt flag *)
+                                    FI;
+                                    TF := 0; (* Clear trap flag *)
+                                    load CS and EIP (lower 16 bits only) from entry n in interrupt vector table referenced from TSS;
+                                ELSE
+                                    IF IOPL = 3
+                                        THEN GOTO PROTECTED-MODE;
+                                        ELSE #GP(0); (* Bit 0 of error code is 0 because INT n *)
+                                    FI;
+                            FI;
+                        ELSE (* Protected mode, IA-32e mode, or virtual-8086 mode interrupt *)
+                            IF (IA32_EFER.LMA = 0)
+                                THEN (* Protected mode, or virtual-8086 mode interrupt *)
+                                    GOTO PROTECTED-MODE;
+                                ELSE (* IA-32e mode interrupt *)
+                                GOTO IA-32e-MODE;
+                            FI;
+                FI;
+        FI;
+FI;
+REAL-ADDRESS-MODE:
+    IF ((vector_number « 2) + 3) is not within IDT limit
+        THEN #GP; FI;
+    IF stack not large enough for a 6-byte return information
+        THEN #SS; FI;
+    Push (EFLAGS[15:0]);
+    IF := 0; (* Clear interrupt flag *)
+    TF := 0; (* Clear trap flag *)
+    AC := 0; (* Clear AC flag *)
+    Push(CS);
+    Push(IP);
+    (* No error codes are pushed in real-address mode*)
+    CS := IDT(Descriptor (vector_number « 2), selector));
+    EIP := IDT(Descriptor (vector_number « 2), offset)); (* 16 bit offset AND 0000FFFFH *)
+END;
+PROTECTED-MODE:
+    IF ((vector_number « 3) + 7) is not within IDT limits
+    or selected IDT descriptor is not an interrupt-, trap-, or task-gate type
+        THEN #GP(error_code(vector_number,1,EXT)); FI;
+        (* idt operand to error_code set because vector is used *)
+    IF software interrupt (* Generated by INT n, INT3, or INTO; does not apply to INT1 *)
+        THEN
+            IF gate DPL < CPL (* PE = 1, DPL < CPL, software interrupt *)
+                THEN #GP(error_code(vector_number,1,0)); FI;
+                (* idt operand to error_code set because vector is used *)
+                (* ext operand to error_code is 0 because INT n, INT3, or INTO*)
+    FI;
+    IF gate not present
+        THEN #NP(error_code(vector_number,1,EXT)); FI;
+        (* idt operand to error_code set because vector is used *)
+    IF task gate (* Specified in the selected interrupt table descriptor *)
+        THEN GOTO TASK-GATE;
+        ELSE GOTO TRAP-OR-INTERRUPT-GATE; (* PE = 1, trap/interrupt gate *)
+    FI;
+END;
+IA-32e-MODE:
+    IF INTO and CS.L = 1 (64-bit mode)
+        THEN #UD;
+    FI;
+    IF ((vector_number « 4) + 15) is not in IDT limits
+    or selected IDT descriptor is not an interrupt-, or trap-gate type
+        THEN #GP(error_code(vector_number,1,EXT));
+        (* idt operand to error_code set because vector is used *)
+    FI;
+    IF software interrupt (* Generated by INT n, INT3, or INTO; does not apply to INT1 *)
+        THEN
+            IF gate DPL < CPL (* PE = 1, DPL < CPL, software interrupt *)
+                THEN #GP(error_code(vector_number,1,0));
+                (* idt operand to error_code set because vector is used *)
+                (* ext operand to error_code is 0 because INT n, INT3, or INTO*)
+            FI;
+    FI;
+    IF gate not present
+        THEN #NP(error_code(vector_number,1,EXT));
+        (* idt operand to error_code set because vector is used *)
+    FI;
+    GOTO TRAP-OR-INTERRUPT-GATE; (* Trap/interrupt gate *)
+END;
+TASK-GATE: (* PE = 1, task gate *)
+    Read TSS selector in task gate (IDT descriptor);
+        IF local/global bit is set to local or index not within GDT limits
+            THEN #GP(error_code(TSS selector,0,EXT)); FI;
+            (* idt operand to error_code is 0 because selector is used *)
+        Access TSS descriptor in GDT;
+        IF TSS descriptor specifies that the TSS is busy (low-order 5 bits set to 00001)
+            THEN #GP(error_code(TSS selector,0,EXT)); FI;
+            (* idt operand to error_code is 0 because selector is used *)
+        IF TSS not present
+            THEN #NP(error_code(TSS selector,0,EXT)); FI;
+            (* idt operand to error_code is 0 because selector is used *)
+    SWITCH-TASKS (with nesting) to TSS;
+    IF interrupt caused by fault with error code
+        THEN
+            IF stack limit does not allow push of error code
+                THEN #SS(EXT); FI;
+            Push(error code);
+    FI;
+    IF EIP not within code segment limit
+        THEN #GP(EXT); FI;
+END;
+TRAP-OR-INTERRUPT-GATE:
+    Read new code-segment selector for trap or interrupt gate (IDT descriptor);
+    IF new code-segment selector is NULL
+        THEN #GP(EXT); FI; (* Error code contains NULL selector *)
+    IF new code-segment selector is not within its descriptor table limits
+        THEN #GP(error_code(new code-segment selector,0,EXT)); FI;
+        (* idt operand to error_code is 0 because selector is used *)
+    Read descriptor referenced by new code-segment selector;
+    IF descriptor does not indicate a code segment or new code-segment DPL > CPL
+        THEN #GP(error_code(new code-segment selector,0,EXT)); FI;
+        (* idt operand to error_code is 0 because selector is used *)
+    IF new code-segment descriptor is not present,
+        THEN #NP(error_code(new code-segment selector,0,EXT)); FI;
+        (* idt operand to error_code is 0 because selector is used *)
+    IF new code segment is non-conforming with DPL < CPL
+        THEN
+            IF VM = 0
+                THEN
+                        GOTO INTER-PRIVILEGE-LEVEL-INTERRUPT;
+                        (* PE = 1, VM = 0, interrupt or trap gate, nonconforming code segment,
+                        DPL < CPL *)
+                ELSE (* VM = 1 *)
+                        IF new code-segment DPL ≠ 0
+                            THEN #GP(error_code(new code-segment selector,0,EXT));
+                            (* idt operand to error_code is 0 because selector is used *)
+                        GOTO INTERRUPT-FROM-VIRTUAL-8086-MODE; FI;
+                        (* PE = 1, interrupt or trap gate, DPL < CPL, VM = 1 *)
+            FI;
+        ELSE (* PE = 1, interrupt or trap gate, DPL ≥ CPL *)
+            IF VM = 1
+                THEN #GP(error_code(new code-segment selector,0,EXT));
+                (* idt operand to error_code is 0 because selector is used *)
+            IF new code segment is conforming or new code-segment DPL = CPL
+                THEN
+                        GOTO INTRA-PRIVILEGE-LEVEL-INTERRUPT;
+                ELSE (* PE = 1, interrupt or trap gate, nonconforming code segment, DPL > CPL *)
+                        #GP(error_code(new code-segment selector,0,EXT));
+                        (* idt operand to error_code is 0 because selector is used *)
+            FI;
+    FI;
+END;
+INTER-PRIVILEGE-LEVEL-INTERRUPT:
+    (* PE = 1, interrupt or trap gate, non-conforming code segment, DPL < CPL *)
+    IF (IA32_EFER.LMA = 0) (* Not IA-32e mode *)
+        THEN
+        (* Identify stack-segment selector for new privilege level in current TSS *)
+            IF current TSS is 32-bit
+                THEN
+                        TSSstackAddress := (new code-segment DPL « 3) + 4;
+                        IF (TSSstackAddress + 5) > current TSS limit
+                            THEN #TS(error_code(current TSS selector,0,EXT)); FI;
+                            (* idt operand to error_code is 0 because selector is used *)
+                        NewSS := 2 bytes loaded from (TSS base + TSSstackAddress + 4);
+                        NewESP := 4 bytes loaded from (TSS base + TSSstackAddress);
+                ELSE (* current TSS is 16-bit *)
+                        TSSstackAddress := (new code-segment DPL « 2) + 2
+                        IF (TSSstackAddress + 3) > current TSS limit
+                            THEN #TS(error_code(current TSS selector,0,EXT)); FI;
+                            (* idt operand to error_code is 0 because selector is used *)
+                        NewSS := 2 bytes loaded from (TSS base + TSSstackAddress + 2);
+                        NewESP := 2 bytes loaded from (TSS base + TSSstackAddress);
+            FI;
+            IF NewSS is NULL
+                THEN #TS(EXT); FI;
+            IF NewSS index is not within its descriptor-table limits
+            or NewSS RPL ≠ new code-segment DPL
+                THEN #TS(error_code(NewSS,0,EXT)); FI;
+                (* idt operand to error_code is 0 because selector is used *)
+            Read new stack-segment descriptor for NewSS in GDT or LDT;
+            IF new stack-segment DPL ≠ new code-segment DPL
+            or new stack-segment Type does not indicate writable data segment
+                THEN #TS(error_code(NewSS,0,EXT)); FI;
+                (* idt operand to error_code is 0 because selector is used *)
+            IF NewSS is not present
+                THEN #SS(error_code(NewSS,0,EXT)); FI;
+                (* idt operand to error_code is 0 because selector is used *)
+                NewSSP := IA32_PLi_SSP (* where i = new code-segment DPL *)
+        ELSE (* IA-32e mode *)
+            IF IDT-gate IST = 0
+                THEN TSSstackAddress := (new code-segment DPL « 3) + 4;
+                ELSE TSSstackAddress := (IDT gate IST « 3) + 28;
+            FI;
+            IF (TSSstackAddress + 7) > current TSS limit
+                THEN #TS(error_code(current TSS selector,0,EXT)); FI;
+                (* idt operand to error_code is 0 because selector is used *)
+            NewRSP := 8 bytes loaded from (current TSS base + TSSstackAddress);
+            NewSS := new code-segment DPL; (* NULL selector with RPL = new CPL *)
+            IF IDT-gate IST = 0
+                THEN
+                        NewSSP := IA32_PLi_SSP (* where i = new code-segment DPL *)
+                ELSE
+                        NewSSPAddress = IA32_INTERRUPT_SSP_TABLE_ADDR + (IDT-gate IST « 3)
+                        (* Check if shadow stacks are enabled at CPL 0 *)
+                        IF ShadowStackEnabled(CPL 0)
+                            THEN NewSSP := 8 bytes loaded from NewSSPAddress; FI;
+            FI;
+    FI;
+    IF IDT gate is 32-bit
+            THEN
+                IF new stack does not have room for 24 bytes (error code pushed)
+                or 20 bytes (no error code pushed)
+                        THEN #SS(error_code(NewSS,0,EXT)); FI;
+                        (* idt operand to error_code is 0 because selector is used *)
+            FI
+        ELSE
+            IF IDT gate is 16-bit
+                THEN
+                        IF new stack does not have room for 12 bytes (error code pushed)
+                        or 10 bytes (no error code pushed)
+                            THEN #SS(error_code(NewSS,0,EXT)); FI;
+                            (* idt operand to error_code is 0 because selector is used *)
+            ELSE (* 64-bit IDT gate*)
+                IF StackAddress is non-canonical
+                        THEN #SS(EXT); FI; (* Error code contains NULL selector *)
+        FI;
+    FI;
+    IF (IA32_EFER.LMA = 0) (* Not IA-32e mode *)
+        THEN
+            IF instruction pointer from IDT gate is not within new code-segment limits
+                THEN #GP(EXT); FI; (* Error code contains NULL selector *)
+            ESP := NewESP;
+            SS := NewSS; (* Segment descriptor information also loaded *)
+        ELSE (* IA-32e mode *)
+            IF instruction pointer from IDT gate contains a non-canonical address
+                THEN #GP(EXT); FI; (* Error code contains NULL selector *)
+            RSP := NewRSP & FFFFFFFFFFFFFFF0H;
+            SS := NewSS;
+    FI;
+    IF IDT gate is 32-bit
+        THEN
+            CS:EIP := Gate(CS:EIP); (* Segment descriptor information also loaded *)
+        ELSE
+            IF IDT gate 16-bit
+                THEN
+                        CS:IP := Gate(CS:IP);
+                        (* Segment descriptor information also loaded *)
+                ELSE (* 64-bit IDT gate *)
+                        CS:RIP := Gate(CS:RIP);
+                        (* Segment descriptor information also loaded *)
+            FI;
+    FI;
+    IF IDT gate is 32-bit
+            THEN
+                Push(far pointer to old stack);
+                (* Old SS and ESP, 3 words padded to 4 *)
+                Push(EFLAGS);
+                Push(far pointer to return instruction);
+                (* Old CS and EIP, 3 words padded to 4 *)
+                Push(ErrorCode); (* If needed, 4 bytes *)
+            ELSE
+                IF IDT gate 16-bit
+                        THEN
+                            Push(far pointer to old stack);
+                            (* Old SS and SP, 2 words *)
+                            Push(EFLAGS[15:0]);
+                            Push(far pointer to return instruction);
+                            (* Old CS and IP, 2 words *)
+                            Push(ErrorCode); (* If needed, 2 bytes *)
+                        ELSE (* 64-bit IDT gate *)
+                            Push(far pointer to old stack);
+                            (* Old SS and SP, each an 8-byte push *)
+                            Push(RFLAGS); (* 8-byte push *)
+                            Push(far pointer to return instruction);
+                            (* Old CS and RIP, each an 8-byte push *)
+                            Push(ErrorCode); (* If needed, 8-bytes *)
+            FI;
+    FI;
+    IF ShadowStackEnabled(CPL) AND CPL = 3
+        THEN
+            IF IA32_EFER.LMA = 0
+                THEN IA32_PL3_SSP := SSP;
+                ELSE (* adjust so bits 63:N get the value of bit N–1, where N is the CPU’s maximum linear-address width *)
+                        IA32_PL3_SSP := LA_adjust(SSP);
+            FI;
+    FI;
+    CPL := new code-segment DPL;
+    CS(RPL) := CPL;
+    IF ShadowStackEnabled(CPL)
+        oldSSP := SSP
+        SSP := NewSSP
+        IF SSP & 0x07 != 0
+            THEN #GP(0); FI;
+        (* Token and CS:LIP:oldSSP pushed on shadow stack must be contained in a naturally aligned 32-byte region *)
+        IF (SSP & ~0x1F) != ((SSP – 24) & ~0x1F)
+            #GP(0); FI;
+        IF ((IA32_EFER.LMA and CS.L) = 0 AND SSP[63:32] != 0)
+            THEN #GP(0); FI;
+        expected_token_value = SSP (* busy bit - bit position 0 - must be clear *)
+        new_token_value = SSP | BUSY_BIT (* Set the busy bit *)
+        IF shadow_stack_lock_cmpxchg8b(SSP, new_token_value, expected_token_value) != expected_token_value
+            THEN #GP(0); FI;
+        IF oldSS.DPL != 3
+            ShadowStackPush8B(oldCS); (* Padded with 48 high-order bits of 0 *)
+            ShadowStackPush8B(oldCSBASE + oldRIP); (* Padded with 32 high-order bits of 0 for 32 bit LIP*)
+            ShadowStackPush8B(oldSSP);
+        FI;
+    FI;
+    IF EndbranchEnabled (CPL)
+        IA32_S_CET.TRACKER = WAIT_FOR_ENDBRANCH;
+        IA32_S_CET.SUPPRESS = 0
+    FI;
+    IF IDT gate is interrupt gate
+        THEN IF := 0 (* Interrupt flag set to 0, interrupts disabled *); FI;
+    TF := 0;
+    VM := 0;
+    RF := 0;
+    NT := 0;
+END;
+INTERRUPT-FROM-VIRTUAL-8086-MODE:
+    (* Identify stack-segment selector for privilege level 0 in current TSS *)
+    IF current TSS is 32-bit
+        THEN
+            IF TSS limit < 9
+                THEN #TS(error_code(current TSS selector,0,EXT)); FI;
+                (* idt operand to error_code is 0 because selector is used *)
+            NewSS := 2 bytes loaded from (current TSS base + 8);
+            NewESP := 4 bytes loaded from (current TSS base + 4);
+        ELSE (* current TSS is 16-bit *)
+            IF TSS limit < 5
+                THEN #TS(error_code(current TSS selector,0,EXT)); FI;
+                (* idt operand to error_code is 0 because selector is used *)
+            NewSS := 2 bytes loaded from (current TSS base + 4);
+            NewESP := 2 bytes loaded from (current TSS base + 2);
+    FI;
+    IF NewSS is NULL
+        THEN #TS(EXT); FI; (* Error code contains NULL selector *)
+    IF NewSS index is not within its descriptor table limits
+    or NewSS RPL ≠ 0
+        THEN #TS(error_code(NewSS,0,EXT)); FI;
+        (* idt operand to error_code is 0 because selector is used *)
+    Read new stack-segment descriptor for NewSS in GDT or LDT;
+    IF new stack-segment DPL ≠ 0 or stack segment does not indicate writable data segment
+        THEN #TS(error_code(NewSS,0,EXT)); FI;
+        (* idt operand to error_code is 0 because selector is used *)
+    IF new stack segment not present
+        THEN #SS(error_code(NewSS,0,EXT)); FI;
+        (* idt operand to error_code is 0 because selector is used *)
+    NewSSP := IA32_PL0_SSP (* the new code-segment DPL must be 0 *)
+    IF IDT gate is 32-bit
+        THEN
+            IF new stack does not have room for 40 bytes (error code pushed)
+            or 36 bytes (no error code pushed)
+                THEN #SS(error_code(NewSS,0,EXT)); FI;
+                (* idt operand to error_code is 0 because selector is used *)
+        ELSE (* IDT gate is 16-bit *)
+            IF new stack does not have room for 20 bytes (error code pushed)
+            or 18 bytes (no error code pushed)
+                THEN #SS(error_code(NewSS,0,EXT)); FI;
+                (* idt operand to error_code is 0 because selector is used *)
+    FI;
+    IF instruction pointer from IDT gate is not within new code-segment limits
+        THEN #GP(EXT); FI; (* Error code contains NULL selector *)
+    tempEFLAGS := EFLAGS;
+    VM := 0;
+    TF := 0;
+    RF := 0;
+    NT := 0;
+    IF service through interrupt gate
+        THEN IF = 0; FI;
+    TempSS := SS;
+    TempESP := ESP;
+    SS := NewSS;
+    ESP := NewESP;
+    (* Following pushes are 16 bits for 16-bit IDT gates and 32 bits for 32-bit IDT gates;
+    Segment selector pushes in 32-bit mode are padded to two words *)
+    Push(GS);
+    Push(FS);
+    Push(DS);
+    Push(ES);
+    Push(TempSS);
+    Push(TempESP);
+    Push(TempEFlags);
+    Push(CS);
+    Push(EIP);
+    GS := 0; (* Segment registers made NULL, invalid for use in protected mode *)
+    FS := 0;
+    DS := 0;
+    ES := 0;
+    CS := Gate(CS); (* Segment descriptor information also loaded *)
+    CS(RPL) := 0;
+    CPL := 0;
+    IF IDT gate is 32-bit
+        THEN
+            EIP := Gate(instruction pointer);
+        ELSE (* IDT gate is 16-bit *)
+            EIP := Gate(instruction pointer) AND 0000FFFFH;
+    FI;
+    IF ShadowStackEnabled(0)
+        oldSSP := SSP
+        SSP := NewSSP
+        IF SSP & 0x07 != 0
+            THEN #GP(0); FI;
+        (* Token and CS:LIP:oldSSP pushed on shadow stack must be contained in a naturally aligned 32-byte region *)
+        IF (SSP & ~0x1F) != ((SSP – 24) & ~0x1F)
+            #GP(0); FI;
+        IF ((IA32_EFER.LMA and CS.L) = 0 AND SSP[63:32] != 0)
+            THEN #GP(0); FI;
+        expected_token_value = SSP (* busy bit - bit position 0 - must be clear *)
+        new_token_value = SSP | BUSY_BIT (* Set the busy bit *)
+        IF shadow_stack_lock_cmpxchg8b(SSP, new_token_value, expected_token_value) != expected_token_value
+            THEN #GP(0); FI;
+    FI;
+    IF EndbranchEnabled (CPL)
+        IA32_S_CET.TRACKER = WAIT_FOR_ENDBRANCH;
+        IA32_S_CET.SUPPRESS = 0
+    FI;
+(* Start execution of new routine in Protected Mode *)
+END;
+INTRA-PRIVILEGE-LEVEL-INTERRUPT:
+    NewSSP = SSP;
+    CHECK_SS_TOKEN = 0
+    (* PE = 1, DPL = CPL or conforming segment *)
+    IF IA32_EFER.LMA = 1 (* IA-32e mode *)
+        IF IDT-descriptor IST ≠ 0
+            THEN
+                TSSstackAddress := (IDT-descriptor IST « 3) + 28;
+                IF (TSSstackAddress + 7) > TSS limit
+                        THEN #TS(error_code(current TSS selector,0,EXT)); FI;
+                        (* idt operand to error_code is 0 because selector is used *)
+                NewRSP := 8 bytes loaded from (current TSS base + TSSstackAddress);
+            ELSE NewRSP := RSP;
+        FI;
+        IF IDT-descriptor IST ≠ 0
+            IF ShadowStackEnabled(CPL)
+                THEN
+                        NewSSPAddress = IA32_INTERRUPT_SSP_TABLE_ADDR + (IDT gate IST « 3)
+                        NewSSP := 8 bytes loaded from NewSSPAddress
+                        CHECK_SS_TOKEN = 1
+            FI;
+        FI;
+    FI;
+    IF 32-bit gate (* implies IA32_EFER.LMA = 0 *)
+        THEN
+            IF current stack does not have room for 16 bytes (error code pushed)
+            or 12 bytes (no error code pushed)
+                THEN #SS(EXT); FI; (* Error code contains NULL selector *)
+        ELSE IF 16-bit gate (* implies IA32_EFER.LMA = 0 *)
+            IF current stack does not have room for 8 bytes (error code pushed)
+            or 6 bytes (no error code pushed)
+                THEN #SS(EXT); FI; (* Error code contains NULL selector *)
+        ELSE (* IA32_EFER.LMA = 1, 64-bit gate*)
+                IF NewRSP contains a non-canonical address
+                        THEN #SS(EXT); (* Error code contains NULL selector *)
+        FI;
+    FI;
+    IF (IA32_EFER.LMA = 0) (* Not IA-32e mode *)
+        THEN
+            IF instruction pointer from IDT gate is not within new code-segment limit
+                THEN #GP(EXT); FI; (* Error code contains NULL selector *)
+        ELSE
+            IF instruction pointer from IDT gate contains a non-canonical address
+                THEN #GP(EXT); FI; (* Error code contains NULL selector *)
+            RSP := NewRSP & FFFFFFFFFFFFFFF0H;
+    FI;
+    IF IDT gate is 32-bit (* implies IA32_EFER.LMA = 0 *)
+        THEN
+            Push (EFLAGS);
+            Push (far pointer to return instruction); (* 3 words padded to 4 *)
+            CS:EIP := Gate(CS:EIP); (* Segment descriptor information also loaded *)
+            Push (ErrorCode); (* If any *)
+        ELSE
+            IF IDT gate is 16-bit (* implies IA32_EFER.LMA = 0 *)
+                THEN
+                        Push (FLAGS);
+                        Push (far pointer to return location); (* 2 words *)
+                        CS:IP := Gate(CS:IP);
+                        (* Segment descriptor information also loaded *)
+                        Push (ErrorCode); (* If any *)
+                ELSE (* IA32_EFER.LMA = 1, 64-bit gate*)
+                        Push(far pointer to old stack);
+                        (* Old SS and SP, each an 8-byte push *)
+                        Push(RFLAGS); (* 8-byte push *)
+                        Push(far pointer to return instruction);
+                        (* Old CS and RIP, each an 8-byte push *)
+                        Push(ErrorCode); (* If needed, 8 bytes *)
+                        CS:RIP := GATE(CS:RIP);
+                        (* Segment descriptor information also loaded *)
+            FI;
+    FI;
+    CS(RPL) := CPL;
+    IF ShadowStackEnabled(CPL)
+        IF CHECK_SS_TOKEN == 1
+            THEN
+                IF NewSSP & 0x07 != 0
+                        THEN #GP(0); FI;
+                (* Token and CS:LIP:oldSSP pushed on shadow stack must be contained in a naturally aligned 32-byte region *)
+                IF (NewSSP & ~0x1F) != ((NewSSP – 24) & ~0x1F)
+                    #GP(0); FI;
+                IF ((IA32_EFER.LMA and CS.L) = 0 AND NewSSP[63:32] != 0)
+                        THEN #GP(0); FI;
+                expected_token_value = NewSSP (* busy bit - bit position 0 - must be clear *)
+                new_token_value = NewSSP | BUSY_BIT (* Set the busy bit *)
+                IF shadow_stack_lock_cmpxchg8b(NewSSP, new_token_value, expected_token_value) != expected_token_value
+                        THEN #GP(0); FI;
+        FI;
+        (* Align to next 8 byte boundary *)
+        tempSSP = SSP;
+        Shadow_stack_store 4 bytes of 0 to (NewSSP − 4)
+        SSP = NewSSP & 0xFFFFFFFFFFFFFFF8;
+        (* push cs:lip:ssp on shadow stack *)
+        ShadowStackPush8B(oldCS); (* Padded with 48 high-order bits of 0 *)
+        ShadowStackPush8B(oldCSBASE + oldRIP); (* Padded with 32 high-order bits of 0 for 32 bit LIP*)
+        ShadowStackPush8B(tempSSP);
+    FI;
+    IF EndbranchEnabled (CPL)
+        IF CPL = 3
+            THEN
+                IA32_U_CET.TRACKER = WAIT_FOR_ENDBRANCH
+                IA32_U_CET.SUPPRESS = 0
+            ELSE
+                IA32_S_CET.TRACKER = WAIT_FOR_ENDBRANCH
+                IA32_S_CET.SUPPRESS = 0
+        FI;
+    FI;
+    IF IDT gate is interrupt gate
+        THEN IF := 0; FI; (* Interrupt flag set to 0; interrupts disabled *)
+    TF := 0;
+    NT := 0;
+    VM := 0;
+    RF := 0;
+END;
+
+
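For a concrete picture of the push sequence above: an inter-privilege-level interrupt through a 32-bit gate leaves the old SS:ESP, EFLAGS, old CS:EIP, and (for some exceptions) an error code on the new stack, with each slot widened to 4 bytes. The C sketch below models that frame as seen from the handler's entry ESP, lowest address first; the struct and field names are illustrative and are not defined by this manual.

#include <stdint.h>

/* Illustrative only: frame on the new stack after an inter-privilege-level
   interrupt through a 32-bit gate, lowest address (handler's entry ESP) first.
   Selector pushes are zero-padded to 4 bytes, matching the pseudocode above. */
struct int_gate32_frame {
    uint32_t error_code;  /* present only when the exception pushes one */
    uint32_t eip;         /* return EIP */
    uint32_t cs;          /* return CS selector */
    uint32_t eflags;      /* saved EFLAGS image */
    uint32_t esp;         /* old (outer) ESP */
    uint32_t ss;          /* old (outer) SS selector */
};                        /* 24 bytes with an error code, 20 bytes without,
                             matching the stack-room checks in the pseudocode */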

Flags Affected + ¶ +

+

The EFLAGS register is pushed onto the stack. The IF, TF, NT, AC, RF, and VM flags may be cleared, depending on the mode of operation of the processor when the INT instruction is executed (see the “Operation” section). If the interrupt uses a task gate, any flags may be set or cleared, controlled by the EFLAGS image in the new task’s TSS.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(error_code)If the instruction pointer in the IDT or in the interrupt, trap, or task gate is beyond the code segment limits.
If the segment selector in the interrupt, trap, or task gate is NULL.
If an interrupt, trap, or task gate, code segment, or TSS segment selector index is outside its descriptor table limits.
If the vector selects a descriptor outside the IDT limits.
If an IDT descriptor is not an interrupt, trap, or task gate.
If an interrupt is generated by the INT n, INT3, or INTO instruction and the DPL of an interrupt, trap, or task gate is less than the CPL.
If the segment selector in an interrupt or trap gate does not point to a segment descriptor for a code segment.
If the segment selector for a TSS has its local/global bit set for local.
If a TSS segment descriptor specifies that the TSS is busy or not available.
If SSP in IA32_PLi_SSP (where i is the new CPL) is not 8 byte aligned.
If the token and the stack frame to be pushed on shadow stack are not contained in a naturally aligned 32-byte region of the shadow stack.
If “supervisor Shadow Stack” token on new shadow stack is marked busy.
If destination mode is 32-bit or compatibility mode, but SSP address in “supervisor shadow stack” token is beyond 4GB.
If SSP address in “supervisor shadow stack” token does not match SSP address in IA32_PLi_SSP (where i is the new CPL).
#SS(error_code)If pushing the return address, flags, or error code onto the stack exceeds the bounds of the stack segment and no stack switch occurs.
If the SS register is being loaded and the segment pointed to is marked not present.
If pushing the return address, flags, error code, or stack segment pointer exceeds the bounds of the new stack segment when a stack switch occurs.
#NP(error_code)If code segment, interrupt gate, trap gate, task gate, or TSS is not present.
#TS(error_code)If the RPL of the stack segment selector in the TSS is not equal to the DPL of the code segment being accessed by the interrupt or trap gate.
If DPL of the stack segment descriptor pointed to by the stack segment selector in the TSS is not equal to the DPL of the code segment descriptor for the interrupt or trap gate.
If the stack segment selector in the TSS is NULL.
If the stack segment for the TSS is not a writable data segment.
If segment-selector index for stack segment is outside descriptor table limits.
#PF(fault-code)If a page fault occurs.
#UDIf the LOCK prefix is used.
#AC(EXT)If alignment checking is enabled, the gate DPL is 3, and a stack push is unaligned.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the interrupt vector number is outside the IDT limits.
#SSIf stack limit violation on push.
If pushing the return address, flags, or error code onto the stack exceeds the bounds of the stack segment.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(error_code)(For INT n, INTO, or BOUND instruction) If the IOPL is less than 3 or the DPL of the interrupt, trap, or task gate is not equal to 3.
If the instruction pointer in the IDT or in the interrupt, trap, or task gate is beyond the code segment limits.
If the segment selector in the interrupt, trap, or task gate is NULL.
If an interrupt gate, trap gate, task gate, code segment, or TSS segment selector index is outside its descriptor table limits.
If the vector selects a descriptor outside the IDT limits.
If an IDT descriptor is not an interrupt, trap, or task gate.
If an interrupt is generated by INT n, INT3, or INTO and the DPL of an interrupt, trap, or task gate is less than the CPL.
If the segment selector in an interrupt or trap gate does not point to a segment descriptor for a code segment.
If the segment selector for a TSS has its local/global bit set for local.
#SS(error_code)If the SS register is being loaded and the segment pointed to is marked not present.
If pushing the return address, flags, error code, stack segment pointer, or data segments exceeds the bounds of the stack segment.
#NP(error_code)If code segment, interrupt gate, trap gate, task gate, or TSS is not present.
#TS(error_code)If the RPL of the stack segment selector in the TSS is not equal to the DPL of the code segment being accessed by the interrupt or trap gate.
If DPL of the stack segment descriptor for the TSS’s stack segment is not equal to the DPL of the code segment descriptor for the interrupt or trap gate.
If the stack segment selector in the TSS is NULL.
If the stack segment for the TSS is not a writable data segment.
If segment-selector index for stack segment is outside descriptor table limits.
#PF(fault-code)If a page fault occurs.
#OFIf the INTO instruction is executed and the OF flag is set.
#UDIf the LOCK prefix is used.
#AC(EXT)If alignment checking is enabled, the gate DPL is 3, and a stack push is unaligned.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(error_code)If the instruction pointer in the 64-bit interrupt gate or trap gate is non-canonical.
If the segment selector in the 64-bit interrupt or trap gate is NULL.
If the vector selects a descriptor outside the IDT limits.
If the vector points to a gate which is in non-canonical space.
If the vector points to a descriptor which is not a 64-bit interrupt gate or a 64-bit trap gate.
If the descriptor pointed to by the gate selector is outside the descriptor table limit.
If the descriptor pointed to by the gate selector is in non-canonical space.
If the descriptor pointed to by the gate selector is not a code segment.
If the descriptor pointed to by the gate selector doesn’t have the L-bit set, or has both the L-bit and D-bit set.
If the descriptor pointed to by the gate selector has DPL > CPL.
If SSP in IA32_PLi_SSP (where i is the new CPL) is not 8 byte aligned.
If the token and the stack frame to be pushed on shadow stack are not contained in a naturally aligned 32-byte region of the shadow stack.
If “supervisor shadow stack” token on new shadow stack is marked busy.
If destination mode is 32-bit or compatibility mode, but SSP address in “supervisor shadow stack” token is beyond 4GB.
If SSP address in “supervisor shadow stack” token does not match SSP address in IA32_PLi_SSP (where i is the new CPL).
#SS(error_code)If a push of the old EFLAGS, CS selector, EIP, or error code is in non-canonical space with no stack switch.
If a push of the old SS selector, ESP, EFLAGS, CS selector, EIP, or error code is in non-canonical space on a stack switch (either CPL change or no-CPL with IST).
#NP(error_code)If the 64-bit interrupt-gate, 64-bit trap-gate, or code segment is not present.
#TS(error_code)If an attempt to load RSP from the TSS causes an access to non-canonical space.
If the RSP from the TSS is outside descriptor table limits.
#PF(fault-code)If a page fault occurs.
#UDIf the LOCK prefix is used.
#AC(EXT)If alignment checking is enabled, the gate DPL is 3, and a stack push is unaligned.
diff --git a/x86/invd.html b/x86/invd.html new file mode 100644 index 0000000..362b862 --- /dev/null +++ b/x86/invd.html @@ -0,0 +1,110 @@ + +INVD + — Invalidate Internal Caches

INVD + — Invalidate Internal Caches

+ +

Opcode1

+ + + + + + + + + + + + + + +
InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 08INVDZOValidValidFlush internal caches; initiate flushing of external caches.
+
+

1. See the IA-32 Architecture Compatibility section below.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Invalidates (flushes) the processor’s internal caches and issues a special-function bus cycle that directs external caches to also flush themselves. Data held in internal caches is not written back to main memory.

+

After executing this instruction, the processor does not wait for the external caches to complete their flushing operation before proceeding with instruction execution. It is the responsibility of hardware to respond to the cache flush signal.

+

The INVD instruction is a privileged instruction. When the processor is running in protected mode, the CPL of a program or procedure must be 0 to execute this instruction.

+

The INVD instruction may be used when the cache is used as temporary memory and the cache contents need to be invalidated rather than written back to memory. When the cache is used as temporary memory, no external device should be actively writing data to main memory.

+

Use this instruction with care. Data cached internally and not written back to main memory will be lost. Note that any data from an external device to main memory (for example, via a PCIWrite) can be temporarily stored in the caches; these data can be lost when an INVD instruction is executed. Unless there is a specific requirement or benefit to flushing caches without writing back modified cache lines (for example, temporary memory, testing, or fault recovery where cache coherency with main memory is not a concern), software should instead use the WBINVD instruction.

+
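As a ring-0 sketch of the guidance above: INVD and WBINVD are both no-operand opcodes, so C code typically reaches them through inline assembly. The helpers below are illustrative GCC-style fragments, not part of the instruction definition; they assume execution at CPL 0, and per the warning above the write-back variant is almost always the appropriate choice.

/* Illustrative only; both instructions #GP(0) unless CPL = 0. */
static inline void cache_wbinvd(void)
{
    /* WBINVD: write modified lines back to memory, then invalidate. */
    __asm__ __volatile__("wbinvd" ::: "memory");
}

static inline void cache_invd(void)
{
    /* INVD: invalidate without write-back -- modified cached data is lost.
       Suitable only for cache-as-RAM, testing, or fault-recovery scenarios. */
    __asm__ __volatile__("invd" ::: "memory");
}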

On processors that support processor reserved memory, the INVD instruction cannot be executed when processor reserved memory protections are activated. See Section 36.5, “EPC and Management of EPC Pages,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3D.

+

Some processors prevent execution of INVD after BIOS execution is complete. They report this by enumerating CPUID.(EAX=07H,ECX=1H):EAX[bit 30] as 1. On such processors, INVD cannot be executed if bit 0 of MSR_BIOS_DONE (MSR address 151H) is 1.

+
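The enumeration described above can be checked from CPUID leaf 07H, sub-leaf 1. The fragment below is a hedged sketch using GCC's <cpuid.h> helper; reading MSR_BIOS_DONE itself requires RDMSR at CPL 0 and is not shown.

#include <cpuid.h>
#include <stdbool.h>

/* Returns true if CPUID.(EAX=07H,ECX=1H):EAX[30] is enumerated, i.e., the
   processor may block INVD once BIOS has set bit 0 of MSR_BIOS_DONE. */
static bool invd_may_be_blocked_post_bios(void)
{
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
    if (!__get_cpuid_count(7, 1, &eax, &ebx, &ecx, &edx))
        return false;   /* leaf 07H sub-leaf 1 not available */
    return (eax >> 30) & 1;
}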

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

IA-32 Architecture Compatibility + ¶ +

+

The INVD instruction is implementation dependent; it may be implemented differently on different families of Intel 64 or IA-32 processors. This instruction is not supported on IA-32 processors earlier than the Intel486 processor.

+

Operation + ¶ +

+
Flush(InternalCaches);
+SignalFlush(ExternalCaches);
+Continue (* Continue execution *)
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
If the processor reserved memory protections are activated.
If CPUID.(EAX=07H, ECX=1H):EAX[30] = 1 and bit 0 is set in MSR_BIOS_DONE (MSR address 151H).
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + +
#GP(0)If CPUID.(EAX=07H, ECX=1H):EAX[30] = 1 and bit 0 is set in MSR_BIOS_DONE (MSR address 151H).
If the processor reserved memory protections are activated.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#GP(0)The INVD instruction cannot be executed in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/invept.html b/x86/invept.html new file mode 100644 index 0000000..0980e42 --- /dev/null +++ b/x86/invept.html @@ -0,0 +1,183 @@ + +INVEPT + — Invalidate Translations Derived from EPT

INVEPT + — Invalidate Translations Derived from EPT

+ + + + + + + + + + + + + +
Opcode/InstructionOp/EnDescription
66 0F 38 80 INVEPT r64, m128RMInvalidates EPT-derived entries in the TLBs and paging-structure caches (in 64-bit mode).
66 0F 38 80 INVEPT r32, m128RMInvalidates EPT-derived entries in the TLBs and paging-structure caches (outside 64-bit mode).
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r)ModRM:r/m (r)NANA
+

Description + ¶ +

+

Invalidates mappings in the translation lookaside buffers (TLBs) and paging-structure caches that were derived from extended page tables (EPT). (See Chapter 29, “VMX Support for Address Translation.”) Invalidation is based on the INVEPT type specified in the register operand and the INVEPT descriptor specified in the memory operand.

+

Outside IA-32e mode, the register operand is always 32 bits, regardless of the value of CS.D; in 64-bit mode, the register operand has 64 bits (the instruction cannot be executed in compatibility mode).

+

The INVEPT types supported by a logical processor are reported in the IA32_VMX_EPT_VPID_CAP MSR (see Appendix A, “VMX Capability Reporting Facility”). There are two INVEPT types currently defined:

+
    +
  • Single-context invalidation: If the INVEPT type is 1, the logical processor invalidates all mappings associated with bits 51:12 of the EPT pointer (EPTP) specified in the INVEPT descriptor. It may invalidate other mappings as well.
  • +
  • Global invalidation: If the INVEPT type is 2, the logical processor invalidates mappings associated with all EPTPs.
+

If an unsupported INVEPT type is specified, the instruction fails.

+

INVEPT invalidates all the specified mappings for the indicated EPTP(s) regardless of the VPID and PCID values with which those mappings may be associated.

+

The INVEPT descriptor comprises 128 bits and contains a 64-bit EPTP value in bits 63:0 (see Figure 31-1).

+
[Figure: INVEPT descriptor layout — bits 127:64 reserved (must be zero); bits 63:0 EPT pointer (EPTP)]
Figure 31-1. INVEPT Descriptor
+
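To make the descriptor layout of Figure 31-1 concrete, the following sketch shows how a hypervisor running in VMX root operation at CPL 0 might build the 128-bit descriptor and request a single-context invalidation. The struct, constants, and GCC inline-assembly wrapper are illustrative assumptions, not part of this manual; note that AT&T syntax reverses the Intel operand order shown in the opcode table.

#include <stdint.h>

/* INVEPT descriptor (Figure 31-1): bits 63:0 = EPTP, bits 127:64 reserved. */
struct invept_desc {
    uint64_t eptp;
    uint64_t reserved;      /* must be zero */
};

#define INVEPT_SINGLE_CONTEXT 1UL
#define INVEPT_GLOBAL         2UL

/* Illustrative wrapper; valid only in VMX operation at CPL 0. */
static inline void invept(uint64_t type, const struct invept_desc *desc)
{
    __asm__ __volatile__("invept %0, %1"
                         : : "m"(*desc), "r"(type)
                         : "cc", "memory");
}

/* Example: invalidate all EPT-derived mappings associated with one EPTP. */
static inline void invept_single_context(uint64_t eptp)
{
    struct invept_desc desc = { .eptp = eptp, .reserved = 0 };
    invept(INVEPT_SINGLE_CONTEXT, &desc);
}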

Operation + ¶ +

+
IF (not in VMX operation) or (CR0.PE = 0) or (RFLAGS.VM = 1) or (IA32_EFER.LMA = 1 and CS.L = 0)
+    THEN #UD;
+ELSIF in VMX non-root operation
+    THEN VM exit;
+ELSIF CPL > 0
+    THEN #GP(0);
+    ELSE
+        INVEPT_TYPE := value of register operand;
+        IF IA32_VMX_EPT_VPID_CAP MSR indicates that processor does not support INVEPT_TYPE
+            THEN VMfail(Invalid operand to INVEPT/INVVPID);
+            ELSE // INVEPT_TYPE must be 1 or 2
+                INVEPT_DESC := value of memory operand;
+                EPTP := INVEPT_DESC[63:0];
+                CASE INVEPT_TYPE OF
+                    1:
+                                    // single-context invalidation
+                        IF VM entry with the “enable EPT” VM execution control set to 1
+                        would fail due to the EPTP value
+                            THEN VMfail(Invalid operand to INVEPT/INVVPID);
+                            ELSE
+                                Invalidate mappings associated with EPTP[51:12];
+                                VMsucceed;
+                        FI;
+                        BREAK;
+                    2:
+                                    // global invalidation
+                        Invalidate mappings associated with all EPTPs;
+                        VMsucceed;
+                        BREAK;
+                ESAC;
+        FI;
+FI;
+
+

Flags Affected + ¶ +

+

See the operation section and Section 31.2.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
If the memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains an unusable segment.
If the source operand is located in an execute-only code segment.
#PF(fault-code)If a page fault occurs in accessing the memory operand.
#SS(0)If the memory operand effective address is outside the SS segment limit.
If the SS register contains an unusable segment.
#UDIf not in VMX operation.
If the logical processor does not support EPT (IA32_VMX_PROCBASED_CTLS2[33]=0).
If the logical processor supports EPT (IA32_VMX_PROCBASED_CTLS2[33]=1) but does not support the INVEPT instruction (IA32_VMX_EPT_VPID_CAP[20]=0).
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe INVEPT instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe INVEPT instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UDThe INVEPT instruction is not recognized in compatibility mode.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
If the memory operand is in the CS, DS, ES, FS, or GS segments and the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs in accessing the memory operand.
#SS(0)If the memory operand is in the SS segment and the memory address is in a non-canonical form.
#UDIf not in VMX operation.
If the logical processor does not support EPT (IA32_VMX_PROCBASED_CTLS2[33]=0).
If the logical processor supports EPT (IA32_VMX_PROCBASED_CTLS2[33]=1) but does not support the INVEPT instruction (IA32_VMX_EPT_VPID_CAP[20]=0).
diff --git a/x86/invlpg.html b/x86/invlpg.html new file mode 100644 index 0000000..505b221 --- /dev/null +++ b/x86/invlpg.html @@ -0,0 +1,106 @@ + +INVLPG + — Invalidate TLB Entries

INVLPG + — Invalidate TLB Entries

+ +

Opcode1

+ + + + + + + + + + + + + + +
InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 01/7INVLPG mMValidValidInvalidate TLB entries for page containing m.
+

1. See the IA-32 Architecture Compatibility section below.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r)N/AN/AN/A
+

Description + ¶ +

+

Invalidates any translation lookaside buffer (TLB) entries specified with the source operand. The source operand is a memory address. The processor determines the page that contains that address and flushes all TLB entries for that page.1

+

The INVLPG instruction is a privileged instruction. When the processor is running in protected mode, the CPL must be 0 to execute this instruction.

+

The INVLPG instruction normally flushes TLB entries only for the specified page; however, in some cases, it may flush more entries, even the entire TLB. The instruction invalidates TLB entries associated with the current PCID and may or may not do so for TLB entries associated with other PCIDs. (If PCIDs are disabled — CR4.PCIDE = 0 — the current PCID is 000H.) The instruction also invalidates any global TLB entries for the specified page, regardless of PCID.

+

For more details on operations that flush the TLB, see “MOV—Move to/from Control Registers” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2B, and Section 4.10.4.1, “Operations that Invalidate TLBs and Paging-Structure Caches,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A.

+

This instruction’s operation is the same in all non-64-bit modes. It also operates the same in 64-bit mode, except if the memory address is in non-canonical form. In this case, INVLPG is the same as a NOP.

+
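Operating systems typically wrap INVLPG in a small ring-0 helper that takes the linear address whose page translation should be dropped. The fragment below is an illustrative GCC inline-assembly sketch (the function name is not from this manual); per the description above, it must execute at CPL 0.

#include <stdint.h>

/* Illustrative only: invalidate TLB entries for the page containing 'addr'
   (current-PCID entries plus global entries, as described above). */
static inline void tlb_flush_one_page(uintptr_t addr)
{
    __asm__ __volatile__("invlpg (%0)" : : "r"(addr) : "memory");
}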

IA-32 Architecture Compatibility + ¶ +

+

The INVLPG instruction is implementation dependent, and its function may be implemented differently on different families of Intel 64 or IA-32 processors. This instruction is not supported on IA-32 processors earlier than the Intel486 processor.

+

Operation + ¶ +

+
Invalidate(RelevantTLBEntries);
+Continue; (* Continue execution *)
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + +
#GP(0)If the current privilege level is not 0.
#UDOperand is a register.
If the LOCK prefix is used.
+
+

1. If the paging structures map the linear address using a page larger than 4 KBytes and there are multiple TLB entries for that page (see Section 4.10.2.3, “Details of TLB Use,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A), the instruction invalidates all of them.

+

Real-Address Mode Exceptions + ¶ +

+ + + + + +
#UDOperand is a register.
If the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#GP(0)The INVLPG instruction cannot be executed in virtual-8086 mode.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + +
#GP(0)If the current privilege level is not 0.
#UDOperand is a register.
If the LOCK prefix is used.
diff --git a/x86/invpcid.html b/x86/invpcid.html new file mode 100644 index 0000000..54ae861 --- /dev/null +++ b/x86/invpcid.html @@ -0,0 +1,209 @@ + +INVPCID + — Invalidate Process-Context Identifier

INVPCID + — Invalidate Process-Context Identifier

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
66 0F 38 82 /r INVPCID r32, m128RMN.E./VINVPCIDInvalidates entries in the TLBs and paging-structure caches based on invalidation type in r32 and descriptor in m128.
66 0F 38 82 /r INVPCID r64, m128RMV/N.E.INVPCIDInvalidates entries in the TLBs and paging-structure caches based on invalidation type in r64 and descriptor in m128.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Invalidates mappings in the translation lookaside buffers (TLBs) and paging-structure caches based on process-context identifier (PCID). (See Section 4.10, “Caching Translation Information,” in the Intel 64 and IA-32 Architecture Software Developer’s Manual, Volume 3A.) Invalidation is based on the INVPCID type specified in the register operand and the INVPCID descriptor specified in the memory operand.

+

Outside 64-bit mode, the register operand is always 32 bits, regardless of the value of CS.D. In 64-bit mode the register operand has 64 bits.

+

There are four INVPCID types currently defined:

+
    +
  • Individual-address invalidation: If the INVPCID type is 0, the logical processor invalidates mappings—except global translations—for the linear address and PCID specified in the INVPCID descriptor.1 In some cases, the instruction may invalidate global translations or mappings for other linear addresses (or other PCIDs) as well.
  • +
  • Single-context invalidation: If the INVPCID type is 1, the logical processor invalidates all mappings—except global translations—associated with the PCID specified in the INVPCID descriptor. In some cases, the instruction may invalidate global translations or mappings for other PCIDs as well.
  • +
  • All-context invalidation, including global translations: If the INVPCID type is 2, the logical processor invalidates all mappings—including global translations—associated with any PCID.
  • +
  • All-context invalidation: If the INVPCID type is 3, the logical processor invalidates all mappings—except global translations—associated with any PCID. In some cases, the instruction may invalidate global translations as well.
+

The INVPCID descriptor comprises 128 bits and consists of a PCID and a linear address as shown in Figure 3-25. For INVPCID type 0, the processor uses the full 64 bits of the linear address even outside 64-bit mode; the linear address is not used for other INVPCID types.

+
[Figure: INVPCID descriptor layout — bits 127:64 linear address; bits 63:12 reserved (must be zero); bits 11:0 PCID]
Figure 3-25. INVPCID Descriptor
+
+

1. If the paging structures map the linear address using a page larger than 4 KBytes and there are multiple TLB entries for that page (see Section 4.10.2.3, “Details of TLB Use,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A), the instruction invalidates all of them.

+

If CR4.PCIDE = 0, a logical processor does not cache information for any PCID other than 000H. In this case, executions with INVPCID types 0 and 1 are allowed only if the PCID specified in the INVPCID descriptor is 000H; executions with INVPCID types 2 and 3 invalidate mappings only for PCID 000H. Note that CR4.PCIDE must be 0 outside IA-32e mode (see Section 4.10.1, “Process-Context Identifiers (PCIDs),” of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A).

+

Operation + ¶ +

+
INVPCID_TYPE := value of register operand; // must be in the range of 0–3
+INVPCID_DESC := value of memory operand;
+CASE INVPCID_TYPE OF
+    0:
+            // individual-address invalidation
+        PCID := INVPCID_DESC[11:0];
+        L_ADDR := INVPCID_DESC[127:64];
+        Invalidate mappings for L_ADDR associated with PCID except global translations;
+        BREAK;
+    1:
+            // single PCID invalidation
+        PCID := INVPCID_DESC[11:0];
+        Invalidate all mappings associated with PCID except global translations;
+        BREAK;
+    2:
+            // all PCID invalidation including global translations
+        Invalidate all mappings for all PCIDs, including global translations;
+        BREAK;
+    3:
+            // all PCID invalidation retaining global translations
+        Invalidate all mappings for all PCIDs except global translations;
+        BREAK;
+ESAC;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
INVPCID void _invpcid(unsigned __int32 type, void * descriptor);
+
+
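Below is a hedged usage sketch of the intrinsic above, paired with a struct mirroring the descriptor of Figure 3-25 (PCID in bits 11:0, linear address in bits 127:64). It assumes <immintrin.h>, a compiler target that enables INVPCID (for example, GCC's -minvpcid), and CPL 0 at run time; all names other than _invpcid are illustrative.

#include <immintrin.h>
#include <stdint.h>

/* INVPCID descriptor (Figure 3-25). */
struct invpcid_desc {
    uint64_t pcid : 12;
    uint64_t reserved : 52;   /* must be zero */
    uint64_t linear_address;  /* consulted only for type 0 */
};

enum {
    INVPCID_ADDR            = 0,  /* individual-address invalidation */
    INVPCID_SINGLE_CONTEXT  = 1,  /* one PCID, except globals */
    INVPCID_ALL_INCL_GLOBAL = 2,  /* all PCIDs, including globals */
    INVPCID_ALL_NON_GLOBAL  = 3,  /* all PCIDs, except globals */
};

/* Example: flush all non-global mappings tagged with one PCID (type 1). */
static inline void flush_pcid(uint16_t pcid)
{
    struct invpcid_desc desc = { .pcid = pcid, .reserved = 0, .linear_address = 0 };
    _invpcid(INVPCID_SINGLE_CONTEXT, &desc);
}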

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
If the memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains an unusable segment.
If the source operand is located in an execute-only code segment.
If an invalid type is specified in the register operand, i.e., INVPCID_TYPE > 3.
If bits 63:12 of INVPCID_DESC are not all zero.
If INVPCID_TYPE is either 0 or 1 and INVPCID_DESC[11:0] is not zero.
If INVPCID_TYPE is 0 and the linear address in INVPCID_DESC[127:64] is not canonical.
#PF(fault-code)If a page fault occurs in accessing the memory operand.
#SS(0)If the memory operand effective address is outside the SS segment limit.
If the SS register contains an unusable segment.
#UDIf CPUID.(EAX=07H, ECX=0H):EBX.INVPCID[bit 10] = 0.
If the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + +
#GPIf an invalid type is specified in the register operand, i.e., INVPCID_TYPE > 3.
If bits 63:12 of INVPCID_DESC are not all zero.
If INVPCID_TYPE is either 0 or 1 and INVPCID_DESC[11:0] is not zero.
If INVPCID_TYPE is 0 and the linear address in INVPCID_DESC[127:64] is not canonical.
#UDIf CPUID.(EAX=07H, ECX=0H):EBX.INVPCID[bit 10] = 0.
If the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#GP(0)The INVPCID instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
If the memory operand is in the CS, DS, ES, FS, or GS segments and the memory address is in a non-canonical form.
If an invalid type is specified in the register operand, i.e., INVPCID_TYPE > 3.
If bits 63:12 of INVPCID_DESC are not all zero.
If CR4.PCIDE=0, INVPCID_TYPE is either 0 or 1, and INVPCID_DESC[11:0] is not zero.
If INVPCID_TYPE is 0 and the linear address in INVPCID_DESC[127:64] is not canonical.
#PF(fault-code)If a page fault occurs in accessing the memory operand.
#SS(0)If the memory destination operand is in the SS segment and the memory address is in a non-canonical form.
#UDIf the LOCK prefix is used.
If CPUID.(EAX=07H, ECX=0H):EBX.INVPCID[bit 10] = 0.
diff --git a/x86/invvpid.html b/x86/invvpid.html new file mode 100644 index 0000000..390a865 --- /dev/null +++ b/x86/invvpid.html @@ -0,0 +1,216 @@ + +INVVPID + — Invalidate Translations Based on VPID

INVVPID + — Invalidate Translations Based on VPID

+ + + + + + + + + + + + + +
Opcode/InstructionOp/EnDescription
66 0F 38 81 INVVPID r64, m128RMInvalidates entries in the TLBs and paging-structure caches based on VPID (in 64-bit mode).
66 0F 38 81 INVVPID r32, m128RMInvalidates entries in the TLBs and paging-structure caches based on VPID (outside 64-bit mode).
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r)ModRM:r/m (r)NANA
+

Description + ¶ +

+

Invalidates mappings in the translation lookaside buffers (TLBs) and paging-structure caches based on virtual-processor identifier (VPID). (See Chapter 29, “VMX Support for Address Translation.”) Invalidation is based on the INVVPID type specified in the register operand and the INVVPID descriptor specified in the memory operand.

+

Outside IA-32e mode, the register operand is always 32 bits, regardless of the value of CS.D; in 64-bit mode, the register operand has 64 bits (the instruction cannot be executed in compatibility mode).

+

The INVVPID types supported by a logical processor are reported in the IA32_VMX_EPT_VPID_CAP MSR (see Appendix A, “VMX Capability Reporting Facility”). There are four INVVPID types currently defined:

+
    +
  • Individual-address invalidation: If the INVVPID type is 0, the logical processor invalidates mappings for the linear address and VPID specified in the INVVPID descriptor. In some cases, it may invalidate mappings for other linear addresses (or other VPIDs) as well.
  • +
  • Single-context invalidation: If the INVVPID type is 1, the logical processor invalidates all mappings tagged with the VPID specified in the INVVPID descriptor. In some cases, it may invalidate mappings for other VPIDs as well.
  • +
  • All-contexts invalidation: If the INVVPID type is 2, the logical processor invalidates all mappings tagged with all VPIDs except VPID 0000H. In some cases, it may invalidate translations with VPID 0000H as well.
  • +
  • Single-context invalidation, retaining global translations: If the INVVPID type is 3, the logical processor invalidates all mappings tagged with the VPID specified in the INVVPID descriptor except global translations. In some cases, it may invalidate global translations (and mappings with other VPIDs) as well. See the “Caching Translation Information” section in Chapter 4 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A, for information about global translations.
+

If an unsupported INVVPID type is specified, the instruction fails.

+

INVVPID invalidates all the specified mappings for the indicated VPID(s) regardless of the EPTP and PCID values with which those mappings may be associated.

+

The INVVPID descriptor comprises 128 bits and consists of a VPID and a linear address as shown in Figure 31-2.

+
[Figure: INVVPID descriptor layout — bits 127:64 linear address; bits 63:16 reserved (must be zero); bits 15:0 VPID]
Figure 31-2. INVVPID Descriptor
+
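Analogous to the INVEPT sketch, the fragment below shows how a hypervisor might populate the descriptor of Figure 31-2 (VPID in bits 15:0, linear address in bits 127:64) and issue a single-context invalidation. The struct, constants, and inline-assembly wrapper are illustrative assumptions; execution requires VMX operation at CPL 0 and a type reported as supported in IA32_VMX_EPT_VPID_CAP.

#include <stdint.h>

/* INVVPID descriptor (Figure 31-2). */
struct invvpid_desc {
    uint64_t vpid : 16;
    uint64_t reserved : 48;   /* must be zero */
    uint64_t linear_address;  /* consulted only for type 0 */
};

#define INVVPID_ADDR                  0UL
#define INVVPID_SINGLE_CONTEXT        1UL
#define INVVPID_ALL_CONTEXTS          2UL
#define INVVPID_SINGLE_CONTEXT_GLOBAL 3UL  /* single context, retain globals */

/* Illustrative wrapper; AT&T syntax places the memory descriptor first. */
static inline void invvpid(uint64_t type, const struct invvpid_desc *desc)
{
    __asm__ __volatile__("invvpid %0, %1"
                         : : "m"(*desc), "r"(type)
                         : "cc", "memory");
}

/* Example: invalidate every mapping tagged with a given non-zero VPID. */
static inline void invvpid_single_context(uint16_t vpid)
{
    struct invvpid_desc desc = { .vpid = vpid, .reserved = 0, .linear_address = 0 };
    invvpid(INVVPID_SINGLE_CONTEXT, &desc);
}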

Operation + ¶ +

+
IF (not in VMX operation) or (CR0.PE = 0) or (RFLAGS.VM = 1) or (IA32_EFER.LMA = 1 and CS.L = 0)
+    THEN #UD;
+ELSIF in VMX non-root operation
+    THEN VM exit;
+ELSIF CPL > 0
+    THEN #GP(0);
+    ELSE
+        INVVPID_TYPE := value of register operand;
+        IF IA32_VMX_EPT_VPID_CAP MSR indicates that processor does not support
+        INVVPID_TYPE
+            THEN VMfail(Invalid operand to INVEPT/INVVPID);
+            ELSE // INVVPID_TYPE must be in the range 0–3
+                INVVPID_DESC := value of memory operand;
+                IF INVVPID_DESC[63:16] ≠ 0
+                    THEN VMfail(Invalid operand to INVEPT/INVVPID);
+                    ELSE
+                        CASE INVVPID_TYPE OF
+                            0:
+                                            // individual-address invalidation
+                                VPID := INVVPID_DESC[15:0];
+                                IF VPID = 0
+                                    THEN VMfail(Invalid operand to INVEPT/INVVPID);
+                                    ELSE
+                                        GL_ADDR := INVVPID_DESC[127:64];
+                                        IF (GL_ADDR is not in a canonical form)
+                                            THEN
+                                                VMfail(Invalid operand to INVEPT/INVVPID);
+                                            ELSE
+                                                Invalidate mappings for GL_ADDR tagged with VPID;
+                                                VMsucceed;
+                                        FI;
+                                FI;
+                                BREAK;
+                            1:
+                                            // single-context invalidation
+                                VPID := INVVPID_DESC[15:0];
+                                IF VPID = 0
+                                    THEN VMfail(Invalid operand to INVEPT/INVVPID);
+                                    ELSE
+                                        Invalidate all mappings tagged with VPID;
+                                        VMsucceed;
+                                FI;
+                                BREAK;
+                            2:
+                                            // all-context invalidation
+                                Invalidate all mappings tagged with all non-zero VPIDs;
+                                VMsucceed;
+                                BREAK;
+                            3:
+                                            // single-context invalidation retaining globals
+                                VPID := INVVPID_DESC[15:0];
+                                IF VPID = 0
+                                    THEN VMfail(Invalid operand to INVEPT/INVVPID);
+                                    ELSE
+                                        Invalidate all mappings tagged with VPID except global translations;
+                                        VMsucceed;
+                                FI;
+                                BREAK;
+                        ESAC;
+                FI;
+        FI;
+FI;
+
+

Flags Affected + ¶ +

+

See the operation section and Section 31.2.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
If the memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains an unusable segment.
If the source operand is located in an execute-only code segment.
#PF(fault-code)If a page fault occurs in accessing the memory operand.
#SS(0)If the memory operand effective address is outside the SS segment limit.
If the SS register contains an unusable segment.
#UDIf not in VMX operation.
If the logical processor does not support VPIDs (IA32_VMX_PROCBASED_CTLS2[37]=0).
If the logical processor supports VPIDs (IA32_VMX_PROCBASED_CTLS2[37]=1) but does not support the INVVPID instruction (IA32_VMX_EPT_VPID_CAP[32]=0).
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe INVVPID instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe INVVPID instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UDThe INVVPID instruction is not recognized in compatibility mode.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
If the memory operand is in the CS, DS, ES, FS, or GS segments and the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs in accessing the memory operand.
#SS(0)If the memory destination operand is in the SS segment and the memory address is in a non-canonical form.
#UDIf not in VMX operation.
If the logical processor does not support VPIDs (IA32_VMX_PROCBASED_CTLS2[37]=0).
If the logical processor supports VPIDs (IA32_VMX_PROCBASED_CTLS2[37]=1) but does not support the INVVPID instruction (IA32_VMX_EPT_VPID_CAP[32]=0).
diff --git a/x86/iret.iretd.iretq.html b/x86/iret.iretd.iretq.html new file mode 100644 index 0000000..4deedd3 --- /dev/null +++ b/x86/iret.iretd.iretq.html @@ -0,0 +1,545 @@ + +IRET/IRETD/IRETQ + — Interrupt Return

IRET/IRETD/IRETQ + — Interrupt Return

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
CFIRETZOValidValidInterrupt return (16-bit operand size).
CFIRETDZOValidValidInterrupt return (32-bit operand size).
REX.W + CFIRETQZOValidN.E.Interrupt return (64-bit operand size).
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Returns program control from an exception or interrupt handler to a program or procedure that was interrupted by an exception, an external interrupt, or a software-generated interrupt. These instructions are also used to perform a return from a nested task. (A nested task is created when a CALL instruction is used to initiate a task switch or when an interrupt or exception causes a task switch to an interrupt or exception handler.) See the section titled “Task Linking” in Chapter 8 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A.

+

IRET and IRETD are mnemonics for the same opcode. The IRETD mnemonic (interrupt return double) is intended for use when returning from an interrupt when using the 32-bit operand size; however, most assemblers use the IRET mnemonic interchangeably for both operand sizes.

+

In Real-Address Mode, the IRET instruction performs a far return to the interrupted program or procedure. During this operation, the processor pops the return instruction pointer, return code segment selector, and EFLAGS image from the stack to the EIP, CS, and EFLAGS registers, respectively, and then resumes execution of the interrupted program or procedure.

+

In Protected Mode, the action of the IRET instruction depends on the settings of the NT (nested task) and VM flags in the EFLAGS register and the VM flag in the EFLAGS image stored on the current stack. Depending on the setting of these flags, the processor performs the following types of interrupt returns:

+
    +
  • Return from virtual-8086 mode.
  • +
  • Return to virtual-8086 mode.
  • +
  • Intra-privilege level return.
  • +
  • Inter-privilege level return.
  • +
  • Return from nested task (task switch).
+

If the NT flag (EFLAGS register) is cleared, the IRET instruction performs a far return from the interrupt procedure, without a task switch. The code segment being returned to must be equally or less privileged than the interrupt handler routine (as indicated by the RPL field of the code segment selector popped from the stack).

+

As with a real-address mode interrupt return, the IRET instruction pops the return instruction pointer, return code segment selector, and EFLAGS image from the stack to the EIP, CS, and EFLAGS registers, respectively, and then resumes execution of the interrupted program or procedure. If the return is to another privilege level, the IRET instruction also pops the stack pointer and SS from the stack, before resuming program execution. If the return is to virtual-8086 mode, the processor also pops the data segment registers from the stack.
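As an illustration (not part of the Intel text), the sketch below shows how system software typically constructs the stack frame that IRET consumes for an inter-privilege-level return; the selector names and register values are hypothetical.
; Illustrative sketch only (NASM-style, 32-bit protected mode, hypothetical
; USER_CS/USER_SS selectors): building an inter-privilege-level IRET frame.
    push    USER_SS        ; new SS (popped only because the privilege level changes)
    push    user_esp       ; new ESP
    push    user_eflags    ; EFLAGS image
    push    USER_CS        ; return CS selector (RPL = 3)
    push    user_eip       ; return EIP
    iret                   ; pops EIP, CS, EFLAGS, then ESP and SS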

+

If the NT flag is set, the IRET instruction performs a task switch (return) from a nested task (a task called with a CALL instruction, an interrupt, or an exception) back to the calling or interrupted task. The updated state of the task executing the IRET instruction is saved in its TSS. If the task is re-entered later, the code that follows the IRET instruction is executed.

+

If the NT flag is set and the processor is in IA-32e mode, the IRET instruction causes a general protection exception.

+

If nonmaskable interrupts (NMIs) are blocked (see Section 6.7.1, “Handling Multiple NMIs” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A), execution of the IRET instruction unblocks NMIs.

+

This unblocking occurs even if the instruction causes a fault. In such a case, NMIs are unmasked before the exception handler is invoked.

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Use of the REX.W prefix promotes operation to 64 bits (IRETQ). See the summary chart at the beginning of this section for encoding data and limits.
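The fragment below is a hypothetical 64-bit handler epilogue illustrating the REX.W-promoted form; the label and register choice are invented for the example.
; Illustrative sketch only (NASM-style, 64-bit mode):
hypothetical_isr:
    push    rax          ; save the scratch register the handler uses
    ; ... handler body ...
    pop     rax          ; restore saved state
    iretq                ; REX.W + CF: pops RIP, CS, RFLAGS, RSP, and SS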

+

Refer to Chapter 6, “Procedure Calls, Interrupts, and Exceptions” and Chapter 17, “Control-flow Enforcement Technology (CET)” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for CET details.

+

Instruction ordering. IRET is a serializing instruction. See Section 9.3 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A.

+

See “Changes to Instruction Behavior in VMX Non-Root Operation” in Chapter 26 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3C, for more information about the behavior of this instruction in VMX non-root operation.

+

Operation + ¶ +

+
IF PE = 0
+    THEN GOTO REAL-ADDRESS-MODE;
+ELSIF (IA32_EFER.LMA = 0)
+    THEN
+            IF (EFLAGS.VM = 1)
+                        THEN GOTO RETURN-FROM-VIRTUAL-8086-MODE;
+                        ELSE GOTO PROTECTED-MODE;
+            FI;
+    ELSE GOTO IA-32e-MODE;
+FI;
+REAL-ADDRESS-MODE:
+    IF OperandSize = 32
+            THEN
+                        EIP := Pop();
+                        CS := Pop(); (* 32-bit pop, high-order 16 bits discarded *)
+                        tempEFLAGS := Pop();
+                        EFLAGS := (tempEFLAGS AND 257FD5H) OR (EFLAGS AND 1A0000H);
+            ELSE (* OperandSize = 16 *)
+                        EIP := Pop(); (* 16-bit pop; clear upper 16 bits *)
+                        CS := Pop(); (* 16-bit pop *)
+                        EFLAGS[15:0] := Pop();
+    FI;
+    END;
+RETURN-FROM-VIRTUAL-8086-MODE:
+(* Processor is in virtual-8086 mode when IRET is executed and stays in virtual-8086 mode *)
+    IF IOPL = 3 (* Virtual mode: PE = 1, VM = 1, IOPL = 3 *)
+            THEN IF OperandSize = 32
+                        THEN
+                                EIP := Pop();
+                                CS := Pop(); (* 32-bit pop, high-order 16 bits discarded *)
+                                EFLAGS := Pop();
+                                (* VM, IOPL,VIP and VIF EFLAG bits not modified by pop *)
+                                IF EIP not within CS limit
+                                    THEN #GP(0); FI;
+                        ELSE (* OperandSize = 16 *)
+                                EIP := Pop(); (* 16-bit pop; clear upper 16 bits *)
+                                CS := Pop(); (* 16-bit pop *)
+                                EFLAGS[15:0] := Pop(); (* IOPL in EFLAGS not modified by pop *)
+                                IF EIP not within CS limit
+                                    THEN #GP(0); FI;
+                        FI;
+            ELSE
+                        #GP(0); (* Trap to virtual-8086 monitor: PE = 1, VM = 1, IOPL < 3 *)
+    FI;
+END;
+PROTECTED-MODE:
+    IF NT = 1
+            THEN GOTO TASK-RETURN; (* PE = 1, VM = 0, NT = 1 *)
+    FI;
+    IF OperandSize = 32
+            THEN
+                        EIP := Pop();
+                        CS := Pop(); (* 32-bit pop, high-order 16 bits discarded *)
+                        tempEFLAGS := Pop();
+            ELSE (* OperandSize = 16 *)
+                        EIP := Pop(); (* 16-bit pop; clear upper bits *)
+                        CS := Pop(); (* 16-bit pop *)
+                        tempEFLAGS := Pop(); (* 16-bit pop; clear upper bits *)
+    FI;
+    IF tempEFLAGS(VM) = 1 and CPL = 0
+            THEN GOTO RETURN-TO-VIRTUAL-8086-MODE;
+            ELSE GOTO PROTECTED-MODE-RETURN;
+    FI;
+TASK-RETURN: (* PE = 1, VM = 0, NT = 1 *)
+    SWITCH-TASKS (without nesting) to TSS specified in link field of current TSS;
+    Mark the task just abandoned as NOT BUSY;
+    IF EIP is not within CS limit
+            THEN #GP(0); FI;
+END;
+RETURN-TO-VIRTUAL-8086-MODE:
+    (* Interrupted procedure was in virtual-8086 mode: PE = 1, CPL=0, VM = 1 in flag image *)
+    (* If shadow stack or indirect branch tracking at CPL3 then #GP(0) *)
+    IF CR4.CET AND (IA32_U_CET.ENDBR_EN OR IA32_U_CET.SHSTK_EN)
+            THEN #GP(0); FI;
+    shadowStackEnabled = ShadowStackEnabled(CPL)
+    IF EIP not within CS limit
+            THEN #GP(0); FI;
+    EFLAGS := tempEFLAGS;
+    ESP := Pop();
+    SS := Pop(); (* Pop 2 words; throw away high-order word *)
+    ES := Pop(); (* Pop 2 words; throw away high-order word *)
+    DS := Pop(); (* Pop 2 words; throw away high-order word *)
+    FS := Pop(); (* Pop 2 words; throw away high-order word *)
+    GS := Pop(); (* Pop 2 words; throw away high-order word *)
+    IF shadowStackEnabled
+            (* check if 8 byte aligned *)
+            IF SSP AND 0x7 != 0
+                        THEN #CP(FAR-RET/IRET); FI;
+    FI;
+    CPL := 3;
+    (* Resume execution in Virtual-8086 mode *)
+    tempOldSSP = SSP;
+    (* Now past all faulting points; safe to free the token. The token free is done using the old SSP
+        * and using a supervisor override as old CPL was a supervisor privilege level *)
+    IF shadowStackEnabled
+            expected_token_value = tempOldSSP | BUSY_BIT (* busy bit - bit position 0 - must be set *)
+            new_token_value = tempOldSSP (* clear the busy bit *)
+            shadow_stack_lock_cmpxchg8b(tempOldSSP, new_token_value, expected_token_value)
+    FI;
+END;
+PROTECTED-MODE-RETURN: (* PE = 1 *)
+    IF CS(RPL) > CPL
+            THEN GOTO RETURN-TO-OUTER-PRIVILEGE-LEVEL;
+            ELSE GOTO RETURN-TO-SAME-PRIVILEGE-LEVEL; FI;
+END;
+RETURN-TO-OUTER-PRIVILEGE-LEVEL:
+    IF OperandSize = 32
+            THEN
+                        tempESP := Pop();
+                        tempSS := Pop(); (* 32-bit pop, high-order 16 bits discarded *)
+    ELSE IF OperandSize = 16
+            THEN
+                        tempESP := Pop(); (* 16-bit pop; clear upper bits *)
+                        tempSS := Pop(); (* 16-bit pop *)
+            ELSE (* OperandSize = 64 *)
+                        tempRSP := Pop();
+                        tempSS := Pop(); (* 64-bit pop, high-order 48 bits discarded *)
+    FI;
+    IF new mode ≠ 64-Bit Mode
+            THEN
+                        IF EIP is not within CS limit
+                                THEN #GP(0); FI;
+            ELSE (* new mode = 64-bit mode *)
+                        IF RIP is non-canonical
+                                    THEN #GP(0); FI;
+    FI;
+    EFLAGS (CF, PF, AF, ZF, SF, TF, DF, OF, NT) := tempEFLAGS;
+    IF OperandSize = 32 or OperandSize = 64
+            THEN EFLAGS(RF, AC, ID) := tempEFLAGS; FI;
+    IF CPL ≤ IOPL
+            THEN EFLAGS(IF) := tempEFLAGS; FI;
+    IF CPL = 0
+            THEN
+                        EFLAGS(IOPL) := tempEFLAGS;
+                        IF OperandSize = 32 or OperandSize = 64
+                                THEN EFLAGS(VIF, VIP) := tempEFLAGS; FI;
+    FI;
+    IF ShadowStackEnabled(CPL)
+            (* check if 8 byte aligned *)
+            IF SSP AND 0x7 != 0
+                        THEN #CP(FAR-RET/IRET); FI;
+            IF CS(RPL) != 3
+                        THEN
+                                tempSsCS = shadow_stack_load 8 bytes from SSP+16;
+                                tempSsLIP = shadow_stack_load 8 bytes from SSP+8;
+                                tempSSP = shadow_stack_load 8 bytes from SSP;
+                                SSP = SSP + 24;
+                                (* Do 64 bit compare to detect bits beyond 15 being set *)
+                                tempCS = CS; (* zero padded to 64 bit *)
+                                IF tempCS != tempSsCS
+                                    THEN #CP(FAR-RET/IRET); FI;
+                                (* Do 64 bit compare; pad CSBASE+RIP with 0 for 32 bit LIP *)
+                                IF CSBASE + RIP != tempSsLIP
+                                    THEN #CP(FAR-RET/IRET); FI;
+                                (* check if 4 byte aligned *)
+                                IF tempSSP AND 0x3 != 0
+                                    THEN #CP(FAR-RET/IRET); FI;
+            FI;
+    FI;
+    tempOldCPL = CPL;
+    CPL := CS(RPL);
+            IF OperandSize = 64
+                        THEN
+                                RSP := tempRSP;
+                                SS := tempSS;
+            ELSE
+                        ESP := tempESP;
+                        SS := tempSS;
+            FI;
+            IF new mode != 64-Bit Mode
+                        THEN
+                                IF EIP is not within CS limit
+                                    THEN #GP(0); FI;
+            ELSE (* new mode = 64-bit mode *)
+                        IF RIP is non-canonical
+                                THEN #GP(0); FI;
+            FI;
+            tempOldSSP = SSP;
+            IF ShadowStackEnabled(CPL)
+                        IF CPL = 3
+                                THEN tempSSP := IA32_PL3_SSP; FI;
+            IF ((IA32_EFER.LMA AND CS.L) = 0 AND tempSSP[63:32] != 0) OR
+                    ((IA32_EFER.LMA AND CS.L) = 1 AND tempSSP is not canonical relative to the current paging mode)
+                        THEN #GP(0); FI;
+            SSP := tempSSP
+            FI;
+            (* Now past all faulting points; safe to free the token. The token free is done using the old SSP
+                * and using a supervisor override as old CPL was a supervisor privilege level *)
+            IF ShadowStackEnabled(tempOldCPL)
+                        expected_token_value = tempOldSSP | BUSY_BIT (* busy bit - bit position 0 - must be set *)
+                        new_token_value = tempOldSSP (* clear the busy bit *)
+                        shadow_stack_lock_cmpxchg8b(tempOldSSP, new_token_value, expected_token_value)
+            FI;
+    FOR each SegReg in (ES, FS, GS, and DS)
+            DO
+                        tempDesc := descriptor cache for SegReg (* hidden part of segment register *)
+                        IF (SegmentSelector == NULL) OR (tempDesc(DPL) < CPL AND tempDesc(Type) is (data or non-conforming code))
+                                THEN (* Segment register invalid *)
+                                    SegmentSelector := 0; (*Segment selector becomes null*)
+                        FI;
+            OD;
+END;
+RETURN-TO-SAME-PRIVILEGE-LEVEL: (* PE = 1, RPL = CPL *)
+    IF new mode ≠ 64-Bit Mode
+            THEN
+                        IF EIP is not within CS limit
+                                THEN #GP(0); FI;
+            ELSE (* new mode = 64-bit mode *)
+                        IF RIP is non-canonical
+                                    THEN #GP(0); FI;
+    FI;
+    EFLAGS (CF, PF, AF, ZF, SF, TF, DF, OF, NT) := tempEFLAGS;
+    IF OperandSize = 32 or OperandSize = 64
+            THEN EFLAGS(RF, AC, ID) := tempEFLAGS; FI;
+    IF CPL ≤ IOPL
+            THEN EFLAGS(IF) := tempEFLAGS; FI;
+    IF CPL = 0
+                THEN
+                            EFLAGS(IOPL) := tempEFLAGS;
+                            IF OperandSize = 32 or OperandSize = 64
+                                THEN EFLAGS(VIF, VIP) := tempEFLAGS; FI;
+    FI;
+    IF ShadowStackEnabled(CPL)
+            IF SSP AND 0x7 != 0 (* check if aligned to 8 bytes *)
+                        THEN #CP(FAR-RET/IRET); FI;
+            tempSsCS = shadow_stack_load 8 bytes from SSP+16;
+            tempSsLIP = shadow_stack_load 8 bytes from SSP+8;
+            tempSSP = shadow_stack_load 8 bytes from SSP;
+            SSP = SSP + 24;
+            tempCS = CS; (* zero padded to 64 bit *)
+            IF tempCS != tempSsCS (* 64 bit compare; CS zero padded to 64 bits *)
+                        THEN #CP(FAR-RET/IRET); FI;
+            IF CSBASE + RIP != tempSsLIP (* 64 bit compare; CSBASE+RIP zero padded to 64 bit for 32 bit LIP *)
+                        THEN #CP(FAR-RET/IRET); FI;
+            IF tempSSP AND 0x3 != 0 (* check if aligned to 4 bytes *)
+                        THEN #CP(FAR-RET/IRET); FI;
+            IF ((IA32_EFER.LMA AND CS.L) = 0 AND tempSSP[63:32] != 0) OR
+                    ((IA32_EFER.LMA AND CS.L) = 1 AND tempSSP is not canonical relative to the current paging mode)
+                        THEN #GP(0); FI;
+    FI;
+    IF ShadowStackEnabled(CPL)
+            IF IA32_EFER.LMA = 1
+            (* In IA-32e-mode the IRET may be switching stacks if the interrupt/exception was delivered
+                through an IDT with a non-zero IST *)
+            (* In IA-32e mode for same CPL IRET there is always a stack switch. The below check verifies if the
+                stack switch was to self stack and if so, do not try to free the token on this shadow stack. If the
+                tempSSP was not to same stack then there was a stack switch so do attempt to free the token *)
+                        IF tempSSP != SSP
+                                THEN
+                                    expected_token_value = SSP | BUSY_BIT (* busy bit - bit position 0 - must be set *)
+                                    new_token_value = SSP (* clear the busy bit *)
+                                    shadow_stack_lock_cmpxchg8b(SSP, new_token_value, expected_token_value)
+                        FI;
+            FI;
+            SSP := tempSSP
+    FI;
+END;
+IA-32e-MODE:
+    IF NT = 1
+            THEN #GP(0);
+    ELSE IF OperandSize = 32
+            THEN
+                        EIP := Pop();
+                        CS := Pop();
+                        tempEFLAGS := Pop();
+            ELSE IF OperandSize = 16
+                        THEN
+                                EIP := Pop(); (* 16-bit pop; clear upper bits *)
+                                CS := Pop(); (* 16-bit pop *)
+                                tempEFLAGS := Pop(); (* 16-bit pop; clear upper bits *)
+            ELSE (* OperandSize = 64 *)
+                                    RIP := Pop();
+                                    CS := Pop(); (* 64-bit pop, high-order 48 bits discarded *)
+                                    tempRFLAGS := Pop();
+    FI;
+    IF CS.RPL > CPL
+            THEN GOTO RETURN-TO-OUTER-PRIVILEGE-LEVEL;
+            ELSE
+                        IF instruction began in 64-Bit Mode
+                                THEN
+                                    IF OperandSize = 32
+                                        THEN
+                                            ESP := Pop();
+                                            SS := Pop(); (* 32-bit pop, high-order 16 bits discarded *)
+                                    ELSE IF OperandSize = 16
+                                        THEN
+                                            ESP := Pop(); (* 16-bit pop; clear upper bits *)
+                                            SS := Pop(); (* 16-bit pop *)
+                                        ELSE (* OperandSize = 64 *)
+                                            RSP := Pop();
+                                            SS := Pop(); (* 64-bit pop, high-order 48 bits discarded *)
+                                    FI;
+                        FI;
+                        GOTO RETURN-TO-SAME-PRIVILEGE-LEVEL; FI;
+END;
+
+

Flags Affected + ¶ +

+

All the flags and fields in the EFLAGS register are potentially modified, depending on the mode of operation of the processor. If performing a return from a nested task to a previous task, the EFLAGS register will be modified according to the EFLAGS image stored in the previous task’s TSS.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the return code or stack segment selector is NULL.
If the return instruction pointer is not within the return code segment limit.
#GP(selector)If a segment selector index is outside its descriptor table limits.
If the return code segment selector RPL is less than the CPL.
If the DPL of a conforming-code segment is greater than the return code segment selector RPL.
If the DPL for a nonconforming-code segment is not equal to the RPL of the code segment selector.
If the stack segment descriptor DPL is not equal to the RPL of the return code segment selector.
If the stack segment is not a writable data segment.
If the stack segment selector RPL is not equal to the RPL of the return code segment selector.
If the segment descriptor for a code segment does not indicate it is a code segment.
If the segment selector for a TSS has its local/global bit set for local.
If a TSS segment descriptor specifies that the TSS is not busy.
If a TSS segment descriptor specifies that the TSS is not available.
#SS(0)If the top bytes of stack are not within stack limits.
If the return stack segment is not present.
#NP(selector) If the return code segment is not present.
#PF(fault-code)If a page fault occurs.
#AC(0)If an unaligned memory reference occurs when the CPL is 3 and alignment checking is enabled.
#UDIf the LOCK prefix is used.
#CP(Far-RET/IRET) If the previous SSP from shadow stack (when returning to CPL <3) or from IA32_PL3_SSP (returning to CPL 3) is not 4 byte aligned.
If returning to 32-bit or compatibility mode and the previous SSP from shadow stack (when returning to CPL <3) or from IA32_PL3_SSP (returning to CPL 3) is beyond 4GB.
If return instruction pointer from stack and shadow stack do not match.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + +
#GPIf the return instruction pointer is not within the return code segment limit.
#SSIf the top bytes of stack are not within stack limits.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If the return instruction pointer is not within the return code segment limit.
If IOPL not equal to 3.
#PF(fault-code)If a page fault occurs.
#SS(0)If the top bytes of stack are not within stack limits.
#AC(0)If an unaligned memory reference occurs and alignment checking is enabled.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#GP(0)If EFLAGS.NT[bit 14] = 1.
+

Other exceptions same as in Protected Mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If EFLAGS.NT[bit 14] = 1.
If the return code segment selector is NULL.
If the stack segment selector is NULL going back to compatibility mode.
If the stack segment selector is NULL going back to CPL3 64-bit mode.
If a NULL stack segment selector RPL is not equal to CPL going back to non-CPL3 64-bit mode.
If the return instruction pointer is not within the return code segment limit.
If the return instruction pointer is non-canonical.
#GP(Selector)If a segment selector index is outside its descriptor table limits.
If a segment descriptor memory address is non-canonical.
If the segment descriptor for a code segment does not indicate it is a code segment.
If the proposed new code segment descriptor has both the D-bit and L-bit set.
If the DPL for a nonconforming-code segment is not equal to the RPL of the code segment selector.
If CPL is greater than the RPL of the code segment selector.
If the DPL of a conforming-code segment is greater than the return code segment selector RPL.
If the stack segment is not a writable data segment.
If the stack segment descriptor DPL is not equal to the RPL of the return code segment selector.
If the stack segment selector RPL is not equal to the RPL of the return code segment selector.
#SS(0)If an attempt to pop a value off the stack violates the SS limit.
If an attempt to pop a value off the stack causes a non-canonical address to be referenced.
If the return stack segment is not present.
#NP(selector) If the return code segment is not present.
#PF(fault-code)If a page fault occurs.
#AC(0)If an unaligned memory reference occurs when the CPL is 3 and alignment checking is enabled.
#UDIf the LOCK prefix is used.
#CP(Far-RET/IRET) If the previous SSP from shadow stack (when returning to CPL <3) or from IA32_PL3_SSP (returning to CPL 3) is not 4 byte aligned.
If returning to 32-bit or compatibility mode and the previous SSP from shadow stack (when returning to CPL <3) or from IA32_PL3_SSP (returning to CPL 3) is beyond 4GB.
If return instruction pointer from stack and shadow stack do not match.
diff --git a/x86/jcc.html b/x86/jcc.html new file mode 100644 index 0000000..b3c871c --- /dev/null +++ b/x86/jcc.html @@ -0,0 +1,771 @@ + +Jcc + — Jump if Condition Is Met

Jcc + — Jump if Condition Is Met

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
77 cbJA rel8DValidValidJump short if above (CF=0 and ZF=0).
73 cbJAE rel8DValidValidJump short if above or equal (CF=0).
72 cbJB rel8DValidValidJump short if below (CF=1).
76 cbJBE rel8DValidValidJump short if below or equal (CF=1 or ZF=1).
72 cbJC rel8DValidValidJump short if carry (CF=1).
E3 cbJCXZ rel8DN.E.ValidJump short if CX register is 0.
E3 cbJECXZ rel8DValidValidJump short if ECX register is 0.
E3 cbJRCXZ rel8DValidN.E.Jump short if RCX register is 0.
74 cbJE rel8DValidValidJump short if equal (ZF=1).
7F cbJG rel8DValidValidJump short if greater (ZF=0 and SF=OF).
7D cbJGE rel8DValidValidJump short if greater or equal (SF=OF).
7C cbJL rel8DValidValidJump short if less (SF≠ OF).
7E cbJLE rel8DValidValidJump short if less or equal (ZF=1 or SF≠ OF).
76 cbJNA rel8DValidValidJump short if not above (CF=1 or ZF=1).
72 cbJNAE rel8DValidValidJump short if not above or equal (CF=1).
73 cbJNB rel8DValidValidJump short if not below (CF=0).
77 cbJNBE rel8DValidValidJump short if not below or equal (CF=0 and ZF=0).
73 cbJNC rel8DValidValidJump short if not carry (CF=0).
75 cbJNE rel8DValidValidJump short if not equal (ZF=0).
7E cbJNG rel8DValidValidJump short if not greater (ZF=1 or SF≠ OF).
7C cbJNGE rel8DValidValidJump short if not greater or equal (SF≠ OF).
7D cbJNL rel8DValidValidJump short if not less (SF=OF).
7F cbJNLE rel8DValidValidJump short if not less or equal (ZF=0 and SF=OF).
71 cbJNO rel8DValidValidJump short if not overflow (OF=0).
7B cbJNP rel8DValidValidJump short if not parity (PF=0).
79 cbJNS rel8DValidValidJump short if not sign (SF=0).
75 cbJNZ rel8DValidValidJump short if not zero (ZF=0).
70 cbJO rel8DValidValidJump short if overflow (OF=1).
7A cbJP rel8DValidValidJump short if parity (PF=1).
7A cbJPE rel8DValidValidJump short if parity even (PF=1).
7B cbJPO rel8DValidValidJump short if parity odd (PF=0).
78 cbJS rel8DValidValidJump short if sign (SF=1).
74 cbJZ rel8DValidValidJump short if zero (ZF = 1).
0F 87 cwJA rel16DN.S.ValidJump near if above (CF=0 and ZF=0). Not supported in 64-bit mode.
0F 87 cdJA rel32DValidValidJump near if above (CF=0 and ZF=0).
0F 83 cwJAE rel16DN.S.ValidJump near if above or equal (CF=0). Not supported in 64-bit mode.
0F 83 cdJAE rel32DValidValidJump near if above or equal (CF=0).
0F 82 cwJB rel16DN.S.ValidJump near if below (CF=1). Not supported in 64-bit mode.
0F 82 cdJB rel32DValidValidJump near if below (CF=1).
0F 86 cwJBE rel16DN.S.ValidJump near if below or equal (CF=1 or ZF=1). Not supported in 64-bit mode.
0F 86 cdJBE rel32DValidValidJump near if below or equal (CF=1 or ZF=1).
0F 82 cwJC rel16DN.S.ValidJump near if carry (CF=1). Not supported in 64-bit mode.
0F 82 cdJC rel32DValidValidJump near if carry (CF=1).
0F 84 cwJE rel16DN.S.ValidJump near if equal (ZF=1). Not supported in 64-bit mode.
0F 84 cdJE rel32DValidValidJump near if equal (ZF=1).
0F 84 cwJZ rel16DN.S.ValidJump near if 0 (ZF=1). Not supported in 64-bit mode.
0F 84 cdJZ rel32DValidValidJump near if 0 (ZF=1).
0F 8F cwJG rel16DN.S.ValidJump near if greater (ZF=0 and SF=OF). Not supported in 64-bit mode.
0F 8F cdJG rel32DValidValidJump near if greater (ZF=0 and SF=OF).
0F 8D cwJGE rel16DN.S.ValidJump near if greater or equal (SF=OF). Not supported in 64-bit mode.
0F 8D cdJGE rel32DValidValidJump near if greater or equal (SF=OF).
0F 8C cwJL rel16DN.S.ValidJump near if less (SF≠ OF). Not supported in 64-bit mode.
0F 8C cdJL rel32DValidValidJump near if less (SF≠ OF).
0F 8E cwJLE rel16DN.S.ValidJump near if less or equal (ZF=1 or SF≠ OF). Not supported in 64-bit mode.
0F 8E cdJLE rel32DValidValidJump near if less or equal (ZF=1 or SF≠ OF).
0F 86 cwJNA rel16DN.S.ValidJump near if not above (CF=1 or ZF=1). Not supported in 64-bit mode.
0F 86 cdJNA rel32DValidValidJump near if not above (CF=1 or ZF=1).
0F 82 cwJNAE rel16DN.S.ValidJump near if not above or equal (CF=1). Not supported in 64-bit mode.
0F 82 cdJNAE rel32DValidValidJump near if not above or equal (CF=1).
0F 83 cwJNB rel16DN.S.ValidJump near if not below (CF=0). Not supported in 64-bit mode.
0F 83 cdJNB rel32DValidValidJump near if not below (CF=0).
0F 87 cwJNBE rel16DN.S.ValidJump near if not below or equal (CF=0 and ZF=0). Not supported in 64-bit mode.
0F 87 cdJNBE rel32DValidValidJump near if not below or equal (CF=0 and ZF=0).
0F 83 cwJNC rel16DN.S.ValidJump near if not carry (CF=0). Not supported in 64-bit mode.
0F 83 cdJNC rel32DValidValidJump near if not carry (CF=0).
0F 85 cwJNE rel16DN.S.ValidJump near if not equal (ZF=0). Not supported in 64-bit mode.
0F 85 cdJNE rel32DValidValidJump near if not equal (ZF=0).
0F 8E cwJNG rel16DN.S.ValidJump near if not greater (ZF=1 or SF≠ OF). Not supported in 64-bit mode.
0F 8E cdJNG rel32DValidValidJump near if not greater (ZF=1 or SF≠ OF).
0F 8C cwJNGE rel16DN.S.ValidJump near if not greater or equal (SF≠ OF). Not supported in 64-bit mode.
0F 8C cdJNGE rel32DValidValidJump near if not greater or equal (SF≠ OF).
0F 8D cwJNL rel16DN.S.ValidJump near if not less (SF=OF). Not supported in 64-bit mode.
0F 8D cdJNL rel32DValidValidJump near if not less (SF=OF).
0F 8F cwJNLE rel16DN.S.ValidJump near if not less or equal (ZF=0 and SF=OF). Not supported in 64-bit mode.
0F 8F cdJNLE rel32DValidValidJump near if not less or equal (ZF=0 and SF=OF).
0F 81 cwJNO rel16DN.S.ValidJump near if not overflow (OF=0). Not supported in 64-bit mode.
0F 81 cdJNO rel32DValidValidJump near if not overflow (OF=0).
0F 8B cwJNP rel16DN.S.ValidJump near if not parity (PF=0). Not supported in 64-bit mode.
0F 8B cdJNP rel32DValidValidJump near if not parity (PF=0).
0F 89 cwJNS rel16DN.S.ValidJump near if not sign (SF=0). Not supported in 64-bit mode.
0F 89 cdJNS rel32DValidValidJump near if not sign (SF=0).
0F 85 cwJNZ rel16DN.S.ValidJump near if not zero (ZF=0). Not supported in 64-bit mode.
0F 85 cdJNZ rel32DValidValidJump near if not zero (ZF=0).
0F 80 cwJO rel16DN.S.ValidJump near if overflow (OF=1). Not supported in 64-bit mode.
0F 80 cdJO rel32DValidValidJump near if overflow (OF=1).
0F 8A cwJP rel16DN.S.ValidJump near if parity (PF=1). Not supported in 64-bit mode.
0F 8A cdJP rel32DValidValidJump near if parity (PF=1).
0F 8A cwJPE rel16DN.S.ValidJump near if parity even (PF=1). Not supported in 64-bit mode.
0F 8A cdJPE rel32DValidValidJump near if parity even (PF=1).
0F 8B cwJPO rel16DN.S.ValidJump near if parity odd (PF=0). Not supported in 64-bit mode.
0F 8B cdJPO rel32DValidValidJump near if parity odd (PF=0).
0F 88 cwJS rel16DN.S.ValidJump near if sign (SF=1). Not supported in 64-bit mode.
0F 88 cdJS rel32DValidValidJump near if sign (SF=1).
0F 84 cwJZ rel16DN.S.ValidJump near if 0 (ZF=1). Not supported in 64-bit mode.
0F 84 cdJZ rel32DValidValidJump near if 0 (ZF=1).
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
DOffsetN/AN/AN/A
+

Description + ¶ +

+

Checks the state of one or more of the status flags in the EFLAGS register (CF, OF, PF, SF, and ZF) and, if the flags are in the specified state (condition), performs a jump to the target instruction specified by the destination operand. A condition code (cc) is associated with each instruction to indicate the condition being tested for. If the condition is not satisfied, the jump is not performed and execution continues with the instruction following the Jcc instruction.

+

The target instruction is specified with a relative offset (a signed offset relative to the current value of the instruction pointer in the EIP register). A relative offset (rel8, rel16, or rel32) is generally specified as a label in assembly code, but at the machine code level, it is encoded as a signed, 8-bit or 32-bit immediate value, which is added to the instruction pointer. Instruction coding is most efficient for offsets of –128 to +127. If the operand-size attribute is 16, the upper two bytes of the EIP register are cleared, resulting in a maximum instruction pointer size of 16 bits.
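As a hypothetical encoding example (the byte values in the comments are not part of the Intel text, and the labels are invented):
; Illustrative sketch (32-bit mode encodings):
    je      short skip   ; 74 03 - displacement of +3, relative to the next instruction
    xor     eax, eax     ; 31 C0 (2 bytes)
    inc     eax          ; 40    (1 byte in 32-bit mode)
skip: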

+

The conditions for each Jcc mnemonic are given in the “Description” column of the table on the preceding page. The terms “less” and “greater” are used for comparisons of signed integers and the terms “above” and “below” are used for unsigned integers.

+

Because a particular state of the status flags can sometimes be interpreted in two ways, two mnemonics are defined for some opcodes. For example, the JA (jump if above) instruction and the JNBE (jump if not below or equal) instruction are alternate mnemonics for the opcode 77H.

+

The Jcc instruction does not support far jumps (jumps to other code segments). When the target for the conditional jump is in a different segment, use the opposite condition from the condition being tested for the Jcc instruction, and then access the target with an unconditional far jump (JMP instruction) to the other segment. For example, the following conditional far jump is illegal:

+

JZ FARLABEL;

+

To accomplish this far jump, use the following two instructions:

+

JNZ BEYOND;

+

JMP FARLABEL;

+

BEYOND:

+

The JRCXZ, JECXZ, and JCXZ instructions differ from other Jcc instructions because they do not check status flags. Instead, they check RCX, ECX, or CX for 0. The register checked is determined by the address-size attribute. These instructions are useful when used at the beginning of a loop that terminates with a conditional loop instruction (such as LOOPNE). They can be used to prevent an instruction sequence from entering a loop when RCX, ECX, or CX is 0. This would cause the loop to execute 2^64, 2^32, or 64K times (not zero times).
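The following sketch shows that guard pattern; the count value and labels are hypothetical.
; Illustrative sketch (32-bit address size, so ECX is tested):
    mov     ecx, count   ; iteration count, may legitimately be zero
    jecxz   done         ; skip the loop entirely when ECX = 0
next:
    ; ... loop body ...
    loop    next         ; ECX := ECX - 1; branch while ECX != 0
done: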

+

All conditional jumps are converted to code fetches of one or two cache lines, regardless of jump address or cache-ability.

+

In 64-bit mode, operand size is fixed at 64 bits. JMP Short is RIP = RIP + 8-bit offset sign extended to 64 bits. JMP Near is RIP = RIP + 32-bit offset sign extended to 64 bits.

+

Operation + ¶ +

+
IF condition
+    THEN
+        tempEIP := EIP + SignExtend(DEST);
+        IF OperandSize = 16
+            THEN tempEIP := tempEIP AND 0000FFFFH;
+        FI;
+    IF tempEIP is not within code segment limit
+        THEN #GP(0);
+        ELSE EIP := tempEIP
+    FI;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + +
#GP(0)If the offset being jumped to is beyond the limits of the CS segment.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + +
#GPIf the offset being jumped to is beyond the limits of the CS segment or is outside of the effective address space from 0 to FFFFH. This condition can occur if a 32-bit address size override prefix is used.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in real address mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + +
#GP(0)If the memory address is in a non-canonical form.
#UDIf the LOCK prefix is used.
diff --git a/x86/jmp.html b/x86/jmp.html new file mode 100644 index 0000000..a4932ab --- /dev/null +++ b/x86/jmp.html @@ -0,0 +1,549 @@ + +JMP + — Jump

JMP + — Jump

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
EB cbJMP rel8DValidValidJump short, RIP = RIP + 8-bit displacement sign extended to 64-bits.
E9 cwJMP rel16DN.S.ValidJump near, relative, displacement relative to next instruction. Not supported in 64-bit mode.
E9 cdJMP rel32DValidValidJump near, relative, RIP = RIP + 32-bit displacement sign extended to 64-bits.
FF /4JMP r/m16MN.S.ValidJump near, absolute indirect, address = zero-extended r/m16. Not supported in 64-bit mode.
FF /4JMP r/m32MN.S.ValidJump near, absolute indirect, address given in r/m32. Not supported in 64-bit mode.
FF /4JMP r/m64MValidN.E.Jump near, absolute indirect, RIP = 64-Bit offset from register or memory.
EA cdJMP ptr16:16SInv.ValidJump far, absolute, address given in operand.
EA cpJMP ptr16:32SInv.ValidJump far, absolute, address given in operand.
FF /5JMP m16:16MValidValidJump far, absolute indirect, address given in m16:16.
FF /5JMP m16:32MValidValidJump far, absolute indirect, address given in m16:32.
REX.W FF /5JMP m16:64MValidN.E.Jump far, absolute indirect, address given in m16:64.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
SSegment + Absolute AddressN/AN/AN/A
DOffsetN/AN/AN/A
MModRM:r/m (r)N/AN/AN/A
+

Description + ¶ +

+

Transfers program control to a different point in the instruction stream without recording return information. The destination (target) operand specifies the address of the instruction being jumped to. This operand can be an immediate value, a general-purpose register, or a memory location.

+

This instruction can be used to execute four different types of jumps:

+
    +
  • Near jump—A jump to an instruction within the current code segment (the segment currently pointed to by the CS register), sometimes referred to as an intrasegment jump.
  • +
  • Short jump—A near jump where the jump range is limited to –128 to +127 from the current EIP value.
  • +
  • Far jump—A jump to an instruction located in a different segment than the current code segment but at the same privilege level, sometimes referred to as an intersegment jump.
  • +
  • Task switch—A jump to an instruction located in a different task.
+

A task switch can only be executed in protected mode (see Chapter 8, in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A, for information on performing task switches with the JMP instruction).

+

Near and Short Jumps. When executing a near jump, the processor jumps to the address (within the current code segment) that is specified with the target operand. The target operand specifies either an absolute offset (that is an offset from the base of the code segment) or a relative offset (a signed displacement relative to the current

+

value of the instruction pointer in the EIP register). A near jump to a relative offset of 8-bits (rel8) is referred to as a short jump. The CS register is not changed on near and short jumps.

+

An absolute offset is specified indirectly in a general-purpose register or a memory location (r/m16 or r/m32). The operand-size attribute determines the size of the target operand (16 or 32 bits). Absolute offsets are loaded directly into the EIP register. If the operand-size attribute is 16, the upper two bytes of the EIP register are cleared, resulting in a maximum instruction pointer size of 16 bits.
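A minimal sketch of such an indirect near jump, assuming a three-entry jump table (labels and the index variable are hypothetical):
; Illustrative sketch (32-bit code): near absolute indirect jump through a
; jump table; the doubleword read from the table is loaded directly into EIP.
    mov     eax, [selector]          ; index 0..2, assumed already range-checked
    jmp     dword [table + eax*4]    ; FF /4 with a memory operand
table:
    dd      case0, case1, case2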

+

A relative offset (rel8, rel16, or rel32) is generally specified as a label in assembly code, but at the machine code level, it is encoded as a signed 8-, 16-, or 32-bit immediate value. This value is added to the value in the EIP register. (Here, the EIP register contains the address of the instruction following the JMP instruction). When using relative offsets, the opcode (for short vs. near jumps) and the operand-size attribute (for near relative jumps) determines the size of the target operand (8, 16, or 32 bits).

+

Far Jumps in Real-Address or Virtual-8086 Mode. When executing a far jump in real-address or virtual-8086 mode, the processor jumps to the code segment and offset specified with the target operand. Here the target operand specifies an absolute far address either directly with a pointer (ptr16:16 or ptr16:32) or indirectly with a memory location (m16:16 or m16:32). With the pointer method, the segment and address of the called procedure is encoded in the instruction, using a 4-byte (16-bit operand size) or 6-byte (32-bit operand size) far address immediate. With the indirect method, the target operand specifies a memory location that contains a 4-byte (16-bit operand size) or 6-byte (32-bit operand size) far address. The far address is loaded directly into the CS and EIP registers. If the operand-size attribute is 16, the upper two bytes of the EIP register are cleared.

+

Far Jumps in Protected Mode. When the processor is operating in protected mode, the JMP instruction can be used to perform the following three types of far jumps:

+
    +
  • A far jump to a conforming or non-conforming code segment.
  • +
  • A far jump through a call gate.
  • +
  • A task switch.
+

(The JMP instruction cannot be used to perform inter-privilege-level far jumps.)

+

In protected mode, the processor always uses the segment selector part of the far address to access the corresponding descriptor in the GDT or LDT. The descriptor type (code segment, call gate, task gate, or TSS) and access rights determine the type of jump to be performed.

+

If the selected descriptor is for a code segment, a far jump to a code segment at the same privilege level is performed. (If the selected code segment is at a different privilege level and the code segment is non-conforming, a general-protection exception is generated.) A far jump to the same privilege level in protected mode is very similar to one carried out in real-address or virtual-8086 mode. The target operand specifies an absolute far address either directly with a pointer (ptr16:16 or ptr16:32) or indirectly with a memory location (m16:16 or m16:32). The operand-size attribute determines the size of the offset (16 or 32 bits) in the far address. The new code segment selector and its descriptor are loaded into CS register, and the offset from the instruction is loaded into the EIP register. Note that a call gate (described in the next paragraph) can also be used to perform far call to a code segment at the same privilege level. Using this mechanism provides an extra level of indirection and is the preferred method of making jumps between 16-bit and 32-bit code segments.

+

When executing a far jump through a call gate, the segment selector specified by the target operand identifies the call gate. (The offset part of the target operand is ignored.) The processor then jumps to the code segment specified in the call gate descriptor and begins executing the instruction at the offset specified in the call gate. No stack switch occurs. Here again, the target operand can specify the far address of the call gate either directly with a pointer (ptr16:16 or ptr16:32) or indirectly with a memory location (m16:16 or m16:32).

+

Executing a task switch with the JMP instruction is somewhat similar to executing a jump through a call gate. Here the target operand specifies the segment selector of the task gate for the task being switched to (and the offset part of the target operand is ignored). The task gate in turn points to the TSS for the task, which contains the segment selectors for the task’s code and stack segments. The TSS also contains the EIP value for the next instruction that was to be executed before the task was suspended. This instruction pointer value is loaded into the EIP register so that the task begins executing again at this next instruction.

+

The JMP instruction can also specify the segment selector of the TSS directly, which eliminates the indirection of the task gate. See Chapter 8 in Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A, for detailed information on the mechanics of a task switch.

+

Note that when you execute a task switch with a JMP instruction, the nested task flag (NT) is not set in the EFLAGS register and the new TSS’s previous task link field is not loaded with the old task’s TSS selector. A return to the previous task can thus not be carried out by executing the IRET instruction. Switching tasks with the JMP instruction differs in this regard from the CALL instruction which does set the NT flag and save the previous task link information, allowing a return to the calling task with an IRET instruction.

+

Refer to Chapter 6, “Procedure Calls, Interrupts, and Exceptions” and Chapter 17, “Control-flow Enforcement Technology (CET)” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for CET details.

+

In 64-Bit Mode. The instruction’s operation size is fixed at 64 bits. If a selector points to a gate, then RIP equals the 64-bit displacement taken from gate; else RIP equals the zero-extended offset from the far pointer referenced in the instruction.

+

See the summary chart at the beginning of this section for encoding data and limits.

+

Instruction ordering. Instructions following a far jump may be fetched from memory before earlier instructions complete execution, but they will not execute (even speculatively) until all instructions prior to the far jump have completed execution (the later instructions may execute before data stored by the earlier instructions have become globally visible).

+

Instructions sequentially following a near indirect JMP instruction (i.e., those not at the target) may be executed speculatively. If software needs to prevent this (e.g., in order to prevent a speculative execution side channel), then an INT3 or LFENCE instruction opcode can be placed after the near indirect JMP in order to block speculative execution.
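A minimal sketch of that mitigation, assuming the jump target is held in RAX (the register choice is hypothetical):
; Illustrative sketch (64-bit mode):
    jmp     rax          ; near absolute indirect jump
    lfence               ; never reached architecturally; placed here so the
                         ; bytes after the JMP cannot execute speculatively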

+

Operation + ¶ +

+
IF near jump
+    IF 64-bit Mode
+            THEN
+                    IF near relative jump
+                        THEN
+                            tempRIP := RIP + DEST; (* RIP is instruction following JMP instruction*)
+                        ELSE (* Near absolute jump *)
+                            tempRIP := DEST;
+                    FI;
+            ELSE
+                    IF near relative jump
+                        THEN
+                            tempEIP := EIP + DEST; (* EIP is instruction following JMP instruction*)
+                        ELSE (* Near absolute jump *)
+                            tempEIP := DEST;
+                    FI;
+    FI;
+    IF (IA32_EFER.LMA = 0 or target mode = Compatibility mode)
+    and tempEIP outside code segment limit
+            THEN #GP(0); FI
+    IF 64-bit mode and tempRIP is not canonical
+            THEN #GP(0);
+    FI;
+    IF OperandSize = 32
+                THEN
+                    EIP := tempEIP;
+                ELSE
+                    IF OperandSize = 16
+                            THEN (* OperandSize = 16 *)
+                                    EIP := tempEIP AND 0000FFFFH;
+                                ELSE (* OperandSize = 64 *)
+                                    RIP := tempRIP;
+                    FI;
+        FI;
+    IF (JMP near indirect, absolute indirect)
+            IF EndbranchEnabledAndNotSuppressed(CPL)
+                    IF CPL = 3
+                            THEN
+                                    IF ( no 3EH prefix OR IA32_U_CET.NO_TRACK_EN == 0 )
+                                        THEN
+                                            IA32_U_CET.TRACKER = WAIT_FOR_ENDBRANCH
+                                    FI;
+                            ELSE
+                                    IF ( no 3EH prefix OR IA32_S_CET.NO_TRACK_EN == 0 )
+                                        THEN
+                                            IA32_S_CET.TRACKER = WAIT_FOR_ENDBRANCH
+                                    FI;
+                    FI;
+            FI;
+    FI;
+FI;
+IF far jump and (PE = 0 or (PE = 1 AND VM = 1)) (* Real-address or virtual-8086 mode *)
+        THEN
+                tempEIP := DEST(Offset); (* DEST is ptr16:32 or [m16:32] *)
+                IF tempEIP is beyond code segment limit
+                    THEN #GP(0); FI;
+                CS := DEST(segment selector); (* DEST is ptr16:32 or [m16:32] *)
+                IF OperandSize = 32
+                        THEN
+                            EIP := tempEIP; (* DEST is ptr16:32 or [m16:32] *)
+                        ELSE (* OperandSize = 16 *)
+                            EIP := tempEIP AND 0000FFFFH; (* Clear upper 16 bits *)
+                FI;
+FI;
+IF far jump and (PE = 1 and VM = 0)
+(* IA-32e mode or protected mode, not virtual-8086 mode *)
+        THEN
+                IF effective address in the CS, DS, ES, FS, GS, or SS segment is illegal
+            or segment selector in target operand NULL
+                            THEN #GP(0); FI;
+                IF segment selector index not within descriptor table limits
+                    THEN #GP(new selector); FI;
+            Read type and access rights of segment descriptor;
+            IF (IA32_EFER.LMA = 0)
+                    THEN
+                            IF segment type is not a conforming or nonconforming code
+                            segment, call gate, task gate, or TSS
+                                    THEN #GP(segment selector); FI;
+                    ELSE
+                            IF segment type is not a conforming or nonconforming code segment
+                            or call gate
+                                    THEN #GP(segment selector); FI;
+            FI;
+            Depending on type and access rights:
+                    GO TO CONFORMING-CODE-SEGMENT;
+                    GO TO NONCONFORMING-CODE-SEGMENT;
+                    GO TO CALL-GATE;
+                    GO TO TASK-GATE;
+                    GO TO TASK-STATE-SEGMENT;
+        ELSE
+                #GP(segment selector);
+FI;
+CONFORMING-CODE-SEGMENT:
+    IF L-Bit = 1 and D-BIT = 1 and IA32_EFER.LMA = 1
+            THEN #GP(new code segment selector); FI;
+        IF DPL > CPL
+            THEN #GP(segment selector); FI;
+        IF segment not present
+            THEN #NP(segment selector); FI;
+    tempEIP := DEST(Offset);
+    IF OperandSize = 16
+                THEN tempEIP := tempEIP AND 0000FFFFH;
+    FI;
+    IF (IA32_EFER.LMA = 0 or target mode = Compatibility mode) and
+    tempEIP outside code segment limit
+            THEN #GP(0); FI
+    IF tempEIP is non-canonical
+            THEN #GP(0); FI;
+    IF ShadowStackEnabled(CPL)
+            IF (IA32_EFER.LMA and DEST(segment selector).L) = 0
+                    (* If target is legacy or compatibility mode then the SSP must be in low 4GB *)
+                    IF (SSP & 0xFFFFFFFF00000000 != 0)
+                            THEN #GP(0); FI;
+            FI;
+    FI;
+    CS := DEST[segment selector]; (* Segment descriptor information also loaded *)
+    CS(RPL) := CPL
+    EIP := tempEIP;
+    IF EndbranchEnabled(CPL)
+            IF CPL = 3
+                    THEN
+                            IA32_U_CET.TRACKER = WAIT_FOR_ENDBRANCH
+                            IA32_U_CET.SUPPRESS = 0
+                    ELSE
+                            IA32_S_CET.TRACKER = WAIT_FOR_ENDBRANCH
+                            IA32_S_CET.SUPPRESS = 0
+            FI;
+    FI;
+END;
+NONCONFORMING-CODE-SEGMENT:
+    IF L-Bit = 1 and D-BIT = 1 and IA32_EFER.LMA = 1
+            THEN #GP(new code segment selector); FI;
+    IF (RPL > CPL) OR (DPL ≠ CPL)
+            THEN #GP(code segment selector); FI;
+    IF segment not present
+            THEN #NP(segment selector); FI;
+    tempEIP := DEST(Offset);
+    IF OperandSize = 16
+                THEN tempEIP := tempEIP AND 0000FFFFH; FI;
+    IF (IA32_EFER.LMA = 0 OR target mode = Compatibility mode)
+    and tempEIP outside code segment limit
+            THEN #GP(0); FI
+    IF tempEIP is non-canonical THEN #GP(0); FI;
+    IF ShadowStackEnabled(CPL)
+            IF (IA32_EFER.LMA and DEST(segment selector).L) = 0
+                    (* If target is legacy or compatibility mode then the SSP must be in low 4GB *)
+                    IF (SSP & 0xFFFFFFFF00000000 != 0)
+                            THEN #GP(0); FI;
+            FI;
+    FI;
+    CS := DEST[segment selector]; (* Segment descriptor information also loaded *)
+    CS(RPL) := CPL;
+    EIP := tempEIP;
+    IF EndbranchEnabled(CPL)
+            IF CPL = 3
+                    THEN
+                            IA32_U_CET.TRACKER = WAIT_FOR_ENDBRANCH
+                            IA32_U_CET.SUPPRESS = 0
+                    ELSE
+                            IA32_S_CET.TRACKER = WAIT_FOR_ENDBRANCH
+                            IA32_S_CET.SUPPRESS = 0
+            FI;
+    FI;
+END;
+CALL-GATE:
+    IF call gate DPL < CPL
+    or call gate DPL < call gate segment-selector RPL
+                    THEN #GP(call gate selector); FI;
+    IF call gate not present
+            THEN #NP(call gate selector); FI;
+    IF call gate code-segment selector is NULL
+            THEN #GP(0); FI;
+    IF call gate code-segment selector index outside descriptor table limits
+            THEN #GP(code segment selector); FI;
+    Read code segment descriptor;
+    IF code-segment segment descriptor does not indicate a code segment
+    or code-segment segment descriptor is conforming and DPL > CPL
+    or code-segment segment descriptor is non-conforming and DPL ≠ CPL
+                    THEN #GP(code segment selector); FI;
+    IF IA32_EFER.LMA = 1 and (code-segment descriptor is not a 64-bit code segment
+    or code-segment segment descriptor has both L-Bit and D-bit set)
+                    THEN #GP(code segment selector); FI;
+    IF code segment is not present
+            THEN #NP(code-segment selector); FI;
+        tempEIP := DEST(Offset);
+        IF GateSize = 16
+                THEN tempEIP := tempEIP AND 0000FFFFH; FI;
+    IF (IA32_EFER.LMA = 0 OR target mode = Compatibility mode) AND tempEIP
+    outside code segment limit
+            THEN #GP(0); FI
+    CS := DEST[SegmentSelector]; (* Segment descriptor information also loaded *)
+    CS(RPL) := CPL;
+    EIP := tempEIP;
+    IF EndbranchEnabled(CPL)
+            IF CPL = 3
+                    THEN
+                            IA32_U_CET.TRACKER = WAIT_FOR_ENDBRANCH;
+                            IA32_U_CET.SUPPRESS = 0
+                    ELSE
+                            IA32_S_CET.TRACKER = WAIT_FOR_ENDBRANCH;
+                            IA32_S_CET.SUPPRESS = 0
+            FI;
+    FI;
+END;
+TASK-GATE:
+    IF task gate DPL < CPL
+    or task gate DPL < task gate segment-selector RPL
+            THEN #GP(task gate selector); FI;
+    IF task gate not present
+            THEN #NP(gate selector); FI;
+    Read the TSS segment selector in the task-gate descriptor;
+    IF TSS segment selector local/global bit is set to local
+    or index not within GDT limits
+    or descriptor is not a TSS segment
+    or TSS descriptor specifies that the TSS is busy
+            THEN #GP(TSS selector); FI;
+        IF TSS not present
+            THEN #NP(TSS selector); FI;
+        SWITCH-TASKS to TSS;
+        IF EIP not within code segment limit
+            THEN #GP(0); FI;
+END;
+TASK-STATE-SEGMENT:
+    IF TSS DPL < CPL
+    or TSS DPL < TSS segment-selector RPL
+    or TSS descriptor indicates TSS not available
+            THEN #GP(TSS selector); FI;
+    IF TSS is not present
+            THEN #NP(TSS selector); FI;
+    SWITCH-TASKS to TSS;
+    IF EIP not within code segment limit
+            THEN #GP(0); FI;
+END;
+
+

Flags Affected + ¶ +

+

All flags are affected if a task switch occurs; no flags are affected if a task switch does not occur.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If offset in target operand, call gate, or TSS is beyond the code segment limits.
If the segment selector in the destination operand, call gate, task gate, or TSS is NULL.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
If target mode is compatibility mode and SSP is not in low 4GB.
#GP(selector)If the segment selector index is outside descriptor table limits.
If the segment descriptor pointed to by the segment selector in the destination operand is not for a conforming-code segment, nonconforming-code segment, call gate, task gate, or task state segment.
If the DPL for a nonconforming-code segment is not equal to the CPL.
(When not using a call gate.) If the RPL for the segment’s segment selector is greater than the CPL.
If the DPL for a conforming-code segment is greater than the CPL.
If the DPL from a call-gate, task-gate, or TSS segment descriptor is less than the CPL or than the RPL of the call-gate, task-gate, or TSS’s segment selector.
If the segment descriptor for selector in a call gate does not indicate it is a code segment.
If the segment descriptor for the segment selector in a task gate does not indicate an available TSS.
If the segment selector for a TSS has its local/global bit set for local.
If a TSS segment descriptor specifies that the TSS is busy or not available.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NP(selector) If the code segment being accessed is not present.
If call gate, task gate, or TSS not present.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3. (Only occurs when fetching target from memory.)
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If the target operand is beyond the code segment limits.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made. (Only occurs when fetching target from memory.)
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same as 64-bit mode exceptions.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory address is non-canonical.
If target offset in destination operand is non-canonical.
If target offset in destination operand is beyond the new code segment limit.
If the segment selector in the destination operand is NULL.
If the code segment selector in the 64-bit gate is NULL.
If transitioning to compatibility mode and the SSP is beyond 4GB.
#GP(selector)If the code segment or 64-bit call gate is outside descriptor table limits.
If the code segment or 64-bit call gate overlaps non-canonical space.
If the segment descriptor from a 64-bit call gate is in non-canonical space.
If the segment descriptor pointed to by the segment selector in the destination operand is not for a conforming-code segment, nonconforming-code segment, 64-bit call gate.
If the segment descriptor pointed to by the segment selector in the destination operand is a code segment, and has both the D-bit and the L-bit set.
If the DPL for a nonconforming-code segment is not equal to the CPL, or the RPL for the segment’s segment selector is greater than the CPL.
If the DPL for a conforming-code segment is greater than the CPL.
If the DPL from a 64-bit call-gate is less than the CPL or than the RPL of the 64-bit call-gate.
If the upper type field of a 64-bit call gate is not 0x0.
If the segment selector from a 64-bit call gate is beyond the descriptor table limits.
If the code segment descriptor pointed to by the selector in the 64-bit gate doesn't have the L-bit set and the D-bit clear.
If the segment descriptor for a segment selector from the 64-bit call gate does not indicate it is a code segment.
If the code segment is non-conforming and CPL ≠ DPL.
If the code segment is conforming and CPL < DPL.
#NP(selector)If a code segment or 64-bit call gate is not present.
#UD(64-bit mode only) If a far jump is direct to an absolute address in memory.
If the LOCK prefix is used.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
diff --git a/x86/kaddw.kaddb.kaddq.kaddd.html b/x86/kaddw.kaddb.kaddq.kaddd.html new file mode 100644 index 0000000..77609de --- /dev/null +++ b/x86/kaddw.kaddb.kaddq.kaddd.html @@ -0,0 +1,111 @@ + +KADDW/KADDB/KADDQ/KADDD + — ADD Two Masks

KADDW/KADDB/KADDQ/KADDD + — ADD Two Masks

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.L1.0F.W0 4A /r KADDW k1, k2, k3RVRV/VAVX512DQAdd 16 bits masks in k2 and k3 and place result in k1.
VEX.L1.66.0F.W0 4A /r KADDB k1, k2, k3RVRV/VAVX512DQAdd 8 bits masks in k2 and k3 and place result in k1.
VEX.L1.0F.W1 4A /r KADDQ k1, k2, k3RVRV/VAVX512BWAdd 64 bits masks in k2 and k3 and place result in k1.
VEX.L1.66.0F.W1 4A /r KADDD k1, k2, k3RVRV/VAVX512BWAdd 32 bits masks in k2 and k3 and place result in k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3
RVRModRM:reg (w)VEX.1vvv (r)ModRM:r/m (r, ModRM:[7:6] must be 11b)
+

Description + ¶ +

+

Adds the vector mask k2 and the vector mask k3, and writes the result into vector mask k1.

+

Operation + ¶ +

+

KADDW + ¶ +

+
DEST[15:0] := SRC1[15:0] + SRC2[15:0]
+DEST[MAX_KL-1:16] := 0
+
+

KADDB + ¶ +

+
DEST[7:0] := SRC1[7:0] + SRC2[7:0]
+DEST[MAX_KL-1:8] := 0
+
+

KADDQ + ¶ +

+
DEST[63:0] := SRC1[63:0] + SRC2[63:0]
+DEST[MAX_KL-1:64] := 0
+
+

KADDD + ¶ +

+
DEST[31:0] := SRC1[31:0] + SRC2[31:0]
+DEST[MAX_KL-1:32] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
KADDW __mmask16 _kadd_mask16 (__mmask16 a, __mmask16 b);
+
+
KADDB __mmask8 _kadd_mask8 (__mmask8 a, __mmask8 b);
+
+
KADDQ __mmask64 _kadd_mask64 (__mmask64 a, __mmask64 b);
+
+
KADDD __mmask32 _kadd_mask32 (__mmask32 a, __mmask32 b);
+
+
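For illustration, a minimal usage sketch of the word form (assumes an AVX512DQ-capable toolchain; the helper name is not part of this manual):

#include <immintrin.h>
// Adds two 16-bit masks as integers; the sum wraps modulo 2^16 and the
// upper mask bits are zeroed, exactly as in the KADDW operation above.
__mmask16 add_masks16(__mmask16 a, __mmask16 b)
{
    return _kadd_mask16(a, b);
}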

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-63, “TYPE K20 Exception Definition (VEX-Encoded OpMask Instructions w/o Memory Arg).”

diff --git a/x86/kandnw.kandnb.kandnq.kandnd.html b/x86/kandnw.kandnb.kandnq.kandnd.html new file mode 100644 index 0000000..8eddaf6 --- /dev/null +++ b/x86/kandnw.kandnb.kandnq.kandnd.html @@ -0,0 +1,105 @@ + +KANDNW/KANDNB/KANDNQ/KANDND + — Bitwise Logical AND NOT Masks

KANDNW/KANDNB/KANDNQ/KANDND + — Bitwise Logical AND NOT Masks

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.L1.0F.W0 42 /r KANDNW k1, k2, k3RVRV/VAVX512FBitwise AND NOT 16 bits masks k2 and k3 and place result in k1.
VEX.L1.66.0F.W0 42 /r KANDNB k1, k2, k3RVRV/VAVX512DQBitwise AND NOT 8 bits masks k2 and k3 and place result in k1.
VEX.L1.0F.W1 42 /r KANDNQ k1, k2, k3RVRV/VAVX512BWBitwise AND NOT 64 bits masks k2 and k3 and place result in k1.
VEX.L1.66.0F.W1 42 /r KANDND k1, k2, k3RVRV/VAVX512BWBitwise AND NOT 32 bits masks k2 and k3 and place result in k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3
RVRModRM:reg (w)VEX.1vvv (r)ModRM:r/m (r, ModRM:[7:6] must be 11b)
+

Description + ¶ +

+

Performs a bitwise AND NOT between the vector mask k2 and the vector mask k3, and writes the result into vector mask k1.

+

Operation + ¶ +

+

KANDNW + ¶ +

+
DEST[15:0] := (BITWISE NOT SRC1[15:0]) BITWISE AND SRC2[15:0]
+DEST[MAX_KL-1:16] := 0
+
+

KANDNB + ¶ +

+
DEST[7:0] := (BITWISE NOT SRC1[7:0]) BITWISE AND SRC2[7:0]
+DEST[MAX_KL-1:8] := 0
+
+

KANDNQ + ¶ +

+
DEST[63:0] := (BITWISE NOT SRC1[63:0]) BITWISE AND SRC2[63:0]
+DEST[MAX_KL-1:64] := 0
+
+

KANDND + ¶ +

+
DEST[31:0] := (BITWISE NOT SRC1[31:0]) BITWISE AND SRC2[31:0]
+DEST[MAX_KL-1:32] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
KANDNW __mmask16 _mm512_kandn(__mmask16 a, __mmask16 b);
+
+
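For illustration, a short sketch of a typical use (assumes AVX512F intrinsics; the helper name and compare masks are illustrative):

#include <immintrin.h>
// Lanes that match `cur` but did not match `prev`: (NOT was) AND now,
// which the compiler can lower to KANDNW.
__mmask16 newly_matching(__m512i data, __m512i prev, __m512i cur)
{
    __mmask16 was = _mm512_cmpeq_epi32_mask(data, prev);
    __mmask16 now = _mm512_cmpeq_epi32_mask(data, cur);
    return _mm512_kandn(was, now);
}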

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-63, “TYPE K20 Exception Definition (VEX-Encoded OpMask Instructions w/o Memory Arg).”

diff --git a/x86/kandw.kandb.kandq.kandd.html b/x86/kandw.kandb.kandq.kandd.html new file mode 100644 index 0000000..d0dd83b --- /dev/null +++ b/x86/kandw.kandb.kandq.kandd.html @@ -0,0 +1,105 @@ + +KANDW/KANDB/KANDQ/KANDD + — Bitwise Logical AND Masks

KANDW/KANDB/KANDQ/KANDD + — Bitwise Logical AND Masks

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.L1.0F.W0 41 /r KANDW k1, k2, k3RVRV/VAVX512FBitwise AND 16 bits masks k2 and k3 and place result in k1.
VEX.L1.66.0F.W0 41 /r KANDB k1, k2, k3RVRV/VAVX512DQBitwise AND 8 bits masks k2 and k3 and place result in k1.
VEX.L1.0F.W1 41 /r KANDQ k1, k2, k3RVRV/VAVX512BWBitwise AND 64 bits masks k2 and k3 and place result in k1.
VEX.L1.66.0F.W1 41 /r KANDD k1, k2, k3RVRV/VAVX512BWBitwise AND 32 bits masks k2 and k3 and place result in k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3
RVRModRM:reg (w)VEX.1vvv (r)ModRM:r/m (r, ModRM:[7:6] must be 11b)
+

Description + ¶ +

+

Performs a bitwise AND between the vector mask k2 and the vector mask k3, and writes the result into vector mask k1.

+

Operation + ¶ +

+

KANDW + ¶ +

+
DEST[15:0] := SRC1[15:0] BITWISE AND SRC2[15:0]
+DEST[MAX_KL-1:16] := 0
+
+

KANDB + ¶ +

+
DEST[7:0] := SRC1[7:0] BITWISE AND SRC2[7:0]
+DEST[MAX_KL-1:8] := 0
+
+

KANDQ + ¶ +

+
DEST[63:0] := SRC1[63:0] BITWISE AND SRC2[63:0]
+DEST[MAX_KL-1:64] := 0
+
+

KANDD + ¶ +

+
DEST[31:0] := SRC1[31:0] BITWISE AND SRC2[31:0]
+DEST[MAX_KL-1:32] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
KANDW __mmask16 _mm512_kand(__mmask16 a, __mmask16 b);
+
+
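For illustration, a common pattern that combines two compare masks (assumes AVX512F intrinsics; the helper name is illustrative):

#include <immintrin.h>
// Keep 32-bit lanes of x that are strictly between lo and hi; zero the rest.
// The mask intersection typically lowers to KANDW.
__m512i keep_in_range(__m512i x, __m512i lo, __m512i hi)
{
    __mmask16 above = _mm512_cmpgt_epi32_mask(x, lo);
    __mmask16 below = _mm512_cmplt_epi32_mask(x, hi);
    __mmask16 in_range = _mm512_kand(above, below);
    return _mm512_maskz_mov_epi32(in_range, x);
}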

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-63, “TYPE K20 Exception Definition (VEX-Encoded OpMask Instructions w/o Memory Arg).”

diff --git a/x86/kmovw.kmovb.kmovq.kmovd.html b/x86/kmovw.kmovb.kmovq.kmovd.html new file mode 100644 index 0000000..d0eedc3 --- /dev/null +++ b/x86/kmovw.kmovb.kmovq.kmovd.html @@ -0,0 +1,193 @@ + +KMOVW/KMOVB/KMOVQ/KMOVD + — Move From and to Mask Registers

KMOVW/KMOVB/KMOVQ/KMOVD + — Move From and to Mask Registers

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.L0.0F.W0 90 /r KMOVW k1, k2/m16RMV/VAVX512FMove 16 bits mask from k2/m16 and store the result in k1.
VEX.L0.66.0F.W0 90 /r KMOVB k1, k2/m8RMV/VAVX512DQMove 8 bits mask from k2/m8 and store the result in k1.
VEX.L0.0F.W1 90 /r KMOVQ k1, k2/m64RMV/VAVX512BWMove 64 bits mask from k2/m64 and store the result in k1.
VEX.L0.66.0F.W1 90 /r KMOVD k1, k2/m32RMV/VAVX512BWMove 32 bits mask from k2/m32 and store the result in k1.
VEX.L0.0F.W0 91 /r KMOVW m16, k1MRV/VAVX512FMove 16 bits mask from k1 and store the result in m16.
VEX.L0.66.0F.W0 91 /r KMOVB m8, k1MRV/VAVX512DQMove 8 bits mask from k1 and store the result in m8.
VEX.L0.0F.W1 91 /r KMOVQ m64, k1MRV/VAVX512BWMove 64 bits mask from k1 and store the result in m64.
VEX.L0.66.0F.W1 91 /r KMOVD m32, k1MRV/VAVX512BWMove 32 bits mask from k1 and store the result in m32.
VEX.L0.0F.W0 92 /r KMOVW k1, r32RRV/VAVX512FMove 16 bits mask from r32 to k1.
VEX.L0.66.0F.W0 92 /r KMOVB k1, r32RRV/VAVX512DQMove 8 bits mask from r32 to k1.
VEX.L0.F2.0F.W1 92 /r KMOVQ k1, r64RRV/IAVX512BWMove 64 bits mask from r64 to k1.
VEX.L0.F2.0F.W0 92 /r KMOVD k1, r32RRV/VAVX512BWMove 32 bits mask from r32 to k1.
VEX.L0.0F.W0 93 /r KMOVW r32, k1RRV/VAVX512FMove 16 bits mask from k1 to r32.
VEX.L0.66.0F.W0 93 /r KMOVB r32, k1RRV/VAVX512DQMove 8 bits mask from k1 to r32.
VEX.L0.F2.0F.W1 93 /r KMOVQ r64, k1RRV/IAVX512BWMove 64 bits mask from k1 to r64.
VEX.L0.F2.0F.W0 93 /r KMOVD r32, k1RRV/VAVX512BWMove 32 bits mask from k1 to r32.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2
RMModRM:reg (w)ModRM:r/m (r)
MRModRM:r/m (w, ModRM:[7:6] must not be 11b)ModRM:reg (r)
RRModRM:reg (w)ModRM:r/m (r, ModRM:[7:6] must be 11b)
+

Description + ¶ +

+

Copies values from the source operand (second operand) to the destination operand (first operand). The source and destination operands can be mask registers, memory locations, or general-purpose registers. The instruction cannot be used to transfer data between general-purpose registers or between memory locations.

+

When moving to a mask register, the result is zero extended to MAX_KL size (i.e., 64 bits currently). When moving to a general-purpose register (GPR), the result is zero-extended to the size of the destination. In 32-bit mode, the default GPR destination’s size is 32 bits. In 64-bit mode, the default GPR destination’s size is 64 bits. Note that VEX.W can only be used to modify the size of the GPR operand in 64b mode.

+

Operation + ¶ +

+

KMOVW + ¶ +

+
IF *destination is a memory location*
+    DEST[15:0] := SRC[15:0]
+IF *destination is a mask register or a GPR *
+    DEST := ZeroExtension(SRC[15:0])
+
+

KMOVB + ¶ +

+
IF *destination is a memory location*
+    DEST[7:0] := SRC[7:0]
+IF *destination is a mask register or a GPR *
+    DEST := ZeroExtension(SRC[7:0])
+
+

KMOVQ + ¶ +

+
IF *destination is a memory location or a GPR*
+    DEST[63:0] := SRC[63:0]
+IF *destination is a mask register*
+    DEST := ZeroExtension(SRC[63:0])
+
+

KMOVD + ¶ +

+
IF *destination is a memory location*
+    DEST[31:0] := SRC[31:0]
+IF *destination is a mask register or a GPR *
+    DEST := ZeroExtension(SRC[31:0])
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
KMOVW __mmask16 _mm512_kmov(__mmask16 a);
+
+
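For illustration, a sketch of a mask-to-GPR round trip (assumes AVX512F intrinsics; the helper name is illustrative):

#include <immintrin.h>
// __mmask16 is an integer type in the intrinsics headers, so a compare mask
// can be widened into a general-purpose register; compilers typically use
// KMOVW for such moves, zero-extending as described above.
unsigned popcount_matches(__m512i a, __m512i b)
{
    __mmask16 k = _mm512_kmov(_mm512_cmpeq_epi32_mask(a, b));
    unsigned bits = (unsigned)k;       // mask value now lives in a GPR
    unsigned n = 0;
    while (bits) { n += bits & 1u; bits >>= 1; }
    return n;
}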

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Instructions with RR operand encoding, see Table 2-63, “TYPE K20 Exception Definition (VEX-Encoded OpMask Instructions w/o Memory Arg).”

+

Instructions with RM or MR operand encoding, see Table 2-64, “TYPE K21 Exception Definition (VEX-Encoded OpMask Instructions Addressing Memory).”

diff --git a/x86/knotw.knotb.knotq.knotd.html b/x86/knotw.knotb.knotq.knotd.html new file mode 100644 index 0000000..314d89e --- /dev/null +++ b/x86/knotw.knotb.knotq.knotd.html @@ -0,0 +1,103 @@ + +KNOTW/KNOTB/KNOTQ/KNOTD + — NOT Mask Register

KNOTW/KNOTB/KNOTQ/KNOTD + — NOT Mask Register

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.L0.0F.W0 44 /r KNOTW k1, k2RRV/VAVX512FBitwise NOT of 16 bits mask k2.
VEX.L0.66.0F.W0 44 /r KNOTB k1, k2RRV/VAVX512DQBitwise NOT of 8 bits mask k2.
VEX.L0.0F.W1 44 /r KNOTQ k1, k2RRV/VAVX512BWBitwise NOT of 64 bits mask k2.
VEX.L0.66.0F.W1 44 /r KNOTD k1, k2RRV/VAVX512BWBitwise NOT of 32 bits mask k2.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + +
Op/EnOperand 1Operand 2
RRModRM:reg (w)ModRM:r/m (r, ModRM:[7:6] must be 11b)
+

Description + ¶ +

+

Performs a bitwise NOT of vector mask k2 and writes the result into vector mask k1.

+

Operation + ¶ +

+

KNOTW + ¶ +

+
DEST[15:0] := BITWISE NOT SRC[15:0]
+DEST[MAX_KL-1:16] := 0
+
+

KNOTB + ¶ +

+
DEST[7:0] := BITWISE NOT SRC[7:0]
+DEST[MAX_KL-1:8] := 0
+
+

KNOTQ + ¶ +

+
DEST[63:0] := BITWISE NOT SRC[63:0]
+DEST[MAX_KL-1:64] := 0
+
+

KNOTD + ¶ +

+
DEST[31:0] := BITWISE NOT SRC[31:0]
+DEST[MAX_KL-1:32] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
KNOTW __mmask16 _mm512_knot(__mmask16 a);
+
+
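For illustration, a one-line use (assumes AVX512F intrinsics; the helper name is illustrative):

#include <immintrin.h>
// Mask of 32-bit lanes where a and b differ, obtained by complementing an
// equality mask; the upper mask bits are zeroed as in the operation above.
__mmask16 lanes_differ(__m512i a, __m512i b)
{
    return _mm512_knot(_mm512_cmpeq_epi32_mask(a, b));
}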

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-63, “TYPE K20 Exception Definition (VEX-Encoded OpMask Instructions w/o Memory Arg).”

diff --git a/x86/kortestw.kortestb.kortestq.kortestd.html b/x86/kortestw.kortestb.kortestq.kortestd.html new file mode 100644 index 0000000..4018e4b --- /dev/null +++ b/x86/kortestw.kortestb.kortestq.kortestd.html @@ -0,0 +1,130 @@ + +KORTESTW/KORTESTB/KORTESTQ/KORTESTD + — OR Masks and Set Flags

KORTESTW/KORTESTB/KORTESTQ/KORTESTD + — OR Masks and Set Flags

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.L0.0F.W0 98 /r KORTESTW k1, k2RRV/VAVX512FBitwise OR 16 bits masks k1 and k2 and update ZF and CF accordingly.
VEX.L0.66.0F.W0 98 /r KORTESTB k1, k2RRV/VAVX512DQBitwise OR 8 bits masks k1 and k2 and update ZF and CF accordingly.
VEX.L0.0F.W1 98 /r KORTESTQ k1, k2RRV/VAVX512BWBitwise OR 64 bits masks k1 and k2 and update ZF and CF accordingly.
VEX.L0.66.0F.W1 98 /r KORTESTD k1, k2RRV/VAVX512BWBitwise OR 32 bits masks k1 and k2 and update ZF and CF accordingly.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + +
Op/EnOperand 1Operand 2
RRModRM:reg (w)ModRM:r/m (r, ModRM:[7:6] must be 11b)
+

Description + ¶ +

+

Performs a bitwise OR between the vector mask register k2, and the vector mask register k1, and sets CF and ZF based on the operation result.

+

ZF flag is set if both sources are 0x0. CF is set if, after the OR operation is done, the operation result is all 1’s.

+

Operation + ¶ +

+

KORTESTW + ¶ +

+
TMP[15:0] := DEST[15:0] BITWISE OR SRC[15:0]
+IF(TMP[15:0]=0)
+    THEN ZF := 1
+    ELSE ZF := 0
+FI;
+IF(TMP[15:0]=FFFFh)
+    THEN CF := 1
+    ELSE CF := 0
+FI;
+
+

KORTESTB + ¶ +

+
TMP[7:0] := DEST[7:0] BITWISE OR SRC[7:0]
+IF(TMP[7:0]=0)
+    THEN ZF := 1
+    ELSE ZF := 0
+FI;
+IF(TMP[7:0]==FFh)
+    THEN CF := 1
+    ELSE CF := 0
+FI;
+
+

KORTESTQ + ¶ +

+
TMP[63:0] := DEST[63:0] BITWISE OR SRC[63:0]
+IF(TMP[63:0]=0)
+    THEN ZF := 1
+    ELSE ZF := 0
+FI;
+IF(TMP[63:0]==FFFFFFFF_FFFFFFFFh)
+    THEN CF := 1
+    ELSE CF := 0
+FI;
+
+

KORTESTD + ¶ +

+
TMP[31:0] := DEST[31:0] BITWISE OR SRC[31:0]
+IF(TMP[31:0]=0)
+    THEN ZF := 1
+    ELSE ZF := 0
+FI;
+IF(TMP[31:0]=FFFFFFFFh)
+    THEN CF := 1
+    ELSE CF := 0
+FI;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
KORTESTW __mmask16 _mm512_kortest[cz](__mmask16 a, __mmask16 b);
+
+
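For illustration, a typical flag-based test (assumes AVX512F intrinsics; the helper name is illustrative):

#include <immintrin.h>
// _mm512_kortestz returns 1 when the OR of both masks is all zeros (the ZF
// case above), so this reports whether any bit is set in either mask.
int any_bit_set(__mmask16 k1, __mmask16 k2)
{
    return !_mm512_kortestz(k1, k2);
}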

Flags Affected + ¶ +

+

The ZF flag is set if the result of OR-ing both sources is all 0s.

+

The CF flag is set if the result of OR-ing both sources is all 1s.

+

The OF, SF, AF, and PF flags are set to 0.

+

Other Exceptions + ¶ +

+

See Table 2-63, “TYPE K20 Exception Definition (VEX-Encoded OpMask Instructions w/o Memory Arg).”

diff --git a/x86/korw.korb.korq.kord.html b/x86/korw.korb.korq.kord.html new file mode 100644 index 0000000..8b4a255 --- /dev/null +++ b/x86/korw.korb.korq.kord.html @@ -0,0 +1,105 @@ + +KORW/KORB/KORQ/KORD + — Bitwise Logical OR Masks

KORW/KORB/KORQ/KORD + — Bitwise Logical OR Masks

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.L1.0F.W0 45 /r KORW k1, k2, k3RVRV/VAVX512FBitwise OR 16 bits masks k2 and k3 and place result in k1.
VEX.L1.66.0F.W0 45 /r KORB k1, k2, k3RVRV/VAVX512DQBitwise OR 8 bits masks k2 and k3 and place result in k1.
VEX.L1.0F.W1 45 /r KORQ k1, k2, k3RVRV/VAVX512BWBitwise OR 64 bits masks k2 and k3 and place result in k1.
VEX.L1.66.0F.W1 45 /r KORD k1, k2, k3RVRV/VAVX512BWBitwise OR 32 bits masks k2 and k3 and place result in k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3
RVRModRM:reg (w)VEX.1vvv (r)ModRM:r/m (r, ModRM:[7:6] must be 11b)
+

Description + ¶ +

+

Performs a bitwise OR between the vector mask k2 and the vector mask k3, and writes the result into vector mask k1 (three-operand form).

+

Operation + ¶ +

+

KORW + ¶ +

+
DEST[15:0] := SRC1[15:0] BITWISE OR SRC2[15:0]
+DEST[MAX_KL-1:16] := 0
+
+

KORB + ¶ +

+
DEST[7:0] := SRC1[7:0] BITWISE OR SRC2[7:0]
+DEST[MAX_KL-1:8] := 0
+
+

KORQ + ¶ +

+
DEST[63:0] := SRC1[63:0] BITWISE OR SRC2[63:0]
+DEST[MAX_KL-1:64] := 0
+
+

KORD + ¶ +

+
DEST[31:0] := SRC1[31:0] BITWISE OR SRC2[31:0]
+DEST[MAX_KL-1:32] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
KORW __mmask16 _mm512_kor(__mmask16 a, __mmask16 b);
+
+

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-63, “TYPE K20 Exception Definition (VEX-Encoded OpMask Instructions w/o Memory Arg).”

diff --git a/x86/kshiftlw.kshiftlb.kshiftlq.kshiftld.html b/x86/kshiftlw.kshiftlb.kshiftlq.kshiftld.html new file mode 100644 index 0000000..70a9449 --- /dev/null +++ b/x86/kshiftlw.kshiftlb.kshiftlq.kshiftld.html @@ -0,0 +1,117 @@ + +KSHIFTLW/KSHIFTLB/KSHIFTLQ/KSHIFTLD + — Shift Left Mask Registers

KSHIFTLW/KSHIFTLB/KSHIFTLQ/KSHIFTLD + — Shift Left Mask Registers

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.L0.66.0F3A.W1 32 /r KSHIFTLW k1, k2, imm8RRIV/VAVX512FShift left 16 bits in k2 by immediate and write result in k1.
VEX.L0.66.0F3A.W0 32 /r KSHIFTLB k1, k2, imm8RRIV/VAVX512DQShift left 8 bits in k2 by immediate and write result in k1.
VEX.L0.66.0F3A.W1 33 /r KSHIFTLQ k1, k2, imm8RRIV/VAVX512BWShift left 64 bits in k2 by immediate and write result in k1.
VEX.L0.66.0F3A.W0 33 /r KSHIFTLD k1, k2, imm8RRIV/VAVX512BWShift left 32 bits in k2 by immediate and write result in k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3
RRIModRM:reg (w)ModRM:r/m (r, ModRM:[7:6] must be 11b)imm8
+

Description + ¶ +

+

Shifts the 8/16/32/64 bits in the second operand (source operand) left by the count specified in the immediate byte and places the least significant 8/16/32/64 bits of the result in the destination operand. The higher bits of the destination are zero-extended. The destination is set to zero if the count value is greater than 7 (for a byte shift), 15 (for a word shift), 31 (for a doubleword shift), or 63 (for a quadword shift).

+

Operation + ¶ +

+

KSHIFTLW + ¶ +

+
COUNT := imm8[7:0]
+DEST[MAX_KL-1:0] := 0
+IF COUNT <=15
+    THEN DEST[15:0] := SRC1[15:0] << COUNT;
+FI;
+
+

KSHIFTLB + ¶ +

+
COUNT := imm8[7:0]
+DEST[MAX_KL-1:0] := 0
+IF COUNT <=7
+    THEN DEST[7:0] := SRC1[7:0] << COUNT;
+FI;
+
+

KSHIFTLQ + ¶ +

+
COUNT := imm8[7:0]
+DEST[MAX_KL-1:0] := 0
+IF COUNT <=63
+    THEN DEST[63:0] := SRC1[63:0] << COUNT;
+FI;
+
+

KSHIFTLD + ¶ +

+
COUNT := imm8[7:0]
+DEST[MAX_KL-1:0] := 0
+IF COUNT <=31
+    THEN DEST[31:0] := SRC1[31:0] << COUNT;
+FI;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
The compiler automatically generates KSHIFTLW when needed.
+
+
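A hedged sketch of how this surfaces in C (the helper name is illustrative): __mmask16 is an integer type in the intrinsics headers, so ordinary shifts express the operation and an AVX-512 compiler may lower them to KSHIFTLW/KSHIFTRW.

#include <immintrin.h>
// Rotate a 16-lane mask left by one position (wrapping around bit 15).
__mmask16 rotate_mask_left1(__mmask16 k)
{
    return (__mmask16)(((k << 1) | (k >> 15)) & 0xFFFF);
}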

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-63, “TYPE K20 Exception Definition (VEX-Encoded OpMask Instructions w/o Memory Arg).”

diff --git a/x86/kshiftrw.kshiftrb.kshiftrq.kshiftrd.html b/x86/kshiftrw.kshiftrb.kshiftrq.kshiftrd.html new file mode 100644 index 0000000..5d7c58d --- /dev/null +++ b/x86/kshiftrw.kshiftrb.kshiftrq.kshiftrd.html @@ -0,0 +1,117 @@ + +KSHIFTRW/KSHIFTRB/KSHIFTRQ/KSHIFTRD + — Shift Right Mask Registers

KSHIFTRW/KSHIFTRB/KSHIFTRQ/KSHIFTRD + — Shift Right Mask Registers

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.L0.66.0F3A.W1 30 /r KSHIFTRW k1, k2, imm8RRIV/VAVX512FShift right 16 bits in k2 by immediate and write result in k1.
VEX.L0.66.0F3A.W0 30 /r KSHIFTRB k1, k2, imm8RRIV/VAVX512DQShift right 8 bits in k2 by immediate and write result in k1.
VEX.L0.66.0F3A.W1 31 /r KSHIFTRQ k1, k2, imm8RRIV/VAVX512BWShift right 64 bits in k2 by immediate and write result in k1.
VEX.L0.66.0F3A.W0 31 /r KSHIFTRD k1, k2, imm8RRIV/VAVX512BWShift right 32 bits in k2 by immediate and write result in k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3
RRIModRM:reg (w)ModRM:r/m (r, ModRM:[7:6] must be 11b)imm8
+

Description + ¶ +

+

Shifts the 8/16/32/64 bits in the second operand (source operand) right by the count specified in the immediate byte and places the least significant 8/16/32/64 bits of the result in the destination operand. The higher bits of the destination are zero-extended. The destination is set to zero if the count value is greater than 7 (for a byte shift), 15 (for a word shift), 31 (for a doubleword shift), or 63 (for a quadword shift).

+

Operation + ¶ +

+

KSHIFTRW + ¶ +

+
COUNT := imm8[7:0]
+DEST[MAX_KL-1:0] := 0
+IF COUNT <=15
+    THEN DEST[15:0] := SRC1[15:0] >> COUNT;
+FI;
+
+

KSHIFTRB + ¶ +

+
COUNT := imm8[7:0]
+DEST[MAX_KL-1:0] := 0
+IF COUNT <=7
+    THEN DEST[7:0] := SRC1[7:0] >> COUNT;
+FI;
+
+

KSHIFTRQ + ¶ +

+
COUNT := imm8[7:0]
+DEST[MAX_KL-1:0] := 0
+IF COUNT <=63
+    THEN DEST[63:0] := SRC1[63:0] >> COUNT;
+FI;
+
+

KSHIFTRD + ¶ +

+
COUNT := imm8[7:0]
+DEST[MAX_KL-1:0] := 0
+IF COUNT <=31
+    THEN DEST[31:0] := SRC1[31:0] >> COUNT;
+FI;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
The compiler automatically generates KSHIFTRW when needed.
+
+

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-63, “TYPE K20 Exception Definition (VEX-Encoded OpMask Instructions w/o Memory Arg).”

diff --git a/x86/ktestw.ktestb.ktestq.ktestd.html b/x86/ktestw.ktestb.ktestq.ktestd.html new file mode 100644 index 0000000..a3c661f --- /dev/null +++ b/x86/ktestw.ktestb.ktestq.ktestd.html @@ -0,0 +1,134 @@ + +KTESTW/KTESTB/KTESTQ/KTESTD + — Packed Bit Test Masks and Set Flags

KTESTW/KTESTB/KTESTQ/KTESTD + — Packed Bit Test Masks and Set Flags

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.L0.0F.W0 99 /r KTESTW k1, k2RRV/VAVX512DQSet ZF and CF depending on sign bit AND and ANDN of 16 bits mask register sources.
VEX.L0.66.0F.W0 99 /r KTESTB k1, k2RRV/VAVX512DQSet ZF and CF depending on sign bit AND and ANDN of 8 bits mask register sources.
VEX.L0.0F.W1 99 /r KTESTQ k1, k2RRV/VAVX512BWSet ZF and CF depending on sign bit AND and ANDN of 64 bits mask register sources.
VEX.L0.66.0F.W1 99 /r KTESTD k1, k2RRV/VAVX512BWSet ZF and CF depending on sign bit AND and ANDN of 32 bits mask register sources.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + +
Op/EnOperand 1Operand 2
RRModRM:reg (r)ModRM:r/m (r, ModRM:[7:6] must be 11b)
+

Description + ¶ +

+

Performs a bitwise comparison of the bits of the first source operand and corresponding bits in the second source operand. If the AND operation produces all zeros, the ZF is set else the ZF is clear. If the bitwise AND operation of the inverted first source operand with the second source operand produces all zeros the CF is set else the CF is clear. Only the EFLAGS register is updated.

+

Note: In VEX-encoded versions, VEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Operation + ¶ +

+

KTESTW + ¶ +

+
TEMP[15:0] := SRC2[15:0] AND SRC1[15:0]
+IF (TEMP[15:0] = = 0)
+    THEN ZF :=1;
+    ELSE ZF := 0;
+FI;
+TEMP[15:0] := SRC2[15:0] AND NOT SRC1[15:0]
+IF (TEMP[15:0] = = 0)
+    THEN CF :=1;
+    ELSE CF := 0;
+FI;
+AF := OF := PF := SF := 0;
+
+

KTESTB + ¶ +

+
TEMP[7:0] := SRC2[7:0] AND SRC1[7:0]
+IF (TEMP[7:0] = = 0)
+    THEN ZF :=1;
+    ELSE ZF := 0;
+FI;
+TEMP[7:0] := SRC2[7:0] AND NOT SRC1[7:0]
+IF (TEMP[7:0] = = 0)
+    THEN CF :=1;
+    ELSE CF := 0;
+FI;
+AF := OF := PF := SF := 0;
+
+

KTESTQ + ¶ +

+
TEMP[63:0] := SRC2[63:0] AND SRC1[63:0]
+IF (TEMP[63:0] = = 0)
+    THEN ZF :=1;
+    ELSE ZF := 0;
+FI;
+TEMP[63:0] := SRC2[63:0] AND NOT SRC1[63:0]
+IF (TEMP[63:0] = = 0)
+    THEN CF :=1;
+    ELSE CF := 0;
+FI;
+AF := OF := PF := SF := 0;
+
+

KTESTD + ¶ +

+
TEMP[31:0] := SRC2[31:0] AND SRC1[31:0]
+IF (TEMP[31:0] = = 0)
+    THEN ZF :=1;
+    ELSE ZF := 0;
+FI;
+TEMP[31:0] := SRC2[31:0] AND NOT SRC1[31:0]
+IF (TEMP[31:0] = = 0)
+    THEN CF :=1;
+    ELSE CF := 0;
+FI;
+AF := OF := PF := SF := 0;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-63, “TYPE K20 Exception Definition (VEX-Encoded OpMask Instructions w/o Memory Arg).”

diff --git a/x86/kunpckbw.kunpckwd.kunpckdq.html b/x86/kunpckbw.kunpckwd.kunpckdq.html new file mode 100644 index 0000000..3162f8c --- /dev/null +++ b/x86/kunpckbw.kunpckwd.kunpckdq.html @@ -0,0 +1,99 @@ + +KUNPCKBW/KUNPCKWD/KUNPCKDQ + — Unpack for Mask Registers

KUNPCKBW/KUNPCKWD/KUNPCKDQ + — Unpack for Mask Registers

+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.L1.66.0F.W0 4B /r KUNPCKBW k1, k2, k3RVRV/VAVX512FUnpack 8-bit masks in k2 and k3 and write word result in k1.
VEX.L1.0F.W0 4B /r KUNPCKWD k1, k2, k3RVRV/VAVX512BWUnpack 16-bit masks in k2 and k3 and write doubleword result in k1.
VEX.L1.0F.W1 4B /r KUNPCKDQ k1, k2, k3RVRV/VAVX512BWUnpack 32-bit masks in k2 and k3 and write quadword result in k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3
RVRModRM:reg (w)VEX.1vvv (r)ModRM:r/m (r, ModRM:[7:6] must be 11b)
+

Description + ¶ +

+

Unpacks the lower 8/16/32 bits of the second and third operands (source operands) into the low part of the first operand (destination operand), starting from the low bytes. The result is zero-extended in the destination.

+

Operation + ¶ +

+

KUNPCKBW + ¶ +

+
DEST[7:0] := SRC2[7:0]
+DEST[15:8] := SRC1[7:0]
+DEST[MAX_KL-1:16] := 0
+
+

KUNPCKWD + ¶ +

+
DEST[15:0] := SRC2[15:0]
+DEST[31:16] := SRC1[15:0]
+DEST[MAX_KL-1:32] := 0
+
+

KUNPCKDQ + ¶ +

+
DEST[31:0] := SRC2[31:0]
+DEST[63:32] := SRC1[31:0]
+DEST[MAX_KL-1:64] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
KUNPCKBW __mmask16 _mm512_kunpackb(__mmask16 a, __mmask16 b);
+
+
KUNPCKDQ __mmask64 _mm512_kunpackd(__mmask64 a, __mmask64 b);
+
+
KUNPCKWD __mmask32 _mm512_kunpackw(__mmask32 a, __mmask32 b);
+
+
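For illustration, a sketch that joins two byte-wide compare masks (assumes AVX512F intrinsics; the helper name is illustrative):

#include <immintrin.h>
// Per the KUNPCKBW operation above, b supplies bits 7:0 of the result and a
// supplies bits 15:8; the remaining mask bits are zeroed.
__mmask16 join_byte_masks(__mmask16 a, __mmask16 b)
{
    return _mm512_kunpackb(a, b);
}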

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-63, “TYPE K20 Exception Definition (VEX-Encoded OpMask Instructions w/o Memory Arg).”

diff --git a/x86/kxnorw.kxnorb.kxnorq.kxnord.html b/x86/kxnorw.kxnorb.kxnorq.kxnord.html new file mode 100644 index 0000000..2fbac2f --- /dev/null +++ b/x86/kxnorw.kxnorb.kxnorq.kxnord.html @@ -0,0 +1,105 @@ + +KXNORW/KXNORB/KXNORQ/KXNORD + — Bitwise Logical XNOR Masks

KXNORW/KXNORB/KXNORQ/KXNORD + — Bitwise Logical XNOR Masks

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.L1.0F.W0 46 /r KXNORW k1, k2, k3RVRV/VAVX512FBitwise XNOR 16-bit masks k2 and k3 and place result in k1.
VEX.L1.66.0F.W0 46 /r KXNORB k1, k2, k3RVRV/VAVX512DQBitwise XNOR 8-bit masks k2 and k3 and place result in k1.
VEX.L1.0F.W1 46 /r KXNORQ k1, k2, k3RVRV/VAVX512BWBitwise XNOR 64-bit masks k2 and k3 and place result in k1.
VEX.L1.66.0F.W1 46 /r KXNORD k1, k2, k3RVRV/VAVX512BWBitwise XNOR 32-bit masks k2 and k3 and place result in k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3
RVRModRM:reg (w)VEX.1vvv (r)ModRM:r/m (r, ModRM:[7:6] must be 11b)
+

Description + ¶ +

+

Performs a bitwise XNOR between the vector mask k2 and the vector mask k3, and writes the result into vector mask k1 (three-operand form).

+

Operation + ¶ +

+

KXNORW + ¶ +

+
DEST[15:0] := NOT (SRC1[15:0] BITWISE XOR SRC2[15:0])
+DEST[MAX_KL-1:16] := 0
+
+

KXNORB + ¶ +

+
DEST[7:0] := NOT (SRC1[7:0] BITWISE XOR SRC2[7:0])
+DEST[MAX_KL-1:8] := 0
+
+

KXNORQ + ¶ +

+
DEST[63:0] := NOT (SRC1[63:0] BITWISE XOR SRC2[63:0])
+DEST[MAX_KL-1:64] := 0
+
+

KXNORD + ¶ +

+
DEST[31:0] := NOT (SRC1[31:0] BITWISE XOR SRC2[31:0])
+DEST[MAX_KL-1:32] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
KXNORW __mmask16 _mm512_kxnor(__mmask16 a, __mmask16 b);
+
+
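A common idiom worth noting: XNOR-ing a mask with itself produces an all-ones mask, which is how a full mask is typically materialized. A minimal sketch (assumes AVX512F intrinsics; the helper name is illustrative):

#include <immintrin.h>
// NOT (k XOR k) = all ones; compilers usually emit KXNORW k, k, k for this.
__mmask16 full_mask16(void)
{
    __mmask16 k = 0;
    return _mm512_kxnor(k, k);
}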

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-63, “TYPE K20 Exception Definition (VEX-Encoded OpMask Instructions w/o Memory Arg).”

diff --git a/x86/kxorw.kxorb.kxorq.kxord.html b/x86/kxorw.kxorb.kxorq.kxord.html new file mode 100644 index 0000000..85d454d --- /dev/null +++ b/x86/kxorw.kxorb.kxorq.kxord.html @@ -0,0 +1,105 @@ + +KXORW/KXORB/KXORQ/KXORD + — Bitwise Logical XOR Masks

KXORW/KXORB/KXORQ/KXORD + — Bitwise Logical XOR Masks

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.L1.0F.W0 47 /r KXORW k1, k2, k3RVRV/VAVX512FBitwise XOR 16-bit masks k2 and k3 and place result in k1.
VEX.L1.66.0F.W0 47 /r KXORB k1, k2, k3RVRV/VAVX512DQBitwise XOR 8-bit masks k2 and k3 and place result in k1.
VEX.L1.0F.W1 47 /r KXORQ k1, k2, k3RVRV/VAVX512BWBitwise XOR 64-bit masks k2 and k3 and place result in k1.
VEX.L1.66.0F.W1 47 /r KXORD k1, k2, k3RVRV/VAVX512BWBitwise XOR 32-bit masks k2 and k3 and place result in k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3
RVRModRM:reg (w)VEX.1vvv (r)ModRM:r/m (r, ModRM:[7:6] must be 11b)
+

Description + ¶ +

+

Performs a bitwise XOR between the vector mask k2 and the vector mask k3, and writes the result into vector mask k1 (three-operand form).

+

Operation + ¶ +

+

KXORW + ¶ +

+
DEST[15:0] := SRC1[15:0] BITWISE XOR SRC2[15:0]
+DEST[MAX_KL-1:16] := 0
+
+

KXORB + ¶ +

+
DEST[7:0] := SRC1[7:0] BITWISE XOR SRC2[7:0]
+DEST[MAX_KL-1:8] := 0
+
+

KXORQ + ¶ +

+
DEST[63:0] := SRC1[63:0] BITWISE XOR SRC2[63:0]
+DEST[MAX_KL-1:64] := 0
+
+

KXORD + ¶ +

+
DEST[31:0] := SRC1[31:0] BITWISE XOR SRC2[31:0]
+DEST[MAX_KL-1:32] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
KXORW __mmask16 _mm512_kxor(__mmask16 a, __mmask16 b);
+
+

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-63, “TYPE K20 Exception Definition (VEX-Encoded OpMask Instructions w/o Memory Arg).”

diff --git a/x86/lahf.html b/x86/lahf.html new file mode 100644 index 0000000..2cd73e3 --- /dev/null +++ b/x86/lahf.html @@ -0,0 +1,90 @@ + +LAHF + — Load Status Flags Into AH Register

LAHF + — Load Status Flags Into AH Register

+ + + + + + + + + + + + + + + +
Opcode | Instruction | Op/En | 64-Bit Mode | Compat/Leg Mode | Description
9F | LAHF | ZO | Invalid1 | Valid | Load: AH := EFLAGS(SF:ZF:0:AF:0:PF:1:CF).
+

1. Valid in specific steppings; see Description section.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

This instruction executes as described above in compatibility mode and legacy mode. It is valid in 64-bit mode only if CPUID.80000001H:ECX.LAHF-SAHF[bit 0] = 1.

+

Operation + ¶ +

+
IF 64-Bit Mode
+    THEN
+        IF CPUID.80000001H:ECX.LAHF-SAHF[bit 0] = 1;
+            THEN AH := RFLAGS(SF:ZF:0:AF:0:PF:1:CF);
+            ELSE #UD;
+        FI;
+    ELSE
+        AH := EFLAGS(SF:ZF:0:AF:0:PF:1:CF);
+FI;
+
+

Flags Affected + ¶ +

+

None. The state of the flags in the EFLAGS register is not affected.

+

Protected Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + +
#UDIf CPUID.80000001H:ECX.LAHF-SAHF[bit 0] = 0.
If the LOCK prefix is used.
diff --git a/x86/lar.html b/x86/lar.html new file mode 100644 index 0000000..8942eb7 --- /dev/null +++ b/x86/lar.html @@ -0,0 +1,182 @@ + +LAR + — Load Access Rights Byte

LAR + — Load Access Rights Byte

+ + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 02 /rLAR r16, r16/m16RMValidValidr16 := access rights referenced by r16/m16
0F 02 /rLAR reg, r32/m161RMValidValidreg := access rights referenced by r32/m16
+
+

1. For all loads (regardless of source or destination sizing) only bits 16-0 are used. Other bits are ignored.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Loads the access rights from the segment descriptor specified by the second operand (source operand) into the first operand (destination operand) and sets the ZF flag in the flag register. The source operand (which can be a register or a memory location) contains the segment selector for the segment descriptor being accessed. If the source operand is a memory address, only 16 bits of data are accessed. The destination operand is a general-purpose register.

+

The processor performs access checks as part of the loading process. Once loaded in the destination register, software can perform additional checks on the access rights information.

+

The access rights for a segment descriptor include fields located in the second doubleword (bytes 4–7) of the segment descriptor. The following fields are loaded by the LAR instruction:

+
    +
  • Bits 7:0 are returned as 0
  • +
  • Bits 11:8 return the segment type.
  • +
  • Bit 12 returns the S flag.
  • +
  • Bits 14:13 return the DPL.
  • +
  • Bit 15 returns the P flag.
  • +
  • The following fields are returned only if the operand size is greater than 16 bits: +
      +
    • Bits 19:16 are undefined.
    • +
    • Bit 20 returns the software-available bit in the descriptor.
    • +
    • Bit 21 returns the L flag.
    • +
    • Bit 22 returns the D/B flag.
    • +
    • Bit 23 returns the G flag.
    • +
    • Bits 31:24 are returned as 0.
+

This instruction performs the following checks before it loads the access rights in the destination register:

+
    +
  • Checks that the segment selector is not NULL.
  • +
  • Checks that the segment selector points to a descriptor that is within the limits of the GDT or LDT being accessed
  • +
  • Checks that the descriptor type is valid for this instruction. All code and data segment descriptors are valid for (can be accessed with) the LAR instruction. The valid system segment and gate descriptor types are given in Table 3-53.
  • +
  • If the segment is not a conforming code segment, it checks that the specified segment descriptor is visible at the CPL (that is, if the CPL and the RPL of the segment selector are less than or equal to the DPL of the segment selector).
+

If the segment descriptor cannot be accessed or is an invalid type for the instruction, the ZF flag is cleared and no access rights are loaded in the destination operand.

+

The LAR instruction can only be executed in protected mode and IA-32e mode.

+
+ + + + + + + + + + + + + + + +
Type | Protected Mode Name | Protected Mode Valid | IA-32e Mode Name | IA-32e Mode Valid
0 | Reserved | No | Reserved | No
1 | Available 16-bit TSS | Yes | Reserved | No
2 | LDT | Yes | LDT | Yes
3 | Busy 16-bit TSS | Yes | Reserved | No
4 | 16-bit call gate | Yes | Reserved | No
5 | 16-bit/32-bit task gate | Yes | Reserved | No
6 | 16-bit interrupt gate | No | Reserved | No
7 | 16-bit trap gate | No | Reserved | No
8 | Reserved | No | Reserved | No
9 | Available 32-bit TSS | Yes | Available 64-bit TSS | Yes
A | Reserved | No | Reserved | No
B | Busy 32-bit TSS | Yes | Busy 64-bit TSS | Yes
C | 32-bit call gate | Yes | 64-bit call gate | Yes
D | Reserved | No | Reserved | No
E | 32-bit interrupt gate | No | 64-bit interrupt gate | No
F | 32-bit trap gate | No | 64-bit trap gate | No
+
Table 3-53. Segment and Gate Types
+

Operation + ¶ +

+
IF Offset(SRC) > descriptor table limit
+    THEN
+        ZF := 0;
+    ELSE
+        SegmentDescriptor := descriptor referenced by SRC;
+        IF SegmentDescriptor(Type) ≠ conforming code segment
+        and (CPL > DPL) or (RPL > DPL)
+        or SegmentDescriptor(Type) is not valid for instruction
+            THEN
+                ZF := 0;
+            ELSE
+                DEST := access rights from SegmentDescriptor as given in Description section;
+                ZF := 1;
+        FI;
+FI;
+
+

Flags Affected + ¶ +

+

The ZF flag is set to 1 if the access rights are loaded successfully; otherwise, it is cleared to 0.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and the memory operand effective address is unaligned while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe LAR instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe LAR instruction cannot be executed in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If the memory operand effective address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory operand effective address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and the memory operand effective address is unaligned while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/lddqu.html b/x86/lddqu.html new file mode 100644 index 0000000..997c9e7 --- /dev/null +++ b/x86/lddqu.html @@ -0,0 +1,101 @@ + +LDDQU + — Load Unaligned Integer 128 Bits

LDDQU + — Load Unaligned Integer 128 Bits

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
F2 0F F0 /r LDDQU xmm1, memRMV/VSSE3Load unaligned data from mem and return double quadword in xmm1.
VEX.128.F2.0F.WIG F0 /r VLDDQU xmm1, m128RMV/VAVXLoad unaligned packed integer values from mem to xmm1.
VEX.256.F2.0F.WIG F0 /r VLDDQU ymm1, m256RMV/VAVXLoad unaligned packed integer values from mem to ymm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

The instruction is functionally similar to (V)MOVDQU ymm/xmm, m256/m128 for loading from memory. That is: 32/16 bytes of data starting at an address specified by the source memory operand (second operand) are fetched from memory and placed in a destination register (first operand). The source operand need not be aligned on a 32/16-byte boundary. Up to 64/32 bytes may be loaded from memory; this is implementation dependent.

+

This instruction may improve performance relative to (V)MOVDQU if the source operand crosses a cache line boundary. In situations that require the data loaded by (V)LDDQU be modified and stored to the same location, use (V)MOVDQU or (V)MOVDQA instead of (V)LDDQU. To move a double quadword to or from memory locations that are known to be aligned on 16-byte boundaries, use the (V)MOVDQA instruction.

+

Implementation Notes + ¶ +

+
    +
  • If the source is aligned to a 32/16-byte boundary, based on the implementation, the 32/16 bytes may be loaded more than once. For that reason, the usage of (V)LDDQU should be avoided when using uncached or write-combining (WC) memory regions. For uncached or WC memory regions, keep using (V)MOVDQU.
  • +
  • This instruction is a replacement for (V)MOVDQU (load) in situations where cache line splits significantly affect performance. It should not be used in situations where store-load forwarding is performance critical. If performance of store-load forwarding is critical to the application, use (V)MOVDQA store-load pairs when data is 256/128-bit aligned or (V)MOVDQU store-load pairs when data is 256/128-bit unaligned.
  • +
  • If the memory address is not aligned on 32/16-byte boundary, some implementations may load up to 64/32 bytes and return 32/16 bytes in the destination. Some processor implementations may issue multiple loads to access the appropriate 32/16 bytes. Developers of multi-threaded or multi-processor software should be aware that on these processors the loads will be performed in a non-atomic way.
  • +
  • If alignment checking is enabled (CR0.AM = 1, RFLAGS.AC = 1, and CPL = 3), an alignment-check exception (#AC) may or may not be generated (depending on processor implementation) when the memory address is not aligned on an 8-byte boundary.
+

In 64-bit mode, use of the REX.R prefix permits this instruction to access additional registers (XMM8-XMM15).

+

Note: In VEX-encoded versions, VEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+

LDDQU (128-bit Legacy SSE Version) + ¶ +

+
DEST[127:0] := SRC[127:0]
+DEST[MAXVL-1:128] (Unmodified)
+
+

VLDDQU (VEX.128 Encoded Version) + ¶ +

+
DEST[127:0] := SRC[127:0]
+DEST[MAXVL-1:128] := 0
+
+

VLDDQU (VEX.256 Encoded Version) + ¶ +

+
DEST[255:0] := SRC[255:0]
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
LDDQU __m128i _mm_lddqu_si128 (__m128i * p);
+
+
VLDDQU __m256i _mm256_lddqu_si256 (__m256i * p);
+
+
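For illustration, a minimal use of the intrinsic above (assumes an SSE3-capable toolchain; the helper name is illustrative):

#include <immintrin.h>
#include <stdint.h>
// Load 16 bytes from an address that need not be 16-byte aligned.
__m128i load16_unaligned(const uint8_t *p)
{
    return _mm_lddqu_si128((const __m128i *)p);
}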

Numeric Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions.”

+

Note that treatment of #AC varies; see the Implementation Notes above.

diff --git a/x86/ldmxcsr.html b/x86/ldmxcsr.html new file mode 100644 index 0000000..073d87e --- /dev/null +++ b/x86/ldmxcsr.html @@ -0,0 +1,82 @@ + +LDMXCSR + — Load MXCSR Register

LDMXCSR + — Load MXCSR Register

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
NP 0F AE /2 LDMXCSR m32MV/VSSELoad MXCSR register from m32.
VEX.LZ.0F.WIG AE /2 VLDMXCSR m32MV/VAVXLoad MXCSR register from m32.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r)N/AN/AN/A
+

Description + ¶ +

+

Loads the source operand into the MXCSR control/status register. The source operand is a 32-bit memory location. See “MXCSR Control and Status Register” in Chapter 10, of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for a description of the MXCSR register and its contents.

+

The LDMXCSR instruction is typically used in conjunction with the (V)STMXCSR instruction, which stores the contents of the MXCSR register in memory.

+

The default MXCSR value at reset is 1F80H.

+

If a (V)LDMXCSR instruction clears a SIMD floating-point exception mask bit and sets the corresponding exception flag bit, a SIMD floating-point exception will not be immediately generated. The exception will be generated only upon the execution of the next instruction that meets both conditions below:

+
    +
  • the instruction must operate on an XMM or YMM register operand,
  • +
  • the instruction causes that particular SIMD floating-point exception to be reported.
+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

An attempt to execute VLDMXCSR encoded with VEX.L = 1 will cause an #UD exception.

+

Note: In VEX-encoded versions, VEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Operation + ¶ +

+
MXCSR := m32;
+
+

C/C++ Compiler Intrinsic Equivalent + ¶ +

+
_mm_setcsr(unsigned int i)
+
+
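For illustration, a sketch that rewrites MXCSR through the intrinsics (the helper name is illustrative; the bit positions used are FTZ = bit 15 and DAZ = bit 6):

#include <immintrin.h>
// Read MXCSR (STMXCSR), set flush-to-zero and denormals-are-zero, and write it
// back (LDMXCSR). The exception mask bits of the 1F80H default are preserved.
void enable_ftz_daz(void)
{
    unsigned int csr = _mm_getcsr();
    _mm_setcsr(csr | 0x8000u | 0x0040u);
}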

Numeric Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-22, “Type 5 Class Exception Conditions,” additionally:

+ + + + + + +
#GPFor an attempt to set reserved bits in MXCSR.
#UDIf VEX.vvvv ≠ 1111B.
diff --git a/x86/lds.les.lfs.lgs.lss.html b/x86/lds.les.lfs.lgs.lss.html new file mode 100644 index 0000000..4360e29 --- /dev/null +++ b/x86/lds.les.lfs.lgs.lss.html @@ -0,0 +1,333 @@ + +LDS/LES/LFS/LGS/LSS + — Load Far Pointer

LDS/LES/LFS/LGS/LSS + — Load Far Pointer

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
C5 /rLDS r16,m16:16RMInvalidValidLoad DS:r16 with far pointer from memory.
C5 /rLDS r32,m16:32RMInvalidValidLoad DS:r32 with far pointer from memory.
0F B2 /rLSS r16,m16:16RMValidValidLoad SS:r16 with far pointer from memory.
0F B2 /rLSS r32,m16:32RMValidValidLoad SS:r32 with far pointer from memory.
REX + 0F B2 /rLSS r64,m16:64RMValidN.E.Load SS:r64 with far pointer from memory.
C4 /rLES r16,m16:16RMInvalidValidLoad ES:r16 with far pointer from memory.
C4 /rLES r32,m16:32RMInvalidValidLoad ES:r32 with far pointer from memory.
0F B4 /rLFS r16,m16:16RMValidValidLoad FS:r16 with far pointer from memory.
0F B4 /rLFS r32,m16:32RMValidValidLoad FS:r32 with far pointer from memory.
REX + 0F B4 /rLFS r64,m16:64RMValidN.E.Load FS:r64 with far pointer from memory.
0F B5 /rLGS r16,m16:16RMValidValidLoad GS:r16 with far pointer from memory.
0F B5 /rLGS r32,m16:32RMValidValidLoad GS:r32 with far pointer from memory.
REX + 0F B5 /rLGS r64,m16:64RMValidN.E.Load GS:r64 with far pointer from memory.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Loads a far pointer (segment selector and offset) from the second operand (source operand) into a segment register and the first operand (destination operand). The source operand specifies a 48-bit or a 32-bit pointer in memory depending on the current setting of the operand-size attribute (32 bits or 16 bits, respectively). The instruction opcode and the destination operand specify a segment register/general-purpose register pair. The 16-bit segment selector from the source operand is loaded into the segment register specified with the opcode (DS, SS, ES, FS, or GS). The 32-bit or 16-bit offset is loaded into the register specified with the destination operand.

+

If one of these instructions is executed in protected mode, additional information from the segment descriptor pointed to by the segment selector in the source operand is loaded in the hidden part of the selected segment register.

+

Also in protected mode, a NULL selector (values 0000 through 0003) can be loaded into DS, ES, FS, or GS registers without causing a protection exception. (Any subsequent reference to a segment whose corresponding segment register is loaded with a NULL selector, causes a general-protection exception (#GP) and no memory reference to the segment occurs.)

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Using a REX prefix in the form of REX.W promotes operation to specify a source operand referencing an 80-bit pointer (16-bit selector, 64-bit offset) in memory. Using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). See the summary chart at the beginning of this section for encoding data and limits.

+

Operation + ¶ +

+
64-BIT_MODE
+    IF SS is loaded
+        THEN
+            IF SegmentSelector = NULL and ( (RPL = 3) or
+                    (RPL ≠ 3 and RPL ≠ CPL) )
+                THEN #GP(0);
+            ELSE IF descriptor is in non-canonical space
+                THEN #GP(selector); FI;
+            ELSE IF Segment selector index is not within descriptor table limits
+                    or segment selector RPL ≠ CPL
+                    or access rights indicate nonwritable data segment
+                    or DPL ≠ CPL
+                THEN #GP(selector); FI;
+            ELSE IF Segment marked not present
+                THEN #SS(selector); FI;
+            FI;
+            SS := SegmentSelector(SRC);
+            SS := SegmentDescriptor([SRC]);
+    ELSE IF attempt to load DS, or ES
+        THEN #UD;
+    ELSE IF FS, or GS is loaded with non-NULL segment selector
+        THEN IF Segment selector index is not within descriptor table limits
+            or access rights indicate segment neither data nor readable code segment
+            or segment is data or nonconforming-code segment
+            and ( RPL > DPL or CPL > DPL)
+                THEN #GP(selector); FI;
+            ELSE IF Segment marked not present
+                THEN #NP(selector); FI;
+            FI;
+            SegmentRegister := SegmentSelector(SRC) ;
+            SegmentRegister := SegmentDescriptor([SRC]);
+        FI;
+    ELSE IF FS, or GS is loaded with a NULL selector:
+        THEN
+            SegmentRegister := NULLSelector;
+            SegmentRegister(DescriptorValidBit) := 0; FI; (* Hidden flag;
+                not accessible by software *)
+    FI;
+    DEST := Offset(SRC);
+PROTECTED MODE OR COMPATIBILITY MODE;
+    IF SS is loaded
+        THEN
+            IF SegmentSelector = NULL
+                THEN #GP(0);
+            ELSE IF Segment selector index is not within descriptor table limits
+                    or segment selector RPL ≠ CPL
+                    or access rights indicate nonwritable data segment
+                    or DPL ≠ CPL
+                THEN #GP(selector); FI;
+            ELSE IF Segment marked not present
+                THEN #SS(selector); FI;
+            FI;
+            SS := SegmentSelector(SRC);
+            SS := SegmentDescriptor([SRC]);
+    ELSE IF DS, ES, FS, or GS is loaded with non-NULL segment selector
+        THEN IF Segment selector index is not within descriptor table limits
+            or access rights indicate segment neither data nor readable code segment
+            or segment is data or nonconforming-code segment
+            and (RPL > DPL or CPL > DPL)
+                THEN #GP(selector); FI;
+            ELSE IF Segment marked not present
+                THEN #NP(selector); FI;
+            FI;
+            SegmentRegister := SegmentSelector(SRC) AND RPL;
+            SegmentRegister := SegmentDescriptor([SRC]);
+        FI;
+    ELSE IF DS, ES, FS, or GS is loaded with a NULL selector:
+        THEN
+            SegmentRegister := NULLSelector;
+            SegmentRegister(DescriptorValidBit) := 0; FI; (* Hidden flag;
+                not accessible by software *)
+    FI;
+    DEST := Offset(SRC);
+Real-Address or Virtual-8086 Mode
+    SegmentRegister := SegmentSelector(SRC); FI;
+    DEST := Offset(SRC);
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#UDIf source operand is not a memory location.
If the LOCK prefix is used.
#GP(0)If a NULL selector is loaded into the SS register.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#GP(selector)If the SS register is being loaded and any of the following is true: the segment selector index is not within the descriptor table limits, the segment selector RPL is not equal to CPL, the segment is a non-writable data segment, or DPL is not equal to CPL.
If the DS, ES, FS, or GS register is being loaded with a non-NULL segment selector and any of the following is true: the segment selector index is not within descriptor table limits, the segment is neither a data nor a readable code segment, or the segment is a data or nonconforming-code segment and both RPL and CPL are greater than DPL.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#SS(selector)If the SS register is being loaded and the segment is marked not present.
#NP(selector)If DS, ES, FS, or GS register is being loaded with a non-NULL segment selector and the segment is marked not present.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf source operand is not a memory location.
If the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#UDIf source operand is not a memory location.
If the LOCK prefix is used.
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the memory address is in a non-canonical form.
If a NULL selector is attempted to be loaded into the SS register in compatibility mode.
If a NULL selector is attempted to be loaded into the SS register in CPL3 and 64-bit mode.
If a NULL selector is attempted to be loaded into the SS register in non-CPL3 and 64-bit mode where its RPL is not equal to CPL.
#GP(Selector)If the FS, or GS register is being loaded with a non-NULL segment selector and any of the following is true: the segment selector index is not within descriptor table limits, the memory address of the descriptor is non-canonical, the segment is neither a data nor a readable code segment, or the segment is a data or nonconforming-code segment and both RPL and CPL are greater than DPL.
If the SS register is being loaded and any of the following is true: the segment selector index is not within the descriptor table limits, the memory address of the descriptor is non-canonical, the segment selector RPL is not equal to CPL, the segment is a nonwritable data segment, or DPL is not equal to CPL.
#SS(0)If a memory operand effective address is non-canonical
#SS(Selector)If the SS register is being loaded and the segment is marked not present.
#NP(selector)If FS, or GS register is being loaded with a non-NULL segment selector and the segment is marked not present.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf source operand is not a memory location.
If the LOCK prefix is used.
diff --git a/x86/ldtilecfg.html b/x86/ldtilecfg.html new file mode 100644 index 0000000..e189645 --- /dev/null +++ b/x86/ldtilecfg.html @@ -0,0 +1,182 @@ + +LDTILECFG + — Load Tile Configuration

LDTILECFG + — Load Tile Configuration

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.128.NP.0F38.W0 49 !(11):000:bbb LDTILECFG m512AV/N.E.AMX-TILELoad tile configuration as specified in m512.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:r/m (r)N/AN/AN/A
+

Description + ¶ +

+

The LDTILECFG instruction takes an operand containing a pointer to a 64-byte memory location containing the description of the tiles to be supported. In order to configure the tiles, the AMX-TILE bit in CPUID must be set and the operating system has to have enabled the tiles architecture.

+

The memory area contains the palette and describes how many tiles are being used and defines each tile in terms of rows and column bytes. Requests must be compatible with the restrictions provided by CPUID; see Table 3-10 below.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Byte(s)   Field Name      Description
0         palette         Palette selects the supported configuration of the tiles that will be used.
1         start_row       start_row is used for storing the restart values for interrupted operations.
2-15      reserved        Must be zero.
16-17     tile0.colsb     Tile 0 bytes per row.
18-19     tile1.colsb     Tile 1 bytes per row.
20-21     tile2.colsb     Tile 2 bytes per row.
...       ...             (sequence continues)
30-31     tile7.colsb     Tile 7 bytes per row.
32-47     reserved        Must be zero.
48        tile0.rows      Tile 0 rows.
49        tile1.rows      Tile 1 rows.
50        tile2.rows      Tile 2 rows.
...       ...             (sequence continues)
55        tile7.rows      Tile 7 rows.
56-63     reserved        Must be zero.
+
Table 3-10. Memory Area Layout
+

If a tile row and column pair is not used to specify tile parameters, they must have the value zero. All enabled tiles (based on the palette) must be configured. Specifying tile parameters for more tiles than the implementation limit or the palette limit results in a #GP fault.

+

If the palette_id is zero, that signifies the INIT state for both TILECFG and TILEDATA. Tiles are zeroed in the INIT state. The only legal non-INIT value for palette_id is 1.

+

Any attempt to execute the LDTILECFG instruction inside an Intel TSX transaction will result in a transaction abort.

+

Operation + ¶ +

+

LDTILECFG mem + ¶ +

+
error := False
+buf := read_memory(mem, 64)
+temp_tilecfg.palette_id := buf.byte[0]
+if temp_tilecfg.palette_id > max_palette:
+    error := True
+if not xcr0_supports_palette(temp_tilecfg.palette_id):
+    error := True
+if temp_tilecfg.palette_id !=0:
+    temp_tilecfg.start_row := buf.byte[1]
+    if buf.byte[2..15] is nonzero:
+        error := True
+    p := 16
+    # configure columns
+    for n in 0 ... palette_table[temp_tilecfg.palette_id].max_names-1:
+        temp_tilecfg.t[n].colsb:= buf.word[p/2]
+        p := p + 2
+        if temp_tilecfg.t[n].colsb > palette_table[temp_tilecfg.palette_id].bytes_per_row:
+            error := True
+    if nonzero(buf[p...47]):
+        error := True
+    # configure rows
+    p := 48
+    for n in 0 ... palette_table[temp_tilecfg.palette_id].max_names-1:
+        temp_tilecfg.t[n].rows:= buf.byte[p]
+        if temp_tilecfg.t[n].rows > palette_table[temp_tilecfg.palette_id].max_rows:
+            error := True
+        p := p + 1
+    if nonzero(buf[p...63]):
+        error := True
+    # validate each tile's row & col configs are reasonable and enable the valid tiles
+    for n in 0 ... palette_table[temp_tilecfg.palette_id].max_names-1:
+        if temp_tilecfg.t[n].rows !=0 and temp_tilecfg.t[n].colsb != 0:
+            temp_tilecfg.t[n].valid := 1
+        elif temp_tilecfg.t[n].rows == 0 and temp_tilecfg.t[n].colsb == 0:
+            temp_tilecfg.t[n].valid := 0
+        else:
+            error := True // one of rows or colsb was 0 but not both.
+if error:
+    #GP
+elif temp_tilecfg.palette_id == 0:
+    TILES_CONFIGURED := 0 // init state
+    tilecfg := 0 // equivalent to 64B of zeros
+    zero_all_tile_data()
+else:
+    tilecfg := temp_tilecfg
+    zero_all_tile_data()
+    TILES_CONFIGURED := 1
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
LDTILECFG void _tile_loadconfig(const void *);
+
+
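As an illustrative sketch (not part of the SDM text), the 64-byte memory operand can be populated from C and handed to the intrinsic above. It assumes a compiler with AMX intrinsic support (immintrin.h, built with -mamx-tile or equivalent); the struct and function names are hypothetical and the struct simply mirrors the Table 3-10 layout.

#include <immintrin.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical struct mirroring the Table 3-10 layout (64 bytes). */
struct tile_config {
    uint8_t  palette_id;      /* byte 0                                   */
    uint8_t  start_row;       /* byte 1                                   */
    uint8_t  reserved0[14];   /* bytes 2-15, must be zero                 */
    uint16_t colsb[8];        /* bytes 16-31, bytes per row, tiles 0-7    */
    uint8_t  reserved1[16];   /* bytes 32-47, must be zero                */
    uint8_t  rows[8];         /* bytes 48-55, rows, tiles 0-7             */
    uint8_t  reserved2[8];    /* bytes 56-63, must be zero                */
};
_Static_assert(sizeof(struct tile_config) == 64, "layout must be 64 bytes");

void configure_one_tile(void)
{
    struct tile_config cfg;
    memset(&cfg, 0, sizeof(cfg));   /* reserved fields and unused tiles are zero */
    cfg.palette_id = 1;             /* only legal non-INIT palette value         */
    cfg.colsb[0]   = 64;            /* tile 0: 64 bytes per row                  */
    cfg.rows[0]    = 16;            /* tile 0: 16 rows                           */
    _tile_loadconfig(&cfg);         /* executes LDTILECFG                        */
}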

Flags Affected + ¶ +

+

None.

+

Exceptions + ¶ +

+

AMX-E1; see Section 2.10, “Intel® AMX Instruction Exception Classes,” for details.

diff --git a/x86/lea.html b/x86/lea.html new file mode 100644 index 0000000..19e0876 --- /dev/null +++ b/x86/lea.html @@ -0,0 +1,179 @@ + +LEA + — Load Effective Address

LEA + — Load Effective Address

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
8D /rLEA r16,mRMValidValidStore effective address for m in register r16.
8D /rLEA r32,mRMValidValidStore effective address for m in register r32.
REX.W + 8D /rLEA r64,mRMValidN.E.Store effective address for m in register r64.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Computes the effective address of the second operand (the source operand) and stores it in the first operand (destination operand). The source operand is a memory address (offset part) specified with one of the processor’s addressing modes; the destination operand is a general-purpose register. The address-size and operand-size attributes affect the action performed by this instruction, as shown in the following table. The operand-size attribute of the instruction is determined by the chosen register; the address-size attribute is determined by the attribute of the code segment.

+
+ + + + + + + + + + + + + + + + + + + + +
Operand Size   Address Size   Action Performed
16             16             16-bit effective address is calculated and stored in requested 16-bit register destination.
16             32             32-bit effective address is calculated. The lower 16 bits of the address are stored in the requested 16-bit register destination.
32             16             16-bit effective address is calculated. The 16-bit address is zero-extended and stored in the requested 32-bit register destination.
32             32             32-bit effective address is calculated and stored in the requested 32-bit register destination.
+
Table 3-54. Non-64-bit Mode LEA Operation with Address and Operand Size Attributes
+

Different assemblers may use different algorithms based on the size attribute and symbolic reference of the source operand.

+

In 64-bit mode, the instruction’s destination operand is governed by the operand-size attribute; the default operand size is 32 bits. Address calculation is governed by the address-size attribute; the default address size is 64 bits. In 64-bit mode, an address size of 16 bits is not encodable. See Table 3-55.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Operand Size   Address Size   Action Performed
16             32             32-bit effective address is calculated (using 67H prefix). The lower 16 bits of the address are stored in the requested 16-bit register destination (using 66H prefix).
16             64             64-bit effective address is calculated (default address size). The lower 16 bits of the address are stored in the requested 16-bit register destination (using 66H prefix).
32             32             32-bit effective address is calculated (using 67H prefix) and stored in the requested 32-bit register destination.
32             64             64-bit effective address is calculated (default address size) and the lower 32 bits of the address are stored in the requested 32-bit register destination.
64             32             32-bit effective address is calculated (using 67H prefix), zero-extended to 64 bits, and stored in the requested 64-bit register destination (using REX.W).
64             64             64-bit effective address is calculated (default address size) and all 64 bits of the address are stored in the requested 64-bit register destination (using REX.W).
+
Table 3-55. 64-bit Mode LEA Operation with Address and Operand Size Attributes
+
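As a brief sketch (not part of the SDM text), LEA is often used for address arithmetic that does not modify EFLAGS. The example below assumes a GCC/Clang-style compiler on x86-64 (AT&T inline-assembly syntax); the function name is illustrative.

#include <stdint.h>

/* Computes base + index*4 + 10 with a single LEA, leaving EFLAGS untouched. */
static inline uint64_t lea_scale4_plus10(uint64_t base, uint64_t index)
{
    uint64_t dest;
    __asm__ ("leaq 10(%1,%2,4), %0"   /* DEST := base + index*4 + 10 */
             : "=r" (dest)
             : "r" (base), "r" (index));
    return dest;
}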

Operation + ¶ +

+
IF OperandSize = 16 and AddressSize = 16
+    THEN
+        DEST := EffectiveAddress(SRC); (* 16-bit address *)
+    ELSE IF OperandSize = 16 and AddressSize = 32
+        THEN
+            temp := EffectiveAddress(SRC); (* 32-bit address *)
+            DEST := temp[0:15]; (* 16-bit address *)
+        FI;
+    ELSE IF OperandSize = 32 and AddressSize = 16
+        THEN
+            temp := EffectiveAddress(SRC); (* 16-bit address *)
+            DEST := ZeroExtend(temp); (* 32-bit address *)
+        FI;
+    ELSE IF OperandSize = 32 and AddressSize = 32
+        THEN
+            DEST := EffectiveAddress(SRC); (* 32-bit address *)
+        FI;
+    ELSE IF OperandSize = 16 and AddressSize = 64
+        THEN
+            temp := EffectiveAddress(SRC); (* 64-bit address *)
+            DEST := temp[0:15]; (* 16-bit address *)
+        FI;
+    ELSE IF OperandSize = 32 and AddressSize = 64
+        THEN
+            temp := EffectiveAddress(SRC); (* 64-bit address *)
+            DEST := temp[0:31]; (* 32-bit address *)
+        FI;
+    ELSE IF OperandSize = 64 and AddressSize = 64
+        THEN
+            DEST := EffectiveAddress(SRC); (* 64-bit address *)
+        FI;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + +
#UDIf source operand is not a memory location.
If the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/leave.html b/x86/leave.html new file mode 100644 index 0000000..18cbcc3 --- /dev/null +++ b/x86/leave.html @@ -0,0 +1,143 @@ + +LEAVE + — High Level Procedure Exit

LEAVE + — High Level Procedure Exit

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
C9LEAVEZOValidValidSet SP to BP, then pop BP.
C9LEAVEZON.E.ValidSet ESP to EBP, then pop EBP.
C9LEAVEZOValidN.E.Set RSP to RBP, then pop RBP.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Releases the stack frame set up by an earlier ENTER instruction. The LEAVE instruction copies the frame pointer (in the EBP register) into the stack pointer register (ESP), which releases the stack space allocated to the stack frame. The old frame pointer (the frame pointer for the calling procedure that was saved by the ENTER instruction) is then popped from the stack into the EBP register, restoring the calling procedure’s stack frame.

+

A RET instruction is commonly executed following a LEAVE instruction to return program control to the calling procedure.

+
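As an illustrative sketch (GNU toolchain, ELF x86-64, AT&T syntax; the label frame_demo is hypothetical), the classic prologue/epilogue pairing looks like the following, with LEAVE collapsing the MOV RSP,RBP / POP RBP sequence before the RET:

/* File-scope GNU inline assembly: a hand-written frame torn down by LEAVE. */
__asm__(
    ".text\n"
    ".globl frame_demo\n"
    "frame_demo:\n"
    "    pushq %rbp\n"          /* save the caller's frame pointer          */
    "    movq  %rsp, %rbp\n"    /* establish the new frame pointer          */
    "    subq  $32, %rsp\n"     /* allocate 32 bytes of local stack space   */
    "    movl  $42, %eax\n"     /* produce a return value                   */
    "    leave\n"               /* RSP := RBP, then pop RBP                 */
    "    ret\n"
);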

See “Procedure Calls for Block-Structured Languages” in Chapter 7 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for detailed information on the use of the ENTER and LEAVE instructions.

+

In 64-bit mode, the instruction’s default operation size is 64 bits; 32-bit operation cannot be encoded. See the summary chart at the beginning of this section for encoding data and limits.

+

Operation + ¶ +

+
IF StackAddressSize = 32
+    THEN
+        ESP := EBP;
+    ELSE IF StackAddressSize = 64
+        THEN RSP := RBP; FI;
+    ELSE IF StackAddressSize = 16
+        THEN SP := BP; FI;
+FI;
+IF OperandSize = 32
+    THEN EBP := Pop();
+    ELSE IF OperandSize = 64
+        THEN RBP := Pop(); FI;
+    ELSE IF OperandSize = 16
+        THEN BP := Pop(); FI;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#SS(0)If the EBP register points to a location that is not within the limits of the current stack segment.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + +
#GPIf the EBP register points to a location outside of the effective address space from 0 to FFFFH.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If the EBP register points to a location outside of the effective address space from 0 to FFFFH.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + +
#SS(0)If the stack address is in a non-canonical form.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/lfence.html b/x86/lfence.html new file mode 100644 index 0000000..da592ab --- /dev/null +++ b/x86/lfence.html @@ -0,0 +1,61 @@ + +LFENCE + — Load Fence

LFENCE + — Load Fence

+ + + + + + + + + + + + + +
Opcode / InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F AE E8 LFENCEZOV/VSSE2Serializes load operations.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Performs a serializing operation on all load-from-memory instructions that were issued prior the LFENCE instruction. Specifically, LFENCE does not execute until all prior instructions have completed locally, and no later instruction begins execution until LFENCE completes. In particular, an instruction that loads from memory and that precedes an LFENCE receives data from memory prior to completion of the LFENCE. (An LFENCE that follows an instruction that stores to memory might complete before the data being stored have become globally visible.) Instructions following an LFENCE may be fetched from memory before the LFENCE, but they will not execute (even speculatively) until the LFENCE completes.

+

Weakly ordered memory types can be used to achieve higher processor performance through such techniques as out-of-order issue and speculative reads. The degree to which a consumer of data recognizes or knows that the data is weakly ordered varies among applications and may be unknown to the producer of this data. The LFENCE instruction provides a performance-efficient way of ensuring load ordering between routines that produce weakly-ordered results and routines that consume that data.

+

Processors are free to fetch and cache data speculatively from regions of system memory that use the WB, WC, and WT memory types. This speculative fetching can occur at any time and is not tied to instruction execution. Thus, it is not ordered with respect to executions of the LFENCE instruction; data can be brought into the caches speculatively just before, during, or after the execution of an LFENCE instruction.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Specification of the instruction's opcode above indicates a ModR/M byte of E8. For this instruction, the processor ignores the r/m field of the ModR/M byte. Thus, LFENCE is encoded by any opcode of the form 0F AE Ex, where x is in the range 8-F.

+

Operation + ¶ +

+
Wait_On_Following_Instructions_Until(preceding_instructions_complete);
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
void _mm_lfence(void)
+
+
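A common use of this intrinsic (shown here as a sketch, not as prescribed SDM usage) is to fence loads around a RDTSC read so the timestamp is not taken before earlier loads have completed locally. The example assumes GCC/Clang headers (x86intrin.h for __rdtsc); the function and variable names are illustrative.

#include <immintrin.h>
#include <x86intrin.h>   /* __rdtsc() on GCC/Clang */
#include <stdint.h>

uint64_t timed_load(const volatile int *p, int *out)
{
    _mm_lfence();                 /* wait for prior instructions to complete locally */
    uint64_t start = __rdtsc();   /* read TSC only after the fence                   */
    *out = *p;                    /* the load being measured                         */
    _mm_lfence();                 /* the load completes before the next RDTSC        */
    return __rdtsc() - start;
}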

Exceptions (All Modes of Operation) + ¶ +

+

#UD If CPUID.01H:EDX.SSE2[bit 26] = 0.

+

If the LOCK prefix is used.

diff --git a/x86/lgdt.lidt.html b/x86/lgdt.lidt.html new file mode 100644 index 0000000..1fad37d --- /dev/null +++ b/x86/lgdt.lidt.html @@ -0,0 +1,176 @@ + +LGDT/LIDT + — Load Global/Interrupt Descriptor Table Register

LGDT/LIDT + — Load Global/Interrupt Descriptor Table Register

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 01 /2LGDT m16&32MN.E.ValidLoad m into GDTR.
0F 01 /3LIDT m16&32MN.E.ValidLoad m into IDTR.
0F 01 /2LGDT m16&64MValidN.E.Load m into GDTR.
0F 01 /3LIDT m16&64MValidN.E.Load m into IDTR.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r)N/AN/AN/A
+

Description + ¶ +

+

Loads the values in the source operand into the global descriptor table register (GDTR) or the interrupt descriptor table register (IDTR). The source operand specifies a 6-byte memory location that contains the base address (a linear address) and the limit (size of table in bytes) of the global descriptor table (GDT) or the interrupt descriptor table (IDT). If operand-size attribute is 32 bits, a 16-bit limit (lower 2 bytes of the 6-byte data operand) and a 32-bit base address (upper 4 bytes of the data operand) are loaded into the register. If the operand-size attribute is 16 bits, a 16-bit limit (lower 2 bytes) and a 24-bit base address (third, fourth, and fifth byte) are loaded. Here, the high-order byte of the operand is not used and the high-order byte of the base address in the GDTR or IDTR is filled with zeros.

+

The LGDT and LIDT instructions are used only in operating-system software; they are not used in application programs. They are the only instructions that directly load a linear address (that is, not a segment-relative address) and a limit in protected mode. They are commonly executed in real-address mode to allow processor initialization prior to switching to protected mode.

+

In 64-bit mode, the instruction’s operand size is fixed at 8+2 bytes (an 8-byte base and a 2-byte limit). See the summary chart at the beginning of this section for encoding data and limits.

+
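As a sketch of the 64-bit memory operand layout (a 2-byte limit followed by an 8-byte base, hence the packed struct), the fragment below shows how a toy kernel running at CPL 0 might issue LGDT with GNU inline assembly. The struct, function, and gdt_entries symbol are illustrative assumptions, not definitions from this manual.

#include <stdint.h>

struct __attribute__((packed)) descriptor_ptr {
    uint16_t limit;   /* table limit (last valid byte offset) */
    uint64_t base;    /* linear address of the table          */
};

extern uint64_t gdt_entries[];   /* assumed to be defined elsewhere */

static inline void load_gdt(uint16_t limit)
{
    struct descriptor_ptr gdtr = { limit, (uint64_t)gdt_entries };
    __asm__ volatile ("lgdt %0" : : "m" (gdtr));   /* CPL 0 only */
}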

See “SGDT—Store Global Descriptor Table Register” in Chapter 4, of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2B, for information on storing the contents of the GDTR and IDTR.

+

Operation + ¶ +

+
IF Instruction is LIDT
+    THEN
+        IF OperandSize = 16
+            THEN
+                IDTR(Limit) := SRC[0:15];
+                IDTR(Base) := SRC[16:47] AND 00FFFFFFH;
+            ELSE IF 32-bit Operand Size
+                THEN
+                    IDTR(Limit) := SRC[0:15];
+                    IDTR(Base) := SRC[16:47];
+                FI;
+            ELSE IF 64-bit Operand Size (* In 64-Bit Mode *)
+                THEN
+                    IDTR(Limit) := SRC[0:15];
+                    IDTR(Base) := SRC[16:79];
+                FI;
+        FI;
+    ELSE (* Instruction is LGDT *)
+        IF OperandSize = 16
+            THEN
+                GDTR(Limit) := SRC[0:15];
+                GDTR(Base) := SRC[16:47] AND 00FFFFFFH;
+            ELSE IF 32-bit Operand Size
+                THEN
+                    GDTR(Limit) := SRC[0:15];
+                    GDTR(Base) := SRC[16:47];
+                FI;
+            ELSE IF 64-bit Operand Size (* In 64-Bit Mode *)
+                THEN
+                    GDTR(Limit) := SRC[0:15];
+                    GDTR(Base) := SRC[16:79];
+                FI;
+        FI;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
#GP(0)If the current privilege level is not 0.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#UDIf the LOCK prefix is used.
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + +
#UDIf the LOCK prefix is used.
#GPIf the current privilege level is not 0.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the current privilege level is not 0.
If the memory address is in a non-canonical form.
#UDIf the LOCK prefix is used.
#PF(fault-code)If a page fault occurs.
diff --git a/x86/lldt.html b/x86/lldt.html new file mode 100644 index 0000000..597d750 --- /dev/null +++ b/x86/lldt.html @@ -0,0 +1,140 @@ + +LLDT + — Load Local Descriptor Table Register

LLDT + — Load Local Descriptor Table Register

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 00 /2LLDT r/m16MValidValidLoad segment selector r/m16 into LDTR.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r)N/AN/AN/A
+

Description + ¶ +

+

Loads the source operand into the segment selector field of the local descriptor table register (LDTR). The source operand (a general-purpose register or a memory location) contains a segment selector that points to a local descriptor table (LDT). After the segment selector is loaded in the LDTR, the processor uses the segment selector to locate the segment descriptor for the LDT in the global descriptor table (GDT). It then loads the segment limit and base address for the LDT from the segment descriptor into the LDTR. The segment registers DS, ES, SS, FS, GS, and CS are not affected by this instruction, nor is the LDTR field in the task state segment (TSS) for the current task.

+

If bits 2-15 of the source operand are 0, LDTR is marked invalid and the LLDT instruction completes silently. However, all subsequent references to descriptors in the LDT (except by the LAR, VERR, VERW or LSL instructions) cause a general protection exception (#GP).

+

The operand-size attribute has no effect on this instruction.

+

The LLDT instruction is provided for use in operating-system software; it should not be used in application programs. This instruction can only be executed in protected mode or 64-bit mode.

+

In 64-bit mode, the operand size is fixed at 16 bits.

+

Operation + ¶ +

+
IF SRC(Offset) > descriptor table limit
+    THEN #GP(segment selector); FI;
+IF segment selector is valid
+    Read segment descriptor;
+    IF SegmentDescriptor(Type) ≠ LDT
+        THEN #GP(segment selector); FI;
+    IF segment descriptor is not present
+        THEN #NP(segment selector); FI;
+    LDTR(SegmentSelector) := SRC;
+    LDTR(SegmentDescriptor) := GDTSegmentDescriptor;
+ELSE LDTR := INVALID
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#GP(selector)If the selector operand does not point into the Global Descriptor Table or if the entry in the GDT is not a Local Descriptor Table.
Segment selector is beyond GDT limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#NP(selector)If the LDT descriptor is not present.
#PF(fault-code)If a page fault occurs.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe LLDT instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe LLDT instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the current privilege level is not 0.
If the memory address is in a non-canonical form.
#GP(selector)If the selector operand does not point into the Global Descriptor Table or if the entry in the GDT is not a Local Descriptor Table.
Segment selector is beyond GDT limit.
#NP(selector)If the LDT descriptor is not present.
#PF(fault-code)If a page fault occurs.
#UDIf the LOCK prefix is used.
diff --git a/x86/lmsw.html b/x86/lmsw.html new file mode 100644 index 0000000..b01cba8 --- /dev/null +++ b/x86/lmsw.html @@ -0,0 +1,121 @@ + +LMSW + — Load Machine Status Word

LMSW + — Load Machine Status Word

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 01 /6LMSW r/m16MValidValidLoads r/m16 in machine status word of CR0.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r)N/AN/AN/A
+

Description + ¶ +

+

Loads the source operand into the machine status word, bits 0 through 15 of register CR0. The source operand can be a 16-bit general-purpose register or a memory location. Only the low-order 4 bits of the source operand (which contain the PE, MP, EM, and TS flags) are loaded into CR0. The PG, CD, NW, AM, WP, NE, and ET flags of CR0 are not affected. The operand-size attribute has no effect on this instruction.

+

If the PE flag of the source operand (bit 0) is set to 1, the instruction causes the processor to switch to protected mode. While in protected mode, the LMSW instruction cannot be used to clear the PE flag and force a switch back to real-address mode.

+

The LMSW instruction is provided for use in operating-system software; it should not be used in application programs. In protected or virtual-8086 mode, it can only be executed at CPL 0.

+

This instruction is provided for compatibility with the Intel 286 processor; programs and procedures intended to run on IA-32 and Intel 64 processors beginning with Intel386 processors should use the MOV (control registers) instruction to load the whole CR0 register. The MOV CR0 instruction can be used to set and clear the PE flag in CR0, allowing a procedure or program to switch between protected and real-address modes.

+

This instruction is a serializing instruction.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode. Note that the operand size is fixed at 16 bits.

+

See “Changes to Instruction Behavior in VMX Non-Root Operation” in Chapter 26 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3C, for more information about the behavior of this instruction in VMX non-root operation.

+

Operation + ¶ +

+
CR0[0:3] := SRC[0:3];
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + +
#GP(0)The LMSW instruction is not recognized in virtual-8086 mode.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the current privilege level is not 0.
If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#UDIf the LOCK prefix is used.
diff --git a/x86/loadiwkey.html b/x86/loadiwkey.html new file mode 100644 index 0000000..59eae86 --- /dev/null +++ b/x86/loadiwkey.html @@ -0,0 +1,125 @@ + +LOADIWKEY + — Load Internal Wrapping Key With Key Locker

LOADIWKEY + — Load Internal Wrapping Key With Key Locker

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
F3 0F 38 DC 11:rrr:bbb LOADIWKEY xmm1, xmm2, <EAX>, <XMM0>AV/VKLLoad internal wrapping key from xmm1, xmm2, and XMM0.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r)ModRM:r/m (r)Implicit EAX (r)Implicit XMM0 (r)
+

Description + ¶ +

+

The LOADIWKEY1 instruction writes the Key Locker internal wrapping key, which is called IWKey. This IWKey is used by the ENCODEKEY* instructions to wrap keys into handles. Conversely, the AESENC/DEC*KL instructions use IWKey to unwrap those keys from the handles and help verify the handle integrity. For security reasons, no instruction is designed to allow software to directly read the IWKey value.

+

IWKey includes two cryptographic keys as well as metadata. The two cryptographic keys are loaded from register sources so that LOADIWKEY can be executed without the keys ever being in memory.

+

The key input operands are:

+
    +
  • The 256-bit encryption key is loaded from the two explicit operands.
  • +
  • The 128-bit integrity key is loaded from the implicit operand XMM0.
+

The implicit operand EAX specifies the KeySource and whether backing up the key is permitted:

+
    +
  • EAX[0] – When set, the wrapping key being initialized is not permitted to be backed up to platform-scoped storage.
  • +
  • EAX[4:1] – This specifies the KeySource, which is the type of key. Currently only two encodings are supported. A KeySource of 0 indicates that the key input operands described above should be directly stored as the internal wrapping keys. LOADIWKEY with a KeySource of 1 will have random numbers from the on-chip random number generator XORed with the source registers (including XMM0) so that the software that executes the LOADIWKEY does not know the actual IWKey encryption and integrity keys. Software can choose to put additional random data into the source registers so that other sources of random data are combined with the hardware random number generator supplied value. Software should always check ZF after executing LOADIWKEY with KeySource of 1 as this operation may fail due to it being unable to get sufficient full-entropy data from the on-chip random number generator. Both KeySource of 0 and 1 specify that IWKey be used with the AES-GCM-SIV algorithm. CPUID.19H.ECX[1] enumerates support for KeySource of 1. All other KeySource encodings are reserved.
  • +
  • EAX[31:5] – Reserved.
+

1. Further details on Key Locker and usage of this instruction can be found here:

+

https://software.intel.com/content/www/us/en/develop/download/intel-key-locker-specification.html. + ¶ +

+

Operation + ¶ +

+

LOADIWKEY + ¶ +

+
IF CPL > 0
+                    // LOADIWKEY only allowed at ring 0 (supervisor mode)
+    THEN #GP (0); FI;
+IF EAX[4:1] > 1
+                    // Reserved KeySource encoding used
+    THEN #GP (0); FI;
+IF EAX[31:5] != 0
+                    // Reserved bit in EAX is set
+    THEN #GP (0); FI;
+IF EAX[0] AND (CPUID.19H.ECX[0] == 0)
+                        // NoBackup is not supported on this part
+    THEN #GP (0); FI;
+IF (EAX[4:1] == 1) AND (CPUID.19H.ECX[1] == 0)
+                        // KeySource of 1 is not supported on this part
+    THEN #GP (0); FI;
+IF (EAX[4:1] == 0) // KeySource of 0
+    THEN
+        IWKey.Encryption Key[127:0] := SRC2[127:0];
+        IWKey.Encryption Key[255:128] := SRC1[127:0];
+        IWKey.IntegrityKey[127:0] := XMM0[127:0];
+        IWKey.NoBackup := EAX[0];
+        IWKey.KeySource := EAX[4:1];
+        RFLAGS.ZF := 0;
+    ELSE // KeySource of 1. See RDSEED definition for details of randomness
+        IF HW_NRND_GEN.ready == 1 // Full-entropy random data from RDSEED hardware block was received
+            THEN
+                IWKey.Encryption Key[127:0] := SRC2[127:0] XOR HW_NRND_GEN.data[127:0];
+                IWKey.Encryption Key[255:128] := SRC1[127:0] XOR HW_NRND_GEN.data[255:128];
+                IWKey.IntegrityKey[127:0] := XMM0[127:0] XOR HW_NRND_GEN.data[383:256];
+                IWKey.NoBackup := EAX[0];
+                IWKey.KeySource := EAX[4:1];
+                RFLAGS.ZF := 0;
+            ELSE // Random data was not returned from RDSEED hardware block. IWKey was not loaded
+                RFLAGS.ZF := 1;
+        FI;
+FI;
+RFLAGS.OF, SF, AF, PF, CF := 0;
+
+

Flags Affected + ¶ +

+

ZF is set to 0 if the operation succeeded and set to 1 if the operation failed due to full-entropy random data not being received from RDSEED. The other arithmetic flags (OF, SF, AF, PF, CF) are cleared to 0.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
LOADIWKEY void _mm_loadiwkey(unsigned int ctl, __m128i intkey, __m128i enkey_lo, __m128i enkey_hi);
+
+
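A minimal usage sketch of the intrinsic above (ring 0 only, compiler with Key Locker intrinsic support assumed, e.g. -mkl): it loads an IWKey directly (KeySource = 0) from caller-supplied key material and sets the NoBackup bit. The function and variable names are illustrative assumptions.

#include <immintrin.h>

void load_wrapping_key(__m128i integrity_key,
                       __m128i enc_key_lo, __m128i enc_key_hi)
{
    unsigned int ctl = 0x1;   /* EAX[0] = 1: NoBackup; EAX[4:1] = 0: KeySource 0 */
    _mm_loadiwkey(ctl, integrity_key, enc_key_lo, enc_key_hi);
}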

Exceptions (All Operating Modes) + ¶ +

+

#GP If CPL > 0. (Does not apply in real-address mode.)

+

If EAX[4:1] > 1.

+

If EAX[31:5] != 0.

+

If (EAX[0] == 1) AND (CPUID.19H.ECX[0] == 0).

+

If (EAX[4:1] == 1) AND (CPUID.19H.ECX[1] == 0).

+

#UD If the LOCK prefix is used.

+

If CPUID.07H:ECX.KL[bit 23] = 0.

+

If CR4.KL = 0.

+

If CR0.EM = 1.

+

If CR4.OSFXSR = 0.

+

#NM If CR0.TS = 1.

diff --git a/x86/lock.html b/x86/lock.html new file mode 100644 index 0000000..ddb1a40 --- /dev/null +++ b/x86/lock.html @@ -0,0 +1,90 @@ + +LOCK + — Assert LOCK# Signal Prefix

LOCK + — Assert LOCK# Signal Prefix

+ +

Opcode1

+ + + + + + + + + + + + + + +
InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
F0LOCKZOValidValidAsserts LOCK# signal for duration of the accompanying instruction.
+
+

1. See IA-32 Architecture Compatibility section below.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Causes the processor’s LOCK# signal to be asserted during execution of the accompanying instruction (turns the instruction into an atomic instruction). In a multiprocessor environment, the LOCK# signal ensures that the processor has exclusive use of any shared memory while the signal is asserted.

+

In most IA-32 and all Intel 64 processors, locking may occur without the LOCK# signal being asserted. See the “IA-32 Architecture Compatibility” section below for more details.

+

The LOCK prefix can be prepended only to the following instructions and only to those forms of the instructions where the destination operand is a memory operand: ADD, ADC, AND, BTC, BTR, BTS, CMPXCHG, CMPXCH8B, CMPXCHG16B, DEC, INC, NEG, NOT, OR, SBB, SUB, XOR, XADD, and XCHG. If the LOCK prefix is used with one of these instructions and the source operand is a memory operand, an undefined opcode exception (#UD) may be generated. An undefined opcode exception will also be generated if the LOCK prefix is used with any instruction not in the above list. The XCHG instruction always asserts the LOCK# signal regardless of the presence or absence of the LOCK prefix.

+

The LOCK prefix is typically used with the BTS instruction to perform a read-modify-write operation on a memory location in shared memory environment.

+
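The fragment below sketches that pattern from C using GNU inline assembly (GCC/Clang assumed; a portable alternative would be __atomic_fetch_or). It atomically sets one bit of a shared doubleword and returns the bit’s previous value; the function name is illustrative.

#include <stdint.h>

/* Atomically test-and-set bit `bit` (0-31) of *word with LOCK BTS. */
static inline int lock_bts(volatile uint32_t *word, uint32_t bit)
{
    uint8_t old;
    __asm__ volatile ("lock btsl %2, %1\n\t"
                      "setc %0"              /* CF holds the original bit value */
                      : "=q" (old), "+m" (*word)
                      : "Ir" (bit)
                      : "cc", "memory");
    return old;
}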

The integrity of the LOCK prefix is not affected by the alignment of the memory field. Memory locking is observed for arbitrarily misaligned fields.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

IA-32 Architecture Compatibility + ¶ +

+

Beginning with the P6 family processors, when the LOCK prefix is prefixed to an instruction and the memory area being accessed is cached internally in the processor, the LOCK# signal is generally not asserted. Instead, only the processor’s cache is locked. Here, the processor’s cache coherency mechanism ensures that the operation is carried out atomically with regard to memory. See “Effects of a Locked Operation on Internal Processor Caches” in Chapter 9 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A, for more information on locking of caches.

+

Operation + ¶ +

+
AssertLOCK#(DurationOfAccompaningInstruction);
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + +
#UDIf the LOCK prefix is used with an instruction not listed: ADD, ADC, AND, BTC, BTR, BTS, CMPXCHG, CMPXCH8B, CMPXCHG16B, DEC, INC, NEG, NOT, OR, SBB, SUB, XOR, XADD, XCHG.
Other exceptions can be generated by the instruction when the LOCK prefix is applied.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/lods.lodsb.lodsw.lodsd.lodsq.html b/x86/lods.lodsb.lodsw.lodsd.lodsq.html new file mode 100644 index 0000000..019f584 --- /dev/null +++ b/x86/lods.lodsb.lodsw.lodsd.lodsq.html @@ -0,0 +1,211 @@ + +LODS/LODSB/LODSW/LODSD/LODSQ + — Load String

LODS/LODSB/LODSW/LODSD/LODSQ + — Load String

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
ACLODS m8ZOValidValidFor legacy mode, Load byte at address DS:(E)SI into AL. For 64-bit mode load byte at address (R)SI into AL.
ADLODS m16ZOValidValidFor legacy mode, Load word at address DS:(E)SI into AX. For 64-bit mode load word at address (R)SI into AX.
ADLODS m32ZOValidValidFor legacy mode, Load dword at address DS:(E)SI into EAX. For 64-bit mode load dword at address (R)SI into EAX.
REX.W + ADLODS m64ZOValidN.E.Load qword at address (R)SI into RAX.
ACLODSBZOValidValidFor legacy mode, Load byte at address DS:(E)SI into AL. For 64-bit mode load byte at address (R)SI into AL.
ADLODSWZOValidValidFor legacy mode, Load word at address DS:(E)SI into AX. For 64-bit mode load word at address (R)SI into AX.
ADLODSDZOValidValidFor legacy mode, Load dword at address DS:(E)SI into EAX. For 64-bit mode load dword at address (R)SI into EAX.
REX.W + ADLODSQZOValidN.E.Load qword at address (R)SI into RAX.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Loads a byte, word, or doubleword from the source operand into the AL, AX, or EAX register, respectively. The source operand is a memory location, the address of which is read from the DS:ESI or the DS:SI registers (depending on the address-size attribute of the instruction, 32 or 16, respectively). The DS segment may be overridden with a segment override prefix.

+

At the assembly-code level, two forms of this instruction are allowed: the “explicit-operands” form and the “no-operands” form. The explicit-operands form (specified with the LODS mnemonic) allows the source operand to be specified explicitly. Here, the source operand should be a symbol that indicates the size and location of the source value. The destination operand is then automatically selected to match the size of the source operand (the AL register for byte operands, AX for word operands, and EAX for doubleword operands). This explicit-operands form is provided to allow documentation; however, note that the documentation provided by this form can be misleading. That is, the source operand symbol must specify the correct type (size) of the operand (byte, word, or doubleword), but it does not have to specify the correct location. The location is always specified by the DS:(E)SI registers, which must be loaded correctly before the load string instruction is executed.

+

The no-operands form provides “short forms” of the byte, word, and doubleword versions of the LODS instructions. Here also DS:(E)SI is assumed to be the source operand and the AL, AX, or EAX register is assumed to be the destination operand. The size of the source and destination operands is selected with the mnemonic: LODSB (byte loaded into register AL), LODSW (word loaded into AX), or LODSD (doubleword loaded into EAX).

+

After the byte, word, or doubleword is transferred from the memory location into the AL, AX, or EAX register, the (E)SI register is incremented or decremented automatically according to the setting of the DF flag in the EFLAGS register. (If the DF flag is 0, the (E)SI register is incremented; if the DF flag is 1, the ESI register is decremented.) The (E)SI register is incremented or decremented by 1 for byte operations, by 2 for word operations, or by 4 for doubleword operations.

+

In 64-bit mode, use of the REX.W prefix promotes operation to 64 bits. LODS/LODSQ load the quadword at address (R)SI into RAX. The (R)SI register is then incremented or decremented automatically according to the setting of the DF flag in the EFLAGS register.

+

The LODS, LODSB, LODSW, and LODSD instructions can be preceded by the REP prefix for block loads of ECX bytes, words, or doublewords. More often, however, these instructions are used within a LOOP construct because further processing of the data moved into the register is usually necessary before the next transfer can be made. See “REP/REPE/REPZ /REPNE/REPNZ—Repeat String Operation Prefix” in Chapter 4 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2B, for a description of the REP prefix.

+
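As an illustrative sketch of LODSB inside such a LOOP construct (GNU inline assembly, x86-64; DF is clear on entry per the System V ABI; the function name is hypothetical and a plain C loop would normally be preferred):

#include <stddef.h>
#include <stdint.h>

/* Sum the bytes of a buffer with LODSB driven by LOOP. */
static inline uint32_t byte_sum(const uint8_t *buf, size_t len)
{
    uint32_t sum = 0;
    if (len == 0)
        return 0;
    __asm__ volatile (
        "1:\n\t"
        "lodsb\n\t"               /* AL := [RSI]; RSI += 1 (DF = 0)          */
        "movzbl %%al, %%eax\n\t"  /* zero-extend the loaded byte into EAX    */
        "addl %%eax, %0\n\t"      /* accumulate                              */
        "loop 1b"                 /* RCX -= 1; branch while RCX != 0         */
        : "+r" (sum), "+S" (buf), "+c" (len)   /* buf in RSI, len in RCX     */
        :
        : "eax", "cc", "memory");
    return sum;
}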

Operation + ¶ +

+
IF Byte load (* LODSB or LODS m8 *)
+    THEN
+        AL := SRC;
+        IF DF = 0
+            THEN (E)SI := (E)SI + 1;
+            ELSE (E)SI := (E)SI – 1;
+        FI;
+ELSE IF Word load (* LODSW or LODS m16 *)
+    THEN
+        AX := SRC;
+        IF DF = 0
+            THEN (E)SI := (E)SI + 2;
+            ELSE (E)SI := (E)SI – 2;
+        FI;
+ELSE IF Doubleword load (* LODSD or LODS m32 *)
+    THEN
+        EAX := SRC;
+        IF DF = 0
+            THEN (E)SI := (E)SI + 4;
+            ELSE (E)SI := (E)SI – 4;
+        FI;
+ELSE (* Quadword load: LODSQ or LODS m64, 64-bit mode only *)
+    RAX := SRC;
+    IF DF = 0
+        THEN (R)SI := (R)SI + 8;
+        ELSE (R)SI := (R)SI – 8;
+    FI;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/loop.loopcc.html b/x86/loop.loopcc.html new file mode 100644 index 0000000..6b60d5f --- /dev/null +++ b/x86/loop.loopcc.html @@ -0,0 +1,155 @@ + +LOOP/LOOPcc + — Loop According to ECX Counter

LOOP/LOOPcc + — Loop According to ECX Counter

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
E2 cbLOOP rel8DValidValidDecrement count; jump short if count ≠ 0.
E1 cbLOOPE rel8DValidValidDecrement count; jump short if count ≠ 0 and ZF = 1.
E0 cbLOOPNE rel8DValidValidDecrement count; jump short if count ≠ 0 and ZF = 0.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/En Operand 1 Operand 2 Operand 3Operand 4
D Offset N/A N/AN/A
+

Description + ¶ +

+

Performs a loop operation using the RCX, ECX, or CX register as a counter (depending on whether the address size is 64 bits, 32 bits, or 16 bits). Note that the LOOP instruction ignores REX.W; however, the 64-bit address size can be overridden using a 67H prefix.

+

Each time the LOOP instruction is executed, the count register is decremented, then checked for 0. If the count is 0, the loop is terminated and program execution continues with the instruction following the LOOP instruction. If the count is not zero, a near jump is performed to the destination (target) operand, which is presumably the instruction at the beginning of the loop.

+

The target instruction is specified with a relative offset (a signed offset relative to the current value of the instruction pointer in the IP/EIP/RIP register). This offset is generally specified as a label in assembly code, but at the machine code level, it is encoded as a signed, 8-bit immediate value, which is added to the instruction pointer. Offsets of –128 to +127 are allowed with this instruction.

+

Some forms of the loop instruction (LOOPcc) also accept the ZF flag as a condition for terminating the loop before the count reaches zero. With these forms of the instruction, a condition code (cc) is associated with each instruction to indicate the condition being tested for. Here, the LOOPcc instruction itself does not affect the state of the ZF flag; the ZF flag is changed by other instructions in the loop.

+
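A short sketch of the basic (non-cc) form (GNU inline assembly, x86-64; the "c" constraint binds the counter to RCX, and the function name is an illustrative assumption): each iteration adds the current counter value to a running sum, so the loop computes n + (n-1) + ... + 1.

#include <stdint.h>

static inline uint64_t sum_1_to_n(uint64_t n)
{
    uint64_t sum = 0;
    if (n == 0)
        return 0;
    __asm__ (
        "1:\n\t"
        "add %1, %0\n\t"   /* sum += current counter value            */
        "loop 1b"          /* RCX := RCX - 1; jump to 1 if RCX != 0   */
        : "+r" (sum), "+c" (n)
        :
        : "cc");
    return sum;
}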

Operation + ¶ +

+
IF (AddressSize = 32)
+    THEN Count is ECX;
+ELSE IF (AddressSize = 64)
+    Count is RCX;
+ELSE Count is CX;
+FI;
+Count := Count – 1;
+IF Instruction is not LOOP
+    THEN
+        IF (Instruction = LOOPE) or (Instruction = LOOPZ)
+            THEN IF (ZF = 1) and (Count ≠ 0)
+                    THEN BranchCond := 1;
+                    ELSE BranchCond := 0;
+                FI;
+            ELSE (Instruction = LOOPNE) or (Instruction = LOOPNZ)
+                IF (ZF = 0 ) and (Count ≠ 0)
+                    THEN BranchCond := 1;
+                    ELSE BranchCond := 0;
+        FI;
+    ELSE (* Instruction = LOOP *)
+        IF (Count ≠ 0)
+            THEN BranchCond := 1;
+            ELSE BranchCond := 0;
+        FI;
+FI;
+IF BranchCond = 1
+    THEN
+        IF in 64-bit mode (* OperandSize = 64 *)
+            THEN
+                tempRIP := RIP + SignExtend(DEST);
+                IF tempRIP is not canonical
+                    THEN #GP(0);
+                ELSE RIP := tempRIP;
+                FI;
+            ELSE
+                tempEIP := EIP + SignExtend(DEST);
+                IF OperandSize = 16
+                    THEN tempEIP := tempEIP AND 0000FFFFH;
+                FI;
+                IF tempEIP is not within code segment limit
+                    THEN #GP(0);
+                    ELSE EIP := tempEIP;
+                FI;
+        FI;
+    ELSE
+        Terminate loop and continue program execution at (R/E)IP;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + +
#GP(0)If the offset being jumped to is beyond the limits of the CS segment.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + +
#GPIf the offset being jumped to is beyond the limits of the CS segment or is outside of the effective address space from 0 to FFFFH. This condition can occur if a 32-bit address size override prefix is used.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in real address mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + +
#GP(0)If the offset being jumped to is in a non-canonical form.
#UDIf the LOCK prefix is used.
diff --git a/x86/lsl.html b/x86/lsl.html new file mode 100644 index 0000000..1eb480f --- /dev/null +++ b/x86/lsl.html @@ -0,0 +1,176 @@ + +LSL + — Load Segment Limit

LSL + — Load Segment Limit

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 03 /rLSL r16, r16/m16RMValidValidLoad: r16 := segment limit, selector r16/m16.
0F 03 /rLSL r32, r32/m161RMValidValidLoad: r32 := segment limit, selector r32/m16.
REX.W + 0F 03 /rLSL r64, r32/m161RMValidValidLoad: r64 := segment limit, selector r32/m16
+
+

1. For all loads (regardless of destination sizing), only bits 16-0 are used. Other bits are ignored.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Loads the unscrambled segment limit from the segment descriptor specified with the second operand (source operand) into the first operand (destination operand) and sets the ZF flag in the EFLAGS register. The source operand (which can be a register or a memory location) contains the segment selector for the segment descriptor being accessed. The destination operand is a general-purpose register.

+

The processor performs access checks as part of the loading process. Once loaded in the destination register, software can compare the segment limit with the offset of a pointer.

+

The segment limit is a 20-bit value contained in bytes 0 and 1 and in the first 4 bits of byte 6 of the segment descriptor. If the descriptor has a byte granular segment limit (the granularity flag is set to 0), the destination operand is loaded with a byte granular value (byte limit). If the descriptor has a page granular segment limit (the granularity flag is set to 1), the LSL instruction will translate the page granular limit (page limit) into a byte limit before loading it into the destination operand. The translation is performed by shifting the 20-bit “raw” limit left 12 bits and filling the low-order 12 bits with 1s.

+
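A small sketch of that limit expansion in C (derived directly from the rule above; the function name is illustrative): a page-granular raw limit of 0xFFFFF expands to a byte limit of 0xFFFFFFFF.

#include <stdint.h>

static inline uint32_t byte_limit(uint32_t raw_limit20, int g_flag)
{
    raw_limit20 &= 0xFFFFF;                       /* 20-bit raw limit field   */
    return g_flag ? (raw_limit20 << 12) | 0xFFF   /* page granular: shift/fill */
                  : raw_limit20;                  /* byte granular: unchanged  */
}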

When the operand size is 32 bits, the 32-bit byte limit is stored in the destination operand. When the operand size is 16 bits, a valid 32-bit limit is computed; however, the upper 16 bits are truncated and only the low-order 16 bits are loaded into the destination operand.

+

This instruction performs the following checks before it loads the segment limit into the destination register:

+
    +
  • Checks that the segment selector is not NULL.
  • +
  • Checks that the segment selector points to a descriptor that is within the limits of the GDT or LDT being accessed
  • +
  • Checks that the descriptor type is valid for this instruction. All code and data segment descriptors are valid for (can be accessed with) the LSL instruction. The valid special segment and gate descriptor types are given in the following table.
  • +
  • If the segment is not a conforming code segment, the instruction checks that the specified segment descriptor is visible at the CPL (that is, if the CPL and the RPL of the segment selector are less than or equal to the DPL of the segment selector).
+

If the segment descriptor cannot be accessed or is an invalid type for the instruction, the ZF flag is cleared and no value is loaded in the destination operand.

+
+ + + + + + + + + + + + + + + +
Type   Protected Mode Name        Valid   IA-32e Mode Name          Valid
0      Reserved                   No      Reserved                  No
1      Available 16-bit TSS       Yes     Reserved                  No
2      LDT                        Yes     LDT1                      Yes
3      Busy 16-bit TSS            Yes     Reserved                  No
4      16-bit call gate           No      Reserved                  No
5      16-bit/32-bit task gate    No      Reserved                  No
6      16-bit interrupt gate      No      Reserved                  No
7      16-bit trap gate           No      Reserved                  No
8      Reserved                   No      Reserved                  No
9      Available 32-bit TSS       Yes     64-bit TSS1               Yes
A      Reserved                   No      Reserved                  No
B      Busy 32-bit TSS            Yes     Busy 64-bit TSS1          Yes
C      32-bit call gate           No      64-bit call gate          No
D      Reserved                   No      Reserved                  No
E      32-bit interrupt gate      No      64-bit interrupt gate     No
F      32-bit trap gate           No      64-bit trap gate          No
+
Table 3-56. Segment and Gate Descriptor Types
+
+

1. In this case, the descriptor comprises 16 bytes; bits 12:8 of the upper 4 bytes must be 0.

+

Operation + ¶ +

+
IF SRC(Offset) > descriptor table limit
+    THEN ZF := 0; FI;
+Read segment descriptor;
+IF SegmentDescriptor(Type) ≠ conforming code segment
+and (CPL > DPL) OR (RPL > DPL)
+or Segment type is not valid for instruction
+        THEN
+            ZF := 0;
+        ELSE
+            temp := SegmentLimit([SRC]);
+            IF (SegmentDescriptor(G) = 1)
+                THEN temp := (temp << 12) OR 00000FFFH;
+            ELSE IF OperandSize = 32
+                THEN DEST := temp; FI;
+            ELSE IF OperandSize = 64 (* REX.W used *)
+                THEN DEST := temp(* Zero-extended *); FI;
+            ELSE (* OperandSize = 16 *)
+                DEST := temp AND FFFFH;
+            FI;
+FI;
+
+

Flags Affected + ¶ +

+

The ZF flag is set to 1 if the segment limit is loaded successfully; otherwise, it is set to 0.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and the memory operand effective address is unaligned while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe LSL instruction cannot be executed in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe LSL instruction cannot be executed in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If the memory operand effective address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory operand effective address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and the memory operand effective address is unaligned while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/ltr.html b/x86/ltr.html new file mode 100644 index 0000000..343db3a --- /dev/null +++ b/x86/ltr.html @@ -0,0 +1,150 @@ + +LTR + — Load Task Register

LTR + — Load Task Register

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 00 /3LTR r/m16MValidValidLoad r/m16 into task register.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r)N/AN/AN/A
+

Description + ¶ +

+

Loads the source operand into the segment selector field of the task register. The source operand (a general-purpose register or a memory location) contains a segment selector that points to a task state segment (TSS). After the segment selector is loaded in the task register, the processor uses the segment selector to locate the segment descriptor for the TSS in the global descriptor table (GDT). It then loads the segment limit and base address for the TSS from the segment descriptor into the task register. The task pointed to by the task register is marked busy, but a switch to the task does not occur.

+

The LTR instruction is provided for use in operating-system software; it should not be used in application programs. It can only be executed in protected mode when the CPL is 0. It is commonly used in initialization code to establish the first task to be executed.

+

The operand-size attribute has no effect on this instruction.

+

In 64-bit mode, the operand size is still fixed at 16 bits. The instruction references a 16-byte descriptor to load the 64-bit base.

+

Operation + ¶ +

+
IF SRC is a NULL selector
+    THEN #GP(0);
+IF SRC(Offset) > descriptor table limit OR SRC(type) ≠ global
+    THEN #GP(segment selector); FI;
+Read segment descriptor;
+IF segment descriptor is not for an available TSS
+    THEN #GP(segment selector); FI;
+IF segment descriptor is not present
+    THEN #NP(segment selector); FI;
+TSSsegmentDescriptor(busy) := 1;
+(* Locked read-modify-write operation on the entire descriptor when setting busy flag *)
+TaskRegister(SegmentSelector) := SRC;
+TaskRegister(SegmentDescriptor) := TSSSegmentDescriptor;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the source operand contains a NULL segment selector.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#GP(selector)If the source selector points to a segment that is not a TSS or to one for a task that is already busy.
If the selector points to LDT or is beyond the GDT limit.
#NP(selector)If the TSS descriptor is marked not present.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe LTR instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe LTR instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode, as well as the following:

+ + + +
#GP(selector)If the source selector points to a 16-bit TSS.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the current privilege level is not 0.
If the memory address is in a non-canonical form.
If the source operand contains a NULL segment selector.
#GP(selector)If the source selector points to a segment that is not a TSS, to a 16-bit TSS, or to a TSS for a task that is already busy.
If the selector points to LDT or is beyond the GDT limit.
If the descriptor type of the upper 8-byte of the 16-byte descriptor is non-zero.
#NP(selector)If the TSS descriptor is marked not present.
#PF(fault-code)If a page fault occurs.
#UDIf the LOCK prefix is used.
diff --git a/x86/lzcnt.html b/x86/lzcnt.html new file mode 100644 index 0000000..eaf8a10 --- /dev/null +++ b/x86/lzcnt.html @@ -0,0 +1,164 @@ + +LZCNT + — Count the Number of Leading Zero Bits

LZCNT + — Count the Number of Leading Zero Bits

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
F3 0F BD /r LZCNT r16, r/m16RMV/VLZCNTCount the number of leading zero bits in r/m16, return result in r16.
F3 0F BD /r LZCNT r32, r/m32RMV/VLZCNTCount the number of leading zero bits in r/m32, return result in r32.
F3 REX.W 0F BD /r LZCNT r64, r/m64RMV/N.E.LZCNTCount the number of leading zero bits in r/m64, return result in r64.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Counts the number of leading zero bits (starting from the most significant bit) in the source operand (second operand) and returns the count in the destination operand (first operand).

+

LZCNT differs from BSR: LZCNT produces the operand size when the input operand is zero, whereas BSR leaves the destination undefined and sets ZF; for a nonzero input, LZCNT returns OperandSize - 1 minus the bit index that BSR would return. Note that on processors that do not support LZCNT, the instruction byte encoding is executed as BSR.

+

In 64-bit mode, a 64-bit operand size requires REX.W=1.

+

Operation + ¶ +

+
temp := OperandSize - 1
+DEST := 0
+WHILE (temp >= 0) AND (Bit(SRC, temp) = 0)
+DO
+    temp := temp - 1
+    DEST := DEST+ 1
+OD
+IF DEST = OperandSize
+    CF := 1
+ELSE
+    CF := 0
+FI
+IF DEST = 0
+    ZF := 1
+ELSE
+    ZF := 0
+FI
+
+
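A C reference model of the pseudocode above (illustrative only; the helper name is not from the manual). It returns the operand size for a zero input, matching LZCNT rather than BSR.

#include <stdint.h>

static unsigned lzcnt32_ref(uint32_t src)
{
    unsigned dest = 0;
    int temp = 31;                              /* OperandSize - 1 */
    while (temp >= 0 && ((src >> temp) & 1) == 0) {
        temp--;
        dest++;
    }
    /* CF would be set when dest == 32 (src == 0); ZF when dest == 0. */
    return dest;
}

/* On LZCNT-capable hardware, the intrinsic gives the same result:
 *     unsigned n = _lzcnt_u32(src);            // <immintrin.h>
 */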

Flags Affected + ¶ +

+

The ZF flag is set to 1 if the result is zero (that is, the most significant bit of the source is set) and cleared otherwise. The CF flag is set to 1 if the input was zero and cleared otherwise. The OF, SF, PF, and AF flags are undefined.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
LZCNT unsigned __int32 _lzcnt_u32(unsigned __int32 src);
+
+
LZCNT unsigned __int64 _lzcnt_u64(unsigned __int64 src);
+
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)For an illegal memory operand effective address in the CS, DS, ES, FS or GS segments.
If the DS, ES, FS, or GS register is used to access memory and it contains a null segment selector.
#SS(0)For an illegal address in the SS segment.
#PF(fault-code) For a page fault.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GP(0)If any part of the operand lies outside of the effective address space from 0 to 0FFFFH.
#SS(0)For an illegal address in the SS segment.
#UDIf LOCK prefix is used.
+

Virtual 8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If any part of the operand lies outside of the effective address space from 0 to 0FFFFH.
#SS(0)For an illegal address in the SS segment.
#PF(fault-code) For a page fault.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in Protected Mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If the memory address is in a non-canonical form.
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#PF(fault-code) For a page fault.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf LOCK prefix is used.
diff --git a/x86/maskmovdqu.html b/x86/maskmovdqu.html new file mode 100644 index 0000000..5e8f415 --- /dev/null +++ b/x86/maskmovdqu.html @@ -0,0 +1,88 @@ + +MASKMOVDQU + — Store Selected Bytes of Double Quadword

MASKMOVDQU + — Store Selected Bytes of Double Quadword

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
66 0F F7 /r MASKMOVDQU xmm1, xmm2RMV/VSSE2Selectively write bytes from xmm1 to memory location using the byte mask in xmm2. The default memory location is specified by DS:DI/EDI/RDI.
VEX.128.66.0F.WIG F7 /r VMASKMOVDQU xmm1, xmm2RMV/VAVXSelectively write bytes from xmm1 to memory location using the byte mask in xmm2. The default memory location is specified by DS:DI/EDI/RDI.
+

Instruction Operand Encoding1 + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Stores selected bytes from the source operand (first operand) into a 128-bit memory location. The mask operand (second operand) selects which bytes from the source operand are written to memory. The source and mask operands are XMM registers. The memory location is specified by the effective address in the DI/EDI/RDI register (the default segment register is DS, but this may be overridden with a segment-override prefix). The memory location does not need to be aligned on a natural boundary. (The size of the store address depends on the address-size attribute.)

+

The most significant bit in each byte of the mask operand determines whether the corresponding byte in the source operand is written to the corresponding byte location in memory: 0 indicates no write and 1 indicates write.

+

The MASKMOVDQU instruction generates a non-temporal hint to the processor to minimize cache pollution. The non-temporal hint is implemented by using a write combining (WC) memory type protocol (see “Caching of Temporal vs. Non-Temporal Data” in Chapter 10, of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1). Because the WC protocol uses a weakly-ordered memory consistency model, a fencing operation implemented with the SFENCE or MFENCE instruction should be used in conjunction with MASKMOVDQU instructions if multiple processors might use different memory types to read/write the destination memory locations.

+
+

1.ModRM.MOD = 011B required

+

Behavior with a mask of all 0s is as follows:

+
    +
  • No data will be written to memory.
  • +
  • Signaling of breakpoints (code or data) is not guaranteed; different processor implementations may signal or not signal these breakpoints.
  • +
  • Exceptions associated with addressing memory and page faults may still be signaled (implementation dependent).
  • +
  • If the destination memory region is mapped as UC or WP, enforcement of associated semantics for these memory types is not guaranteed (that is, is reserved) and is implementation-specific.
+

The MASKMOVDQU instruction can be used to improve performance of algorithms that need to merge data on a byte-by-byte basis. MASKMOVDQU should not cause a read for ownership; doing so generates unnecessary bandwidth since data is to be written directly using the byte-mask without allocating old data prior to the store.

+

In 64-bit mode, use of the REX.R prefix permits this instruction to access additional registers (XMM8-XMM15).

+

Note: In VEX-encoded versions, VEX.vvvv is reserved and must be 1111b; otherwise, the instruction will #UD.

+

An attempt to execute VMASKMOVDQU encoded with VEX.L= 1 will cause an #UD exception.

+

Operation + ¶ +

+
IF (MASK[7] = 1)
+    THEN DEST[DI/EDI] := SRC[7:0] ELSE (* Memory location unchanged *); FI;
+IF (MASK[15] = 1)
+    THEN DEST[DI/EDI +1] := SRC[15:8] ELSE (* Memory location unchanged *); FI;
+    (* Repeat operation for 3rd through 15th bytes in source operand *)
+IF (MASK[127] = 1)
+    THEN DEST[DI/EDI +15] := SRC[127:120] ELSE (* Memory location unchanged *); FI;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
void _mm_maskmoveu_si128(__m128i d, __m128i n, char * p)
+
+
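A hedged usage sketch of the intrinsic above. Building the byte mask with a signed greater-than compare and adding an SFENCE are illustrative choices for this example, not requirements of the instruction.

#include <emmintrin.h>
#include <stdint.h>

static void masked_store_example(uint8_t *dst, __m128i src, __m128i threshold)
{
    /* 0xFF where src > threshold (signed byte compare), 0x00 elsewhere;
     * only the most significant bit of each mask byte matters to MASKMOVDQU. */
    __m128i mask = _mm_cmpgt_epi8(src, threshold);
    _mm_maskmoveu_si128(src, mask, (char *)dst);  /* selective store to DS:[dst] */
    _mm_sfence();                                 /* order the weakly-ordered store */
}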

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions,” additionally:

+ + + + + +
#UDIf VEX.L= 1
If VEX.vvvv ≠ 1111B.
diff --git a/x86/maskmovq.html b/x86/maskmovq.html new file mode 100644 index 0000000..b70e183 --- /dev/null +++ b/x86/maskmovq.html @@ -0,0 +1,74 @@ + +MASKMOVQ + — Store Selected Bytes of Quadword

MASKMOVQ + — Store Selected Bytes of Quadword

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
NP 0F F7 /r MASKMOVQ mm1, mm2RMValidValidSelectively write bytes from mm1 to memory location using the byte mask in mm2. The default memory location is specified by DS:DI/EDI/RDI.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Stores selected bytes from the source operand (first operand) into a 64-bit memory location. The mask operand (second operand) selects which bytes from the source operand are written to memory. The source and mask operands are MMX technology registers. The memory location is specified by the effective address in the DI/EDI/RDI register (the default segment register is DS, but this may be overridden with a segment-override prefix). The memory location does not need to be aligned on a natural boundary. (The size of the store address depends on the address-size attribute.)

+

The most significant bit in each byte of the mask operand determines whether the corresponding byte in the source operand is written to the corresponding byte location in memory: 0 indicates no write and 1 indicates write.

+

The MASKMOVQ instruction generates a non-temporal hint to the processor to minimize cache pollution. The non-temporal hint is implemented by using a write combining (WC) memory type protocol (see “Caching of Temporal vs. Non-Temporal Data” in Chapter 10, of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1). Because the WC protocol uses a weakly-ordered memory consistency model, a fencing operation implemented with the SFENCE or MFENCE instruction should be used in conjunction with MASKMOVQ instructions if multiple processors might use different memory types to read/write the destination memory locations.

+

This instruction causes a transition from x87 FPU to MMX technology state (that is, the x87 FPU top-of-stack pointer is set to 0 and the x87 FPU tag word is set to all 0s [valid]).

+

The behavior of the MASKMOVQ instruction with a mask of all 0s is as follows:

+
    +
  • No data will be written to memory.
  • +
  • Transition from x87 FPU to MMX technology state will occur.
  • +
  • Exceptions associated with addressing memory and page faults may still be signaled (implementation dependent).
  • +
  • Signaling of breakpoints (code or data) is not guaranteed (implementation dependent).
  • +
  • If the destination memory region is mapped as UC or WP, enforcement of associated semantics for these memory types is not guaranteed (that is, is reserved) and is implementation-specific.
+

The MASKMOVQ instruction can be used to improve performance for algorithms that need to merge data on a byte-by-byte basis. It should not cause a read for ownership; doing so generates unnecessary bandwidth since data is to be written directly using the byte-mask without allocating old data prior to the store.

+

In 64-bit mode, the memory address is specified by DS:RDI.

+

Operation + ¶ +

+
IF (MASK[7] = 1)
+    THEN DEST[DI/EDI] := SRC[7:0] ELSE (* Memory location unchanged *); FI;
+IF (MASK[15] = 1)
+    THEN DEST[DI/EDI +1] := SRC[15:8] ELSE (* Memory location unchanged *); FI;
+    (* Repeat operation for 3rd through 7th bytes in source operand *)
+IF (MASK[63] = 1)
+    THEN DEST[DI/EDI +7] := SRC[63:56] ELSE (* Memory location unchanged *); FI;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
+void _mm_maskmove_si64(__m64 d, __m64 n, char * p)
+
+
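A brief usage sketch (assuming compiler support for the MMX/SSE intrinsics involved). It pairs the store with EMMS via _mm_empty() because of the x87-to-MMX state transition noted above.

#include <mmintrin.h>
#include <xmmintrin.h>

static void mmx_masked_store(char *dst, __m64 data, __m64 mask)
{
    _mm_maskmove_si64(data, mask, dst);  /* selective byte store to DS:[dst] */
    _mm_empty();                         /* leave MMX state before any x87 code runs */
}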

Other Exceptions + ¶ +

+

See Table 23-8, “Exception Conditions for Legacy SIMD/MMX Instructions without FP Exception,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

diff --git a/x86/maxpd.html b/x86/maxpd.html new file mode 100644 index 0000000..4a117ab --- /dev/null +++ b/x86/maxpd.html @@ -0,0 +1,190 @@ + +MAXPD + — Maximum of Packed Double Precision Floating-Point Values

MAXPD + — Maximum of Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 5F /r MAXPD xmm1, xmm2/m128AV/VSSE2Return the maximum double precision floating-point values between xmm1 and xmm2/m128.
VEX.128.66.0F.WIG 5F /r VMAXPD xmm1, xmm2, xmm3/m128BV/VAVXReturn the maximum double precision floating-point values between xmm2 and xmm3/m128.
VEX.256.66.0F.WIG 5F /r VMAXPD ymm1, ymm2, ymm3/m256BV/VAVXReturn the maximum packed double precision floating-point values between ymm2 and ymm3/m256.
EVEX.128.66.0F.W1 5F /r VMAXPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512FReturn the maximum packed double precision floating-point values between xmm2 and xmm3/m128/m64bcst and store result in xmm1 subject to writemask k1.
EVEX.256.66.0F.W1 5F /r VMAXPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FReturn the maximum packed double precision floating-point values between ymm2 and ymm3/m256/m64bcst and store result in ymm1 subject to writemask k1.
EVEX.512.66.0F.W1 5F /r VMAXPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{sae}CV/VAVX512FReturn the maximum packed double precision floating-point values between zmm2 and zmm3/m512/m64bcst and store result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD compare of the packed double precision floating-point values in the first source operand and the second source operand and returns the maximum value for each pair of values to the destination operand.

+

If the values being compared are both 0.0s (of either sign), the value in the second operand (source operand) is returned. If a value in the second operand is an SNaN, then SNaN is forwarded unchanged to the destination (that is, a QNaN version of the SNaN is not returned).

+

If only one value is a NaN (SNaN or QNaN) for this instruction, the second operand (source operand), either a NaN or a valid floating-point value, is written to the result. If instead of this behavior, it is required that the NaN source operand (from either the first or second operand) be returned, the action of MAXPD can be emulated using a sequence of instructions, such as a comparison followed by AND, ANDN, and OR.

+
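One possible emulation of that NaN-propagating behavior is sketched below with SSE2 intrinsics. The helper name is illustrative, and this is an emulation along the lines the text suggests, not the instruction's own semantics.

#include <emmintrin.h>

static __m128d maxpd_propagate_nan(__m128d a, __m128d b)
{
    __m128d ord   = _mm_cmpord_pd(a, b);                     /* all-ones where neither lane is NaN */
    __m128d mx    = _mm_max_pd(a, b);                        /* MAXPD result for the ordered lanes */
    __m128d nan_a = _mm_andnot_pd(_mm_cmpord_pd(a, a), a);   /* a where a is NaN                   */
    __m128d nan_b = _mm_andnot_pd(_mm_cmpord_pd(b, b), b);   /* b where b is NaN                   */
    __m128d nan   = _mm_or_pd(nan_a, nan_b);                 /* NaN payload for unordered lanes    */
    return _mm_or_pd(_mm_and_pd(ord, mx), _mm_andnot_pd(ord, nan));
}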

EVEX encoded versions: The first source operand (the second operand) is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the corresponding ZMM register destination are zeroed.

+

VEX.128 encoded version: The first source operand is a XMM register. The second source operand can be a XMM register or a 128-bit memory location. The destination operand is a XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The second source can be an XMM register or an 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified.

+

Operation + ¶ +

+
MAX(SRC1, SRC2)
+{
+    IF ((SRC1 = 0.0) and (SRC2 = 0.0)) THEN DEST := SRC2;
+        ELSE IF (SRC1 = NaN) THEN DEST := SRC2; FI;
+        ELSE IF (SRC2 = NaN) THEN DEST := SRC2; FI;
+        ELSE IF (SRC1 > SRC2) THEN DEST := SRC1;
+        ELSE DEST := SRC2;
+    FI;
+}
+
+

VMAXPD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN
+                    DEST[i+63:i] := MAX(SRC1[i+63:i], SRC2[63:0])
+                ELSE
+                    DEST[i+63:i] := MAX(SRC1[i+63:i], SRC2[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE DEST[i+63:i] := 0
+                        ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMAXPD (VEX.256 Encoded Version) + ¶ +

+
DEST[63:0] := MAX(SRC1[63:0], SRC2[63:0])
+DEST[127:64] := MAX(SRC1[127:64], SRC2[127:64])
+DEST[191:128] := MAX(SRC1[191:128], SRC2[191:128])
+DEST[255:192] := MAX(SRC1[255:192], SRC2[255:192])
+DEST[MAXVL-1:256] := 0
+
+

VMAXPD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := MAX(SRC1[63:0], SRC2[63:0])
+DEST[127:64] := MAX(SRC1[127:64], SRC2[127:64])
+DEST[MAXVL-1:128] := 0
+
+

MAXPD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := MAX(DEST[63:0], SRC[63:0])
+DEST[127:64] := MAX(DEST[127:64], SRC[127:64])
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMAXPD __m512d _mm512_max_pd( __m512d a, __m512d b);
+
+
+VMAXPD __m512d _mm512_mask_max_pd(__m512d s, __mmask8 k, __m512d a, __m512d b);
+
+
VMAXPD __m512d _mm512_maskz_max_pd( __mmask8 k, __m512d a, __m512d b);
+
+
VMAXPD __m512d _mm512_max_round_pd( __m512d a, __m512d b, int);
+
+
VMAXPD __m512d _mm512_mask_max_round_pd(__m512d s, __mmask8 k, __m512d a, __m512d b, int);
+
+
VMAXPD __m512d _mm512_maskz_max_round_pd( __mmask8 k, __m512d a, __m512d b, int);
+
+
+VMAXPD __m256d _mm256_mask_max_pd(__m256d s, __mmask8 k, __m256d a, __m256d b);
+
+
VMAXPD __m256d _mm256_maskz_max_pd( __mmask8 k, __m256d a, __m256d b);
+
+
VMAXPD __m128d _mm_mask_max_pd(__m128d s, __mmask8 k, __m128d a, __m128d b);
+
+
VMAXPD __m128d _mm_maskz_max_pd( __mmask8 k, __m128d a, __m128d b);
+
+
VMAXPD __m256d _mm256_max_pd (__m256d a, __m256d b);
+
+
(V)MAXPD __m128d _mm_max_pd (__m128d a, __m128d b);
+
+
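A short sketch of the merging- versus zeroing-masking behavior described earlier, using the AVX-512F intrinsics listed above; the mask value and function name are arbitrary examples.

#include <immintrin.h>

static void masked_max_example(__m512d s, __m512d a, __m512d b)
{
    __mmask8 k = 0xA5;                                 /* arbitrary 8-lane writemask          */
    __m512d merged = _mm512_mask_max_pd(s, k, a, b);   /* unselected lanes keep values from s */
    __m512d zeroed = _mm512_maskz_max_pd(k, a, b);     /* unselected lanes become 0.0         */
    (void)merged; (void)zeroed;
}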

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (including QNaN Source Operand), Denormal.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/maxps.html b/x86/maxps.html new file mode 100644 index 0000000..e6e74d9 --- /dev/null +++ b/x86/maxps.html @@ -0,0 +1,198 @@ + +MAXPS + — Maximum of Packed Single Precision Floating-Point Values

MAXPS + — Maximum of Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 5F /r MAXPS xmm1, xmm2/m128AV/VSSEReturn the maximum single precision floating-point values between xmm1 and xmm2/mem.
VEX.128.0F.WIG 5F /r VMAXPS xmm1, xmm2, xmm3/m128BV/VAVXReturn the maximum single precision floating-point values between xmm2 and xmm3/mem.
VEX.256.0F.WIG 5F /r VMAXPS ymm1, ymm2, ymm3/m256BV/VAVXReturn the maximum single precision floating-point values between ymm2 and ymm3/mem.
EVEX.128.0F.W0 5F /r VMAXPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512FReturn the maximum packed single precision floating-point values between xmm2 and xmm3/m128/m32bcst and store result in xmm1 subject to writemask k1.
EVEX.256.0F.W0 5F /r VMAXPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512FReturn the maximum packed single precision floating-point values between ymm2 and ymm3/m256/m32bcst and store result in ymm1 subject to writemask k1.
EVEX.512.0F.W0 5F /r VMAXPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{sae}CV/VAVX512FReturn the maximum packed single precision floating-point values between zmm2 and zmm3/m512/m32bcst and store result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD compare of the packed single precision floating-point values in the first source operand and the second source operand and returns the maximum value for each pair of values to the destination operand.

+

If the values being compared are both 0.0s (of either sign), the value in the second operand (source operand) is returned. If a value in the second operand is an SNaN, then SNaN is forwarded unchanged to the destination (that is, a QNaN version of the SNaN is not returned).

+

If only one value is a NaN (SNaN or QNaN) for this instruction, the second operand (source operand), either a NaN or a valid floating-point value, is written to the result. If instead of this behavior, it is required that the NaN source operand (from either the first or second operand) be returned, the action of MAXPS can be emulated using a sequence of instructions, such as, a comparison followed by AND, ANDN, and OR.

+

EVEX encoded versions: The first source operand (the second operand) is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the corresponding ZMM register destination are zeroed.

+

VEX.128 encoded version: The first source operand is a XMM register. The second source operand can be a XMM register or a 128-bit memory location. The destination operand is a XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The second source can be an XMM register or an 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified.

+

Operation + ¶ +

+
MAX(SRC1, SRC2)
+{
+    IF ((SRC1 = 0.0) and (SRC2 = 0.0)) THEN DEST := SRC2;
+        ELSE IF (SRC1 = NaN) THEN DEST := SRC2; FI;
+        ELSE IF (SRC2 = NaN) THEN DEST := SRC2; FI;
+        ELSE IF (SRC1 > SRC2) THEN DEST := SRC1;
+        ELSE DEST := SRC2;
+    FI;
+}
+
+

VMAXPS (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN
+                    DEST[i+31:i] := MAX(SRC1[i+31:i], SRC2[31:0])
+                ELSE
+                    DEST[i+31:i] := MAX(SRC1[i+31:i], SRC2[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE DEST[i+31:i] := 0
+                        ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMAXPS (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := MAX(SRC1[31:0], SRC2[31:0])
+DEST[63:32] := MAX(SRC1[63:32], SRC2[63:32])
+DEST[95:64] := MAX(SRC1[95:64], SRC2[95:64])
+DEST[127:96] := MAX(SRC1[127:96], SRC2[127:96])
+DEST[159:128] := MAX(SRC1[159:128], SRC2[159:128])
+DEST[191:160] := MAX(SRC1[191:160], SRC2[191:160])
+DEST[223:192] := MAX(SRC1[223:192], SRC2[223:192])
+DEST[255:224] := MAX(SRC1[255:224], SRC2[255:224])
+DEST[MAXVL-1:256] := 0
+
+

VMAXPS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := MAX(SRC1[31:0], SRC2[31:0])
+DEST[63:32] := MAX(SRC1[63:32], SRC2[63:32])
+DEST[95:64] := MAX(SRC1[95:64], SRC2[95:64])
+DEST[127:96] := MAX(SRC1[127:96], SRC2[127:96])
+DEST[MAXVL-1:128] := 0
+
+

MAXPS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := MAX(DEST[31:0], SRC[31:0])
+DEST[63:32] := MAX(DEST[63:32], SRC[63:32])
+DEST[95:64] := MAX(DEST[95:64], SRC[95:64])
+DEST[127:96] := MAX(DEST[127:96], SRC[127:96])
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMAXPS __m512 _mm512_max_ps( __m512 a, __m512 b);
+
+
VMAXPS __m512 _mm512_mask_max_ps(__m512 s, __mmask16 k, __m512 a, __m512 b);
+
+
VMAXPS __m512 _mm512_maskz_max_ps( __mmask16 k, __m512 a, __m512 b);
+
+
VMAXPS __m512 _mm512_max_round_ps( __m512 a, __m512 b, int);
+
+
VMAXPS __m512 _mm512_mask_max_round_ps(__m512 s, __mmask16 k, __m512 a, __m512 b, int);
+
+
VMAXPS __m512 _mm512_maskz_max_round_ps( __mmask16 k, __m512 a, __m512 b, int);
+
+
VMAXPS __m256 _mm256_mask_max_ps(__m256 s, __mmask8 k, __m256 a, __m256 b);
+
+
VMAXPS __m256 _mm256_maskz_max_ps( __mmask8 k, __m256 a, __m256 b);
+
+
VMAXPS __m128 _mm_mask_max_ps(__m128 s, __mmask8 k, __m128 a, __m128 b);
+
+
VMAXPS __m128 _mm_maskz_max_ps( __mmask8 k, __m128 a, __m128 b);
+
+
VMAXPS __m256 _mm256_max_ps (__m256 a, __m256 b);
+
+
MAXPS __m128 _mm_max_ps (__m128 a, __m128 b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (including QNaN Source Operand), Denormal.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/maxsd.html b/x86/maxsd.html new file mode 100644 index 0000000..4db7476 --- /dev/null +++ b/x86/maxsd.html @@ -0,0 +1,137 @@ + +MAXSD + — Return Maximum Scalar Double Precision Floating-Point Value

MAXSD + — Return Maximum Scalar Double Precision Floating-Point Value

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F2 0F 5F /r MAXSD xmm1, xmm2/m64AV/VSSE2Return the maximum scalar double precision floating-point value between xmm2/m64 and xmm1.
VEX.LIG.F2.0F.WIG 5F /r VMAXSD xmm1, xmm2, xmm3/m64BV/VAVXReturn the maximum scalar double precision floating-point value between xmm3/m64 and xmm2.
EVEX.LLIG.F2.0F.W1 5F /r VMAXSD xmm1 {k1}{z}, xmm2, xmm3/m64{sae}CV/VAVX512FReturn the maximum scalar double precision floating-point value between xmm3/m64 and xmm2.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CTuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Compares the low double precision floating-point values in the first source operand and the second source operand, and returns the maximum value to the low quadword of the destination operand. The second source operand can be an XMM register or a 64-bit memory location. The first source and destination operands are XMM registers. When the second source operand is a memory operand, only 64 bits are accessed.

+

If the values being compared are both 0.0s (of either sign), the value in the second source operand is returned. If a value in the second source operand is an SNaN, that SNaN is returned unchanged to the destination (that is, a QNaN version of the SNaN is not returned).

+

If only one value is a NaN (SNaN or QNaN) for this instruction, the second source operand, either a NaN or a valid floating-point value, is written to the result. If instead of this behavior, it is required that the NaN of either source operand be returned, the action of MAXSD can be emulated using a sequence of instructions, such as, a comparison followed by AND, ANDN, and OR.

+

128-bit Legacy SSE version: The destination and first source operand are the same. Bits (MAXVL-1:64) of the corresponding destination register remain unchanged.

+

VEX.128 and EVEX encoded version: Bits (127:64) of the XMM register destination are copied from corresponding bits in the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.

+

EVEX encoded version: The low quadword element of the destination operand is updated according to the writemask.

+

Software should ensure VMAXSD is encoded with VEX.L=0. Encoding VMAXSD with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+
MAX(SRC1, SRC2)
+{
+    IF ((SRC1 = 0.0) and (SRC2 = 0.0)) THEN DEST := SRC2;
+        ELSE IF (SRC1 = NaN) THEN DEST := SRC2; FI;
+        ELSE IF (SRC2 = NaN) THEN DEST := SRC2; FI;
+        ELSE IF (SRC1 > SRC2) THEN DEST := SRC1;
+        ELSE DEST := SRC2;
+    FI;
+}
+
+

VMAXSD (EVEX Encoded Version) + ¶ +

+
IF k1[0] or *no writemask*
+    THEN DEST[63:0] := MAX(SRC1[63:0], SRC2[63:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[63:0] := 0
+        FI;
+FI;
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VMAXSD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := MAX(SRC1[63:0], SRC2[63:0])
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

MAXSD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := MAX(DEST[63:0], SRC[63:0])
+DEST[MAXVL-1:64] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMAXSD __m128d _mm_max_round_sd( __m128d a, __m128d b, int);
+
+
VMAXSD __m128d _mm_mask_max_round_sd(__m128d s, __mmask8 k, __m128d a, __m128d b, int);
+
+
VMAXSD __m128d _mm_maskz_max_round_sd( __mmask8 k, __m128d a, __m128d b, int);
+
+
MAXSD __m128d _mm_max_sd(__m128d a, __m128d b)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (Including QNaN Source Operand), Denormal.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/maxss.html b/x86/maxss.html new file mode 100644 index 0000000..35ca539 --- /dev/null +++ b/x86/maxss.html @@ -0,0 +1,138 @@ + +MAXSS + — Return Maximum Scalar Single Precision Floating-Point Value

MAXSS + — Return Maximum Scalar Single Precision Floating-Point Value

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 5F /r MAXSS xmm1, xmm2/m32AV/VSSEReturn the maximum scalar single precision floating-point value between xmm2/m32 and xmm1.
VEX.LIG.F3.0F.WIG 5F /r VMAXSS xmm1, xmm2, xmm3/m32BV/VAVXReturn the maximum scalar single precision floating-point value between xmm3/m32 and xmm2.
EVEX.LLIG.F3.0F.W0 5F /r VMAXSS xmm1 {k1}{z}, xmm2, xmm3/m32{sae}CV/VAVX512FReturn the maximum scalar single precision floating-point value between xmm3/m32 and xmm2.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CTuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Compares the low single precision floating-point values in the first source operand and the second source operand, and returns the maximum value to the low doubleword of the destination operand.

+

If the values being compared are both 0.0s (of either sign), the value in the second source operand is returned. If a value in the second source operand is an SNaN, that SNaN is returned unchanged to the destination (that is, a QNaN version of the SNaN is not returned).

+

If only one value is a NaN (SNaN or QNaN) for this instruction, the second source operand, either a NaN or a valid floating-point value, is written to the result. If instead of this behavior, it is required that the NaN from either source operand be returned, the action of MAXSS can be emulated using a sequence of instructions, such as, a comparison followed by AND, ANDN, and OR.

+

The second source operand can be an XMM register or a 32-bit memory location. The first source and destination operands are XMM registers.

+

128-bit Legacy SSE version: The destination and first source operand are the same. Bits (MAXVL-1:32) of the corresponding destination register remain unchanged.

+

VEX.128 and EVEX encoded version: The first source operand is an xmm register encoded by VEX.vvvv. Bits (127:32) of the XMM register destination are copied from corresponding bits in the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.

+

EVEX encoded version: The low doubleword element of the destination operand is updated according to the writemask.

+

Software should ensure VMAXSS is encoded with VEX.L=0. Encoding VMAXSS with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+
MAX(SRC1, SRC2)
+{
+    IF ((SRC1 = 0.0) and (SRC2 = 0.0)) THEN DEST := SRC2;
+        ELSE IF (SRC1 = NaN) THEN DEST := SRC2; FI;
+        ELSE IF (SRC2 = NaN) THEN DEST := SRC2; FI;
+        ELSE IF (SRC1 > SRC2) THEN DEST := SRC1;
+        ELSE DEST := SRC2;
+    FI;
+}
+
+

VMAXSS (EVEX Encoded Version) + ¶ +

+
IF k1[0] or *no writemask*
+    THEN DEST[31:0] := MAX(SRC1[31:0], SRC2[31:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[31:0] := 0
+        FI;
+FI;
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VMAXSS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := MAX(SRC1[31:0], SRC2[31:0])
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

MAXSS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := MAX(DEST[31:0], SRC[31:0])
+DEST[MAXVL-1:32] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMAXSS __m128 _mm_max_round_ss( __m128 a, __m128 b, int);
+
+
VMAXSS __m128 _mm_mask_max_round_ss(__m128 s, __mmask8 k, __m128 a, __m128 b, int);
+
+
VMAXSS __m128 _mm_maskz_max_round_ss( __mmask8 k, __m128 a, __m128 b, int);
+
+
MAXSS __m128 _mm_max_ss(__m128 a, __m128 b)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (Including QNaN Source Operand), Denormal.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/mfence.html b/x86/mfence.html new file mode 100644 index 0000000..d85f09f --- /dev/null +++ b/x86/mfence.html @@ -0,0 +1,63 @@ + +MFENCE + — Memory Fence

MFENCE + — Memory Fence

+ + + + + + + + + + + + + +
Opcode / InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F AE F0 MFENCEZOV/VSSE2Serializes load and store operations.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Performs a serializing operation on all load-from-memory and store-to-memory instructions that were issued prior the MFENCE instruction. This serializing operation guarantees that every load and store instruction that precedes the MFENCE instruction in program order becomes globally visible before any load or store instruction that follows the MFENCE instruction.1 The MFENCE instruction is ordered with respect to all load and store instructions, other MFENCE instructions, any LFENCE and SFENCE instructions, and any serializing instructions (such as the CPUID instruction). MFENCE does not serialize the instruction stream.

+
+

1. A load instruction is considered to become globally visible when the value to be loaded into its destination register is determined.

+

Weakly ordered memory types can be used to achieve higher processor performance through such techniques as out-of-order issue, speculative reads, write-combining, and write-collapsing. The degree to which a consumer of data recognizes or knows that the data is weakly ordered varies among applications and may be unknown to the producer of this data. The MFENCE instruction provides a performance-efficient way of ensuring load and store ordering between routines that produce weakly-ordered results and routines that consume that data.

+

Processors are free to fetch and cache data speculatively from regions of system memory that use the WB, WC, and WT memory types. This speculative fetching can occur at any time and is not tied to instruction execution. Thus, it is not ordered with respect to executions of the MFENCE instruction; data can be brought into the caches speculatively just before, during, or after the execution of an MFENCE instruction.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Specification of the instruction's opcode above indicates a ModR/M byte of F0. For this instruction, the processor ignores the r/m field of the ModR/M byte. Thus, MFENCE is encoded by any opcode of the form 0F AE Fx, where x is in the range 0-7.

+

Operation + ¶ +

+
Wait_On_Following_Loads_And_Stores_Until(preceding_loads_and_stores_globally_visible);
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
void _mm_mfence(void)
+
+
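A hedged producer-side sketch of the weak-ordering scenario described above. The buffer, the flag, and the use of a non-temporal store are assumptions made for this example.

#include <emmintrin.h>

static void publish(__m128i *buf, __m128i data, volatile int *ready)
{
    _mm_stream_si128(buf, data);  /* non-temporal (WC-type), weakly-ordered store    */
    _mm_mfence();                 /* make the store globally visible before the flag */
    *ready = 1;                   /* consumer polls this flag on another processor   */
}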

Exceptions (All Modes of Operation) + ¶ +

+

#UD If CPUID.01H:EDX.SSE2[bit 26] = 0.

+

If the LOCK prefix is used.

diff --git a/x86/minpd.html b/x86/minpd.html new file mode 100644 index 0000000..f82b009 --- /dev/null +++ b/x86/minpd.html @@ -0,0 +1,189 @@ + +MINPD + — Minimum of Packed Double Precision Floating-Point Values

MINPD + — Minimum of Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 5D /r MINPD xmm1, xmm2/m128AV/VSSE2Return the minimum double precision floating-point values between xmm1 and xmm2/mem
VEX.128.66.0F.WIG 5D /r VMINPD xmm1, xmm2, xmm3/m128BV/VAVXReturn the minimum double precision floating-point values between xmm2 and xmm3/mem.
VEX.256.66.0F.WIG 5D /r VMINPD ymm1, ymm2, ymm3/m256BV/VAVXReturn the minimum packed double precision floating-point values between ymm2 and ymm3/mem.
EVEX.128.66.0F.W1 5D /r VMINPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512FReturn the minimum packed double precision floating-point values between xmm2 and xmm3/m128/m64bcst and store result in xmm1 subject to writemask k1.
EVEX.256.66.0F.W1 5D /r VMINPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FReturn the minimum packed double precision floating-point values between ymm2 and ymm3/m256/m64bcst and store result in ymm1 subject to writemask k1.
EVEX.512.66.0F.W1 5D /r VMINPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{sae}CV/VAVX512FReturn the minimum packed double precision floating-point values between zmm2 and zmm3/m512/m64bcst and store result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD compare of the packed double precision floating-point values in the first source operand and the second source operand and returns the minimum value for each pair of values to the destination operand.

+

If the values being compared are both 0.0s (of either sign), the value in the second operand (source operand) is returned. If a value in the second operand is an SNaN, then SNaN is forwarded unchanged to the destination (that is, a QNaN version of the SNaN is not returned).

+

If only one value is a NaN (SNaN or QNaN) for this instruction, the second operand (source operand), either a NaN or a valid floating-point value, is written to the result. If instead of this behavior, it is required that the NaN source operand (from either the first or second operand) be returned, the action of MINPD can be emulated using a sequence of instructions, such as, a comparison followed by AND, ANDN, and OR.

+

EVEX encoded versions: The first source operand (the second operand) is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the corresponding ZMM register destination are zeroed.

+

VEX.128 encoded version: The first source operand is a XMM register. The second source operand can be a XMM register or a 128-bit memory location. The destination operand is a XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The second source can be an XMM register or an 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified.

+

Operation + ¶ +

+
MIN(SRC1, SRC2)
+{
+    IF ((SRC1 = 0.0) and (SRC2 = 0.0)) THEN DEST := SRC2;
+        ELSE IF (SRC1 = NaN) THEN DEST := SRC2; FI;
+        ELSE IF (SRC2 = NaN) THEN DEST := SRC2; FI;
+        ELSE IF (SRC1 < SRC2) THEN DEST := SRC1;
+        ELSE DEST := SRC2;
+    FI;
+}
+
+

VMINPD (EVEX Encoded Version) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN
+                    DEST[i+63:i] := MIN(SRC1[i+63:i], SRC2[63:0])
+                ELSE
+                    DEST[i+63:i] := MIN(SRC1[i+63:i], SRC2[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE DEST[i+63:i] := 0
+                        ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMINPD (VEX.256 Encoded Version) + ¶ +

+
DEST[63:0] := MIN(SRC1[63:0], SRC2[63:0])
+DEST[127:64] := MIN(SRC1[127:64], SRC2[127:64])
+DEST[191:128] := MIN(SRC1[191:128], SRC2[191:128])
+DEST[255:192] := MIN(SRC1[255:192], SRC2[255:192])
+DEST[MAXVL-1:256] := 0
+
+

VMINPD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := MIN(SRC1[63:0], SRC2[63:0])
+DEST[127:64] := MIN(SRC1[127:64], SRC2[127:64])
+DEST[MAXVL-1:128] := 0
+
+

MINPD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := MIN(SRC1[63:0], SRC2[63:0])
+DEST[127:64] := MIN(SRC1[127:64], SRC2[127:64])
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMINPD __m512d _mm512_min_pd( __m512d a, __m512d b);
+
+
VMINPD __m512d _mm512_mask_min_pd(__m512d s, __mmask8 k, __m512d a, __m512d b);
+
+
VMINPD __m512d _mm512_maskz_min_pd( __mmask8 k, __m512d a, __m512d b);
+
+
VMINPD __m512d _mm512_min_round_pd( __m512d a, __m512d b, int);
+
+
VMINPD __m512d _mm512_mask_min_round_pd(__m512d s, __mmask8 k, __m512d a, __m512d b, int);
+
+
VMINPD __m512d _mm512_maskz_min_round_pd( __mmask8 k, __m512d a, __m512d b, int);
+
+
VMINPD __m256d _mm256_mask_min_pd(__m256d s, __mmask8 k, __m256d a, __m256d b);
+
+
VMINPD __m256d _mm256_maskz_min_pd( __mmask8 k, __m256d a, __m256d b);
+
+
VMINPD __m128d _mm_mask_min_pd(__m128d s, __mmask8 k, __m128d a, __m128d b);
+
+
VMINPD __m128d _mm_maskz_min_pd( __mmask8 k, __m128d a, __m128d b);
+
+
VMINPD __m256d _mm256_min_pd (__m256d a, __m256d b);
+
+
MINPD __m128d _mm_min_pd (__m128d a, __m128d b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (including QNaN Source Operand), Denormal.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/minps.html b/x86/minps.html new file mode 100644 index 0000000..3e71a6c --- /dev/null +++ b/x86/minps.html @@ -0,0 +1,197 @@ + +MINPS + — Minimum of Packed Single Precision Floating-Point Values

MINPS + — Minimum of Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 5D /r MINPS xmm1, xmm2/m128AV/VSSEReturn the minimum single precision floating-point values between xmm1 and xmm2/mem.
VEX.128.0F.WIG 5D /r VMINPS xmm1, xmm2, xmm3/m128BV/VAVXReturn the minimum single precision floating-point values between xmm2 and xmm3/mem.
VEX.256.0F.WIG 5D /r VMINPS ymm1, ymm2, ymm3/m256BV/VAVXReturn the minimum single precision floating-point values between ymm2 and ymm3/mem.
EVEX.128.0F.W0 5D /r VMINPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512FReturn the minimum packed single precision floating-point values between xmm2 and xmm3/m128/m32bcst and store result in xmm1 subject to writemask k1.
EVEX.256.0F.W0 5D /r VMINPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512FReturn the minimum packed single precision floating-point values between ymm2 and ymm3/m256/m32bcst and store result in ymm1 subject to writemask k1.
EVEX.512.0F.W0 5D /r VMINPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{sae}CV/VAVX512FReturn the minimum packed single precision floating-point values between zmm2 and zmm3/m512/m32bcst and store result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD compare of the packed single precision floating-point values in the first source operand and the second source operand and returns the minimum value for each pair of values to the destination operand.

+

If the values being compared are both 0.0s (of either sign), the value in the second operand (source operand) is returned. If a value in the second operand is an SNaN, then SNaN is forwarded unchanged to the destination (that is, a QNaN version of the SNaN is not returned).

+

If only one value is a NaN (SNaN or QNaN) for this instruction, the second operand (source operand), either a NaN or a valid floating-point value, is written to the result. If instead of this behavior, it is required that the NaN source operand (from either the first or second operand) be returned, the action of MINPS can be emulated using a sequence of instructions, such as, a comparison followed by AND, ANDN, and OR.

+

EVEX encoded versions: The first source operand (the second operand) is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the corresponding ZMM register destination are zeroed.

+

VEX.128 encoded version: The first source operand is a XMM register. The second source operand can be a XMM register or a 128-bit memory location. The destination operand is a XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The second source can be an XMM register or an 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified.

+

Operation + ¶ +

+
MIN(SRC1, SRC2)
+{
+    IF ((SRC1 = 0.0) and (SRC2 = 0.0)) THEN DEST := SRC2;
+        ELSE IF (SRC1 = NaN) THEN DEST := SRC2; FI;
+        ELSE IF (SRC2 = NaN) THEN DEST := SRC2; FI;
+        ELSE IF (SRC1 < SRC2) THEN DEST := SRC1;
+        ELSE DEST := SRC2;
+    FI;
+}
+
+

VMINPS (EVEX Encoded Version) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN
+                    DEST[i+31:i] := MIN(SRC1[i+31:i], SRC2[31:0])
+                ELSE
+                    DEST[i+31:i] := MIN(SRC1[i+31:i], SRC2[i+31:i])
+            FI;
+            ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE DEST[i+31:i] := 0
+                        ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMINPS (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := MIN(SRC1[31:0], SRC2[31:0])
+DEST[63:32] := MIN(SRC1[63:32], SRC2[63:32])
+DEST[95:64] := MIN(SRC1[95:64], SRC2[95:64])
+DEST[127:96] := MIN(SRC1[127:96], SRC2[127:96])
+DEST[159:128] := MIN(SRC1[159:128], SRC2[159:128])
+DEST[191:160] := MIN(SRC1[191:160], SRC2[191:160])
+DEST[223:192] := MIN(SRC1[223:192], SRC2[223:192])
+DEST[255:224] := MIN(SRC1[255:224], SRC2[255:224])
+DEST[MAXVL-1:256] := 0
+
+

VMINPS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := MIN(SRC1[31:0], SRC2[31:0])
+DEST[63:32] := MIN(SRC1[63:32], SRC2[63:32])
+DEST[95:64] := MIN(SRC1[95:64], SRC2[95:64])
+DEST[127:96] := MIN(SRC1[127:96], SRC2[127:96])
+DEST[MAXVL-1:128] := 0
+
+

MINPS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := MIN(SRC1[31:0], SRC2[31:0])
+DEST[63:32] := MIN(SRC1[63:32], SRC2[63:32])
+DEST[95:64] := MIN(SRC1[95:64], SRC2[95:64])
+DEST[127:96] := MIN(SRC1[127:96], SRC2[127:96])
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMINPS __m512 _mm512_min_ps( __m512 a, __m512 b);
+
+
VMINPS __m512 _mm512_mask_min_ps(__m512 s, __mmask16 k, __m512 a, __m512 b);
+
+
VMINPS __m512 _mm512_maskz_min_ps( __mmask16 k, __m512 a, __m512 b);
+
+
VMINPS __m512 _mm512_min_round_ps( __m512 a, __m512 b, int);
+
+
VMINPS __m512 _mm512_mask_min_round_ps(__m512 s, __mmask16 k, __m512 a, __m512 b, int);
+
+
VMINPS __m512 _mm512_maskz_min_round_ps( __mmask16 k, __m512 a, __m512 b, int);
+
+
VMINPS __m256 _mm256_mask_min_ps(__m256 s, __mmask8 k, __m256 a, __m256 b);
+
+
+VMINPS __m256 _mm256_maskz_min_ps( __mmask8 k, __m256 a, __m256 b);
+
+
VMINPS __m128 _mm_mask_min_ps(__m128 s, __mmask8 k, __m128 a, __m128 b);
+
+
VMINPS __m128 _mm_maskz_min_ps( __mmask8 k, __m128 a, __m128 b);
+
+
VMINPS __m256 _mm256_min_ps (__m256 a, __m256 b);
+
+
MINPS __m128 _mm_min_ps (__m128 a, __m128 b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (including QNaN Source Operand), Denormal.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/minsd.html b/x86/minsd.html new file mode 100644 index 0000000..02848f7 --- /dev/null +++ b/x86/minsd.html @@ -0,0 +1,138 @@ + +MINSD + — Return Minimum Scalar Double Precision Floating-Point Value

MINSD + — Return Minimum Scalar Double Precision Floating-Point Value

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F2 0F 5D /r MINSD xmm1, xmm2/m64AV/VSSE2Return the minimum scalar double precision floating-point value between xmm2/m64 and xmm1.
VEX.LIG.F2.0F.WIG 5D /r VMINSD xmm1, xmm2, xmm3/m64BV/VAVXReturn the minimum scalar double precision floating-point value between xmm3/m64 and xmm2.
EVEX.LLIG.F2.0F.W1 5D /r VMINSD xmm1 {k1}{z}, xmm2, xmm3/m64{sae}CV/VAVX512FReturn the minimum scalar double precision floating-point value between xmm3/m64 and xmm2.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CTuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Compares the low double precision floating-point values in the first source operand and the second source operand, and returns the minimum value to the low quadword of the destination operand. When the second source operand is a memory operand, only 64 bits are accessed.

+

If the values being compared are both 0.0s (of either sign), the value in the second source operand is returned. If a value in the second source operand is an SNaN, then SNaN is returned unchanged to the destination (that is, a QNaN version of the SNaN is not returned).

+

If only one value is a NaN (SNaN or QNaN) for this instruction, the second source operand, either a NaN or a valid floating-point value, is written to the result. If instead of this behavior, it is required that the NaN source operand (from either the first or second source) be returned, the action of MINSD can be emulated using a sequence of instructions, such as, a comparison followed by AND, ANDN, and OR.

+

The second source operand can be an XMM register or a 64-bit memory location. The first source and destination operands are XMM registers.

+

128-bit Legacy SSE version: The destination and first source operand are the same. Bits (MAXVL-1:64) of the corresponding destination register remain unchanged.

+

VEX.128 and EVEX encoded version: Bits (127:64) of the XMM register destination are copied from corresponding bits in the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.

+

EVEX encoded version: The low quadword element of the destination operand is updated according to the writemask.

+

Software should ensure VMINSD is encoded with VEX.L=0. Encoding VMINSD with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+
MIN(SRC1, SRC2)
+{
+    IF ((SRC1 = 0.0) and (SRC2 = 0.0)) THEN DEST := SRC2;
+        ELSE IF (SRC1 = NaN) THEN DEST := SRC2; FI;
+        ELSE IF (SRC2 = NaN) THEN DEST := SRC2; FI;
+        ELSE IF (SRC1 < SRC2) THEN DEST := SRC1;
+        ELSE DEST := SRC2;
+    FI;
+}
+
+

MINSD (EVEX Encoded Version) + ¶ +

+
IF k1[0] or *no writemask*
+    THEN DEST[63:0] := MIN(SRC1[63:0], SRC2[63:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[63:0] := 0
+        FI;
+FI;
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

MINSD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := MIN(SRC1[63:0], SRC2[63:0])
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

MINSD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := MIN(SRC1[63:0], SRC2[63:0])
+DEST[MAXVL-1:64] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMINSD __m128d _mm_min_round_sd(__m128d a, __m128d b, int);
+
+
VMINSD __m128d _mm_mask_min_round_sd(__m128d s, __mmask8 k, __m128d a, __m128d b, int);
+
+
VMINSD __m128d _mm_maskz_min_round_sd( __mmask8 k, __m128d a, __m128d b, int);
+
+
MINSD __m128d _mm_min_sd(__m128d a, __m128d b)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (including QNaN Source Operand), Denormal.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/minss.html b/x86/minss.html new file mode 100644 index 0000000..48bb69b --- /dev/null +++ b/x86/minss.html @@ -0,0 +1,138 @@ + +MINSS + — Return Minimum Scalar Single Precision Floating-Point Value

MINSS + — Return Minimum Scalar Single Precision Floating-Point Value

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 5D /r MINSS xmm1,xmm2/m32AV/VSSEReturn the minimum scalar single precision floating-point value between xmm2/m32 and xmm1.
VEX.LIG.F3.0F.WIG 5D /r VMINSS xmm1,xmm2, xmm3/m32BV/VAVXReturn the minimum scalar single precision floating-point value between xmm3/m32 and xmm2.
EVEX.LLIG.F3.0F.W0 5D /r VMINSS xmm1 {k1}{z}, xmm2, xmm3/m32{sae}CV/VAVX512FReturn the minimum scalar single precision floating-point value between xmm3/m32 and xmm2.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CTuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Compares the low single precision floating-point values in the first source operand and the second source operand and returns the minimum value to the low doubleword of the destination operand.

+

If the values being compared are both 0.0s (of either sign), the value in the second source operand is returned. If a value in the second operand is an SNaN, that SNaN is returned unchanged to the destination (that is, a QNaN version of the SNaN is not returned).
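A small worked example of the zero-handling rule, assuming the usual mapping of _mm_min_ss(a, b) onto MINSS with a as the first source and b as the second source:

#include <xmmintrin.h>
#include <stdio.h>

int main(void)
{
    /* Both inputs are zeros, so MINSS returns the second source operand. */
    __m128 r1 = _mm_min_ss(_mm_set_ss(+0.0f), _mm_set_ss(-0.0f)); /* -0.0f */
    __m128 r2 = _mm_min_ss(_mm_set_ss(-0.0f), _mm_set_ss(+0.0f)); /* +0.0f */
    printf("%g %g\n", _mm_cvtss_f32(r1), _mm_cvtss_f32(r2));
    return 0;
}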

+

If only one value is a NaN (SNaN or QNaN) for this instruction, the second source operand, either a NaN or a valid floating-point value, is written to the result. If instead of this behavior, it is required that the NaN in either source operand be returned, the action of MINSS can be emulated using a sequence of instructions, such as a comparison followed by AND, ANDN, and OR.

+

The second source operand can be an XMM register or a 32-bit memory location. The first source and destination operands are XMM registers.

+

128-bit Legacy SSE version: The destination and first source operand are the same. Bits (MAXVL-1:32) of the corresponding destination register remain unchanged.

+

VEX.128 and EVEX encoded version: The first source operand is an xmm register encoded by (E)VEX.vvvv. Bits (127:32) of the XMM register destination are copied from corresponding bits in the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.

+

EVEX encoded version: The low doubleword element of the destination operand is updated according to the writemask.

+

Software should ensure VMINSS is encoded with VEX.L=0. Encoding VMINSS with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+
MIN(SRC1, SRC2)
+{
+    IF ((SRC1 = 0.0) and (SRC2 = 0.0)) THEN DEST := SRC2;
+        ELSE IF (SRC1 = NaN) THEN DEST := SRC2; FI;
+        ELSE IF (SRC2 = NaN) THEN DEST := SRC2; FI;
+        ELSE IF (SRC1 < SRC2) THEN DEST := SRC1;
+        ELSE DEST := SRC2;
+    FI;
+}
+
+

MINSS (EVEX Encoded Version) + ¶ +

+
IF k1[0] or *no writemask*
+    THEN DEST[31:0] := MIN(SRC1[31:0], SRC2[31:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[31:0] := 0
+        FI;
+FI;
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VMINSS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := MIN(SRC1[31:0], SRC2[31:0])
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

MINSS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := MIN(SRC1[31:0], SRC2[31:0])
+DEST[MAXVL-1:32] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMINSS __m128 _mm_min_round_ss( __m128 a, __m128 b, int);
+
+
VMINSS __m128 _mm_mask_min_round_ss(__m128 s, __mmask8 k, __m128 a, __m128 b, int);
+
+
VMINSS __m128 _mm_maskz_min_round_ss( __mmask8 k, __m128 a, __m128 b, int);
+
+
MINSS __m128 _mm_min_ss(__m128 a, __m128 b)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (Including QNaN Source Operand), Denormal.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/monitor.html b/x86/monitor.html new file mode 100644 index 0000000..2f6927f --- /dev/null +++ b/x86/monitor.html @@ -0,0 +1,135 @@ + +MONITOR + — Set Up Monitor Address

MONITOR + — Set Up Monitor Address

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 01 C8MONITORZOValidValidSets up a linear address range to be monitored by hardware and activates the monitor. The address range should be a write-back memory caching type. The address is DS:RAX/EAX/AX.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

The MONITOR instruction arms address monitoring hardware using an address specified in EAX (the address range that the monitoring hardware checks for store operations can be determined by using CPUID). A store to an address within the specified address range triggers the monitoring hardware. The state of monitor hardware is used by MWAIT.

+

The address is specified in RAX/EAX/AX and the size is based on the effective address size of the encoded instruction. By default, the DS segment is used to create a linear address that is monitored. Segment overrides can be used.

+

ECX and EDX are also used. They communicate other information to MONITOR. ECX specifies optional extensions. EDX specifies optional hints; it does not change the architectural behavior of the instruction. For the Pentium 4 processor (family 15, model 3), no extensions or hints are defined. Undefined hints in EDX are ignored by the processor; undefined extensions in ECX raise a general-protection exception (#GP).

+

The address range must use memory of the write-back type. Only write-back memory will correctly trigger the monitoring hardware. Additional information on determining what address range to use in order to prevent false wake-ups is described in Chapter 9, “Multiple-Processor Management‚” of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A.

+

The MONITOR instruction is ordered as a load operation with respect to other memory transactions. The instruction is subject to the permission checking and faults associated with a byte load. Like a load, MONITOR sets the A-bit but not the D-bit in page tables.

+

CPUID.01H:ECX.MONITOR[bit 3] indicates the availability of MONITOR and MWAIT in the processor. When set, MONITOR may be executed only at privilege level 0 (use at any other privilege level results in an invalid-opcode exception). The operating system or system BIOS may disable this instruction by using the IA32_MISC_ENABLE MSR; disabling MONITOR clears the CPUID feature flag and causes execution to generate an invalid-opcode exception.

+

The instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
MONITOR sets up an address range for the monitor hardware using the content of EAX (RAX in 64-bit mode) as an effective address
+and puts the monitor hardware in armed state. Always use memory of the write-back caching type. A store to the specified address
+range will trigger the monitor hardware. The content of ECX and EDX are used to communicate other information to the monitor
+hardware.
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
MONITOR void _mm_monitor(void const *p, unsigned extensions, unsigned hints)
+
+
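For orientation, a minimal MONITOR/MWAIT wait-loop sketch using the SSE3 intrinsics; the flag variable, the zero extension/hint arguments, and the kernel-only framing are assumptions by the writer, not part of this page.

#include <pmmintrin.h>   /* _mm_monitor, _mm_mwait (SSE3) */

volatile int wake_flag;  /* written by another agent; must live in write-back memory */

/* Sketch only: MONITOR/MWAIT execute at CPL 0, so this pattern belongs in
 * kernel code, not in an ordinary user-space program.                       */
static void wait_for_flag(void)
{
    while (!wake_flag) {
        _mm_monitor((const void *)&wake_flag, 0, 0); /* arm monitor on the flag's line   */
        if (wake_flag)                               /* re-check to avoid a lost wakeup   */
            break;
        _mm_mwait(0, 0);                             /* sleep until a store hits the range */
    }
}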

Numeric Exceptions + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If the value in EAX is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
If ECX ≠ 0.
#SS(0)If the value in EAX is outside the SS segment limit.
#PF(fault-code)For a page fault.
#UDIf CPUID.01H:ECX.MONITOR[bit 3] = 0.
If current privilege level is not 0.
+

Real Address Mode Exceptions + ¶ +

+ + + + + + + + + + + +
#GPIf the CS, DS, ES, FS, or GS register is used to access memory and the value in EAX is outside of the effective address space from 0 to FFFFH.
If ECX ≠ 0.
#SSIf the SS register is used to access memory and the value in EAX is outside of the effective address space from 0 to FFFFH.
#UDIf CPUID.01H:ECX.MONITOR[bit 3] = 0.
+

Virtual 8086 Mode Exceptions + ¶ +

+ + + +
#UDThe MONITOR instruction is not recognized in virtual-8086 mode (even if CPUID.01H:ECX.MONITOR[bit 3] = 1).
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#GP(0)If the linear address of the operand in the CS, DS, ES, FS, or GS segment is in a non-canonical form.
If RCX ≠ 0.
#SS(0)If the SS register is used to access memory and the value in EAX is in a non-canonical form.
#PF(fault-code)For a page fault.
#UDIf the current privilege level is not 0.
If CPUID.01H:ECX.MONITOR[bit 3] = 0.
diff --git a/x86/mov-1.html b/x86/mov-1.html new file mode 100644 index 0000000..e76c45b --- /dev/null +++ b/x86/mov-1.html @@ -0,0 +1,191 @@ + +MOV + — Move to/from Control Registers

MOV + — Move to/from Control Registers

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 20/r MOV r32, CR0–CR7MRN.E.ValidMove control register to r32.
0F 20/r MOV r64, CR0–CR7MRValidN.E.Move extended control register to r64.
REX.R + 0F 20 /0 MOV r64, CR8MRValidN.E.Move extended CR8 to r64.1
0F 22 /r MOV CR0–CR7, r32RMN.E.ValidMove r32 to control register.
0F 22 /r MOV CR0–CR7, r64RMValidN.E.Move r64 to extended control register.
REX.R + 0F 22 /0 MOV CR8, r64RMValidN.E.Move r64 to extended CR8.1
+
+

1. MOV CR* instructions, except for MOV CR8, are serializing instructions. MOV CR8 is not architecturally defined as a serializing instruction. For more information, see Chapter 9 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MRModRM:r/m (w)ModRM:reg (r)N/AN/A
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Moves the contents of a control register (CR0, CR2, CR3, CR4, or CR8) to a general-purpose register or the contents of a general-purpose register to a control register. The operand size for these instructions is always 32 bits in non-64-bit modes, regardless of the operand-size attribute. On a 64-bit capable processor, an execution of MOV to CR outside of 64-bit mode zeros the upper 32 bits of the control register. (See “Control Registers” in Chapter 2 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A, for a detailed description of the flags and fields in the control registers.) This instruction can be executed only when the current privilege level is 0.
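For illustration only, control registers are typically read from C through inline assembly; this GCC/Clang-style sketch assumes it runs at CPL 0 (for example, inside a kernel), since the access faults at any other privilege level.

#include <stdint.h>

/* Sketch: read CR0 and test the PE bit (bit 0). Ring 0 only; at any other
 * privilege level the MOV from CR0 raises #GP(0).                          */
static inline uint64_t read_cr0(void)
{
    uint64_t value;
    __asm__ __volatile__("mov %%cr0, %0" : "=r"(value));
    return value;
}

static inline int protected_mode_enabled(void)
{
    return (read_cr0() & 1) != 0;   /* CR0.PE */
}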

+

At the opcode level, the reg field within the ModR/M byte specifies which of the control registers is loaded or read. The 2 bits in the mod field are ignored. The r/m field specifies the general-purpose register loaded or read. Some of the bits in CR0, CR3, and CR4 are reserved and must be written with zeros. Attempting to set any reserved bits in CR0[31:0] is ignored. Attempting to set any reserved bits in CR0[63:32] results in a general-protection exception, #GP(0). When PCIDs are not enabled, bits 2:0 and bits 11:5 of CR3 are not used and attempts to set them are ignored. Attempting to set any reserved bits in CR3[63:MAXPHYADDR] results in #GP(0). Attempting to set any reserved bits in CR4 results in #GP(0). On Pentium 4, Intel Xeon and P6 family processors, CR0.ET remains set after any load of CR0; attempts to clear this bit have no impact.

+

In certain cases, these instructions have the side effect of invalidating entries in the TLBs and the paging-structure caches. See Section 4.10.4.1, “Operations that Invalidate TLBs and Paging-Structure Caches,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A, for details.

+

The following side effects are implementation-specific for the Pentium 4, Intel Xeon, and P6 processor family: when modifying PE or PG in register CR0, or PSE or PAE in register CR4, all TLB entries are flushed, including global entries. Software should not depend on this functionality in all Intel 64 or IA-32 processors.

+

In 64-bit mode, the instruction’s default operation size is 64 bits. The REX.R prefix must be used to access CR8. Use of REX.B permits access to additional registers (R8-R15). Use of the REX.W prefix or 66H prefix is ignored. Use of the REX.R prefix to specify a register other than CR8 causes an invalid-opcode exception. See the summary chart at the beginning of this section for encoding data and limits.

+

If CR4.PCIDE = 1, bit 63 of the source operand to MOV to CR3 determines whether the instruction invalidates entries in the TLBs and the paging-structure caches (see Section 4.10.4.1, “Operations that Invalidate TLBs and Paging-Structure Caches,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A). The instruction does not modify bit 63 of CR3, which is reserved and always 0.

+

See “Changes to Instruction Behavior in VMX Non-Root Operation” in Chapter 26 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3C, for more information about the behavior of this instruction in VMX non-root operation.

+

Operation + ¶ +

+
DEST := SRC;
+
+

Flags Affected + ¶ +

+

The OF, SF, ZF, AF, PF, and CF flags are undefined.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
If an attempt is made to write invalid bit combinations in CR0 (such as setting the PG flag to 1 when the PE flag is set to 0, or setting the CD flag to 0 when the NW flag is set to 1).
If an attempt is made to write a 1 to any reserved bit in CR4.
If an attempt is made to write 1 to CR4.PCIDE.
If any of the reserved bits are set in the page-directory pointers table (PDPT) and the loading of a control register causes the PDPT to be loaded into the processor.
If an attempt is made to activate IA-32e mode and either the current CS has the L-bit set or the TR references a 16-bit TSS.
#UDIf the LOCK prefix is used.
If an attempt is made to access CR1, CR5, CR6, CR7, or CR9–CR15.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + +
#GPIf an attempt is made to write a 1 to any reserved bit in CR4.
If an attempt is made to write 1 to CR4.PCIDE.
If an attempt is made to write invalid bit combinations in CR0 (such as setting the PG flag to 1 when the PE flag is set to 0).
If an attempt is made to activate IA-32e mode and either the current CS has the L-bit set or the TR references a 16-bit TSS.
#UDIf the LOCK prefix is used.
If an attempt is made to access CR1, CR5, CR6, CR7, or CR9–CR15.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#GP(0)These instructions cannot be executed in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
If an attempt is made to write invalid bit combinations in CR0 (such as setting the PG flag to 1 when the PE flag is set to 0, or setting the CD flag to 0 when the NW flag is set to 1).
If an attempt is made to change CR4.PCIDE from 0 to 1 while CR3[11:0] ≠ 000H.
If an attempt is made to clear CR0.PG[bit 31] while CR4.PCIDE = 1.
If an attempt is made to leave IA-32e mode by clearing CR4.PAE[bit 5].
#UDIf the LOCK prefix is used.
If an attempt is made to access CR1, CR5, CR6, CR7, or CR9–CR15.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
If an attempt is made to write invalid bit combinations in CR0 (such as setting the PG flag to 1 when the PE flag is set to 0, or setting the CD flag to 0 when the NW flag is set to 1).
If an attempt is made to change CR4.PCIDE from 0 to 1 while CR3[11:0] ≠ 000H.
If an attempt is made to clear CR0.PG[bit 31].
If an attempt is made to write a 1 to any reserved bit in CR4.
If an attempt is made to write a 1 to any reserved bit in CR8.
If an attempt is made to write a 1 to any reserved bit in CR3[63:MAXPHYADDR].
If an attempt is made to leave IA-32e mode by clearing CR4.PAE[bit 5].
#UDIf the LOCK prefix is used.
If an attempt is made to access CR1, CR5, CR6, CR7, or CR9–CR15.
If the REX.R prefix is used to specify a register other than CR8.
diff --git a/x86/mov-2.html b/x86/mov-2.html new file mode 100644 index 0000000..ba846c2 --- /dev/null +++ b/x86/mov-2.html @@ -0,0 +1,143 @@ + +MOV + — Move to/from Debug Registers

MOV + — Move to/from Debug Registers

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 21/r MOV r32, DR0–DR7MRN.E.ValidMove debug register to r32.
0F 21/r MOV r64, DR0–DR7MRValidN.E.Move extended debug register to r64.
0F 23 /r MOV DR0–DR7, r32RMN.E.ValidMove r32 to debug register.
0F 23 /r MOV DR0–DR7, r64RMValidN.E.Move r64 to extended debug register.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MRModRM:r/m (w)ModRM:reg (r)N/AN/A
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Moves the contents of a debug register (DR0, DR1, DR2, DR3, DR4, DR5, DR6, or DR7) to a general-purpose register or vice versa. The operand size for these instructions is always 32 bits in non-64-bit modes, regardless of the operand-size attribute. (See Section 18.2, “Debug Registers”, of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A, for a detailed description of the flags and fields in the debug registers.)

+

The instructions must be executed at privilege level 0 or in real-address mode.

+

When the debug extension (DE) flag in register CR4 is clear, these instructions operate on debug registers in a manner that is compatible with Intel386 and Intel486 processors. In this mode, references to DR4 and DR5 refer to DR6 and DR7, respectively. When the DE flag in CR4 is set, attempts to reference DR4 and DR5 result in an undefined opcode (#UD) exception. (The CR4 register was added to the IA-32 Architecture beginning with the Pentium processor.)

+

At the opcode level, the reg field within the ModR/M byte specifies which of the debug registers is loaded or read. The two bits in the mod field are ignored. The r/m field specifies the general-purpose register loaded or read.

+

In 64-bit mode, the instruction’s default operation size is 64 bits. Use of the REX.B prefix permits access to additional registers (R8–R15). Use of the REX.W or 66H prefix is ignored. Use of the REX.R prefix causes an invalid-opcode exception. See the summary chart at the beginning of this section for encoding data and limits.

+

Operation + ¶ +

+
IF ((DE = 1) and (SRC or DEST = DR4 or DR5))
+    THEN
+        #UD;
+    ELSE
+        DEST := SRC;
+FI;
+
+

Flags Affected + ¶ +

+

The OF, SF, ZF, AF, PF, and CF flags are undefined.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
#UDIf CR4.DE[bit 3] = 1 (debug extensions) and a MOV instruction is executed involving DR4 or DR5.
If the LOCK prefix is used.
#DBIf any debug register is accessed while the DR7.GD[bit 13] = 1.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + +
#UDIf CR4.DE[bit 3] = 1 (debug extensions) and a MOV instruction is executed involving DR4 or DR5.
If the LOCK prefix is used.
#DBIf any debug register is accessed while the DR7.GD[bit 13] = 1.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#GP(0)The debug registers cannot be loaded or read when in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
If an attempt is made to write a 1 to any of bits 63:32 in DR6.
If an attempt is made to write a 1 to any of bits 63:32 in DR7.
#UDIf CR4.DE[bit 3] = 1 (debug extensions) and a MOV instruction is executed involving DR4 or DR5.
If the LOCK prefix is used.
If the REX.R prefix is used.
#DBIf any debug register is accessed while the DR7.GD[bit 13] = 1.
diff --git a/x86/mov.html b/x86/mov.html new file mode 100644 index 0000000..8d572d5 --- /dev/null +++ b/x86/mov.html @@ -0,0 +1,490 @@ + +MOV + — Move

MOV + — Move

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
88 /rMOV r/m8, r8MRValidValidMove r8 to r/m8.
REX + 88 /rMOV r/m81, r81MRValidN.E.Move r8 to r/m8.
89 /rMOV r/m16, r16MRValidValidMove r16 to r/m16.
89 /rMOV r/m32, r32MRValidValidMove r32 to r/m32.
REX.W + 89 /rMOV r/m64, r64MRValidN.E.Move r64 to r/m64.
8A /rMOV r8, r/m8RMValidValidMove r/m8 to r8.
REX + 8A /rMOV r81, r/m81RMValidN.E.Move r/m8 to r8.
8B /rMOV r16, r/m16RMValidValidMove r/m16 to r16.
8B /rMOV r32, r/m32RMValidValidMove r/m32 to r32.
REX.W + 8B /rMOV r64, r/m64RMValidN.E.Move r/m64 to r64.
8C /rMOV r/m16, Sreg2MRValidValidMove segment register to r/m16.
8C /rMOV r16/r32/m16, Sreg2MRValidValidMove zero extended 16-bit segment register to r16/r32/m16.
REX.W + 8C /rMOV r64/m16, Sreg2MRValidValidMove zero extended 16-bit segment register to r64/m16.
8E /rMOV Sreg, r/m162RMValidValidMove r/m16 to segment register.
REX.W + 8E /rMOV Sreg, r/m642RMValidValidMove lower 16 bits of r/m64 to segment register.
A0MOV AL, moffs83FDValidValidMove byte at (seg:offset) to AL.
REX.W + A0MOV AL, moffs83FDValidN.E.Move byte at (offset) to AL.
A1MOV AX, moffs163FDValidValidMove word at (seg:offset) to AX.
A1MOV EAX, moffs323FDValidValidMove doubleword at (seg:offset) to EAX.
REX.W + A1MOV RAX, moffs643FDValidN.E.Move quadword at (offset) to RAX.
A2MOV moffs8, ALTDValidValidMove AL to (seg:offset).
REX.W + A2MOV moffs81, ALTDValidN.E.Move AL to (offset).
A3MOV moffs163, AXTDValidValidMove AX to (seg:offset).
A3MOV moffs323, EAXTDValidValidMove EAX to (seg:offset).
REX.W + A3MOV moffs643, RAXTDValidN.E.Move RAX to (offset).
B0+ rb ibMOV r8, imm8OIValidValidMove imm8 to r8.
REX + B0+ rb ibMOV r81, imm8OIValidN.E.Move imm8 to r8.
B8+ rw iwMOV r16, imm16OIValidValidMove imm16 to r16.
B8+ rd idMOV r32, imm32OIValidValidMove imm32 to r32.
REX.W + B8+ rd ioMOV r64, imm64OIValidN.E.Move imm64 to r64.
C6 /0 ibMOV r/m8, imm8MIValidValidMove imm8 to r/m8.
REX + C6 /0 ibMOV r/m81, imm8MIValidN.E.Move imm8 to r/m8.
C7 /0 iwMOV r/m16, imm16MIValidValidMove imm16 to r/m16.
C7 /0 idMOV r/m32, imm32MIValidValidMove imm32 to r/m32.
REX.W + C7 /0 idMOV r/m64, imm32MIValidN.E.Move imm32 sign extended to 64-bits to r/m64.
+
+

1. In 64-bit mode, r/m8 can not be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

2. In 32-bit mode, the assembler may insert the 16-bit operand-size prefix with this instruction (see the following “Description” section for further information).

+

3. The moffs8, moffs16, moffs32, and moffs64 operands specify a simple offset relative to the segment base, where 8, 16, 32, and 64 refer to the size of the data. The address-size attribute of the instruction determines the size of the offset, either 16, 32, or 64 bits.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MRModRM:r/m (w)ModRM:reg (r)N/AN/A
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
FDAL/AX/EAX/RAXMoffsN/AN/A
TDMoffs (w)AL/AX/EAX/RAXN/AN/A
OIopcode + rd (w)imm8/16/32/64N/AN/A
MIModRM:r/m (w)imm8/16/32/64N/AN/A
+

Description + ¶ +

+

Copies the second operand (source operand) to the first operand (destination operand). The source operand can be an immediate value, general-purpose register, segment register, or memory location; the destination register can be a general-purpose register, segment register, or memory location. Both operands must be the same size, which can be a byte, a word, a doubleword, or a quadword.

+

The MOV instruction cannot be used to load the CS register. Attempting to do so results in an invalid opcode exception (#UD). To load the CS register, use the far JMP, CALL, or RET instruction.

+

If the destination operand is a segment register (DS, ES, FS, GS, or SS), the source operand must be a valid segment selector. In protected mode, moving a segment selector into a segment register automatically causes the segment descriptor information associated with that segment selector to be loaded into the hidden (shadow) part of the segment register. While loading this information, the segment selector and segment descriptor information is validated (see the “Operation” algorithm below). The segment descriptor data is obtained from the GDT or LDT entry for the specified segment selector.

+

A NULL segment selector (values 0000-0003) can be loaded into the DS, ES, FS, and GS registers without causing a protection exception. However, any subsequent attempt to reference a segment whose corresponding segment register is loaded with a NULL value causes a general protection exception (#GP) and no memory reference occurs.

+

Loading the SS register with a MOV instruction suppresses or inhibits some debug exceptions and inhibits interrupts on the following instruction boundary. (The inhibition ends after delivery of an exception or the execution of the next instruction.) This behavior allows a stack pointer to be loaded into the ESP register with the next instruction (MOV ESP, stack-pointer value) before an event can be delivered. See Section 6.8.3, “Masking Exceptions and Interrupts When Switching Stacks,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A. Intel recommends that software use the LSS instruction to load the SS register and ESP together.

+

When executing MOV Reg, Sreg, the processor copies the content of Sreg to the 16 least significant bits of the general-purpose register. The upper bits of the destination register are zero for most IA-32 processors (Pentium Pro processors and later) and all Intel 64 processors, with the exception that bits 31:16 are undefined for Intel Quark X1000 processors, Pentium, and earlier processors.

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Use of the REX.R prefix permits access to additional registers (R8-R15). Use of the REX.W prefix promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+

Operation + ¶ +

+
DEST := SRC;
+Loading a segment register while in protected mode results in special checks and actions, as described in the following listing. These
+checks are performed on the segment selector and the segment descriptor to which it points.
+IF SS is loaded
+    THEN
+        IF segment selector is NULL
+            THEN #GP(0); FI;
+        IF segment selector index is outside descriptor table limits
+        OR segment selector's RPL ≠ CPL
+        OR segment is not a writable data segment
+        OR DPL ≠ CPL
+            THEN #GP(selector); FI;
+        IF segment not marked present
+            THEN #SS(selector);
+            ELSE
+                SS := segment selector;
+                SS := segment descriptor; FI;
+FI;
+IF DS, ES, FS, or GS is loaded with non-NULL selector
+THEN
+    IF segment selector index is outside descriptor table limits
+    OR segment is not a data or readable code segment
+    OR ((segment is a data or nonconforming code segment) AND ((RPL > DPL) or (CPL > DPL)))
+        THEN #GP(selector); FI;
+    IF segment not marked present
+        THEN #NP(selector);
+        ELSE
+            SegmentRegister := segment selector;
+            SegmentRegister := segment descriptor; FI;
+FI;
+IF DS, ES, FS, or GS is loaded with NULL selector
+    THEN
+        SegmentRegister := segment selector;
+        SegmentRegister := segment descriptor;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If attempt is made to load SS register with NULL segment selector.
If the destination operand is in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#GP(selector)If segment selector index is outside descriptor table limits.
If the SS register is being loaded and the segment selector's RPL and the segment descriptor’s DPL are not equal to the CPL.
If the SS register is being loaded and the segment pointed to is a non-writable data segment.
If the DS, ES, FS, or GS register is being loaded and the segment pointed to is not a data or readable code segment.
If the DS, ES, FS, or GS register is being loaded and the segment pointed to is a data or nonconforming code segment, and either the RPL or the CPL is greater than the DPL.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#SS(selector)If the SS register is being loaded and the segment pointed to is marked not present.
#NPIf the DS, ES, FS, or GS register is being loaded and the segment pointed to is marked not present.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf attempt is made to load the CS register.
If the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf attempt is made to load the CS register.
If the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf attempt is made to load the CS register.
If the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the memory address is in a non-canonical form.
If an attempt is made to load SS register with NULL segment selector when CPL = 3.
If an attempt is made to load SS register with NULL segment selector when CPL < 3 and CPL ≠ RPL.
#GP(selector)If segment selector index is outside descriptor table limits.
If the memory access to the descriptor table is non-canonical.
If the SS register is being loaded and the segment selector's RPL and the segment descriptor’s DPL are not equal to the CPL.
If the SS register is being loaded and the segment pointed to is a nonwritable data segment.
If the DS, ES, FS, or GS register is being loaded and the segment pointed to is not a data or readable code segment.
If the DS, ES, FS, or GS register is being loaded and the segment pointed to is a data or nonconforming code segment, but both the RPL and the CPL are greater than the DPL.
#SS(0)If the stack address is in a non-canonical form.
#SS(selector)If the SS register is being loaded and the segment pointed to is marked not present.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf attempt is made to load the CS register.
If the LOCK prefix is used.
diff --git a/x86/movapd.html b/x86/movapd.html new file mode 100644 index 0000000..b452846 --- /dev/null +++ b/x86/movapd.html @@ -0,0 +1,268 @@ + +MOVAPD + — Move Aligned Packed Double Precision Floating-Point Values

MOVAPD + — Move Aligned Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 28 /r MOVAPD xmm1, xmm2/m128AV/VSSE2Move aligned packed double precision floating-point values from xmm2/mem to xmm1.
66 0F 29 /r MOVAPD xmm2/m128, xmm1BV/VSSE2Move aligned packed double precision floating-point values from xmm1 to xmm2/mem.
VEX.128.66.0F.WIG 28 /r VMOVAPD xmm1, xmm2/m128AV/VAVXMove aligned packed double precision floating-point values from xmm2/mem to xmm1.
VEX.128.66.0F.WIG 29 /r VMOVAPD xmm2/m128, xmm1BV/VAVXMove aligned packed double precision floating-point values from xmm1 to xmm2/mem.
VEX.256.66.0F.WIG 28 /r VMOVAPD ymm1, ymm2/m256AV/VAVXMove aligned packed double precision floating-point values from ymm2/mem to ymm1.
VEX.256.66.0F.WIG 29 /r VMOVAPD ymm2/m256, ymm1BV/VAVXMove aligned packed double precision floating-point values from ymm1 to ymm2/mem.
EVEX.128.66.0F.W1 28 /r VMOVAPD xmm1 {k1}{z}, xmm2/m128CV/VAVX512VL AVX512FMove aligned packed double precision floating-point values from xmm2/m128 to xmm1 using writemask k1.
EVEX.256.66.0F.W1 28 /r VMOVAPD ymm1 {k1}{z}, ymm2/m256CV/VAVX512VL AVX512FMove aligned packed double precision floating-point values from ymm2/m256 to ymm1 using writemask k1.
EVEX.512.66.0F.W1 28 /r VMOVAPD zmm1 {k1}{z}, zmm2/m512CV/VAVX512FMove aligned packed double precision floating-point values from zmm2/m512 to zmm1 using writemask k1.
EVEX.128.66.0F.W1 29 /r VMOVAPD xmm2/m128 {k1}{z}, xmm1DV/VAVX512VL AVX512FMove aligned packed double precision floating-point values from xmm1 to xmm2/m128 using writemask k1.
EVEX.256.66.0F.W1 29 /r VMOVAPD ymm2/m256 {k1}{z}, ymm1DV/VAVX512VL AVX512FMove aligned packed double precision floating-point values from ymm1 to ymm2/m256 using writemask k1.
EVEX.512.66.0F.W1 29 /r VMOVAPD zmm2/m512 {k1}{z}, zmm1DV/VAVX512FMove aligned packed double precision floating-point values from zmm1 to zmm2/m512 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BN/AModRM:r/m (w)ModRM:reg (r)N/AN/A
CFull MemModRM:reg (w)ModRM:r/m (r)N/AN/A
DFull MemModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Moves 2, 4 or 8 double precision floating-point values from the source operand (second operand) to the destination operand (first operand). This instruction can be used to load an XMM, YMM or ZMM register from an 128-bit, 256-bit or 512-bit memory location, to store the contents of an XMM, YMM or ZMM register into a 128-bit, 256-bit or 512-bit memory location, or to move data between two XMM, two YMM or two ZMM registers.

+

When the source or destination operand is a memory operand, the operand must be aligned on a 16-byte (128-bit versions), 32-byte (256-bit version) or 64-byte (EVEX.512 encoded version) boundary or a general-protection exception (#GP) will be generated. For EVEX encoded versions, the operand must be aligned to the size of the memory operand. To move double precision floating-point values to and from unaligned memory locations, use the VMOVUPD instruction.
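A minimal sketch of the 16-byte alignment requirement with the 128-bit intrinsics; the alignas-qualified buffer and the _mm_storeu_pd fallback for a possibly unaligned destination are the writer's illustration.

#include <emmintrin.h>
#include <stdalign.h>

void scale2(double *out, const double *in)
{
    /* 16-byte aligned buffer: MOVAPD-style loads/stores are legal here. */
    alignas(16) double tmp[2] = { in[0], in[1] };

    __m128d v = _mm_load_pd(tmp);              /* aligned load  (MOVAPD) */
    v = _mm_mul_pd(v, _mm_set1_pd(2.0));
    _mm_store_pd(tmp, v);                      /* aligned store (MOVAPD) */

    /* 'out' may be unaligned, so use the unaligned form (MOVUPD). */
    _mm_storeu_pd(out, v);
}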

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b otherwise instructions will #UD.

+

EVEX.512 encoded version:

+

Moves 512 bits of packed double precision floating-point values from the source operand (second operand) to the destination operand (first operand). This instruction can be used to load a ZMM register from a 512-bit float64 memory location, to store the contents of a ZMM register into a 512-bit float64 memory location, or to move data between two ZMM registers. When the source or destination operand is a memory operand, the operand must be aligned on a 64-byte boundary or a general-protection exception (#GP) will be generated. To move double precision floating-point values to and from unaligned memory locations, use the VMOVUPD instruction.

+

VEX.256 and EVEX.256 encoded versions:

+

Moves 256 bits of packed double precision floating-point values from the source operand (second operand) to the destination operand (first operand). This instruction can be used to load a YMM register from a 256-bit memory location, to store the contents of a YMM register into a 256-bit memory location, or to move data between two YMM registers. When the source or destination operand is a memory operand, the operand must be aligned on a 32-byte boundary or a general-protection exception (#GP) will be generated. To move double precision floating-point values to and from unaligned memory locations, use the VMOVUPD instruction.

+

128-bit versions:

+

Moves 128 bits of packed double precision floating-point values from the source operand (second operand) to the destination operand (first operand). This instruction can be used to load an XMM register from a 128-bit memory location, to store the contents of an XMM register into a 128-bit memory location, or to move data between two XMM registers. When the source or destination operand is a memory operand, the operand must be aligned on a 16-byte boundary or a general-protection exception (#GP) will be generated. To move double precision floating-point values to and from unaligned memory locations, use the VMOVUPD instruction.

+

128-bit Legacy SSE version: Bits (MAXVL-1:128) of the corresponding ZMM destination register remain unchanged.

+

(E)VEX.128 encoded version: Bits (MAXVL-1:128) of the destination ZMM register are zeroed.

+

Operation + ¶ +

+

VMOVAPD (EVEX Encoded Versions, Register-Copy Form) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := SRC[i+63:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE DEST[i+63:i] := 0 ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMOVAPD (EVEX Encoded Versions, Store-Form) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := SRC[i+63:i]
+        ELSE *DEST[i+63:i] remains unchanged*
+            ; merging-masking
+    FI;
+ENDFOR;
+
+

VMOVAPD (EVEX Encoded Versions, Load-Form) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := SRC[i+63:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE DEST[i+63:i] := 0 ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMOVAPD (VEX.256 Encoded Version, Load - and Register Copy) + ¶ +

+
DEST[255:0] := SRC[255:0]
+DEST[MAXVL-1:256] := 0
+
+

VMOVAPD (VEX.256 Encoded Version, Store-Form) + ¶ +

+
DEST[255:0] := SRC[255:0]
+
+

VMOVAPD (VEX.128 Encoded Version, Load - and Register Copy) + ¶ +

+
DEST[127:0] := SRC[127:0]
+DEST[MAXVL-1:128] := 0
+
+

MOVAPD (128-bit Load- and Register-Copy- Form Legacy SSE Version) + ¶ +

+
DEST[127:0] := SRC[127:0]
+DEST[MAXVL-1:128] (Unmodified)
+
+

(V)MOVAPD (128-bit Store-Form Version) + ¶ +

+
DEST[127:0] := SRC[127:0]
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMOVAPD __m512d _mm512_load_pd( void * m);
+
+
VMOVAPD __m512d _mm512_mask_load_pd(__m512d s, __mmask8 k, void * m);
+
+
VMOVAPD __m512d _mm512_maskz_load_pd( __mmask8 k, void * m);
+
+
VMOVAPD void _mm512_store_pd( void * d, __m512d a);
+
+
VMOVAPD void _mm512_mask_store_pd( void * d, __mmask8 k, __m512d a);
+
+
VMOVAPD __m256d _mm256_mask_load_pd(__m256d s, __mmask8 k, void * m);
+
+
VMOVAPD __m256d _mm256_maskz_load_pd( __mmask8 k, void * m);
+
+
VMOVAPD void _mm256_mask_store_pd( void * d, __mmask8 k, __m256d a);
+
+
VMOVAPD __m128d _mm_mask_load_pd(__m128d s, __mmask8 k, void * m);
+
+
VMOVAPD __m128d _mm_maskz_load_pd( __mmask8 k, void * m);
+
+
VMOVAPD void _mm_mask_store_pd( void * d, __mmask8 k, __m128d a);
+
+
MOVAPD __m256d _mm256_load_pd (double * p);
+
+
MOVAPD void _mm256_store_pd(double * p, __m256d a);
+
+
MOVAPD __m128d _mm_load_pd (double * p);
+
+
MOVAPD void _mm_store_pd(double * p, __m128d a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Exceptions Type1.SSE2 in Table 2-18, “Type 1 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-44, “Type E1 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B or VEX.vvvv != 1111B.
diff --git a/x86/movaps.html b/x86/movaps.html new file mode 100644 index 0000000..fb8f419 --- /dev/null +++ b/x86/movaps.html @@ -0,0 +1,265 @@ + +MOVAPS + — Move Aligned Packed Single Precision Floating-Point Values

MOVAPS + — Move Aligned Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 28 /r MOVAPS xmm1, xmm2/m128AV/VSSEMove aligned packed single precision floating-point values from xmm2/mem to xmm1.
NP 0F 29 /r MOVAPS xmm2/m128, xmm1BV/VSSEMove aligned packed single precision floating-point values from xmm1 to xmm2/mem.
VEX.128.0F.WIG 28 /r VMOVAPS xmm1, xmm2/m128AV/VAVXMove aligned packed single precision floating-point values from xmm2/mem to xmm1.
VEX.128.0F.WIG 29 /r VMOVAPS xmm2/m128, xmm1BV/VAVXMove aligned packed single precision floating-point values from xmm1 to xmm2/mem.
VEX.256.0F.WIG 28 /r VMOVAPS ymm1, ymm2/m256AV/VAVXMove aligned packed single precision floating-point values from ymm2/mem to ymm1.
VEX.256.0F.WIG 29 /r VMOVAPS ymm2/m256, ymm1BV/VAVXMove aligned packed single precision floating-point values from ymm1 to ymm2/mem.
EVEX.128.0F.W0 28 /r VMOVAPS xmm1 {k1}{z}, xmm2/m128CV/VAVX512VL AVX512FMove aligned packed single precision floating-point values from xmm2/m128 to xmm1 using writemask k1.
EVEX.256.0F.W0 28 /r VMOVAPS ymm1 {k1}{z}, ymm2/m256CV/VAVX512VL AVX512FMove aligned packed single precision floating-point values from ymm2/m256 to ymm1 using writemask k1.
EVEX.512.0F.W0 28 /r VMOVAPS zmm1 {k1}{z}, zmm2/m512CV/VAVX512FMove aligned packed single precision floating-point values from zmm2/m512 to zmm1 using writemask k1.
EVEX.128.0F.W0 29 /r VMOVAPS xmm2/m128 {k1}{z}, xmm1DV/VAVX512VL AVX512FMove aligned packed single precision floating-point values from xmm1 to xmm2/m128 using writemask k1.
EVEX.256.0F.W0 29 /r VMOVAPS ymm2/m256 {k1}{z}, ymm1DV/VAVX512VL AVX512FMove aligned packed single precision floating-point values from ymm1 to ymm2/m256 using writemask k1.
EVEX.512.0F.W0 29 /r VMOVAPS zmm2/m512 {k1}{z}, zmm1DV/VAVX512FMove aligned packed single precision floating-point values from zmm1 to zmm2/m512 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BN/AModRM:r/m (w)ModRM:reg (r)N/AN/A
CFull MemModRM:reg (w)ModRM:r/m (r)N/AN/A
DFull MemModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Moves 4, 8 or 16 single precision floating-point values from the source operand (second operand) to the destination operand (first operand). This instruction can be used to load an XMM, YMM or ZMM register from an 128-bit, 256-bit or 512-bit memory location, to store the contents of an XMM, YMM or ZMM register into a 128-bit, 256-bit or 512-bit memory location, or to move data between two XMM, two YMM or two ZMM registers.

+

When the source or destination operand is a memory operand, the operand must be aligned on a 16-byte (128-bit version), 32-byte (VEX.256 encoded version) or 64-byte (EVEX.512 encoded version) boundary or a general-protection exception (#GP) will be generated. For EVEX.512 encoded versions, the operand must be aligned to the size of the memory operand. To move single precision floating-point values to and from unaligned memory locations, use the VMOVUPS instruction.
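A brief sketch of the 32-byte requirement for the 256-bit form; the buffer name and the use of alignas are illustrative assumptions.

#include <immintrin.h>
#include <stdalign.h>

/* VMOVAPS ymm, m256 requires the memory operand to be 32-byte aligned;
 * an unaligned address here would raise #GP at run time.               */
alignas(32) static float coeffs[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };

__m256 load_coeffs(void)
{
    /* For arbitrary, possibly unaligned pointers, _mm256_loadu_ps (VMOVUPS)
     * should be used instead.                                              */
    return _mm256_load_ps(coeffs);     /* aligned load: VMOVAPS */
}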

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b otherwise instructions will #UD.

+

EVEX.512 encoded version:

+

Moves 512 bits of packed single precision floating-point values from the source operand (second operand) to the destination operand (first operand). This instruction can be used to load a ZMM register from a 512-bit float32 memory location, to store the contents of a ZMM register into a float32 memory location, or to move data between two ZMM registers. When the source or destination operand is a memory operand, the operand must be aligned on a 64-byte boundary or a general-protection exception (#GP) will be generated. To move single precision floating-point values to and from unaligned memory locations, use the VMOVUPS instruction.

+

VEX.256 and EVEX.256 encoded version:

+

Moves 256 bits of packed single precision floating-point values from the source operand (second operand) to the destination operand (first operand). This instruction can be used to load a YMM register from a 256-bit memory location, to store the contents of a YMM register into a 256-bit memory location, or to move data between two YMM registers. When the source or destination operand is a memory operand, the operand must be aligned on a 32-byte boundary or a general-protection exception (#GP) will be generated.

+

128-bit versions:

+

Moves 128 bits of packed single precision floating-point values from the source operand (second operand) to the destination operand (first operand). This instruction can be used to load an XMM register from a 128-bit memory location, to store the contents of an XMM register into a 128-bit memory location, or to move data between two XMM registers. When the source or destination operand is a memory operand, the operand must be aligned on a 16-byte boundary or a general-protection exception (#GP) will be generated. To move single precision floating-point values to and from unaligned memory locations, use the VMOVUPS instruction.

+

128-bit Legacy SSE version: Bits (MAXVL-1:128) of the corresponding ZMM destination register remain unchanged.

+

(E)VEX.128 encoded version: Bits (MAXVL-1:128) of the destination ZMM register are zeroed.

+

Operation + ¶ +

+

VMOVAPS (EVEX Encoded Versions, Register-Copy Form) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SRC[i+31:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE DEST[i+31:i] := 0 ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMOVAPS (EVEX Encoded Versions, Store Form) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SRC[i+31:i]
+        ELSE *DEST[i+31:i] remains unchanged*
+            ; merging-masking
+    FI;
+ENDFOR;
+
+

VMOVAPS (EVEX Encoded Versions, Load Form) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SRC[i+31:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE DEST[i+31:i] := 0 ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMOVAPS (VEX.256 Encoded Version, Load - and Register Copy) + ¶ +

+
DEST[255:0] := SRC[255:0]
+DEST[MAXVL-1:256] := 0
+
+

VMOVAPS (VEX.256 Encoded Version, Store-Form) + ¶ +

+
DEST[255:0] := SRC[255:0]
+
+

VMOVAPS (VEX.128 Encoded Version, Load - and Register Copy) + ¶ +

+
DEST[127:0] := SRC[127:0]
+DEST[MAXVL-1:128] := 0
+
+

MOVAPS (128-bit Load- and Register-Copy- Form Legacy SSE Version) + ¶ +

+
DEST[127:0] := SRC[127:0]
+DEST[MAXVL-1:128] (Unmodified)
+
+

(V)MOVAPS (128-bit Store-Form Version) + ¶ +

+
DEST[127:0] := SRC[127:0]
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMOVAPS __m512 _mm512_load_ps( void * m);
+
+
VMOVAPS __m512 _mm512_mask_load_ps(__m512 s, __mmask16 k, void * m);
+
+
VMOVAPS __m512 _mm512_maskz_load_ps( __mmask16 k, void * m);
+
+
VMOVAPS void _mm512_store_ps( void * d, __m512 a);
+
+
VMOVAPS void _mm512_mask_store_ps( void * d, __mmask16 k, __m512 a);
+
+
VMOVAPS __m256 _mm256_mask_load_ps(__m256 a, __mmask8 k, void * s);
+
+
VMOVAPS __m256 _mm256_maskz_load_ps( __mmask8 k, void * s);
+
+
VMOVAPS void _mm256_mask_store_ps( void * d, __mmask8 k, __m256 a);
+
+
VMOVAPS __m128 _mm_mask_load_ps(__m128 a, __mmask8 k, void * s);
+
+
VMOVAPS __m128 _mm_maskz_load_ps( __mmask8 k, void * s);
+
+
VMOVAPS void _mm_mask_store_ps( void * d, __mmask8 k, __m128 a);
+
+
MOVAPS __m256 _mm256_load_ps (float * p);
+
+
MOVAPS void _mm256_store_ps(float * p, __m256 a);
+
+
MOVAPS __m128 _mm_load_ps (float * p);
+
+
MOVAPS void _mm_store_ps(float * p, __m128 a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Exceptions Type1.SSE in Table 2-18, “Type 1 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.vvvv != 1111B.
+

EVEX-encoded instruction, see Table 2-44, “Type E1 Class Exception Conditions.”

diff --git a/x86/movbe.html b/x86/movbe.html new file mode 100644 index 0000000..63d4bc1 --- /dev/null +++ b/x86/movbe.html @@ -0,0 +1,204 @@ + +MOVBE + — Move Data After Swapping Bytes

MOVBE + — Move Data After Swapping Bytes

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
0F 38 F0 /r MOVBE r16, m16RMV/VMOVBEReverse byte order in m16 and move to r16.
0F 38 F0 /r MOVBE r32, m32RMV/VMOVBEReverse byte order in m32 and move to r32.
REX.W + 0F 38 F0 /r MOVBE r64, m64RMV/N.E.MOVBEReverse byte order in m64 and move to r64.
0F 38 F1 /r MOVBE m16, r16MRV/VMOVBEReverse byte order in r16 and move to m16.
0F 38 F1 /r MOVBE m32, r32MRV/VMOVBEReverse byte order in r32 and move to m32.
REX.W + 0F 38 F1 /r MOVBE m64, r64MRV/N.E.MOVBEReverse byte order in r64 and move to m64.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
MRModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Performs a byte swap operation on the data copied from the second operand (source operand) and stores the result in the first operand (destination operand). The source operand can be a general-purpose register, or memory location; the destination register can be a general-purpose register, or a memory location; however, the two operands cannot both be registers, and only one operand can be a memory location. Both operands must be the same size, which can be a word, a doubleword or quadword.

+

The MOVBE instruction is provided for swapping the bytes on a read from memory or on a write to memory; thus providing support for converting little-endian values to big-endian format and vice versa.

+

In 64-bit mode, the instruction's default operation size is 32 bits. Use of the REX.R prefix permits access to additional registers (R8-R15). Use of the REX.W prefix promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+

Operation + ¶ +

+
TEMP := SRC
+IF ( OperandSize = 16)
+    THEN
+        DEST[7:0] := TEMP[15:8];
+        DEST[15:8] := TEMP[7:0];
+    ELSE IF ( OperandSize = 32)
+        DEST[7:0] := TEMP[31:24];
+        DEST[15:8] := TEMP[23:16];
+        DEST[23:16] := TEMP[15:8];
+        DEST[31:24] := TEMP[7:0];
+    ELSE IF ( OperandSize = 64)
+        DEST[7:0] := TEMP[63:56];
+        DEST[15:8] := TEMP[55:48];
+        DEST[23:16] := TEMP[47:40];
+        DEST[31:24] := TEMP[39:32];
+        DEST[39:32] := TEMP[31:24];
+        DEST[47:40] := TEMP[23:16];
+        DEST[55:48] := TEMP[15:8];
+        DEST[63:56] := TEMP[7:0];
+FI;
+
+
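For reference, a plain-C rendering of the OperandSize = 32 case above; the note about __builtin_bswap32 is an assumption about GCC/Clang behavior, and the compiler may or may not emit MOVBE for it even with -mmovbe.

#include <stdint.h>

/* Equivalent of MOVBE r32, m32: byte-reverse the doubleword while copying. */
static uint32_t load_be32(const uint32_t *src)
{
    /* GCC/Clang would also accept __builtin_bswap32(*src), which may compile
     * to MOVBE when the target supports it.                                  */
    uint32_t t = *src;
    return ((t & 0x000000FFu) << 24) |
           ((t & 0x0000FF00u) <<  8) |
           ((t & 0x00FF0000u) >>  8) |
           ((t & 0xFF000000u) >> 24);
}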

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination operand is in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf CPUID.01H:ECX.MOVBE[bit 22] = 0.
If the LOCK prefix is used.
If REP (F3H) prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf CPUID.01H:ECX.MOVBE[bit 22] = 0.
If the LOCK prefix is used.
If REP (F3H) prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf CPUID.01H:ECX.MOVBE[bit 22] = 0.
If the LOCK prefix is used.
If REP (F3H) prefix is used.
If REPNE (F2H) prefix is used and CPUID.01H:ECX.SSE4_2[bit 20] = 0.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If the memory address is in a non-canonical form.
#SS(0)If the stack address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf CPUID.01H:ECX.MOVBE[bit 22] = 0.
If the LOCK prefix is used.
If REP (F3H) prefix is used.
diff --git a/x86/movd.movq.html b/x86/movd.movq.html new file mode 100644 index 0000000..b75f16e --- /dev/null +++ b/x86/movd.movq.html @@ -0,0 +1,278 @@ + +MOVD/MOVQ + — Move Doubleword/Move Quadword

MOVD/MOVQ + — Move Doubleword/Move Quadword

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/ En64/32-bit ModeCPUID Feature FlagDescription
NP 0F 6E /r MOVD mm, r/m32AV/VMMXMove doubleword from r/m32 to mm.
NP REX.W + 0F 6E /r MOVQ mm, r/m64AV/N.E.MMXMove quadword from r/m64 to mm.
NP 0F 7E /r MOVD r/m32, mmBV/VMMXMove doubleword from mm to r/m32.
NP REX.W + 0F 7E /r MOVQ r/m64, mmBV/N.E.MMXMove quadword from mm to r/m64.
66 0F 6E /r MOVD xmm, r/m32AV/VSSE2Move doubleword from r/m32 to xmm.
66 REX.W 0F 6E /r MOVQ xmm, r/m64AV/N.E.SSE2Move quadword from r/m64 to xmm.
66 0F 7E /r MOVD r/m32, xmmBV/VSSE2Move doubleword from xmm register to r/m32.
66 REX.W 0F 7E /r MOVQ r/m64, xmmBV/N.E.SSE2Move quadword from xmm register to r/m64.
VEX.128.66.0F.W0 6E /r VMOVD xmm1, r32/m32AV/VAVXMove doubleword from r/m32 to xmm1.
VEX.128.66.0F.W1 6E /r VMOVQ xmm1, r64/m64AV/N.E1.AVXMove quadword from r/m64 to xmm1.
VEX.128.66.0F.W0 7E /r VMOVD r32/m32, xmm1BV/VAVXMove doubleword from xmm1 register to r/m32.
VEX.128.66.0F.W1 7E /r VMOVQ r64/m64, xmm1BV/N.E1.AVXMove quadword from xmm1 register to r/m64.
EVEX.128.66.0F.W0 6E /r VMOVD xmm1, r32/m32CV/VAVX512FMove doubleword from r/m32 to xmm1.
EVEX.128.66.0F.W1 6E /r VMOVQ xmm1, r64/m64CV/N.E.1AVX512FMove quadword from r/m64 to xmm1.
EVEX.128.66.0F.W0 7E /r VMOVD r32/m32, xmm1DV/VAVX512FMove doubleword from xmm1 register to r/m32.
EVEX.128.66.0F.W1 7E /r VMOVQ r64/m64, xmm1DV/N.E.1AVX512FMove quadword from xmm1 register to r/m64.
+
+

1. For this specific instruction, VEX.W/EVEX.W in non-64-bit mode is ignored; the instruction behaves as if the W0 version is used.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BN/AModRM:r/m (w)ModRM:reg (r)N/AN/A
CTuple1 ScalarModRM:reg (w)ModRM:r/m (r)N/AN/A
DTuple1 ScalarModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Copies a doubleword from the source operand (second operand) to the destination operand (first operand). The source and destination operands can be general-purpose registers, MMX technology registers, XMM registers, or 32-bit memory locations. This instruction can be used to move a doubleword to and from the low doubleword of an MMX technology register and a general-purpose register or a 32-bit memory location, or to and from the low doubleword of an XMM register and a general-purpose register or a 32-bit memory location. The instruction cannot be used to transfer data between MMX technology registers, between XMM registers, between general-purpose registers, or between memory locations.

+

When the destination operand is an MMX technology register, the source operand is written to the low doubleword of the register, and the register is zero-extended to 64 bits. When the destination operand is an XMM register, the source operand is written to the low doubleword of the register, and the register is zero-extended to 128 bits.

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Use of the REX.R prefix permits access to additional registers (R8-R15). Use of the REX.W prefix promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+

MOVD/Q with XMM destination:

+

Moves a dword/qword integer from the source operand and stores it in the low 32/64-bits of the destination XMM register. The upper bits of the destination are zeroed. The source operand can be a 32/64-bit register or 32/64-bit memory location.

+

128-bit Legacy SSE version: Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged. Qword operation requires the use of REX.W=1.

+

VEX.128 encoded version: Bits (MAXVL-1:128) of the destination register are zeroed. Qword operation requires the use of VEX.W=1.

+

EVEX.128 encoded version: Bits (MAXVL-1:128) of the destination register are zeroed. Qword operation requires the use of EVEX.W=1.

+

MOVD/Q with 32/64 reg/mem destination:

+

Stores the low dword/qword of the source XMM register to 32/64-bit memory location or general-purpose register. Qword operation requires the use of REX.W=1, VEX.W=1, or EVEX.W=1.

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b otherwise instructions will #UD.

+

If VMOVD or VMOVQ is encoded with VEX.L = 1, an attempt to execute the instruction will cause an #UD exception.
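A minimal usage sketch of the doubleword form, built on the SSE2 intrinsics listed under "Intel C/C++ Compiler Intrinsic Equivalent" below; the helper name and the _mm_add_epi32 step are illustrative only.

#include <emmintrin.h>   /* SSE2 */

int double_via_xmm(int x)
{
    __m128i v = _mm_cvtsi32_si128(x);   /* MOVD: x into low dword, upper bits zeroed */
    v = _mm_add_epi32(v, v);            /* any vector operation on the value         */
    return _mm_cvtsi128_si32(v);        /* MOVD: low dword back to a GPR             */
}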

+

Operation + ¶ +

+

MOVD (When Destination Operand is an MMX Technology Register) + ¶ +

+
DEST[31:0] := SRC;
+DEST[63:32] := 00000000H;
+
+

MOVD (When Destination Operand is an XMM Register) + ¶ +

+
DEST[31:0] := SRC;
+DEST[127:32] := 000000000000000000000000H;
+DEST[MAXVL-1:128] (Unmodified)
+
+

MOVD (When Source Operand is an MMX Technology or XMM Register) + ¶ +

+
DEST := SRC[31:0];
+
+

VMOVD (VEX-Encoded Version when Destination is an XMM Register) + ¶ +

+
DEST[31:0] := SRC[31:0]
+DEST[MAXVL-1:32] := 0
+
+

MOVQ (When Destination Operand is an XMM Register) + ¶ +

+
DEST[63:0] := SRC[63:0];
+DEST[127:64] := 0000000000000000H;
+DEST[MAXVL-1:128] (Unmodified)
+
+

MOVQ (When Destination Operand is r/m64) + ¶ +

+
DEST[63:0] := SRC[63:0];
+
+

MOVQ (When Source Operand is an XMM Register or r/m64) + ¶ +

+
DEST := SRC[63:0];
+
+

VMOVQ (VEX-Encoded Version When Destination is an XMM Register) + ¶ +

+
DEST[63:0] := SRC[63:0]
+DEST[MAXVL-1:64] := 0
+
+

VMOVD (EVEX-Encoded Version When Destination is an XMM Register) + ¶ +

+
DEST[31:0] := SRC[31:0]
+DEST[MAXVL-1:32] := 0
+
+

VMOVQ (EVEX-Encoded Version When Destination is an XMM Register) + ¶ +

+
DEST[63:0] := SRC[63:0]
+DEST[MAXVL-1:64] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
MOVD __m64 _mm_cvtsi32_si64 (int i )
+
+
MOVD int _mm_cvtsi64_si32 ( __m64m )
+
+
MOVD __m128i _mm_cvtsi32_si128 (int a)
+
+
MOVD int _mm_cvtsi128_si32 ( __m128i a)
+
+
MOVQ __int64 _mm_cvtsi128_si64(__m128i);
+
+
MOVQ __m128i _mm_cvtsi64_si128(__int64);
+
+
VMOVD __m128i _mm_cvtsi32_si128( int);
+
+
VMOVD int _mm_cvtsi128_si32( __m128i );
+
+
VMOVQ __m128i _mm_cvtsi64_si128 (__int64);
+
+
VMOVQ __int64 _mm_cvtsi128_si64(__m128i );
+
+
VMOVQ __m128i _mm_loadl_epi64( __m128i * s);
+
+
VMOVQ void _mm_storel_epi64( __m128i * d, __m128i s);
+
+

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-22, “Type 5 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-57, “Type E9NF Class Exception Conditions.”

+

Additionally:

+ + + + + +
#UDIf VEX.L = 1.
If VEX.vvvv != 1111B or EVEX.vvvv != 1111B.
diff --git a/x86/movddup.html b/x86/movddup.html new file mode 100644 index 0000000..fd7edb5 --- /dev/null +++ b/x86/movddup.html @@ -0,0 +1,259 @@ + +MOVDDUP + — Replicate Double Precision Floating-Point Values

MOVDDUP + — Replicate Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F2 0F 12 /r MOVDDUP xmm1, xmm2/m64AV/VSSE3Move double precision floating-point value from xmm2/m64 and duplicate into xmm1.
VEX.128.F2.0F.WIG 12 /r VMOVDDUP xmm1, xmm2/m64AV/VAVXMove double precision floating-point value from xmm2/m64 and duplicate into xmm1.
VEX.256.F2.0F.WIG 12 /r VMOVDDUP ymm1, ymm2/m256AV/VAVXMove even index double precision floating-point values from ymm2/mem and duplicate each element into ymm1.
EVEX.128.F2.0F.W1 12 /r VMOVDDUP xmm1 {k1}{z}, xmm2/m64BV/VAVX512VL AVX512FMove double precision floating-point value from xmm2/m64 and duplicate each element into xmm1 subject to writemask k1.
EVEX.256.F2.0F.W1 12 /r VMOVDDUP ymm1 {k1}{z}, ymm2/m256BV/VAVX512VL AVX512FMove even index double precision floating-point values from ymm2/m256 and duplicate each element into ymm1 subject to writemask k1.
EVEX.512.F2.0F.W1 12 /r VMOVDDUP zmm1 {k1}{z}, zmm2/m512BV/VAVX512FMove even index double precision floating-point values from zmm2/m512 and duplicate each element into zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BMOVDDUPModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

For 256-bit or higher versions: Duplicates even-indexed double precision floating-point values from the source operand (the second operand) into adjacent pairs and stores them in the destination operand (the first operand).

+

For 128-bit versions: Duplicates the low double precision floating-point value from the source operand (the second operand) and stores it in the destination operand (the first operand).

+

128-bit Legacy SSE version: Bits (MAXVL-1:128) of the corresponding destination register are unchanged. The source operand is XMM register or a 64-bit memory location.

+

VEX.128 and EVEX.128 encoded version: Bits (MAXVL-1:128) of the destination register are zeroed. The source operand is XMM register or a 64-bit memory location. The destination is updated conditionally under the writemask for EVEX version.

+

VEX.256 and EVEX.256 encoded version: Bits (MAXVL-1:256) of the destination register are zeroed. The source operand is YMM register or a 256-bit memory location. The destination is updated conditionally under the write-mask for EVEX version.

+

EVEX.512 encoded version: The destination is updated according to the writemask. The source operand is ZMM register or a 512-bit memory location.

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b otherwise instructions will #UD.
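A minimal sketch of the 128-bit form using the _mm_movedup_pd intrinsic listed below; the helper name and the multiply step are illustrative only.

#include <pmmintrin.h>   /* SSE3 */

/* Duplicate the low double of the source into both lanes, then square it. */
__m128d square_low(__m128d x)
{
    __m128d lo2 = _mm_movedup_pd(x);    /* MOVDDUP: result is [x0, x0] */
    return _mm_mul_pd(lo2, lo2);        /* [x0*x0, x0*x0]              */
}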

+
[Figure diagram: SRC = X3 X2 X1 X0; DEST = X2 X2 X0 X0 (even-indexed elements duplicated into adjacent pairs)]
Figure 4-2. VMOVDDUP Operation
+

Operation + ¶ +

+

VMOVDDUP (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+TMP_SRC[63:0] := SRC[63:0]
+TMP_SRC[127:64] := SRC[63:0]
+IF VL >= 256
+    TMP_SRC[191:128] := SRC[191:128]
+    TMP_SRC[255:192] := SRC[191:128]
+FI;
+IF VL >= 512
+    TMP_SRC[319:256] := SRC[319:256]
+    TMP_SRC[383:320] := SRC[319:256]
+    TMP_SRC[447:384] := SRC[447:384]
+    TMP_SRC[511:448] := SRC[447:384]
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_SRC[i+63:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+                        ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMOVDDUP (VEX.256 Encoded Version) + ¶ +

+
DEST[63:0] := SRC[63:0]
+DEST[127:64] := SRC[63:0]
+DEST[191:128] := SRC[191:128]
+DEST[255:192] := SRC[191:128]
+DEST[MAXVL-1:256] := 0
+
+

VMOVDDUP (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := SRC[63:0]
+DEST[127:64] := SRC[63:0]
+DEST[MAXVL-1:128] := 0
+
+

MOVDDUP (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := SRC[63:0]
+DEST[127:64] := SRC[63:0]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMOVDDUP __m512d _mm512_movedup_pd( __m512d a);
+
+
VMOVDDUP __m512d _mm512_mask_movedup_pd(__m512d s, __mmask8 k, __m512d a);
+
+
VMOVDDUP __m512d _mm512_maskz_movedup_pd( __mmask8 k, __m512d a);
+
+
VMOVDDUP __m256d _mm256_mask_movedup_pd(__m256d s, __mmask8 k, __m256d a);
+
+
VMOVDDUP __m256d _mm256_maskz_movedup_pd( __mmask8 k, __m256d a);
+
+
VMOVDDUP __m128d _mm_mask_movedup_pd(__m128d s, __mmask8 k, __m128d a);
+
+
VMOVDDUP __m128d _mm_maskz_movedup_pd( __mmask8 k, __m128d a);
+
+
MOVDDUP __m256d _mm256_movedup_pd (__m256d a);
+
+
MOVDDUP __m128d _mm_movedup_pd (__m128d a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-22, “Type 5 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-52, “Type E5NF Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B or VEX.vvvv != 1111B.
diff --git a/x86/movdir64b.html b/x86/movdir64b.html new file mode 100644 index 0000000..c78691e --- /dev/null +++ b/x86/movdir64b.html @@ -0,0 +1,126 @@ + +MOVDIR64B + — Move 64 Bytes as Direct Store

MOVDIR64B + — Move 64 Bytes as Direct Store

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 38 F8 /r MOVDIR64B r16/r32/r64, m512AV/VMOVDIR64BMove 64-bytes as direct-store with guaranteed 64-byte write atomicity from the source memory operand address to destination memory address specified as offset to ES segment in the register operand.
+

Instruction Operand Encoding1 + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Moves 64-bytes as direct-store with 64-byte write atomicity from source memory address to destination memory address. The source operand is a normal memory operand. The destination operand is a memory location specified in a general-purpose register. The register content is interpreted as an offset into ES segment without any segment override. In 64-bit mode, the register operand width is 64-bits (32-bits with 67H prefix). Outside of 64-bit mode, the register width is 32-bits when CS.D=1 (16-bits with 67H prefix), and 16-bits when CS.D=0 (32-bits with 67H prefix). MOVDIR64B requires the destination address to be 64-byte aligned. No alignment restriction is enforced for source operand.

+

MOVDIR64B first reads 64-bytes from the source memory address. It then performs a 64-byte direct-store operation to the destination address. The load operation follows normal read ordering based on source address memory-type. The direct-store is implemented by using the write combining (WC) memory type protocol for writing data. Using this protocol, the processor does not write the data into the cache hierarchy, nor does it fetch the corresponding cache line from memory into the cache hierarchy. If the destination address is cached, the line is written-back (if modified) and invalidated from the cache, before the direct-store.

+

Unlike stores with non-temporal hint, which allow UC/WP memory-type for the destination to override the non-temporal hint, direct-stores always follow WC memory type protocol irrespective of destination address memory type (including UC/WP types). Unlike WC stores and stores with non-temporal hint, direct-stores are eligible for immediate eviction from the write-combining buffer, and thus not combined with younger stores (including direct-stores) to the same address. Older WC and non-temporal stores held in the write-combining buffer may be combined with younger direct stores to the same address. Direct stores are weakly ordered relative to other stores. Software that desires stronger ordering should use a fencing instruction (MFENCE or SFENCE) before or after a direct store to enforce the ordering desired.

+

There is no atomicity guarantee provided for the 64-byte load operation from source address, and processor implementations may use multiple load operations to read the 64-bytes. The 64-byte direct-store issued by MOVDIR64B guarantees 64-byte write-completion atomicity. This means that the data arrives at the destination in a single undivided 64-byte write transaction.

+

Availability of the MOVDIR64B instruction is indicated by the presence of the CPUID feature flag MOVDIR64B (bit 28 of the ECX register in leaf 07H, see “CPUID—CPU Identification” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A).
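A hedged sketch of the typical use case, posting a 64-byte work descriptor to a device portal with the _movdir64b intrinsic shown under "Intel C/C++ Compiler Intrinsic Equivalent" below. The descriptor layout, the portal pointer, and the -mmovdir64b compile flag are assumptions for illustration, not part of this reference.

#include <immintrin.h>
#include <stdint.h>

/* Hypothetical 64-byte work descriptor; the device and its portal are assumptions. */
struct work_desc { uint64_t words[8]; };

void submit(volatile void *mmio_portal,       /* must be 64-byte aligned */
            const struct work_desc *desc)     /* source may be unaligned */
{
    /* One 64-byte direct store with guaranteed 64-byte write atomicity. */
    _movdir64b((void *)mmio_portal, desc);
}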

+
+

1. The Mod field of the ModR/M byte cannot have value 11B.

+

Operation + ¶ +

+
DEST := SRC;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
MOVDIR64B void _movdir64b(void *dst, const void* src)
+
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#GP(0)For an illegal memory operand effective address in the CS, DS, ES, FS or GS segments.
If address in destination (register) operand is not aligned to a 64-byte boundary.
#SS(0)For an illegal address in the SS segment.
#PF(fault-code) For a page fault.
#UDIf CPUID.07H.0H:ECX.MOVDIR64B[bit 28] = 0.
If LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + +
#GPIf any part of the operand lies outside the effective address space from 0 to FFFFH.
If address in destination (register) operand is not aligned to a 64-byte boundary.
#UDIf CPUID.07H.0H:ECX.MOVDIR64B[bit 28] = 0.
If LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in real address mode.

+ + + +
#PF(fault-code) For a page fault.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#SS(0)If memory address referencing the SS segment is in non-canonical form.
#GP(0)If the memory address is in non-canonical form.
If address in destination (register) operand is not aligned to a 64-byte boundary.
#PF(fault-code) For a page fault.
#UDIf CPUID.07H.0H:ECX.MOVDIR64B[bit 28] = 0.
If LOCK prefix is used.
diff --git a/x86/movdiri.html b/x86/movdiri.html new file mode 100644 index 0000000..e872293 --- /dev/null +++ b/x86/movdiri.html @@ -0,0 +1,136 @@ + +MOVDIRI + — Move Doubleword as Direct Store

MOVDIRI + — Move Doubleword as Direct Store

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 38 F9 /r MOVDIRI m32, r32AV/VMOVDIRIMove doubleword from r32 to m32 using direct store.
NP REX.W + 0F 38 F9 /r MOVDIRI m64, r64AV/N.E.MOVDIRIMove quadword from r64 to m64 using direct store.
+

Instruction Operand Encoding1 + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Moves the doubleword integer in the source operand (second operand) to the destination operand (first operand) using a direct-store operation. The source operand is a general purpose register. The destination operand is a 32-bit memory location. In 64-bit mode, the instruction’s default operation size is 32 bits. Use of the REX.R prefix permits access to additional registers (R8-R15). Use of the REX.W prefix promotes operation to 64 bits. See summary chart at the beginning of this section for encoding data and limits.

+

The direct-store is implemented by using write combining (WC) memory type protocol for writing data. Using this protocol, the processor does not write the data into the cache hierarchy, nor does it fetch the corresponding cache line from memory into the cache hierarchy. If the destination address is cached, the line is written-back (if modified) and invalidated from the cache, before the direct-store. Unlike stores with non-temporal hint that allow uncached (UC) and write-protected (WP) memory-type for the destination to override the non-temporal hint, direct-stores always follow WC memory type protocol irrespective of the destination address memory type (including UC and WP types).

+

Unlike WC stores and stores with non-temporal hint, direct-stores are eligible for immediate eviction from the write-combining buffer, and thus not combined with younger stores (including direct-stores) to the same address. Older WC and non-temporal stores held in the write-combining buffer may be combined with younger direct stores to the same address. Direct stores are weakly ordered relative to other stores. Software that desires stronger ordering should use a fencing instruction (MFENCE or SFENCE) before or after a direct store to enforce the ordering desired.

+

Direct-stores issued by MOVDIRI to a destination aligned to a 4-byte boundary (8-byte boundary if used with REX.W prefix) guarantee 4-byte (8-byte with REX.W prefix) write-completion atomicity. This means that the data arrives at the destination in a single undivided 4-byte (or 8-byte) write transaction. If the destination is not aligned for the write size, the direct-stores issued by MOVDIRI are split and arrive at the destination in two parts. Each part of such split direct-store will not merge with younger stores but can arrive at the destination in either order. Availability of the MOVDIRI instruction is indicated by the presence of the CPUID feature flag MOVDIRI (bit 27 of the ECX register in leaf 07H, see “CPUID—CPU Identification” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A).
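A hedged sketch of a doorbell-style write using the _directstoreu_u32 intrinsic listed below; the doorbell register and the choice to fence beforehand are illustrative assumptions, and the -mmovdiri compile flag is a toolchain assumption.

#include <immintrin.h>
#include <stdint.h>

/* Hypothetical doorbell write: one 4-byte direct store, preceded by SFENCE so
   that earlier stores are globally observed first (direct stores are weakly
   ordered relative to other stores). */
void ring_doorbell(volatile uint32_t *doorbell, uint32_t tail)
{
    _mm_sfence();                               /* order prior stores before the doorbell */
    _directstoreu_u32((void *)doorbell, tail);  /* MOVDIRI */
}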

+
+

1. The Mod field of the ModR/M byte cannot have value 11B.

+

Operation + ¶ +

+
DEST := SRC;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
MOVDIRI void _directstoreu_u32(void *dst, uint32_t val)
+
+
MOVDIRI void _directstoreu_u64(void *dst, uint64_t val)
+
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)For an illegal memory operand effective address in the CS, DS, ES, FS or GS segments.
#SS(0)For an illegal address in the SS segment.
#PF(fault-code) For a page fault.
#UDIf CPUID.07H.0H:ECX.MOVDIRI[bit 27] = 0.
If LOCK prefix or operand-size (66H) prefix is used.
#ACIf alignment checking is enabled and an unaligned memory reference made while in current privilege level 3.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + +
#GPIf any part of the operand lies outside the effective address space from 0 to FFFFH.
#UDIf CPUID.07H.0H:ECX.MOVDIRI[bit 27] = 0.
If LOCK prefix or operand-size (66H) prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in real address mode.

+ + + + + + +
#PF(fault-code) For a page fault.
#ACIf alignment checking is enabled and an unaligned memory reference made while in current privilege level 3.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#SS(0)If memory address referencing the SS segment is in non-canonical form.
#GP(0)If the memory address is in non-canonical form.
#PF(fault-code) For a page fault.
#UDIf CPUID.07H.0H:ECX.MOVDIRI[bit 27] = 0.
If LOCK prefix or operand-size (66H) prefix is used.
#ACIf alignment checking is enabled and an unaligned memory reference made while in current privilege level 3.
diff --git a/x86/movdq2q.html b/x86/movdq2q.html new file mode 100644 index 0000000..a849c81 --- /dev/null +++ b/x86/movdq2q.html @@ -0,0 +1,95 @@ + +MOVDQ2Q + — Move Quadword from XMM to MMX Technology Register

MOVDQ2Q + — Move Quadword from XMM to MMX Technology Register

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
F2 0F D6 /rMOVDQ2Q mm, xmmRMValidValidMove low quadword from xmm to mmx register.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Moves the low quadword from the source operand (second operand) to the destination operand (first operand). The source operand is an XMM register and the destination operand is an MMX technology register.

+

This instruction causes a transition from x87 FPU to MMX technology operation (that is, the x87 FPU top-of-stack pointer is set to 0 and the x87 FPU tag word is set to all 0s [valid]). If this instruction is executed while an x87 FPU floating-point exception is pending, the exception is handled before the MOVDQ2Q instruction is executed.

+

In 64-bit mode, use of the REX.R prefix permits this instruction to access additional registers (XMM8-XMM15).
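A minimal sketch using the _mm_movepi64_pi64 intrinsic listed below. The follow-on MMX operation and conversion are illustrative; note the _mm_empty() (EMMS) call, which is assumed necessary before any later x87 code because this instruction switches the processor into MMX state.

#include <emmintrin.h>   /* SSE2 */
#include <mmintrin.h>    /* MMX  */

int low_qword_sum(__m128i v)
{
    __m64 q   = _mm_movepi64_pi64(v);   /* MOVDQ2Q: low 64 bits of v into an MMX register */
    __m64 sum = _mm_add_pi32(q, q);     /* example MMX operation                           */
    int   lo  = _mm_cvtsi64_si32(sum);  /* low 32 bits back to a general-purpose register  */
    _mm_empty();                        /* EMMS: leave MMX state before later x87 use      */
    return lo;
}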

+

Operation + ¶ +

+
DEST := SRC[63:0];
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
MOVDQ2Q __m64 _mm_movepi64_pi64 ( __m128i a)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#NMIf CR0.TS[bit 3] = 1.
#UDIf CR0.EM[bit 2] = 1.
If CR4.OSFXSR[bit 9] = 0.
If CPUID.01H:EDX.SSE2[bit 26] = 0.
If the LOCK prefix is used.
#MFIf there is a pending x87 FPU exception.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/movdqa.vmovdqa32.vmovdqa64.html b/x86/movdqa.vmovdqa32.vmovdqa64.html new file mode 100644 index 0000000..4854906 --- /dev/null +++ b/x86/movdqa.vmovdqa32.vmovdqa64.html @@ -0,0 +1,384 @@ + +MOVDQA/VMOVDQA32/VMOVDQA64 + — Move Aligned Packed Integer Values

MOVDQA/VMOVDQA32/VMOVDQA64 + — Move Aligned Packed Integer Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 6F /r MOVDQA xmm1, xmm2/m128AV/VSSE2Move aligned packed integer values from xmm2/mem to xmm1.
66 0F 7F /r MOVDQA xmm2/m128, xmm1BV/VSSE2Move aligned packed integer values from xmm1 to xmm2/mem.
VEX.128.66.0F.WIG 6F /r VMOVDQA xmm1, xmm2/m128AV/VAVXMove aligned packed integer values from xmm2/mem to xmm1.
VEX.128.66.0F.WIG 7F /r VMOVDQA xmm2/m128, xmm1BV/VAVXMove aligned packed integer values from xmm1 to xmm2/mem.
VEX.256.66.0F.WIG 6F /r VMOVDQA ymm1, ymm2/m256AV/VAVXMove aligned packed integer values from ymm2/mem to ymm1.
VEX.256.66.0F.WIG 7F /r VMOVDQA ymm2/m256, ymm1BV/VAVXMove aligned packed integer values from ymm1 to ymm2/mem.
EVEX.128.66.0F.W0 6F /r VMOVDQA32 xmm1 {k1}{z}, xmm2/m128CV/VAVX512VL AVX512FMove aligned packed doubleword integer values from xmm2/m128 to xmm1 using writemask k1.
EVEX.256.66.0F.W0 6F /r VMOVDQA32 ymm1 {k1}{z}, ymm2/m256CV/VAVX512VL AVX512FMove aligned packed doubleword integer values from ymm2/m256 to ymm1 using writemask k1.
EVEX.512.66.0F.W0 6F /r VMOVDQA32 zmm1 {k1}{z}, zmm2/m512CV/VAVX512FMove aligned packed doubleword integer values from zmm2/m512 to zmm1 using writemask k1.
EVEX.128.66.0F.W0 7F /r VMOVDQA32 xmm2/m128 {k1}{z}, xmm1DV/VAVX512VL AVX512FMove aligned packed doubleword integer values from xmm1 to xmm2/m128 using writemask k1.
EVEX.256.66.0F.W0 7F /r VMOVDQA32 ymm2/m256 {k1}{z}, ymm1DV/VAVX512VL AVX512FMove aligned packed doubleword integer values from ymm1 to ymm2/m256 using writemask k1.
EVEX.512.66.0F.W0 7F /r VMOVDQA32 zmm2/m512 {k1}{z}, zmm1DV/VAVX512FMove aligned packed doubleword integer values from zmm1 to zmm2/m512 using writemask k1.
EVEX.128.66.0F.W1 6F /r VMOVDQA64 xmm1 {k1}{z}, xmm2/m128CV/VAVX512VL AVX512FMove aligned packed quadword integer values from xmm2/m128 to xmm1 using writemask k1.
EVEX.256.66.0F.W1 6F /r VMOVDQA64 ymm1 {k1}{z}, ymm2/m256CV/VAVX512VL AVX512FMove aligned packed quadword integer values from ymm2/m256 to ymm1 using writemask k1.
EVEX.512.66.0F.W1 6F /r VMOVDQA64 zmm1 {k1}{z}, zmm2/m512CV/VAVX512FMove aligned packed quadword integer values from zmm2/m512 to zmm1 using writemask k1.
EVEX.128.66.0F.W1 7F /r VMOVDQA64 xmm2/m128 {k1}{z}, xmm1DV/VAVX512VL AVX512FMove aligned packed quadword integer values from xmm1 to xmm2/m128 using writemask k1.
EVEX.256.66.0F.W1 7F /r VMOVDQA64 ymm2/m256 {k1}{z}, ymm1DV/VAVX512VL AVX512FMove aligned packed quadword integer values from ymm1 to ymm2/m256 using writemask k1.
EVEX.512.66.0F.W1 7F /r VMOVDQA64 zmm2/m512 {k1}{z}, zmm1DV/VAVX512FMove aligned packed quadword integer values from zmm1 to zmm2/m512 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BN/AModRM:r/m (w)ModRM:reg (r)N/AN/A
CFull MemModRM:reg (w)ModRM:r/m (r)N/AN/A
DFull MemModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b otherwise instructions will #UD.

+

EVEX encoded versions:

+

Moves 128, 256 or 512 bits of packed doubleword/quadword integer values from the source operand (the second operand) to the destination operand (the first operand). This instruction can be used to load a vector register from an int32/int64 memory location, to store the contents of a vector register into an int32/int64 memory location, or to move data between two ZMM registers. When the source or destination operand is a memory operand, the operand must be aligned on a 16 (EVEX.128)/32(EVEX.256)/64(EVEX.512)-byte boundary or a general-protection exception (#GP) will be generated. To move integer data to and from unaligned memory locations, use the VMOVDQU instruction.

+

The destination operand is updated at 32-bit (VMOVDQA32) or 64-bit (VMOVDQA64) granularity according to the writemask.

+

VEX.256 encoded version:

+

Moves 256 bits of packed integer values from the source operand (second operand) to the destination operand (first operand). This instruction can be used to load a YMM register from a 256-bit memory location, to store the contents of a YMM register into a 256-bit memory location, or to move data between two YMM registers.

+

When the source or destination operand is a memory operand, the operand must be aligned on a 32-byte boundary or a general-protection exception (#GP) will be generated. To move integer data to and from unaligned memory locations, use the VMOVDQU instruction. Bits (MAXVL-1:256) of the destination register are zeroed.

+

128-bit versions:

+

Moves 128 bits of packed integer values from the source operand (second operand) to the destination operand (first operand). This instruction can be used to load an XMM register from a 128-bit memory location, to store the contents of an XMM register into a 128-bit memory location, or to move data between two XMM registers.

+

When the source or destination operand is a memory operand, the operand must be aligned on a 16-byte boundary or a general-protection exception (#GP) will be generated. To move integer data to and from unaligned memory locations, use the VMOVDQU instruction.

+

128-bit Legacy SSE version: Bits (MAXVL-1:128) of the corresponding ZMM destination register remain unchanged.

+

VEX.128 encoded version: Bits (MAXVL-1:128) of the destination register are zeroed.
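A minimal aligned load/store sketch using the _mm_load_si128/_mm_store_si128 intrinsics listed below; the C11 _Alignas qualifier is assumed to be available.

#include <emmintrin.h>   /* SSE2 */
#include <stdint.h>

/* Aligned round-trip. The buffers must be 16-byte aligned or MOVDQA raises
   #GP; use MOVDQU (loadu/storeu) for data of unknown alignment. */
void copy_block(void)
{
    _Alignas(16) int32_t src[4] = {1, 2, 3, 4};
    _Alignas(16) int32_t dst[4];

    __m128i v = _mm_load_si128((const __m128i *)src);  /* MOVDQA load  */
    _mm_store_si128((__m128i *)dst, v);                /* MOVDQA store */
}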

+

Operation + ¶ +

+

VMOVDQA32 (EVEX Encoded Versions, Register-Copy Form) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SRC[i+31:i]
+        ELSE
+            IF *merging-masking*
+                    ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE DEST[i+31:i] := 0
+                    ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMOVDQA32 (EVEX Encoded Versions, Store-Form) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SRC[i+31:i]
+        ELSE *DEST[i+31:i] remains unchanged*
+            ; merging-masking
+    FI;
+ENDFOR;
+
+

VMOVDQA32 (EVEX Encoded Versions, Load-Form) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SRC[i+31:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE DEST[i+31:i] := 0 ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMOVDQA64 (EVEX Encoded Versions, Register-Copy Form) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := SRC[i+63:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE DEST[i+63:i] := 0 ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMOVDQA64 (EVEX Encoded Versions, Store-Form) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := SRC[i+63:i]
+        ELSE *DEST[i+63:i] remains unchanged*
+            ; merging-masking
+    FI;
+ENDFOR;
+
+

VMOVDQA64 (EVEX Encoded Versions, Load-Form) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := SRC[i+63:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE DEST[i+63:i] := 0 ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMOVDQA (VEX.256 Encoded Version, Load - and Register Copy) + ¶ +

+
DEST[255:0] := SRC[255:0]
+DEST[MAXVL-1:256] := 0
+
+

VMOVDQA (VEX.256 Encoded Version, Store-Form) + ¶ +

+
DEST[255:0] := SRC[255:0]
+
+

VMOVDQA (VEX.128 Encoded Version) + ¶ +

+
DEST[127:0] := SRC[127:0]
+DEST[MAXVL-1:128] := 0
+
+

VMOVDQA (128-bit Load- and Register-Copy- Form Legacy SSE Version) + ¶ +

+
DEST[127:0] := SRC[127:0]
+DEST[MAXVL-1:128] (Unmodified)
+
+

(V)MOVDQA (128-bit Store-Form Version) + ¶ +

+
DEST[127:0] := SRC[127:0]
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMOVDQA32 __m512i _mm512_load_epi32( void * sa);
+
+
VMOVDQA32 __m512i _mm512_mask_load_epi32(__m512i s, __mmask16 k, void * sa);
+
+
VMOVDQA32 __m512i _mm512_maskz_load_epi32( __mmask16 k, void * sa);
+
+
VMOVDQA32 void _mm512_store_epi32(void * d, __m512i a);
+
+
VMOVDQA32 void _mm512_mask_store_epi32(void * d, __mmask16 k, __m512i a);
+
+
VMOVDQA32 __m256i _mm256_mask_load_epi32(__m256i s, __mmask8 k, void * sa);
+
+
VMOVDQA32 __m256i _mm256_maskz_load_epi32( __mmask8 k, void * sa);
+
+
VMOVDQA32 void _mm256_store_epi32(void * d, __m256i a);
+
+
VMOVDQA32 void _mm256_mask_store_epi32(void * d, __mmask8 k, __m256i a);
+
+
VMOVDQA32 __m128i _mm_mask_load_epi32(__m128i s, __mmask8 k, void * sa);
+
+
VMOVDQA32 __m128i _mm_maskz_load_epi32( __mmask8 k, void * sa);
+
+
VMOVDQA32 void _mm_store_epi32(void * d, __m128i a);
+
+
VMOVDQA32 void _mm_mask_store_epi32(void * d, __mmask8 k, __m128i a);
+
+
VMOVDQA64 __m512i _mm512_load_epi64( void * sa);
+
+
VMOVDQA64 __m512i _mm512_mask_load_epi64(__m512i s, __mmask8 k, void * sa);
+
+
VMOVDQA64 __m512i _mm512_maskz_load_epi64( __mmask8 k, void * sa);
+
+
VMOVDQA64 void _mm512_store_epi64(void * d, __m512i a);
+
+
VMOVDQA64 void _mm512_mask_store_epi64(void * d, __mmask8 k, __m512i a);
+
+
VMOVDQA64 __m256i _mm256_mask_load_epi64(__m256i s, __mmask8 k, void * sa);
+
+
VMOVDQA64 __m256i _mm256_maskz_load_epi64( __mmask8 k, void * sa);
+
+
VMOVDQA64 void _mm256_store_epi64(void * d, __m256i a);
+
+
VMOVDQA64 void _mm256_mask_store_epi64(void * d, __mmask8 k, __m256i a);
+
+
VMOVDQA64 __m128i _mm_mask_load_epi64(__m128i s, __mmask8 k, void * sa);
+
+
VMOVDQA64 __m128i _mm_maskz_load_epi64( __mmask8 k, void * sa);
+
+
VMOVDQA64 void _mm_store_epi64(void * d, __m128i a);
+
+
VMOVDQA64 void _mm_mask_store_epi64(void * d, __mmask8 k, __m128i a);
+
+
MOVDQA __m256i _mm256_load_si256 (__m256i * p);
+
+
MOVDQA void _mm256_store_si256(__m256i *p, __m256i a);
+
+
MOVDQA __m128i _mm_load_si128 (__m128i * p);
+
+
MOVDQA void _mm_store_si128(__m128i *p, __m128i a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Exceptions Type1.SSE2 in Table 2-18, “Type 1 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-44, “Type E1 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B or VEX.vvvv != 1111B.
diff --git a/x86/movdqu.vmovdqu8.vmovdqu16.vmovdqu32.vmovdqu64.html b/x86/movdqu.vmovdqu8.vmovdqu16.vmovdqu32.vmovdqu64.html new file mode 100644 index 0000000..d251ce3 --- /dev/null +++ b/x86/movdqu.vmovdqu8.vmovdqu16.vmovdqu32.vmovdqu64.html @@ -0,0 +1,591 @@ + +MOVDQU/VMOVDQU8/VMOVDQU16/VMOVDQU32/VMOVDQU64 + — Move Unaligned Packed Integer Values

MOVDQU/VMOVDQU8/VMOVDQU16/VMOVDQU32/VMOVDQU64 + — Move Unaligned Packed Integer Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 6F /r MOVDQU xmm1, xmm2/m128AV/VSSE2Move unaligned packed integer values from xmm2/m128 to xmm1.
F3 0F 7F /r MOVDQU xmm2/m128, xmm1BV/VSSE2Move unaligned packed integer values from xmm1 to xmm2/m128.
VEX.128.F3.0F.WIG 6F /r VMOVDQU xmm1, xmm2/m128AV/VAVXMove unaligned packed integer values from xmm2/m128 to xmm1.
VEX.128.F3.0F.WIG 7F /r VMOVDQU xmm2/m128, xmm1BV/VAVXMove unaligned packed integer values from xmm1 to xmm2/m128.
VEX.256.F3.0F.WIG 6F /r VMOVDQU ymm1, ymm2/m256AV/VAVXMove unaligned packed integer values from ymm2/m256 to ymm1.
VEX.256.F3.0F.WIG 7F /r VMOVDQU ymm2/m256, ymm1BV/VAVXMove unaligned packed integer values from ymm1 to ymm2/m256.
EVEX.128.F2.0F.W0 6F /r VMOVDQU8 xmm1 {k1}{z}, xmm2/m128CV/VAVX512VL AVX512BWMove unaligned packed byte integer values from xmm2/m128 to xmm1 using writemask k1.
EVEX.256.F2.0F.W0 6F /r VMOVDQU8 ymm1 {k1}{z}, ymm2/m256CV/VAVX512VL AVX512BWMove unaligned packed byte integer values from ymm2/m256 to ymm1 using writemask k1.
EVEX.512.F2.0F.W0 6F /r VMOVDQU8 zmm1 {k1}{z}, zmm2/m512CV/VAVX512BWMove unaligned packed byte integer values from zmm2/m512 to zmm1 using writemask k1.
EVEX.128.F2.0F.W0 7F /r VMOVDQU8 xmm2/m128 {k1}{z}, xmm1DV/VAVX512VL AVX512BWMove unaligned packed byte integer values from xmm1 to xmm2/m128 using writemask k1.
EVEX.256.F2.0F.W0 7F /r VMOVDQU8 ymm2/m256 {k1}{z}, ymm1DV/VAVX512VL AVX512BWMove unaligned packed byte integer values from ymm1 to ymm2/m256 using writemask k1.
EVEX.512.F2.0F.W0 7F /r VMOVDQU8 zmm2/m512 {k1}{z}, zmm1DV/VAVX512BWMove unaligned packed byte integer values from zmm1 to zmm2/m512 using writemask k1.
EVEX.128.F2.0F.W1 6F /r VMOVDQU16 xmm1 {k1}{z}, xmm2/m128CV/VAVX512VL AVX512BWMove unaligned packed word integer values from xmm2/m128 to xmm1 using writemask k1.
EVEX.256.F2.0F.W1 6F /r VMOVDQU16 ymm1 {k1}{z}, ymm2/m256CV/VAVX512VL AVX512BWMove unaligned packed word integer values from ymm2/m256 to ymm1 using writemask k1.
EVEX.512.F2.0F.W1 6F /r VMOVDQU16 zmm1 {k1}{z}, zmm2/m512CV/VAVX512BWMove unaligned packed word integer values from zmm2/m512 to zmm1 using writemask k1.
EVEX.128.F2.0F.W1 7F /r VMOVDQU16 xmm2/m128 {k1}{z}, xmm1DV/VAVX512VL AVX512BWMove unaligned packed word integer values from xmm1 to xmm2/m128 using writemask k1.
EVEX.256.F2.0F.W1 7F /r VMOVDQU16 ymm2/m256 {k1}{z}, ymm1DV/VAVX512VL AVX512BWMove unaligned packed word integer values from ymm1 to ymm2/m256 using writemask k1.
EVEX.512.F2.0F.W1 7F /r VMOVDQU16 zmm2/m512 {k1}{z}, zmm1DV/VAVX512BWMove unaligned packed word integer values from zmm1 to zmm2/m512 using writemask k1.
EVEX.128.F3.0F.W0 6F /r VMOVDQU32 xmm1 {k1}{z}, xmm2/mm128CV/VAVX512VL AVX512FMove unaligned packed doubleword integer values from xmm2/m128 to xmm1 using writemask k1.
EVEX.256.F3.0F.W0 6F /r VMOVDQU32 ymm1 {k1}{z}, ymm2/m256CV/VAVX512VL AVX512FMove unaligned packed doubleword integer values from ymm2/m256 to ymm1 using writemask k1.
EVEX.512.F3.0F.W0 6F /r VMOVDQU32 zmm1 {k1}{z}, zmm2/m512CV/VAVX512FMove unaligned packed doubleword integer values from zmm2/m512 to zmm1 using writemask k1.
EVEX.128.F3.0F.W0 7F /r VMOVDQU32 xmm2/m128 {k1}{z}, xmm1DV/VAVX512VL AVX512FMove unaligned packed doubleword integer values from xmm1 to xmm2/m128 using writemask k1.
EVEX.256.F3.0F.W0 7F /r VMOVDQU32 ymm2/m256 {k1}{z}, ymm1DV/VAVX512VL AVX512FMove unaligned packed doubleword integer values from ymm1 to ymm2/m256 using writemask k1.
EVEX.512.F3.0F.W0 7F /r VMOVDQU32 zmm2/m512 {k1}{z}, zmm1DV/VAVX512FMove unaligned packed doubleword integer values from zmm1 to zmm2/m512 using writemask k1.
EVEX.128.F3.0F.W1 6F /r VMOVDQU64 xmm1 {k1}{z}, xmm2/m128CV/VAVX512VL AVX512FMove unaligned packed quadword integer values from xmm2/m128 to xmm1 using writemask k1.
EVEX.256.F3.0F.W1 6F /r VMOVDQU64 ymm1 {k1}{z}, ymm2/m256CV/VAVX512VL AVX512FMove unaligned packed quadword integer values from ymm2/m256 to ymm1 using writemask k1.
EVEX.512.F3.0F.W1 6F /r VMOVDQU64 zmm1 {k1}{z}, zmm2/m512CV/VAVX512FMove unaligned packed quadword integer values from zmm2/m512 to zmm1 using writemask k1.
EVEX.128.F3.0F.W1 7F /r VMOVDQU64 xmm2/m128 {k1}{z}, xmm1DV/VAVX512VL AVX512FMove unaligned packed quadword integer values from xmm1 to xmm2/m128 using writemask k1.
EVEX.256.F3.0F.W1 7F /r VMOVDQU64 ymm2/m256 {k1}{z}, ymm1DV/VAVX512VL AVX512FMove unaligned packed quadword integer values from ymm1 to ymm2/m256 using writemask k1.
EVEX.512.F3.0F.W1 7F /r VMOVDQU64 zmm2/m512 {k1}{z}, zmm1DV/VAVX512FMove unaligned packed quadword integer values from zmm1 to zmm2/m512 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BN/AModRM:r/m (w)ModRM:reg (r)N/AN/A
CFull MemModRM:reg (w)ModRM:r/m (r)N/AN/A
DFull MemModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b otherwise instructions will #UD.

+

EVEX encoded versions:

+

Moves 128, 256 or 512 bits of packed byte/word/doubleword/quadword integer values from the source operand (the second operand) to the destination operand (first operand). This instruction can be used to load a vector register from a memory location, to store the contents of a vector register into a memory location, or to move data between two vector registers.

+

The destination operand is updated at 8-bit (VMOVDQU8), 16-bit (VMOVDQU16), 32-bit (VMOVDQU32), or 64-bit (VMOVDQU64) granularity according to the writemask.

+

VEX.256 encoded version:

+

Moves 256 bits of packed integer values from the source operand (second operand) to the destination operand (first operand). This instruction can be used to load a YMM register from a 256-bit memory location, to store the contents of a YMM register into a 256-bit memory location, or to move data between two YMM registers.

+

Bits (MAXVL-1:256) of the destination register are zeroed.

+

128-bit versions:

+

Moves 128 bits of packed integer values from the source operand (second operand) to the destination operand (first operand). This instruction can be used to load an XMM register from a 128-bit memory location, to store the contents of an XMM register into a 128-bit memory location, or to move data between two XMM registers.

+

128-bit Legacy SSE version: Bits (MAXVL-1:128) of the corresponding destination register remain unchanged.

+

When the source or destination operand is a memory operand, the operand may be at any alignment without causing a general-protection exception (#GP) to be generated.

+

VEX.128 encoded version: Bits (MAXVL-1:128) of the destination register are zeroed.
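A minimal unaligned copy sketch using the _mm_loadu_si128/_mm_storeu_si128 intrinsics listed below; the helper name and offset parameter are illustrative only.

#include <emmintrin.h>   /* SSE2 */
#include <stdint.h>
#include <stddef.h>

/* Copy 16 bytes at an arbitrary offset; MOVDQU tolerates any alignment. */
void copy16(uint8_t *dst, const uint8_t *src, size_t off)
{
    __m128i v = _mm_loadu_si128((const __m128i *)(src + off));  /* MOVDQU load  */
    _mm_storeu_si128((__m128i *)(dst + off), v);                /* MOVDQU store */
}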

+

Operation + ¶ +

+

VMOVDQU8 (EVEX Encoded Versions, Register-Copy Form) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := SRC[i+7:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE DEST[i+7:i] := 0 ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMOVDQU8 (EVEX Encoded Versions, Store-Form) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := SRC[i+7:i]
+        ELSE *DEST[i+7:i] remains unchanged*
+            ; merging-masking
+    FI;
+ENDFOR;
+
+

VMOVDQU8 (EVEX Encoded Versions, Load-Form) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := SRC[i+7:i]
+        ELSE
+            IF *merging-masking*
+                    ; merging-masking
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE DEST[i+7:i] := 0
+                    ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMOVDQU16 (EVEX Encoded Versions, Register-Copy Form) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := SRC[i+15:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE DEST[i+15:i] := 0 ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMOVDQU16 (EVEX Encoded Versions, Store-Form) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := SRC[i+15:i]
+        ELSE *DEST[i+15:i] remains unchanged*
+            ; merging-masking
+    FI;
+ENDFOR;
+
+

VMOVDQU16 (EVEX Encoded Versions, Load-Form) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := SRC[i+15:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE DEST[i+15:i] := 0 ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMOVDQU32 (EVEX Encoded Versions, Register-Copy Form) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SRC[i+31:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE DEST[i+31:i] := 0 ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMOVDQU32 (EVEX Encoded Versions, Store-Form) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SRC[i+31:i]
+        ELSE *DEST[i+31:i] remains unchanged*
+            ; merging-masking
+    FI;
+ENDFOR;
+
+

VMOVDQU32 (EVEX Encoded Versions, Load-Form) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SRC[i+31:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE DEST[i+31:i] := 0 ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMOVDQU64 (EVEX Encoded Versions, Register-Copy Form) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := SRC[i+63:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE DEST[i+63:i] := 0 ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMOVDQU64 (EVEX Encoded Versions, Store-Form) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := SRC[i+63:i]
+        ELSE *DEST[i+63:i] remains unchanged*
+            ; merging-masking
+    FI;
+ENDFOR;
+
+

VMOVDQU64 (EVEX Encoded Versions, Load-Form) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := SRC[i+63:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE DEST[i+63:i] := 0 ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMOVDQU (VEX.256 Encoded Version, Load - and Register Copy) + ¶ +

+
DEST[255:0] := SRC[255:0]
+DEST[MAXVL-1:256] := 0
+
+

VMOVDQU (VEX.256 Encoded Version, Store-Form) + ¶ +

+
DEST[255:0] := SRC[255:0]
+
+

VMOVDQU (VEX.128 Encoded Version) + ¶ +

+
+DEST[127:0] := SRC[127:0]
+DEST[MAXVL-1:128] := 0
+
+

VMOVDQU (128-bit Load- and Register-Copy- Form Legacy SSE Version) + ¶ +

+
DEST[127:0] := SRC[127:0]
+DEST[MAXVL-1:128] (Unmodified)
+
+

(V)MOVDQU (128-bit Store-Form Version) + ¶ +

+
DEST[127:0] := SRC[127:0]
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMOVDQU16 __m512i _mm512_mask_loadu_epi16(__m512i s, __mmask32 k, void * sa);
+
+
VMOVDQU16 __m512i _mm512_maskz_loadu_epi16( __mmask32 k, void * sa);
+
+
VMOVDQU16 void _mm512_mask_storeu_epi16(void * d, __mmask32 k, __m512i a);
+
+
VMOVDQU16 __m256i _mm256_mask_loadu_epi16(__m256i s, __mmask16 k, void * sa);
+
+
VMOVDQU16 __m256i _mm256_maskz_loadu_epi16( __mmask16 k, void * sa);
+
+
VMOVDQU16 void _mm256_mask_storeu_epi16(void * d, __mmask16 k, __m256i a);
+
+
VMOVDQU16 __m128i _mm_mask_loadu_epi16(__m128i s, __mmask8 k, void * sa);
+
+
VMOVDQU16 __m128i _mm_maskz_loadu_epi16( __mmask8 k, void * sa);
+
+
VMOVDQU16 void _mm_mask_storeu_epi16(void * d, __mmask8 k, __m128i a);
+
+
VMOVDQU32 __m512i _mm512_loadu_epi32( void * sa);
+
+
VMOVDQU32 __m512i _mm512_mask_loadu_epi32(__m512i s, __mmask16 k, void * sa);
+
+
VMOVDQU32 __m512i _mm512_maskz_loadu_epi32( __mmask16 k, void * sa);
+
+
VMOVDQU32 void _mm512_storeu_epi32(void * d, __m512i a);
+
+
VMOVDQU32 void _mm512_mask_storeu_epi32(void * d, __mmask16 k, __m512i a);
+
+
VMOVDQU32 __m256i _mm256_mask_loadu_epi32(__m256i s, __mmask8 k, void * sa);
+
+
VMOVDQU32 __m256i _mm256_maskz_loadu_epi32( __mmask8 k, void * sa);
+
+
VMOVDQU32 void _mm256_storeu_epi32(void * d, __m256i a);
+
+
VMOVDQU32 void _mm256_mask_storeu_epi32(void * d, __mmask8 k, __m256i a);
+
+
VMOVDQU32 __m128i _mm_mask_loadu_epi32(__m128i s, __mmask8 k, void * sa);
+
+
VMOVDQU32 __m128i _mm_maskz_loadu_epi32( __mmask8 k, void * sa);
+
+
VMOVDQU32 void _mm_storeu_epi32(void * d, __m128i a);
+
+
VMOVDQU32 void _mm_mask_storeu_epi32(void * d, __mmask8 k, __m128i a);
+
+
VMOVDQU64 __m512i _mm512_loadu_epi64( void * sa);
+
+
VMOVDQU64 __m512i _mm512_mask_loadu_epi64(__m512i s, __mmask8 k, void * sa);
+
+
VMOVDQU64 __m512i _mm512_maskz_loadu_epi64( __mmask8 k, void * sa);
+
+
VMOVDQU64 void _mm512_storeu_epi64(void * d, __m512i a);
+
+
VMOVDQU64 void _mm512_mask_storeu_epi64(void * d, __mmask8 k, __m512i a);
+
+
VMOVDQU64 __m256i _mm256_mask_loadu_epi64(__m256i s, __mmask8 k, void * sa);
+
+
VMOVDQU64 __m256i _mm256_maskz_loadu_epi64( __mmask8 k, void * sa);
+
+
VMOVDQU64 void _mm256_storeu_epi64(void * d, __m256i a);
+
+
VMOVDQU64 void _mm256_mask_storeu_epi64(void * d, __mmask8 k, __m256i a);
+
+
VMOVDQU64 __m128i _mm_mask_loadu_epi64(__m128i s, __mmask8 k, void * sa);
+
+
VMOVDQU64 __m128i _mm_maskz_loadu_epi64( __mmask8 k, void * sa);
+
+
VMOVDQU64 void _mm_storeu_epi64(void * d, __m128i a);
+
+
VMOVDQU64 void _mm_mask_storeu_epi64(void * d, __mmask8 k, __m128i a);
+
+
VMOVDQU8 __m512i _mm512_mask_loadu_epi8(__m512i s, __mmask64 k, void * sa);
+
+
VMOVDQU8 __m512i _mm512_maskz_loadu_epi8( __mmask64 k, void * sa);
+
+
VMOVDQU8 void _mm512_mask_storeu_epi8(void * d, __mmask64 k, __m512i a);
+
+
VMOVDQU8 __m256i _mm256_mask_loadu_epi8(__m256i s, __mmask32 k, void * sa);
+
+
VMOVDQU8 __m256i _mm256_maskz_loadu_epi8( __mmask32 k, void * sa);
+
+
VMOVDQU8 void _mm256_mask_storeu_epi8(void * d, __mmask32 k, __m256i a);
+
+
VMOVDQU8 __m128i _mm_mask_loadu_epi8(__m128i s, __mmask16 k, void * sa);
+
+
VMOVDQU8 __m128i _mm_maskz_loadu_epi8( __mmask16 k, void * sa);
+
+
VMOVDQU8 void _mm_mask_storeu_epi8(void * d, __mmask16 k, __m128i a);
+
+
MOVDQU __m256i _mm256_loadu_si256 (__m256i * p);
+
+
MOVDQU void _mm256_storeu_si256(__m256i *p, __m256i a);
+
+
MOVDQU __m128i _mm_loadu_si128 (__m128i * p);
+
+
MOVDQU void _mm_storeu_si128(__m128i *p, __m128i a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B or VEX.vvvv != 1111B.
diff --git a/x86/movhlps.html b/x86/movhlps.html new file mode 100644 index 0000000..96fde6e --- /dev/null +++ b/x86/movhlps.html @@ -0,0 +1,101 @@ + +MOVHLPS + — Move Packed Single Precision Floating-Point Values High to Low

MOVHLPS + — Move Packed Single Precision Floating-Point Values High to Low

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 12 /r MOVHLPS xmm1, xmm2RMV/VSSEMove two packed single precision floating-point values from high quadword of xmm2 to low quadword of xmm1.
VEX.128.0F.WIG 12 /r VMOVHLPS xmm1, xmm2, xmm3RVMV/VAVXMerge two packed single precision floating-point values from high quadword of xmm3 and low quadword of xmm2.
EVEX.128.0F.W0 12 /r VMOVHLPS xmm1, xmm2, xmm3RVMV/VAVX512FMerge two packed single precision floating-point values from high quadword of xmm3 and low quadword of xmm2.
+

Instruction Operand Encoding1 + ¶ +

+
+

1. ModRM.MOD = 011B required.

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
RVMModRM:reg (w)VEX.vvvv (r) / EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction cannot be used for memory to register moves.

+

128-bit two-argument form:

+

Moves two packed single precision floating-point values from the high quadword of the second XMM argument (second operand) to the low quadword of the first XMM register (first argument). The quadword at bits 127:64 of the destination operand is left unchanged. Bits (MAXVL-1:128) of the corresponding destination register remain unchanged.

+

128-bit and EVEX three-argument form:

+

Moves two packed single precision floating-point values from the high quadword of the third XMM argument (third operand) to the low quadword of the destination (first operand). Copies the high quadword from the second XMM argument (second operand) to the high quadword of the destination (first operand). Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

If VMOVHLPS is encoded with VEX.L or EVEX.L’L = 1, an attempt to execute the instruction will cause an #UD exception.
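A common use of MOVHLPS is a horizontal reduction, sketched below with the SSE intrinsics _mm_movehl_ps (listed under the intrinsic equivalent), _mm_add_ps, _mm_shuffle_ps, _mm_add_ss, and _mm_cvtss_f32; the helper name is illustrative only.

#include <xmmintrin.h>   /* SSE */

/* Horizontal sum of four floats; MOVHLPS brings the upper pair down so it
   can be added to the lower pair. */
float hsum(__m128 v)
{
    __m128 hi  = _mm_movehl_ps(v, v);          /* MOVHLPS: [v3, v2] into low qword */
    __m128 s2  = _mm_add_ps(v, hi);            /* lanes 0,1 hold v0+v2, v1+v3      */
    __m128 s2b = _mm_shuffle_ps(s2, s2, 1);    /* bring lane 1 down to lane 0      */
    return _mm_cvtss_f32(_mm_add_ss(s2, s2b)); /* (v0+v2) + (v1+v3)                */
}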

+

Operation + ¶ +

+

MOVHLPS (128-bit Two-Argument Form) + ¶ +

+
DEST[63:0] := SRC[127:64]
+DEST[MAXVL-1:64] (Unmodified)
+
+

VMOVHLPS (128-bit Three-Argument Form - VEX & EVEX) + ¶ +

+
DEST[63:0] := SRC2[127:64]
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
MOVHLPS __m128 _mm_movehl_ps(__m128 a, __m128 b)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-24, “Type 7 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.L = 1.
+

EVEX-encoded instruction, see Exceptions Type E7NM.128 in Table 2-55, “Type E7NM Class Exception Conditions.”

diff --git a/x86/movhpd.html b/x86/movhpd.html new file mode 100644 index 0000000..1cc937b --- /dev/null +++ b/x86/movhpd.html @@ -0,0 +1,152 @@ + +MOVHPD + — Move High Packed Double Precision Floating-Point Value

MOVHPD + — Move High Packed Double Precision Floating-Point Value

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 16 /r MOVHPD xmm1, m64AV/VSSE2Move double precision floating-point value from m64 to high quadword of xmm1.
VEX.128.66.0F.WIG 16 /r VMOVHPD xmm2, xmm1, m64BV/VAVXMerge double precision floating-point value from m64 and the low quadword of xmm1.
EVEX.128.66.0F.W1 16 /r VMOVHPD xmm2, xmm1, m64DV/VAVX512FMerge double precision floating-point value from m64 and the low quadword of xmm1.
66 0F 17 /r MOVHPD m64, xmm1CV/VSSE2Move double precision floating-point value from high quadword of xmm1 to m64.
VEX.128.66.0F.WIG 17 /r VMOVHPD m64, xmm1CV/VAVXMove double precision floating-point value from high quadword of xmm1 to m64.
EVEX.128.66.0F.W1 17 /r VMOVHPD m64, xmm1EV/VAVX512FMove double precision floating-point value from high quadword of xmm1 to m64.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CN/AModRM:r/m (w)ModRM:reg (r)N/AN/A
DTuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
ETuple1 ScalarModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

This instruction cannot be used for register to register or memory to memory moves.

+

128-bit Legacy SSE load:

+

Moves a double precision floating-point value from the source 64-bit memory operand and stores it in the high 64-bits of the destination XMM register. The lower 64 bits of the XMM register are preserved. Bits (MAXVL-1:128) of the corresponding destination register are preserved.

+

VEX.128 & EVEX encoded load:

+

Loads a double precision floating-point value from the source 64-bit memory operand (the third operand) and stores it in the upper 64-bits of the destination XMM register (first operand). The low 64-bits from the first source operand (second operand) are copied to the low 64-bits of the destination. Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

128-bit store:

+

Stores a double precision floating-point value from the high 64-bits of the XMM register source (second operand) to the 64-bit memory location (first operand).

+

Note: VMOVHPD (store) (VEX.128.66.0F 17 /r) is legal and has the same behavior as the existing 66 0F 17 store. For VMOVHPD (store) VEX.vvvv and EVEX.vvvv are reserved and must be 1111b otherwise instruction will #UD.

+

If VMOVHPD is encoded with VEX.L or EVEX.L’L= 1, an attempt to execute the instruction encoded with VEX.L or EVEX.L’L= 1 will cause an #UD exception.

+

Operation + ¶ +

+

MOVHPD (128-bit Legacy SSE Load) + ¶ +

+
DEST[63:0] (Unmodified)
+DEST[127:64] := SRC[63:0]
+DEST[MAXVL-1:128] (Unmodified)
+
+

VMOVHPD (VEX.128 & EVEX Encoded Load) + ¶ +

+
DEST[63:0] := SRC1[63:0]
+DEST[127:64] := SRC2[63:0]
+DEST[MAXVL-1:128] := 0
+
+

VMOVHPD (Store) + ¶ +

+
DEST[63:0] := SRC[127:64]
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
MOVHPD __m128d _mm_loadh_pd ( __m128d a, double *p)
+
+
MOVHPD void _mm_storeh_pd (double *p, __m128d a)
+
+
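A short C sketch (illustrative only; assumes <immintrin.h> and SSE2) using both intrinsic forms to gather two doubles into one register and read the high lane back:

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    double lo = 1.5, hi = 2.5, out;
    __m128d v = _mm_load_sd(&lo);   /* [lo, 0.0] */
    v = _mm_loadh_pd(v, &hi);       /* MOVHPD load form: [lo, hi] */
    _mm_storeh_pd(&out, v);         /* MOVHPD store form: writes the high quadword */
    printf("%f\n", out);            /* prints 2.500000 */
    return 0;
}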

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-22, “Type 5 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.L = 1.
+

EVEX-encoded instruction, see Table 2-57, “Type E9NF Class Exception Conditions.”

diff --git a/x86/movhps.html b/x86/movhps.html new file mode 100644 index 0000000..7bd586b --- /dev/null +++ b/x86/movhps.html @@ -0,0 +1,152 @@ + +MOVHPS + — Move High Packed Single Precision Floating-Point Values

MOVHPS + — Move High Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 16 /r MOVHPS xmm1, m64AV/VSSEMove two packed single precision floating-point values from m64 to high quadword of xmm1.
VEX.128.0F.WIG 16 /r VMOVHPS xmm2, xmm1, m64BV/VAVXMerge two packed single precision floating-point values from m64 and the low quadword of xmm1.
EVEX.128.0F.W0 16 /r VMOVHPS xmm2, xmm1, m64DV/VAVX512FMerge two packed single precision floating-point values from m64 and the low quadword of xmm1.
NP 0F 17 /r MOVHPS m64, xmm1CV/VSSEMove two packed single precision floating-point values from high quadword of xmm1 to m64.
VEX.128.0F.WIG 17 /r VMOVHPS m64, xmm1CV/VAVXMove two packed single precision floating-point values from high quadword of xmm1 to m64.
EVEX.128.0F.W0 17 /r VMOVHPS m64, xmm1EV/VAVX512FMove two packed single precision floating-point values from high quadword of xmm1 to m64.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CN/AModRM:r/m (w)ModRM:reg (r)N/AN/A
DTuple2ModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
ETuple2ModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

This instruction cannot be used for register to register or memory to memory moves.

+

128-bit Legacy SSE load:

+

Moves two packed single precision floating-point values from the source 64-bit memory operand and stores them in the high 64-bits of the destination XMM register. The lower 64 bits of the XMM register are preserved. Bits (MAXVL-1:128) of the corresponding destination register are preserved.

+

VEX.128 & EVEX encoded load:

+

Loads two packed single precision floating-point values from the source 64-bit memory operand (the third operand) and stores them in the upper 64-bits of the destination XMM register (first operand). The low 64-bits from the first source operand (the second operand) are copied to the lower 64-bits of the destination. Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

128-bit store:

+

Stores two packed single precision floating-point values from the high 64-bits of the XMM register source (second operand) to the 64-bit memory location (first operand).

+

Note: VMOVHPS (store) (VEX.128.0F 17 /r) is legal and has the same behavior as the existing 0F 17 store. For VMOVHPS (store) VEX.vvvv and EVEX.vvvv are reserved and must be 1111b otherwise instruction will #UD.

+

If VMOVHPS is encoded with VEX.L or EVEX.L’L= 1, an attempt to execute the instruction encoded with VEX.L or EVEX.L’L= 1 will cause an #UD exception.

+

Operation + ¶ +

+

MOVHPS (128-bit Legacy SSE Load) + ¶ +

+
DEST[63:0] (Unmodified)
+DEST[127:64] := SRC[63:0]
+DEST[MAXVL-1:128] (Unmodified)
+
+

VMOVHPS (VEX.128 and EVEX Encoded Load) + ¶ +

+
DEST[63:0] := SRC1[63:0]
+DEST[127:64] := SRC2[63:0]
+DEST[MAXVL-1:128] := 0
+
+

VMOVHPS (Store) + ¶ +

+
DEST[63:0] := SRC[127:64]
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
MOVHPS __m128 _mm_loadh_pi ( __m128 a, __m64 *p)
+
+
MOVHPS void _mm_storeh_pi (__m64 *p, __m128 a)
+
+
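A small C sketch (illustrative only; assumes <immintrin.h> and SSE; the __m64* casts are the conventional way to hand 64-bit halves to these intrinsics) that builds a 4-float vector from two separate float pairs:

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    float lo_pair[2] = { 1.0f, 2.0f };
    float hi_pair[2] = { 3.0f, 4.0f };
    float out[4];

    __m128 v = _mm_loadl_pi(_mm_setzero_ps(), (const __m64 *)lo_pair); /* MOVLPS load */
    v = _mm_loadh_pi(v, (const __m64 *)hi_pair);                       /* MOVHPS load */
    _mm_storeu_ps(out, v);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);           /* 1 2 3 4 */
    return 0;
}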

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-22, “Type 5 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.L = 1.
+

EVEX-encoded instruction, see Table 2-57, “Type E9NF Class Exception Conditions.”

diff --git a/x86/movlhps.html b/x86/movlhps.html new file mode 100644 index 0000000..4261421 --- /dev/null +++ b/x86/movlhps.html @@ -0,0 +1,102 @@ + +MOVLHPS + — Move Packed Single Precision Floating-Point Values Low to High

MOVLHPS + — Move Packed Single Precision Floating-Point Values Low to High

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 16 /r MOVLHPS xmm1, xmm2RMV/VSSEMove two packed single precision floating-point values from low quadword of xmm2 to high quadword of xmm1.
VEX.128.0F.WIG 16 /r VMOVLHPS xmm1, xmm2, xmm3RVMV/VAVXMerge two packed single precision floating-point values from low quadword of xmm3 and low quadword of xmm2.
EVEX.128.0F.W0 16 /r VMOVLHPS xmm1, xmm2, xmm3RVMV/VAVX512FMerge two packed single precision floating-point values from low quadword of xmm3 and low quadword of xmm2.
+

Instruction Operand Encoding1 + ¶ +

+
+

1. ModRM.MOD = 011B required

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
RVMModRM:reg (w)VEX.vvvv (r) / EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction cannot be used for memory to register moves.

+

128-bit two-argument form:

+

Moves two packed single precision floating-point values from the low quadword of the second XMM argument (second operand) to the high quadword of the first XMM register (first argument). The low quadword of the destination operand is left unchanged. Bits (MAXVL-1:128) of the corresponding destination register are unmodified.

+

128-bit three-argument forms:

+

Moves two packed single precision floating-point values from the low quadword of the third XMM argument (third operand) to the high quadword of the destination (first operand). Copies the low quadword from the second XMM argument (second operand) to the low quadword of the destination (first operand). Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

If VMOVLHPS is encoded with VEX.L or EVEX.L’L= 1, an attempt to execute the instruction encoded with VEX.L or EVEX.L’L= 1 will cause an #UD exception.

+

Operation + ¶ +

+

MOVLHPS (128-bit Two-Argument Form) + ¶ +

+
DEST[63:0] (Unmodified)
+DEST[127:64] := SRC[63:0]
+DEST[MAXVL-1:128] (Unmodified)
+
+

VMOVLHPS (128-bit Three-Argument Form - VEX & EVEX) + ¶ +

+
DEST[63:0] := SRC1[63:0]
+DEST[127:64] := SRC2[63:0]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
MOVLHPS __m128 _mm_movelh_ps(__m128 a, __m128 b)
+
+
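A brief C sketch (illustrative only; assumes <immintrin.h> and SSE) packing two 2-element vectors into one register with _mm_movelh_ps:

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128 a = _mm_set_ps(0.0f, 0.0f, 2.0f, 1.0f); /* lanes: 1, 2, _, _ */
    __m128 b = _mm_set_ps(0.0f, 0.0f, 4.0f, 3.0f); /* lanes: 3, 4, _, _ */
    __m128 c = _mm_movelh_ps(a, b);                /* MOVLHPS: lanes 1, 2, 3, 4 */
    float out[4];
    _mm_storeu_ps(out, c);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}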

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-24, “Type 7 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.L = 1.
+

EVEX-encoded instruction, see Exceptions Type E7NM.128 in Table 2-55, “Type E7NM Class Exception Conditions.”

diff --git a/x86/movlpd.html b/x86/movlpd.html new file mode 100644 index 0000000..fbc8143 --- /dev/null +++ b/x86/movlpd.html @@ -0,0 +1,151 @@ + +MOVLPD + — Move Low Packed Double Precision Floating-Point Value

MOVLPD + — Move Low Packed Double Precision Floating-Point Value

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 12 /r MOVLPD xmm1, m64AV/VSSE2Move double precision floating-point value from m64 to low quadword of xmm1.
VEX.128.66.0F.WIG 12 /r VMOVLPD xmm2, xmm1, m64BV/VAVXMerge double precision floating-point value from m64 and the high quadword of xmm1.
EVEX.128.66.0F.W1 12 /r VMOVLPD xmm2, xmm1, m64DV/VAVX512FMerge double precision floating-point value from m64 and the high quadword of xmm1.
66 0F 13/r MOVLPD m64, xmm1CV/VSSE2Move double precision floating-point value from low quadword of xmm1 to m64.
VEX.128.66.0F.WIG 13/r VMOVLPD m64, xmm1CV/VAVXMove double precision floating-point value from low quadword of xmm1 to m64.
EVEX.128.66.0F.W1 13/r VMOVLPD m64, xmm1EV/VAVX512FMove double precision floating-point value from low quadword of xmm1 to m64.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CN/AModRM:r/m (w)ModRM:reg (r)N/AN/A
DTuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
ETuple1 ScalarModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

This instruction cannot be used for register to register or memory to memory moves.

+

128-bit Legacy SSE load:

+

Moves a double precision floating-point value from the source 64-bit memory operand and stores it in the low 64-bits of the destination XMM register. The upper 64 bits of the XMM register are preserved. Bits (MAXVL-1:128) of the corresponding destination register are preserved.

+

VEX.128 & EVEX encoded load:

+

Loads a double precision floating-point value from the source 64-bit memory operand (third operand), merges it with the upper 64-bits of the first source XMM register (second operand), and stores it in the low 128-bits of the destination XMM register (first operand). Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

128-bit store:

+

Stores a double precision floating-point value from the low 64-bits of the XMM register source (second operand) to the 64-bit memory location (first operand).

+

Note: VMOVLPD (store) (VEX.128.66.0F 13 /r) is legal and has the same behavior as the existing 66 0F 13 store. For VMOVLPD (store) VEX.vvvv and EVEX.vvvv are reserved and must be 1111b otherwise instruction will #UD.

+

If VMOVLPD is encoded with VEX.L or EVEX.L’L= 1, an attempt to execute the instruction encoded with VEX.L or EVEX.L’L= 1 will cause an #UD exception.

+

Operation + ¶ +

+

MOVLPD (128-bit Legacy SSE Load) + ¶ +

+
DEST[63:0] := SRC[63:0]
+DEST[MAXVL-1:64] (Unmodified)
+
+

VMOVLPD (VEX.128 & EVEX Encoded Load) + ¶ +

+
DEST[63:0] := SRC2[63:0]
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VMOVLPD (Store) + ¶ +

+
DEST[63:0] := SRC[63:0]
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
MOVLPD __m128d _mm_loadl_pd ( __m128d a, double *p)
+
+
MOVLPD void _mm_storel_pd (double *p, __m128d a)
+
+
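A short C sketch (illustrative only; assumes <immintrin.h> and SSE2) that replaces only the low lane of a vector and then stores it back out:

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    double vals[2] = { 10.0, 20.0 };
    double repl = -1.0, lo;
    __m128d v = _mm_loadu_pd(vals);  /* [10.0, 20.0] */
    v = _mm_loadl_pd(v, &repl);      /* MOVLPD load form: low lane replaced, high kept */
    _mm_storel_pd(&lo, v);           /* MOVLPD store form: writes the low quadword */
    printf("%f\n", lo);              /* prints -1.000000 */
    return 0;
}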

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-22, “Type 5 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.L = 1.
+

EVEX-encoded instruction, see Table 2-57, “Type E9NF Class Exception Conditions.”

diff --git a/x86/movlps.html b/x86/movlps.html new file mode 100644 index 0000000..ef6ddee --- /dev/null +++ b/x86/movlps.html @@ -0,0 +1,151 @@ + +MOVLPS + — Move Low Packed Single Precision Floating-Point Values

MOVLPS + — Move Low Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 12 /r MOVLPS xmm1, m64AV/VSSEMove two packed single precision floating-point values from m64 to low quadword of xmm1.
VEX.128.0F.WIG 12 /r VMOVLPS xmm2, xmm1, m64BV/VAVXMerge two packed single precision floating-point values from m64 and the high quadword of xmm1.
EVEX.128.0F.W0 12 /r VMOVLPS xmm2, xmm1, m64DV/VAVX512FMerge two packed single precision floating-point values from m64 and the high quadword of xmm1.
0F 13/r MOVLPS m64, xmm1CV/VSSEMove two packed single precision floating-point values from low quadword of xmm1 to m64.
VEX.128.0F.WIG 13/r VMOVLPS m64, xmm1CV/VAVXMove two packed single precision floating-point values from low quadword of xmm1 to m64.
EVEX.128.0F.W0 13/r VMOVLPS m64, xmm1EV/VAVX512FMove two packed single precision floating-point values from low quadword of xmm1 to m64.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CN/AModRM:r/m (w)ModRM:reg (r)N/AN/A
DTuple2ModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
ETuple2ModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

This instruction cannot be used for register to register or memory to memory moves.

+

128-bit Legacy SSE load:

+

Moves two packed single precision floating-point values from the source 64-bit memory operand and stores them in the low 64-bits of the destination XMM register. The upper 64 bits of the XMM register are preserved. Bits (MAXVL-1:128) of the corresponding destination register are preserved.

+

VEX.128 & EVEX encoded load:

+

Loads two packed single precision floating-point values from the source 64-bit memory operand (the third operand), merges them with the upper 64-bits of the first source operand (the second operand), and stores them in the low 128-bits of the destination register (the first operand). Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

128-bit store:

+

Stores two packed single precision floating-point values from the low 64-bits of the XMM register source (second operand) to the 64-bit memory location (first operand).

+

Note: VMOVLPS (store) (VEX.128.0F 13 /r) is legal and has the same behavior as the existing 0F 13 store. For VMOVLPS (store) VEX.vvvv and EVEX.vvvv are reserved and must be 1111b otherwise instruction will #UD.

+

If VMOVLPS is encoded with VEX.L or EVEX.L’L= 1, an attempt to execute the instruction encoded with VEX.L or EVEX.L’L= 1 will cause an #UD exception.

+

Operation + ¶ +

+

MOVLPS (128-bit Legacy SSE Load) + ¶ +

+
DEST[63:0] := SRC[63:0]
+DEST[MAXVL-1:64] (Unmodified)
+
+

VMOVLPS (VEX.128 & EVEX Encoded Load) + ¶ +

+
DEST[63:0] := SRC2[63:0]
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VMOVLPS (Store) + ¶ +

+
DEST[63:0] := SRC[63:0]
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
MOVLPS __m128 _mm_loadl_pi ( __m128 a, __m64 *p)
+
+
MOVLPS void _mm_storel_pi (__m64 *p, __m128 a)
+
+
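A short C sketch (illustrative only; assumes <immintrin.h> and SSE) storing just the low two lanes of a vector to memory:

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    float src[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    float lo_pair[2] = { 0.0f, 0.0f };

    __m128 v = _mm_loadu_ps(src);
    _mm_storel_pi((__m64 *)lo_pair, v);          /* MOVLPS store form: lanes 0 and 1 */
    printf("%g %g\n", lo_pair[0], lo_pair[1]);   /* 1 2 */
    return 0;
}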

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-22, “Type 5 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.L = 1.
+

EVEX-encoded instruction, see Table 2-57, “Type E9NF Class Exception Conditions.”

diff --git a/x86/movmskpd.html b/x86/movmskpd.html new file mode 100644 index 0000000..551339d --- /dev/null +++ b/x86/movmskpd.html @@ -0,0 +1,102 @@ + +MOVMSKPD + — Extract Packed Double Precision Floating-Point Sign Mask

MOVMSKPD + — Extract Packed Double Precision Floating-Point Sign Mask

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
66 0F 50 /r MOVMSKPD reg, xmmRMV/VSSE2Extract 2-bit sign mask from xmm and store in reg. The upper bits of r32 or r64 are filled with zeros.
VEX.128.66.0F.WIG 50 /r VMOVMSKPD reg, xmm2RMV/VAVXExtract 2-bit sign mask from xmm2 and store in reg. The upper bits of r32 or r64 are zeroed.
VEX.256.66.0F.WIG 50 /r VMOVMSKPD reg, ymm2RMV/VAVXExtract 4-bit sign mask from ymm2 and store in reg. The upper bits of r32 or r64 are zeroed.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Extracts the sign bits from the packed double precision floating-point values in the source operand (second operand), formats them into a 2-bit mask, and stores the mask in the destination operand (first operand). The source operand is an XMM register, and the destination operand is a general-purpose register. The mask is stored in the 2 low-order bits of the destination operand. The upper bits of the destination operand beyond the mask are filled with zeros.

+

In 64-bit mode, the instruction can access additional registers (XMM8-XMM15, R8-R15) when used with a REX.R prefix. The default operand size is 64-bit in 64-bit mode.

+

128-bit versions: The source operand is an XMM register. The destination operand is a general purpose register.

+

VEX.256 encoded version: The source operand is a YMM register. The destination operand is a general purpose register.

+

Note: In VEX-encoded versions, VEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Operation + ¶ +

+

(V)MOVMSKPD (128-bit Versions) + ¶ +

+
DEST[0] := SRC[63]
+DEST[1] := SRC[127]
+IF DEST = r32
+    THEN DEST[31:2] := 0;
+    ELSE DEST[63:2] := 0;
+FI
+
+

VMOVMSKPD (VEX.256 Encoded Version) + ¶ +

+
DEST[0] := SRC[63]
+DEST[1] := SRC[127]
+DEST[2] := SRC[191]
+DEST[3] := SRC[255]
+IF DEST = r32
+    THEN DEST[31:4] := 0;
+    ELSE DEST[63:4] := 0;
+FI
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
MOVMSKPD int _mm_movemask_pd ( __m128d a)
+
+
VMOVMSKPD int _mm256_movemask_pd(__m256d a)
+
+
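A minimal C sketch (illustrative only; assumes <immintrin.h> and SSE2) reading the two sign bits into an integer mask:

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128d v = _mm_set_pd(-2.0, 1.0);   /* lane 0 = 1.0, lane 1 = -2.0 */
    int mask = _mm_movemask_pd(v);       /* MOVMSKPD: bit 0 = sign of lane 0, bit 1 = sign of lane 1 */
    printf("mask = %d\n", mask);         /* prints 2: only the upper lane is negative */
    return 0;
}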

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-24, “Type 7 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.vvvv ≠ 1111B.
diff --git a/x86/movmskps.html b/x86/movmskps.html new file mode 100644 index 0000000..229ba8c --- /dev/null +++ b/x86/movmskps.html @@ -0,0 +1,119 @@ + +MOVMSKPS + — Extract Packed Single Precision Floating-Point Sign Mask

MOVMSKPS + — Extract Packed Single Precision Floating-Point Sign Mask

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
NP 0F 50 /r MOVMSKPS reg, xmmRMV/VSSEExtract 4-bit sign mask from xmm and store in reg. The upper bits of r32 or r64 are filled with zeros.
VEX.128.0F.WIG 50 /r VMOVMSKPS reg, xmm2RMV/VAVXExtract 4-bit sign mask from xmm2 and store in reg. The upper bits of r32 or r64 are zeroed.
VEX.256.0F.WIG 50 /r VMOVMSKPS reg, ymm2RMV/VAVXExtract 8-bit sign mask from ymm2 and store in reg. The upper bits of r32 or r64 are zeroed.
+

Instruction Operand Encoding1 + ¶ +

+
+

1. ModRM.MOD = 011B required

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Extracts the sign bits from the packed single precision floating-point values in the source operand (second operand), formats them into a 4- or 8-bit mask, and stores the mask in the destination operand (first operand). The source operand is an XMM or YMM register, and the destination operand is a general-purpose register. The mask is stored in the 4 or 8 low-order bits of the destination operand. The upper bits of the destination operand beyond the mask are filled with zeros.

+

In 64-bit mode, the instruction can access additional registers (XMM8-XMM15, R8-R15) when used with a REX.R prefix. The default operand size is 64-bit in 64-bit mode.

+

128-bit versions: The source operand is an XMM register. The destination operand is a general purpose register.

+

VEX.256 encoded version: The source operand is a YMM register. The destination operand is a general purpose register.

+

Note: In VEX-encoded versions, VEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Operation + ¶ +

+
DEST[0] := SRC[31];
+DEST[1] := SRC[63];
+DEST[2] := SRC[95];
+DEST[3] := SRC[127];
+IF DEST = r32
+    THEN DEST[31:4] := ZeroExtend;
+    ELSE DEST[63:4] := ZeroExtend;
+FI;
+
+

(V)MOVMSKPS (128-bit version) + ¶ +

+
DEST[0] := SRC[31]
+DEST[1] := SRC[63]
+DEST[2] := SRC[95]
+DEST[3] := SRC[127]
+IF DEST = r32
+    THEN DEST[31:4] := 0;
+    ELSE DEST[63:4] := 0;
+FI
+
+

VMOVMSKPS (VEX.256 encoded version) + ¶ +

+
DEST[0] := SRC[31]
+DEST[1] := SRC[63]
+DEST[2] := SRC[95]
+DEST[3] := SRC[127]
+DEST[4] := SRC[159]
+DEST[5] := SRC[191]
+DEST[6] := SRC[223]
+DEST[7] := SRC[255]
+IF DEST = r32
+    THEN DEST[31:8] := 0;
+    ELSE DEST[63:8] := 0;
+FI
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
int _mm_movemask_ps(__m128 a)
+
+
int _mm256_movemask_ps(__m256 a)
+
+
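A short C sketch (illustrative only; assumes <immintrin.h> and SSE) using the mask to test whether any lane of a comparison is set:

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128 a     = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
    __m128 limit = _mm_set1_ps(2.5f);
    __m128 gt    = _mm_cmpgt_ps(a, limit);  /* all-ones lanes where a > 2.5 */
    int mask = _mm_movemask_ps(gt);         /* MOVMSKPS collects the four sign bits */
    if (mask)
        printf("lanes above limit: 0x%x\n", mask); /* prints 0xc (lanes 2 and 3) */
    return 0;
}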

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-24, “Type 7 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.vvvv ≠ 1111B.
diff --git a/x86/movntdq.html b/x86/movntdq.html new file mode 100644 index 0000000..711be6a --- /dev/null +++ b/x86/movntdq.html @@ -0,0 +1,124 @@ + +MOVNTDQ + — Store Packed Integers Using Non-Temporal Hint

MOVNTDQ + — Store Packed Integers Using Non-Temporal Hint

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F E7 /r MOVNTDQ m128, xmm1AV/VSSE2Move packed integer values in xmm1 to m128 using non-temporal hint.
VEX.128.66.0F.WIG E7 /r VMOVNTDQ m128, xmm1AV/VAVXMove packed integer values in xmm1 to m128 using non-temporal hint.
VEX.256.66.0F.WIG E7 /r VMOVNTDQ m256, ymm1AV/VAVXMove packed integer values in ymm1 to m256 using non-temporal hint.
EVEX.128.66.0F.W0 E7 /r VMOVNTDQ m128, xmm1BV/VAVX512VL AVX512FMove packed integer values in xmm1 to m128 using non-temporal hint.
EVEX.256.66.0F.W0 E7 /r VMOVNTDQ m256, ymm1BV/VAVX512VL AVX512FMove packed integer values in ymm1 to m256 using non-temporal hint.
EVEX.512.66.0F.W0 E7 /r VMOVNTDQ m512, zmm1BV/VAVX512FMove packed integer values in zmm1 to m512 using non-temporal hint.
+

Instruction Operand Encoding1 + ¶ +

+
+

1. ModRM.MOD != 011B

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:r/m (w)ModRM:reg (r)N/AN/A
BFull MemModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Moves the packed integers in the source operand (second operand) to the destination operand (first operand) using a non-temporal hint to prevent caching of the data during the write to memory. The source operand is an XMM register, YMM register or ZMM register, which is assumed to contain integer data (packed bytes, words, double-words, or quadwords). The destination operand is a 128-bit, 256-bit or 512-bit memory location. The memory operand must be aligned on a 16-byte (128-bit version), 32-byte (VEX.256 encoded version) or 64-byte (512-bit version) boundary otherwise a general-protection exception (#GP) will be generated.

+

The non-temporal hint is implemented by using a write combining (WC) memory type protocol when writing the data to memory. Using this protocol, the processor does not write the data into the cache hierarchy, nor does it fetch the corresponding cache line from memory into the cache hierarchy. The memory type of the region being written to can override the non-temporal hint, if the memory address specified for the non-temporal store is in an uncacheable (UC) or write protected (WP) memory region. For more information on non-temporal stores, see “Caching of Temporal vs. Non-Temporal Data” in Chapter 10 in the IA-32 Intel Architecture Software Developer’s Manual, Volume 1.

+

Because the WC protocol uses a weakly-ordered memory consistency model, a fencing operation implemented with the SFENCE or MFENCE instruction should be used in conjunction with VMOVNTDQ instructions if multiple processors might use different memory types to read/write the destination memory locations.

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b, VEX.L must be 0; otherwise instructions will #UD.

+

Operation + ¶ +

+

VMOVNTDQ(EVEX Encoded Versions) + ¶ +

+
VL = 128, 256, 512
+DEST[VL-1:0] := SRC[VL-1:0]
+DEST[MAXVL-1:VL] := 0
+
+

MOVNTDQ (Legacy and VEX Versions) + ¶ +

+
DEST := SRC
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMOVNTDQ void _mm512_stream_si512(void * p, __m512i a);
+
+
VMOVNTDQ void _mm256_stream_si256 (__m256i * p, __m256i a);
+
+
MOVNTDQ void _mm_stream_si128 (__m128i * p, __m128i a);
+
+
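A C sketch of the usage pattern described above (illustrative only; assumes <immintrin.h>, SSE2, and C11 aligned_alloc): fill a 16-byte-aligned buffer with streaming stores, then fence before the data is consumed elsewhere.

#include <immintrin.h>
#include <stdlib.h>

/* Fill n_vec 16-byte vectors with a pattern using non-temporal stores. */
static void nt_fill(__m128i *dst, size_t n_vec, __m128i pattern)
{
    for (size_t i = 0; i < n_vec; i++)
        _mm_stream_si128(&dst[i], pattern);   /* MOVNTDQ */
    _mm_sfence();                             /* order the weakly-ordered WC stores */
}

int main(void)
{
    __m128i *buf = aligned_alloc(16, 1024);   /* MOVNTDQ requires 16-byte alignment */
    if (!buf) return 1;
    nt_fill(buf, 1024 / 16, _mm_set1_epi32(0x5A5A5A5A));
    free(buf);
    return 0;
}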

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Exceptions Type1.SSE2 in Table 2-18, “Type 1 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-45, “Type E1NF Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf VEX.vvvv != 1111B or EVEX.vvvv != 1111B.
diff --git a/x86/movntdqa.html b/x86/movntdqa.html new file mode 100644 index 0000000..5d5e37f --- /dev/null +++ b/x86/movntdqa.html @@ -0,0 +1,146 @@ + +MOVNTDQA + — Load Double Quadword Non-Temporal Aligned Hint

MOVNTDQA + — Load Double Quadword Non-Temporal Aligned Hint

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 38 2A /r MOVNTDQA xmm1, m128AV/VSSE4_1Move double quadword from m128 to xmm1 using non-temporal hint if WC memory type.
VEX.128.66.0F38.WIG 2A /r VMOVNTDQA xmm1, m128AV/VAVXMove double quadword from m128 to xmm using non-temporal hint if WC memory type.
VEX.256.66.0F38.WIG 2A /r VMOVNTDQA ymm1, m256AV/VAVX2Move 256-bit data from m256 to ymm using non-temporal hint if WC memory type.
EVEX.128.66.0F38.W0 2A /r VMOVNTDQA xmm1, m128BV/VAVX512VL AVX512FMove 128-bit data from m128 to xmm using non-temporal hint if WC memory type.
EVEX.256.66.0F38.W0 2A /r VMOVNTDQA ymm1, m256BV/VAVX512VL AVX512FMove 256-bit data from m256 to ymm using non-temporal hint if WC memory type.
EVEX.512.66.0F38.W0 2A /r VMOVNTDQA zmm1, m512BV/VAVX512FMove 512-bit data from m512 to zmm using non-temporal hint if WC memory type.
+

Instruction Operand Encoding1 + ¶ +

+
+

1. ModRM.MOD != 011B

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BFull MemModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

MOVNTDQA loads a double quadword from the source operand (second operand) to the destination operand (first operand) using a non-temporal hint if the memory source is WC (write combining) memory type. For WC memory type, the nontemporal hint may be implemented by loading a temporary internal buffer with the equivalent of an aligned cache line without filling this data to the cache. Any memory-type aliased lines in the cache will be snooped and flushed. Subsequent MOVNTDQA reads to unread portions of the WC cache line will receive data from the temporary internal buffer if data is available. The temporary internal buffer may be flushed by the processor at any time for any reason, for example:

+
    +
  • A load operation other than a MOVNTDQA which references memory already resident in a temporary internal buffer.
  • +
  • A non-WC reference to memory already resident in a temporary internal buffer.
  • +
  • Interleaving of reads and writes to a single temporary internal buffer.
  • +
  • Repeated (V)MOVNTDQA loads of a particular 16-byte item in a streaming line.
  • +
  • Certain micro-architectural conditions including resource shortages, detection of a mis-speculation condition, and various fault conditions.

+

The non-temporal hint is implemented by using a write combining (WC) memory type protocol when reading the data from memory. Using this protocol, the processor does not read the data into the cache hierarchy, nor does it fetch the corresponding cache line from memory into the cache hierarchy. The memory type of the region being read can override the non-temporal hint, if the memory address specified for the non-temporal read is not a WC memory region. Information on non-temporal reads and writes can be found in “Caching of Temporal vs. NonTemporal Data” in Chapter 10 in the Intel® 64 and IA-32 Architecture Software Developer’s Manual, Volume 3A.

+

Because the WC protocol uses a weakly-ordered memory consistency model, a fencing operation implemented with a MFENCE instruction should be used in conjunction with MOVNTDQA instructions if multiple processors might use different memory types for the referenced memory locations or to synchronize reads of a processor with writes by other agents in the system. A processor’s implementation of the streaming load hint does not override the effective memory type, but the implementation of the hint is processor dependent. For example, a processor implementation may choose to ignore the hint and process the instruction as a normal MOVDQA for any memory type. Alternatively, another implementation may optimize cache reads generated by MOVNTDQA on WB memory type to reduce cache evictions.

+

The 128-bit (V)MOVNTDQA addresses must be 16-byte aligned or the instruction will cause a #GP.

+

The 256-bit VMOVNTDQA addresses must be 32-byte aligned or the instruction will cause a #GP.

+

The 512-bit VMOVNTDQA addresses must be 64-byte aligned or the instruction will cause a #GP.

+

Operation + ¶ +

+

MOVNTDQA (128bit- Legacy SSE Form) + ¶ +

+
DEST := SRC
+DEST[MAXVL-1:128] (Unmodified)
+
+

VMOVNTDQA (VEX.128 and EVEX.128 Encoded Form) + ¶ +

+
DEST := SRC
+DEST[MAXVL-1:128] := 0
+
+

VMOVNTDQA (VEX.256 and EVEX.256 Encoded Forms) + ¶ +

+
DEST[255:0] := SRC[255:0]
+DEST[MAXVL-1:256] := 0
+
+

VMOVNTDQA (EVEX.512 Encoded Form) + ¶ +

+
DEST[511:0] := SRC[511:0]
+DEST[MAXVL-1:512] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMOVNTDQA __m512i _mm512_stream_load_si512(__m512i const* p);
+
+
MOVNTDQA __m128i _mm_stream_load_si128 (const __m128i *p);
+
+
VMOVNTDQA __m256i _mm256_stream_load_si256 (__m256i const* p);
+
+
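A small C sketch (illustrative only; assumes <immintrin.h> and a build with SSE4.1 enabled; on ordinary WB memory the hint is typically ignored, as noted above) issuing a streaming load from an aligned buffer:

#include <immintrin.h>
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    __m128i *src = aligned_alloc(16, 64);         /* MOVNTDQA requires 16-byte alignment */
    if (!src) return 1;
    for (int i = 0; i < 4; i++)
        src[i] = _mm_set1_epi32(i);

    __m128i v = _mm_stream_load_si128(&src[2]);   /* MOVNTDQA */
    int32_t out[4];
    _mm_storeu_si128((__m128i *)out, v);
    printf("%d\n", out[0]);                       /* prints 2 */
    free(src);
    return 0;
}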

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-18, “Type 1 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-45, “Type E1NF Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf VEX.vvvv != 1111B or EVEX.vvvv != 1111B.
diff --git a/x86/movnti.html b/x86/movnti.html new file mode 100644 index 0000000..3660de9 --- /dev/null +++ b/x86/movnti.html @@ -0,0 +1,130 @@ + +MOVNTI + — Store Doubleword Using Non-Temporal Hint

MOVNTI + — Store Doubleword Using Non-Temporal Hint

+ + + + + + + + + + + + + + + + + + + +
Opcode / InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F C3 /r MOVNTI m32, r32MRV/VSSE2Move doubleword from r32 to m32 using non-temporal hint.
NP REX.W + 0F C3 /r MOVNTI m64, r64MRV/N.E.SSE2Move quadword from r64 to m64 using non-temporal hint.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MRModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Moves the doubleword integer in the source operand (second operand) to the destination operand (first operand) using a non-temporal hint to minimize cache pollution during the write to memory. The source operand is a general-purpose register. The destination operand is a 32-bit memory location.

+

The non-temporal hint is implemented by using a write combining (WC) memory type protocol when writing the data to memory. Using this protocol, the processor does not write the data into the cache hierarchy, nor does it fetch the corresponding cache line from memory into the cache hierarchy. The memory type of the region being written to can override the non-temporal hint, if the memory address specified for the non-temporal store is in an uncacheable (UC) or write protected (WP) memory region. For more information on non-temporal stores, see “Caching of Temporal vs. Non-Temporal Data” in Chapter 10 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1.

+

Because the WC protocol uses a weakly-ordered memory consistency model, a fencing operation implemented with the SFENCE or MFENCE instruction should be used in conjunction with MOVNTI instructions if multiple processors might use different memory types to read/write the destination memory locations.

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Use of the REX.R prefix permits access to additional registers (R8-R15). Use of the REX.W prefix promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+

Operation + ¶ +

+
DEST := SRC;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
MOVNTI void _mm_stream_si32 (int *p, int a)
+
+
MOVNTI void _mm_stream_si64(__int64 *p, __int64 a)
+
+
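A minimal C sketch (illustrative only; assumes <immintrin.h> and SSE2) of a non-temporal doubleword store followed by a fence:

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    int dst = 0;
    _mm_stream_si32(&dst, 42);   /* MOVNTI: doubleword store with non-temporal hint */
    _mm_sfence();                /* make the WC store globally observable */
    printf("%d\n", dst);         /* prints 42 */
    return 0;
}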

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + +
#GP(0)For an illegal memory operand effective address in the CS, DS, ES, FS or GS segments.
#SS(0)For an illegal address in the SS segment.
#PF(fault-code)For a page fault.
#UDIf CPUID.01H:EDX.SSE2[bit 26] = 0.
If the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + +
#GPIf any part of the operand lies outside the effective address space from 0 to FFFFH.
#UDIf CPUID.01H:EDX.SSE2[bit 26] = 0.
If the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in real address mode.

+ + + +
#PF(fault-code)For a page fault.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)For a page fault.
#UDIf CPUID.01H:EDX.SSE2[bit 26] = 0.
If the LOCK prefix is used.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
diff --git a/x86/movntpd.html b/x86/movntpd.html new file mode 100644 index 0000000..9357ca9 --- /dev/null +++ b/x86/movntpd.html @@ -0,0 +1,124 @@ + +MOVNTPD + — Store Packed Double Precision Floating-Point Values Using Non-Temporal Hint

MOVNTPD + — Store Packed Double Precision Floating-Point Values Using Non-Temporal Hint

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 2B /r MOVNTPD m128, xmm1AV/VSSE2Move packed double precision values in xmm1 to m128 using non-temporal hint.
VEX.128.66.0F.WIG 2B /r VMOVNTPD m128, xmm1AV/VAVXMove packed double precision values in xmm1 to m128 using non-temporal hint.
VEX.256.66.0F.WIG 2B /r VMOVNTPD m256, ymm1AV/VAVXMove packed double precision values in ymm1 to m256 using non-temporal hint.
EVEX.128.66.0F.W1 2B /r VMOVNTPD m128, xmm1BV/VAVX512VL AVX512FMove packed double precision values in xmm1 to m128 using non-temporal hint.
EVEX.256.66.0F.W1 2B /r VMOVNTPD m256, ymm1BV/VAVX512VL AVX512FMove packed double precision values in ymm1 to m256 using non-temporal hint.
EVEX.512.66.0F.W1 2B /r VMOVNTPD m512, zmm1BV/VAVX512FMove packed double precision values in zmm1 to m512 using non-temporal hint.
+

Instruction Operand Encoding1 + ¶ +

+
+

1. ModRM.MOD != 011B

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:r/m (w)ModRM:reg (r)N/AN/A
BFull MemModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Moves the packed double precision floating-point values in the source operand (second operand) to the destination operand (first operand) using a non-temporal hint to prevent caching of the data during the write to memory. The source operand is an XMM register, YMM register or ZMM register, which is assumed to contain packed double precision floating-point data. The destination operand is a 128-bit, 256-bit or 512-bit memory location. The memory operand must be aligned on a 16-byte (128-bit version), 32-byte (VEX.256 encoded version) or 64-byte (EVEX.512 encoded version) boundary otherwise a general-protection exception (#GP) will be generated.

+

The non-temporal hint is implemented by using a write combining (WC) memory type protocol when writing the data to memory. Using this protocol, the processor does not write the data into the cache hierarchy, nor does it fetch the corresponding cache line from memory into the cache hierarchy. The memory type of the region being written to can override the non-temporal hint, if the memory address specified for the non-temporal store is in an uncacheable (UC) or write protected (WP) memory region. For more information on non-temporal stores, see “Caching of Temporal vs. Non-Temporal Data” in Chapter 10 in the IA-32 Intel Architecture Software Developer’s Manual, Volume 1.

+

Because the WC protocol uses a weakly-ordered memory consistency model, a fencing operation implemented with the SFENCE or MFENCE instruction should be used in conjunction with MOVNTPD instructions if multiple processors might use different memory types to read/write the destination memory locations.

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b, VEX.L must be 0; otherwise instructions will #UD.

+

Operation + ¶ +

+

VMOVNTPD (EVEX Encoded Versions) + ¶ +

+
VL = 128, 256, 512
+DEST[VL-1:0] := SRC[VL-1:0]
+DEST[MAXVL-1:VL] := 0
+
+

MOVNTPD (Legacy and VEX Versions) + ¶ +

+
DEST := SRC
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMOVNTPD void _mm512_stream_pd(double * p, __m512d a);
+
+
VMOVNTPD void _mm256_stream_pd (double * p, __m256d a);
+
+
MOVNTPD void _mm_stream_pd (double * p, __m128d a);
+
+
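A C sketch of the pattern above (illustrative only; assumes <immintrin.h>, SSE2, and C11 aligned_alloc): copy doubles to an aligned destination with streaming stores and fence at the end.

#include <immintrin.h>
#include <stdlib.h>

/* dst must be 16-byte aligned or MOVNTPD raises #GP. */
static void nt_copy_pd(double *dst, const double *src, size_t n)
{
    size_t i;
    for (i = 0; i + 2 <= n; i += 2)
        _mm_stream_pd(dst + i, _mm_loadu_pd(src + i));  /* MOVNTPD */
    for (; i < n; i++)
        dst[i] = src[i];                                /* scalar tail */
    _mm_sfence();                                       /* order the WC stores */
}

int main(void)
{
    double src[8] = { 0, 1, 2, 3, 4, 5, 6, 7 };
    double *dst = aligned_alloc(16, sizeof(src));
    if (!dst) return 1;
    nt_copy_pd(dst, src, 8);
    free(dst);
    return 0;
}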

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Exceptions Type1.SSE2 in Table 2-18, “Type 1 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-45, “Type E1NF Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf VEX.vvvv != 1111B or EVEX.vvvv != 1111B.
diff --git a/x86/movntps.html b/x86/movntps.html new file mode 100644 index 0000000..10c3642 --- /dev/null +++ b/x86/movntps.html @@ -0,0 +1,124 @@ + +MOVNTPS + — Store Packed Single Precision Floating-Point Values Using Non-Temporal Hint

MOVNTPS + — Store Packed Single Precision Floating-Point Values Using Non-Temporal Hint

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 2B /r MOVNTPS m128, xmm1AV/VSSEMove packed single precision values xmm1 to mem using non-temporal hint.
VEX.128.0F.WIG 2B /r VMOVNTPS m128, xmm1AV/VAVXMove packed single precision values xmm1 to mem using non-temporal hint.
VEX.256.0F.WIG 2B /r VMOVNTPS m256, ymm1AV/VAVXMove packed single precision values ymm1 to mem using non-temporal hint.
EVEX.128.0F.W0 2B /r VMOVNTPS m128, xmm1BV/VAVX512VL AVX512FMove packed single precision values in xmm1 to m128 using non-temporal hint.
EVEX.256.0F.W0 2B /r VMOVNTPS m256, ymm1BV/VAVX512VL AVX512FMove packed single precision values in ymm1 to m256 using non-temporal hint.
EVEX.512.0F.W0 2B /r VMOVNTPS m512, zmm1BV/VAVX512FMove packed single precision values in zmm1 to m512 using non-temporal hint.
+

Instruction Operand Encoding1 + ¶ +

+
+

1. ModRM.MOD != 011B

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:r/m (w)ModRM:reg (r)N/AN/A
BFull MemModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Moves the packed single precision floating-point values in the source operand (second operand) to the destination operand (first operand) using a non-temporal hint to prevent caching of the data during the write to memory. The source operand is an XMM register, YMM register or ZMM register, which is assumed to contain packed single precision floating-point data. The destination operand is a 128-bit, 256-bit or 512-bit memory location. The memory operand must be aligned on a 16-byte (128-bit version), 32-byte (VEX.256 encoded version) or 64-byte (EVEX.512 encoded version) boundary otherwise a general-protection exception (#GP) will be generated.

+

The non-temporal hint is implemented by using a write combining (WC) memory type protocol when writing the data to memory. Using this protocol, the processor does not write the data into the cache hierarchy, nor does it fetch the corresponding cache line from memory into the cache hierarchy. The memory type of the region being written to can override the non-temporal hint, if the memory address specified for the non-temporal store is in an uncacheable (UC) or write protected (WP) memory region. For more information on non-temporal stores, see “Caching of Temporal vs. Non-Temporal Data” in Chapter 10 in the IA-32 Intel Architecture Software Developer’s Manual, Volume 1.

+

Because the WC protocol uses a weakly-ordered memory consistency model, a fencing operation implemented with the SFENCE or MFENCE instruction should be used in conjunction with MOVNTPS instructions if multiple processors might use different memory types to read/write the destination memory locations.

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+

VMOVNTPS (EVEX Encoded Versions) + ¶ +

+
VL = 128, 256, 512
+DEST[VL-1:0] := SRC[VL-1:0]
+DEST[MAXVL-1:VL] := 0
+
+

MOVNTPS + ¶ +

+
DEST := SRC
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMOVNTPS void _mm512_stream_ps(float * p, __m512 a);
+
+
MOVNTPS void _mm_stream_ps (float * p, __m128 a);
+
+
VMOVNTPS void _mm256_stream_ps (float * p, __m256 a);
+
+
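A minimal C sketch (illustrative only; assumes <immintrin.h>, SSE, and C11 aligned_alloc) streaming packed singles to a 16-byte-aligned buffer:

#include <immintrin.h>
#include <stdlib.h>

int main(void)
{
    float *dst = aligned_alloc(16, 16 * sizeof(float));  /* 16-byte alignment required */
    if (!dst) return 1;
    __m128 v = _mm_set1_ps(3.14f);
    for (int i = 0; i < 16; i += 4)
        _mm_stream_ps(dst + i, v);   /* MOVNTPS: bypasses the cache hierarchy */
    _mm_sfence();                    /* order the streaming stores */
    free(dst);
    return 0;
}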

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Exceptions Type1.SSE in Table 2-18, “Type 1 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-45, “Type E1NF Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf VEX.vvvv != 1111B or EVEX.vvvv != 1111B.
diff --git a/x86/movntq.html b/x86/movntq.html new file mode 100644 index 0000000..941b41a --- /dev/null +++ b/x86/movntq.html @@ -0,0 +1,65 @@ + +MOVNTQ + — Store of Quadword Using Non-Temporal Hint

MOVNTQ + — Store of Quadword Using Non-Temporal Hint

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
NP 0F E7 /rMOVNTQ m64, mmMRValidValidMove quadword from mm to m64 using non-temporal hint.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MRModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Moves the quadword in the source operand (second operand) to the destination operand (first operand) using a non-temporal hint to minimize cache pollution during the write to memory. The source operand is an MMX technology register, which is assumed to contain packed integer data (packed bytes, words, or doublewords). The destination operand is a 64-bit memory location.

+

The non-temporal hint is implemented by using a write combining (WC) memory type protocol when writing the data to memory. Using this protocol, the processor does not write the data into the cache hierarchy, nor does it fetch the corresponding cache line from memory into the cache hierarchy. The memory type of the region being written to can override the non-temporal hint, if the memory address specified for the non-temporal store is in an uncacheable (UC) or write protected (WP) memory region. For more information on non-temporal stores, see “Caching of Temporal vs. Non-Temporal Data” in Chapter 10 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1.

+

Because the WC protocol uses a weakly-ordered memory consistency model, a fencing operation implemented with the SFENCE or MFENCE instruction should be used in conjunction with MOVNTQ instructions if multiple processors might use different memory types to read/write the destination memory locations.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
DEST := SRC;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
MOVNTQ void _mm_stream_pi(__m64 * p, __m64 a)
+
+
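A short C sketch (illustrative only; assumes <immintrin.h> and MMX intrinsic support, e.g., GCC or Clang; MSVC does not expose MMX intrinsics in 64-bit builds) of a non-temporal quadword store from an MMX register:

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    unsigned long long dst = 0;
    __m64 v = _mm_set_pi32(2, 1);          /* packed integers in an MMX register */
    _mm_stream_pi((__m64 *)&dst, v);       /* MOVNTQ: non-temporal quadword store */
    _mm_sfence();                          /* order the WC store */
    _mm_empty();                           /* EMMS: release the MMX/x87 state */
    printf("%llx\n", dst);                 /* prints 200000001 */
    return 0;
}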

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 23-8, “Exception Conditions for Legacy SIMD/MMX Instructions without FP Exception,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

diff --git a/x86/movq.html b/x86/movq.html new file mode 100644 index 0000000..aa7a0a8 --- /dev/null +++ b/x86/movq.html @@ -0,0 +1,200 @@ + +MOVQ + — Move Quadword

MOVQ + — Move Quadword

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/ En64/32-bit ModeCPUID Feature FlagDescription
NP 0F 6F /r MOVQ mm, mm/m64AV/VMMXMove quadword from mm/m64 to mm.
NP 0F 7F /r MOVQ mm/m64, mmBV/VMMXMove quadword from mm to mm/m64.
F3 0F 7E /r MOVQ xmm1, xmm2/m64AV/VSSE2Move quadword from xmm2/mem64 to xmm1.
VEX.128.F3.0F.WIG 7E /r VMOVQ xmm1, xmm2/m64AV/VAVXMove quadword from xmm2 to xmm1.
EVEX.128.F3.0F.W1 7E /r VMOVQ xmm1, xmm2/m64CV/VAVX512FMove quadword from xmm2/m64 to xmm1.
66 0F D6 /r MOVQ xmm2/m64, xmm1BV/VSSE2Move quadword from xmm1 to xmm2/mem64.
VEX.128.66.0F.WIG D6 /r VMOVQ xmm1/m64, xmm2BV/VAVXMove quadword from xmm2 register to xmm1/m64.
EVEX.128.66.0F.W1 D6 /r VMOVQ xmm1/m64, xmm2DV/VAVX512FMove quadword from xmm2 register to xmm1/m64.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BN/AModRM:r/m (w)ModRM:reg (r)N/AN/A
CTuple1 ScalarModRM:reg (w)ModRM:r/m (r)N/AN/A
DTuple1 ScalarModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Copies a quadword from the source operand (second operand) to the destination operand (first operand). The source and destination operands can be MMX technology registers, XMM registers, or 64-bit memory locations. This instruction can be used to move a quadword between two MMX technology registers or between an MMX technology register and a 64-bit memory location, or to move data between two XMM registers or between an XMM register and a 64-bit memory location. The instruction cannot be used to transfer data between memory locations.

+

When the source operand is an XMM register, the low quadword is moved; when the destination operand is an XMM register, the quadword is stored to the low quadword of the register, and the high quadword is cleared to all 0s.

+

In 64-bit mode and if not encoded using VEX/EVEX, use of the REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b, otherwise instructions will #UD.

+

If VMOVQ is encoded with VEX.L= 1, an attempt to execute the instruction encoded with VEX.L= 1 will cause an #UD exception.

+

Operation + ¶ +

+

MOVQ Instruction When Operating on MMX Technology Registers and Memory Locations + ¶ +

+
DEST := SRC;
+
+

MOVQ Instruction When Source and Destination Operands are XMM Registers + ¶ +

+
DEST[63:0] := SRC[63:0];
+DEST[127:64] := 0000000000000000H;
+
+

MOVQ Instruction When Source Operand is XMM Register and Destination + ¶ +

+
operand is memory location:
+    DEST := SRC[63:0];
+
+

MOVQ Instruction When Source Operand is Memory Location and Destination + ¶ +

+
operand is XMM register:
+    DEST[63:0] := SRC;
+    DEST[127:64] := 0000000000000000H;
+
+

VMOVQ (VEX.128.F3.0F 7E) With XMM Register Source and Destination + ¶ +

+
DEST[63:0] := SRC[63:0]
+DEST[MAXVL-1:64] := 0
+
+

VMOVQ (VEX.128.66.0F D6) With XMM Register Source and Destination + ¶ +

+
DEST[63:0] := SRC[63:0]
+DEST[MAXVL-1:64] := 0
+
+

VMOVQ (7E - EVEX Encoded Version) With XMM Register Source and Destination + ¶ +

+
DEST[63:0] := SRC[63:0]
+DEST[MAXVL-1:64] := 0
+
+

VMOVQ (D6 - EVEX Encoded Version) With XMM Register Source and Destination + ¶ +

+
DEST[63:0] := SRC[63:0]
+DEST[MAXVL-1:64] := 0
+
+

VMOVQ (7E) With Memory Source + ¶ +

+
DEST[63:0] := SRC[63:0]
+DEST[MAXVL-1:64] := 0
+
+

VMOVQ (7E - EVEX Encoded Version) With Memory Source + ¶ +

+
DEST[63:0] := SRC[63:0]
+DEST[MAXVL-1:64] := 0
+
+

VMOVQ (D6) With Memory DEST + ¶ +

+
DEST[63:0] := SRC2[63:0]
+
+

Flags Affected + ¶ +

+

None.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMOVQ __m128i _mm_loadu_si64( void * s);
+
+
VMOVQ void _mm_storeu_si64( void * d, __m128i s);
+
+
MOVQ __m128i _mm_move_epi64(__m128i a)
+
+
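A short C sketch (illustrative only; assumes <immintrin.h>; _mm_loadu_si64 and _mm_storeu_si64 require a reasonably recent compiler) moving a quadword through an XMM register:

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t src = 0x1122334455667788ULL, dst = 0;
    __m128i v = _mm_loadu_si64(&src);   /* MOVQ load: low quadword, bits 127:64 zeroed */
    v = _mm_move_epi64(v);              /* MOVQ reg-reg form: clears bits 127:64 */
    _mm_storeu_si64(&dst, v);           /* MOVQ store: low quadword to memory */
    printf("%llx\n", (unsigned long long)dst);   /* prints 1122334455667788 */
    return 0;
}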

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 23-8, “Exception Conditions for Legacy SIMD/MMX Instructions without FP Exception,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

diff --git a/x86/movq2dq.html b/x86/movq2dq.html new file mode 100644 index 0000000..f45d673 --- /dev/null +++ b/x86/movq2dq.html @@ -0,0 +1,94 @@ + +MOVQ2DQ + — Move Quadword from MMX Technology to XMM Register

MOVQ2DQ + — Move Quadword from MMX Technology to XMM Register

+ + + + + + + + + + + + + +
Opcode / InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
F3 0F D6 /r MOVQ2DQ xmm, mmRMV/VSSE2Move quadword from mmx to low quadword of xmm.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Moves the quadword from the source operand (second operand) to the low quadword of the destination operand (first operand). The source operand is an MMX technology register and the destination operand is an XMM register.

+

This instruction causes a transition from x87 FPU to MMX technology operation (that is, the x87 FPU top-of-stack pointer is set to 0 and the x87 FPU tag word is set to all 0s [valid]). If this instruction is executed while an x87 FPU floating-point exception is pending, the exception is handled before the MOVQ2DQ instruction is executed.

+

In 64-bit mode, use of the REX.R prefix permits this instruction to access additional registers (XMM8-XMM15).

+

Operation + ¶ +

+
DEST[63:0] := SRC[63:0];
+DEST[127:64] := 0000000000000000H;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
MOVQ2DQ __m128i _mm_movpi64_epi64 ( __m64 a)
+
+
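A minimal C sketch (illustrative only; assumes <immintrin.h> and MMX intrinsic support, e.g., GCC or Clang) moving a quadword from an MMX register to an XMM register:

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m64 m = _mm_set_pi32(0, 7);            /* MMX register holding 7 in the low dword */
    __m128i x = _mm_movpi64_epi64(m);        /* MOVQ2DQ: mm -> low quadword of xmm, upper zeroed */
    _mm_empty();                             /* EMMS before returning to x87 code */
    printf("%d\n", _mm_cvtsi128_si32(x));    /* prints 7 */
    return 0;
}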

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#NMIf CR0.TS[bit 3] = 1.
#UDIf CR0.EM[bit 2] = 1.
If CR4.OSFXSR[bit 9] = 0.
If CPUID.01H:EDX.SSE2[bit 26] = 0.
If the LOCK prefix is used.
#MFIf there is a pending x87 FPU exception.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/movs.movsb.movsw.movsd.movsq.html b/x86/movs.movsb.movsw.movsd.movsq.html new file mode 100644 index 0000000..4b9a5f2 --- /dev/null +++ b/x86/movs.movsb.movsw.movsd.movsq.html @@ -0,0 +1,260 @@ + +MOVS/MOVSB/MOVSW/MOVSD/MOVSQ + — Move Data From String to String

MOVS/MOVSB/MOVSW/MOVSD/MOVSQ + — Move Data From String to String


+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
A4MOVS m8, m8ZOValidValidFor legacy mode, Move byte from address DS:(E)SI to ES:(E)DI. For 64-bit mode move byte from address (R|E)SI to (R|E)DI.
A5MOVS m16, m16ZOValidValidFor legacy mode, move word from address DS:(E)SI to ES:(E)DI. For 64-bit mode move word at address (R|E)SI to (R|E)DI.
A5MOVS m32, m32ZOValidValidFor legacy mode, move dword from address DS:(E)SI to ES:(E)DI. For 64-bit mode move dword from address (R|E)SI to (R|E)DI.
REX.W + A5MOVS m64, m64ZOValidN.E.Move qword from address (R|E)SI to (R|E)DI.
A4MOVSBZOValidValidFor legacy mode, Move byte from address DS:(E)SI to ES:(E)DI. For 64-bit mode move byte from address (R|E)SI to (R|E)DI.
A5MOVSWZOValidValidFor legacy mode, move word from address DS:(E)SI to ES:(E)DI. For 64-bit mode move word at address (R|E)SI to (R|E)DI.
A5MOVSDZOValidValidFor legacy mode, move dword from address DS:(E)SI to ES:(E)DI. For 64-bit mode move dword from address (R|E)SI to (R|E)DI.
REX.W + A5MOVSQZOValidN.E.Move qword from address (R|E)SI to (R|E)DI.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Moves the byte, word, or doubleword specified with the second operand (source operand) to the location specified with the first operand (destination operand). Both the source and destination operands are located in memory. The address of the source operand is read from the DS:ESI or the DS:SI registers (depending on the address-size attribute of the instruction, 32 or 16, respectively). The address of the destination operand is read from the ES:EDI or the ES:DI registers (again depending on the address-size attribute of the instruction). The DS segment may be overridden with a segment override prefix, but the ES segment cannot be overridden.

+

At the assembly-code level, two forms of this instruction are allowed: the “explicit-operands” form and the “no-operands” form. The explicit-operands form (specified with the MOVS mnemonic) allows the source and destination operands to be specified explicitly. Here, the source and destination operands should be symbols that indicate the size and location of the source value and the destination, respectively. This explicit-operands form is provided to allow documentation; however, note that the documentation provided by this form can be misleading. That is, the source and destination operand symbols must specify the correct type (size) of the operands (bytes, words, or doublewords), but they do not have to specify the correct location. The locations of the source and destination operands are always specified by the DS:(E)SI and ES:(E)DI registers, which must be loaded correctly before the move string instruction is executed.

+

The no-operands form provides “short forms” of the byte, word, and doubleword versions of the MOVS instructions. Here also DS:(E)SI and ES:(E)DI are assumed to be the source and destination operands, respectively. The size of the source and destination operands is selected with the mnemonic: MOVSB (byte move), MOVSW (word move), or MOVSD (doubleword move).

+

After the move operation, the (E)SI and (E)DI registers are incremented or decremented automatically according to the setting of the DF flag in the EFLAGS register. (If the DF flag is 0, the (E)SI and (E)DI register are incre-

+

mented; if the DF flag is 1, the (E)SI and (E)DI registers are decremented.) The registers are incremented or decremented by 1 for byte operations, by 2 for word operations, or by 4 for doubleword operations.

+
+

To improve performance, more recent processors support modifications to the processor’s operation during the string store operations initiated with MOVS and MOVSB. See Section 7.3.9.3 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for additional information on fast-string operation.

+

The MOVS, MOVSB, MOVSW, and MOVSD instructions can be preceded by the REP prefix (see “REP/REPE/REPZ /REPNE/REPNZ—Repeat String Operation Prefix” for a description of the REP prefix) for block moves of ECX bytes, words, or doublewords.

+

In 64-bit mode, the instruction’s default address size is 64 bits; a 32-bit address size is supported using the prefix 67H. The 64-bit addresses are specified by RSI and RDI; 32-bit addresses are specified by ESI and EDI. Use of the REX.W prefix promotes doubleword operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+
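For illustration, a minimal sketch of one common way to reach REP MOVSB from C, using GCC/Clang extended inline assembly on x86-64. The helper name is illustrative (not from this reference), and DF = 0 (the usual ABI state) is assumed so RSI/RDI are incremented.

#include <stddef.h>

/* Copy n bytes with REP MOVSB. RDI, RSI, and RCX are bound to dst, src, and n;
   the instruction advances RSI/RDI and decrements RCX until RCX reaches 0. */
static void rep_movsb_copy(void *dst, const void *src, size_t n)
{
    __asm__ volatile ("rep movsb"
                      : "+D" (dst), "+S" (src), "+c" (n)
                      :
                      : "memory");
}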

Operation + ¶ +

+
DEST := SRC;
+Non-64-bit Mode:
+IF (Byte move)
+    THEN IF DF = 0
+        THEN
+            (E)SI := (E)SI + 1;
+            (E)DI := (E)DI + 1;
+        ELSE
+            (E)SI := (E)SI – 1;
+            (E)DI := (E)DI – 1;
+        FI;
+    ELSE IF (Word move)
+        THEN IF DF = 0
+            THEN
+                (E)SI := (E)SI + 2;
+                (E)DI := (E)DI + 2;
+            ELSE
+                (E)SI := (E)SI – 2;
+                (E)DI := (E)DI – 2;
+            FI;
+    ELSE IF (Doubleword move)
+        THEN IF DF = 0
+            THEN
+                (E)SI := (E)SI + 4;
+                (E)DI := (E)DI + 4;
+            ELSE
+                (E)SI := (E)SI – 4;
+                (E)DI := (E)DI – 4;
+            FI;
+FI;
+64-bit Mode:
+IF (Byte move)
+    THEN IF DF = 0
+        THEN
+            (R|E)SI := (R|E)SI + 1;
+            (R|E)DI := (R|E)DI + 1;
+        ELSE
+            (R|E)SI := (R|E)SI – 1;
+            (R|E)DI := (R|E)DI – 1;
+        FI;
+    ELSE IF (Word move)
+        THEN IF DF = 0
+            THEN
+                (R|E)SI := (R|E)SI + 2;
+                (R|E)DI := (R|E)DI + 2;
+            ELSE
+                (R|E)SI := (R|E)SI – 2;
+                (R|E)DI := (R|E)DI – 2;
+            FI;
+    ELSE IF (Doubleword move)
+        THEN IF DF = 0
+            THEN
+                (R|E)SI := (R|E)SI + 4;
+                (R|E)DI := (R|E)DI + 4;
+            ELSE
+                (R|E)SI := (R|E)SI – 4;
+                (R|E)DI := (R|E)DI – 4;
+            FI;
+    ELSE IF (Quadword move)
+        THEN IF DF = 0
+            THEN
+                (R|E)SI := (R|E)SI + 8;
+                (R|E)DI := (R|E)DI + 8;
+            ELSE
+                (R|E)SI := (R|E)SI – 8;
+                (R|E)DI := (R|E)DI – 8;
+            FI;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/movsd.html b/x86/movsd.html new file mode 100644 index 0000000..f929c18 --- /dev/null +++ b/x86/movsd.html @@ -0,0 +1,262 @@ + +MOVSD + — Move or Merge Scalar Double Precision Floating-Point Value

MOVSD + — Move or Merge Scalar Double Precision Floating-Point Value

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F2 0F 10 /r MOVSD xmm1, xmm2AV/VSSE2Move scalar double precision floating-point value from xmm2 to xmm1 register.
F2 0F 10 /r MOVSD xmm1, m64AV/VSSE2Load scalar double precision floating-point value from m64 to xmm1 register.
F2 0F 11 /r MOVSD xmm1/m64, xmm2CV/VSSE2Move scalar double precision floating-point value from xmm2 register to xmm1/m64.
VEX.LIG.F2.0F.WIG 10 /r VMOVSD xmm1, xmm2, xmm3BV/VAVXMerge scalar double precision floating-point value from xmm2 and xmm3 to xmm1 register.
VEX.LIG.F2.0F.WIG 10 /r VMOVSD xmm1, m64DV/VAVXLoad scalar double precision floating-point value from m64 to xmm1 register.
VEX.LIG.F2.0F.WIG 11 /r VMOVSD xmm1, xmm2, xmm3EV/VAVXMerge scalar double precision floating-point value from xmm2 and xmm3 registers to xmm1.
VEX.LIG.F2.0F.WIG 11 /r VMOVSD m64, xmm1CV/VAVXStore scalar double precision floating-point value from xmm1 register to m64.
EVEX.LLIG.F2.0F.W1 10 /r VMOVSD xmm1 {k1}{z}, xmm2, xmm3BV/VAVX512FMerge scalar double precision floating-point value from xmm2 and xmm3 registers to xmm1 under writemask k1.
EVEX.LLIG.F2.0F.W1 10 /r VMOVSD xmm1 {k1}{z}, m64FV/VAVX512FLoad scalar double precision floating-point value from m64 to xmm1 register under writemask k1.
EVEX.LLIG.F2.0F.W1 11 /r VMOVSD xmm1 {k1}{z}, xmm2, xmm3EV/VAVX512FMerge scalar double precision floating-point value from xmm2 and xmm3 registers to xmm1 under writemask k1.
EVEX.LLIG.F2.0F.W1 11 /r VMOVSD m64 {k1}, xmm1GV/VAVX512FStore scalar double precision floating-point value from xmm1 register to m64 under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CN/AModRM:r/m (w)ModRM:reg (r)N/AN/A
DN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
EN/AModRM:r/m (w)EVEX.vvvv (r)ModRM:reg (r)N/A
FTuple1 ScalarModRM:reg (r, w)ModRM:r/m (r)N/AN/A
GTuple1 ScalarModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Moves a scalar double precision floating-point value from the source operand (second operand) to the destination operand (first operand). The source and destination operands can be XMM registers or 64-bit memory locations. This instruction can be used to move a double precision floating-point value to and from the low quadword of an XMM register and a 64-bit memory location, or to move a double precision floating-point value between the low quadwords of two XMM registers. The instruction cannot be used to transfer data between memory locations.

+

Legacy version: When the source and destination operands are XMM registers, bits (MAXVL-1:64) of the destination operand remain unchanged. When the source operand is a memory location and the destination operand is an XMM register, the quadword at bits 127:64 of the destination operand is cleared to all 0s, and bits (MAXVL-1:128) of the destination operand remain unchanged.

+

VEX and EVEX encoded register-register syntax: Moves a scalar double precision floating-point value from the second source operand (the third operand) to the low quadword element of the destination operand (the first operand). Bits 127:64 of the destination operand are copied from the first source operand (the second operand). Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

VEX and EVEX encoded memory load syntax: When the source operand is a memory location and the destination operand is an XMM register, bits (MAXVL-1:64) of the destination operand are cleared to all 0s.

+

EVEX encoded versions: The low quadword of the destination is updated according to the writemask.

+

Note: For VMOVSD (memory store and load forms), VEX.vvvv and EVEX.vvvv are reserved and must be 1111b; otherwise, the instruction will #UD.

+

Operation + ¶ +

+

VMOVSD (EVEX.LLIG.F2.0F 10 /r: VMOVSD xmm1, m64 With Support for 32 Registers) + ¶ +

+
IF k1[0] or *no writemask*
+    THEN DEST[63:0] := SRC[63:0]
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[63:0] := 0
+        FI;
+FI;
+DEST[MAXVL-1:64] := 0
+
+

VMOVSD (EVEX.LLIG.F2.0F 11 /r: VMOVSD m64, xmm1 With Support for 32 Registers) + ¶ +

+
IF k1[0] or *no writemask*
+    THEN DEST[63:0] := SRC[63:0]
+    ELSE *DEST[63:0] remains unchanged* ; merging-masking
+FI;
+
+

VMOVSD (EVEX.LLIG.F2.0F 11 /r: VMOVSD xmm1, xmm2, xmm3) + ¶ +

+
IF k1[0] or *no writemask*
+    THEN DEST[63:0] := SRC2[63:0]
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[63:0] := 0
+        FI;
+FI;
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

MOVSD (128-bit Legacy SSE Version: MOVSD xmm1, xmm2) + ¶ +

+
DEST[63:0] := SRC[63:0]
+DEST[MAXVL-1:64] (Unmodified)
+
+

VMOVSD (VEX.128.F2.0F 11 /r: VMOVSD xmm1, xmm2, xmm3) + ¶ +

+
DEST[63:0] := SRC2[63:0]
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VMOVSD (VEX.128.F2.0F 10 /r: VMOVSD xmm1, xmm2, xmm3) + ¶ +

+
DEST[63:0] := SRC2[63:0]
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VMOVSD (VEX.128.F2.0F 10 /r: VMOVSD xmm1, m64) + ¶ +

+
DEST[63:0] := SRC[63:0]
+DEST[MAXVL-1:64] := 0
+
+

MOVSD/VMOVSD (128-bit Versions: MOVSD m64, xmm1 or VMOVSD m64, xmm1) + ¶ +

+
DEST[63:0] := SRC[63:0]
+
+

MOVSD (128-bit Legacy SSE Version: MOVSD xmm1, m64) + ¶ +

+
DEST[63:0] := SRC[63:0]
+DEST[127:64] := 0
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMOVSD __m128d _mm_mask_load_sd(__m128d s, __mmask8 k, double * p);
+
+
VMOVSD __m128d _mm_maskz_load_sd( __mmask8 k, double * p);
+
+
VMOVSD __m128d _mm_mask_move_sd(__m128d sh, __mmask8 k, __m128d sl, __m128d a);
+
+
VMOVSD __m128d _mm_maskz_move_sd( __mmask8 k, __m128d s, __m128d a);
+
+
VMOVSD void _mm_mask_store_sd(double * p, __mmask8 k, __m128d s);
+
+
MOVSD __m128d _mm_load_sd (double *p)
+
+
MOVSD void _mm_store_sd (double *p, __m128d a)
+
+
MOVSD __m128d _mm_move_sd ( __m128d a, __m128d b)
+
+
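For illustration, a minimal C sketch using the non-masked intrinsics listed above; it assumes SSE2 support and a compiler that provides emmintrin.h, and the function name is illustrative.

#include <emmintrin.h>   /* SSE2 */

double movsd_demo(void)
{
    double mem = 3.5;
    __m128d lo  = _mm_load_sd(&mem);        /* MOVSD xmm, m64: bits 127:64 cleared       */
    __m128d hi  = _mm_set_pd(2.0, 1.0);     /* arbitrary register contents {1.0, 2.0}    */
    __m128d mrg = _mm_move_sd(hi, lo);      /* MOVSD xmm, xmm: low qword taken from lo,
                                               high qword kept from hi -> {3.5, 2.0}     */
    double out;
    _mm_store_sd(&out, mrg);                /* MOVSD m64, xmm: store the low qword       */
    return out;                             /* 3.5 */
}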

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-22, “Type 5 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.vvvv != 1111B.
+

EVEX-encoded instruction, see Table 2-58, “Type E10 Class Exception Conditions.”

diff --git a/x86/movshdup.html b/x86/movshdup.html new file mode 100644 index 0000000..51c3102 --- /dev/null +++ b/x86/movshdup.html @@ -0,0 +1,329 @@ + +MOVSHDUP + — Replicate Single Precision Floating-Point Values

MOVSHDUP + — Replicate Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 16 /r MOVSHDUP xmm1, xmm2/m128AV/VSSE3Move odd index single precision floating-point values from xmm2/mem and duplicate each element into xmm1.
VEX.128.F3.0F.WIG 16 /r VMOVSHDUP xmm1, xmm2/m128AV/VAVXMove odd index single precision floating-point values from xmm2/mem and duplicate each element into xmm1.
VEX.256.F3.0F.WIG 16 /r VMOVSHDUP ymm1, ymm2/m256AV/VAVXMove odd index single precision floating-point values from ymm2/mem and duplicate each element into ymm1.
EVEX.128.F3.0F.W0 16 /r VMOVSHDUP xmm1 {k1}{z}, xmm2/m128BV/VAVX512VL AVX512FMove odd index single precision floating-point values from xmm2/m128 and duplicate each element into xmm1 under writemask.
EVEX.256.F3.0F.W0 16 /r VMOVSHDUP ymm1 {k1}{z}, ymm2/m256BV/VAVX512VL AVX512FMove odd index single precision floating-point values from ymm2/m256 and duplicate each element into ymm1 under writemask.
EVEX.512.F3.0F.W0 16 /r VMOVSHDUP zmm1 {k1}{z}, zmm2/m512BV/VAVX512FMove odd index single precision floating-point values from zmm2/m512 and duplicate each element into zmm1 under writemask.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BFull MemModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Duplicates odd-indexed single precision floating-point values from the source operand (the second operand) into adjacent element pairs in the destination operand (the first operand). See Figure 4-3. The source operand is an XMM, YMM, or ZMM register or a 128-, 256-, or 512-bit memory location, and the destination operand is an XMM, YMM, or ZMM register.

+

128-bit Legacy SSE version: Bits (MAXVL-1:128) of the corresponding destination register remain unchanged.

+

VEX.128 encoded version: Bits (MAXVL-1:128) of the destination register are zeroed.

+

VEX.256 encoded version: Bits (MAXVL-1:256) of the destination register are zeroed.

+

EVEX encoded version: The destination operand is updated at 32-bit granularity according to the writemask.

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b; otherwise, the instruction will #UD.

+
[Figure: SRC = X7 X6 X5 X4 X3 X2 X1 X0; DEST = X7 X7 X5 X5 X3 X3 X1 X1]
Figure 4-3. MOVSHDUP Operation
+

Operation + ¶ +

+

VMOVSHDUP (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+TMP_SRC[31:0] := SRC[63:32]
+TMP_SRC[63:32] := SRC[63:32]
+TMP_SRC[95:64] := SRC[127:96]
+TMP_SRC[127:96] := SRC[127:96]
+IF VL >= 256
+    TMP_SRC[159:128] := SRC[191:160]
+    TMP_SRC[191:160] := SRC[191:160]
+    TMP_SRC[223:192] := SRC[255:224]
+    TMP_SRC[255:224] := SRC[255:224]
+FI;
+IF VL >= 512
+    TMP_SRC[287:256] := SRC[319:288]
+    TMP_SRC[319:288] := SRC[319:288]
+    TMP_SRC[351:320] := SRC[383:352]
+    TMP_SRC[383:352] := SRC[383:352]
+    TMP_SRC[415:384] := SRC[447:416]
+    TMP_SRC[447:416] := SRC[447:416]
+    TMP_SRC[479:448] := SRC[511:480]
+    TMP_SRC[511:480] := SRC[511:480]
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_SRC[i+31:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMOVSHDUP (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := SRC[63:32]
+DEST[63:32] := SRC[63:32]
+DEST[95:64] := SRC[127:96]
+DEST[127:96] := SRC[127:96]
+DEST[159:128] := SRC[191:160]
+DEST[191:160] := SRC[191:160]
+DEST[223:192] := SRC[255:224]
+DEST[255:224] := SRC[255:224]
+DEST[MAXVL-1:256] := 0
+
+

VMOVSHDUP (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := SRC[63:32]
+DEST[63:32] := SRC[63:32]
+DEST[95:64] := SRC[127:96]
+DEST[127:96] := SRC[127:96]
+DEST[MAXVL-1:128] := 0
+
+

MOVSHDUP (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := SRC[63:32]
+DEST[63:32] := SRC[63:32]
+DEST[95:64] := SRC[127:96]
+DEST[127:96] := SRC[127:96]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMOVSHDUP __m512 _mm512_movehdup_ps( __m512 a);
+
+
VMOVSHDUP __m512 _mm512_mask_movehdup_ps(__m512 s, __mmask16 k, __m512 a);
+
+
VMOVSHDUP __m512 _mm512_maskz_movehdup_ps( __mmask16 k, __m512 a);
+
+
VMOVSHDUP __m256 _mm256_mask_movehdup_ps(__m256 s, __mmask8 k, __m256 a);
+
+
VMOVSHDUP __m256 _mm256_maskz_movehdup_ps( __mmask8 k, __m256 a);
+
+
VMOVSHDUP __m128 _mm_mask_movehdup_ps(__m128 s, __mmask8 k, __m128 a);
+
+
VMOVSHDUP __m128 _mm_maskz_movehdup_ps( __mmask8 k, __m128 a);
+
+
VMOVSHDUP __m256 _mm256_movehdup_ps (__m256 a);
+
+
VMOVSHDUP __m128 _mm_movehdup_ps (__m128 a);
+
+
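For illustration, a minimal C sketch of the non-masked intrinsic listed above; it assumes SSE3 support and a compiler that provides pmmintrin.h, and the function name is illustrative.

#include <pmmintrin.h>   /* SSE3 */

void movshdup_demo(float out[4])
{
    __m128 src = _mm_setr_ps(0.0f, 1.0f, 2.0f, 3.0f);   /* X0..X3                          */
    __m128 dup = _mm_movehdup_ps(src);                   /* {X1, X1, X3, X3} = {1, 1, 3, 3} */
    _mm_storeu_ps(out, dup);
}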

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B or VEX.vvvv != 1111B.
diff --git a/x86/movsldup.html b/x86/movsldup.html new file mode 100644 index 0000000..0814d4d --- /dev/null +++ b/x86/movsldup.html @@ -0,0 +1,327 @@ + +MOVSLDUP + — Replicate Single Precision Floating-Point Values

MOVSLDUP + — Replicate Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 12 /r MOVSLDUP xmm1, xmm2/m128AV/VSSE3Move even index single precision floating-point values from xmm2/mem and duplicate each element into xmm1.
VEX.128.F3.0F.WIG 12 /r VMOVSLDUP xmm1, xmm2/m128AV/VAVXMove even index single precision floating-point values from xmm2/mem and duplicate each element into xmm1.
VEX.256.F3.0F.WIG 12 /r VMOVSLDUP ymm1, ymm2/m256AV/VAVXMove even index single precision floating-point values from ymm2/mem and duplicate each element into ymm1.
EVEX.128.F3.0F.W0 12 /r VMOVSLDUP xmm1 {k1}{z}, xmm2/m128BV/VAVX512VL AVX512FMove even index single precision floating-point values from xmm2/m128 and duplicate each element into xmm1 under writemask.
EVEX.256.F3.0F.W0 12 /r VMOVSLDUP ymm1 {k1}{z}, ymm2/m256BV/VAVX512VL AVX512FMove even index single precision floating-point values from ymm2/m256 and duplicate each element into ymm1 under writemask.
EVEX.512.F3.0F.W0 12 /r VMOVSLDUP zmm1 {k1}{z}, zmm2/m512BV/VAVX512FMove even index single precision floating-point values from zmm2/m512 and duplicate each element into zmm1 under writemask.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BFull MemModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Duplicates even-indexed single precision floating-point values from the source operand (the second operand) into adjacent element pairs in the destination operand (the first operand). See Figure 4-4. The source operand is an XMM, YMM, or ZMM register or a 128-, 256-, or 512-bit memory location, and the destination operand is an XMM, YMM, or ZMM register.

+

128-bit Legacy SSE version: Bits (MAXVL-1:128) of the corresponding destination register remain unchanged.

+

VEX.128 encoded version: Bits (MAXVL-1:128) of the destination register are zeroed.

+

VEX.256 encoded version: Bits (MAXVL-1:256) of the destination register are zeroed.

+

EVEX encoded version: The destination operand is updated at 32-bit granularity according to the writemask.

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b; otherwise, the instruction will #UD.

+
[Figure: SRC = X7 X6 X5 X4 X3 X2 X1 X0; DEST = X6 X6 X4 X4 X2 X2 X0 X0]
Figure 4-4. MOVSLDUP Operation
+

Operation + ¶ +

+

VMOVSLDUP (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+TMP_SRC[31:0] := SRC[31:0]
+TMP_SRC[63:32] := SRC[31:0]
+TMP_SRC[95:64] := SRC[95:64]
+TMP_SRC[127:96] := SRC[95:64]
+IF VL >= 256
+    TMP_SRC[159:128] := SRC[159:128]
+    TMP_SRC[191:160] := SRC[159:128]
+    TMP_SRC[223:192] := SRC[223:192]
+    TMP_SRC[255:224] := SRC[223:192]
+FI;
+IF VL >= 512
+    TMP_SRC[287:256] := SRC[287:256]
+    TMP_SRC[319:288] := SRC[287:256]
+    TMP_SRC[351:320] := SRC[351:320]
+    TMP_SRC[383:352] := SRC[351:320]
+    TMP_SRC[415:384] := SRC[415:384]
+    TMP_SRC[447:416] := SRC[415:384]
+    TMP_SRC[479:448] := SRC[479:448]
+    TMP_SRC[511:480] := SRC[479:448]
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_SRC[i+31:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMOVSLDUP (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := SRC[31:0]
+DEST[63:32] := SRC[31:0]
+DEST[95:64] := SRC[95:64]
+DEST[127:96] := SRC[95:64]
+DEST[159:128] := SRC[159:128]
+DEST[191:160] := SRC[159:128]
+DEST[223:192] := SRC[223:192]
+DEST[255:224] := SRC[223:192]
+DEST[MAXVL-1:256] := 0
+
+

VMOVSLDUP (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := SRC[31:0]
+DEST[63:32] := SRC[31:0]
+DEST[95:64] := SRC[95:64]
+DEST[127:96] := SRC[95:64]
+DEST[MAXVL-1:128] := 0
+
+

MOVSLDUP (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := SRC[31:0]
+DEST[63:32] := SRC[31:0]
+DEST[95:64] := SRC[95:64]
+DEST[127:96] := SRC[95:64]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMOVSLDUP __m512 _mm512_moveldup_ps( __m512 a);
+
+
VMOVSLDUP __m512 _mm512_mask_moveldup_ps(__m512 s, __mmask16 k, __m512 a);
+
+
VMOVSLDUP __m512 _mm512_maskz_moveldup_ps( __mmask16 k, __m512 a);
+
+
VMOVSLDUP __m256 _mm256_mask_moveldup_ps(__m256 s, __mmask8 k, __m256 a);
+
+
VMOVSLDUP __m256 _mm256_maskz_moveldup_ps( __mmask8 k, __m256 a);
+
+
VMOVSLDUP __m128 _mm_mask_moveldup_ps(__m128 s, __mmask8 k, __m128 a);
+
+
VMOVSLDUP __m128 _mm_maskz_moveldup_ps( __mmask8 k, __m128 a);
+
+
VMOVSLDUP __m256 _mm256_moveldup_ps (__m256 a);
+
+
VMOVSLDUP __m128 _mm_moveldup_ps (__m128 a);
+
+
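For illustration, a minimal C sketch of the non-masked intrinsic listed above; it assumes SSE3 support and a compiler that provides pmmintrin.h, and the function name is illustrative.

#include <pmmintrin.h>   /* SSE3 */

void movsldup_demo(float out[4])
{
    __m128 src = _mm_setr_ps(0.0f, 1.0f, 2.0f, 3.0f);   /* X0..X3                          */
    __m128 dup = _mm_moveldup_ps(src);                   /* {X0, X0, X2, X2} = {0, 0, 2, 2} */
    _mm_storeu_ps(out, dup);
}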

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B or VEX.vvvv != 1111B.
diff --git a/x86/movss.html b/x86/movss.html new file mode 100644 index 0000000..16be1af --- /dev/null +++ b/x86/movss.html @@ -0,0 +1,263 @@ + +MOVSS + — Move or Merge Scalar Single Precision Floating-Point Value

MOVSS + — Move or Merge Scalar Single Precision Floating-Point Value

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 10 /r MOVSS xmm1, xmm2AV/VSSEMerge scalar single precision floating-point value from xmm2 to xmm1 register.
F3 0F 10 /r MOVSS xmm1, m32AV/VSSELoad scalar single precision floating-point value from m32 to xmm1 register.
VEX.LIG.F3.0F.WIG 10 /r VMOVSS xmm1, xmm2, xmm3BV/VAVXMerge scalar single precision floating-point value from xmm2 and xmm3 to xmm1 register
VEX.LIG.F3.0F.WIG 10 /r VMOVSS xmm1, m32DV/VAVXLoad scalar single precision floating-point value from m32 to xmm1 register.
F3 0F 11 /r MOVSS xmm2/m32, xmm1CV/VSSEMove scalar single precision floating-point value from xmm1 register to xmm2/m32.
VEX.LIG.F3.0F.WIG 11 /r VMOVSS xmm1, xmm2, xmm3EV/VAVXMove scalar single precision floating-point value from xmm2 and xmm3 to xmm1 register.
VEX.LIG.F3.0F.WIG 11 /r VMOVSS m32, xmm1CV/VAVXMove scalar single precision floating-point value from xmm1 register to m32.
EVEX.LLIG.F3.0F.W0 10 /r VMOVSS xmm1 {k1}{z}, xmm2, xmm3BV/VAVX512FMove scalar single precision floating-point value from xmm2 and xmm3 to xmm1 register under writemask k1.
EVEX.LLIG.F3.0F.W0 10 /r VMOVSS xmm1 {k1}{z}, m32FV/VAVX512FMove scalar single precision floating-point values from m32 to xmm1 under writemask k1.
EVEX.LLIG.F3.0F.W0 11 /r VMOVSS xmm1 {k1}{z}, xmm2, xmm3EV/VAVX512FMove scalar single precision floating-point value from xmm2 and xmm3 to xmm1 register under writemask k1.
EVEX.LLIG.F3.0F.W0 11 /r VMOVSS m32 {k1}, xmm1GV/VAVX512FMove scalar single precision floating-point values from xmm1 to m32 under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CN/AModRM:r/m (w)ModRM:reg (r)N/AN/A
DN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
EN/AModRM:r/m (w)EVEX.vvvv (r)ModRM:reg (r)N/A
FTuple1 ScalarModRM:reg (r, w)ModRM:r/m (r)N/AN/A
GTuple1 ScalarModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Moves a scalar single precision floating-point value from the source operand (second operand) to the destination operand (first operand). The source and destination operands can be XMM registers or 32-bit memory locations. This instruction can be used to move a single precision floating-point value to and from the low doubleword of an XMM register and a 32-bit memory location, or to move a single precision floating-point value between the low doublewords of two XMM registers. The instruction cannot be used to transfer data between memory locations.

+

Legacy version: When the source and destination operands are XMM registers, bits (MAXVL-1:32) of the corresponding destination register are unmodified. When the source operand is a memory location and the destination operand is an XMM register, bits 127:32 of the destination operand are cleared to all 0s, and bits (MAXVL-1:128) of the destination operand remain unchanged.

+

VEX and EVEX encoded register-register syntax: Moves a scalar single precision floating-point value from the second source operand (the third operand) to the low doubleword element of the destination operand (the first operand). Bits 127:32 of the destination operand are copied from the first source operand (the second operand). Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

VEX and EVEX encoded memory load syntax: When the source operand is a memory location and the destination operand is an XMM register, bits (MAXVL-1:32) of the destination operand are cleared to all 0s.

+

EVEX encoded versions: The low doubleword of the destination is updated according to the writemask.

+

Note: For the memory store form instruction “VMOVSS m32, xmm1”, VEX.vvvv is reserved and must be 1111b; otherwise, the instruction will #UD. For the memory store form instruction “VMOVSS m32 {k1}, xmm1”, EVEX.vvvv is reserved and must be 1111b; otherwise, the instruction will #UD.

+

Software should ensure VMOVSS is encoded with VEX.L=0. Encoding VMOVSS with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+

VMOVSS (EVEX.LLIG.F3.0F.W0 10 /r When the Source Operand is Memory and the Destination is an XMM Register) + ¶ +

+
IF k1[0] or *no writemask*
+    THEN DEST[31:0] := SRC[31:0]
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[31:0] := 0
+        FI;
+FI;
+DEST[MAXVL-1:32] := 0
+
+

VMOVSS (EVEX.LLIG.F3.0F.W0 11 /r When the Source Operand is an XMM Register and the Destination is Memory) + ¶ +

+
IF k1[0] or *no writemask*
+    THEN DEST[31:0] := SRC[31:0]
+    ELSE *DEST[31:0] remains unchanged* ; merging-masking
+FI;
+
+

VMOVSS (EVEX.LLIG.F3.0F.W0 10/11 /r Where the Source and Destination are XMM Registers) + ¶ +

+
IF k1[0] or *no writemask*
+    THEN DEST[31:0] := SRC2[31:0]
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[31:0] := 0
+        FI;
+FI;
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

MOVSS (Legacy SSE Version When the Source and Destination Operands are Both XMM Registers) + ¶ +

+
DEST[31:0] := SRC[31:0]
+DEST[MAXVL-1:32] (Unmodified)
+
+

VMOVSS (VEX.128.F3.0F 11 /r Where the Destination is an XMM Register) + ¶ +

+
DEST[31:0] := SRC2[31:0]
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VMOVSS (VEX.128.F3.0F 10 /r Where the Source and Destination are XMM Registers) + ¶ +

+
DEST[31:0] := SRC2[31:0]
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VMOVSS (VEX.128.F3.0F 10 /r When the Source Operand is Memory and the Destination is an XMM Register) + ¶ +

+
DEST[31:0] := SRC[31:0]
+DEST[MAXVL-1:32] := 0
+
+

MOVSS/VMOVSS (When the Source Operand is an XMM Register and the Destination is Memory) + ¶ +

+
DEST[31:0] := SRC[31:0]
+
+

MOVSS (Legacy SSE Version when the Source Operand is Memory and the Destination is an XMM Register) + ¶ +

+
DEST[31:0] := SRC[31:0]
+DEST[127:32] := 0
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMOVSS __m128 _mm_mask_load_ss(__m128 s, __mmask8 k, float * p);
+
+
VMOVSS __m128 _mm_maskz_load_ss( __mmask8 k, float * p);
+
+
VMOVSS __m128 _mm_mask_move_ss(__m128 sh, __mmask8 k, __m128 sl, __m128 a);
+
+
VMOVSS __m128 _mm_maskz_move_ss( __mmask8 k, __m128 s, __m128 a);
+
+
VMOVSS void _mm_mask_store_ss(float * p, __mmask8 k, __m128 a);
+
+
MOVSS __m128 _mm_load_ss(float * p)
+
+
MOVSS void_mm_store_ss(float * p, __m128 a)
+
+
MOVSS __m128 _mm_move_ss(__m128 a, __m128 b)
+
+
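For illustration, a minimal C sketch using the non-masked intrinsics listed above; it assumes SSE support and a compiler that provides xmmintrin.h, and the function name is illustrative.

#include <xmmintrin.h>   /* SSE */

float movss_demo(void)
{
    float mem = 1.25f;
    __m128 lo  = _mm_load_ss(&mem);                    /* MOVSS xmm, m32: bits 127:32 cleared */
    __m128 hi  = _mm_setr_ps(9.0f, 8.0f, 7.0f, 6.0f);  /* arbitrary register contents         */
    __m128 mrg = _mm_move_ss(hi, lo);                  /* MOVSS xmm, xmm: low dword from lo,
                                                          bits 127:32 kept from hi            */
    float out;
    _mm_store_ss(&out, mrg);                           /* MOVSS m32, xmm: store the low dword */
    return out;                                        /* 1.25f */
}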

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-22, “Type 5 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.vvvv != 1111B.
+

EVEX-encoded instruction, see Table 2-58, “Type E10 Class Exception Conditions.”

diff --git a/x86/movsx.movsxd.html b/x86/movsx.movsxd.html new file mode 100644 index 0000000..0b2cc7a --- /dev/null +++ b/x86/movsx.movsxd.html @@ -0,0 +1,179 @@ + +MOVSX/MOVSXD + — Move With Sign-Extension

MOVSX/MOVSXD + — Move With Sign-Extension

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F BE /rMOVSX r16, r/m8RMValidValidMove byte to word with sign-extension.
0F BE /rMOVSX r32, r/m8RMValidValidMove byte to doubleword with sign-extension.
REX.W + 0F BE /rMOVSX r64, r/m8RMValidN.E.Move byte to quadword with sign-extension.
0F BF /rMOVSX r32, r/m16RMValidValidMove word to doubleword, with sign-extension.
REX.W + 0F BF /rMOVSX r64, r/m16RMValidN.E.Move word to quadword with sign-extension.
63 /r1MOVSXD r16, r/m16RMValidN.E.Move word to word with sign-extension.
63 /r1MOVSXD r32, r/m32RMValidN.E.Move doubleword to doubleword with sign-extension.
REX.W + 63 /rMOVSXD r64, r/m32RMValidN.E.Move doubleword to quadword with sign-extension.
+
+

1. The use of MOVSXD without REX.W in 64-bit mode is discouraged. Regular MOV should be used instead of using MOVSXD without REX.W.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Copies the contents of the source operand (register or memory location) to the destination operand (register) and sign extends the value to 16 or 32 bits (see Figure 7-6 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1). The size of the converted value depends on the operand-size attribute.

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Use of the REX.R prefix permits access to additional registers (R8-R15). Use of the REX.W prefix promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+
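For illustration, the C sketch below shows casts for which mainstream compilers typically emit MOVSX/MOVSXD; exact code generation depends on the compiler and target, and the function names are illustrative.

#include <stdint.h>

int32_t widen8_signed(int8_t b)    { return (int32_t)b; }  /* typically MOVSX r32, r/m8   */
int64_t widen16_signed(int16_t w)  { return (int64_t)w; }  /* typically MOVSX r64, r/m16  */
int64_t widen32_signed(int32_t d)  { return (int64_t)d; }  /* typically MOVSXD r64, r/m32 */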

Operation + ¶ +

+
DEST := SignExtend(SRC);
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/movupd.html b/x86/movupd.html new file mode 100644 index 0000000..b5e99ae --- /dev/null +++ b/x86/movupd.html @@ -0,0 +1,265 @@ + +MOVUPD + — Move Unaligned Packed Double Precision Floating-Point Values

MOVUPD + — Move Unaligned Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 10 /r MOVUPD xmm1, xmm2/m128AV/VSSE2Move unaligned packed double precision floating-point from xmm2/mem to xmm1.
66 0F 11 /r MOVUPD xmm2/m128, xmm1BV/VSSE2Move unaligned packed double precision floating-point from xmm1 to xmm2/mem.
VEX.128.66.0F.WIG 10 /r VMOVUPD xmm1, xmm2/m128AV/VAVXMove unaligned packed double precision floating-point from xmm2/mem to xmm1.
VEX.128.66.0F.WIG 11 /r VMOVUPD xmm2/m128, xmm1BV/VAVXMove unaligned packed double precision floating-point from xmm1 to xmm2/mem.
VEX.256.66.0F.WIG 10 /r VMOVUPD ymm1, ymm2/m256AV/VAVXMove unaligned packed double precision floating-point from ymm2/mem to ymm1.
VEX.256.66.0F.WIG 11 /r VMOVUPD ymm2/m256, ymm1BV/VAVXMove unaligned packed double precision floating-point from ymm1 to ymm2/mem.
EVEX.128.66.0F.W1 10 /r VMOVUPD xmm1 {k1}{z}, xmm2/m128CV/VAVX512VL AVX512FMove unaligned packed double precision floating-point from xmm2/m128 to xmm1 using writemask k1.
EVEX.128.66.0F.W1 11 /r VMOVUPD xmm2/m128 {k1}{z}, xmm1DV/VAVX512VL AVX512FMove unaligned packed double precision floating-point from xmm1 to xmm2/m128 using writemask k1.
EVEX.256.66.0F.W1 10 /r VMOVUPD ymm1 {k1}{z}, ymm2/m256CV/VAVX512VL AVX512FMove unaligned packed double precision floating-point from ymm2/m256 to ymm1 using writemask k1.
EVEX.256.66.0F.W1 11 /r VMOVUPD ymm2/m256 {k1}{z}, ymm1DV/VAVX512VL AVX512FMove unaligned packed double precision floating-point from ymm1 to ymm2/m256 using writemask k1.
EVEX.512.66.0F.W1 10 /r VMOVUPD zmm1 {k1}{z}, zmm2/m512CV/VAVX512FMove unaligned packed double precision floating-point values from zmm2/m512 to zmm1 using writemask k1.
EVEX.512.66.0F.W1 11 /r VMOVUPD zmm2/m512 {k1}{z}, zmm1DV/VAVX512FMove unaligned packed double precision floating-point values from zmm1 to zmm2/m512 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BN/AModRM:r/m (w)ModRM:reg (r)N/AN/A
CFull MemModRM:reg (w)ModRM:r/m (r)N/AN/A
DFull MemModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b; otherwise, the instruction will #UD.

+

EVEX.512 encoded version:

+

Moves 512 bits of packed double precision floating-point values from the source operand (second operand) to the destination operand (first operand). This instruction can be used to load a ZMM register from a 512-bit float64 memory location, or to store the contents of a ZMM register into memory. The destination operand is updated according to the writemask.

+

VEX.256 encoded version:

+

Moves 256 bits of packed double precision floating-point values from the source operand (second operand) to the destination operand (first operand). This instruction can be used to load a YMM register from a 256-bit memory location, to store the contents of a YMM register into a 256-bit memory location, or to move data between two YMM registers. Bits (MAXVL-1:256) of the destination register are zeroed.

+

128-bit versions:

+

Moves 128 bits of packed double precision floating-point values from the source operand (second operand) to the destination operand (first operand). This instruction can be used to load an XMM register from a 128-bit memory location, to store the contents of an XMM register into a 128-bit memory location, or to move data between two XMM registers.

+

128-bit Legacy SSE version: Bits (MAXVL-1:128) of the corresponding destination register remain unchanged.

+

When the source or destination operand is a memory operand, the operand may be unaligned on a 16-byte boundary without causing a general-protection exception (#GP) to be generated.

+

VEX.128 and EVEX.128 encoded versions: Bits (MAXVL-1:128) of the destination register are zeroed.

+

Operation + ¶ +

+

VMOVUPD (EVEX Encoded Versions, Register-Copy Form) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := SRC[i+63:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE DEST[i+63:i] := 0 ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMOVUPD (EVEX Encoded Versions, Store-Form) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := SRC[i+63:i]
+        ELSE *DEST[i+63:i] remains unchanged*
+            ; merging-masking
+    FI;
+ENDFOR;
+
+

VMOVUPD (EVEX Encoded Versions, Load-Form) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := SRC[i+63:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE DEST[i+63:i] := 0 ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMOVUPD (VEX.256 Encoded Version, Load - and Register Copy) + ¶ +

+
DEST[255:0] := SRC[255:0]
+DEST[MAXVL-1:256] := 0
+
+

VMOVUPD (VEX.256 Encoded Version, Store-Form) + ¶ +

+
DEST[255:0] := SRC[255:0]
+
+

VMOVUPD (VEX.128 Encoded Version) + ¶ +

+
DEST[127:0] := SRC[127:0]
+DEST[MAXVL-1:128] := 0
+
+

MOVUPD (128-bit Load- and Register-Copy- Form Legacy SSE Version) + ¶ +

+
DEST[127:0] := SRC[127:0]
+DEST[MAXVL-1:128] (Unmodified)
+
+

(V)MOVUPD (128-bit Store-Form Version) + ¶ +

+
DEST[127:0] := SRC[127:0]
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMOVUPD __m512d _mm512_loadu_pd( void * s);
+
+
VMOVUPD __m512d _mm512_mask_loadu_pd(__m512d a, __mmask8 k, void * s);
+
+
VMOVUPD __m512d _mm512_maskz_loadu_pd( __mmask8 k, void * s);
+
+
VMOVUPD void _mm512_storeu_pd( void * d, __m512d a);
+
+
VMOVUPD void _mm512_mask_storeu_pd( void * d, __mmask8 k, __m512d a);
+
+
VMOVUPD __m256d _mm256_mask_loadu_pd(__m256d s, __mmask8 k, void * m);
+
+
VMOVUPD __m256d _mm256_maskz_loadu_pd( __mmask8 k, void * m);
+
+
VMOVUPD void _mm256_mask_storeu_pd( void * d, __mmask8 k, __m256d a);
+
+
VMOVUPD __m128d _mm_mask_loadu_pd(__m128d s, __mmask8 k, void * m);
+
+
VMOVUPD __m128d _mm_maskz_loadu_pd( __mmask8 k, void * m);
+
+
VMOVUPD void _mm_mask_storeu_pd( void * d, __mmask8 k, __m128d a);
+
+
MOVUPD __m256d _mm256_loadu_pd (double * p);
+
+
MOVUPD void _mm256_storeu_pd( double *p, __m256d a);
+
+
MOVUPD __m128d _mm_loadu_pd (double * p);
+
+
MOVUPD void _mm_storeu_pd( double *p, __m128d a);
+
+
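For illustration, a minimal C sketch using the non-masked intrinsics listed above; it assumes SSE2 support and a compiler that provides emmintrin.h, and the function name is illustrative.

#include <emmintrin.h>   /* SSE2 */

/* Copy two doubles through an XMM register; src and dst need no 16-byte alignment. */
void movupd_copy2(double *dst, const double *src)
{
    __m128d v = _mm_loadu_pd(src);   /* MOVUPD xmm, m128 */
    _mm_storeu_pd(dst, v);           /* MOVUPD m128, xmm */
}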

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

Note treatment of #AC varies; additionally:

+ + + +
#UDIf VEX.vvvv != 1111B.
+

EVEX-encoded instruction, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/movups.html b/x86/movups.html new file mode 100644 index 0000000..fdd90de --- /dev/null +++ b/x86/movups.html @@ -0,0 +1,266 @@ + +MOVUPS + — Move Unaligned Packed Single Precision Floating-Point Values

MOVUPS + — Move Unaligned Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 10 /r MOVUPS xmm1, xmm2/m128AV/VSSEMove unaligned packed single precision floating-point from xmm2/mem to xmm1.
NP 0F 11 /r MOVUPS xmm2/m128, xmm1BV/VSSEMove unaligned packed single precision floating-point from xmm1 to xmm2/mem.
VEX.128.0F.WIG 10 /r VMOVUPS xmm1, xmm2/m128AV/VAVXMove unaligned packed single precision floating-point from xmm2/mem to xmm1.
VEX.128.0F.WIG 11 /r VMOVUPS xmm2/m128, xmm1BV/VAVXMove unaligned packed single precision floating-point from xmm1 to xmm2/mem.
VEX.256.0F.WIG 10 /r VMOVUPS ymm1, ymm2/m256AV/VAVXMove unaligned packed single precision floating-point from ymm2/mem to ymm1.
VEX.256.0F.WIG 11 /r VMOVUPS ymm2/m256, ymm1BV/VAVXMove unaligned packed single precision floating-point from ymm1 to ymm2/mem.
EVEX.128.0F.W0 10 /r VMOVUPS xmm1 {k1}{z}, xmm2/m128CV/VAVX512VL AVX512FMove unaligned packed single precision floating-point values from xmm2/m128 to xmm1 using writemask k1.
EVEX.256.0F.W0 10 /r VMOVUPS ymm1 {k1}{z}, ymm2/m256CV/VAVX512VL AVX512FMove unaligned packed single precision floating-point values from ymm2/m256 to ymm1 using writemask k1.
EVEX.512.0F.W0 10 /r VMOVUPS zmm1 {k1}{z}, zmm2/m512CV/VAVX512FMove unaligned packed single precision floating-point values from zmm2/m512 to zmm1 using writemask k1.
EVEX.128.0F.W0 11 /r VMOVUPS xmm2/m128 {k1}{z}, xmm1DV/VAVX512VL AVX512FMove unaligned packed single precision floating-point values from xmm1 to xmm2/m128 using writemask k1.
EVEX.256.0F.W0 11 /r VMOVUPS ymm2/m256 {k1}{z}, ymm1DV/VAVX512VL AVX512FMove unaligned packed single precision floating-point values from ymm1 to ymm2/m256 using writemask k1.
EVEX.512.0F.W0 11 /r VMOVUPS zmm2/m512 {k1}{z}, zmm1DV/VAVX512FMove unaligned packed single precision floating-point values from zmm1 to zmm2/m512 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BN/AModRM:r/m (w)ModRM:reg (r)N/AN/A
CFull MemModRM:reg (w)ModRM:r/m (r)N/AN/A
DFull MemModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b; otherwise, the instruction will #UD.

+

EVEX.512 encoded version:

+

Moves 512 bits of packed single precision floating-point values from the source operand (second operand) to the destination operand (first operand). This instruction can be used to load a ZMM register from a 512-bit float32 memory location, to store the contents of a ZMM register into memory. The destination operand is updated according to the writemask.

+

VEX.256 and EVEX.256 encoded versions:

+

Moves 256 bits of packed single precision floating-point values from the source operand (second operand) to the destination operand (first operand). This instruction can be used to load a YMM register from a 256-bit memory location, to store the contents of a YMM register into a 256-bit memory location, or to move data between two YMM registers. Bits (MAXVL-1:256) of the destination register are zeroed.

+

128-bit versions:

+

Moves 128 bits of packed single precision floating-point values from the source operand (second operand) to the destination operand (first operand). This instruction can be used to load an XMM register from a 128-bit memory location, to store the contents of an XMM register into a 128-bit memory location, or to move data between two XMM registers.

+

128-bit Legacy SSE version: Bits (MAXVL-1:128) of the corresponding destination register remain unchanged.

+

When the source or destination operand is a memory operand, the operand may be unaligned without causing a general-protection exception (#GP) to be generated.

+

VEX.128 and EVEX.128 encoded versions: Bits (MAXVL-1:128) of the destination register are zeroed.

+

Operation + ¶ +

+

VMOVUPS (EVEX Encoded Versions, Register-Copy Form) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SRC[i+31:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE DEST[i+31:i] := 0 ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMOVUPS (EVEX Encoded Versions, Store-Form) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SRC[i+31:i]
+        ELSE *DEST[i+31:i] remains unchanged*
+            ; merging-masking
+    FI;
+ENDFOR;
+
+

VMOVUPS (EVEX Encoded Versions, Load-Form) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SRC[i+31:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE DEST[i+31:i] := 0 ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMOVUPS (VEX.256 Encoded Version, Load - and Register Copy) + ¶ +

+
DEST[255:0] := SRC[255:0]
+DEST[MAXVL-1:256] := 0
+
+

VMOVUPS (VEX.256 Encoded Version, Store-Form) + ¶ +

+
DEST[255:0] := SRC[255:0]
+
+

VMOVUPS (VEX.128 Encoded Version) + ¶ +

+
DEST[127:0] := SRC[127:0]
+DEST[MAXVL-1:128] := 0
+
+

MOVUPS (128-bit Load- and Register-Copy- Form Legacy SSE Version) + ¶ +

+
DEST[127:0] := SRC[127:0]
+DEST[MAXVL-1:128] (Unmodified)
+
+

(V)MOVUPS (128-bit Store-Form Version) + ¶ +

+
DEST[127:0] := SRC[127:0]
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMOVUPS __m512 _mm512_loadu_ps( void * s);
+
+
VMOVUPS __m512 _mm512_mask_loadu_ps(__m512 a, __mmask16 k, void * s);
+
+
VMOVUPS __m512 _mm512_maskz_loadu_ps( __mmask16 k, void * s);
+
+
VMOVUPS void _mm512_storeu_ps( void * d, __m512 a);
+
+
VMOVUPS void _mm512_mask_storeu_ps( void * d, __mmask8 k, __m512 a);
+
+
VMOVUPS __m256 _mm256_mask_loadu_ps(__m256 a, __mmask8 k, void * s);
+
+
VMOVUPS __m256 _mm256_maskz_loadu_ps( __mmask8 k, void * s);
+
+
VMOVUPS void _mm256_mask_storeu_ps( void * d, __mmask8 k, __m256 a);
+
+
VMOVUPS __m128 _mm_mask_loadu_ps(__m128 a, __mmask8 k, void * s);
+
+
VMOVUPS __m128 _mm_maskz_loadu_ps( __mmask8 k, void * s);
+
+
VMOVUPS void _mm_mask_storeu_ps( void * d, __mmask8 k, __m128 a);
+
+
MOVUPS __m256 _mm256_loadu_ps ( float * p);
+
+
MOVUPS void _mm256_storeu_ps( float *p, __m256 a);
+
+
MOVUPS __m128 _mm_loadu_ps ( float * p);
+
+
MOVUPS void _mm_storeu_ps( float *p, __m128 a);
+
+
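For illustration, a minimal C sketch using the non-masked intrinsics listed above; it assumes SSE support and a compiler that provides xmmintrin.h, and the function name is illustrative.

#include <xmmintrin.h>   /* SSE */

/* Copy four floats through an XMM register; src and dst need no 16-byte alignment. */
void movups_copy4(float *dst, const float *src)
{
    __m128 v = _mm_loadu_ps(src);    /* MOVUPS xmm, m128 */
    _mm_storeu_ps(dst, v);           /* MOVUPS m128, xmm */
}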

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

Note treatment of #AC varies.

+

EVEX-encoded instruction, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B or VEX.vvvv != 1111B.
diff --git a/x86/movzx.html b/x86/movzx.html new file mode 100644 index 0000000..5141e14 --- /dev/null +++ b/x86/movzx.html @@ -0,0 +1,160 @@ + +MOVZX + — Move With Zero-Extend

MOVZX + — Move With Zero-Extend

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F B6 /rMOVZX r16, r/m8RMValidValidMove byte to word with zero-extension.
0F B6 /rMOVZX r32, r/m8RMValidValidMove byte to doubleword, zero-extension.
REX.W + 0F B6 /rMOVZX r64, r/m81RMValidN.E.Move byte to quadword, zero-extension.
0F B7 /rMOVZX r32, r/m16RMValidValidMove word to doubleword, zero-extension.
REX.W + 0F B7 /rMOVZX r64, r/m16RMValidN.E.Move word to quadword, zero-extension.
+
+

1. In 64-bit mode, r/m8 cannot be encoded to access the following byte registers if the REX prefix is used: AH, BH, CH, DH.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Copies the contents of the source operand (register or memory location) to the destination operand (register) and zero extends the value. The size of the converted value depends on the operand-size attribute.

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Use of the REX.R prefix permits access to additional registers (R8-R15). Use of the REX.W prefix promotes operation to 64-bit operands. See the summary chart at the beginning of this section for encoding data and limits.

+
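For illustration, the C sketch below shows casts for which mainstream compilers typically emit MOVZX; exact code generation depends on the compiler and target, and the function names are illustrative.

#include <stdint.h>

uint32_t widen8_unsigned(uint8_t b)    { return (uint32_t)b; }  /* typically MOVZX r32, r/m8  */
uint64_t widen16_unsigned(uint16_t w)  { return (uint64_t)w; }  /* typically MOVZX r32, r/m16
                                                                   (upper 32 bits implicitly zeroed) */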

Operation + ¶ +

+
DEST := ZeroExtend(SRC);
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/mpsadbw.html b/x86/mpsadbw.html new file mode 100644 index 0000000..28312d7 --- /dev/null +++ b/x86/mpsadbw.html @@ -0,0 +1,823 @@ + +MPSADBW + — Compute Multiple Packed Sums of Absolute Difference

MPSADBW + — Compute Multiple Packed Sums of Absolute Difference

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
66 0F 3A 42 /r ib MPSADBW xmm1, xmm2/m128, imm8RMIV/VSSE4_1Sums absolute 8-bit integer difference of adjacent groups of 4 byte integers in xmm1 and xmm2/m128 and writes the results in xmm1. Starting offsets within xmm1 and xmm2/m128 are determined by imm8.
VEX.128.66.0F3A.WIG 42 /r ib VMPSADBW xmm1, xmm2, xmm3/m128, imm8RVMIV/VAVXSums absolute 8-bit integer difference of adjacent groups of 4 byte integers in xmm2 and xmm3/m128 and writes the results in xmm1. Starting offsets within xmm2 and xmm3/m128 are determined by imm8.
VEX.256.66.0F3A.WIG 42 /r ib VMPSADBW ymm1, ymm2, ymm3/m256, imm8RVMIV/VAVX2Sums absolute 8-bit integer difference of adjacent groups of 4 byte integers in ymm2 and ymm3/m256 and writes the results in ymm1. Starting offsets within ymm2 and ymm3/m256 are determined by imm8.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMIModRM:reg (r, w)ModRM:r/m (r)imm8N/A
RVMIModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

(V)MPSADBW calculates packed word results of sum-absolute-difference (SAD) of unsigned bytes from two blocks of 32-bit dword elements, using two select fields in the immediate byte to select the offsets of the two blocks within the first source operand and the second operand. Packed SAD word results are calculated within each 128-bit lane. Each SAD word result is calculated between a stationary block_2 (whose offset within the second source operand is selected by a two bit select control, multiplied by 32 bits) and a sliding block_1 at consecutive byte-granular position within the first source operand. The offset of the first 32-bit block of block_1 is selectable using a one bit select control, multiplied by 32 bits.

+

128-bit Legacy SSE version: Imm8[1:0]*32 specifies the bit offset of block_2 within the second source operand. Imm[2]*32 specifies the initial bit offset of the block_1 within the first source operand. The first source operand and destination operand are the same. The first source and destination operands are XMM registers. The second source operand is either an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged. Bits 7:3 of the immediate byte are ignored.

+

VEX.128 encoded version: Imm8[1:0]*32 specifies the bit offset of block_2 within the second source operand. Imm[2]*32 specifies the initial bit offset of the block_1 within the first source operand. The first source and destination operands are XMM registers. The second source operand is either an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding YMM register are zeroed. Bits 7:3 of the immediate byte are ignored.

+

VEX.256 encoded version: The sum-absolute-difference (SAD) operation is repeated 8 times for VMPSADBW between the same block_2 (fixed offset within the second source operand) and a variable block_1 (offset is shifted by 8 bits for each SAD operation) in the first source operand. Each 16-bit result of eight SAD operations between block_2 and block_1 is written to the respective word in the lower 128 bits of the destination operand.

+

Additionally, VMPSADBW performs another eight SAD operations on block_4 of the second source operand and block_3 of the first source operand. (Imm8[4:3]*32 + 128) specifies the bit offset of block_4 within the second source operand. (Imm[5]*32+128) specifies the initial bit offset of the block_3 within the first source operand. Each 16-bit result of eight SAD operations between block_4 and block_3 is written to the respective word in the upper 128 bits of the destination operand.

+

The first source operand is a YMM register. The second source register can be a YMM register or a 256-bit memory location. The destination operand is a YMM register. Bits 7:6 of the immediate byte are ignored.

+

Note: An attempt to execute VMPSADBW encoded with VEX.L = 1 will cause an #UD exception.

+
[Figure: lower lane: block_2 of Src2 at Imm[1:0]*32 and sliding block_1 of Src1 at Imm[2]*32, Abs. Diff. then Sum into Destination bits 127:0; upper lane: block_4 of Src2 at Imm[4:3]*32+128 and sliding block_3 of Src1 at Imm[5]*32+128, Abs. Diff. then Sum into Destination bits 255:128]
Figure 4-5. 256-bit VMPSADBW Operation
+

Operation + ¶ +

+

VMPSADBW (VEX.256 Encoded Version) + ¶ +

+
BLK2_OFFSET := imm8[1:0]*32
+BLK1_OFFSET := imm8[2]*32
+SRC1_BYTE0 := SRC1[BLK1_OFFSET+7:BLK1_OFFSET]
+SRC1_BYTE1 := SRC1[BLK1_OFFSET+15:BLK1_OFFSET+8]
+SRC1_BYTE2 := SRC1[BLK1_OFFSET+23:BLK1_OFFSET+16]
+SRC1_BYTE3 := SRC1[BLK1_OFFSET+31:BLK1_OFFSET+24]
+SRC1_BYTE4 := SRC1[BLK1_OFFSET+39:BLK1_OFFSET+32]
+SRC1_BYTE5 := SRC1[BLK1_OFFSET+47:BLK1_OFFSET+40]
+SRC1_BYTE6 := SRC1[BLK1_OFFSET+55:BLK1_OFFSET+48]
+SRC1_BYTE7 := SRC1[BLK1_OFFSET+63:BLK1_OFFSET+56]
+SRC1_BYTE8 := SRC1[BLK1_OFFSET+71:BLK1_OFFSET+64]
+SRC1_BYTE9 := SRC1[BLK1_OFFSET+79:BLK1_OFFSET+72]
+SRC1_BYTE10 := SRC1[BLK1_OFFSET+87:BLK1_OFFSET+80]
+SRC2_BYTE0 := SRC2[BLK2_OFFSET+7:BLK2_OFFSET]
+SRC2_BYTE1 := SRC2[BLK2_OFFSET+15:BLK2_OFFSET+8]
+SRC2_BYTE2 := SRC2[BLK2_OFFSET+23:BLK2_OFFSET+16]
+SRC2_BYTE3 := SRC2[BLK2_OFFSET+31:BLK2_OFFSET+24]
+TEMP0 := ABS(SRC1_BYTE0 - SRC2_BYTE0)
+TEMP1 := ABS(SRC1_BYTE1 - SRC2_BYTE1)
+TEMP2 := ABS(SRC1_BYTE2 - SRC2_BYTE2)
+TEMP3 := ABS(SRC1_BYTE3 - SRC2_BYTE3)
+DEST[15:0] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS(SRC1_BYTE1 - SRC2_BYTE0)
+TEMP1 := ABS(SRC1_BYTE2 - SRC2_BYTE1)
+TEMP2 := ABS(SRC1_BYTE3 - SRC2_BYTE2)
+TEMP3 := ABS(SRC1_BYTE4 - SRC2_BYTE3)
+DEST[31:16] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS(SRC1_BYTE2 - SRC2_BYTE0)
+TEMP1 := ABS(SRC1_BYTE3 - SRC2_BYTE1)
+TEMP2 := ABS(SRC1_BYTE4 - SRC2_BYTE2)
+TEMP3 := ABS(SRC1_BYTE5 - SRC2_BYTE3)
+DEST[47:32] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS(SRC1_BYTE3 - SRC2_BYTE0)
+TEMP1 := ABS(SRC1_BYTE4 - SRC2_BYTE1)
+TEMP2 := ABS(SRC1_BYTE5 - SRC2_BYTE2)
+TEMP3 := ABS(SRC1_BYTE6 - SRC2_BYTE3)
+DEST[63:48] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS(SRC1_BYTE4 - SRC2_BYTE0)
+TEMP1 := ABS(SRC1_BYTE5 - SRC2_BYTE1)
+TEMP2 := ABS(SRC1_BYTE6 - SRC2_BYTE2)
+TEMP3 := ABS(SRC1_BYTE7 - SRC2_BYTE3)
+DEST[79:64] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS(SRC1_BYTE5 - SRC2_BYTE0)
+TEMP1 := ABS(SRC1_BYTE6 - SRC2_BYTE1)
+TEMP2 := ABS(SRC1_BYTE7 - SRC2_BYTE2)
+TEMP3 := ABS(SRC1_BYTE8 - SRC2_BYTE3)
+DEST[95:80] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS(SRC1_BYTE6 - SRC2_BYTE0)
+TEMP1 := ABS(SRC1_BYTE7 - SRC2_BYTE1)
+TEMP2 := ABS(SRC1_BYTE8 - SRC2_BYTE2)
+TEMP3 := ABS(SRC1_BYTE9 - SRC2_BYTE3)
+DEST[111:96] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS(SRC1_BYTE7 - SRC2_BYTE0)
+TEMP1 := ABS(SRC1_BYTE8 - SRC2_BYTE1)
+TEMP2 := ABS(SRC1_BYTE9 - SRC2_BYTE2)
+TEMP3 := ABS(SRC1_BYTE10 - SRC2_BYTE3)
+DEST[127:112] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+BLK2_OFFSET := imm8[4:3]*32 + 128
+BLK1_OFFSET := imm8[5]*32 + 128
+SRC1_BYTE0 := SRC1[BLK1_OFFSET+7:BLK1_OFFSET]
+SRC1_BYTE1 := SRC1[BLK1_OFFSET+15:BLK1_OFFSET+8]
+SRC1_BYTE2 := SRC1[BLK1_OFFSET+23:BLK1_OFFSET+16]
+SRC1_BYTE3 := SRC1[BLK1_OFFSET+31:BLK1_OFFSET+24]
+SRC1_BYTE4 := SRC1[BLK1_OFFSET+39:BLK1_OFFSET+32]
+SRC1_BYTE5 := SRC1[BLK1_OFFSET+47:BLK1_OFFSET+40]
+SRC1_BYTE6 := SRC1[BLK1_OFFSET+55:BLK1_OFFSET+48]
+SRC1_BYTE7 := SRC1[BLK1_OFFSET+63:BLK1_OFFSET+56]
+SRC1_BYTE8 := SRC1[BLK1_OFFSET+71:BLK1_OFFSET+64]
+SRC1_BYTE9 := SRC1[BLK1_OFFSET+79:BLK1_OFFSET+72]
+SRC1_BYTE10 := SRC1[BLK1_OFFSET+87:BLK1_OFFSET+80]
+SRC2_BYTE0 := SRC2[BLK2_OFFSET+7:BLK2_OFFSET]
+SRC2_BYTE1 := SRC2[BLK2_OFFSET+15:BLK2_OFFSET+8]
+SRC2_BYTE2 := SRC2[BLK2_OFFSET+23:BLK2_OFFSET+16]
+SRC2_BYTE3 := SRC2[BLK2_OFFSET+31:BLK2_OFFSET+24]
+TEMP0 := ABS(SRC1_BYTE0 - SRC2_BYTE0)
+TEMP1 := ABS(SRC1_BYTE1 - SRC2_BYTE1)
+TEMP2 := ABS(SRC1_BYTE2 - SRC2_BYTE2)
+TEMP3 := ABS(SRC1_BYTE3 - SRC2_BYTE3)
+DEST[143:128] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS(SRC1_BYTE1 - SRC2_BYTE0)
+TEMP1 := ABS(SRC1_BYTE2 - SRC2_BYTE1)
+TEMP2 := ABS(SRC1_BYTE3 - SRC2_BYTE2)
+TEMP3 := ABS(SRC1_BYTE4 - SRC2_BYTE3)
+DEST[159:144] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS(SRC1_BYTE2 - SRC2_BYTE0)
+TEMP1 := ABS(SRC1_BYTE3 - SRC2_BYTE1)
+TEMP2 := ABS(SRC1_BYTE4 - SRC2_BYTE2)
+TEMP3 := ABS(SRC1_BYTE5 - SRC2_BYTE3)
+DEST[175:160] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS(SRC1_BYTE3 - SRC2_BYTE0)
+TEMP1 := ABS(SRC1_BYTE4 - SRC2_BYTE1)
+TEMP2 := ABS(SRC1_BYTE5 - SRC2_BYTE2)
+TEMP3 := ABS(SRC1_BYTE6 - SRC2_BYTE3)
+DEST[191:176] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS(SRC1_BYTE4 - SRC2_BYTE0)
+TEMP1 := ABS(SRC1_BYTE5 - SRC2_BYTE1)
+TEMP2 := ABS(SRC1_BYTE6 - SRC2_BYTE2)
+TEMP3 := ABS(SRC1_BYTE7 - SRC2_BYTE3)
+DEST[207:192] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS(SRC1_BYTE5 - SRC2_BYTE0)
+TEMP1 := ABS(SRC1_BYTE6 - SRC2_BYTE1)
+TEMP2 := ABS(SRC1_BYTE7 - SRC2_BYTE2)
+TEMP3 := ABS(SRC1_BYTE8 - SRC2_BYTE3)
+DEST[223:208] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS(SRC1_BYTE6 - SRC2_BYTE0)
+TEMP1 := ABS(SRC1_BYTE7 - SRC2_BYTE1)
+TEMP2 := ABS(SRC1_BYTE8 - SRC2_BYTE2)
+TEMP3 := ABS(SRC1_BYTE9 - SRC2_BYTE3)
+DEST[239:224] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS(SRC1_BYTE7 - SRC2_BYTE0)
+TEMP1 := ABS(SRC1_BYTE8 - SRC2_BYTE1)
+TEMP2 := ABS(SRC1_BYTE9 - SRC2_BYTE2)
+TEMP3 := ABS(SRC1_BYTE10 - SRC2_BYTE3)
+DEST[255:240] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+
+

VMPSADBW (VEX.128 Encoded Version) + ¶ +

+
BLK2_OFFSET := imm8[1:0]*32
+BLK1_OFFSET := imm8[2]*32
+SRC1_BYTE0 := SRC1[BLK1_OFFSET+7:BLK1_OFFSET]
+SRC1_BYTE1 := SRC1[BLK1_OFFSET+15:BLK1_OFFSET+8]
+SRC1_BYTE2 := SRC1[BLK1_OFFSET+23:BLK1_OFFSET+16]
+SRC1_BYTE3 := SRC1[BLK1_OFFSET+31:BLK1_OFFSET+24]
+SRC1_BYTE4 := SRC1[BLK1_OFFSET+39:BLK1_OFFSET+32]
+SRC1_BYTE5 := SRC1[BLK1_OFFSET+47:BLK1_OFFSET+40]
+SRC1_BYTE6 := SRC1[BLK1_OFFSET+55:BLK1_OFFSET+48]
+SRC1_BYTE7 := SRC1[BLK1_OFFSET+63:BLK1_OFFSET+56]
+SRC1_BYTE8 := SRC1[BLK1_OFFSET+71:BLK1_OFFSET+64]
+SRC1_BYTE9 := SRC1[BLK1_OFFSET+79:BLK1_OFFSET+72]
+SRC1_BYTE10 := SRC1[BLK1_OFFSET+87:BLK1_OFFSET+80]
+SRC2_BYTE0 := SRC2[BLK2_OFFSET+7:BLK2_OFFSET]
+SRC2_BYTE1 := SRC2[BLK2_OFFSET+15:BLK2_OFFSET+8]
+SRC2_BYTE2 := SRC2[BLK2_OFFSET+23:BLK2_OFFSET+16]
+SRC2_BYTE3 := SRC2[BLK2_OFFSET+31:BLK2_OFFSET+24]
+TEMP0 := ABS(SRC1_BYTE0 - SRC2_BYTE0)
+TEMP1 := ABS(SRC1_BYTE1 - SRC2_BYTE1)
+TEMP2 := ABS(SRC1_BYTE2 - SRC2_BYTE2)
+TEMP3 := ABS(SRC1_BYTE3 - SRC2_BYTE3)
+DEST[15:0] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS(SRC1_BYTE1 - SRC2_BYTE0)
+TEMP1 := ABS(SRC1_BYTE2 - SRC2_BYTE1)
+TEMP2 := ABS(SRC1_BYTE3 - SRC2_BYTE2)
+TEMP3 := ABS(SRC1_BYTE4 - SRC2_BYTE3)
+DEST[31:16] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS(SRC1_BYTE2 - SRC2_BYTE0)
+TEMP1 := ABS(SRC1_BYTE3 - SRC2_BYTE1)
+TEMP2 := ABS(SRC1_BYTE4 - SRC2_BYTE2)
+TEMP3 := ABS(SRC1_BYTE5 - SRC2_BYTE3)
+DEST[47:32] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS(SRC1_BYTE3 - SRC2_BYTE0)
+TEMP1 := ABS(SRC1_BYTE4 - SRC2_BYTE1)
+TEMP2 := ABS(SRC1_BYTE5 - SRC2_BYTE2)
+TEMP3 := ABS(SRC1_BYTE6 - SRC2_BYTE3)
+DEST[63:48] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS(SRC1_BYTE4 - SRC2_BYTE0)
+TEMP1 := ABS(SRC1_BYTE5 - SRC2_BYTE1)
+TEMP2 := ABS(SRC1_BYTE6 - SRC2_BYTE2)
+TEMP3 := ABS(SRC1_BYTE7 - SRC2_BYTE3)
+DEST[79:64] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS(SRC1_BYTE5 - SRC2_BYTE0)
+TEMP1 := ABS(SRC1_BYTE6 - SRC2_BYTE1)
+TEMP2 := ABS(SRC1_BYTE7 - SRC2_BYTE2)
+TEMP3 := ABS(SRC1_BYTE8 - SRC2_BYTE3)
+DEST[95:80] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS(SRC1_BYTE6 - SRC2_BYTE0)
+TEMP1 := ABS(SRC1_BYTE7 - SRC2_BYTE1)
+TEMP2 := ABS(SRC1_BYTE8 - SRC2_BYTE2)
+TEMP3 := ABS(SRC1_BYTE9 - SRC2_BYTE3)
+DEST[111:96] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS(SRC1_BYTE7 - SRC2_BYTE0)
+TEMP1 := ABS(SRC1_BYTE8 - SRC2_BYTE1)
+TEMP2 := ABS(SRC1_BYTE9 - SRC2_BYTE2)
+TEMP3 := ABS(SRC1_BYTE10 - SRC2_BYTE3)
+DEST[127:112] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+DEST[MAXVL-1:128] := 0
+
+

MPSADBW (128-bit Legacy SSE Version) + ¶ +

+
SRC_OFFSET := imm8[1:0]*32
+DEST_OFFSET := imm8[2]*32
+DEST_BYTE0 := DEST[DEST_OFFSET+7:DEST_OFFSET]
+DEST_BYTE1 := DEST[DEST_OFFSET+15:DEST_OFFSET+8]
+DEST_BYTE2 := DEST[DEST_OFFSET+23:DEST_OFFSET+16]
+DEST_BYTE3 := DEST[DEST_OFFSET+31:DEST_OFFSET+24]
+DEST_BYTE4 := DEST[DEST_OFFSET+39:DEST_OFFSET+32]
+DEST_BYTE5 := DEST[DEST_OFFSET+47:DEST_OFFSET+40]
+DEST_BYTE6 := DEST[DEST_OFFSET+55:DEST_OFFSET+48]
+DEST_BYTE7 := DEST[DEST_OFFSET+63:DEST_OFFSET+56]
+DEST_BYTE8 := DEST[DEST_OFFSET+71:DEST_OFFSET+64]
+DEST_BYTE9 := DEST[DEST_OFFSET+79:DEST_OFFSET+72]
+DEST_BYTE10 := DEST[DEST_OFFSET+87:DEST_OFFSET+80]
+SRC_BYTE0 := SRC[SRC_OFFSET+7:SRC_OFFSET]
+SRC_BYTE1 := SRC[SRC_OFFSET+15:SRC_OFFSET+8]
+SRC_BYTE2 := SRC[SRC_OFFSET+23:SRC_OFFSET+16]
+SRC_BYTE3 := SRC[SRC_OFFSET+31:SRC_OFFSET+24]
+TEMP0 := ABS( DEST_BYTE0 - SRC_BYTE0)
+TEMP1 := ABS( DEST_BYTE1 - SRC_BYTE1)
+TEMP2 := ABS( DEST_BYTE2 - SRC_BYTE2)
+TEMP3 := ABS( DEST_BYTE3 - SRC_BYTE3)
+DEST[15:0] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS( DEST_BYTE1 - SRC_BYTE0)
+TEMP1 := ABS( DEST_BYTE2 - SRC_BYTE1)
+TEMP2 := ABS( DEST_BYTE3 - SRC_BYTE2)
+TEMP3 := ABS( DEST_BYTE4 - SRC_BYTE3)
+DEST[31:16] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS( DEST_BYTE2 - SRC_BYTE0)
+TEMP1 := ABS( DEST_BYTE3 - SRC_BYTE1)
+TEMP2 := ABS( DEST_BYTE4 - SRC_BYTE2)
+TEMP3 := ABS( DEST_BYTE5 - SRC_BYTE3)
+DEST[47:32] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS( DEST_BYTE3 - SRC_BYTE0)
+TEMP1 := ABS( DEST_BYTE4 - SRC_BYTE1)
+TEMP2 := ABS( DEST_BYTE5 - SRC_BYTE2)
+TEMP3 := ABS( DEST_BYTE6 - SRC_BYTE3)
+DEST[63:48] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS( DEST_BYTE4 - SRC_BYTE0)
+TEMP1 := ABS( DEST_BYTE5 - SRC_BYTE1)
+TEMP2 := ABS( DEST_BYTE6 - SRC_BYTE2)
+TEMP3 := ABS( DEST_BYTE7 - SRC_BYTE3)
+DEST[79:64] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS( DEST_BYTE5 - SRC_BYTE0)
+TEMP1 := ABS( DEST_BYTE6 - SRC_BYTE1)
+TEMP2 := ABS( DEST_BYTE7 - SRC_BYTE2)
+TEMP3 := ABS( DEST_BYTE8 - SRC_BYTE3)
+DEST[95:80] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS( DEST_BYTE6 - SRC_BYTE0)
+TEMP1 := ABS( DEST_BYTE7 - SRC_BYTE1)
+TEMP2 := ABS( DEST_BYTE8 - SRC_BYTE2)
+TEMP3 := ABS( DEST_BYTE9 - SRC_BYTE3)
+DEST[111:96] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+TEMP0 := ABS( DEST_BYTE7 - SRC_BYTE0)
+TEMP1 := ABS( DEST_BYTE8 - SRC_BYTE1)
+TEMP2 := ABS( DEST_BYTE9 - SRC_BYTE2)
+TEMP3 := ABS( DEST_BYTE10 - SRC_BYTE3)
+DEST[127:112] := TEMP0 + TEMP1 + TEMP2 + TEMP3
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
(V)MPSADBW __m128i _mm_mpsadbw_epu8 (__m128i s1, __m128i s2, const int mask);
+
+
VMPSADBW __m256i _mm256_mpsadbw_epu8 (__m256i s1, __m256i s2, const int mask);
+
+

Flags Affected + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions.”

diff --git a/x86/mul.html b/x86/mul.html new file mode 100644 index 0000000..fa2da9d --- /dev/null +++ b/x86/mul.html @@ -0,0 +1,199 @@ + +MUL + — Unsigned Multiply

MUL + — Unsigned Multiply

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
F6 /4MUL r/m8MValidValidUnsigned multiply (AX := AL ∗ r/m8).
REX + F6 /4MUL r/m81MValidN.E.Unsigned multiply (AX := AL ∗ r/m8).
F7 /4MUL r/m16MValidValidUnsigned multiply (DX:AX := AX ∗ r/m16).
F7 /4MUL r/m32MValidValidUnsigned multiply (EDX:EAX := EAX ∗ r/m32).
REX.W + F7 /4MUL r/m64MValidN.E.Unsigned multiply (RDX:RAX := RAX ∗ r/m64).
+
+

1. In 64-bit mode, r/m8 can not be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r)N/AN/AN/A
+

Description + ¶ +

+

Performs an unsigned multiplication of the first operand (destination operand) and the second operand (source operand) and stores the result in the destination operand. The destination operand is an implied operand located in register AL, AX, EAX, or RAX (depending on the size of the operand); the source operand is located in a general-purpose register or a memory location. The action of this instruction and the location of the result depend on the opcode and the operand size as shown in Table 4-9.

+

The result is stored in register AX, register pair DX:AX, register pair EDX:EAX, or register pair RDX:RAX (depending on the operand size), with the high-order bits of the product contained in register AH, DX, EDX, or RDX, respectively. If the high-order bits of the product are 0, the CF and OF flags are cleared; otherwise, the flags are set.
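
For intuition, the doubleword form can be modeled in C roughly as follows (an illustrative sketch, not the manual's definition; names are invented):

#include <stdint.h>

/* Rough model of "MUL r/m32": EDX:EAX := EAX * src.
   cf_of models CF/OF, which are set when the upper half is nonzero. */
static inline void mul32(uint32_t *eax, uint32_t *edx, uint32_t src, int *cf_of)
{
    uint64_t product = (uint64_t)(*eax) * src;
    *eax = (uint32_t)product;           /* low half of the product  */
    *edx = (uint32_t)(product >> 32);   /* high half of the product */
    *cf_of = (*edx != 0);
}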

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Use of the REX.R prefix permits access to additional registers (R8-R15). Use of the REX.W prefix promotes operation to 64 bits.

+

See the summary chart at the beginning of this section for encoding data and limits.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + +
Operand SizeSource 1Source 2Destination
ByteALr/m8AX
WordAXr/m16DX:AX
DoublewordEAXr/m32EDX:EAX
QuadwordRAXr/m64RDX:RAX
+
Table 4-9. MUL Results
+

Operation + ¶ +

+
IF (Byte operation)
+    THEN
+        AX := AL ∗ SRC;
+    ELSE (* Word or doubleword operation *)
+        IF OperandSize = 16
+            THEN
+                DX:AX := AX ∗ SRC;
+            ELSE IF OperandSize = 32
+                THEN EDX:EAX := EAX ∗ SRC; FI;
+            ELSE (* OperandSize = 64 *)
+                RDX:RAX := RAX ∗ SRC;
+        FI;
+FI;
+
+

Flags Affected + ¶ +

+

The OF and CF flags are set to 0 if the upper half of the result is 0; otherwise, they are set to 1. The SF, ZF, AF, and PF flags are undefined.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
diff --git a/x86/mulpd.html b/x86/mulpd.html new file mode 100644 index 0000000..c68e5bf --- /dev/null +++ b/x86/mulpd.html @@ -0,0 +1,177 @@ + +MULPD + — Multiply Packed Double Precision Floating-Point Values

MULPD + — Multiply Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 59 /r MULPD xmm1, xmm2/m128AV/VSSE2Multiply packed double precision floating-point values in xmm2/m128 with xmm1 and store result in xmm1.
VEX.128.66.0F.WIG 59 /r VMULPD xmm1,xmm2, xmm3/m128BV/VAVXMultiply packed double precision floating-point values in xmm3/m128 with xmm2 and store result in xmm1.
VEX.256.66.0F.WIG 59 /r VMULPD ymm1, ymm2, ymm3/m256BV/VAVXMultiply packed double precision floating-point values in ymm3/m256 with ymm2 and store result in ymm1.
EVEX.128.66.0F.W1 59 /r VMULPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512FMultiply packed double precision floating-point values from xmm3/m128/m64bcst to xmm2 and store result in xmm1.
EVEX.256.66.0F.W1 59 /r VMULPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FMultiply packed double precision floating-point values from ymm3/m256/m64bcst to ymm2 and store result in ymm1.
EVEX.512.66.0F.W1 59 /r VMULPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er}CV/VAVX512FMultiply packed double precision floating-point values in zmm3/m512/m64bcst with zmm2 and store result in zmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Multiply packed double precision floating-point values from the first source operand with corresponding values in the second source operand, and store the packed double precision floating-point results in the destination operand.

+

EVEX encoded versions: The first source operand (the second operand) is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register. Bits (MAXVL-1:256) of the corresponding destination ZMM register are zeroed.

+

VEX.128 encoded version: The first source operand is an XMM register. The second source operand can be an XMM register or a 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

128-bit Legacy SSE version: The second source can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register, and the upper bits (MAXVL-1:128) of the corresponding destination ZMM register are unmodified.
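
A minimal usage sketch (not part of the reference text; the values and function name are arbitrary) with the intrinsics listed further below:

#include <immintrin.h>

/* Element-wise multiply of four packed doubles (VMULPD ymm, ymm, ymm). */
__m256d scale4(__m256d v)
{
    __m256d factors = _mm256_set_pd(8.0, 4.0, 2.0, 1.0);
    return _mm256_mul_pd(v, factors);
}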

+

Operation + ¶ +

+

VMULPD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1) AND SRC2 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN
+                    DEST[i+63:i] := SRC1[i+63:i] * SRC2[63:0]
+                ELSE
+                    DEST[i+63:i] := SRC1[i+63:i] * SRC2[i+63:i]
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMULPD (VEX.256 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0] * SRC2[63:0]
+DEST[127:64] := SRC1[127:64] * SRC2[127:64]
+DEST[191:128] := SRC1[191:128] * SRC2[191:128]
+DEST[255:192] := SRC1[255:192] * SRC2[255:192]
+DEST[MAXVL-1:256] := 0;
+
+

VMULPD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0] * SRC2[63:0]
+DEST[127:64] := SRC1[127:64] * SRC2[127:64]
+DEST[MAXVL-1:128] := 0
+
+

MULPD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := DEST[63:0] * SRC[63:0]
+DEST[127:64] := DEST[127:64] * SRC[127:64]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMULPD __m512d _mm512_mul_pd( __m512d a, __m512d b);
+
+
VMULPD __m512d _mm512_mask_mul_pd(__m512d s, __mmask8 k, __m512d a, __m512d b);
+
+
VMULPD __m512d _mm512_maskz_mul_pd( __mmask8 k, __m512d a, __m512d b);
+
+
VMULPD __m512d _mm512_mul_round_pd( __m512d a, __m512d b, int);
+
+
VMULPD __m512d _mm512_mask_mul_round_pd(__m512d s, __mmask8 k, __m512d a, __m512d b, int);
+
+
VMULPD __m512d _mm512_maskz_mul_round_pd( __mmask8 k, __m512d a, __m512d b, int);
+
+
VMULPD __m256d _mm256_mul_pd (__m256d a, __m256d b);
+
+
MULPD __m128d _mm_mul_pd (__m128d a, __m128d b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/mulps.html b/x86/mulps.html new file mode 100644 index 0000000..1d94e0d --- /dev/null +++ b/x86/mulps.html @@ -0,0 +1,192 @@ + +MULPS + — Multiply Packed Single Precision Floating-Point Values

MULPS + — Multiply Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 59 /r MULPS xmm1, xmm2/m128AV/VSSEMultiply packed single precision floating-point values in xmm2/m128 with xmm1 and store result in xmm1.
VEX.128.0F.WIG 59 /r VMULPS xmm1,xmm2, xmm3/m128BV/VAVXMultiply packed single precision floating-point values in xmm3/m128 with xmm2 and store result in xmm1.
VEX.256.0F.WIG 59 /r VMULPS ymm1, ymm2, ymm3/m256BV/VAVXMultiply packed single precision floating-point values in ymm3/m256 with ymm2 and store result in ymm1.
EVEX.128.0F.W0 59 /r VMULPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512FMultiply packed single precision floating-point values from xmm3/m128/m32bcst to xmm2 and store result in xmm1.
EVEX.256.0F.W0 59 /r VMULPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512FMultiply packed single precision floating-point values from ymm3/m256/m32bcst to ymm2 and store result in ymm1.
EVEX.512.0F.W0 59 /r VMULPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst {er}CV/VAVX512FMultiply packed single precision floating-point values in zmm3/m512/m32bcst with zmm2 and store result in zmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Multiply the packed single precision floating-point values from the first source operand with the corresponding values in the second source operand, and store the packed single precision floating-point results in the destination operand.

+

EVEX encoded versions: The first source operand (the second operand) is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.
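
As an illustration of the writemask behavior (a hedged sketch, not from the reference text; the mask value and names are arbitrary), the masked intrinsic listed below leaves unselected elements equal to the pass-through operand:

#include <immintrin.h>

/* Multiply only the even-indexed floats; odd-indexed elements of the
   result are taken from the pass-through vector s (merging-masking). */
__m512 mul_even(__m512 s, __m512 a, __m512 b)
{
    __mmask16 even = 0x5555;
    return _mm512_mask_mul_ps(s, even, a, b);   /* VMULPS zmm1 {k1}, zmm2, zmm3 */
}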

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register. Bits (MAXVL-1:256) of the corresponding destination ZMM register are zeroed.

+

VEX.128 encoded version: The first source operand is an XMM register. The second source operand can be an XMM register or a 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

128-bit Legacy SSE version: The second source can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register, and the upper bits (MAXVL-1:128) of the corresponding destination ZMM register are unmodified.

+

Operation + ¶ +

+

VMULPS (EVEX Encoded Version) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1) AND SRC2 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN
+                    DEST[i+31:i] := SRC1[i+31:i] * SRC2[31:0]
+                ELSE
+                    DEST[i+31:i] := SRC1[i+31:i] * SRC2[i+31:i]
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VMULPS (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] * SRC2[31:0]
+DEST[63:32] := SRC1[63:32] * SRC2[63:32]
+DEST[95:64] := SRC1[95:64] * SRC2[95:64]
+DEST[127:96] := SRC1[127:96] * SRC2[127:96]
+DEST[159:128] := SRC1[159:128] * SRC2[159:128]
+DEST[191:160] := SRC1[191:160] * SRC2[191:160]
+DEST[223:192] := SRC1[223:192] * SRC2[223:192]
+DEST[255:224] := SRC1[255:224] * SRC2[255:224]
+DEST[MAXVL-1:256] := 0;
+
+

VMULPS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] * SRC2[31:0]
+DEST[63:32] := SRC1[63:32] * SRC2[63:32]
+DEST[95:64] := SRC1[95:64] * SRC2[95:64]
+DEST[127:96] := SRC1[127:96] * SRC2[127:96]
+DEST[MAXVL-1:128] := 0
+
+

MULPS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] * SRC2[31:0]
+DEST[63:32] := SRC1[63:32] * SRC2[63:32]
+DEST[95:64] := SRC1[95:64] * SRC2[95:64]
+DEST[127:96] := SRC1[127:96] * SRC2[127:96]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMULPS __m512 _mm512_mul_ps( __m512 a, __m512 b);
+
+
VMULPS __m512 _mm512_mask_mul_ps(__m512 s, __mmask16 k, __m512 a, __m512 b);
+
+
VMULPS __m512 _mm512_maskz_mul_ps(__mmask16 k, __m512 a, __m512 b);
+
+
VMULPS __m512 _mm512_mul_round_ps( __m512 a, __m512 b, int);
+
+
VMULPS __m512 _mm512_mask_mul_round_ps(__m512 s, __mmask16 k, __m512 a, __m512 b, int);
+
+
VMULPS __m512 _mm512_maskz_mul_round_ps(__mmask16 k, __m512 a, __m512 b, int);
+
+
VMULPS __m256 _mm256_mask_mul_ps(__m256 s, __mmask8 k, __m256 a, __m256 b);
+
+
VMULPS __m256 _mm256_maskz_mul_ps(__mmask8 k, __m256 a, __m256 b);
+
+
VMULPS __m128 _mm_mask_mul_ps(__m128 s, __mmask8 k, __m128 a, __m128 b);
+
+
VMULPS __m128 _mm_maskz_mul_ps(__mmask8 k, __m128 a, __m128 b);
+
+
VMULPS __m256 _mm256_mul_ps (__m256 a, __m256 b);
+
+
MULPS __m128 _mm_mul_ps (__m128 a, __m128 b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/mulsd.html b/x86/mulsd.html new file mode 100644 index 0000000..53085ce --- /dev/null +++ b/x86/mulsd.html @@ -0,0 +1,136 @@ + +MULSD + — Multiply Scalar Double Precision Floating-Point Value

MULSD + — Multiply Scalar Double Precision Floating-Point Value

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F2 0F 59 /r MULSD xmm1,xmm2/m64AV/VSSE2Multiply the low double precision floating-point value in xmm2/m64 by low double precision floating-point value in xmm1.
VEX.LIG.F2.0F.WIG 59 /r VMULSD xmm1,xmm2, xmm3/m64BV/VAVXMultiply the low double precision floating-point value in xmm3/m64 by low double precision floating-point value in xmm2.
EVEX.LLIG.F2.0F.W1 59 /r VMULSD xmm1 {k1}{z}, xmm2, xmm3/m64 {er}CV/VAVX512FMultiply the low double precision floating-point value in xmm3/m64 by low double precision floating-point value in xmm2.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CTuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Multiplies the low double precision floating-point value in the second source operand by the low double precision floating-point value in the first source operand, and stores the double precision floating-point result in the destination operand. The second source operand can be an XMM register or a 64-bit memory location. The first source operand and the destination operands are XMM registers.

+

128-bit Legacy SSE version: The first source operand and the destination operand are the same. Bits (MAXVL-1:64) of the corresponding destination register remain unchanged.

+

VEX.128 and EVEX encoded version: The quadword at bits 127:64 of the destination operand is copied from the same bits of the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.
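
A short usage sketch (not from the reference text; the function name is invented): only the low doubles are multiplied, and the upper lane of the result is carried over as described above:

#include <emmintrin.h>

/* MULSD/VMULSD: low doubles multiplied; the upper lane of the result
   comes from a (bits 127:64 pass through). */
__m128d mul_low(__m128d a, __m128d b)
{
    return _mm_mul_sd(a, b);
}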

+

EVEX encoded version: The low quadword element of the destination operand is updated according to the write-mask.

+

Software should ensure VMULSD is encoded with VEX.L=0. Encoding VMULSD with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+

VMULSD (EVEX Encoded Version) + ¶ +

+
IF (EVEX.b = 1) AND SRC2 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[63:0] := SRC1[63:0] * SRC2[63:0]
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[63:0] := 0
+            FI
+    FI;
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VMULSD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0] * SRC2[63:0]
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

MULSD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := DEST[63:0] * SRC[63:0]
+DEST[MAXVL-1:64] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMULSD __m128d _mm_mask_mul_sd(__m128d s, __mmask8 k, __m128d a, __m128d b);
+
+
VMULSD __m128d _mm_maskz_mul_sd( __mmask8 k, __m128d a, __m128d b);
+
+
VMULSD __m128d _mm_mul_round_sd( __m128d a, __m128d b, int);
+
+
VMULSD __m128d _mm_mask_mul_round_sd(__m128d s, __mmask8 k, __m128d a, __m128d b, int);
+
+
VMULSD __m128d _mm_maskz_mul_round_sd( __mmask8 k, __m128d a, __m128d b, int);
+
+
MULSD __m128d _mm_mul_sd (__m128d a, __m128d b)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/mulss.html b/x86/mulss.html new file mode 100644 index 0000000..bceb4d2 --- /dev/null +++ b/x86/mulss.html @@ -0,0 +1,136 @@ + +MULSS + — Multiply Scalar Single Precision Floating-Point Values

MULSS + — Multiply Scalar Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 59 /r MULSS xmm1,xmm2/m32AV/VSSEMultiply the low single precision floating-point value in xmm2/m32 by the low single precision floating-point value in xmm1.
VEX.LIG.F3.0F.WIG 59 /r VMULSS xmm1,xmm2, xmm3/m32BV/VAVXMultiply the low single precision floating-point value in xmm3/m32 by the low single precision floating-point value in xmm2.
EVEX.LLIG.F3.0F.W0 59 /r VMULSS xmm1 {k1}{z}, xmm2, xmm3/m32 {er}CV/VAVX512FMultiply the low single precision floating-point value in xmm3/m32 by the low single precision floating-point value in xmm2.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CTuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Multiplies the low single precision floating-point value from the second source operand by the low single precision floating-point value in the first source operand, and stores the single precision floating-point result in the destination operand. The second source operand can be an XMM register or a 32-bit memory location. The first source operand and the destination operands are XMM registers.

+

128-bit Legacy SSE version: The first source operand and the destination operand are the same. Bits (MAXVL-1:32) of the corresponding YMM destination register remain unchanged.

+

VEX.128 and EVEX encoded version: The first source operand is an xmm register encoded by VEX.vvvv. The three high-order doublewords of the destination operand are copied from the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.

+

EVEX encoded version: The low doubleword element of the destination operand is updated according to the write-mask.

+

Software should ensure VMULSS is encoded with VEX.L=0. Encoding VMULSS with VEX.L=1 may encounter unpredictable behavior across different processor generations.
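
A corresponding sketch for the scalar single precision form (illustrative only; the function name is invented):

#include <xmmintrin.h>

/* MULSS/VMULSS: low floats multiplied; the upper three elements of the
   result come from a. */
__m128 mulss_low(__m128 a, __m128 b)
{
    return _mm_mul_ss(a, b);
}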

+

Operation + ¶ +

+

VMULSS (EVEX Encoded Version) + ¶ +

+
IF (EVEX.b = 1) AND SRC2 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[31:0] := SRC1[31:0] * SRC2[31:0]
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[31:0] := 0
+            FI
+    FI;
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VMULSS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] * SRC2[31:0]
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

MULSS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := DEST[31:0] * SRC[31:0]
+DEST[MAXVL-1:32] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMULSS __m128 _mm_mask_mul_ss(__m128 s, __mmask8 k, __m128 a, __m128 b);
+
+
VMULSS __m128 _mm_maskz_mul_ss( __mmask8 k, __m128 a, __m128 b);
+
+
VMULSS __m128 _mm_mul_round_ss( __m128 a, __m128 b, int);
+
+
VMULSS __m128 _mm_mask_mul_round_ss(__m128 s, __mmask8 k, __m128 a, __m128 b, int);
+
+
VMULSS __m128 _mm_maskz_mul_round_ss( __mmask8 k, __m128 a, __m128 b, int);
+
+
MULSS __m128 _mm_mul_ss(__m128 a, __m128 b)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Underflow, Overflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/mulx.html b/x86/mulx.html new file mode 100644 index 0000000..e9998d4 --- /dev/null +++ b/x86/mulx.html @@ -0,0 +1,84 @@ + +MULX + — Unsigned Multiply Without Affecting Flags

MULX + — Unsigned Multiply Without Affecting Flags

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/ En64/32-bit ModeCPUID Feature FlagDescription
VEX.LZ.F2.0F38.W0 F6 /r MULX r32a, r32b, r/m32RVMV/VBMI2Unsigned multiply of r/m32 with EDX without affecting arithmetic flags.
VEX.LZ.F2.0F38.W1 F6 /r MULX r64a, r64b, r/m64RVMV/N.E.BMI2Unsigned multiply of r/m64 with RDX without affecting arithmetic flags.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RVMModRM:reg (w)VEX.vvvv (w)ModRM:r/m (r)RDX/EDX is implied 64/32 bits source
+

Description + ¶ +

+

Performs an unsigned multiplication of the implicit source operand (EDX/RDX) and the specified source operand (the third operand), stores the low half of the result in the second destination operand (second operand) and the high half of the result in the first destination operand (first operand), without reading or writing the arithmetic flags. This enables efficient programming where the software can interleave add-with-carry operations and multiplications.
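
For intuition, the 64-bit form can be modeled in C roughly as below (a sketch, not the manual's definition; it assumes a compiler that provides the unsigned __int128 type, and the function name is invented):

#include <stdint.h>

/* Rough model of MULX r64a, r64b, r/m64: returns the low half (second
   destination) and writes the high half (first destination) to *hi.
   No arithmetic flags are read or written. */
static inline uint64_t mulx64(uint64_t rdx, uint64_t src, uint64_t *hi)
{
    unsigned __int128 p = (unsigned __int128)rdx * src;
    *hi = (uint64_t)(p >> 64);
    return (uint64_t)p;
}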

+

If the first and second operands are identical, the register will contain the high half of the multiplication result.

+

This instruction is not supported in real mode and virtual-8086 mode. The operand size is always 32 bits if not in 64-bit mode. In 64-bit mode operand size 64 requires VEX.W1. VEX.W1 is ignored in non-64-bit modes. An attempt to execute this instruction with VEX.L not equal to 0 will cause #UD.

+

Operation + ¶ +

+
// DEST1: ModRM:reg
+// DEST2: VEX.vvvv
+IF (OperandSize = 32)
+    SRC1 := EDX;
+    DEST2 := (SRC1*SRC2)[31:0];
+    DEST1 := (SRC1*SRC2)[63:32];
+ELSE IF (OperandSize = 64)
+    SRC1 := RDX;
+        DEST2 := (SRC1*SRC2)[63:0];
+        DEST1 := (SRC1*SRC2)[127:64];
+FI
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
Auto-generated from high-level language when possible. unsigned int mulx_u32(unsigned int a, unsigned int b, unsigned int * hi);
+
+
unsigned __int64 mulx_u64(unsigned __int64 a, unsigned __int64 b, unsigned __int64 * hi);
+
+

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-29, “Type 13 Class Exception Conditions.”

diff --git a/x86/mwait.html b/x86/mwait.html new file mode 100644 index 0000000..31728c9 --- /dev/null +++ b/x86/mwait.html @@ -0,0 +1,176 @@ + +MWAIT + — Monitor Wait

MWAIT + — Monitor Wait

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 01 C9MWAITZOValidValidA hint that allows the processor to stop instruction execution and enter an implementation-dependent optimized state until occurrence of a class of events.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

The MWAIT instruction provides hints to allow the processor to enter an implementation-dependent optimized state. There are two principal targeted usages: address-range monitor and advanced power management. Both usages of MWAIT require the use of the MONITOR instruction.

+

CPUID.01H:ECX.MONITOR[bit 3] indicates the availability of MONITOR and MWAIT in the processor. When set, MWAIT may be executed only at privilege level 0 (use at any other privilege level results in an invalid-opcode exception). The operating system or system BIOS may disable this instruction by using the IA32_MISC_ENABLE MSR; disabling MWAIT clears the CPUID feature flag and causes execution to generate an invalid-opcode exception.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

ECX specifies optional extensions for the MWAIT instruction. EAX may contain hints such as the preferred optimized state the processor should enter. The first processors to implement MWAIT supported only the zero value for EAX and ECX. Later processors allowed setting ECX[0] to enable masked interrupts as break events for MWAIT (see below). Software can use the CPUID instruction to determine the extensions and hints supported by the processor.

+

MWAIT for Address Range Monitoring + ¶ +

+

For address-range monitoring, the MWAIT instruction operates with the MONITOR instruction. The two instructions allow the definition of an address at which to wait (MONITOR) and an implementation-dependent-optimized operation to commence at the wait address (MWAIT). The execution of MWAIT is a hint to the processor that it can enter an implementation-dependent-optimized state while waiting for an event or a store operation to the address range armed by MONITOR.

+

The following cause the processor to exit the implementation-dependent-optimized state: a store to the address range armed by the MONITOR instruction, an NMI or SMI, a debug exception, a machine check exception, the BINIT# signal, the INIT# signal, and the RESET# signal. Other implementation-dependent events may also cause the processor to exit the implementation-dependent-optimized state.

+

In addition, an external interrupt causes the processor to exit the implementation-dependent-optimized state either (1) if the interrupt would be delivered to software (e.g., as it would be if HLT had been executed instead of MWAIT); or (2) if ECX[0] = 1. Software can execute MWAIT with ECX[0] = 1 only if CPUID.05H:ECX[bit 1] = 1. (Implementation-specific conditions may result in an interrupt causing the processor to exit the implementation-dependent-optimized state even if interrupts are masked and ECX[0] = 0.)

+

Following exit from the implementation-dependent-optimized state, control passes to the instruction following the MWAIT instruction. A pending interrupt that is not masked (including an NMI or an SMI) may be delivered before execution of that instruction. Unlike the HLT instruction, the MWAIT instruction does not support a restart at the MWAIT instruction following the handling of an SMI.

+

If the preceding MONITOR instruction did not successfully arm an address range or if the MONITOR instruction has not been executed prior to executing MWAIT, then the processor will not enter the implementation-dependent-optimized state. Execution will resume at the instruction following the MWAIT.

+

MWAIT for Power Management + ¶ +

+

MWAIT accepts a hint and optional extension to the processor that it can enter a specified target C state while waiting for an event or a store operation to the address range armed by MONITOR. Support for MWAIT extensions for power management is indicated by CPUID.05H:ECX[bit 0] reporting 1.

+

EAX and ECX are used to communicate the additional information to the MWAIT instruction, such as the kind of optimized state the processor should enter. ECX specifies optional extensions for the MWAIT instruction. EAX may contain hints such as the preferred optimized state the processor should enter. Implementation-specific conditions may cause a processor to ignore the hint and enter a different optimized state. Future processor implementations may implement several optimized “waiting” states and will select among those states based on the hint argument.

+

Table 4-10 describes the meaning of ECX and EAX registers for MWAIT extensions.

+
+ + + + + + + + + +
BitsDescription
0Treat interrupts as break events even if masked (e.g., even if EFLAGS.IF=0). May be set only if CPUID.05H:ECX[bit 1] = 1.
31: 1Reserved
+
Table 4-10. MWAIT Extension Register (ECX)
+
+ + + + + + + + + + + + +
BitsDescription
3:0Sub C-state within a C-state, indicated by bits [7:4]
7:4Target C-state*. Value of 0 means C1; 1 means C2, and so on. Value of 01111B means C0. Note: Target C-states for MWAIT extensions are processor-specific C-states, not ACPI C-states.
31: 8Reserved
+
Table 4-11. MWAIT Hints Register (EAX)
+

Note that if MWAIT is used to enter any of the C-states that are numerically higher than C1, a store to the address range armed by the MONITOR instruction will cause the processor to exit MWAIT only if the store originated from another processor agent. A store from a non-processor agent might not cause the processor to exit MWAIT in such cases.

+

For additional details of MWAIT extensions, see Chapter 15, “Power and Thermal Management,” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A.

+

Operation + ¶ +

+
(* MWAIT takes the argument in EAX as a hint extension and is architected to take the argument in ECX as an instruction extension
+MWAIT EAX, ECX *)
+{
+WHILE ( (“Monitor Hardware is in armed state”)) {
+    implementation_dependent_optimized_state(EAX, ECX); }
+Set the state of Monitor Hardware as triggered;
+}
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
MWAIT void _mm_mwait(unsigned extensions, unsigned hints)
+
+

Example + ¶ +

+

The MONITOR/MWAIT instruction pair must be coded in the same loop because execution of the MWAIT instruction will trigger the monitor hardware. It is not proper usage to execute MONITOR once and then execute MWAIT in a loop. Setting up MONITOR without executing MWAIT has no adverse effects.

+

Typically the MONITOR/MWAIT pair is used in a sequence, such as:

+

EAX = Logical Address(Trigger)
+ECX = 0 (* Hints *)
+EDX = 0 (* Hints *)
+IF ( !trigger_store_happened ) {
+    MONITOR EAX, ECX, EDX
+    IF ( !trigger_store_happened ) {
+        MWAIT EAX, ECX
+    }
+}

+

The above code sequence makes sure that a triggering store does not happen between the first check of the trigger and the execution of the MONITOR instruction. Without the second check, that triggering store would go unnoticed. Typical usage of MONITOR and MWAIT would place the above code sequence within a loop.
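
A hedged C sketch of the same pattern using the intrinsics (assumes execution at privilege level 0 and a compiler providing _mm_monitor/_mm_mwait; the flag and function names are invented):

#include <pmmintrin.h>

/* Wait until *done becomes nonzero, re-arming the monitor on each pass.
   EAX hints and ECX extensions are both zero here. */
static void wait_for_flag(volatile int *done)
{
    while (!*done) {
        _mm_monitor((const void *)done, 0, 0);   /* arm the address range    */
        if (!*done)                              /* re-check before waiting  */
            _mm_mwait(0, 0);                     /* extensions = 0, hints = 0 */
    }
}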

+

Numeric Exceptions + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + +
#GP(0)If ECX[31:1] ≠ 0.
If ECX[0] = 1 and CPUID.05H:ECX[bit 1] = 0.
#UDIf CPUID.01H:ECX.MONITOR[bit 3] = 0.
If current privilege level is not 0.
+

Real Address Mode Exceptions + ¶ +

+ + + + + + + + +
#GPIf ECX[31:1] ≠ 0.
If ECX[0] = 1 and CPUID.05H:ECX[bit 1] = 0.
#UDIf CPUID.01H:ECX.MONITOR[bit 3] = 0.
+

Virtual 8086 Mode Exceptions + ¶ +

+ + + +
#UDThe MWAIT instruction is not recognized in virtual-8086 mode (even if CPUID.01H:ECX.MONITOR[bit 3] = 1).
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + +
#GP(0)If RCX[63:1] ≠ 0.
If RCX[0] = 1 and CPUID.05H:ECX[bit 1] = 0.
#UDIf the current privilege level is not 0.
If CPUID.01H:ECX.MONITOR[bit 3] = 0.
diff --git a/x86/neg.html b/x86/neg.html new file mode 100644 index 0000000..7ea7676 --- /dev/null +++ b/x86/neg.html @@ -0,0 +1,167 @@ + +NEG + — Two's Complement Negation

NEG + — Two's Complement Negation

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
F6 /3NEG r/m8MValidValidTwo's complement negate r/m8.
REX + F6 /3NEG r/m81MValidN.E.Two's complement negate r/m8.
F7 /3NEG r/m16MValidValidTwo's complement negate r/m16.
F7 /3NEG r/m32MValidValidTwo's complement negate r/m32.
REX.W + F7 /3NEG r/m64MValidN.E.Two's complement negate r/m64.
+
+

1. In 64-bit mode, r/m8 can not be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r, w)N/AN/AN/A
+

Description + ¶ +

+

Replaces the value of the operand (the destination operand) with its two's complement. (This operation is equivalent to subtracting the operand from 0.) The destination operand is located in a general-purpose register or a memory location.
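
For example (an informal C model, not the manual's pseudocode; names are invented):

#include <stdint.h>

/* Rough model of NEG r/m32: dest := -dest; CF := 1 unless dest was 0. */
static inline uint32_t neg32(uint32_t dest, int *cf)
{
    *cf = (dest != 0);
    return (uint32_t)(0u - dest);   /* two's complement negation */
}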

+

This instruction can be used with a LOCK prefix to allow the instruction to be executed atomically.

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). Using a REX prefix in the form of REX.W promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+

Operation + ¶ +

+
IF DEST = 0
+    THEN CF := 0;
+    ELSE CF := 1;
+FI;
+DEST := [– (DEST)]
+
+

Flags Affected + ¶ +

+

The CF flag is set to 0 if the source operand is 0; otherwise it is set to 1. The OF, SF, ZF, AF, and PF flags are set according to the result.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Compatibility Mode Exceptions + ¶ +

+

Same as for protected mode exceptions.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)For a page fault.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
diff --git a/x86/nop.html b/x86/nop.html new file mode 100644 index 0000000..cf565fe --- /dev/null +++ b/x86/nop.html @@ -0,0 +1,97 @@ + +NOP + — No Operation

NOP + — No Operation

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
NP 90NOPZOValidValidOne byte no-operation instruction.
NP 0F 1F /0NOP r/m16MValidValidMulti-byte no-operation instruction.
NP 0F 1F /0NOP r/m32MValidValidMulti-byte no-operation instruction.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
MModRM:r/m (r)N/AN/AN/A
+

Description + ¶ +

+

This instruction performs no operation. It is a one-byte or multi-byte NOP that takes up space in the instruction stream but does not impact machine context, except for the EIP register.

+

The multi-byte form of NOP is available on processors with model encoding:

+
    +
  • CPUID.01H.EAX[Bytes 11:8] = 0110B or 1111B
+

The multi-byte NOP instruction does not alter the content of a register and will not issue a memory operation. The instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
The one-byte NOP instruction is an alias mnemonic for the XCHG (E)AX, (E)AX instruction.
+The multi-byte NOP instruction performs no operation on supported processors and generates an undefined-opcode
+exception on processors that do not support the multi-byte NOP instruction.
+The memory operand form of the instruction allows software to create a byte sequence of “no operation” as one
+instruction. For situations where multiple-byte NOPs are needed, the recommended operations (32-bit mode and
+64-bit mode) are:
+
+
+ + + + + + + + +
Length | Assembly | Byte Sequence
2 bytes | 66 NOP | 66 90H
3 bytes | NOP DWORD ptr [EAX] | 0F 1F 00H
4 bytes | NOP DWORD ptr [EAX + 00H] | 0F 1F 40 00H
5 bytes | NOP DWORD ptr [EAX + EAX*1 + 00H] | 0F 1F 44 00 00H
6 bytes | 66 NOP DWORD ptr [EAX + EAX*1 + 00H] | 66 0F 1F 44 00 00H
7 bytes | NOP DWORD ptr [EAX + 00000000H] | 0F 1F 80 00 00 00 00H
8 bytes | NOP DWORD ptr [EAX + EAX*1 + 00000000H] | 0F 1F 84 00 00 00 00 00H
9 bytes | 66 NOP DWORD ptr [EAX + EAX*1 + 00000000H] | 66 0F 1F 84 00 00 00 00 00H
+
Table 4-12. Recommended Multi-Byte Sequence of NOP Instruction
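
A hedged sketch of carrying the recommended encodings as data, e.g., for padding a code buffer (byte values transcribed from Table 4-12 plus the one-byte 90H form; the array name is invented):

#include <stdint.h>

/* nop_pad[n] holds the recommended (n + 1)-byte NOP encoding, 1..9 bytes. */
static const uint8_t nop_pad[9][9] = {
    { 0x90 },
    { 0x66, 0x90 },
    { 0x0F, 0x1F, 0x00 },
    { 0x0F, 0x1F, 0x40, 0x00 },
    { 0x0F, 0x1F, 0x44, 0x00, 0x00 },
    { 0x66, 0x0F, 0x1F, 0x44, 0x00, 0x00 },
    { 0x0F, 0x1F, 0x80, 0x00, 0x00, 0x00, 0x00 },
    { 0x0F, 0x1F, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00 },
    { 0x66, 0x0F, 0x1F, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00 },
};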
+

Flags Affected + ¶ +

+

None.

+

Exceptions (All Operating Modes) + ¶ +

+

#UD If the LOCK prefix is used.

diff --git a/x86/not.html b/x86/not.html new file mode 100644 index 0000000..e7d7cdb --- /dev/null +++ b/x86/not.html @@ -0,0 +1,163 @@ + +NOT + — One's Complement Negation

NOT + — One's Complement Negation

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
F6 /2NOT r/m8MValidValidReverse each bit of r/m8.
REX + F6 /2NOT r/m81MValidN.E.Reverse each bit of r/m8.
F7 /2NOT r/m16MValidValidReverse each bit of r/m16.
F7 /2NOT r/m32MValidValidReverse each bit of r/m32.
REX.W + F7 /2NOT r/m64MValidN.E.Reverse each bit of r/m64.
+
+

1. In 64-bit mode, r/m8 can not be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r, w)N/AN/AN/A
+

Description + ¶ +

+

Performs a bitwise NOT operation (each 1 is set to 0, and each 0 is set to 1) on the destination operand and stores the result in the destination operand location. The destination operand can be a register or a memory location.

+

This instruction can be used with a LOCK prefix to allow the instruction to be executed atomically.

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). Using a REX prefix in the form of REX.W promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+

Operation + ¶ +

+
DEST := NOT DEST;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination operand points to a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Compatibility Mode Exceptions + ¶ +

+

Same as for protected mode exceptions.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
diff --git a/x86/or.html b/x86/or.html new file mode 100644 index 0000000..426424c --- /dev/null +++ b/x86/or.html @@ -0,0 +1,300 @@ + +OR + — Logical Inclusive OR

OR + — Logical Inclusive OR

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0C ibOR AL, imm8IValidValidAL OR imm8.
0D iwOR AX, imm16IValidValidAX OR imm16.
0D idOR EAX, imm32IValidValidEAX OR imm32.
REX.W + 0D idOR RAX, imm32IValidN.E.RAX OR imm32 (sign-extended).
80 /1 ibOR r/m8, imm8MIValidValidr/m8 OR imm8.
REX + 80 /1 ibOR r/m81, imm8MIValidN.E.r/m8 OR imm8.
81 /1 iwOR r/m16, imm16MIValidValidr/m16 OR imm16.
81 /1 idOR r/m32, imm32MIValidValidr/m32 OR imm32.
REX.W + 81 /1 idOR r/m64, imm32MIValidN.E.r/m64 OR imm32 (sign-extended).
83 /1 ibOR r/m16, imm8MIValidValidr/m16 OR imm8 (sign-extended).
83 /1 ibOR r/m32, imm8MIValidValidr/m32 OR imm8 (sign-extended).
REX.W + 83 /1 ibOR r/m64, imm8MIValidN.E.r/m64 OR imm8 (sign-extended).
08 /rOR r/m8, r8MRValidValidr/m8 OR r8.
REX + 08 /rOR r/m81, r81MRValidN.E.r/m8 OR r8.
09 /rOR r/m16, r16MRValidValidr/m16 OR r16.
09 /rOR r/m32, r32MRValidValidr/m32 OR r32.
REX.W + 09 /rOR r/m64, r64MRValidN.E.r/m64 OR r64.
0A /rOR r8, r/m8RMValidValidr8 OR r/m8.
REX + 0A /rOR r81, r/m81RMValidN.E.r8 OR r/m8.
0B /rOR r16, r/m16RMValidValidr16 OR r/m16.
0B /rOR r32, r/m32RMValidValidr32 OR r/m32.
REX.W + 0B /rOR r64, r/m64RMValidN.E.r64 OR r/m64.
+
+

1. In 64-bit mode, r/m8 can not be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
IAL/AX/EAX/RAXimm8/16/32N/AN/A
MIModRM:r/m (r, w)imm8/16/32N/AN/A
MRModRM:r/m (r, w)ModRM:reg (r)N/AN/A
RMModRM:reg (r, w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Performs a bitwise inclusive OR operation between the destination (first) and source (second) operands and stores the result in the destination operand location. The source operand can be an immediate, a register, or a memory location; the destination operand can be a register or a memory location. (However, two memory operands cannot be used in one instruction.) Each bit of the result of the OR instruction is set to 0 if both corresponding bits of the first and second operands are 0; otherwise, each bit is set to 1.

+

This instruction can be used with a LOCK prefix to allow the instruction to be executed atomically.

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). Using a REX prefix in the form of REX.W promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+
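
The following C sketch is illustrative only (not part of the original manual text). It shows the two ways compilers commonly emit this instruction: a plain register OR, and an atomic read-modify-write that GCC and Clang typically lower to a LOCK-prefixed OR on a memory operand — the lowering is an assumption about the compiler, not a guarantee.

#include <stdint.h>

/* Plain form: same result as "OR r/m32, r32" (each result bit is 1 unless
   both corresponding input bits are 0). */
static inline uint32_t or_u32(uint32_t dst, uint32_t src)
{
    return dst | src;
}

/* Atomic form: GCC/Clang generally emit "lock or [dst], bits" here when the
   return value is unused (assumed code generation, not guaranteed). */
static inline void atomic_set_bits(volatile uint32_t *dst, uint32_t bits)
{
    __atomic_fetch_or(dst, bits, __ATOMIC_SEQ_CST);
}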

Operation + ¶ +

+
DEST := DEST OR SRC;
+
+

Flags Affected + ¶ +

+

The OF and CF flags are cleared; the SF, ZF, and PF flags are set according to the result. The state of the AF flag is undefined.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination operand points to a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Compatibility Mode Exceptions + ¶ +

+

Same as for protected mode exceptions.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
diff --git a/x86/orpd.html b/x86/orpd.html new file mode 100644 index 0000000..d986139 --- /dev/null +++ b/x86/orpd.html @@ -0,0 +1,173 @@ + +ORPD + — Bitwise Logical OR of Packed Double Precision Floating-Point Values

ORPD + — Bitwise Logical OR of Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 56/r ORPD xmm1, xmm2/m128AV/VSSE2Return the bitwise logical OR of packed double precision floating-point values in xmm1 and xmm2/mem.
VEX.128.66.0F 56 /r VORPD xmm1,xmm2, xmm3/m128BV/VAVXReturn the bitwise logical OR of packed double precision floating-point values in xmm2 and xmm3/mem.
VEX.256.66.0F 56 /r VORPD ymm1, ymm2, ymm3/m256BV/VAVXReturn the bitwise logical OR of packed double precision floating-point values in ymm2 and ymm3/mem.
EVEX.128.66.0F.W1 56 /r VORPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512DQReturn the bitwise logical OR of packed double precision floating-point values in xmm2 and xmm3/m128/m64bcst subject to writemask k1.
EVEX.256.66.0F.W1 56 /r VORPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512DQReturn the bitwise logical OR of packed double precision floating-point values in ymm2 and ymm3/m256/m64bcst subject to writemask k1.
EVEX.512.66.0F.W1 56 /r VORPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstCV/VAVX512DQReturn the bitwise logical OR of packed double precision floating-point values in zmm2 and zmm3/m512/m64bcst subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a bitwise logical OR of the two, four or eight packed double precision floating-point values from the first source operand and the second source operand, and stores the result in the destination operand.

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with write-mask k1.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the corresponding ZMM register destination are zeroed.

+

VEX.128 encoded version: The first source operand is an XMM register. The second source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The second source can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding register destination are unmodified.

+
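
As an illustrative usage sketch (not from the manual), the SSE2 intrinsic _mm_or_pd listed under “Intel C/C++ Compiler Intrinsic Equivalent” below maps to ORPD; ORing a sign-bit mask into each lane is a common way to force packed doubles negative.

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    /* Bit 63 of each 64-bit lane: the sign bit of a double. */
    const __m128d sign = _mm_castsi128_pd(
        _mm_set1_epi64x((long long)0x8000000000000000ULL));
    __m128d v = _mm_set_pd(3.5, -2.0);

    __m128d neg = _mm_or_pd(v, sign);   /* ORPD: every lane gets its sign bit set */

    double out[2];
    _mm_storeu_pd(out, neg);
    printf("%f %f\n", out[0], out[1]);  /* prints -2.000000 -3.500000 */
    return 0;
}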

Operation + ¶ +

+

VORPD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b == 1) AND (SRC2 *is memory*)
+                THEN
+                    DEST[i+63:i] := SRC1[i+63:i] BITWISE OR SRC2[63:0]
+                ELSE
+                    DEST[i+63:i] := SRC1[i+63:i] BITWISE OR SRC2[i+63:i]
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VORPD (VEX.256 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0] BITWISE OR SRC2[63:0]
+DEST[127:64] := SRC1[127:64] BITWISE OR SRC2[127:64]
+DEST[191:128] := SRC1[191:128] BITWISE OR SRC2[191:128]
+DEST[255:192] := SRC1[255:192] BITWISE OR SRC2[255:192]
+DEST[MAXVL-1:256] := 0
+
+

VORPD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0] BITWISE OR SRC2[63:0]
+DEST[127:64] := SRC1[127:64] BITWISE OR SRC2[127:64]
+DEST[MAXVL-1:128] := 0
+
+

ORPD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := DEST[63:0] BITWISE OR SRC[63:0]
+DEST[127:64] := DEST[127:64] BITWISE OR SRC[127:64]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VORPD __m512d _mm512_or_pd ( __m512d a, __m512d b);
+
+
VORPD __m512d _mm512_mask_or_pd ( __m512d s, __mmask8 k, __m512d a, __m512d b);
+
+
VORPD __m512d _mm512_maskz_or_pd (__mmask8 k, __m512d a, __m512d b);
+
+
VORPD __m256d _mm256_mask_or_pd (__m256d s, __mmask8 k, __m256d a, __m256d b);
+
+
VORPD __m256d _mm256_maskz_or_pd (__mmask8 k, __m256d a, __m256d b);
+
+
VORPD __m128d _mm_mask_or_pd ( __m128d s, __mmask8 k, __m128d a, __m128d b);
+
+
VORPD __m128d _mm_maskz_or_pd (__mmask8 k, __m128d a, __m128d b);
+
+
VORPD __m256d _mm256_or_pd (__m256d a, __m256d b);
+
+
ORPD __m128d _mm_or_pd (__m128d a, __m128d b);
+
+
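
A brief usage sketch of the masked forms listed above (assumes a compiler with AVX-512F/AVX-512DQ support, e.g., built with -mavx512dq): merge-masking keeps the corresponding lane of s, zero-masking clears it.

#include <immintrin.h>

__m512d or_low_lanes(__m512d s, __m512d a, __m512d b)
{
    __mmask8 k = 0x0F;                              /* operate on lanes 0-3 only */
    __m512d merged = _mm512_mask_or_pd(s, k, a, b); /* lanes 4-7 copied from s   */
    __m512d zeroed = _mm512_maskz_or_pd(k, a, b);   /* lanes 4-7 forced to +0.0  */
    return _mm512_add_pd(merged, zeroed);
}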

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/orps.html b/x86/orps.html new file mode 100644 index 0000000..849732d --- /dev/null +++ b/x86/orps.html @@ -0,0 +1,181 @@ + +ORPS + — Bitwise Logical OR of Packed Single Precision Floating-Point Values

ORPS + — Bitwise Logical OR of Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 56 /r ORPS xmm1, xmm2/m128AV/VSSEReturn the bitwise logical OR of packed single precision floating-point values in xmm1 and xmm2/mem.
VEX.128.0F 56 /r VORPS xmm1,xmm2, xmm3/m128BV/VAVXReturn the bitwise logical OR of packed single precision floating-point values in xmm2 and xmm3/mem.
VEX.256.0F 56 /r VORPS ymm1, ymm2, ymm3/m256BV/VAVXReturn the bitwise logical OR of packed single precision floating-point values in ymm2 and ymm3/mem.
EVEX.128.0F.W0 56 /r VORPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512DQReturn the bitwise logical OR of packed single precision floating-point values in xmm2 and xmm3/m128/m32bcst subject to writemask k1.
EVEX.256.0F.W0 56 /r VORPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512DQReturn the bitwise logical OR of packed single precision floating-point values in ymm2 and ymm3/m256/m32bcst subject to writemask k1.
EVEX.512.0F.W0 56 /r VORPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstCV/VAVX512DQReturn the bitwise logical OR of packed single precision floating-point values in zmm2 and zmm3/m512/m32bcst subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a bitwise logical OR of the four, eight or sixteen packed single precision floating-point values from the first source operand and the second source operand, and stores the result in the destination operand.

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with write-mask k1.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the corresponding ZMM register destination are zeroed.

+

VEX.128 encoded version: The first source operand is an XMM register. The second source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The second source can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding register destination are unmodified.

+
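
An illustrative sketch (not part of the manual): a classic use of ORPS is to merge the two halves of a branchless select built from CMPPS, ANDPS, and ANDNPS; _mm_or_ps below maps to ORPS.

#include <immintrin.h>
#include <stdio.h>

/* Branchless select: where a mask lane is all-ones take 'a', else take 'b'.
   ORPS merges the two partial results; the mask typically comes from CMPPS. */
static inline __m128 select_ps(__m128 mask, __m128 a, __m128 b)
{
    return _mm_or_ps(_mm_and_ps(mask, a), _mm_andnot_ps(mask, b));
}

int main(void)
{
    __m128 x = _mm_set_ps(4.0f, -3.0f, 2.0f, -1.0f);
    __m128 mask = _mm_cmplt_ps(x, _mm_setzero_ps());       /* lanes where x < 0  */
    __m128 clamped = select_ps(mask, _mm_setzero_ps(), x); /* negatives become 0 */

    float out[4];
    _mm_storeu_ps(out, clamped);
    printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]); /* 0 2 0 4 */
    return 0;
}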

Operation + ¶ +

+

VORPS (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b == 1) AND (SRC2 *is memory*)
+                THEN
+                    DEST[i+31:i] := SRC1[i+31:i] BITWISE OR SRC2[31:0]
+                ELSE
+                    DEST[i+31:i] := SRC1[i+31:i] BITWISE OR SRC2[i+31:i]
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VORPS (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] BITWISE OR SRC2[31:0]
+DEST[63:32] := SRC1[63:32] BITWISE OR SRC2[63:32]
+DEST[95:64] := SRC1[95:64] BITWISE OR SRC2[95:64]
+DEST[127:96] := SRC1[127:96] BITWISE OR SRC2[127:96]
+DEST[159:128] := SRC1[159:128] BITWISE OR SRC2[159:128]
+DEST[191:160] := SRC1[191:160] BITWISE OR SRC2[191:160]
+DEST[223:192] := SRC1[223:192] BITWISE OR SRC2[223:192]
+DEST[255:224] := SRC1[255:224] BITWISE OR SRC2[255:224]
+DEST[MAXVL-1:256] := 0
+
+

VORPS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] BITWISE OR SRC2[31:0]
+DEST[63:32] := SRC1[63:32] BITWISE OR SRC2[63:32]
+DEST[95:64] := SRC1[95:64] BITWISE OR SRC2[95:64]
+DEST[127:96] := SRC1[127:96] BITWISE OR SRC2[127:96]
+DEST[MAXVL-1:128] := 0
+
+

ORPS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] BITWISE OR SRC2[31:0]
+DEST[63:32] := SRC1[63:32] BITWISE OR SRC2[63:32]
+DEST[95:64] := SRC1[95:64] BITWISE OR SRC2[95:64]
+DEST[127:96] := SRC1[127:96] BITWISE OR SRC2[127:96]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VORPS __m512 _mm512_or_ps ( __m512 a, __m512 b);
+
+
VORPS __m512 _mm512_mask_or_ps ( __m512 s, __mmask16 k, __m512 a, __m512 b);
+
+
VORPS __m512 _mm512_maskz_or_ps (__mmask16 k, __m512 a, __m512 b);
+
+
VORPS __m256 _mm256_mask_or_ps (__m256 s, __mmask8 k, __m256 a, __m256 b);
+
+
VORPS __m256 _mm256_maskz_or_ps (__mmask8 k, __m256 a, __m256 b);
+
+
VORPS __m128 _mm_mask_or_ps ( __m128 s, __mmask8 k, __m128 a, __m128 b);
+
+
VORPS __m128 _mm_maskz_or_ps (__mmask8 k, __m128 a, __m128 b);
+
+
VORPS __m256 _mm256_or_ps (__m256 a, __m256 b);
+
+
ORPS __m128 _mm_or_ps (__m128 a, __m128 b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/out.html b/x86/out.html new file mode 100644 index 0000000..4eddc2f --- /dev/null +++ b/x86/out.html @@ -0,0 +1,153 @@ + +OUT + — Output to Port

OUT + — Output to Port

+ +

Opcode1

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
E6 ibOUT imm8, ALIValidValidOutput byte in AL to I/O port address imm8.
E7 ibOUT imm8, AXIValidValidOutput word in AX to I/O port address imm8.
E7 ibOUT imm8, EAXIValidValidOutput doubleword in EAX to I/O port address imm8.
EEOUT DX, ALZOValidValidOutput byte in AL to I/O port address in DX.
EFOUT DX, AXZOValidValidOutput word in AX to I/O port address in DX.
EFOUT DX, EAXZOValidValidOutput doubleword in EAX to I/O port address in DX.
+
+

1. See the IA-32 Architecture Compatibility section below.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
Iimm8N/AN/AN/A
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Copies the value from the second operand (source operand) to the I/O port specified with the destination operand (first operand). The source operand can be register AL, AX, or EAX, depending on the size of the port being accessed (8, 16, or 32 bits, respectively); the destination operand can be a byte-immediate or the DX register. Using a byte immediate allows I/O port addresses 0 to 255 to be accessed; using the DX register as the destination operand allows I/O ports from 0 to 65,535 to be accessed.

+

The size of the I/O port being accessed is determined by the opcode for an 8-bit I/O port or by the operand-size attribute of the instruction for a 16- or 32-bit I/O port.

+

At the machine code level, I/O instructions are shorter when accessing 8-bit I/O ports. Here, the upper eight bits of the port address will be 0.

+

This instruction is only useful for accessing I/O ports located in the processor’s I/O address space. See Chapter 19, “Input/Output,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for more information on accessing I/O ports in the I/O address space.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+
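
For illustration only (not manual text), the conventional GCC/Clang inline-assembly wrapper below issues OUT DX, AL; it assumes x86 extended asm syntax and requires I/O privilege (ring 0, or ioperm()/iopl() on Linux), otherwise the #GP(0) case listed under the exceptions applies.

#include <stdint.h>

/* "OUT DX, AL": port number in DX (or an imm8 if it is a constant 0-255,
   via the "N" constraint), byte to write in AL. */
static inline void outb(uint16_t port, uint8_t value)
{
    __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
}

/* Example: write a POST code to the traditional debug port 0x80
   (a common scratch port on PC-compatible chipsets). */
static inline void post_code(uint8_t code)
{
    outb(0x80, code);
}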

IA-32 Architecture Compatibility + ¶ +

+

After executing an OUT instruction, the Pentium® processor ensures that the EWBE# pin has been sampled active before it begins to execute the next instruction. (Note that the instruction can be prefetched if EWBE# is not active, but it will not be executed until the EWBE# pin is sampled active.) Only the Pentium processor family has the EWBE# pin.

+

Operation + ¶ +

+
IF ((PE = 1) and ((CPL > IOPL) or (VM = 1)))
+    THEN (* Protected mode with CPL > IOPL or virtual-8086 mode *)
+        IF (Any I/O Permission Bit for I/O port being accessed = 1)
+            THEN (* I/O operation is not allowed *)
+                #GP(0);
+            ELSE (* I/O operation is allowed *)
+                DEST := SRC; (* Writes to selected I/O port *)
+        FI;
+    ELSE (* Real Mode or Protected Mode with CPL ≤ IOPL *)
+        DEST := SRC; (* Writes to selected I/O port *)
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + +
#GP(0)If the CPL is greater than (has less privilege) the I/O privilege level (IOPL) and any of the corresponding I/O permission bits in TSS for the I/O port being accessed is 1.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + +
#GP(0)If any of the I/O permission bits in the TSS for the I/O port being accessed is 1.
#PF(fault-code)If a page fault occurs.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same as protected mode exceptions.

+

64-Bit Mode Exceptions + ¶ +

+

Same as protected mode exceptions.

diff --git a/x86/outs.outsb.outsw.outsd.html b/x86/outs.outsb.outsw.outsd.html new file mode 100644 index 0000000..2e5f21a --- /dev/null +++ b/x86/outs.outsb.outsw.outsd.html @@ -0,0 +1,253 @@ + +OUTS/OUTSB/OUTSW/OUTSD + — Output String to Port

OUTS/OUTSB/OUTSW/OUTSD + — Output String to Port

+ + + + +

Opcode1

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
6EOUTS DX, m8ZOValidValidOutput byte from memory location specified in DS:(E)SI or RSI to I/O port specified in DX2.
6FOUTS DX, m16ZOValidValidOutput word from memory location specified in DS:(E)SI or RSI to I/O port specified in DX2.
6FOUTS DX, m32ZOValidValidOutput doubleword from memory location specified in DS:(E)SI or RSI to I/O port specified in DX2.
6EOUTSBZOValidValidOutput byte from memory location specified in DS:(E)SI or RSI to I/O port specified in DX2.
6FOUTSWZOValidValidOutput word from memory location specified in DS:(E)SI or RSI to I/O port specified in DX2.
6FOUTSDZOValidValidOutput doubleword from memory location specified in DS:(E)SI or RSI to I/O port specified in DX2.
+
+

1. See the IA-32 Architecture Compatibility section below.

+

2. In 64-bit mode, only 64-bit (RSI) and 32-bit (ESI) address sizes are supported. In non-64-bit mode, only 32-bit (ESI) and 16-bit (SI) address sizes are supported.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Copies data from the source operand (second operand) to the I/O port specified with the destination operand (first operand). The source operand is a memory location, the address of which is read from either the DS:SI, DS:ESI or the RSI registers (depending on the address-size attribute of the instruction, 16, 32 or 64, respectively). (The DS segment may be overridden with a segment override prefix.) The destination operand is an I/O port address (from 0 to 65,535) that is read from the DX register. The size of the I/O port being accessed (that is, the size of the source and destination operands) is determined by the opcode for an 8-bit I/O port or by the operand-size attribute of the instruction for a 16- or 32-bit I/O port.

+

At the assembly-code level, two forms of this instruction are allowed: the “explicit-operands” form and the “no-operands” form. The explicit-operands form (specified with the OUTS mnemonic) allows the source and destination operands to be specified explicitly. Here, the source operand should be a symbol that indicates the size of the I/O port and the source address, and the destination operand must be DX. This explicit-operands form is provided to allow documentation; however, note that the documentation provided by this form can be misleading. That is, the source operand symbol must specify the correct type (size) of the operand (byte, word, or doubleword), but it does not have to specify the correct location. The location is always specified by the DS:(E)SI or RSI registers, which must be loaded correctly before the OUTS instruction is executed.

+

The no-operands form provides “short forms” of the byte, word, and doubleword versions of the OUTS instructions. Here also DS:(E)SI is assumed to be the source operand and DX is assumed to be the destination operand. The size of the I/O port is specified with the choice of mnemonic: OUTSB (byte), OUTSW (word), or OUTSD (doubleword).

+

After the byte, word, or doubleword is transferred from the memory location to the I/O port, the SI/ESI/RSI register is incremented or decremented automatically according to the setting of the DF flag in the EFLAGS register. (If the DF flag is 0, the SI/ESI/RSI register is incremented; if the DF flag is 1, the SI/ESI/RSI register is decremented.) The SI/ESI/RSI register is incremented or decremented by 1 for byte operations, by 2 for word operations, and by 4 for doubleword operations.

+

The OUTS, OUTSB, OUTSW, and OUTSD instructions can be preceded by the REP prefix for block output of ECX bytes, words, or doublewords. See “REP/REPE/REPZ/REPNE/REPNZ—Repeat String Operation Prefix” in this chapter for a description of the REP prefix. This instruction is only useful for accessing I/O ports located in the processor’s I/O address space. See Chapter 19, “Input/Output,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for more information on accessing I/O ports in the I/O address space.

+

In 64-bit mode, the default operand size is 32 bits; operand size is not promoted by the use of REX.W. In 64-bit mode, the default address size is 64 bits, and a 64-bit address is specified using RSI by default. A 32-bit address using ESI is supported by using the prefix 67H, but a 16-bit address is not supported in 64-bit mode.

+
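
As a hedged sketch of the no-operands form (not manual text), the wrapper below issues REP OUTSB with GCC/Clang extended inline asm; it assumes a flat DS segment and, like OUT, requires I/O privilege.

#include <stddef.h>
#include <stdint.h>

/* "REP OUTSB": DX = port, RSI/ESI = source buffer, RCX/ECX = byte count.
   The "+S"/"+c" constraints reflect that the instruction advances RSI and
   decrements RCX as it runs. */
static inline void outsb_rep(uint16_t port, const void *buf, size_t len)
{
    __asm__ volatile ("rep outsb"
                      : "+S"(buf), "+c"(len)
                      : "d"(port)
                      : "memory");
}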

IA-32 Architecture Compatibility + ¶ +

+

After executing an OUTS, OUTSB, OUTSW, or OUTSD instruction, the Pentium processor ensures that the EWBE# pin has been sampled active before it begins to execute the next instruction. (Note that the instruction can be prefetched if EWBE# is not active, but it will not be executed until the EWBE# pin is sampled active.) Only the Pentium processor family has the EWBE# pin.

+

For the Pentium 4, Intel® Xeon®, and P6 processor family, upon execution of an OUTS, OUTSB, OUTSW, or OUTSD instruction, the processor will not execute the next instruction until the data phase of the transaction is complete.

+

Operation + ¶ +

+
IF ((PE = 1) and ((CPL > IOPL) or (VM = 1)))
+    THEN (* Protected mode with CPL > IOPL or virtual-8086 mode *)
+        IF (Any I/O Permission Bit for I/O port being accessed = 1)
+            THEN (* I/O operation is not allowed *)
+                #GP(0);
+            ELSE (* I/O operation is allowed *)
+                DEST := SRC; (* Writes to I/O port *)
+        FI;
+    ELSE (* Real Mode or Protected Mode or 64-Bit Mode with CPL ≤ IOPL *)
+        DEST := SRC; (* Writes to I/O port *)
+FI;
+Byte transfer:
+    IF 64-bit mode
+        Then
+            IF 64-Bit Address Size
+                THEN
+                    IF DF = 0
+                        THEN RSI := RSI + 1;
+                        ELSE RSI := RSI – 1;
+                    FI;
+                ELSE (* 32-Bit Address Size *)
+                    IF DF = 0
+                        THEN ESI := ESI + 1;
+                        ELSE ESI := ESI – 1;
+                    FI;
+            FI;
+        ELSE
+            IF DF = 0
+                THEN (E)SI := (E)SI + 1;
+                ELSE (E)SI := (E)SI – 1;
+            FI;
+    FI;
+Word transfer:
+    IF 64-bit mode
+        Then
+            IF 64-Bit Address Size
+                THEN
+                    IF DF = 0
+                        THEN RSI := RSI + 2;
+                        ELSE RSI := RSI – 2;
+                    FI;
+                ELSE (* 32-Bit Address Size *)
+                    IF DF = 0
+                        THEN ESI := ESI + 2;
+                        ELSE ESI := ESI – 2;
+                    FI;
+            FI;
+        ELSE
+            IF DF = 0
+                THEN (E)SI := (E)SI + 2;
+                ELSE (E)SI := (E)SI – 2;
+            FI;
+    FI;
+Doubleword transfer:
+    IF 64-bit mode
+        Then
+            IF 64-Bit Address Size
+                THEN
+                    IF DF = 0
+                        THEN RSI := RSI + 4;
+                        ELSE RSI := RSI – 4;
+                    FI;
+                ELSE (* 32-Bit Address Size *)
+                    IF DF = 0
+                        THEN ESI := ESI + 4;
+                        ELSE ESI := ESI – 4;
+                    FI;
+            FI;
+        ELSE
+            IF DF = 0
+                THEN (E)SI := (E)SI + 4;
+                ELSE (E)SI := (E)SI – 4;
+            FI;
+    FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#GP(0)If the CPL is greater than (has less privilege) the I/O privilege level (IOPL) and any of the corresponding I/O permission bits in TSS for the I/O port being accessed is 1.
If a memory operand effective address is outside the limit of the CS, DS, ES, FS, or GS segment.
If the segment register contains a NULL segment selector.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If any of the I/O permission bits in the TSS for the I/O port being accessed is 1.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same as for protected mode exceptions.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the CPL is greater than (has less privilege) the I/O privilege level (IOPL) and any of the corresponding I/O permission bits in TSS for the I/O port being accessed is 1.
If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/pabsb.pabsw.pabsd.pabsq.html b/x86/pabsb.pabsw.pabsd.pabsq.html new file mode 100644 index 0000000..02c1035 --- /dev/null +++ b/x86/pabsb.pabsw.pabsd.pabsq.html @@ -0,0 +1,450 @@ + +PABSB/PABSW/PABSD/PABSQ + — Packed Absolute Value

PABSB/PABSW/PABSD/PABSQ + — Packed Absolute Value

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 38 1C /r1 PABSB mm1, mm2/m64AV/VSSSE3Compute the absolute value of bytes in mm2/m64 and store UNSIGNED result in mm1.
66 0F 38 1C /r PABSB xmm1, xmm2/m128AV/VSSSE3Compute the absolute value of bytes in xmm2/m128 and store UNSIGNED result in xmm1.
NP 0F 38 1D /r1 PABSW mm1, mm2/m64AV/VSSSE3Compute the absolute value of 16-bit integers in mm2/m64 and store UNSIGNED result in mm1.
66 0F 38 1D /r PABSW xmm1, xmm2/m128AV/VSSSE3Compute the absolute value of 16-bit integers in xmm2/m128 and store UNSIGNED result in xmm1.
NP 0F 38 1E /r1 PABSD mm1, mm2/m64AV/VSSSE3Compute the absolute value of 32-bit integers in mm2/m64 and store UNSIGNED result in mm1.
66 0F 38 1E /r PABSD xmm1, xmm2/m128AV/VSSSE3Compute the absolute value of 32-bit integers in xmm2/m128 and store UNSIGNED result in xmm1.
VEX.128.66.0F38.WIG 1C /r VPABSB xmm1, xmm2/m128AV/VAVXCompute the absolute value of bytes in xmm2/m128 and store UNSIGNED result in xmm1.
VEX.128.66.0F38.WIG 1D /r VPABSW xmm1, xmm2/m128AV/VAVXCompute the absolute value of 16-bit integers in xmm2/m128 and store UNSIGNED result in xmm1.
VEX.128.66.0F38.WIG 1E /r VPABSD xmm1, xmm2/m128AV/VAVXCompute the absolute value of 32-bit integers in xmm2/m128 and store UNSIGNED result in xmm1.
VEX.256.66.0F38.WIG 1C /r VPABSB ymm1, ymm2/m256AV/VAVX2Compute the absolute value of bytes in ymm2/m256 and store UNSIGNED result in ymm1.
VEX.256.66.0F38.WIG 1D /r VPABSW ymm1, ymm2/m256AV/VAVX2Compute the absolute value of 16-bit integers in ymm2/m256 and store UNSIGNED result in ymm1.
VEX.256.66.0F38.WIG 1E /r VPABSD ymm1, ymm2/m256AV/VAVX2Compute the absolute value of 32-bit integers in ymm2/m256 and store UNSIGNED result in ymm1.
EVEX.128.66.0F38.WIG 1C /r VPABSB xmm1 {k1}{z}, xmm2/m128BV/VAVX512VL AVX512BWCompute the absolute value of bytes in xmm2/m128 and store UNSIGNED result in xmm1 using writemask k1.
EVEX.256.66.0F38.WIG 1C /r VPABSB ymm1 {k1}{z}, ymm2/m256BV/VAVX512VL AVX512BWCompute the absolute value of bytes in ymm2/m256 and store UNSIGNED result in ymm1 using writemask k1.
EVEX.512.66.0F38.WIG 1C /r VPABSB zmm1 {k1}{z}, zmm2/m512BV/VAVX512BWCompute the absolute value of bytes in zmm2/m512 and store UNSIGNED result in zmm1 using writemask k1.
EVEX.128.66.0F38.WIG 1D /r VPABSW xmm1 {k1}{z}, xmm2/m128BV/VAVX512VL AVX512BWCompute the absolute value of 16-bit integers in xmm2/m128 and store UNSIGNED result in xmm1 using writemask k1.
EVEX.256.66.0F38.WIG 1D /r VPABSW ymm1 {k1}{z}, ymm2/m256BV/VAVX512VL AVX512BWCompute the absolute value of 16-bit integers in ymm2/m256 and store UNSIGNED result in ymm1 using writemask k1.
EVEX.512.66.0F38.WIG 1D /r VPABSW zmm1 {k1}{z}, zmm2/m512BV/VAVX512BWCompute the absolute value of 16-bit integers in zmm2/m512 and store UNSIGNED result in zmm1 using writemask k1.
EVEX.128.66.0F38.W0 1E /r VPABSD xmm1 {k1}{z}, xmm2/m128/m32bcstCV/VAVX512VL AVX512FCompute the absolute value of 32-bit integers in xmm2/m128/m32bcst and store UNSIGNED result in xmm1 using writemask k1.
EVEX.256.66.0F38.W0 1E /r VPABSD ymm1 {k1}{z}, ymm2/m256/m32bcstCV/VAVX512VL AVX512FCompute the absolute value of 32-bit integers in ymm2/m256/m32bcst and store UNSIGNED result in ymm1 using writemask k1.
EVEX.512.66.0F38.W0 1E /r VPABSD zmm1 {k1}{z}, zmm2/m512/m32bcstCV/VAVX512FCompute the absolute value of 32-bit integers in zmm2/m512/m32bcst and store UNSIGNED result in zmm1 using writemask k1.
EVEX.128.66.0F38.W1 1F /r VPABSQ xmm1 {k1}{z}, xmm2/m128/m64bcstCV/VAVX512VL AVX512FCompute the absolute value of 64-bit integers in xmm2/m128/m64bcst and store UNSIGNED result in xmm1 using writemask k1.
EVEX.256.66.0F38.W1 1F /r VPABSQ ymm1 {k1}{z}, ymm2/m256/m64bcstCV/VAVX512VL AVX512FCompute the absolute value of 64-bit integers in ymm2/m256/m64bcst and store UNSIGNED result in ymm1 using writemask k1.
EVEX.512.66.0F38.W1 1F /r VPABSQ zmm1 {k1}{z}, zmm2/m512/m64bcstCV/VAVX512FCompute the absolute value of 64-bit integers in zmm2/m512/m64bcst and store UNSIGNED result in zmm1 using writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BFull MemModRM:reg (w)ModRM:r/m (r)N/AN/A
CFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

PABSB/W/D computes the absolute value of each data element of the source operand (the second operand) and stores the UNSIGNED results in the destination operand (the first operand). PABSB operates on signed bytes, PABSW operates on signed 16-bit words, and PABSD operates on signed 32-bit integers.

+

EVEX encoded VPABSD/Q: The source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 32/64-bit memory location. The destination operand is a ZMM/YMM/XMM register updated according to the writemask.

+

EVEX encoded VPABSB/W: The source operand is a ZMM/YMM/XMM register, or a 512/256/128-bit memory location. The destination operand is a ZMM/YMM/XMM register updated according to the writemask.

+

VEX.256 encoded versions: The source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the corresponding register destination are zeroed.

+

VEX.128 encoded versions: The source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding register destination are zeroed.

+

128-bit Legacy SSE version: The source operand can be an XMM register or a 128-bit memory location. The destination is an XMM register. The upper bits (MAXVL-1:128) of the corresponding register destination are unmodified.

+

VEX.vvvv and EVEX.vvvv are reserved and must be 1111b, otherwise instructions will #UD.

+
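
An illustrative sketch (not manual text) using the SSSE3 intrinsic _mm_abs_epi32 listed under the intrinsic equivalents below (compile with SSSE3 enabled, e.g., -mssse3); note the UNSIGNED result for INT32_MIN, which has no positive signed counterpart.

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    /* PABSD xmm, xmm/m128: per-lane absolute value of signed doublewords. */
    __m128i v   = _mm_setr_epi32(-5, 7, -2147483647 - 1, 0);
    __m128i abs = _mm_abs_epi32(v);

    unsigned int out[4];
    _mm_storeu_si128((__m128i *)out, abs);
    /* Unsigned result: ABS(INT32_MIN) is 80000000H, printed as 2147483648. */
    printf("%u %u %u %u\n", out[0], out[1], out[2], out[3]);  /* 5 7 2147483648 0 */
    return 0;
}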

Operation + ¶ +

+

PABSB With 128-bit Operands: + ¶ +

+
Unsigned DEST[7:0] := ABS(SRC[7: 0])
+Repeat operation for 2nd through 15th bytes
+Unsigned DEST[127:120] := ABS(SRC[127:120])
+
+

VPABSB With 128-bit Operands: + ¶ +

+
Unsigned DEST[7:0] := ABS(SRC[7: 0])
+Repeat operation for 2nd through 15th bytes
+Unsigned DEST[127:120] := ABS(SRC[127:120])
+
+

VPABSB With 256-bit Operands: + ¶ +

+
Unsigned DEST[7:0] := ABS(SRC[7: 0])
+Repeat operation for 2nd through 31st bytes
+Unsigned DEST[255:248] := ABS(SRC[255:248])
+
+

VPABSB (EVEX Encoded Versions) + ¶ +

+
    (KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k1[j] OR *no writemask*
+        THEN
+            Unsigned DEST[i+7:i] := ABS(SRC[i+7:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+7:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

PABSW With 128-bit Operands: + ¶ +

+
Unsigned DEST[15:0] := ABS(SRC[15:0])
+Repeat operation for 2nd through 7th 16-bit words
+Unsigned DEST[127:112] := ABS(SRC[127:112])
+
+

VPABSW With 128-bit Operands: + ¶ +

+
Unsigned DEST[15:0] := ABS(SRC[15:0])
+Repeat operation for 2nd through 7th 16-bit words
+Unsigned DEST[127:112] := ABS(SRC[127:112])
+
+

VPABSW With 256-bit Operands: + ¶ +

+
Unsigned DEST[15:0] := ABS(SRC[15:0])
+Repeat operation for 2nd through 15th 16-bit words
+Unsigned DEST[255:240] := ABS(SRC[255:240])
+
+

VPABSW (EVEX Encoded Versions) + ¶ +

+
    (KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN
+            Unsigned DEST[i+15:i] := ABS(SRC[i+15:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

PABSD With 128-bit Operands: + ¶ +

+
Unsigned DEST[31:0] := ABS(SRC[31:0])
+Repeat operation for 2nd through 3rd 32-bit double words
+Unsigned DEST[127:96] := ABS(SRC[127:96])
+
+

VPABSD With 128-bit Operands: + ¶ +

+
Unsigned DEST[31:0] := ABS(SRC[31:0])
+Repeat operation for 2nd through 3rd 32-bit double words
+Unsigned DEST[127:96] := ABS(SRC[127:96])
+
+

VPABSD With 256-bit Operands: + ¶ +

+
Unsigned DEST[31:0] := ABS(SRC[31:0])
+Repeat operation for 2nd through 7th 32-bit double words
+Unsigned DEST[255:224] := ABS(SRC[255:224])
+
+

VPABSD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC *is memory*)
+                THEN
+                    Unsigned DEST[i+31:i] := ABS(SRC[31:0])
+                ELSE
+                    Unsigned DEST[i+31:i] := ABS(SRC[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

VPABSQ (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC *is memory*)
+                THEN
+                    Unsigned DEST[i+63:i] := ABS(SRC[63:0])
+                ELSE
+                    Unsigned DEST[i+63:i] := ABS(SRC[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPABSB __m512i _mm512_abs_epi8 ( __m512i a)
+
+
VPABSW __m512i _mm512_abs_epi16 ( __m512i a)
+
+
VPABSB __m512i _mm512_mask_abs_epi8 ( __m512i s, __mmask64 m, __m512i a)
+
+
VPABSW __m512i _mm512_mask_abs_epi16 ( __m512i s, __mmask32 m, __m512i a)
+
+
VPABSB __m512i _mm512_maskz_abs_epi8 (__mmask64 m, __m512i a)
+
+
VPABSW __m512i _mm512_maskz_abs_epi16 (__mmask32 m, __m512i a)
+
+
VPABSB __m256i _mm256_mask_abs_epi8 (__m256i s, __mmask32 m, __m256i a)
+
+
VPABSW __m256i _mm256_mask_abs_epi16 (__m256i s, __mmask16 m, __m256i a)
+
+
VPABSB __m256i _mm256_maskz_abs_epi8 (__mmask32 m, __m256i a)
+
+
VPABSW __m256i _mm256_maskz_abs_epi16 (__mmask16 m, __m256i a)
+
+
VPABSB __m128i _mm_mask_abs_epi8 (__m128i s, __mmask16 m, __m128i a)
+
+
VPABSW __m128i _mm_mask_abs_epi16 (__m128i s, __mmask8 m, __m128i a)
+
+
VPABSB __m128i _mm_maskz_abs_epi8 (__mmask16 m, __m128i a)
+
+
VPABSW __m128i _mm_maskz_abs_epi16 (__mmask8 m, __m128i a)
+
+
VPABSD __m256i _mm256_mask_abs_epi32(__m256i s, __mmask8 k, __m256i a);
+
+
VPABSD __m256i _mm256_maskz_abs_epi32( __mmask8 k, __m256i a);
+
+
VPABSD __m128i _mm_mask_abs_epi32(__m128i s, __mmask8 k, __m128i a);
+
+
VPABSD __m128i _mm_maskz_abs_epi32( __mmask8 k, __m128i a);
+
+
VPABSD __m512i _mm512_abs_epi32( __m512i a);
+
+
VPABSD __m512i _mm512_mask_abs_epi32(__m512i s, __mmask16 k, __m512i a);
+
+
VPABSD __m512i _mm512_maskz_abs_epi32( __mmask16 k, __m512i a);
+
+
VPABSQ __m512i _mm512_abs_epi64( __m512i a);
+
+
VPABSQ __m512i _mm512_mask_abs_epi64(__m512i s, __mmask8 k, __m512i a);
+
+
VPABSQ __m512i _mm512_maskz_abs_epi64( __mmask8 k, __m512i a);
+
+
VPABSQ __m256i _mm256_mask_abs_epi64(__m256i s, __mmask8 k, __m256i a);
+
+
VPABSQ __m256i _mm256_maskz_abs_epi64( __mmask8 k, __m256i a);
+
+
VPABSQ __m128i _mm_mask_abs_epi64(__m128i s, __mmask8 k, __m128i a);
+
+
VPABSQ __m128i _mm_maskz_abs_epi64( __mmask8 k, __m128i a);
+
+
PABSB __m128i _mm_abs_epi8 (__m128i a)
+
+
VPABSB __m128i _mm_abs_epi8 (__m128i a)
+
+
VPABSB __m256i _mm256_abs_epi8 (__m256i a)
+
+
PABSW __m128i _mm_abs_epi16 (__m128i a)
+
+
VPABSW __m128i _mm_abs_epi16 (__m128i a)
+
+
VPABSW __m256i _mm256_abs_epi16 (__m256i a)
+
+
PABSD __m128i _mm_abs_epi32 (__m128i a)
+
+
VPABSD __m128i _mm_abs_epi32 (__m128i a)
+
+
VPABSD __m256i _mm256_abs_epi32 (__m256i a)
+
+
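
A short sketch of the zero-masked EVEX form using _mm512_maskz_abs_epi32 from the list above (assumes AVX-512F support, e.g., built with -mavx512f): lanes whose mask bit is 0 are written as 0, matching the {k1}{z} behavior in the Operation section.

#include <immintrin.h>

/* Zero-masked VPABSD: odd lanes get 0, even lanes get the absolute value. */
__m512i abs_even_lanes(__m512i v)
{
    __mmask16 even = 0x5555;                 /* bits 0,2,4,... -> lanes kept */
    return _mm512_maskz_abs_epi32(even, v);
}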

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded VPABSD/Q, see Table 2-49, “Type E4 Class Exception Conditions.”

+

EVEX-encoded VPABSB/W, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/packsswb.packssdw.html b/x86/packsswb.packssdw.html new file mode 100644 index 0000000..de525c4 --- /dev/null +++ b/x86/packsswb.packssdw.html @@ -0,0 +1,551 @@ + +PACKSSWB/PACKSSDW + — Pack With Signed Saturation

PACKSSWB/PACKSSDW + — Pack With Signed Saturation

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 63 /r1 PACKSSWB mm1, mm2/m64AV/VMMXConverts 4 packed signed word integers from mm1 and from mm2/m64 into 8 packed signed byte integers in mm1 using signed saturation.
66 0F 63 /r PACKSSWB xmm1, xmm2/m128AV/VSSE2Converts 8 packed signed word integers from xmm1 and from xmm2/m128 into 16 packed signed byte integers in xmm1 using signed saturation.
NP 0F 6B /r1 PACKSSDW mm1, mm2/m64AV/VMMXConverts 2 packed signed doubleword integers from mm1 and from mm2/m64 into 4 packed signed word integers in mm1 using signed saturation.
66 0F 6B /r PACKSSDW xmm1, xmm2/m128AV/VSSE2Converts 4 packed signed doubleword integers from xmm1 and from xmm2/m128 into 8 packed signed word integers in xmm1 using signed saturation.
VEX.128.66.0F.WIG 63 /r VPACKSSWB xmm1,xmm2, xmm3/m128BV/VAVXConverts 8 packed signed word integers from xmm2 and from xmm3/m128 into 16 packed signed byte integers in xmm1 using signed saturation.
VEX.128.66.0F.WIG 6B /r VPACKSSDW xmm1,xmm2, xmm3/m128BV/VAVXConverts 4 packed signed doubleword integers from xmm2 and from xmm3/m128 into 8 packed signed word integers in xmm1 using signed saturation.
VEX.256.66.0F.WIG 63 /r VPACKSSWB ymm1, ymm2, ymm3/m256BV/VAVX2Converts 16 packed signed word integers from ymm2 and from ymm3/m256 into 32 packed signed byte integers in ymm1 using signed saturation.
VEX.256.66.0F.WIG 6B /r VPACKSSDW ymm1, ymm2, ymm3/m256BV/VAVX2Converts 8 packed signed doubleword integers from ymm2 and from ymm3/m256 into 16 packed signed word integers in ymm1 using signed saturation.
EVEX.128.66.0F.WIG 63 /r VPACKSSWB xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWConverts packed signed word integers from xmm2 and from xmm3/m128 into packed signed byte integers in xmm1 using signed saturation under writemask k1.
EVEX.256.66.0F.WIG 63 /r VPACKSSWB ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWConverts packed signed word integers from ymm2 and from ymm3/m256 into packed signed byte integers in ymm1 using signed saturation under writemask k1.
EVEX.512.66.0F.WIG 63 /r VPACKSSWB zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWConverts packed signed word integers from zmm2 and from zmm3/m512 into packed signed byte integers in zmm1 using signed saturation under writemask k1.
EVEX.128.66.0F.W0 6B /r VPACKSSDW xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstDV/VAVX512VL AVX512BWConverts packed signed doubleword integers from xmm2 and from xmm3/m128/m32bcst into packed signed word integers in xmm1 using signed saturation under writemask k1.
EVEX.256.66.0F.W0 6B /r VPACKSSDW ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstDV/VAVX512VL AVX512BWConverts packed signed doubleword integers from ymm2 and from ymm3/m256/m32bcst into packed signed word integers in ymm1 using signed saturation under writemask k1.
EVEX.512.66.0F.W0 6B /r VPACKSSDW zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstDV/VAVX512BWConverts packed signed doubleword integers from zmm2 and from zmm3/m512/m32bcst into packed signed word integers in zmm1 using signed saturation under writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
DFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Converts packed signed word integers into packed signed byte integers (PACKSSWB) or converts packed signed doubleword integers into packed signed word integers (PACKSSDW), using saturation to handle overflow conditions. See Figure 4-6 for an example of the packing operation.

+
Figure 4-6. Operation of the PACKSSDW Instruction Using 64-Bit Operands
+

PACKSSWB converts packed signed word integers in the first and second source operands into packed signed byte integers using signed saturation to handle overflow conditions beyond the range of signed byte integers. If the signed word value is beyond the range of a signed byte value (i.e., greater than 7FH or less than 80H), the saturated signed byte integer value of 7FH or 80H, respectively, is stored in the destination. PACKSSDW converts packed signed doubleword integers in the first and second source operands into packed signed word integers using signed saturation to handle overflow conditions beyond 7FFFH and 8000H.

+

EVEX encoded PACKSSWB: The first source operand is a ZMM/YMM/XMM register. The second source operand is a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination operand is a ZMM/YMM/XMM register, updated conditionally under the writemask k1.

+

EVEX encoded PACKSSDW: The first source operand is a ZMM/YMM/XMM register. The second source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM/YMM/XMM register, updated conditionally under the writemask k1.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the corresponding ZMM register destination are zeroed.

+

VEX.128 encoded version: The first source operand is an XMM register. The second source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The first source operand is an XMM register. The second operand can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified.

+
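
For illustration (not manual text), the SSE2 intrinsic _mm_packs_epi16 maps to PACKSSWB; the sketch below shows the signed saturation to 7FH/80H described above.

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    /* PACKSSWB: 8 words from each source -> 16 bytes with signed saturation. */
    __m128i lo = _mm_setr_epi16(1, -2, 300, -300, 127, -128, 32767, -32768);
    __m128i hi = _mm_setzero_si128();
    __m128i packed = _mm_packs_epi16(lo, hi);   /* values clamp to [-128, 127] */

    signed char out[16];
    _mm_storeu_si128((__m128i *)out, packed);
    for (int i = 0; i < 8; i++)
        printf("%d ", out[i]);                  /* 1 -2 127 -128 127 -128 127 -128 */
    printf("\n");
    return 0;
}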

Operation + ¶ +

+

PACKSSWB Instruction (128-bit Legacy SSE Version) + ¶ +

+
DEST[7:0] := SaturateSignedWordToSignedByte (DEST[15:0]);
+DEST[15:8] := SaturateSignedWordToSignedByte (DEST[31:16]);
+DEST[23:16] := SaturateSignedWordToSignedByte (DEST[47:32]);
+DEST[31:24] := SaturateSignedWordToSignedByte (DEST[63:48]);
+DEST[39:32] := SaturateSignedWordToSignedByte (DEST[79:64]);
+DEST[47:40] := SaturateSignedWordToSignedByte (DEST[95:80]);
+DEST[55:48] := SaturateSignedWordToSignedByte (DEST[111:96]);
+DEST[63:56] := SaturateSignedWordToSignedByte (DEST[127:112]);
+DEST[71:64] := SaturateSignedWordToSignedByte (SRC[15:0]);
+DEST[79:72] := SaturateSignedWordToSignedByte (SRC[31:16]);
+DEST[87:80] := SaturateSignedWordToSignedByte (SRC[47:32]);
+DEST[95:88] := SaturateSignedWordToSignedByte (SRC[63:48]);
+DEST[103:96] := SaturateSignedWordToSignedByte (SRC[79:64]);
+DEST[111:104] := SaturateSignedWordToSignedByte (SRC[95:80]);
+DEST[119:112] := SaturateSignedWordToSignedByte (SRC[111:96]);
+DEST[127:120] := SaturateSignedWordToSignedByte (SRC[127:112]);
+DEST[MAXVL-1:128] (Unmodified)
+
+

PACKSSDW Instruction (128-bit Legacy SSE Version) + ¶ +

+
DEST[15:0] := SaturateSignedDwordToSignedWord (DEST[31:0]);
+DEST[31:16] := SaturateSignedDwordToSignedWord (DEST[63:32]);
+DEST[47:32] := SaturateSignedDwordToSignedWord (DEST[95:64]);
+DEST[63:48] := SaturateSignedDwordToSignedWord (DEST[127:96]);
+DEST[79:64] := SaturateSignedDwordToSignedWord (SRC[31:0]);
+DEST[95:80] := SaturateSignedDwordToSignedWord (SRC[63:32]);
+DEST[111:96] := SaturateSignedDwordToSignedWord (SRC[95:64]);
+DEST[127:112] := SaturateSignedDwordToSignedWord (SRC[127:96]);
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPACKSSWB Instruction (VEX.128 Encoded Version) + ¶ +

+
DEST[7:0] := SaturateSignedWordToSignedByte (SRC1[15:0]);
+DEST[15:8] := SaturateSignedWordToSignedByte (SRC1[31:16]);
+DEST[23:16] := SaturateSignedWordToSignedByte (SRC1[47:32]);
+DEST[31:24] := SaturateSignedWordToSignedByte (SRC1[63:48]);
+DEST[39:32] := SaturateSignedWordToSignedByte (SRC1[79:64]);
+DEST[47:40] := SaturateSignedWordToSignedByte (SRC1[95:80]);
+DEST[55:48] := SaturateSignedWordToSignedByte (SRC1[111:96]);
+DEST[63:56] := SaturateSignedWordToSignedByte (SRC1[127:112]);
+DEST[71:64] := SaturateSignedWordToSignedByte (SRC2[15:0]);
+DEST[79:72] := SaturateSignedWordToSignedByte (SRC2[31:16]);
+DEST[87:80] := SaturateSignedWordToSignedByte (SRC2[47:32]);
+DEST[95:88] := SaturateSignedWordToSignedByte (SRC2[63:48]);
+DEST[103:96] := SaturateSignedWordToSignedByte (SRC2[79:64]);
+DEST[111:104] := SaturateSignedWordToSignedByte (SRC2[95:80]);
+DEST[119:112] := SaturateSignedWordToSignedByte (SRC2[111:96]);
+DEST[127:120] := SaturateSignedWordToSignedByte (SRC2[127:112]);
+DEST[MAXVL-1:128] := 0;
+
+

VPACKSSDW Instruction (VEX.128 Encoded Version) + ¶ +

+
DEST[15:0] := SaturateSignedDwordToSignedWord (SRC1[31:0]);
+DEST[31:16] := SaturateSignedDwordToSignedWord (SRC1[63:32]);
+DEST[47:32] := SaturateSignedDwordToSignedWord (SRC1[95:64]);
+DEST[63:48] := SaturateSignedDwordToSignedWord (SRC1[127:96]);
+DEST[79:64] := SaturateSignedDwordToSignedWord (SRC2[31:0]);
+DEST[95:80] := SaturateSignedDwordToSignedWord (SRC2[63:32]);
+DEST[111:96] := SaturateSignedDwordToSignedWord (SRC2[95:64]);
+DEST[127:112] := SaturateSignedDwordToSignedWord (SRC2[127:96]);
+DEST[MAXVL-1:128] := 0;
+
+

VPACKSSWB Instruction (VEX.256 Encoded Version) + ¶ +

+
DEST[7:0] := SaturateSignedWordToSignedByte (SRC1[15:0]);
+DEST[15:8] := SaturateSignedWordToSignedByte (SRC1[31:16]);
+DEST[23:16] := SaturateSignedWordToSignedByte (SRC1[47:32]);
+DEST[31:24] := SaturateSignedWordToSignedByte (SRC1[63:48]);
+DEST[39:32] := SaturateSignedWordToSignedByte (SRC1[79:64]);
+DEST[47:40] := SaturateSignedWordToSignedByte (SRC1[95:80]);
+DEST[55:48] := SaturateSignedWordToSignedByte (SRC1[111:96]);
+DEST[63:56] := SaturateSignedWordToSignedByte (SRC1[127:112]);
+DEST[71:64] := SaturateSignedWordToSignedByte (SRC2[15:0]);
+DEST[79:72] := SaturateSignedWordToSignedByte (SRC2[31:16]);
+DEST[87:80] := SaturateSignedWordToSignedByte (SRC2[47:32]);
+DEST[95:88] := SaturateSignedWordToSignedByte (SRC2[63:48]);
+DEST[103:96] := SaturateSignedWordToSignedByte (SRC2[79:64]);
+DEST[111:104] := SaturateSignedWordToSignedByte (SRC2[95:80]);
+DEST[119:112] := SaturateSignedWordToSignedByte (SRC2[111:96]);
+DEST[127:120] := SaturateSignedWordToSignedByte (SRC2[127:112]);
+DEST[135:128] := SaturateSignedWordToSignedByte (SRC1[143:128]);
+DEST[143:136] := SaturateSignedWordToSignedByte (SRC1[159:144]);
+DEST[151:144] := SaturateSignedWordToSignedByte (SRC1[175:160]);
+DEST[159:152] := SaturateSignedWordToSignedByte (SRC1[191:176]);
+DEST[167:160] := SaturateSignedWordToSignedByte (SRC1[207:192]);
+DEST[175:168] := SaturateSignedWordToSignedByte (SRC1[223:208]);
+DEST[183:176] := SaturateSignedWordToSignedByte (SRC1[239:224]);
+DEST[191:184] := SaturateSignedWordToSignedByte (SRC1[255:240]);
+DEST[199:192] := SaturateSignedWordToSignedByte (SRC2[143:128]);
+DEST[207:200] := SaturateSignedWordToSignedByte (SRC2[159:144]);
+DEST[215:208] := SaturateSignedWordToSignedByte (SRC2[175:160]);
+DEST[223:216] := SaturateSignedWordToSignedByte (SRC2[191:176]);
+DEST[231:224] := SaturateSignedWordToSignedByte (SRC2[207:192]);
+DEST[239:232] := SaturateSignedWordToSignedByte (SRC2[223:208]);
+DEST[247:240] := SaturateSignedWordToSignedByte (SRC2[239:224]);
+DEST[255:248] := SaturateSignedWordToSignedByte (SRC2[255:240]);
+DEST[MAXVL-1:256] := 0;
+
+

VPACKSSDW Instruction (VEX.256 Encoded Version) + ¶ +

+
DEST[15:0] := SaturateSignedDwordToSignedWord (SRC1[31:0]);
+DEST[31:16] := SaturateSignedDwordToSignedWord (SRC1[63:32]);
+DEST[47:32] := SaturateSignedDwordToSignedWord (SRC1[95:64]);
+DEST[63:48] := SaturateSignedDwordToSignedWord (SRC1[127:96]);
+DEST[79:64] := SaturateSignedDwordToSignedWord (SRC2[31:0]);
+DEST[95:80] := SaturateSignedDwordToSignedWord (SRC2[63:32]);
+DEST[111:96] := SaturateSignedDwordToSignedWord (SRC2[95:64]);
+DEST[127:112] := SaturateSignedDwordToSignedWord (SRC2[127:96]);
+DEST[143:128] := SaturateSignedDwordToSignedWord (SRC1[159:128]);
+DEST[159:144] := SaturateSignedDwordToSignedWord (SRC1[191:160]);
+DEST[175:160] := SaturateSignedDwordToSignedWord (SRC1[223:192]);
+DEST[191:176] := SaturateSignedDwordToSignedWord (SRC1[255:224]);
+DEST[207:192] := SaturateSignedDwordToSignedWord (SRC2[159:128]);
+DEST[223:208] := SaturateSignedDwordToSignedWord (SRC2[191:160]);
+DEST[239:224] := SaturateSignedDwordToSignedWord (SRC2[223:192]);
+DEST[255:240] := SaturateSignedDwordToSignedWord (SRC2[255:224]);
+DEST[MAXVL-1:256] := 0;
+
+

VPACKSSWB (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+TMP_DEST[7:0] := SaturateSignedWordToSignedByte (SRC1[15:0]);
+TMP_DEST[15:8] := SaturateSignedWordToSignedByte (SRC1[31:16]);
+TMP_DEST[23:16] := SaturateSignedWordToSignedByte (SRC1[47:32]);
+TMP_DEST[31:24] := SaturateSignedWordToSignedByte (SRC1[63:48]);
+TMP_DEST[39:32] := SaturateSignedWordToSignedByte (SRC1[79:64]);
+TMP_DEST[47:40] := SaturateSignedWordToSignedByte (SRC1[95:80]);
+TMP_DEST[55:48] := SaturateSignedWordToSignedByte (SRC1[111:96]);
+TMP_DEST[63:56] := SaturateSignedWordToSignedByte (SRC1[127:112]);
+TMP_DEST[71:64] := SaturateSignedWordToSignedByte (SRC2[15:0]);
+TMP_DEST[79:72] := SaturateSignedWordToSignedByte (SRC2[31:16]);
+TMP_DEST[87:80] := SaturateSignedWordToSignedByte (SRC2[47:32]);
+TMP_DEST[95:88] := SaturateSignedWordToSignedByte (SRC2[63:48]);
+TMP_DEST[103:96] := SaturateSignedWordToSignedByte (SRC2[79:64]);
+TMP_DEST[111:104] := SaturateSignedWordToSignedByte (SRC2[95:80]);
+TMP_DEST[119:112] := SaturateSignedWordToSignedByte (SRC2[111:96]);
+TMP_DEST[127:120] := SaturateSignedWordToSignedByte (SRC2[127:112]);
+IF VL >= 256
+    TMP_DEST[135:128] := SaturateSignedWordToSignedByte (SRC1[143:128]);
+    TMP_DEST[143:136] := SaturateSignedWordToSignedByte (SRC1[159:144]);
+    TMP_DEST[151:144] := SaturateSignedWordToSignedByte (SRC1[175:160]);
+    TMP_DEST[159:152] := SaturateSignedWordToSignedByte (SRC1[191:176]);
+    TMP_DEST[167:160] := SaturateSignedWordToSignedByte (SRC1[207:192]);
+    TMP_DEST[175:168] := SaturateSignedWordToSignedByte (SRC1[223:208]);
+    TMP_DEST[183:176] := SaturateSignedWordToSignedByte (SRC1[239:224]);
+    TMP_DEST[191:184] := SaturateSignedWordToSignedByte (SRC1[255:240]);
+    TMP_DEST[199:192] := SaturateSignedWordToSignedByte (SRC2[143:128]);
+    TMP_DEST[207:200] := SaturateSignedWordToSignedByte (SRC2[159:144]);
+    TMP_DEST[215:208] := SaturateSignedWordToSignedByte (SRC2[175:160]);
+    TMP_DEST[223:216] := SaturateSignedWordToSignedByte (SRC2[191:176]);
+    TMP_DEST[231:224] := SaturateSignedWordToSignedByte (SRC2[207:192]);
+    TMP_DEST[239:232] := SaturateSignedWordToSignedByte (SRC2[223:208]);
+    TMP_DEST[247:240] := SaturateSignedWordToSignedByte (SRC2[239:224]);
+    TMP_DEST[255:248] := SaturateSignedWordToSignedByte (SRC2[255:240]);
+FI;
+IF VL >= 512
+    TMP_DEST[263:256] := SaturateSignedWordToSignedByte (SRC1[271:256]);
+    TMP_DEST[271:264] := SaturateSignedWordToSignedByte (SRC1[287:272]);
+    TMP_DEST[279:272] := SaturateSignedWordToSignedByte (SRC1[303:288]);
+    TMP_DEST[287:280] := SaturateSignedWordToSignedByte (SRC1[319:304]);
+    TMP_DEST[295:288] := SaturateSignedWordToSignedByte (SRC1[335:320]);
+    TMP_DEST[303:296] := SaturateSignedWordToSignedByte (SRC1[351:336]);
+    TMP_DEST[311:304] := SaturateSignedWordToSignedByte (SRC1[367:352]);
+    TMP_DEST[319:312] := SaturateSignedWordToSignedByte (SRC1[383:368]);
+    TMP_DEST[327:320] := SaturateSignedWordToSignedByte (SRC2[271:256]);
+    TMP_DEST[335:328] := SaturateSignedWordToSignedByte (SRC2[287:272]);
+    TMP_DEST[343:336] := SaturateSignedWordToSignedByte (SRC2[303:288]);
+    TMP_DEST[351:344] := SaturateSignedWordToSignedByte (SRC2[319:304]);
+    TMP_DEST[359:352] := SaturateSignedWordToSignedByte (SRC2[335:320]);
+    TMP_DEST[367:360] := SaturateSignedWordToSignedByte (SRC2[351:336]);
+    TMP_DEST[375:368] := SaturateSignedWordToSignedByte (SRC2[367:352]);
+    TMP_DEST[383:376] := SaturateSignedWordToSignedByte (SRC2[383:368]);
+    TMP_DEST[391:384] := SaturateSignedWordToSignedByte (SRC1[399:384]);
+    TMP_DEST[399:392] := SaturateSignedWordToSignedByte (SRC1[415:400]);
+    TMP_DEST[407:400] := SaturateSignedWordToSignedByte (SRC1[431:416]);
+    TMP_DEST[415:408] := SaturateSignedWordToSignedByte (SRC1[447:432]);
+    TMP_DEST[423:416] := SaturateSignedWordToSignedByte (SRC1[463:448]);
+    TMP_DEST[431:424] := SaturateSignedWordToSignedByte (SRC1[479:464]);
+    TMP_DEST[439:432] := SaturateSignedWordToSignedByte (SRC1[495:480]);
+    TMP_DEST[447:440] := SaturateSignedWordToSignedByte (SRC1[511:496]);
+    TMP_DEST[455:448] := SaturateSignedWordToSignedByte (SRC2[399:384]);
+    TMP_DEST[463:456] := SaturateSignedWordToSignedByte (SRC2[415:400]);
+    TMP_DEST[471:464] := SaturateSignedWordToSignedByte (SRC2[431:416]);
+    TMP_DEST[479:472] := SaturateSignedWordToSignedByte (SRC2[447:432]);
+    TMP_DEST[487:480] := SaturateSignedWordToSignedByte (SRC2[463:448]);
+    TMP_DEST[495:488] := SaturateSignedWordToSignedByte (SRC2[479:464]);
+    TMP_DEST[503:496] := SaturateSignedWordToSignedByte (SRC2[495:480]);
+    TMP_DEST[511:504] := SaturateSignedWordToSignedByte (SRC2[511:496]);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k1[j] OR *no writemask*
+        THEN
+            DEST[i+7:i] := TMP_DEST[i+7:i]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+7:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

VPACKSSDW (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO ((KL/2) - 1)
+    i := j * 32
+    IF (EVEX.b == 1) AND (SRC2 *is memory*)
+        THEN
+            TMP_SRC2[i+31:i] := SRC2[31:0]
+        ELSE
+            TMP_SRC2[i+31:i] := SRC2[i+31:i]
+    FI;
+ENDFOR;
+TMP_DEST[15:0] := SaturateSignedDwordToSignedWord (SRC1[31:0]);
+TMP_DEST[31:16] := SaturateSignedDwordToSignedWord (SRC1[63:32]);
+TMP_DEST[47:32] := SaturateSignedDwordToSignedWord (SRC1[95:64]);
+TMP_DEST[63:48] := SaturateSignedDwordToSignedWord (SRC1[127:96]);
+TMP_DEST[79:64] := SaturateSignedDwordToSignedWord (TMP_SRC2[31:0]);
+TMP_DEST[95:80] := SaturateSignedDwordToSignedWord (TMP_SRC2[63:32]);
+TMP_DEST[111:96] := SaturateSignedDwordToSignedWord (TMP_SRC2[95:64]);
+TMP_DEST[127:112] := SaturateSignedDwordToSignedWord (TMP_SRC2[127:96]);
+IF VL >= 256
+    TMP_DEST[143:128] := SaturateSignedDwordToSignedWord (SRC1[159:128]);
+    TMP_DEST[159:144] := SaturateSignedDwordToSignedWord (SRC1[191:160]);
+    TMP_DEST[175:160] := SaturateSignedDwordToSignedWord (SRC1[223:192]);
+    TMP_DEST[191:176] := SaturateSignedDwordToSignedWord (SRC1[255:224]);
+    TMP_DEST[207:192] := SaturateSignedDwordToSignedWord (TMP_SRC2[159:128]);
+    TMP_DEST[223:208] := SaturateSignedDwordToSignedWord (TMP_SRC2[191:160]);
+    TMP_DEST[239:224] := SaturateSignedDwordToSignedWord (TMP_SRC2[223:192]);
+    TMP_DEST[255:240] := SaturateSignedDwordToSignedWord (TMP_SRC2[255:224]);
+FI;
+IF VL >= 512
+    TMP_DEST[271:256] := SaturateSignedDwordToSignedWord (SRC1[287:256]);
+    TMP_DEST[287:272] := SaturateSignedDwordToSignedWord (SRC1[319:288]);
+    TMP_DEST[303:288] := SaturateSignedDwordToSignedWord (SRC1[351:320]);
+    TMP_DEST[319:304] := SaturateSignedDwordToSignedWord (SRC1[383:352]);
+    TMP_DEST[335:320] := SaturateSignedDwordToSignedWord (TMP_SRC2[287:256]);
+    TMP_DEST[351:336] := SaturateSignedDwordToSignedWord (TMP_SRC2[319:288]);
+    TMP_DEST[367:352] := SaturateSignedDwordToSignedWord (TMP_SRC2[351:320]);
+    TMP_DEST[383:368] := SaturateSignedDwordToSignedWord (TMP_SRC2[383:352]);
+    TMP_DEST[399:384] := SaturateSignedDwordToSignedWord (SRC1[415:384]);
+    TMP_DEST[415:400] := SaturateSignedDwordToSignedWord (SRC1[447:416]);
+    TMP_DEST[431:416] := SaturateSignedDwordToSignedWord (SRC1[479:448]);
+    TMP_DEST[447:432] := SaturateSignedDwordToSignedWord (SRC1[511:480]);
+    TMP_DEST[463:448] := SaturateSignedDwordToSignedWord (TMP_SRC2[415:384]);
+    TMP_DEST[479:464] := SaturateSignedDwordToSignedWord (TMP_SRC2[447:416]);
+    TMP_DEST[495:480] := SaturateSignedDwordToSignedWord (TMP_SRC2[479:448]);
+    TMP_DEST[511:496] := SaturateSignedDwordToSignedWord (TMP_SRC2[511:480]);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := TMP_DEST[i+15:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPACKSSDW __m512i _mm512_packs_epi32(__m512i m1, __m512i m2);
+
+
VPACKSSDW __m512i _mm512_mask_packs_epi32(__m512i s, __mmask32 k, __m512i m1, __m512i m2);
+
+
VPACKSSDW __m512i _mm512_maskz_packs_epi32( __mmask32 k, __m512i m1, __m512i m2);
+
+
VPACKSSDW __m256i _mm256_mask_packs_epi32( __m256i s, __mmask16 k, __m256i m1, __m256i m2);
+
+
VPACKSSDW __m256i _mm256_maskz_packs_epi32( __mmask16 k, __m256i m1, __m256i m2);
+
+
VPACKSSDW __m128i _mm_mask_packs_epi32( __m128i s, __mmask8 k, __m128i m1, __m128i m2);
+
+
VPACKSSDW __m128i _mm_maskz_packs_epi32( __mmask8 k, __m128i m1, __m128i m2);
+
+
VPACKSSWB __m512i _mm512_packs_epi16(__m512i m1, __m512i m2);
+
+
VPACKSSWB __m512i _mm512_mask_packs_epi16(__m512i s, __mmask64 k, __m512i m1, __m512i m2);
+
+
VPACKSSWB __m512i _mm512_maskz_packs_epi16( __mmask64 k, __m512i m1, __m512i m2);
+
+
VPACKSSWB __m256i _mm256_mask_packs_epi16( __m256i s, __mmask32 k, __m256i m1, __m256i m2);
+
+
VPACKSSWB __m256i _mm256_maskz_packs_epi16( __mmask32 k, __m256i m1, __m256i m2);
+
+
VPACKSSWB __m128i _mm_mask_packs_epi16( __m128i s, __mmask16 k, __m128i m1, __m128i m2);
+
+
VPACKSSWB __m128i _mm_maskz_packs_epi16( __mmask16 k, __m128i m1, __m128i m2);
+
+
PACKSSWB __m128i _mm_packs_epi16(__m128i m1, __m128i m2)
+
+
PACKSSDW __m128i _mm_packs_epi32(__m128i m1, __m128i m2)
+
+
VPACKSSWB __m256i _mm256_packs_epi16(__m256i m1, __m256i m2)
+
+
VPACKSSDW __m256i _mm256_packs_epi32(__m256i m1, __m256i m2)
+
+
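As an illustrative, non-normative usage sketch of the zeroing-masked form listed above (the function name packss_even_bytes and the mask value are assumptions; an AVX-512BW target is assumed):

#include <immintrin.h>

/* Zeroing-masked word-to-byte pack: destination bytes whose mask bit is 0 are
   written as 0, matching the zeroing-masking branch of the pseudocode above. */
static __m512i packss_even_bytes(__m512i a, __m512i b)
{
    __mmask64 keep_even = 0x5555555555555555ULL;  /* one mask bit per destination byte */
    return _mm512_maskz_packs_epi16(keep_even, a, b);
}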

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded VPACKSSDW, see Table 2-50, “Type E4NF Class Exception Conditions.”

+

EVEX-encoded VPACKSSWB, see Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/packusdw.html b/x86/packusdw.html new file mode 100644 index 0000000..a6925f2 --- /dev/null +++ b/x86/packusdw.html @@ -0,0 +1,301 @@ + +PACKUSDW + — Pack With Unsigned Saturation

PACKUSDW + — Pack With Unsigned Saturation

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 38 2B /r PACKUSDW xmm1, xmm2/m128AV/VSSE4_1Convert 4 packed signed doubleword integers from xmm1 and 4 packed signed doubleword integers from xmm2/m128 into 8 packed unsigned word integers in xmm1 using unsigned saturation.
VEX.128.66.0F38 2B /r VPACKUSDW xmm1,xmm2, xmm3/m128BV/VAVXConvert 4 packed signed doubleword integers from xmm2 and 4 packed signed doubleword integers from xmm3/m128 into 8 packed unsigned word integers in xmm1 using unsigned saturation.
VEX.256.66.0F38 2B /r VPACKUSDW ymm1, ymm2, ymm3/m256BV/VAVX2Convert 8 packed signed doubleword integers from ymm2 and 8 packed signed doubleword integers from ymm3/m256 into 16 packed unsigned word integers in ymm1 using unsigned saturation.
EVEX.128.66.0F38.W0 2B /r VPACKUSDW xmm1{k1}{z}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512BWConvert packed signed doubleword integers from xmm2 and packed signed doubleword integers from xmm3/m128/m32bcst into packed unsigned word integers in xmm1 using unsigned saturation under writemask k1.
EVEX.256.66.0F38.W0 2B /r VPACKUSDW ymm1{k1}{z}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512BWConvert packed signed doubleword integers from ymm2 and packed signed doubleword integers from ymm3/m256/m32bcst into packed unsigned word integers in ymm1 using unsigned saturation under writemask k1.
EVEX.512.66.0F38.W0 2B /r VPACKUSDW zmm1{k1}{z}, zmm2, zmm3/m512/m32bcstCV/VAVX512BWConvert packed signed doubleword integers from zmm2 and packed signed doubleword integers from zmm3/m512/m32bcst into packed unsigned word integers in zmm1 using unsigned saturation under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Converts packed signed doubleword integers in the first and second source operands into packed unsigned word integers using unsigned saturation to handle overflow conditions. If the signed doubleword value is beyond the range of an unsigned word (that is, greater than FFFFH or less than 0000H), the saturated unsigned word integer value of FFFFH or 0000H, respectively, is stored in the destination.
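For illustration only (not part of the normative text), the per-element saturation rule described above can be written as the following minimal C helper; the name pack_dword_to_uword is an assumption:

#include <stdint.h>

/* Clamp a signed 32-bit value to the unsigned 16-bit range, mirroring the
   "less than 0000H -> 0000H, greater than FFFFH -> FFFFH" rule above. */
static uint16_t pack_dword_to_uword(int32_t x)
{
    if (x < 0)       return 0x0000;   /* negative inputs saturate to 0000H */
    if (x > 0xFFFF)  return 0xFFFF;   /* too-large inputs saturate to FFFFH */
    return (uint16_t)x;               /* in-range values pass through */
}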

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register. The second source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM register, updated conditionally under the writemask k1.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the corresponding ZMM register destination are zeroed.

+

VEX.128 encoded version: The first source operand is an XMM register. The second source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The first source operand is an XMM register. The second operand can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register, and the upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified.

+

Operation + ¶ +

+

PACKUSDW (Legacy SSE Instruction) + ¶ +

+
TMP[15:0] := (DEST[31:0] < 0) ? 0 : DEST[15:0];
+DEST[15:0] := (DEST[31:0] > FFFFH) ? FFFFH : TMP[15:0] ;
+TMP[31:16] := (DEST[63:32] < 0) ? 0 : DEST[47:32];
+DEST[31:16] := (DEST[63:32] > FFFFH) ? FFFFH : TMP[31:16] ;
+TMP[47:32] := (DEST[95:64] < 0) ? 0 : DEST[79:64];
+DEST[47:32] := (DEST[95:64] > FFFFH) ? FFFFH : TMP[47:32] ;
+TMP[63:48] := (DEST[127:96] < 0) ? 0 : DEST[111:96];
+DEST[63:48] := (DEST[127:96] > FFFFH) ? FFFFH : TMP[63:48] ;
+TMP[79:64] := (SRC[31:0] < 0) ? 0 : SRC[15:0];
+DEST[79:64] := (SRC[31:0] > FFFFH) ? FFFFH : TMP[79:64] ;
+TMP[95:80] := (SRC[63:32] < 0) ? 0 : SRC[47:32];
+DEST[95:80] := (SRC[63:32] > FFFFH) ? FFFFH : TMP[95:80] ;
+TMP[111:96] := (SRC[95:64] < 0) ? 0 : SRC[79:64];
+DEST[111:96] := (SRC[95:64] > FFFFH) ? FFFFH : TMP[111:96] ;
+TMP[127:112] := (SRC[127:96] < 0) ? 0 : SRC[111:96];
+DEST[127:112] := (SRC[127:96] > FFFFH) ? FFFFH : TMP[127:112] ;
+DEST[MAXVL-1:128] (Unmodified)
+
+

PACKUSDW (VEX.128 Encoded Version) + ¶ +

+
TMP[15:0] := (SRC1[31:0] < 0) ? 0 : SRC1[15:0];
+DEST[15:0] := (SRC1[31:0] > FFFFH) ? FFFFH : TMP[15:0] ;
+TMP[31:16] := (SRC1[63:32] < 0) ? 0 : SRC1[47:32];
+DEST[31:16] := (SRC1[63:32] > FFFFH) ? FFFFH : TMP[31:16] ;
+TMP[47:32] := (SRC1[95:64] < 0) ? 0 : SRC1[79:64];
+DEST[47:32] := (SRC1[95:64] > FFFFH) ? FFFFH : TMP[47:32] ;
+TMP[63:48] := (SRC1[127:96] < 0) ? 0 : SRC1[111:96];
+DEST[63:48] := (SRC1[127:96] > FFFFH) ? FFFFH : TMP[63:48] ;
+TMP[79:64] := (SRC2[31:0] < 0) ? 0 : SRC2[15:0];
+DEST[79:64] := (SRC2[31:0] > FFFFH) ? FFFFH : TMP[79:64] ;
+TMP[95:80] := (SRC2[63:32] < 0) ? 0 : SRC2[47:32];
+DEST[95:80] := (SRC2[63:32] > FFFFH) ? FFFFH : TMP[95:80] ;
+TMP[111:96] := (SRC2[95:64] < 0) ? 0 : SRC2[79:64];
+DEST[111:96] := (SRC2[95:64] > FFFFH) ? FFFFH : TMP[111:96] ;
+TMP[127:112] := (SRC2[127:96] < 0) ? 0 : SRC2[111:96];
+DEST[127:112] := (SRC2[127:96] > FFFFH) ? FFFFH : TMP[127:112];
+DEST[MAXVL-1:128] := 0;
+
+

VPACKUSDW (VEX.256 Encoded Version) + ¶ +

+
TMP[15:0] := (SRC1[31:0] < 0) ? 0 : SRC1[15:0];
+DEST[15:0] := (SRC1[31:0] > FFFFH) ? FFFFH : TMP[15:0] ;
+TMP[31:16] := (SRC1[63:32] < 0) ? 0 : SRC1[47:32];
+DEST[31:16] := (SRC1[63:32] > FFFFH) ? FFFFH : TMP[31:16] ;
+TMP[47:32] := (SRC1[95:64] < 0) ? 0 : SRC1[79:64];
+DEST[47:32] := (SRC1[95:64] > FFFFH) ? FFFFH : TMP[47:32] ;
+TMP[63:48] := (SRC1[127:96] < 0) ? 0 : SRC1[111:96];
+DEST[63:48] := (SRC1[127:96] > FFFFH) ? FFFFH : TMP[63:48] ;
+TMP[79:64] := (SRC2[31:0] < 0) ? 0 : SRC2[15:0];
+DEST[79:64] := (SRC2[31:0] > FFFFH) ? FFFFH : TMP[79:64] ;
+TMP[95:80] := (SRC2[63:32] < 0) ? 0 : SRC2[47:32];
+DEST[95:80] := (SRC2[63:32] > FFFFH) ? FFFFH : TMP[95:80] ;
+TMP[111:96] := (SRC2[95:64] < 0) ? 0 : SRC2[79:64];
+DEST[111:96] := (SRC2[95:64] > FFFFH) ? FFFFH : TMP[111:96] ;
+TMP[127:112] := (SRC2[127:96] < 0) ? 0 : SRC2[111:96];
+DEST[127:112] := (SRC2[127:96] > FFFFH) ? FFFFH : TMP[127:112] ;
+TMP[143:128] := (SRC1[159:128] < 0) ? 0 : SRC1[143:128];
+DEST[143:128] := (SRC1[159:128] > FFFFH) ? FFFFH : TMP[143:128] ;
+TMP[159:144] := (SRC1[191:160] < 0) ? 0 : SRC1[175:160];
+DEST[159:144] := (SRC1[191:160] > FFFFH) ? FFFFH : TMP[159:144] ;
+TMP[175:160] := (SRC1[223:192] < 0) ? 0 : SRC1[207:192];
+DEST[175:160] := (SRC1[223:192] > FFFFH) ? FFFFH : TMP[175:160] ;
+TMP[191:176] := (SRC1[255:224] < 0) ? 0 : SRC1[239:224];
+DEST[191:176] := (SRC1[255:224] > FFFFH) ? FFFFH : TMP[191:176] ;
+TMP[207:192] := (SRC2[159:128] < 0) ? 0 : SRC2[143:128];
+DEST[207:192] := (SRC2[159:128] > FFFFH) ? FFFFH : TMP[207:192] ;
+TMP[223:208] := (SRC2[191:160] < 0) ? 0 : SRC2[175:160];
+DEST[223:208] := (SRC2[191:160] > FFFFH) ? FFFFH : TMP[223:208] ;
+TMP[239:224] := (SRC2[223:192] < 0) ? 0 : SRC2[207:192];
+DEST[239:224] := (SRC2[223:192] > FFFFH) ? FFFFH : TMP[239:224] ;
+TMP[255:240] := (SRC2[255:224] < 0) ? 0 : SRC2[239:224];
+DEST[255:240] := (SRC2[255:224] > FFFFH) ? FFFFH : TMP[255:240] ;
+DEST[MAXVL-1:256] := 0;
+
+

VPACKUSDW (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO ((KL/2) - 1)
+    i := j * 32
+    IF (EVEX.b == 1) AND (SRC2 *is memory*)
+        THEN
+            TMP_SRC2[i+31:i] := SRC2[31:0]
+        ELSE
+            TMP_SRC2[i+31:i] := SRC2[i+31:i]
+    FI;
+ENDFOR;
+TMP[15:0] := (SRC1[31:0] < 0) ? 0 : SRC1[15:0];
+DEST[15:0] := (SRC1[31:0] > FFFFH) ? FFFFH : TMP[15:0] ;
+TMP[31:16] := (SRC1[63:32] < 0) ? 0 : SRC1[47:32];
+DEST[31:16] := (SRC1[63:32] > FFFFH) ? FFFFH : TMP[31:16] ;
+TMP[47:32] := (SRC1[95:64] < 0) ? 0 : SRC1[79:64];
+DEST[47:32] := (SRC1[95:64] > FFFFH) ? FFFFH : TMP[47:32] ;
+TMP[63:48] := (SRC1[127:96] < 0) ? 0 : SRC1[111:96];
+DEST[63:48] := (SRC1[127:96] > FFFFH) ? FFFFH : TMP[63:48] ;
+TMP[79:64] := (TMP_SRC2[31:0] < 0) ? 0 : TMP_SRC2[15:0];
+DEST[79:64] := (TMP_SRC2[31:0] > FFFFH) ? FFFFH : TMP[79:64] ;
+TMP[95:80] := (TMP_SRC2[63:32] < 0) ? 0 : TMP_SRC2[47:32];
+DEST[95:80] := (TMP_SRC2[63:32] > FFFFH) ? FFFFH : TMP[95:80] ;
+TMP[111:96] := (TMP_SRC2[95:64] < 0) ? 0 : TMP_SRC2[79:64];
+DEST[111:96] := (TMP_SRC2[95:64] > FFFFH) ? FFFFH : TMP[111:96] ;
+TMP[127:112] := (TMP_SRC2[127:96] < 0) ? 0 : TMP_SRC2[111:96];
+DEST[127:112] := (TMP_SRC2[127:96] > FFFFH) ? FFFFH : TMP[127:112] ;
+IF VL >= 256
+    TMP[143:128] := (SRC1[159:128] < 0) ? 0 : SRC1[143:128];
+    DEST[143:128] := (SRC1[159:128] > FFFFH) ? FFFFH : TMP[143:128] ;
+    TMP[159:144] := (SRC1[191:160] < 0) ? 0 : SRC1[175:160];
+    DEST[159:144] := (SRC1[191:160] > FFFFH) ? FFFFH : TMP[159:144] ;
+    TMP[175:160] := (SRC1[223:192] < 0) ? 0 : SRC1[207:192];
+    DEST[175:160] := (SRC1[223:192] > FFFFH) ? FFFFH : TMP[175:160] ;
+    TMP[191:176] := (SRC1[255:224] < 0) ? 0 : SRC1[239:224];
+    DEST[191:176] := (SRC1[255:224] > FFFFH) ? FFFFH : TMP[191:176] ;
+    TMP[207:192] := (TMP_SRC2[159:128] < 0) ? 0 : TMP_SRC2[143:128];
+    DEST[207:192] := (TMP_SRC2[159:128] > FFFFH) ? FFFFH : TMP[207:192] ;
+    TMP[223:208] := (TMP_SRC2[191:160] < 0) ? 0 : TMP_SRC2[175:160];
+    DEST[223:208] := (TMP_SRC2[191:160] > FFFFH) ? FFFFH : TMP[223:208] ;
+    TMP[239:224] := (TMP_SRC2[223:192] < 0) ? 0 : TMP_SRC2[207:192];
+    DEST[239:224] := (TMP_SRC2[223:192] > FFFFH) ? FFFFH : TMP[239:224] ;
+    TMP[255:240] := (TMP_SRC2[255:224] < 0) ? 0 : TMP_SRC2[239:224];
+    DEST[255:240] := (TMP_SRC2[255:224] > FFFFH) ? FFFFH : TMP[255:240] ;
+FI;
+IF VL >= 512
+    TMP[271:256] := (SRC1[287:256] < 0) ? 0 : SRC1[271:256];
+    DEST[271:256] := (SRC1[287:256] > FFFFH) ? FFFFH : TMP[271:256] ;
+    TMP[287:272] := (SRC1[319:288] < 0) ? 0 : SRC1[303:288];
+    DEST[287:272] := (SRC1[319:288] > FFFFH) ? FFFFH : TMP[287:272] ;
+    TMP[303:288] := (SRC1[351:320] < 0) ? 0 : SRC1[335:320];
+    DEST[303:288] := (SRC1[351:320] > FFFFH) ? FFFFH : TMP[303:288] ;
+    TMP[319:304] := (SRC1[383:352] < 0) ? 0 : SRC1[367:352];
+    DEST[319:304] := (SRC1[383:352] > FFFFH) ? FFFFH : TMP[319:304] ;
+    TMP[335:320] := (TMP_SRC2[287:256] < 0) ? 0 : TMP_SRC2[271:256];
+    DEST[335:320] := (TMP_SRC2[287:256] > FFFFH) ? FFFFH : TMP[335:320] ;
+    TMP[351:336] := (TMP_SRC2[319:288] < 0) ? 0 : TMP_SRC2[303:288];
+    DEST[351:336] := (TMP_SRC2[319:288] > FFFFH) ? FFFFH : TMP[351:336] ;
+    TMP[367:352] := (TMP_SRC2[351:320] < 0) ? 0 : TMP_SRC2[335:320];
+    DEST[367:352] := (TMP_SRC2[351:320] > FFFFH) ? FFFFH : TMP[367:352] ;
+    TMP[383:368] := (TMP_SRC2[383:352] < 0) ? 0 : TMP_SRC2[367:352];
+    DEST[383:368] := (TMP_SRC2[383:352] > FFFFH) ? FFFFH : TMP[383:368] ;
+    TMP[399:384] := (SRC1[415:384] < 0) ? 0 : SRC1[399:384];
+    DEST[399:384] := (SRC1[415:384] > FFFFH) ? FFFFH : TMP[399:384] ;
+    TMP[415:400] := (SRC1[447:416] < 0) ? 0 : SRC1[431:416];
+    DEST[415:400] := (SRC1[447:416] > FFFFH) ? FFFFH : TMP[415:400] ;
+    TMP[431:416] := (SRC1[479:448] < 0) ? 0 : SRC1[463:448];
+    DEST[431:416] := (SRC1[479:448] > FFFFH) ? FFFFH : TMP[431:416] ;
+    TMP[447:432] := (SRC1[511:480] < 0) ? 0 : SRC1[495:480];
+    DEST[447:432] := (SRC1[511:480] > FFFFH) ? FFFFH : TMP[447:432] ;
+    TMP[463:448] := (TMP_SRC2[415:384] < 0) ? 0 : TMP_SRC2[399:384];
+    DEST[463:448] := (TMP_SRC2[415:384] > FFFFH) ? FFFFH : TMP[463:448] ;
+    TMP[479:464] := (TMP_SRC2[447:416] < 0) ? 0 : TMP_SRC2[431:416];
+    DEST[479:464] := (TMP_SRC2[447:416] > FFFFH) ? FFFFH : TMP[479:464] ;
+    TMP[495:480] := (TMP_SRC2[479:448] < 0) ? 0 : TMP_SRC2[463:448];
+    DEST[495:480] := (TMP_SRC2[479:448] > FFFFH) ? FFFFH : TMP[495:480] ;
+    TMP[511:496] := (TMP_SRC2[511:480] < 0) ? 0 : TMP_SRC2[495:480];
+    DEST[511:496] := (TMP_SRC2[511:480] > FFFFH) ? FFFFH : TMP[511:496] ;
+FI;
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN
+            DEST[i+15:i] := TMP_DEST[i+15:i]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPACKUSDW __m512i _mm512_packus_epi32(__m512i m1, __m512i m2);
+
+
VPACKUSDW __m512i _mm512_mask_packus_epi32(__m512i s, __mmask32 k, __m512i m1, __m512i m2);
+
+
VPACKUSDW __m512i _mm512_maskz_packus_epi32( __mmask32 k, __m512i m1, __m512i m2);
+
+
VPACKUSDW __m256i _mm256_mask_packus_epi32( __m256i s, __mmask16 k, __m256i m1, __m256i m2);
+
+
VPACKUSDW __m256i _mm256_maskz_packus_epi32( __mmask16 k, __m256i m1, __m256i m2);
+
+
VPACKUSDW __m128i _mm_mask_packus_epi32( __m128i s, __mmask8 k, __m128i m1, __m128i m2);
+
+
VPACKUSDW __m128i _mm_maskz_packus_epi32( __mmask8 k, __m128i m1, __m128i m2);
+
+
PACKUSDW __m128i _mm_packus_epi32(__m128i m1, __m128i m2);
+
+
VPACKUSDW __m256i _mm256_packus_epi32(__m256i m1, __m256i m2);
+
+
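A brief, non-normative usage sketch of the SSE4.1 intrinsic listed above; the function name packusdw_demo and the sample values are assumptions:

#include <stdint.h>
#include <smmintrin.h>   /* SSE4.1: _mm_packus_epi32 */

/* Pack eight signed doublewords into eight unsigned words with saturation:
   -5 saturates to 0000H and 70000 saturates to FFFFH. */
static void packusdw_demo(uint16_t out[8])
{
    __m128i lo = _mm_setr_epi32(-5, 0, 40000, 70000);
    __m128i hi = _mm_setr_epi32(1, 2, 3, 4);
    _mm_storeu_si128((__m128i *)out, _mm_packus_epi32(lo, hi));
}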

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/packuswb.html b/x86/packuswb.html new file mode 100644 index 0000000..ed0e185 --- /dev/null +++ b/x86/packuswb.html @@ -0,0 +1,317 @@ + +PACKUSWB + — Pack With Unsigned Saturation

PACKUSWB + — Pack With Unsigned Saturation

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 67 /r1 PACKUSWB mm, mm/m64AV/VMMXConverts 4 signed word integers from mm and 4 signed word integers from mm/m64 into 8 unsigned byte integers in mm using unsigned saturation.
66 0F 67 /r PACKUSWB xmm1, xmm2/m128AV/VSSE2Converts 8 signed word integers from xmm1 and 8 signed word integers from xmm2/m128 into 16 unsigned byte integers in xmm1 using unsigned saturation.
VEX.128.66.0F.WIG 67 /r VPACKUSWB xmm1, xmm2, xmm3/m128BV/VAVXConverts 8 signed word integers from xmm2 and 8 signed word integers from xmm3/m128 into 16 unsigned byte integers in xmm1 using unsigned saturation.
VEX.256.66.0F.WIG 67 /r VPACKUSWB ymm1, ymm2, ymm3/m256BV/VAVX2Converts 16 signed word integers from ymm2 and 16 signed word integers from ymm3/m256 into 32 unsigned byte integers in ymm1 using unsigned saturation.
EVEX.128.66.0F.WIG 67 /r VPACKUSWB xmm1{k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWConverts signed word integers from xmm2 and signed word integers from xmm3/m128 into unsigned byte integers in xmm1 using unsigned saturation under writemask k1.
EVEX.256.66.0F.WIG 67 /r VPACKUSWB ymm1{k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWConverts signed word integers from ymm2 and signed word integers from ymm3/m256 into unsigned byte integers in ymm1 using unsigned saturation under writemask k1.
EVEX.512.66.0F.WIG 67 /r VPACKUSWB zmm1{k1}{z}, zmm2, zmm3/m512CV/VAVX512BWConverts signed word integers from zmm2 and signed word integers from zmm3/m512 into unsigned byte integers in zmm1 using unsigned saturation under writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Converts 4, 8, 16, or 32 signed word integers from the destination operand (first operand) and 4, 8, 16, or 32 signed word integers from the source operand (second operand) into 8, 16, 32 or 64 unsigned byte integers and stores the result in the destination operand. (See Figure 4-6 for an example of the packing operation.) If a signed word integer value is beyond the range of an unsigned byte integer (that is, greater than FFH or less than 00H), the saturated unsigned byte integer value of FFH or 00H, respectively, is stored in the destination.
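As an illustrative, non-normative sketch of this narrowing behavior using the SSE2 intrinsic form (the function name packuswb_demo and the sample values are assumptions):

#include <stdint.h>
#include <emmintrin.h>   /* SSE2: _mm_packus_epi16 */

/* Narrow sixteen signed words to sixteen unsigned bytes with saturation:
   -7 clamps to 00H and 300 clamps to FFH. */
static void packuswb_demo(uint8_t out[16])
{
    __m128i a = _mm_setr_epi16(-7, 0, 200, 300, 1, 2, 3, 4);
    __m128i b = _mm_setr_epi16(5, 6, 7, 8, 9, 10, 11, 12);
    _mm_storeu_si128((__m128i *)out, _mm_packus_epi16(a, b));
}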

+

EVEX.512 encoded version: The first source operand is a ZMM register. The second source operand is a ZMM register or a 512-bit memory location. The destination operand is a ZMM register.

+

VEX.256 and EVEX.256 encoded versions: The first source operand is a YMM register. The second source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the corresponding ZMM register destination are zeroed.

+

VEX.128 and EVEX.128 encoded versions: The first source operand is an XMM register. The second source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding register destination are zeroed.

+

128-bit Legacy SSE version: The first source operand is an XMM register. The second operand can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register, and the upper bits (MAXVL-1:128) of the corresponding register destination are unmodified.

+

Operation + ¶ +

+

PACKUSWB (With 64-bit Operands) + ¶ +

+
DEST[7:0] := SaturateSignedWordToUnsignedByte DEST[15:0];
+DEST[15:8] := SaturateSignedWordToUnsignedByte DEST[31:16];
+DEST[23:16] := SaturateSignedWordToUnsignedByte DEST[47:32];
+DEST[31:24] := SaturateSignedWordToUnsignedByte DEST[63:48];
+DEST[39:32] := SaturateSignedWordToUnsignedByte SRC[15:0];
+DEST[47:40] := SaturateSignedWordToUnsignedByte SRC[31:16];
+DEST[55:48] := SaturateSignedWordToUnsignedByte SRC[47:32];
+DEST[63:56] := SaturateSignedWordToUnsignedByte SRC[63:48];
+
+

PACKUSWB (Legacy SSE Instruction) + ¶ +

+
DEST[7:0] := SaturateSignedWordToUnsignedByte (DEST[15:0]);
+DEST[15:8] := SaturateSignedWordToUnsignedByte (DEST[31:16]);
+DEST[23:16] := SaturateSignedWordToUnsignedByte (DEST[47:32]);
+DEST[31:24] := SaturateSignedWordToUnsignedByte (DEST[63:48]);
+DEST[39:32] := SaturateSignedWordToUnsignedByte (DEST[79:64]);
+DEST[47:40] := SaturateSignedWordToUnsignedByte (DEST[95:80]);
+DEST[55:48] := SaturateSignedWordToUnsignedByte (DEST[111:96]);
+DEST[63:56] := SaturateSignedWordToUnsignedByte (DEST[127:112]);
+DEST[71:64] := SaturateSignedWordToUnsignedByte (SRC[15:0]);
+DEST[79:72] := SaturateSignedWordToUnsignedByte (SRC[31:16]);
+DEST[87:80] := SaturateSignedWordToUnsignedByte (SRC[47:32]);
+DEST[95:88] := SaturateSignedWordToUnsignedByte (SRC[63:48]);
+DEST[103:96] := SaturateSignedWordToUnsignedByte (SRC[79:64]);
+DEST[111:104] := SaturateSignedWordToUnsignedByte (SRC[95:80]);
+DEST[119:112] := SaturateSignedWordToUnsignedByte (SRC[111:96]);
+DEST[127:120] := SaturateSignedWordToUnsignedByte (SRC[127:112]);
+
+

PACKUSWB (VEX.128 Encoded Version) + ¶ +

+
DEST[7:0] := SaturateSignedWordToUnsignedByte (SRC1[15:0]);
+DEST[15:8] := SaturateSignedWordToUnsignedByte (SRC1[31:16]);
+DEST[23:16] := SaturateSignedWordToUnsignedByte (SRC1[47:32]);
+DEST[31:24] := SaturateSignedWordToUnsignedByte (SRC1[63:48]);
+DEST[39:32] := SaturateSignedWordToUnsignedByte (SRC1[79:64]);
+DEST[47:40] := SaturateSignedWordToUnsignedByte (SRC1[95:80]);
+DEST[55:48] := SaturateSignedWordToUnsignedByte (SRC1[111:96]);
+DEST[63:56] := SaturateSignedWordToUnsignedByte (SRC1[127:112]);
+DEST[71:64] := SaturateSignedWordToUnsignedByte (SRC2[15:0]);
+DEST[79:72] := SaturateSignedWordToUnsignedByte (SRC2[31:16]);
+DEST[87:80] := SaturateSignedWordToUnsignedByte (SRC2[47:32]);
+DEST[95:88] := SaturateSignedWordToUnsignedByte (SRC2[63:48]);
+DEST[103:96] := SaturateSignedWordToUnsignedByte (SRC2[79:64]);
+DEST[111:104] := SaturateSignedWordToUnsignedByte (SRC2[95:80]);
+DEST[119:112] := SaturateSignedWordToUnsignedByte (SRC2[111:96]);
+DEST[127:120] := SaturateSignedWordToUnsignedByte (SRC2[127:112]);
+DEST[MAXVL-1:128] := 0;
+
+

VPACKUSWB (VEX.256 Encoded Version) + ¶ +

+
DEST[7:0] := SaturateSignedWordToUnsignedByte (SRC1[15:0]);
+DEST[15:8] := SaturateSignedWordToUnsignedByte (SRC1[31:16]);
+DEST[23:16] := SaturateSignedWordToUnsignedByte (SRC1[47:32]);
+DEST[31:24] := SaturateSignedWordToUnsignedByte (SRC1[63:48]);
+DEST[39:32] := SaturateSignedWordToUnsignedByte (SRC1[79:64]);
+DEST[47:40] := SaturateSignedWordToUnsignedByte (SRC1[95:80]);
+DEST[55:48] := SaturateSignedWordToUnsignedByte (SRC1[111:96]);
+DEST[63:56] := SaturateSignedWordToUnsignedByte (SRC1[127:112]);
+DEST[71:64] := SaturateSignedWordToUnsignedByte (SRC2[15:0]);
+DEST[79:72] := SaturateSignedWordToUnsignedByte (SRC2[31:16]);
+DEST[87:80] := SaturateSignedWordToUnsignedByte (SRC2[47:32]);
+DEST[95:88] := SaturateSignedWordToUnsignedByte (SRC2[63:48]);
+DEST[103:96] := SaturateSignedWordToUnsignedByte (SRC2[79:64]);
+DEST[111:104] := SaturateSignedWordToUnsignedByte (SRC2[95:80]);
+DEST[119:112] := SaturateSignedWordToUnsignedByte (SRC2[111:96]);
+DEST[127:120] := SaturateSignedWordToUnsignedByte (SRC2[127:112]);
+DEST[135:128] := SaturateSignedWordToUnsignedByte (SRC1[143:128]);
+DEST[143:136] := SaturateSignedWordToUnsignedByte (SRC1[159:144]);
+DEST[151:144] := SaturateSignedWordToUnsignedByte (SRC1[175:160]);
+DEST[159:152] := SaturateSignedWordToUnsignedByte (SRC1[191:176]);
+DEST[167:160] := SaturateSignedWordToUnsignedByte (SRC1[207:192]);
+DEST[175:168] := SaturateSignedWordToUnsignedByte (SRC1[223:208]);
+DEST[183:176] := SaturateSignedWordToUnsignedByte (SRC1[239:224]);
+DEST[191:184] := SaturateSignedWordToUnsignedByte (SRC1[255:240]);
+DEST[199:192] := SaturateSignedWordToUnsignedByte (SRC2[143:128]);
+DEST[207:200] := SaturateSignedWordToUnsignedByte (SRC2[159:144]);
+DEST[215:208] := SaturateSignedWordToUnsignedByte (SRC2[175:160]);
+DEST[223:216] := SaturateSignedWordToUnsignedByte (SRC2[191:176]);
+DEST[231:224] := SaturateSignedWordToUnsignedByte (SRC2[207:192]);
+DEST[239:232] := SaturateSignedWordToUnsignedByte (SRC2[223:208]);
+DEST[247:240] := SaturateSignedWordToUnsignedByte (SRC2[239:224]);
+DEST[255:248] := SaturateSignedWordToUnsignedByte (SRC2[255:240]);
+
+

VPACKUSWB (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+TMP_DEST[7:0] := SaturateSignedWordToUnsignedByte (SRC1[15:0]);
+TMP_DEST[15:8] := SaturateSignedWordToUnsignedByte (SRC1[31:16]);
+TMP_DEST[23:16] := SaturateSignedWordToUnsignedByte (SRC1[47:32]);
+TMP_DEST[31:24] := SaturateSignedWordToUnsignedByte (SRC1[63:48]);
+TMP_DEST[39:32] := SaturateSignedWordToUnsignedByte (SRC1[79:64]);
+TMP_DEST[47:40] := SaturateSignedWordToUnsignedByte (SRC1[95:80]);
+TMP_DEST[55:48] := SaturateSignedWordToUnsignedByte (SRC1[111:96]);
+TMP_DEST[63:56] := SaturateSignedWordToUnsignedByte (SRC1[127:112]);
+TMP_DEST[71:64] := SaturateSignedWordToUnsignedByte (SRC2[15:0]);
+TMP_DEST[79:72] := SaturateSignedWordToUnsignedByte (SRC2[31:16]);
+TMP_DEST[87:80] := SaturateSignedWordToUnsignedByte (SRC2[47:32]);
+TMP_DEST[95:88] := SaturateSignedWordToUnsignedByte (SRC2[63:48]);
+TMP_DEST[103:96] := SaturateSignedWordToUnsignedByte (SRC2[79:64]);
+TMP_DEST[111:104] := SaturateSignedWordToUnsignedByte (SRC2[95:80]);
+TMP_DEST[119:112] := SaturateSignedWordToUnsignedByte (SRC2[111:96]);
+TMP_DEST[127:120] := SaturateSignedWordToUnsignedByte (SRC2[127:112]);
+IF VL >= 256
+    TMP_DEST[135:128] := SaturateSignedWordToUnsignedByte (SRC1[143:128]);
+    TMP_DEST[143:136] := SaturateSignedWordToUnsignedByte (SRC1[159:144]);
+    TMP_DEST[151:144] := SaturateSignedWordToUnsignedByte (SRC1[175:160]);
+    TMP_DEST[159:152] := SaturateSignedWordToUnsignedByte (SRC1[191:176]);
+    TMP_DEST[167:160] := SaturateSignedWordToUnsignedByte (SRC1[207:192]);
+    TMP_DEST[175:168] := SaturateSignedWordToUnsignedByte (SRC1[223:208]);
+    TMP_DEST[183:176] := SaturateSignedWordToUnsignedByte (SRC1[239:224]);
+    TMP_DEST[191:184] := SaturateSignedWordToUnsignedByte (SRC1[255:240]);
+    TMP_DEST[199:192] := SaturateSignedWordToUnsignedByte (SRC2[143:128]);
+    TMP_DEST[207:200] := SaturateSignedWordToUnsignedByte (SRC2[159:144]);
+    TMP_DEST[215:208] := SaturateSignedWordToUnsignedByte (SRC2[175:160]);
+    TMP_DEST[223:216] := SaturateSignedWordToUnsignedByte (SRC2[191:176]);
+    TMP_DEST[231:224] := SaturateSignedWordToUnsignedByte (SRC2[207:192]);
+    TMP_DEST[239:232] := SaturateSignedWordToUnsignedByte (SRC2[223:208]);
+    TMP_DEST[247:240] := SaturateSignedWordToUnsignedByte (SRC2[239:224]);
+    TMP_DEST[255:248] := SaturateSignedWordToUnsignedByte (SRC2[255:240]);
+FI;
+IF VL >= 512
+    TMP_DEST[263:256] := SaturateSignedWordToUnsignedByte (SRC1[271:256]);
+    TMP_DEST[271:264] := SaturateSignedWordToUnsignedByte (SRC1[287:272]);
+    TMP_DEST[279:272] := SaturateSignedWordToUnsignedByte (SRC1[303:288]);
+    TMP_DEST[287:280] := SaturateSignedWordToUnsignedByte (SRC1[319:304]);
+    TMP_DEST[295:288] := SaturateSignedWordToUnsignedByte (SRC1[335:320]);
+    TMP_DEST[303:296] := SaturateSignedWordToUnsignedByte (SRC1[351:336]);
+    TMP_DEST[311:304] := SaturateSignedWordToUnsignedByte (SRC1[367:352]);
+    TMP_DEST[319:312] := SaturateSignedWordToUnsignedByte (SRC1[383:368]);
+    TMP_DEST[327:320] := SaturateSignedWordToUnsignedByte (SRC2[271:256]);
+    TMP_DEST[335:328] := SaturateSignedWordToUnsignedByte (SRC2[287:272]);
+    TMP_DEST[343:336] := SaturateSignedWordToUnsignedByte (SRC2[303:288]);
+    TMP_DEST[351:344] := SaturateSignedWordToUnsignedByte (SRC2[319:304]);
+    TMP_DEST[359:352] := SaturateSignedWordToUnsignedByte (SRC2[335:320]);
+    TMP_DEST[367:360] := SaturateSignedWordToUnsignedByte (SRC2[351:336]);
+    TMP_DEST[375:368] := SaturateSignedWordToUnsignedByte (SRC2[367:352]);
+    TMP_DEST[383:376] := SaturateSignedWordToUnsignedByte (SRC2[383:368]);
+    TMP_DEST[391:384] := SaturateSignedWordToUnsignedByte (SRC1[399:384]);
+    TMP_DEST[399:392] := SaturateSignedWordToUnsignedByte (SRC1[415:400]);
+    TMP_DEST[407:400] := SaturateSignedWordToUnsignedByte (SRC1[431:416]);
+    TMP_DEST[415:408] := SaturateSignedWordToUnsignedByte (SRC1[447:432]);
+    TMP_DEST[423:416] := SaturateSignedWordToUnsignedByte (SRC1[463:448]);
+    TMP_DEST[431:424] := SaturateSignedWordToUnsignedByte (SRC1[479:464]);
+    TMP_DEST[439:432] := SaturateSignedWordToUnsignedByte (SRC1[495:480]);
+    TMP_DEST[447:440] := SaturateSignedWordToUnsignedByte (SRC1[511:496]);
+    TMP_DEST[455:448] := SaturateSignedWordToUnsignedByte (SRC2[399:384]);
+    TMP_DEST[463:456] := SaturateSignedWordToUnsignedByte (SRC2[415:400]);
+    TMP_DEST[471:464] := SaturateSignedWordToUnsignedByte (SRC2[431:416]);
+    TMP_DEST[479:472] := SaturateSignedWordToUnsignedByte (SRC2[447:432]);
+    TMP_DEST[487:480] := SaturateSignedWordToUnsignedByte (SRC2[463:448]);
+    TMP_DEST[495:488] := SaturateSignedWordToUnsignedByte (SRC2[479:464]);
+    TMP_DEST[503:496] := SaturateSignedWordToUnsignedByte (SRC2[495:480]);
+    TMP_DEST[511:504] := SaturateSignedWordToUnsignedByte (SRC2[511:496]);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k1[j] OR *no writemask*
+        THEN
+            DEST[i+7:i] := TMP_DEST[i+7:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+7:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPACKUSWB __m512i _mm512_packus_epi16(__m512i m1, __m512i m2);
+
+
VPACKUSWB __m512i _mm512_mask_packus_epi16(__m512i s, __mmask64 k, __m512i m1, __m512i m2);
+
+
VPACKUSWB __m512i _mm512_maskz_packus_epi16(__mmask64 k, __m512i m1, __m512i m2);
+
+
VPACKUSWB __m256i _mm256_mask_packus_epi16(__m256i s, __mmask32 k, __m256i m1, __m256i m2);
+
+
VPACKUSWB __m256i _mm256_maskz_packus_epi16(__mmask32 k, __m256i m1, __m256i m2);
+
+
VPACKUSWB __m128i _mm_mask_packus_epi16(__m128i s, __mmask16 k, __m128i m1, __m128i m2);
+
+
VPACKUSWB __m128i _mm_maskz_packus_epi16(__mmask16 k, __m128i m1, __m128i m2);
+
+
PACKUSWB __m64 _mm_packs_pu16(__m64 m1, __m64 m2)
+
+
(V)PACKUSWB __m128i _mm_packus_epi16(__m128i m1, __m128i m2)
+
+
VPACKUSWB __m256i _mm256_packus_epi16(__m256i m1, __m256i m2);
+
+

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/paddb.paddw.paddd.paddq.html b/x86/paddb.paddw.paddd.paddq.html new file mode 100644 index 0000000..cc1e0a2 --- /dev/null +++ b/x86/paddb.paddw.paddd.paddq.html @@ -0,0 +1,539 @@ + +PADDB/PADDW/PADDD/PADDQ + — Add Packed Integers

PADDB/PADDW/PADDD/PADDQ + — Add Packed Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F FC /r1 PADDB mm, mm/m64AV/VMMXAdd packed byte integers from mm/m64 and mm.
NP 0F FD /r1 PADDW mm, mm/m64AV/VMMXAdd packed word integers from mm/m64 and mm.
NP 0F FE /r1 PADDD mm, mm/m64AV/VMMXAdd packed doubleword integers from mm/m64 and mm.
NP 0F D4 /r1 PADDQ mm, mm/m64AV/VMMXAdd packed quadword integers from mm/m64 and mm.
66 0F FC /r PADDB xmm1, xmm2/m128AV/VSSE2Add packed byte integers from xmm2/m128 and xmm1.
66 0F FD /r PADDW xmm1, xmm2/m128AV/VSSE2Add packed word integers from xmm2/m128 and xmm1.
66 0F FE /r PADDD xmm1, xmm2/m128AV/VSSE2Add packed doubleword integers from xmm2/m128 and xmm1.
66 0F D4 /r PADDQ xmm1, xmm2/m128AV/VSSE2Add packed quadword integers from xmm2/m128 and xmm1.
VEX.128.66.0F.WIG FC /r VPADDB xmm1, xmm2, xmm3/m128BV/VAVXAdd packed byte integers from xmm2, and xmm3/m128 and store in xmm1.
VEX.128.66.0F.WIG FD /r VPADDW xmm1, xmm2, xmm3/m128BV/VAVXAdd packed word integers from xmm2, xmm3/m128 and store in xmm1.
VEX.128.66.0F.WIG FE /r VPADDD xmm1, xmm2, xmm3/m128BV/VAVXAdd packed doubleword integers from xmm2, xmm3/m128 and store in xmm1.
VEX.128.66.0F.WIG D4 /r VPADDQ xmm1, xmm2, xmm3/m128BV/VAVXAdd packed quadword integers from xmm2, xmm3/m128 and store in xmm1.
VEX.256.66.0F.WIG FC /r VPADDB ymm1, ymm2, ymm3/m256BV/VAVX2Add packed byte integers from ymm2, and ymm3/m256 and store in ymm1.
VEX.256.66.0F.WIG FD /r VPADDW ymm1, ymm2, ymm3/m256BV/VAVX2Add packed word integers from ymm2, ymm3/m256 and store in ymm1.
VEX.256.66.0F.WIG FE /r VPADDD ymm1, ymm2, ymm3/m256BV/VAVX2Add packed doubleword integers from ymm2, ymm3/m256 and store in ymm1.
VEX.256.66.0F.WIG D4 /r VPADDQ ymm1, ymm2, ymm3/m256BV/VAVX2Add packed quadword integers from ymm2, ymm3/m256 and store in ymm1.
EVEX.128.66.0F.WIG FC /r VPADDB xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWAdd packed byte integers from xmm2, and xmm3/m128 and store in xmm1 using writemask k1.
EVEX.128.66.0F.WIG FD /r VPADDW xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWAdd packed word integers from xmm2, and xmm3/m128 and store in xmm1 using writemask k1.
EVEX.128.66.0F.W0 FE /r VPADDD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstDV/VAVX512VL AVX512FAdd packed doubleword integers from xmm2, and xmm3/m128/m32bcst and store in xmm1 using writemask k1.
EVEX.128.66.0F.W1 D4 /r VPADDQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstDV/VAVX512VL AVX512FAdd packed quadword integers from xmm2, and xmm3/m128/m64bcst and store in xmm1 using writemask k1.
EVEX.256.66.0F.WIG FC /r VPADDB ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWAdd packed byte integers from ymm2, and ymm3/m256 and store in ymm1 using writemask k1.
EVEX.256.66.0F.WIG FD /r VPADDW ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWAdd packed word integers from ymm2, and ymm3/m256 and store in ymm1 using writemask k1.
EVEX.256.66.0F.W0 FE /r VPADDD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstDV/VAVX512VL AVX512FAdd packed doubleword integers from ymm2, ymm3/m256/m32bcst and store in ymm1 using writemask k1.
EVEX.256.66.0F.W1 D4 /r VPADDQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstDV/VAVX512VL AVX512FAdd packed quadword integers from ymm2, ymm3/m256/m64bcst and store in ymm1 using writemask k1.
EVEX.512.66.0F.WIG FC /r VPADDB zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWAdd packed byte integers from zmm2, and zmm3/m512 and store in zmm1 using writemask k1.
EVEX.512.66.0F.WIG FD /r VPADDW zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWAdd packed word integers from zmm2, and zmm3/m512 and store in zmm1 using writemask k1.
EVEX.512.66.0F.W0 FE /r VPADDD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstDV/VAVX512FAdd packed doubleword integers from zmm2, zmm3/m512/m32bcst and store in zmm1 using writemask k1.
EVEX.512.66.0F.W1 D4 /r VPADDQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstDV/VAVX512FAdd packed quadword integers from zmm2, zmm3/m512/m64bcst and store in zmm1 using writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
DFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD add of the packed integers from the source operand (second operand) and the destination operand (first operand), and stores the packed integer results in the destination operand. See Figure 9-4 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for an illustration of a SIMD operation. Overflow is handled with wraparound, as described in the following paragraphs.

+

The PADDB and VPADDB instructions add packed byte integers from the first source operand and second source operand and store the packed integer results in the destination operand. When an individual result is too large to be represented in 8 bits (overflow), the result is wrapped around and the low 8 bits are written to the destination operand (that is, the carry is ignored).

+

The PADDW and VPADDW instructions add packed word integers from the first source operand and second source operand and store the packed integer results in the destination operand. When an individual result is too large to be represented in 16 bits (overflow), the result is wrapped around and the low 16 bits are written to the destination operand (that is, the carry is ignored).

+

The PADDD and VPADDD instructions add packed doubleword integers from the first source operand and second source operand and store the packed integer results in the destination operand. When an individual result is too large to be represented in 32 bits (overflow), the result is wrapped around and the low 32 bits are written to the destination operand (that is, the carry is ignored).

+

The PADDQ and VPADDQ instructions add packed quadword integers from the first source operand and second source operand and store the packed integer results in the destination operand. When a quadword result is too large to be represented in 64 bits (overflow), the result is wrapped around and the low 64 bits are written to the destination operand (that is, the carry is ignored).

+

Note that the (V)PADDB, (V)PADDW, (V)PADDD, and (V)PADDQ instructions can operate on either unsigned or signed (two's complement notation) packed integers; however, they do not set bits in the EFLAGS register to indicate overflow and/or a carry. To prevent undetected overflow conditions, software must control the ranges of values operated on.
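A minimal, non-normative C sketch of this wraparound behavior using the SSE2 intrinsic form (the function name paddb_wrap_demo is an assumption):

#include <stdint.h>
#include <emmintrin.h>   /* SSE2: _mm_add_epi8 */

/* Wraparound demonstration: 0x7F + 0x01 produces 0x80 (-128 as a signed byte);
   no flag records the overflow, so the caller must track value ranges itself. */
static void paddb_wrap_demo(int8_t out[16])
{
    __m128i a = _mm_set1_epi8(0x7F);   /* +127 in every byte */
    __m128i b = _mm_set1_epi8(1);
    _mm_storeu_si128((__m128i *)out, _mm_add_epi8(a, b));
}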

+

EVEX encoded VPADDD/Q: The first source operand is a ZMM/YMM/XMM register. The second source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32/64-bit memory location. The destination operand is a ZMM/YMM/XMM register updated according to the write-mask.

+

EVEX encoded VPADDB/W: The first source operand is a ZMM/YMM/XMM register. The second source operand is a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination operand is a ZMM/YMM/XMM register updated according to the writemask.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the destination are cleared.

+

VEX.128 encoded version: The first source operand is an XMM register. The second source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The first source operand is an XMM register. The second operand can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register, and the upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified.

+

Operation + ¶ +

+

PADDB (With 64-bit Operands) + ¶ +

+
DEST[7:0] := DEST[7:0] + SRC[7:0];
+(* Repeat add operation for 2nd through 7th byte *)
+DEST[63:56] := DEST[63:56] + SRC[63:56];
+
+

PADDW (With 64-bit Operands) + ¶ +

+
DEST[15:0] := DEST[15:0] + SRC[15:0];
+(* Repeat add operation for 2nd and 3rd word *)
+DEST[63:48] := DEST[63:48] + SRC[63:48];
+
+

PADDD (With 64-bit Operands) + ¶ +

+
DEST[31:0] := DEST[31:0] + SRC[31:0];
+DEST[63:32] := DEST[63:32] + SRC[63:32];
+
+

PADDQ (With 64-Bit Operands) + ¶ +

+
DEST[63:0] := DEST[63:0] + SRC[63:0];
+
+

PADDB (Legacy SSE Instruction) + ¶ +

+
DEST[7:0] := DEST[7:0] + SRC[7:0];
+(* Repeat add operation for 2nd through 15th byte *)
+DEST[127:120] := DEST[127:120] + SRC[127:120];
+DEST[MAXVL-1:128] (Unmodified)
+
+

PADDW (Legacy SSE Instruction) + ¶ +

+
DEST[15:0] := DEST[15:0] + SRC[15:0];
+(* Repeat add operation for 2nd through 7th word *)
+DEST[127:112] := DEST[127:112] + SRC[127:112];
+DEST[MAXVL-1:128] (Unmodified)
+
+

PADDD (Legacy SSE Instruction) + ¶ +

+
DEST[31:0] := DEST[31:0] + SRC[31:0];
+(* Repeat add operation for 2nd and 3rd doubleword *)
+DEST[127:96] := DEST[127:96] + SRC[127:96];
+DEST[MAXVL-1:128] (Unmodified)
+
+

PADDQ (Legacy SSE Instruction) + ¶ +

+
DEST[63:0] := DEST[63:0] + SRC[63:0];
+DEST[127:64] := DEST[127:64] + SRC[127:64];
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPADDB (VEX.128 Encoded Instruction) + ¶ +

+
DEST[7:0] := SRC1[7:0] + SRC2[7:0];
+(* Repeat add operation for 2nd through 15th byte *)
+DEST[127:120] := SRC1[127:120] + SRC2[127:120];
+DEST[MAXVL-1:128] := 0;
+
+

VPADDW (VEX.128 Encoded Instruction) + ¶ +

+
DEST[15:0] := SRC1[15:0] + SRC2[15:0];
+(* Repeat add operation for 2nd through 7th word *)
+DEST[127:112] := SRC1[127:112] + SRC2[127:112];
+DEST[MAXVL-1:128] := 0;
+
+

VPADDD (VEX.128 Encoded Instruction) + ¶ +

+
DEST[31:0] := SRC1[31:0] + SRC2[31:0];
+(* Repeat add operation for 2nd and 3rd doubleword *)
+DEST[127:96] := SRC1[127:96] + SRC2[127:96];
+DEST[MAXVL-1:128] := 0;
+
+

VPADDQ (VEX.128 Encoded Instruction) + ¶ +

+
DEST[63:0] := SRC1[63:0] + SRC2[63:0];
+DEST[127:64] := SRC1[127:64] + SRC2[127:64];
+DEST[MAXVL-1:128] := 0;
+
+

VPADDB (VEX.256 Encoded Instruction) + ¶ +

+
DEST[7:0] := SRC1[7:0] + SRC2[7:0];
+(* Repeat add operation for 2nd through 31st byte *)
+DEST[255:248] := SRC1[255:248] + SRC2[255:248];
+
+

VPADDW (VEX.256 Encoded Instruction) + ¶ +

+
DEST[15:0] := SRC1[15:0] + SRC2[15:0];
+(* Repeat add operation for 2nd through 15th word *)
+DEST[255:240] := SRC1[255:240] + SRC2[255:240];
+
+

VPADDD (VEX.256 Encoded Instruction) + ¶ +

+
DEST[31:0] := SRC1[31:0] + SRC2[31:0];
+(* Repeat add operation for 2nd through 7th doubleword *)
+DEST[255:224] := SRC1[255:224] + SRC2[255:224];
+
+

VPADDQ (VEX.256 Encoded Instruction) + ¶ +

+
DEST[63:0] := SRC1[63:0] + SRC2[63:0];
+DEST[127:64] := SRC1[127:64] + SRC2[127:64];
+DEST[191:128] := SRC1[191:128] + SRC2[191:128];
+DEST[255:192] := SRC1[255:192] + SRC2[255:192];
+
+

VPADDB (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := SRC1[i+7:i] + SRC2[i+7:i]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+7:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

VPADDW (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := SRC1[i+15:i] + SRC2[i+15:i]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

VPADDD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[i+31:i] := SRC1[i+31:i] + SRC2[31:0]
+                ELSE DEST[i+31:i] := SRC1[i+31:i] + SRC2[i+31:i]
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+
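As a non-normative sketch of the EVEX.b (embedded broadcast) case in the pseudocode above, expressed with AVX-512F intrinsics; the function name vpaddd_broadcast_add is an assumption, and whether the compiler actually emits the {1to16} broadcast operand form depends on code generation:

#include <immintrin.h>   /* AVX-512F */

/* Add the same scalar to every 32-bit lane; a compiler may encode the broadcast
   as an embedded {1to16} memory operand, which is the EVEX.b = 1 path above. */
static __m512i vpaddd_broadcast_add(__m512i a, const int *scalar)
{
    return _mm512_add_epi32(a, _mm512_set1_epi32(*scalar));
}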

VPADDQ (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[i+63:i] := SRC1[i+63:i] + SRC2[63:0]
+                ELSE DEST[i+63:i] := SRC1[i+63:i] + SRC2[i+63:i]
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPADDB __m512i _mm512_add_epi8 ( __m512i a, __m512i b)
+
+
VPADDW __m512i _mm512_add_epi16 ( __m512i a, __m512i b)
+
+
VPADDB __m512i _mm512_mask_add_epi8 ( __m512i s, __mmask64 m, __m512i a, __m512i b)
+
+
VPADDW __m512i _mm512_mask_add_epi16 ( __m512i s, __mmask32 m, __m512i a, __m512i b)
+
+
VPADDB __m512i _mm512_maskz_add_epi8 (__mmask64 m, __m512i a, __m512i b)
+
+
VPADDW __m512i _mm512_maskz_add_epi16 (__mmask32 m, __m512i a, __m512i b)
+
+
VPADDB __m256i _mm256_mask_add_epi8 (__m256i s, __mmask32 m, __m256i a, __m256i b)
+
+
VPADDW __m256i _mm256_mask_add_epi16 (__m256i s, __mmask16 m, __m256i a, __m256i b)
+
+
VPADDB __m256i _mm256_maskz_add_epi8 (__mmask32 m, __m256i a, __m256i b)
+
+
VPADDW __m256i _mm256_maskz_add_epi16 (__mmask16 m, __m256i a, __m256i b)
+
+
VPADDB __m128i _mm_mask_add_epi8 (__m128i s, __mmask16 m, __m128i a, __m128i b)
+
+
VPADDW __m128i _mm_mask_add_epi16 (__m128i s, __mmask8 m, __m128i a, __m128i b)
+
+
VPADDB __m128i _mm_maskz_add_epi8 (__mmask16 m, __m128i a, __m128i b)
+
+
VPADDW __m128i _mm_maskz_add_epi16 (__mmask8 m, __m128i a, __m128i b)
+
+
VPADDD __m512i _mm512_add_epi32( __m512i a, __m512i b);
+
+
VPADDD __m512i _mm512_mask_add_epi32(__m512i s, __mmask16 k, __m512i a, __m512i b);
+
+
VPADDD __m512i _mm512_maskz_add_epi32( __mmask16 k, __m512i a, __m512i b);
+
+
VPADDD __m256i _mm256_mask_add_epi32(__m256i s, __mmask8 k, __m256i a, __m256i b);
+
+
VPADDD __m256i _mm256_maskz_add_epi32( __mmask8 k, __m256i a, __m256i b);
+
+
VPADDD __m128i _mm_mask_add_epi32(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPADDD __m128i _mm_maskz_add_epi32( __mmask8 k, __m128i a, __m128i b);
+
+
VPADDQ __m512i _mm512_add_epi64( __m512i a, __m512i b);
+
+
VPADDQ __m512i _mm512_mask_add_epi64(__m512i s, __mmask8 k, __m512i a, __m512i b);
+
+
VPADDQ __m512i _mm512_maskz_add_epi64( __mmask8 k, __m512i a, __m512i b);
+
+
VPADDQ __m256i _mm256_mask_add_epi64(__m256i s, __mmask8 k, __m256i a, __m256i b);
+
+
VPADDQ __m256i _mm256_maskz_add_epi64( __mmask8 k, __m256i a, __m256i b);
+
+
VPADDQ __m128i _mm_mask_add_epi64(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPADDQ __m128i _mm_maskz_add_epi64( __mmask8 k, __m128i a, __m128i b);
+
+
PADDB __m128i _mm_add_epi8 (__m128i a,__m128i b );
+
+
PADDW __m128i _mm_add_epi16 ( __m128i a, __m128i b);
+
+
PADDD __m128i _mm_add_epi32 ( __m128i a, __m128i b);
+
+
PADDQ __m128i _mm_add_epi64 ( __m128i a, __m128i b);
+
+
VPADDB __m256i _mm256_add_epi8 (__m256i a, __m256i b );
+
+
VPADDW __m256i _mm256_add_epi16 ( __m256i a, __m256i b);
+
+
VPADDD __m256i _mm256_add_epi32 ( __m256i a, __m256i b);
+
+
VPADDQ __m256i _mm256_add_epi64 ( __m256i a, __m256i b);
+
+
PADDB __m64 _mm_add_pi8(__m64 m1, __m64 m2)
+
+
PADDW __m64 _mm_add_pi16(__m64 m1, __m64 m2)
+
+
PADDD __m64 _mm_add_pi32(__m64 m1, __m64 m2)
+
+
PADDQ __m64 _mm_add_si64(__m64 m1, __m64 m2)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded VPADDD/Q, see Table 2-49, “Type E4 Class Exception Conditions.”

+

EVEX-encoded VPADDB/W, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/paddsb.paddsw.html b/x86/paddsb.paddsw.html new file mode 100644 index 0000000..0721488 --- /dev/null +++ b/x86/paddsb.paddsw.html @@ -0,0 +1,294 @@ + +PADDSB/PADDSW + — Add Packed Signed Integers with Signed Saturation

PADDSB/PADDSW + — Add Packed Signed Integers with Signed Saturation

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F EC /r1 PADDSB mm, mm/m64AV/VMMXAdd packed signed byte integers from mm/m64 and mm and saturate the results.
66 0F EC /r PADDSB xmm1, xmm2/m128AV/VSSE2Add packed signed byte integers from xmm2/m128 and xmm1 saturate the results.
NP 0F ED /r1 PADDSW mm, mm/m64AV/VMMXAdd packed signed word integers from mm/m64 and mm and saturate the results.
66 0F ED /r PADDSW xmm1, xmm2/m128AV/VSSE2Add packed signed word integers from xmm2/m128 and xmm1 and saturate the results.
VEX.128.66.0F.WIG EC /r VPADDSB xmm1, xmm2, xmm3/m128BV/VAVXAdd packed signed byte integers from xmm3/m128 and xmm2 and saturate the results.
VEX.128.66.0F.WIG ED /r VPADDSW xmm1, xmm2, xmm3/m128BV/VAVXAdd packed signed word integers from xmm3/m128 and xmm2 and saturate the results.
VEX.256.66.0F.WIG EC /r VPADDSB ymm1, ymm2, ymm3/m256BV/VAVX2Add packed signed byte integers from ymm2, and ymm3/m256 and store the saturated results in ymm1.
VEX.256.66.0F.WIG ED /r VPADDSW ymm1, ymm2, ymm3/m256BV/VAVX2Add packed signed word integers from ymm2, and ymm3/m256 and store the saturated results in ymm1.
EVEX.128.66.0F.WIG EC /r VPADDSB xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWAdd packed signed byte integers from xmm2, and xmm3/m128 and store the saturated results in xmm1 under writemask k1.
EVEX.256.66.0F.WIG EC /r VPADDSB ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWAdd packed signed byte integers from ymm2, and ymm3/m256 and store the saturated results in ymm1 under writemask k1.
EVEX.512.66.0F.WIG EC /r VPADDSB zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWAdd packed signed byte integers from zmm2, and zmm3/m512 and store the saturated results in zmm1 under writemask k1.
EVEX.128.66.0F.WIG ED /r VPADDSW xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWAdd packed signed word integers from xmm2, and xmm3/m128 and store the saturated results in xmm1 under writemask k1.
EVEX.256.66.0F.WIG ED /r VPADDSW ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWAdd packed signed word integers from ymm2, and ymm3/m256 and store the saturated results in ymm1 under writemask k1.
EVEX.512.66.0F.WIG ED /r VPADDSW zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWAdd packed signed word integers from zmm2, and zmm3/m512 and store the saturated results in zmm1 under writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD add of the packed signed integers from the source operand (second operand) and the destination operand (first operand), and stores the packed integer results in the destination operand. See Figure 9-4 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for an illustration of a SIMD operation. Overflow is handled with signed saturation, as described in the following paragraphs.

+

(V)PADDSB performs a SIMD add of the packed signed integers with saturation from the first source operand and second source operand and stores the packed integer results in the destination operand. When an individual byte result is beyond the range of a signed byte integer (that is, greater than 7FH or less than 80H), the saturated value of 7FH or 80H, respectively, is written to the destination operand.

+

(V)PADDSW performs a SIMD add of the packed signed word integers with saturation from the first source operand and second source operand and stores the packed integer results in the destination operand. When an individual word result is beyond the range of a signed word integer (that is, greater than 7FFFH or less than 8000H), the saturated value of 7FFFH or 8000H, respectively, is written to the destination operand.
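
To make the saturation behavior concrete, here is a minimal C sketch (not part of the reference text; it assumes an SSE2-capable compiler and the standard <emmintrin.h> intrinsic header) contrasting the saturating add with the wrapping PADDB form:

    #include <emmintrin.h>
    #include <stdio.h>

    int main(void) {
        __m128i a = _mm_set1_epi8(0x7F);      /* sixteen copies of +127 */
        __m128i b = _mm_set1_epi8(1);         /* sixteen copies of +1   */
        __m128i sat  = _mm_adds_epi8(a, b);   /* PADDSB: clamps at 7FH  */
        __m128i wrap = _mm_add_epi8(a, b);    /* PADDB: wraps to 80H    */

        signed char s[16], w[16];
        _mm_storeu_si128((__m128i *)s, sat);
        _mm_storeu_si128((__m128i *)w, wrap);
        printf("saturated: %d  wrapped: %d\n", s[0], w[0]);  /* prints 127 and -128 */
        return 0;
    }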

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register. The second source operand is a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination operand is a ZMM/YMM/XMM register.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+

VEX.128 encoded version: The first source operand is an XMM register. The second source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding register destination are zeroed.

+

128-bit Legacy SSE version: The first source operand is an XMM register. The second operand can be an XMM register or an 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding register destination are unmodified.

+

Operation + ¶ +

+

PADDSB (With 64-bit Operands) + ¶ +

+
DEST[7:0] := SaturateToSignedByte(DEST[7:0] + SRC[7:0]);
+(* Repeat add operation for 2nd through 7th bytes *)
+DEST[63:56] := SaturateToSignedByte(DEST[63:56] + SRC[63:56] );
+
+

PADDSB (With 128-bit Operands) + ¶ +

+
DEST[7:0] := SaturateToSignedByte (DEST[7:0] + SRC[7:0]);
+(* Repeat add operation for 2nd through 15th bytes *)
+DEST[127:120] := SaturateToSignedByte (DEST[127:120] + SRC[127:120]);
+
+

VPADDSB (VEX.128 Encoded Version) + ¶ +

+
DEST[7:0] := SaturateToSignedByte (SRC1[7:0] + SRC2[7:0]);
+(* Repeat add operation for 2nd through 15th bytes *)
+DEST[127:120] := SaturateToSignedByte (SRC1[127:120] + SRC2[127:120]);
+DEST[MAXVL-1:128] := 0
+
+

VPADDSB (VEX.256 Encoded Version) + ¶ +

+
DEST[7:0] := SaturateToSignedByte (SRC1[7:0] + SRC2[7:0]);
+(* Repeat add operation for 2nd through 31st bytes *)
+DEST[255:248] := SaturateToSignedByte (SRC1[255:248] + SRC2[255:248]);
+
+

VPADDSB (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := SaturateToSignedByte (SRC1[i+7:i] + SRC2[i+7:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+7:i] = 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+PADDSW (with 64-bit operands)
+    DEST[15:0] := SaturateToSignedWord(DEST[15:0] + SRC[15:0] );
+    (* Repeat add operation for 2nd and 3rd words *)
+    DEST[63:48] := SaturateToSignedWord(DEST[63:48] + SRC[63:48] );
+PADDSW (with 128-bit operands)
+    DEST[15:0] := SaturateToSignedWord (DEST[15:0] + SRC[15:0]);
+    (* Repeat add operation for 2nd through 7th words *)
+    DEST[127:112] := SaturateToSignedWord (DEST[127:112] + SRC[127:112]);
+
+

VPADDSW (VEX.128 Encoded Version) + ¶ +

+
DEST[15:0] := SaturateToSignedWord (SRC1[15:0] + SRC2[15:0]);
+(* Repeat add operation for 2nd through 7th words *)
+DEST[127:112] := SaturateToSignedWord (SRC1[127:112] + SRC2[127:112]);
+DEST[MAXVL-1:128] := 0
+
+

VPADDSW (VEX.256 Encoded Version) + ¶ +

+
DEST[15:0] := SaturateToSignedWord (SRC1[15:0] + SRC2[15:0]);
+(* Repeat add operation for 2nd through 15th words *)
+DEST[255:240] := SaturateToSignedWord (SRC1[255:240] + SRC2[255:240])
+
+

VPADDSW (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := SaturateToSignedWord (SRC1[i+15:i] + SRC2[i+15:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+15:i] = 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
PADDSB __m64 _mm_adds_pi8(__m64 m1, __m64 m2)
+
+
(V)PADDSB __m128i _mm_adds_epi8 ( __m128i a, __m128i b)
+
+
VPADDSB __m256i _mm256_adds_epi8 ( __m256i a, __m256i b)
+
+
PADDSW __m64 _mm_adds_pi16(__m64 m1, __m64 m2)
+
+
(V)PADDSW __m128i _mm_adds_epi16 ( __m128i a, __m128i b)
+
+
VPADDSW __m256i _mm256_adds_epi16 ( __m256i a, __m256i b)
+
+
VPADDSB __m512i _mm512_adds_epi8 ( __m512i a, __m512i b)
+
+
VPADDSW __m512i _mm512_adds_epi16 ( __m512i a, __m512i b)
+
+
VPADDSB __m512i _mm512_mask_adds_epi8 ( __m512i s, __mmask64 m, __m512i a, __m512i b)
+
+
VPADDSW __m512i _mm512_mask_adds_epi16 ( __m512i s, __mmask32 m, __m512i a, __m512i b)
+
+
VPADDSB __m512i _mm512_maskz_adds_epi8 (__mmask64 m, __m512i a, __m512i b)
+
+
VPADDSW __m512i _mm512_maskz_adds_epi16 (__mmask32 m, __m512i a, __m512i b)
+
+
VPADDSB __m256i _mm256_mask_adds_epi8 (__m256i s, __mmask32 m, __m256i a, __m256i b)
+
+
VPADDSW __m256i _mm256_mask_adds_epi16 (__m256i s, __mmask16 m, __m256i a, __m256i b)
+
+
VPADDSB __m256i _mm256_maskz_adds_epi8 (__mmask32 m, __m256i a, __m256i b)
+
+
VPADDSW __m256i _mm256_maskz_adds_epi16 (__mmask16 m, __m256i a, __m256i b)
+
+
VPADDSB __m128i _mm_mask_adds_epi8 (__m128i s, __mmask16 m, __m128i a, __m128i b)
+
+
VPADDSW __m128i _mm_mask_adds_epi16 (__m128i s, __mmask8 m, __m128i a, __m128i b)
+
+
VPADDSB __m128i _mm_maskz_adds_epi8 (__mmask16 m, __m128i a, __m128i b)
+
+
VPADDSW __m128i _mm_maskz_adds_epi16 (__mmask8 m, __m128i a, __m128i b)
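
As an illustration of the masked forms listed above (a sketch only, assuming an AVX512BW-capable compiler and <immintrin.h>), the s operand supplies the merge source and each mask bit selects whether a lane receives the saturated sum:

    #include <immintrin.h>

    /* Merging form: lanes with a 0 mask bit keep the byte from s. */
    __m512i sat_add_merge(__m512i s, __mmask64 m, __m512i a, __m512i b) {
        return _mm512_mask_adds_epi8(s, m, a, b);    /* VPADDSB zmm1 {k1}, zmm2, zmm3 */
    }

    /* Zeroing form: lanes with a 0 mask bit are set to zero. */
    __m512i sat_add_zero(__mmask64 m, __m512i a, __m512i b) {
        return _mm512_maskz_adds_epi8(m, a, b);      /* VPADDSB zmm1 {k1}{z}, zmm2, zmm3 */
    }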
+
+

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/paddusb.paddusw.html b/x86/paddusb.paddusw.html new file mode 100644 index 0000000..b9748c2 --- /dev/null +++ b/x86/paddusb.paddusw.html @@ -0,0 +1,300 @@ + +PADDUSB/PADDUSW + — Add Packed Unsigned Integers With Unsigned Saturation

PADDUSB/PADDUSW + — Add Packed Unsigned Integers With Unsigned Saturation

Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F DC /r1 PADDUSB mm, mm/m64AV/VMMXAdd packed unsigned byte integers from mm/m64 and mm and saturate the results.
66 0F DC /r PADDUSB xmm1, xmm2/m128AV/VSSE2Add packed unsigned byte integers from xmm2/m128 and xmm1 and saturate the results.
NP 0F DD /r1 PADDUSW mm, mm/m64AV/VMMXAdd packed unsigned word integers from mm/m64 and mm and saturate the results.
66 0F DD /r PADDUSW xmm1, xmm2/m128AV/VSSE2Add packed unsigned word integers from xmm2/m128 to xmm1 and saturate the results.
VEX.128.66.0F.WIG DC /r VPADDUSB xmm1, xmm2, xmm3/m128BV/VAVXAdd packed unsigned byte integers from xmm3/m128 to xmm2 and saturate the results.
VEX.128.66.0F.WIG DD /r VPADDUSW xmm1, xmm2, xmm3/m128BV/VAVXAdd packed unsigned word integers from xmm3/m128 to xmm2 and saturate the results.
VEX.256.66.0F.WIG DC /r VPADDUSB ymm1, ymm2, ymm3/m256BV/VAVX2Add packed unsigned byte integers from ymm2, and ymm3/m256 and store the saturated results in ymm1.
VEX.256.66.0F.WIG DD /r VPADDUSW ymm1, ymm2, ymm3/m256BV/VAVX2Add packed unsigned word integers from ymm2, and ymm3/m256 and store the saturated results in ymm1.
EVEX.128.66.0F.WIG DC /r VPADDUSB xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWAdd packed unsigned byte integers from xmm2, and xmm3/m128 and store the saturated results in xmm1 under writemask k1.
EVEX.256.66.0F.WIG DC /r VPADDUSB ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWAdd packed unsigned byte integers from ymm2, and ymm3/m256 and store the saturated results in ymm1 under writemask k1.
EVEX.512.66.0F.WIG DC /r VPADDUSB zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWAdd packed unsigned byte integers from zmm2, and zmm3/m512 and store the saturated results in zmm1 under writemask k1.
EVEX.128.66.0F.WIG DD /r VPADDUSW xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWAdd packed unsigned word integers from xmm2, and xmm3/m128 and store the saturated results in xmm1 under writemask k1.
EVEX.256.66.0F.WIG DD /r VPADDUSW ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWAdd packed unsigned word integers from ymm2, and ymm3/m256 and store the saturated results in ymm1 under writemask k1.
EVEX.512.66.0F.WIG DD /r VPADDUSW zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWAdd packed unsigned word integers from zmm2, and zmm3/m512 and store the saturated results in zmm1 under writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD add of the packed unsigned integers from the source operand (second operand) and the destination operand (first operand), and stores the packed integer results in the destination operand. See Figure 9-4 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for an illustration of a SIMD operation. Overflow is handled with unsigned saturation, as described in the following paragraphs.

+

(V)PADDUSB performs a SIMD add of the packed unsigned integers with saturation from the first source operand and second source operand and stores the packed integer results in the destination operand. When an individual byte result is beyond the range of an unsigned byte integer (that is, greater than FFH), the saturated value of FFH is written to the destination operand.

+

(V)PADDUSW performs a SIMD add of the packed unsigned word integers with saturation from the first source operand and second source operand and stores the packed integer results in the destination operand. When an individual word result is beyond the range of an unsigned word integer (that is, greater than FFFFH), the saturated value of FFFFH is written to the destination operand.
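
For illustration only (assuming <emmintrin.h> and an SSE2-capable compiler), a short C sketch contrasting the unsigned saturating add with the wrapping PADDB form:

    #include <emmintrin.h>
    #include <stdio.h>

    int main(void) {
        __m128i a = _mm_set1_epi8((char)0xF0);  /* sixteen copies of 240 */
        __m128i b = _mm_set1_epi8(0x20);        /* sixteen copies of 32  */
        __m128i sat  = _mm_adds_epu8(a, b);     /* PADDUSB: clamps at FFH (255) */
        __m128i wrap = _mm_add_epi8(a, b);      /* PADDB: wraps to 10H (16)     */

        unsigned char s[16], w[16];
        _mm_storeu_si128((__m128i *)s, sat);
        _mm_storeu_si128((__m128i *)w, wrap);
        printf("saturated: %u  wrapped: %u\n", s[0], w[0]);  /* prints 255 and 16 */
        return 0;
    }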

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register. The second source operand is a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination is a ZMM/YMM/XMM register.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+

VEX.128 encoded version: The first source operand is an XMM register. The second source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

128-bit Legacy SSE version: The first source operand is an XMM register. The second operand can be an XMM register or an 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding register destination are unmodified.

+

Operation + ¶ +

+

PADDUSB (With 64-bit Operands) + ¶ +

+
DEST[7:0] := SaturateToUnsignedByte(DEST[7:0] + SRC[7:0] );
+(* Repeat add operation for 2nd through 7th bytes *)
+DEST[63:56] := SaturateToUnsignedByte(DEST[63:56] + SRC[63:56] );
+
+

PADDUSB (With 128-bit Operands) + ¶ +

+
DEST[7:0] := SaturateToUnsignedByte (DEST[7:0] + SRC[7:0]);
+(* Repeat add operation for 2nd through 15th bytes *)
+DEST[127:120] := SaturateToUnsignedByte (DEST[127:120] + SRC[127:120]);
+
+

VPADDUSB (VEX.128 Encoded Version) + ¶ +

+
DEST[7:0] := SaturateToUnsignedByte (SRC1[7:0] + SRC2[7:0]);
+(* Repeat add operation for 2nd through 15th bytes *)
+DEST[127:120] := SaturateToUnsignedByte (SRC1[127:120] + SRC2[127:120]);
+DEST[MAXVL-1:128] := 0
+
+

VPADDUSB (VEX.256 Encoded Version) + ¶ +

+
DEST[7:0] := SaturateToUnsignedByte (SRC1[7:0] + SRC2[7:0]);
+(* Repeat add operation for 2nd through 31st bytes *)
+DEST[255:248] := SaturateToUnsignedByte (SRC1[255:248] + SRC2[255:248]);
+
+

PADDUSW (With 64-bit Operands) + ¶ +

+
DEST[15:0] := SaturateToUnsignedWord(DEST[15:0] + SRC[15:0] );
+(* Repeat add operation for 2nd and 3rd words *)
+DEST[63:48] := SaturateToUnsignedWord(DEST[63:48] + SRC[63:48] );
+
+

PADDUSW (With 128-bit Operands) + ¶ +

+
DEST[15:0] := SaturateToUnsignedWord (DEST[15:0] + SRC[15:0]);
+(* Repeat add operation for 2nd through 7th words *)
+DEST[127:112] := SaturateToUnsignedWord (DEST[127:112] + SRC[127:112]);
+
+

VPADDUSW (VEX.128 Encoded Version) + ¶ +

+
DEST[15:0] := SaturateToUnsignedWord (SRC1[15:0] + SRC2[15:0]);
+(* Repeat add operation for 2nd through 7th words *)
+DEST[127:112] := SaturateToUnsignedWord (SRC1[127:112] + SRC2[127:112]);
+DEST[MAXVL-1:128] := 0
+
+

VPADDUSW (VEX.256 Encoded Version) + ¶ +

+
DEST[15:0] := SaturateToUnsignedWord (SRC1[15:0] + SRC2[15:0]);
+(* Repeat add operation for 2nd through 15th words *)
+DEST[255:240] := SaturateToUnsignedWord (SRC1[255:240] + SRC2[255:240])
+
+

VPADDUSB (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := SaturateToUnsignedByte (SRC1[i+7:i] + SRC2[i+7:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+7:i] = 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

VPADDUSW (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := SaturateToUnsignedWord (SRC1[i+15:i] + SRC2[i+15:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+15:i] = 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
PADDUSB __m64 _mm_adds_pu8(__m64 m1, __m64 m2)
+
+
PADDUSW __m64 _mm_adds_pu16(__m64 m1, __m64 m2)
+
+
(V)PADDUSB __m128i _mm_adds_epu8 ( __m128i a, __m128i b)
+
+
(V)PADDUSW __m128i _mm_adds_epu16 ( __m128i a, __m128i b)
+
+
VPADDUSB __m256i _mm256_adds_epu8 ( __m256i a, __m256i b)
+
+
VPADDUSW __m256i _mm256_adds_epu16 ( __m256i a, __m256i b)
+
+
VPADDUSB __m512i _mm512_adds_epu8 ( __m512i a, __m512i b)
+
+
VPADDUSW __m512i _mm512_adds_epu16 ( __m512i a, __m512i b)
+
+
VPADDUSB __m512i _mm512_mask_adds_epu8 ( __m512i s, __mmask64 m, __m512i a, __m512i b)
+
+
VPADDUSW __m512i _mm512_mask_adds_epu16 ( __m512i s, __mmask32 m, __m512i a, __m512i b)
+
+
VPADDUSB __m512i _mm512_maskz_adds_epu8 (__mmask64 m, __m512i a, __m512i b)
+
+
VPADDUSW __m512i _mm512_maskz_adds_epu16 (__mmask32 m, __m512i a, __m512i b)
+
+
VPADDUSB __m256i _mm256_mask_adds_epu8 (__m256i s, __mmask32 m, __m256i a, __m256i b)
+
+
VPADDUSW __m256i _mm256_mask_adds_epu16 (__m256i s, __mmask16 m, __m256i a, __m256i b)
+
+
VPADDUSB __m256i _mm256_maskz_adds_epu8 (__mmask32 m, __m256i a, __m256i b)
+
+
VPADDUSW __m256i _mm256_maskz_adds_epu16 (__mmask16 m, __m256i a, __m256i b)
+
+
VPADDUSB __m128i _mm_mask_adds_epu8 (__m128i s, __mmask16 m, __m128i a, __m128i b)
+
+
VPADDUSW __m128i _mm_mask_adds_epu16 (__m128i s, __mmask8 m, __m128i a, __m128i b)
+
+
VPADDUSB __m128i _mm_maskz_adds_epu8 (__mmask16 m, __m128i a, __m128i b)
+
+
VPADDUSW __m128i _mm_maskz_adds_epu16 (__mmask8 m, __m128i a, __m128i b)
+
+

Flags Affected + ¶ +

+

None.

+

Numeric Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/palignr.html b/x86/palignr.html new file mode 100644 index 0000000..b5634ed --- /dev/null +++ b/x86/palignr.html @@ -0,0 +1,410 @@ + +PALIGNR + — Packed Align Right

PALIGNR + — Packed Align Right

Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 3A 0F /r ib1 PALIGNR mm1, mm2/m64, imm8AV/VSSSE3Concatenate destination and source operands, extract byte-aligned result shifted to the right by constant value in imm8 into mm1.
66 0F 3A 0F /r ib PALIGNR xmm1, xmm2/m128, imm8AV/VSSSE3Concatenate destination and source operands, extract byte-aligned result shifted to the right by constant value in imm8 into xmm1.
VEX.128.66.0F3A.WIG 0F /r ib VPALIGNR xmm1, xmm2, xmm3/m128, imm8BV/VAVXConcatenate xmm2 and xmm3/m128, extract byte aligned result shifted to the right by constant value in imm8 and result is stored in xmm1.
VEX.256.66.0F3A.WIG 0F /r ib VPALIGNR ymm1, ymm2, ymm3/m256, imm8BV/VAVX2Concatenate pairs of 16 bytes in ymm2 and ymm3/m256 into 32-byte intermediate result, extract byte-aligned, 16-byte result shifted to the right by constant values in imm8 from each intermediate result, and two 16-byte results are stored in ymm1.
EVEX.128.66.0F3A.WIG 0F /r ib VPALIGNR xmm1 {k1}{z}, xmm2, xmm3/m128, imm8CV/VAVX512VL AVX512BWConcatenate xmm2 and xmm3/m128 into a 32-byte intermediate result, extract byte aligned result shifted to the right by constant value in imm8 and result is stored in xmm1.
EVEX.256.66.0F3A.WIG 0F /r ib VPALIGNR ymm1 {k1}{z}, ymm2, ymm3/m256, imm8CV/VAVX512VL AVX512BWConcatenate pairs of 16 bytes in ymm2 and ymm3/m256 into 32-byte intermediate result, extract byte-aligned, 16-byte result shifted to the right by constant values in imm8 from each intermediate result, and two 16-byte results are stored in ymm1.
EVEX.512.66.0F3A.WIG 0F /r ib VPALIGNR zmm1 {k1}{z}, zmm2, zmm3/m512, imm8CV/VAVX512BWConcatenate pairs of 16 bytes in zmm2 and zmm3/m512 into 32-byte intermediate result, extract byte-aligned, 16-byte result shifted to the right by constant values in imm8 from each intermediate result, and four 16-byte results are stored in zmm1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)imm8N/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

(V)PALIGNR concatenates the destination operand (the first operand) and the source operand (the second operand) into an intermediate composite, shifts the composite at byte granularity to the right by a constant immediate, and extracts the right-aligned result into the destination. The first and the second operands can be an MMX, XMM or a YMM register. The immediate value is considered unsigned. Immediate shift counts larger than 2L (i.e., 32 for 128-bit operands, or 16 for 64-bit operands) produce a zero result. Both operands can be MMX registers, XMM registers or YMM registers. When the source operand is a 128-bit memory operand, the operand must be aligned on a 16-byte boundary or a general-protection exception (#GP) will be generated.
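
As a concrete illustration (a sketch only; it assumes <tmmintrin.h> and a compiler targeting SSSE3, e.g. -mssse3), extracting a 16-byte window that starts 5 bytes into the concatenation of two registers:

    #include <tmmintrin.h>   /* SSSE3: _mm_alignr_epi8 */
    #include <stdio.h>

    int main(void) {
        __m128i lo = _mm_setr_epi8(0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15);
        __m128i hi = _mm_setr_epi8(16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31);
        /* hi:lo forms a 32-byte composite; shifting right by 5 bytes keeps
           bytes 5..20 of that composite (lo[5..15] followed by hi[0..4]). */
        __m128i r = _mm_alignr_epi8(hi, lo, 5);   /* PALIGNR with imm8 = 5 */

        unsigned char out[16];
        _mm_storeu_si128((__m128i *)out, r);
        for (int i = 0; i < 16; i++)
            printf("%d ", out[i]);                /* prints 5 6 7 ... 20 */
        printf("\n");
        return 0;
    }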

+

In 64-bit mode and not encoded by VEX/EVEX prefix, use the REX prefix to access additional registers.

+

128-bit Legacy SSE version: Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

EVEX.512 encoded version: The first source operand is a ZMM register and contains four 16-byte blocks. The second source operand is a ZMM register or a 512-bit memory location containing four 16-byte blocks. The destination operand is a ZMM register and contains four 16-byte results. The imm8[7:0] is the common shift count

+

used for each of the four successive 16-byte block sources. The low 16-byte block of the two source operands produce the low 16-byte result of the destination operand, the high 16-byte block of the two source operands produce the high 16-byte result of the destination operand and so on for the blocks in the middle.

+

VEX.256 and EVEX.256 encoded versions: The first source operand is a YMM register and contains two 16-byte blocks. The second source operand is a YMM register or a 256-bit memory location containing two 16-byte blocks. The destination operand is a YMM register and contains two 16-byte results. The imm8[7:0] is the common shift count used for the two lower 16-byte block sources and the two upper 16-byte block sources. The low 16-byte block of the two source operands produce the low 16-byte result of the destination operand, the high 16-byte block of the two source operands produce the high 16-byte result of the destination operand. The upper bits (MAXVL-1:256) of the corresponding ZMM register destination are zeroed.
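
Because the shift is applied per 128-bit lane, the 256-bit form is not a single 32-byte shift. A hedged AVX2 sketch (assuming <immintrin.h> and -mavx2) of the same alignment applied independently to both lanes:

    #include <immintrin.h>

    /* VPALIGNR ymm: the same imm8 is applied to the low and high
       128-bit lane pairs independently; bytes never cross a lane boundary. */
    static inline __m256i align5_per_lane(__m256i hi, __m256i lo) {
        return _mm256_alignr_epi8(hi, lo, 5);
    }

For that reason, a true 32-byte concatenate-and-shift on AVX2 typically needs an additional cross-lane step first (for example _mm256_permute2x128_si256) to build the lane combination that straddles the boundary.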

+

VEX.128 and EVEX.128 encoded versions: The first source operand is an XMM register. The second source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

Concatenation is done with 128-bit data in the first and second source operand for both 128-bit and 256-bit instructions. The high 128-bits of the intermediate composite 256-bit result come from the 128-bit data from the first source operand; the low 128-bits of the intermediate result come from the 128-bit data of the second source operand.

+

Figure 4-7. 256-bit VPALIGN Instruction Operation (diagram: in each 128-bit lane pair, SRC1:SRC2 are concatenated and shifted right by imm8[7:0]*8 bits; the low 128 bits of each shifted composite are written to the corresponding lane of DEST)
+

Operation + ¶ +

+

PALIGNR (With 64-bit Operands) + ¶ +

+
temp1[127:0] = CONCATENATE(DEST,SRC)>>(imm8*8)
+DEST[63:0] = temp1[63:0]
+
+

PALIGNR (With 128-bit Operands) + ¶ +

+
temp1[255:0] := ((DEST[127:0] << 128) OR SRC[127:0])>>(imm8*8);
+DEST[127:0] := temp1[127:0]
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPALIGNR (VEX.128 Encoded Version) + ¶ +

+
temp1[255:0] := ((SRC1[127:0] << 128) OR SRC2[127:0])>>(imm8*8);
+DEST[127:0] := temp1[127:0]
+DEST[MAXVL-1:128] := 0
+
+

VPALIGNR (VEX.256 Encoded Version) + ¶ +

+
temp1[255:0] := ((SRC1[127:0] << 128) OR SRC2[127:0])>>(imm8[7:0]*8);
+DEST[127:0] := temp1[127:0]
+temp1[255:0] := ((SRC1[255:128] << 128) OR SRC2[255:128])>>(imm8[7:0]*8);
+DEST[MAXVL-1:128] := temp1[127:0]
+
+

VPALIGNR (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR l := 0 TO VL-1 with increments of 128
+    temp1[255:0] := ((SRC1[l+127:l] << 128) OR SRC2[l+127:l])>>(imm8[7:0]*8);
+    TMP_DEST[l+127:l] := temp1[127:0]
+ENDFOR;
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := TMP_DEST[i+7:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+7:i] = 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
PALIGNR __m64 _mm_alignr_pi8 (__m64 a, __m64 b, int n)
+
+
(V)PALIGNR __m128i _mm_alignr_epi8 (__m128i a, __m128i b, int n)
+
+
VPALIGNR __m256i _mm256_alignr_epi8 (__m256i a, __m256i b, const int n)
+
+
VPALIGNR __m512i _mm512_alignr_epi8 (__m512i a, __m512i b, const int n)
+
+
VPALIGNR __m512i _mm512_mask_alignr_epi8 (__m512i s, __mmask64 m, __m512i a, __m512i b, const int n)
+
+
VPALIGNR __m512i _mm512_maskz_alignr_epi8 ( __mmask64 m, __m512i a, __m512i b, const int n)
+
+
VPALIGNR __m256i _mm256_mask_alignr_epi8 (__m256i s, __mmask32 m, __m256i a, __m256i b, const int n)
+
+
VPALIGNR __m256i _mm256_maskz_alignr_epi8 (__mmask32 m, __m256i a, __m256i b, const int n)
+
+
VPALIGNR __m128i _mm_mask_alignr_epi8 (__m128i s, __mmask16 m, __m128i a, __m128i b, const int n)
+
+
VPALIGNR __m128i _mm_maskz_alignr_epi8 (__mmask16 m, __m128i a, __m128i b, const int n)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/pand.html b/x86/pand.html new file mode 100644 index 0000000..1b2a6a9 --- /dev/null +++ b/x86/pand.html @@ -0,0 +1,241 @@ + +PAND + — Logical AND

PAND + — Logical AND

Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F DB /r1 PAND mm, mm/m64AV/VMMXBitwise AND mm/m64 and mm.
66 0F DB /r PAND xmm1, xmm2/m128AV/VSSE2Bitwise AND of xmm2/m128 and xmm1.
VEX.128.66.0F.WIG DB /r VPAND xmm1, xmm2, xmm3/m128BV/VAVXBitwise AND of xmm3/m128 and xmm2.
VEX.256.66.0F.WIG DB /r VPAND ymm1, ymm2, ymm3/m256BV/VAVX2Bitwise AND of ymm2, and ymm3/m256 and store result in ymm1.
EVEX.128.66.0F.W0 DB /r VPANDD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512FBitwise AND of packed doubleword integers in xmm2 and xmm3/m128/m32bcst and store result in xmm1 using writemask k1.
EVEX.256.66.0F.W0 DB /r VPANDD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512FBitwise AND of packed doubleword integers in ymm2 and ymm3/m256/m32bcst and store result in ymm1 using writemask k1.
EVEX.512.66.0F.W0 DB /r VPANDD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstCV/VAVX512FBitwise AND of packed doubleword integers in zmm2 and zmm3/m512/m32bcst and store result in zmm1 using writemask k1.
EVEX.128.66.0F.W1 DB /r VPANDQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512FBitwise AND of packed quadword integers in xmm2 and xmm3/m128/m64bcst and store result in xmm1 using writemask k1.
EVEX.256.66.0F.W1 DB /r VPANDQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FBitwise AND of packed quadword integers in ymm2 and ymm3/m256/m64bcst and store result in ymm1 using writemask k1.
EVEX.512.66.0F.W1 DB /r VPANDQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstCV/VAVX512FBitwise AND of packed quadword integers in zmm2 and zmm3/m512/m64bcst and store result in zmm1 using writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a bitwise logical AND operation on the first source operand and second source operand and stores the result in the destination operand. Each bit of the result is set to 1 if the corresponding bits of the first and second operands are 1, otherwise it is set to 0.
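
A minimal usage sketch (not part of the reference; it assumes <emmintrin.h> and an SSE2-capable compiler) that uses the AND to clear the high nibble of every byte:

    #include <emmintrin.h>
    #include <stdio.h>

    int main(void) {
        __m128i data = _mm_set1_epi8((char)0xAB);
        __m128i mask = _mm_set1_epi8(0x0F);
        __m128i r = _mm_and_si128(data, mask);   /* PAND: 0xAB & 0x0F = 0x0B per byte */

        unsigned char out[16];
        _mm_storeu_si128((__m128i *)out, r);
        printf("0x%02X\n", out[0]);              /* prints 0x0B */
        return 0;
    }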

+

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

Legacy SSE instructions: The source operand can be an MMX technology register or a 64-bit memory location. The destination operand can be an MMX technology register.

+

128-bit Legacy SSE version: The first source operand is an XMM register. The second operand can be an XMM register or an 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified.

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32/64-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with write-mask k1 at 32/64-bit granularity.

+

VEX.256 encoded versions: The first source operand is a YMM register. The second source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the corresponding ZMM register destination are zeroed.

+

VEX.128 encoded versions: The first source operand is an XMM register. The second source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

Operation + ¶ +

+

PAND (64-bit Operand) + ¶ +

+
DEST := DEST AND SRC
+
+

PAND (128-bit Legacy SSE Version) + ¶ +

+
DEST := DEST AND SRC
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPAND (VEX.128 Encoded Version) + ¶ +

+
DEST := SRC1 AND SRC2
+DEST[MAXVL-1:128] := 0
+
+

VPAND (VEX.256 Encoded Instruction) + ¶ +

+
DEST[255:0] := (SRC1[255:0] AND SRC2[255:0])
+DEST[MAXVL-1:256] := 0
+
+

VPANDD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[i+31:i] := SRC1[i+31:i] BITWISE AND SRC2[31:0]
+                ELSE DEST[i+31:i] := SRC1[i+31:i] BITWISE AND SRC2[i+31:i]
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPANDQ (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[i+63:i] := SRC1[i+63:i] BITWISE AND SRC2[63:0]
+                ELSE DEST[i+63:i] := SRC1[i+63:i] BITWISE AND SRC2[i+63:i]
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPANDD __m512i _mm512_and_epi32( __m512i a, __m512i b);
+
+
VPANDD __m512i _mm512_mask_and_epi32(__m512i s, __mmask16 k, __m512i a, __m512i b);
+
+
VPANDD __m512i _mm512_maskz_and_epi32( __mmask16 k, __m512i a, __m512i b);
+
+
VPANDQ __m512i _mm512_and_epi64( __m512i a, __m512i b);
+
+
VPANDQ __m512i _mm512_mask_and_epi64(__m512i s, __mmask8 k, __m512i a, __m512i b);
+
+
VPANDQ __m512i _mm512_maskz_and_epi64( __mmask8 k, __m512i a, __m512i b);
+
+
VPANDD __m256i _mm256_mask_and_epi32(__m256i s, __mmask8 k, __m256i a, __m256i b);
+
+
VPANDD __m256i _mm256_maskz_and_epi32( __mmask8 k, __m256i a, __m256i b);
+
+
VPANDD __m128i _mm_mask_and_epi32(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPANDD __m128i _mm_maskz_and_epi32( __mmask8 k, __m128i a, __m128i b);
+
+
VPANDQ __m256i _mm256_mask_and_epi64(__m256i s, __mmask8 k, __m256i a, __m256i b);
+
+
VPANDQ __m256i _mm256_maskz_and_epi64( __mmask8 k, __m256i a, __m256i b);
+
+
VPANDQ __m128i _mm_mask_and_epi64(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPANDQ __m128i _mm_maskz_and_epi64( __mmask8 k, __m128i a, __m128i b);
+
+
PAND __m64 _mm_and_si64 (__m64 m1, __m64 m2)
+
+
(V)PAND __m128i _mm_and_si128 ( __m128i a, __m128i b)
+
+
VPAND __m256i _mm256_and_si256 ( __m256i a, __m256i b)
+
+

Flags Affected + ¶ +

+

None.

+

Numeric Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/pandn.html b/x86/pandn.html new file mode 100644 index 0000000..4d51130 --- /dev/null +++ b/x86/pandn.html @@ -0,0 +1,241 @@ + +PANDN + — Logical AND NOT

PANDN + — Logical AND NOT

Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F DF /r1 PANDN mm, mm/m64AV/VMMXBitwise AND NOT of mm/m64 and mm.
66 0F DF /r PANDN xmm1, xmm2/m128AV/VSSE2Bitwise AND NOT of xmm2/m128 and xmm1.
VEX.128.66.0F.WIG DF /r VPANDN xmm1, xmm2, xmm3/m128BV/VAVXBitwise AND NOT of xmm3/m128 and xmm2.
VEX.256.66.0F.WIG DF /r VPANDN ymm1, ymm2, ymm3/m256BV/VAVX2Bitwise AND NOT of ymm2, and ymm3/m256 and store result in ymm1.
EVEX.128.66.0F.W0 DF /r VPANDND xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512FBitwise AND NOT of packed doubleword integers in xmm2 and xmm3/m128/m32bcst and store result in xmm1 using writemask k1.
EVEX.256.66.0F.W0 DF /r VPANDND ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512FBitwise AND NOT of packed doubleword integers in ymm2 and ymm3/m256/m32bcst and store result in ymm1 using writemask k1.
EVEX.512.66.0F.W0 DF /r VPANDND zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstCV/VAVX512FBitwise AND NOT of packed doubleword integers in zmm2 and zmm3/m512/m32bcst and store result in zmm1 using writemask k1.
EVEX.128.66.0F.W1 DF /r VPANDNQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512FBitwise AND NOT of packed quadword integers in xmm2 and xmm3/m128/m64bcst and store result in xmm1 using writemask k1.
EVEX.256.66.0F.W1 DF /r VPANDNQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FBitwise AND NOT of packed quadword integers in ymm2 and ymm3/m256/m64bcst and store result in ymm1 using writemask k1.
EVEX.512.66.0F.W1 DF /r VPANDNQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstCV/VAVX512FBitwise AND NOT of packed quadword integers in zmm2 and zmm3/m512/m64bcst and store result in zmm1 using writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a bitwise logical NOT operation on the first source operand, then performs bitwise AND with second source operand and stores the result in the destination operand. Each bit of the result is set to 1 if the corresponding bit in the first operand is 0 and the corresponding bit in the second operand is 1, otherwise it is set to 0.
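
A small C sketch (illustrative only; assumes <emmintrin.h> and an SSE2-capable compiler). Note that in the intrinsic, as in PANDN itself, it is the first operand that gets inverted:

    #include <emmintrin.h>
    #include <stdio.h>

    int main(void) {
        __m128i clear_mask = _mm_set1_epi8(0x0F);        /* bits set here are cleared from data */
        __m128i data       = _mm_set1_epi8((char)0xAB);
        __m128i r = _mm_andnot_si128(clear_mask, data);  /* PANDN: (~0x0F) & 0xAB = 0xA0 */

        unsigned char out[16];
        _mm_storeu_si128((__m128i *)out, r);
        printf("0x%02X\n", out[0]);                      /* prints 0xA0 */
        return 0;
    }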

+

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

Legacy SSE instructions: The source operand can be an MMX technology register or a 64-bit memory location. The destination operand can be an MMX technology register.

+

128-bit Legacy SSE version: The first source operand is an XMM register. The second operand can be an XMM register or an 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified.

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32/64-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with write-mask k1 at 32/64-bit granularity.

+

VEX.256 encoded versions: The first source operand is a YMM register. The second source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the corresponding ZMM register destination are zeroed.

+

VEX.128 encoded versions: The first source operand is an XMM register. The second source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

Operation + ¶ +

+

PANDN (64-bit Operand) + ¶ +

+
DEST := NOT(DEST) AND SRC
+
+

PANDN (128-bit Legacy SSE Version) + ¶ +

+
DEST := NOT(DEST) AND SRC
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPANDN (VEX.128 Encoded Version) + ¶ +

+
DEST := NOT(SRC1) AND SRC2
+DEST[MAXVL-1:128] := 0
+
+

VPANDN (VEX.256 Encoded Instruction) + ¶ +

+
DEST[255:0] := ((NOT SRC1[255:0]) AND SRC2[255:0])
+DEST[MAXVL-1:256] := 0
+
+

VPANDND (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[i+31:i] := ((NOT SRC1[i+31:i]) AND SRC2[31:0])
+                ELSE DEST[i+31:i] := ((NOT SRC1[i+31:i]) AND SRC2[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPANDNQ (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[i+63:i] := ((NOT SRC1[i+63:i]) AND SRC2[63:0])
+                ELSE DEST[i+63:i] := ((NOT SRC1[i+63:i]) AND SRC2[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPANDND __m512i _mm512_andnot_epi32( __m512i a, __m512i b);
+
+
VPANDND __m512i _mm512_mask_andnot_epi32(__m512i s, __mmask16 k, __m512i a, __m512i b);
+
+
VPANDND __m512i _mm512_maskz_andnot_epi32( __mmask16 k, __m512i a, __m512i b);
+
+
VPANDND __m256i _mm256_mask_andnot_epi32(__m256i s, __mmask8 k, __m256i a, __m256i b);
+
+
VPANDND __m256i _mm256_maskz_andnot_epi32( __mmask8 k, __m256i a, __m256i b);
+
+
VPANDND __m128i _mm_mask_andnot_epi32(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPANDND __m128i _mm_maskz_andnot_epi32( __mmask8 k, __m128i a, __m128i b);
+
+
VPANDNQ __m512i _mm512_andnot_epi64( __m512i a, __m512i b);
+
+
VPANDNQ __m512i _mm512_mask_andnot_epi64(__m512i s, __mmask8 k, __m512i a, __m512i b);
+
+
VPANDNQ __m512i _mm512_maskz_andnot_epi64( __mmask8 k, __m512i a, __m512i b);
+
+
VPANDNQ __m256i _mm256_mask_andnot_epi64(__m256i s, __mmask8 k, __m256i a, __m256i b);
+
+
VPANDNQ __m256i _mm256_maskz_andnot_epi64( __mmask8 k, __m256i a, __m256i b);
+
+
VPANDNQ __m128i _mm_mask_andnot_epi64(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPANDNQ __m128i _mm_maskz_andnot_epi64( __mmask8 k, __m128i a, __m128i b);
+
+
PANDN __m64 _mm_andnot_si64 (__m64 m1, __m64 m2)
+
+
(V)PANDN __m128i _mm_andnot_si128 ( __m128i a, __m128i b)
+
+
VPANDN __m256i _mm256_andnot_si256 ( __m256i a, __m256i b)
+
+

Flags Affected + ¶ +

+

None.

+

Numeric Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/parameters.html b/x86/parameters.html new file mode 100644 index 0000000..a02ee32 --- /dev/null +++ b/x86/parameters.html @@ -0,0 +1,234 @@ + +GETSEC[PARAMETERS] + — Report the SMX Parameters

GETSEC[PARAMETERS] + — Report the SMX Parameters

OpcodeInstructionDescription
NP 0F 37 (EAX=6)GETSEC[PARAMETERS]Report the SMX parameters. The parameters index is input in EBX with the result returned in EAX, EBX, and ECX.
+

Description + ¶ +

+

The GETSEC[PARAMETERS] instruction returns specific parameter information for SMX features supported by the processor. Parameter information is returned in EAX, EBX, and ECX, with the input parameter selected using EBX.

+

Software retrieves parameter information by searching with an input index for EBX starting at 0, and then reading the returned results in EAX, EBX, and ECX. EAX[4:0] is designated to return a parameter type field indicating if a parameter is available and what type it is. If EAX[4:0] is returned with 0, this designates a null parameter and indicates no more parameters are available.

+

Table 7-7 defines the parameter types supported in current and future implementations.

+
Parameter Type EAX[4:0]Parameter DescriptionEAX[31:5]EBX[31:0]ECX[31:0]
0NULLReserved (0 returned)Reserved (unmodified)Reserved (unmodified)
1Supported AC module versionsReserved (0 returned)Version comparison maskVersion numbers supported
2Max size of authenticated code execution areaMultiply by 32 for size in bytesReserved (unmodified)Reserved (unmodified)
3External memory types supported during AC modeMemory type bit maskReserved (unmodified)Reserved (unmodified)
4Selective SENTER functionality controlEAX[14:8] correspond to available SENTER function disable controlsReserved (unmodified)Reserved (unmodified)
5TXT extensions supportTXT Feature Extensions Flags (see Table 7-8)ReservedReserved
6-31UndefinedReserved (unmodified)Reserved (unmodified)Reserved (unmodified)
+
Table 7-7. SMX Reporting Parameters Format
+
BitDefinitionDescription
5Processor based S-CRTM supportReturns 1 if this processor implements a processor-rooted S-CRTM capability and 0 if not (S-CRTM is rooted in BIOS). This flag cannot be used to infer whether the chipset supports TXT or whether the processor support SMX.
6Machine Check HandlingReturns 1 if machine check status registers can be preserved through ENTERACCS and SENTER. If this bit is 1, the caller of ENTERACCS and SENTER is not required to clear machine check error status bits before invoking these GETSEC leaves. If this bit returns 0, the caller of ENTERACCS and SENTER must clear all machine check error status bits before invoking these GETSEC leaves.
31:7ReservedReserved for future use. Will return 0.
+
Table 7-8. TXT Feature Extensions Flags
+

Supported AC module versions (as defined by the AC module HeaderVersion field) can be determined for a particular SMX capable processor by the type 1 parameter. Using EBX to index through the available parameters reported by GETSEC[PARAMETERS] for each unique parameter set returned for type 1, software can determine the complete list of AC module version(s) supported.

+

For each parameter set, EBX returns the comparison mask and ECX returns the available HeaderVersion field values supported, after AND'ing the target HeaderVersion with the comparison mask. Software can then determine if a particular AC module version is supported by following the pseudo-code search routine given below:

+

parameter_search_index= 0
do {
    EBX= parameter_search_index++
    EAX= 6
    GETSEC
    if (EAX[4:0] = 1) {
        if ((version_query & EBX) = ECX) {
            version_is_supported= 1
            break
        }
    }
} while (EAX[4:0] ≠ 0)
+

If only AC modules with a HeaderVersion of 0 are supported by the processor, then only one parameter set of type 1 will be returned, as follows: EAX = 00000001H,

+

EBX = FFFFFFFFH and ECX = 00000000H.

+

The maximum capacity for an authenticated code execution area supported by the processor is reported with the parameter type of 2. The maximum supported size in bytes is determined by multiplying the returned size in EAX[31:5] by 32. Thus, for a maximum supported authenticated RAM size of 32KBytes, EAX returns with 00008002H.

+

Supportable memory types for memory mapped outside of the authenticated code execution area are reported with the parameter type of 3. While the authenticated code execution mode is active, as initiated by the GETSEC functions SENTER and ENTERACCS and terminated by EXITAC, there are restrictions on what memory types are allowed for the rest of system memory. It is the responsibility of the system software to initialize the memory type range register (MTRR) MSRs and/or the page attribute table (PAT) to only map memory types consistent with the reporting of this parameter. The reporting of supportable memory types of external memory is indicated using a bit map returned in EAX[31:8]. These bit positions correspond to the memory type encodings defined for the MTRR MSR and PAT programming. See Table 7-9.

+

The parameter type of 4 is used for enumerating the availability of selective GETSEC[SENTER] function disable controls. If a 1 is reported in bits 14:8 of the returned parameter EAX, then this indicates a disable control capability exists with SENTER for a particular function. The enumerated field in bits 14:8 corresponds to use of the EDX input parameter bits 6:0 for SENTER. If an enumerated field bit is set to 1, then the corresponding EDX input parameter bit of EDX may be set to 1 to disable that designated function. If the enumerated field bit is 0 or this parameter is not reported, then no disable capability exists with the corresponding EDX input parameter for SENTER, and EDX bit(s) must be cleared to 0 to enable execution of SENTER. If no selective disable capability for SENTER exists as enumerated, then the corresponding bits in the IA32_FEATURE_CONTROL MSR bits 14:8 must also be programmed to 1 if the SENTER global enable bit 15 of the MSR is set. This is required to enable future extensibility of SENTER selective disable capability with respect to potentially separate software initialization of the MSR.

+
EAX Bit PositionParameter Description
8Uncacheable (UC)
9Write Combining (WC)
11:10Reserved
12Write-through (WT)
13Write-protected (WP)
14Write-back (WB)
31:15Reserved
+
Table 7-9. External Memory Types Using Parameter 3
+

If the GETSEC[PARAMETERS] leaf or specific parameter is not present for a given SMX capable processor, then default parameter values should be assumed. These are defined in Table 7-10.

+
Parameter Type EAX[4:0]Default SettingParameter Description
10.0 onlySupported AC module versions.
232 KBytesAuthenticated code execution area size.
3UC onlyExternal memory types supported during AC execution mode.
4NoneAvailable SENTER selective disable controls.
+
Table 7-10. Default Parameter Values
+

Operation + ¶ +

+
(* example of a processor supporting only a 0.0 HeaderVersion, 32K ACRAM size, memory types UC and WC *)
+IF (CR4.SMXE=0)
+    THEN #UD;
+ELSE IF (in VMX non-root operation)
+    THEN VM Exit (reason=”GETSEC instruction”);
+ELSE IF (GETSEC leaf unsupported)
+    THEN #UD;
+    (* example of a processor supporting a 0.0 HeaderVersion *)
+IF (EBX=0) THEN
+    EAX := 00000001h;
+    EBX := FFFFFFFFh;
+    ECX := 00000000h;
+ELSE IF (EBX=1)
+    (* example of a processor supporting a 32K ACRAM size *)
+    THEN EAX := 00008002h;
+ELSE IF (EBX= 2)
+    (* example of a processor supporting external memory types of UC and WC *)
+    THEN EAX := 00000303h;
+ELSE IF (EBX= other value(s) less than unsupported index value)
+    (* EAX value varies. Consult Table 7-7 and Table 7-8 *)
+ELSE (* unsupported index*)
+    EAX := 00000000h;
+END;
+
+

Flags Affected + ¶ +

+

None.

+

Use of Prefixes + ¶ +

+

LOCK Causes #UD.

+

REP* Cause #UD (includes REPNE/REPNZ and REP/REPE/REPZ).

+

Operand size Causes #UD.

+

NP 66/F2/F3 prefixes are not allowed.

+

Segment overrides Ignored.

+

Address size Ignored.

+

REX Ignored.

+

Protected Mode Exceptions + ¶ +

#UDIf CR4.SMXE = 0.
If GETSEC[PARAMETERS] is not reported as supported by GETSEC[CAPABILITIES].
+

Real-Address Mode Exceptions + ¶ +

#UDIf CR4.SMXE = 0.
If GETSEC[PARAMETERS] is not reported as supported by GETSEC[CAPABILITIES].
+

Virtual-8086 Mode Exceptions + ¶ +

#UDIf CR4.SMXE = 0.
If GETSEC[PARAMETERS] is not reported as supported by GETSEC[CAPABILITIES].
+

Compatibility Mode Exceptions + ¶ +

+

All protected mode exceptions apply.

+

64-Bit Mode Exceptions + ¶ +

+

All protected mode exceptions apply.

+

VM-Exit Condition + ¶ +

+

Reason (GETSEC) If in VMX non-root operation.

diff --git a/x86/pause.html b/x86/pause.html new file mode 100644 index 0000000..8f7dbf1 --- /dev/null +++ b/x86/pause.html @@ -0,0 +1,60 @@ + +PAUSE + — Spin Loop Hint

PAUSE + — Spin Loop Hint

OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
F3 90PAUSEZOValidValidGives hint to processor that improves performance of spin-wait loops.
+

Instruction Operand Encoding + ¶ +

Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Improves the performance of spin-wait loops. When executing a “spin-wait loop,” processors will suffer a severe performance penalty when exiting the loop because they detect a possible memory order violation. The PAUSE instruction provides a hint to the processor that the code sequence is a spin-wait loop. The processor uses this hint to avoid the memory order violation in most situations, which greatly improves processor performance. For this reason, it is recommended that a PAUSE instruction be placed in all spin-wait loops.
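
For example, a minimal spin-lock sketch (not from the reference; it assumes C11 <stdatomic.h> and the _mm_pause intrinsic from <immintrin.h>) with PAUSE placed in the polling loop:

    #include <immintrin.h>    /* _mm_pause (compiles to F3 90) */
    #include <stdatomic.h>

    static void spin_lock(atomic_flag *lock) {
        while (atomic_flag_test_and_set_explicit(lock, memory_order_acquire))
            _mm_pause();      /* hint to the processor: this is a spin-wait loop */
    }

    static void spin_unlock(atomic_flag *lock) {
        atomic_flag_clear_explicit(lock, memory_order_release);
    }

    int main(void) {
        static atomic_flag lock = ATOMIC_FLAG_INIT;
        spin_lock(&lock);
        spin_unlock(&lock);
        return 0;
    }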

+

An additional function of the PAUSE instruction is to reduce the power consumed by a processor while executing a spin loop. A processor can execute a spin-wait loop extremely quickly, causing the processor to consume a lot of power while it waits for the resource it is spinning on to become available. Inserting a pause instruction in a spin-wait loop greatly reduces the processor’s power consumption.

+

This instruction was introduced in the Pentium 4 processors, but is backward compatible with all IA-32 processors. In earlier IA-32 processors, the PAUSE instruction operates like a NOP instruction. The Pentium 4 and Intel Xeon processors implement the PAUSE instruction as a delay. The delay is finite and can be zero for some processors. This instruction does not change the architectural state of the processor (that is, it performs essentially a delaying no-op operation).

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
Execute_Next_Instruction(DELAY);
+
+

Numeric Exceptions + ¶ +

+

None.

+

Exceptions (All Operating Modes) + ¶ +

+

#UD If the LOCK prefix is used.

diff --git a/x86/pavgb.pavgw.html b/x86/pavgb.pavgw.html new file mode 100644 index 0000000..06a3980 --- /dev/null +++ b/x86/pavgb.pavgw.html @@ -0,0 +1,299 @@ + +PAVGB/PAVGW + — Average Packed Integers

PAVGB/PAVGW + — Average Packed Integers

Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F E0 /r1 PAVGB mm1, mm2/m64AV/VSSEAverage packed unsigned byte integers from mm2/m64 and mm1 with rounding.
66 0F E0 /r PAVGB xmm1, xmm2/m128AV/VSSE2Average packed unsigned byte integers from xmm2/m128 and xmm1 with rounding.
NP 0F E3 /r1 PAVGW mm1, mm2/m64AV/VSSEAverage packed unsigned word integers from mm2/m64 and mm1 with rounding.
66 0F E3 /r PAVGW xmm1, xmm2/m128AV/VSSE2Average packed unsigned word integers from xmm2/m128 and xmm1 with rounding.
VEX.128.66.0F.WIG E0 /r VPAVGB xmm1, xmm2, xmm3/m128BV/VAVXAverage packed unsigned byte integers from xmm3/m128 and xmm2 with rounding.
VEX.128.66.0F.WIG E3 /r VPAVGW xmm1, xmm2, xmm3/m128BV/VAVXAverage packed unsigned word integers from xmm3/m128 and xmm2 with rounding.
VEX.256.66.0F.WIG E0 /r VPAVGB ymm1, ymm2, ymm3/m256BV/VAVX2Average packed unsigned byte integers from ymm2, and ymm3/m256 with rounding and store to ymm1.
VEX.256.66.0F.WIG E3 /r VPAVGW ymm1, ymm2, ymm3/m256BV/VAVX2Average packed unsigned word integers from ymm2, ymm3/m256 with rounding to ymm1.
EVEX.128.66.0F.WIG E0 /r VPAVGB xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWAverage packed unsigned byte integers from xmm2, and xmm3/m128 with rounding and store to xmm1 under writemask k1.
EVEX.256.66.0F.WIG E0 /r VPAVGB ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWAverage packed unsigned byte integers from ymm2, and ymm3/m256 with rounding and store to ymm1 under writemask k1.
EVEX.512.66.0F.WIG E0 /r VPAVGB zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWAverage packed unsigned byte integers from zmm2, and zmm3/m512 with rounding and store to zmm1 under writemask k1.
EVEX.128.66.0F.WIG E3 /r VPAVGW xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWAverage packed unsigned word integers from xmm2, xmm3/m128 with rounding to xmm1 under writemask k1.
EVEX.256.66.0F.WIG E3 /r VPAVGW ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWAverage packed unsigned word integers from ymm2, ymm3/m256 with rounding to ymm1 under writemask k1.
EVEX.512.66.0F.WIG E3 /r VPAVGW zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWAverage packed unsigned word integers from zmm2, zmm3/m512 with rounding to zmm1 under writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2B, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD average of the packed unsigned integers from the source operand (second operand) and the destination operand (first operand), and stores the results in the destination operand. For each corresponding pair of data elements in the first and second operands, the elements are added together, a 1 is added to the temporary sum, and that result is shifted right one bit position.

+
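For orientation, the per-element rounding described above is the C expression (a + b + 1) >> 1 evaluated in a wider type; a scalar model and its intrinsic counterpart (a sketch, not the instruction's architectural definition) might look like:

#include <stdint.h>
#include <immintrin.h>

/* Scalar model of one PAVGB element: the 9-bit temporary sum cannot overflow uint16_t. */
static uint8_t avg_round_u8(uint8_t a, uint8_t b)
{
    return (uint8_t)(((uint16_t)a + (uint16_t)b + 1) >> 1);
}

/* The SSE2 intrinsic applies the same rounding average to all 16 byte lanes at once. */
static __m128i avg_round_bytes(__m128i a, __m128i b)
{
    return _mm_avg_epu8(a, b);
}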

The (V)PAVGB instruction operates on packed unsigned bytes and the (V)PAVGW instruction operates on packed unsigned words.

+

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

Legacy SSE instructions: The source operand can be an MMX technology register or a 64-bit memory location. The destination operand can be an MMX technology register.

+

128-bit Legacy SSE version: The first source operand is an XMM register. The second operand can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding register destination are unmodified.

+

EVEX.512 encoded version: The first source operand is a ZMM register. The second source operand is a ZMM register or a 512-bit memory location. The destination operand is a ZMM register.

+

VEX.256 and EVEX.256 encoded versions: The first source operand is a YMM register. The second source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+

VEX.128 and EVEX.128 encoded versions: The first source operand is an XMM register. The second source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding register destination are zeroed.

+

Operation + ¶ +

+

PAVGB (With 64-bit Operands) + ¶ +

+
DEST[7:0] := (SRC[7:0] + DEST[7:0] + 1) >> 1; (* Temp sum before shifting is 9 bits *)
+(* Repeat operation performed for bytes 2 through 6 *)
+DEST[63:56] := (SRC[63:56] + DEST[63:56] + 1) >> 1;
+
+

PAVGW (With 64-bit Operands) + ¶ +

+
DEST[15:0] := (SRC[15:0] + DEST[15:0] + 1) >> 1; (* Temp sum before shifting is 17 bits *)
+(* Repeat operation performed for words 2 and 3 *)
+DEST[63:48] := (SRC[63:48] + DEST[63:48] + 1) >> 1;
+
+

PAVGB (With 128-bit Operands) + ¶ +

+
DEST[7:0] := (SRC[7:0] + DEST[7:0] + 1) >> 1; (* Temp sum before shifting is 9 bits *)
+(* Repeat operation performed for bytes 2 through 14 *)
+DEST[127:120] := (SRC[127:120] + DEST[127:120] + 1) >> 1;
+
+

PAVGW (With 128-bit Operands) + ¶ +

+
DEST[15:0] := (SRC[15:0] + DEST[15:0] + 1) >> 1; (* Temp sum before shifting is 17 bits *)
+(* Repeat operation performed for words 2 through 6 *)
+DEST[127:112] := (SRC[127:112] + DEST[127:112] + 1) >> 1;
+
+

VPAVGB (VEX.128 Encoded Version) + ¶ +

+
DEST[7:0] := (SRC1[7:0] + SRC2[7:0] + 1) >> 1;
+(* Repeat operation performed for bytes 2 through 15 *)
+DEST[127:120] := (SRC1[127:120] + SRC2[127:120] + 1) >> 1
+DEST[MAXVL-1:128] := 0
+
+

VPAVGW (VEX.128 Encoded Version) + ¶ +

+
DEST[15:0] := (SRC1[15:0] + SRC2[15:0] + 1) >> 1;
+(* Repeat operation performed for 16-bit words 2 through 7 *)
+DEST[127:112] := (SRC1[127:112] + SRC2[127:112] + 1) >> 1
+DEST[MAXVL-1:128] := 0
+
+

VPAVGB (VEX.256 Encoded Instruction) + ¶ +

+
DEST[7:0] := (SRC1[7:0] + SRC2[7:0] + 1) >> 1; (* Temp sum before shifting is 9 bits *)
+(* Repeat operation performed for bytes 2 through 31 *)
+DEST[255:248] := (SRC1[255:248] + SRC2[255:248] + 1) >> 1;
+
+

VPAVGW (VEX.256 Encoded Instruction) + ¶ +

+
    DEST[15:0] := (SRC1[15:0] + SRC2[15:0] + 1) >> 1; (* Temp sum before shifting is 17 bits *)
+    (* Repeat operation performed for words 2 through 15)
+    DEST[255:14]) := (SRC1[255:240] + SRC2[255:240] + 1) >> 1;
+VPAVGB (EVEX encoded versions)
+(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := (SRC1[i+7:i] + SRC2[i+7:i] + 1) >> 1; (* Temp sum before shifting is 9 bits *)
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+7:i] = 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

VPAVGW (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := (SRC1[i+15:i] + SRC2[i+15:i] + 1) >> 1
+                        ; (* Temp sum before shifting is 17 bits *)
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+15:i] = 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPAVGB __m512i _mm512_avg_epu8( __m512i a, __m512i b);
+
+
VPAVGW __m512i _mm512_avg_epu16( __m512i a, __m512i b);
+
+
VPAVGB __m512i _mm512_mask_avg_epu8(__m512i s, __mmask64 m, __m512i a, __m512i b);
+
+
VPAVGW __m512i _mm512_mask_avg_epu16(__m512i s, __mmask32 m, __m512i a, __m512i b);
+
+
VPAVGB __m512i _mm512_maskz_avg_epu8( __mmask64 m, __m512i a, __m512i b);
+
+
VPAVGW __m512i _mm512_maskz_avg_epu16( __mmask32 m, __m512i a, __m512i b);
+
+
VPAVGB __m256i _mm256_mask_avg_epu8(__m256i s, __mmask32 m, __m256i a, __m256i b);
+
+
VPAVGW __m256i _mm256_mask_avg_epu16(__m256i s, __mmask16 m, __m256i a, __m256i b);
+
+
VPAVGB __m256i _mm256_maskz_avg_epu8( __mmask32 m, __m256i a, __m256i b);
+
+
VPAVGW __m256i _mm256_maskz_avg_epu16( __mmask16 m, __m256i a, __m256i b);
+
+
VPAVGB __m128i _mm_mask_avg_epu8(__m128i s, __mmask16 m, __m128i a, __m128i b);
+
+
VPAVGW __m128i _mm_mask_avg_epu16(__m128i s, __mmask8 m, __m128i a, __m128i b);
+
+
VPAVGB __m128i _mm_maskz_avg_epu8( __mmask16 m, __m128i a, __m128i b);
+
+
VPAVGW __m128i _mm_maskz_avg_epu16( __mmask8 m, __m128i a, __m128i b);
+
+
PAVGB __m64 _mm_avg_pu8 (__m64 a, __m64 b)
+
+
PAVGW __m64 _mm_avg_pu16 (__m64 a, __m64 b)
+
+
(V)PAVGB __m128i _mm_avg_epu8 ( __m128i a, __m128i b)
+
+
(V)PAVGW __m128i _mm_avg_epu16 ( __m128i a, __m128i b)
+
+
VPAVGB __m256i _mm256_avg_epu8 ( __m256i a, __m256i b)
+
+
VPAVGW __m256i _mm256_avg_epu16 ( __m256i a, __m256i b)
+
+

Flags Affected + ¶ +

+

None.

+

Numeric Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/pblendvb.html b/x86/pblendvb.html new file mode 100644 index 0000000..3b209d4 --- /dev/null +++ b/x86/pblendvb.html @@ -0,0 +1,237 @@ + +PBLENDVB + — Variable Blend Packed Bytes

PBLENDVB + — Variable Blend Packed Bytes

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 38 10 /r PBLENDVB xmm1, xmm2/m128, <XMM0>RMV/VSSE4_1Select byte values from xmm1 and xmm2/m128 from mask specified in the high bit of each byte in XMM0 and store the values into xmm1.
VEX.128.66.0F3A.W0 4C /r /is4 VPBLENDVB xmm1, xmm2, xmm3/m128, xmm4RVMRV/VAVXSelect byte values from xmm2 and xmm3/m128 using mask bits in the specified mask register, xmm4, and store the values into xmm1.
VEX.256.66.0F3A.W0 4C /r /is4 VPBLENDVB ymm1, ymm2, ymm3/m256, ymm4RVMRV/VAVX2Select byte values from ymm2 and ymm3/m256 from mask specified in the high bit of each byte in ymm4 and store the values into ymm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r, w)ModRM:r/m (r)<XMM0>N/A
RVMRModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8[7:4]
+

Description + ¶ +

+

Conditionally copies byte elements from the source operand (second operand) to the destination operand (first operand) depending on mask bits defined in the implicit third register argument, XMM0. The mask bits are the most significant bit in each byte element of the XMM0 register.

+

If a mask bit is “1", then the corresponding byte element in the source operand is copied to the destination, else the byte element in the destination operand is left unchanged.

+
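As a usage sketch (the helper name here is invented, not from this reference), the intrinsic form passes the mask explicitly even though the legacy encoding binds it to XMM0:

#include <immintrin.h>

/* For each byte: take it from 'b' when the top bit of the matching 'mask' byte is 1,
   otherwise keep the byte from 'a'. */
static __m128i blend_bytes(__m128i a, __m128i b, __m128i mask)
{
    return _mm_blendv_epi8(a, b, mask);
}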

The register assignment of the implicit third operand is defined to be the architectural register XMM0.

+

128-bit Legacy SSE version: The first source operand and the destination operand is the same. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged. The mask register operand is implicitly defined to be the architectural register XMM0. An attempt to execute PBLENDVB with a VEX prefix will cause #UD.

+

VEX.128 encoded version: The first source operand and the destination operand are XMM registers. The second source operand is an XMM register or 128-bit memory location. The mask operand is the third source register, and encoded in bits[7:4] of the immediate byte(imm8). The bits[3:0] of imm8 are ignored. In 32-bit mode, imm8[7] is ignored. The upper bits (MAXVL-1:128) of the corresponding YMM register (destination register) are zeroed. VEX.L must be 0, otherwise the instruction will #UD. VEX.W must be 0, otherwise, the instruction will #UD.

+

VEX.256 encoded version: The first source operand and the destination operand are YMM registers. The second source operand is an YMM register or 256-bit memory location. The third source register is an YMM register and encoded in bits[7:4] of the immediate byte(imm8). The bits[3:0] of imm8 are ignored. In 32-bit mode, imm8[7] is ignored.

+

VPBLENDVB permits the mask to be any XMM or YMM register. In contrast, PBLENDVB treats XMM0 implicitly as the mask and does not support non-destructive destination operation. An attempt to execute PBLENDVB encoded with a VEX prefix will cause a #UD exception.

+

Operation + ¶ +

+

PBLENDVB (128-bit Legacy SSE Version) + ¶ +

+
MASK := XMM0
+IF (MASK[7] = 1) THEN DEST[7:0] := SRC[7:0];
+ELSE DEST[7:0] := DEST[7:0];
+IF (MASK[15] = 1) THEN DEST[15:8] := SRC[15:8];
+ELSE DEST[15:8] := DEST[15:8];
+IF (MASK[23] = 1) THEN DEST[23:16] := SRC[23:16]
+ELSE DEST[23:16] := DEST[23:16];
+IF (MASK[31] = 1) THEN DEST[31:24] := SRC[31:24]
+ELSE DEST[31:24] := DEST[31:24];
+IF (MASK[39] = 1) THEN DEST[39:32] := SRC[39:32]
+ELSE DEST[39:32] := DEST[39:32];
+IF (MASK[47] = 1) THEN DEST[47:40] := SRC[47:40]
+ELSE DEST[47:40] := DEST[47:40];
+IF (MASK[55] = 1) THEN DEST[55:48] := SRC[55:48]
+ELSE DEST[55:48] := DEST[55:48];
+IF (MASK[63] = 1) THEN DEST[63:56] := SRC[63:56]
+ELSE DEST[63:56] := DEST[63:56];
+IF (MASK[71] = 1) THEN DEST[71:64] := SRC[71:64]
+ELSE DEST[71:64] := DEST[71:64];
+IF (MASK[79] = 1) THEN DEST[79:72] := SRC[79:72]
+ELSE DEST[79:72] := DEST[79:72];
+IF (MASK[87] = 1) THEN DEST[87:80] := SRC[87:80]
+ELSE DEST[87:80] := DEST[87:80];
+IF (MASK[95] = 1) THEN DEST[95:88] := SRC[95:88]
+ELSE DEST[95:88] := DEST[95:88];
+IF (MASK[103] = 1) THEN DEST[103:96] := SRC[103:96]
+ELSE DEST[103:96] := DEST[103:96];
+IF (MASK[111] = 1) THEN DEST[111:104] := SRC[111:104]
+ELSE DEST[111:104] := DEST[111:104];
+IF (MASK[119] = 1) THEN DEST[119:112] := SRC[119:112]
+ELSE DEST[119:112] := DEST[119:112];
+IF (MASK[127] = 1) THEN DEST[127:120] := SRC[127:120]
+ELSE DEST[127:120] := DEST[127:120];
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPBLENDVB (VEX.128 Encoded Version) + ¶ +

+
MASK := SRC3
+IF (MASK[7] = 1) THEN DEST[7:0] := SRC2[7:0];
+ELSE DEST[7:0] := SRC1[7:0];
+IF (MASK[15] = 1) THEN DEST[15:8] := SRC2[15:8];
+ELSE DEST[15:8] := SRC1[15:8];
+IF (MASK[23] = 1) THEN DEST[23:16] := SRC2[23:16]
+ELSE DEST[23:16] := SRC1[23:16];
+IF (MASK[31] = 1) THEN DEST[31:24] := SRC2[31:24]
+ELSE DEST[31:24] := SRC1[31:24];
+IF (MASK[39] = 1) THEN DEST[39:32] := SRC2[39:32]
+ELSE DEST[39:32] := SRC1[39:32];
+IF (MASK[47] = 1) THEN DEST[47:40] := SRC2[47:40]
+ELSE DEST[47:40] := SRC1[47:40];
+IF (MASK[55] = 1) THEN DEST[55:48] := SRC2[55:48]
+ELSE DEST[55:48] := SRC1[55:48];
+IF (MASK[63] = 1) THEN DEST[63:56] := SRC2[63:56]
+ELSE DEST[63:56] := SRC1[63:56];
+IF (MASK[71] = 1) THEN DEST[71:64] := SRC2[71:64]
+ELSE DEST[71:64] := SRC1[71:64];
+IF (MASK[79] = 1) THEN DEST[79:72] := SRC2[79:72]
+ELSE DEST[79:72] := SRC1[79:72];
+IF (MASK[87] = 1) THEN DEST[87:80] := SRC2[87:80]
+ELSE DEST[87:80] := SRC1[87:80];
+IF (MASK[95] = 1) THEN DEST[95:88] := SRC2[95:88]
+ELSE DEST[95:88] := SRC1[95:88];
+IF (MASK[103] = 1) THEN DEST[103:96] := SRC2[103:96]
+ELSE DEST[103:96] := SRC1[103:96];
+IF (MASK[111] = 1) THEN DEST[111:104] := SRC2[111:104]
+ELSE DEST[111:104] := SRC1[111:104];
+IF (MASK[119] = 1) THEN DEST[119:112] := SRC2[119:112]
+ELSE DEST[119:112] := SRC1[119:112];
+IF (MASK[127] = 1) THEN DEST[127:120] := SRC2[127:120]
+ELSE DEST[127:120] := SRC1[127:120];
+DEST[MAXVL-1:128] := 0
+
+

VPBLENDVB (VEX.256 Encoded Version) + ¶ +

+
MASK := SRC3
+IF (MASK[7] == 1) THEN DEST[7:0] := SRC2[7:0];
+ELSE DEST[7:0] := SRC1[7:0];
+IF (MASK[15] == 1) THEN DEST[15:8] := SRC2[15:8];
+ELSE DEST[15:8] := SRC1[15:8];
+IF (MASK[23] == 1) THEN DEST[23:16] := SRC2[23:16]
+ELSE DEST[23:16] := SRC1[23:16];
+IF (MASK[31] == 1) THEN DEST[31:24] := SRC2[31:24]
+ELSE DEST[31:24] := SRC1[31:24];
+IF (MASK[39] == 1) THEN DEST[39:32] := SRC2[39:32]
+ELSE DEST[39:32] := SRC1[39:32];
+IF (MASK[47] == 1) THEN DEST[47:40] := SRC2[47:40]
+ELSE DEST[47:40] := SRC1[47:40];
+IF (MASK[55] == 1) THEN DEST[55:48] := SRC2[55:48]
+ELSE DEST[55:48] := SRC1[55:48];
+IF (MASK[63] == 1) THEN DEST[63:56] := SRC2[63:56]
+ELSE DEST[63:56] := SRC1[63:56];
+IF (MASK[71] == 1) THEN DEST[71:64] := SRC2[71:64]
+ELSE DEST[71:64] := SRC1[71:64];
+IF (MASK[79] == 1) THEN DEST[79:72] := SRC2[79:72]
+ELSE DEST[79:72] := SRC1[79:72];
+IF (MASK[87] == 1) THEN DEST[87:80] := SRC2[87:80]
+ELSE DEST[87:80] := SRC1[87:80];
+IF (MASK[95] == 1) THEN DEST[95:88] := SRC2[95:88]
+ELSE DEST[95:88] := SRC1[95:88];
+IF (MASK[103] == 1) THEN DEST[103:96] := SRC2[103:96]
+ELSE DEST[103:96] := SRC1[103:96];
+IF (MASK[111] == 1) THEN DEST[111:104] := SRC2[111:104]
+ELSE DEST[111:104] := SRC1[111:104];
+IF (MASK[119] == 1) THEN DEST[119:112] := SRC2[119:112]
+ELSE DEST[119:112] := SRC1[119:112];
+IF (MASK[127] == 1) THEN DEST[127:120] := SRC2[127:120]
+ELSE DEST[127:120] := SRC1[127:120];
+IF (MASK[135] == 1) THEN DEST[135:128] := SRC2[135:128];
+ELSE DEST[135:128] := SRC1[135:128];
+IF (MASK[143] == 1) THEN DEST[143:136] := SRC2[143:136];
+ELSE DEST[143:136] := SRC1[143:136];
+IF (MASK[151] == 1) THEN DEST[151:144] := SRC2[151:144]
+ELSE DEST[151:144] := SRC1[151:144];
+IF (MASK[159] == 1) THEN DEST[159:152] := SRC2[159:152]
+ELSE DEST[159:152] := SRC1[159:152];
+IF (MASK[167] == 1) THEN DEST[167:160] := SRC2[167:160]
+ELSE DEST[167:160] := SRC1[167:160];
+IF (MASK[175] == 1) THEN DEST[175:168] := SRC2[175:168]
+ELSE DEST[175:168] := SRC1[175:168];
+IF (MASK[183] == 1) THEN DEST[183:176] := SRC2[183:176]
+ELSE DEST[183:176] := SRC1[183:176];
+IF (MASK[191] == 1) THEN DEST[191:184] := SRC2[191:184]
+ELSE DEST[191:184] := SRC1[191:184];
+IF (MASK[199] == 1) THEN DEST[199:192] := SRC2[199:192]
+ELSE DEST[199:192] := SRC1[199:192];
+IF (MASK[207] == 1) THEN DEST[207:200] := SRC2[207:200]
+ELSE DEST[207:200] := SRC1[207:200];
+IF (MASK[215] == 1) THEN DEST[215:208] := SRC2[215:208]
+ELSE DEST[215:208] := SRC1[215:208];
+IF (MASK[223] == 1) THEN DEST[223:216] := SRC2[223:216]
+ELSE DEST[223:216] := SRC1[223:216];
+IF (MASK[231] == 1) THEN DEST[231:224] := SRC2[231:224]
+ELSE DEST[231:224] := SRC1[231:224];
+IF (MASK[239] == 1) THEN DEST[239:232] := SRC2[239:232]
+ELSE DEST[239:232] := SRC1[239:232];
+IF (MASK[247] == 1) THEN DEST[247:240] := SRC2[247:240]
+ELSE DEST[247:240] := SRC1[247:240];
+IF (MASK[255] == 1) THEN DEST[255:248] := SRC2[255:248]
+ELSE DEST[255:248] := SRC1[255:248]
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
(V)PBLENDVB __m128i _mm_blendv_epi8 (__m128i v1, __m128i v2, __m128i mask);
+
+
VPBLENDVB __m256i _mm256_blendv_epi8 (__m256i v1, __m256i v2, __m256i mask);
+
+

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions,” additionally:

+ + + +
#UD If VEX.W = 1.
diff --git a/x86/pblendw.html b/x86/pblendw.html new file mode 100644 index 0000000..0f63043 --- /dev/null +++ b/x86/pblendw.html @@ -0,0 +1,166 @@ + +PBLENDW + — Blend Packed Words

PBLENDW + — Blend Packed Words

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 3A 0E /r ib PBLENDW xmm1, xmm2/m128, imm8RMIV/VSSE4_1Select words from xmm1 and xmm2/m128 from mask specified in imm8 and store the values into xmm1.
VEX.128.66.0F3A.WIG 0E /r ib VPBLENDW xmm1, xmm2, xmm3/m128, imm8RVMIV/VAVXSelect words from xmm2 and xmm3/m128 from mask specified in imm8 and store the values into xmm1.
VEX.256.66.0F3A.WIG 0E /r ib VPBLENDW ymm1, ymm2, ymm3/m256, imm8RVMIV/VAVX2Select words from ymm2 and ymm3/m256 from mask specified in imm8 and store the values into ymm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMIModRM:reg (r, w)ModRM:r/m (r)imm8N/A
RVMIModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

Words from the source operand (second operand) are conditionally written to the destination operand (first operand) depending on bits in the immediate operand (third operand). The immediate bits (bits 7:0) form a mask that determines whether the corresponding word in the destination is copied from the source. If a bit in the mask, corresponding to a word, is “1", then the word is copied, else the word element in the destination operand is unchanged.

+
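As an illustrative sketch (the function name is assumed), the immediate must be a compile-time constant, with one bit per word position:

#include <immintrin.h>

/* imm8 = 0xF0: words 0-3 come from 'lo', words 4-7 come from 'hi'. */
static __m128i merge_word_halves(__m128i lo, __m128i hi)
{
    return _mm_blend_epi16(lo, hi, 0xF0);
}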

128-bit Legacy SSE version: The second source operand can be an XMM register or a 128-bit memory location. The first source and destination operands are XMM registers. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The second source operand can be an XMM register or a 128-bit memory location. The first source and destination operands are XMM registers. Bits (MAXVL-1:128) of the corresponding YMM register are zeroed.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+

Operation + ¶ +

+

PBLENDW (128-bit Legacy SSE Version) + ¶ +

+
IF (imm8[0] = 1) THEN DEST[15:0] := SRC[15:0]
+ELSE DEST[15:0] := DEST[15:0]
+IF (imm8[1] = 1) THEN DEST[31:16] := SRC[31:16]
+ELSE DEST[31:16] := DEST[31:16]
+IF (imm8[2] = 1) THEN DEST[47:32] := SRC[47:32]
+ELSE DEST[47:32] := DEST[47:32]
+IF (imm8[3] = 1) THEN DEST[63:48] := SRC[63:48]
+ELSE DEST[63:48] := DEST[63:48]
+IF (imm8[4] = 1) THEN DEST[79:64] := SRC[79:64]
+ELSE DEST[79:64] := DEST[79:64]
+IF (imm8[5] = 1) THEN DEST[95:80] := SRC[95:80]
+ELSE DEST[95:80] := DEST[95:80]
+IF (imm8[6] = 1) THEN DEST[111:96] := SRC[111:96]
+ELSE DEST[111:96] := DEST[111:96]
+IF (imm8[7] = 1) THEN DEST[127:112] := SRC[127:112]
+ELSE DEST[127:112] := DEST[127:112]
+
+

VPBLENDW (VEX.128 Encoded Version) + ¶ +

+
IF (imm8[0] = 1) THEN DEST[15:0] := SRC2[15:0]
+ELSE DEST[15:0] := SRC1[15:0]
+IF (imm8[1] = 1) THEN DEST[31:16] := SRC2[31:16]
+ELSE DEST[31:16] := SRC1[31:16]
+IF (imm8[2] = 1) THEN DEST[47:32] := SRC2[47:32]
+ELSE DEST[47:32] := SRC1[47:32]
+IF (imm8[3] = 1) THEN DEST[63:48] := SRC2[63:48]
+ELSE DEST[63:48] := SRC1[63:48]
+IF (imm8[4] = 1) THEN DEST[79:64] := SRC2[79:64]
+ELSE DEST[79:64] := SRC1[79:64]
+IF (imm8[5] = 1) THEN DEST[95:80] := SRC2[95:80]
+ELSE DEST[95:80] := SRC1[95:80]
+IF (imm8[6] = 1) THEN DEST[111:96] := SRC2[111:96]
+ELSE DEST[111:96] := SRC1[111:96]
+IF (imm8[7] = 1) THEN DEST[127:112] := SRC2[127:112]
+ELSE DEST[127:112] := SRC1[127:112]
+DEST[MAXVL-1:128] := 0
+
+

VPBLENDW (VEX.256 Encoded Version) + ¶ +

+
IF (imm8[0] == 1) THEN DEST[15:0] := SRC2[15:0]
+ELSE DEST[15:0] := SRC1[15:0]
+IF (imm8[1] == 1) THEN DEST[31:16] := SRC2[31:16]
+ELSE DEST[31:16] := SRC1[31:16]
+IF (imm8[2] == 1) THEN DEST[47:32] := SRC2[47:32]
+ELSE DEST[47:32] := SRC1[47:32]
+IF (imm8[3] == 1) THEN DEST[63:48] := SRC2[63:48]
+ELSE DEST[63:48] := SRC1[63:48]
+IF (imm8[4] == 1) THEN DEST[79:64] := SRC2[79:64]
+ELSE DEST[79:64] := SRC1[79:64]
+IF (imm8[5] == 1) THEN DEST[95:80] := SRC2[95:80]
+ELSE DEST[95:80] := SRC1[95:80]
+IF (imm8[6] == 1) THEN DEST[111:96] := SRC2[111:96]
+ELSE DEST[111:96] := SRC1[111:96]
+IF (imm8[7] == 1) THEN DEST[127:112] := SRC2[127:112]
+ELSE DEST[127:112] := SRC1[127:112]
+IF (imm8[0] == 1) THEN DEST[143:128] := SRC2[143:128]
+ELSE DEST[143:128] := SRC1[143:128]
+IF (imm8[1] == 1) THEN DEST[159:144] := SRC2[159:144]
+ELSE DEST[159:144] := SRC1[159:144]
+IF (imm8[2] == 1) THEN DEST[175:160] := SRC2[175:160]
+ELSE DEST[175:160] := SRC1[175:160]
+IF (imm8[3] == 1) THEN DEST[191:176] := SRC2[191:176]
+ELSE DEST[191:176] := SRC1[191:176]
+IF (imm8[4] == 1) THEN DEST[207:192] := SRC2[207:192]
+ELSE DEST[207:192] := SRC1[207:192]
+IF (imm8[5] == 1) THEN DEST[223:208] := SRC2[223:208]
+ELSE DEST[223:208] := SRC1[223:208]
+IF (imm8[6] == 1) THEN DEST[239:224] := SRC2[239:224]
+ELSE DEST[239:224] := SRC1[239:224]
+IF (imm8[7] == 1) THEN DEST[255:240] := SRC2[255:240]
+ELSE DEST[255:240] := SRC1[255:240]
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
(V)PBLENDW __m128i _mm_blend_epi16 (__m128i v1, __m128i v2, const int mask);
+
+
VPBLENDW __m256i _mm256_blend_epi16 (__m256i v1, __m256i v2, const int mask)
+
+

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions,” additionally:

+ + + +
#UD If VEX.L = 1 and AVX2 = 0.
diff --git a/x86/pclmulqdq.html b/x86/pclmulqdq.html new file mode 100644 index 0000000..54e1eae --- /dev/null +++ b/x86/pclmulqdq.html @@ -0,0 +1,219 @@ + +PCLMULQDQ + — Carry-Less Multiplication Quadword

PCLMULQDQ + — Carry-Less Multiplication Quadword

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 3A 44 /r ib PCLMULQDQ xmm1, xmm2/m128, imm8AV/VPCLMULQDQCarry-less multiplication of one quadword of xmm1 by one quadword of xmm2/m128, stores the 128-bit result in xmm1. The immediate is used to determine which quadwords of xmm1 and xmm2/m128 should be used.
VEX.128.66.0F3A.WIG 44 /r ib VPCLMULQDQ xmm1, xmm2, xmm3/m128, imm8BV/VPCLMULQDQ AVXCarry-less multiplication of one quadword of xmm2 by one quadword of xmm3/m128, stores the 128-bit result in xmm1. The immediate is used to determine which quadwords of xmm2 and xmm3/m128 should be used.
VEX.256.66.0F3A.WIG 44 /r /ib VPCLMULQDQ ymm1, ymm2, ymm3/m256, imm8BV/VVPCLMULQDQ AVXCarry-less multiplication of one quadword of ymm2 by one quadword of ymm3/m256, stores the 128-bit result in ymm1. The immediate is used to determine which quadwords of ymm2 and ymm3/m256 should be used.
EVEX.128.66.0F3A.WIG 44 /r /ib VPCLMULQDQ xmm1, xmm2, xmm3/m128, imm8CV/VVPCLMULQDQ AVX512VLCarry-less multiplication of one quadword of xmm2 by one quadword of xmm3/m128, stores the 128-bit result in xmm1. The immediate is used to determine which quadwords of xmm2 and xmm3/m128 should be used.
EVEX.256.66.0F3A.WIG 44 /r /ib VPCLMULQDQ ymm1, ymm2, ymm3/m256, imm8CV/VVPCLMULQDQ AVX512VLCarry-less multiplication of one quadword of ymm2 by one quadword of ymm3/m256, stores the 128-bit result in ymm1. The immediate is used to determine which quadwords of ymm2 and ymm3/m256 should be used.
EVEX.512.66.0F3A.WIG 44 /r /ib VPCLMULQDQ zmm1, zmm2, zmm3/m512, imm8CV/VVPCLMULQDQ AVX512FCarry-less multiplication of one quadword of zmm2 by one quadword of zmm3/m512, stores the 128-bit result in zmm1. The immediate is used to determine which quadwords of zmm2 and zmm3/m512 should be used.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)imm8N/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8 (r)
+

Description + ¶ +

+

Performs a carry-less multiplication of two quadwords, selected from the first source and second source operand according to the value of the immediate byte. Bits 4 and 0 are used to select which 64-bit half of each operand to use according to Table 4-13, other bits of the immediate byte are ignored.

+
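For reference only (a sketch, not the architectural definition), the CL_MUL operation selected by the immediate is an ordinary multiplication with XOR in place of addition; a scalar 64x64 -> 128-bit model in C:

#include <stdint.h>

/* Carry-less (polynomial) product of x and y; the result has at most 127 bits, returned as hi:lo. */
static void clmul64(uint64_t x, uint64_t y, uint64_t *hi, uint64_t *lo)
{
    uint64_t h = 0, l = 0;
    for (int i = 0; i < 64; i++) {
        if ((y >> i) & 1) {
            l ^= x << i;
            if (i != 0)
                h ^= x >> (64 - i);
        }
    }
    *hi = h;
    *lo = l;
}

The intrinsic _mm_clmulepi64_si128(a, b, 0x00) listed later in this page performs the same computation on the low quadwords of a and b.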

The EVEX encoded form of this instruction does not support memory fault suppression.

+
+ + + + + + + + + + + + + + + + + + + + +
Imm[4] Imm[0] PCLMULQDQ Operation
0 0 CL_MUL( SRC2[63:0], SRC1[63:0] )
0 1 CL_MUL( SRC2[63:0], SRC1[127:64] )
1 0 CL_MUL( SRC2[127:64], SRC1[63:0] )
1 1 CL_MUL( SRC2[127:64], SRC1[127:64] )
+
Table 4-13. PCLMULQDQ Quadword Selection of Immediate Byte
+
+

1. SRC2 denotes the second source operand, which can be a register or memory; SRC1 denotes the first source and destination operand.

+

The first source operand and the destination operand are the same and must be a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register or a 512/256/128-bit memory location. Bits (VL_MAX-1:128) of the corresponding YMM destination register remain unchanged.

+

Compilers and assemblers may implement the following pseudo-op syntax to simplify programming and emit the required encoding for imm8.

+
+ + + + + + + + + + + + + + + +
Pseudo-Op Imm8 Encoding
PCLMULLQLQDQ xmm1, xmm2 0000_0000B
PCLMULHQLQDQ xmm1, xmm2 0000_0001B
PCLMULLQHQDQ xmm1, xmm2 0001_0000B
PCLMULHQHQDQ xmm1, xmm2 0001_0001B
+
Table 4-14. Pseudo-Op and PCLMULQDQ Implementation
+

Operation + ¶ +

+
define PCLMUL128(X,Y): // helper function
+    FOR i := 0 to 63:
+        TMP [ i ] := X[ 0 ] and Y[ i ]
+        FOR j := 1 to i:
+            TMP [ i ] := TMP [ i ] xor (X[ j ] and Y[ i - j ])
+        DEST[ i ] := TMP[ i ]
+    FOR i := 64 to 126:
+        TMP [ i ] := 0
+        FOR j := i - 63 to 63:
+            TMP [ i ] := TMP [ i ] xor (X[ j ] and Y[ i - j ])
+        DEST[ i ] := TMP[ i ]
+    DEST[127] := 0;
+    RETURN DEST // 128b vector
+
+

PCLMULQDQ (SSE Version) + ¶ +

+
IF imm8[0] = 0:
+    TEMP1 := SRC1.qword[0]
+ELSE:
+    TEMP1 := SRC1.qword[1]
+IF imm8[4] = 0:
+    TEMP2 := SRC2.qword[0]
+ELSE:
+    TEMP2 := SRC2.qword[1]
+DEST[127:0] := PCLMUL128(TEMP1, TEMP2)
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPCLMULQDQ (128b and 256b VEX Encoded Versions) + ¶ +

+
(KL,VL) = (1,128), (2,256)
+FOR i= 0 to KL-1:
+    IF imm8[0] = 0:
+        TEMP1 := SRC1.xmm[i].qword[0]
+    ELSE:
+        TEMP1 := SRC1.xmm[i].qword[1]
+    IF imm8[4] = 0:
+        TEMP2 := SRC2.xmm[i].qword[0]
+    ELSE:
+        TEMP2 := SRC2.xmm[i].qword[1]
+    DEST.xmm[i] := PCLMUL128(TEMP1, TEMP2)
+DEST[MAXVL-1:VL] := 0
+
+

VPCLMULQDQ (EVEX Encoded Version) + ¶ +

+
(KL,VL) = (1,128), (2,256), (4,512)
+FOR i = 0 to KL-1:
+    IF imm8[0] = 0:
+        TEMP1 := SRC1.xmm[i].qword[0]
+    ELSE:
+        TEMP1 := SRC1.xmm[i].qword[1]
+    IF imm8[4] = 0:
+        TEMP2 := SRC2.xmm[i].qword[0]
+    ELSE:
+        TEMP2 := SRC2.xmm[i].qword[1]
+    DEST.xmm[i] := PCLMUL128(TEMP1, TEMP2)
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
(V)PCLMULQDQ __m128i _mm_clmulepi64_si128 (__m128i, __m128i, const int)
+
+
VPCLMULQDQ __m256i _mm256_clmulepi64_epi128(__m256i, __m256i, const int);
+
+
VPCLMULQDQ __m512i _mm512_clmulepi64_epi128(__m512i, __m512i, const int);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions,” additionally:

+ + + +
#UD If VEX.L = 1.
+

EVEX-encoded: See Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/pcmpeqb.pcmpeqw.pcmpeqd.html b/x86/pcmpeqb.pcmpeqw.pcmpeqd.html new file mode 100644 index 0000000..391f7a8 --- /dev/null +++ b/x86/pcmpeqb.pcmpeqw.pcmpeqd.html @@ -0,0 +1,453 @@ + +PCMPEQB/PCMPEQW/PCMPEQD + — Compare Packed Data for Equal

PCMPEQB/PCMPEQW/PCMPEQD + — Compare Packed Data for Equal

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/ En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 74 /r1 PCMPEQB mm, mm/m64AV/VMMXCompare packed bytes in mm/m64 and mm for equality.
66 0F 74 /r PCMPEQB xmm1, xmm2/m128AV/VSSE2Compare packed bytes in xmm2/m128 and xmm1 for equality.
NP 0F 75 /r1 PCMPEQW mm, mm/m64AV/VMMXCompare packed words in mm/m64 and mm for equality.
66 0F 75 /r PCMPEQW xmm1, xmm2/m128AV/VSSE2Compare packed words in xmm2/m128 and xmm1 for equality.
NP 0F 76 /r1 PCMPEQD mm, mm/m64AV/VMMXCompare packed doublewords in mm/m64 and mm for equality.
66 0F 76 /r PCMPEQD xmm1, xmm2/m128AV/VSSE2Compare packed doublewords in xmm2/m128 and xmm1 for equality.
VEX.128.66.0F.WIG 74 /r VPCMPEQB xmm1, xmm2, xmm3/m128BV/VAVXCompare packed bytes in xmm3/m128 and xmm2 for equality.
VEX.128.66.0F.WIG 75 /r VPCMPEQW xmm1, xmm2, xmm3/m128BV/VAVXCompare packed words in xmm3/m128 and xmm2 for equality.
VEX.128.66.0F.WIG 76 /r VPCMPEQD xmm1, xmm2, xmm3/m128BV/VAVXCompare packed doublewords in xmm3/m128 and xmm2 for equality.
VEX.256.66.0F.WIG 74 /r VPCMPEQB ymm1, ymm2, ymm3 /m256BV/VAVX2Compare packed bytes in ymm3/m256 and ymm2 for equality.
VEX.256.66.0F.WIG 75 /r VPCMPEQW ymm1, ymm2, ymm3 /m256BV/VAVX2Compare packed words in ymm3/m256 and ymm2 for equality.
VEX.256.66.0F.WIG 76 /r VPCMPEQD ymm1, ymm2, ymm3 /m256BV/VAVX2Compare packed doublewords in ymm3/m256 and ymm2 for equality.
EVEX.128.66.0F.W0 76 /r VPCMPEQD k1 {k2}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512FCompare Equal between int32 vector xmm2 and int32 vector xmm3/m128/m32bcst, and set vector mask k1 to reflect the zero/nonzero status of each element of the result, under writemask.
EVEX.256.66.0F.W0 76 /r VPCMPEQD k1 {k2}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512FCompare Equal between int32 vector ymm2 and int32 vector ymm3/m256/m32bcst, and set vector mask k1 to reflect the zero/nonzero status of each element of the result, under writemask.
EVEX.512.66.0F.W0 76 /r VPCMPEQD k1 {k2}, zmm2, zmm3/m512/m32bcstCV/VAVX512FCompare Equal between int32 vectors in zmm2 and zmm3/m512/m32bcst, and set destination k1 according to the comparison results under writemask k2.
EVEX.128.66.0F.WIG 74 /r VPCMPEQB k1 {k2}, xmm2, xmm3 /m128DV/VAVX512VL AVX512BWCompare packed bytes in xmm3/m128 and xmm2 for equality and set vector mask k1 to reflect the zero/nonzero status of each element of the result, under writemask.
EVEX.256.66.0F.WIG 74 /r VPCMPEQB k1 {k2}, ymm2, ymm3 /m256DV/VAVX512VL AVX512BWCompare packed bytes in ymm3/m256 and ymm2 for equality and set vector mask k1 to reflect the zero/nonzero status of each element of the result, under writemask.
EVEX.512.66.0F.WIG 74 /r VPCMPEQB k1 {k2}, zmm2, zmm3 /m512DV/VAVX512BWCompare packed bytes in zmm3/m512 and zmm2 for equality and set vector mask k1 to reflect the zero/nonzero status of each element of the result, under writemask.
EVEX.128.66.0F.WIG 75 /r VPCMPEQW k1 {k2}, xmm2, xmm3 /m128DV/VAVX512VL AVX512BWCompare packed words in xmm3/m128 and xmm2 for equality and set vector mask k1 to reflect the zero/nonzero status of each element of the result, under writemask.
EVEX.256.66.0F.WIG 75 /r VPCMPEQW k1 {k2}, ymm2, ymm3 /m256DV/VAVX512VL AVX512BWCompare packed words in ymm3/m256 and ymm2 for equality and set vector mask k1 to reflect the zero/nonzero status of each element of the result, under writemask.
EVEX.512.66.0F.WIG 75 /r VPCMPEQW k1 {k2}, zmm2, zmm3 /m512DV/VAVX512BWCompare packed words in zmm3/m512 and zmm2 for equality and set vector mask k1 to reflect the zero/nonzero status of each element of the result, under writemask.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
DFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD compare for equality of the packed bytes, words, or doublewords in the destination operand (first operand) and the source operand (second operand). If a pair of data elements is equal, the corresponding data element in the destination operand is set to all 1s; otherwise, it is set to all 0s.

+
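A common pattern (sketched here; the helper name is not from this reference) combines the all-1s/all-0s byte result with PMOVMSKB to locate matching positions:

#include <immintrin.h>

/* Bit i of the return value is set when byte i of 'block' equals c. */
static int byte_match_mask(__m128i block, char c)
{
    __m128i eq = _mm_cmpeq_epi8(block, _mm_set1_epi8(c));
    return _mm_movemask_epi8(eq);
}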

The (V)PCMPEQB instruction compares the corresponding bytes in the destination and source operands; the (V)PCMPEQW instruction compares the corresponding words in the destination and source operands; and the (V)PCMPEQD instruction compares the corresponding doublewords in the destination and source operands.

+

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

Legacy SSE instructions: The source operand can be an MMX technology register or a 64-bit memory location. The destination operand can be an MMX technology register.

+

128-bit Legacy SSE version: The second source operand can be an XMM register or a 128-bit memory location. The first source and destination operands are XMM registers. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The second source operand can be an XMM register or a 128-bit memory location. The first source and destination operands are XMM registers. Bits (MAXVL-1:128) of the corresponding YMM register are zeroed.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+

EVEX encoded VPCMPEQD: The first source operand (second operand) is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand (first operand) is a mask register updated according to the writemask k2.

+

EVEX encoded VPCMPEQB/W: The first source operand (second operand) is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location. The destination operand (first operand) is a mask register updated according to the writemask k2.

+

Operation + ¶ +

+

PCMPEQB (With 64-bit Operands) + ¶ +

+
IF DEST[7:0] = SRC[7:0]
+    THEN DEST[7:0] := FFH;
+    ELSE DEST[7:0] := 0; FI;
+(* Continue comparison of 2nd through 7th bytes in DEST and SRC *)
+IF DEST[63:56] = SRC[63:56]
+    THEN DEST[63:56] := FFH;
+    ELSE DEST[63:56] := 0; FI;
+
+

COMPARE_BYTES_EQUAL (SRC1, SRC2) + ¶ +

+
    IF SRC1[7:0] = SRC2[7:0]
+    THEN DEST[7:0] := FFH;
+    ELSE DEST[7:0] := 0; FI;
+(* Continue comparison of 2nd through 15th bytes in SRC1 and SRC2 *)
+    IF SRC1[127:120] = SRC2[127:120]
+    THEN DEST[127:120] := FFH;
+    ELSE DEST[127:120] := 0; FI;
+
+

COMPARE_WORDS_EQUAL (SRC1, SRC2) + ¶ +

+
    IF SRC1[15:0] = SRC2[15:0]
+    THEN DEST[15:0] := FFFFH;
+    ELSE DEST[15:0] := 0; FI;
+(* Continue comparison of 2nd through 7th 16-bit words in SRC1 and SRC2 *)
+    IF SRC1[127:112] = SRC2[127:112]
+    THEN DEST[127:112] := FFFFH;
+    ELSE DEST[127:112] := 0; FI;
+
+

COMPARE_DWORDS_EQUAL (SRC1, SRC2) + ¶ +

+
    IF SRC1[31:0] = SRC2[31:0]
+    THEN DEST[31:0] := FFFFFFFFH;
+    ELSE DEST[31:0] := 0; FI;
+(* Continue comparison of 2nd through 3rd 32-bit dwords in SRC1 and SRC2 *)
+    IF SRC1[127:96] = SRC2[127:96]
+    THEN DEST[127:96] := FFFFFFFFH;
+    ELSE DEST[127:96] := 0; FI;
+
+

PCMPEQB (With 128-bit Operands) + ¶ +

+
DEST[127:0] := COMPARE_BYTES_EQUAL(DEST[127:0],SRC[127:0])
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPCMPEQB (VEX.128 Encoded Version) + ¶ +

+
DEST[127:0] := COMPARE_BYTES_EQUAL(SRC1[127:0],SRC2[127:0])
+DEST[MAXVL-1:128] := 0
+
+

VPCMPEQB (VEX.256 Encoded Version) + ¶ +

+
DEST[127:0] := COMPARE_BYTES_EQUAL(SRC1[127:0],SRC2[127:0])
+DEST[255:128] := COMPARE_BYTES_EQUAL(SRC1[255:128],SRC2[255:128])
+DEST[MAXVL-1:256] := 0
+
+

VPCMPEQB (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k2[j] OR *no writemask*
+        THEN
+            /* signed comparison */
+            CMP := SRC1[i+7:i] == SRC2[i+7:i];
+            IF CMP = TRUE
+                THEN DEST[j] := 1;
+                ELSE DEST[j] := 0; FI;
+        ELSE DEST[j] := 0
+                    ; zeroing-masking only
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

PCMPEQW (With 64-bit Operands) + ¶ +

+
IF DEST[15:0] = SRC[15:0]
+    THEN DEST[15:0] := FFFFH;
+    ELSE DEST[15:0] := 0; FI;
+(* Continue comparison of 2nd and 3rd words in DEST and SRC *)
+IF DEST[63:48] = SRC[63:48]
+    THEN DEST[63:48] := FFFFH;
+    ELSE DEST[63:48] := 0; FI;
+
+

PCMPEQW (With 128-bit Operands) + ¶ +

+
DEST[127:0] := COMPARE_WORDS_EQUAL(DEST[127:0],SRC[127:0])
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPCMPEQW (VEX.128 Encoded Version) + ¶ +

+
DEST[127:0] := COMPARE_WORDS_EQUAL(SRC1[127:0],SRC2[127:0])
+DEST[MAXVL-1:128] := 0
+
+

VPCMPEQW (VEX.256 Encoded Version) + ¶ +

+
DEST[127:0] := COMPARE_WORDS_EQUAL(SRC1[127:0],SRC2[127:0])
+DEST[255:128] := COMPARE_WORDS_EQUAL(SRC1[255:128],SRC2[255:128])
+DEST[MAXVL-1:256] := 0
+
+

VPCMPEQW (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k2[j] OR *no writemask*
+        THEN
+            /* signed comparison */
+            CMP := SRC1[i+15:i] == SRC2[i+15:i];
+            IF CMP = TRUE
+                THEN DEST[j] := 1;
+                ELSE DEST[j] := 0; FI;
+        ELSE DEST[j] := 0
+                    ; zeroing-masking only
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

PCMPEQD (With 64-bit Operands) + ¶ +

+
IF DEST[31:0] = SRC[31:0]
+    THEN DEST[31:0] := FFFFFFFFH;
+    ELSE DEST[31:0] := 0; FI;
+IF DEST[63:32] = SRC[63:32]
+    THEN DEST[63:32] := FFFFFFFFH;
+    ELSE DEST[63:32] := 0; FI;
+
+

PCMPEQD (With 128-bit Operands) + ¶ +

+
DEST[127:0] := COMPARE_DWORDS_EQUAL(DEST[127:0],SRC[127:0])
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPCMPEQD (VEX.128 Encoded Version) + ¶ +

+
DEST[127:0] := COMPARE_DWORDS_EQUAL(SRC1[127:0],SRC2[127:0])
+DEST[MAXVL-1:128] := 0
+
+

VPCMPEQD (VEX.256 Encoded Version) + ¶ +

+
DEST[127:0] := COMPARE_DWORDS_EQUAL(SRC1[127:0],SRC2[127:0])
+DEST[255:128] := COMPARE_DWORDS_EQUAL(SRC1[255:128],SRC2[255:128])
+DEST[MAXVL-1:256] := 0
+
+

VPCMPEQD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k2[j] OR *no writemask*
+        THEN
+            /* signed comparison */
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN CMP := SRC1[i+31:i] = SRC2[31:0];
+                ELSE CMP := SRC1[i+31:i] = SRC2[i+31:i];
+            FI;
+            IF CMP = TRUE
+                THEN DEST[j] := 1;
+                ELSE DEST[j] := 0; FI;
+        ELSE DEST[j] := 0
+                    ; zeroing-masking only
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPCMPEQB __mmask64 _mm512_cmpeq_epi8_mask(__m512i a, __m512i b);
+
+
VPCMPEQB __mmask64 _mm512_mask_cmpeq_epi8_mask(__mmask64 k, __m512i a, __m512i b);
+
+
VPCMPEQB __mmask32 _mm256_cmpeq_epi8_mask(__m256i a, __m256i b);
+
+
VPCMPEQB __mmask32 _mm256_mask_cmpeq_epi8_mask(__mmask32 k, __m256i a, __m256i b);
+
+
VPCMPEQB __mmask16 _mm_cmpeq_epi8_mask(__m128i a, __m128i b);
+
+
VPCMPEQB __mmask16 _mm_mask_cmpeq_epi8_mask(__mmask16 k, __m128i a, __m128i b);
+
+
VPCMPEQW __mmask32 _mm512_cmpeq_epi16_mask(__m512i a, __m512i b);
+
+
VPCMPEQW __mmask32 _mm512_mask_cmpeq_epi16_mask(__mmask32 k, __m512i a, __m512i b);
+
+
VPCMPEQW __mmask16 _mm256_cmpeq_epi16_mask(__m256i a, __m256i b);
+
+
VPCMPEQW __mmask16 _mm256_mask_cmpeq_epi16_mask(__mmask16 k, __m256i a, __m256i b);
+
+
VPCMPEQW __mmask8 _mm_cmpeq_epi16_mask(__m128i a, __m128i b);
+
+
VPCMPEQW __mmask8 _mm_mask_cmpeq_epi16_mask(__mmask8 k, __m128i a, __m128i b);
+
+
VPCMPEQD __mmask16 _mm512_cmpeq_epi32_mask( __m512i a, __m512i b);
+
+
VPCMPEQD __mmask16 _mm512_mask_cmpeq_epi32_mask(__mmask16 k, __m512i a, __m512i b);
+
+
VPCMPEQD __mmask8 _mm256_cmpeq_epi32_mask(__m256i a, __m256i b);
+
+
VPCMPEQD __mmask8 _mm256_mask_cmpeq_epi32_mask(__mmask8 k, __m256i a, __m256i b);
+
+
VPCMPEQD __mmask8 _mm_cmpeq_epi32_mask(__m128i a, __m128i b);
+
+
VPCMPEQD __mmask8 _mm_mask_cmpeq_epi32_mask(__mmask8 k, __m128i a, __m128i b);
+
+
PCMPEQB __m64 _mm_cmpeq_pi8 (__m64 m1, __m64 m2)
+
+
PCMPEQW __m64 _mm_cmpeq_pi16 (__m64 m1, __m64 m2)
+
+
PCMPEQD __m64 _mm_cmpeq_pi32 (__m64 m1, __m64 m2)
+
+
(V)PCMPEQB __m128i _mm_cmpeq_epi8 ( __m128i a, __m128i b)
+
+
(V)PCMPEQW __m128i _mm_cmpeq_epi16 ( __m128i a, __m128i b)
+
+
(V)PCMPEQD __m128i _mm_cmpeq_epi32 ( __m128i a, __m128i b)
+
+
VPCMPEQB __m256i _mm256_cmpeq_epi8 ( __m256i a, __m256i b)
+
+
VPCMPEQW __m256i _mm256_cmpeq_epi16 ( __m256i a, __m256i b)
+
+
VPCMPEQD __m256i _mm256_cmpeq_epi32 ( __m256i a, __m256i b)
+
+

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded VPCMPEQD, see Table 2-49, “Type E4 Class Exception Conditions.”

+

EVEX-encoded VPCMPEQB/W, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/pcmpeqq.html b/x86/pcmpeqq.html new file mode 100644 index 0000000..bb91918 --- /dev/null +++ b/x86/pcmpeqq.html @@ -0,0 +1,182 @@ + +PCMPEQQ + — Compare Packed Qword Data for Equal

PCMPEQQ + — Compare Packed Qword Data for Equal

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 38 29 /r PCMPEQQ xmm1, xmm2/m128AV/VSSE4_1Compare packed qwords in xmm2/m128 and xmm1 for equality.
VEX.128.66.0F38.WIG 29 /r VPCMPEQQ xmm1, xmm2, xmm3/m128BV/VAVXCompare packed quadwords in xmm3/m128 and xmm2 for equality.
VEX.256.66.0F38.WIG 29 /r VPCMPEQQ ymm1, ymm2, ymm3 /m256BV/VAVX2Compare packed quadwords in ymm3/m256 and ymm2 for equality.
EVEX.128.66.0F38.W1 29 /r VPCMPEQQ k1 {k2}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512FCompare Equal between int64 vector xmm2 and int64 vector xmm3/m128/m64bcst, and set vector mask k1 to reflect the zero/nonzero status of each element of the result, under writemask.
EVEX.256.66.0F38.W1 29 /r VPCMPEQQ k1 {k2}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FCompare Equal between int64 vector ymm2 and int64 vector ymm3/m256/m64bcst, and set vector mask k1 to reflect the zero/nonzero status of each element of the result, under writemask.
EVEX.512.66.0F38.W1 29 /r VPCMPEQQ k1 {k2}, zmm2, zmm3/m512/m64bcstCV/VAVX512FCompare Equal between int64 vector zmm2 and int64 vector zmm3/m512/m64bcst, and set vector mask k1 to reflect the zero/nonzero status of each element of the result, under writemask.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs an SIMD compare for equality of the packed quadwords in the destination operand (first operand) and the source operand (second operand). If a pair of data elements is equal, the corresponding data element in the destination is set to all 1s; otherwise, it is set to 0s.

+
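In contrast to the legacy forms, the EVEX form writes one result bit per quadword into a mask register; a minimal sketch assuming AVX-512F hardware support:

#include <immintrin.h>

/* Bit j of the returned mask is set when the j-th 64-bit lanes of a and b are equal. */
static __mmask8 qwords_equal_mask(__m512i a, __m512i b)
{
    return _mm512_cmpeq_epi64_mask(a, b);
}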

128-bit Legacy SSE version: The second source operand can be an XMM register or a 128-bit memory location. The first source and destination operands are XMM registers. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The second source operand can be an XMM register or a 128-bit memory location. The first source and destination operands are XMM registers. Bits (MAXVL-1:128) of the corresponding YMM register are zeroed.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+

EVEX encoded VPCMPEQQ: The first source operand (second operand) is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand (first operand) is a mask register updated according to the writemask k2.

+

Operation + ¶ +

+

PCMPEQQ (With 128-bit Operands) + ¶ +

+
IF (DEST[63:0] = SRC[63:0])
+    THEN DEST[63:0] := FFFFFFFFFFFFFFFFH;
+    ELSE DEST[63:0] := 0; FI;
+IF (DEST[127:64] = SRC[127:64])
+    THEN DEST[127:64] := FFFFFFFFFFFFFFFFH;
+    ELSE DEST[127:64] := 0; FI;
+DEST[MAXVL-1:128] (Unmodified)
+
+

COMPARE_QWORDS_EQUAL (SRC1, SRC2) + ¶ +

+
IF SRC1[63:0] = SRC2[63:0]
+THEN DEST[63:0] := FFFFFFFFFFFFFFFFH;
+ELSE DEST[63:0] := 0; FI;
+IF SRC1[127:64] = SRC2[127:64]
+THEN DEST[127:64] := FFFFFFFFFFFFFFFFH;
+ELSE DEST[127:64] := 0; FI;
+
+

VPCMPEQQ (VEX.128 Encoded Version) + ¶ +

+
DEST[127:0] := COMPARE_QWORDS_EQUAL(SRC1,SRC2)
+DEST[MAXVL-1:128] := 0
+
+

VPCMPEQQ (VEX.256 Encoded Version) + ¶ +

+
DEST[127:0] := COMPARE_QWORDS_EQUAL(SRC1[127:0],SRC2[127:0])
+DEST[255:128] := COMPARE_QWORDS_EQUAL(SRC1[255:128],SRC2[255:128])
+DEST[MAXVL-1:256] := 0
+
+

VPCMPEQQ (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k2[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN CMP := SRC1[i+63:i] = SRC2[63:0];
+                ELSE CMP := SRC1[i+63:i] = SRC2[i+63:i];
+            FI;
+            IF CMP = TRUE
+                THEN DEST[j] := 1;
+                ELSE DEST[j] := 0; FI;
+        ELSE DEST[j] := 0
+                    ; zeroing-masking only
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPCMPEQQ __mmask8 _mm512_cmpeq_epi64_mask( __m512i a, __m512i b);
+
+
VPCMPEQQ __mmask8 _mm512_mask_cmpeq_epi64_mask(__mmask8 k, __m512i a, __m512i b);
+
+
VPCMPEQQ __mmask8 _mm256_cmpeq_epi64_mask( __m256i a, __m256i b);
+
+
VPCMPEQQ __mmask8 _mm256_mask_cmpeq_epi64_mask(__mmask8 k, __m256i a, __m256i b);
+
+
VPCMPEQQ __mmask8 _mm_cmpeq_epi64_mask( __m128i a, __m128i b);
+
+
VPCMPEQQ __mmask8 _mm_mask_cmpeq_epi64_mask(__mmask8 k, __m128i a, __m128i b);
+
+
(V)PCMPEQQ __m128i _mm_cmpeq_epi64(__m128i a, __m128i b);
+
+
VPCMPEQQ __m256i _mm256_cmpeq_epi64( __m256i a, __m256i b);
+
+

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded VPCMPEQQ, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/pcmpestri.html b/x86/pcmpestri.html new file mode 100644 index 0000000..8d60dc0 --- /dev/null +++ b/x86/pcmpestri.html @@ -0,0 +1,123 @@ + +PCMPESTRI + — Packed Compare Explicit Length Strings, Return Index

PCMPESTRI + — Packed Compare Explicit Length Strings, Return Index

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 3A 61 /r imm8 PCMPESTRI xmm1, xmm2/m128, imm8RMIV/VSSE4_2Perform a packed comparison of string data with explicit lengths, generating an index, and storing the result in ECX.
VEX.128.66.0F3A 61 /r ib VPCMPESTRI xmm1, xmm2/m128, imm8RMIV/VAVXPerform a packed comparison of string data with explicit lengths, generating an index, and storing the result in ECX.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMIModRM:reg (r)ModRM:r/m (r)imm8N/A
+

Description + ¶ +

+

The instruction compares and processes data from two string fragments based on the encoded value in the imm8 control byte (see Section 4.1, “Imm8 Control Byte Operation for PCMPESTRI / PCMPESTRM / PCMPISTRI / PCMPISTRM”), and generates an index stored to the count register (ECX).

+

Each string fragment is represented by two values. The first value is an xmm (or possibly m128 for the second operand) which contains the data elements of the string (byte or word data). The second value is stored in an input length register. The input length register is EAX/RAX (for xmm1) or EDX/RDX (for xmm2/m128). The length represents the number of bytes/words which are valid for the respective xmm/m128 data.

+

The length of each input is interpreted as being the absolute-value of the value in the length register. The absolute-value computation saturates to 16 (for bytes) and 8 (for words), based on the value of imm8[bit3] when the value in the length register is greater than 16 (8) or less than -16 (-8).

+

The comparison and aggregation operations are performed according to the encoded value of imm8 bit fields (see Section 4.1). The index of the first (or last, according to imm8[6]) set bit of IntRes2 (see Section 4.1.4) is returned in ECX. If no bits are set in IntRes2, ECX is set to 16 (8).

+
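As a usage sketch under the _SIDD_UBYTE_OPS / _SIDD_CMP_EQUAL_ANY aggregation (the helper name and scenario are assumptions, not part of this reference), the index written to ECX is returned directly by the intrinsic:

#include <immintrin.h>

/* Index of the first byte of 'hay' (valid length lhay <= 16) that equals any byte of
   'set' (valid length lset <= 16); returns 16 when nothing matches. */
static int first_of_any(__m128i set, int lset, __m128i hay, int lhay)
{
    return _mm_cmpestri(set, lset, hay, lhay,
                        _SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_LEAST_SIGNIFICANT);
}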

Note that the Arithmetic Flags are written in a non-standard manner in order to supply the most relevant information:

+

CFlag – Reset if IntRes2 is equal to zero, set otherwise

+

ZFlag – Set if absolute-value of EDX is < 16 (8), reset otherwise

+

SFlag – Set if absolute-value of EAX is < 16 (8), reset otherwise

+

OFlag – IntRes2[0]

+

AFlag – Reset

+

PFlag – Reset

+

Effective Operand Size + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Operating mode/sizeOperand 1Operand 2Length 1Length 2Result
16 bitxmmxmm/m128EAXEDXECX
32 bitxmmxmm/m128EAXEDXECX
64 bitxmmxmm/m128EAXEDXECX
64 bit + REX.Wxmmxmm/m128RAXRDXECX
+

Intel C/C++ Compiler Intrinsic Equivalent For Returning Index + ¶ +

+

int _mm_cmpestri (__m128i a, int la, __m128i b, int lb, const int mode);

+

Intel C/C++ Compiler Intrinsics For Reading EFlag Results + ¶ +

+

int _mm_cmpestra (__m128i a, int la, __m128i b, int lb, const int mode);

+

int _mm_cmpestrc (__m128i a, int la, __m128i b, int lb, const int mode);

+

int _mm_cmpestro (__m128i a, int la, __m128i b, int lb, const int mode);

+

int _mm_cmpestrs (__m128i a, int la, __m128i b, int lb, const int mode);

+

int _mm_cmpestrz (__m128i a, int la, __m128i b, int lb, const int mode);

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions,” additionally, this instruction does not cause #GP if the memory operand is not aligned to 16 Byte boundary, and:

+ + + + + +
#UD If VEX.L = 1.
If VEX.vvvv ≠ 1111B.
diff --git a/x86/pcmpestrm.html b/x86/pcmpestrm.html new file mode 100644 index 0000000..87e8509 --- /dev/null +++ b/x86/pcmpestrm.html @@ -0,0 +1,124 @@ + +PCMPESTRM + — Packed Compare Explicit Length Strings, Return Mask

PCMPESTRM + — Packed Compare Explicit Length Strings, Return Mask

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 3A 60 /r imm8 PCMPESTRM xmm1, xmm2/m128, imm8RMIV/VSSE4_2Perform a packed comparison of string data with explicit lengths, generating a mask, and storing the result in XMM0.
VEX.128.66.0F3A 60 /r ib VPCMPESTRM xmm1, xmm2/m128, imm8RMIV/VAVXPerform a packed comparison of string data with explicit lengths, generating a mask, and storing the result in XMM0.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMIModRM:reg (r)ModRM:r/m (r)imm8N/A
+

Description + ¶ +

+

The instruction compares data from two string fragments based on the encoded value in the imm8 control byte (see Section 4.1, “Imm8 Control Byte Operation for PCMPESTRI / PCMPESTRM / PCMPISTRI / PCMPISTRM”), and generates a mask stored to XMM0.

+

Each string fragment is represented by two values. The first value is an xmm (or possibly m128 for the second operand) which contains the data elements of the string (byte or word data). The second value is stored in an input length register. The input length register is EAX/RAX (for xmm1) or EDX/RDX (for xmm2/m128). The length represents the number of bytes/words which are valid for the respective xmm/m128 data.

+

The length of each input is interpreted as being the absolute-value of the value in the length register. The absolute-value computation saturates to 16 (for bytes) and 8 (for words), based on the value of imm8[bit3] when the value in the length register is greater than 16 (8) or less than -16 (-8).

+

The comparison and aggregation operations are performed according to the encoded value of imm8 bit fields (see Section 4.1). As defined by imm8[6], IntRes2 is then either stored to the least significant bits of XMM0 (zero extended to 128 bits) or expanded into a byte/word-mask and then stored to XMM0.

+
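A companion sketch to the PCMPESTRI example (the helper name is assumed): with _SIDD_UNIT_MASK the intrinsic returns the expanded byte mask described above rather than a bit mask:

#include <immintrin.h>

/* Each byte of the result is 0xFF where the corresponding 'hay' byte (valid length
   lhay <= 16) equals some byte of 'set' (valid length lset <= 16), else 0x00. */
static __m128i match_byte_mask(__m128i set, int lset, __m128i hay, int lhay)
{
    return _mm_cmpestrm(set, lset, hay, lhay,
                        _SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_UNIT_MASK);
}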

Note that the Arithmetic Flags are written in a non-standard manner in order to supply the most relevant information:

+

CFlag – Reset if IntRes2 is equal to zero, set otherwise

+

ZFlag – Set if absolute-value of EDX is < 16 (8), reset otherwise

+

SFlag – Set if absolute-value of EAX is < 16 (8), reset otherwise

+

OFlag – IntRes2[0]

+

AFlag – Reset

+

PFlag – Reset

+

Note: In VEX.128 encoded versions, bits (MAXVL-1:128) of XMM0 are zeroed. VEX.vvvv is reserved and must be 1111b, VEX.L must be 0, otherwise the instruction will #UD.

+

Effective Operand Size + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Operating mode/sizeOperand 1Operand 2Length 1Length 2Result
16 bitxmmxmm/m128EAXEDXXMM0
32 bitxmmxmm/m128EAXEDXXMM0
64 bitxmmxmm/m128EAXEDXXMM0
64 bit + REX.Wxmmxmm/m128RAXRDXXMM0
+

Intel C/C++ Compiler Intrinsic Equivalent For Returning Mask + ¶ +

+

__m128i _mm_cmpestrm (__m128i a, int la, __m128i b, int lb, const int mode);
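A minimal usage sketch for the mask-returning form follows (illustrative only; it assumes SSE4.2 and <nmmintrin.h>, and the buffers, lengths, and chosen imm8 control constants are examples, not requirements):

#include <stdio.h>
#include <string.h>
#include <nmmintrin.h>   /* SSE4.2 string/text intrinsics */

int main(void)
{
    const char set[16]  = "aeiou";            /* bytes to look for          */
    const char text[16] = "explicit length";  /* fragment to scan           */
    int la = 5;                               /* valid bytes in set         */
    int lb = (int)strlen(text);               /* valid bytes in text        */

    __m128i a = _mm_loadu_si128((const __m128i *)set);
    __m128i b = _mm_loadu_si128((const __m128i *)text);

    /* imm8: unsigned bytes, "equal any" aggregation, expand IntRes2 into a
       byte mask (unit mask) rather than leaving it as a bit mask. */
    __m128i mask = _mm_cmpestrm(a, la, b, lb,
                                _SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ANY |
                                _SIDD_UNIT_MASK);

    unsigned char m[16];
    _mm_storeu_si128((__m128i *)m, mask);
    for (int i = 0; i < lb; i++)
        printf("%c -> %s\n", text[i], m[i] ? "vowel" : "other");
    return 0;
}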

+

Intel C/C++ Compiler Intrinsics For Reading EFlag Results + ¶ +

+

int _mm_cmpestra (__m128i a, int la, __m128i b, int lb, const int mode);

+

int _mm_cmpestrc (__m128i a, int la, __m128i b, int lb, const int mode);

+

int _mm_cmpestro (__m128i a, int la, __m128i b, int lb, const int mode);

+

int _mm_cmpestrs (__m128i a, int la, __m128i b, int lb, const int mode);

+

int _mm_cmpestrz (__m128i a, int la, __m128i b, int lb, const int mode);

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions.” Additionally, this instruction does not cause #GP if the memory operand is not aligned to a 16-byte boundary, and:

+ + + + + +
#UDIf VEX.L = 1.
If VEX.vvvv ≠ 1111B.
diff --git a/x86/pcmpgtb.pcmpgtw.pcmpgtd.html b/x86/pcmpgtb.pcmpgtw.pcmpgtd.html new file mode 100644 index 0000000..584f2ee --- /dev/null +++ b/x86/pcmpgtb.pcmpgtw.pcmpgtd.html @@ -0,0 +1,455 @@ + +PCMPGTB/PCMPGTW/PCMPGTD + — Compare Packed Signed Integers for Greater Than

PCMPGTB/PCMPGTW/PCMPGTD + — Compare Packed Signed Integers for Greater Than

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 64 /r1 PCMPGTB mm, mm/m64AV/VMMXCompare packed signed byte integers in mm and mm/m64 for greater than.
66 0F 64 /r PCMPGTB xmm1, xmm2/m128AV/VSSE2Compare packed signed byte integers in xmm1 and xmm2/m128 for greater than.
NP 0F 65 /r1 PCMPGTW mm, mm/m64AV/VMMXCompare packed signed word integers in mm and mm/m64 for greater than.
66 0F 65 /r PCMPGTW xmm1, xmm2/m128AV/VSSE2Compare packed signed word integers in xmm1 and xmm2/m128 for greater than.
NP 0F 66 /r1 PCMPGTD mm, mm/m64AV/VMMXCompare packed signed doubleword integers in mm and mm/m64 for greater than.
66 0F 66 /r PCMPGTD xmm1, xmm2/m128AV/VSSE2Compare packed signed doubleword integers in xmm1 and xmm2/m128 for greater than.
VEX.128.66.0F.WIG 64 /r VPCMPGTB xmm1, xmm2, xmm3/m128BV/VAVXCompare packed signed byte integers in xmm2 and xmm3/m128 for greater than.
VEX.128.66.0F.WIG 65 /r VPCMPGTW xmm1, xmm2, xmm3/m128BV/VAVXCompare packed signed word integers in xmm2 and xmm3/m128 for greater than.
VEX.128.66.0F.WIG 66 /r VPCMPGTD xmm1, xmm2, xmm3/m128BV/VAVXCompare packed signed doubleword integers in xmm2 and xmm3/m128 for greater than.
VEX.256.66.0F.WIG 64 /r VPCMPGTB ymm1, ymm2, ymm3/m256BV/VAVX2Compare packed signed byte integers in ymm2 and ymm3/m256 for greater than.
VEX.256.66.0F.WIG 65 /r VPCMPGTW ymm1, ymm2, ymm3/m256BV/VAVX2Compare packed signed word integers in ymm2 and ymm3/m256 for greater than.
VEX.256.66.0F.WIG 66 /r VPCMPGTD ymm1, ymm2, ymm3/m256BV/VAVX2Compare packed signed doubleword integers in ymm2 and ymm3/m256 for greater than.
EVEX.128.66.0F.W0 66 /r VPCMPGTD k1 {k2}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512FCompare Greater between int32 vector xmm2 and int32 vector xmm3/m128/m32bcst, and set vector mask k1 to reflect the zero/nonzero status of each element of the result, under writemask.
EVEX.256.66.0F.W0 66 /r VPCMPGTD k1 {k2}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512FCompare Greater between int32 vector ymm2 and int32 vector ymm3/m256/m32bcst, and set vector mask k1 to reflect the zero/nonzero status of each element of the result, under writemask.
EVEX.512.66.0F.W0 66 /r VPCMPGTD k1 {k2}, zmm2, zmm3/m512/m32bcstCV/VAVX512FCompare Greater between int32 elements in zmm2 and zmm3/m512/m32bcst, and set destination k1 according to the comparison results under writemask. k2.
EVEX.128.66.0F.WIG 64 /r VPCMPGTB k1 {k2}, xmm2, xmm3/m128DV/VAVX512VL AVX512BWCompare packed signed byte integers in xmm2 and xmm3/m128 for greater than, and set vector mask k1 to reflect the zero/nonzero status of each element of the result, under writemask.
EVEX.256.66.0F.WIG 64 /r VPCMPGTB k1 {k2}, ymm2, ymm3/m256DV/VAVX512VL AVX512BWCompare packed signed byte integers in ymm2 and ymm3/m256 for greater than, and set vector mask k1 to reflect the zero/nonzero status of each element of the result, under writemask.
EVEX.512.66.0F.WIG 64 /r VPCMPGTB k1 {k2}, zmm2, zmm3/m512DV/VAVX512BWCompare packed signed byte integers in zmm2 and zmm3/m512 for greater than, and set vector mask k1 to reflect the zero/nonzero status of each element of the result, under writemask.
EVEX.128.66.0F.WIG 65 /r VPCMPGTW k1 {k2}, xmm2, xmm3/m128DV/VAVX512VL AVX512BWCompare packed signed word integers in xmm2 and xmm3/m128 for greater than, and set vector mask k1 to reflect the zero/nonzero status of each element of the result, under writemask.
EVEX.256.66.0F.WIG 65 /r VPCMPGTW k1 {k2}, ymm2, ymm3/m256DV/VAVX512VL AVX512BWCompare packed signed word integers in ymm2 and ymm3/m256 for greater than, and set vector mask k1 to reflect the zero/nonzero status of each element of the result, under writemask.
EVEX.512.66.0F.WIG 65 /r VPCMPGTW k1 {k2}, zmm2, zmm3/m512DV/VAVX512BWCompare packed signed word integers in zmm2 and zmm3/m512 for greater than, and set vector mask k1 to reflect the zero/nonzero status of each element of the result, under writemask.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
DFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs an SIMD signed compare for the greater value of the packed byte, word, or doubleword integers in the destination operand (first operand) and the source operand (second operand). If a data element in the destination operand is greater than the corresponding data element in the source operand, the corresponding data element in the destination operand is set to all 1s; otherwise, it is set to all 0s.

+

The PCMPGTB instruction compares the corresponding signed byte integers in the destination and source operands; the PCMPGTW instruction compares the corresponding signed word integers in the destination and source operands; and the PCMPGTD instruction compares the corresponding signed doubleword integers in the destination and source operands.

+

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

Legacy SSE instructions: The source operand can be an MMX technology register or a 64-bit memory location. The destination operand can be an MMX technology register.

+

128-bit Legacy SSE version: The second source operand can be an XMM register or a 128-bit memory location. The first source operand and destination operand are XMM registers. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The second source operand can be an XMM register or a 128-bit memory location. The first source operand and destination operand are XMM registers. Bits (MAXVL-1:128) of the corresponding YMM register are zeroed.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+

EVEX encoded VPCMPGTD: The first source operand (second operand) is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand (first operand) is a mask register updated according to the writemask k2.

+

EVEX encoded VPCMPGTB/W: The first source operand (second operand) is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location. The destination operand (first operand) is a mask register updated according to the writemask k2.
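To make the all-ones/all-zeros element convention of the legacy and VEX forms concrete, a minimal C sketch follows (illustrative values; it assumes SSE2, which is baseline on x86-64; the EVEX forms instead write one bit per element into a mask register):

#include <stdio.h>
#include <emmintrin.h>   /* SSE2 */

int main(void)
{
    __m128i a = _mm_setr_epi8( 5, -1, 100, 0, 7, 7, -128,  127,
                               1,  2,   3, 4, 5, 6,    7,    8);
    __m128i b = _mm_setr_epi8( 4,  0,  99, 0, 8, 7,  127, -128,
                               0,  0,   0, 0, 0, 0,    0,    0);

    __m128i gt = _mm_cmpgt_epi8(a, b);   /* 0xFF where a[i] > b[i], else 0 */
    int bits   = _mm_movemask_epi8(gt);  /* collapse to one bit per lane   */

    printf("greater-than lanes: 0x%04x\n", bits);
    /* Lane 0: 5 > 4 is true; lane 6: -128 > 127 is false (signed compare);
       lane 7: 127 > -128 is true; lanes 8..15: positive values > 0. */
    return 0;
}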

+

Operation + ¶ +

+

PCMPGTB (With 64-bit Operands) + ¶ +

+
IF DEST[7:0] > SRC[7:0]
+    THEN DEST[7:0] := FFH;
+    ELSE DEST[7:0] := 0; FI;
+(* Continue comparison of 2nd through 7th bytes in DEST and SRC *)
+IF DEST[63:56] > SRC[63:56]
+    THEN DEST[63:56] := FFH;
+    ELSE DEST[63:56] := 0; FI;
+
+

COMPARE_BYTES_GREATER (SRC1, SRC2) + ¶ +

+
    IF SRC1[7:0] > SRC2[7:0]
+    THEN DEST[7:0] := FFH;
+    ELSE DEST[7:0] := 0; FI;
+(* Continue comparison of 2nd through 15th bytes in SRC1 and SRC2 *)
+    IF SRC1[127:120] > SRC2[127:120]
+    THEN DEST[127:120] := FFH;
+    ELSE DEST[127:120] := 0; FI;
+
+

COMPARE_WORDS_GREATER (SRC1, SRC2) + ¶ +

+
    IF SRC1[15:0] > SRC2[15:0]
+    THEN DEST[15:0] := FFFFH;
+    ELSE DEST[15:0] := 0; FI;
+(* Continue comparison of 2nd through 7th 16-bit words in SRC1 and SRC2 *)
+    IF SRC1[127:112] > SRC2[127:112]
+    THEN DEST[127:112] := FFFFH;
+    ELSE DEST[127:112] := 0; FI;
+
+

COMPARE_DWORDS_GREATER (SRC1, SRC2) + ¶ +

+
    IF SRC1[31:0] > SRC2[31:0]
+    THEN DEST[31:0] := FFFFFFFFH;
+    ELSE DEST[31:0] := 0; FI;
+(* Continue comparison of 2nd through 3rd 32-bit dwords in SRC1 and SRC2 *)
+    IF SRC1[127:96] > SRC2[127:96]
+    THEN DEST[127:96] := FFFFFFFFH;
+    ELSE DEST[127:96] := 0; FI;
+
+

PCMPGTB (With 128-bit Operands) + ¶ +

+
DEST[127:0] := COMPARE_BYTES_GREATER(DEST[127:0],SRC[127:0])
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPCMPGTB (VEX.128 Encoded Version) + ¶ +

+
DEST[127:0] := COMPARE_BYTES_GREATER(SRC1,SRC2)
+DEST[MAXVL-1:128] := 0
+
+

VPCMPGTB (VEX.256 Encoded Version) + ¶ +

+
DEST[127:0] := COMPARE_BYTES_GREATER(SRC1[127:0],SRC2[127:0])
+DEST[255:128] := COMPARE_BYTES_GREATER(SRC1[255:128],SRC2[255:128])
+DEST[MAXVL-1:256] := 0
+
+

VPCMPGTB (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k2[j] OR *no writemask*
+        THEN
+            /* signed comparison */
+            CMP := SRC1[i+7:i] > SRC2[i+7:i];
+            IF CMP = TRUE
+                THEN DEST[j] := 1;
+                ELSE DEST[j] := 0; FI;
+        ELSE DEST[j] := 0
+                    ; zeroing-masking only
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

PCMPGTW (With 64-bit Operands) + ¶ +

+
IF DEST[15:0] > SRC[15:0]
+    THEN DEST[15:0] := FFFFH;
+    ELSE DEST[15:0] := 0; FI;
+(* Continue comparison of 2nd and 3rd words in DEST and SRC *)
+IF DEST[63:48] > SRC[63:48]
+    THEN DEST[63:48] := FFFFH;
+    ELSE DEST[63:48] := 0; FI;
+
+

PCMPGTW (With 128-bit Operands) + ¶ +

+
DEST[127:0] := COMPARE_WORDS_GREATER(DEST[127:0],SRC[127:0])
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPCMPGTW (VEX.128 Encoded Version) + ¶ +

+
DEST[127:0] := COMPARE_WORDS_GREATER(SRC1,SRC2)
+DEST[MAXVL-1:128] := 0
+
+

VPCMPGTW (VEX.256 Encoded Version) + ¶ +

+
DEST[127:0] := COMPARE_WORDS_GREATER(SRC1[127:0],SRC2[127:0])
+DEST[255:128] := COMPARE_WORDS_GREATER(SRC1[255:128],SRC2[255:128])
+DEST[MAXVL-1:256] := 0
+
+

VPCMPGTW (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k2[j] OR *no writemask*
+        THEN
+            /* signed comparison */
+            CMP := SRC1[i+15:i] > SRC2[i+15:i];
+            IF CMP = TRUE
+                THEN DEST[j] := 1;
+                ELSE DEST[j] := 0; FI;
+        ELSE DEST[j] := 0
+                    ; zeroing-masking only
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

PCMPGTD (With 64-bit Operands) + ¶ +

+
IF DEST[31:0] > SRC[31:0]
+    THEN DEST[31:0] := FFFFFFFFH;
+    ELSE DEST[31:0] := 0; FI;
+IF DEST[63:32] > SRC[63:32]
+    THEN DEST[63:32] := FFFFFFFFH;
+    ELSE DEST[63:32] := 0; FI;
+
+

PCMPGTD (With 128-bit Operands) + ¶ +

+
DEST[127:0] := COMPARE_DWORDS_GREATER(DEST[127:0],SRC[127:0])
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPCMPGTD (VEX.128 Encoded Version) + ¶ +

+
DEST[127:0] := COMPARE_DWORDS_GREATER(SRC1,SRC2)
+DEST[MAXVL-1:128] := 0
+
+

VPCMPGTD (VEX.256 Encoded Version) + ¶ +

+
DEST[127:0] := COMPARE_DWORDS_GREATER(SRC1[127:0],SRC2[127:0])
+DEST[255:128] := COMPARE_DWORDS_GREATER(SRC1[255:128],SRC2[255:128])
+DEST[MAXVL-1:256] := 0
+
+

VPCMPGTD (EVEX Encoded Versions) + ¶ +

+
+(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k2[j] OR *no writemask*
+                THEN
+                    /* signed comparison */
+                    IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                        THEN CMP := SRC1[i+31:i] > SRC2[31:0];
+                        ELSE CMP := SRC1[i+31:i] > SRC2[i+31:i];
+                    FI;
+                    IF CMP = TRUE
+                        THEN DEST[j] := 1;
+                        ELSE DEST[j] := 0; FI;
+                ELSE
+                        DEST[j] := 0
+                            ; zeroing-masking only
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPCMPGTB __mmask64 _mm512_cmpgt_epi8_mask(__m512i a, __m512i b);
+
+
VPCMPGTB __mmask64 _mm512_mask_cmpgt_epi8_mask(__mmask64 k, __m512i a, __m512i b);
+
+
VPCMPGTB __mmask32 _mm256_cmpgt_epi8_mask(__m256i a, __m256i b);
+
+
VPCMPGTB __mmask32 _mm256_mask_cmpgt_epi8_mask(__mmask32 k, __m256i a, __m256i b);
+
+
VPCMPGTB __mmask16 _mm_cmpgt_epi8_mask(__m128i a, __m128i b);
+
+
VPCMPGTB __mmask16 _mm_mask_cmpgt_epi8_mask(__mmask16 k, __m128i a, __m128i b);
+
+
VPCMPGTD __mmask16 _mm512_cmpgt_epi32_mask(__m512i a, __m512i b);
+
+
VPCMPGTD __mmask16 _mm512_mask_cmpgt_epi32_mask(__mmask16 k, __m512i a, __m512i b);
+
+
VPCMPGTD __mmask8 _mm256_cmpgt_epi32_mask(__m256i a, __m256i b);
+
+
VPCMPGTD __mmask8 _mm256_mask_cmpgt_epi32_mask(__mmask8 k, __m256i a, __m256i b);
+
+
VPCMPGTD __mmask8 _mm_cmpgt_epi32_mask(__m128i a, __m128i b);
+
+
VPCMPGTD __mmask8 _mm_mask_cmpgt_epi32_mask(__mmask8 k, __m128i a, __m128i b);
+
+
VPCMPGTW __mmask32 _mm512_cmpgt_epi16_mask(__m512i a, __m512i b);
+
+
VPCMPGTW __mmask32 _mm512_mask_cmpgt_epi16_mask(__mmask32 k, __m512i a, __m512i b);
+
+
VPCMPGTW __mmask16 _mm256_cmpgt_epi16_mask(__m256i a, __m256i b);
+
+
VPCMPGTW __mmask16 _mm256_mask_cmpgt_epi16_mask(__mmask16 k, __m256i a, __m256i b);
+
+
VPCMPGTW __mmask8 _mm_cmpgt_epi16_mask(__m128i a, __m128i b);
+
+
VPCMPGTW __mmask8 _mm_mask_cmpgt_epi16_mask(__mmask8 k, __m128i a, __m128i b);
+
+
PCMPGTB __m64 _mm_cmpgt_pi8 (__m64 m1, __m64 m2)
+
+
PCMPGTW __m64 _mm_cmpgt_pi16 (__m64 m1, __m64 m2)
+
+
PCMPGTD __m64 _mm_cmpgt_pi32 (__m64 m1, __m64 m2)
+
+
(V)PCMPGTB __m128i _mm_cmpgt_epi8 ( __m128i a, __m128i b)
+
+
(V)PCMPGTW __m128i _mm_cmpgt_epi16 ( __m128i a, __m128i b)
+
+
+(V)PCMPGTD __m128i _mm_cmpgt_epi32 ( __m128i a, __m128i b)
+
+
VPCMPGTB __m256i _mm256_cmpgt_epi8 ( __m256i a, __m256i b)
+
+
VPCMPGTW __m256i _mm256_cmpgt_epi16 ( __m256i a, __m256i b)
+
+
VPCMPGTD __m256i _mm256_cmpgt_epi32 ( __m256i a, __m256i b)
+
+

Flags Affected + ¶ +

+

None.

+

Numeric Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded VPCMPGTD, see Table 2-49, “Type E4 Class Exception Conditions.”

+

EVEX-encoded VPCMPGTB/W, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/pcmpgtq.html b/x86/pcmpgtq.html new file mode 100644 index 0000000..8601a89 --- /dev/null +++ b/x86/pcmpgtq.html @@ -0,0 +1,172 @@ + +PCMPGTQ + — Compare Packed Data for Greater Than

PCMPGTQ + — Compare Packed Data for Greater Than

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 38 37 /r PCMPGTQ xmm1,xmm2/m128AV/VSSE4_2Compare packed signed qwords in xmm2/m128 and xmm1 for greater than.
VEX.128.66.0F38.WIG 37 /r VPCMPGTQ xmm1, xmm2, xmm3/m128BV/VAVXCompare packed signed qwords in xmm2 and xmm3/m128 for greater than.
VEX.256.66.0F38.WIG 37 /r VPCMPGTQ ymm1, ymm2, ymm3/m256BV/VAVX2Compare packed signed qwords in ymm2 and ymm3/m256 for greater than.
EVEX.128.66.0F38.W1 37 /r VPCMPGTQ k1 {k2}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512FCompare Greater between int64 vector xmm2 and int64 vector xmm3/m128/m64bcst, and set vector mask k1 to reflect the zero/nonzero status of each element of the result, under writemask.
EVEX.256.66.0F38.W1 37 /r VPCMPGTQ k1 {k2}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FCompare Greater between int64 vector ymm2 and int64 vector ymm3/m256/m64bcst, and set vector mask k1 to reflect the zero/nonzero status of each element of the result, under writemask.
EVEX.512.66.0F38.W1 37 /r VPCMPGTQ k1 {k2}, zmm2, zmm3/m512/m64bcstCV/VAVX512FCompare Greater between int64 vector zmm2 and int64 vector zmm3/m512/m64bcst, and set vector mask k1 to reflect the zero/nonzero status of each element of the result, under writemask.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs an SIMD signed compare for the packed quadwords in the destination operand (first operand) and the source operand (second operand). If the data element in the first (destination) operand is greater than the corresponding element in the second (source) operand, the corresponding data element in the destination is set to all 1s; otherwise, it is set to 0s.

+

128-bit Legacy SSE version: The second source operand can be an XMM register or a 128-bit memory location. The first source operand and destination operand are XMM registers. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The second source operand can be an XMM register or a 128-bit memory location. The first source operand and destination operand are XMM registers. Bits (MAXVL-1:128) of the corresponding YMM register are zeroed.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+

EVEX encoded VPCMPGTD/Q: The first source operand (second operand) is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand (first operand) is a mask register updated according to the writemask k2.
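A minimal C sketch of the qword compare follows (illustrative values; it assumes SSE4.2 and a 64-bit target for _mm_extract_epi64):

#include <stdio.h>
#include <nmmintrin.h>   /* SSE4.2 */

int main(void)
{
    __m128i a = _mm_set_epi64x(-1, 10);   /* high lane -1, low lane 10 */
    __m128i b = _mm_set_epi64x( 0,  3);
    __m128i r = _mm_cmpgt_epi64(a, b);    /* each qword: all ones or zero */

    long long lo = _mm_cvtsi128_si64(r);        /* low qword  */
    long long hi = _mm_extract_epi64(r, 1);     /* high qword */
    printf("lo = %lld (10 > 3, so all ones)\n", lo);
    printf("hi = %lld (-1 > 0 is false, so zero)\n", hi);
    return 0;
}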

+

Operation + ¶ +

+

COMPARE_QWORDS_GREATER (SRC1, SRC2) + ¶ +

+
IF SRC1[63:0] > SRC2[63:0]
+THEN DEST[63:0] := FFFFFFFFFFFFFFFFH;
+ELSE DEST[63:0] := 0; FI;
+IF SRC1[127:64] > SRC2[127:64]
+THEN DEST[127:64] := FFFFFFFFFFFFFFFFH;
+ELSE DEST[127:64] := 0; FI;
+
+

VPCMPGTQ (VEX.128 Encoded Version) + ¶ +

+
DEST[127:0] := COMPARE_QWORDS_GREATER(SRC1,SRC2)
+DEST[MAXVL-1:128] := 0
+
+

VPCMPGTQ (VEX.256 Encoded Version) + ¶ +

+
DEST[127:0] := COMPARE_QWORDS_GREATER(SRC1[127:0],SRC2[127:0])
+DEST[255:128] := COMPARE_QWORDS_GREATER(SRC1[255:128],SRC2[255:128])
+DEST[MAXVL-1:256] := 0
+
+

VPCMPGTQ (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k2[j] OR *no writemask*
+        THEN
+            /* signed comparison */
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN CMP := SRC1[i+63:i] > SRC2[63:0];
+                ELSE CMP := SRC1[i+63:i] > SRC2[i+63:i];
+            FI;
+            IF CMP = TRUE
+                THEN DEST[j] := 1;
+                ELSE DEST[j] := 0; FI;
+        ELSE DEST[j] := 0
+                    ; zeroing-masking only
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPCMPGTQ __mmask8 _mm512_cmpgt_epi64_mask( __m512i a, __m512i b);
+
+
VPCMPGTQ __mmask8 _mm512_mask_cmpgt_epi64_mask(__mmask8 k, __m512i a, __m512i b);
+
+
VPCMPGTQ __mmask8 _mm256_cmpgt_epi64_mask( __m256i a, __m256i b);
+
+
VPCMPGTQ __mmask8 _mm256_mask_cmpgt_epi64_mask(__mmask8 k, __m256i a, __m256i b);
+
+
VPCMPGTQ __mmask8 _mm_cmpgt_epi64_mask( __m128i a, __m128i b);
+
+
VPCMPGTQ __mmask8 _mm_mask_cmpgt_epi64_mask(__mmask8 k, __m128i a, __m128i b);
+
+
(V)PCMPGTQ __m128i _mm_cmpgt_epi64(__m128i a, __m128i b)
+
+
VPCMPGTQ __m256i _mm256_cmpgt_epi64( __m256i a, __m256i b);
+
+

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded VPCMPGTQ, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/pcmpistri.html b/x86/pcmpistri.html new file mode 100644 index 0000000..ecc8b05 --- /dev/null +++ b/x86/pcmpistri.html @@ -0,0 +1,108 @@ + +PCMPISTRI + — Packed Compare Implicit Length Strings, Return Index

PCMPISTRI + — Packed Compare Implicit Length Strings, Return Index

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 3A 63 /r imm8 PCMPISTRI xmm1, xmm2/m128, imm8RMV/VSSE4_2Perform a packed comparison of string data with implicit lengths, generating an index, and storing the result in ECX.
VEX.128.66.0F3A.WIG 63 /r ib VPCMPISTRI xmm1, xmm2/m128, imm8RMV/VAVXPerform a packed comparison of string data with implicit lengths, generating an index, and storing the result in ECX.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r)ModRM:r/m (r)imm8N/A
+

Description + ¶ +

+

The instruction compares data from two strings based on the encoded value in the imm8 control byte (see Section 4.1, “Imm8 Control Byte Operation for PCMPESTRI / PCMPESTRM / PCMPISTRI / PCMPISTRM”), and generates an index stored to ECX.

+

Each string is represented by a single value. The value is an xmm (or possibly m128 for the second operand) which contains the data elements of the string (byte or word data). Each input byte/word is augmented with a valid/invalid tag. A byte/word is considered valid only if it has a lower index than the least significant null byte/word. (The least significant null byte/word is also considered invalid.)

+

The comparison and aggregation operations are performed according to the encoded value of imm8 bit fields (see Section 4.1). The index of the first (or last, according to imm8[6]) set bit of IntRes2 is returned in ECX. If no bits are set in IntRes2, ECX is set to 16 (8).

+

Note that the Arithmetic Flags are written in a non-standard manner in order to supply the most relevant information:

+

CFlag – Reset if IntRes2 is equal to zero, set otherwise

+

ZFlag – Set if any byte/word of xmm2/mem128 is null, reset otherwise

+

SFlag – Set if any byte/word of xmm1 is null, reset otherwise

+

OFlag – IntRes2[0]

+

AFlag – Reset

+

PFlag – Reset

+

Note: In VEX.128 encoded version, VEX.vvvv is reserved and must be 1111b, VEX.L must be 0, otherwise the instruction will #UD.

+

Effective Operand Size + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
Operating mode/sizeOperand 1Operand 2Result
16 bitxmmxmm/m128ECX
32 bitxmmxmm/m128ECX
64 bitxmmxmm/m128ECX
+

Intel C/C++ Compiler Intrinsic Equivalent For Returning Index + ¶ +

+

int _mm_cmpistri (__m128i a, __m128i b, const int mode);
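A minimal usage sketch follows (illustrative; it assumes SSE4.2 and <nmmintrin.h>; the byte set and text fragment are examples):

#include <stdio.h>
#include <nmmintrin.h>   /* SSE4.2 string/text intrinsics */

int main(void)
{
    const char set[16]  = "0123456789";   /* bytes to search for           */
    const char text[16] = "order #42";    /* implicitly null-terminated    */

    __m128i a = _mm_loadu_si128((const __m128i *)set);
    __m128i b = _mm_loadu_si128((const __m128i *)text);

    /* Index of the first byte of text that equals any valid byte of set;
       16 means "no match in this fragment". */
    int idx = _mm_cmpistri(a, b,
                           _SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ANY |
                           _SIDD_LEAST_SIGNIFICANT);
    if (idx < 16)
        printf("first digit at index %d ('%c')\n", idx, text[idx]);
    else
        printf("no digit in the first 16 bytes\n");
    return 0;
}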

+

Intel C/C++ Compiler Intrinsics For Reading EFlag Results + ¶ +

+

int _mm_cmpistra (__m128i a, __m128i b, const int mode);

+

int _mm_cmpistrc (__m128i a, __m128i b, const int mode);

+

int _mm_cmpistro (__m128i a, __m128i b, const int mode);

+

int _mm_cmpistrs (__m128i a, __m128i b, const int mode);

+

int _mm_cmpistrz (__m128i a, __m128i b, const int mode);

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions.” Additionally, this instruction does not cause #GP if the memory operand is not aligned to a 16-byte boundary, and:

+ + + + + +
#UDIf VEX.L = 1.
If VEX.vvvv ≠ 1111B.
diff --git a/x86/pcmpistrm.html b/x86/pcmpistrm.html new file mode 100644 index 0000000..37a5b12 --- /dev/null +++ b/x86/pcmpistrm.html @@ -0,0 +1,108 @@ + +PCMPISTRM + — Packed Compare Implicit Length Strings, Return Mask

PCMPISTRM + — Packed Compare Implicit Length Strings, Return Mask

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 3A 62 /r imm8 PCMPISTRM xmm1, xmm2/m128, imm8RMV/VSSE4_2Perform a packed comparison of string data with implicit lengths, generating a mask, and storing the result in XMM0.
VEX.128.66.0F3A.WIG 62 /r ib VPCMPISTRM xmm1, xmm2/m128, imm8RMV/VAVXPerform a packed comparison of string data with implicit lengths, generating a Mask, and storing the result in XMM0.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r)ModRM:r/m (r)imm8N/A
+

Description + ¶ +

+

The instruction compares data from two strings based on the encoded value in the imm8 byte (see Section 4.1, “Imm8 Control Byte Operation for PCMPESTRI / PCMPESTRM / PCMPISTRI / PCMPISTRM”) generating a mask stored to XMM0.

+

Each string is represented by a single value. The value is an xmm (or possibly m128 for the second operand) which contains the data elements of the string (byte or word data). Each input byte/word is augmented with a valid/invalid tag. A byte/word is considered valid only if it has a lower index than the least significant null byte/word. (The least significant null byte/word is also considered invalid.)

+

The comparison and aggregation operation are performed according to the encoded value of imm8 bit fields (see Section 4.1). As defined by imm8[6], IntRes2 is then either stored to the least significant bits of XMM0 (zero extended to 128 bits) or expanded into a byte/word-mask and then stored to XMM0.

+

Note that the Arithmetic Flags are written in a non-standard manner in order to supply the most relevant information:

+

CFlag – Reset if IntRes2 is equal to zero, set otherwise

+

ZFlag – Set if any byte/word of xmm2/mem128 is null, reset otherwise

+

SFlag – Set if any byte/word of xmm1 is null, reset otherwise

+

OFlag – IntRes2[0]

+

AFlag – Reset

+

PFlag – Reset

+

Note: In VEX.128 encoded versions, bits (MAXVL-1:128) of XMM0 are zeroed. VEX.vvvv is reserved and must be 1111b, VEX.L must be 0, otherwise the instruction will #UD.

+

Effective Operand Size + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
Operating mode/sizeOperand 1Operand 2Result
16 bitxmmxmm/m128XMM0
32 bitxmmxmm/m128XMM0
64 bitxmmxmm/m128XMM0
+

Intel C/C++ Compiler Intrinsic Equivalent For Returning Mask + ¶ +

+

__m128i _mm_cmpistrm (__m128i a, __m128i b, const int mode);
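A minimal usage sketch of the bit-mask form follows (illustrative; it assumes SSE4.2 and <nmmintrin.h>):

#include <stdio.h>
#include <nmmintrin.h>   /* SSE4.2 string/text intrinsics */

int main(void)
{
    const char set[16]  = " \t\r\n";      /* whitespace bytes               */
    const char text[16] = "a b\tc";       /* implicit-length fragment       */

    __m128i a = _mm_loadu_si128((const __m128i *)set);
    __m128i b = _mm_loadu_si128((const __m128i *)text);

    /* _SIDD_BIT_MASK leaves IntRes2 in the low bits of the result (the
       hardware writes XMM0; the intrinsic returns that register). */
    __m128i m = _mm_cmpistrm(a, b,
                             _SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ANY |
                             _SIDD_BIT_MASK);
    unsigned bits = (unsigned)_mm_cvtsi128_si32(m);   /* one bit per byte */
    printf("whitespace bitmap: 0x%04x\n", bits);      /* bits 1 and 3 set */
    return 0;
}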

+

Intel C/C++ Compiler Intrinsics For Reading EFlag Results + ¶ +

+

int _mm_cmpistra (__m128i a, __m128i b, const int mode);

+

int _mm_cmpistrc (__m128i a, __m128i b, const int mode);

+

int _mm_cmpistro (__m128i a, __m128i b, const int mode);

+

int _mm_cmpistrs (__m128i a, __m128i b, const int mode);

+

int _mm_cmpistrz (__m128i a, __m128i b, const int mode);

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions.” Additionally, this instruction does not cause #GP if the memory operand is not aligned to a 16-byte boundary, and:

+ + + + + +
#UDIf VEX.L = 1.
If VEX.vvvv ≠ 1111B.
diff --git a/x86/pconfig.html b/x86/pconfig.html new file mode 100644 index 0000000..a08d009 --- /dev/null +++ b/x86/pconfig.html @@ -0,0 +1,323 @@ + +PCONFIG + — Platform Configuration

PCONFIG + — Platform Configuration

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 01 C5 PCONFIGAV/VPCONFIGThis instruction is used to execute functions for configuring platform features.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AN/AN/AN/AN/A
+

Description + ¶ +

+

The PCONFIG instruction allows software to configure certain platform features. It supports these features with multiple leaf functions, selecting a leaf function using the value in EAX.

+

Depending on the leaf function, the registers RBX, RCX, and RDX may be used to provide input information or for the instruction to report output information. Addresses and operands are 32 bits outside 64-bit mode and are 64 bits in 64-bit mode. The value of CS.D does not affect operand size or address size.

+

Executions of PCONFIG may fail for platform-specific reasons. An execution reports failure by setting the ZF flag and loading EAX with a non-zero failure reason; a successful execution clears ZF and EAX.

+

Each PCONFIG leaf function applies to a specific hardware block called a PCONFIG target. The leaf function is supported only if the processor supports that target. Each target is associated with a numerical target identifier, and CPUID leaf 1BH (PCONFIG information) enumerates the identifiers of the supported targets. An attempt to execute an undefined leaf function, or a leaf function that applies to an unsupported target identifier, results in a general-protection exception (#GP).

+

Leaf Function MKTME_KEY_PROGRAM + ¶ +

+

As of this writing, the only defined PCONFIG leaf function is used for key programming for total memory encryption-multi-key (TME-MK).1 This leaf function is called MKTME_KEY_PROGRAM and it pertains to the TME-MK target, which has target identifier 1. The leaf function is selected by loading EAX with value 0. The MKTME_KEY_PROGRAM leaf function uses the EBX (or RBX) register for additional input information.

+

Software uses the MKTME_KEY_PROGRAM leaf function to manage the encryption key associated with a particular key identifier (KeyID). The leaf function uses a data structure called the TME-MK key programming structure (MKTME_KEY_PROGRAM_STRUCT). Software provides the address of the structure (as an offset in the DS segment) in EBX (or RBX). The format of the structure is given in Table 4-15.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldOffset (bytes)Size (bytes)Comments
KEYID02Key Identifier.
KEYID_CTRL24KeyID control: • Bits 7:0: key-programming command (COMMAND) • Bits 23:8: encryption algorithm (ENC_ALG) • Bits 31:24: Reserved, must be zero (RSVD)
Ignored658Not used.
KEY_FIELD_16464Software supplied data key or entropy for data key.
KEY_FIELD_212864Software supplied tweak key or entropy for tweak key.
+
Table 4-15. MKTME_KEY_PROGRAM_STRUCT Format
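For illustration only, the layout in Table 4-15 can be expressed as the following C structure sketch. The type and field names are assumptions (they come from no header), and the packed/aligned attributes use GCC/Clang syntax; PCONFIG reads 192 bytes from a 256-byte-aligned address, so the alignment attribute rounds sizeof up to 256 while the defined fields occupy the first 192 bytes.

#include <stdint.h>

/* Illustrative layout matching Table 4-15 (offsets and sizes in bytes). */
struct mktme_key_program_struct {
    uint16_t keyid;              /* offset 0:  key identifier                */
    uint32_t keyid_ctrl;         /* offset 2:  bits 7:0 COMMAND,
                                    bits 23:8 ENC_ALG, bits 31:24 reserved   */
    uint8_t  ignored[58];        /* offset 6:  not used                      */
    uint8_t  key_field_1[64];    /* offset 64: data key or entropy for it    */
    uint8_t  key_field_2[64];    /* offset 128: tweak key or entropy for it  */
} __attribute__((packed, aligned(256)));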
+

1. Further details on TME-MK can be found here:

+

https://software.intel.com/sites/default/files/managed/a5/16/Multi-Key-Total-Memory-Encryption-Spec.pdf + ¶ +

+

A description of each of the fields in MKTME_KEY_PROGRAM_STRUCT is provided below:

+
    +
  • KEYID: The key identifier (KeyID) being programmed to the MKTME engine. PCONFIG causes a general-protection exception (#GP) if the KeyID is zero. KeyID zero always uses the current behavior configured for TME (total memory encryption), either to encrypt with the platform TME key or to bypass TME encryption. PCONFIG also causes a #GP if the KeyID exceeds the maximum enumerated in IA32_TME_CAPABILITY.MK_TME_MAX_KEYS[bits 50:36] or configured by the setting of IA32_TME_ACTIVATE.MK_TME_KEYID_BITS[bits 35:32].
  • +
  • KEYID_CTRL: The KEYID_CTRL field comprises two sub-fields used by software to control the encryption performed for the selected KeyID: +
      +
    • Key-programming command (COMMAND; bits 7:0). This 8-bit field should contain one of the following values:
  • +
  • KEYID_SET_KEY_DIRECT (value 0). With this command, software programs directly the encryption key to be used for the selected KeyID.
  • +
  • KEYID_SET_KEY_RANDOM (value 1). With this command, software has the CPU generate and assign an encryption key to be used for the selected KeyID using a hardware random-number generator.
+

If this command is used and there is insufficient entropy for the random-number generator, PCONFIG will fail and report the failure by loading EAX with value 2 (ENTROPY_ERROR).

+

Because the keys programmed by PCONFIG are discarded on reset and software cannot read the programmed keys, the keys programmed with this command are ephemeral.

+
    +
  • KEYID_CLEAR_KEY (value 2). With this command, software indicates that the selected KeyID should use the current behavior configured for TME (see above).
  • +
  • KEYID_NO_ENCRYPT (value 3). With this command, software indicates that no encryption should be used for the selected KeyID.
+

If any other value is used, PCONFIG causes a #GP.

+

— Encryption algorithm (ENC_ALG, bits 23:8). Bits 63:48 of the IA32_TME_ACTIVATE MSR (MSR index 982H) indicate which encryption algorithms are supported by the platform. The 16-bit ENC_ALG field should specify one of the algorithms indicated in IA32_TME_ACTIVATE. PCONFIG causes a #GP if ENC_ALG does not set exactly one bit or if it sets a bit whose corresponding bit is not set in IA32_TME_ACTIVATE[63:48].

+
    +
  • KEY_FIELD_1: Use of this field depends upon selected key-programming command: +
      +
    • If the direct key-programming command is used (KEYID_SET_KEY_DIRECT), this field carries the software supplied data key to be used for the KeyID.
    • +
    • If the random key-programming command is used (KEYID_SET_KEY_RANDOM), this field carries the software supplied entropy to be mixed in the CPU generated random data key.
    • +
    • This field is ignored when one of the other key-programming commands is used.
+

It is software’s responsibility to ensure that the key supplied for the direct key-programming option or the entropy supplied for the random key-programming option does not result in weak keys. There are no explicit checks in the instruction to detect or prevent weak keys.

+
    +
  • KEY_FIELD_2: Use of this field depends upon selected key-programming command: +
      +
    • If the direct key-programming command is used (KEYID_SET_KEY_DIRECT), this field carries the software supplied tweak key to be used for the KeyID.
    • +
    • If the random key-programming command is used (KEYID_SET_KEY_RANDOM), this field carries the software supplied entropy to be mixed in the CPU generated random tweak key.
    • +
    • This field is ignored when one of the other key-programming commands is used.
+

It is software’s responsibility to ensure that the key supplied for the direct key-programming option or the entropy supplied for the random key-programming option does not result in weak keys. There are no explicit checks in the instruction to detect or prevent weak keys.

+

All KeyIDs default to TME behavior (encrypt with TME key or bypass encryption) on activation of TME-MK. Software can at any point decide to change the key for a KeyID using the MKTME_KEY_PROGRAM leaf function of the PCONFIG instruction. Changing the key for a KeyID does not change the state of the TLB caches or memory pipeline. Software is responsible for taking appropriate actions to ensure correct behavior.

+

The key table used by TME-MK is shared by all logical processors in a platform. For this reason, execution of the MKTME_KEY_PROGRAM leaf function must gain exclusive access to the key table before updating it. The leaf function does this by acquiring a lock (implemented in the platform) and retaining that lock until the execution completes. An execution of the leaf function may fail to acquire the lock if it is already in use. In this situation, the leaf function will load EAX with failure reason 5 (DEVICE_BUSY), indicating that software must retry. When this happens, the key table is not updated, and software should retry execution of PCONFIG.
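The following ring-0-only sketch (GCC/Clang inline assembly) illustrates invoking the MKTME_KEY_PROGRAM leaf and retrying on DEVICE_BUSY. The .byte sequence is the PCONFIG opcode from the table above; the function name, constants, and retry policy are illustrative assumptions, not a reference sequence.

#include <stdint.h>

#define PCONFIG_MKTME_KEY_PROGRAM 0u   /* leaf selector in EAX            */
#define PCONFIG_DEVICE_BUSY       5u   /* failure reason: key table busy  */

/* Returns 0 on success, otherwise the failure reason reported in EAX.
   Only EAX (leaf/result) and RBX (structure address) are used by this leaf. */
static inline uint32_t pconfig_key_program(void *key_program_struct)
{
    uint32_t eax;
    do {
        asm volatile(".byte 0x0f, 0x01, 0xc5"   /* PCONFIG (NP 0F 01 C5) */
                     : "=a"(eax)
                     : "a"(PCONFIG_MKTME_KEY_PROGRAM),
                       "b"(key_program_struct)
                     : "cc", "memory");
    } while (eax == PCONFIG_DEVICE_BUSY);       /* retry while lock is busy */
    return eax;
}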

+
+

Earlier versions of this manual specified that bytes 63:6 of MKTME_KEY_PROGRAM_STRUCT were reserved and that PCONFIG would cause a #GP if they were not all zero. This is not the case. As indicated in Table 4-15, PCONFIG ignores those bytes.

+

They also specified that PCONFIG would cause a #GP if the upper 48 bytes of each of the 64-byte key fields were not all 0. This is not the case. From each of these fields, PCONFIG uses the number of bytes required by the selected encryption algorithm (e.g., 32 bytes for AES-XTS 256) and ignores the upper bytes.

+

They also specified that PCONFIG would complete and report a failure reason in EAX if the structure specified an incorrect KeyID, an unsupported key-programming command, or an incorrect selection of an encryption algorithm. This is not the case. As indicated above (and in the Operation section), those conditions cause #GP.

+

Operation + ¶ +

+
(* #UD if PCONFIG is not enumerated or CPL > 0 *)
+IF CPUID.7.0:EDX[18] = 0 OR CPL > 0
+    THEN #UD; FI;
+(* #GP(0) for an unsupported leaf function *)
+IF EAX != 0
+    THEN #GP(0); FI;
+CASE (EAX) (* operation based on selected leaf function *)
+    0 (MKTME_KEY_PROGRAM):
+    (* Confirm that TME-MK is properly enabled by the IA32_TME_ACTIVATE MSR *)
+    (* The MSR must be locked, encryption enabled, and a non-zero number of KeyID bits specified *)
+    IF IA32_TME_ACTIVATE[0] = 0 OR IA32_TME_ACTIVATE[1] = 0 OR IA32_TME_ACTIVATE[35:32] = 0
+            THEN #GP(0); FI;
+    IF DS:RBX is not 256-byte aligned
+        THEN #GP(0); FI;
+    Load TMP_KEY_PROGRAM_STRUCT from 192 bytes at linear address DS:RBX;
+    IF TMP_KEY_PROGRAM_STRUCT.KEYID_CTRL sets any reserved bits
+        THEN #GP(0); FI;
+    (* Check for a valid command *)
+    IF TMP_KEY_PROGRAM_STRUCT.KEYID_CTRL.COMMAND > 3
+        THEN #GP(0); FI;
+    (* Check that the KEYID being operated upon is a valid KEYID *)
+    IF TMP_KEY_PROGRAM_STRUCT.KEYID = 0 OR
+        TMP_KEY_PROGRAM_STRUCT.KEYID > 2^IA32_TME_ACTIVATE.MK_TME_KEYID_BITS – 1 OR
+        TMP_KEY_PROGRAM_STRUCT.KEYID > IA32_TME_CAPABILITY.MK_TME_MAX_KEYS
+            THEN #GP(0); FI;
+    (* Check that only one encryption algorithm is requested for the KeyID and it is one of the activated algorithms *)
+    IF TMP_KEY_PROGRAM_STRUCT.KEYID_CTRL.ENC_ALG does not set exactly one bit OR
+        (TMP_KEY_PROGRAM_STRUCT.KEYID_CTRL.ENC_ALG & IA32_TME_ACTIVATE[63:48]) = 0
+            THEN #GP(0); FI;
+    Attempt to acquire lock to gain exclusive access to platform key table;
+    IF attempt is unsuccessful
+        THEN (* PCONFIG failure *)
+            RFLAGS.ZF := 1;
+            RAX := DEVICE_BUSY;
+                    (* failure reason 5 *)
+            GOTO EXIT;
+    FI;
+    CASE (TMP_KEY_PROGRAM_STRUCT.KEYID_CTRL.COMMAND) OF
+        0 (KEYID_SET_KEY_DIRECT):
+        Update TME-MK table for TMP_KEY_PROGRAM_STRUCT.KEYID as follows:
+            Encrypt with the selected key
+            Use the encryption algorithm selected by TMP_KEY_PROGRAM_STRUCT.KEYID_CTRL.ENC_ALG
+            (* The number of bytes used by the next two lines depends on selected encryption algorithm *)
+            DATA_KEY is TMP_KEY_PROGRAM_STRUCT.KEY_FIELD_1
+            TWEAK_KEY is TMP_KEY_PROGRAM_STRUCT.KEY_FIELD_2
+        BREAK;
+        1 (KEYID_SET_KEY_RANDOM):
+        Load TMP_RND_DATA_KEY with a random key using hardware RNG; (* key size depends on selected encryption algorithm *)
+        IF there was insufficient entropy
+            THEN (* PCONFIG failure *)
+                RFLAGS.ZF := 1;
+                RAX := ENTROPY_ERROR; (* failure reason 2 *)
+                Release lock on platform key table;
+                GOTO EXIT;
+        FI;
+        Load TMP_RND_TWEAK_KEY with a random key using hardware RNG; (* key size depends on selected encryption algorithm *)
+        IF there was insufficient entropy
+            THEN (* PCONFIG failure *)
+                RFLAGS.ZF := 1;
+                RAX := ENTROPY_ERROR; (* failure reason 2 *)
+                Release lock on platform key table;
+                GOTO EXIT;
+        FI;
+        (* Combine software-supplied entropy to the data key and tweak key *)
+        (* The number of bytes used by the next two lines depends on selected encryption algorithm *)
+        TMP_RND_DATA_KEY := TMP_RND_DATA_KEY XOR TMP_KEY_PROGRAM_STRUCT.KEY_FIELD_1;
+        TMP_RND_TWEAK_KEY := TMP_RND_TWEAK_KEY XOR TMP_KEY_PROGRAM_STRUCT.KEY_FIELD_2;
+        Update TME-MK table for TMP_KEY_PROGRAM_STRUCT.KEYID as follows:
+            Encrypt with the selected key
+            Use the encryption algorithm selected by TMP_KEY_PROGRAM_STRUCT.KEYID_CTRL.ENC_ALG
+            (* The number of bytes used by the next two lines depends on selected encryption algorithm *)
+            DATA_KEY is TMP_RND_DATA_KEY
+            TWEAK_KEY is TMP_RND_TWEAK_KEY
+        BREAK;
+        2 (KEYID_CLEAR_KEY):
+        Update TME-MK table for TMP_KEY_PROGRAM_STRUCT.KEYID as follows:
+            Encrypt (or not) using the current configuration for TME
+            The specified encryption algorithm and key values are not used.
+        BREAK;
+        3 (KEYID_NO_ENCRYPT):
+        Update TME-MK table for TMP_KEY_PROGRAM_STRUCT.KEYID as follows:
+            Do not encrypt
+            The specified encryption algorithm and key values are not used.
+        BREAK;
+    ESAC;
+    Release lock on platform key table;
+ESAC;
+RAX := 0;
+RFLAGS.ZF := 0;
+EXIT:
+RFLAGS.CF := 0;
+RFLAGS.PF := 0;
+RFLAGS.AF := 0;
+RFLAGS.OF := 0;
+RFLAGS.SF := 0;
+
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If input value in EAX encodes an unsupported leaf function.
If a memory operand effective address is outside the relevant segment limit.
MKTME_KEY_PROGRAM leaf function:
If IA32_TME_ACTIVATE MSR is not locked.
If hardware encryption and TME-MK capability are not enabled in IA32_TME_ACTIVATE MSR.
If the memory operand is not 256B aligned.
If any of the reserved bits in the KEYID_CTRL field of the MKTME_KEY_PROGRAM_STRUCT are set or that field indicates an unsupported KeyID, key-programming command, or encryption algorithm.
#PF(fault-code)If a page fault occurs in accessing memory operands.
#UDIf any of the LOCK/REP/Operand Size/VEX prefixes are used.
If current privilege level is not 0.
If CPUID.7.0:EDX[bit 18] = 0
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#GPIf input value in EAX encodes an unsupported leaf function.
MKTME_KEY_PROGRAM leaf function:
If IA32_TME_ACTIVATE MSR is not locked.
If hardware encryption and TME-MK capability are not enabled in IA32_TME_ACTIVATE MSR.
If a memory operand is not 256B aligned.
If any of the reserved bits in the KEYID_CTRL field of the MKTME_KEY_PROGRAM_STRUCT are set or that field indicates an unsupported KeyID, key-programming command, or encryption algorithm.
#UDIf any of the LOCK/REP/Operand Size/VEX prefixes are used.
If current privilege level is not 0.
If CPUID.7.0:EDX.PCONFIG[bit 18] = 0
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDPCONFIG instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If input value in EAX encodes an unsupported leaf function.
If a memory operand is in non-canonical form.
MKTME_KEY_PROGRAM leaf function:
If IA32_TME_ACTIVATE MSR is not locked.
If hardware encryption and TME-MK capability are not enabled in IA32_TME_ACTIVATE MSR.
If a memory operand is not 256B aligned.
If any of the reserved bits in the KEYID_CTRL field of the MKTME_KEY_PROGRAM_STRUCT are set or that field indicates an unsupported KeyID, key-programming command, or encryption algorithm.
#PF(fault-code)If a page fault occurs in accessing memory operands.
#UDIf any of the LOCK/REP/Operand Size/VEX prefixes are used.
If the current privilege level is not 0.
If CPUID.7.0:EDX.PCONFIG[bit 18] = 0.
diff --git a/x86/pdep.html b/x86/pdep.html new file mode 100644 index 0000000..093509d --- /dev/null +++ b/x86/pdep.html @@ -0,0 +1,355 @@ + +PDEP + — Parallel Bits Deposit

PDEP + — Parallel Bits Deposit

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
VEX.LZ.F2.0F38.W0 F5 /r PDEP r32a, r32b, r/m32RVMV/VBMI2Parallel deposit of bits from r32b using mask in r/m32, result is written to r32a.
VEX.LZ.F2.0F38.W1 F5 /r PDEP r64a, r64b, r/m64RVMV/N.E.BMI2Parallel deposit of bits from r64b using mask in r/m64, result is written to r64a.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RVMModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

PDEP uses a mask in the second source operand (the third operand) to transfer/scatter contiguous low order bits in the first source operand (the second operand) into the destination (the first operand). PDEP takes the low bits from the first source operand and deposits them in the destination operand at the corresponding bit locations that are set in the second source operand (mask). All other bits (bits not set in the mask) in the destination are set to zero.

+
+ (Diagram: the low-order bits S0 through S3 of SRC1 are deposited into DEST, bit 0 through bit 31, at the bit positions set in the SRC2 mask; all other DEST bits are zero.)
Figure 4-8. PDEP Example
+

This instruction is not supported in real mode and virtual-8086 mode. The operand size is always 32 bits if not in 64-bit mode. In 64-bit mode operand size 64 requires VEX.W1. VEX.W1 is ignored in non-64-bit modes. An attempt to execute this instruction with VEX.L not equal to 0 will cause #UD.

+

Operation + ¶ +

+
TEMP := SRC1;
+MASK := SRC2;
+DEST := 0 ;
+m := 0, k := 0;
+DO WHILE m < OperandSize
+    IF MASK[ m] = 1 THEN
+        DEST[ m] := TEMP[ k];
+        k := k+ 1;
+    FI
+    m := m+ 1;
+OD
+
+

Flags Affected + ¶ +

+

None.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
PDEP unsigned __int32 _pdep_u32(unsigned __int32 src, unsigned __int32 mask);
+
+
+PDEP unsigned __int64 _pdep_u64(unsigned __int64 src, unsigned __int64 mask);
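A minimal sketch of the 32-bit form follows (it assumes BMI2 support, e.g., compiling with -mbmi2; the values are illustrative):

#include <stdio.h>
#include <immintrin.h>

int main(void)
{
    unsigned src  = 0x0000000Du;   /* low bits ...1101                      */
    unsigned mask = 0x000000F0u;   /* deposit into bit positions 7:4        */
    unsigned r    = _pdep_u32(src, mask);
    printf("0x%08x\n", r);         /* 0x000000D0: 1101 placed at bits 7:4   */
    return 0;
}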
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-29, “Type 13 Class Exception Conditions.”

diff --git a/x86/pext.html b/x86/pext.html new file mode 100644 index 0000000..b64e477 --- /dev/null +++ b/x86/pext.html @@ -0,0 +1,338 @@ + +PEXT + — Parallel Bits Extract

PEXT + — Parallel Bits Extract

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
VEX.LZ.F3.0F38.W0 F5 /r PEXT r32a, r32b, r/m32RVMV/VBMI2Parallel extract of bits from r32b using mask in r/m32, result is written to r32a.
VEX.LZ.F3.0F38.W1 F5 /r PEXT r64a, r64b, r/m64RVMV/N.E.BMI2Parallel extract of bits from r64b using mask in r/m64, result is written to r64a.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RVMModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

PEXT uses a mask in the second source operand (the third operand) to transfer either contiguous or non-contiguous bits in the first source operand (the second operand) to contiguous low order bit positions in the destination (the first operand). For each bit set in the MASK, PEXT extracts the corresponding bits from the first source operand and writes them into contiguous lower bits of destination operand. The remaining upper bits of destination are zeroed.

+
+ (Diagram: the SRC1 bits selected by the set bits of the SRC2 mask (here S2, S5, S7, and S28) are packed into the low-order bits of DEST, bit 0 through bit 31; the remaining upper DEST bits are zero.)
Figure 4-9. PEXT Example
+

This instruction is not supported in real mode and virtual-8086 mode. The operand size is always 32 bits if not in 64-bit mode. In 64-bit mode operand size 64 requires VEX.W1. VEX.W1 is ignored in non-64-bit modes. An attempt to execute this instruction with VEX.L not equal to 0 will cause #UD.

+

Operation + ¶ +

+
TEMP := SRC1;
+MASK := SRC2;
+DEST := 0 ;
+m := 0, k := 0;
+DO WHILE m < OperandSize
+    IF MASK[ m] = 1 THEN
+        DEST[ k] := TEMP[ m];
+        k := k+ 1;
+    FI
+    m := m+ 1;
+OD
+
+

Flags Affected + ¶ +

+

None.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
PEXT unsigned __int32 _pext_u32(unsigned __int32 src, unsigned __int32 mask);
+
+
+PEXT unsigned __int64 _pext_u64(unsigned __int64 src, unsigned __int64 mask);
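A minimal sketch of the 32-bit form follows, inverting the PDEP example above (it assumes BMI2 support; the values are illustrative):

#include <stdio.h>
#include <immintrin.h>

int main(void)
{
    unsigned src  = 0x000000D0u;   /* bits 7:4 hold 1101                    */
    unsigned mask = 0x000000F0u;   /* extract bit positions 7:4             */
    unsigned r    = _pext_u32(src, mask);
    printf("0x%08x\n", r);         /* 0x0000000D: the PDEP example undone   */
    return 0;
}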
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-29, “Type 13 Class Exception Conditions.”

diff --git a/x86/pextrb.pextrd.pextrq.html b/x86/pextrb.pextrd.pextrq.html new file mode 100644 index 0000000..a0451f3 --- /dev/null +++ b/x86/pextrb.pextrd.pextrq.html @@ -0,0 +1,192 @@ + +PEXTRB/PEXTRD/PEXTRQ + — Extract Byte/Dword/Qword

PEXTRB/PEXTRD/PEXTRQ + — Extract Byte/Dword/Qword

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/ En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 3A 14 /r ib PEXTRB reg/m8, xmm2, imm8AV/VSSE4_1Extract a byte integer value from xmm2 at the source byte offset specified by imm8 into reg or m8. The upper bits of r32 or r64 are zeroed.
66 0F 3A 16 /r ib PEXTRD r/m32, xmm2, imm8AV/VSSE4_1Extract a dword integer value from xmm2 at the source dword offset specified by imm8 into r/m32.
66 REX.W 0F 3A 16 /r ib PEXTRQ r/m64, xmm2, imm8AV/N.E.SSE4_1Extract a qword integer value from xmm2 at the source qword offset specified by imm8 into r/m64.
VEX.128.66.0F3A.W0 14 /r ib VPEXTRB reg/m8, xmm2, imm8AV1/VAVXExtract a byte integer value from xmm2 at the source byte offset specified by imm8 into reg or m8. The upper bits of r64/r32 is filled with zeros.
VEX.128.66.0F3A.W0 16 /r ib VPEXTRD r32/m32, xmm2, imm8AV/VAVXExtract a dword integer value from xmm2 at the source dword offset specified by imm8 into r32/m32.
VEX.128.66.0F3A.W1 16 /r ib VPEXTRQ r64/m64, xmm2, imm8AV/I2AVXExtract a qword integer value from xmm2 at the source dword offset specified by imm8 into r64/m64.
EVEX.128.66.0F3A.WIG 14 /r ib VPEXTRB reg/m8, xmm2, imm8BV/VAVX512BWExtract a byte integer value from xmm2 at the source byte offset specified by imm8 into reg or m8. The upper bits of r64/r32 is filled with zeros.
EVEX.128.66.0F3A.W0 16 /r ib VPEXTRD r32/m32, xmm2, imm8BV/VAVX512DQExtract a dword integer value from xmm2 at the source dword offset specified by imm8 into r32/m32.
EVEX.128.66.0F3A.W1 16 /r ib VPEXTRQ r64/m64, xmm2, imm8BV/N.E.2AVX512DQExtract a qword integer value from xmm2 at the source dword offset specified by imm8 into r64/m64.
+
+

1. In 64-bit mode, VEX.W1 is ignored for VPEXTRB (similar to legacy REX.W=1 prefix in PEXTRB).

+

2. VEX.W/EVEX.W in non-64 bit is ignored; the instructions behaves as if the W0 version is used.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:r/m (w)ModRM:reg (r)imm8N/A
BTuple1 ScalarModRM:r/m (w)ModRM:reg (r)imm8N/A
+

Description + ¶ +

+

Extract a byte/dword/qword integer value from the source XMM register at a byte/dword/qword offset determined from imm8[3:0]. The destination can be a register or byte/dword/qword memory location. If the destination is a register, the upper bits of the register are zeroed.

+

In the legacy (non-VEX-encoded) versions, if the destination operand is a register, the default operand size in 64-bit mode for PEXTRB/PEXTRD is 64 bits; the bits above the least significant byte/dword of data are filled with zeros. PEXTRQ is not encodable in non-64-bit modes and requires REX.W in 64-bit mode.

+

Note: In VEX.128 encoded versions, VEX.vvvv is reserved and must be 1111b, VEX.L must be 0, otherwise the instruction will #UD. In EVEX.128 encoded versions, EVEX.vvvv is reserved and must be 1111b, EVEX.L”L must be 0, otherwise the instruction will #UD. If the destination operand is a register, the default operand size in 64-bit mode for VPEXTRB/VPEXTRD is 64 bits, the bits above the least significant byte/word/dword data are filled with zeros.

+

Operation + ¶ +

+
CASE of
+    PEXTRB: SEL := COUNT[3:0];
+        TEMP := (Src >> SEL*8) AND FFH;
+        IF (DEST = Mem8)
+            THEN
+            Mem8 := TEMP[7:0];
+        ELSE IF (64-Bit Mode and 64-bit register selected)
+            THEN
+                R64[7:0] := TEMP[7:0];
+                r64[63:8] := ZERO_FILL;
+        ELSE
+                R32[7:0] := TEMP[7:0];
+                r32[31:8] := ZERO_FILL;
+        FI;
+    PEXTRD:SEL := COUNT[1:0];
+        TEMP := (Src >> SEL*32) AND FFFF_FFFFH;
+        DEST := TEMP;
+    PEXTRQ: SEL := COUNT[0];
+        TEMP := (Src >> SEL*64);
+        DEST := TEMP;
+ESAC;
+
+

VPEXTRD/VPEXTRQ + ¶ +

+
IF (64-Bit Mode and 64-bit dest operand)
+THEN
+    Src_Offset := imm8[0]
+    r64/m64 := (Src >> Src_Offset * 64)
+ELSE
+    Src_Offset := imm8[1:0]
+    r32/m32 := ((Src >> Src_Offset *32) AND 0FFFFFFFFh);
+FI
+
+

VPEXTRB ( dest=m8) + ¶ +

+
SRC_Offset := imm8[3:0]
+Mem8 := (Src >> Src_Offset*8)
+
+

VPEXTRB ( dest=reg) + ¶ +

+
IF (64-Bit Mode )
+THEN
+    SRC_Offset := imm8[3:0]
+    DEST[7:0] := ((Src >> Src_Offset*8) AND 0FFh)
+    DEST[63:8] := ZERO_FILL;
+ELSE
+    SRC_Offset := imm8[3:0];
+    DEST[7:0] := ((Src >> Src_Offset*8) AND 0FFh);
+    DEST[31:8] := ZERO_FILL;
+FI
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
PEXTRB int _mm_extract_epi8 (__m128i src, const int ndx);
+
+
PEXTRD int _mm_extract_epi32 (__m128i src, const int ndx);
+
+
PEXTRQ __int64 _mm_extract_epi64 (__m128i src, const int ndx);
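A minimal usage sketch follows (illustrative; it assumes SSE4.1 and, for the qword form, a 64-bit target; the extraction index must be a compile-time constant):

#include <stdio.h>
#include <smmintrin.h>   /* SSE4.1 */

int main(void)
{
    __m128i v = _mm_setr_epi32(0x11223344, 0x55667788,
                               (int)0x99AABBCCu, 0x0D0E0F10);

    int b = _mm_extract_epi8(v, 0);          /* lowest byte: 0x44          */
    int d = _mm_extract_epi32(v, 2);         /* third dword: 0x99AABBCC    */
    long long q = _mm_extract_epi64(v, 1);   /* high qword (elements 2, 3) */

    printf("byte0=0x%02x dword2=0x%08x qword1=0x%016llx\n",
           (unsigned)b & 0xff, (unsigned)d, (unsigned long long)q);
    return 0;
}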
+
+

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-22, “Type 5 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-57, “Type E9NF Class Exception Conditions.”

+

Additionally:

+ + + + + +
#UDIf VEX.L = 1 or EVEX.L’L > 0.
If VEX.vvvv != 1111B or EVEX.vvvv != 1111B.
diff --git a/x86/pextrw.html b/x86/pextrw.html new file mode 100644 index 0000000..2bfbb53 --- /dev/null +++ b/x86/pextrw.html @@ -0,0 +1,180 @@ + +PEXTRW + — Extract Word

PEXTRW + — Extract Word

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F C5 /r ib1 PEXTRW reg, mm, imm8 | A | V/V | SSE | Extract the word specified by imm8 from mm and move it to reg, bits 15-0. The upper bits of r32 or r64 are zeroed.
66 0F C5 /r ib PEXTRW reg, xmm, imm8 | A | V/V | SSE2 | Extract the word specified by imm8 from xmm and move it to reg, bits 15-0. The upper bits of r32 or r64 are zeroed.
66 0F 3A 15 /r ib PEXTRW reg/m16, xmm, imm8 | B | V/V | SSE4_1 | Extract the word specified by imm8 from xmm and copy it to lowest 16 bits of reg or m16. Zero-extend the result in the destination, r32 or r64.
VEX.128.66.0F.W0 C5 /r ib VPEXTRW reg, xmm1, imm8 | A | V2/V | AVX | Extract the word specified by imm8 from xmm1 and move it to reg, bits 15:0. Zero-extend the result. The upper bits of r64/r32 are filled with zeros.
VEX.128.66.0F3A.W0 15 /r ib VPEXTRW reg/m16, xmm2, imm8 | B | V/V | AVX | Extract a word integer value from xmm2 at the source word offset specified by imm8 into reg or m16. The upper bits of r64/r32 are filled with zeros.
EVEX.128.66.0F.WIG C5 /r ib VPEXTRW reg, xmm1, imm8 | A | V/V | AVX512BW | Extract the word specified by imm8 from xmm1 and move it to reg, bits 15:0. Zero-extend the result. The upper bits of r64/r32 are filled with zeros.
EVEX.128.66.0F3A.WIG 15 /r ib VPEXTRW reg/m16, xmm2, imm8 | C | V/V | AVX512BW | Extract a word integer value from xmm2 at the source word offset specified by imm8 into reg or m16. The upper bits of r64/r32 are filled with zeros.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

2. In 64-bit mode, VEX.W1 is ignored for VPEXTRW (similar to legacy REX.W=1 prefix in PEXTRW).

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | N/A | ModRM:reg (w) | ModRM:r/m (r) | imm8 | N/A
B | N/A | ModRM:r/m (w) | ModRM:reg (r) | imm8 | N/A
C | Tuple1 Scalar | ModRM:r/m (w) | ModRM:reg (r) | imm8 | N/A
+

Description + ¶ +

+

Copies the word in the source operand (second operand) specified by the count operand (third operand) to the destination operand (first operand). The source operand can be an MMX technology register or an XMM register. The destination operand can be the low word of a general-purpose register or a 16-bit memory address. The count operand is an 8-bit immediate. When specifying a word location in an MMX technology register, the 2 least-significant bits of the count operand specify the location; for an XMM register, the 3 least-significant bits specify the location. The content of the destination register above bit 16 is cleared (set to all 0s).

+

In 64-bit mode, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15, R8-15). If the destination operand is a general-purpose register, the default operand size is 64-bits in 64-bit mode.

+

Note: In VEX.128 encoded versions, VEX.vvvv is reserved and must be 1111b, VEX.L must be 0, otherwise the instruction will #UD. In EVEX.128 encoded versions, EVEX.vvvv is reserved and must be 1111b, EVEX.L’L must be 0, otherwise the instruction will #UD. If the destination operand is a register, the default operand size in 64-bit mode for VPEXTRW is 64 bits; the bits above the least significant word data are filled with zeros.

+

Operation + ¶ +

+
IF (DEST = Mem16)
+THEN
+    SEL := COUNT[2:0];
+    TEMP := (Src >> SEL*16) AND FFFFH;
+    Mem16 := TEMP[15:0];
+ELSE IF (64-Bit Mode and destination is a general-purpose register)
+    THEN
+        FOR (PEXTRW instruction with 64-bit source operand)
+                { SEL := COUNT[1:0];
+                    TEMP := (SRC >> (SEL ∗ 16)) AND FFFFH;
+                    r64[15:0] := TEMP[15:0];
+                    r64[63:16] := ZERO_FILL; };
+        FOR (PEXTRW instruction with 128-bit source operand)
+                { SEL := COUNT[2:0];
+                    TEMP := (SRC >> (SEL ∗ 16)) AND FFFFH;
+                    r64[15:0] := TEMP[15:0];
+                    r64[63:16] := ZERO_FILL; }
+    ELSE
+        FOR (PEXTRW instruction with 64-bit source operand)
+            { SEL := COUNT[1:0];
+                    TEMP := (SRC >> (SEL ∗ 16)) AND FFFFH;
+                    r32[15:0] := TEMP[15:0];
+                    r32[31:16] := ZERO_FILL; };
+        FOR (PEXTRW instruction with 128-bit source operand)
+            { SEL := COUNT[2:0];
+                    TEMP := (SRC >> (SEL ∗ 16)) AND FFFFH;
+                    r32[15:0] := TEMP[15:0];
+                    r32[31:16] := ZERO_FILL; };
+    FI;
+FI;
+
+

VPEXTRW ( dest=m16) + ¶ +

+
SRC_Offset := imm8[2:0]
+Mem16 := (Src >> Src_Offset*16)
+
+

VPEXTRW ( dest=reg) + ¶ +

+
IF (64-Bit Mode )
+THEN
+    SRC_Offset := imm8[2:0]
+    DEST[15:0] := ((Src >> Src_Offset*16) AND 0FFFFh)
+    DEST[63:16] := ZERO_FILL;
+ELSE
+    SRC_Offset := imm8[2:0]
+    DEST[15:0] := ((Src >> Src_Offset*16) AND 0FFFFh)
+    DEST[31:16] := ZERO_FILL;
+FI
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
PEXTRW int _mm_extract_pi16 (__m64 a, int n)
+
+
PEXTRW int _mm_extract_epi16 ( __m128i a, int imm)
+
+
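A hedged usage sketch (not from the manual), assuming SSE2 support and <immintrin.h>:

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128i v = _mm_setr_epi16(10, 20, 30, 40, 50, 60, 70, 80);
    int w = _mm_extract_epi16(v, 6);   /* PEXTRW: word 6, zero-extended into an int */
    printf("%d\n", w);                 /* prints 70 */
    return 0;
}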

Flags Affected + ¶ +

+

None.

+

Numeric Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-22, “Type 5 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-57, “Type E9NF Class Exception Conditions.”

+

Additionally:

+ + + + + +
#UD: If VEX.L = 1 or EVEX.L’L > 0.
If VEX.vvvv != 1111B or EVEX.vvvv != 1111B.
diff --git a/x86/phaddsw.html b/x86/phaddsw.html new file mode 100644 index 0000000..48e7ad7 --- /dev/null +++ b/x86/phaddsw.html @@ -0,0 +1,150 @@ + +PHADDSW + — Packed Horizontal Add and Saturate

PHADDSW + — Packed Horizontal Add and Saturate

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F 38 03 /r1 PHADDSW mm1, mm2/m64 | RM | V/V | SSSE3 | Add 16-bit signed integers horizontally, pack saturated integers to mm1.
66 0F 38 03 /r PHADDSW xmm1, xmm2/m128 | RM | V/V | SSSE3 | Add 16-bit signed integers horizontally, pack saturated integers to xmm1.
VEX.128.66.0F38.WIG 03 /r VPHADDSW xmm1, xmm2, xmm3/m128 | RVM | V/V | AVX | Add 16-bit signed integers horizontally, pack saturated integers to xmm1.
VEX.256.66.0F38.WIG 03 /r VPHADDSW ymm1, ymm2, ymm3/m256 | RVM | V/V | AVX2 | Add 16-bit signed integers horizontally, pack saturated integers to ymm1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/En | Operand 1 | Operand 2 | Operand 3 | Operand 4
RM | ModRM:reg (r, w) | ModRM:r/m (r) | N/A | N/A
RVM | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | N/A
+

Description + ¶ +

+

(V)PHADDSW adds two adjacent signed 16-bit integers horizontally from the source and destination operands, saturates the signed results, and packs the signed, saturated 16-bit results to the destination operand (first operand). When the source operand is a 128-bit memory operand, the operand must be aligned on a 16-byte boundary or a general-protection exception (#GP) will be generated.

+

Legacy SSE version: Both operands can be MMX registers. The second source operand can be an MMX register or a 64-bit memory location.

+

128-bit Legacy SSE version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

In 64-bit mode, use the REX prefix to access additional registers.

+

VEX.128 encoded version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

VEX.256 encoded version: The first source and destination operands are YMM registers. The second source operand can be an YMM register or a 256-bit memory location.

+

Operation + ¶ +

+

PHADDSW (With 64-bit Operands) + ¶ +

+
mm1[15-0] = SaturateToSignedWord(mm1[31-16] + mm1[15-0]);
+mm1[31-16] = SaturateToSignedWord(mm1[63-48] + mm1[47-32]);
+mm1[47-32] = SaturateToSignedWord(mm2/m64[31-16] + mm2/m64[15-0]);
+mm1[63-48] = SaturateToSignedWord(mm2/m64[63-48] + mm2/m64[47-32]);
+
+

PHADDSW (With 128-bit Operands) + ¶ +

+
xmm1[15-0]= SaturateToSignedWord(xmm1[31-16] + xmm1[15-0]);
+xmm1[31-16] = SaturateToSignedWord(xmm1[63-48] + xmm1[47-32]);
+xmm1[47-32] = SaturateToSignedWord(xmm1[95-80] + xmm1[79-64]);
+xmm1[63-48] = SaturateToSignedWord(xmm1[127-112] + xmm1[111-96]);
+xmm1[79-64] = SaturateToSignedWord(xmm2/m128[31-16] + xmm2/m128[15-0]);
+xmm1[95-80] = SaturateToSignedWord(xmm2/m128[63-48] + xmm2/m128[47-32]);
+xmm1[111-96] = SaturateToSignedWord(xmm2/m128[95-80] + xmm2/m128[79-64]);
+xmm1[127-112] = SaturateToSignedWord(xmm2/m128[127-112] + xmm2/m128[111-96]);
+
+

VPHADDSW (VEX.128 Encoded Version) + ¶ +

+
DEST[15:0]= SaturateToSignedWord(SRC1[31:16] + SRC1[15:0])
+DEST[31:16] = SaturateToSignedWord(SRC1[63:48] + SRC1[47:32])
+DEST[47:32] = SaturateToSignedWord(SRC1[95:80] + SRC1[79:64])
+DEST[63:48] = SaturateToSignedWord(SRC1[127:112] + SRC1[111:96])
+DEST[79:64] = SaturateToSignedWord(SRC2[31:16] + SRC2[15:0])
+DEST[95:80] = SaturateToSignedWord(SRC2[63:48] + SRC2[47:32])
+DEST[111:96] = SaturateToSignedWord(SRC2[95:80] + SRC2[79:64])
+DEST[127:112] = SaturateToSignedWord(SRC2[127:112] + SRC2[111:96])
+DEST[MAXVL-1:128] := 0
+
+

VPHADDSW (VEX.256 Encoded Version) + ¶ +

+
DEST[15:0]= SaturateToSignedWord(SRC1[31:16] + SRC1[15:0])
+DEST[31:16] = SaturateToSignedWord(SRC1[63:48] + SRC1[47:32])
+DEST[47:32] = SaturateToSignedWord(SRC1[95:80] + SRC1[79:64])
+DEST[63:48] = SaturateToSignedWord(SRC1[127:112] + SRC1[111:96])
+DEST[79:64] = SaturateToSignedWord(SRC2[31:16] + SRC2[15:0])
+DEST[95:80] = SaturateToSignedWord(SRC2[63:48] + SRC2[47:32])
+DEST[111:96] = SaturateToSignedWord(SRC2[95:80] + SRC2[79:64])
+DEST[127:112] = SaturateToSignedWord(SRC2[127:112] + SRC2[111:96])
+DEST[143:128]= SaturateToSignedWord(SRC1[159:144] + SRC1[143:128])
+DEST[159:144] = SaturateToSignedWord(SRC1[191:176] + SRC1[175:160])
+DEST[175:160] = SaturateToSignedWord( SRC1[223:208] + SRC1[207:192])
+DEST[191:176] = SaturateToSignedWord(SRC1[255:240] + SRC1[239:224])
+DEST[207:192] = SaturateToSignedWord(SRC2[159:144] + SRC2[143:128])
+DEST[223:208] = SaturateToSignedWord(SRC2[191:176] + SRC2[175:160])
+DEST[239:224] = SaturateToSignedWord(SRC2[223:208] + SRC2[207:192])
+DEST[255:240] = SaturateToSignedWord(SRC2[255:240] + SRC2[239:224])
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
PHADDSW __m64 _mm_hadds_pi16 (__m64 a, __m64 b)
+
+
(V)PHADDSW __m128i _mm_hadds_epi16 (__m128i a, __m128i b)
+
+
VPHADDSW __m256i _mm256_hadds_epi16 (__m256i a, __m256i b)
+
+
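Illustrative sketch (not part of the manual), assuming SSSE3 support (e.g., -mssse3) and <immintrin.h>; it shows the saturating horizontal add on a few chosen pairs:

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128i a = _mm_setr_epi16(32767, 1, -32768, -1, 100, 200, 300, 400);
    __m128i b = _mm_setr_epi16(1, 2, 3, 4, 5, 6, 7, 8);
    __m128i r = _mm_hadds_epi16(a, b);   /* (V)PHADDSW: adjacent pairs, signed saturation */
    short out[8];
    _mm_storeu_si128((__m128i *)out, r);
    for (int i = 0; i < 8; i++)
        printf("%d ", out[i]);           /* 32767 -32768 300 700 3 7 11 15 */
    printf("\n");
    return 0;
}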

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions,” additionally:

+ + + +
#UD: If VEX.L = 1.
diff --git a/x86/phaddw.phaddd.html b/x86/phaddw.phaddd.html new file mode 100644 index 0000000..d977982 --- /dev/null +++ b/x86/phaddw.phaddd.html @@ -0,0 +1,370 @@ + +PHADDW/PHADDD + — Packed Horizontal Add

PHADDW/PHADDD + — Packed Horizontal Add

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F 38 01 /r1 PHADDW mm1, mm2/m64 | RM | V/V | SSSE3 | Add 16-bit integers horizontally, pack to mm1.
66 0F 38 01 /r PHADDW xmm1, xmm2/m128 | RM | V/V | SSSE3 | Add 16-bit integers horizontally, pack to xmm1.
NP 0F 38 02 /r PHADDD mm1, mm2/m64 | RM | V/V | SSSE3 | Add 32-bit integers horizontally, pack to mm1.
66 0F 38 02 /r PHADDD xmm1, xmm2/m128 | RM | V/V | SSSE3 | Add 32-bit integers horizontally, pack to xmm1.
VEX.128.66.0F38.WIG 01 /r VPHADDW xmm1, xmm2, xmm3/m128 | RVM | V/V | AVX | Add 16-bit integers horizontally, pack to xmm1.
VEX.128.66.0F38.WIG 02 /r VPHADDD xmm1, xmm2, xmm3/m128 | RVM | V/V | AVX | Add 32-bit integers horizontally, pack to xmm1.
VEX.256.66.0F38.WIG 01 /r VPHADDW ymm1, ymm2, ymm3/m256 | RVM | V/V | AVX2 | Add 16-bit signed integers horizontally, pack to ymm1.
VEX.256.66.0F38.WIG 02 /r VPHADDD ymm1, ymm2, ymm3/m256 | RVM | V/V | AVX2 | Add 32-bit signed integers horizontally, pack to ymm1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/En | Operand 1 | Operand 2 | Operand 3 | Operand 4
RM | ModRM:reg (r, w) | ModRM:r/m (r) | N/A | N/A
RVM | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | N/A
+

Description + ¶ +

+

(V)PHADDW adds two adjacent 16-bit signed integers horizontally from the source and destination operands and packs the 16-bit signed results to the destination operand (first operand). (V)PHADDD adds two adjacent 32-bit signed integers horizontally from the source and destination operands and packs the 32-bit signed results to the destination operand (first operand). When the source operand is a 128-bit memory operand, the operand must be aligned on a 16-byte boundary or a general-protection exception (#GP) will be generated.

+

Note that these instructions can operate on either unsigned or signed (two’s complement notation) integers; however, they do not set bits in the EFLAGS register to indicate overflow and/or a carry. To prevent undetected overflow conditions, software must control the ranges of the values operated on.

+

Legacy SSE instructions: Both operands can be MMX registers. The second source operand can be an MMX register or a 64-bit memory location.

+

128-bit Legacy SSE version: The first source and destination operands are XMM registers. The second source operand can be an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

In 64-bit mode, use the REX prefix to access additional registers.

+

VEX.128 encoded version: The first source and destination operands are XMM registers. The second source operand can be an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding YMM register are zeroed.

+

VEX.256 encoded version: Horizontal addition of two adjacent data elements of the low 16-bytes of the first and second source operands are packed into the low 16-bytes of the destination operand. Horizontal addition of two adjacent data elements of the high 16-bytes of the first and second source operands are packed into the high 16-bytes of the destination operand. The first source and destination operands are YMM registers. The second source operand can be an YMM register or a 256-bit memory location.

Figure 4-10. 256-bit VPHADDD Instruction Operation (diagram not reproduced)

Operation + ¶ +

+

PHADDW (With 64-bit Operands) + ¶ +

+
mm1[15-0] = mm1[31-16] + mm1[15-0];
+mm1[31-16] = mm1[63-48] + mm1[47-32];
+mm1[47-32] = mm2/m64[31-16] + mm2/m64[15-0];
+mm1[63-48] = mm2/m64[63-48] + mm2/m64[47-32];
+
+

PHADDW (With 128-bit Operands) + ¶ +

+
xmm1[15-0] = xmm1[31-16] + xmm1[15-0];
+xmm1[31-16] = xmm1[63-48] + xmm1[47-32];
+xmm1[47-32] = xmm1[95-80] + xmm1[79-64];
+xmm1[63-48] = xmm1[127-112] + xmm1[111-96];
+xmm1[79-64] = xmm2/m128[31-16] + xmm2/m128[15-0];
+xmm1[95-80] = xmm2/m128[63-48] + xmm2/m128[47-32];
+xmm1[111-96] = xmm2/m128[95-80] + xmm2/m128[79-64];
+xmm1[127-112] = xmm2/m128[127-112] + xmm2/m128[111-96];
+
+

VPHADDW (VEX.128 Encoded Version) + ¶ +

+
DEST[15:0] := SRC1[31:16] + SRC1[15:0]
+DEST[31:16] := SRC1[63:48] + SRC1[47:32]
+DEST[47:32] := SRC1[95:80] + SRC1[79:64]
+DEST[63:48] := SRC1[127:112] + SRC1[111:96]
+DEST[79:64] := SRC2[31:16] + SRC2[15:0]
+DEST[95:80] := SRC2[63:48] + SRC2[47:32]
+DEST[111:96] := SRC2[95:80] + SRC2[79:64]
+DEST[127:112] := SRC2[127:112] + SRC2[111:96]
+DEST[MAXVL-1:128] := 0
+
+

VPHADDW (VEX.256 Encoded Version) + ¶ +

+
DEST[15:0] := SRC1[31:16] + SRC1[15:0]
+DEST[31:16] := SRC1[63:48] + SRC1[47:32]
+DEST[47:32] := SRC1[95:80] + SRC1[79:64]
+DEST[63:48] := SRC1[127:112] + SRC1[111:96]
+DEST[79:64] := SRC2[31:16] + SRC2[15:0]
+DEST[95:80] := SRC2[63:48] + SRC2[47:32]
+DEST[111:96] := SRC2[95:80] + SRC2[79:64]
+DEST[127:112] := SRC2[127:112] + SRC2[111:96]
+DEST[143:128] := SRC1[159:144] + SRC1[143:128]
+DEST[159:144] := SRC1[191:176] + SRC1[175:160]
+DEST[175:160] := SRC1[223:208] + SRC1[207:192]
+DEST[191:176] := SRC1[255:240] + SRC1[239:224]
+DEST[207:192] := SRC2[159:144] + SRC2[143:128]
+DEST[223:208] := SRC2[191:176] + SRC2[175:160]
+DEST[239:224] := SRC2[223:208] + SRC2[207:192]
+DEST[255:240] := SRC2[255:240] + SRC2[239:224]
+
+

PHADDD (With 64-bit Operands) + ¶ +

+
mm1[31-0] = mm1[63-32] + mm1[31-0];
+mm1[63-32] = mm2/m64[63-32] + mm2/m64[31-0];
+
+

PHADDD (With 128-bit Operands) + ¶ +

+
xmm1[31-0] = xmm1[63-32] + xmm1[31-0];
+xmm1[63-32] = xmm1[127-96] + xmm1[95-64];
+xmm1[95-64] = xmm2/m128[63-32] + xmm2/m128[31-0];
+xmm1[127-96] = xmm2/m128[127-96] + xmm2/m128[95-64];
+
+

VPHADDD (VEX.128 Encoded Version) + ¶ +

+
DEST[31-0] := SRC1[63-32] + SRC1[31-0]
+DEST[63-32] := SRC1[127-96] + SRC1[95-64]
+DEST[95-64] := SRC2[63-32] + SRC2[31-0]
+DEST[127-96] := SRC2[127-96] + SRC2[95-64]
+DEST[MAXVL-1:128] := 0
+
+

VPHADDD (VEX.256 Encoded Version) + ¶ +

+
DEST[31-0] := SRC1[63-32] + SRC1[31-0]
+DEST[63-32] := SRC1[127-96] + SRC1[95-64]
+DEST[95-64] := SRC2[63-32] + SRC2[31-0]
+DEST[127-96] := SRC2[127-96] + SRC2[95-64]
+DEST[159-128] := SRC1[191-160] + SRC1[159-128]
+DEST[191-160] := SRC1[255-224] + SRC1[223-192]
+DEST[223-192] := SRC2[191-160] + SRC2[159-128]
+DEST[255-224] := SRC2[255-224] + SRC2[223-192]
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
PHADDW __m64 _mm_hadd_pi16 (__m64 a, __m64 b)
+
+
PHADDD __m64 _mm_hadd_pi32 (__m64 a, __m64 b)
+
+
(V)PHADDW __m128i _mm_hadd_epi16 (__m128i a, __m128i b)
+
+
(V)PHADDD __m128i _mm_hadd_epi32 (__m128i a, __m128i b)
+
+
VPHADDW __m256i _mm256_hadd_epi16 (__m256i a, __m256i b)
+
+
VPHADDD __m256i _mm256_hadd_epi32 (__m256i a, __m256i b)
+
+
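A small hedged example (not from the manual), assuming SSSE3 and <immintrin.h>, showing the dword form:

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128i a = _mm_setr_epi32(1, 2, 3, 4);
    __m128i b = _mm_setr_epi32(10, 20, 30, 40);
    __m128i r = _mm_hadd_epi32(a, b);    /* (V)PHADDD: 1+2, 3+4, 10+20, 30+40 */
    int out[4];
    _mm_storeu_si128((__m128i *)out, r);
    printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);   /* 3 7 30 70 */
    return 0;
}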

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions,” additionally:

+ + + +
#UD: If VEX.L = 1.
diff --git a/x86/phminposuw.html b/x86/phminposuw.html new file mode 100644 index 0000000..3cfa8e9 --- /dev/null +++ b/x86/phminposuw.html @@ -0,0 +1,106 @@ + +PHMINPOSUW + — Packed Horizontal Word Minimum

PHMINPOSUW + — Packed Horizontal Word Minimum

+ + + + + + + + + + + + + + + + + + + +
Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
66 0F 38 41 /r PHMINPOSUW xmm1, xmm2/m128 | RM | V/V | SSE4_1 | Find the minimum unsigned word in xmm2/m128 and place its value in the low word of xmm1 and its index in the second-lowest word of xmm1.
VEX.128.66.0F38.WIG 41 /r VPHMINPOSUW xmm1, xmm2/m128 | RM | V/V | AVX | Find the minimum unsigned word in xmm2/m128 and place its value in the low word of xmm1 and its index in the second-lowest word of xmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/En | Operand 1 | Operand 2 | Operand 3 | Operand 4
RM | ModRM:reg (w) | ModRM:r/m (r) | N/A | N/A
+

Description + ¶ +

+

Determine the minimum unsigned word value in the source operand (second operand) and place the unsigned word in the low word (bits 0-15) of the destination operand (first operand). The word index of the minimum value is stored in bits 16-18 of the destination operand. The remaining upper bits of the destination are set to zero.

+

128-bit Legacy SSE version: Bits (MAXVL-1:128) of the corresponding XMM destination register remain unchanged.

+

VEX.128 encoded version: Bits (MAXVL-1:128) of the destination XMM register are zeroed. VEX.vvvv is reserved and must be 1111b, VEX.L must be 0, otherwise the instruction will #UD.

+

Operation + ¶ +

+

PHMINPOSUW (128-bit Legacy SSE Version) + ¶ +

+
INDEX := 0;
+MIN := SRC[15:0]
+IF (SRC[31:16] < MIN)
+    THEN INDEX := 1; MIN := SRC[31:16]; FI;
+IF (SRC[47:32] < MIN)
+    THEN INDEX := 2; MIN := SRC[47:32]; FI;
+* Repeat operation for words 3 through 6
+IF (SRC[127:112] < MIN)
+    THEN INDEX := 7; MIN := SRC[127:112]; FI;
+DEST[15:0] := MIN;
+DEST[18:16] := INDEX;
+DEST[127:19] := 0000000000000000000000000000H;
+
+

VPHMINPOSUW (VEX.128 Encoded Version) + ¶ +

+
INDEX := 0
+MIN := SRC[15:0]
+IF (SRC[31:16] < MIN) THEN INDEX := 1; MIN := SRC[31:16]
+IF (SRC[47:32] < MIN) THEN INDEX := 2; MIN := SRC[47:32]
+* Repeat operation for words 3 through 6
+IF (SRC[127:112] < MIN) THEN INDEX := 7; MIN := SRC[127:112]
+DEST[15:0] := MIN
+DEST[18:16] := INDEX
+DEST[127:19] := 0000000000000000000000000000H
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
PHMINPOSUW __m128i _mm_minpos_epu16( __m128i packed_words);
+
+
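Illustrative only (not manual text), assuming SSE4.1 and <immintrin.h>; the result register packs the minimum value in word 0 and its index in word 1:

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128i v = _mm_setr_epi16(900, 30, 700, 500, 30, 1000, 12, 44);
    __m128i r = _mm_minpos_epu16(v);          /* PHMINPOSUW */
    int min   = _mm_extract_epi16(r, 0);      /* minimum unsigned word value */
    int index = _mm_extract_epi16(r, 1);      /* word index of that minimum */
    printf("min=%d index=%d\n", min, index);  /* min=12 index=6 */
    return 0;
}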

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions,” additionally:

+ + + + + +
#UD: If VEX.L = 1.
If VEX.vvvv ≠ 1111B.
diff --git a/x86/phsubsw.html b/x86/phsubsw.html new file mode 100644 index 0000000..3c6758e --- /dev/null +++ b/x86/phsubsw.html @@ -0,0 +1,150 @@ + +PHSUBSW + — Packed Horizontal Subtract and Saturate

PHSUBSW + — Packed Horizontal Subtract and Saturate

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F 38 07 /r1 PHSUBSW mm1, mm2/m64 | RM | V/V | SSSE3 | Subtract 16-bit signed integer horizontally, pack saturated integers to mm1.
66 0F 38 07 /r PHSUBSW xmm1, xmm2/m128 | RM | V/V | SSSE3 | Subtract 16-bit signed integer horizontally, pack saturated integers to xmm1.
VEX.128.66.0F38.WIG 07 /r VPHSUBSW xmm1, xmm2, xmm3/m128 | RVM | V/V | AVX | Subtract 16-bit signed integer horizontally, pack saturated integers to xmm1.
VEX.256.66.0F38.WIG 07 /r VPHSUBSW ymm1, ymm2, ymm3/m256 | RVM | V/V | AVX2 | Subtract 16-bit signed integer horizontally, pack saturated integers to ymm1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/En | Operand 1 | Operand 2 | Operand 3 | Operand 4
RM | ModRM:reg (r, w) | ModRM:r/m (r) | N/A | N/A
RVM | ModRM:reg (r, w) | VEX.vvvv (r) | ModRM:r/m (r) | N/A
+

Description + ¶ +

+

(V)PHSUBSW performs horizontal subtraction on each adjacent pair of 16-bit signed integers by subtracting the most significant word from the least significant word of each pair in the source and destination operands. The signed, saturated 16-bit results are packed to the destination operand (first operand). When the source operand is a 128-bit memory operand, the operand must be aligned on a 16-byte boundary or a general-protection exception (#GP) will be generated.

+

Legacy SSE version: Both operands can be MMX registers. The second source operand can be an MMX register or a 64-bit memory location.

+

128-bit Legacy SSE version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

In 64-bit mode, use the REX prefix to access additional registers.

+

VEX.128 encoded version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

VEX.256 encoded version: The first source and destination operands are YMM registers. The second source operand can be an YMM register or a 256-bit memory location.

+

Operation + ¶ +

+

PHSUBSW (With 64-bit Operands) + ¶ +

+
mm1[15-0] = SaturateToSignedWord(mm1[15-0] - mm1[31-16]);
+mm1[31-16] = SaturateToSignedWord(mm1[47-32] - mm1[63-48]);
+mm1[47-32] = SaturateToSignedWord(mm2/m64[15-0] - mm2/m64[31-16]);
+mm1[63-48] = SaturateToSignedWord(mm2/m64[47-32] - mm2/m64[63-48]);
+
+

PHSUBSW (With 128-bit Operands) + ¶ +

+
xmm1[15-0] = SaturateToSignedWord(xmm1[15-0] - xmm1[31-16]);
+xmm1[31-16] = SaturateToSignedWord(xmm1[47-32] - xmm1[63-48]);
+xmm1[47-32] = SaturateToSignedWord(xmm1[79-64] - xmm1[95-80]);
+xmm1[63-48] = SaturateToSignedWord(xmm1[111-96] - xmm1[127-112]);
+xmm1[79-64] = SaturateToSignedWord(xmm2/m128[15-0] - xmm2/m128[31-16]);
+xmm1[95-80] =SaturateToSignedWord(xmm2/m128[47-32] - xmm2/m128[63-48]);
+xmm1[111-96] =SaturateToSignedWord(xmm2/m128[79-64] - xmm2/m128[95-80]);
+xmm1[127-112]= SaturateToSignedWord(xmm2/m128[111-96] - xmm2/m128[127-112]);
+
+

VPHSUBSW (VEX.128 Encoded Version) + ¶ +

+
DEST[15:0]= SaturateToSignedWord(SRC1[15:0] - SRC1[31:16])
+DEST[31:16] = SaturateToSignedWord(SRC1[47:32] - SRC1[63:48])
+DEST[47:32] = SaturateToSignedWord(SRC1[79:64] - SRC1[95:80])
+DEST[63:48] = SaturateToSignedWord(SRC1[111:96] - SRC1[127:112])
+DEST[79:64] = SaturateToSignedWord(SRC2[15:0] - SRC2[31:16])
+DEST[95:80] = SaturateToSignedWord(SRC2[47:32] - SRC2[63:48])
+DEST[111:96] = SaturateToSignedWord(SRC2[79:64] - SRC2[95:80])
+DEST[127:112] = SaturateToSignedWord(SRC2[111:96] - SRC2[127:112])
+DEST[MAXVL-1:128] := 0
+
+

VPHSUBSW (VEX.256 Encoded Version) + ¶ +

+
DEST[15:0]= SaturateToSignedWord(SRC1[15:0] - SRC1[31:16])
+DEST[31:16] = SaturateToSignedWord(SRC1[47:32] - SRC1[63:48])
+DEST[47:32] = SaturateToSignedWord(SRC1[79:64] - SRC1[95:80])
+DEST[63:48] = SaturateToSignedWord(SRC1[111:96] - SRC1[127:112])
+DEST[79:64] = SaturateToSignedWord(SRC2[15:0] - SRC2[31:16])
+DEST[95:80] = SaturateToSignedWord(SRC2[47:32] - SRC2[63:48])
+DEST[111:96] = SaturateToSignedWord(SRC2[79:64] - SRC2[95:80])
+DEST[127:112] = SaturateToSignedWord(SRC2[111:96] - SRC2[127:112])
+DEST[143:128]= SaturateToSignedWord(SRC1[143:128] - SRC1[159:144])
+DEST[159:144] = SaturateToSignedWord(SRC1[175:160] - SRC1[191:176])
+DEST[175:160] = SaturateToSignedWord(SRC1[207:192] - SRC1[223:208])
+DEST[191:176] = SaturateToSignedWord(SRC1[239:224] - SRC1[255:240])
+DEST[207:192] = SaturateToSignedWord(SRC2[143:128] - SRC2[159:144])
+DEST[223:208] = SaturateToSignedWord(SRC2[175:160] - SRC2[191:176])
+DEST[239:224] = SaturateToSignedWord(SRC2[207:192] - SRC2[223:208])
+DEST[255:240] = SaturateToSignedWord(SRC2[239:224] - SRC2[255:240])
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
PHSUBSW __m64 _mm_hsubs_pi16 (__m64 a, __m64 b)
+
+
(V)PHSUBSW __m128i _mm_hsubs_epi16 (__m128i a, __m128i b)
+
+
VPHSUBSW __m256i _mm256_hsubs_epi16 (__m256i a, __m256i b)
+
+
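A hedged C sketch (not from the manual), assuming SSSE3 and <immintrin.h>, showing the saturating horizontal subtract:

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128i a = _mm_setr_epi16(-32768, 1, 100, 25, 0, 0, 0, 0);
    __m128i b = _mm_setr_epi16(5, 2, 9, 3, 0, 0, 0, 0);
    __m128i r = _mm_hsubs_epi16(a, b);   /* (V)PHSUBSW: a0-a1, a2-a3, ..., then b0-b1, ... */
    short out[8];
    _mm_storeu_si128((__m128i *)out, r);
    for (int i = 0; i < 8; i++)
        printf("%d ", out[i]);           /* -32768 75 0 0 3 6 0 0 */
    printf("\n");
    return 0;
}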

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions,” additionally:

+ + + +
#UD: If VEX.L = 1.
diff --git a/x86/phsubw.phsubd.html b/x86/phsubw.phsubd.html new file mode 100644 index 0000000..29fdb13 --- /dev/null +++ b/x86/phsubw.phsubd.html @@ -0,0 +1,216 @@ + +PHSUBW/PHSUBD + — Packed Horizontal Subtract

PHSUBW/PHSUBD + — Packed Horizontal Subtract

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F 38 05 /r1 PHSUBW mm1, mm2/m64 | RM | V/V | SSSE3 | Subtract 16-bit signed integers horizontally, pack to mm1.
66 0F 38 05 /r PHSUBW xmm1, xmm2/m128 | RM | V/V | SSSE3 | Subtract 16-bit signed integers horizontally, pack to xmm1.
NP 0F 38 06 /r PHSUBD mm1, mm2/m64 | RM | V/V | SSSE3 | Subtract 32-bit signed integers horizontally, pack to mm1.
66 0F 38 06 /r PHSUBD xmm1, xmm2/m128 | RM | V/V | SSSE3 | Subtract 32-bit signed integers horizontally, pack to xmm1.
VEX.128.66.0F38.WIG 05 /r VPHSUBW xmm1, xmm2, xmm3/m128 | RVM | V/V | AVX | Subtract 16-bit signed integers horizontally, pack to xmm1.
VEX.128.66.0F38.WIG 06 /r VPHSUBD xmm1, xmm2, xmm3/m128 | RVM | V/V | AVX | Subtract 32-bit signed integers horizontally, pack to xmm1.
VEX.256.66.0F38.WIG 05 /r VPHSUBW ymm1, ymm2, ymm3/m256 | RVM | V/V | AVX2 | Subtract 16-bit signed integers horizontally, pack to ymm1.
VEX.256.66.0F38.WIG 06 /r VPHSUBD ymm1, ymm2, ymm3/m256 | RVM | V/V | AVX2 | Subtract 32-bit signed integers horizontally, pack to ymm1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/En | Operand 1 | Operand 2 | Operand 3 | Operand 4
RM | ModRM:reg (r, w) | ModRM:r/m (r) | N/A | N/A
RVM | ModRM:reg (r, w) | VEX.vvvv (r) | ModRM:r/m (r) | N/A
+

Description + ¶ +

+

(V)PHSUBW performs horizontal subtraction on each adjacent pair of 16-bit signed integers by subtracting the most significant word from the least significant word of each pair in the source and destination operands, and packs the signed 16-bit results to the destination operand (first operand). (V)PHSUBD performs horizontal subtraction on each adjacent pair of 32-bit signed integers by subtracting the most significant doubleword from the least significant doubleword of each pair, and packs the signed 32-bit result to the destination operand. When the source operand is a 128-bit memory operand, the operand must be aligned on a 16-byte boundary or a general-protection exception (#GP) will be generated.

+

Legacy SSE version: Both operands can be MMX registers. The second source operand can be an MMX register or a 64-bit memory location.

+

128-bit Legacy SSE version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

In 64-bit mode, use the REX prefix to access additional registers.

+

VEX.128 encoded version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

VEX.256 encoded version: The first source and destination operands are YMM registers. The second source operand can be an YMM register or a 256-bit memory location.

+

Operation + ¶ +

+

PHSUBW (With 64-bit Operands) + ¶ +

+
mm1[15-0] = mm1[15-0] - mm1[31-16];
+mm1[31-16] = mm1[47-32] - mm1[63-48];
+mm1[47-32] = mm2/m64[15-0] - mm2/m64[31-16];
+mm1[63-48] = mm2/m64[47-32] - mm2/m64[63-48];
+
+

PHSUBW (With 128-bit Operands) + ¶ +

+
xmm1[15-0] = xmm1[15-0] - xmm1[31-16];
+xmm1[31-16] = xmm1[47-32] - xmm1[63-48];
+xmm1[47-32] = xmm1[79-64] - xmm1[95-80];
+xmm1[63-48] = xmm1[111-96] - xmm1[127-112];
+xmm1[79-64] = xmm2/m128[15-0] - xmm2/m128[31-16];
+xmm1[95-80] = xmm2/m128[47-32] - xmm2/m128[63-48];
+xmm1[111-96] = xmm2/m128[79-64] - xmm2/m128[95-80];
+xmm1[127-112] = xmm2/m128[111-96] - xmm2/m128[127-112];
+
+

VPHSUBW (VEX.128 Encoded Version) + ¶ +

+
DEST[15:0] := SRC1[15:0] - SRC1[31:16]
+DEST[31:16] := SRC1[47:32] - SRC1[63:48]
+DEST[47:32] := SRC1[79:64] - SRC1[95:80]
+DEST[63:48] := SRC1[111:96] - SRC1[127:112]
+DEST[79:64] := SRC2[15:0] - SRC2[31:16]
+DEST[95:80] := SRC2[47:32] - SRC2[63:48]
+DEST[111:96] := SRC2[79:64] - SRC2[95:80]
+DEST[127:112] := SRC2[111:96] - SRC2[127:112]
+DEST[MAXVL-1:128] := 0
+
+

VPHSUBW (VEX.256 Encoded Version) + ¶ +

+
DEST[15:0] := SRC1[15:0] - SRC1[31:16]
+DEST[31:16] := SRC1[47:32] - SRC1[63:48]
+DEST[47:32] := SRC1[79:64] - SRC1[95:80]
+DEST[63:48] := SRC1[111:96] - SRC1[127:112]
+DEST[79:64] := SRC2[15:0] - SRC2[31:16]
+DEST[95:80] := SRC2[47:32] - SRC2[63:48]
+DEST[111:96] := SRC2[79:64] - SRC2[95:80]
+DEST[127:112] := SRC2[111:96] - SRC2[127:112]
+DEST[143:128] := SRC1[143:128] - SRC1[159:144]
+DEST[159:144] := SRC1[175:160] - SRC1[191:176]
+DEST[175:160] := SRC1[207:192] - SRC1[223:208]
+DEST[191:176] := SRC1[239:224] - SRC1[255:240]
+DEST[207:192] := SRC2[143:128] - SRC2[159:144]
+DEST[223:208] := SRC2[175:160] - SRC2[191:176]
+DEST[239:224] := SRC2[207:192] - SRC2[223:208]
+DEST[255:240] := SRC2[239:224] - SRC2[255:240]
+
+

PHSUBD (With 64-bit Operands) + ¶ +

+
mm1[31-0] = mm1[31-0] - mm1[63-32];
+mm1[63-32] = mm2/m64[31-0] - mm2/m64[63-32];
+
+

PHSUBD (With 128-bit Operands) + ¶ +

+
xmm1[31-0] = xmm1[31-0] - xmm1[63-32];
+xmm1[63-32] = xmm1[95-64] - xmm1[127-96];
+xmm1[95-64] = xmm2/m128[31-0] - xmm2/m128[63-32];
+xmm1[127-96] = xmm2/m128[95-64] - xmm2/m128[127-96];
+
+

VPHSUBD (VEX.128 Encoded Version) + ¶ +

+
DEST[31-0] := SRC1[31-0] - SRC1[63-32]
+DEST[63-32] := SRC1[95-64] - SRC1[127-96]
+DEST[95-64] := SRC2[31-0] - SRC2[63-32]
+DEST[127-96] := SRC2[95-64] - SRC2[127-96]
+DEST[MAXVL-1:128] := 0
+
+

VPHSUBD (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] - SRC1[63:32]
+DEST[63:32] := SRC1[95:64] - SRC1[127:96]
+DEST[95:64] := SRC2[31:0] - SRC2[63:32]
+DEST[127:96] := SRC2[95:64] - SRC2[127:96]
+DEST[159:128] := SRC1[159:128] - SRC1[191:160]
+DEST[191:160] := SRC1[223:192] - SRC1[255:224]
+DEST[223:192] := SRC2[159:128] - SRC2[191:160]
+DEST[255:224] := SRC2[223:192] - SRC2[255:224]
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
PHSUBW __m64 _mm_hsub_pi16 (__m64 a, __m64 b)
+
+
PHSUBD __m64 _mm_hsub_pi32 (__m64 a, __m64 b)
+
+
(V)PHSUBW __m128i _mm_hsub_epi16 (__m128i a, __m128i b)
+
+
(V)PHSUBD __m128i _mm_hsub_epi32 (__m128i a, __m128i b)
+
+
VPHSUBW __m256i _mm256_hsub_epi16 (__m256i a, __m256i b)
+
+
VPHSUBD __m256i _mm256_hsub_epi32 (__m256i a, __m256i b)
+
+
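Illustrative only (not manual text), assuming SSSE3 and <immintrin.h>, showing the dword form:

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128i a = _mm_setr_epi32(10, 3, 50, 8);
    __m128i b = _mm_setr_epi32(7, 2, 100, 1);
    __m128i r = _mm_hsub_epi32(a, b);    /* (V)PHSUBD: 10-3, 50-8, 7-2, 100-1 */
    int out[4];
    _mm_storeu_si128((__m128i *)out, r);
    printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);   /* 7 42 5 99 */
    return 0;
}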

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions,” additionally:

+ + + +
#UD: If VEX.L = 1.
diff --git a/x86/pinsrb.pinsrd.pinsrq.html b/x86/pinsrb.pinsrd.pinsrq.html new file mode 100644 index 0000000..4a581e5 --- /dev/null +++ b/x86/pinsrb.pinsrd.pinsrq.html @@ -0,0 +1,179 @@ + +PINSRB/PINSRD/PINSRQ + — Insert Byte/Dword/Qword

PINSRB/PINSRD/PINSRQ + — Insert Byte/Dword/Qword

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
66 0F 3A 20 /r ib PINSRB xmm1, r32/m8, imm8 | A | V/V | SSE4_1 | Insert a byte integer value from r32/m8 into xmm1 at the destination element in xmm1 specified by imm8.
66 0F 3A 22 /r ib PINSRD xmm1, r/m32, imm8 | A | V/V | SSE4_1 | Insert a dword integer value from r/m32 into xmm1 at the destination element specified by imm8.
66 REX.W 0F 3A 22 /r ib PINSRQ xmm1, r/m64, imm8 | A | V/N.E. | SSE4_1 | Insert a qword integer value from r/m64 into xmm1 at the destination element specified by imm8.
VEX.128.66.0F3A.W0 20 /r ib VPINSRB xmm1, xmm2, r32/m8, imm8 | B | V1/V | AVX | Merge a byte integer value from r32/m8 and rest from xmm2 into xmm1 at the byte offset in imm8.
VEX.128.66.0F3A.W0 22 /r ib VPINSRD xmm1, xmm2, r/m32, imm8 | B | V/V | AVX | Insert a dword integer value from r32/m32 and rest from xmm2 into xmm1 at the dword offset in imm8.
VEX.128.66.0F3A.W1 22 /r ib VPINSRQ xmm1, xmm2, r/m64, imm8 | B | V/I2 | AVX | Insert a qword integer value from r64/m64 and rest from xmm2 into xmm1 at the qword offset in imm8.
EVEX.128.66.0F3A.WIG 20 /r ib VPINSRB xmm1, xmm2, r32/m8, imm8 | C | V/V | AVX512BW | Merge a byte integer value from r32/m8 and rest from xmm2 into xmm1 at the byte offset in imm8.
EVEX.128.66.0F3A.W0 22 /r ib VPINSRD xmm1, xmm2, r32/m32, imm8 | C | V/V | AVX512DQ | Insert a dword integer value from r32/m32 and rest from xmm2 into xmm1 at the dword offset in imm8.
EVEX.128.66.0F3A.W1 22 /r ib VPINSRQ xmm1, xmm2, r64/m64, imm8 | C | V/N.E.2 | AVX512DQ | Insert a qword integer value from r64/m64 and rest from xmm2 into xmm1 at the qword offset in imm8.
+
+

1. In 64-bit mode, VEX.W1 is ignored for VPINSRB (similar to legacy REX.W=1 prefix with PINSRB).

+

2. VEX.W/EVEX.W in non-64-bit mode is ignored; the instruction behaves as if the W0 version were used.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | N/A | ModRM:reg (w) | ModRM:r/m (r) | imm8 | N/A
B | N/A | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | imm8
C | Tuple1 Scalar | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | imm8
+

Description + ¶ +

+

Copies a byte/dword/qword from the source operand (second operand) and inserts it in the destination operand (first operand) at the location specified with the count operand (third operand). (The other elements in the destination register are left untouched.) The source operand can be a general-purpose register or a memory location. (When the source operand is a general-purpose register, PINSRB copies the low byte of the register.) The destination operand is an XMM register. The count operand is an 8-bit immediate. When specifying a qword [dword, byte] location in an XMM register, the 1 [2, 4] least-significant bit(s) of the count operand specify the location.

+

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15, R8-15). Use of REX.W permits the use of 64-bit general-purpose registers.

+

128-bit Legacy SSE version: Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: Bits (MAXVL-1:128) of the destination register are zeroed. VEX.L must be 0, otherwise the instruction will #UD. Attempt to execute VPINSRQ in non-64-bit mode will cause #UD.

+

EVEX.128 encoded version: Bits (MAXVL-1:128) of the destination register are zeroed. EVEX.L’L must be 0, otherwise the instruction will #UD.

+

Operation + ¶ +

+
CASE OF
+    PINSRB: SEL := COUNT[3:0];
+            MASK := (0FFH << (SEL * 8));
+            TEMP := ((SRC[7:0] << (SEL * 8)) AND MASK);
+    PINSRD: SEL := COUNT[1:0];
+            MASK := (0FFFFFFFFH << (SEL * 32));
+            TEMP := ((SRC << (SEL * 32)) AND MASK);
+    PINSRQ: SEL := COUNT[0];
+            MASK := (0FFFFFFFFFFFFFFFFH << (SEL * 64));
+            TEMP := ((SRC << (SEL * 64)) AND MASK);
+ESAC;
+        DEST := ((DEST AND NOT MASK) OR TEMP);
+
+

VPINSRB (VEX/EVEX Encoded Version) + ¶ +

+
SEL := imm8[3:0]
+DEST[127:0] := write_b_element(SEL, SRC2, SRC1)
+DEST[MAXVL-1:128] := 0
+
+

VPINSRD (VEX/EVEX Encoded Version) + ¶ +

+
SEL := imm8[1:0]
+DEST[127:0] := write_d_element(SEL, SRC2, SRC1)
+DEST[MAXVL-1:128] := 0
+
+

VPINSRQ (VEX/EVEX Encoded Version) + ¶ +

+
SEL := imm8[0]
+DEST[127:0] := write_q_element(SEL, SRC2, SRC1)
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
PINSRB __m128i _mm_insert_epi8 (__m128i s1, int s2, const int ndx);
+
+
PINSRD __m128i _mm_insert_epi32 (__m128i s2, int s, const int ndx);
+
+
PINSRQ __m128i _mm_insert_epi64(__m128i s2, __int64 s, const int ndx);
+
+
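A minimal hedged sketch (not from the manual), assuming SSE4.1 (e.g., -msse4.1) and <immintrin.h>; the untouched elements keep their previous contents:

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128i v = _mm_setzero_si128();
    v = _mm_insert_epi32(v, 0x12345678, 2);   /* PINSRD: write dword 2 */
    v = _mm_insert_epi8(v, 0x7F, 0);          /* PINSRB: write byte 0, other elements untouched */
    printf("%08x %02x\n", (unsigned)_mm_extract_epi32(v, 2), _mm_extract_epi8(v, 0));
    return 0;
}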

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-22, “Type 5 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-57, “Type E9NF Class Exception Conditions.”

+

Additionally:

+ + + +
#UD: If VEX.L = 1 or EVEX.L’L > 0.
diff --git a/x86/pinsrw.html b/x86/pinsrw.html new file mode 100644 index 0000000..4837076 --- /dev/null +++ b/x86/pinsrw.html @@ -0,0 +1,132 @@ + +PINSRW + — Insert Word

PINSRW + — Insert Word

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F C4 /r ib1 PINSRW mm, r32/m16, imm8 | A | V/V | SSE | Insert the low word from r32 or from m16 into mm at the word position specified by imm8.
66 0F C4 /r ib PINSRW xmm, r32/m16, imm8 | A | V/V | SSE2 | Move the low word of r32 or from m16 into xmm at the word position specified by imm8.
VEX.128.66.0F.W0 C4 /r ib VPINSRW xmm1, xmm2, r32/m16, imm8 | B | V2/V | AVX | Insert the word from r32/m16 at the offset indicated by imm8 into the value from xmm2 and store result in xmm1.
EVEX.128.66.0F.WIG C4 /r ib VPINSRW xmm1, xmm2, r32/m16, imm8 | C | V/V | AVX512BW | Insert the word from r32/m16 at the offset indicated by imm8 into the value from xmm2 and store result in xmm1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

2. In 64-bit mode, VEX.W1 is ignored for VPINSRW (similar to legacy REX.W=1 prefix in PINSRW).

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | N/A | ModRM:reg (w) | ModRM:r/m (r) | imm8 | N/A
B | N/A | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | imm8
C | Tuple1 Scalar | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | imm8
+

Description + ¶ +

+

Three operand MMX and SSE instructions:

+

Copies a word from the source operand and inserts it in the destination operand at the location specified with the count operand. (The other words in the destination register are left untouched.) The source operand can be a general-purpose register or a 16-bit memory location. (When the source operand is a general-purpose register, the low word of the register is copied.) The destination operand can be an MMX technology register or an XMM register. The count operand is an 8-bit immediate. When specifying a word location in an MMX technology register, the 2 least-significant bits of the count operand specify the location; for an XMM register, the 3 least-significant bits specify the location.

+

Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

Four operand AVX and AVX-512 instructions:

+

Combines a word from the first source operand with the second source operand, and inserts it in the destination operand at the location specified with the count operand. The second source operand can be a general-purpose register or a 16-bit memory location. (When the source operand is a general-purpose register, the low word of the register is copied.) The first source and destination operands are XMM registers. The count operand is an 8-bit immediate. When specifying a word location, the 3 least-significant bits specify the location.

+

Bits (MAXVL-1:128) of the destination YMM register are zeroed. VEX.L/EVEX.L’L must be 0, otherwise the instruction will #UD.

+

Operation + ¶ +

+

PINSRW dest, src, imm8 (MMX) + ¶ +

+
SEL := imm8[1:0]
+DEST.word[SEL] := src.word[0]
+
+

PINSRW dest, src, imm8 (SSE) + ¶ +

+
SEL := imm8[2:0]
+DEST.word[SEL] := src.word[0]
+
+

VPINSRW dest, src1, src2, imm8 (AVX/AVX512) + ¶ +

+
SEL := imm8[2:0]
+DEST := src1
+DEST.word[SEL] := src2.word[0]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
PINSRW __m64 _mm_insert_pi16 (__m64 a, int d, int n)
+
+
PINSRW __m128i _mm_insert_epi16 ( __m128i a, int b, int imm)
+
+
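Illustrative only (not manual text), assuming SSE2 and <immintrin.h>:

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128i v = _mm_set1_epi16(0);
    v = _mm_insert_epi16(v, 0x1234, 5);          /* PINSRW: insert a word at position 5 */
    printf("%04x\n", _mm_extract_epi16(v, 5));   /* prints 1234 */
    return 0;
}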

Flags Affected + ¶ +

+

None.

+

Numeric Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-22, “Type 5 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-57, “Type E9NF Class Exception Conditions.”

+

Additionally:

+ + + +
#UD: If VEX.L = 1 or EVEX.L’L > 0.
diff --git a/x86/pmaddubsw.html b/x86/pmaddubsw.html new file mode 100644 index 0000000..c259934 --- /dev/null +++ b/x86/pmaddubsw.html @@ -0,0 +1,186 @@ + +PMADDUBSW + — Multiply and Add Packed Signed and Unsigned Bytes

PMADDUBSW + — Multiply and Add Packed Signed and Unsigned Bytes

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F 38 04 /r1 PMADDUBSW mm1, mm2/m64 | A | V/V | SSSE3 | Multiply signed and unsigned bytes, add horizontal pair of signed words, pack saturated signed-words to mm1.
66 0F 38 04 /r PMADDUBSW xmm1, xmm2/m128 | A | V/V | SSSE3 | Multiply signed and unsigned bytes, add horizontal pair of signed words, pack saturated signed-words to xmm1.
VEX.128.66.0F38.WIG 04 /r VPMADDUBSW xmm1, xmm2, xmm3/m128 | B | V/V | AVX | Multiply signed and unsigned bytes, add horizontal pair of signed words, pack saturated signed-words to xmm1.
VEX.256.66.0F38.WIG 04 /r VPMADDUBSW ymm1, ymm2, ymm3/m256 | B | V/V | AVX2 | Multiply signed and unsigned bytes, add horizontal pair of signed words, pack saturated signed-words to ymm1.
EVEX.128.66.0F38.WIG 04 /r VPMADDUBSW xmm1 {k1}{z}, xmm2, xmm3/m128 | C | V/V | AVX512VL AVX512BW | Multiply signed and unsigned bytes, add horizontal pair of signed words, pack saturated signed-words to xmm1 under writemask k1.
EVEX.256.66.0F38.WIG 04 /r VPMADDUBSW ymm1 {k1}{z}, ymm2, ymm3/m256 | C | V/V | AVX512VL AVX512BW | Multiply signed and unsigned bytes, add horizontal pair of signed words, pack saturated signed-words to ymm1 under writemask k1.
EVEX.512.66.0F38.WIG 04 /r VPMADDUBSW zmm1 {k1}{z}, zmm2, zmm3/m512 | C | V/V | AVX512BW | Multiply signed and unsigned bytes, add horizontal pair of signed words, pack saturated signed-words to zmm1 under writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | N/A | ModRM:reg (r, w) | ModRM:r/m (r) | N/A | N/A
B | N/A | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | N/A
C | Full Mem | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | N/A
+

Description + ¶ +

+

(V)PMADDUBSW multiplies vertically each unsigned byte of the destination operand (first operand) with the corresponding signed byte of the source operand (second operand), producing intermediate signed 16-bit integers. Each adjacent pair of signed words is added and the saturated result is packed to the destination operand. For example, the lowest-order bytes (bits 7-0) in the source and destination operands are multiplied and the intermediate signed word result is added with the corresponding intermediate result from the 2nd lowest-order bytes (bits 15-8) of the operands; the sign-saturated result is stored in the lowest word of the destination register (15-0). The same operation is performed on the other pairs of adjacent bytes. Both operands can be MMX registers or XMM registers. When the source operand is a 128-bit memory operand, the operand must be aligned on a 16-byte boundary or a general-protection exception (#GP) will be generated.

+

In 64-bit mode and not encoded with VEX/EVEX, use the REX prefix to access XMM8-XMM15.

+

128-bit Legacy SSE version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding destination register remain unchanged.

+

VEX.128 and EVEX.128 encoded versions: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

VEX.256 and EVEX.256 encoded versions: The second source operand can be an YMM register or a 256-bit memory location. The first source and destination operands are YMM registers. Bits (MAXVL-1:256) of the corresponding ZMM register are zeroed.

+

EVEX.512 encoded version: The second source operand can be an ZMM register or a 512-bit memory location. The first source and destination operands are ZMM registers.

+

Operation + ¶ +

+

PMADDUBSW (With 64-bit Operands) + ¶ +

+
DEST[15-0] = SaturateToSignedWord(SRC[15-8]*DEST[15-8]+SRC[7-0]*DEST[7-0]);
+DEST[31-16] = SaturateToSignedWord(SRC[31-24]*DEST[31-24]+SRC[23-16]*DEST[23-16]);
+DEST[47-32] = SaturateToSignedWord(SRC[47-40]*DEST[47-40]+SRC[39-32]*DEST[39-32]);
+DEST[63-48] = SaturateToSignedWord(SRC[63-56]*DEST[63-56]+SRC[55-48]*DEST[55-48]);
+
+

PMADDUBSW (With 128-bit Operands) + ¶ +

+
DEST[15-0] = SaturateToSignedWord(SRC[15-8]* DEST[15-8]+SRC[7-0]*DEST[7-0]);
+// Repeat operation for 2nd through 7th word
+SRC1/DEST[127-112] = SaturateToSignedWord(SRC[127-120]*DEST[127-120]+ SRC[119-112]* DEST[119-112]);
+
+

VPMADDUBSW (VEX.128 Encoded Version) + ¶ +

+
DEST[15:0] := SaturateToSignedWord(SRC2[15:8]* SRC1[15:8]+SRC2[7:0]*SRC1[7:0])
+// Repeat operation for 2nd through 7th word
+DEST[127:112] := SaturateToSignedWord(SRC2[127:120]*SRC1[127:120]+ SRC2[119:112]* SRC1[119:112])
+DEST[MAXVL-1:128] := 0
+
+

VPMADDUBSW (VEX.256 Encoded Version) + ¶ +

+
DEST[15:0] := SaturateToSignedWord(SRC2[15:8]* SRC1[15:8]+SRC2[7:0]*SRC1[7:0])
+// Repeat operation for 2nd through 15th word
+DEST[255:240] := SaturateToSignedWord(SRC2[255:248]*SRC1[255:248]+ SRC2[247:240]* SRC1[247:240])
+DEST[MAXVL-1:256] := 0
+
+

VPMADDUBSW (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := SaturateToSignedWord(SRC2[i+15:i+8]* SRC1[i+15:i+8] + SRC2[i+7:i]*SRC1[i+7:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+15:i] = 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPMADDUBSW __m512i _mm512_maddubs_epi16( __m512i a, __m512i b);
+
+
VPMADDUBSW __m512i _mm512_mask_maddubs_epi16(__m512i s, __mmask32 k, __m512i a, __m512i b);
+
+
VPMADDUBSW __m512i _mm512_maskz_maddubs_epi16( __mmask32 k, __m512i a, __m512i b);
+
+
VPMADDUBSW __m256i _mm256_mask_maddubs_epi16(__m256i s, __mmask16 k, __m256i a, __m256i b);
+
+
VPMADDUBSW __m256i _mm256_maskz_maddubs_epi16( __mmask16 k, __m256i a, __m256i b);
+
+
VPMADDUBSW __m128i _mm_mask_maddubs_epi16(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPMADDUBSW __m128i _mm_maskz_maddubs_epi16( __mmask8 k, __m128i a, __m128i b);
+
+
PMADDUBSW __m64 _mm_maddubs_pi16 (__m64 a, __m64 b)
+
+
(V)PMADDUBSW __m128i _mm_maddubs_epi16 (__m128i a, __m128i b)
+
+
VPMADDUBSW __m256i _mm256_maddubs_epi16 (__m256i a, __m256i b)
+
+
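A hedged C sketch (not from the manual), assuming SSSE3 and <immintrin.h>; note that the first operand supplies the unsigned bytes and the second the signed bytes:

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128i u = _mm_setr_epi8(10, 20, 30, 40, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0);  /* unsigned bytes */
    __m128i s = _mm_setr_epi8(1, -2, 3, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0);     /* signed bytes */
    __m128i r = _mm_maddubs_epi16(u, s);   /* word0 = 10*1 + 20*(-2) = -30, word1 = 30*3 + 40*4 = 250 */
    short out[8];
    _mm_storeu_si128((__m128i *)out, r);
    printf("%d %d\n", out[0], out[1]);     /* -30 250 */
    return 0;
}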

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/pmaddwd.html b/x86/pmaddwd.html new file mode 100644 index 0000000..2aca9b7 --- /dev/null +++ b/x86/pmaddwd.html @@ -0,0 +1,294 @@ + +PMADDWD + — Multiply and Add Packed Integers

PMADDWD + — Multiply and Add Packed Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F F5 /r1 PMADDWD mm, mm/m64 | A | V/V | MMX | Multiply the packed words in mm by the packed words in mm/m64, add adjacent doubleword results, and store in mm.
66 0F F5 /r PMADDWD xmm1, xmm2/m128 | A | V/V | SSE2 | Multiply the packed word integers in xmm1 by the packed word integers in xmm2/m128, add adjacent doubleword results, and store in xmm1.
VEX.128.66.0F.WIG F5 /r VPMADDWD xmm1, xmm2, xmm3/m128 | B | V/V | AVX | Multiply the packed word integers in xmm2 by the packed word integers in xmm3/m128, add adjacent doubleword results, and store in xmm1.
VEX.256.66.0F.WIG F5 /r VPMADDWD ymm1, ymm2, ymm3/m256 | B | V/V | AVX2 | Multiply the packed word integers in ymm2 by the packed word integers in ymm3/m256, add adjacent doubleword results, and store in ymm1.
EVEX.128.66.0F.WIG F5 /r VPMADDWD xmm1 {k1}{z}, xmm2, xmm3/m128 | C | V/V | AVX512VL AVX512BW | Multiply the packed word integers in xmm2 by the packed word integers in xmm3/m128, add adjacent doubleword results, and store in xmm1 under writemask k1.
EVEX.256.66.0F.WIG F5 /r VPMADDWD ymm1 {k1}{z}, ymm2, ymm3/m256 | C | V/V | AVX512VL AVX512BW | Multiply the packed word integers in ymm2 by the packed word integers in ymm3/m256, add adjacent doubleword results, and store in ymm1 under writemask k1.
EVEX.512.66.0F.WIG F5 /r VPMADDWD zmm1 {k1}{z}, zmm2, zmm3/m512 | C | V/V | AVX512BW | Multiply the packed word integers in zmm2 by the packed word integers in zmm3/m512, add adjacent doubleword results, and store in zmm1 under writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | N/A | ModRM:reg (r, w) | ModRM:r/m (r) | N/A | N/A
B | N/A | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | N/A
C | Full Mem | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | N/A
+

Description + ¶ +

+

Multiplies the individual signed words of the destination operand (first operand) by the corresponding signed words of the source operand (second operand), producing temporary signed, doubleword results. The adjacent double-word results are then summed and stored in the destination operand. For example, the corresponding low-order words (15-0) and (31-16) in the source and destination operands are multiplied by one another and the double-word results are added together and stored in the low doubleword of the destination register (31-0). The same operation is performed on the other pairs of adjacent words. (Figure 4-11 shows this operation when using 64-bit operands).

+

The (V)PMADDWD instruction wraps around only in one situation: when the 2 pairs of words being operated on in a group are all 8000H. In this case, the result wraps around to 80000000H.

+

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

Legacy SSE version: The first source and destination operands are MMX registers. The second source operand is an MMX register or a 64-bit memory location.

+

128-bit Legacy SSE version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

VEX.256 encoded version: The second source operand can be an YMM register or a 256-bit memory location. The first source and destination operands are YMM registers.

+

EVEX.512 encoded version: The second source operand can be an ZMM register or a 512-bit memory location. The first source and destination operands are ZMM registers.

+
Figure 4-11. PMADDWD Execution Model Using 64-bit Operands (diagram not reproduced)

Operation + ¶ +

+

PMADDWD (With 64-bit Operands) + ¶ +

+
DEST[31:0] := (DEST[15:0] ∗ SRC[15:0]) + (DEST[31:16] ∗ SRC[31:16]);
+DEST[63:32] := (DEST[47:32] ∗ SRC[47:32]) + (DEST[63:48] ∗ SRC[63:48]);
+
+

PMADDWD (With 128-bit Operands) + ¶ +

+
DEST[31:0] := (DEST[15:0] ∗ SRC[15:0]) + (DEST[31:16] ∗ SRC[31:16]);
+DEST[63:32] := (DEST[47:32] ∗ SRC[47:32]) + (DEST[63:48] ∗ SRC[63:48]);
+DEST[95:64] := (DEST[79:64] ∗ SRC[79:64]) + (DEST[95:80] ∗ SRC[95:80]);
+DEST[127:96] := (DEST[111:96] ∗ SRC[111:96]) + (DEST[127:112] ∗ SRC[127:112]);
+
+

VPMADDWD (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := (SRC1[15:0] * SRC2[15:0]) + (SRC1[31:16] * SRC2[31:16])
+DEST[63:32] := (SRC1[47:32] * SRC2[47:32]) + (SRC1[63:48] * SRC2[63:48])
+DEST[95:64] := (SRC1[79:64] * SRC2[79:64]) + (SRC1[95:80] * SRC2[95:80])
+DEST[127:96] := (SRC1[111:96] * SRC2[111:96]) + (SRC1[127:112] * SRC2[127:112])
+DEST[MAXVL-1:128] := 0
+
+

VPMADDWD (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := (SRC1[15:0] * SRC2[15:0]) + (SRC1[31:16] * SRC2[31:16])
+DEST[63:32] := (SRC1[47:32] * SRC2[47:32]) + (SRC1[63:48] * SRC2[63:48])
+DEST[95:64] := (SRC1[79:64] * SRC2[79:64]) + (SRC1[95:80] * SRC2[95:80])
+DEST[127:96] := (SRC1[111:96] * SRC2[111:96]) + (SRC1[127:112] * SRC2[127:112])
+DEST[159:128] := (SRC1[143:128] * SRC2[143:128]) + (SRC1[159:144] * SRC2[159:144])
+DEST[191:160] := (SRC1[175:160] * SRC2[175:160]) + (SRC1[191:176] * SRC2[191:176])
+DEST[223:192] := (SRC1[207:192] * SRC2[207:192]) + (SRC1[223:208] * SRC2[223:208])
+DEST[255:224] := (SRC1[239:224] * SRC2[239:224]) + (SRC1[255:240] * SRC2[255:240])
+DEST[MAXVL-1:256] := 0
+
+

VPMADDWD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := (SRC2[i+31:i+16]* SRC1[i+31:i+16]) + (SRC2[i+15:i]*SRC1[i+15:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPMADDWD __m512i _mm512_madd_epi16( __m512i a, __m512i b);
+
+
VPMADDWD __m512i _mm512_mask_madd_epi16(__m512i s, __mmask32 k, __m512i a, __m512i b);
+
+
VPMADDWD __m512i _mm512_maskz_madd_epi16( __mmask32 k, __m512i a, __m512i b);
+
+
VPMADDWD __m256i _mm256_mask_madd_epi16(__m256i s, __mmask16 k, __m256i a, __m256i b);
+
+
VPMADDWD __m256i _mm256_maskz_madd_epi16( __mmask16 k, __m256i a, __m256i b);
+
+
VPMADDWD __m128i _mm_mask_madd_epi16(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPMADDWD __m128i _mm_maskz_madd_epi16( __mmask8 k, __m128i a, __m128i b);
+
+
PMADDWD __m64 _mm_madd_pi16(__m64 m1, __m64 m2)
+
+
(V)PMADDWD __m128i _mm_madd_epi16 ( __m128i a, __m128i b)
+
+
VPMADDWD __m256i _mm256_madd_epi16 ( __m256i a, __m256i b)
+
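As a usage sketch (not part of this reference; the helper name dot16 is hypothetical, n is assumed to be a multiple of 8, and only SSE2 is required), PMADDWD is the natural inner step of a 16-bit integer dot product, since each result dword already holds the sum of one pair of products:

#include <emmintrin.h>

/* Dot product of two int16 arrays of length n (n a multiple of 8 is assumed).
   The four dword partial sums may overflow for very long inputs. */
static int dot16(const short *x, const short *y, int n) {
    __m128i acc = _mm_setzero_si128();
    for (int i = 0; i < n; i += 8) {
        __m128i a = _mm_loadu_si128((const __m128i *)(x + i));
        __m128i b = _mm_loadu_si128((const __m128i *)(y + i));
        acc = _mm_add_epi32(acc, _mm_madd_epi16(a, b));  /* four partial dword sums */
    }
    int t[4];
    _mm_storeu_si128((__m128i *)t, acc);
    return t[0] + t[1] + t[2] + t[3];
}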
+

Flags Affected + ¶ +

+

None.

+

Numeric Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/pmaxsb.pmaxsw.pmaxsd.pmaxsq.html b/x86/pmaxsb.pmaxsw.pmaxsd.pmaxsq.html new file mode 100644 index 0000000..f6716eb --- /dev/null +++ b/x86/pmaxsb.pmaxsw.pmaxsd.pmaxsq.html @@ -0,0 +1,526 @@ + +PMAXSB/PMAXSW/PMAXSD/PMAXSQ + — Maximum of Packed Signed Integers

PMAXSB/PMAXSW/PMAXSD/PMAXSQ + — Maximum of Packed Signed Integers

Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F EE /r1 PMAXSW mm1, mm2/m64AV/VSSECompare signed word integers in mm2/m64 and mm1 and return maximum values.
66 0F 38 3C /r PMAXSB xmm1, xmm2/m128AV/VSSE4_1Compare packed signed byte integers in xmm1 and xmm2/m128 and store packed maximum values in xmm1.
66 0F EE /r PMAXSW xmm1, xmm2/m128AV/VSSE2Compare packed signed word integers in xmm2/m128 and xmm1 and stores maximum packed values in xmm1.
66 0F 38 3D /r PMAXSD xmm1, xmm2/m128AV/VSSE4_1Compare packed signed dword integers in xmm1 and xmm2/m128 and store packed maximum values in xmm1.
VEX.128.66.0F38.WIG 3C /r VPMAXSB xmm1, xmm2, xmm3/m128BV/VAVXCompare packed signed byte integers in xmm2 and xmm3/m128 and store packed maximum values in xmm1.
VEX.128.66.0F.WIG EE /r VPMAXSW xmm1, xmm2, xmm3/m128BV/VAVXCompare packed signed word integers in xmm3/m128 and xmm2 and store packed maximum values in xmm1.
VEX.128.66.0F38.WIG 3D /r VPMAXSD xmm1, xmm2, xmm3/m128BV/VAVXCompare packed signed dword integers in xmm2 and xmm3/m128 and store packed maximum values in xmm1.
VEX.256.66.0F38.WIG 3C /r VPMAXSB ymm1, ymm2, ymm3/m256BV/VAVX2Compare packed signed byte integers in ymm2 and ymm3/m256 and store packed maximum values in ymm1.
VEX.256.66.0F.WIG EE /r VPMAXSW ymm1, ymm2, ymm3/m256BV/VAVX2Compare packed signed word integers in ymm3/m256 and ymm2 and store packed maximum values in ymm1.
VEX.256.66.0F38.WIG 3D /r VPMAXSD ymm1, ymm2, ymm3/m256BV/VAVX2Compare packed signed dword integers in ymm2 and ymm3/m256 and store packed maximum values in ymm1.
EVEX.128.66.0F38.WIG 3C /r VPMAXSB xmm1{k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWCompare packed signed byte integers in xmm2 and xmm3/m128 and store packed maximum values in xmm1 under writemask k1.
EVEX.256.66.0F38.WIG 3C /r VPMAXSB ymm1{k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWCompare packed signed byte integers in ymm2 and ymm3/m256 and store packed maximum values in ymm1 under writemask k1.
EVEX.512.66.0F38.WIG 3C /r VPMAXSB zmm1{k1}{z}, zmm2, zmm3/m512CV/VAVX512BWCompare packed signed byte integers in zmm2 and zmm3/m512 and store packed maximum values in zmm1 under writemask k1.
EVEX.128.66.0F.WIG EE /r VPMAXSW xmm1{k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWCompare packed signed word integers in xmm2 and xmm3/m128 and store packed maximum values in xmm1 under writemask k1.
EVEX.256.66.0F.WIG EE /r VPMAXSW ymm1{k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWCompare packed signed word integers in ymm2 and ymm3/m256 and store packed maximum values in ymm1 under writemask k1.
EVEX.512.66.0F.WIG EE /r VPMAXSW zmm1{k1}{z}, zmm2, zmm3/m512CV/VAVX512BWCompare packed signed word integers in zmm2 and zmm3/m512 and store packed maximum values in zmm1 under writemask k1.
EVEX.128.66.0F38.W0 3D /r VPMAXSD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstDV/VAVX512VL AVX512FCompare packed signed dword integers in xmm2 and xmm3/m128/m32bcst and store packed maximum values in xmm1 using writemask k1.
EVEX.256.66.0F38.W0 3D /r VPMAXSD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstDV/VAVX512VL AVX512FCompare packed signed dword integers in ymm2 and ymm3/m256/m32bcst and store packed maximum values in ymm1 using writemask k1.
EVEX.512.66.0F38.W0 3D /r VPMAXSD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstDV/VAVX512FCompare packed signed dword integers in zmm2 and zmm3/m512/m32bcst and store packed maximum values in zmm1 using writemask k1.
EVEX.128.66.0F38.W1 3D /r VPMAXSQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstDV/VAVX512VL AVX512FCompare packed signed qword integers in xmm2 and xmm3/m128/m64bcst and store packed maximum values in xmm1 using writemask k1.
EVEX.256.66.0F38.W1 3D /r VPMAXSQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstDV/VAVX512VL AVX512FCompare packed signed qword integers in ymm2 and ymm3/m256/m64bcst and store packed maximum values in ymm1 using writemask k1.
EVEX.512.66.0F38.W1 3D /r VPMAXSQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstDV/VAVX512FCompare packed signed qword integers in zmm2 and zmm3/m512/m64bcst and store packed maximum values in zmm1 using writemask k1.
+

Instruction Operand Encoding + ¶ +

Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
DFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD compare of the packed signed byte, word, dword or qword integers in the second source operand and the first source operand and returns the maximum value for each pair of integers to the destination operand.

+

Legacy SSE version PMAXSW: The source operand can be an MMX technology register or a 64-bit memory location. The destination operand can be an MMX technology register.

+

128-bit Legacy SSE version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

VEX.256 encoded version: The second source operand can be an YMM register or a 256-bit memory location. The first source and destination operands are YMM registers. Bits (MAXVL-1:256) of the corresponding destination register are zeroed.

+

EVEX encoded VPMAXSD/Q: The first source operand is a ZMM/YMM/XMM register; The second source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32/64-bit memory location. The destination operand is conditionally updated based on writemask k1.

+

EVEX encoded VPMAXSB/W: The first source operand is a ZMM/YMM/XMM register; The second source operand is a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination operand is conditionally updated based on writemask k1.
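A brief illustrative C sketch of the unmasked forms (not part of this reference; assumes <immintrin.h>, SSE4.1 for _mm_max_epi8 and AVX2 for _mm256_max_epi32):

#include <immintrin.h>

/* Per-lane signed maxima: PMAXSB (SSE4.1) and VPMAXSD in its VEX.256 form (AVX2). */
__m128i max_i8(__m128i a, __m128i b)  { return _mm_max_epi8(a, b); }
__m256i max_i32(__m256i a, __m256i b) { return _mm256_max_epi32(a, b); }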

+

Operation + ¶ +

+

PMAXSW (64-bit Operands) + ¶ +

+
IF DEST[15:0] > SRC[15:0] THEN
+    DEST[15:0] := DEST[15:0];
+ELSE
+    DEST[15:0] := SRC[15:0]; FI;
+(* Repeat operation for 2nd and 3rd words in source and destination operands *)
+IF DEST[63:48] > SRC[63:48] THEN
+    DEST[63:48] := DEST[63:48];
+ELSE
+    DEST[63:48] := SRC[63:48]; FI;
+
+

PMAXSB (128-bit Legacy SSE Version) + ¶ +

+
    IF DEST[7:0] > SRC[7:0] THEN
+        DEST[7:0] := DEST[7:0];
+    ELSE
+        DEST[7:0] := SRC[7:0]; FI;
+    (* Repeat operation for 2nd through 15th bytes in source and destination operands *)
+    IF DEST[127:120] >SRC[127:120] THEN
+        DEST[127:120] := DEST[127:120];
+    ELSE
+        DEST[127:120] := SRC[127:120]; FI;
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPMAXSB (VEX.128 Encoded Version) + ¶ +

+
    IF SRC1[7:0] > SRC2[7:0] THEN
+        DEST[7:0] := SRC1[7:0];
+    ELSE
+        DEST[7:0] := SRC2[7:0]; FI;
+    (* Repeat operation for 2nd through 15th bytes in source and destination operands *)
+    IF SRC1[127:120] >SRC2[127:120] THEN
+        DEST[127:120] := SRC1[127:120];
+    ELSE
+        DEST[127:120] := SRC2[127:120]; FI;
+DEST[MAXVL-1:128] := 0
+
+

VPMAXSB (VEX.256 Encoded Version) + ¶ +

+
    IF SRC1[7:0] > SRC2[7:0] THEN
+        DEST[7:0] := SRC1[7:0];
+    ELSE
+        DEST[7:0] := SRC2[7:0]; FI;
+    (* Repeat operation for 2nd through 31st bytes in source and destination operands *)
+    IF SRC1[255:248] >SRC2[255:248] THEN
+        DEST[255:248] := SRC1[255:248];
+    ELSE
+        DEST[255:248] := SRC2[255:248]; FI;
+DEST[MAXVL-1:256] := 0
+
+

VPMAXSB (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k1[j] OR *no writemask* THEN
+        IF SRC1[i+7:i] > SRC2[i+7:i]
+            THEN DEST[i+7:i] := SRC1[i+7:i];
+            ELSE DEST[i+7:i] := SRC2[i+7:i];
+        FI;
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+7:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

PMAXSW (128-bit Legacy SSE Version) + ¶ +

+
    IF DEST[15:0] >SRC[15:0] THEN
+        DEST[15:0] := DEST[15:0];
+    ELSE
+        DEST[15:0] := SRC[15:0]; FI;
+    (* Repeat operation for 2nd through 7th words in source and destination operands *)
+    IF DEST[127:112] >SRC[127:112] THEN
+        DEST[127:112] := DEST[127:112];
+    ELSE
+        DEST[127:112] := SRC[127:112]; FI;
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPMAXSW (VEX.128 Encoded Version) + ¶ +

+
    IF SRC1[15:0] > SRC2[15:0] THEN
+        DEST[15:0] := SRC1[15:0];
+    ELSE
+        DEST[15:0] := SRC2[15:0]; FI;
+    (* Repeat operation for 2nd through 7th words in source and destination operands *)
+    IF SRC1[127:112] >SRC2[127:112] THEN
+        DEST[127:112] := SRC1[127:112];
+    ELSE
+        DEST[127:112] := SRC2[127:112]; FI;
+DEST[MAXVL-1:128] := 0
+
+

VPMAXSW (VEX.256 Encoded Version) + ¶ +

+
    IF SRC1[15:0] > SRC2[15:0] THEN
+        DEST[15:0] := SRC1[15:0];
+    ELSE
+        DEST[15:0] := SRC2[15:0]; FI;
+    (* Repeat operation for 2nd through 15th words in source and destination operands *)
+    IF SRC1[255:240] >SRC2[255:240] THEN
+        DEST[255:240] := SRC1[255:240];
+    ELSE
+        DEST[255:240] := SRC2[255:240]; FI;
+DEST[MAXVL-1:256] := 0
+
+

VPMAXSW (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask* THEN
+        IF SRC1[i+15:i] > SRC2[i+15:i]
+            THEN DEST[i+15:i] := SRC1[i+15:i];
+            ELSE DEST[i+15:i] := SRC2[i+15:i];
+        FI;
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

PMAXSD (128-bit Legacy SSE Version) + ¶ +

+
    IF DEST[31:0] >SRC[31:0] THEN
+        DEST[31:0] := DEST[31:0];
+    ELSE
+        DEST[31:0] := SRC[31:0]; FI;
+    (* Repeat operation for 2nd through 3rd dwords in source and destination operands *)
+    IF DEST[127:96] >SRC[127:96] THEN
+        DEST[127:96] := DEST[127:96];
+    ELSE
+        DEST[127:96] := SRC[127:96]; FI;
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPMAXSD (VEX.128 Encoded Version) + ¶ +

+
    IF SRC1[31:0] > SRC2[31:0] THEN
+        DEST[31:0] := SRC1[31:0];
+    ELSE
+        DEST[31:0] := SRC2[31:0]; FI;
+    (* Repeat operation for 2nd through 3rd dwords in source and destination operands *)
+    IF SRC1[127:96] > SRC2[127:96] THEN
+        DEST[127:96] := SRC1[127:96];
+    ELSE
+        DEST[127:96] := SRC2[127:96]; FI;
+DEST[MAXVL-1:128] := 0
+
+

VPMAXSD (VEX.256 Encoded Version) + ¶ +

+
    IF SRC1[31:0] > SRC2[31:0] THEN
+        DEST[31:0] := SRC1[31:0];
+    ELSE
+        DEST[31:0] := SRC2[31:0]; FI;
+    (* Repeat operation for 2nd through 7th dwords in source and destination operands *)
+    IF SRC1[255:224] > SRC2[255:224] THEN
+        DEST[255:224] := SRC1[255:224];
+    ELSE
+        DEST[255:224] := SRC2[255:224]; FI;
+DEST[MAXVL-1:256] := 0
+
+

VPMAXSD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+        IF (EVEX.b = 1) AND (SRC2 *is memory*)
+            THEN
+                IF SRC1[i+31:i] > SRC2[31:0]
+                    THEN DEST[i+31:i] := SRC1[i+31:i];
+                    ELSE DEST[i+31:i] := SRC2[31:0];
+                FI;
+            ELSE
+                IF SRC1[i+31:i] > SRC2[i+31:i]
+                    THEN DEST[i+31:i] := SRC1[i+31:i];
+                    ELSE DEST[i+31:i] := SRC2[i+31:i];
+            FI;
+        FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE DEST[i+31:i] := 0
+                        ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPMAXSQ (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+        IF (EVEX.b = 1) AND (SRC2 *is memory*)
+            THEN
+                IF SRC1[i+63:i] > SRC2[63:0]
+                    THEN DEST[i+63:i] := SRC1[i+63:i];
+                    ELSE DEST[i+63:i] := SRC2[63:0];
+                FI;
+            ELSE
+                IF SRC1[i+63:i] > SRC2[i+63:i]
+                    THEN DEST[i+63:i] := SRC1[i+63:i];
+                    ELSE DEST[i+63:i] := SRC2[i+63:i];
+            FI;
+        FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPMAXSB __m512i _mm512_max_epi8( __m512i a, __m512i b);
+
+
VPMAXSB __m512i _mm512_mask_max_epi8(__m512i s, __mmask64 k, __m512i a, __m512i b);
+
+
VPMAXSB __m512i _mm512_maskz_max_epi8( __mmask64 k, __m512i a, __m512i b);
+
+
VPMAXSW __m512i _mm512_max_epi16( __m512i a, __m512i b);
+
+
VPMAXSW __m512i _mm512_mask_max_epi16(__m512i s, __mmask32 k, __m512i a, __m512i b);
+
+
VPMAXSW __m512i _mm512_maskz_max_epi16( __mmask32 k, __m512i a, __m512i b);
+
+
VPMAXSB __m256i _mm256_mask_max_epi8(__m256i s, __mmask32 k, __m256i a, __m256i b);
+
+
VPMAXSB __m256i _mm256_maskz_max_epi8( __mmask32 k, __m256i a, __m256i b);
+
+
VPMAXSW __m256i _mm256_mask_max_epi16(__m256i s, __mmask16 k, __m256i a, __m256i b);
+
+
VPMAXSW __m256i _mm256_maskz_max_epi16( __mmask16 k, __m256i a, __m256i b);
+
+
VPMAXSB __m128i _mm_mask_max_epi8(__m128i s, __mmask16 k, __m128i a, __m128i b);
+
+
VPMAXSB __m128i _mm_maskz_max_epi8( __mmask16 k, __m128i a, __m128i b);
+
+
VPMAXSW __m128i _mm_mask_max_epi16(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPMAXSW __m128i _mm_maskz_max_epi16( __mmask8 k, __m128i a, __m128i b);
+
+
VPMAXSD __m256i _mm256_mask_max_epi32(__m256i s, __mmask16 k, __m256i a, __m256i b);
+
+
VPMAXSD __m256i _mm256_maskz_max_epi32( __mmask16 k, __m256i a, __m256i b);
+
+
VPMAXSQ __m256i _mm256_mask_max_epi64(__m256i s, __mmask8 k, __m256i a, __m256i b);
+
+
VPMAXSQ __m256i _mm256_maskz_max_epi64( __mmask8 k, __m256i a, __m256i b);
+
+
VPMAXSD __m128i _mm_mask_max_epi32(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPMAXSD __m128i _mm_maskz_max_epi32( __mmask8 k, __m128i a, __m128i b);
+
+
VPMAXSQ __m128i _mm_mask_max_epi64(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPMAXSQ __m128i _mm_maskz_max_epi64( __mmask8 k, __m128i a, __m128i b);
+
+
VPMAXSD __m512i _mm512_max_epi32( __m512i a, __m512i b);
+
+
VPMAXSD __m512i _mm512_mask_max_epi32(__m512i s, __mmask16 k, __m512i a, __m512i b);
+
+
VPMAXSD __m512i _mm512_maskz_max_epi32( __mmask16 k, __m512i a, __m512i b);
+
+
VPMAXSQ __m512i _mm512_max_epi64( __m512i a, __m512i b);
+
+
VPMAXSQ __m512i _mm512_mask_max_epi64(__m512i s, __mmask8 k, __m512i a, __m512i b);
+
+
VPMAXSQ __m512i _mm512_maskz_max_epi64( __mmask8 k, __m512i a, __m512i b);
+
+
(V)PMAXSB __m128i _mm_max_epi8 ( __m128i a, __m128i b);
+
+
(V)PMAXSW __m128i _mm_max_epi16 ( __m128i a, __m128i b)
+
+
(V)PMAXSD __m128i _mm_max_epi32 ( __m128i a, __m128i b);
+
+
VPMAXSB __m256i _mm256_max_epi8 ( __m256i a, __m256i b);
+
+
VPMAXSW __m256i _mm256_max_epi16 ( __m256i a, __m256i b)
+
+
VPMAXSD __m256i _mm256_max_epi32 ( __m256i a, __m256i b);
+
+
PMAXSW __m64 _mm_max_pi16(__m64 a, __m64 b)
+
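To make the writemask behavior concrete (an illustrative sketch, not part of this reference; assumes AVX-512F with AVX-512VL; the function names are hypothetical):

#include <immintrin.h>

/* Merge-masking: dword lanes with k=0 keep the value from src.
   Zero-masking:  dword lanes with k=0 are cleared to zero.   */
__m128i max32_merge(__m128i src, __mmask8 k, __m128i a, __m128i b) {
    return _mm_mask_max_epi32(src, k, a, b);
}
__m128i max32_zero(__mmask8 k, __m128i a, __m128i b) {
    return _mm_maskz_max_epi32(k, a, b);
}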
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded VPMAXSD/Q, see Table 2-49, “Type E4 Class Exception Conditions.”

+

EVEX-encoded VPMAXSB/W, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/pmaxub.pmaxuw.html b/x86/pmaxub.pmaxuw.html new file mode 100644 index 0000000..2854c33 --- /dev/null +++ b/x86/pmaxub.pmaxuw.html @@ -0,0 +1,330 @@ + +PMAXUB/PMAXUW + — Maximum of Packed Unsigned Integers

PMAXUB/PMAXUW + — Maximum of Packed Unsigned Integers

Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F DE /r1 PMAXUB mm1, mm2/m64AV/VSSECompare unsigned byte integers in mm2/m64 and mm1 and returns maximum values.
66 0F DE /r PMAXUB xmm1, xmm2/m128AV/VSSE2Compare packed unsigned byte integers in xmm1 and xmm2/m128 and store packed maximum values in xmm1.
66 0F 38 3E/r PMAXUW xmm1, xmm2/m128AV/VSSE4_1Compare packed unsigned word integers in xmm2/m128 and xmm1 and stores maximum packed values in xmm1.
VEX.128.66.0F DE /r VPMAXUB xmm1, xmm2, xmm3/m128BV/VAVXCompare packed unsigned byte integers in xmm2 and xmm3/m128 and store packed maximum values in xmm1.
VEX.128.66.0F38 3E/r VPMAXUW xmm1, xmm2, xmm3/m128BV/VAVXCompare packed unsigned word integers in xmm3/m128 and xmm2 and store maximum packed values in xmm1.
VEX.256.66.0F DE /r VPMAXUB ymm1, ymm2, ymm3/m256BV/VAVX2Compare packed unsigned byte integers in ymm2 and ymm3/m256 and store packed maximum values in ymm1.
VEX.256.66.0F38 3E/r VPMAXUW ymm1, ymm2, ymm3/m256BV/VAVX2Compare packed unsigned word integers in ymm3/m256 and ymm2 and store maximum packed values in ymm1.
EVEX.128.66.0F.WIG DE /r VPMAXUB xmm1{k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWCompare packed unsigned byte integers in xmm2 and xmm3/m128 and store packed maximum values in xmm1 under writemask k1.
EVEX.256.66.0F.WIG DE /r VPMAXUB ymm1{k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWCompare packed unsigned byte integers in ymm2 and ymm3/m256 and store packed maximum values in ymm1 under writemask k1.
EVEX.512.66.0F.WIG DE /r VPMAXUB zmm1{k1}{z}, zmm2, zmm3/m512CV/VAVX512BWCompare packed unsigned byte integers in zmm2 and zmm3/m512 and store packed maximum values in zmm1 under writemask k1.
EVEX.128.66.0F38.WIG 3E /r VPMAXUW xmm1{k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWCompare packed unsigned word integers in xmm2 and xmm3/m128 and store packed maximum values in xmm1 under writemask k1.
EVEX.256.66.0F38.WIG 3E /r VPMAXUW ymm1{k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWCompare packed unsigned word integers in ymm2 and ymm3/m256 and store packed maximum values in ymm1 under writemask k1.
EVEX.512.66.0F38.WIG 3E /r VPMAXUW zmm1{k1}{z}, zmm2, zmm3/m512CV/VAVX512BWCompare packed unsigned word integers in zmm2 and zmm3/m512 and store packed maximum values in zmm1 under writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD compare of the packed unsigned byte or word integers in the second source operand and the first source operand and returns the maximum value for each pair of integers to the destination operand.

+

Legacy SSE version PMAXUB: The source operand can be an MMX technology register or a 64-bit memory location. The destination operand can be an MMX technology register.

+

128-bit Legacy SSE version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding destination register remain unchanged.

+

VEX.128 encoded version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

VEX.256 encoded version: The second source operand can be an YMM register or a 256-bit memory location. The first source and destination operands are YMM registers.

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register; The second source operand is a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination operand is conditionally updated based on writemask k1.
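An illustrative C sketch of the 128-bit unsigned byte form (not part of this reference; assumes SSE2; the helper name lighten is hypothetical):

#include <emmintrin.h>

/* Per-byte unsigned maximum of two 16-pixel rows (PMAXUB), e.g. a "lighten" blend. */
__m128i lighten(__m128i row_a, __m128i row_b) {
    return _mm_max_epu8(row_a, row_b);
}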

+

Operation + ¶ +

+

PMAXUB (64-bit Operands) + ¶ +

+
IF DEST[7:0] > SRC[7:0] THEN
+    DEST[7:0] := DEST[7:0];
+ELSE
+    DEST[7:0] := SRC[7:0]; FI;
+(* Repeat operation for 2nd through 7th bytes in source and destination operands *)
+IF DEST[63:56] > SRC[63:56] THEN
+    DEST[63:56] := DEST[63:56];
+ELSE
+    DEST[63:56] := SRC[63:56]; FI;
+
+

PMAXUB (128-bit Legacy SSE Version) + ¶ +

+
    IF DEST[7:0] >SRC[7:0] THEN
+        DEST[7:0] := DEST[7:0];
+    ELSE
+        DEST[7:0] := SRC[7:0]; FI;
+    (* Repeat operation for 2nd through 15th bytes in source and destination operands *)
+    IF DEST[127:120] >SRC[127:120] THEN
+        DEST[127:120] := DEST[127:120];
+    ELSE
+        DEST[127:120] := SRC[127:120]; FI;
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPMAXUB (VEX.128 Encoded Version) + ¶ +

+
    IF SRC1[7:0] >SRC2[7:0] THEN
+        DEST[7:0] := SRC1[7:0];
+    ELSE
+        DEST[7:0] := SRC2[7:0]; FI;
+    (* Repeat operation for 2nd through 15th bytes in source and destination operands *)
+    IF SRC1[127:120] >SRC2[127:120] THEN
+        DEST[127:120] := SRC1[127:120];
+    ELSE
+        DEST[127:120] := SRC2[127:120]; FI;
+DEST[MAXVL-1:128] := 0
+
+

VPMAXUB (VEX.256 Encoded Version) + ¶ +

+
    IF SRC1[7:0] >SRC2[7:0] THEN
+        DEST[7:0] := SRC1[7:0];
+    ELSE
+        DEST[7:0] := SRC2[7:0]; FI;
+    (* Repeat operation for 2nd through 31st bytes in source and destination operands *)
+    IF SRC1[255:248] >SRC2[255:248] THEN
+        DEST[255:248] := SRC1[255:248];
+    ELSE
+        DEST[255:248] := SRC2[255:248]; FI;
+DEST[MAXVL-1:256] := 0
+
+

VPMAXUB (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k1[j] OR *no writemask* THEN
+        IF SRC1[i+7:i] > SRC2[i+7:i]
+            THEN DEST[i+7:i] := SRC1[i+7:i];
+            ELSE DEST[i+7:i] := SRC2[i+7:i];
+        FI;
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+7:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

PMAXUW (128-bit Legacy SSE Version) + ¶ +

+
    IF DEST[15:0] >SRC[15:0] THEN
+        DEST[15:0] := DEST[15:0];
+    ELSE
+        DEST[15:0] := SRC[15:0]; FI;
+    (* Repeat operation for 2nd through 7th words in source and destination operands *)
+    IF DEST[127:112] >SRC[127:112] THEN
+        DEST[127:112] := DEST[127:112];
+    ELSE
+        DEST[127:112] := SRC[127:112]; FI;
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPMAXUW (VEX.128 Encoded Version) + ¶ +

+
    IF SRC1[15:0] > SRC2[15:0] THEN
+        DEST[15:0] := SRC1[15:0];
+    ELSE
+        DEST[15:0] := SRC2[15:0]; FI;
+    (* Repeat operation for 2nd through 7th words in source and destination operands *)
+    IF SRC1[127:112] >SRC2[127:112] THEN
+        DEST[127:112] := SRC1[127:112];
+    ELSE
+        DEST[127:112] := SRC2[127:112]; FI;
+DEST[MAXVL-1:128] := 0
+
+

VPMAXUW (VEX.256 Encoded Version) + ¶ +

+
    IF SRC1[15:0] > SRC2[15:0] THEN
+        DEST[15:0] := SRC1[15:0];
+    ELSE
+        DEST[15:0] := SRC2[15:0]; FI;
+    (* Repeat operation for 2nd through 15th words in source and destination operands *)
+    IF SRC1[255:240] >SRC2[255:240] THEN
+        DEST[255:240] := SRC1[255:240];
+    ELSE
+        DEST[255:240] := SRC2[255:240]; FI;
+DEST[MAXVL-1:256] := 0
+
+

VPMAXUW (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask* THEN
+        IF SRC1[i+15:i] > SRC2[i+15:i]
+            THEN DEST[i+15:i] := SRC1[i+15:i];
+            ELSE DEST[i+15:i] := SRC2[i+15:i];
+        FI;
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPMAXUB __m512i _mm512_max_epu8( __m512i a, __m512i b);
+
+
VPMAXUB __m512i _mm512_mask_max_epu8(__m512i s, __mmask64 k, __m512i a, __m512i b);
+
+
VPMAXUB __m512i _mm512_maskz_max_epu8( __mmask64 k, __m512i a, __m512i b);
+
+
VPMAXUW __m512i _mm512_max_epu16( __m512i a, __m512i b);
+
+
VPMAXUW __m512i _mm512_mask_max_epu16(__m512i s, __mmask32 k, __m512i a, __m512i b);
+
+
VPMAXUW __m512i _mm512_maskz_max_epu16( __mmask32 k, __m512i a, __m512i b);
+
+
VPMAXUB __m256i _mm256_mask_max_epu8(__m256i s, __mmask32 k, __m256i a, __m256i b);
+
+
VPMAXUB __m256i _mm256_maskz_max_epu8( __mmask32 k, __m256i a, __m256i b);
+
+
VPMAXUW __m256i _mm256_mask_max_epu16(__m256i s, __mmask16 k, __m256i a, __m256i b);
+
+
VPMAXUW __m256i _mm256_maskz_max_epu16( __mmask16 k, __m256i a, __m256i b);
+
+
VPMAXUB __m128i _mm_mask_max_epu8(__m128i s, __mmask16 k, __m128i a, __m128i b);
+
+
VPMAXUB __m128i _mm_maskz_max_epu8( __mmask16 k, __m128i a, __m128i b);
+
+
VPMAXUW __m128i _mm_mask_max_epu16(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPMAXUW __m128i _mm_maskz_max_epu16( __mmask8 k, __m128i a, __m128i b);
+
+
(V)PMAXUB __m128i _mm_max_epu8 ( __m128i a, __m128i b);
+
+
(V)PMAXUW __m128i _mm_max_epu16 ( __m128i a, __m128i b)
+
+
VPMAXUB __m256i _mm256_max_epu8 ( __m256i a, __m256i b);
+
+
VPMAXUW __m256i _mm256_max_epu16 ( __m256i a, __m256i b);
+
+
PMAXUB __m64 _mm_max_pu8(__m64 a, __m64 b);
+
+
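The signed/unsigned distinction matters once the high bit of a byte is set; a small sketch (illustrative only, not part of this reference; assumes SSE4.1 for _mm_max_epi8):

#include <smmintrin.h>   /* SSE4.1; also pulls in the SSE2 intrinsics */

/* 0FFH is 255 when unsigned but -1 when signed, so the two maxima disagree. */
void compare_maxima(void) {
    __m128i a = _mm_set1_epi8((char)0xFF);  /* 255 unsigned / -1 signed */
    __m128i b = _mm_set1_epi8(1);
    __m128i u = _mm_max_epu8(a, b);         /* every byte = 0FFH */
    __m128i s = _mm_max_epi8(a, b);         /* every byte = 01H  */
    (void)u; (void)s;                       /* silence unused-variable warnings */
}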

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/pmaxud.pmaxuq.html b/x86/pmaxud.pmaxuq.html new file mode 100644 index 0000000..eb38fef --- /dev/null +++ b/x86/pmaxud.pmaxuq.html @@ -0,0 +1,258 @@ + +PMAXUD/PMAXUQ + — Maximum of Packed Unsigned Integers

PMAXUD/PMAXUQ + — Maximum of Packed Unsigned Integers

Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 38 3F /r PMAXUD xmm1, xmm2/m128AV/VSSE4_1Compare packed unsigned dword integers in xmm1 and xmm2/m128 and store packed maximum values in xmm1.
VEX.128.66.0F38.WIG 3F /r VPMAXUD xmm1, xmm2, xmm3/m128BV/VAVXCompare packed unsigned dword integers in xmm2 and xmm3/m128 and store packed maximum values in xmm1.
VEX.256.66.0F38.WIG 3F /r VPMAXUD ymm1, ymm2, ymm3/m256BV/VAVX2Compare packed unsigned dword integers in ymm2 and ymm3/m256 and store packed maximum values in ymm1.
EVEX.128.66.0F38.W0 3F /r VPMAXUD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512FCompare packed unsigned dword integers in xmm2 and xmm3/m128/m32bcst and store packed maximum values in xmm1 under writemask k1.
EVEX.256.66.0F38.W0 3F /r VPMAXUD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512FCompare packed unsigned dword integers in ymm2 and ymm3/m256/m32bcst and store packed maximum values in ymm1 under writemask k1.
EVEX.512.66.0F38.W0 3F /r VPMAXUD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstCV/VAVX512FCompare packed unsigned dword integers in zmm2 and zmm3/m512/m32bcst and store packed maximum values in zmm1 under writemask k1.
EVEX.128.66.0F38.W1 3F /r VPMAXUQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512FCompare packed unsigned qword integers in xmm2 and xmm3/m128/m64bcst and store packed maximum values in xmm1 under writemask k1.
EVEX.256.66.0F38.W1 3F /r VPMAXUQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FCompare packed unsigned qword integers in ymm2 and ymm3/m256/m64bcst and store packed maximum values in ymm1 under writemask k1.
EVEX.512.66.0F38.W1 3F /r VPMAXUQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstCV/VAVX512FCompare packed unsigned qword integers in zmm2 and zmm3/m512/m64bcst and store packed maximum values in zmm1 under writemask k1.
+

Instruction Operand Encoding + ¶ +

Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD compare of the packed unsigned dword or qword integers in the second source operand and the first source operand and returns the maximum value for each pair of integers to the destination operand.

+

128-bit Legacy SSE version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding destination register remain unchanged.

+

VEX.128 encoded version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

VEX.256 encoded version: The first source operand is a YMM register; The second source operand is a YMM register or 256-bit memory location. Bits (MAXVL-1:256) of the corresponding destination register are zeroed.

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register; The second source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32/64-bit memory location. The destination operand is conditionally updated based on writemask k1.
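An illustrative sketch of the broadcast use case (not part of this reference; assumes AVX-512F; whether the compiler actually emits the m32bcst memory form for the splatted scalar depends on the compiler):

#include <immintrin.h>

/* Raise each unsigned dword lane of v to at least *floor_p (VPMAXUD). */
__m512i clamp_low_u32(__m512i v, const unsigned *floor_p) {
    __m512i f = _mm512_set1_epi32((int)*floor_p);  /* candidate for embedded broadcast */
    return _mm512_max_epu32(v, f);
}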

+

Operation + ¶ +

+

PMAXUD (128-bit Legacy SSE Version) + ¶ +

+
    IF DEST[31:0] >SRC[31:0] THEN
+        DEST[31:0] := DEST[31:0];
+    ELSE
+        DEST[31:0] := SRC[31:0]; FI;
+    (* Repeat operation for 2nd through 3rd dwords in source and destination operands *)
+    IF DEST[127:96] >SRC[127:96] THEN
+        DEST[127:96] := DEST[127:96];
+    ELSE
+        DEST[127:96] := SRC[127:96]; FI;
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPMAXUD (VEX.128 Encoded Version) + ¶ +

+
    IF SRC1[31:0] > SRC2[31:0] THEN
+        DEST[31:0] := SRC1[31:0];
+    ELSE
+        DEST[31:0] := SRC2[31:0]; FI;
+    (* Repeat operation for 2nd through 3rd dwords in source and destination operands *)
+    IF SRC1[127:96] > SRC2[127:96] THEN
+        DEST[127:96] := SRC1[127:96];
+    ELSE
+        DEST[127:96] := SRC2[127:96]; FI;
+DEST[MAXVL-1:128] := 0
+
+

VPMAXUD (VEX.256 Encoded Version) + ¶ +

+
    IF SRC1[31:0] > SRC2[31:0] THEN
+        DEST[31:0] := SRC1[31:0];
+    ELSE
+        DEST[31:0] := SRC2[31:0]; FI;
+    (* Repeat operation for 2nd through 7th dwords in source and destination operands *)
+    IF SRC1[255:224] > SRC2[255:224] THEN
+        DEST[255:224] := SRC1[255:224];
+    ELSE
+        DEST[255:224] := SRC2[255:224]; FI;
+DEST[MAXVL-1:256] := 0
+
+

VPMAXUD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+        IF (EVEX.b = 1) AND (SRC2 *is memory*)
+            THEN
+                IF SRC1[i+31:i] > SRC2[31:0]
+                    THEN DEST[i+31:i] := SRC1[i+31:i];
+                    ELSE DEST[i+31:i] := SRC2[31:0];
+                FI;
+            ELSE
+                IF SRC1[i+31:i] > SRC2[i+31:i]
+                    THEN DEST[i+31:i] := SRC1[i+31:i];
+                    ELSE DEST[i+31:i] := SRC2[i+31:i];
+            FI;
+        FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

VPMAXUQ (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+        IF (EVEX.b = 1) AND (SRC2 *is memory*)
+            THEN
+                IF SRC1[i+63:i] > SRC2[63:0]
+                    THEN DEST[i+63:i] := SRC1[i+63:i];
+                    ELSE DEST[i+63:i] := SRC2[63:0];
+                FI;
+            ELSE
+                IF SRC1[i+63:i] > SRC2[i+63:i]
+                    THEN DEST[i+63:i] := SRC1[i+63:i];
+                    ELSE DEST[i+63:i] := SRC2[i+63:i];
+            FI;
+        FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPMAXUD __m512i _mm512_max_epu32( __m512i a, __m512i b);
+
+
VPMAXUD __m512i _mm512_mask_max_epu32(__m512i s, __mmask16 k, __m512i a, __m512i b);
+
+
VPMAXUD __m512i _mm512_maskz_max_epu32( __mmask16 k, __m512i a, __m512i b);
+
+
VPMAXUQ __m512i _mm512_max_epu64( __m512i a, __m512i b);
+
+
VPMAXUQ __m512i _mm512_mask_max_epu64(__m512i s, __mmask8 k, __m512i a, __m512i b);
+
+
VPMAXUQ __m512i _mm512_maskz_max_epu64( __mmask8 k, __m512i a, __m512i b);
+
+
VPMAXUD __m256i _mm256_mask_max_epu32(__m256i s, __mmask16 k, __m256i a, __m256i b);
+
+
VPMAXUD __m256i _mm256_maskz_max_epu32( __mmask16 k, __m256i a, __m256i b);
+
+
VPMAXUQ __m256i _mm256_mask_max_epu64(__m256i s, __mmask8 k, __m256i a, __m256i b);
+
+
VPMAXUQ __m256i _mm256_maskz_max_epu64( __mmask8 k, __m256i a, __m256i b);
+
+
VPMAXUD __m128i _mm_mask_max_epu32(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPMAXUD __m128i _mm_maskz_max_epu32( __mmask8 k, __m128i a, __m128i b);
+
+
VPMAXUQ __m128i _mm_mask_max_epu64(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPMAXUQ __m128i _mm_maskz_max_epu64( __mmask8 k, __m128i a, __m128i b);
+
+
(V)PMAXUD __m128i _mm_max_epu32 ( __m128i a, __m128i b);
+
+
VPMAXUD __m256i _mm256_max_epu32 ( __m256i a, __m256i b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/pminsb.pminsw.html b/x86/pminsb.pminsw.html new file mode 100644 index 0000000..a1698cd --- /dev/null +++ b/x86/pminsb.pminsw.html @@ -0,0 +1,335 @@ + +PMINSB/PMINSW + — Minimum of Packed Signed Integers

PMINSB/PMINSW + — Minimum of Packed Signed Integers

Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F EA /r1 PMINSW mm1, mm2/m64AV/VSSECompare signed word integers in mm2/m64 and mm1 and return minimum values.
66 0F 38 38 /r PMINSB xmm1, xmm2/m128AV/VSSE4_1Compare packed signed byte integers in xmm1 and xmm2/m128 and store packed minimum values in xmm1.
66 0F EA /r PMINSW xmm1, xmm2/m128AV/VSSE2Compare packed signed word integers in xmm2/m128 and xmm1 and store packed minimum values in xmm1.
VEX.128.66.0F38 38 /r VPMINSB xmm1, xmm2, xmm3/m128BV/VAVXCompare packed signed byte integers in xmm2 and xmm3/m128 and store packed minimum values in xmm1.
VEX.128.66.0F EA /r VPMINSW xmm1, xmm2, xmm3/m128BV/VAVXCompare packed signed word integers in xmm3/m128 and xmm2 and return packed minimum values in xmm1.
VEX.256.66.0F38 38 /r VPMINSB ymm1, ymm2, ymm3/m256BV/VAVX2Compare packed signed byte integers in ymm2 and ymm3/m256 and store packed minimum values in ymm1.
VEX.256.66.0F EA /r VPMINSW ymm1, ymm2, ymm3/m256BV/VAVX2Compare packed signed word integers in ymm3/m256 and ymm2 and return packed minimum values in ymm1.
EVEX.128.66.0F38.WIG 38 /r VPMINSB xmm1{k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWCompare packed signed byte integers in xmm2 and xmm3/m128 and store packed minimum values in xmm1 under writemask k1.
EVEX.256.66.0F38.WIG 38 /r VPMINSB ymm1{k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWCompare packed signed byte integers in ymm2 and ymm3/m256 and store packed minimum values in ymm1 under writemask k1.
EVEX.512.66.0F38.WIG 38 /r VPMINSB zmm1{k1}{z}, zmm2, zmm3/m512CV/VAVX512BWCompare packed signed byte integers in zmm2 and zmm3/m512 and store packed minimum values in zmm1 under writemask k1.
EVEX.128.66.0F.WIG EA /r VPMINSW xmm1{k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWCompare packed signed word integers in xmm2 and xmm3/m128 and store packed minimum values in xmm1 under writemask k1.
EVEX.256.66.0F.WIG EA /r VPMINSW ymm1{k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWCompare packed signed word integers in ymm2 and ymm3/m256 and store packed minimum values in ymm1 under writemask k1.
EVEX.512.66.0F.WIG EA /r VPMINSW zmm1{k1}{z}, zmm2, zmm3/m512CV/VAVX512BWCompare packed signed word integers in zmm2 and zmm3/m512 and store packed minimum values in zmm1 under writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD compare of the packed signed byte or word integers in the second source operand and the first source operand and returns the minimum value for each pair of integers to the destination operand.

+

Legacy SSE version PMINSW: The source operand can be an MMX technology register or a 64-bit memory location. The destination operand can be an MMX technology register.

+

128-bit Legacy SSE version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding destination register remain unchanged.

+

VEX.128 encoded version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

VEX.256 encoded version: The second source operand can be an YMM register or a 256-bit memory location. The first source and destination operands are YMM registers.

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register; The second source operand is a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination operand is conditionally updated based on writemask k1.
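A brief illustrative C sketch of the unmasked 128-bit forms (not part of this reference; assumes SSE2 for _mm_min_epi16 and SSE4.1 for _mm_min_epi8):

#include <immintrin.h>

/* Per-lane signed minima: PMINSB (SSE4.1) and PMINSW (SSE2). */
__m128i min_i8(__m128i a, __m128i b)  { return _mm_min_epi8(a, b); }
__m128i min_i16(__m128i a, __m128i b) { return _mm_min_epi16(a, b); }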

+

Operation + ¶ +

+

PMINSW (64-bit Operands) + ¶ +

+
IF DEST[15:0] < SRC[15:0] THEN
+    DEST[15:0] := DEST[15:0];
+ELSE
+    DEST[15:0] := SRC[15:0]; FI;
+(* Repeat operation for 2nd and 3rd words in source and destination operands *)
+IF DEST[63:48] < SRC[63:48] THEN
+    DEST[63:48] := DEST[63:48];
+ELSE
+    DEST[63:48] := SRC[63:48]; FI;
+
+

PMINSB (128-bit Legacy SSE Version) + ¶ +

+
    IF DEST[7:0] < SRC[7:0] THEN
+        DEST[7:0] := DEST[7:0];
+    ELSE
+        DEST[7:0] := SRC[7:0]; FI;
+    (* Repeat operation for 2nd through 15th bytes in source and destination operands *)
+    IF DEST[127:120] < SRC[127:120] THEN
+        DEST[127:120] := DEST[127:120];
+    ELSE
+        DEST[127:120] := SRC[127:120]; FI;
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPMINSB (VEX.128 Encoded Version) + ¶ +

+
    IF SRC1[7:0] < SRC2[7:0] THEN
+        DEST[7:0] := SRC1[7:0];
+    ELSE
+        DEST[7:0] := SRC2[7:0]; FI;
+    (* Repeat operation for 2nd through 15th bytes in source and destination operands *)
+    IF SRC1[127:120] < SRC2[127:120] THEN
+        DEST[127:120] := SRC1[127:120];
+    ELSE
+        DEST[127:120] := SRC2[127:120]; FI;
+DEST[MAXVL-1:128] := 0
+
+

VPMINSB (VEX.256 Encoded Version) + ¶ +

+
    IF SRC1[7:0] < SRC2[7:0] THEN
+        DEST[7:0] := SRC1[7:0];
+    ELSE
+        DEST[7:0] := SRC2[7:0]; FI;
+    (* Repeat operation for 2nd through 31st bytes in source and destination operands *)
+    IF SRC1[255:248] < SRC2[255:248] THEN
+        DEST[255:248] := SRC1[255:248];
+    ELSE
+        DEST[255:248] := SRC2[255:248]; FI;
+DEST[MAXVL-1:256] := 0
+
+

VPMINSB (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k1[j] OR *no writemask* THEN
+        IF SRC1[i+7:i] < SRC2[i+7:i]
+            THEN DEST[i+7:i] := SRC1[i+7:i];
+            ELSE DEST[i+7:i] := SRC2[i+7:i];
+        FI;
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+7:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

PMINSW (128-bit Legacy SSE Version) + ¶ +

+
    IF DEST[15:0] < SRC[15:0] THEN
+        DEST[15:0] := DEST[15:0];
+    ELSE
+        DEST[15:0] := SRC[15:0]; FI;
+    (* Repeat operation for 2nd through 7th words in source and destination operands *)
+    IF DEST[127:112] < SRC[127:112] THEN
+        DEST[127:112] := DEST[127:112];
+    ELSE
+        DEST[127:112] := SRC[127:112]; FI;
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPMINSW (VEX.128 Encoded Version) + ¶ +

+
    IF SRC1[15:0] < SRC2[15:0] THEN
+        DEST[15:0] := SRC1[15:0];
+    ELSE
+        DEST[15:0] := SRC2[15:0]; FI;
+    (* Repeat operation for 2nd through 7th words in source and destination operands *)
+    IF SRC1[127:112] < SRC2[127:112] THEN
+        DEST[127:112] := SRC1[127:112];
+    ELSE
+        DEST[127:112] := SRC2[127:112]; FI;
+DEST[MAXVL-1:128] := 0
+
+

VPMINSW (VEX.256 Encoded Version) + ¶ +

+
    IF SRC1[15:0] < SRC2[15:0] THEN
+        DEST[15:0] := SRC1[15:0];
+    ELSE
+        DEST[15:0] := SRC2[15:0]; FI;
+    (* Repeat operation for 2nd through 15th words in source and destination operands *)
+    IF SRC1[255:240] < SRC2[255:240] THEN
+        DEST[255:240] := SRC1[255:240];
+    ELSE
+        DEST[255:240] := SRC2[255:240]; FI;
+DEST[MAXVL-1:256] := 0
+
+

VPMINSW (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask* THEN
+        IF SRC1[i+15:i] < SRC2[i+15:i]
+            THEN DEST[i+15:i] := SRC1[i+15:i];
+            ELSE DEST[i+15:i] := SRC2[i+15:i];
+        FI;
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPMINSB __m512i _mm512_min_epi8( __m512i a, __m512i b);
+
+
VPMINSB __m512i _mm512_mask_min_epi8(__m512i s, __mmask64 k, __m512i a, __m512i b);
+
+
VPMINSB __m512i _mm512_maskz_min_epi8( __mmask64 k, __m512i a, __m512i b);
+
+
VPMINSW __m512i _mm512_min_epi16( __m512i a, __m512i b);
+
+
VPMINSW __m512i _mm512_mask_min_epi16(__m512i s, __mmask32 k, __m512i a, __m512i b);
+
+
VPMINSW __m512i _mm512_maskz_min_epi16( __mmask32 k, __m512i a, __m512i b);
+
+
VPMINSB __m256i _mm256_mask_min_epi8(__m256i s, __mmask32 k, __m256i a, __m256i b);
+
+
VPMINSB __m256i _mm256_maskz_min_epi8( __mmask32 k, __m256i a, __m256i b);
+
+
VPMINSW __m256i _mm256_mask_min_epi16(__m256i s, __mmask16 k, __m256i a, __m256i b);
+
+
VPMINSW __m256i _mm256_maskz_min_epi16( __mmask16 k, __m256i a, __m256i b);
+
+
VPMINSB __m128i _mm_mask_min_epi8(__m128i s, __mmask16 k, __m128i a, __m128i b);
+
+
VPMINSB __m128i _mm_maskz_min_epi8( __mmask16 k, __m128i a, __m128i b);
+
+
VPMINSW __m128i _mm_mask_min_epi16(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPMINSW __m128i _mm_maskz_min_epi16( __mmask8 k, __m128i a, __m128i b);
+
+
(V)PMINSB __m128i _mm_min_epi8 ( __m128i a, __m128i b);
+
+
(V)PMINSW __m128i _mm_min_epi16 ( __m128i a, __m128i b)
+
+
VPMINSB __m256i _mm256_min_epi8 ( __m256i a, __m256i b);
+
+
VPMINSW __m256i _mm256_min_epi16 ( __m256i a, __m256i b)
+
+
PMINSW __m64 _mm_min_pi16(__m64 a, __m64 b)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

+

Additionally:

#MF (64-bit operations only) If there is a pending x87 FPU exception.
diff --git a/x86/pminsd.pminsq.html b/x86/pminsd.pminsq.html new file mode 100644 index 0000000..6251ccd --- /dev/null +++ b/x86/pminsd.pminsq.html @@ -0,0 +1,259 @@ + +PMINSD/PMINSQ + — Minimum of Packed Signed Integers

PMINSD/PMINSQ + — Minimum of Packed Signed Integers

Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 38 39 /r PMINSD xmm1, xmm2/m128AV/VSSE4_1Compare packed signed dword integers in xmm1 and xmm2/m128 and store packed minimum values in xmm1.
VEX.128.66.0F38.WIG 39 /r VPMINSD xmm1, xmm2, xmm3/m128BV/VAVXCompare packed signed dword integers in xmm2 and xmm3/m128 and store packed minimum values in xmm1.
VEX.256.66.0F38.WIG 39 /r VPMINSD ymm1, ymm2, ymm3/m256BV/VAVX2Compare packed signed dword integers in ymm2 and ymm3/m256 and store packed minimum values in ymm1.
EVEX.128.66.0F38.W0 39 /r VPMINSD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512FCompare packed signed dword integers in xmm2 and xmm3/m128 and store packed minimum values in xmm1 under writemask k1.
EVEX.256.66.0F38.W0 39 /r VPMINSD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512FCompare packed signed dword integers in ymm2 and ymm3/m256 and store packed minimum values in ymm1 under writemask k1.
EVEX.512.66.0F38.W0 39 /r VPMINSD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstCV/VAVX512FCompare packed signed dword integers in zmm2 and zmm3/m512/m32bcst and store packed minimum values in zmm1 under writemask k1.
EVEX.128.66.0F38.W1 39 /r VPMINSQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512FCompare packed signed qword integers in xmm2 and xmm3/m128 and store packed minimum values in xmm1 under writemask k1.
EVEX.256.66.0F38.W1 39 /r VPMINSQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FCompare packed signed qword integers in ymm2 and ymm3/m256 and store packed minimum values in ymm1 under writemask k1.
EVEX.512.66.0F38.W1 39 /r VPMINSQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstCV/VAVX512FCompare packed signed qword integers in zmm2 and zmm3/m512/m64bcst and store packed minimum values in zmm1 under writemask k1.
+

Instruction Operand Encoding + ¶ +

Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD compare of the packed signed dword or qword integers in the second source operand and the first source operand and returns the minimum value for each pair of integers to the destination operand.

+

128-bit Legacy SSE version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding destination register remain unchanged.

+

VEX.128 encoded version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

VEX.256 encoded version: The second source operand can be an YMM register or a 256-bit memory location. The first source and destination operands are YMM registers. Bits (MAXVL-1:256) of the corresponding destination register are zeroed.

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register; The second source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32/64-bit memory location. The destination operand is conditionally updated based on writemask k1.
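An illustrative sketch of the masked qword form (not part of this reference; assumes AVX-512F with AVX-512VL; the function name is hypothetical):

#include <immintrin.h>

/* Signed qword minimum under writemask k (VPMINSQ).
   Lanes with k=0 keep the corresponding qword of src (merge-masking). */
__m256i min_i64_masked(__m256i src, __mmask8 k, __m256i a, __m256i b) {
    return _mm256_mask_min_epi64(src, k, a, b);
}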

+

Operation + ¶ +

+

PMINSD (128-bit Legacy SSE Version) + ¶ +

+
    IF DEST[31:0] < SRC[31:0] THEN
+        DEST[31:0] := DEST[31:0];
+    ELSE
+        DEST[31:0] := SRC[31:0]; FI;
+    (* Repeat operation for 2nd through 3rd dwords in source and destination operands *)
+    IF DEST[127:96] < SRC[127:96] THEN
+        DEST[127:96] := DEST[127:96];
+    ELSE
+        DEST[127:96] := SRC[127:96]; FI;
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPMINSD (VEX.128 Encoded Version) + ¶ +

+
    IF SRC1[31:0] < SRC2[31:0] THEN
+        DEST[31:0] := SRC1[31:0];
+    ELSE
+        DEST[31:0] := SRC2[31:0]; FI;
+    (* Repeat operation for 2nd through 3rd dwords in source and destination operands *)
+    IF SRC1[127:96] < SRC2[127:96] THEN
+        DEST[127:96] := SRC1[127:96];
+    ELSE
+        DEST[127:96] := SRC2[127:96]; FI;
+DEST[MAXVL-1:128] := 0
+
+

VPMINSD (VEX.256 Encoded Version) + ¶ +

+
    IF SRC1[31:0] < SRC2[31:0] THEN
+        DEST[31:0] := SRC1[31:0];
+    ELSE
+        DEST[31:0] := SRC2[31:0]; FI;
+    (* Repeat operation for 2nd through 7th dwords in source and destination operands *)
+    IF SRC1[255:224] < SRC2[255:224] THEN
+        DEST[255:224] := SRC1[255:224];
+    ELSE
+        DEST[255:224] := SRC2[255:224]; FI;
+DEST[MAXVL-1:256] := 0
+
+

VPMINSD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+        IF (EVEX.b = 1) AND (SRC2 *is memory*)
+            THEN
+                IF SRC1[i+31:i] < SRC2[31:0]
+                    THEN DEST[i+31:i] := SRC1[i+31:i];
+                    ELSE DEST[i+31:i] := SRC2[31:0];
+                FI;
+            ELSE
+                IF SRC1[i+31:i] < SRC2[i+31:i]
+                    THEN DEST[i+31:i] := SRC1[i+31:i];
+                    ELSE DEST[i+31:i] := SRC2[i+31:i];
+            FI;
+        FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

VPMINSQ (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+        IF (EVEX.b = 1) AND (SRC2 *is memory*)
+            THEN
+                IF SRC1[i+63:i] < SRC2[63:0]
+                    THEN DEST[i+63:i] := SRC1[i+63:i];
+                    ELSE DEST[i+63:i] := SRC2[63:0];
+                FI;
+            ELSE
+                IF SRC1[i+63:i] < SRC2[i+63:i]
+                    THEN DEST[i+63:i] := SRC1[i+63:i];
+                    ELSE DEST[i+63:i] := SRC2[i+63:i];
+            FI;
+        FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPMINSD __m512i _mm512_min_epi32( __m512i a, __m512i b);
+
+
VPMINSD __m512i _mm512_mask_min_epi32(__m512i s, __mmask16 k, __m512i a, __m512i b);
+
+
VPMINSD __m512i _mm512_maskz_min_epi32( __mmask16 k, __m512i a, __m512i b);
+
+
VPMINSQ __m512i _mm512_min_epi64( __m512i a, __m512i b);
+
+
VPMINSQ __m512i _mm512_mask_min_epi64(__m512i s, __mmask8 k, __m512i a, __m512i b);
+
+
VPMINSQ __m512i _mm512_maskz_min_epi64( __mmask8 k, __m512i a, __m512i b);
+
+
VPMINSD __m256i _mm256_mask_min_epi32(__m256i s, __mmask16 k, __m256i a, __m256i b);
+
+
VPMINSD __m256i _mm256_maskz_min_epi32( __mmask16 k, __m256i a, __m256i b);
+
+
VPMINSQ __m256i _mm256_mask_min_epi64(__m256i s, __mmask8 k, __m256i a, __m256i b);
+
+
VPMINSQ __m256i _mm256_maskz_min_epi64( __mmask8 k, __m256i a, __m256i b);
+
+
VPMINSD __m128i _mm_mask_min_epi32(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPMINSD __m128i _mm_maskz_min_epi32( __mmask8 k, __m128i a, __m128i b);
+
+
VPMINSQ __m128i _mm_mask_min_epi64(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPMINSQ __m128i _mm_maskz_min_epi64( __mmask8 k, __m128i a, __m128i b);
+
+
(V)PMINSD __m128i _mm_min_epi32 ( __m128i a, __m128i b);
+
+
VPMINSD __m256i _mm256_min_epi32 (__m256i a, __m256i b);
+
+
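The following is a minimal, non-normative C sketch (not part of the manual) using the unmasked form listed above: _mm256_min_epi32 computes the per-element signed minimum, checked here against a scalar reference. It assumes an AVX2-capable toolchain (e.g., gcc -O2 -mavx2).

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int32_t a[8] = { -5, 7, INT32_MIN, 3, 100, -100, 0, 42 };
    int32_t b[8] = {  4, 7, 0, -3, 99, -200, 1, 41 };
    int32_t out[8];

    __m256i va = _mm256_loadu_si256((const __m256i *)a);
    __m256i vb = _mm256_loadu_si256((const __m256i *)b);
    /* VPMINSD: per-element signed minimum of the two source vectors. */
    _mm256_storeu_si256((__m256i *)out, _mm256_min_epi32(va, vb));

    for (int i = 0; i < 8; i++) {
        int32_t ref = (a[i] < b[i]) ? a[i] : b[i];   /* scalar reference */
        printf("lane %d: %d (expected %d)\n", i, out[i], ref);
    }
    return 0;
}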

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/pminub.pminuw.html b/x86/pminub.pminuw.html new file mode 100644 index 0000000..6d44202 --- /dev/null +++ b/x86/pminub.pminuw.html @@ -0,0 +1,332 @@ + +PMINUB/PMINUW + — Minimum of Packed Unsigned Integers

PMINUB/PMINUW + — Minimum of Packed Unsigned Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F DA /r1 PMINUB mm1, mm2/m64AV/VSSECompare unsigned byte integers in mm2/m64 and mm1 and returns minimum values.
66 0F DA /r PMINUB xmm1, xmm2/m128AV/VSSE2Compare packed unsigned byte integers in xmm1 and xmm2/m128 and store packed minimum values in xmm1.
66 0F 38 3A/r PMINUW xmm1, xmm2/m128AV/VSSE4_1Compare packed unsigned word integers in xmm2/m128 and xmm1 and store packed minimum values in xmm1.
VEX.128.66.0F DA /r VPMINUB xmm1, xmm2, xmm3/m128BV/VAVXCompare packed unsigned byte integers in xmm2 and xmm3/m128 and store packed minimum values in xmm1.
VEX.128.66.0F38 3A/r VPMINUW xmm1, xmm2, xmm3/m128BV/VAVXCompare packed unsigned word integers in xmm3/m128 and xmm2 and return packed minimum values in xmm1.
VEX.256.66.0F DA /r VPMINUB ymm1, ymm2, ymm3/m256BV/VAVX2Compare packed unsigned byte integers in ymm2 and ymm3/m256 and store packed minimum values in ymm1.
VEX.256.66.0F38 3A/r VPMINUW ymm1, ymm2, ymm3/m256BV/VAVX2Compare packed unsigned word integers in ymm3/m256 and ymm2 and return packed minimum values in ymm1.
EVEX.128.66.0F DA /r VPMINUB xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWCompare packed unsigned byte integers in xmm2 and xmm3/m128 and store packed minimum values in xmm1 under writemask k1.
EVEX.256.66.0F DA /r VPMINUB ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWCompare packed unsigned byte integers in ymm2 and ymm3/m256 and store packed minimum values in ymm1 under writemask k1.
EVEX.512.66.0F DA /r VPMINUB zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWCompare packed unsigned byte integers in zmm2 and zmm3/m512 and store packed minimum values in zmm1 under writemask k1.
EVEX.128.66.0F38 3A/r VPMINUW xmm1{k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWCompare packed unsigned word integers in xmm3/m128 and xmm2 and return packed minimum values in xmm1 under writemask k1.
EVEX.256.66.0F38 3A/r VPMINUW ymm1{k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWCompare packed unsigned word integers in ymm3/m256 and ymm2 and return packed minimum values in ymm1 under writemask k1.
EVEX.512.66.0F38 3A/r VPMINUW zmm1{k1}{z}, zmm2, zmm3/m512CV/VAVX512BWCompare packed unsigned word integers in zmm3/m512 and zmm2 and return packed minimum values in zmm1 under writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD compare of the packed unsigned byte or word integers in the second source operand and the first source operand and returns the minimum value for each pair of integers to the destination operand.

+

Legacy SSE version PMINUB: The source operand can be an MMX technology register or a 64-bit memory location. The destination operand can be an MMX technology register.

+

128-bit Legacy SSE version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding destination register remain unchanged.

+

VEX.128 encoded version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

VEX.256 encoded version: The second source operand can be an YMM register or a 256-bit memory location. The first source and destination operands are YMM registers.

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register; The second source operand is a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination operand is conditionally updated based on writemask k1.

+

Operation + ¶ +

+

PMINUB (64-bit Operands) + ¶ +

+
IF DEST[7:0] < SRC[7:0] THEN
+    DEST[7:0] := DEST[7:0];
+ELSE
+    DEST[7:0] := SRC[7:0]; FI;
+(* Repeat operation for 2nd through 7th bytes in source and destination operands *)
+IF DEST[63:56] < SRC[63:56] THEN
+    DEST[63:56] := DEST[63:56];
+ELSE
+    DEST[63:56] := SRC[63:56]; FI;
+
+

PMINUB (128-bit Operands) + ¶ +

+
    IF DEST[7:0] < SRC[7:0] THEN
+        DEST[7:0] := DEST[7:0];
+    ELSE
+        DEST[7:0] := SRC[7:0]; FI;
+    (* Repeat operation for 2nd through 15th bytes in source and destination operands *)
+    IF DEST[127:120] < SRC[127:120] THEN
+        DEST[127:120] := DEST[127:120];
+    ELSE
+        DEST[127:120] := SRC[127:120]; FI;
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPMINUB (VEX.128 Encoded Version) + ¶ +

+
    IF SRC1[7:0] < SRC2[7:0] THEN
+        DEST[7:0] := SRC1[7:0];
+    ELSE
+        DEST[7:0] := SRC2[7:0]; FI;
+    (* Repeat operation for 2nd through 15th bytes in source and destination operands *)
+    IF SRC1[127:120] < SRC2[127:120] THEN
+        DEST[127:120] := SRC1[127:120];
+    ELSE
+        DEST[127:120] := SRC2[127:120]; FI;
+DEST[MAXVL-1:128] := 0
+
+

VPMINUB (VEX.256 Encoded Version) + ¶ +

+
    IF SRC1[7:0] < SRC2[7:0] THEN
+        DEST[7:0] := SRC1[7:0];
+    ELSE
+        DEST[7:0] := SRC2[7:0]; FI;
+    (* Repeat operation for 2nd through 31st bytes in source and destination operands *)
+    IF SRC1[255:248] < SRC2[255:248] THEN
+        DEST[255:248] := SRC1[255:248];
+    ELSE
+        DEST[255:248] := SRC2[255:248]; FI;
+DEST[MAXVL-1:256] := 0
+
+

VPMINUB (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k1[j] OR *no writemask* THEN
+        IF SRC1[i+7:i] < SRC2[i+7:i]
+            THEN DEST[i+7:i] := SRC1[i+7:i];
+            ELSE DEST[i+7:i] := SRC2[i+7:i];
+        FI;
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+7:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

PMINUW (128-bit Operands) + ¶ +

+
    IF DEST[15:0] < SRC[15:0] THEN
+        DEST[15:0] := DEST[15:0];
+    ELSE
+        DEST[15:0] := SRC[15:0]; FI;
+    (* Repeat operation for 2nd through 7th words in source and destination operands *)
+    IF DEST[127:112] < SRC[127:112] THEN
+        DEST[127:112] := DEST[127:112];
+    ELSE
+        DEST[127:112] := SRC[127:112]; FI;
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPMINUW (VEX.128 Encoded Version) + ¶ +

+
    IF SRC1[15:0] < SRC2[15:0] THEN
+        DEST[15:0] := SRC1[15:0];
+    ELSE
+        DEST[15:0] := SRC2[15:0]; FI;
+    (* Repeat operation for 2nd through 7th words in source and destination operands *)
+    IF SRC1[127:112] < SRC2[127:112] THEN
+        DEST[127:112] := SRC1[127:112];
+    ELSE
+        DEST[127:112] := SRC2[127:112]; FI;
+DEST[MAXVL-1:128] := 0
+
+

VPMINUW (VEX.256 Encoded Version) + ¶ +

+
    IF SRC1[15:0] < SRC2[15:0] THEN
+        DEST[15:0] := SRC1[15:0];
+    ELSE
+        DEST[15:0] := SRC2[15:0]; FI;
+    (* Repeat operation for 2nd through 15th words in source and destination operands *)
+    IF SRC1[255:240] < SRC2[255:240] THEN
+        DEST[255:240] := SRC1[255:240];
+    ELSE
+        DEST[255:240] := SRC2[255:240]; FI;
+DEST[MAXVL-1:256] := 0
+
+

VPMINUW (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask* THEN
+        IF SRC1[i+15:i] < SRC2[i+15:i]
+            THEN DEST[i+15:i] := SRC1[i+15:i];
+            ELSE DEST[i+15:i] := SRC2[i+15:i];
+        FI;
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPMINUB __m512i _mm512_min_epu8( __m512i a, __m512i b);
+
+
VPMINUB __m512i _mm512_mask_min_epu8(__m512i s, __mmask64 k, __m512i a, __m512i b);
+
+
VPMINUB __m512i _mm512_maskz_min_epu8( __mmask64 k, __m512i a, __m512i b);
+
+
VPMINUW __m512i _mm512_min_epu16( __m512i a, __m512i b);
+
+
VPMINUW __m512i _mm512_mask_min_epu16(__m512i s, __mmask32 k, __m512i a, __m512i b);
+
+
VPMINUW __m512i _mm512_maskz_min_epu16( __mmask32 k, __m512i a, __m512i b);
+
+
VPMINUB __m256i _mm256_mask_min_epu8(__m256i s, __mmask32 k, __m256i a, __m256i b);
+
+
VPMINUB __m256i _mm256_maskz_min_epu8( __mmask32 k, __m256i a, __m256i b);
+
+
VPMINUW __m256i _mm256_mask_min_epu16(__m256i s, __mmask16 k, __m256i a, __m256i b);
+
+
VPMINUW __m256i _mm256_maskz_min_epu16( __mmask16 k, __m256i a, __m256i b);
+
+
VPMINUB __m128i _mm_mask_min_epu8(__m128i s, __mmask16 k, __m128i a, __m128i b);
+
+
VPMINUB __m128i _mm_maskz_min_epu8( __mmask16 k, __m128i a, __m128i b);
+
+
VPMINUW __m128i _mm_mask_min_epu16(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPMINUW __m128i _mm_maskz_min_epu16( __mmask8 k, __m128i a, __m128i b);
+
+
(V)PMINUB __m128i _mm_min_epu8 ( __m128i a, __m128i b)
+
+
(V)PMINUW __m128i _mm_min_epu16 ( __m128i a, __m128i b);
+
+
VPMINUB __m256i _mm256_min_epu8 ( __m256i a, __m256i b)
+
+
VPMINUW __m256i _mm256_min_epu16 ( __m256i a, __m256i b);
+
+
PMINUB __m64 _m_min_pu8 (__m64 a, __m64 b)
+
+
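A minimal, non-normative C sketch (not part of the manual) of the plain SSE2 form listed above: _mm_min_epu8 takes the unsigned minimum of each byte pair, used here as a per-byte "darken" of two 16-byte rows. Assumes SSE2 (e.g., gcc -O2 -msse2).

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t a[16], b[16], out[16];
    for (int i = 0; i < 16; i++) {
        a[i] = (uint8_t)(i * 16);
        b[i] = (uint8_t)(255 - i * 16);
    }

    __m128i va = _mm_loadu_si128((const __m128i *)a);
    __m128i vb = _mm_loadu_si128((const __m128i *)b);
    /* PMINUB: unsigned minimum of each byte pair. */
    _mm_storeu_si128((__m128i *)out, _mm_min_epu8(va, vb));

    for (int i = 0; i < 16; i++)
        printf("min(%3u, %3u) = %3u\n",
               (unsigned)a[i], (unsigned)b[i], (unsigned)out[i]);
    return 0;
}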

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/pminud.pminuq.html b/x86/pminud.pminuq.html new file mode 100644 index 0000000..c95cc59 --- /dev/null +++ b/x86/pminud.pminuq.html @@ -0,0 +1,262 @@ + +PMINUD/PMINUQ + — Minimum of Packed Unsigned Integers

PMINUD/PMINUQ + — Minimum of Packed Unsigned Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/E n64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 38 3B /r PMINUD xmm1, xmm2/m128AV/VSSE4_1Compare packed unsigned dword integers in xmm1 and xmm2/m128 and store packed minimum values in xmm1.
VEX.128.66.0F38.WIG 3B /r VPMINUD xmm1, xmm2, xmm3/m128BV/VAVXCompare packed unsigned dword integers in xmm2 and xmm3/m128 and store packed minimum values in xmm1.
VEX.256.66.0F38.WIG 3B /r VPMINUD ymm1, ymm2, ymm3/m256BV/VAVX2Compare packed unsigned dword integers in ymm2 and ymm3/m256 and store packed minimum values in ymm1.
EVEX.128.66.0F38.W0 3B /r VPMINUD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512FCompare packed unsigned dword integers in xmm2 and xmm3/m128/m32bcst and store packed minimum values in xmm1 under writemask k1.
EVEX.256.66.0F38.W0 3B /r VPMINUD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512FCompare packed unsigned dword integers in ymm2 and ymm3/m256/m32bcst and store packed minimum values in ymm1 under writemask k1.
EVEX.512.66.0F38.W0 3B /r VPMINUD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstCV/VAVX512FCompare packed unsigned dword integers in zmm2 and zmm3/m512/m32bcst and store packed minimum values in zmm1 under writemask k1.
EVEX.128.66.0F38.W1 3B /r VPMINUQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512FCompare packed unsigned qword integers in xmm2 and xmm3/m128/m64bcst and store packed minimum values in xmm1 under writemask k1.
EVEX.256.66.0F38.W1 3B /r VPMINUQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FCompare packed unsigned qword integers in ymm2 and ymm3/m256/m64bcst and store packed minimum values in ymm1 under writemask k1.
EVEX.512.66.0F38.W1 3B /r VPMINUQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstCV/VAVX512FCompare packed unsigned qword integers in zmm2 and zmm3/m512/m64bcst and store packed minimum values in zmm1 under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD compare of the packed unsigned dword/qword integers in the second source operand and the first source operand and returns the minimum value for each pair of integers to the destination operand.

+

128-bit Legacy SSE version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding destination register remain unchanged.

+

VEX.128 encoded version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

VEX.256 encoded version: The second source operand can be an YMM register or a 256-bit memory location. The first source and destination operands are YMM registers. Bits (MAXVL-1:256) of the corresponding destination register are zeroed.

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register; The second source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32/64-bit memory location. The destination operand is conditionally updated based on writemask k1.

+
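A minimal, non-normative C sketch (not part of the manual) related to the broadcast form described above: at the intrinsic level the m32bcst case (EVEX.b with a memory source) is normally written by broadcasting the scalar first with _mm512_set1_epi32, and an AVX-512 compiler may fold that broadcast into the embedded-broadcast encoding of VPMINUD. The clamp_epu32 helper name is illustrative, not Intel's. Requires AVX-512F (e.g., gcc -O2 -mavx512f).

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

/* Clamp every unsigned dword in v to at most `limit`. */
static __m512i clamp_epu32(__m512i v, uint32_t limit)
{
    return _mm512_min_epu32(v, _mm512_set1_epi32((int)limit));
}

int main(void)
{
    uint32_t in[16], out[16];
    for (int i = 0; i < 16; i++) in[i] = (uint32_t)i * 100u;

    _mm512_storeu_si512(out, clamp_epu32(_mm512_loadu_si512(in), 500u));

    for (int i = 0; i < 16; i++)
        printf("%u -> %u\n", in[i], out[i]);   /* values above 500 are clamped */
    return 0;
}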

Operation + ¶ +

+

PMINUD (128-bit Legacy SSE Version) + ¶ +

+
PMINUD instruction for 128-bit operands:
+    IF DEST[31:0] < SRC[31:0] THEN
+        DEST[31:0] := DEST[31:0];
+    ELSE
+        DEST[31:0] := SRC[31:0]; FI;
+    (* Repeat operation for 2nd through 3rd dwords in source and destination operands *)
+    IF DEST[127:96] < SRC[127:96] THEN
+        DEST[127:96] := DEST[127:96];
+    ELSE
+        DEST[127:96] := SRC[127:96]; FI;
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPMINUD (VEX.128 Encoded Version) + ¶ +

+
VPMINUD instruction for 128-bit operands:
+    IF SRC1[31:0] < SRC2[31:0] THEN
+        DEST[31:0] := SRC1[31:0];
+    ELSE
+        DEST[31:0] := SRC2[31:0]; FI;
+    (* Repeat operation for 2nd through 3rd dwords in source and destination operands *)
+    IF SRC1[127:96] < SRC2[127:96] THEN
+        DEST[127:96] := SRC1[127:96];
+    ELSE
+        DEST[127:96] := SRC2[127:96]; FI;
+DEST[MAXVL-1:128] := 0
+
+

VPMINUD (VEX.256 Encoded Version) + ¶ +

+
VPMINUD instruction for 256-bit operands:
+    IF SRC1[31:0] < SRC2[31:0] THEN
+        DEST[31:0] := SRC1[31:0];
+    ELSE
+        DEST[31:0] := SRC2[31:0]; FI;
+    (* Repeat operation for 2nd through 7th dwords in source and destination operands *)
+    IF SRC1[255:224] < SRC2[255:224] THEN
+        DEST[255:224] := SRC1[255:224];
+    ELSE
+        DEST[255:224] := SRC2[255:224]; FI;
+DEST[MAXVL-1:256] := 0
+
+

VPMINUD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+        IF (EVEX.b = 1) AND (SRC2 *is memory*)
+            THEN
+                IF SRC1[i+31:i] < SRC2[31:0]
+                    THEN DEST[i+31:i] := SRC1[i+31:i];
+                    ELSE DEST[i+31:i] := SRC2[31:0];
+                FI;
+            ELSE
+                IF SRC1[i+31:i] < SRC2[i+31:i]
+                    THEN DEST[i+31:i] := SRC1[i+31:i];
+                    ELSE DEST[i+31:i] := SRC2[i+31:i];
+            FI;
+        FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

VPMINUQ (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+        IF (EVEX.b = 1) AND (SRC2 *is memory*)
+            THEN
+                IF SRC1[i+63:i] < SRC2[63:0]
+                    THEN DEST[i+63:i] := SRC1[i+63:i];
+                    ELSE DEST[i+63:i] := SRC2[63:0];
+                FI;
+            ELSE
+                IF SRC1[i+63:i] < SRC2[i+63:i]
+                    THEN DEST[i+63:i] := SRC1[i+63:i];
+                    ELSE DEST[i+63:i] := SRC2[i+63:i];
+            FI;
+        FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPMINUD __m512i _mm512_min_epu32( __m512i a, __m512i b);
+
+
VPMINUD __m512i _mm512_mask_min_epu32(__m512i s, __mmask16 k, __m512i a, __m512i b);
+
+
VPMINUD __m512i _mm512_maskz_min_epu32( __mmask16 k, __m512i a, __m512i b);
+
+
VPMINUQ __m512i _mm512_min_epu64( __m512i a, __m512i b);
+
+
VPMINUQ __m512i _mm512_mask_min_epu64(__m512i s, __mmask8 k, __m512i a, __m512i b);
+
+
VPMINUQ __m512i _mm512_maskz_min_epu64( __mmask8 k, __m512i a, __m512i b);
+
+
VPMINUD __m256i _mm256_mask_min_epu32(__m256i s, __mmask8 k, __m256i a, __m256i b);
+
+
VPMINUD __m256i _mm256_maskz_min_epu32( __mmask8 k, __m256i a, __m256i b);
+
+
VPMINUQ __m256i _mm256_mask_min_epu64(__m256i s, __mmask8 k, __m256i a, __m256i b);
+
+
VPMINUQ __m256i _mm256_maskz_min_epu64( __mmask8 k, __m256i a, __m256i b);
+
+
VPMINUD __m128i _mm_mask_min_epu32(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPMINUD __m128i _mm_maskz_min_epu32( __mmask8 k, __m128i a, __m128i b);
+
+
VPMINUQ __m128i _mm_mask_min_epu64(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPMINUQ __m128i _mm_maskz_min_epu64( __mmask8 k, __m128i a, __m128i b);
+
+
(V)PMINUD __m128i _mm_min_epu32 ( __m128i a, __m128i b);
+
+
VPMINUD __m256i _mm256_min_epu32 ( __m256i a, __m256i b);
+
+
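A minimal, non-normative C sketch (not part of the manual) of the masked form listed above: _mm512_mask_min_epu32 updates only the lanes whose bit in k is set, while the remaining lanes keep the value of the src operand (merging-masking). Requires AVX-512F (e.g., gcc -O2 -mavx512f).

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t a[16], b[16], out[16];
    for (int i = 0; i < 16; i++) { a[i] = (uint32_t)i; b[i] = (uint32_t)(15 - i); }

    __m512i va  = _mm512_loadu_si512(a);
    __m512i vb  = _mm512_loadu_si512(b);
    __m512i src = _mm512_set1_epi32(-1);   /* value kept in masked-off lanes */
    __mmask16 k = 0x00FF;                  /* only the low 8 dwords are updated */

    _mm512_storeu_si512(out, _mm512_mask_min_epu32(src, k, va, vb));

    for (int i = 0; i < 16; i++)
        printf("lane %2d: %u\n", i, out[i]);   /* lanes 8..15 stay 0xFFFFFFFF */
    return 0;
}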

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/pmovmskb.html b/x86/pmovmskb.html new file mode 100644 index 0000000..2571f1c --- /dev/null +++ b/x86/pmovmskb.html @@ -0,0 +1,150 @@ + +PMOVMSKB + — Move Byte Mask

PMOVMSKB + — Move Byte Mask

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F D7 /r1 PMOVMSKB reg, mmRMV/VSSEMove a byte mask of mm to reg. The upper bits of r32 or r64 are zeroed
66 0F D7 /r PMOVMSKB reg, xmmRMV/VSSE2Move a byte mask of xmm to reg. The upper bits of r32 or r64 are zeroed
VEX.128.66.0F.WIG D7 /r VPMOVMSKB reg, xmm1RMV/VAVXMove a byte mask of xmm1 to reg. The upper bits of r32 or r64 are filled with zeros.
VEX.256.66.0F.WIG D7 /r VPMOVMSKB reg, ymm1RMV/VAVX2Move a 32-bit mask of ymm1 to reg. The upper bits of r64 are filled with zeros.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Creates a mask made up of the most significant bit of each byte of the source operand (second operand) and stores the result in the low byte, word, or doubleword of the destination operand (first operand).

+

The byte mask is 8 bits for 64-bit source operand, 16 bits for 128-bit source operand and 32 bits for 256-bit source operand. The destination operand is a general-purpose register.

+

In 64-bit mode, the instruction can access additional registers (XMM8-XMM15, R8-R15) when used with a REX.R prefix. The default operand size is 64-bit in 64-bit mode.

+

Legacy SSE version: The source operand is an MMX technology register.

+

128-bit Legacy SSE version: The source operand is an XMM register.

+

VEX.128 encoded version: The source operand is an XMM register.

+

VEX.256 encoded version: The source operand is a YMM register.

+

Note: VEX.vvvv is reserved and must be 1111b.

+

Operation + ¶ +

+

PMOVMSKB (With 64-bit Source Operand and r32) + ¶ +

+
r32[0] := SRC[7];
+r32[1] := SRC[15];
+(* Repeat operation for bytes 2 through 6 *)
+r32[7] := SRC[63];
+r32[31:8] := ZERO_FILL;
+
+

(V)PMOVMSKB (With 128-bit Source Operand and r32) + ¶ +

+
r32[0] := SRC[7];
+r32[1] := SRC[15];
+(* Repeat operation for bytes 2 through 14 *)
+r32[15] := SRC[127];
+r32[31:16] := ZERO_FILL;
+
+

VPMOVMSKB (With 256-bit Source Operand and r32) + ¶ +

+
r32[0] := SRC[7];
+r32[1] := SRC[15];
+(* Repeat operation for bytes 2 through 30 *)
+r32[31] := SRC[255];
+
+

PMOVMSKB (With 64-bit Source Operand and r64) + ¶ +

+
r64[0] := SRC[7];
+r64[1] := SRC[15];
+(* Repeat operation for bytes 2 through 6 *)
+r64[7] := SRC[63];
+r64[63:8] := ZERO_FILL;
+
+

(V)PMOVMSKB (With 128-bit Source Operand and r64) + ¶ +

+
r64[0] := SRC[7];
+r64[1] := SRC[15];
+(* Repeat operation for bytes 2 through 14 *)
+r64[15] := SRC[127];
+r64[63:16] := ZERO_FILL;
+
+

VPMOVMSKB (With 256-bit Source Operand and r64) + ¶ +

+
r64[0] := SRC[7];
+r64[1] := SRC[15];
+(* Repeat operation for bytes 2 through 30 *)
+r64[31] := SRC[255];
+r64[63:32] := ZERO_FILL;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
PMOVMSKB int _mm_movemask_pi8(__m64 a)
+
+
(V)PMOVMSKB int _mm_movemask_epi8 ( __m128i a)
+
+
VPMOVMSKB int _mm256_movemask_epi8 ( __m256i a)
+
+
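A minimal, non-normative C sketch (not part of the manual) of a common PMOVMSKB idiom: PCMPEQB followed by _mm_movemask_epi8 yields one mask bit per byte, so the index of the lowest set bit is the position of the first matching byte. Assumes SSE2 and the GCC/Clang __builtin_ctz builtin.

#include <immintrin.h>
#include <string.h>
#include <stdio.h>

/* Returns the index of the first byte equal to `c` in block[0..15], or -1. */
static int find_byte16(const unsigned char block[16], unsigned char c)
{
    __m128i data = _mm_loadu_si128((const __m128i *)block);
    __m128i eq   = _mm_cmpeq_epi8(data, _mm_set1_epi8((char)c));
    int mask = _mm_movemask_epi8(eq);            /* bit i <- MSB of byte i */
    return mask ? __builtin_ctz((unsigned)mask) : -1;
}

int main(void)
{
    unsigned char buf[16];
    memcpy(buf, "finding a needle", 16);
    printf("first 'n' at index %d\n", find_byte16(buf, 'n'));  /* prints 2 */
    return 0;
}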

Flags Affected + ¶ +

+

None.

+

Numeric Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-24, “Type 7 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.vvvv ≠ 1111B.
diff --git a/x86/pmovsx.html b/x86/pmovsx.html new file mode 100644 index 0000000..bcc0c51 --- /dev/null +++ b/x86/pmovsx.html @@ -0,0 +1,736 @@ + +PMOVSX + — Packed Move With Sign Extend

PMOVSX + — Packed Move With Sign Extend

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0f 38 20 /r PMOVSXBW xmm1, xmm2/m64AV/VSSE4_1Sign extend 8 packed 8-bit integers in the low 8 bytes of xmm2/m64 to 8 packed 16-bit integers in xmm1.
66 0f 38 21 /r PMOVSXBD xmm1, xmm2/m32AV/VSSE4_1Sign extend 4 packed 8-bit integers in the low 4 bytes of xmm2/m32 to 4 packed 32-bit integers in xmm1.
66 0f 38 22 /r PMOVSXBQ xmm1, xmm2/m16AV/VSSE4_1Sign extend 2 packed 8-bit integers in the low 2 bytes of xmm2/m16 to 2 packed 64-bit integers in xmm1.
66 0f 38 23/r PMOVSXWD xmm1, xmm2/m64AV/VSSE4_1Sign extend 4 packed 16-bit integers in the low 8 bytes of xmm2/m64 to 4 packed 32-bit integers in xmm1.
66 0f 38 24 /r PMOVSXWQ xmm1, xmm2/m32AV/VSSE4_1Sign extend 2 packed 16-bit integers in the low 4 bytes of xmm2/m32 to 2 packed 64-bit integers in xmm1.
66 0f 38 25 /r PMOVSXDQ xmm1, xmm2/m64AV/VSSE4_1Sign extend 2 packed 32-bit integers in the low 8 bytes of xmm2/m64 to 2 packed 64-bit integers in xmm1.
VEX.128.66.0F38.WIG 20 /r VPMOVSXBW xmm1, xmm2/m64AV/VAVXSign extend 8 packed 8-bit integers in the low 8 bytes of xmm2/m64 to 8 packed 16-bit integers in xmm1.
VEX.128.66.0F38.WIG 21 /r VPMOVSXBD xmm1, xmm2/m32AV/VAVXSign extend 4 packed 8-bit integers in the low 4 bytes of xmm2/m32 to 4 packed 32-bit integers in xmm1.
VEX.128.66.0F38.WIG 22 /r VPMOVSXBQ xmm1, xmm2/m16AV/VAVXSign extend 2 packed 8-bit integers in the low 2 bytes of xmm2/m16 to 2 packed 64-bit integers in xmm1.
VEX.128.66.0F38.WIG 23 /r VPMOVSXWD xmm1, xmm2/m64AV/VAVXSign extend 4 packed 16-bit integers in the low 8 bytes of xmm2/m64 to 4 packed 32-bit integers in xmm1.
VEX.128.66.0F38.WIG 24 /r VPMOVSXWQ xmm1, xmm2/m32AV/VAVXSign extend 2 packed 16-bit integers in the low 4 bytes of xmm2/m32 to 2 packed 64-bit integers in xmm1.
VEX.128.66.0F38.WIG 25 /r VPMOVSXDQ xmm1, xmm2/m64AV/VAVXSign extend 2 packed 32-bit integers in the low 8 bytes of xmm2/m64 to 2 packed 64-bit integers in xmm1.
VEX.256.66.0F38.WIG 20 /r VPMOVSXBW ymm1, xmm2/m128AV/VAVX2Sign extend 16 packed 8-bit integers in xmm2/m128 to 16 packed 16-bit integers in ymm1.
VEX.256.66.0F38.WIG 21 /r VPMOVSXBD ymm1, xmm2/m64AV/VAVX2Sign extend 8 packed 8-bit integers in the low 8 bytes of xmm2/m64 to 8 packed 32-bit integers in ymm1.
VEX.256.66.0F38.WIG 22 /r VPMOVSXBQ ymm1, xmm2/m32AV/VAVX2Sign extend 4 packed 8-bit integers in the low 4 bytes of xmm2/m32 to 4 packed 64-bit integers in ymm1.
VEX.256.66.0F38.WIG 23 /r VPMOVSXWD ymm1, xmm2/m128AV/VAVX2Sign extend 8 packed 16-bit integers in the low 16 bytes of xmm2/m128 to 8 packed 32-bit integers in ymm1.
VEX.256.66.0F38.WIG 24 /r VPMOVSXWQ ymm1, xmm2/m64AV/VAVX2Sign extend 4 packed 16-bit integers in the low 8 bytes of xmm2/m64 to 4 packed 64-bit integers in ymm1.
VEX.256.66.0F38.WIG 25 /r VPMOVSXDQ ymm1, xmm2/m128AV/VAVX2Sign extend 4 packed 32-bit integers in the low 16 bytes of xmm2/m128 to 4 packed 64-bit integers in ymm1.
EVEX.128.66.0F38.WIG 20 /r VPMOVSXBW xmm1 {k1}{z}, xmm2/m64BV/VAVX512VL AVX512BWSign extend 8 packed 8-bit integers in xmm2/m64 to 8 packed 16-bit integers in xmm1.
EVEX.256.66.0F38.WIG 20 /r VPMOVSXBW ymm1 {k1}{z}, xmm2/m128BV/VAVX512VL AVX512BWSign extend 16 packed 8-bit integers in xmm2/m128 to 16 packed 16-bit integers in ymm1.
EVEX.512.66.0F38.WIG 20 /r VPMOVSXBW zmm1 {k1}{z}, ymm2/m256BV/VAVX512BWSign extend 32 packed 8-bit integers in ymm2/m256 to 32 packed 16-bit integers in zmm1.
EVEX.128.66.0F38.WIG 21 /r VPMOVSXBD xmm1 {k1}{z}, xmm2/m32CV/VAVX512VL AVX512FSign extend 4 packed 8-bit integers in the low 4 bytes of xmm2/m32 to 4 packed 32-bit integers in xmm1 subject to writemask k1.
EVEX.256.66.0F38.WIG 21 /r VPMOVSXBD ymm1 {k1}{z}, xmm2/m64CV/VAVX512VL AVX512FSign extend 8 packed 8-bit integers in the low 8 bytes of xmm2/m64 to 8 packed 32-bit integers in ymm1 subject to writemask k1.
EVEX.512.66.0F38.WIG 21 /r VPMOVSXBD zmm1 {k1}{z}, xmm2/m128CV/VAVX512FSign extend 16 packed 8-bit integers in the low 16 bytes of xmm2/m128 to 16 packed 32-bit integers in zmm1 subject to writemask k1.
EVEX.128.66.0F38.WIG 22 /r VPMOVSXBQ xmm1 {k1}{z}, xmm2/m16DV/VAVX512VL AVX512FSign extend 2 packed 8-bit integers in the low 2 bytes of xmm2/m16 to 2 packed 64-bit integers in xmm1 subject to writemask k1.
EVEX.256.66.0F38.WIG 22 /r VPMOVSXBQ ymm1 {k1}{z}, xmm2/m32DV/VAVX512VL AVX512FSign extend 4 packed 8-bit integers in the low 4 bytes of xmm2/m32 to 4 packed 64-bit integers in ymm1 subject to writemask k1.
EVEX.512.66.0F38.WIG 22 /r VPMOVSXBQ zmm1 {k1}{z}, xmm2/m64DV/VAVX512FSign extend 8 packed 8-bit integers in the low 8 bytes of xmm2/m64 to 8 packed 64-bit integers in zmm1 subject to writemask k1.
EVEX.128.66.0F38.WIG 23 /r VPMOVSXWD xmm1 {k1}{z}, xmm2/m64BV/VAVX512VL AVX512FSign extend 4 packed 16-bit integers in the low 8 bytes of xmm2/m64 to 4 packed 32-bit integers in xmm1 subject to writemask k1.
EVEX.256.66.0F38.WIG 23 /r VPMOVSXWD ymm1 {k1}{z}, xmm2/m128BV/VAVX512VL AVX512FSign extend 8 packed 16-bit integers in the low 16 bytes of xmm2/m128 to 8 packed 32-bit integers in ymm1 subject to writemask k1.
EVEX.512.66.0F38.WIG 23 /r VPMOVSXWD zmm1 {k1}{z}, ymm2/m256BV/VAVX512FSign extend 16 packed 16-bit integers in the low 32 bytes of ymm2/m256 to 16 packed 32-bit integers in zmm1 subject to writemask k1.
EVEX.128.66.0F38.WIG 24 /r VPMOVSXWQ xmm1 {k1}{z}, xmm2/m32CV/VAVX512VL AVX512FSign extend 2 packed 16-bit integers in the low 4 bytes of xmm2/m32 to 2 packed 64-bit integers in xmm1 subject to writemask k1.
EVEX.256.66.0F38.WIG 24 /r VPMOVSXWQ ymm1 {k1}{z}, xmm2/m64CV/VAVX512VL AVX512FSign extend 4 packed 16-bit integers in the low 8 bytes of xmm2/m64 to 4 packed 64-bit integers in ymm1 subject to writemask k1.
EVEX.512.66.0F38.WIG 24 /r VPMOVSXWQ zmm1 {k1}{z}, xmm2/m128CV/VAVX512FSign extend 8 packed 16-bit integers in the low 16 bytes of xmm2/m128 to 8 packed 64-bit integers in zmm1 subject to writemask k1.
EVEX.128.66.0F38.W0 25 /r VPMOVSXDQ xmm1 {k1}{z}, xmm2/m64BV/VAVX512VL AVX512FSign extend 2 packed 32-bit integers in the low 8 bytes of xmm2/m64 to 2 packed 64-bit integers in xmm1 using writemask k1.
EVEX.256.66.0F38.W0 25 /r VPMOVSXDQ ymm1 {k1}{z}, xmm2/m128BV/VAVX512VL AVX512FSign extend 4 packed 32-bit integers in the low 16 bytes of xmm2/m128 to 4 packed 64-bit integers in ymm1 using writemask k1.
EVEX.512.66.0F38.W0 25 /r VPMOVSXDQ zmm1 {k1}{z}, ymm2/m256BV/VAVX512FSign extend 8 packed 32-bit integers in the low 32 bytes of ymm2/m256 to 8 packed 64-bit integers in zmm1 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BHalf MemModRM:reg (w)ModRM:r/m (r)N/AN/A
CQuarter MemModRM:reg (w)ModRM:r/m (r)N/AN/A
DEighth MemModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Legacy and VEX encoded versions: Packed byte, word, or dword integers in the low bytes of the source operand (second operand) are sign extended to word, dword, or quadword integers and stored in the destination operand.

+

128-bit Legacy SSE version: Bits (MAXVL-1:128) of the corresponding destination register remain unchanged.

+

VEX.128 and EVEX.128 encoded versions: Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

VEX.256 and EVEX.256 encoded versions: Bits (MAXVL-1:256) of the corresponding destination register are zeroed.

+

EVEX encoded versions: Packed byte, word or dword integers starting from the low bytes of the source operand (second operand) are sign extended to word, dword or quadword integers and stored to the destination operand under the writemask. The destination register is an XMM, YMM, or ZMM register.

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+

Packed_Sign_Extend_BYTE_to_WORD(DEST, SRC) + ¶ +

+
DEST[15:0] := SignExtend(SRC[7:0]);
+DEST[31:16] := SignExtend(SRC[15:8]);
+DEST[47:32] := SignExtend(SRC[23:16]);
+DEST[63:48] := SignExtend(SRC[31:24]);
+DEST[79:64] := SignExtend(SRC[39:32]);
+DEST[95:80] := SignExtend(SRC[47:40]);
+DEST[111:96] := SignExtend(SRC[55:48]);
+DEST[127:112] := SignExtend(SRC[63:56]);
+
+

Packed_Sign_Extend_BYTE_to_DWORD(DEST, SRC) + ¶ +

+
DEST[31:0] := SignExtend(SRC[7:0]);
+DEST[63:32] := SignExtend(SRC[15:8]);
+DEST[95:64] := SignExtend(SRC[23:16]);
+DEST[127:96] := SignExtend(SRC[31:24]);
+
+

Packed_Sign_Extend_BYTE_to_QWORD(DEST, SRC) + ¶ +

+
DEST[63:0] := SignExtend(SRC[7:0]);
+DEST[127:64] := SignExtend(SRC[15:8]);
+
+

Packed_Sign_Extend_WORD_to_DWORD(DEST, SRC) + ¶ +

+
DEST[31:0] := SignExtend(SRC[15:0]);
+DEST[63:32] := SignExtend(SRC[31:16]);
+DEST[95:64] := SignExtend(SRC[47:32]);
+DEST[127:96] := SignExtend(SRC[63:48]);
+
+

Packed_Sign_Extend_WORD_to_QWORD(DEST, SRC) + ¶ +

+
DEST[63:0] := SignExtend(SRC[15:0]);
+DEST[127:64] := SignExtend(SRC[31:16]);
+
+

Packed_Sign_Extend_DWORD_to_QWORD(DEST, SRC) + ¶ +

+
DEST[63:0] := SignExtend(SRC[31:0]);
+DEST[127:64] := SignExtend(SRC[63:32]);
+
+

VPMOVSXBW (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+Packed_Sign_Extend_BYTE_to_WORD(TMP_DEST[127:0], SRC[63:0])
+IF VL >= 256
+    Packed_Sign_Extend_BYTE_to_WORD(TMP_DEST[255:128], SRC[127:64])
+FI;
+IF VL >= 512
+    Packed_Sign_Extend_BYTE_to_WORD(TMP_DEST[383:256], SRC[191:128])
+    Packed_Sign_Extend_BYTE_to_WORD(TMP_DEST[511:384], SRC[255:192])
+FI;
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := TEMP_DEST[i+15:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPMOVSXBD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+Packed_Sign_Extend_BYTE_to_DWORD(TMP_DEST[127:0], SRC[31:0])
+IF VL >= 256
+    Packed_Sign_Extend_BYTE_to_DWORD(TMP_DEST[255:128], SRC[63:32])
+FI;
+IF VL >= 512
+    Packed_Sign_Extend_BYTE_to_DWORD(TMP_DEST[383:256], SRC[95:64])
+    Packed_Sign_Extend_BYTE_to_DWORD(TMP_DEST[511:384], SRC[127:96])
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TEMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPMOVSXBQ (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+Packed_Sign_Extend_BYTE_to_QWORD(TMP_DEST[127:0], SRC[15:0])
+IF VL >= 256
+    Packed_Sign_Extend_BYTE_to_QWORD(TMP_DEST[255:128], SRC[31:16])
+FI;
+IF VL >= 512
+    Packed_Sign_Extend_BYTE_to_QWORD(TMP_DEST[383:256], SRC[47:32])
+    Packed_Sign_Extend_BYTE_to_QWORD(TMP_DEST[511:384], SRC[63:48])
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TEMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPMOVSXWD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+Packed_Sign_Extend_WORD_to_DWORD(TMP_DEST[127:0], SRC[63:0])
+IF VL >= 256
+    Packed_Sign_Extend_WORD_to_DWORD(TMP_DEST[255:128], SRC[127:64])
+FI;
+IF VL >= 512
+    Packed_Sign_Extend_WORD_to_DWORD(TMP_DEST[383:256], SRC[191:128])
+    Packed_Sign_Extend_WORD_to_DWORD(TMP_DEST[511:384], SRC[255:192])
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TEMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPMOVSXWQ (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+Packed_Sign_Extend_WORD_to_QWORD(TMP_DEST[127:0], SRC[31:0])
+IF VL >= 256
+    Packed_Sign_Extend_WORD_to_QWORD(TMP_DEST[255:128], SRC[63:32])
+FI;
+IF VL >= 512
+    Packed_Sign_Extend_WORD_to_QWORD(TMP_DEST[383:256], SRC[95:64])
+    Packed_Sign_Extend_WORD_to_QWORD(TMP_DEST[511:384], SRC[127:96])
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TEMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPMOVSXDQ (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+Packed_Sign_Extend_DWORD_to_QWORD(TEMP_DEST[127:0], SRC[63:0])
+IF VL >= 256
+    Packed_Sign_Extend_DWORD_to_QWORD(TEMP_DEST[255:128], SRC[127:64])
+FI;
+IF VL >= 512
+    Packed_Sign_Extend_DWORD_to_QWORD(TEMP_DEST[383:256], SRC[191:128])
+    Packed_Sign_Extend_DWORD_to_QWORD(TEMP_DEST[511:384], SRC[255:192])
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TEMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPMOVSXBW (VEX.256 Encoded Version) + ¶ +

+
Packed_Sign_Extend_BYTE_to_WORD(DEST[127:0], SRC[63:0])
+Packed_Sign_Extend_BYTE_to_WORD(DEST[255:128], SRC[127:64])
+DEST[MAXVL-1:256] := 0
+
+

VPMOVSXBD (VEX.256 Encoded Version) + ¶ +

+
Packed_Sign_Extend_BYTE_to_DWORD(DEST[127:0], SRC[31:0])
+Packed_Sign_Extend_BYTE_to_DWORD(DEST[255:128], SRC[63:32])
+DEST[MAXVL-1:256] := 0
+
+

VPMOVSXBQ (VEX.256 Encoded Version) + ¶ +

+
Packed_Sign_Extend_BYTE_to_QWORD(DEST[127:0], SRC[15:0])
+Packed_Sign_Extend_BYTE_to_QWORD(DEST[255:128], SRC[31:16])
+DEST[MAXVL-1:256] := 0
+
+

VPMOVSXWD (VEX.256 Encoded Version) + ¶ +

+
Packed_Sign_Extend_WORD_to_DWORD(DEST[127:0], SRC[63:0])
+Packed_Sign_Extend_WORD_to_DWORD(DEST[255:128], SRC[127:64])
+DEST[MAXVL-1:256] := 0
+
+

VPMOVSXWQ (VEX.256 Encoded Version) + ¶ +

+
Packed_Sign_Extend_WORD_to_QWORD(DEST[127:0], SRC[31:0])
+Packed_Sign_Extend_WORD_to_QWORD(DEST[255:128], SRC[63:32])
+DEST[MAXVL-1:256] := 0
+
+

VPMOVSXDQ (VEX.256 Encoded Version) + ¶ +

+
Packed_Sign_Extend_DWORD_to_QWORD(DEST[127:0], SRC[63:0])
+Packed_Sign_Extend_DWORD_to_QWORD(DEST[255:128], SRC[127:64])
+DEST[MAXVL-1:256] := 0
+
+

VPMOVSXBW (VEX.128 Encoded Version) + ¶ +

+
Packed_Sign_Extend_BYTE_to_WORD(DEST[127:0], SRC[127:0])
+DEST[MAXVL-1:128] := 0
+
+

VPMOVSXBD (VEX.128 Encoded Version) + ¶ +

+
Packed_Sign_Extend_BYTE_to_DWORD(DEST[127:0], SRC[127:0])
+DEST[MAXVL-1:128] := 0
+
+

VPMOVSXBQ (VEX.128 Encoded Version) + ¶ +

+
Packed_Sign_Extend_BYTE_to_QWORD(DEST[127:0], SRC[127:0])
+DEST[MAXVL-1:128] := 0
+
+

VPMOVSXWD (VEX.128 Encoded Version) + ¶ +

+
Packed_Sign_Extend_WORD_to_DWORD(DEST[127:0], SRC[127:0])
+DEST[MAXVL-1:128] := 0
+
+

VPMOVSXWQ (VEX.128 Encoded Version) + ¶ +

+
Packed_Sign_Extend_WORD_to_QWORD(DEST[127:0], SRC[127:0])
+DEST[MAXVL-1:128] := 0
+
+

VPMOVSXDQ (VEX.128 Encoded Version) + ¶ +

+
Packed_Sign_Extend_DWORD_to_QWORD(DEST[127:0], SRC[127:0])
+DEST[MAXVL-1:128] := 0
+
+

PMOVSXBW + ¶ +

+
Packed_Sign_Extend_BYTE_to_WORD(DEST[127:0], SRC[127:0])
+DEST[MAXVL-1:128] (Unmodified)
+
+

PMOVSXBD + ¶ +

+
Packed_Sign_Extend_BYTE_to_DWORD(DEST[127:0], SRC[127:0])
+DEST[MAXVL-1:128] (Unmodified)
+
+

PMOVSXBQ + ¶ +

+
Packed_Sign_Extend_BYTE_to_QWORD(DEST[127:0], SRC[127:0])
+DEST[MAXVL-1:128] (Unmodified)
+
+

PMOVSXWD + ¶ +

+
Packed_Sign_Extend_WORD_to_DWORD(DEST[127:0], SRC[127:0])
+DEST[MAXVL-1:128] (Unmodified)
+
+

PMOVSXWQ + ¶ +

+
Packed_Sign_Extend_WORD_to_QWORD(DEST[127:0], SRC[127:0])
+DEST[MAXVL-1:128] (Unmodified)
+
+

PMOVSXDQ + ¶ +

+
Packed_Sign_Extend_DWORD_to_QWORD(DEST[127:0], SRC[127:0])
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPMOVSXBW __m512i _mm512_cvtepi8_epi16(__m512i a);
+
+
VPMOVSXBW __m512i _mm512_mask_cvtepi8_epi16(__m512i a, __mmask32 k, __m512i b);
+
+
VPMOVSXBW __m512i _mm512_maskz_cvtepi8_epi16( __mmask32 k, __m512i b);
+
+
VPMOVSXBD __m512i _mm512_cvtepi8_epi32(__m512i a);
+
+
VPMOVSXBD __m512i _mm512_mask_cvtepi8_epi32(__m512i a, __mmask16 k, __m512i b);
+
+
VPMOVSXBD __m512i _mm512_maskz_cvtepi8_epi32( __mmask16 k, __m512i b);
+
+
VPMOVSXBQ __m512i _mm512_cvtepi8_epi64(__m512i a);
+
+
VPMOVSXBQ __m512i _mm512_mask_cvtepi8_epi64(__m512i a, __mmask8 k, __m512i b);
+
+
VPMOVSXBQ __m512i _mm512_maskz_cvtepi8_epi64( __mmask8 k, __m512i a);
+
+
VPMOVSXDQ __m512i _mm512_cvtepi32_epi64(__m512i a);
+
+
VPMOVSXDQ __m512i _mm512_mask_cvtepi32_epi64(__m512i a, __mmask8 k, __m512i b);
+
+
VPMOVSXDQ __m512i _mm512_maskz_cvtepi32_epi64( __mmask8 k, __m512i a);
+
+
VPMOVSXWD __m512i _mm512_cvtepi16_epi32(__m512i a);
+
+
VPMOVSXWD __m512i _mm512_mask_cvtepi16_epi32(__m512i a, __mmask16 k, __m512i b);
+
+
VPMOVSXWD __m512i _mm512_maskz_cvtepi16_epi32(__mmask16 k, __m512i a);
+
+
VPMOVSXWQ __m512i _mm512_cvtepi16_epi64(__m512i a);
+
+
VPMOVSXWQ __m512i _mm512_mask_cvtepi16_epi64(__m512i a, __mmask8 k, __m512i b);
+
+
VPMOVSXWQ __m512i _mm512_maskz_cvtepi16_epi64( __mmask8 k, __m512i a);
+
+
VPMOVSXBW __m256i _mm256_cvtepi8_epi16(__m256i a);
+
+
VPMOVSXBW __m256i _mm256_mask_cvtepi8_epi16(__m256i a, __mmask16 k, __m256i b);
+
+
VPMOVSXBW __m256i _mm256_maskz_cvtepi8_epi16( __mmask16 k, __m256i b);
+
+
VPMOVSXBD __m256i _mm256_cvtepi8_epi32(__m256i a);
+
+
VPMOVSXBD __m256i _mm256_mask_cvtepi8_epi32(__m256i a, __mmask8 k, __m256i b);
+
+
VPMOVSXBD __m256i _mm256_maskz_cvtepi8_epi32( __mmask8 k, __m256i b);
+
+
VPMOVSXBQ __m256i _mm256_cvtepi8_epi64(__m256i a);
+
+
VPMOVSXBQ __m256i _mm256_mask_cvtepi8_epi64(__m256i a, __mmask8 k, __m256i b);
+
+
VPMOVSXBQ __m256i _mm256_maskz_cvtepi8_epi64( __mmask8 k, __m256i a);
+
+
VPMOVSXDQ __m256i _mm256_cvtepi32_epi64(__m256i a);
+
+
VPMOVSXDQ __m256i _mm256_mask_cvtepi32_epi64(__m256i a, __mmask8 k, __m256i b);
+
+
VPMOVSXDQ __m256i _mm256_maskz_cvtepi32_epi64( __mmask8 k, __m256i a);
+
+
VPMOVSXWD __m256i _mm256_cvtepi16_epi32(__m256i a);
+
+
VPMOVSXWD __m256i _mm256_mask_cvtepi16_epi32(__m256i a, __mmask16 k, __m256i b);
+
+
VPMOVSXWD __m256i _mm256_maskz_cvtepi16_epi32(__mmask16 k, __m256i a);
+
+
VPMOVSXWQ __m256i _mm256_cvtepi16_epi64(__m256i a);
+
+
VPMOVSXWQ __m256i _mm256_mask_cvtepi16_epi64(__m256i a, __mmask8 k, __m256i b);
+
+
VPMOVSXWQ __m256i _mm256_maskz_cvtepi16_epi64( __mmask8 k, __m256i a);
+
+
VPMOVSXBW __m128i _mm_mask_cvtepi8_epi16(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVSXBW __m128i _mm_maskz_cvtepi8_epi16( __mmask8 k, __m128i b);
+
+
VPMOVSXBD __m128i _mm_mask_cvtepi8_epi32(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVSXBD __m128i _mm_maskz_cvtepi8_epi32( __mmask8 k, __m128i b);
+
+
VPMOVSXBQ __m128i _mm_mask_cvtepi8_epi64(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVSXBQ __m128i _mm_maskz_cvtepi8_epi64( __mmask8 k, __m128i a);
+
+
VPMOVSXDQ __m128i _mm_mask_cvtepi32_epi64(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVSXDQ __m128i _mm_maskz_cvtepi32_epi64( __mmask8 k, __m128i a);
+
+
VPMOVSXWD __m128i _mm_mask_cvtepi16_epi32(__m128i a, __mmask16 k, __m128i b);
+
+
VPMOVSXWD __m128i _mm_maskz_cvtepi16_epi32(__mmask16 k, __m128i a);
+
+
VPMOVSXWQ __m128i _mm_mask_cvtepi16_epi64(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVSXWQ __m128i _mm_maskz_cvtepi16_epi64( __mmask8 k, __m128i a);
+
+
PMOVSXBW __m128i _mm_cvtepi8_epi16 ( __m128i a);
+
+
PMOVSXBD __m128i _mm_cvtepi8_epi32 ( __m128i a);
+
+
PMOVSXBQ __m128i _mm_cvtepi8_epi64 ( __m128i a);
+
+
PMOVSXWD __m128i _mm_cvtepi16_epi32 ( __m128i a);
+
+
PMOVSXWQ __m128i _mm_cvtepi16_epi64 ( __m128i a);
+
+
PMOVSXDQ __m128i _mm_cvtepi32_epi64 ( __m128i a);
+
+
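A minimal, non-normative C sketch (not part of the manual) of the PMOVSXBW form listed above: _mm_cvtepi8_epi16 sign-extends the low 8 bytes of its source to 8 signed words, mirroring the Packed_Sign_Extend_BYTE_to_WORD pseudocode. Assumes SSE4.1 (e.g., gcc -O2 -msse4.1).

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int8_t  src[16] = { -1, -128, 127, 0, -5, 5, 64, -64,
                         0, 0, 0, 0, 0, 0, 0, 0 };   /* upper 8 bytes are ignored */
    int16_t dst[8];

    __m128i v = _mm_loadu_si128((const __m128i *)src);
    /* PMOVSXBW: each low byte becomes a sign-extended 16-bit lane. */
    _mm_storeu_si128((__m128i *)dst, _mm_cvtepi8_epi16(v));

    for (int i = 0; i < 8; i++)
        printf("%4d -> %6d\n", src[i], dst[i]);      /* e.g. -128 -> -128, sign preserved */
    return 0;
}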

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-22, “Type 5 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-51, “Type E5 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf VEX.vvvv != 1111B, or EVEX.vvvv != 1111B.
diff --git a/x86/pmovzx.html b/x86/pmovzx.html new file mode 100644 index 0000000..6ffe051 --- /dev/null +++ b/x86/pmovzx.html @@ -0,0 +1,736 @@ + +PMOVZX + — Packed Move With Zero Extend

PMOVZX + — Packed Move With Zero Extend

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0f 38 30 /r PMOVZXBW xmm1, xmm2/m64AV/VSSE4_1Zero extend 8 packed 8-bit integers in the low 8 bytes of xmm2/m64 to 8 packed 16-bit integers in xmm1.
66 0f 38 31 /r PMOVZXBD xmm1, xmm2/m32AV/VSSE4_1Zero extend 4 packed 8-bit integers in the low 4 bytes of xmm2/m32 to 4 packed 32-bit integers in xmm1.
66 0f 38 32 /r PMOVZXBQ xmm1, xmm2/m16AV/VSSE4_1Zero extend 2 packed 8-bit integers in the low 2 bytes of xmm2/m16 to 2 packed 64-bit integers in xmm1.
66 0f 38 33 /r PMOVZXWD xmm1, xmm2/m64AV/VSSE4_1Zero extend 4 packed 16-bit integers in the low 8 bytes of xmm2/m64 to 4 packed 32-bit integers in xmm1.
66 0f 38 34 /r PMOVZXWQ xmm1, xmm2/m32AV/VSSE4_1Zero extend 2 packed 16-bit integers in the low 4 bytes of xmm2/m32 to 2 packed 64-bit integers in xmm1.
66 0f 38 35 /r PMOVZXDQ xmm1, xmm2/m64AV/VSSE4_1Zero extend 2 packed 32-bit integers in the low 8 bytes of xmm2/m64 to 2 packed 64-bit integers in xmm1.
VEX.128.66.0F38.WIG 30 /r VPMOVZXBW xmm1, xmm2/m64AV/VAVXZero extend 8 packed 8-bit integers in the low 8 bytes of xmm2/m64 to 8 packed 16-bit integers in xmm1.
VEX.128.66.0F38.WIG 31 /r VPMOVZXBD xmm1, xmm2/m32AV/VAVXZero extend 4 packed 8-bit integers in the low 4 bytes of xmm2/m32 to 4 packed 32-bit integers in xmm1.
VEX.128.66.0F38.WIG 32 /r VPMOVZXBQ xmm1, xmm2/m16AV/VAVXZero extend 2 packed 8-bit integers in the low 2 bytes of xmm2/m16 to 2 packed 64-bit integers in xmm1.
VEX.128.66.0F38.WIG 33 /r VPMOVZXWD xmm1, xmm2/m64AV/VAVXZero extend 4 packed 16-bit integers in the low 8 bytes of xmm2/m64 to 4 packed 32-bit integers in xmm1.
VEX.128.66.0F38.WIG 34 /r VPMOVZXWQ xmm1, xmm2/m32AV/VAVXZero extend 2 packed 16-bit integers in the low 4 bytes of xmm2/m32 to 2 packed 64-bit integers in xmm1.
VEX.128.66.0F 38.WIG 35 /r VPMOVZXDQ xmm1, xmm2/m64AV/VAVXZero extend 2 packed 32-bit integers in the low 8 bytes of xmm2/m64 to 2 packed 64-bit integers in xmm1.
VEX.256.66.0F38.WIG 30 /r VPMOVZXBW ymm1, xmm2/m128AV/VAVX2Zero extend 16 packed 8-bit integers in xmm2/m128 to 16 packed 16-bit integers in ymm1.
VEX.256.66.0F38.WIG 31 /r VPMOVZXBD ymm1, xmm2/m64AV/VAVX2Zero extend 8 packed 8-bit integers in the low 8 bytes of xmm2/m64 to 8 packed 32-bit integers in ymm1.
VEX.256.66.0F38.WIG 32 /r VPMOVZXBQ ymm1, xmm2/m32AV/VAVX2Zero extend 4 packed 8-bit integers in the low 4 bytes of xmm2/m32 to 4 packed 64-bit integers in ymm1.
VEX.256.66.0F38.WIG 33 /r VPMOVZXWD ymm1, xmm2/m128AV/VAVX2Zero extend 8 packed 16-bit integers in xmm2/m128 to 8 packed 32-bit integers in ymm1.
VEX.256.66.0F38.WIG 34 /r VPMOVZXWQ ymm1, xmm2/m64AV/VAVX2Zero extend 4 packed 16-bit integers in the low 8 bytes of xmm2/m64 to 4 packed 64-bit integers in ymm1.
VEX.256.66.0F38.WIG 35 /r VPMOVZXDQ ymm1, xmm2/m128AV/VAVX2Zero extend 4 packed 32-bit integers in xmm2/m128 to 4 packed 64-bit integers in ymm1.
EVEX.128.66.0F38.WIG 30 /r VPMOVZXBW xmm1 {k1}{z}, xmm2/m64BV/VAVX512VL AVX512BWZero extend 8 packed 8-bit integers in the low 8 bytes of xmm2/m64 to 8 packed 16-bit integers in xmm1.
EVEX.256.66.0F38.WIG 30 /r VPMOVZXBW ymm1 {k1}{z}, xmm2/m128BV/VAVX512VL AVX512BWZero extend 16 packed 8-bit integers in xmm2/m128 to 16 packed 16-bit integers in ymm1.
EVEX.512.66.0F38.WIG 30 /r VPMOVZXBW zmm1 {k1}{z}, ymm2/m256BV/VAVX512BWZero extend 32 packed 8-bit integers in ymm2/m256 to 32 packed 16-bit integers in zmm1.
EVEX.128.66.0F38.WIG 31 /r VPMOVZXBD xmm1 {k1}{z}, xmm2/m32CV/VAVX512VL AVX512FZero extend 4 packed 8-bit integers in the low 4 bytes of xmm2/m32 to 4 packed 32-bit integers in xmm1 subject to writemask k1.
EVEX.256.66.0F38.WIG 31 /r VPMOVZXBD ymm1 {k1}{z}, xmm2/m64CV/VAVX512VL AVX512FZero extend 8 packed 8-bit integers in the low 8 bytes of xmm2/m64 to 8 packed 32-bit integers in ymm1 subject to writemask k1.
EVEX.512.66.0F38.WIG 31 /r VPMOVZXBD zmm1 {k1}{z}, xmm2/m128CV/VAVX512FZero extend 16 packed 8-bit integers in xmm2/m128 to 16 packed 32-bit integers in zmm1 subject to writemask k1.
EVEX.128.66.0F38.WIG 32 /r VPMOVZXBQ xmm1 {k1}{z}, xmm2/m16DV/VAVX512VL AVX512FZero extend 2 packed 8-bit integers in the low 2 bytes of xmm2/m16 to 2 packed 64-bit integers in xmm1 subject to writemask k1.
EVEX.256.66.0F38.WIG 32 /r VPMOVZXBQ ymm1 {k1}{z}, xmm2/m32DV/VAVX512VL AVX512FZero extend 4 packed 8-bit integers in the low 4 bytes of xmm2/m32 to 4 packed 64-bit integers in ymm1 subject to writemask k1.
EVEX.512.66.0F38.WIG 32 /r VPMOVZXBQ zmm1 {k1}{z}, xmm2/m64DV/VAVX512FZero extend 8 packed 8-bit integers in the low 8 bytes of xmm2/m64 to 8 packed 64-bit integers in zmm1 subject to writemask k1.
EVEX.128.66.0F38.WIG 33 /r VPMOVZXWD xmm1 {k1}{z}, xmm2/m64BV/VAVX512VL AVX512FZero extend 4 packed 16-bit integers in the low 8 bytes of xmm2/m64 to 4 packed 32-bit integers in xmm1 subject to writemask k1.
EVEX.256.66.0F38.WIG 33 /r VPMOVZXWD ymm1 {k1}{z}, xmm2/m128BV/VAVX512VL AVX512FZero extend 8 packed 16-bit integers in xmm2/m128 to 8 packed 32-bit integers in ymm1 subject to writemask k1.
EVEX.512.66.0F38.WIG 33 /r VPMOVZXWD zmm1 {k1}{z}, ymm2/m256BV/VAVX512FZero extend 16 packed 16-bit integers in ymm2/m256 to 16 packed 32-bit integers in zmm1 subject to writemask k1.
EVEX.128.66.0F38.WIG 34 /r VPMOVZXWQ xmm1 {k1}{z}, xmm2/m32CV/VAVX512VL AVX512FZero extend 2 packed 16-bit integers in the low 4 bytes of xmm2/m32 to 2 packed 64-bit integers in xmm1 subject to writemask k1.
EVEX.256.66.0F38.WIG 34 /r VPMOVZXWQ ymm1 {k1}{z}, xmm2/m64CV/VAVX512VL AVX512FZero extend 4 packed 16-bit integers in the low 8 bytes of xmm2/m64 to 4 packed 64-bit integers in ymm1 subject to writemask k1.
EVEX.512.66.0F38.WIG 34 /r VPMOVZXWQ zmm1 {k1}{z}, xmm2/m128CV/VAVX512FZero extend 8 packed 16-bit integers in xmm2/m128 to 8 packed 64-bit integers in zmm1 subject to writemask k1.
EVEX.128.66.0F38.W0 35 /r VPMOVZXDQ xmm1 {k1}{z}, xmm2/m64BV/VAVX512VL AVX512FZero extend 2 packed 32-bit integers in the low 8 bytes of xmm2/m64 to 2 packed 64-bit integers in xmm1 using writemask k1.
EVEX.256.66.0F38.W0 35 /r VPMOVZXDQ ymm1 {k1}{z}, xmm2/m128BV/VAVX512VL AVX512FZero extend 4 packed 32-bit integers in xmm2/m128 to 4 packed 64-bit integers in ymm1 using writemask k1.
EVEX.512.66.0F38.W0 35 /r VPMOVZXDQ zmm1 {k1}{z}, ymm2/m256BV/VAVX512FZero extend 8 packed 32-bit integers in ymm2/m256 to 8 packed 64-bit integers in zmm1 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BHalf MemModRM:reg (w)ModRM:r/m (r)N/AN/A
CQuarter MemModRM:reg (w)ModRM:r/m (r)N/AN/A
DEighth MemModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Legacy, VEX, and EVEX encoded versions: Packed byte, word, or dword integers starting from the low bytes of the source operand (second operand) are zero extended to word, dword, or quadword integers and stored in the destination operand.

+

128-bit Legacy SSE version: Bits (MAXVL-1:128) of the corresponding destination register remain unchanged.

+

VEX.128 encoded version: Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

VEX.256 encoded version: Bits (MAXVL-1:256) of the corresponding destination register are zeroed.

+

EVEX encoded versions: Packed byte, word, or dword integers starting from the low bytes of the source operand (second operand) are zero extended to word, dword, or quadword integers and stored to the destination operand under the writemask. The destination register is an XMM, YMM, or ZMM register.

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b otherwise instructions will #UD.

+
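For comparison with the sign-extending forms, here is a minimal, non-normative C sketch (not part of the manual) of the PMOVZXBW case: _mm_cvtepu8_epi16 zero-extends the low 8 bytes of its source to 8 words, so no sign bits are propagated. Assumes SSE4.1 (e.g., gcc -O2 -msse4.1).

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t  src[16] = { 0x00, 0x7F, 0x80, 0xFF, 1, 2, 3, 4,
                         0, 0, 0, 0, 0, 0, 0, 0 };   /* upper 8 bytes are ignored */
    uint16_t dst[8];

    __m128i v = _mm_loadu_si128((const __m128i *)src);
    /* PMOVZXBW: each low byte becomes a zero-extended 16-bit lane. */
    _mm_storeu_si128((__m128i *)dst, _mm_cvtepu8_epi16(v));

    for (int i = 0; i < 8; i++)
        printf("0x%02X -> 0x%04X\n", (unsigned)src[i], (unsigned)dst[i]);  /* 0xFF -> 0x00FF */
    return 0;
}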

Operation + ¶ +

+

Packed_Zero_Extend_BYTE_to_WORD(DEST, SRC) + ¶ +

+
DEST[15:0] := ZeroExtend(SRC[7:0]);
+DEST[31:16] := ZeroExtend(SRC[15:8]);
+DEST[47:32] := ZeroExtend(SRC[23:16]);
+DEST[63:48] := ZeroExtend(SRC[31:24]);
+DEST[79:64] := ZeroExtend(SRC[39:32]);
+DEST[95:80] := ZeroExtend(SRC[47:40]);
+DEST[111:96] := ZeroExtend(SRC[55:48]);
+DEST[127:112] := ZeroExtend(SRC[63:56]);
+
+

Packed_Zero_Extend_BYTE_to_DWORD(DEST, SRC) + ¶ +

+
DEST[31:0] := ZeroExtend(SRC[7:0]);
+DEST[63:32] := ZeroExtend(SRC[15:8]);
+DEST[95:64] := ZeroExtend(SRC[23:16]);
+DEST[127:96] := ZeroExtend(SRC[31:24]);
+
+

Packed_Zero_Extend_BYTE_to_QWORD(DEST, SRC) + ¶ +

+
DEST[63:0] := ZeroExtend(SRC[7:0]);
+DEST[127:64] := ZeroExtend(SRC[15:8]);
+
+

Packed_Zero_Extend_WORD_to_DWORD(DEST, SRC) + ¶ +

+
DEST[31:0] := ZeroExtend(SRC[15:0]);
+DEST[63:32] := ZeroExtend(SRC[31:16]);
+DEST[95:64] := ZeroExtend(SRC[47:32]);
+DEST[127:96] := ZeroExtend(SRC[63:48]);
+
+

Packed_Zero_Extend_WORD_to_QWORD(DEST, SRC) + ¶ +

+
DEST[63:0] := ZeroExtend(SRC[15:0]);
+DEST[127:64] := ZeroExtend(SRC[31:16]);
+
+

Packed_Zero_Extend_DWORD_to_QWORD(DEST, SRC) + ¶ +

+
DEST[63:0] := ZeroExtend(SRC[31:0]);
+DEST[127:64] := ZeroExtend(SRC[63:32]);
+
+

VPMOVZXBW (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+Packed_Zero_Extend_BYTE_to_WORD(TMP_DEST[127:0], SRC[63:0])
+IF VL >= 256
+    Packed_Zero_Extend_BYTE_to_WORD(TMP_DEST[255:128], SRC[127:64])
+FI;
+IF VL >= 512
+    Packed_Zero_Extend_BYTE_to_WORD(TMP_DEST[383:256], SRC[191:128])
+    Packed_Zero_Extend_BYTE_to_WORD(TMP_DEST[511:384], SRC[255:192])
+FI;
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := TEMP_DEST[i+15:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPMOVZXBD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+Packed_Zero_Extend_BYTE_to_DWORD(TMP_DEST[127:0], SRC[31:0])
+IF VL >= 256
+    Packed_Zero_Extend_BYTE_to_DWORD(TMP_DEST[255:128], SRC[63:32])
+FI;
+IF VL >= 512
+    Packed_Zero_Extend_BYTE_to_DWORD(TMP_DEST[383:256], SRC[95:64])
+    Packed_Zero_Extend_BYTE_to_DWORD(TMP_DEST[511:384], SRC[127:96])
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TEMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPMOVZXBQ (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+Packed_Zero_Extend_BYTE_to_QWORD(TMP_DEST[127:0], SRC[15:0])
+IF VL >= 256
+    Packed_Zero_Extend_BYTE_to_QWORD(TMP_DEST[255:128], SRC[31:16])
+FI;
+IF VL >= 512
+    Packed_Zero_Extend_BYTE_to_QWORD(TMP_DEST[383:256], SRC[47:32])
+    Packed_Zero_Extend_BYTE_to_QWORD(TMP_DEST[511:384], SRC[63:48])
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TEMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPMOVZXWD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+Packed_Zero_Extend_WORD_to_DWORD(TMP_DEST[127:0], SRC[63:0])
+IF VL >= 256
+    Packed_Zero_Extend_WORD_to_DWORD(TMP_DEST[255:128], SRC[127:64])
+FI;
+IF VL >= 512
+    Packed_Zero_Extend_WORD_to_DWORD(TMP_DEST[383:256], SRC[191:128])
+    Packed_Zero_Extend_WORD_to_DWORD(TMP_DEST[511:384], SRC[255:192])
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPMOVZXWQ (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+Packed_Zero_Extend_WORD_to_QWORD(TMP_DEST[127:0], SRC[31:0])
+IF VL >= 256
+    Packed_Zero_Extend_WORD_to_QWORD(TMP_DEST[255:128], SRC[63:32])
+FI;
+IF VL >= 512
+    Packed_Zero_Extend_WORD_to_QWORD(TMP_DEST[383:256], SRC[95:64])
+    Packed_Zero_Extend_WORD_to_QWORD(TMP_DEST[511:384], SRC[127:96])
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPMOVZXDQ (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+Packed_Zero_Extend_DWORD_to_QWORD(TEMP_DEST[127:0], SRC[63:0])
+IF VL >= 256
+    Packed_Zero_Extend_DWORD_to_QWORD(TEMP_DEST[255:128], SRC[127:64])
+FI;
+IF VL >= 512
+    Packed_Zero_Extend_DWORD_to_QWORD(TEMP_DEST[383:256], SRC[191:128])
+    Packed_Zero_Extend_DWORD_to_QWORD(TEMP_DEST[511:384], SRC[255:192])
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TEMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPMOVZXBW (VEX.256 Encoded Version) + ¶ +

+
Packed_Zero_Extend_BYTE_to_WORD(DEST[127:0], SRC[63:0])
+Packed_Zero_Extend_BYTE_to_WORD(DEST[255:128], SRC[127:64])
+DEST[MAXVL-1:256] := 0
+
+

VPMOVZXBD (VEX.256 Encoded Version) + ¶ +

+
Packed_Zero_Extend_BYTE_to_DWORD(DEST[127:0], SRC[31:0])
+Packed_Zero_Extend_BYTE_to_DWORD(DEST[255:128], SRC[63:32])
+DEST[MAXVL-1:256] := 0
+
+

VPMOVZXBQ (VEX.256 Encoded Version) + ¶ +

+
Packed_Zero_Extend_BYTE_to_QWORD(DEST[127:0], SRC[15:0])
+Packed_Zero_Extend_BYTE_to_QWORD(DEST[255:128], SRC[31:16])
+DEST[MAXVL-1:256] := 0
+
+

VPMOVZXWD (VEX.256 Encoded Version) + ¶ +

+
Packed_Zero_Extend_WORD_to_DWORD(DEST[127:0], SRC[63:0])
+Packed_Zero_Extend_WORD_to_DWORD(DEST[255:128], SRC[127:64])
+DEST[MAXVL-1:256] := 0
+
+

VPMOVZXWQ (VEX.256 Encoded Version) + ¶ +

+
Packed_Zero_Extend_WORD_to_QWORD(DEST[127:0], SRC[31:0])
+Packed_Zero_Extend_WORD_to_QWORD(DEST[255:128], SRC[63:32])
+DEST[MAXVL-1:256] := 0
+
+

VPMOVZXDQ (VEX.256 Encoded Version) + ¶ +

+
Packed_Zero_Extend_DWORD_to_QWORD(DEST[127:0], SRC[63:0])
+Packed_Zero_Extend_DWORD_to_QWORD(DEST[255:128], SRC[127:64])
+DEST[MAXVL-1:256] := 0
+
+

VPMOVZXBW (VEX.128 Encoded Version) + ¶ +

+
Packed_Zero_Extend_BYTE_to_WORD()
+DEST[MAXVL-1:128] := 0
+
+

VPMOVZXBD (VEX.128 Encoded Version) + ¶ +

+
Packed_Zero_Extend_BYTE_to_DWORD()
+DEST[MAXVL-1:128] := 0
+
+

VPMOVZXBQ (VEX.128 Encoded Version) + ¶ +

+
Packed_Zero_Extend_BYTE_to_QWORD()
+DEST[MAXVL-1:128] := 0
+
+

VPMOVZXWD (VEX.128 Encoded Version) + ¶ +

+
Packed_Zero_Extend_WORD_to_DWORD()
+DEST[MAXVL-1:128] := 0
+
+

VPMOVZXWQ (VEX.128 Encoded Version) + ¶ +

+
Packed_Zero_Extend_WORD_to_QWORD()
+DEST[MAXVL-1:128] := 0
+
+

VPMOVZXDQ (VEX.128 Encoded Version) + ¶ +

+
Packed_Zero_Extend_DWORD_to_QWORD()
+DEST[MAXVL-1:128] := 0
+
+

PMOVZXBW + ¶ +

+
Packed_Zero_Extend_BYTE_to_WORD()
+DEST[MAXVL-1:128] (Unmodified)
+
+

PMOVZXBD + ¶ +

+
Packed_Zero_Extend_BYTE_to_DWORD()
+DEST[MAXVL-1:128] (Unmodified)
+
+

PMOVZXBQ + ¶ +

+
Packed_Zero_Extend_BYTE_to_QWORD()
+DEST[MAXVL-1:128] (Unmodified)
+
+

PMOVZXWD + ¶ +

+
Packed_Zero_Extend_WORD_to_DWORD()
+DEST[MAXVL-1:128] (Unmodified)
+
+

PMOVZXWQ + ¶ +

+
Packed_Zero_Extend_WORD_to_QWORD()
+DEST[MAXVL-1:128] (Unmodified)
+
+

PMOVZXDQ + ¶ +

+
Packed_Zero_Extend_DWORD_to_QWORD()
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPMOVZXBW __m512i _mm512_cvtepu8_epi16(__m256i a);
+
+
VPMOVZXBW __m512i _mm512_mask_cvtepu8_epi16(__m512i a, __mmask32 k, __m256i b);
+
+
VPMOVZXBW __m512i _mm512_maskz_cvtepu8_epi16( __mmask32 k, __m256i b);
+
+
VPMOVZXBD __m512i _mm512_cvtepu8_epi32(__m128i a);
+
+
VPMOVZXBD __m512i _mm512_mask_cvtepu8_epi32(__m512i a, __mmask16 k, __m128i b);
+
+
VPMOVZXBD __m512i _mm512_maskz_cvtepu8_epi32( __mmask16 k, __m128i b);
+
+
VPMOVZXBQ __m512i _mm512_cvtepu8_epi64(__m128i a);
+
+
VPMOVZXBQ __m512i _mm512_mask_cvtepu8_epi64(__m512i a, __mmask8 k, __m128i b);
+
+
VPMOVZXBQ __m512i _mm512_maskz_cvtepu8_epi64( __mmask8 k, __m128i a);
+
+
VPMOVZXDQ __m512i _mm512_cvtepu32_epi64(__m256i a);
+
+
VPMOVZXDQ __m512i _mm512_mask_cvtepu32_epi64(__m512i a, __mmask8 k, __m256i b);
+
+
VPMOVZXDQ __m512i _mm512_maskz_cvtepu32_epi64( __mmask8 k, __m256i a);
+
+
VPMOVZXWD __m512i _mm512_cvtepu16_epi32(__m256i a);
+
+
VPMOVZXWD __m512i _mm512_mask_cvtepu16_epi32(__m512i a, __mmask16 k, __m256i b);
+
+
VPMOVZXWD __m512i _mm512_maskz_cvtepu16_epi32(__mmask16 k, __m256i a);
+
+
VPMOVZXWQ __m512i _mm512_cvtepu16_epi64(__m128i a);
+
+
VPMOVZXWQ __m512i _mm512_mask_cvtepu16_epi64(__m512i a, __mmask8 k, __m128i b);
+
+
VPMOVZXWQ __m512i _mm512_maskz_cvtepu16_epi64( __mmask8 k, __m128i a);
+
+
VPMOVZXBW __m256i _mm256_cvtepu8_epi16(__m128i a);
+
+
VPMOVZXBW __m256i _mm256_mask_cvtepu8_epi16(__m256i a, __mmask16 k, __m128i b);
+
+
VPMOVZXBW __m256i _mm256_maskz_cvtepu8_epi16( __mmask16 k, __m128i b);
+
+
VPMOVZXBD __m256i _mm256_cvtepu8_epi32(__m128i a);
+
+
VPMOVZXBD __m256i _mm256_mask_cvtepu8_epi32(__m256i a, __mmask8 k, __m128i b);
+
+
VPMOVZXBD __m256i _mm256_maskz_cvtepu8_epi32( __mmask8 k, __m128i b);
+
+
VPMOVZXBQ __m256i _mm256_cvtepu8_epi64(__m128i a);
+
+
VPMOVZXBQ __m256i _mm256_mask_cvtepu8_epi64(__m256i a, __mmask8 k, __m128i b);
+
+
VPMOVZXBQ __m256i _mm256_maskz_cvtepu8_epi64( __mmask8 k, __m128i a);
+
+
VPMOVZXDQ __m256i _mm256_cvtepu32_epi64(__m128i a);
+
+
VPMOVZXDQ __m256i _mm256_mask_cvtepu32_epi64(__m256i a, __mmask8 k, __m128i b);
+
+
VPMOVZXDQ __m256i _mm256_maskz_cvtepu32_epi64( __mmask8 k, __m128i a);
+
+
VPMOVZXWD __m256i _mm256_cvtepu16_epi32(__m128i a);
+
+
VPMOVZXWD __m256i _mm256_mask_cvtepu16_epi32(__m256i a, __mmask8 k, __m128i b);
+
+
VPMOVZXWD __m256i _mm256_maskz_cvtepu16_epi32(__mmask8 k, __m128i a);
+
+
VPMOVZXWQ __m256i _mm256_cvtepu16_epi64(__m128i a);
+
+
VPMOVZXWQ __m256i _mm256_mask_cvtepu16_epi64(__m256i a, __mmask8 k, __m128i b);
+
+
VPMOVZXWQ __m256i _mm256_maskz_cvtepu16_epi64( __mmask8 k, __m128i a);
+
+
VPMOVZXBW __m128i _mm_mask_cvtepu8_epi16(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVZXBW __m128i _mm_maskz_cvtepu8_epi16( __mmask8 k, __m128i b);
+
+
VPMOVZXBD __m128i _mm_mask_cvtepu8_epi32(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVZXBD __m128i _mm_maskz_cvtepu8_epi32( __mmask8 k, __m128i b);
+
+
VPMOVZXBQ __m128i _mm_mask_cvtepu8_epi64(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVZXBQ __m128i _mm_maskz_cvtepu8_epi64( __mmask8 k, __m128i a);
+
+
VPMOVZXDQ __m128i _mm_mask_cvtepu32_epi64(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVZXDQ __m128i _mm_maskz_cvtepu32_epi64( __mmask8 k, __m128i a);
+
+
VPMOVZXWD __m128i _mm_mask_cvtepu16_epi32(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVZXWD __m128i _mm_maskz_cvtepu16_epi32(__mmask8 k, __m128i a);
+
+
VPMOVZXWQ __m128i _mm_mask_cvtepu16_epi64(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVZXWQ __m128i _mm_maskz_cvtepu16_epi64( __mmask8 k, __m128i a);
+
+
PMOVZXBW __m128i _mm_cvtepu8_epi16(__m128i a);
+
+
PMOVZXBD __m128i _mm_cvtepu8_epi32(__m128i a);
+
+
PMOVZXBQ __m128i _mm_cvtepu8_epi64(__m128i a);
+
+
PMOVZXWD __m128i _mm_cvtepu16_epi32(__m128i a);
+
+
PMOVZXWQ __m128i _mm_cvtepu16_epi64(__m128i a);
+
+
PMOVZXDQ __m128i _mm_cvtepu32_epi64(__m128i a);
+
+
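A short usage sketch of the SSE4.1 form (the function name is illustrative; _mm_loadl_epi64 is the usual way to bring in the 64-bit source):

#include <immintrin.h>
#include <stdint.h>

/* Zero-extend eight unsigned bytes to eight 16-bit words (PMOVZXBW). */
__m128i bytes_to_words(const uint8_t src[8])
{
    __m128i b = _mm_loadl_epi64((const __m128i *)src);  /* load the low 64 bits */
    return _mm_cvtepu8_epi16(b);                         /* PMOVZXBW xmm, xmm */
}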

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-22, “Type 5 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-51, “Type E5 Class Exception Conditions.”

+

Additionally:

#UD If VEX.vvvv != 1111B, or EVEX.vvvv != 1111B.
diff --git a/x86/pmuldq.html b/x86/pmuldq.html new file mode 100644 index 0000000..2dd5c5b --- /dev/null +++ b/x86/pmuldq.html @@ -0,0 +1,171 @@ + +PMULDQ + — Multiply Packed Doubleword Integers

PMULDQ + — Multiply Packed Doubleword Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 38 28 /r PMULDQ xmm1, xmm2/m128AV/VSSE4_1Multiply packed signed doubleword integers in xmm1 by packed signed doubleword integers in xmm2/m128, and store the quadword results in xmm1.
VEX.128.66.0F38.WIG 28 /r VPMULDQ xmm1, xmm2, xmm3/m128BV/VAVXMultiply packed signed doubleword integers in xmm2 by packed signed doubleword integers in xmm3/m128, and store the quadword results in xmm1.
VEX.256.66.0F38.WIG 28 /r VPMULDQ ymm1, ymm2, ymm3/m256BV/VAVX2Multiply packed signed doubleword integers in ymm2 by packed signed doubleword integers in ymm3/m256, and store the quadword results in ymm1.
EVEX.128.66.0F38.W1 28 /r VPMULDQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512FMultiply packed signed doubleword integers in xmm2 by packed signed doubleword integers in xmm3/m128/m64bcst, and store the quadword results in xmm1 using writemask k1.
EVEX.256.66.0F38.W1 28 /r VPMULDQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FMultiply packed signed doubleword integers in ymm2 by packed signed doubleword integers in ymm3/m256/m64bcst, and store the quadword results in ymm1 using writemask k1.
EVEX.512.66.0F38.W1 28 /r VPMULDQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstCV/VAVX512FMultiply packed signed doubleword integers in zmm2 by packed signed doubleword integers in zmm3/m512/m64bcst, and store the quadword results in zmm1 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Multiplies packed signed doubleword integers in the even-numbered (zero-based reference) elements of the first source operand with the packed signed doubleword integers in the corresponding elements of the second source operand and stores packed signed quadword results in the destination operand.

+

128-bit Legacy SSE version: The input signed doubleword integers are taken from the even-numbered elements of the source operands, i.e., the first (low) and third doubleword element. For 128-bit memory operands, 128 bits are fetched from memory, but only the first and third doublewords are used in the computation. The first source operand and the destination XMM operand is the same. The second source operand can be an XMM register or 128-bit memory location. Bits (MAXVL-1:128) of the corresponding destination register remain unchanged.

+

VEX.128 encoded version: The input signed doubleword integers are taken from the even-numbered elements of the source operands, i.e., the first (low) and third doubleword element. For 128-bit memory operands, 128 bits are fetched from memory, but only the first and third doublewords are used in the computation. The first source operand and the destination operand are XMM registers. The second source operand can be an XMM register or 128-bit memory location. Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

VEX.256 encoded version: The input signed doubleword integers are taken from the even-numbered elements of the source operands, i.e., the first, 3rd, 5th, 7th doubleword element. For 256-bit memory operands, 256 bits are fetched from memory, but only the four even-numbered doublewords are used in the computation. The first source operand and the destination operand are YMM registers. The second source operand can be a YMM register or 256-bit memory location. Bits (MAXVL-1:256) of the corresponding destination ZMM register are zeroed.

+

EVEX encoded version: The input signed doubleword integers are taken from the even-numbered elements of the source operands. The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination is a ZMM/YMM/XMM register, updated according to the writemask at 64-bit granularity.

+
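The even-element selection is easy to miss when calling the intrinsic; the sketch below (hypothetical helper names, SSE4.1 assumed) shows the SIMD call next to a scalar model of one 128-bit lane, matching the pseudocode that follows:

#include <immintrin.h>
#include <stdint.h>

/* PMULDQ: only dword elements 0 and 2 of each source are read. */
__m128i mul_even_dwords(__m128i a, __m128i b)
{
    return _mm_mul_epi32(a, b);                   /* PMULDQ xmm, xmm */
}

/* Scalar model of one 128-bit lane. */
void mul_even_dwords_ref(const int32_t a[4], const int32_t b[4], int64_t out[2])
{
    out[0] = (int64_t)a[0] * (int64_t)b[0];       /* DEST[63:0]   */
    out[1] = (int64_t)a[2] * (int64_t)b[2];       /* DEST[127:64] */
}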

Operation + ¶ +

+

VPMULDQ (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[i+63:i] := SignExtend64( SRC1[i+31:i]) * SignExtend64( SRC2[31:0])
+                ELSE DEST[i+63:i] := SignExtend64( SRC1[i+31:i]) * SignExtend64( SRC2[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+
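In intrinsic form the writemask branches above map to the _mm512_mask_ and _mm512_maskz_ variants listed later on this page; embedded broadcast (EVEX.b) has no dedicated intrinsic and is normally obtained by letting the compiler fold a set1 of a loaded scalar into the m64bcst form. A minimal sketch (AVX-512F assumed, function names illustrative):

#include <immintrin.h>
#include <stdint.h>

/* VPMULDQ zmm1 {k1}, zmm2, zmm3: masked-off quadwords keep their value from src. */
__m512i masked_mul_even_dwords(__m512i src, __mmask8 k, __m512i a, __m512i b)
{
    return _mm512_mask_mul_epi32(src, k, a, b);
}

/* Broadcast form: the compiler may encode this operand as zmm3/m64bcst. */
__m512i mul_even_dwords_bcast(__m512i a, const int64_t *p)
{
    return _mm512_mul_epi32(a, _mm512_set1_epi64(*p));
}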

VPMULDQ (VEX.256 Encoded Version) + ¶ +

+
DEST[63:0] := SignExtend64( SRC1[31:0]) * SignExtend64( SRC2[31:0])
+DEST[127:64] := SignExtend64( SRC1[95:64]) * SignExtend64( SRC2[95:64])
+DEST[191:128] := SignExtend64( SRC1[159:128]) * SignExtend64( SRC2[159:128])
+DEST[255:192] := SignExtend64( SRC1[223:192]) * SignExtend64( SRC2[223:192])
+DEST[MAXVL-1:256] := 0
+
+

VPMULDQ (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := SignExtend64( SRC1[31:0]) * SignExtend64( SRC2[31:0])
+DEST[127:64] := SignExtend64( SRC1[95:64]) * SignExtend64( SRC2[95:64])
+DEST[MAXVL-1:128] := 0
+
+

PMULDQ (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := SignExtend64( DEST[31:0]) * SignExtend64( SRC[31:0])
+DEST[127:64] := SignExtend64( DEST[95:64]) * SignExtend64( SRC[95:64])
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPMULDQ __m512i _mm512_mul_epi32(__m512i a, __m512i b);
+
+
VPMULDQ __m512i _mm512_mask_mul_epi32(__m512i s, __mmask8 k, __m512i a, __m512i b);
+
+
VPMULDQ __m512i _mm512_maskz_mul_epi32( __mmask8 k, __m512i a, __m512i b);
+
+
VPMULDQ __m256i _mm256_mask_mul_epi32(__m256i s, __mmask8 k, __m256i a, __m256i b);
+
+
VPMULDQ __m256i _mm256_maskz_mul_epi32( __mmask8 k, __m256i a, __m256i b);
+
+
VPMULDQ __m128i _mm_mask_mul_epi32(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPMULDQ __m128i _mm_maskz_mul_epi32( __mmask8 k, __m128i a, __m128i b);
+
+
(V)PMULDQ __m128i _mm_mul_epi32( __m128i a, __m128i b);
+
+
VPMULDQ __m256i _mm256_mul_epi32( __m256i a, __m256i b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/pmulhrsw.html b/x86/pmulhrsw.html new file mode 100644 index 0000000..5cc6c2a --- /dev/null +++ b/x86/pmulhrsw.html @@ -0,0 +1,249 @@ + +PMULHRSW + — Packed Multiply High With Round and Scale

PMULHRSW + — Packed Multiply High With Round and Scale

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 38 0B /r1 PMULHRSW mm1, mm2/m64AV/VSSSE3Multiply 16-bit signed words, scale and round signed doublewords, pack high 16 bits to mm1.
66 0F 38 0B /r PMULHRSW xmm1, xmm2/m128AV/VSSSE3Multiply 16-bit signed words, scale and round signed doublewords, pack high 16 bits to xmm1.
VEX.128.66.0F38.WIG 0B /r VPMULHRSW xmm1, xmm2, xmm3/m128BV/VAVXMultiply 16-bit signed words, scale and round signed doublewords, pack high 16 bits to xmm1.
VEX.256.66.0F38.WIG 0B /r VPMULHRSW ymm1, ymm2, ymm3/m256BV/VAVX2Multiply 16-bit signed words, scale and round signed doublewords, pack high 16 bits to ymm1.
EVEX.128.66.0F38.WIG 0B /r VPMULHRSW xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWMultiply 16-bit signed words, scale and round signed doublewords, pack high 16 bits to xmm1 under writemask k1.
EVEX.256.66.0F38.WIG 0B /r VPMULHRSW ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWMultiply 16-bit signed words, scale and round signed doublewords, pack high 16 bits to ymm1 under writemask k1.
EVEX.512.66.0F38.WIG 0B /r VPMULHRSW zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWMultiply 16-bit signed words, scale and round signed doublewords, pack high 16 bits to zmm1 under writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

PMULHRSW multiplies vertically each signed 16-bit integer from the destination operand (first operand) with the corresponding signed 16-bit integer of the source operand (second operand), producing intermediate, signed 32-bit integers. Each intermediate 32-bit integer is truncated to the 18 most significant bits. Rounding is always performed by adding 1 to the least significant bit of the 18-bit intermediate result. The final result is obtained by selecting the 16 bits immediately to the right of the most significant bit of each 18-bit intermediate result and packing them into the destination operand.

+
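A scalar model of one element may make the bit positions concrete. This is only a sketch: it assumes the usual arithmetic right shift on signed values, and the helper name is illustrative.

#include <stdint.h>

/* One PMULHRSW element: 16x16 -> 32 signed multiply, drop the low 14 bits,
   round by adding 1, then keep bits [16:1] of the intermediate result.
   The 0x8000 * 0x8000 corner case wraps to 0x8000, matching the instruction
   on typical two's-complement compilers. */
static int16_t mulhrs16(int16_t a, int16_t b)
{
    int32_t t = (((int32_t)a * (int32_t)b) >> 14) + 1;   /* 18 significant bits */
    return (int16_t)(t >> 1);                            /* bits [16:1] */
}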

When the source operand is a 128-bit memory operand, the operand must be aligned on a 16-byte boundary or a general-protection exception (#GP) will be generated.

+

In 64-bit mode and not encoded with VEX/EVEX, use the REX prefix to access XMM8-XMM15 registers.

+

Legacy SSE version 64-bit operand: Both operands can be MMX registers. The second source operand is an MMX register or a 64-bit memory location.

+

128-bit Legacy SSE version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

VEX.256 encoded version: The second source operand can be an YMM register or a 256-bit memory location. The first source and destination operands are YMM registers.

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

Operation + ¶ +

+

PMULHRSW (With 64-bit Operands) + ¶ +

+
temp0[31:0] = INT32 ((DEST[15:0] * SRC[15:0]) >>14) + 1;
+temp1[31:0] = INT32 ((DEST[31:16] * SRC[31:16]) >>14) + 1;
+temp2[31:0] = INT32 ((DEST[47:32] * SRC[47:32]) >> 14) + 1;
+temp3[31:0] = INT32 ((DEST[63:48] * SRC[63:48]) >> 14) + 1;
+DEST[15:0] = temp0[16:1];
+DEST[31:16] = temp1[16:1];
+DEST[47:32] = temp2[16:1];
+DEST[63:48] = temp3[16:1];
+
+

PMULHRSW (With 128-bit Operands) + ¶ +

+
temp0[31:0] = INT32 ((DEST[15:0] * SRC[15:0]) >>14) + 1;
+temp1[31:0] = INT32 ((DEST[31:16] * SRC[31:16]) >>14) + 1;
+temp2[31:0] = INT32 ((DEST[47:32] * SRC[47:32]) >>14) + 1;
+temp3[31:0] = INT32 ((DEST[63:48] * SRC[63:48]) >>14) + 1;
+temp4[31:0] = INT32 ((DEST[79:64] * SRC[79:64]) >>14) + 1;
+temp5[31:0] = INT32 ((DEST[95:80] * SRC[95:80]) >>14) + 1;
+temp6[31:0] = INT32 ((DEST[111:96] * SRC[111:96]) >>14) + 1;
+temp7[31:0] = INT32 ((DEST[127:112] * SRC[127:112]) >>14) + 1;
+DEST[15:0] = temp0[16:1];
+DEST[31:16] = temp1[16:1];
+DEST[47:32] = temp2[16:1];
+DEST[63:48] = temp3[16:1];
+DEST[79:64] = temp4[16:1];
+DEST[95:80] = temp5[16:1];
+DEST[111:96] = temp6[16:1];
+DEST[127:112] = temp7[16:1];
+
+

VPMULHRSW (VEX.128 Encoded Version) + ¶ +

+
temp0[31:0] := INT32 ((SRC1[15:0] * SRC2[15:0]) >>14) + 1
+temp1[31:0] := INT32 ((SRC1[31:16] * SRC2[31:16]) >>14) + 1
+temp2[31:0] := INT32 ((SRC1[47:32] * SRC2[47:32]) >>14) + 1
+temp3[31:0] := INT32 ((SRC1[63:48] * SRC2[63:48]) >>14) + 1
+temp4[31:0] := INT32 ((SRC1[79:64] * SRC2[79:64]) >>14) + 1
+temp5[31:0] := INT32 ((SRC1[95:80] * SRC2[95:80]) >>14) + 1
+temp6[31:0] := INT32 ((SRC1[111:96] * SRC2[111:96]) >>14) + 1
+temp7[31:0] := INT32 ((SRC1[127:112] * SRC2[127:112]) >>14) + 1
+DEST[15:0] := temp0[16:1]
+DEST[31:16] := temp1[16:1]
+DEST[47:32] := temp2[16:1]
+DEST[63:48] := temp3[16:1]
+DEST[79:64] := temp4[16:1]
+DEST[95:80] := temp5[16:1]
+DEST[111:96] := temp6[16:1]
+DEST[127:112] := temp7[16:1]
+DEST[MAXVL-1:128] := 0
+
+

VPMULHRSW (VEX.256 Encoded Version) + ¶ +

+
temp0[31:0] := INT32 ((SRC1[15:0] * SRC2[15:0]) >>14) + 1
+temp1[31:0] := INT32 ((SRC1[31:16] * SRC2[31:16]) >>14) + 1
+temp2[31:0] := INT32 ((SRC1[47:32] * SRC2[47:32]) >>14) + 1
+temp3[31:0] := INT32 ((SRC1[63:48] * SRC2[63:48]) >>14) + 1
+temp4[31:0] := INT32 ((SRC1[79:64] * SRC2[79:64]) >>14) + 1
+temp5[31:0] := INT32 ((SRC1[95:80] * SRC2[95:80]) >>14) + 1
+temp6[31:0] := INT32 ((SRC1[111:96] * SRC2[111:96]) >>14) + 1
+temp7[31:0] := INT32 ((SRC1[127:112] * SRC2[127:112]) >>14) + 1
+temp8[31:0] := INT32 ((SRC1[143:128] * SRC2[143:128]) >>14) + 1
+temp9[31:0] := INT32 ((SRC1[159:144] * SRC2[159:144]) >>14) + 1
+temp10[31:0] := INT32 ((SRC1[175:160] * SRC2[175:160]) >>14) + 1
+temp11[31:0] := INT32 ((SRC1[191:176] * SRC2[191:176]) >>14) + 1
+temp12[31:0] := INT32 ((SRC1[207:192] * SRC2[207:192]) >>14) + 1
+temp13[31:0] := INT32 ((SRC1[223:208] * SRC2[223:208]) >>14) + 1
+temp14[31:0] := INT32 ((SRC1[239:224] * SRC2[239:224]) >>14) + 1
+temp15[31:0] := INT32 ((SRC1[255:240] * SRC2[255:240]) >>14) + 1
+DEST[15:0] := temp0[16:1]
+DEST[31:16] := temp1[16:1]
+DEST[47:32] := temp2[16:1]
+DEST[63:48] := temp3[16:1]
+DEST[79:64] := temp4[16:1]
+DEST[95:80] := temp5[16:1]
+DEST[111:96] := temp6[16:1]
+DEST[127:112] := temp7[16:1]
+DEST[143:128] := temp8[16:1]
+DEST[159:144] := temp9[16:1]
+DEST[175:160] := temp10[16:1]
+DEST[191:176] := temp11[16:1]
+DEST[207:192] := temp12[16:1]
+DEST[223:208] := temp13[16:1]
+DEST[239:224] := temp14[16:1]
+DEST[255:240] := temp15[16:1]
+DEST[MAXVL-1:256] := 0
+
+

VPMULHRSW (EVEX Encoded Version) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN
+            temp[31:0] := ((SRC1[i+15:i] * SRC2[i+15:i]) >>14) + 1
+            DEST[i+15:i] := temp[16:1]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPMULHRSW __m512i _mm512_mulhrs_epi16(__m512i a, __m512i b);
+
+
VPMULHRSW __m512i _mm512_mask_mulhrs_epi16(__m512i s, __mmask32 k, __m512i a, __m512i b);
+
+
VPMULHRSW __m512i _mm512_maskz_mulhrs_epi16( __mmask32 k, __m512i a, __m512i b);
+
+
VPMULHRSW __m256i _mm256_mask_mulhrs_epi16(__m256i s, __mmask16 k, __m256i a, __m256i b);
+
+
VPMULHRSW __m256i _mm256_maskz_mulhrs_epi16( __mmask16 k, __m256i a, __m256i b);
+
+
VPMULHRSW __m128i _mm_mask_mulhrs_epi16(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPMULHRSW __m128i _mm_maskz_mulhrs_epi16( __mmask8 k, __m128i a, __m128i b);
+
+
PMULHRSW __m64 _mm_mulhrs_pi16 (__m64 a, __m64 b)
+
+
(V)PMULHRSW __m128i _mm_mulhrs_epi16 (__m128i a, __m128i b)
+
+
VPMULHRSW __m256i _mm256_mulhrs_epi16 (__m256i a, __m256i b)
+
+
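A common use of these intrinsics is rounded Q15 fixed-point multiplication, where every 16-bit lane holds a value scaled by 2^15. A brief usage sketch (SSSE3 assumed; the function name is illustrative):

#include <immintrin.h>

/* Rounded Q15 product of eight lane pairs: (a*b + 2^14) >> 15 per lane. */
__m128i q15_mul(__m128i a, __m128i b)
{
    return _mm_mulhrs_epi16(a, b);   /* (V)PMULHRSW */
}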

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/pmulhuw.html b/x86/pmulhuw.html new file mode 100644 index 0000000..db3a15d --- /dev/null +++ b/x86/pmulhuw.html @@ -0,0 +1,359 @@ + +PMULHUW + — Multiply Packed Unsigned Integers and Store High Result

PMULHUW + — Multiply Packed Unsigned Integers and Store High Result

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F E4 /r1 PMULHUW mm1, mm2/m64AV/VSSEMultiply the packed unsigned word integers in mm1 register and mm2/m64, and store the high 16 bits of the results in mm1.
66 0F E4 /r PMULHUW xmm1, xmm2/m128AV/VSSE2Multiply the packed unsigned word integers in xmm1 and xmm2/m128, and store the high 16 bits of the results in xmm1.
VEX.128.66.0F.WIG E4 /r VPMULHUW xmm1, xmm2, xmm3/m128BV/VAVXMultiply the packed unsigned word integers in xmm2 and xmm3/m128, and store the high 16 bits of the results in xmm1.
VEX.256.66.0F.WIG E4 /r VPMULHUW ymm1, ymm2, ymm3/m256BV/VAVX2Multiply the packed unsigned word integers in ymm2 and ymm3/m256, and store the high 16 bits of the results in ymm1.
EVEX.128.66.0F.WIG E4 /r VPMULHUW xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWMultiply the packed unsigned word integers in xmm2 and xmm3/m128, and store the high 16 bits of the results in xmm1 under writemask k1.
EVEX.256.66.0F.WIG E4 /r VPMULHUW ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWMultiply the packed unsigned word integers in ymm2 and ymm3/m256, and store the high 16 bits of the results in ymm1 under writemask k1.
EVEX.512.66.0F.WIG E4 /r VPMULHUW zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWMultiply the packed unsigned word integers in zmm2 and zmm3/m512, and store the high 16 bits of the results in zmm1 under writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD unsigned multiply of the packed unsigned word integers in the destination operand (first operand) and the source operand (second operand), and stores the high 16 bits of each 32-bit intermediate result in the destination operand. (Figure 4-12 shows this operation when using 64-bit operands.)

+
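Equivalently, per element (a sketch only; the helper names are illustrative and SSE2 is assumed for the intrinsic form):

#include <immintrin.h>
#include <stdint.h>

/* One PMULHUW element: unsigned 16x16 -> 32 multiply, keep the high half. */
static uint16_t mulhi_u16(uint16_t a, uint16_t b)
{
    return (uint16_t)(((uint32_t)a * b) >> 16);
}

/* Eight elements at once. */
__m128i mulhi_words_u(__m128i a, __m128i b)
{
    return _mm_mulhi_epu16(a, b);    /* PMULHUW xmm, xmm */
}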

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

Legacy SSE version 64-bit operand: The source operand can be an MMX technology register or a 64-bit memory location. The destination operand is an MMX technology register.

+

128-bit Legacy SSE version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the destination YMM register are zeroed. VEX.L must be 0, otherwise the instruction will #UD.

+

VEX.256 encoded version: The second source operand can be an YMM register or a 256-bit memory location. The first source and destination operands are YMM registers.

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+
[Figure: each word pair Xi, Yi is multiplied to a 32-bit product Zi = Xi ∗ Yi; the high word Zi[31:16] of each product is written to DEST.]
Figure 4-12. PMULHUW and PMULHW Instruction Operation Using 64-bit Operands
+

Operation + ¶ +

+

PMULHUW (With 64-bit Operands) + ¶ +

+
TEMP0[31:0] := DEST[15:0] ∗ SRC[15:0]; (* Unsigned multiplication *)
+TEMP1[31:0] := DEST[31:16] ∗ SRC[31:16];
+TEMP2[31:0] := DEST[47:32] ∗ SRC[47:32];
+TEMP3[31:0] := DEST[63:48] ∗ SRC[63:48];
+DEST[15:0] := TEMP0[31:16];
+DEST[31:16] := TEMP1[31:16];
+DEST[47:32] := TEMP2[31:16];
+DEST[63:48] := TEMP3[31:16];
+
+

PMULHUW (With 128-bit Operands) + ¶ +

+
TEMP0[31:0] := DEST[15:0] ∗ SRC[15:0]; (* Unsigned multiplication *)
+TEMP1[31:0] := DEST[31:16] ∗ SRC[31:16];
+TEMP2[31:0] := DEST[47:32] ∗ SRC[47:32];
+TEMP3[31:0] := DEST[63:48] ∗ SRC[63:48];
+TEMP4[31:0] := DEST[79:64] ∗ SRC[79:64];
+TEMP5[31:0] := DEST[95:80] ∗ SRC[95:80];
+TEMP6[31:0] := DEST[111:96] ∗ SRC[111:96];
+TEMP7[31:0] := DEST[127:112] ∗ SRC[127:112];
+DEST[15:0] := TEMP0[31:16];
+DEST[31:16] := TEMP1[31:16];
+DEST[47:32] := TEMP2[31:16];
+DEST[63:48] := TEMP3[31:16];
+DEST[79:64] := TEMP4[31:16];
+DEST[95:80] := TEMP5[31:16];
+DEST[111:96] := TEMP6[31:16];
+DEST[127:112] := TEMP7[31:16];
+
+

VPMULHUW (VEX.128 Encoded Version) + ¶ +

+
TEMP0[31:0] := SRC1[15:0] * SRC2[15:0]
+TEMP1[31:0] := SRC1[31:16] * SRC2[31:16]
+TEMP2[31:0] := SRC1[47:32] * SRC2[47:32]
+TEMP3[31:0] := SRC1[63:48] * SRC2[63:48]
+TEMP4[31:0] := SRC1[79:64] * SRC2[79:64]
+TEMP5[31:0] := SRC1[95:80] * SRC2[95:80]
+TEMP6[31:0] := SRC1[111:96] * SRC2[111:96]
+TEMP7[31:0] := SRC1[127:112] * SRC2[127:112]
+DEST[15:0] := TEMP0[31:16]
+DEST[31:16] := TEMP1[31:16]
+DEST[47:32] := TEMP2[31:16]
+DEST[63:48] := TEMP3[31:16]
+DEST[79:64] := TEMP4[31:16]
+DEST[95:80] := TEMP5[31:16]
+DEST[111:96] := TEMP6[31:16]
+DEST[127:112] := TEMP7[31:16]
+DEST[MAXVL-1:128] := 0
+
+

VPMULHUW (VEX.256 Encoded Version) + ¶ +

+
TEMP0[31:0] := SRC1[15:0] * SRC2[15:0]
+TEMP1[31:0] := SRC1[31:16] * SRC2[31:16]
+TEMP2[31:0] := SRC1[47:32] * SRC2[47:32]
+TEMP3[31:0] := SRC1[63:48] * SRC2[63:48]
+TEMP4[31:0] := SRC1[79:64] * SRC2[79:64]
+TEMP5[31:0] := SRC1[95:80] * SRC2[95:80]
+TEMP6[31:0] := SRC1[111:96] * SRC2[111:96]
+TEMP7[31:0] := SRC1[127:112] * SRC2[127:112]
+TEMP8[31:0] := SRC1[143:128] * SRC2[143:128]
+TEMP9[31:0] := SRC1[159:144] * SRC2[159:144]
+TEMP10[31:0] := SRC1[175:160] * SRC2[175:160]
+TEMP11[31:0] := SRC1[191:176] * SRC2[191:176]
+TEMP12[31:0] := SRC1[207:192] * SRC2[207:192]
+TEMP13[31:0] := SRC1[223:208] * SRC2[223:208]
+TEMP14[31:0] := SRC1[239:224] * SRC2[239:224]
+TEMP15[31:0] := SRC1[255:240] * SRC2[255:240]
+DEST[15:0] := TEMP0[31:16]
+DEST[31:16] := TEMP1[31:16]
+DEST[47:32] := TEMP2[31:16]
+DEST[63:48] := TEMP3[31:16]
+DEST[79:64] := TEMP4[31:16]
+DEST[95:80] := TEMP5[31:16]
+DEST[111:96] := TEMP6[31:16]
+DEST[127:112] := TEMP7[31:16]
+DEST[143:128] := TEMP8[31:16]
+DEST[159:144] := TEMP9[31:16]
+DEST[175:160] := TEMP10[31:16]
+DEST[191:176] := TEMP11[31:16]
+DEST[207:192] := TEMP12[31:16]
+DEST[223:208] := TEMP13[31:16]
+DEST[239:224] := TEMP14[31:16]
+DEST[255:240] := TEMP15[31:16]
+DEST[MAXVL-1:256] := 0
+
+

VPMULHUW (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN
+            temp[31:0] := SRC1[i+15:i] * SRC2[i+15:i]
+            DEST[i+15:i] := temp[31:16]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPMULHUW __m512i _mm512_mulhi_epu16(__m512i a, __m512i b);
+
+
VPMULHUW __m512i _mm512_mask_mulhi_epu16(__m512i s, __mmask32 k, __m512i a, __m512i b);
+
+
VPMULHUW __m512i _mm512_maskz_mulhi_epu16( __mmask32 k, __m512i a, __m512i b);
+
+
VPMULHUW __m256i _mm256_mask_mulhi_epu16(__m256i s, __mmask16 k, __m256i a, __m256i b);
+
+
VPMULHUW __m256i _mm256_maskz_mulhi_epu16( __mmask16 k, __m256i a, __m256i b);
+
+
VPMULHUW __m128i _mm_mask_mulhi_epu16(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPMULHUW __m128i _mm_maskz_mulhi_epu16( __mmask8 k, __m128i a, __m128i b);
+
+
PMULHUW __m64 _mm_mulhi_pu16(__m64 a, __m64 b)
+
+
(V)PMULHUW __m128i _mm_mulhi_epu16 ( __m128i a, __m128i b)
+
+
VPMULHUW __m256i _mm256_mulhi_epu16 ( __m256i a, __m256i b)
+
+

Flags Affected + ¶ +

+

None.

+

Numeric Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/pmulhw.html b/x86/pmulhw.html new file mode 100644 index 0000000..5ecdf70 --- /dev/null +++ b/x86/pmulhw.html @@ -0,0 +1,252 @@ + +PMULHW + — Multiply Packed Signed Integers and Store High Result

PMULHW + — Multiply Packed Signed Integers and Store High Result

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F E5 /r1 PMULHW mm, mm/m64AV/VMMXMultiply the packed signed word integers in mm1 register and mm2/m64, and store the high 16 bits of the results in mm1.
66 0F E5 /r PMULHW xmm1, xmm2/m128AV/VSSE2Multiply the packed signed word integers in xmm1 and xmm2/m128, and store the high 16 bits of the results in xmm1.
VEX.128.66.0F.WIG E5 /r VPMULHW xmm1, xmm2, xmm3/m128BV/VAVXMultiply the packed signed word integers in xmm2 and xmm3/m128, and store the high 16 bits of the results in xmm1.
VEX.256.66.0F.WIG E5 /r VPMULHW ymm1, ymm2, ymm3/m256BV/VAVX2Multiply the packed signed word integers in ymm2 and ymm3/m256, and store the high 16 bits of the results in ymm1.
EVEX.128.66.0F.WIG E5 /r VPMULHW xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWMultiply the packed signed word integers in xmm2 and xmm3/m128, and store the high 16 bits of the results in xmm1 under writemask k1.
EVEX.256.66.0F.WIG E5 /r VPMULHW ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWMultiply the packed signed word integers in ymm2 and ymm3/m256, and store the high 16 bits of the results in ymm1 under writemask k1.
EVEX.512.66.0F.WIG E5 /r VPMULHW zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWMultiply the packed signed word integers in zmm2 and zmm3/m512, and store the high 16 bits of the results in zmm1 under writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD signed multiply of the packed signed word integers in the destination operand (first operand) and the source operand (second operand), and stores the high 16 bits of each intermediate 32-bit result in the destination operand. (Figure 4-12 shows this operation when using 64-bit operands.)

+
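Per element this is simply the upper half of the signed 32-bit product. A sketch (helper names illustrative; SSE2 assumed; the scalar model relies on arithmetic right shift for negative products):

#include <immintrin.h>
#include <stdint.h>

/* One PMULHW element: signed 16x16 -> 32 multiply, keep the high half. */
static int16_t mulhi_i16(int16_t a, int16_t b)
{
    return (int16_t)(((int32_t)a * b) >> 16);   /* arithmetic shift assumed */
}

/* Eight elements at once; combine with PMULLW to recover full 32-bit products. */
__m128i mulhi_words_s(__m128i a, __m128i b)
{
    return _mm_mulhi_epi16(a, b);    /* PMULHW xmm, xmm */
}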

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

Legacy SSE version 64-bit operand: The source operand can be an MMX technology register or a 64-bit memory location. The destination operand is an MMX technology register.

+

128-bit Legacy SSE version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the destination YMM register are zeroed. VEX.L must be 0, otherwise the instruction will #UD.

+

VEX.256 encoded version: The second source operand can be an YMM register or a 256-bit memory location. The first source and destination operands are YMM registers.

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

Operation + ¶ +

+

PMULHW (With 64-bit Operands) + ¶ +

+
TEMP0[31:0] := DEST[15:0] ∗ SRC[15:0]; (* Signed multiplication *)
+TEMP1[31:0] := DEST[31:16] ∗ SRC[31:16];
+TEMP2[31:0] := DEST[47:32] ∗ SRC[47:32];
+TEMP3[31:0] := DEST[63:48] ∗ SRC[63:48];
+DEST[15:0] := TEMP0[31:16];
+DEST[31:16] := TEMP1[31:16];
+DEST[47:32] := TEMP2[31:16];
+DEST[63:48] := TEMP3[31:16];
+
+

PMULHW (With 128-bit Operands) + ¶ +

+
TEMP0[31:0] := DEST[15:0] ∗ SRC[15:0]; (* Signed multiplication *)
+TEMP1[31:0] := DEST[31:16] ∗ SRC[31:16];
+TEMP2[31:0] := DEST[47:32] ∗ SRC[47:32];
+TEMP3[31:0] := DEST[63:48] ∗ SRC[63:48];
+TEMP4[31:0] := DEST[79:64] ∗ SRC[79:64];
+TEMP5[31:0] := DEST[95:80] ∗ SRC[95:80];
+TEMP6[31:0] := DEST[111:96] ∗ SRC[111:96];
+TEMP7[31:0] := DEST[127:112] ∗ SRC[127:112];
+DEST[15:0] := TEMP0[31:16];
+DEST[31:16] := TEMP1[31:16];
+DEST[47:32] := TEMP2[31:16];
+DEST[63:48] := TEMP3[31:16];
+DEST[79:64] := TEMP4[31:16];
+DEST[95:80] := TEMP5[31:16];
+DEST[111:96] := TEMP6[31:16];
+DEST[127:112] := TEMP7[31:16];
+
+

VPMULHW (VEX.128 Encoded Version) + ¶ +

+
TEMP0[31:0] := SRC1[15:0] * SRC2[15:0] (*Signed Multiplication*)
+TEMP1[31:0] := SRC1[31:16] * SRC2[31:16]
+TEMP2[31:0] := SRC1[47:32] * SRC2[47:32]
+TEMP3[31:0] := SRC1[63:48] * SRC2[63:48]
+TEMP4[31:0] := SRC1[79:64] * SRC2[79:64]
+TEMP5[31:0] := SRC1[95:80] * SRC2[95:80]
+TEMP6[31:0] := SRC1[111:96] * SRC2[111:96]
+TEMP7[31:0] := SRC1[127:112] * SRC2[127:112]
+DEST[15:0] := TEMP0[31:16]
+DEST[31:16] := TEMP1[31:16]
+DEST[47:32] := TEMP2[31:16]
+DEST[63:48] := TEMP3[31:16]
+DEST[79:64] := TEMP4[31:16]
+DEST[95:80] := TEMP5[31:16]
+DEST[111:96] := TEMP6[31:16]
+DEST[127:112] := TEMP7[31:16]
+DEST[MAXVL-1:128] := 0
+
+

VPMULHW (VEX.256 Encoded Version) + ¶ +

+
TEMP0[31:0] := SRC1[15:0] * SRC2[15:0] (*Signed Multiplication*)
+TEMP1[31:0] := SRC1[31:16] * SRC2[31:16]
+TEMP2[31:0] := SRC1[47:32] * SRC2[47:32]
+TEMP3[31:0] := SRC1[63:48] * SRC2[63:48]
+TEMP4[31:0] := SRC1[79:64] * SRC2[79:64]
+TEMP5[31:0] := SRC1[95:80] * SRC2[95:80]
+TEMP6[31:0] := SRC1[111:96] * SRC2[111:96]
+TEMP7[31:0] := SRC1[127:112] * SRC2[127:112]
+TEMP8[31:0] := SRC1[143:128] * SRC2[143:128]
+TEMP9[31:0] := SRC1[159:144] * SRC2[159:144]
+TEMP10[31:0] := SRC1[175:160] * SRC2[175:160]
+TEMP11[31:0] := SRC1[191:176] * SRC2[191:176]
+TEMP12[31:0] := SRC1[207:192] * SRC2[207:192]
+TEMP13[31:0] := SRC1[223:208] * SRC2[223:208]
+TEMP14[31:0] := SRC1[239:224] * SRC2[239:224]
+TEMP15[31:0] := SRC1[255:240] * SRC2[255:240]
+DEST[15:0] := TEMP0[31:16]
+DEST[31:16] := TEMP1[31:16]
+DEST[47:32] := TEMP2[31:16]
+DEST[63:48] := TEMP3[31:16]
+DEST[79:64] := TEMP4[31:16]
+DEST[95:80] := TEMP5[31:16]
+DEST[111:96] := TEMP6[31:16]
+DEST[127:112] := TEMP7[31:16]
+DEST[143:128] := TEMP8[31:16]
+DEST[159:144] := TEMP9[31:16]
+DEST[175:160] := TEMP10[31:16]
+DEST[191:176] := TEMP11[31:16]
+DEST[207:192] := TEMP12[31:16]
+DEST[223:208] := TEMP13[31:16]
+DEST[239:224] := TEMP14[31:16]
+DEST[255:240] := TEMP15[31:16]
+DEST[MAXVL-1:256] := 0
+
+

VPMULHW (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN
+            temp[31:0] := SRC1[i+15:i] * SRC2[i+15:i]
+            DEST[i+15:i] := temp[31:16]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPMULHW __m512i _mm512_mulhi_epi16(__m512i a, __m512i b);
+
+
VPMULHW __m512i _mm512_mask_mulhi_epi16(__m512i s, __mmask32 k, __m512i a, __m512i b);
+
+
VPMULHW __m512i _mm512_maskz_mulhi_epi16( __mmask32 k, __m512i a, __m512i b);
+
+
VPMULHW __m256i _mm256_mask_mulhi_epi16(__m256i s, __mmask16 k, __m256i a, __m256i b);
+
+
VPMULHW __m256i _mm256_maskz_mulhi_epi16( __mmask16 k, __m256i a, __m256i b);
+
+
VPMULHW __m128i _mm_mask_mulhi_epi16(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPMULHW __m128i _mm_maskz_mulhi_epi16( __mmask8 k, __m128i a, __m128i b);
+
+
PMULHW __m64 _mm_mulhi_pi16 (__m64 m1, __m64 m2)
+
+
(V)PMULHW __m128i _mm_mulhi_epi16 ( __m128i a, __m128i b)
+
+
VPMULHW __m256i _mm256_mulhi_epi16 ( __m256i a, __m256i b)
+
+

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/pmulld.pmullq.html b/x86/pmulld.pmullq.html new file mode 100644 index 0000000..f92591d --- /dev/null +++ b/x86/pmulld.pmullq.html @@ -0,0 +1,254 @@ + +PMULLD/PMULLQ + — Multiply Packed Integers and Store Low Result

PMULLD/PMULLQ + — Multiply Packed Integers and Store Low Result

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 38 40 /r PMULLD xmm1, xmm2/m128AV/VSSE4_1Multiply the packed dword signed integers in xmm1 and xmm2/m128 and store the low 32 bits of each product in xmm1.
VEX.128.66.0F38.WIG 40 /r VPMULLD xmm1, xmm2, xmm3/m128BV/VAVXMultiply the packed dword signed integers in xmm2 and xmm3/m128 and store the low 32 bits of each product in xmm1.
VEX.256.66.0F38.WIG 40 /r VPMULLD ymm1, ymm2, ymm3/m256BV/VAVX2Multiply the packed dword signed integers in ymm2 and ymm3/m256 and store the low 32 bits of each product in ymm1.
EVEX.128.66.0F38.W0 40 /r VPMULLD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512FMultiply the packed dword signed integers in xmm2 and xmm3/m128/m32bcst and store the low 32 bits of each product in xmm1 under writemask k1.
EVEX.256.66.0F38.W0 40 /r VPMULLD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512FMultiply the packed dword signed integers in ymm2 and ymm3/m256/m32bcst and store the low 32 bits of each product in ymm1 under writemask k1.
EVEX.512.66.0F38.W0 40 /r VPMULLD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstCV/VAVX512FMultiply the packed dword signed integers in zmm2 and zmm3/m512/m32bcst and store the low 32 bits of each product in zmm1 under writemask k1.
EVEX.128.66.0F38.W1 40 /r VPMULLQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512DQMultiply the packed qword signed integers in xmm2 and xmm3/m128/m64bcst and store the low 64 bits of each product in xmm1 under writemask k1.
EVEX.256.66.0F38.W1 40 /r VPMULLQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512DQMultiply the packed qword signed integers in ymm2 and ymm3/m256/m64bcst and store the low 64 bits of each product in ymm1 under writemask k1.
EVEX.512.66.0F38.W1 40 /r VPMULLQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstCV/VAVX512DQMultiply the packed qword signed integers in zmm2 and zmm3/m512/m64bcst and store the low 64 bits of each product in zmm1 under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD signed multiply of the packed signed dword/qword integers from each element of the first source operand with the corresponding element in the second source operand. The low 32/64 bits of each 64/128-bit intermediate result are stored to the destination operand.

+
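Because only the low half of each product is kept, the result is the same whether the inputs are interpreted as signed or unsigned. A brief usage sketch (function names illustrative; SSE4.1 assumed for the dword form, AVX-512DQ with VL for the qword form):

#include <immintrin.h>

/* Low 32 bits of each dword product (PMULLD). */
__m128i mullo_dwords(__m128i a, __m128i b)
{
    return _mm_mullo_epi32(a, b);    /* SSE4.1 */
}

/* Low 64 bits of each qword product (VPMULLQ); requires AVX-512DQ + VL. */
__m128i mullo_qwords(__m128i a, __m128i b)
{
    return _mm_mullo_epi64(a, b);
}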

128-bit Legacy SSE version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding ZMM destination register remain unchanged.

+

VEX.128 encoded version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding ZMM register are zeroed.

+

VEX.256 encoded version: The first source operand is a YMM register; The second source operand is a YMM register or 256-bit memory location. Bits (MAXVL-1:256) of the corresponding destination ZMM register are zeroed.

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register. The second source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32/64-bit memory location. The destination operand is conditionally updated based on writemask k1.

+

Operation + ¶ +

+

VPMULLQ (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b == 1) AND (SRC2 *is memory*)
+                THEN Temp[127:0] := SRC1[i+63:i] * SRC2[63:0]
+                ELSE Temp[127:0] := SRC1[i+63:i] * SRC2[i+63:i]
+            FI;
+            DEST[i+63:i] := Temp[63:0]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPMULLD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN Temp[63:0] := SRC1[i+31:i] * SRC2[31:0]
+                ELSE Temp[63:0] := SRC1[i+31:i] * SRC2[i+31:i]
+            FI;
+            DEST[i+31:i] := Temp[31:0]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPMULLD (VEX.256 Encoded Version) + ¶ +

+
Temp0[63:0] := SRC1[31:0] * SRC2[31:0]
+Temp1[63:0] := SRC1[63:32] * SRC2[63:32]
+Temp2[63:0] := SRC1[95:64] * SRC2[95:64]
+Temp3[63:0] := SRC1[127:96] * SRC2[127:96]
+Temp4[63:0] := SRC1[159:128] * SRC2[159:128]
+Temp5[63:0] := SRC1[191:160] * SRC2[191:160]
+Temp6[63:0] := SRC1[223:192] * SRC2[223:192]
+Temp7[63:0] := SRC1[255:224] * SRC2[255:224]
+DEST[31:0] := Temp0[31:0]
+DEST[63:32] := Temp1[31:0]
+DEST[95:64] := Temp2[31:0]
+DEST[127:96] := Temp3[31:0]
+DEST[159:128] := Temp4[31:0]
+DEST[191:160] := Temp5[31:0]
+DEST[223:192] := Temp6[31:0]
+DEST[255:224] := Temp7[31:0]
+DEST[MAXVL-1:256] := 0
+
+

VPMULLD (VEX.128 Encoded Version) + ¶ +

+
Temp0[63:0] := SRC1[31:0] * SRC2[31:0]
+Temp1[63:0] := SRC1[63:32] * SRC2[63:32]
+Temp2[63:0] := SRC1[95:64] * SRC2[95:64]
+Temp3[63:0] := SRC1[127:96] * SRC2[127:96]
+DEST[31:0] := Temp0[31:0]
+DEST[63:32] := Temp1[31:0]
+DEST[95:64] := Temp2[31:0]
+DEST[127:96] := Temp3[31:0]
+DEST[MAXVL-1:128] := 0
+
+

PMULLD (128-bit Legacy SSE Version) + ¶ +

+
Temp0[63:0] := DEST[31:0] * SRC[31:0]
+Temp1[63:0] := DEST[63:32] * SRC[63:32]
+Temp2[63:0] := DEST[95:64] * SRC[95:64]
+Temp3[63:0] := DEST[127:96] * SRC[127:96]
+DEST[31:0] := Temp0[31:0]
+DEST[63:32] := Temp1[31:0]
+DEST[95:64] := Temp2[31:0]
+DEST[127:96] := Temp3[31:0]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPMULLD __m512i _mm512_mullo_epi32(__m512i a, __m512i b);
+
+
VPMULLD __m512i _mm512_mask_mullo_epi32(__m512i s, __mmask16 k, __m512i a, __m512i b);
+
+
VPMULLD __m512i _mm512_maskz_mullo_epi32( __mmask16 k, __m512i a, __m512i b);
+
+
VPMULLD __m256i _mm256_mask_mullo_epi32(__m256i s, __mmask8 k, __m256i a, __m256i b);
+
+
VPMULLD __m256i _mm256_maskz_mullo_epi32( __mmask8 k, __m256i a, __m256i b);
+
+
VPMULLD __m128i _mm_mask_mullo_epi32(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPMULLD __m128i _mm_maskz_mullo_epi32( __mmask8 k, __m128i a, __m128i b);
+
+
VPMULLD __m256i _mm256_mullo_epi32(__m256i a, __m256i b);
+
+
PMULLD __m128i _mm_mullo_epi32(__m128i a, __m128i b);
+
+
VPMULLQ __m512i _mm512_mullo_epi64(__m512i a, __m512i b);
+
+
VPMULLQ __m512i _mm512_mask_mullo_epi64(__m512i s, __mmask8 k, __m512i a, __m512i b);
+
+
VPMULLQ __m512i _mm512_maskz_mullo_epi64( __mmask8 k, __m512i a, __m512i b);
+
+
VPMULLQ __m256i _mm256_mullo_epi64(__m256i a, __m256i b);
+
+
VPMULLQ __m256i _mm256_mask_mullo_epi64(__m256i s, __mmask8 k, __m256i a, __m256i b);
+
+
VPMULLQ __m256i _mm256_maskz_mullo_epi64( __mmask8 k, __m256i a, __m256i b);
+
+
VPMULLQ __m128i _mm_mullo_epi64(__m128i a, __m128i b);
+
+
VPMULLQ __m128i _mm_mask_mullo_epi64(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPMULLQ __m128i _mm_maskz_mullo_epi64( __mmask8 k, __m128i a, __m128i b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/pmullw.html b/x86/pmullw.html new file mode 100644 index 0000000..5f455e5 --- /dev/null +++ b/x86/pmullw.html @@ -0,0 +1,339 @@ + +PMULLW + — Multiply Packed Signed Integers and Store Low Result

PMULLW + — Multiply Packed Signed Integers and Store Low Result

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F D5 /r1 PMULLW mm, mm/m64AV/VMMXMultiply the packed signed word integers in mm1 register and mm2/m64, and store the low 16 bits of the results in mm1.
66 0F D5 /r PMULLW xmm1, xmm2/m128AV/VSSE2Multiply the packed signed word integers in xmm1 and xmm2/m128, and store the low 16 bits of the results in xmm1.
VEX.128.66.0F.WIG D5 /r VPMULLW xmm1, xmm2, xmm3/m128BV/VAVXMultiply the packed signed word integers in xmm2 and xmm3/m128, and store the low 16 bits of the results in xmm1.
VEX.256.66.0F.WIG D5 /r VPMULLW ymm1, ymm2, ymm3/m256BV/VAVX2Multiply the packed signed word integers in ymm2 and ymm3/m256, and store the low 16 bits of the results in ymm1.
EVEX.128.66.0F.WIG D5 /r VPMULLW xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWMultiply the packed signed word integers in xmm2 and xmm3/m128, and store the low 16 bits of the results in xmm1 under writemask k1.
EVEX.256.66.0F.WIG D5 /r VPMULLW ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWMultiply the packed signed word integers in ymm2 and ymm3/m256, and store the low 16 bits of the results in ymm1 under writemask k1.
EVEX.512.66.0F.WIG D5 /r VPMULLW zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWMultiply the packed signed word integers in zmm2 and zmm3/m512, and store the low 16 bits of the results in zmm1 under writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD signed multiply of the packed signed word integers in the destination operand (first operand) and the source operand (second operand), and stores the low 16 bits of each intermediate 32-bit result in the destination operand. (Figure 4-12 shows this operation when using 64-bit operands.)

+
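A quick sketch of the 128-bit form (SSE2 assumed; the function name is illustrative). PMULLW pairs naturally with PMULHW when the full 32-bit products are needed:

#include <immintrin.h>

/* Low 16 bits of each signed word product (PMULLW). */
__m128i mullo_words(__m128i a, __m128i b)
{
    return _mm_mullo_epi16(a, b);    /* PMULLW xmm, xmm */
}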

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

Legacy SSE version 64-bit operand: The source operand can be an MMX technology register or a 64-bit memory location. The destination operand is an MMX technology register.

+

128-bit Legacy SSE version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the destination YMM register are zeroed. VEX.L must be 0, otherwise the instruction will #UD.

+

VEX.256 encoded version: The second source operand can be an YMM register or a 256-bit memory location. The first source and destination operands are YMM registers.

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register. The second source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location. The destination operand is conditionally updated based on writemask k1.

+
[Figure: each word pair Xi, Yi is multiplied to a 32-bit product Zi = Xi ∗ Yi; the low word Zi[15:0] of each product is written to DEST.]
Figure 4-13. PMULLW Instruction Operation Using 64-bit Operands
+

Operation + ¶ +

+

PMULLW (With 64-bit Operands) + ¶ +

+
TEMP0[31:0] := DEST[15:0] ∗ SRC[15:0]; (* Signed multiplication *)
+TEMP1[31:0] := DEST[31:16] ∗ SRC[31:16];
+TEMP2[31:0] := DEST[47:32] ∗ SRC[47:32];
+TEMP3[31:0] := DEST[63:48] ∗ SRC[63:48];
+DEST[15:0] := TEMP0[15:0];
+DEST[31:16] := TEMP1[15:0];
+DEST[47:32] := TEMP2[15:0];
+DEST[63:48] := TEMP3[15:0];
+
+

PMULLW (With 128-bit Operands) + ¶ +

+
    TEMP0[31:0] := DEST[15:0] ∗ SRC[15:0]; (* Signed multiplication *)
+    TEMP1[31:0] := DEST[31:16] ∗ SRC[31:16];
+    TEMP2[31:0] := DEST[47:32] ∗ SRC[47:32];
+    TEMP3[31:0] := DEST[63:48] ∗ SRC[63:48];
+    TEMP4[31:0] := DEST[79:64] ∗ SRC[79:64];
+    TEMP5[31:0] := DEST[95:80] ∗ SRC[95:80];
+    TEMP6[31:0] := DEST[111:96] ∗ SRC[111:96];
+    TEMP7[31:0] := DEST[127:112] ∗ SRC[127:112];
+    DEST[15:0] := TEMP0[15:0];
+    DEST[31:16] := TEMP1[15:0];
+    DEST[47:32] := TEMP2[15:0];
+    DEST[63:48] := TEMP3[15:0];
+    DEST[79:64] := TEMP4[15:0];
+    DEST[95:80] := TEMP5[15:0];
+    DEST[111:96] := TEMP6[15:0];
+    DEST[127:112] := TEMP7[15:0];
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPMULLW (VEX.128 Encoded Version) + ¶ +

+
Temp0[31:0] := SRC1[15:0] * SRC2[15:0]
+Temp1[31:0] := SRC1[31:16] * SRC2[31:16]
+Temp2[31:0] := SRC1[47:32] * SRC2[47:32]
+Temp3[31:0] := SRC1[63:48] * SRC2[63:48]
+Temp4[31:0] := SRC1[79:64] * SRC2[79:64]
+Temp5[31:0] := SRC1[95:80] * SRC2[95:80]
+Temp6[31:0] := SRC1[111:96] * SRC2[111:96]
+Temp7[31:0] := SRC1[127:112] * SRC2[127:112]
+DEST[15:0] := Temp0[15:0]
+DEST[31:16] := Temp1[15:0]
+DEST[47:32] := Temp2[15:0]
+DEST[63:48] := Temp3[15:0]
+DEST[79:64] := Temp4[15:0]
+DEST[95:80] := Temp5[15:0]
+DEST[111:96] := Temp6[15:0]
+DEST[127:112] := Temp7[15:0]
+DEST[MAXVL-1:128] := 0
+
+

VPMULLW (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN
+            temp[31:0] := SRC1[i+15:i] * SRC2[i+15:i]
+            DEST[i+15:i] := temp[15:0]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPMULLW __m512i _mm512_mullo_epi16(__m512i a, __m512i b);
+
+
VPMULLW __m512i _mm512_mask_mullo_epi16(__m512i s, __mmask32 k, __m512i a, __m512i b);
+
+
VPMULLW __m512i _mm512_maskz_mullo_epi16( __mmask32 k, __m512i a, __m512i b);
+
+
VPMULLW __m256i _mm256_mask_mullo_epi16(__m256i s, __mmask16 k, __m256i a, __m256i b);
+
+
VPMULLW __m256i _mm256_maskz_mullo_epi16( __mmask16 k, __m256i a, __m256i b);
+
+
VPMULLW __m128i _mm_mask_mullo_epi16(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPMULLW __m128i _mm_maskz_mullo_epi16( __mmask8 k, __m128i a, __m128i b);
+
+
PMULLW __m64 _mm_mullo_pi16(__m64 m1, __m64 m2)
+
+
(V)PMULLW __m128i _mm_mullo_epi16 ( __m128i a, __m128i b)
+
+
VPMULLW __m256i _mm256_mullo_epi16 ( __m256i a, __m256i b);
+
+
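A short, informal C sketch (not part of the manual text; it assumes an SSE2-capable compiler and the standard <emmintrin.h> header) showing the low-word truncation performed by PMULLW/_mm_mullo_epi16, and how pairing it with _mm_mulhi_epi16 recovers the full 32-bit product:

#include <emmintrin.h>   /* SSE2: _mm_mullo_epi16, _mm_mulhi_epi16, _mm_extract_epi16 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 300 * 300 = 90000 = 0x00015F90, which does not fit in a 16-bit word. */
    __m128i a  = _mm_set1_epi16(300);
    __m128i b  = _mm_set1_epi16(300);
    __m128i lo = _mm_mullo_epi16(a, b);   /* PMULLW: low 16 bits of each product  */
    __m128i hi = _mm_mulhi_epi16(a, b);   /* PMULHW: high 16 bits of each product */

    int16_t lo0 = (int16_t)_mm_extract_epi16(lo, 0);
    int16_t hi0 = (int16_t)_mm_extract_epi16(hi, 0);

    printf("low word           = 0x%04X (%d)\n", (unsigned)(uint16_t)lo0, lo0);
    printf("full 32-bit result = %ld\n", (long)(((int32_t)hi0 << 16) | (uint16_t)lo0));
    return 0;
}

On x86-64, SSE2 is part of the baseline instruction set, so this example needs no special compiler flags.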

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/pmuludq.html b/x86/pmuludq.html new file mode 100644 index 0000000..4318d96 --- /dev/null +++ b/x86/pmuludq.html @@ -0,0 +1,192 @@ + +PMULUDQ + — Multiply Packed Unsigned Doubleword Integers

PMULUDQ + — Multiply Packed Unsigned Doubleword Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F F4 /r1 PMULUDQ mm1, mm2/m64AV/VSSE2Multiply unsigned doubleword integer in mm1 by unsigned doubleword integer in mm2/m64, and store the quadword result in mm1.
66 0F F4 /r PMULUDQ xmm1, xmm2/m128AV/VSSE2Multiply packed unsigned doubleword integers in xmm1 by packed unsigned doubleword integers in xmm2/m128, and store the quadword results in xmm1.
VEX.128.66.0F.WIG F4 /r VPMULUDQ xmm1, xmm2, xmm3/m128BV/VAVXMultiply packed unsigned doubleword integers in xmm2 by packed unsigned doubleword integers in xmm3/m128, and store the quadword results in xmm1.
VEX.256.66.0F.WIG F4 /r VPMULUDQ ymm1, ymm2, ymm3/m256BV/VAVX2Multiply packed unsigned doubleword integers in ymm2 by packed unsigned doubleword integers in ymm3/m256, and store the quadword results in ymm1.
EVEX.128.66.0F.W1 F4 /r VPMULUDQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512FMultiply packed unsigned doubleword integers in xmm2 by packed unsigned doubleword integers in xmm3/m128/m64bcst, and store the quadword results in xmm1 under writemask k1.
EVEX.256.66.0F.W1 F4 /r VPMULUDQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FMultiply packed unsigned doubleword integers in ymm2 by packed unsigned doubleword integers in ymm3/m256/m64bcst, and store the quadword results in ymm1 under writemask k1.
EVEX.512.66.0F.W1 F4 /r VPMULUDQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstCV/VAVX512FMultiply packed unsigned doubleword integers in zmm2 by packed unsigned doubleword integers in zmm3/m512/m64bcst, and store the quadword results in zmm1 under writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Multiplies the first operand (destination operand) by the second operand (source operand) and stores the result in the destination operand.

+

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

Legacy SSE version 64-bit operand: The source operand can be an unsigned doubleword integer stored in the low doubleword of an MMX technology register or a 64-bit memory location. The destination operand is an unsigned doubleword integer stored in the low doubleword of an MMX technology register. The result is an unsigned quadword integer stored in the destination MMX technology register. When a quadword result is too large to be represented in 64 bits (overflow), the result is wrapped around and the low 64 bits are written to the destination element (that is, the carry is ignored).

+

For 64-bit memory operands, 64 bits are fetched from memory, but only the low doubleword is used in the computation.

+

128-bit Legacy SSE version: The second source operand is two packed unsigned doubleword integers stored in the first (low) and third doublewords of an XMM register or a 128-bit memory location. For 128-bit memory operands, 128 bits are fetched from memory, but only the first and third doublewords are used in the computation. The first source operand is two packed unsigned doubleword integers stored in the first and third doublewords of an XMM register. The destination contains two packed unsigned quadword integers stored in an XMM register. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The second source operand is two packed unsigned doubleword integers stored in the first (low) and third doublewords of an XMM register or a 128-bit memory location. For 128-bit memory operands, 128 bits are fetched from memory, but only the first and third doublewords are used in the computation. The first source operand is two packed unsigned doubleword integers stored in the first and third doublewords of an XMM register. The destination contains two packed unsigned quadword integers stored in an XMM register. Bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

VEX.256 encoded version: The second source operand is four packed unsigned doubleword integers stored in the first (low), third, fifth, and seventh doublewords of a YMM register or a 256-bit memory location. For 256-bit memory operands, 256 bits are fetched from memory, but only the first, third, fifth, and seventh doublewords are used in the computation. The first source operand is four packed unsigned doubleword integers stored in the first, third, fifth, and seventh doublewords of an YMM register. The destination contains four packed unsigned quadword integers stored in an YMM register.

+

EVEX encoded version: The input unsigned doubleword integers are taken from the even-numbered elements of the source operands. The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination is a ZMM/YMM/XMM register, updated according to the writemask at 64-bit granularity.

+

Operation + ¶ +

+

PMULUDQ (With 64-Bit Operands) + ¶ +

+
DEST[63:0] := DEST[31:0] ∗ SRC[31:0];
+
+

PMULUDQ (With 128-Bit Operands) + ¶ +

+
DEST[63:0] := DEST[31:0] ∗ SRC[31:0];
+DEST[127:64] := DEST[95:64] ∗ SRC[95:64];
+
+

VPMULUDQ (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[31:0] * SRC2[31:0]
+DEST[127:64] := SRC1[95:64] * SRC2[95:64]
+DEST[MAXVL-1:128] := 0
+
+

VPMULUDQ (VEX.256 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[31:0] * SRC2[31:0]
+DEST[127:64] := SRC1[95:64] * SRC2[95:64]
+DEST[191:128] := SRC1[159:128] * SRC2[159:128]
+DEST[255:192] := SRC1[223:192] * SRC2[223:192]
+DEST[MAXVL-1:256] := 0
+
+

VPMULUDQ (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[i+63:i] := ZeroExtend64( SRC1[i+31:i]) * ZeroExtend64( SRC2[31:0] )
+                ELSE DEST[i+63:i] := ZeroExtend64( SRC1[i+31:i]) * ZeroExtend64( SRC2[i+31:i] )
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPMULUDQ __m512i _mm512_mul_epu32(__m512i a, __m512i b);
+
+
VPMULUDQ __m512i _mm512_mask_mul_epu32(__m512i s, __mmask8 k, __m512i a, __m512i b);
+
+
VPMULUDQ __m512i _mm512_maskz_mul_epu32( __mmask8 k, __m512i a, __m512i b);
+
+
VPMULUDQ __m256i _mm256_mask_mul_epu32(__m256i s, __mmask8 k, __m256i a, __m256i b);
+
+
VPMULUDQ __m256i _mm256_maskz_mul_epu32( __mmask8 k, __m256i a, __m256i b);
+
+
VPMULUDQ __m128i _mm_mask_mul_epu32(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPMULUDQ __m128i _mm_maskz_mul_epu32( __mmask8 k, __m128i a, __m128i b);
+
+
PMULUDQ __m64 _mm_mul_su32 (__m64 a, __m64 b)
+
+
(V)PMULUDQ __m128i _mm_mul_epu32 ( __m128i a, __m128i b)
+
+
VPMULUDQ __m256i _mm256_mul_epu32( __m256i a, __m256i b);
+
+
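An informal C sketch (not part of the manual; it assumes an SSE2-capable compiler and <emmintrin.h>) illustrating that only the even-numbered (first and third) doublewords of each source participate, each pair producing a full 64-bit unsigned product:

#include <emmintrin.h>   /* SSE2: _mm_mul_epu32 (PMULUDQ) */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Doublewords, low to high: a = {1, 2, 3, 4}; elements 1 and 3 are ignored. */
    __m128i a = _mm_set_epi32(4, 3, 2, 1);
    __m128i b = _mm_set1_epi32((int)0x80000000u);   /* every doubleword = 2^31 */

    __m128i prod = _mm_mul_epu32(a, b);             /* PMULUDQ */

    uint64_t q[2];
    _mm_storeu_si128((__m128i *)q, prod);

    printf("q0 = %llu\n", (unsigned long long)q[0]);  /* 1 * 2^31 = 2147483648 */
    printf("q1 = %llu\n", (unsigned long long)q[1]);  /* 3 * 2^31 = 6442450944 */
    return 0;
}

Because a 32x32-bit unsigned multiply always fits in 64 bits, this operation is a common building block for wide-integer arithmetic.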

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/pop.html b/x86/pop.html new file mode 100644 index 0000000..01c54e2 --- /dev/null +++ b/x86/pop.html @@ -0,0 +1,378 @@ + +POP + — Pop a Value From the Stack

POP + — Pop a Value From the Stack

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
8F /0POP r/m16MValidValidPop top of stack into m16; increment stack pointer.
8F /0POP r/m32MN.E.ValidPop top of stack into m32; increment stack pointer.
8F /0POP r/m64MValidN.E.Pop top of stack into m64; increment stack pointer. Cannot encode 32-bit operand size.
58+ rwPOP r16OValidValidPop top of stack into r16; increment stack pointer.
58+ rdPOP r32ON.E.ValidPop top of stack into r32; increment stack pointer.
58+ rdPOP r64OValidN.E.Pop top of stack into r64; increment stack pointer. Cannot encode 32-bit operand size.
1FPOP DSZOInvalidValidPop top of stack into DS; increment stack pointer.
07POP ESZOInvalidValidPop top of stack into ES; increment stack pointer.
17POP SSZOInvalidValidPop top of stack into SS; increment stack pointer.
0F A1POP FSZOValidValidPop top of stack into FS; increment stack pointer by 16 bits.
0F A1POP FSZON.E.ValidPop top of stack into FS; increment stack pointer by 32 bits.
0F A1POP FSZOValidN.E.Pop top of stack into FS; increment stack pointer by 64 bits.
0F A9POP GSZOValidValidPop top of stack into GS; increment stack pointer by 16 bits.
0F A9POP GSZON.E.ValidPop top of stack into GS; increment stack pointer by 32 bits.
0F A9POP GSZOValidN.E.Pop top of stack into GS; increment stack pointer by 64 bits.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (w)N/AN/AN/A
Oopcode + rd (w)N/AN/AN/A
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Loads the value from the top of the stack to the location specified with the destination operand (or explicit opcode) and then increments the stack pointer. The destination operand can be a general-purpose register, memory location, or segment register.

+

Address and operand sizes are determined and used as follows:

+
    +
  • Address size. The D flag in the current code-segment descriptor determines the default address size; it may be overridden by an instruction prefix (67H).
+

The address size is used only when writing to a destination operand in memory.

+
    +
  • Operand size. The D flag in the current code-segment descriptor determines the default operand size; it may be overridden by instruction prefixes (66H or REX.W).
+

The operand size (16, 32, or 64 bits) determines the amount by which the stack pointer is incremented (2, 4 or 8).

+
    +
  • Stack-address size. Outside of 64-bit mode, the B flag in the current stack-segment descriptor determines the size of the stack pointer (16 or 32 bits); in 64-bit mode, the size of the stack pointer is always 64 bits.
+

The stack-address size determines the width of the stack pointer when reading from the stack in memory and when incrementing the stack pointer. (As stated above, the amount by which the stack pointer is incremented is determined by the operand size.)

+

If the destination operand is one of the segment registers DS, ES, FS, GS, or SS, the value loaded into the register must be a valid segment selector. In protected mode, popping a segment selector into a segment register automatically causes the descriptor information associated with that segment selector to be loaded into the hidden (shadow) part of the segment register and causes the selector and the descriptor information to be validated (see the “Operation” section below).

+

A NULL value (0000-0003) may be popped into the DS, ES, FS, or GS register without causing a general protection fault. However, any subsequent attempt to reference a segment whose corresponding segment register is loaded with a NULL value causes a general protection exception (#GP). In this situation, no memory reference occurs and the saved value of the segment register is NULL.

+

The POP instruction cannot pop a value into the CS register. To load the CS register from the stack, use the RET instruction.

+

If the ESP register is used as a base register for addressing a destination operand in memory, the POP instruction computes the effective address of the operand after it increments the ESP register. For the case of a 16-bit stack where ESP wraps to 0H as a result of the POP instruction, the resulting location of the memory write is processor-family-specific.

+

The POP ESP instruction increments the stack pointer (ESP) before data at the old top of stack is written into the destination.

+

Loading the SS register with a POP instruction suppresses or inhibits some debug exceptions and inhibits interrupts on the following instruction boundary. (The inhibition ends after delivery of an exception or the execution of the next instruction.) This behavior allows a stack pointer to be loaded into the ESP register with the next instruction (POP ESP) before an event can be delivered. See Section 6.8.3, “Masking Exceptions and Interrupts When Switching Stacks,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A. Intel recommends that software use the LSS instruction to load the SS register and ESP together.

+

In 64-bit mode, using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). When in 64-bit mode, POPs using 32-bit operands are not encodable and POPs to DS, ES, SS are not valid. See the summary chart at the beginning of this section for encoding data and limits.

+

Operation + ¶ +

+
IF StackAddrSize = 32
+    THEN
+        IF OperandSize = 32
+                THEN
+                    DEST := SS:ESP; (* Copy a doubleword *)
+                    ESP := ESP + 4;
+                ELSE (* OperandSize = 16*)
+                    DEST := SS:ESP; (* Copy a word *)
+                    ESP := ESP + 2;
+        FI;
+    ELSE IF StackAddrSize = 64
+        THEN
+                IF OperandSize = 64
+                    THEN
+                        DEST := SS:RSP; (* Copy quadword *)
+                        RSP := RSP + 8;
+                    ELSE (* OperandSize = 16*)
+                        DEST := SS:RSP; (* Copy a word *)
+                        RSP := RSP + 2;
+                FI;
+    ELSE (* StackAddrSize = 16 *)
+                IF OperandSize = 16
+                    THEN
+                        DEST := SS:SP; (* Copy a word *)
+                        SP := SP + 2;
+                    ELSE (* OperandSize = 32 *)
+                        DEST := SS:SP; (* Copy a doubleword *)
+                        SP := SP + 4;
+                FI;
+FI;
+Loading a segment register while in protected mode results in special actions, as described in the following listing.
+These checks are performed on the segment selector and the segment descriptor it points to.
+64-BIT_MODE
+IF FS, or GS is loaded with non-NULL selector;
+    THEN
+        IF segment selector index is outside descriptor table limits
+                OR segment is not a data or readable code segment
+                OR ((segment is a data or nonconforming code segment)
+                    AND ((RPL > DPL) or (CPL > DPL)))
+                        THEN #GP(selector);
+                IF segment not marked present
+                    THEN #NP(selector);
+        ELSE
+                SegmentRegister := segment selector;
+                SegmentRegister := segment descriptor;
+        FI;
+FI;
+IF FS, or GS is loaded with a NULL selector;
+        THEN
+                SegmentRegister := segment selector;
+                SegmentRegister := segment descriptor;
+FI;
+PROTECTED MODE OR COMPATIBILITY MODE;
+IF SS is loaded;
+    THEN
+        IF segment selector is NULL
+                THEN #GP(0);
+        FI;
+        IF segment selector index is outside descriptor table limits
+                or segment selector's RPL ≠ CPL
+                or segment is not a writable data segment
+                or DPL ≠ CPL
+                    THEN #GP(selector);
+        FI;
+        IF segment not marked present
+                THEN #SS(selector);
+                ELSE
+                    SS := segment selector;
+                    SS := segment descriptor;
+        FI;
+FI;
+IF DS, ES, FS, or GS is loaded with non-NULL selector;
+    THEN
+        IF segment selector index is outside descriptor table limits
+                or segment is not a data or readable code segment
+                or ((segment is a data or nonconforming code segment)
+                and ((RPL > DPL) or (CPL > DPL)))
+                    THEN #GP(selector);
+        FI;
+        IF segment not marked present
+                THEN #NP(selector);
+                ELSE
+                    SegmentRegister := segment selector;
+                    SegmentRegister := segment descriptor;
+            FI;
+FI;
+IF DS, ES, FS, or GS is loaded with a NULL selector
+    THEN
+        SegmentRegister := segment selector;
+        SegmentRegister := segment descriptor;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If attempt is made to load SS register with NULL segment selector.
If the destination operand is in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#GP(selector)If segment selector index is outside descriptor table limits.
If the SS register is being loaded and the segment selector's RPL and the segment descriptor’s DPL are not equal to the CPL.
If the SS register is being loaded and the segment pointed to is a non-writable data segment.
If the DS, ES, FS, or GS register is being loaded and the segment pointed to is not a data or readable code segment.
If the DS, ES, FS, or GS register is being loaded and the segment pointed to is a data or nonconforming code segment, but both the RPL and the CPL are greater than the DPL.
#SS(0)If the current top of stack is not within the stack segment.
If a memory operand effective address is outside the SS segment limit.
#SS(selector)If the SS register is being loaded and the segment pointed to is marked not present.
#NPIf the DS, ES, FS, or GS register is being loaded and the segment pointed to is marked not present.
#PF(fault-code)If a page fault occurs.
#AC(0)If an unaligned memory reference is made while the current privilege level is 3 and alignment checking is enabled.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If an unaligned memory reference is made while alignment checking is enabled.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same as for protected mode exceptions.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the memory address is in a non-canonical form.
#SS(0)If the stack address is in a non-canonical form.
#GP(selector)If the descriptor is outside the descriptor table limit.
If the FS or GS register is being loaded and the segment pointed to is not a data or readable code segment.
If the FS or GS register is being loaded and the segment pointed to is a data or nonconforming code segment, but both the RPL and the CPL are greater than the DPL.
#AC(0)If an unaligned memory reference is made while alignment checking is enabled.
#PF(fault-code)If a page fault occurs.
#NPIf the FS or GS register is being loaded and the segment pointed to is marked not present.
#UDIf the LOCK prefix is used.
If the DS, ES, or SS register is being loaded.
diff --git a/x86/popa.popad.html b/x86/popa.popad.html new file mode 100644 index 0000000..316394d --- /dev/null +++ b/x86/popa.popad.html @@ -0,0 +1,140 @@ + +POPA/POPAD + — Pop All General-Purpose Registers

POPA/POPAD + — Pop All General-Purpose Registers

+ + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
61POPAZOInvalidValidPop DI, SI, BP, BX, DX, CX, and AX.
61POPADZOInvalidValidPop EDI, ESI, EBP, EBX, EDX, ECX, and EAX.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Pops doublewords (POPAD) or words (POPA) from the stack into the general-purpose registers. The registers are loaded in the following order: EDI, ESI, EBP, EBX, EDX, ECX, and EAX (if the operand-size attribute is 32) and DI, SI, BP, BX, DX, CX, and AX (if the operand-size attribute is 16). (These instructions reverse the operation of the PUSHA/PUSHAD instructions.) The value on the stack for the ESP or SP register is ignored. Instead, the ESP or SP register is incremented after each register is loaded.

+

The POPA (pop all) and POPAD (pop all double) mnemonics reference the same opcode. The POPA instruction is intended for use when the operand-size attribute is 16 and the POPAD instruction for when the operand-size attribute is 32. Some assemblers may force the operand size to 16 when POPA is used and to 32 when POPAD is used (using the operand-size override prefix [66H] if necessary). Others may treat these mnemonics as synonyms (POPA/POPAD) and use the current setting of the operand-size attribute to determine the size of values to be popped from the stack, regardless of the mnemonic used. (The D flag in the current code segment’s segment descriptor determines the operand-size attribute.)

+

This instruction executes as described in non-64-bit modes. It is not valid in 64-bit mode.

+

Operation + ¶ +

+
IF 64-Bit Mode
+    THEN
+        #UD;
+ELSE
+    IF OperandSize = 32 (* Instruction = POPAD *)
+    THEN
+        EDI := Pop();
+        ESI := Pop();
+        EBP := Pop();
+        Increment ESP by 4; (* Skip next 4 bytes of stack *)
+        EBX := Pop();
+        EDX := Pop();
+        ECX := Pop();
+        EAX := Pop();
+    ELSE (* OperandSize = 16, instruction = POPA *)
+        DI := Pop();
+        SI := Pop();
+        BP := Pop();
+        Increment ESP by 2; (* Skip next 2 bytes of stack *)
+        BX := Pop();
+        DX := Pop();
+        CX := Pop();
+        AX := Pop();
+    FI;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#SS(0)If the starting or ending stack address is not within the stack segment.
#PF(fault-code)If a page fault occurs.
#AC(0)If an unaligned memory reference is made while the current privilege level is 3 and alignment checking is enabled.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + +
#SSIf the starting or ending stack address is not within the stack segment.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#SS(0)If the starting or ending stack address is not within the stack segment.
#PF(fault-code)If a page fault occurs.
#AC(0)If an unaligned memory reference is made while alignment checking is enabled.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same as for protected mode exceptions.

+

64-Bit Mode Exceptions + ¶ +

+ + + +
#UDIf in 64-bit mode.
diff --git a/x86/popcnt.html b/x86/popcnt.html new file mode 100644 index 0000000..5cd30ef --- /dev/null +++ b/x86/popcnt.html @@ -0,0 +1,161 @@ + +POPCNT + — Return the Count of Number of Bits Set to 1

POPCNT + — Return the Count of Number of Bits Set to 1

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
F3 0F B8 /rPOPCNT r16, r/m16RMValidValidPOPCNT on r/m16
F3 0F B8 /rPOPCNT r32, r/m32RMValidValidPOPCNT on r/m32
F3 REX.W 0F B8 /rPOPCNT r64, r/m64RMValidN.E.POPCNT on r/m64
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction calculates the number of bits set to 1 in the second operand (source) and returns the count in the first operand (a destination register).

+

Operation + ¶ +

+
Count = 0;
+For (i=0; i < OperandSize; i++)
+{ IF (SRC[ i] = 1) // i’th bit
+    THEN Count++; FI;
+}
+DEST := Count;
+
+

Flags Affected + ¶ +

+

OF, SF, AF, CF, and PF are cleared. ZF is set if SRC = 0; otherwise ZF is cleared.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
POPCNT int _mm_popcnt_u32(unsigned int a);
+
+
POPCNT int64_t _mm_popcnt_u64(unsigned __int64 a);
+
+
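An informal C sketch of the intrinsic form (not part of the manual; on GCC/Clang it typically requires building with -mpopcnt or -msse4.2, and the CPUID feature bit listed below should be checked before use on older processors):

#include <nmmintrin.h>   /* SSE4.2 header exposing _mm_popcnt_u32 */
#include <stdio.h>

int main(void)
{
    unsigned int x = 0xF0F0u;                         /* 8 bits set */
    printf("popcnt(0x%X) = %d\n", x, _mm_popcnt_u32(x));
    printf("popcnt(0)    = %d\n", _mm_popcnt_u32(0)); /* 0; the instruction also sets ZF */
    return 0;
}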

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS or GS segments.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code) For a page fault.
#AC(0)If an unaligned memory reference is made while the current privilege level is 3 and alignment checking is enabled.
#UDIf CPUID.01H:ECX.POPCNT [Bit 23] = 0.
If LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + +
#GP(0)If any part of the operand lies outside of the effective address space from 0 to 0FFFFH.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#UDIf CPUID.01H:ECX.POPCNT [Bit 23] = 0.
If LOCK prefix is used.
+

Virtual 8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If any part of the operand lies outside of the effective address space from 0 to 0FFFFH.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code) For a page fault.
#AC(0)If an unaligned memory reference is made while alignment checking is enabled.
#UDIf CPUID.01H:ECX.POPCNT [Bit 23] = 0.
If LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in Protected Mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If the memory address is in a non-canonical form.
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#PF(fault-code) For a page fault.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf CPUID.01H:ECX.POPCNT [Bit 23] = 0.
If LOCK prefix is used.
diff --git a/x86/popf.popfd.popfq.html b/x86/popf.popfd.popfq.html new file mode 100644 index 0000000..4d293ba --- /dev/null +++ b/x86/popf.popfd.popfq.html @@ -0,0 +1,649 @@ + +POPF/POPFD/POPFQ + — Pop Stack Into EFLAGS Register

POPF/POPFD/POPFQ + — Pop Stack Into EFLAGS Register

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
9DPOPFZOValidValidPop top of stack into lower 16 bits of EFLAGS.
9DPOPFDZON.E.ValidPop top of stack into EFLAGS.
9DPOPFQZOValidN.E.Pop top of stack and zero-extend into RFLAGS.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Pops a doubleword (POPFD) from the top of the stack (if the current operand-size attribute is 32) and stores the value in the EFLAGS register, or pops a word from the top of the stack (if the operand-size attribute is 16) and stores it in the lower 16 bits of the EFLAGS register (that is, the FLAGS register). These instructions reverse the operation of the PUSHF/PUSHFD/PUSHFQ instructions.

+

The POPF (pop flags) and POPFD (pop flags double) mnemonics reference the same opcode. The POPF instruction is intended for use when the operand-size attribute is 16; the POPFD instruction is intended for use when the operand-size attribute is 32. Some assemblers may force the operand size to 16 for POPF and to 32 for POPFD. Others may treat the mnemonics as synonyms (POPF/POPFD) and use the setting of the operand-size attribute to determine the size of values to pop from the stack.

+

The effect of POPF/POPFD on the EFLAGS register changes, depending on the mode of operation. See Table 4-16 and the key below for details.

+

When operating in protected, compatibility, or 64-bit mode at privilege level 0 (or in real-address mode, the equivalent to privilege level 0), all non-reserved flags in the EFLAGS register except RF (see note 1 below), VIP, VIF, and VM may be modified. VIP, VIF, and VM remain unaffected.

+

When operating in protected, compatibility, or 64-bit mode with a privilege level greater than 0, but less than or equal to IOPL, all flags can be modified except the IOPL field and RF, IF, VIP, VIF, and VM; these remain unaffected. The AC and ID flags can only be modified if the operand-size attribute is 32. The interrupt flag (IF) is altered only when executing at a level at least as privileged as the IOPL. If a POPF/POPFD instruction is executed with insufficient privilege, an exception does not occur but privileged bits do not change.

+

When operating in virtual-8086 mode (EFLAGS.VM = 1) without the virtual-8086 mode extensions (CR4.VME = 0), the POPF/POPFD instructions can be used only if IOPL = 3; otherwise, a general-protection exception (#GP) occurs. If the virtual-8086 mode extensions are enabled (CR4.VME = 1), POPF (but not POPFD) can be executed in virtual-8086 mode with IOPL < 3.

+

(The protected-mode virtual-interrupt feature — enabled by setting CR4.PVI — affects the CLI and STI instructions in the same manner as the virtual-8086 mode extensions. POPF, however, is not affected by CR4.PVI.)

+

In 64-bit mode, the mnemonic assigned is POPFQ (note that the 32-bit operand is not encodable). POPFQ pops 64 bits from the stack. Reserved bits of RFLAGS (including the upper 32 bits of RFLAGS) are not affected.

+

See Chapter 3 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for more information about the EFLAGS registers.

+
+

1. RF is always zero after the execution of POPF. This is because POPF, like all instructions, clears RF as it begins to execute.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Mode | Operand Size | CPL | IOPL | Flags | Notes
Flag bit: 21 20 19 18 17 16 14 13:12 11 10 9 8 7 6 4 2 0
Flag: ID VIP VIF AC VM RF NT IOPL OF DF IF TF SF ZF AF PF CF
Real-Address Mode (CR0.PE = 0) | 16 | 0 | 0-3 | N N N N N 0 S S S S S S S S S S S
Real-Address Mode (CR0.PE = 0) | 32 | 0 | 0-3 | S N N S N 0 S S S S S S S S S S S
Protected, Compatibility, and 64-Bit Modes (CR0.PE = 1, EFLAGS.VM = 0) | 16 | 0 | 0-3 | N N N N N 0 S S S S S S S S S S S
Protected, Compatibility, and 64-Bit Modes | 16 | 1-3 | < CPL | N N N N N 0 S N S S N S S S S S S
Protected, Compatibility, and 64-Bit Modes | 16 | 1-3 | ≥ CPL | N N N N N 0 S N S S S S S S S S S
Protected, Compatibility, and 64-Bit Modes | 32, 64 | 0 | 0-3 | S N N S N 0 S S S S S S S S S S S
Protected, Compatibility, and 64-Bit Modes | 32, 64 | 1-3 | < CPL | S N N S N 0 S N S S N S S S S S S
Protected, Compatibility, and 64-Bit Modes | 32, 64 | 1-3 | ≥ CPL | S N N S N 0 S N S S S S S S S S S
Virtual-8086 (CR0.PE = 1, EFLAGS.VM = 1, CR4.VME = 0) | 16 | 3 | 0-2 | X X X X X X X X X X X X X X X X X | 1
Virtual-8086 (CR0.PE = 1, EFLAGS.VM = 1, CR4.VME = 0) | 16 | 3 | 3 | N N N N N 0 S N S S S S S S S S S
Virtual-8086 (CR0.PE = 1, EFLAGS.VM = 1, CR4.VME = 0) | 32 | 3 | 0-2 | X X X X X X X X X X X X X X X X X | 1
Virtual-8086 (CR0.PE = 1, EFLAGS.VM = 1, CR4.VME = 0) | 32 | 3 | 3 | S N N S N 0 S N S S S S S S S S S
VME (CR0.PE = 1, EFLAGS.VM = 1, CR4.VME = 1) | 16 | 3 | 0-2 | N/X N/X SV/X N/X N/X 0/X S/X N/X S/X S/X N/X S/X S/X S/X S/X S/X S/X | 2,3
VME (CR0.PE = 1, EFLAGS.VM = 1, CR4.VME = 1) | 16 | 3 | 3 | N N N N N 0 S N S S S S S S S S S
VME (CR0.PE = 1, EFLAGS.VM = 1, CR4.VME = 1) | 32 | 3 | 0-2 | X X X X X X X X X X X X X X X X X | 1
VME (CR0.PE = 1, EFLAGS.VM = 1, CR4.VME = 1) | 32 | 3 | 3 | S N N S N 0 S N S S S S S S S S S
+
Table 4-16. Effect of POPF/POPFD on the EFLAGS Register
+
+

1. #GP fault - no flag update

+

2. #GP fault with no flag update if VIP=1 in EFLAGS register and IF=1 in FLAGS value on stack

+

3. #GP fault with no flag update if TF=1 in FLAGS value on stack

+ + + + + + + + + + + + + + + + + +
Key
SUpdated from stack
SVUpdated from IF (bit 9) in FLAGS value on stack
NNo change in value
XNo EFLAGS update
0Value is cleared
+

Operation + ¶ +

+
IF EFLAGS.VM = 0 (* Not in Virtual-8086 Mode *)
+    THEN IF CPL = 0 OR CR0.PE = 0
+        THEN
+            IF OperandSize = 32;
+                THEN
+                    EFLAGS := Pop(); (* 32-bit pop *)
+                    (* All non-reserved flags except RF, VIP, VIF, and VM can be modified;
+                    VIP, VIF, VM, and all reserved bits are unaffected. RF is cleared. *)
+                ELSE IF (Operandsize = 64)
+                    RFLAGS := Pop(); (* 64-bit pop *)
+                    (* All non-reserved flags except RF, VIP, VIF, and VM can be modified;
+                    VIP, VIF, VM, and all reserved bits are unaffected. RF is cleared. *)
+                ELSE (* OperandSize = 16 *)
+                    EFLAGS[15:0] := Pop(); (* 16-bit pop *)
+                    (* All non-reserved flags can be modified. *)
+            FI;
+        ELSE (* CPL > 0 *)
+            IF OperandSize = 32
+                THEN
+                    IF CPL > IOPL
+                        THEN
+                            EFLAGS := Pop(); (* 32-bit pop *)
+                            (* All non-reserved bits except IF, IOPL, VIP, VIF, VM, and RF can be modified;
+                            IF, IOPL, VIP, VIF, VM, and all reserved bits are unaffected; RF is cleared. *)
+                        ELSE
+                            EFLAGS := Pop(); (* 32-bit pop *)
+                            (* All non-reserved bits except IOPL, VIP, VIF, VM, and RF can be modified;
+                            IOPL, VIP, VIF, VM, and all reserved bits are unaffected; RF is cleared. *)
+                    FI;
+                ELSE IF (Operandsize = 64)
+                    IF CPL > IOPL
+                        THEN
+                            RFLAGS := Pop(); (* 64-bit pop *)
+                            (* All non-reserved bits except IF, IOPL, VIP, VIF, VM, and RF can be modified;
+                            IF, IOPL, VIP, VIF, VM, and all reserved bits are unaffected; RF is cleared. *)
+                        ELSE
+                            RFLAGS := Pop(); (* 64-bit pop *)
+                            (* All non-reserved bits except IOPL, VIP, VIF, VM, and RF can be modified;
+                            IOPL, VIP, VIF, VM, and all reserved bits are unaffected; RF is cleared. *)
+                    FI;
+                ELSE (* OperandSize = 16 *)
+                    EFLAGS[15:0] := Pop(); (* 16-bit pop *)
+                    (* All non-reserved bits except IOPL can be modified; IOPL and all
+                    reserved bits are unaffected. *)
+            FI;
+        FI;
+    ELSE (* In virtual-8086 mode *)
+        IF IOPL = 3
+            THEN
+                IF OperandSize = 32
+                    THEN
+                        EFLAGS := Pop();
+                        (* All non-reserved bits except IOPL, VIP, VIF, VM, and RF can be modified;
+                        VIP, VIF, VM, IOPL, and all reserved bits are unaffected. RF is cleared. *)
+                    ELSE
+                        EFLAGS[15:0] := Pop();
+                        (* All non-reserved bits except IOPL can be modified; IOPL and all reserved bits are unaffected. *)
+                FI;
+            ELSE (* IOPL < 3 *)
+                IF (Operandsize = 32) OR (CR4.VME = 0)
+                    THEN #GP(0); (* Trap to virtual-8086 monitor. *)
+                    ELSE (* Operandsize = 16 and CR4.VME = 1 *)
+                        tempFLAGS := Pop();
+                        IF (EFLAGS.VIP = 1 AND tempFLAGS[9] = 1) OR tempFLAGS[8] = 1
+                            THEN #GP(0);
+                            ELSE
+                                EFLAGS.VIF := tempFLAGS[9];
+                                EFLAGS[15:0] := tempFLAGS;
+                                (* All non-reserved bits except IOPL and IF can be modified;
+                                IOPL, IF, and all reserved bits are unaffected. *)
+                        FI;
+                FI;
+        FI;
+FI;
+
+

Flags Affected + ¶ +

+

All flags may be affected; see the Operation section for details.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#SS(0)If the top of stack is not within the stack segment.
#PF(fault-code)If a page fault occurs.
#AC(0)If an unaligned memory reference is made while CPL = 3 and alignment checking is enabled.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + +
#SSIf the top of stack is not within the stack segment.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If IOPL < 3 and VME is not enabled.
If IOPL < 3 and the 32-bit operand size is used.
If IOPL < 3, EFLAGS.VIP = 1, and bit 9 (IF) is set in the FLAGS value on the stack.
If IOPL < 3 and bit 8 (TF) is set in the FLAGS value on the stack.
If an attempt is made to execute the POPF/POPFD instruction with an operand-size override prefix.
#SS(0)If the top of stack is not within the stack segment.
#PF(fault-code)If a page fault occurs.
#AC(0)If an unaligned memory reference is made while alignment checking is enabled.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same as for protected mode exceptions.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#SS(0)If the stack address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/por.html b/x86/por.html new file mode 100644 index 0000000..1391f1d --- /dev/null +++ b/x86/por.html @@ -0,0 +1,226 @@ + +POR + — Bitwise Logical OR

POR + — Bitwise Logical OR

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F EB /r1 POR mm, mm/m64AV/VMMXBitwise OR of mm/m64 and mm.
66 0F EB /r POR xmm1, xmm2/m128AV/VSSE2Bitwise OR of xmm2/m128 and xmm1.
VEX.128.66.0F.WIG EB /r VPOR xmm1, xmm2, xmm3/m128BV/VAVXBitwise OR of xmm2 and xmm3/m128.
VEX.256.66.0F.WIG EB /r VPOR ymm1, ymm2, ymm3/m256BV/VAVX2Bitwise OR of ymm2 and ymm3/m256.
EVEX.128.66.0F.W0 EB /r VPORD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512FBitwise OR of packed doubleword integers in xmm2 and xmm3/m128/m32bcst using writemask k1.
EVEX.256.66.0F.W0 EB /r VPORD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512FBitwise OR of packed doubleword integers in ymm2 and ymm3/m256/m32bcst using writemask k1.
EVEX.512.66.0F.W0 EB /r VPORD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstCV/VAVX512FBitwise OR of packed doubleword integers in zmm2 and zmm3/m512/m32bcst using writemask k1.
EVEX.128.66.0F.W1 EB /r VPORQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512FBitwise OR of packed quadword integers in xmm2 and xmm3/m128/m64bcst using writemask k1.
EVEX.256.66.0F.W1 EB /r VPORQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FBitwise OR of packed quadword integers in ymm2 and ymm3/m256/m64bcst using writemask k1.
EVEX.512.66.0F.W1 EB /r VPORQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstCV/VAVX512FBitwise OR of packed quadword integers in zmm2 and zmm3/m512/m64bcst using writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a bitwise logical OR operation on the source operand (second operand) and the destination operand (first operand) and stores the result in the destination operand. Each bit of the result is set to 1 if either or both of the corresponding bits of the first and second operands are 1; otherwise, it is set to 0.

+

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

Legacy SSE version: The source operand can be an MMX technology register or a 64-bit memory location. The destination operand is an MMX technology register.

+

128-bit Legacy SSE version: The second source operand is an XMM register or a 128-bit memory location. The first source and destination operands can be XMM registers. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The second source operand is an XMM register or a 128-bit memory location. The first source and destination operands can be XMM registers. Bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

VEX.256 encoded version: The second source operand is an YMM register or a 256-bit memory location. The first source and destination operands can be YMM registers.

+

EVEX encoded version: The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32/64-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with write-mask k1 at 32/64-bit granularity.

+

Operation + ¶ +

+

POR (64-bit Operand) + ¶ +

+
DEST := DEST OR SRC
+
+

POR (128-bit Legacy SSE Version) + ¶ +

+
DEST := DEST OR SRC
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPOR (VEX.128 Encoded Version) + ¶ +

+
DEST := SRC1 OR SRC2
+DEST[MAXVL-1:128] := 0
+
+

VPOR (VEX.256 Encoded Version) + ¶ +

+
DEST := SRC1 OR SRC2
+DEST[MAXVL-1:256] := 0
+
+

VPORD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[i+31:i] := SRC1[i+31:i] BITWISE OR SRC2[31:0]
+                ELSE DEST[i+31:i] := SRC1[i+31:i] BITWISE OR SRC2[i+31:i]
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                *DEST[i+31:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI;
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPORD __m512i _mm512_or_epi32(__m512i a, __m512i b);
+
+
VPORD __m512i _mm512_mask_or_epi32(__m512i s, __mmask16 k, __m512i a, __m512i b);
+
+
VPORD __m512i _mm512_maskz_or_epi32( __mmask16 k, __m512i a, __m512i b);
+
+
VPORD __m256i _mm256_or_epi32(__m256i a, __m256i b);
+
+
VPORD __m256i _mm256_mask_or_epi32(__m256i s, __mmask8 k, __m256i a, __m256i b);
+
+
VPORD __m256i _mm256_maskz_or_epi32( __mmask8 k, __m256i a, __m256i b);
+
+
VPORD __m128i _mm_or_epi32(__m128i a, __m128i b);
+
+
VPORD __m128i _mm_mask_or_epi32(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPORD __m128i _mm_maskz_or_epi32( __mmask8 k, __m128i a, __m128i b);
+
+
VPORQ __m512i _mm512_or_epi64(__m512i a, __m512i b);
+
+
VPORQ __m512i _mm512_mask_or_epi64(__m512i s, __mmask8 k, __m512i a, __m512i b);
+
+
VPORQ __m512i _mm512_maskz_or_epi64(__mmask8 k, __m512i a, __m512i b);
+
+
VPORQ __m256i _mm256_or_epi64(__m256i a, __m256i b);
+
+
VPORQ __m256i _mm256_mask_or_epi64(__m256i s, __mmask8 k, __m256i a, __m256i b);
+
+
VPORQ __m256i _mm256_maskz_or_epi64( __mmask8 k, __m256i a, __m256i b);
+
+
VPORQ __m128i _mm_or_epi64(__m128i a, __m128i b);
+
+
VPORQ __m128i _mm_mask_or_epi64(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPORQ __m128i _mm_maskz_or_epi64( __mmask8 k, __m128i a, __m128i b);
+
+
POR __m64 _mm_or_si64(__m64 m1, __m64 m2)
+
+
(V)POR __m128i _mm_or_si128(__m128i m1, __m128i m2)
+
+
VPOR __m256i _mm256_or_si256 ( __m256i a, __m256i b)
+
+
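An informal C sketch of the 128-bit form (not part of the manual; it assumes SSE2 and <emmintrin.h>):

#include <emmintrin.h>   /* SSE2: _mm_or_si128 (POR) */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    __m128i a = _mm_set1_epi32(0x0F0F0F0F);
    __m128i b = _mm_set1_epi32(0x30303030);
    __m128i r = _mm_or_si128(a, b);        /* bitwise OR across all 128 bits */

    uint32_t out[4];
    _mm_storeu_si128((__m128i *)out, r);
    printf("0x%08X\n", out[0]);            /* 0x3F3F3F3F */
    return 0;
}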

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/prefetchh.html b/x86/prefetchh.html new file mode 100644 index 0000000..74b40dc --- /dev/null +++ b/x86/prefetchh.html @@ -0,0 +1,96 @@ + +PREFETCHh + — Prefetch Data Into Caches

PREFETCHh + — Prefetch Data Into Caches

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 18 /1PREFETCHT0 m8MValidValidMove data from m8 closer to the processor using T0 hint.
0F 18 /2PREFETCHT1 m8MValidValidMove data from m8 closer to the processor using T1 hint.
0F 18 /3PREFETCHT2 m8MValidValidMove data from m8 closer to the processor using T2 hint.
0F 18 /0PREFETCHNTA m8MValidValidMove data from m8 closer to the processor using NTA hint.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r)N/AN/AN/A
+

Description + ¶ +

+

Fetches the line of data from memory that contains the byte specified with the source operand to a location in the cache hierarchy specified by a locality hint:

+
    +
  • T0 (temporal data)—prefetch data into all levels of the cache hierarchy.
  • +
  • T1 (temporal data with respect to first level cache misses)—prefetch data into level 2 cache and higher.
  • +
  • T2 (temporal data with respect to second level cache misses)—prefetch data into level 3 cache and higher, or an implementation-specific choice.
  • +
  • NTA (non-temporal data with respect to all cache levels)—prefetch data into non-temporal cache structure and into a location close to the processor, minimizing cache pollution.
+

The source operand is a byte memory location. (The locality hints are encoded into the machine level instruction using bits 3 through 5 of the ModR/M byte.)

+

If the line selected is already present in the cache hierarchy at a level closer to the processor, no data movement occurs. Prefetches from uncacheable or WC memory are ignored.

+

The PREFETCHh instruction is merely a hint and does not affect program behavior. If executed, this instruction moves data closer to the processor in anticipation of future use.

+

The implementation of prefetch locality hints is implementation-dependent, and can be overloaded or ignored by a processor implementation. The amount of data prefetched is also processor implementation-dependent. It will, however, be a minimum of 32 bytes. Additional details of the implementation-dependent locality hints are described in Section 7.4 of Intel® 64 and IA-32 Architectures Optimization Reference Manual.

+

It should be noted that processors are free to speculatively fetch and cache data from system memory regions that are assigned a memory-type that permits speculative reads (that is, the WB, WC, and WT memory types). A PREFETCHh instruction is considered a hint to this speculative behavior. Because this speculative fetching can occur at any time and is not tied to instruction execution, a PREFETCHh instruction is not ordered with respect to the fence instructions (MFENCE, SFENCE, and LFENCE) or locked memory references. A PREFETCHh instruction is also unordered with respect to CLFLUSH and CLFLUSHOPT instructions, other PREFETCHh instructions, or any other general instruction. It is ordered with respect to serializing instructions such as CPUID, WRMSR, OUT, and MOV CR.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
FETCH (m8);
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
void _mm_prefetch(char *p, int i)
+
+
The argument “*p” gives the address of the byte (and corresponding cache line) to be prefetched. The value “i” gives a constant (_MM_HINT_T0, _MM_HINT_T1, _MM_HINT_T2, or _MM_HINT_NTA) that specifies the type of prefetch operation to be performed.
+
+
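An informal C sketch of a typical software-prefetch pattern (not part of the manual; the prefetch distance of 16 elements is an arbitrary tuning assumption and should be measured for a real workload):

#include <xmmintrin.h>   /* _mm_prefetch, _MM_HINT_T0 */
#include <stddef.h>

/* Sum an array while hinting the line a fixed distance ahead into the caches. */
float sum_with_prefetch(const float *data, size_t n)
{
    float sum = 0.0f;
    for (size_t i = 0; i < n; i++) {
        if (i + 16 < n)
            _mm_prefetch((const char *)&data[i + 16], _MM_HINT_T0);
        sum += data[i];
    }
    return sum;
}

Because the instruction is only a hint, dropping the prefetch never changes the result; it can only affect performance.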

Numeric Exceptions + ¶ +

+

None.

+

Exceptions (All Operating Modes) + ¶ +

+

#UD If the LOCK prefix is used.

diff --git a/x86/prefetchw.html b/x86/prefetchw.html new file mode 100644 index 0000000..50560f0 --- /dev/null +++ b/x86/prefetchw.html @@ -0,0 +1,97 @@ + +PREFETCHW + — Prefetch Data Into Caches in Anticipation of a Write

PREFETCHW + — Prefetch Data Into Caches in Anticipation of a Write

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
0F 0D /1 PREFETCHW m8MV/VPREFETCHWMove data from m8 closer to the processor in anticipation of a write.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r)N/AN/AN/A
+

Description + ¶ +

+

Fetches the cache line of data from memory that contains the byte specified with the source operand to a location in the 1st or 2nd level cache and invalidates other cached instances of the line.

+

The source operand is a byte memory location. If the line selected is already present in the lowest level cache and is already in an exclusively owned state, no data movement occurs. Prefetches from non-writeback memory are ignored.

+

The PREFETCHW instruction is merely a hint and does not affect program behavior. If executed, this instruction moves data closer to the processor and invalidates other cached copies in anticipation of the line being written to in the future.

+

The characteristic of prefetch locality hints is implementation-dependent, and can be overloaded or ignored by a processor implementation. The amount of data prefetched is also processor implementation-dependent. It will, however, be a minimum of 32 bytes. Additional details of the implementation-dependent locality hints are described in Section 7.4 of Intel® 64 and IA-32 Architectures Optimization Reference Manual.

+

It should be noted that processors are free to speculatively fetch and cache data with exclusive ownership from system memory regions that permit such accesses (that is, the WB memory type). A PREFETCHW instruction is considered a hint to this speculative behavior. Because this speculative fetching can occur at any time and is not tied to instruction execution, a PREFETCHW instruction is not ordered with respect to the fence instructions (MFENCE, SFENCE, and LFENCE) or locked memory references. A PREFETCHW instruction is also unordered with respect to CLFLUSH and CLFLUSHOPT instructions, other PREFETCHW instructions, or any other general instruction

+

It is ordered with respect to serializing instructions such as CPUID, WRMSR, OUT, and MOV CR.

+

This instruction's operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
FETCH_WITH_EXCLUSIVE_OWNERSHIP (m8);
+
+

Flags Affected + ¶ +

+

None.

+

C/C++ Compiler Intrinsic Equivalent + ¶ +

+
void _m_prefetchw( void * );
+
+
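An informal C sketch (not part of the manual; it assumes a GCC/Clang-style toolchain that exposes _m_prefetchw through <x86intrin.h> with PREFETCHW support enabled, which is a toolchain assumption) showing the intended pattern of requesting ownership of a line shortly before writing it:

#include <x86intrin.h>   /* _m_prefetchw; availability and required flags are toolchain-dependent */

struct node { struct node *next; long value; };

/* Walk a linked list, hinting the next node for write before updating the
   current one, so the ownership request overlaps the current update. */
void bump_all(struct node *n)
{
    while (n) {
        if (n->next)
            _m_prefetchw(n->next);   /* PREFETCHW hint: fetch line in anticipation of a write */
        n->value += 1;
        n = n->next;
    }
}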

Protected Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

64-Bit Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
diff --git a/x86/prefetchwt1.html b/x86/prefetchwt1.html new file mode 100644 index 0000000..1c03534 --- /dev/null +++ b/x86/prefetchwt1.html @@ -0,0 +1,102 @@ + +PREFETCHWT1 + — Prefetch Vector Data Into Caches With Intent to Write and T1 Hint

PREFETCHWT1 + — Prefetch Vector Data Into Caches With Intent to Write and T1 Hint

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
0F 0D /2 PREFETCHWT1 m8MV/VPREFETCHWT1Move data from m8 closer to the processor using T1 hint with intent to write.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/En Operand 1 Operand 2 Operand 3 Operand 4
M ModRM:r/m (r) N/A N/A N/A
+

Description + ¶ +

+

Fetches the line of data from memory that contains the byte specified with the source operand to a location in the cache hierarchy specified by an intent to write hint (so that data is brought into ‘Exclusive’ state via a request for ownership) and a locality hint:

+
    +
  • T1 (temporal data with respect to first level cache)—prefetch data into the second level cache.
+

The source operand is a byte memory location. (The locality hints are encoded into the machine level instruction using bits 3 through 5 of the ModR/M byte. Use of any ModR/M value other than the specified ones will lead to unpredictable behavior.)

+

If the line selected is already present in the cache hierarchy at a level closer to the processor, no data movement occurs. Prefetches from uncacheable or WC memory are ignored.

+

The PREFETCHWT1 instruction is merely a hint and does not affect program behavior. If executed, this instruction moves data closer to the processor in anticipation of future use.

+

The implementation of prefetch locality hints is implementation-dependent, and can be overloaded or ignored by a processor implementation. The amount of data prefetched is also processor implementation-dependent. It will, however, be a minimum of 32 bytes. Additional details of the implementation-dependent locality hints are described in Section 9.5, “Memory Optimization Using Prefetch” of the Intel® 64 and IA-32 Architectures Optimization Reference Manual.

+

It should be noted that processors are free to speculatively fetch and cache data from system memory regions that are assigned a memory-type that permits speculative reads (that is, the WB, WC, and WT memory types). A PREFETCHWT1 instruction is considered a hint to this speculative behavior. Because this speculative fetching can occur at any time and is not tied to instruction execution, a PREFETCHWT1 instruction is not ordered with respect to the fence instructions (MFENCE, SFENCE, and LFENCE) or locked memory references. A PREFETCHWT1 instruction is also unordered with respect to CLFLUSH and CLFLUSHOPT instructions, other PREFETCHWT1 instructions, or any other general instruction. It is ordered with respect to serializing instructions such as CPUID, WRMSR, OUT, and MOV CR.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
PREFETCH(mem, Level, State) Prefetches a byte memory location pointed by ‘mem’ into the cache level specified by ‘Level’; a request
+for exclusive/ownership is done if ‘State’ is 1. Note that the memory location ignores cache line splits. This operation is considered a
+hint for the processor and may be skipped depending on implementation.
+Prefetch (m8, Level = 1, EXCLUSIVE=1);
+
+

Flags Affected + ¶ +

+

None.

+

C/C++ Compiler Intrinsic Equivalent + ¶ +

+
void _mm_prefetch( char const *, int hint= _MM_HINT_ET1);
+
+
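A hedged sketch of how the intrinsic form might be used where the toolchain defines the _MM_HINT_ET1 constant (compiler support for PREFETCHWT1 is not universal); the hint constant, any required build flags, and the loop shape below are assumptions, not guaranteed by this page.

#include <immintrin.h>

/* Scale a buffer, hinting a write-intent prefetch into the second-level cache. */
void scale_buffer(float *buf, int n, float s)
{
    for (int i = 0; i < n; i += 16) {
        int p = (i + 64 < n) ? i + 64 : i;                      /* stay inside the buffer */
        _mm_prefetch((const char *)&buf[p], _MM_HINT_ET1);      /* write intent, T1 locality */
        for (int j = i; j < i + 16 && j < n; j++)
            buf[j] *= s;
    }
}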

Protected Mode Exceptions + ¶ +

+ + + +
#UD If the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UD If the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UD If the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UD If the LOCK prefix is used.
+

64-Bit Mode Exceptions + ¶ +

+ + + +
#UD If the LOCK prefix is used.
diff --git a/x86/psadbw.html b/x86/psadbw.html new file mode 100644 index 0000000..91e81a6 --- /dev/null +++ b/x86/psadbw.html @@ -0,0 +1,391 @@ + +PSADBW + — Compute Sum of Absolute Differences

PSADBW + — Compute Sum of Absolute Differences

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F F6 /r1 PSADBW mm1, mm2/m64AV/VSSEComputes the absolute differences of the packed unsigned byte integers from mm2 /m64 and mm1; differences are then summed to produce an unsigned word integer result.
66 0F F6 /r PSADBW xmm1, xmm2/m128AV/VSSE2Computes the absolute differences of the packed unsigned byte integers from xmm2 /m128 and xmm1; the 8 low differences and 8 high differences are then summed separately to produce two unsigned word integer results.
VEX.128.66.0F.WIG F6 /r VPSADBW xmm1, xmm2, xmm3/m128BV/VAVXComputes the absolute differences of the packed unsigned byte integers from xmm3 /m128 and xmm2; the 8 low differences and 8 high differences are then summed separately to produce two unsigned word integer results.
VEX.256.66.0F.WIG F6 /r VPSADBW ymm1, ymm2, ymm3/m256BV/VAVX2Computes the absolute differences of the packed unsigned byte integers from ymm3 /m256 and ymm2; then each consecutive 8 differences are summed separately to produce four unsigned word integer results.
EVEX.128.66.0F.WIG F6 /r VPSADBW xmm1, xmm2, xmm3/m128CV/VAVX512VL AVX512BWComputes the absolute differences of the packed unsigned byte integers from xmm3 /m128 and xmm2; then each consecutive 8 differences are summed separately to produce two unsigned word integer results.
EVEX.256.66.0F.WIG F6 /r VPSADBW ymm1, ymm2, ymm3/m256CV/VAVX512VL AVX512BWComputes the absolute differences of the packed unsigned byte integers from ymm3 /m256 and ymm2; then each consecutive 8 differences are summed separately to produce four unsigned word integer results.
EVEX.512.66.0F.WIG F6 /r VPSADBW zmm1, zmm2, zmm3/m512CV/VAVX512BWComputes the absolute differences of the packed unsigned byte integers from zmm3 /m512 and zmm2; then each consecutive 8 differences are summed separately to produce eight unsigned word integer results.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Computes the absolute value of the difference of 8 unsigned byte integers from the source operand (second operand) and from the destination operand (first operand). These 8 differences are then summed to produce an unsigned word integer result that is stored in the destination operand. Figure 4-14 shows the operation of the PSADBW instruction when using 64-bit operands.

+

When operating on 64-bit operands, the word integer result is stored in the low word of the destination operand, and the remaining bytes in the destination operand are cleared to all 0s.

+

When operating on 128-bit operands, two packed results are computed. Here, the 8 low-order bytes of the source and destination operands are operated on to produce a word result that is stored in the low word of the destination operand, and the 8 high-order bytes are operated on to produce a word result that is stored in bits 64 through 79 of the destination operand. The remaining bytes of the destination operand are cleared.

+

For the 256-bit version, the third group of 8 differences is summed to produce an unsigned word in bits [143:128] of the destination register, and the fourth group of 8 differences is summed to produce an unsigned word in bits [207:192] of the destination register. The remaining words of the destination are set to 0.

+

For the 512-bit version, the fifth group result is stored in bits [271:256] of the destination. The result from the sixth group is stored in bits [335:320]. The results for the seventh and eighth groups are stored in bits [399:384] and bits [463:448], respectively. The remaining bits in the destination are set to 0.

+

In 64-bit mode and not encoded by VEX/EVEX prefix, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

Legacy SSE version: The source operand can be an MMX technology register or a 64-bit memory location. The destination operand is an MMX technology register.

+

128-bit Legacy SSE version: The first source operand and destination register are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding ZMM destination register remain unchanged.

+

VEX.128 and EVEX.128 encoded versions: The first source operand and destination register are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding ZMM register are zeroed.

+

VEX.256 and EVEX.256 encoded versions: The first source operand and destination register are YMM registers. The second source operand is an YMM register or a 256-bit memory location. Bits (MAXVL-1:256) of the corresponding ZMM register are zeroed.

+

EVEX.512 encoded version: The first source operand and destination register are ZMM registers. The second source operand is a ZMM register or a 512-bit memory location.

+
Figure 4-14. PSADBW Instruction Operation Using 64-bit Operands

Operation + ¶ +

+

VPSADBW (EVEX Encoded Versions) + ¶ +

+
VL = 128, 256, 512
+TEMP0 := ABS(SRC1[7:0] - SRC2[7:0])
+(* Repeat operation for bytes 1 through 15 *)
+TEMP15 := ABS(SRC1[127:120] - SRC2[127:120])
+DEST[15:0] := SUM(TEMP0:TEMP7)
+DEST[63:16] := 000000000000H
+DEST[79:64] := SUM(TEMP8:TEMP15)
+DEST[127:80] := 00000000000H
+IF VL >= 256
+    (* Repeat operation for bytes 16 through 31*)
+    TEMP31 := ABS(SRC1[255:248] - SRC2[255:248])
+    DEST[143:128] := SUM(TEMP16:TEMP23)
+    DEST[191:144] := 000000000000H
+    DEST[207:192] := SUM(TEMP24:TEMP31)
+    DEST[223:208] := 00000000000H
+FI;
+IF VL >= 512
+(* Repeat operation for bytes 32 through 63*)
+    TEMP63 := ABS(SRC1[511:504] - SRC2[511:504])
+    DEST[271:256] := SUM(TEMP32:TEMP39)
+    DEST[319:272] := 000000000000H
+    DEST[335:320] := SUM(TEMP40:TEMP47)
+    DEST[383:336] := 00000000000H
+    DEST[399:384] := SUM(TEMP48:TEMP55)
+    DEST[447:400] := 000000000000H
+    DEST[463:448] := SUM(TEMP56:TEMP63)
+    DEST[511:464] := 00000000000H
+FI;
+DEST[MAXVL-1:VL] := 0
+
+

VPSADBW (VEX.256 Encoded Version) + ¶ +

+
TEMP0 := ABS(SRC1[7:0] - SRC2[7:0])
+(* Repeat operation for bytes 2 through 30*)
+TEMP31 := ABS(SRC1[255:248] - SRC2[255:248])
+DEST[15:0] := SUM(TEMP0:TEMP7)
+DEST[63:16] := 000000000000H
+DEST[79:64] := SUM(TEMP8:TEMP15)
+DEST[127:80] := 00000000000H
+DEST[143:128] := SUM(TEMP16:TEMP23)
+DEST[191:144] := 000000000000H
+DEST[207:192] := SUM(TEMP24:TEMP31)
+DEST[223:208] := 00000000000H
+DEST[MAXVL-1:256] := 0
+
+

VPSADBW (VEX.128 Encoded Version) + ¶ +

+
TEMP0 := ABS(SRC1[7:0] - SRC2[7:0])
+(* Repeat operation for bytes 2 through 14 *)
+TEMP15 := ABS(SRC1[127:120] - SRC2[127:120])
+DEST[15:0] := SUM(TEMP0:TEMP7)
+DEST[63:16] := 000000000000H
+DEST[79:64] := SUM(TEMP8:TEMP15)
+DEST[127:80] := 00000000000H
+DEST[MAXVL-1:128] := 0
+
+

PSADBW (128-bit Legacy SSE Version) + ¶ +

+
TEMP0 := ABS(DEST[7:0] - SRC[7:0])
+(* Repeat operation for bytes 2 through 14 *)
+TEMP15 := ABS(DEST[127:120] - SRC[127:120])
+DEST[15:0] := SUM(TEMP0:TEMP7)
+DEST[63:16] := 000000000000H
+DEST[79:64] := SUM(TEMP8:TEMP15)
+DEST[127:80] := 00000000000H
+DEST[MAXVL-1:128] (Unmodified)
+
+

PSADBW (64-bit Operand) + ¶ +

+
TEMP0 := ABS(DEST[7:0] - SRC[7:0])
+(* Repeat operation for bytes 2 through 6 *)
+TEMP7 := ABS(DEST[63:56] - SRC[63:56])
+DEST[15:0] := SUM(TEMP0:TEMP7)
+DEST[63:16] := 000000000000H
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPSADBW __m512i _mm512_sad_epu8( __m512i a, __m512i b)
+
+
PSADBW __m64 _mm_sad_pu8(__m64 a,__m64 b)
+
+
(V)PSADBW __m128i _mm_sad_epu8(__m128i a, __m128i b)
+
+
VPSADBW __m256i _mm256_sad_epu8( __m256i a, __m256i b)
+
+
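For illustration only (a sketch, not manual text), the SSE2 form can accumulate a sum of absolute differences over two byte buffers, as in simple motion-estimation kernels; the function name and the multiple-of-16 length requirement are assumptions.

#include <emmintrin.h>
#include <stdint.h>
#include <stddef.h>

/* Sum of absolute differences over two buffers; n is assumed to be a multiple of 16. */
uint64_t sad_u8(const uint8_t *a, const uint8_t *b, size_t n)
{
    __m128i acc = _mm_setzero_si128();
    for (size_t i = 0; i < n; i += 16) {
        __m128i va = _mm_loadu_si128((const __m128i *)(a + i));
        __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));
        acc = _mm_add_epi64(acc, _mm_sad_epu8(va, vb));   /* PSADBW leaves two word sums in bits 15:0 and 79:64 */
    }
    /* Fold the low and high 64-bit halves of the accumulator (64-bit target assumed). */
    return (uint64_t)_mm_cvtsi128_si64(acc) +
           (uint64_t)_mm_cvtsi128_si64(_mm_srli_si128(acc, 8));
}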

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/pshufb.html b/x86/pshufb.html new file mode 100644 index 0000000..2e7ee05 --- /dev/null +++ b/x86/pshufb.html @@ -0,0 +1,300 @@ + +PSHUFB + — Packed Shuffle Bytes

PSHUFB + — Packed Shuffle Bytes

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 38 00 /r1 PSHUFB mm1, mm2/m64AV/VSSSE3Shuffle bytes in mm1 according to contents of mm2/m64.
66 0F 38 00 /r PSHUFB xmm1, xmm2/m128AV/VSSSE3Shuffle bytes in xmm1 according to contents of xmm2/m128.
VEX.128.66.0F38.WIG 00 /r VPSHUFB xmm1, xmm2, xmm3/m128BV/VAVXShuffle bytes in xmm2 according to contents of xmm3/m128.
VEX.256.66.0F38.WIG 00 /r VPSHUFB ymm1, ymm2, ymm3/m256BV/VAVX2Shuffle bytes in ymm2 according to contents of ymm3/m256.
EVEX.128.66.0F38.WIG 00 /r VPSHUFB xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWShuffle bytes in xmm2 according to contents of xmm3/m128 under write mask k1.
EVEX.256.66.0F38.WIG 00 /r VPSHUFB ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWShuffle bytes in ymm2 according to contents of ymm3/m256 under write mask k1.
EVEX.512.66.0F38.WIG 00 /r VPSHUFB zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWShuffle bytes in zmm2 according to contents of zmm3/m512 under write mask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

PSHUFB performs in-place shuffles of bytes in the destination operand (the first operand) according to the shuffle control mask in the source operand (the second operand). The instruction permutes the data in the destination operand, leaving the shuffle mask unaffected. If the most significant bit (bit[7]) of each byte of the shuffle control mask is set, then constant zero is written in the result byte. Each byte in the shuffle control mask forms an index to permute the corresponding byte in the destination operand. The value of each index is the least significant 4 bits (128-bit operation) or 3 bits (64-bit operation) of the shuffle control byte. When the source operand is a 128-bit memory operand, the operand must be aligned on a 16-byte boundary or a general-protection exception (#GP) will be generated.

+

In 64-bit mode and not encoded with VEX/EVEX, use the REX prefix to access XMM8-XMM15 registers.

+

Legacy SSE version 64-bit operand: Both operands can be MMX registers.

+

128-bit Legacy SSE version: The first source operand and the destination operand are the same. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The destination operand is the first operand, the first source operand is the second operand, the second source operand is the third operand. Bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

VEX.256 encoded version: Bits (255:128) of the destination YMM register stores the 16-byte shuffle result of the upper 16 bytes of the first source operand, using the upper 16-bytes of the second source operand as control mask.

+

The value of each index for the high 128-bit lane is the least significant 4 bits of the respective shuffle control byte. The index value selects a source data element within each 128-bit lane.

+

EVEX encoded version: The second source operand is a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The first source operand and destination operand are ZMM/YMM/XMM registers. The destination is conditionally updated with writemask k1.

+

EVEX and VEX encoded version: Four/two in-lane 128-bit shuffles.

+

Operation + ¶ +

+

PSHUFB (With 64-bit Operands) + ¶ +

+
TEMP := DEST
+for i = 0 to 7 {
+    if (SRC[(i * 8)+7] = 1 ) then
+            DEST[(i*8)+7...(i*8)+0] := 0;
+    else
+            index[2..0] := SRC[(i*8)+2 .. (i*8)+0];
+            DEST[(i*8)+7...(i*8)+0] := TEMP[(index*8+7)..(index*8+0)];
+    endif;
+}
+PSHUFB (with 128 bit operands)
+TEMP := DEST
+for i = 0 to 15 {
+    if (SRC[(i * 8)+7] = 1 ) then
+            DEST[(i*8)+7..(i*8)+0] := 0;
+        else
+            index[3..0] := SRC[(i*8)+3 .. (i*8)+0];
+            DEST[(i*8)+7..(i*8)+0] := TEMP[(index*8+7)..(index*8+0)];
+    endif
+}
+
+

VPSHUFB (VEX.128 Encoded Version) + ¶ +

+
for i = 0 to 15 {
+    if (SRC2[(i * 8)+7] = 1) then
+        DEST[(i*8)+7..(i*8)+0] := 0;
+        else
+        index[3..0] := SRC2[(i*8)+3 .. (i*8)+0];
+        DEST[(i*8)+7..(i*8)+0] := SRC1[(index*8+7)..(index*8+0)];
+    endif
+}
+DEST[MAXVL-1:128] := 0
+
+

VPSHUFB (VEX.256 Encoded Version) + ¶ +

+
for i = 0 to 15 {
+    if (SRC2[(i * 8)+7] == 1 ) then
+        DEST[(i*8)+7..(i*8)+0] := 0;
+        else
+        index[3..0] := SRC2[(i*8)+3 .. (i*8)+0];
+        DEST[(i*8)+7..(i*8)+0] := SRC1[(index*8+7)..(index*8+0)];
+    endif
+    if (SRC2[128 + (i * 8)+7] == 1 ) then
+        DEST[128 + (i*8)+7..(i*8)+0] := 0;
+        else
+        index[3..0] := SRC2[128 + (i*8)+3 .. (i*8)+0];
+        DEST[128 + (i*8)+7..(i*8)+0] := SRC1[128 + (index*8+7)..(index*8+0)];
+    endif
+}
+
+

VPSHUFB (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+jmask := (KL-1) & ~0xF
+                // 0x00, 0x10, 0x30 depending on the VL
+FOR j = 0 TO KL-1
+                // dest
+    IF k1[ j ] or no_masking
+        index := src.byte[ j ];
+        IF index & 0x80
+            Dest.byte[ j ] := 0;
+        ELSE
+            index := (index & 0xF) + (j & jmask);
+                // 16-element in-lane lookup
+            Dest.byte[ j ] := src.byte[ index ];
+    ELSE if zeroing
+        Dest.byte[ j ] := 0;
+DEST[MAXVL-1:VL] := 0;
+
+
Figure 4-15. PSHUFB with 64-Bit Operands

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPSHUFB __m512i _mm512_shuffle_epi8(__m512i a, __m512i b);
+
+
VPSHUFB __m512i _mm512_mask_shuffle_epi8(__m512i s, __mmask64 k, __m512i a, __m512i b);
+
+
VPSHUFB __m512i _mm512_maskz_shuffle_epi8( __mmask64 k, __m512i a, __m512i b);
+
+
VPSHUFB __m256i _mm256_mask_shuffle_epi8(__m256i s, __mmask32 k, __m256i a, __m256i b);
+
+
VPSHUFB __m256i _mm256_maskz_shuffle_epi8( __mmask32 k, __m256i a, __m256i b);
+
+
VPSHUFB __m128i _mm_mask_shuffle_epi8(__m128i s, __mmask16 k, __m128i a, __m128i b);
+
+
VPSHUFB __m128i _mm_maskz_shuffle_epi8( __mmask16 k, __m128i a, __m128i b);
+
+
PSHUFB: __m64 _mm_shuffle_pi8 (__m64 a, __m64 b)
+
+
(V)PSHUFB: __m128i _mm_shuffle_epi8 (__m128i a, __m128i b)
+
+
VPSHUFB:__m256i _mm256_shuffle_epi8(__m256i a, __m256i b)
+
+
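As a hedged example of typical use (not taken from the manual), the SSSE3 form can apply an arbitrary byte permutation, here reversing the 16 bytes of a register; control bytes with bit 7 set would instead zero the corresponding output byte.

#include <tmmintrin.h>

/* Reverse the byte order of a 128-bit value with a single PSHUFB. */
__m128i reverse_bytes_128(__m128i v)
{
    const __m128i rev = _mm_setr_epi8(15, 14, 13, 12, 11, 10, 9, 8,
                                      7, 6, 5, 4, 3, 2, 1, 0);   /* control byte i holds the index of the source byte */
    return _mm_shuffle_epi8(v, rev);
}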

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/pshufd.html b/x86/pshufd.html new file mode 100644 index 0000000..dc9889a --- /dev/null +++ b/x86/pshufd.html @@ -0,0 +1,446 @@ + +PSHUFD + — Shuffle Packed Doublewords

PSHUFD + — Shuffle Packed Doublewords

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 70 /r ib PSHUFD xmm1, xmm2/m128, imm8AV/VSSE2Shuffle the doublewords in xmm2/m128 based on the encoding in imm8 and store the result in xmm1.
VEX.128.66.0F.WIG 70 /r ib VPSHUFD xmm1, xmm2/m128, imm8AV/VAVXShuffle the doublewords in xmm2/m128 based on the encoding in imm8 and store the result in xmm1.
VEX.256.66.0F.WIG 70 /r ib VPSHUFD ymm1, ymm2/m256, imm8AV/VAVX2Shuffle the doublewords in ymm2/m256 based on the encoding in imm8 and store the result in ymm1.
EVEX.128.66.0F.W0 70 /r ib VPSHUFD xmm1 {k1}{z}, xmm2/m128/m32bcst, imm8BV/VAVX512VL AVX512FShuffle the doublewords in xmm2/m128/m32bcst based on the encoding in imm8 and store the result in xmm1 using writemask k1.
EVEX.256.66.0F.W0 70 /r ib VPSHUFD ymm1 {k1}{z}, ymm2/m256/m32bcst, imm8BV/VAVX512VL AVX512FShuffle the doublewords in ymm2/m256/m32bcst based on the encoding in imm8 and store the result in ymm1 using writemask k1.
EVEX.512.66.0F.W0 70 /r ib VPSHUFD zmm1 {k1}{z}, zmm2/m512/m32bcst, imm8BV/VAVX512FShuffle the doublewords in zmm2/m512/m32bcst based on the encoding in imm8 and store the result in zmm1 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)imm8N/A
BFullModRM:reg (w)ModRM:r/m (r)imm8N/A
+

Description + ¶ +

+

Copies doublewords from the source operand (second operand) and inserts them in the destination operand (first operand) at the locations selected with the order operand (third operand). Figure 4-16 shows the operation of the 256-bit VPSHUFD instruction and the encoding of the order operand. Each 2-bit field in the order operand selects the contents of one doubleword location within a 128-bit lane and copies it to the target element in the destination operand. For example, bits 0 and 1 of the order operand target the first doubleword element in the low and high 128-bit lanes of the destination operand for 256-bit VPSHUFD. The encoded value of bits 1:0 of the order operand (see the field encoding in Figure 4-16) determines which doubleword element (from the respective 128-bit lane) of the source operand will be copied to doubleword 0 of the destination operand.

+

For 128-bit operation, only the low 128-bit lane is operative. The source operand can be an XMM register or a 128-bit memory location. The destination operand is an XMM register. The order operand is an 8-bit immediate. Note that this instruction permits a doubleword in the source operand to be copied to more than one doubleword location in the destination operand.

Figure 4-16. 256-bit VPSHUFD Instruction Operation

The source operand can be an XMM register or a 128-bit memory location. The destination operand is an XMM register. The order operand is an 8-bit immediate. Note that this instruction permits a doubleword in the source operand to be copied to more than one doubleword location in the destination operand.

+

In 64-bit mode and not encoded in VEX/EVEX, using REX.R permits this instruction to access XMM8-XMM15.

+

128-bit Legacy SSE version: Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The source operand can be an XMM register or a 128-bit memory location. The destination operand is an XMM register. Bits (MAXVL-1:128) of the corresponding ZMM register are zeroed.

+

VEX.256 encoded version: The source operand can be an YMM register or a 256-bit memory location. The destination operand is an YMM register. Bits (MAXVL-1:256) of the corresponding ZMM register are zeroed. Bits (255:128) of the destination store the shuffled results of the upper 16 bytes of the source operand, using the immediate byte as the order operand.

+

EVEX encoded version: The source operand can be an ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM/YMM/XMM register updated according to the writemask.

+

Each 128-bit lane of the destination stores the shuffled results of the respective lane of the source operand using the immediate byte as the order operand.

+

Note: EVEX.vvvv and VEX.vvvv are reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+

PSHUFD (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := (SRC >> (ORDER[1:0] * 32))[31:0];
+DEST[63:32] := (SRC >> (ORDER[3:2] * 32))[31:0];
+DEST[95:64] := (SRC >> (ORDER[5:4] * 32))[31:0];
+DEST[127:96] := (SRC >> (ORDER[7:6] * 32))[31:0];
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPSHUFD (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := (SRC >> (ORDER[1:0] * 32))[31:0];
+DEST[63:32] := (SRC >> (ORDER[3:2] * 32))[31:0];
+DEST[95:64] := (SRC >> (ORDER[5:4] * 32))[31:0];
+DEST[127:96] := (SRC >> (ORDER[7:6] * 32))[31:0];
+DEST[MAXVL-1:128] := 0
+
+

VPSHUFD (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := (SRC[127:0] >> (ORDER[1:0] * 32))[31:0];
+DEST[63:32] := (SRC[127:0] >> (ORDER[3:2] * 32))[31:0];
+DEST[95:64] := (SRC[127:0] >> (ORDER[5:4] * 32))[31:0];
+DEST[127:96] := (SRC[127:0] >> (ORDER[7:6] * 32))[31:0];
+DEST[159:128] := (SRC[255:128] >> (ORDER[1:0] * 32))[31:0];
+DEST[191:160] := (SRC[255:128] >> (ORDER[3:2] * 32))[31:0];
+DEST[223:192] := (SRC[255:128] >> (ORDER[5:4] * 32))[31:0];
+DEST[255:224] := (SRC[255:128] >> (ORDER[7:6] * 32))[31:0];
+DEST[MAXVL-1:256] := 0
+
+

VPSHUFD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF (EVEX.b = 1) AND (SRC *is memory*)
+        THEN TMP_SRC[i+31:i] := SRC[31:0]
+        ELSE TMP_SRC[i+31:i] := SRC[i+31:i]
+    FI;
+ENDFOR;
+IF VL >= 128
+    TMP_DEST[31:0] := (TMP_SRC[127:0] >> (ORDER[1:0] * 32))[31:0];
+    TMP_DEST[63:32] := (TMP_SRC[127:0] >> (ORDER[3:2] * 32))[31:0];
+    TMP_DEST[95:64] := (TMP_SRC[127:0] >> (ORDER[5:4] * 32))[31:0];
+    TMP_DEST[127:96] := (TMP_SRC[127:0] >> (ORDER[7:6] * 32))[31:0];
+FI;
+IF VL >= 256
+    TMP_DEST[159:128] := (TMP_SRC[255:128]
+                        >> (ORDER[1:0] * 32))[31:0];
+    TMP_DEST[191:160] := (TMP_SRC[255:128]
+                        >> (ORDER[3:2] * 32))[31:0];
+    TMP_DEST[223:192] := (TMP_SRC[255:128]
+                        >> (ORDER[5:4] * 32))[31:0];
+    TMP_DEST[255:224] := (TMP_SRC[255:128]
+                        >> (ORDER[7:6] * 32))[31:0];
+FI;
+IF VL >= 512
+    TMP_DEST[287:256] := (TMP_SRC[383:256]
+                        >> (ORDER[1:0] * 32))[31:0];
+    TMP_DEST[319:288] := (TMP_SRC[383:256]
+                        >> (ORDER[3:2] * 32))[31:0];
+    TMP_DEST[351:320] := (TMP_SRC[383:256]
+                        >> (ORDER[5:4] * 32))[31:0];
+    TMP_DEST[383:352] := (TMP_SRC[383:256]
+                        >> (ORDER[7:6] * 32))[31:0];
+    TMP_DEST[415:384] := (TMP_SRC[511:384]
+                        >> (ORDER[1:0] * 32))[31:0];
+    TMP_DEST[447:416] := (TMP_SRC[511:384]
+                        >> (ORDER[3:2] * 32))[31:0];
+    TMP_DEST[479:448] := (TMP_SRC[511:384]
+                        >> (ORDER[5:4] * 32))[31:0];
+    TMP_DEST[511:480] := (TMP_SRC[511:384]
+                        >> (ORDER[7:6] * 32))[31:0];
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                            ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                                ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPSHUFD __m512i _mm512_shuffle_epi32(__m512i a, int n );
+
+
VPSHUFD __m512i _mm512_mask_shuffle_epi32(__m512i s, __mmask16 k, __m512i a, int n );
+
+
VPSHUFD __m512i _mm512_maskz_shuffle_epi32( __mmask16 k, __m512i a, int n );
+
+
VPSHUFD __m256i _mm256_mask_shuffle_epi32(__m256i s, __mmask8 k, __m256i a, int n );
+
+
VPSHUFD __m256i _mm256_maskz_shuffle_epi32( __mmask8 k, __m256i a, int n );
+
+
VPSHUFD __m128i _mm_mask_shuffle_epi32(__m128i s, __mmask8 k, __m128i a, int n );
+
+
VPSHUFD __m128i _mm_maskz_shuffle_epi32( __mmask8 k, __m128i a, int n );
+
+
(V)PSHUFD __m128i _mm_shuffle_epi32(__m128i a, int n)
+
+
VPSHUFD __m256i _mm256_shuffle_epi32(__m256i a, const int n)
+
+
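A brief sketch (assumed usage, not manual text): the order byte is commonly built with the _MM_SHUFFLE macro, so broadcasting doubleword 0 or reversing the four doublewords reads as follows.

#include <emmintrin.h>

__m128i broadcast_dword0(__m128i v)
{
    return _mm_shuffle_epi32(v, _MM_SHUFFLE(0, 0, 0, 0));   /* imm8 = 00H: every destination dword takes source dword 0 */
}

__m128i reverse_dwords(__m128i v)
{
    return _mm_shuffle_epi32(v, _MM_SHUFFLE(0, 1, 2, 3));   /* imm8 = 1BH: destination dwords 0..3 take source dwords 3,2,1,0 */
}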

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-50, “Type E4NF Class Exception Conditions.”

+

Additionally:

+ + + +
#UD If VEX.vvvv ≠ 1111B or EVEX.vvvv ≠ 1111B.
diff --git a/x86/pshufhw.html b/x86/pshufhw.html new file mode 100644 index 0000000..dd105d7 --- /dev/null +++ b/x86/pshufhw.html @@ -0,0 +1,211 @@ + +PSHUFHW + — Shuffle Packed High Words

PSHUFHW + — Shuffle Packed High Words

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 70 /r ib PSHUFHW xmm1, xmm2/m128, imm8AV/VSSE2Shuffle the high words in xmm2/m128 based on the encoding in imm8 and store the result in xmm1.
VEX.128.F3.0F.WIG 70 /r ib VPSHUFHW xmm1, xmm2/m128, imm8AV/VAVXShuffle the high words in xmm2/m128 based on the encoding in imm8 and store the result in xmm1.
VEX.256.F3.0F.WIG 70 /r ib VPSHUFHW ymm1, ymm2/m256, imm8AV/VAVX2Shuffle the high words in ymm2/m256 based on the encoding in imm8 and store the result in ymm1.
EVEX.128.F3.0F.WIG 70 /r ib VPSHUFHW xmm1 {k1}{z}, xmm2/m128, imm8BV/VAVX512VL AVX512BWShuffle the high words in xmm2/m128 based on the encoding in imm8 and store the result in xmm1 under write mask k1.
EVEX.256.F3.0F.WIG 70 /r ib VPSHUFHW ymm1 {k1}{z}, ymm2/m256, imm8BV/VAVX512VL AVX512BWShuffle the high words in ymm2/m256 based on the encoding in imm8 and store the result in ymm1 under write mask k1.
EVEX.512.F3.0F.WIG 70 /r ib VPSHUFHW zmm1 {k1}{z}, zmm2/m512, imm8BV/VAVX512BWShuffle the high words in zmm2/m512 based on the encoding in imm8 and store the result in zmm1 under write mask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)imm8N/A
BFull MemModRM:reg (w)ModRM:r/m (r)imm8N/A
+

Description + ¶ +

+

Copies words from the high quadword of a 128-bit lane of the source operand and inserts them in the high quadword of the destination operand at word locations (of the respective lane) selected with the immediate operand. This 256-bit operation is similar to the in-lane operation used by the 256-bit VPSHUFD instruction, which is illustrated in Figure 4-16. For 128-bit operation, only the low 128-bit lane is operative. Each 2-bit field in the immediate operand selects the contents of one word location in the high quadword of the destination operand. The binary encodings of the immediate operand fields select words (4, 5, 6, or 7) from the high quadword of the source operand to be copied to the destination operand. The low quadword of the source operand is copied to the low quadword of the destination operand, for each 128-bit lane.

+

Note that this instruction permits a word in the high quadword of the source operand to be copied to more than one word location in the high quadword of the destination operand.

+

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

128-bit Legacy SSE version: The destination operand is an XMM register. The source operand can be an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The destination operand is an XMM register. The source operand can be an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the destination YMM register are zeroed. VEX.vvvv is reserved and must be 1111b, VEX.L must be 0, otherwise the instruction will #UD.

+

VEX.256 encoded version: The destination operand is an YMM register. The source operand can be an YMM register or a 256-bit memory location.

+

EVEX encoded version: The destination operand is a ZMM/YMM/XMM register. The source operand can be a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination is updated according to the write-mask.

+

Note: In VEX encoded versions, VEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+

PSHUFHW (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := SRC[63:0]
+DEST[79:64] := (SRC >> (imm[1:0] *16))[79:64]
+DEST[95:80] := (SRC >> (imm[3:2] * 16))[79:64]
+DEST[111:96] := (SRC >> (imm[5:4] * 16))[79:64]
+DEST[127:112] := (SRC >> (imm[7:6] * 16))[79:64]
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPSHUFHW (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0]
+DEST[79:64] := (SRC1 >> (imm[1:0] *16))[79:64]
+DEST[95:80] := (SRC1 >> (imm[3:2] * 16))[79:64]
+DEST[111:96] := (SRC1 >> (imm[5:4] * 16))[79:64]
+DEST[127:112] := (SRC1 >> (imm[7:6] * 16))[79:64]
+DEST[MAXVL-1:128] := 0
+
+

VPSHUFHW (VEX.256 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0]
+DEST[79:64] := (SRC1 >> (imm[1:0] *16))[79:64]
+DEST[95:80] := (SRC1 >> (imm[3:2] * 16))[79:64]
+DEST[111:96] := (SRC1 >> (imm[5:4] * 16))[79:64]
+DEST[127:112] := (SRC1 >> (imm[7:6] * 16))[79:64]
+DEST[191:128] := SRC1[191:128]
+DEST[207:192] := (SRC1 >> (imm[1:0] *16))[207:192]
+DEST[223:208] := (SRC1 >> (imm[3:2] * 16))[207:192]
+DEST[239:224] := (SRC1 >> (imm[5:4] * 16))[207:192]
+DEST[255:240] := (SRC1 >> (imm[7:6] * 16))[207:192]
+DEST[MAXVL-1:256] := 0
+
+

VPSHUFHW (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+IF VL >= 128
+    TMP_DEST[63:0] := SRC1[63:0]
+    TMP_DEST[79:64] := (SRC1 >> (imm[1:0] *16))[79:64]
+    TMP_DEST[95:80] := (SRC1 >> (imm[3:2] * 16))[79:64]
+    TMP_DEST[111:96] := (SRC1 >> (imm[5:4] * 16))[79:64]
+    TMP_DEST[127:112] := (SRC1 >> (imm[7:6] * 16))[79:64]
+FI;
+IF VL >= 256
+    TMP_DEST[191:128] := SRC1[191:128]
+    TMP_DEST[207:192] := (SRC1 >> (imm[1:0] *16))[207:192]
+    TMP_DEST[223:208] := (SRC1 >> (imm[3:2] * 16))[207:192]
+    TMP_DEST[239:224] := (SRC1 >> (imm[5:4] * 16))[207:192]
+    TMP_DEST[255:240] := (SRC1 >> (imm[7:6] * 16))[207:192]
+FI;
+IF VL >= 512
+    TMP_DEST[319:256] := SRC1[319:256]
+    TMP_DEST[335:320] := (SRC1 >> (imm[1:0] *16))[335:320]
+    TMP_DEST[351:336] := (SRC1 >> (imm[3:2] * 16))[335:320]
+    TMP_DEST[367:352] := (SRC1 >> (imm[5:4] * 16))[335:320]
+    TMP_DEST[383:368] := (SRC1 >> (imm[7:6] * 16))[335:320]
+    TMP_DEST[447:384] := SRC1[447:384]
+    TMP_DEST[463:448] := (SRC1 >> (imm[1:0] *16))[463:448]
+    TMP_DEST[479:464] := (SRC1 >> (imm[3:2] * 16))[463:448]
+    TMP_DEST[495:480] := (SRC1 >> (imm[5:4] * 16))[463:448]
+    TMP_DEST[511:496] := (SRC1 >> (imm[7:6] * 16))[463:448]
+FI;
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := TMP_DEST[i+15:i];
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPSHUFHW __m512i _mm512_shufflehi_epi16(__m512i a, int n);
+
+
VPSHUFHW __m512i _mm512_mask_shufflehi_epi16(__m512i s, __mmask16 k, __m512i a, int n );
+
+
VPSHUFHW __m512i _mm512_maskz_shufflehi_epi16( __mmask16 k, __m512i a, int n );
+
+
VPSHUFHW __m256i _mm256_mask_shufflehi_epi16(__m256i s, __mmask8 k, __m256i a, int n );
+
+
VPSHUFHW __m256i _mm256_maskz_shufflehi_epi16( __mmask8 k, __m256i a, int n );
+
+
VPSHUFHW __m128i _mm_mask_shufflehi_epi16(__m128i s, __mmask8 k, __m128i a, int n );
+
+
VPSHUFHW __m128i _mm_maskz_shufflehi_epi16( __mmask8 k, __m128i a, int n );
+
+
(V)PSHUFHW __m128i _mm_shufflehi_epi16(__m128i a, int n)
+
+
VPSHUFHW __m256i _mm256_shufflehi_epi16(__m256i a, const int n)
+
+
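A minimal sketch under the same assumptions as the PSHUFD example: only the four high words are permuted, and the low quadword passes through unchanged.

#include <emmintrin.h>

/* Reverse words 4..7; words 0..3 are copied through unchanged. */
__m128i reverse_high_words(__m128i v)
{
    return _mm_shufflehi_epi16(v, _MM_SHUFFLE(0, 1, 2, 3));   /* destination words 4..7 take source words 7,6,5,4 */
}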

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”

+

Additionally:

+ + + +
#UD If VEX.vvvv != 1111B, or EVEX.vvvv != 1111B.
diff --git a/x86/pshuflw.html b/x86/pshuflw.html new file mode 100644 index 0000000..de53db9 --- /dev/null +++ b/x86/pshuflw.html @@ -0,0 +1,211 @@ + +PSHUFLW + — Shuffle Packed Low Words

PSHUFLW + — Shuffle Packed Low Words

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
F2 0F 70 /r ib PSHUFLW xmm1, xmm2/m128, imm8AV/VSSE2Shuffle the low words in xmm2/m128 based on the encoding in imm8 and store the result in xmm1.
VEX.128.F2.0F.WIG 70 /r ib VPSHUFLW xmm1, xmm2/m128, imm8AV/VAVXShuffle the low words in xmm2/m128 based on the encoding in imm8 and store the result in xmm1.
VEX.256.F2.0F.WIG 70 /r ib VPSHUFLW ymm1, ymm2/m256, imm8AV/VAVX2Shuffle the low words in ymm2/m256 based on the encoding in imm8 and store the result in ymm1.
EVEX.128.F2.0F.WIG 70 /r ib VPSHUFLW xmm1 {k1}{z}, xmm2/m128, imm8BV/VAVX512VL AVX512BWShuffle the low words in xmm2/m128 based on the encoding in imm8 and store the result in xmm1 under write mask k1.
EVEX.256.F2.0F.WIG 70 /r ib VPSHUFLW ymm1 {k1}{z}, ymm2/m256, imm8BV/VAVX512VL AVX512BWShuffle the low words in ymm2/m256 based on the encoding in imm8 and store the result in ymm1 under write mask k1.
EVEX.512.F2.0F.WIG 70 /r ib VPSHUFLW zmm1 {k1}{z}, zmm2/m512, imm8BV/VAVX512BWShuffle the low words in zmm2/m512 based on the encoding in imm8 and store the result in zmm1 under write mask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)imm8N/A
BFull MemModRM:reg (w)ModRM:r/m (r)imm8N/A
+

Description + ¶ +

+

Copies words from the low quadword of a 128-bit lane of the source operand and inserts them in the low quadword of the destination operand at word locations (of the respective lane) selected with the immediate operand. The 256-bit operation is similar to the in-lane operation used by the 256-bit VPSHUFD instruction, which is illustrated in Figure 4-16. For 128-bit operation, only the low 128-bit lane is operative. Each 2-bit field in the immediate operand selects the contents of one word location in the low quadword of the destination operand. The binary encodings of the immediate operand fields select words (0, 1, 2 or 3) from the low quadword of the source operand to be copied to the destination operand. The high quadword of the source operand is copied to the high quadword of the destination operand, for each 128-bit lane.

+

Note that this instruction permits a word in the low quadword of the source operand to be copied to more than one word location in the low quadword of the destination operand.

+

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

128-bit Legacy SSE version: The destination operand is an XMM register. The source operand can be an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The destination operand is an XMM register. The source operand can be an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

VEX.256 encoded version: The destination operand is an YMM register. The source operand can be an YMM register or a 256-bit memory location.

+

EVEX encoded version: The destination operand is a ZMM/YMM/XMM register. The source operand can be a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination is updated according to the write-mask.

+

Note: In VEX encoded versions, VEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+

PSHUFLW (128-bit Legacy SSE Version) + ¶ +

+
DEST[15:0] := (SRC >> (imm[1:0] *16))[15:0]
+DEST[31:16] := (SRC >> (imm[3:2] * 16))[15:0]
+DEST[47:32] := (SRC >> (imm[5:4] * 16))[15:0]
+DEST[63:48] := (SRC >> (imm[7:6] * 16))[15:0]
+DEST[127:64] := SRC[127:64]
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPSHUFLW (VEX.128 Encoded Version) + ¶ +

+
DEST[15:0] := (SRC1 >> (imm[1:0] *16))[15:0]
+DEST[31:16] := (SRC1 >> (imm[3:2] * 16))[15:0]
+DEST[47:32] := (SRC1 >> (imm[5:4] * 16))[15:0]
+DEST[63:48] := (SRC1 >> (imm[7:6] * 16))[15:0]
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VPSHUFLW (VEX.256 Encoded Version) + ¶ +

+
DEST[15:0] := (SRC1 >> (imm[1:0] *16))[15:0]
+DEST[31:16] := (SRC1 >> (imm[3:2] * 16))[15:0]
+DEST[47:32] := (SRC1 >> (imm[5:4] * 16))[15:0]
+DEST[63:48] := (SRC1 >> (imm[7:6] * 16))[15:0]
+DEST[127:64] := SRC1[127:64]
+DEST[143:128] := (SRC1 >> (imm[1:0] *16))[143:128]
+DEST[159:144] := (SRC1 >> (imm[3:2] * 16))[143:128]
+DEST[175:160] := (SRC1 >> (imm[5:4] * 16))[143:128]
+DEST[191:176] := (SRC1 >> (imm[7:6] * 16))[143:128]
+DEST[255:192] := SRC1[255:192]
+DEST[MAXVL-1:256] := 0
+
+

VPSHUFLW (EVEX.U1.512 Encoded Version) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+IF VL >= 128
+    TMP_DEST[15:0] := (SRC1 >> (imm[1:0] *16))[15:0]
+    TMP_DEST[31:16] := (SRC1 >> (imm[3:2] * 16))[15:0]
+    TMP_DEST[47:32] := (SRC1 >> (imm[5:4] * 16))[15:0]
+    TMP_DEST[63:48] := (SRC1 >> (imm[7:6] * 16))[15:0]
+    TMP_DEST[127:64] := SRC1[127:64]
+FI;
+IF VL >= 256
+    TMP_DEST[143:128] := (SRC1 >> (imm[1:0] *16))[143:128]
+    TMP_DEST[159:144] := (SRC1 >> (imm[3:2] * 16))[143:128]
+    TMP_DEST[175:160] := (SRC1 >> (imm[5:4] * 16))[143:128]
+    TMP_DEST[191:176] := (SRC1 >> (imm[7:6] * 16))[143:128]
+    TMP_DEST[255:192] := SRC1[255:192]
+FI;
+IF VL >= 512
+    TMP_DEST[271:256] := (SRC1 >> (imm[1:0] *16))[271:256]
+    TMP_DEST[287:272] := (SRC1 >> (imm[3:2] * 16))[271:256]
+    TMP_DEST[303:288] := (SRC1 >> (imm[5:4] * 16))[271:256]
+    TMP_DEST[319:304] := (SRC1 >> (imm[7:6] * 16))[271:256]
+    TMP_DEST[383:320] := SRC1[383:320]
+    TMP_DEST[399:384] := (SRC1 >> (imm[1:0] *16))[399:384]
+    TMP_DEST[415:400] := (SRC1 >> (imm[3:2] * 16))[399:384]
+    TMP_DEST[431:416] := (SRC1 >> (imm[5:4] * 16))[399:384]
+    TMP_DEST[447:432] := (SRC1 >> (imm[7:6] * 16))[399:384]
+    TMP_DEST[511:448] := SRC1[511:448]
+FI;
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := TMP_DEST[i+15:i];
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPSHUFLW __m512i _mm512_shufflelo_epi16(__m512i a, int n);
+
+
VPSHUFLW __m512i _mm512_mask_shufflelo_epi16(__m512i s, __mmask16 k, __m512i a, int n );
+
+
VPSHUFLW __m512i _mm512_maskz_shufflelo_epi16( __mmask16 k, __m512i a, int n );
+
+
VPSHUFLW __m256i _mm256_mask_shufflelo_epi16(__m256i s, __mmask8 k, __m256i a, int n );
+
+
VPSHUFLW __m256i _mm256_maskz_shufflelo_epi16( __mmask8 k, __m256i a, int n );
+
+
VPSHUFLW __m128i _mm_mask_shufflelo_epi16(__m128i s, __mmask8 k, __m128i a, int n );
+
+
VPSHUFLW __m128i _mm_maskz_shufflelo_epi16( __mmask8 k, __m128i a, int n );
+
+
(V)PSHUFLW:__m128i _mm_shufflelo_epi16(__m128i a, int n)
+
+
VPSHUFLW:__m256i _mm256_shufflelo_epi16(__m256i a, const int n)
+
+
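A minimal sketch (assumed usage): the low quadword is permuted and the high quadword passes through unchanged.

#include <emmintrin.h>

/* Reverse words 0..3; words 4..7 are copied through unchanged. */
__m128i reverse_low_words(__m128i v)
{
    return _mm_shufflelo_epi16(v, _MM_SHUFFLE(0, 1, 2, 3));   /* destination words 0..3 take source words 3,2,1,0 */
}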

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”

+

Additionally:

+ + + +
#UD If VEX.vvvv != 1111B, or EVEX.vvvv != 1111B.
diff --git a/x86/pshufw.html b/x86/pshufw.html new file mode 100644 index 0000000..92f7a2d --- /dev/null +++ b/x86/pshufw.html @@ -0,0 +1,69 @@ + +PSHUFW + — Shuffle Packed Words

PSHUFW + — Shuffle Packed Words

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
NP 0F 70 /r ib PSHUFW mm1, mm2/m64, imm8RMIValidValidShuffle the words in mm2/m64 based on the encoding in imm8 and store the result in mm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMIModRM:reg (w)ModRM:r/m (r)imm8N/A
+

Description + ¶ +

+

Copies words from the source operand (second operand) and inserts them in the destination operand (first operand) at word locations selected with the order operand (third operand). This operation is similar to the operation used by the PSHUFD instruction, which is illustrated in Figure 4-16. For the PSHUFW instruction, each 2-bit field in the order operand selects the contents of one word location in the destination operand. The encodings of the order operand fields select words from the source operand to be copied to the destination operand.

+

The source operand can be an MMX technology register or a 64-bit memory location. The destination operand is an MMX technology register. The order operand is an 8-bit immediate. Note that this instruction permits a word in the source operand to be copied to more than one word location in the destination operand.

+

In 64-bit mode, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

Operation + ¶ +

+
DEST[15:0] := (SRC >> (ORDER[1:0] * 16))[15:0];
+DEST[31:16] := (SRC >> (ORDER[3:2] * 16))[15:0];
+DEST[47:32] := (SRC >> (ORDER[5:4] * 16))[15:0];
+DEST[63:48] := (SRC >> (ORDER[7:6] * 16))[15:0];
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
PSHUFW __m64 _mm_shuffle_pi16(__m64 a, int n)
+
+
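A hedged sketch using the legacy MMX intrinsic; availability on modern 64-bit toolchains varies, and MMX state must be cleared with _mm_empty() before any x87 floating-point code, so treat the header and environment below as assumptions.

#include <xmmintrin.h>

/* Reverse the four words of an MMX register with PSHUFW. */
__m64 reverse_words_64(__m64 v)
{
    return _mm_shuffle_pi16(v, _MM_SHUFFLE(0, 1, 2, 3));   /* destination words 0..3 take source words 3,2,1,0 */
}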

Flags Affected + ¶ +

+

None.

+

Numeric Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 23-7, “Exception Conditions for SIMD/MMX Instructions with Memory Reference,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

diff --git a/x86/psignb.psignw.psignd.html b/x86/psignb.psignw.psignd.html new file mode 100644 index 0000000..e959711 --- /dev/null +++ b/x86/psignb.psignw.psignd.html @@ -0,0 +1,256 @@ + +PSIGNB/PSIGNW/PSIGND + — Packed SIGN

PSIGNB/PSIGNW/PSIGND + — Packed SIGN

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 38 08 /r1 PSIGNB mm1, mm2/m64RMV/VSSSE3Negate/zero/preserve packed byte integers in mm1 depending on the corresponding sign in mm2/m64.
66 0F 38 08 /r PSIGNB xmm1, xmm2/m128RMV/VSSSE3Negate/zero/preserve packed byte integers in xmm1 depending on the corresponding sign in xmm2/m128.
NP 0F 38 09 /r1 PSIGNW mm1, mm2/m64RMV/VSSSE3Negate/zero/preserve packed word integers in mm1 depending on the corresponding sign in mm2/m64.
66 0F 38 09 /r PSIGNW xmm1, xmm2/m128RMV/VSSSE3Negate/zero/preserve packed word integers in xmm1 depending on the corresponding sign in xmm2/m128.
NP 0F 38 0A /r1 PSIGND mm1, mm2/m64RMV/VSSSE3Negate/zero/preserve packed doubleword integers in mm1 depending on the corresponding sign in mm2/m64.
66 0F 38 0A /r PSIGND xmm1, xmm2/m128RMV/VSSSE3Negate/zero/preserve packed doubleword integers in xmm1 depending on the corresponding sign in xmm2/m128.
VEX.128.66.0F38.WIG 08 /r VPSIGNB xmm1, xmm2, xmm3/m128RVMV/VAVXNegate/zero/preserve packed byte integers in xmm2 depending on the corresponding sign in xmm3/m128.
VEX.128.66.0F38.WIG 09 /r VPSIGNW xmm1, xmm2, xmm3/m128RVMV/VAVXNegate/zero/preserve packed word integers in xmm2 depending on the corresponding sign in xmm3/m128.
VEX.128.66.0F38.WIG 0A /r VPSIGND xmm1, xmm2, xmm3/m128RVMV/VAVXNegate/zero/preserve packed doubleword integers in xmm2 depending on the corresponding sign in xmm3/m128.
VEX.256.66.0F38.WIG 08 /r VPSIGNB ymm1, ymm2, ymm3/m256RVMV/VAVX2Negate packed byte integers in ymm2 if the corresponding sign in ymm3/m256 is less than zero.
VEX.256.66.0F38.WIG 09 /r VPSIGNW ymm1, ymm2, ymm3/m256RVMV/VAVX2Negate packed 16-bit integers in ymm2 if the corresponding sign in ymm3/m256 is less than zero.
VEX.256.66.0F38.WIG 0A /r VPSIGND ymm1, ymm2, ymm3/m256RVMV/VAVX2Negate packed doubleword integers in ymm2 if the corresponding sign in ymm3/m256 is less than zero.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r, w)ModRM:r/m (r)N/AN/A
RVMModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

(V)PSIGNB/(V)PSIGNW/(V)PSIGND negates each data element of the destination operand (the first operand) if the signed integer value of the corresponding data element in the source operand (the second operand) is less than zero. If the signed integer value of a data element in the source operand is positive, the corresponding data element in the destination operand is unchanged. If a data element in the source operand is zero, the corresponding data element in the destination operand is set to zero.

+

(V)PSIGNB operates on signed bytes. (V)PSIGNW operates on 16-bit signed words. (V)PSIGND operates on signed 32-bit integers.

+

Legacy SSE instructions: Both operands can be MMX registers. In 64-bit mode, use the REX prefix to access additional registers.

+

128-bit Legacy SSE version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The first source and destination operands are XMM registers. The second source operand is an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the destination YMM register are zeroed. VEX.L must be 0, otherwise instructions will #UD.

+

VEX.256 encoded version: The first source and destination operands are YMM registers. The second source operand is an YMM register or a 256-bit memory location.

+

Operation + ¶ +

+
def byte_sign(control, input_val):
+    if control<0:
+        return negate(input_val)
+    elif control==0:
+        return 0
+    return input_val
+def word_sign(control, input_val):
+    if control<0:
+        return negate(input_val)
+    elif control==0:
+        return 0
+    return input_val
+def dword_sign(control, input_val):
+    if control<0:
+        return negate(input_val)
+    elif control==0:
+        return 0
+    return input_val
+
+

PSIGNB srcdest, src + ¶ +

+

// MMX 64-bit Operands + ¶ +

+
VL=64
+KL := VL/8
+for i in 0...KL-1:
+    srcdest.byte[i] := byte_sign(src.byte[i], srcdest.byte[i])
+
+

PSIGNW srcdest, src // MMX 64-bit Operands + ¶ +

+
VL=64
+KL := VL/16
+FOR i in 0...KL-1:
+    srcdest.word[i] := word_sign(src.word[i], srcdest.word[i])
+
+

PSIGND srcdest, src // MMX 64-bit Operands + ¶ +

+
VL=64
+KL := VL/32
+FOR i in 0...KL-1:
+    srcdest.dword[i] := dword_sign(src.dword[i], srcdest.dword[i])
+
+

PSIGNB srcdest, src // SSE 128-bit Operands + ¶ +

+
VL=128
+KL := VL/8
+FOR i in 0...KL-1:
+    srcdest.byte[i] := byte_sign(src.byte[i], srcdest.byte[i])
+
+

PSIGNW srcdest, src // SSE 128-bit Operands + ¶ +

+
VL=128
+KL := VL/16
+FOR i in 0...KL-1:
+    srcdest.word[i] := word_sign(src.word[i], srcdest.word[i])
+
+

PSIGND srcdest, src // SSE 128-bit Operands + ¶ +

+
VL=128
+KL := VL/32
+FOR i in 0...KL-1:
+    srcdest.dword[i] := dword_sign(src.dword[i], srcdest.dword[i])
+
+

VPSIGNB dest, src1, src2 // AVX 128-bit or 256-bit Operands + ¶ +

+
VL=(128,256)
+KL := VL/8
+FOR i in 0...KL-1:
+    dest.byte[i] := byte_sign(src2.byte[i], src1.byte[i])
+DEST[MAXVL-1:VL] := 0
+
+

VPSIGNW dest, src1, src2 // AVX 128-bit or 256-bit Operands + ¶ +

+
VL=(128,256)
+KL := VL/16
+FOR i in 0...KL-1:
+    dest.word[i] := word_sign(src2.word[i], src1.word[i])
+DEST[MAXVL-1:VL] := 0
+
+

VPSIGND dest, src1, src2 // AVX 128-bit or 256-bit Operands + ¶ +

+
VL=(128,256)
+KL := VL/32
+FOR i in 0...KL-1:
+    dest.dword[i] := dword_sign(src2.dword[i], src1.dword[i])
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
PSIGNB __m64 _mm_sign_pi8 (__m64 a, __m64 b)
+
+
(V)PSIGNB __m128i _mm_sign_epi8 (__m128i a, __m128i b)
+
+
VPSIGNB __m256i _mm256_sign_epi8 (__m256i a, __m256i b)
+
+
PSIGNW __m64 _mm_sign_pi16 (__m64 a, __m64 b)
+
+
(V)PSIGNW __m128i _mm_sign_epi16 (__m128i a, __m128i b)
+
+
VPSIGNW __m256i _mm256_sign_epi16 (__m256i a, __m256i b)
+
+
PSIGND __m64 _mm_sign_pi32 (__m64 a, __m64 b)
+
+
(V)PSIGND __m128i _mm_sign_epi32 (__m128i a, __m128i b)
+
+
VPSIGND __m256i _mm256_sign_epi32 (__m256i a, __m256i b)
+
+
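As an illustrative sketch (not manual text), the word form can transfer signs from one vector onto magnitudes in another, a pattern that appears in some DSP and codec kernels; the names below are placeholders.

#include <tmmintrin.h>

/* Per 16-bit element: negative sign -> negate the value, zero sign -> zero, positive sign -> keep the value.
   e.g., values {1,2,3,4,...} with signs {-5,0,7,-1,...} yields {-1,0,3,-4,...}. */
__m128i apply_signs_epi16(__m128i values, __m128i signs)
{
    return _mm_sign_epi16(values, signs);
}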

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions,” additionally:

+ + + +
#UD If VEX.L = 1.
diff --git a/x86/pslldq.html b/x86/pslldq.html new file mode 100644 index 0000000..00381da --- /dev/null +++ b/x86/pslldq.html @@ -0,0 +1,153 @@ + +PSLLDQ + — Shift Double Quadword Left Logical

PSLLDQ + — Shift Double Quadword Left Logical

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 73 /7 ib PSLLDQ xmm1, imm8AV/VSSE2Shift xmm1 left by imm8 bytes while shifting in 0s.
VEX.128.66.0F.WIG 73 /7 ib VPSLLDQ xmm1, xmm2, imm8BV/VAVXShift xmm2 left by imm8 bytes while shifting in 0s and store result in xmm1.
VEX.256.66.0F.WIG 73 /7 ib VPSLLDQ ymm1, ymm2, imm8BV/VAVX2Shift ymm2 left by imm8 bytes while shifting in 0s and store result in ymm1.
EVEX.128.66.0F.WIG 73 /7 ib VPSLLDQ xmm1,xmm2/ m128, imm8CV/VAVX512VL AVX512BWShift xmm2/m128 left by imm8 bytes while shifting in 0s and store result in xmm1.
EVEX.256.66.0F.WIG 73 /7 ib VPSLLDQ ymm1, ymm2/m256, imm8CV/VAVX512VL AVX512BWShift ymm2/m256 left by imm8 bytes while shifting in 0s and store result in ymm1.
EVEX.512.66.0F.WIG 73 /7 ib VPSLLDQ zmm1, zmm2/m512, imm8CV/VAVX512BWShift zmm2/m512 left by imm8 bytes while shifting in 0s and store result in zmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:r/m (r, w)imm8N/AN/A
BN/AVEX.vvvv (w)ModRM:r/m (r)imm8N/A
CFull MemEVEX.vvvv (w)ModRM:r/m (r)imm8N/A
+

Description + ¶ +

+

Shifts the destination operand (first operand) to the left by the number of bytes specified in the count operand (second operand). The empty low-order bytes are cleared (set to all 0s). If the value specified by the count operand is greater than 15, the destination operand is set to all 0s. The count operand is an 8-bit immediate.

+

128-bit Legacy SSE version: The source and destination operands are the same. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The source and destination operands are XMM registers. Bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

VEX.256 encoded version: The source operand is a YMM register. The destination operand is an YMM register. Bits (MAXVL-1:256) of the corresponding ZMM register are zeroed. The count operand applies to both the low and high 128-bit lanes.

+

EVEX encoded versions: The source operand is a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination operand is a ZMM/YMM/XMM register. The count operand applies to each 128-bit lane.

+

Operation + ¶ +

+

VPSLLDQ (EVEX.U1.512 Encoded Version) + ¶ +

+
TEMP := COUNT
+IF (TEMP > 15) THEN TEMP := 16; FI
+DEST[127:0] := SRC[127:0] << (TEMP * 8)
+DEST[255:128] := SRC[255:128] << (TEMP * 8)
+DEST[383:256] := SRC[383:256] << (TEMP * 8)
+DEST[511:384] := SRC[511:384] << (TEMP * 8)
+DEST[MAXVL-1:512] := 0
+
+

VPSLLDQ (VEX.256 and EVEX.256 Encoded Version) + ¶ +

+
TEMP := COUNT
+IF (TEMP > 15) THEN TEMP := 16; FI
+DEST[127:0] := SRC[127:0] << (TEMP * 8)
+DEST[255:128] := SRC[255:128] << (TEMP * 8)
+DEST[MAXVL-1:256] := 0
+
+

VPSLLDQ (VEX.128 and EVEX.128 Encoded Version) + ¶ +

+
TEMP := COUNT
+IF (TEMP > 15) THEN TEMP := 16; FI
+DEST := SRC << (TEMP * 8)
+DEST[MAXVL-1:128] := 0
+
+

PSLLDQ(128-bit Legacy SSE Version) + ¶ +

+
TEMP := COUNT
+IF (TEMP > 15) THEN TEMP := 16; FI
+DEST := DEST << (TEMP * 8)
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
(V)PSLLDQ __m128i _mm_slli_si128 ( __m128i a, int imm)
+
+
VPSLLDQ __m256i _mm256_slli_si256 ( __m256i a, const int imm)
+
+
VPSLLDQ __m512i _mm512_bslli_epi128 ( __m512i a, const int imm)
+
+
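As an illustrative usage sketch (not part of the reference text, and assuming an SSE2-capable C compiler), the 128-bit byte-granular shift is reached through the _mm_slli_si128 intrinsic listed above; the byte count must be a compile-time constant:

#include <stdio.h>
#include <emmintrin.h>              /* SSE2 intrinsics */

int main(void) {
    /* Bytes 0..15, byte 0 in the least-significant position. */
    __m128i a = _mm_setr_epi8(0, 1, 2, 3, 4, 5, 6, 7,
                              8, 9, 10, 11, 12, 13, 14, 15);
    /* PSLLDQ xmm, 4: shift the whole 128-bit value left by 4 bytes;
       the emptied low-order bytes become 0. */
    __m128i r = _mm_slli_si128(a, 4);
    unsigned char out[16];
    _mm_storeu_si128((__m128i *)out, r);
    for (int i = 0; i < 16; i++)
        printf("%u ", out[i]);      /* prints: 0 0 0 0 0 1 2 ... 11 */
    printf("\n");
    return 0;
}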

Flags Affected + ¶ +

+

None.

+

Numeric Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-24, “Type 7 Class Exception Conditions.”

+

EVEX-encoded instruction, see Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/psllw.pslld.psllq.html b/x86/psllw.pslld.psllq.html new file mode 100644 index 0000000..d01fb13 --- /dev/null +++ b/x86/psllw.pslld.psllq.html @@ -0,0 +1,964 @@ + +PSLLW/PSLLD/PSLLQ + — Shift Packed Data Left Logical

PSLLW/PSLLD/PSLLQ + — Shift Packed Data Left Logical

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F F1 /r1 PSLLW mm, mm/m64AV/VMMXShift words in mm left mm/m64 while shifting in 0s.
66 0F F1 /r PSLLW xmm1, xmm2/m128AV/VSSE2Shift words in xmm1 left by xmm2/m128 while shifting in 0s.
NP 0F 71 /6 ib PSLLW mm1, imm8BV/VMMXShift words in mm left by imm8 while shifting in 0s.
66 0F 71 /6 ib PSLLW xmm1, imm8BV/VSSE2Shift words in xmm1 left by imm8 while shifting in 0s.
NP 0F F2 /r1 PSLLD mm, mm/m64AV/VMMXShift doublewords in mm left by mm/m64 while shifting in 0s.
66 0F F2 /r PSLLD xmm1, xmm2/m128AV/VSSE2Shift doublewords in xmm1 left by xmm2/m128 while shifting in 0s.
NP 0F 72 /6 ib1 PSLLD mm, imm8BV/VMMXShift doublewords in mm left by imm8 while shifting in 0s.
66 0F 72 /6 ib PSLLD xmm1, imm8BV/VSSE2Shift doublewords in xmm1 left by imm8 while shifting in 0s.
NP 0F F3 /r1 PSLLQ mm, mm/m64AV/VMMXShift quadword in mm left by mm/m64 while shifting in 0s.
66 0F F3 /r PSLLQ xmm1, xmm2/m128AV/VSSE2Shift quadwords in xmm1 left by xmm2/m128 while shifting in 0s.
NP 0F 73 /6 ib1 PSLLQ mm, imm8BV/VMMXShift quadword in mm left by imm8 while shifting in 0s.
66 0F 73 /6 ib PSLLQ xmm1, imm8BV/VSSE2Shift quadwords in xmm1 left by imm8 while shifting in 0s.
VEX.128.66.0F.WIG F1 /r VPSLLW xmm1, xmm2, xmm3/m128CV/VAVXShift words in xmm2 left by amount specified in xmm3/m128 while shifting in 0s.
VEX.128.66.0F.WIG 71 /6 ib VPSLLW xmm1, xmm2, imm8DV/VAVXShift words in xmm2 left by imm8 while shifting in 0s.
VEX.128.66.0F.WIG F2 /r VPSLLD xmm1, xmm2, xmm3/m128CV/VAVXShift doublewords in xmm2 left by amount specified in xmm3/m128 while shifting in 0s.
VEX.128.66.0F.WIG 72 /6 ib VPSLLD xmm1, xmm2, imm8DV/VAVXShift doublewords in xmm2 left by imm8 while shifting in 0s.
VEX.128.66.0F.WIG F3 /r VPSLLQ xmm1, xmm2, xmm3/m128CV/VAVXShift quadwords in xmm2 left by amount specified in xmm3/m128 while shifting in 0s.
VEX.128.66.0F.WIG 73 /6 ib VPSLLQ xmm1, xmm2, imm8DV/VAVXShift quadwords in xmm2 left by imm8 while shifting in 0s.
VEX.256.66.0F.WIG F1 /r VPSLLW ymm1, ymm2, xmm3/m128CV/VAVX2Shift words in ymm2 left by amount specified in xmm3/m128 while shifting in 0s.
VEX.256.66.0F.WIG 71 /6 ib VPSLLW ymm1, ymm2, imm8DV/VAVX2Shift words in ymm2 left by imm8 while shifting in 0s.
VEX.256.66.0F.WIG F2 /r VPSLLD ymm1, ymm2, xmm3/m128CV/VAVX2Shift doublewords in ymm2 left by amount specified in xmm3/m128 while shifting in 0s.
VEX.256.66.0F.WIG 72 /6 ib VPSLLD ymm1, ymm2, imm8DV/VAVX2Shift doublewords in ymm2 left by imm8 while shifting in 0s.
VEX.256.66.0F.WIG F3 /r VPSLLQ ymm1, ymm2, xmm3/m128CV/VAVX2Shift quadwords in ymm2 left by amount specified in xmm3/m128 while shifting in 0s.
VEX.256.66.0F.WIG 73 /6 ib VPSLLQ ymm1, ymm2, imm8DV/VAVX2Shift quadwords in ymm2 left by imm8 while shifting in 0s.
EVEX.128.66.0F.WIG F1 /r VPSLLW xmm1 {k1}{z}, xmm2, xmm3/m128GV/VAVX512VL AVX512BWShift words in xmm2 left by amount specified in xmm3/m128 while shifting in 0s using writemask k1.
EVEX.256.66.0F.WIG F1 /r VPSLLW ymm1 {k1}{z}, ymm2, xmm3/m128GV/VAVX512VL AVX512BWShift words in ymm2 left by amount specified in xmm3/m128 while shifting in 0s using writemask k1.
EVEX.512.66.0F.WIG F1 /r VPSLLW zmm1 {k1}{z}, zmm2, xmm3/m128GV/VAVX512BWShift words in zmm2 left by amount specified in xmm3/m128 while shifting in 0s using writemask k1.
EVEX.128.66.0F.WIG 71 /6 ib VPSLLW xmm1 {k1}{z}, xmm2/m128, imm8EV/VAVX512VL AVX512BWShift words in xmm2/m128 left by imm8 while shifting in 0s using writemask k1.
EVEX.256.66.0F.WIG 71 /6 ib VPSLLW ymm1 {k1}{z}, ymm2/m256, imm8EV/VAVX512VL AVX512BWShift words in ymm2/m256 left by imm8 while shifting in 0s using writemask k1.
EVEX.512.66.0F.WIG 71 /6 ib VPSLLW zmm1 {k1}{z}, zmm2/m512, imm8EV/VAVX512BWShift words in zmm2/m512 left by imm8 while shifting in 0s using writemask k1.
EVEX.128.66.0F.W0 F2 /r VPSLLD xmm1 {k1}{z}, xmm2, xmm3/m128GV/VAVX512VL AVX512FShift doublewords in xmm2 left by amount specified in xmm3/m128 while shifting in 0s under writemask k1.
EVEX.256.66.0F.W0 F2 /r VPSLLD ymm1 {k1}{z}, ymm2, xmm3/m128GV/VAVX512VL AVX512FShift doublewords in ymm2 left by amount specified in xmm3/m128 while shifting in 0s under writemask k1.
EVEX.512.66.0F.W0 F2 /r VPSLLD zmm1 {k1}{z}, zmm2, xmm3/m128GV/VAVX512FShift doublewords in zmm2 left by amount specified in xmm3/m128 while shifting in 0s under writemask k1.
EVEX.128.66.0F.W0 72 /6 ib VPSLLD xmm1 {k1}{z}, xmm2/m128/m32bcst, imm8FV/VAVX512VL AVX512FShift doublewords in xmm2/m128/m32bcst left by imm8 while shifting in 0s using writemask k1.
EVEX.256.66.0F.W0 72 /6 ib VPSLLD ymm1 {k1}{z}, ymm2/m256/m32bcst, imm8FV/VAVX512VL AVX512FShift doublewords in ymm2/m256/m32bcst left by imm8 while shifting in 0s using writemask k1.
EVEX.512.66.0F.W0 72 /6 ib VPSLLD zmm1 {k1}{z}, zmm2/m512/m32bcst, imm8FV/VAVX512FShift doublewords in zmm2/m512/m32bcst left by imm8 while shifting in 0s using writemask k1.
EVEX.128.66.0F.W1 F3 /r VPSLLQ xmm1 {k1}{z}, xmm2, xmm3/m128GV/VAVX512VL AVX512FShift quadwords in xmm2 left by amount specified in xmm3/m128 while shifting in 0s using writemask k1.
EVEX.256.66.0F.W1 F3 /r VPSLLQ ymm1 {k1}{z}, ymm2, xmm3/m128GV/VAVX512VL AVX512FShift quadwords in ymm2 left by amount specified in xmm3/m128 while shifting in 0s using writemask k1.
EVEX.512.66.0F.W1 F3 /r VPSLLQ zmm1 {k1}{z}, zmm2, xmm3/m128GV/VAVX512FShift quadwords in zmm2 left by amount specified in xmm3/m128 while shifting in 0s using writemask k1.
EVEX.128.66.0F.W1 73 /6 ib VPSLLQ xmm1 {k1}{z}, xmm2/m128/m64bcst, imm8FV/VAVX512VL AVX512FShift quadwords in xmm2/m128/m64bcst left by imm8 while shifting in 0s using writemask k1.
EVEX.256.66.0F.W1 73 /6 ib VPSLLQ ymm1 {k1}{z}, ymm2/m256/m64bcst, imm8FV/VAVX512VL AVX512FShift quadwords in ymm2/m256/m64bcst left by imm8 while shifting in 0s using writemask k1.
EVEX.512.66.0F.W1 73 /6 ib VPSLLQ zmm1 {k1}{z}, zmm2/m512/m64bcst, imm8FV/VAVX512FShift quadwords in zmm2/m512/m64bcst left by imm8 while shifting in 0s using writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | N/A | ModRM:reg (r, w) | ModRM:r/m (r) | N/A | N/A
B | N/A | ModRM:r/m (r, w) | imm8 | N/A | N/A
C | N/A | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | N/A
D | N/A | VEX.vvvv (w) | ModRM:r/m (r) | imm8 | N/A
E | Full Mem | EVEX.vvvv (w) | ModRM:r/m (r) | imm8 | N/A
F | Full | EVEX.vvvv (w) | ModRM:r/m (r) | imm8 | N/A
G | Mem128 | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | N/A
+

Description + ¶ +

+

Shifts the bits in the individual data elements (words, doublewords, or quadword) in the destination operand (first operand) to the left by the number of bits specified in the count operand (second operand). As the bits in the data elements are shifted left, the empty low-order bits are cleared (set to 0). If the value specified by the count operand is greater than 15 (for words), 31 (for doublewords), or 63 (for a quadword), then the destination operand is set to all 0s. Figure 4-17 gives an example of shifting words in a 64-bit operand.

+
[Diagram: pre-shift DEST = X3 X2 X1 X0; each element is shifted left by COUNT with zero extension; post-shift DEST = X3 << COUNT ... X0 << COUNT]
Figure 4-17. PSLLW, PSLLD, and PSLLQ Instruction Operation Using 64-bit Operand
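As a hedged illustration of the saturation rule described above (a minimal C sketch assuming SSE2; not part of the reference text), the register-count form zeroes every element once the count exceeds the element width, unlike C's << operator, whose behavior is undefined for oversize shift counts:

#include <stdio.h>
#include <emmintrin.h>                        /* SSE2 intrinsics */

int main(void) {
    __m128i words = _mm_set1_epi16(0x1234);
    /* PSLLW with a count register: count = 17 > 15, so every word becomes 0. */
    __m128i big_count = _mm_cvtsi32_si128(17);
    __m128i r = _mm_sll_epi16(words, big_count);
    short out[8];
    _mm_storeu_si128((__m128i *)out, r);
    printf("%d\n", out[0]);                   /* prints 0 */
    return 0;
}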
+

The (V)PSLLW instruction shifts each of the words in the destination operand to the left by the number of bits specified in the count operand; the (V)PSLLD instruction shifts each of the doublewords in the destination operand; and the (V)PSLLQ instruction shifts the quadword (or quadwords) in the destination operand.

+

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

Legacy SSE instructions 64-bit operand: The destination operand is an MMX technology register; the count operand can be either an MMX technology register or a 64-bit memory location.

+

128-bit Legacy SSE version: The destination and first source operands are XMM registers. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged. The count operand can be either an XMM register or a 128-bit memory location or an 8-bit immediate. If the count operand is a memory address, 128 bits are loaded but the upper 64 bits are ignored.

+

VEX.128 encoded version: The destination and first source operands are XMM registers. Bits (MAXVL-1:128) of the destination YMM register are zeroed. The count operand can be either an XMM register or a 128-bit memory location or an 8-bit immediate. If the count operand is a memory address, 128 bits are loaded but the upper 64 bits are ignored.

+

VEX.256 encoded version: The destination operand is a YMM register. The source operand is a YMM register or a memory location. The count operand can come either from an XMM register or a memory location or an 8-bit immediate. Bits (MAXVL-1:256) of the corresponding ZMM register are zeroed.

+

EVEX encoded versions: The destination operand is a ZMM register updated according to the writemask. The count operand is either an 8-bit immediate (the immediate count version) or an 8-bit value from an XMM register or a memory location (the variable count version). For the immediate count version, the source operand (the second operand) can be a ZMM register, a 512-bit memory location or a 512-bit vector broadcasted from a 32/64-bit memory location. For the variable count version, the first source operand (the second operand) is a ZMM register, the second source operand (the third operand, 8-bit variable count) can be an XMM register or a memory location.
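For illustration only (a minimal sketch assuming an AVX-512F toolchain, using the standard _mm512_mask_/_mm512_maskz_ intrinsic forms listed later in this section): merging-masking keeps the previous destination element where the mask bit is 0, while zeroing-masking clears it.

#include <immintrin.h>              /* AVX-512F intrinsics */

void mask_shift_demo(void) {
    __m512i old = _mm512_set1_epi32(-1);        /* previous destination contents */
    __m512i a   = _mm512_set1_epi32(3);
    __mmask16 k = 0x00FF;                       /* operate on the low 8 doublewords only */

    /* Merging-masking: unselected elements keep the value from 'old'. */
    __m512i merged = _mm512_mask_slli_epi32(old, k, a, 2);

    /* Zeroing-masking ({z}): unselected elements are set to 0. */
    __m512i zeroed = _mm512_maskz_slli_epi32(k, a, 2);

    (void)merged; (void)zeroed;
}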

+

Note: In VEX/EVEX encoded versions of shifts with an immediate count, vvvv of VEX/EVEX encodes the destination register, and VEX.B/EVEX.B + ModRM.r/m encodes the source register.

+

Note: For shifts with an immediate count (VEX.128.66.0F 71-73 /6, or EVEX.128.66.0F 71-73 /6), VEX.vvvv/EVEX.vvvv encodes the destination register.

+

Operation + ¶ +

+

PSLLW (With 64-bit Operand) + ¶ +

+
    IF (COUNT > 15)
+    THEN
+        DEST[63:0] := 0000000000000000H;
+    ELSE
+        DEST[15:0] := ZeroExtend(DEST[15:0] << COUNT);
+        (* Repeat shift operation for 2nd and 3rd words *)
+        DEST[63:48] := ZeroExtend(DEST[63:48] << COUNT);
+    FI;
+PSLLD (with 64-bit operand)
+    IF (COUNT > 31)
+    THEN
+        DEST[63:0] := 0000000000000000H;
+    ELSE
+        DEST[31:0] := ZeroExtend(DEST[31:0] << COUNT);
+        DEST[63:32] := ZeroExtend(DEST[63:32] << COUNT);
+    FI;
+
+

PSLLQ (With 64-bit Operand) + ¶ +

+
    IF (COUNT > 63)
+    THEN
+        DEST[63:0] := 0000000000000000H;
+    ELSE
+        DEST := ZeroExtend(DEST << COUNT);
+    FI;
+LOGICAL_LEFT_SHIFT_WORDS(SRC, COUNT_SRC)
+COUNT := COUNT_SRC[63:0];
+IF (COUNT > 15)
+THEN
+    DEST[127:0] := 00000000000000000000000000000000H
+ELSE
+    DEST[15:0] := ZeroExtend(SRC[15:0] << COUNT);
+    (* Repeat shift operation for 2nd through 7th words *)
+    DEST[127:112] := ZeroExtend(SRC[127:112] << COUNT);
+FI;
+LOGICAL_LEFT_SHIFT_DWORDS1(SRC, COUNT_SRC)
+COUNT := COUNT_SRC[63:0];
+IF (COUNT > 31)
+THEN
+    DEST[31:0] := 0
+ELSE
+    DEST[31:0] := ZeroExtend(SRC[31:0] << COUNT);
+FI;
+LOGICAL_LEFT_SHIFT_DWORDS(SRC, COUNT_SRC)
+COUNT := COUNT_SRC[63:0];
+IF (COUNT > 31)
+THEN
+    DEST[127:0] := 00000000000000000000000000000000H
+ELSE
+    DEST[31:0] := ZeroExtend(SRC[31:0] << COUNT);
+    (* Repeat shift operation for 2nd through 3rd words *)
+    DEST[127:96] := ZeroExtend(SRC[127:96] << COUNT);
+FI;
+LOGICAL_LEFT_SHIFT_QWORDS1(SRC, COUNT_SRC)
+COUNT := COUNT_SRC[63:0];
+IF (COUNT > 63)
+THEN
+    DEST[63:0] := 0
+ELSE
+    DEST[63:0] := ZeroExtend(SRC[63:0] << COUNT);
+FI;
+LOGICAL_LEFT_SHIFT_QWORDS(SRC, COUNT_SRC)
+COUNT := COUNT_SRC[63:0];
+IF (COUNT > 63)
+THEN
+    DEST[127:0] := 00000000000000000000000000000000H
+ELSE
+    DEST[63:0] := ZeroExtend(SRC[63:0] << COUNT);
+    DEST[127:64] := ZeroExtend(SRC[127:64] << COUNT);
+FI;
+LOGICAL_LEFT_SHIFT_WORDS_256b(SRC, COUNT_SRC)
+COUNT := COUNT_SRC[63:0];
+IF (COUNT > 15)
+THEN
+    DEST[127:0] := 00000000000000000000000000000000H
+    DEST[255:128] := 00000000000000000000000000000000H
+ELSE
+    DEST[15:0] := ZeroExtend(SRC[15:0] << COUNT);
+    (* Repeat shift operation for 2nd through 15th words *)
+    DEST[255:240] := ZeroExtend(SRC[255:240] << COUNT);
+FI;
+LOGICAL_LEFT_SHIFT_DWORDS_256b(SRC, COUNT_SRC)
+COUNT := COUNT_SRC[63:0];
+IF (COUNT > 31)
+THEN
+    DEST[127:0] := 00000000000000000000000000000000H
+    DEST[255:128] := 00000000000000000000000000000000H
+ELSE
+    DEST[31:0] := ZeroExtend(SRC[31:0] << COUNT);
+    (* Repeat shift operation for 2nd through 7th words *)
+    DEST[255:224] := ZeroExtend(SRC[255:224] << COUNT);
+FI;
+LOGICAL_LEFT_SHIFT_QWORDS_256b(SRC, COUNT_SRC)
+COUNT := COUNT_SRC[63:0];
+IF (COUNT > 63)
+THEN
+    DEST[127:0] := 00000000000000000000000000000000H
+    DEST[255:128] := 00000000000000000000000000000000H
+ELSE
+    DEST[63:0] := ZeroExtend(SRC[63:0] << COUNT);
+    DEST[127:64] := ZeroExtend(SRC[127:64] << COUNT)
+    DEST[191:128] := ZeroExtend(SRC[191:128] << COUNT);
+    DEST[255:192] := ZeroExtend(SRC[255:192] << COUNT);
+FI;
+
+

VPSLLW (EVEX Versions, xmm/m128) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+IF VL = 128
+    TMP_DEST[127:0] := LOGICAL_LEFT_SHIFT_WORDS_128b(SRC1[127:0], SRC2)
+FI;
+IF VL = 256
+    TMP_DEST[255:0] := LOGICAL_LEFT_SHIFT_WORDS_256b(SRC1[255:0], SRC2)
+FI;
+IF VL = 512
+    TMP_DEST[255:0] := LOGICAL_LEFT_SHIFT_WORDS_256b(SRC1[255:0], SRC2)
+    TMP_DEST[511:256] := LOGICAL_LEFT_SHIFT_WORDS_256b(SRC1[511:256], SRC2)
+FI;
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := TMP_DEST[i+15:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+15:i] = 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPSLLW (EVEX Versions, imm8) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+IF VL = 128
+    TMP_DEST[127:0] := LOGICAL_LEFT_SHIFT_WORDS_128b(SRC1[127:0], imm8)
+FI;
+IF VL = 256
+    TMP_DEST[255:0] := LOGICAL_LEFT_SHIFT_WORDS_256b(SRC1[255:0], imm8)
+FI;
+IF VL = 512
+    TMP_DEST[255:0] := LOGICAL_LEFT_SHIFT_WORDS_256b(SRC1[255:0], imm8)
+    TMP_DEST[511:256] := LOGICAL_LEFT_SHIFT_WORDS_256b(SRC1[511:256], imm8)
+FI;
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := TMP_DEST[i+15:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+15:i] = 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPSLLW (ymm, ymm, xmm/m128) - VEX.256 Encoding + ¶ +

+
DEST[255:0] := LOGICAL_LEFT_SHIFT_WORDS_256b(SRC1, SRC2)
+DEST[MAXVL-1:256] := 0;
+
+

VPSLLW (ymm, imm8) - VEX.256 Encoding + ¶ +

+
DEST[255:0] := LOGICAL_LEFT_SHIFT_WORDS_256b(SRC1, imm8)
+DEST[MAXVL-1:256] := 0;
+
+

VPSLLW (xmm, xmm, xmm/m128) - VEX.128 Encoding + ¶ +

+
DEST[127:0] := LOGICAL_LEFT_SHIFT_WORDS(SRC1, SRC2)
+DEST[MAXVL-1:128] := 0
+
+

VPSLLW (xmm, imm8) - VEX.128 Encoding + ¶ +

+
DEST[127:0] := LOGICAL_LEFT_SHIFT_WORDS(SRC1, imm8)
+DEST[MAXVL-1:128] := 0
+
+

PSLLW (xmm, xmm, xmm/m128) + ¶ +

+
DEST[127:0] := LOGICAL_LEFT_SHIFT_WORDS(DEST, SRC)
+DEST[MAXVL-1:128] (Unmodified)
+
+

PSLLW (xmm, imm8) + ¶ +

+
DEST[127:0] := LOGICAL_LEFT_SHIFT_WORDS(DEST, imm8)
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPSLLD (EVEX versions, imm8) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC1 *is memory*)
+                THEN DEST[i+31:i] := LOGICAL_LEFT_SHIFT_DWORDS1(SRC1[31:0], imm8)
+                ELSE DEST[i+31:i] := LOGICAL_LEFT_SHIFT_DWORDS1(SRC1[i+31:i], imm8)
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPSLLD (EVEX Versions, xmm/m128) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF VL = 128
+    TMP_DEST[127:0] := LOGICAL_LEFT_SHIFT_DWORDS_128b(SRC1[127:0], SRC2)
+FI;
+IF VL = 256
+    TMP_DEST[255:0] := LOGICAL_LEFT_SHIFT_DWORDS_256b(SRC1[255:0], SRC2)
+FI;
+IF VL = 512
+    TMP_DEST[255:0] := LOGICAL_LEFT_SHIFT_DWORDS_256b(SRC1[255:0], SRC2)
+    TMP_DEST[511:256] := LOGICAL_LEFT_SHIFT_DWORDS_256b(SRC1[511:256], SRC2)
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking* ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPSLLD (ymm, ymm, xmm/m128) - VEX.256 Encoding + ¶ +

+
DEST[255:0] := LOGICAL_LEFT_SHIFT_DWORDS_256b(SRC1, SRC2)
+DEST[MAXVL-1:256] := 0;
+
+

VPSLLD (ymm, imm8) - VEX.256 Encoding + ¶ +

+
DEST[255:0] := LOGICAL_LEFT_SHIFT_DWORDS_256b(SRC1, imm8)
+DEST[MAXVL-1:256] := 0;
+
+

VPSLLD (xmm, xmm, xmm/m128) - VEX.128 Encoding + ¶ +

+
DEST[127:0] := LOGICAL_LEFT_SHIFT_DWORDS(SRC1, SRC2)
+DEST[MAXVL-1:128] := 0
+
+

VPSLLD (xmm, imm8) - VEX.128 Encoding + ¶ +

+
DEST[127:0] := LOGICAL_LEFT_SHIFT_DWORDS(SRC1, imm8)
+DEST[MAXVL-1:128] := 0
+
+

PSLLD (xmm, xmm, xmm/m128) + ¶ +

+
DEST[127:0] := LOGICAL_LEFT_SHIFT_DWORDS(DEST, SRC)
+DEST[MAXVL-1:128] (Unmodified)
+
+

PSLLD (xmm, imm8) + ¶ +

+
DEST[127:0] := LOGICAL_LEFT_SHIFT_DWORDS(DEST, imm8)
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPSLLQ (EVEX Versions, imm8) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC1 *is memory*)
+                THEN DEST[i+63:i] := LOGICAL_LEFT_SHIFT_QWORDS1(SRC1[63:0], imm8)
+                ELSE DEST[i+63:i] := LOGICAL_LEFT_SHIFT_QWORDS1(SRC1[i+63:i], imm8)
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPSLLQ (EVEX Versions, xmm/m128) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF VL = 128
+    TMP_DEST[127:0] := LOGICAL_LEFT_SHIFT_QWORDS_128b(SRC1[127:0], SRC2)
+FI;
+IF VL = 256
+    TMP_DEST[255:0] := LOGICAL_LEFT_SHIFT_QWORDS_256b(SRC1[255:0], SRC2)
+FI;
+IF VL = 512
+    TMP_DEST[255:0] := LOGICAL_LEFT_SHIFT_QWORDS_256b(SRC1[255:0], SRC2)
+    TMP_DEST[511:256] := LOGICAL_LEFT_SHIFT_QWORDS_256b(SRC1[511:256], SRC2)
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPSLLQ (ymm, ymm, xmm/m128) - VEX.256 Encoding + ¶ +

+
DEST[255:0] := LOGICAL_LEFT_SHIFT_QWORDS_256b(SRC1, SRC2)
+DEST[MAXVL-1:256] := 0;
+
+

VPSLLQ (ymm, imm8) - VEX.256 Encoding + ¶ +

+
DEST[255:0] := LOGICAL_LEFT_SHIFT_QWORDS_256b(SRC1, imm8)
+DEST[MAXVL-1:256] := 0;
+
+

VPSLLQ (xmm, xmm, xmm/m128) - VEX.128 Encoding + ¶ +

+
DEST[127:0] := LOGICAL_LEFT_SHIFT_QWORDS(SRC1, SRC2)
+DEST[MAXVL-1:128] := 0
+
+

VPSLLQ (xmm, imm8) - VEX.128 Encoding + ¶ +

+
DEST[127:0] := LOGICAL_LEFT_SHIFT_QWORDS(SRC1, imm8)
+DEST[MAXVL-1:128] := 0
+
+

PSLLQ (xmm, xmm, xmm/m128) + ¶ +

+
DEST[127:0] := LOGICAL_LEFT_SHIFT_QWORDS(DEST, SRC)
+DEST[MAXVL-1:128] (Unmodified)
+
+

PSLLQ (xmm, imm8) + ¶ +

+
DEST[127:0] := LOGICAL_LEFT_SHIFT_QWORDS(DEST, imm8)
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPSLLD __m512i _mm512_slli_epi32(__m512i a, unsigned int imm);
+
+
VPSLLD __m512i _mm512_mask_slli_epi32(__m512i s, __mmask16 k, __m512i a, unsigned int imm);
+
+
VPSLLD __m512i _mm512_maskz_slli_epi32( __mmask16 k, __m512i a, unsigned int imm);
+
+
VPSLLD __m256i _mm256_mask_slli_epi32(__m256i s, __mmask8 k, __m256i a, unsigned int imm);
+
+
VPSLLD __m256i _mm256_maskz_slli_epi32( __mmask8 k, __m256i a, unsigned int imm);
+
+
VPSLLD __m128i _mm_mask_slli_epi32(__m128i s, __mmask8 k, __m128i a, unsigned int imm);
+
+
VPSLLD __m128i _mm_maskz_slli_epi32( __mmask8 k, __m128i a, unsigned int imm);
+
+
VPSLLD __m512i _mm512_sll_epi32(__m512i a, __m128i cnt);
+
+
VPSLLD __m512i _mm512_mask_sll_epi32(__m512i s, __mmask16 k, __m512i a, __m128i cnt);
+
+
VPSLLD __m512i _mm512_maskz_sll_epi32( __mmask16 k, __m512i a, __m128i cnt);
+
+
VPSLLD __m256i _mm256_mask_sll_epi32(__m256i s, __mmask8 k, __m256i a, __m128i cnt);
+
+
VPSLLD __m256i _mm256_maskz_sll_epi32( __mmask8 k, __m256i a, __m128i cnt);
+
+
VPSLLD __m128i _mm_mask_sll_epi32(__m128i s, __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSLLD __m128i _mm_maskz_sll_epi32( __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSLLQ __m512i _mm512_slli_epi64(__m512i a, unsigned int imm);
+
+
VPSLLQ __m512i _mm512_mask_slli_epi64(__m512i s, __mmask8 k, __m512i a, unsigned int imm);
+
+
VPSLLQ __m512i _mm512_maskz_slli_epi64( __mmask8 k, __m512i a, unsigned int imm);
+
+
VPSLLQ __m256i _mm256_mask_slli_epi64(__m256i s, __mmask8 k, __m256i a, unsigned int imm);
+
+
VPSLLQ __m256i _mm256_maskz_slli_epi64( __mmask8 k, __m256i a, unsigned int imm);
+
+
VPSLLQ __m128i _mm_mask_slli_epi64(__m128i s, __mmask8 k, __m128i a, unsigned int imm);
+
+
VPSLLQ __m128i _mm_maskz_slli_epi64( __mmask8 k, __m128i a, unsigned int imm);
+
+
VPSLLQ __m512i _mm512_sll_epi64(__m512i a, __m128i cnt);
+
+
VPSLLQ __m512i _mm512_mask_sll_epi64(__m512i s, __mmask8 k, __m512i a, __m128i cnt);
+
+
VPSLLQ __m512i _mm512_maskz_sll_epi64( __mmask8 k, __m512i a, __m128i cnt);
+
+
VPSLLQ __m256i _mm256_mask_sll_epi64(__m256i s, __mmask8 k, __m256i a, __m128i cnt);
+
+
VPSLLQ __m256i _mm256_maskz_sll_epi64( __mmask8 k, __m256i a, __m128i cnt);
+
+
VPSLLQ __m128i _mm_mask_sll_epi64(__m128i s, __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSLLQ __m128i _mm_maskz_sll_epi64( __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSLLW __m512i _mm512_slli_epi16(__m512i a, unsigned int imm);
+
+
VPSLLW __m512i _mm512_mask_slli_epi16(__m512i s, __mmask32 k, __m512i a, unsigned int imm);
+
+
VPSLLW __m512i _mm512_maskz_slli_epi16( __mmask32 k, __m512i a, unsigned int imm);
+
+
VPSLLW __m256i _mm256_mask_slli_epi16(__m256i s, __mmask16 k, __m256i a, unsigned int imm);
+
+
VPSLLW __m256i _mm256_maskz_slli_epi16( __mmask16 k, __m256i a, unsigned int imm);
+
+
VPSLLW __m128i _mm_mask_slli_epi16(__m128i s, __mmask8 k, __m128i a, unsigned int imm);
+
+
VPSLLW __m128i _mm_maskz_slli_epi16( __mmask8 k, __m128i a, unsigned int imm);
+
+
VPSLLW __m512i _mm512_sll_epi16(__m512i a, __m128i cnt);
+
+
VPSLLW __m512i _mm512_mask_sll_epi16(__m512i s, __mmask32 k, __m512i a, __m128i cnt);
+
+
VPSLLW __m512i _mm512_maskz_sll_epi16( __mmask32 k, __m512i a, __m128i cnt);
+
+
VPSLLW __m256i _mm256_mask_sll_epi16(__m256i s, __mmask16 k, __m256i a, __m128i cnt);
+
+
VPSLLW __m256i _mm256_maskz_sll_epi16( __mmask16 k, __m256i a, __m128i cnt);
+
+
VPSLLW __m128i _mm_mask_sll_epi16(__m128i s, __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSLLW __m128i _mm_maskz_sll_epi16( __mmask8 k, __m128i a, __m128i cnt);
+
+
PSLLW __m64 _mm_slli_pi16 (__m64 m, int count)
+
+
PSLLW __m64 _mm_sll_pi16(__m64 m, __m64 count)
+
+
(V)PSLLW __m128i _mm_slli_epi16(__m128i m, int count)
+
+
(V)PSLLW __m128i _mm_sll_epi16(__m128i m, __m128i count)
+
+
VPSLLW __m256i _mm256_slli_epi16 (__m256i m, int count)
+
+
VPSLLW __m256i _mm256_sll_epi16 (__m256i m, __m128i count)
+
+
PSLLD __m64 _mm_slli_pi32(__m64 m, int count)
+
+
PSLLD __m64 _mm_sll_pi32(__m64 m, __m64 count)
+
+
(V)PSLLD __m128i _mm_slli_epi32(__m128i m, int count)
+
+
(V)PSLLD __m128i _mm_sll_epi32(__m128i m, __m128i count)
+
+
VPSLLD __m256i _mm256_slli_epi32 (__m256i m, int count)
+
+
VPSLLD __m256i _mm256_sll_epi32 (__m256i m, __m128i count)
+
+
PSLLQ __m64 _mm_slli_si64(__m64 m, int count)
+
+
PSLLQ __m64 _mm_sll_si64(__m64 m, __m64 count)
+
+
(V)PSLLQ __m128i _mm_slli_epi64(__m128i m, int count)
+
+
(V)PSLLQ __m128i _mm_sll_epi64(__m128i m, __m128i count)
+
+
VPSLLQ __m256i _mm256_slli_epi64 (__m256i m, int count)
+
+
VPSLLQ __m256i _mm256_sll_epi64 (__m256i m, __m128i count)
+
+
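A brief usage note (an illustrative sketch only, assuming SSE2; not part of the reference text): the _slli_ intrinsics correspond to the immediate-count encodings, while the _sll_ intrinsics take the count from the low quadword of an XMM operand, so a runtime count is first moved into a vector register:

#include <emmintrin.h>                        /* SSE2 intrinsics */

__m128i shift_dwords(__m128i v, int n) {
    /* Immediate-count form (PSLLD xmm, imm8), normally used with a constant. */
    __m128i by_four = _mm_slli_epi32(v, 4);
    (void)by_four;

    /* Variable-count form (PSLLD xmm, xmm/m128): the runtime count goes into
       the low 64 bits of an XMM register; higher bits are not used. */
    __m128i count = _mm_cvtsi32_si128(n);
    return _mm_sll_epi32(v, count);
}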

Flags Affected + ¶ +

+

None.

+

Numeric Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+
    +
  • VEX-encoded instructions:
    • Syntax with RM/RVM operand encoding (A/C in the operand encoding table), see Table 2-21, “Type 4 Class Exception Conditions.”
    • Syntax with MI/VMI operand encoding (B/D in the operand encoding table), see Table 2-24, “Type 7 Class Exception Conditions.”
  • EVEX-encoded VPSLLW (E in the operand encoding table), see Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”
  • EVEX-encoded VPSLLD/Q:
    • Syntax with Mem128 tuple type (G in the operand encoding table), see Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”
    • Syntax with Full tuple type (F in the operand encoding table), see Table 2-49, “Type E4 Class Exception Conditions.”
diff --git a/x86/psraw.psrad.psraq.html b/x86/psraw.psrad.psraq.html new file mode 100644 index 0000000..f3cb8f6 --- /dev/null +++ b/x86/psraw.psrad.psraq.html @@ -0,0 +1,824 @@ + +PSRAW/PSRAD/PSRAQ + — Shift Packed Data Right Arithmetic

PSRAW/PSRAD/PSRAQ + — Shift Packed Data Right Arithmetic

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F E1 /r1 PSRAW mm, mm/m64AV/VMMXShift words in mm right by mm/m64 while shifting in sign bits.
66 0F E1 /r PSRAW xmm1, xmm2/m128AV/VSSE2Shift words in xmm1 right by xmm2/m128 while shifting in sign bits.
NP 0F 71 /4 ib1 PSRAW mm, imm8BV/VMMXShift words in mm right by imm8 while shifting in sign bits
66 0F 71 /4 ib PSRAW xmm1, imm8BV/VSSE2Shift words in xmm1 right by imm8 while shifting in sign bits
NP 0F E2 /r1 PSRAD mm, mm/m64AV/VMMXShift doublewords in mm right by mm/m64 while shifting in sign bits.
66 0F E2 /r PSRAD xmm1, xmm2/m128AV/VSSE2Shift doubleword in xmm1 right by xmm2 /m128 while shifting in sign bits.
NP 0F 72 /4 ib1 PSRAD mm, imm8BV/VMMXShift doublewords in mm right by imm8 while shifting in sign bits.
66 0F 72 /4 ib PSRAD xmm1, imm8BV/VSSE2Shift doublewords in xmm1 right by imm8 while shifting in sign bits.
VEX.128.66.0F.WIG E1 /r VPSRAW xmm1, xmm2, xmm3/m128CV/VAVXShift words in xmm2 right by amount specified in xmm3/m128 while shifting in sign bits.
VEX.128.66.0F.WIG 71 /4 ib VPSRAW xmm1, xmm2, imm8DV/VAVXShift words in xmm2 right by imm8 while shifting in sign bits.
VEX.128.66.0F.WIG E2 /r VPSRAD xmm1, xmm2, xmm3/m128CV/VAVXShift doublewords in xmm2 right by amount specified in xmm3/m128 while shifting in sign bits.
VEX.128.66.0F.WIG 72 /4 ib VPSRAD xmm1, xmm2, imm8DV/VAVXShift doublewords in xmm2 right by imm8 while shifting in sign bits.
VEX.256.66.0F.WIG E1 /r VPSRAW ymm1, ymm2, xmm3/m128CV/VAVX2Shift words in ymm2 right by amount specified in xmm3/m128 while shifting in sign bits.
VEX.256.66.0F.WIG 71 /4 ib VPSRAW ymm1, ymm2, imm8DV/VAVX2Shift words in ymm2 right by imm8 while shifting in sign bits.
VEX.256.66.0F.WIG E2 /r VPSRAD ymm1, ymm2, xmm3/m128CV/VAVX2Shift doublewords in ymm2 right by amount specified in xmm3/m128 while shifting in sign bits.
VEX.256.66.0F.WIG 72 /4 ib VPSRAD ymm1, ymm2, imm8DV/VAVX2Shift doublewords in ymm2 right by imm8 while shifting in sign bits.
EVEX.128.66.0F.WIG E1 /r VPSRAW xmm1 {k1}{z}, xmm2, xmm3/m128GV/VAVX512VL AVX512BWShift words in xmm2 right by amount specified in xmm3/m128 while shifting in sign bits using writemask k1.
EVEX.256.66.0F.WIG E1 /r VPSRAW ymm1 {k1}{z}, ymm2, xmm3/m128GV/VAVX512VL AVX512BWShift words in ymm2 right by amount specified in xmm3/m128 while shifting in sign bits using writemask k1.
EVEX.512.66.0F.WIG E1 /r VPSRAW zmm1 {k1}{z}, zmm2, xmm3/m128GV/VAVX512BWShift words in zmm2 right by amount specified in xmm3/m128 while shifting in sign bits using writemask k1.
EVEX.128.66.0F.WIG 71 /4 ib VPSRAW xmm1 {k1}{z}, xmm2/m128, imm8EV/VAVX512VL AVX512BWShift words in xmm2/m128 right by imm8 while shifting in sign bits using writemask k1.
EVEX.256.66.0F.WIG 71 /4 ib VPSRAW ymm1 {k1}{z}, ymm2/m256, imm8EV/VAVX512VL AVX512BWShift words in ymm2/m256 right by imm8 while shifting in sign bits using writemask k1.
EVEX.512.66.0F.WIG 71 /4 ib VPSRAW zmm1 {k1}{z}, zmm2/m512, imm8EV/VAVX512BWShift words in zmm2/m512 right by imm8 while shifting in sign bits using writemask k1.
EVEX.128.66.0F.W0 E2 /r VPSRAD xmm1 {k1}{z}, xmm2, xmm3/m128GV/VAVX512VL AVX512FShift doublewords in xmm2 right by amount specified in xmm3/m128 while shifting in sign bits using writemask k1.
EVEX.256.66.0F.W0 E2 /r VPSRAD ymm1 {k1}{z}, ymm2, xmm3/m128GV/VAVX512VL AVX512FShift doublewords in ymm2 right by amount specified in xmm3/m128 while shifting in sign bits using writemask k1.
EVEX.512.66.0F.W0 E2 /r VPSRAD zmm1 {k1}{z}, zmm2, xmm3/m128GV/VAVX512FShift doublewords in zmm2 right by amount specified in xmm3/m128 while shifting in sign bits using writemask k1.
EVEX.128.66.0F.W0 72 /4 ib VPSRAD xmm1 {k1}{z}, xmm2/m128/m32bcst, imm8FV/VAVX512VL AVX512FShift doublewords in xmm2/m128/m32bcst right by imm8 while shifting in sign bits using writemask k1.
EVEX.256.66.0F.W0 72 /4 ib VPSRAD ymm1 {k1}{z}, ymm2/m256/m32bcst, imm8FV/VAVX512VL AVX512FShift doublewords in ymm2/m256/m32bcst right by imm8 while shifting in sign bits using writemask k1.
EVEX.512.66.0F.W0 72 /4 ib VPSRAD zmm1 {k1}{z}, zmm2/m512/m32bcst, imm8FV/VAVX512FShift doublewords in zmm2/m512/m32bcst right by imm8 while shifting in sign bits using writemask k1.
EVEX.128.66.0F.W1 E2 /r VPSRAQ xmm1 {k1}{z}, xmm2, xmm3/m128GV/VAVX512VL AVX512FShift quadwords in xmm2 right by amount specified in xmm3/m128 while shifting in sign bits using writemask k1.
EVEX.256.66.0F.W1 E2 /r VPSRAQ ymm1 {k1}{z}, ymm2, xmm3/m128GV/VAVX512VL AVX512FShift quadwords in ymm2 right by amount specified in xmm3/m128 while shifting in sign bits using writemask k1.
EVEX.512.66.0F.W1 E2 /r VPSRAQ zmm1 {k1}{z}, zmm2, xmm3/m128GV/VAVX512FShift quadwords in zmm2 right by amount specified in xmm3/m128 while shifting in sign bits using writemask k1.
EVEX.128.66.0F.W1 72 /4 ib VPSRAQ xmm1 {k1}{z}, xmm2/m128/m64bcst, imm8FV/VAVX512VL AVX512FShift quadwords in xmm2/m128/m64bcst right by imm8 while shifting in sign bits using writemask k1.
EVEX.256.66.0F.W1 72 /4 ib VPSRAQ ymm1 {k1}{z}, ymm2/m256/m64bcst, imm8FV/VAVX512VL AVX512FShift quadwords in ymm2/m256/m64bcst right by imm8 while shifting in sign bits using writemask k1.
EVEX.512.66.0F.W1 72 /4 ib VPSRAQ zmm1 {k1}{z}, zmm2/m512/m64bcst, imm8FV/VAVX512FShift quadwords in zmm2/m512/m64bcst right by imm8 while shifting in sign bits using writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | N/A | ModRM:reg (r, w) | ModRM:r/m (r) | N/A | N/A
B | N/A | ModRM:r/m (r, w) | imm8 | N/A | N/A
C | N/A | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | N/A
D | N/A | VEX.vvvv (w) | ModRM:r/m (r) | imm8 | N/A
E | Full Mem | EVEX.vvvv (w) | ModRM:r/m (r) | imm8 | N/A
F | Full | EVEX.vvvv (w) | ModRM:r/m (r) | imm8 | N/A
G | Mem128 | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | N/A
+

Description + ¶ +

+

Shifts the bits in the individual data elements (words, doublewords or quadwords) in the destination operand (first operand) to the right by the number of bits specified in the count operand (second operand). As the bits in the data elements are shifted right, the empty high-order bits are filled with the initial value of the sign bit of the data element. If the value specified by the count operand is greater than 15 (for words), 31 (for doublewords), or 63 (for quadwords), each destination data element is filled with the initial value of the sign bit of the element. (Figure 4-18 gives an example of shifting words in a 64-bit operand.)

+
[Diagram: pre-shift DEST = X3 X2 X1 X0; each element is shifted right by COUNT with sign extension; post-shift DEST = X3 >> COUNT ... X0 >> COUNT]
Figure 4-18. PSRAW and PSRAD Instruction Operation Using a 64-bit Operand
+

Note that only the first 64 bits of a 128-bit count operand are checked to compute the count. If the second source operand is a memory address, 128 bits are loaded.

+

The (V)PSRAW instruction shifts each of the words in the destination operand to the right by the number of bits specified in the count operand, and the (V)PSRAD instruction shifts each of the doublewords in the destination operand.

+

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

Legacy SSE instructions 64-bit operand: The destination operand is an MMX technology register; the count operand can be either an MMX technology register or a 64-bit memory location.

+

128-bit Legacy SSE version: The destination and first source operands are XMM registers. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged. The count operand can be either an XMM register or a 128-bit memory location or an 8-bit immediate. If the count operand is a memory address, 128 bits are loaded but the upper 64 bits are ignored.

+

VEX.128 encoded version: The destination and first source operands are XMM registers. Bits (MAXVL-1:128) of the destination YMM register are zeroed. The count operand can be either an XMM register or a 128-bit memory location or an 8-bit immediate. If the count operand is a memory address, 128 bits are loaded but the upper 64 bits are ignored.

+

VEX.256 encoded version: The destination operand is a YMM register. The source operand is a YMM register or a memory location. The count operand can come either from an XMM register or a memory location or an 8-bit immediate. Bits (MAXVL-1:256) of the corresponding ZMM register are zeroed.

+

EVEX encoded versions: The destination operand is a ZMM register updated according to the writemask. The count operand is either an 8-bit immediate (the immediate count version) or an 8-bit value from an XMM register or a memory location (the variable count version). For the immediate count version, the source operand (the second operand) can be a ZMM register, a 512-bit memory location or a 512-bit vector broadcasted from a 32/64-bit memory location. For the variable count version, the first source operand (the second operand) is a ZMM register, the second source operand (the third operand, 8-bit variable count) can be an XMM register or a memory location.

+

Note: In VEX/EVEX encoded versions of shifts with an immediate count, vvvv of VEX/EVEX encodes the destination register, and VEX.B/EVEX.B + ModRM.r/m encodes the source register.

+

Note: For shifts with an immediate count (VEX.128.66.0F 71-73 /4, EVEX.128.66.0F 71-73 /4), VEX.vvvv/EVEX.vvvv encodes the destination register.
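For illustration (a minimal C sketch assuming SSE2; not part of the reference text), the sign-fill behavior described above is what distinguishes the arithmetic shift PSRAD from the logical shift PSRLD on negative elements:

#include <stdio.h>
#include <emmintrin.h>                        /* SSE2 intrinsics */

int main(void) {
    __m128i v = _mm_set1_epi32(-64);          /* 0xFFFFFFC0 in each doubleword */
    __m128i arith = _mm_srai_epi32(v, 4);     /* PSRAD: shifts in sign bits -> -4 */
    __m128i logic = _mm_srli_epi32(v, 4);     /* PSRLD: shifts in zeros -> 0x0FFFFFFC */
    int a[4], l[4];
    _mm_storeu_si128((__m128i *)a, arith);
    _mm_storeu_si128((__m128i *)l, logic);
    printf("%d  0x%08X\n", a[0], (unsigned)l[0]);   /* -4  0x0FFFFFFC */
    return 0;
}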

+

Operation + ¶ +

+

PSRAW (With 64-bit Operand) + ¶ +

+
    IF (COUNT > 15)
+        THEN COUNT := 16;
+    FI;
+    DEST[15:0] := SignExtend(DEST[15:0] >> COUNT);
+    (* Repeat shift operation for 2nd and 3rd words *)
+    DEST[63:48] := SignExtend(DEST[63:48] >> COUNT);
+PSRAD (with 64-bit operand)
+    IF (COUNT > 31)
+        THEN COUNT := 32;
+    FI;
+    DEST[31:0] := SignExtend(DEST[31:0] >> COUNT);
+    DEST[63:32] := SignExtend(DEST[63:32] >> COUNT);
+ARITHMETIC_RIGHT_SHIFT_DWORDS1(SRC, COUNT_SRC)
+COUNT := COUNT_SRC[63:0];
+IF (COUNT > 31)
+THEN
+    DEST[31:0] := SignBit
+ELSE
+    DEST[31:0] := SignExtend(SRC[31:0] >> COUNT);
+FI;
+ARITHMETIC_RIGHT_SHIFT_QWORDS1(SRC, COUNT_SRC)
+COUNT := COUNT_SRC[63:0];
+IF (COUNT > 63)
+THEN
+    DEST[63:0] := SignBit
+ELSE
+    DEST[63:0] := SignExtend(SRC[63:0] >> COUNT);
+FI;
+ARITHMETIC_RIGHT_SHIFT_WORDS_256b(SRC, COUNT_SRC)
+COUNT := COUNT_SRC[63:0];
+IF (COUNT > 15)
+    THEN COUNT := 16;
+FI;
+DEST[15:0] := SignExtend(SRC[15:0] >> COUNT);
+    (* Repeat shift operation for 2nd through 15th words *)
+DEST[255:240] := SignExtend(SRC[255:240] >> COUNT);
+ARITHMETIC_RIGHT_SHIFT_DWORDS_256b(SRC, COUNT_SRC)
+COUNT := COUNT_SRC[63:0];
+IF (COUNT > 31)
+    THEN COUNT := 32;
+FI;
+DEST[31:0] := SignExtend(SRC[31:0] >> COUNT);
+    (* Repeat shift operation for 2nd through 7th words *)
+DEST[255:224] := SignExtend(SRC[255:224] >> COUNT);
+ARITHMETIC_RIGHT_SHIFT_QWORDS(SRC, COUNT_SRC, VL) ; VL: 128b, 256b or 512b
+COUNT := COUNT_SRC[63:0];
+IF (COUNT > 63)
+    THEN COUNT := 64;
+FI;
+DEST[63:0] := SignExtend(SRC[63:0] >> COUNT);
+    (* Repeat shift operation for 2nd through 7th words *)
+DEST[VL-1:VL-64] := SignExtend(SRC[VL-1:VL-64] >> COUNT);
+ARITHMETIC_RIGHT_SHIFT_WORDS(SRC, COUNT_SRC)
+COUNT := COUNT_SRC[63:0];
+IF (COUNT > 15)
+    THEN COUNT := 16;
+FI;
+DEST[15:0] := SignExtend(SRC[15:0] >> COUNT);
+    (* Repeat shift operation for 2nd through 7th words *)
+DEST[127:112] := SignExtend(SRC[127:112] >> COUNT);
+ARITHMETIC_RIGHT_SHIFT_DWORDS(SRC, COUNT_SRC)
+COUNT := COUNT_SRC[63:0];
+IF (COUNT > 31)
+    THEN COUNT := 32;
+FI;
+DEST[31:0] := SignExtend(SRC[31:0] >> COUNT);
+    (* Repeat shift operation for 2nd through 3rd words *)
+DEST[127:96] := SignExtend(SRC[127:96] >> COUNT);
+
+

VPSRAW (EVEX versions, xmm/m128) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+IF VL = 128
+    TMP_DEST[127:0] := ARITHMETIC_RIGHT_SHIFT_WORDS_128b(SRC1[127:0], SRC2)
+FI;
+IF VL = 256
+    TMP_DEST[255:0] := ARITHMETIC_RIGHT_SHIFT_WORDS_256b(SRC1[255:0], SRC2)
+FI;
+IF VL = 512
+    TMP_DEST[255:0] := ARITHMETIC_RIGHT_SHIFT_WORDS_256b(SRC1[255:0], SRC2)
+    TMP_DEST[511:256] := ARITHMETIC_RIGHT_SHIFT_WORDS_256b(SRC1[511:256], SRC2)
+FI;
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := TMP_DEST[i+15:i]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+15:i] = 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPSRAW (EVEX Versions, imm8) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+IF VL = 128
+    TMP_DEST[127:0] := ARITHMETIC_RIGHT_SHIFT_WORDS_128b(SRC1[127:0], imm8)
+FI;
+IF VL = 256
+    TMP_DEST[255:0] := ARITHMETIC_RIGHT_SHIFT_WORDS_256b(SRC1[255:0], imm8)
+FI;
+IF VL = 512
+    TMP_DEST[255:0] := ARITHMETIC_RIGHT_SHIFT_WORDS_256b(SRC1[255:0], imm8)
+    TMP_DEST[511:256] := ARITHMETIC_RIGHT_SHIFT_WORDS_256b(SRC1[511:256], imm8)
+FI;
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := TMP_DEST[i+15:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+15:i] = 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPSRAW (ymm, ymm, xmm/m128) - VEX + ¶ +

+
DEST[255:0] := ARITHMETIC_RIGHT_SHIFT_WORDS_256b(SRC1, SRC2)
+DEST[MAXVL-1:256] := 0
+
+

VPSRAW (ymm, imm8) - VEX + ¶ +

+
DEST[255:0] := ARITHMETIC_RIGHT_SHIFT_WORDS_256b(SRC1, imm8)
+DEST[MAXVL-1:256] := 0
+
+

VPSRAW (xmm, xmm, xmm/m128) - VEX + ¶ +

+
DEST[127:0] := ARITHMETIC_RIGHT_SHIFT_WORDS(SRC1, SRC2)
+DEST[MAXVL-1:128] := 0
+
+

VPSRAW (xmm, imm8) - VEX + ¶ +

+
DEST[127:0] := ARITHMETIC_RIGHT_SHIFT_WORDS(SRC1, imm8)
+DEST[MAXVL-1:128] := 0
+
+

PSRAW (xmm, xmm, xmm/m128) + ¶ +

+
DEST[127:0] := ARITHMETIC_RIGHT_SHIFT_WORDS(DEST, SRC)
+DEST[MAXVL-1:128] (Unmodified)
+
+

PSRAW (xmm, imm8) + ¶ +

+
DEST[127:0] := ARITHMETIC_RIGHT_SHIFT_WORDS(DEST, imm8)
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPSRAD (EVEX Versions, imm8) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC1 *is memory*)
+                THEN DEST[i+31:i] := ARITHMETIC_RIGHT_SHIFT_DWORDS1(SRC1[31:0], imm8)
+                ELSE DEST[i+31:i] := ARITHMETIC_RIGHT_SHIFT_DWORDS1(SRC1[i+31:i], imm8)
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPSRAD (EVEX Versions, xmm/m128) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF VL = 128
+    TMP_DEST[127:0] := ARITHMETIC_RIGHT_SHIFT_DWORDS_128b(SRC1[127:0], SRC2)
+FI;
+IF VL = 256
+    TMP_DEST[255:0] := ARITHMETIC_RIGHT_SHIFT_DWORDS_256b(SRC1[255:0], SRC2)
+FI;
+IF VL = 512
+    TMP_DEST[255:0] := ARITHMETIC_RIGHT_SHIFT_DWORDS_256b(SRC1[255:0], SRC2)
+    TMP_DEST[511:256] := ARITHMETIC_RIGHT_SHIFT_DWORDS_256b(SRC1[511:256], SRC2)
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPSRAD (ymm, ymm, xmm/m128) - VEX + ¶ +

+
DEST[255:0] := ARITHMETIC_RIGHT_SHIFT_DWORDS_256b(SRC1, SRC2)
+DEST[MAXVL-1:256] := 0
+
+

VPSRAD (ymm, imm8) - VEX + ¶ +

+
DEST[255:0] := ARITHMETIC_RIGHT_SHIFT_DWORDS_256b(SRC1, imm8)
+DEST[MAXVL-1:256] := 0
+
+

VPSRAD (xmm, xmm, xmm/m128) - VEX + ¶ +

+
DEST[127:0] := ARITHMETIC_RIGHT_SHIFT_DWORDS(SRC1, SRC2)
+DEST[MAXVL-1:128] := 0
+
+

VPSRAD (xmm, imm8) - VEX + ¶ +

+
DEST[127:0] := ARITHMETIC_RIGHT_SHIFT_DWORDS(SRC1, imm8)
+DEST[MAXVL-1:128] := 0
+
+

PSRAD (xmm, xmm, xmm/m128) + ¶ +

+
DEST[127:0] := ARITHMETIC_RIGHT_SHIFT_DWORDS(DEST, SRC)
+DEST[MAXVL-1:128] (Unmodified)
+
+

PSRAD (xmm, imm8) + ¶ +

+
DEST[127:0] := ARITHMETIC_RIGHT_SHIFT_DWORDS(DEST, imm8)
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPSRAQ (EVEX Versions, imm8) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC1 *is memory*)
+                THEN DEST[i+63:i] := ARITHMETIC_RIGHT_SHIFT_QWORDS1(SRC1[63:0], imm8)
+                ELSE DEST[i+63:i] := ARITHMETIC_RIGHT_SHIFT_QWORDS1(SRC1[i+63:i], imm8)
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPSRAQ (EVEX Versions, xmm/m128) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+TMP_DEST[VL-1:0] := ARITHMETIC_RIGHT_SHIFT_QWORDS(SRC1[VL-1:0], SRC2, VL)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPSRAD __m512i _mm512_srai_epi32(__m512i a, unsigned int imm);
+
+
VPSRAD __m512i _mm512_mask_srai_epi32(__m512i s, __mmask16 k, __m512i a, unsigned int imm);
+
+
VPSRAD __m512i _mm512_maskz_srai_epi32( __mmask16 k, __m512i a, unsigned int imm);
+
+
VPSRAD __m256i _mm256_mask_srai_epi32(__m256i s, __mmask8 k, __m256i a, unsigned int imm);
+
+
VPSRAD __m256i _mm256_maskz_srai_epi32( __mmask8 k, __m256i a, unsigned int imm);
+
+
VPSRAD __m128i _mm_mask_srai_epi32(__m128i s, __mmask8 k, __m128i a, unsigned int imm);
+
+
VPSRAD __m128i _mm_maskz_srai_epi32( __mmask8 k, __m128i a, unsigned int imm);
+
+
VPSRAD __m512i _mm512_sra_epi32(__m512i a, __m128i cnt);
+
+
VPSRAD __m512i _mm512_mask_sra_epi32(__m512i s, __mmask16 k, __m512i a, __m128i cnt);
+
+
VPSRAD __m512i _mm512_maskz_sra_epi32( __mmask16 k, __m512i a, __m128i cnt);
+
+
VPSRAD __m256i _mm256_mask_sra_epi32(__m256i s, __mmask8 k, __m256i a, __m128i cnt);
+
+
VPSRAD __m256i _mm256_maskz_sra_epi32( __mmask8 k, __m256i a, __m128i cnt);
+
+
VPSRAD __m128i _mm_mask_sra_epi32(__m128i s, __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSRAD __m128i _mm_maskz_sra_epi32( __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSRAQ __m512i _mm512_srai_epi64(__m512i a, unsigned int imm);
+
+
VPSRAQ __m512i _mm512_mask_srai_epi64(__m512i s, __mmask8 k, __m512i a, unsigned int imm)
+
+
VPSRAQ __m512i _mm512_maskz_srai_epi64( __mmask8 k, __m512i a, unsigned int imm)
+
+
VPSRAQ __m256i _mm256_mask_srai_epi64(__m256i s, __mmask8 k, __m256i a, unsigned int imm);
+
+
VPSRAQ __m256i _mm256_maskz_srai_epi64( __mmask8 k, __m256i a, unsigned int imm);
+
+
VPSRAQ __m128i _mm_mask_srai_epi64(__m128i s, __mmask8 k, __m128i a, unsigned int imm);
+
+
VPSRAQ __m128i _mm_maskz_srai_epi64( __mmask8 k, __m128i a, unsigned int imm);
+
+
VPSRAQ __m512i _mm512_sra_epi64(__m512i a, __m128i cnt);
+
+
VPSRAQ __m512i _mm512_mask_sra_epi64(__m512i s, __mmask8 k, __m512i a, __m128i cnt)
+
+
VPSRAQ __m512i _mm512_maskz_sra_epi64( __mmask8 k, __m512i a, __m128i cnt)
+
+
VPSRAQ __m256i _mm256_mask_sra_epi64(__m256i s, __mmask8 k, __m256i a, __m128i cnt);
+
+
VPSRAQ __m256i _mm256_maskz_sra_epi64( __mmask8 k, __m256i a, __m128i cnt);
+
+
VPSRAQ __m128i _mm_mask_sra_epi64(__m128i s, __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSRAQ __m128i _mm_maskz_sra_epi64( __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSRAW __m512i _mm512_srai_epi16(__m512i a, unsigned int imm);
+
+
VPSRAW __m512i _mm512_mask_srai_epi16(__m512i s, __mmask32 k, __m512i a, unsigned int imm);
+
+
VPSRAW __m512i _mm512_maskz_srai_epi16( __mmask32 k, __m512i a, unsigned int imm);
+
+
VPSRAW __m256i _mm256_mask_srai_epi16(__m256i s, __mmask16 k, __m256i a, unsigned int imm);
+
+
VPSRAW __m256i _mm256_maskz_srai_epi16( __mmask16 k, __m256i a, unsigned int imm);
+
+
VPSRAW __m128i _mm_mask_srai_epi16(__m128i s, __mmask8 k, __m128i a, unsigned int imm);
+
+
VPSRAW __m128i _mm_maskz_srai_epi16( __mmask8 k, __m128i a, unsigned int imm);
+
+
VPSRAW __m512i _mm512_sra_epi16(__m512i a, __m128i cnt);
+
+
VPSRAW __m512i _mm512_mask_sra_epi16(__m512i s, __mmask32 k, __m512i a, __m128i cnt);
+
+
VPSRAW __m512i _mm512_maskz_sra_epi16( __mmask32 k, __m512i a, __m128i cnt);
+
+
VPSRAW __m256i _mm256_mask_sra_epi16(__m256i s, __mmask16 k, __m256i a, __m128i cnt);
+
+
VPSRAW __m256i _mm256_maskz_sra_epi16( __mmask16 k, __m256i a, __m128i cnt);
+
+
VPSRAW __m128i _mm_mask_sra_epi16(__m128i s, __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSRAW __m128i _mm_maskz_sra_epi16( __mmask8 k, __m128i a, __m128i cnt);
+
+
PSRAW __m64 _mm_srai_pi16 (__m64 m, int count)
+
+
PSRAW __m64 _mm_sra_pi16 (__m64 m, __m64 count)
+
+
(V)PSRAW __m128i _mm_srai_epi16(__m128i m, int count)
+
+
(V)PSRAW __m128i _mm_sra_epi16(__m128i m, __m128i count)
+
+
VPSRAW __m256i _mm256_srai_epi16 (__m256i m, int count)
+
+
VPSRAW __m256i _mm256_sra_epi16 (__m256i m, __m128i count)
+
+
PSRAD __m64 _mm_srai_pi32 (__m64 m, int count)
+
+
PSRAD __m64 _mm_sra_pi32 (__m64 m, __m64 count)
+
+
(V)PSRAD __m128i _mm_srai_epi32 (__m128i m, int count)
+
+
(V)PSRAD __m128i _mm_sra_epi32 (__m128i m, __m128i count)
+
+
VPSRAD __m256i _mm256_srai_epi32 (__m256i m, int count)
+
+
VPSRAD __m256i _mm256_sra_epi32 (__m256i m, __m128i count)
+
+
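A short usage note with an illustrative sketch (assuming an AVX-512F target; not part of the reference text): there is no packed arithmetic quadword shift in the MMX/SSE/AVX2 encodings, so the 64-bit element forms above are reached only through the EVEX-encoded VPSRAQ intrinsics, for example:

#include <immintrin.h>                        /* AVX-512F intrinsics */

__m512i arith_shift_qwords(__m512i v) {
    /* VPSRAQ zmm, zmm, imm8: arithmetic right shift of each 64-bit element. */
    return _mm512_srai_epi64(v, 3);
}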

Flags Affected + ¶ +

+

None.

+

Numeric Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+
    +
  • VEX-encoded instructions:
    • Syntax with RM/RVM operand encoding (A/C in the operand encoding table), see Table 2-21, “Type 4 Class Exception Conditions.”
    • Syntax with MI/VMI operand encoding (B/D in the operand encoding table), see Table 2-24, “Type 7 Class Exception Conditions.”
  • EVEX-encoded VPSRAW (E in the operand encoding table), see Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”
  • EVEX-encoded VPSRAD/Q:
    • Syntax with Mem128 tuple type (G in the operand encoding table), see Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”
    • Syntax with Full tuple type (F in the operand encoding table), see Table 2-49, “Type E4 Class Exception Conditions.”
diff --git a/x86/psrldq.html b/x86/psrldq.html new file mode 100644 index 0000000..2d9fc0f --- /dev/null +++ b/x86/psrldq.html @@ -0,0 +1,156 @@ + +PSRLDQ + — Shift Double Quadword Right Logical

PSRLDQ + — Shift Double Quadword Right Logical

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 73 /3 ib PSRLDQ xmm1, imm8AV/VSSE2Shift xmm1 right by imm8 bytes while shifting in 0s.
VEX.128.66.0F.WIG 73 /3 ib VPSRLDQ xmm1, xmm2, imm8BV/VAVXShift xmm2 right by imm8 bytes while shifting in 0s.
VEX.256.66.0F.WIG 73 /3 ib VPSRLDQ ymm1, ymm2, imm8BV/VAVX2Shift ymm2 right by imm8 bytes while shifting in 0s.
EVEX.128.66.0F.WIG 73 /3 ib VPSRLDQ xmm1, xmm2/m128, imm8CV/VAVX512VL AVX512BWShift xmm2/m128 right by imm8 bytes while shifting in 0s and store result in xmm1.
EVEX.256.66.0F.WIG 73 /3 ib VPSRLDQ ymm1, ymm2/m256, imm8CV/VAVX512VL AVX512BWShift ymm2/m256 right by imm8 bytes while shifting in 0s and store result in ymm1.
EVEX.512.66.0F.WIG 73 /3 ib VPSRLDQ zmm1, zmm2/m512, imm8CV/VAVX512BWShift zmm2/m512 right by imm8 bytes while shifting in 0s and store result in zmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | N/A | ModRM:r/m (r, w) | imm8 | N/A | N/A
B | N/A | VEX.vvvv (w) | ModRM:r/m (r) | imm8 | N/A
C | Full Mem | EVEX.vvvv (w) | ModRM:r/m (r) | imm8 | N/A
+

Description + ¶ +

+

Shifts the destination operand (first operand) to the right by the number of bytes specified in the count operand (second operand). The empty high-order bytes are cleared (set to all 0s). If the value specified by the count operand is greater than 15, the destination operand is set to all 0s. The count operand is an 8-bit immediate.

+

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

128-bit Legacy SSE version: The source and destination operands are the same. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The source and destination operands are XMM registers. Bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

VEX.256 encoded version: The source operand is a YMM register. The destination operand is a YMM register. Bits (MAXVL-1:256) of the corresponding ZMM register are zeroed. The count operand applies to both the low and high 128-bit lanes.

+

EVEX encoded versions: The source operand is a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination operand is a ZMM/YMM/XMM register. The count operand applies to each 128-bit lane.

+

Note: VEX.vvvv/EVEX.vvvv encodes the destination register.

+

Operation + ¶ +

+

VPSRLDQ (EVEX.512 Encoded Version) + ¶ +

+
TEMP := COUNT
+IF (TEMP > 15) THEN TEMP := 16; FI
+DEST[127:0] := SRC[127:0] >> (TEMP * 8)
+DEST[255:128] := SRC[255:128] >> (TEMP * 8)
+DEST[383:256] := SRC[383:256] >> (TEMP * 8)
+DEST[511:384] := SRC[511:384] >> (TEMP * 8)
+DEST[MAXVL-1:512] := 0;
+
+

VPSRLDQ (VEX.256 and EVEX.256 Encoded Version) + ¶ +

+
TEMP := COUNT
+IF (TEMP > 15) THEN TEMP := 16; FI
+DEST[127:0] := SRC[127:0] >> (TEMP * 8)
+DEST[255:128] := SRC[255:128] >> (TEMP * 8)
+DEST[MAXVL-1:256] := 0;
+
+

VPSRLDQ (VEX.128 and EVEX.128 Encoded Version) + ¶ +

+
TEMP := COUNT
+IF (TEMP > 15) THEN TEMP := 16; FI
+DEST := SRC >> (TEMP * 8)
+DEST[MAXVL-1:128] := 0;
+
+

PSRLDQ (128-bit Legacy SSE Version) + ¶ +

+
TEMP := COUNT
+IF (TEMP > 15) THEN TEMP := 16; FI
+DEST := DEST >> (TEMP * 8)
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
(V)PSRLDQ __m128i _mm_srli_si128 ( __m128i a, int imm)
+
+
VPSRLDQ __m256i _mm256_bsrli_epi128 ( __m256i, const int)
+
+
VPSRLDQ __m512i _mm512_bsrli_epi128 ( __m512i, int)
+
+

Flags Affected + ¶ +

+

None.

+

Numeric Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-24, “Type 7 Class Exception Conditions.”

+

EVEX-encoded instruction, see Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/psrlw.psrld.psrlq.html b/x86/psrlw.psrld.psrlq.html new file mode 100644 index 0000000..0555407 --- /dev/null +++ b/x86/psrlw.psrld.psrlq.html @@ -0,0 +1,970 @@ + +PSRLW/PSRLD/PSRLQ + — Shift Packed Data Right Logical

PSRLW/PSRLD/PSRLQ + — Shift Packed Data Right Logical

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F D1 /r1 PSRLW mm, mm/m64AV/VMMXShift words in mm right by amount specified in mm/m64 while shifting in 0s.
66 0F D1 /r PSRLW xmm1, xmm2/m128AV/VSSE2Shift words in xmm1 right by amount specified in xmm2/m128 while shifting in 0s.
NP 0F 71 /2 ib1 PSRLW mm, imm8BV/VMMXShift words in mm right by imm8 while shifting in 0s.
66 0F 71 /2 ib PSRLW xmm1, imm8BV/VSSE2Shift words in xmm1 right by imm8 while shifting in 0s.
NP 0F D2 /r1 PSRLD mm, mm/m64AV/VMMXShift doublewords in mm right by amount specified in mm/m64 while shifting in 0s.
66 0F D2 /r PSRLD xmm1, xmm2/m128AV/VSSE2Shift doublewords in xmm1 right by amount specified in xmm2 /m128 while shifting in 0s.
NP 0F 72 /2 ib1 PSRLD mm, imm8BV/VMMXShift doublewords in mm right by imm8 while shifting in 0s.
66 0F 72 /2 ib PSRLD xmm1, imm8BV/VSSE2Shift doublewords in xmm1 right by imm8 while shifting in 0s.
NP 0F D3 /r1 PSRLQ mm, mm/m64AV/VMMXShift mm right by amount specified in mm/m64 while shifting in 0s.
66 0F D3 /r PSRLQ xmm1, xmm2/m128AV/VSSE2Shift quadwords in xmm1 right by amount specified in xmm2/m128 while shifting in 0s.
NP 0F 73 /2 ib1 PSRLQ mm, imm8BV/VMMXShift mm right by imm8 while shifting in 0s.
66 0F 73 /2 ib PSRLQ xmm1, imm8BV/VSSE2Shift quadwords in xmm1 right by imm8 while shifting in 0s.
VEX.128.66.0F.WIG D1 /r VPSRLW xmm1, xmm2, xmm3/m128CV/VAVXShift words in xmm2 right by amount specified in xmm3/m128 while shifting in 0s.
VEX.128.66.0F.WIG 71 /2 ib VPSRLW xmm1, xmm2, imm8DV/VAVXShift words in xmm2 right by imm8 while shifting in 0s.
VEX.128.66.0F.WIG D2 /r VPSRLD xmm1, xmm2, xmm3/m128CV/VAVXShift doublewords in xmm2 right by amount specified in xmm3/m128 while shifting in 0s.
VEX.128.66.0F.WIG 72 /2 ib VPSRLD xmm1, xmm2, imm8DV/VAVXShift doublewords in xmm2 right by imm8 while shifting in 0s.
VEX.128.66.0F.WIG D3 /r VPSRLQ xmm1, xmm2, xmm3/m128CV/VAVXShift quadwords in xmm2 right by amount specified in xmm3/m128 while shifting in 0s.
VEX.128.66.0F.WIG 73 /2 ib VPSRLQ xmm1, xmm2, imm8DV/VAVXShift quadwords in xmm2 right by imm8 while shifting in 0s.
VEX.256.66.0F.WIG D1 /r VPSRLW ymm1, ymm2, xmm3/m128CV/VAVX2Shift words in ymm2 right by amount specified in xmm3/m128 while shifting in 0s.
VEX.256.66.0F.WIG 71 /2 ib VPSRLW ymm1, ymm2, imm8DV/VAVX2Shift words in ymm2 right by imm8 while shifting in 0s.
VEX.256.66.0F.WIG D2 /r VPSRLD ymm1, ymm2, xmm3/m128CV/VAVX2Shift doublewords in ymm2 right by amount specified in xmm3/m128 while shifting in 0s.
VEX.256.66.0F.WIG 72 /2 ib VPSRLD ymm1, ymm2, imm8DV/VAVX2Shift doublewords in ymm2 right by imm8 while shifting in 0s.
VEX.256.66.0F.WIG D3 /r VPSRLQ ymm1, ymm2, xmm3/m128CV/VAVX2Shift quadwords in ymm2 right by amount specified in xmm3/m128 while shifting in 0s.
VEX.256.66.0F.WIG 73 /2 ib VPSRLQ ymm1, ymm2, imm8DV/VAVX2Shift quadwords in ymm2 right by imm8 while shifting in 0s.
EVEX.128.66.0F.WIG D1 /r VPSRLW xmm1 {k1}{z}, xmm2, xmm3/m128GV/VAVX512VL AVX512BWShift words in xmm2 right by amount specified in xmm3/m128 while shifting in 0s using writemask k1.
EVEX.256.66.0F.WIG D1 /r VPSRLW ymm1 {k1}{z}, ymm2, xmm3/m128GV/VAVX512VL AVX512BWShift words in ymm2 right by amount specified in xmm3/m128 while shifting in 0s using writemask k1.
EVEX.512.66.0F.WIG D1 /r VPSRLW zmm1 {k1}{z}, zmm2, xmm3/m128GV/VAVX512BWShift words in zmm2 right by amount specified in xmm3/m128 while shifting in 0s using writemask k1.
EVEX.128.66.0F.WIG 71 /2 ib VPSRLW xmm1 {k1}{z}, xmm2/m128, imm8EV/VAVX512VL AVX512BWShift words in xmm2/m128 right by imm8 while shifting in 0s using writemask k1.
EVEX.256.66.0F.WIG 71 /2 ib VPSRLW ymm1 {k1}{z}, ymm2/m256, imm8EV/VAVX512VL AVX512BWShift words in ymm2/m256 right by imm8 while shifting in 0s using writemask k1.
EVEX.512.66.0F.WIG 71 /2 ib VPSRLW zmm1 {k1}{z}, zmm2/m512, imm8EV/VAVX512BWShift words in zmm2/m512 right by imm8 while shifting in 0s using writemask k1.
EVEX.128.66.0F.W0 D2 /r VPSRLD xmm1 {k1}{z}, xmm2, xmm3/m128GV/VAVX512VL AVX512FShift doublewords in xmm2 right by amount specified in xmm3/m128 while shifting in 0s using writemask k1.
EVEX.256.66.0F.W0 D2 /r VPSRLD ymm1 {k1}{z}, ymm2, xmm3/m128GV/VAVX512VL AVX512FShift doublewords in ymm2 right by amount specified in xmm3/m128 while shifting in 0s using writemask k1.
EVEX.512.66.0F.W0 D2 /r VPSRLD zmm1 {k1}{z}, zmm2, xmm3/m128GV/VAVX512FShift doublewords in zmm2 right by amount specified in xmm3/m128 while shifting in 0s using writemask k1.
EVEX.128.66.0F.W0 72 /2 ib VPSRLD xmm1 {k1}{z}, xmm2/m128/m32bcst, imm8FV/VAVX512VL AVX512FShift doublewords in xmm2/m128/m32bcst right by imm8 while shifting in 0s using writemask k1.
EVEX.256.66.0F.W0 72 /2 ib VPSRLD ymm1 {k1}{z}, ymm2/m256/m32bcst, imm8FV/VAVX512VL AVX512FShift doublewords in ymm2/m256/m32bcst right by imm8 while shifting in 0s using writemask k1.
EVEX.512.66.0F.W0 72 /2 ib VPSRLD zmm1 {k1}{z}, zmm2/m512/m32bcst, imm8FV/VAVX512FShift doublewords in zmm2/m512/m32bcst right by imm8 while shifting in 0s using writemask k1.
EVEX.128.66.0F.W1 D3 /r VPSRLQ xmm1 {k1}{z}, xmm2, xmm3/m128GV/VAVX512VL AVX512FShift quadwords in xmm2 right by amount specified in xmm3/m128 while shifting in 0s using writemask k1.
EVEX.256.66.0F.W1 D3 /r VPSRLQ ymm1 {k1}{z}, ymm2, xmm3/m128GV/VAVX512VL AVX512FShift quadwords in ymm2 right by amount specified in xmm3/m128 while shifting in 0s using writemask k1.
EVEX.512.66.0F.W1 D3 /r VPSRLQ zmm1 {k1}{z}, zmm2, xmm3/m128GV/VAVX512FShift quadwords in zmm2 right by amount specified in xmm3/m128 while shifting in 0s using writemask k1.
EVEX.128.66.0F.W1 73 /2 ib VPSRLQ xmm1 {k1}{z}, xmm2/m128/m64bcst, imm8FV/VAVX512VL AVX512FShift quadwords in xmm2/m128/m64bcst right by imm8 while shifting in 0s using writemask k1.
EVEX.256.66.0F.W1 73 /2 ib VPSRLQ ymm1 {k1}{z}, ymm2/m256/m64bcst, imm8FV/VAVX512VL AVX512FShift quadwords in ymm2/m256/m64bcst right by imm8 while shifting in 0s using writemask k1.
EVEX.512.66.0F.W1 73 /2 ib VPSRLQ zmm1 {k1}{z}, zmm2/m512/m64bcst, imm8FV/VAVX512FShift quadwords in zmm2/m512/m64bcst right by imm8 while shifting in 0s using writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | N/A | ModRM:reg (r, w) | ModRM:r/m (r) | N/A | N/A
B | N/A | ModRM:r/m (r, w) | imm8 | N/A | N/A
C | N/A | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | N/A
D | N/A | VEX.vvvv (w) | ModRM:r/m (r) | imm8 | N/A
E | Full Mem | EVEX.vvvv (w) | ModRM:r/m (r) | imm8 | N/A
F | Full | EVEX.vvvv (w) | ModRM:r/m (r) | imm8 | N/A
G | Mem128 | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | N/A
+

Description + ¶ +

+

Shifts the bits in the individual data elements (words, doublewords, or quadword) in the destination operand (first operand) to the right by the number of bits specified in the count operand (second operand). As the bits in the data elements are shifted right, the empty high-order bits are cleared (set to 0). If the value specified by the count operand is greater than 15 (for words), 31 (for doublewords), or 63 (for a quadword), then the destination operand is set to all 0s. Figure 4-19 gives an example of shifting words in a 64-bit operand.
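For illustration only, a minimal C sketch of the word-granular case, assuming an SSE2-capable compiler; as described above, a count larger than 15 clears every word element.
/* Minimal sketch: PSRLW-style word shift via _mm_srli_epi16 (SSE2). */
#include <emmintrin.h>
#include <stdio.h>
int main(void) {
    __m128i v = _mm_set1_epi16((short)0xFFFF);                /* every word = 0xFFFF */
    unsigned short a[8], b[8];
    _mm_storeu_si128((__m128i *)a, _mm_srli_epi16(v, 3));     /* each word becomes 0x1FFF */
    _mm_storeu_si128((__m128i *)b, _mm_srli_epi16(v, 20));    /* count > 15: each word becomes 0 */
    printf("%04x %04x\n", a[0], b[0]);                        /* 1fff 0000 */
    return 0;
}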

+

Note that only the low 64-bits of a 128-bit count operand are checked to compute the count.
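For illustration only, a minimal C sketch of the register-count form, assuming an SSE2-capable compiler; the high quadword of the count register is ignored, per the note above.
/* Minimal sketch: PSRLD-style shift with a count in an XMM register (SSE2). */
#include <emmintrin.h>
#include <stdio.h>
int main(void) {
    __m128i v     = _mm_set1_epi32((int)0x80000000);
    __m128i count = _mm_set_epi64x(0x1234567812345678LL, 4);  /* high qword is ignored */
    __m128i r     = _mm_srl_epi32(v, count);                  /* each doubleword >> 4 */
    unsigned int out[4];
    _mm_storeu_si128((__m128i *)out, r);
    printf("%08x\n", out[0]);                                 /* 08000000 */
    return 0;
}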

+
Figure 4-19. PSRLW, PSRLD, and PSRLQ Instruction Operation Using 64-bit Operand (figure: each element Xn of DEST is replaced by Xn >> COUNT, with zeros shifted in)
+

The (V)PSRLW instruction shifts each of the words in the destination operand to the right by the number of bits specified in the count operand; the (V)PSRLD instruction shifts each of the doublewords in the destination operand; and the (V)PSRLQ instruction shifts the quadword (or quadwords) in the destination operand.

+

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

Legacy SSE instruction 64-bit operand: The destination operand is an MMX technology register; the count operand can be either an MMX technology register or a 64-bit memory location.

+

128-bit Legacy SSE version: The destination operand is an XMM register; the count operand can be either an XMM register or a 128-bit memory location, or an 8-bit immediate. If the count operand is a memory address, 128 bits are loaded but the upper 64 bits are ignored. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The destination operand is an XMM register; the count operand can be either an XMM register or a 128-bit memory location, or an 8-bit immediate. If the count operand is a memory address, 128 bits are loaded but the upper 64 bits are ignored. Bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

VEX.256 encoded version: The destination operand is a YMM register. The source operand is a YMM register or a memory location. The count operand can come either from an XMM register or a memory location or an 8-bit immediate. Bits (MAXVL-1:256) of the corresponding ZMM register are zeroed.

+

EVEX encoded versions: The destination operand is a ZMM register updated according to the writemask. The count operand is either an 8-bit immediate (the immediate count version) or an 8-bit value from an XMM register or a memory location (the variable count version). For the immediate count version, the source operand (the second operand) can be a ZMM register, a 512-bit memory location or a 512-bit vector broadcasted from a 32/64-bit memory location. For the variable count version, the first source operand (the second operand) is a ZMM register, the second source operand (the third operand, 8-bit variable count) can be an XMM register or a memory location.
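For illustration only, a minimal C sketch of an EVEX-style masked shift, assuming AVX-512F hardware and a compiler invoked with an option such as -mavx512f; the mask and data values here are arbitrary.
/* Minimal sketch: masked doubleword shift with zeroing-masking (AVX-512F). */
#include <immintrin.h>
#include <stdio.h>
int main(void) {
    __m512i   v = _mm512_set1_epi32(0x000000F0);
    __mmask16 k = 0x00FF;                             /* write only the low 8 elements */
    __m512i   r = _mm512_maskz_srli_epi32(k, v, 4);   /* masked-off lanes become 0 */
    unsigned int out[16];
    _mm512_storeu_si512(out, r);
    printf("%08x %08x\n", out[0], out[15]);           /* 0000000f 00000000 */
    return 0;
}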

+

Note: In VEX/EVEX encoded versions of shifts with an immediate count, vvvv of VEX/EVEX encodes the destination register, and VEX.B/EVEX.B + ModRM.r/m encodes the source register.

+

Note: For shifts with an immediate count (VEX.128.66.0F 71-73 /2, or EVEX.128.66.0F 71-73 /2), VEX.vvvv/EVEX.vvvv encodes the destination register.

+

Operation + ¶ +

+

PSRLW (With 64-bit Operand) + ¶ +

+
IF (COUNT > 15)
+THEN
+    DEST[63:0] := 0000000000000000H
+ELSE
+    DEST[15:0] := ZeroExtend(DEST[15:0] >> COUNT);
+    (* Repeat shift operation for 2nd and 3rd words *)
+    DEST[63:48] := ZeroExtend(DEST[63:48] >> COUNT);
+FI;
+
+

PSRLD (With 64-bit Operand) + ¶ +

+
IF (COUNT > 31)
+THEN
+    DEST[63:0] := 0000000000000000H
+ELSE
+    DEST[31:0] := ZeroExtend(DEST[31:0] >> COUNT);
+    DEST[63:32] := ZeroExtend(DEST[63:32] >> COUNT);
+FI;
+
+

PSRLQ (With 64-bit Operand) + ¶ +

+
    IF (COUNT > 63)
+    THEN
+        DEST[63:0] := 0000000000000000H
+    ELSE
+        DEST := ZeroExtend(DEST >> COUNT);
+    FI;
+LOGICAL_RIGHT_SHIFT_DWORDS1(SRC, COUNT_SRC)
+COUNT := COUNT_SRC[63:0];
+IF (COUNT > 31)
+THEN
+    DEST[31:0] := 0
+ELSE
+    DEST[31:0] := ZeroExtend(SRC[31:0] >> COUNT);
+FI;
+LOGICAL_RIGHT_SHIFT_QWORDS1(SRC, COUNT_SRC)
+COUNT := COUNT_SRC[63:0];
+IF (COUNT > 63)
+THEN
+    DEST[63:0] := 0
+ELSE
+    DEST[63:0] := ZeroExtend(SRC[63:0] >> COUNT);
+FI;
+LOGICAL_RIGHT_SHIFT_WORDS_256b(SRC, COUNT_SRC)
+COUNT := COUNT_SRC[63:0];
+IF (COUNT > 15)
+THEN
+    DEST[255:0] := 0
+ELSE
+    DEST[15:0] := ZeroExtend(SRC[15:0] >> COUNT);
+    (* Repeat shift operation for 2nd through 15th words *)
+    DEST[255:240] := ZeroExtend(SRC[255:240] >> COUNT);
+FI;
+LOGICAL_RIGHT_SHIFT_WORDS(SRC, COUNT_SRC)
+COUNT := COUNT_SRC[63:0];
+IF (COUNT > 15)
+THEN
+    DEST[127:0] := 00000000000000000000000000000000H
+ELSE
+    DEST[15:0] := ZeroExtend(SRC[15:0] >> COUNT);
+    (* Repeat shift operation for 2nd through 7th words *)
+    DEST[127:112] := ZeroExtend(SRC[127:112] >> COUNT);
+FI;
+LOGICAL_RIGHT_SHIFT_DWORDS_256b(SRC, COUNT_SRC)
+COUNT := COUNT_SRC[63:0];
+IF (COUNT > 31)
+THEN
+    DEST[255:0] := 0
+ELSE
+    DEST[31:0] := ZeroExtend(SRC[31:0] >> COUNT);
+    (* Repeat shift operation for 2nd through 7th doublewords *)
+    DEST[255:224] := ZeroExtend(SRC[255:224] >> COUNT);
+FI;
+LOGICAL_RIGHT_SHIFT_DWORDS(SRC, COUNT_SRC)
+COUNT := COUNT_SRC[63:0];
+IF (COUNT > 31)
+THEN
+    DEST[127:0] := 00000000000000000000000000000000H
+ELSE
+    DEST[31:0] := ZeroExtend(SRC[31:0] >> COUNT);
+    (* Repeat shift operation for 2nd and 3rd doublewords *)
+    DEST[127:96] := ZeroExtend(SRC[127:96] >> COUNT);
+FI;
+LOGICAL_RIGHT_SHIFT_QWORDS_256b(SRC, COUNT_SRC)
+COUNT := COUNT_SRC[63:0];
+IF (COUNT > 63)
+THEN
+    DEST[255:0] := 0
+ELSE
+    DEST[63:0] := ZeroExtend(SRC[63:0] >> COUNT);
+    DEST[127:64] := ZeroExtend(SRC[127:64] >> COUNT);
+    DEST[191:128] := ZeroExtend(SRC[191:128] >> COUNT);
+    DEST[255:192] := ZeroExtend(SRC[255:192] >> COUNT);
+FI;
+LOGICAL_RIGHT_SHIFT_QWORDS(SRC, COUNT_SRC)
+COUNT := COUNT_SRC[63:0];
+IF (COUNT > 63)
+THEN
+    DEST[127:0] := 00000000000000000000000000000000H
+ELSE
+    DEST[63:0] := ZeroExtend(SRC[63:0] >> COUNT);
+    DEST[127:64] := ZeroExtend(SRC[127:64] >> COUNT);
+FI;
+
+

VPSRLW (EVEX Versions, xmm/m128) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+IF VL = 128
+    TMP_DEST[127:0] := LOGICAL_RIGHT_SHIFT_WORDS_128b(SRC1[127:0], SRC2)
+FI;
+IF VL = 256
+    TMP_DEST[255:0] := LOGICAL_RIGHT_SHIFT_WORDS_256b(SRC1[255:0], SRC2)
+FI;
+IF VL = 512
+    TMP_DEST[255:0] := LOGICAL_RIGHT_SHIFT_WORDS_256b(SRC1[255:0], SRC2)
+    TMP_DEST[511:256] := LOGICAL_RIGHT_SHIFT_WORDS_256b(SRC1[511:256], SRC2)
+FI;
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := TMP_DEST[i+15:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+15:i] = 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPSRLW (EVEX Versions, imm8) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+IF VL = 128
+    TMP_DEST[127:0] := LOGICAL_RIGHT_SHIFT_WORDS_128b(SRC1[127:0], imm8)
+FI;
+IF VL = 256
+    TMP_DEST[255:0] := LOGICAL_RIGHT_SHIFT_WORDS_256b(SRC1[255:0], imm8)
+FI;
+IF VL = 512
+    TMP_DEST[255:0] := LOGICAL_RIGHT_SHIFT_WORDS_256b(SRC1[255:0], imm8)
+    TMP_DEST[511:256] := LOGICAL_RIGHT_SHIFT_WORDS_256b(SRC1[511:256], imm8)
+FI;
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := TMP_DEST[i+15:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+15:i] = 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPSRLW (ymm, ymm, xmm/m128) - VEX.256 Encoding + ¶ +

+
DEST[255:0] := LOGICAL_RIGHT_SHIFT_WORDS_256b(SRC1, SRC2)
+DEST[MAXVL-1:256] := 0;
+
+

VPSRLW (ymm, imm8) - VEX.256 Encoding + ¶ +

+
DEST[255:0] := LOGICAL_RIGHT_SHIFT_WORDS_256b(SRC1, imm8)
+DEST[MAXVL-1:256] := 0;
+
+

VPSRLW (xmm, xmm, xmm/m128) - VEX.128 Encoding + ¶ +

+
DEST[127:0] := LOGICAL_RIGHT_SHIFT_WORDS(SRC1, SRC2)
+DEST[MAXVL-1:128] := 0
+
+

VPSRLW (xmm, imm8) - VEX.128 Encoding + ¶ +

+
DEST[127:0] := LOGICAL_RIGHT_SHIFT_WORDS(SRC1, imm8)
+DEST[MAXVL-1:128] := 0
+
+

PSRLW (xmm, xmm, xmm/m128) + ¶ +

+
DEST[127:0] := LOGICAL_RIGHT_SHIFT_WORDS(DEST, SRC)
+DEST[MAXVL-1:128] (Unmodified)
+
+

PSRLW (xmm, imm8) + ¶ +

+
DEST[127:0] := LOGICAL_RIGHT_SHIFT_WORDS(DEST, imm8)
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPSRLD (EVEX Versions, xmm/m128) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF VL = 128
+    TMP_DEST[127:0] := LOGICAL_RIGHT_SHIFT_DWORDS_128b(SRC1[127:0], SRC2)
+FI;
+IF VL = 256
+    TMP_DEST[255:0] := LOGICAL_RIGHT_SHIFT_DWORDS_256b(SRC1[255:0], SRC2)
+FI;
+IF VL = 512
+    TMP_DEST[255:0] := LOGICAL_RIGHT_SHIFT_DWORDS_256b(SRC1[255:0], SRC2)
+    TMP_DEST[511:256] := LOGICAL_RIGHT_SHIFT_DWORDS_256b(SRC1[511:256], SRC2)
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPSRLD (EVEX Versions, imm8) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC1 *is memory*)
+                THEN DEST[i+31:i] := LOGICAL_RIGHT_SHIFT_DWORDS1(SRC1[31:0], imm8)
+                ELSE DEST[i+31:i] := LOGICAL_RIGHT_SHIFT_DWORDS1(SRC1[i+31:i], imm8)
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPSRLD (ymm, ymm, xmm/m128) - VEX.256 Encoding + ¶ +

+
DEST[255:0] := LOGICAL_RIGHT_SHIFT_DWORDS_256b(SRC1, SRC2)
+DEST[MAXVL-1:256] := 0;
+
+

VPSRLD (ymm, imm8) - VEX.256 Encoding + ¶ +

+
DEST[255:0] := LOGICAL_RIGHT_SHIFT_DWORDS_256b(SRC1, imm8)
+DEST[MAXVL-1:256] := 0;
+
+

VPSRLD (xmm, xmm, xmm/m128) - VEX.128 Encoding + ¶ +

+
DEST[127:0] := LOGICAL_RIGHT_SHIFT_DWORDS(SRC1, SRC2)
+DEST[MAXVL-1:128] := 0
+
+

VPSRLD (xmm, imm8) - VEX.128 Encoding + ¶ +

+
DEST[127:0] := LOGICAL_RIGHT_SHIFT_DWORDS(SRC1, imm8)
+DEST[MAXVL-1:128] := 0
+
+

PSRLD (xmm, xmm, xmm/m128) + ¶ +

+
DEST[127:0] := LOGICAL_RIGHT_SHIFT_DWORDS(DEST, SRC)
+DEST[MAXVL-1:128] (Unmodified)
+
+

PSRLD (xmm, imm8) + ¶ +

+
DEST[127:0] := LOGICAL_RIGHT_SHIFT_DWORDS(DEST, imm8)
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPSRLQ (EVEX Versions, xmm/m128) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF VL = 128
+    TMP_DEST[127:0] := LOGICAL_RIGHT_SHIFT_QWORDS_128b(SRC1[127:0], SRC2)
+FI;
+IF VL = 256
+    TMP_DEST[255:0] := LOGICAL_RIGHT_SHIFT_QWORDS_256b(SRC1[255:0], SRC2)
+FI;
+IF VL = 512
+    TMP_DEST[255:0] := LOGICAL_RIGHT_SHIFT_QWORDS_256b(SRC1[255:0], SRC2)
+    TMP_DEST[511:256] := LOGICAL_RIGHT_SHIFT_QWORDS_256b(SRC1[511:256], SRC2)
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPSRLQ (EVEX Versions, imm8) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC1 *is memory*)
+                THEN DEST[i+63:i] := LOGICAL_RIGHT_SHIFT_QWORDS1(SRC1[63:0], imm8)
+                ELSE DEST[i+63:i] := LOGICAL_RIGHT_SHIFT_QWORDS1(SRC1[i+63:i], imm8)
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPSRLQ (ymm, ymm, xmm/m128) - VEX.256 Encoding + ¶ +

+
DEST[255:0] := LOGICAL_RIGHT_SHIFT_QWORDS_256b(SRC1, SRC2)
+DEST[MAXVL-1:256] := 0;
+
+

VPSRLQ (ymm, imm8) - VEX.256 Encoding + ¶ +

+
DEST[255:0] := LOGICAL_RIGHT_SHIFT_QWORDS_256b(SRC1, imm8)
+DEST[MAXVL-1:256] := 0;
+
+

VPSRLQ (xmm, xmm, xmm/m128) - VEX.128 Encoding + ¶ +

+
DEST[127:0] := LOGICAL_RIGHT_SHIFT_QWORDS(SRC1, SRC2)
+DEST[MAXVL-1:128] := 0
+
+

VPSRLQ (xmm, imm8) - VEX.128 Encoding + ¶ +

+
DEST[127:0] := LOGICAL_RIGHT_SHIFT_QWORDS(SRC1, imm8)
+DEST[MAXVL-1:128] := 0
+
+

PSRLQ (xmm, xmm, xmm/m128) + ¶ +

+
DEST[127:0] := LOGICAL_RIGHT_SHIFT_QWORDS(DEST, SRC)
+DEST[MAXVL-1:128] (Unmodified)
+
+

PSRLQ (xmm, imm8) + ¶ +

+
DEST[127:0] := LOGICAL_RIGHT_SHIFT_QWORDS(DEST, imm8)
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPSRLD __m512i _mm512_srli_epi32(__m512i a, unsigned int imm);
+
+
VPSRLD __m512i _mm512_mask_srli_epi32(__m512i s, __mmask16 k, __m512i a, unsigned int imm);
+
+
VPSRLD __m512i _mm512_maskz_srli_epi32( __mmask16 k, __m512i a, unsigned int imm);
+
+
VPSRLD __m256i _mm256_mask_srli_epi32(__m256i s, __mmask8 k, __m256i a, unsigned int imm);
+
+
VPSRLD __m256i _mm256_maskz_srli_epi32( __mmask8 k, __m256i a, unsigned int imm);
+
+
VPSRLD __m128i _mm_mask_srli_epi32(__m128i s, __mmask8 k, __m128i a, unsigned int imm);
+
+
VPSRLD __m128i _mm_maskz_srli_epi32( __mmask8 k, __m128i a, unsigned int imm);
+
+
VPSRLD __m512i _mm512_srl_epi32(__m512i a, __m128i cnt);
+
+
VPSRLD __m512i _mm512_mask_srl_epi32(__m512i s, __mmask16 k, __m512i a, __m128i cnt);
+
+
VPSRLD __m512i _mm512_maskz_srl_epi32( __mmask16 k, __m512i a, __m128i cnt);
+
+
VPSRLD __m256i _mm256_mask_srl_epi32(__m256i s, __mmask8 k, __m256i a, __m128i cnt);
+
+
VPSRLD __m256i _mm256_maskz_srl_epi32( __mmask8 k, __m256i a, __m128i cnt);
+
+
VPSRLD __m128i _mm_mask_srl_epi32(__m128i s, __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSRLD __m128i _mm_maskz_srl_epi32( __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSRLQ __m512i _mm512_srli_epi64(__m512i a, unsigned int imm);
+
+
VPSRLQ __m512i _mm512_mask_srli_epi64(__m512i s, __mmask8 k, __m512i a, unsigned int imm);
+
+
VPSRLQ __m512i _mm512_maskz_srli_epi64( __mmask8 k, __m512i a, unsigned int imm);
+
+
VPSRLQ __m256i _mm256_mask_srli_epi64(__m256i s, __mmask8 k, __m256i a, unsigned int imm);
+
+
VPSRLQ __m256i _mm256_maskz_srli_epi64( __mmask8 k, __m256i a, unsigned int imm);
+
+
VPSRLQ __m128i _mm_mask_srli_epi64(__m128i s, __mmask8 k, __m128i a, unsigned int imm);
+
+
VPSRLQ __m128i _mm_maskz_srli_epi64( __mmask8 k, __m128i a, unsigned int imm);
+
+
VPSRLQ __m512i _mm512_srl_epi64(__m512i a, __m128i cnt);
+
+
VPSRLQ __m512i _mm512_mask_srl_epi64(__m512i s, __mmask8 k, __m512i a, __m128i cnt);
+
+
VPSRLQ __m512i _mm512_maskz_srl_epi64( __mmask8 k, __m512i a, __m128i cnt);
+
+
VPSRLQ __m256i _mm256_mask_srl_epi64(__m256i s, __mmask8 k, __m256i a, __m128i cnt);
+
+
VPSRLQ __m256i _mm256_maskz_srl_epi64( __mmask8 k, __m256i a, __m128i cnt);
+
+
VPSRLQ __m128i _mm_mask_srl_epi64(__m128i s, __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSRLQ __m128i _mm_maskz_srl_epi64( __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSRLW __m512i _mm512_srli_epi16(__m512i a, unsigned int imm);
+
+
VPSRLW __m512i _mm512_mask_srli_epi16(__m512i s, __mmask32 k, __m512i a, unsigned int imm);
+
+
VPSRLW __m512i _mm512_maskz_srli_epi16( __mmask32 k, __m512i a, unsigned int imm);
+
+
VPSRLW __m256i _mm256_mask_srli_epi16(__m256i s, __mmask16 k, __m256i a, unsigned int imm);
+
+
VPSRLW __m256i _mm256_maskz_srli_epi16( __mmask16 k, __m256i a, unsigned int imm);
+
+
VPSRLW __m128i _mm_mask_srli_epi16(__m128i s, __mmask8 k, __m128i a, unsigned int imm);
+
+
VPSRLW __m128i _mm_maskz_srli_epi16( __mmask8 k, __m128i a, unsigned int imm);
+
+
VPSRLW __m512i _mm512_srl_epi16(__m512i a, __m128i cnt);
+
+
VPSRLW __m512i _mm512_mask_srl_epi16(__m512i s, __mmask32 k, __m512i a, __m128i cnt);
+
+
VPSRLW __m512i _mm512_maskz_srl_epi16( __mmask32 k, __m512i a, __m128i cnt);
+
+
VPSRLW __m256i _mm256_mask_srl_epi16(__m256i s, __mmask16 k, __m256i a, __m128i cnt);
+
+
VPSRLW __m256i _mm256_maskz_srl_epi16( __mmask16 k, __m256i a, __m128i cnt);
+
+
VPSRLW __m128i _mm_mask_srl_epi16(__m128i s, __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSRLW __m128i _mm_maskz_srl_epi16( __mmask8 k, __m128i a, __m128i cnt);
+
+
PSRLW __m64 _mm_srli_pi16(__m64 m, int count)
+
+
PSRLW __m64 _mm_srl_pi16 (__m64 m, __m64 count)
+
+
(V)PSRLW __m128i _mm_srli_epi16 (__m128i m, int count)
+
+
(V)PSRLW __m128i _mm_srl_epi16 (__m128i m, __m128i count)
+
+
VPSRLW __m256i _mm256_srli_epi16 (__m256i m, int count)
+
+
VPSRLW __m256i _mm256_srl_epi16 (__m256i m, __m128i count)
+
+
PSRLD __m64 _mm_srli_pi32 (__m64 m, int count)
+
+
PSRLD __m64 _mm_srl_pi32 (__m64 m, __m64 count)
+
+
(V)PSRLD __m128i _mm_srli_epi32 (__m128i m, int count)
+
+
(V)PSRLD __m128i _mm_srl_epi32 (__m128i m, __m128i count)
+
+
VPSRLD __m256i _mm256_srli_epi32 (__m256i m, int count)
+
+
VPSRLD __m256i _mm256_srl_epi32 (__m256i m, __m128i count)
+
+
PSRLQ __m64 _mm_srli_si64 (__m64 m, int count)
+
+
PSRLQ __m64 _mm_srl_si64 (__m64 m, __m64 count)
+
+
(V)PSRLQ __m128i _mm_srli_epi64 (__m128i m, int count)
+
+
(V)PSRLQ __m128i _mm_srl_epi64 (__m128i m, __m128i count)
+
+
VPSRLQ __m256i _mm256_srli_epi64 (__m256i m, int count)
+
+
VPSRLQ __m256i _mm256_srl_epi64 (__m256i m, __m128i count)
+
+

Flags Affected + ¶ +

+

None.

+

Numeric Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+
    +
  • VEX-encoded instructions:
    • Syntax with RM/RVM operand encoding (A/C in the operand encoding table), see Table 2-21, “Type 4 Class Exception Conditions.”
    • Syntax with MI/VMI operand encoding (B/D in the operand encoding table), see Table 2-24, “Type 7 Class Exception Conditions.”
  • EVEX-encoded VPSRLW (E in the operand encoding table), see Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”
  • EVEX-encoded VPSRLD/Q:
    • Syntax with Mem128 tuple type (G in the operand encoding table), see Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”
    • Syntax with Full tuple type (F in the operand encoding table), see Table 2-49, “Type E4 Class Exception Conditions.”
diff --git a/x86/psubb.psubw.psubd.html b/x86/psubb.psubw.psubd.html new file mode 100644 index 0000000..04a5e9b --- /dev/null +++ b/x86/psubb.psubw.psubd.html @@ -0,0 +1,524 @@ + +PSUBB/PSUBW/PSUBD + — Subtract Packed Integers

PSUBB/PSUBW/PSUBD + — Subtract Packed Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F F8 /r1 PSUBB mm, mm/m64AV/VMMXSubtract packed byte integers in mm/m64 from packed byte integers in mm.
66 0F F8 /r PSUBB xmm1, xmm2/m128AV/VSSE2Subtract packed byte integers in xmm2/m128 from packed byte integers in xmm1.
NP 0F F9 /r1 PSUBW mm, mm/m64AV/VMMXSubtract packed word integers in mm/m64 from packed word integers in mm.
66 0F F9 /r PSUBW xmm1, xmm2/m128AV/VSSE2Subtract packed word integers in xmm2/m128 from packed word integers in xmm1.
NP 0F FA /r1 PSUBD mm, mm/m64AV/VMMXSubtract packed doubleword integers in mm/m64 from packed doubleword integers in mm.
66 0F FA /r PSUBD xmm1, xmm2/m128AV/VSSE2Subtract packed doubleword integers in xmm2/m128 from packed doubleword integers in xmm1.
VEX.128.66.0F.WIG F8 /r VPSUBB xmm1, xmm2, xmm3/m128BV/VAVXSubtract packed byte integers in xmm3/m128 from xmm2.
VEX.128.66.0F.WIG F9 /r VPSUBW xmm1, xmm2, xmm3/m128BV/VAVXSubtract packed word integers in xmm3/m128 from xmm2.
VEX.128.66.0F.WIG FA /r VPSUBD xmm1, xmm2, xmm3/m128BV/VAVXSubtract packed doubleword integers in xmm3/m128 from xmm2.
VEX.256.66.0F.WIG F8 /r VPSUBB ymm1, ymm2, ymm3/m256BV/VAVX2Subtract packed byte integers in ymm3/m256 from ymm2.
VEX.256.66.0F.WIG F9 /r VPSUBW ymm1, ymm2, ymm3/m256BV/VAVX2Subtract packed word integers in ymm3/m256 from ymm2.
VEX.256.66.0F.WIG FA /r VPSUBD ymm1, ymm2, ymm3/m256BV/VAVX2Subtract packed doubleword integers in ymm3/m256 from ymm2.
EVEX.128.66.0F.WIG F8 /r VPSUBB xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWSubtract packed byte integers in xmm3/m128 from xmm2 and store in xmm1 using writemask k1.
EVEX.256.66.0F.WIG F8 /r VPSUBB ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWSubtract packed byte integers in ymm3/m256 from ymm2 and store in ymm1 using writemask k1.
EVEX.512.66.0F.WIG F8 /r VPSUBB zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWSubtract packed byte integers in zmm3/m512 from zmm2 and store in zmm1 using writemask k1.
EVEX.128.66.0F.WIG F9 /r VPSUBW xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWSubtract packed word integers in xmm3/m128 from xmm2 and store in xmm1 using writemask k1.
EVEX.256.66.0F.WIG F9 /r VPSUBW ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWSubtract packed word integers in ymm3/m256 from ymm2 and store in ymm1 using writemask k1.
EVEX.512.66.0F.WIG F9 /r VPSUBW zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWSubtract packed word integers in zmm3/m512 from zmm2 and store in zmm1 using writemask k1.
EVEX.128.66.0F.W0 FA /r VPSUBD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstDV/VAVX512VL AVX512FSubtract packed doubleword integers in xmm3/m128/m32bcst from xmm2 and store in xmm1 using writemask k1.
EVEX.256.66.0F.W0 FA /r VPSUBD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstDV/VAVX512VL AVX512FSubtract packed doubleword integers in ymm3/m256/m32bcst from ymm2 and store in ymm1 using writemask k1.
EVEX.512.66.0F.W0 FA /r VPSUBD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstDV/VAVX512FSubtract packed doubleword integers in zmm3/m512/m32bcst from zmm2 and store in zmm1 using writemask k1
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | N/A | ModRM:reg (r, w) | ModRM:r/m (r) | N/A | N/A
B | N/A | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | N/A
C | Full Mem | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | N/A
D | Full | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | N/A
+

Description + ¶ +

+

Performs a SIMD subtract of the packed integers of the source operand (second operand) from the packed integers of the destination operand (first operand), and stores the packed integer results in the destination operand. See Figure 9-4 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for an illustration of a SIMD operation. Overflow is handled with wraparound, as described in the following paragraphs.

+

The (V)PSUBB instruction subtracts packed byte integers. When an individual result is too large or too small to be represented in a byte, the result is wrapped around and the low 8 bits are written to the destination element.

+

The (V)PSUBW instruction subtracts packed word integers. When an individual result is too large or too small to be represented in a word, the result is wrapped around and the low 16 bits are written to the destination element.

+

The (V)PSUBD instruction subtracts packed doubleword integers. When an individual result is too large or too small to be represented in a doubleword, the result is wrapped around and the low 32 bits are written to the destination element.

+

Note that the (V)PSUBB, (V)PSUBW, and (V)PSUBD instructions can operate on either unsigned or signed (two's complement notation) packed integers; however, they do not set bits in the EFLAGS register to indicate overflow and/or a carry. To prevent undetected overflow conditions, software must control the ranges of values upon which it operates.
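For illustration only, a minimal C sketch of the wraparound behavior, assuming an SSE2-capable compiler.
/* Minimal sketch: PSUBB-style byte subtract wraps on overflow (SSE2). */
#include <emmintrin.h>
#include <stdio.h>
int main(void) {
    __m128i a = _mm_set1_epi8(1);
    __m128i b = _mm_set1_epi8(2);
    __m128i r = _mm_sub_epi8(a, b);      /* 1 - 2 wraps to 0xFF in every byte lane */
    unsigned char out[16];
    _mm_storeu_si128((__m128i *)out, r);
    printf("%02x\n", out[0]);            /* ff */
    return 0;
}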

+

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

Legacy SSE version 64-bit operand: The destination operand must be an MMX technology register and the source operand can be either an MMX technology register or a 64-bit memory location.

+

128-bit Legacy SSE version: The second source operand is an XMM register or a 128-bit memory location. The first source operand and destination operands are XMM registers. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The second source operand is an XMM register or a 128-bit memory location. The first source operand and destination operands are XMM registers. Bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

VEX.256 encoded versions: The second source operand is a YMM register or a 256-bit memory location. The first source operand and destination operands are YMM registers. Bits (MAXVL-1:256) of the corresponding ZMM register are zeroed.

+

EVEX encoded VPSUBD: The second source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The first source operand and destination operands are ZMM/YMM/XMM registers. The destination is conditionally updated with writemask k1.
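For illustration only, a minimal C sketch of merging-masking with VPSUBD, assuming AVX-512F hardware and a compiler option such as -mavx512f; the mask and data values are arbitrary.
/* Minimal sketch: masked doubleword subtract with merging-masking (AVX-512F). */
#include <immintrin.h>
#include <stdio.h>
int main(void) {
    __m512i   src = _mm512_set1_epi32(100);    /* value kept in masked-off lanes */
    __m512i   a   = _mm512_set1_epi32(50);
    __m512i   b   = _mm512_set1_epi32(8);
    __mmask16 k   = 0x0001;                    /* update element 0 only */
    __m512i   r   = _mm512_mask_sub_epi32(src, k, a, b);
    int out[16];
    _mm512_storeu_si512(out, r);
    printf("%d %d\n", out[0], out[1]);         /* 42 100 */
    return 0;
}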

+

EVEX encoded VPSUBB/W: The second source operand is a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The first source operand and destination operands are ZMM/YMM/XMM registers. The destination is conditionally updated with writemask k1.

+

Operation + ¶ +

+

PSUBB (With 64-bit Operands) + ¶ +

+
DEST[7:0] := DEST[7:0] − SRC[7:0];
+(* Repeat subtract operation for 2nd through 7th bytes *)
+DEST[63:56] := DEST[63:56] − SRC[63:56];
+
+

PSUBW (With 64-bit Operands) + ¶ +

+
DEST[15:0] := DEST[15:0] − SRC[15:0];
+(* Repeat subtract operation for 2nd and 3rd word *)
+DEST[63:48] := DEST[63:48] − SRC[63:48];
+
+

PSUBD (With 64-bit Operands) + ¶ +

+
DEST[31:0] := DEST[31:0] − SRC[31:0];
+DEST[63:32] := DEST[63:32] − SRC[63:32];
+
+

PSUBD (With 128-bit Operands) + ¶ +

+
DEST[31:0] := DEST[31:0] − SRC[31:0];
+(* Repeat subtract operation for 2nd and 3rd doubleword *)
+DEST[127:96] := DEST[127:96] − SRC[127:96];
+
+

VPSUBB (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := SRC1[i+7:i] - SRC2[i+7:i]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+7:i] = 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

VPSUBW (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := SRC1[i+15:i] - SRC2[i+15:i]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+15:i] = 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

VPSUBD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[i+31:i] := SRC1[i+31:i] - SRC2[31:0]
+                ELSE DEST[i+31:i] := SRC1[i+31:i] - SRC2[i+31:i]
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

VPSUBB (VEX.256 Encoded Version) + ¶ +

+
DEST[7:0] := SRC1[7:0]-SRC2[7:0]
+DEST[15:8] := SRC1[15:8]-SRC2[15:8]
+DEST[23:16] := SRC1[23:16]-SRC2[23:16]
+DEST[31:24] := SRC1[31:24]-SRC2[31:24]
+DEST[39:32] := SRC1[39:32]-SRC2[39:32]
+DEST[47:40] := SRC1[47:40]-SRC2[47:40]
+DEST[55:48] := SRC1[55:48]-SRC2[55:48]
+DEST[63:56] := SRC1[63:56]-SRC2[63:56]
+DEST[71:64] := SRC1[71:64]-SRC2[71:64]
+DEST[79:72] := SRC1[79:72]-SRC2[79:72]
+DEST[87:80] := SRC1[87:80]-SRC2[87:80]
+DEST[95:88] := SRC1[95:88]-SRC2[95:88]
+DEST[103:96] := SRC1[103:96]-SRC2[103:96]
+DEST[111:104] := SRC1[111:104]-SRC2[111:104]
+DEST[119:112] := SRC1[119:112]-SRC2[119:112]
+DEST[127:120] := SRC1[127:120]-SRC2[127:120]
+DEST[135:128] := SRC1[135:128]-SRC2[135:128]
+DEST[143:136] := SRC1[143:136]-SRC2[143:136]
+DEST[151:144] := SRC1[151:144]-SRC2[151:144]
+DEST[159:152] := SRC1[159:152]-SRC2[159:152]
+DEST[167:160] := SRC1[167:160]-SRC2[167:160]
+DEST[175:168] := SRC1[175:168]-SRC2[175:168]
+DEST[183:176] := SRC1[183:176]-SRC2[183:176]
+DEST[191:184] := SRC1[191:184]-SRC2[191:184]
+DEST[199:192] := SRC1[199:192]-SRC2[199:192]
+DEST[207:200] := SRC1[207:200]-SRC2[207:200]
+DEST[215:208] := SRC1[215:208]-SRC2[215:208]
+DEST[223:216] := SRC1[223:216]-SRC2[223:216]
+DEST[231:224] := SRC1[231:224]-SRC2[231:224]
+DEST[239:232] := SRC1[239:232]-SRC2[239:232]
+DEST[247:240] := SRC1[247:240]-SRC2[247:240]
+DEST[255:248] := SRC1[255:248]-SRC2[255:248]
+DEST[MAXVL-1:256] := 0
+
+

VPSUBB (VEX.128 Encoded Version) + ¶ +

+
DEST[7:0] := SRC1[7:0]-SRC2[7:0]
+DEST[15:8] := SRC1[15:8]-SRC2[15:8]
+DEST[23:16] := SRC1[23:16]-SRC2[23:16]
+DEST[31:24] := SRC1[31:24]-SRC2[31:24]
+DEST[39:32] := SRC1[39:32]-SRC2[39:32]
+DEST[47:40] := SRC1[47:40]-SRC2[47:40]
+DEST[55:48] := SRC1[55:48]-SRC2[55:48]
+DEST[63:56] := SRC1[63:56]-SRC2[63:56]
+DEST[71:64] := SRC1[71:64]-SRC2[71:64]
+DEST[79:72] := SRC1[79:72]-SRC2[79:72]
+DEST[87:80] := SRC1[87:80]-SRC2[87:80]
+DEST[95:88] := SRC1[95:88]-SRC2[95:88]
+DEST[103:96] := SRC1[103:96]-SRC2[103:96]
+DEST[111:104] := SRC1[111:104]-SRC2[111:104]
+DEST[119:112] := SRC1[119:112]-SRC2[119:112]
+DEST[127:120] := SRC1[127:120]-SRC2[127:120]
+DEST[MAXVL-1:128] := 0
+
+

PSUBB (128-bit Legacy SSE Version) + ¶ +

+
DEST[7:0] := DEST[7:0]-SRC[7:0]
+DEST[15:8] := DEST[15:8]-SRC[15:8]
+DEST[23:16] := DEST[23:16]-SRC[23:16]
+DEST[31:24] := DEST[31:24]-SRC[31:24]
+DEST[39:32] := DEST[39:32]-SRC[39:32]
+DEST[47:40] := DEST[47:40]-SRC[47:40]
+DEST[55:48] := DEST[55:48]-SRC[55:48]
+DEST[63:56] := DEST[63:56]-SRC[63:56]
+DEST[71:64] := DEST[71:64]-SRC[71:64]
+DEST[79:72] := DEST[79:72]-SRC[79:72]
+DEST[87:80] := DEST[87:80]-SRC[87:80]
+DEST[95:88] := DEST[95:88]-SRC[95:88]
+DEST[103:96] := DEST[103:96]-SRC[103:96]
+DEST[111:104] := DEST[111:104]-SRC[111:104]
+DEST[119:112] := DEST[119:112]-SRC[119:112]
+DEST[127:120] := DEST[127:120]-SRC[127:120]
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPSUBW (VEX.256 Encoded Version) + ¶ +

+
DEST[15:0] := SRC1[15:0]-SRC2[15:0]
+DEST[31:16] := SRC1[31:16]-SRC2[31:16]
+DEST[47:32] := SRC1[47:32]-SRC2[47:32]
+DEST[63:48] := SRC1[63:48]-SRC2[63:48]
+DEST[79:64] := SRC1[79:64]-SRC2[79:64]
+DEST[95:80] := SRC1[95:80]-SRC2[95:80]
+DEST[111:96] := SRC1[111:96]-SRC2[111:96]
+DEST[127:112] := SRC1[127:112]-SRC2[127:112]
+DEST[143:128] := SRC1[143:128]-SRC2[143:128]
+DEST[159:144] := SRC1[159:144]-SRC2[159:144]
+DEST[175:160] := SRC1[175:160]-SRC2[175:160]
+DEST[191:176] := SRC1[191:176]-SRC2[191:176]
+DEST[207:192] := SRC1[207:192]-SRC2[207:192]
+DEST[223:208] := SRC1[223:208]-SRC2[223:208]
+DEST[239:224] := SRC1[239:224]-SRC2[239:224]
+DEST[255:240] := SRC1[255:240]-SRC2[255:240]
+DEST[MAXVL-1:256] := 0
+
+

VPSUBW (VEX.128 Encoded Version) + ¶ +

+
DEST[15:0] := SRC1[15:0]-SRC2[15:0]
+DEST[31:16] := SRC1[31:16]-SRC2[31:16]
+DEST[47:32] := SRC1[47:32]-SRC2[47:32]
+DEST[63:48] := SRC1[63:48]-SRC2[63:48]
+DEST[79:64] := SRC1[79:64]-SRC2[79:64]
+DEST[95:80] := SRC1[95:80]-SRC2[95:80]
+DEST[111:96] := SRC1[111:96]-SRC2[111:96]
+DEST[127:112] := SRC1[127:112]-SRC2[127:112]
+DEST[MAXVL-1:128] := 0
+
+

PSUBW (128-bit Legacy SSE Version) + ¶ +

+
DEST[15:0] := DEST[15:0]-SRC[15:0]
+DEST[31:16] := DEST[31:16]-SRC[31:16]
+DEST[47:32] := DEST[47:32]-SRC[47:32]
+DEST[63:48] := DEST[63:48]-SRC[63:48]
+DEST[79:64] := DEST[79:64]-SRC[79:64]
+DEST[95:80] := DEST[95:80]-SRC[95:80]
+DEST[111:96] := DEST[111:96]-SRC[111:96]
+DEST[127:112] := DEST[127:112]-SRC[127:112]
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPSUBD (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0]-SRC2[31:0]
+DEST[63:32] := SRC1[63:32]-SRC2[63:32]
+DEST[95:64] := SRC1[95:64]-SRC2[95:64]
+DEST[127:96] := SRC1[127:96]-SRC2[127:96]
+DEST[159:128] := SRC1[159:128]-SRC2[159:128]
+DEST[191:160] := SRC1[191:160]-SRC2[191:160]
+DEST[223:192] := SRC1[223:192]-SRC2[223:192]
+DEST[255:224] := SRC1[255:224]-SRC2[255:224]
+DEST[MAXVL-1:256] := 0
+
+

VPSUBD (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0]-SRC2[31:0]
+DEST[63:32] := SRC1[63:32]-SRC2[63:32]
+DEST[95:64] := SRC1[95:64]-SRC2[95:64]
+DEST[127:96] := SRC1[127:96]-SRC2[127:96]
+DEST[MAXVL-1:128] := 0
+
+

PSUBD (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := DEST[31:0]-SRC[31:0]
+DEST[63:32] := DEST[63:32]-SRC[63:32]
+DEST[95:64] := DEST[95:64]-SRC[95:64]
+DEST[127:96] := DEST[127:96]-SRC[127:96]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPSUBB __m512i _mm512_sub_epi8(__m512i a, __m512i b);
+
+
VPSUBB __m512i _mm512_mask_sub_epi8(__m512i s, __mmask64 k, __m512i a, __m512i b);
+
+
VPSUBB __m512i _mm512_maskz_sub_epi8( __mmask64 k, __m512i a, __m512i b);
+
+
VPSUBB __m256i _mm256_mask_sub_epi8(__m256i s, __mmask32 k, __m256i a, __m256i b);
+
+
VPSUBB __m256i _mm256_maskz_sub_epi8( __mmask32 k, __m256i a, __m256i b);
+
+
VPSUBB __m128i _mm_mask_sub_epi8(__m128i s, __mmask16 k, __m128i a, __m128i b);
+
+
VPSUBB __m128i _mm_maskz_sub_epi8( __mmask16 k, __m128i a, __m128i b);
+
+
VPSUBW __m512i _mm512_sub_epi16(__m512i a, __m512i b);
+
+
VPSUBW __m512i _mm512_mask_sub_epi16(__m512i s, __mmask32 k, __m512i a, __m512i b);
+
+
VPSUBW __m512i _mm512_maskz_sub_epi16( __mmask32 k, __m512i a, __m512i b);
+
+
VPSUBW __m256i _mm256_mask_sub_epi16(__m256i s, __mmask16 k, __m256i a, __m256i b);
+
+
VPSUBW __m256i _mm256_maskz_sub_epi16( __mmask16 k, __m256i a, __m256i b);
+
+
VPSUBW __m128i _mm_mask_sub_epi16(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPSUBW __m128i _mm_maskz_sub_epi16( __mmask8 k, __m128i a, __m128i b);
+
+
VPSUBD __m512i _mm512_sub_epi32(__m512i a, __m512i b);
+
+
VPSUBD __m512i _mm512_mask_sub_epi32(__m512i s, __mmask16 k, __m512i a, __m512i b);
+
+
VPSUBD __m512i _mm512_maskz_sub_epi32( __mmask16 k, __m512i a, __m512i b);
+
+
VPSUBD __m256i _mm256_mask_sub_epi32(__m256i s, __mmask8 k, __m256i a, __m256i b);
+
+
VPSUBD __m256i _mm256_maskz_sub_epi32( __mmask8 k, __m256i a, __m256i b);
+
+
VPSUBD __m128i _mm_mask_sub_epi32(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPSUBD __m128i _mm_maskz_sub_epi32( __mmask8 k, __m128i a, __m128i b);
+
+
PSUBB __m64 _mm_sub_pi8(__m64 m1, __m64 m2)
+
+
(V)PSUBB __m128i _mm_sub_epi8 ( __m128i a, __m128i b)
+
+
VPSUBB __m256i _mm256_sub_epi8 ( __m256i a, __m256i b)
+
+
PSUBW __m64 _mm_sub_pi16(__m64 m1, __m64 m2)
+
+
(V)PSUBW __m128i _mm_sub_epi16 ( __m128i a, __m128i b)
+
+
VPSUBW __m256i _mm256_sub_epi16 ( __m256i a, __m256i b)
+
+
PSUBD __m64 _mm_sub_pi32(__m64 m1, __m64 m2)
+
+
(V)PSUBD __m128i _mm_sub_epi32 ( __m128i a, __m128i b)
+
+
VPSUBD __m256i _mm256_sub_epi32 ( __m256i a, __m256i b)
+
+

Flags Affected + ¶ +

+

None.

+

Numeric Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded VPSUBD, see Table 2-49, “Type E4 Class Exception Conditions.”

+

EVEX-encoded VPSUBB/W, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/psubq.html b/x86/psubq.html new file mode 100644 index 0000000..7708834 --- /dev/null +++ b/x86/psubq.html @@ -0,0 +1,191 @@ + +PSUBQ + — Subtract Packed Quadword Integers

PSUBQ + — Subtract Packed Quadword Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F FB /r1 PSUBQ mm1, mm2/m64AV/VSSE2Subtract quadword integer in mm2/m64 from mm1.
66 0F FB /r PSUBQ xmm1, xmm2/m128AV/VSSE2Subtract packed quadword integers in xmm2/m128 from xmm1.
VEX.128.66.0F.WIG FB/r VPSUBQ xmm1, xmm2, xmm3/m128BV/VAVXSubtract packed quadword integers in xmm3/m128 from xmm2.
VEX.256.66.0F.WIG FB /r VPSUBQ ymm1, ymm2, ymm3/m256BV/VAVX2Subtract packed quadword integers in ymm3/m256 from ymm2.
EVEX.128.66.0F.W1 FB /r VPSUBQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512FSubtract packed quadword integers in xmm3/m128/m64bcst from xmm2 and store in xmm1 using writemask k1.
EVEX.256.66.0F.W1 FB /r VPSUBQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FSubtract packed quadword integers in ymm3/m256/m64bcst from ymm2 and store in ymm1 using writemask k1.
EVEX.512.66.0F.W1 FB/r VPSUBQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstCV/VAVX512FSubtract packed quadword integers in zmm3/m512/m64bcst from zmm2 and store in zmm1 using writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | N/A | ModRM:reg (r, w) | ModRM:r/m (r) | N/A | N/A
B | N/A | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | N/A
C | Full | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | N/A
+

Description + ¶ +

+

Subtracts the second operand (source operand) from the first operand (destination operand) and stores the result in the destination operand. When packed quadword operands are used, a SIMD subtract is performed. When a quadword result is too large to be represented in 64 bits (overflow), the result is wrapped around and the low 64 bits are written to the destination element (that is, the carry is ignored).

+

Note that the (V)PSUBQ instruction can operate on either unsigned or signed (two’s complement notation) integers; however, it does not set bits in the EFLAGS register to indicate overflow and/or a carry. To prevent undetected overflow conditions, software must control the ranges of the values upon which it operates.
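For illustration only, a minimal C sketch of the modular quadword subtract, assuming an SSE2-capable compiler.
/* Minimal sketch: PSUBQ-style 64-bit subtract wraps and sets no flags (SSE2). */
#include <emmintrin.h>
#include <stdio.h>
int main(void) {
    __m128i a = _mm_set_epi64x(10, 0);    /* high qword = 10, low qword = 0 */
    __m128i b = _mm_set_epi64x(3, 1);
    __m128i r = _mm_sub_epi64(a, b);
    unsigned long long out[2];
    _mm_storeu_si128((__m128i *)out, r);
    printf("%llx %llx\n", out[0], out[1]);   /* ffffffffffffffff 7 */
    return 0;
}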

+

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

Legacy SSE version 64-bit operand: The source operand can be a quadword integer stored in an MMX technology register or a 64-bit memory location.

+

128-bit Legacy SSE version: The second source operand is an XMM register or a 128-bit memory location. The first source operand and destination operands are XMM registers. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The second source operand is an XMM register or a 128-bit memory location. The first source operand and destination operands are XMM registers. Bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

VEX.256 encoded versions: The second source operand is a YMM register or a 256-bit memory location. The first source operand and destination operands are YMM registers. Bits (MAXVL-1:256) of the corresponding ZMM register are zeroed.

+

EVEX encoded VPSUBQ: The second source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The first source operand and destination operands are ZMM/YMM/XMM registers. The destination is conditionally updated with writemask k1.
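For illustration only, a minimal C sketch of the broadcast form, assuming AVX-512F hardware; whether the compiler folds the set1-from-a-scalar pattern below into the embedded m64bcst encoding is compiler-dependent.
/* Minimal sketch: subtracting one 64-bit value from every lane (AVX-512F). */
#include <immintrin.h>
#include <stdio.h>
int main(void) {
    long long scalar = 5;                      /* value to broadcast */
    __m512i a = _mm512_set1_epi64(100);
    __m512i r = _mm512_sub_epi64(a, _mm512_set1_epi64(scalar));
    long long out[8];
    _mm512_storeu_si512(out, r);
    printf("%lld %lld\n", out[0], out[7]);     /* 95 95 */
    return 0;
}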

+

Operation + ¶ +

+

PSUBQ (With 64-Bit Operands) + ¶ +

+
DEST[63:0] := DEST[63:0] − SRC[63:0];
+
+

PSUBQ (With 128-Bit Operands) + ¶ +

+
DEST[63:0] := DEST[63:0] − SRC[63:0];
+DEST[127:64] := DEST[127:64] − SRC[127:64];
+
+

VPSUBQ (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0]-SRC2[63:0]
+DEST[127:64] := SRC1[127:64]-SRC2[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VPSUBQ (VEX.256 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0]-SRC2[63:0]
+DEST[127:64] := SRC1[127:64]-SRC2[127:64]
+DEST[191:128] := SRC1[191:128]-SRC2[191:128]
+DEST[255:192] := SRC1[255:192]-SRC2[255:192]
+DEST[MAXVL-1:256] := 0
+
+

VPSUBQ (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[i+63:i] := SRC1[i+63:i] - SRC2[63:0]
+                ELSE DEST[i+63:i] := SRC1[i+63:i] - SRC2[i+63:i]
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPSUBQ __m512i _mm512_sub_epi64(__m512i a, __m512i b);
+
+
VPSUBQ __m512i _mm512_mask_sub_epi64(__m512i s, __mmask8 k, __m512i a, __m512i b);
+
+
VPSUBQ __m512i _mm512_maskz_sub_epi64( __mmask8 k, __m512i a, __m512i b);
+
+
VPSUBQ __m256i _mm256_mask_sub_epi64(__m256i s, __mmask8 k, __m256i a, __m256i b);
+
+
VPSUBQ __m256i _mm256_maskz_sub_epi64( __mmask8 k, __m256i a, __m256i b);
+
+
VPSUBQ __m128i _mm_mask_sub_epi64(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPSUBQ __m128i _mm_maskz_sub_epi64( __mmask8 k, __m128i a, __m128i b);
+
+
PSUBQ __m64 _mm_sub_si64(__m64 m1, __m64 m2)
+
+
(V)PSUBQ __m128i _mm_sub_epi64(__m128i m1, __m128i m2)
+
+
VPSUBQ __m256i _mm256_sub_epi64(__m256i m1, __m256i m2)
+
+

Flags Affected + ¶ +

+

None.

+

Numeric Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded VPSUBQ, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/psubsb.psubsw.html b/x86/psubsb.psubsw.html new file mode 100644 index 0000000..3464ce5 --- /dev/null +++ b/x86/psubsb.psubsw.html @@ -0,0 +1,306 @@ + +PSUBSB/PSUBSW + — Subtract Packed Signed Integers With Signed Saturation

PSUBSB/PSUBSW + — Subtract Packed Signed Integers With Signed Saturation

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F E8 /r1 PSUBSB mm, mm/m64AV/VMMXSubtract signed packed bytes in mm/m64 from signed packed bytes in mm and saturate results.
66 0F E8 /r PSUBSB xmm1, xmm2/m128AV/VSSE2Subtract packed signed byte integers in xmm2/m128 from packed signed byte integers in xmm1 and saturate results.
NP 0F E9 /r1 PSUBSW mm, mm/m64AV/VMMXSubtract signed packed words in mm/m64 from signed packed words in mm and saturate results.
66 0F E9 /r PSUBSW xmm1, xmm2/m128AV/VSSE2Subtract packed signed word integers in xmm2/m128 from packed signed word integers in xmm1 and saturate results.
VEX.128.66.0F.WIG E8 /r VPSUBSB xmm1, xmm2, xmm3/m128BV/VAVXSubtract packed signed byte integers in xmm3/m128 from packed signed byte integers in xmm2 and saturate results.
VEX.128.66.0F.WIG E9 /r VPSUBSW xmm1, xmm2, xmm3/m128BV/VAVXSubtract packed signed word integers in xmm3/m128 from packed signed word integers in xmm2 and saturate results.
VEX.256.66.0F.WIG E8 /r VPSUBSB ymm1, ymm2, ymm3/m256BV/VAVX2Subtract packed signed byte integers in ymm3/m256 from packed signed byte integers in ymm2 and saturate results.
VEX.256.66.0F.WIG E9 /r VPSUBSW ymm1, ymm2, ymm3/m256BV/VAVX2Subtract packed signed word integers in ymm3/m256 from packed signed word integers in ymm2 and saturate results.
EVEX.128.66.0F.WIG E8 /r VPSUBSB xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWSubtract packed signed byte integers in xmm3/m128 from packed signed byte integers in xmm2 and saturate results and store in xmm1 using writemask k1.
EVEX.256.66.0F.WIG E8 /r VPSUBSB ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWSubtract packed signed byte integers in ymm3/m256 from packed signed byte integers in ymm2 and saturate results and store in ymm1 using writemask k1.
EVEX.512.66.0F.WIG E8 /r VPSUBSB zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWSubtract packed signed byte integers in zmm3/m512 from packed signed byte integers in zmm2 and saturate results and store in zmm1 using writemask k1.
EVEX.128.66.0F.WIG E9 /r VPSUBSW xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWSubtract packed signed word integers in xmm3/m128 from packed signed word integers in xmm2 and saturate results and store in xmm1 using writemask k1.
EVEX.256.66.0F.WIG E9 /r VPSUBSW ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWSubtract packed signed word integers in ymm3/m256 from packed signed word integers in ymm2 and saturate results and store in ymm1 using writemask k1.
EVEX.512.66.0F.WIG E9 /r VPSUBSW zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWSubtract packed signed word integers in zmm3/m512 from packed signed word integers in zmm2 and saturate results and store in zmm1 using writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD subtract of the packed signed integers of the source operand (second operand) from the packed signed integers of the destination operand (first operand), and stores the packed integer results in the destination operand. See Figure 9-4 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for an illustration of a SIMD operation. Overflow is handled with signed saturation, as described in the following paragraphs.

+

The (V)PSUBSB instruction subtracts packed signed byte integers. When an individual byte result is beyond the range of a signed byte integer (that is, greater than 7FH or less than 80H), the saturated value of 7FH or 80H, respectively, is written to the destination operand.

+

The (V)PSUBSW instruction subtracts packed signed word integers. When an individual word result is beyond the range of a signed word integer (that is, greater than 7FFFH or less than 8000H), the saturated value of 7FFFH or 8000H, respectively, is written to the destination operand.
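A minimal illustration of this saturating behavior (a sketch, not part of the reference text), using the SSE2 intrinsic _mm_subs_epi8 listed under “Intel C/C++ Compiler Intrinsic Equivalents” below:

#include <emmintrin.h>   /* SSE2 */
#include <stdio.h>

int main(void) {
    __m128i a = _mm_set1_epi8((char)0x80);   /* every byte = -128 (80H) */
    __m128i b = _mm_set1_epi8(1);
    __m128i r = _mm_subs_epi8(a, b);         /* -128 - 1 would underflow; result saturates to 80H */
    signed char out[16];
    _mm_storeu_si128((__m128i *)out, r);
    printf("%d\n", out[0]);                  /* prints -128, not the wrapped value +127 */
    return 0;
}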

+

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

Legacy SSE version 64-bit operand: The destination operand must be an MMX technology register and the source operand can be either an MMX technology register or a 64-bit memory location.

+

128-bit Legacy SSE version: The second source operand is an XMM register or a 128-bit memory location. The first source operand and destination operands are XMM registers. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The second source operand is an XMM register or a 128-bit memory location. The first source operand and destination operands are XMM registers. Bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

VEX.256 encoded versions: The second source operand is an YMM register or an 256-bit memory location. The first source operand and destination operands are YMM registers. Bits (MAXVL-1:256) of the corresponding ZMM register are zeroed.

+

EVEX encoded version: The second source operand is an ZMM/YMM/XMM register or an 512/256/128-bit memory location. The first source operand and destination operands are ZMM/YMM/XMM registers. The destination is conditionally updated with writemask k1.

+

Operation + ¶ +

+

PSUBSB (With 64-bit Operands) + ¶ +

+
DEST[7:0] := SaturateToSignedByte (DEST[7:0] − SRC[7:0]);
+(* Repeat subtract operation for 2nd through 7th bytes *)
+DEST[63:56] := SaturateToSignedByte (DEST[63:56] − SRC[63:56] );
+
+

PSUBSW (With 64-bit Operands) + ¶ +

+
DEST[15:0] := SaturateToSignedWord (DEST[15:0] − SRC[15:0] );
+(* Repeat subtract operation for 2nd and 3rd words *)
+DEST[63:48] := SaturateToSignedWord (DEST[63:48] − SRC[63:48] );
+
+

VPSUBSB (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j * 8;
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := SaturateToSignedByte (SRC1[i+7:i] - SRC2[i+7:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+7:i] := 0;
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

VPSUBSW (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := SaturateToSignedWord (SRC1[i+15:i] - SRC2[i+15:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+15:i] := 0;
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0;
+
+

VPSUBSB (VEX.256 Encoded Version) + ¶ +

+
DEST[7:0] := SaturateToSignedByte (SRC1[7:0] - SRC2[7:0]);
+(* Repeat subtract operation for 2nd through 31st bytes *)
+DEST[255:248] := SaturateToSignedByte (SRC1[255:248] - SRC2[255:248]);
+DEST[MAXVL-1:256] := 0;
+
+

VPSUBSB (VEX.128 Encoded Version) + ¶ +

+
DEST[7:0] := SaturateToSignedByte (SRC1[7:0] - SRC2[7:0]);
+(* Repeat subtract operation for 2nd through 15th bytes *)
+DEST[127:120] := SaturateToSignedByte (SRC1[127:120] - SRC2[127:120]);
+DEST[MAXVL-1:128] := 0;
+
+

PSUBSB (128-bit Legacy SSE Version) + ¶ +

+
DEST[7:0] := SaturateToSignedByte (DEST[7:0] - SRC[7:0]);
+(* Repeat subtract operation for 2nd through 15th bytes *)
+DEST[127:120] := SaturateToSignedByte (DEST[127:120] - SRC[127:120]);
+DEST[MAXVL-1:128] (Unmodified);
+
+

VPSUBSW (VEX.256 Encoded Version) + ¶ +

+
DEST[15:0] := SaturateToSignedWord (SRC1[15:0] - SRC2[15:0]);
+(* Repeat subtract operation for 2nd through 15th words *)
+DEST[255:240] := SaturateToSignedWord (SRC1[255:240] - SRC2[255:240]);
+DEST[MAXVL-1:256] := 0;
+
+

VPSUBSW (VEX.128 Encoded Version) + ¶ +

+
DEST[15:0] := SaturateToSignedWord (SRC1[15:0] - SRC2[15:0]);
+(* Repeat subtract operation for 2nd through 7th words *)
+DEST[127:112] := SaturateToSignedWord (SRC1[127:112] - SRC2[127:112]);
+DEST[MAXVL-1:128] := 0;
+
+

PSUBSW (128-bit Legacy SSE Version) + ¶ +

+
DEST[15:0] := SaturateToSignedWord (DEST[15:0] - SRC[15:0]);
+(* Repeat subtract operation for 2nd through 7th words *)
+DEST[127:112] := SaturateToSignedWord (DEST[127:112] - SRC[127:112]);
+DEST[MAXVL-1:128] (Unmodified);
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPSUBSB __m512i _mm512_subs_epi8(__m512i a, __m512i b);
+
+
VPSUBSB __m512i _mm512_mask_subs_epi8(__m512i s, __mmask64 k, __m512i a, __m512i b);
+
+
VPSUBSB __m512i _mm512_maskz_subs_epi8( __mmask64 k, __m512i a, __m512i b);
+
+
VPSUBSB __m256i _mm256_mask_subs_epi8(__m256i s, __mmask32 k, __m256i a, __m256i b);
+
+
VPSUBSB __m256i _mm256_maskz_subs_epi8( __mmask32 k, __m256i a, __m256i b);
+
+
VPSUBSB __m128i _mm_mask_subs_epi8(__m128i s, __mmask16 k, __m128i a, __m128i b);
+
+
VPSUBSB __m128i _mm_maskz_subs_epi8( __mmask16 k, __m128i a, __m128i b);
+
+
VPSUBSW __m512i _mm512_subs_epi16(__m512i a, __m512i b);
+
+
VPSUBSW __m512i _mm512_mask_subs_epi16(__m512i s, __mmask32 k, __m512i a, __m512i b);
+
+
VPSUBSW __m512i _mm512_maskz_subs_epi16( __mmask32 k, __m512i a, __m512i b);
+
+
VPSUBSW __m256i _mm256_mask_subs_epi16(__m256i s, __mmask16 k, __m256i a, __m256i b);
+
+
VPSUBSW __m256i _mm256_maskz_subs_epi16( __mmask16 k, __m256i a, __m256i b);
+
+
VPSUBSW __m128i _mm_mask_subs_epi16(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPSUBSW __m128i _mm_maskz_subs_epi16( __mmask8 k, __m128i a, __m128i b);
+
+
PSUBSB __m64 _mm_subs_pi8(__m64 m1, __m64 m2)
+
+
(V)PSUBSB __m128i _mm_subs_epi8(__m128i m1, __m128i m2)
+
+
VPSUBSB __m256i _mm256_subs_epi8(__m256i m1, __m256i m2)
+
+
PSUBSW __m64 _mm_subs_pi16(__m64 m1, __m64 m2)
+
+
(V)PSUBSW __m128i _mm_subs_epi16(__m128i m1, __m128i m2)
+
+
VPSUBSW __m256i _mm256_subs_epi16(__m256i m1, __m256i m2)
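A brief usage sketch for the masked forms above (an assumption for illustration: a compiler targeting AVX512BW and AVX512VL; the mask value 0x55 is arbitrary):

#include <immintrin.h>

/* Saturating word subtraction in the even lanes only; odd lanes are merged
   from 'src' (merging-masking), mirroring the EVEX {k1} behavior described above. */
static inline __m128i subs_even_words(__m128i src, __m128i a, __m128i b) {
    return _mm_mask_subs_epi16(src, (__mmask8)0x55, a, b);
}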
+
+

Flags Affected + ¶ +

+

None.

+

Numeric Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/psubusb.psubusw.html b/x86/psubusb.psubusw.html new file mode 100644 index 0000000..7c4bbc3 --- /dev/null +++ b/x86/psubusb.psubusw.html @@ -0,0 +1,307 @@ + +PSUBUSB/PSUBUSW + — Subtract Packed Unsigned Integers With Unsigned Saturation

PSUBUSB/PSUBUSW + — Subtract Packed Unsigned Integers With Unsigned Saturation

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F D8 /r1 PSUBUSB mm, mm/m64AV/VMMXSubtract unsigned packed bytes in mm/m64 from unsigned packed bytes in mm and saturate result.
66 0F D8 /r PSUBUSB xmm1, xmm2/m128AV/VSSE2Subtract packed unsigned byte integers in xmm2/m128 from packed unsigned byte integers in xmm1 and saturate result.
NP 0F D9 /r1 PSUBUSW mm, mm/m64AV/VMMXSubtract unsigned packed words in mm/m64 from unsigned packed words in mm and saturate result.
66 0F D9 /r PSUBUSW xmm1, xmm2/m128AV/VSSE2Subtract packed unsigned word integers in xmm2/m128 from packed unsigned word integers in xmm1 and saturate result.
VEX.128.66.0F.WIG D8 /r VPSUBUSB xmm1, xmm2, xmm3/m128BV/VAVXSubtract packed unsigned byte integers in xmm3/m128 from packed unsigned byte integers in xmm2 and saturate result.
VEX.128.66.0F.WIG D9 /r VPSUBUSW xmm1, xmm2, xmm3/m128BV/VAVXSubtract packed unsigned word integers in xmm3/m128 from packed unsigned word integers in xmm2 and saturate result.
VEX.256.66.0F.WIG D8 /r VPSUBUSB ymm1, ymm2, ymm3/m256BV/VAVX2Subtract packed unsigned byte integers in ymm3/m256 from packed unsigned byte integers in ymm2 and saturate result.
VEX.256.66.0F.WIG D9 /r VPSUBUSW ymm1, ymm2, ymm3/m256BV/VAVX2Subtract packed unsigned word integers in ymm3/m256 from packed unsigned word integers in ymm2 and saturate result.
EVEX.128.66.0F.WIG D8 /r VPSUBUSB xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWSubtract packed unsigned byte integers in xmm3/m128 from packed unsigned byte integers in xmm2, saturate results and store in xmm1 using writemask k1.
EVEX.256.66.0F.WIG D8 /r VPSUBUSB ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWSubtract packed unsigned byte integers in ymm3/m256 from packed unsigned byte integers in ymm2, saturate results and store in ymm1 using writemask k1.
EVEX.512.66.0F.WIG D8 /r VPSUBUSB zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWSubtract packed unsigned byte integers in zmm3/m512 from packed unsigned byte integers in zmm2, saturate results and store in zmm1 using writemask k1.
EVEX.128.66.0F.WIG D9 /r VPSUBUSW xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWSubtract packed unsigned word integers in xmm3/m128 from packed unsigned word integers in xmm2 and saturate results and store in xmm1 using writemask k1.
EVEX.256.66.0F.WIG D9 /r VPSUBUSW ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWSubtract packed unsigned word integers in ymm3/m256 from packed unsigned word integers in ymm2, saturate results and store in ymm1 using writemask k1.
EVEX.512.66.0F.WIG D9 /r VPSUBUSW zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWSubtract packed unsigned word integers in zmm3/m512 from packed unsigned word integers in zmm2, saturate results and store in zmm1 using writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD subtract of the packed unsigned integers of the source operand (second operand) from the packed unsigned integers of the destination operand (first operand), and stores the packed unsigned integer results in the destination operand. See Figure 9-4 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for an illustration of a SIMD operation. Overflow is handled with unsigned saturation, as described in the following paragraphs.

+

The legacy (MMX and SSE2) forms of these instructions operate on either 64-bit or 128-bit operands.

+

The (V)PSUBUSB instruction subtracts packed unsigned byte integers. When an individual byte result is less than zero, the saturated value of 00H is written to the destination operand.

+

The (V)PSUBUSW instruction subtracts packed unsigned word integers. When an individual word result is less than zero, the saturated value of 0000H is written to the destination operand.
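A sketch of the unsigned clamping (illustration only), using the SSE2 intrinsic _mm_subs_epu8 listed below:

#include <emmintrin.h>   /* SSE2 */
#include <stdio.h>

int main(void) {
    __m128i a = _mm_set1_epi8(10);
    __m128i b = _mm_set1_epi8(25);
    __m128i r = _mm_subs_epu8(a, b);         /* 10 - 25 would go below zero; result saturates to 00H */
    unsigned char out[16];
    _mm_storeu_si128((__m128i *)out, r);
    printf("%u\n", out[0]);                  /* prints 0 */
    return 0;
}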

+

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

Legacy SSE version 64-bit operand: The destination operand must be an MMX technology register and the source operand can be either an MMX technology register or a 64-bit memory location.

+

128-bit Legacy SSE version: The second source operand is an XMM register or a 128-bit memory location. The first source operand and destination operands are XMM registers. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The second source operand is an XMM register or a 128-bit memory location. The first source operand and destination operands are XMM registers. Bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

VEX.256 encoded versions: The second source operand is an YMM register or an 256-bit memory location. The first source operand and destination operands are YMM registers. Bits (MAXVL-1:256) of the corresponding ZMM register are zeroed.

+

EVEX encoded version: The second source operand is an ZMM/YMM/XMM register or an 512/256/128-bit memory location. The first source operand and destination operands are ZMM/YMM/XMM registers. The destination is conditionally updated with writemask k1.

+

Operation + ¶ +

+

PSUBUSB (With 64-bit Operands) + ¶ +

+
DEST[7:0] := SaturateToUnsignedByte (DEST[7:0] − SRC[7:0]);
+(* Repeat subtract operation for 2nd through 7th bytes *)
+DEST[63:56] := SaturateToUnsignedByte (DEST[63:56] − SRC[63:56]);
+
+

PSUBUSW (With 64-bit Operands) + ¶ +

+
DEST[15:0] := SaturateToUnsignedWord (DEST[15:0] − SRC[15:0] );
+(* Repeat subtract operation for 2nd and 3rd words *)
+DEST[63:48] := SaturateToUnsignedWord (DEST[63:48] − SRC[63:48] );
+
+

VPSUBUSB (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j * 8;
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := SaturateToUnsignedByte (SRC1[i+7:i] - SRC2[i+7:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+7:i] := 0;
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0;
+
+

VPSUBUSW (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16;
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := SaturateToUnsignedWord (SRC1[i+15:i] - SRC2[i+15:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+15:i] := 0;
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0;
+
+

VPSUBUSB (VEX.256 Encoded Version) + ¶ +

+
DEST[7:0] := SaturateToUnsignedByte (SRC1[7:0] - SRC2[7:0]);
+(* Repeat subtract operation for 2nd through 31st bytes *)
+DEST[255:248] := SaturateToUnsignedByte (SRC1[255:248] - SRC2[255:248]);
+DEST[MAXVL-1:256] := 0;
+
+

VPSUBUSB (VEX.128 Encoded Version) + ¶ +

+
DEST[7:0] := SaturateToUnsignedByte (SRC1[7:0] - SRC2[7:0]);
+(* Repeat subtract operation for 2nd through 15th bytes *)
+DEST[127:120] := SaturateToUnsignedByte (SRC1[127:120] - SRC2[127:120]);
+DEST[MAXVL-1:128] := 0
+
+

PSUBUSB (128-bit Legacy SSE Version) + ¶ +

+
DEST[7:0] := SaturateToUnsignedByte (DEST[7:0] - SRC[7:0]);
+(* Repeat subtract operation for 2nd through 15th bytes *)
+DEST[127:120] := SaturateToUnsignedByte (DEST[127:120] - SRC[127:120]);
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPSUBUSW (VEX.256 Encoded Version) + ¶ +

+
DEST[15:0] := SaturateToUnsignedWord (SRC1[15:0] - SRC2[15:0]);
+(* Repeat subtract operation for 2nd through 15th words *)
+DEST[255:240] := SaturateToUnsignedWord (SRC1[255:240] - SRC2[255:240]);
+DEST[MAXVL-1:256] := 0;
+
+

VPSUBUSW (VEX.128 Encoded Version) + ¶ +

+
DEST[15:0] := SaturateToUnsignedWord (SRC1[15:0] - SRC2[15:0]);
+(* Repeat subtract operation for 2nd through 7th words *)
+DEST[127:112] := SaturateToUnsignedWord (SRC1[127:112] - SRC2[127:112]);
+DEST[MAXVL-1:128] := 0
+
+

PSUBUSW (128-bit Legacy SSE Version) + ¶ +

+
DEST[15:0] := SaturateToUnsignedWord (DEST[15:0] - SRC[15:0]);
+(* Repeat subtract operation for 2nd through 7th words *)
+DEST[127:112] := SaturateToUnsignedWord (DEST[127:112] - SRC[127:112]);
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPSUBUSB __m512i _mm512_subs_epu8(__m512i a, __m512i b);
+
+
VPSUBUSB __m512i _mm512_mask_subs_epu8(__m512i s, __mmask64 k, __m512i a, __m512i b);
+
+
VPSUBUSB __m512i _mm512_maskz_subs_epu8( __mmask64 k, __m512i a, __m512i b);
+
+
VPSUBUSB __m256i _mm256_mask_subs_epu8(__m256i s, __mmask32 k, __m256i a, __m256i b);
+
+
VPSUBUSB __m256i _mm256_maskz_subs_epu8( __mmask32 k, __m256i a, __m256i b);
+
+
VPSUBUSB __m128i _mm_mask_subs_epu8(__m128i s, __mmask16 k, __m128i a, __m128i b);
+
+
VPSUBUSB __m128i _mm_maskz_subs_epu8( __mmask16 k, __m128i a, __m128i b);
+
+
VPSUBUSW __m512i _mm512_subs_epu16(__m512i a, __m512i b);
+
+
VPSUBUSW __m512i _mm512_mask_subs_epu16(__m512i s, __mmask32 k, __m512i a, __m512i b);
+
+
VPSUBUSW __m512i _mm512_maskz_subs_epu16( __mmask32 k, __m512i a, __m512i b);
+
+
VPSUBUSW __m256i _mm256_mask_subs_epu16(__m256i s, __mmask16 k, __m256i a, __m256i b);
+
+
VPSUBUSW __m256i _mm256_maskz_subs_epu16( __mmask16 k, __m256i a, __m256i b);
+
+
VPSUBUSW __m128i _mm_mask_subs_epu16(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPSUBUSW __m128i _mm_maskz_subs_epu16( __mmask8 k, __m128i a, __m128i b);
+
+
PSUBUSB __m64 _mm_subs_pu8(__m64 m1, __m64 m2)
+
+
(V)PSUBUSB __m128i _mm_subs_epu8(__m128i m1, __m128i m2)
+
+
VPSUBUSB __m256i _mm256_subs_epu8(__m256i m1, __m256i m2)
+
+
PSUBUSW __m64 _mm_subs_pu16(__m64 m1, __m64 m2)
+
+
(V)PSUBUSW __m128i _mm_subs_epu16(__m128i m1, __m128i m2)
+
+
VPSUBUSW __m256i _mm256_subs_epu16(__m256i m1, __m256i m2)
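A common usage sketch built from the intrinsics above (an idiom, not part of the reference text): for unsigned elements, max(a−b, 0) OR max(b−a, 0) equals |a−b|, so two saturating subtractions compute a per-byte absolute difference.

#include <emmintrin.h>   /* SSE2 */

/* Per-byte |a - b| for unsigned bytes, from two saturating subtractions. */
static inline __m128i absdiff_u8(__m128i a, __m128i b) {
    return _mm_or_si128(_mm_subs_epu8(a, b), _mm_subs_epu8(b, a));
}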
+
+

Flags Affected + ¶ +

+

None.

+

Numeric Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/ptest.html b/x86/ptest.html new file mode 100644 index 0000000..920389b --- /dev/null +++ b/x86/ptest.html @@ -0,0 +1,120 @@ + +PTEST + — Logical Compare

PTEST + — Logical Compare

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 38 17 /r PTEST xmm1, xmm2/m128RMV/VSSE4_1Set ZF if xmm2/m128 AND xmm1 result is all 0s. Set CF if xmm2/m128 AND NOT xmm1 result is all 0s.
VEX.128.66.0F38.WIG 17 /r VPTEST xmm1, xmm2/m128RMV/VAVXSet ZF and CF depending on bitwise AND and ANDN of sources.
VEX.256.66.0F38.WIG 17 /r VPTEST ymm1, ymm2/m256RMV/VAVXSet ZF and CF depending on bitwise AND and ANDN of sources.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

PTEST and VPTEST set the ZF flag if the bitwise AND of the first source operand (first operand) and the second source operand (second operand) is all 0s. They set the CF flag if the bitwise AND of the second source operand (second operand) and the logical NOT of the destination operand is all 0s.

+

The first source register is specified by the ModR/M reg field.

+

128-bit versions: The first source register is an XMM register. The second source register can be an XMM register or a 128-bit memory location. The destination register is not modified.

+

VEX.256 encoded version: The first source register is a YMM register. The second source register can be a YMM register or a 256-bit memory location. The destination register is not modified.

+

Note: In VEX-encoded versions, VEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.
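A small usage sketch with the intrinsics listed below; _mm_testz_si128 returns the ZF result and _mm_testc_si128 the CF result:

#include <smmintrin.h>   /* SSE4.1 */

/* Nonzero when no bit of v is set (ZF of PTEST v, v). */
static inline int is_all_zero(__m128i v) {
    return _mm_testz_si128(v, v);
}

/* Nonzero when every bit set in mask is also set in v
   (CF result: mask AND NOT v is all 0s). */
static inline int covers_mask(__m128i v, __m128i mask) {
    return _mm_testc_si128(v, mask);
}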

+

Operation + ¶ +

+

(V)PTEST (128-bit Version) + ¶ +

+
IF (SRC[127:0] BITWISE AND DEST[127:0] = 0)
+    THEN ZF := 1;
+    ELSE ZF := 0;
+IF (SRC[127:0] BITWISE AND NOT DEST[127:0] = 0)
+    THEN CF := 1;
+    ELSE CF := 0;
+DEST (unmodified)
+AF := OF := PF := SF := 0;
+
+

VPTEST (VEX.256 Encoded Version) + ¶ +

+
IF (SRC[255:0] BITWISE AND DEST[255:0] = 0) THEN ZF := 1;
+    ELSE ZF := 0;
+IF (SRC[255:0] BITWISE AND NOT DEST[255:0] = 0) THEN CF := 1;
+    ELSE CF := 0;
+DEST (unmodified)
+AF := OF := PF := SF := 0;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
PTEST int _mm_testz_si128 (__m128i s1, __m128i s2);
+
+
PTEST int _mm_testc_si128 (__m128i s1, __m128i s2);
+
+
PTEST int _mm_testnzc_si128 (__m128i s1, __m128i s2);
+
+
VPTEST int _mm256_testz_si256 (__m256i s1, __m256i s2);
+
+
VPTEST int _mm256_testc_si256 (__m256i s1, __m256i s2);
+
+
VPTEST int _mm256_testnzc_si256 (__m256i s1, __m256i s2);
+
+
VPTEST int _mm_testz_si128 (__m128i s1, __m128i s2);
+
+
VPTEST int _mm_testc_si128 (__m128i s1, __m128i s2);
+
+
VPTEST int _mm_testnzc_si128 (__m128i s1, __m128i s2);
+
+

Flags Affected + ¶ +

+

The OF, AF, PF, SF flags are cleared and the ZF, CF flags are set according to the operation.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.vvvv ≠ 1111B.
diff --git a/x86/ptwrite.html b/x86/ptwrite.html new file mode 100644 index 0000000..fb184ef --- /dev/null +++ b/x86/ptwrite.html @@ -0,0 +1,154 @@ + +PTWRITE + — Write Data to a Processor Trace Packet

PTWRITE + — Write Data to a Processor Trace Packet

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 REX.W 0F AE /4 PTWRITE r64/m64RMV/N.EPTWRITEReads the data from r64/m64 to encode into a PTW packet if dependencies are met (see details below).
F3 0F AE /4 PTWRITE r32/m32RMV/VPTWRITEReads the data from r32/m32 to encode into a PTW packet if dependencies are met (see details below).
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:rm (r)N/AN/AN/A
+

Description + ¶ +

+

This instruction reads data in the source operand and sends it to the Intel Processor Trace hardware to be encoded in a PTW packet if TriggerEn, ContextEn, FilterEn, and PTWEn are all set to 1. For more details on these values, see Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3C, Section 33.2.2, “Software Trace Instrumentation with PTWRITE.” The data size is 64 bits if REX.W is used in 64-bit mode; otherwise, 32 bits of data are copied from the source operand.

+

Note: The instruction will #UD if prefix 66H is used.
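This page lists no compiler intrinsic; as a hedged sketch, on toolchains whose assembler accepts the PTWRITE mnemonic, a payload can be emitted with inline assembly (the helper name is made up for illustration):

#include <stdint.h>

/* Hypothetical helper: emit a 64-bit payload into the Intel PT trace stream.
   Assumes an assembler that recognizes PTWRITE and a trace configuration in
   which TriggerEn, ContextEn, FilterEn, and PTWEn are all 1; on a CPU without
   PTWRITE support the instruction raises #UD. */
static inline void trace_value64(uint64_t v) {
    __asm__ __volatile__("ptwrite %0" : : "r"(v));
}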

+

Operation + ¶ +

+
IF (IA32_RTIT_STATUS.TriggerEn & IA32_RTIT_STATUS.ContextEn & IA32_RTIT_STATUS.FilterEn & IA32_RTIT_CTL.PTWEn) = 1
+    PTW.PayloadBytes := Encoded payload size;
+    PTW.IP := IA32_RTIT_CTL.FUPonPTW
+    IF IA32_RTIT_CTL.FUPonPTW = 1
+        Insert FUP packet with IP of PTWRITE;
+    FI;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS or GS segments.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code) For a page fault.
#AC(0)If an unaligned memory reference is made while the current privilege level is 3 and alignment checking is enabled.
#UDIf CPUID.(EAX=14H, ECX=0H):EBX.PTWRITE [Bit 4] = 0.
If LOCK prefix is used.
If 66H prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + + +
#GP(0)If any part of the operand lies outside of the effective address space from 0 to 0FFFFH.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#UDIf CPUID.(EAX=14H, ECX=0H):EBX.PTWRITE [Bit 4] = 0.
If LOCK prefix is used.
If 66H prefix is used.
+

Virtual 8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If any part of the operand lies outside of the effective address space from 0 to 0FFFFH.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code) For a page fault.
#AC(0)If an unaligned memory reference is made while alignment checking is enabled.
#UDIf CPUID.(EAX=14H, ECX=0H):EBX.PTWRITE [Bit 4] = 0.
If LOCK prefix is used.
If 66H prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in Protected Mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If the memory address is in a non-canonical form.
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#PF(fault-code) For a page fault.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf CPUID.(EAX=14H, ECX=0H):EBX.PTWRITE [Bit 4] = 0.
If LOCK prefix is used.
If 66H prefix is used.
diff --git a/x86/punpckhbw.punpckhwd.punpckhdq.punpckhqdq.html b/x86/punpckhbw.punpckhwd.punpckhdq.punpckhqdq.html new file mode 100644 index 0000000..35202d5 --- /dev/null +++ b/x86/punpckhbw.punpckhwd.punpckhdq.punpckhqdq.html @@ -0,0 +1,892 @@ + +PUNPCKHBW/PUNPCKHWD/PUNPCKHDQ/PUNPCKHQDQ + — Unpack High Data

PUNPCKHBW/PUNPCKHWD/PUNPCKHDQ/PUNPCKHQDQ + — Unpack High Data

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 68 /r1 PUNPCKHBW mm, mm/m64AV/VMMXUnpack and interleave high-order bytes from mm and mm/m64 into mm.
66 0F 68 /r PUNPCKHBW xmm1, xmm2/m128AV/VSSE2Unpack and interleave high-order bytes from xmm1 and xmm2/m128 into xmm1.
NP 0F 69 /r1 PUNPCKHWD mm, mm/m64AV/VMMXUnpack and interleave high-order words from mm and mm/m64 into mm.
66 0F 69 /r PUNPCKHWD xmm1, xmm2/m128AV/VSSE2Unpack and interleave high-order words from xmm1 and xmm2/m128 into xmm1.
NP 0F 6A /r1 PUNPCKHDQ mm, mm/m64AV/VMMXUnpack and interleave high-order doublewords from mm and mm/m64 into mm.
66 0F 6A /r PUNPCKHDQ xmm1, xmm2/m128AV/VSSE2Unpack and interleave high-order doublewords from xmm1 and xmm2/m128 into xmm1.
66 0F 6D /r PUNPCKHQDQ xmm1, xmm2/m128AV/VSSE2Unpack and interleave high-order quadwords from xmm1 and xmm2/m128 into xmm1.
VEX.128.66.0F.WIG 68/r VPUNPCKHBW xmm1,xmm2, xmm3/m128BV/VAVXInterleave high-order bytes from xmm2 and xmm3/m128 into xmm1.
VEX.128.66.0F.WIG 69/r VPUNPCKHWD xmm1,xmm2, xmm3/m128BV/VAVXInterleave high-order words from xmm2 and xmm3/m128 into xmm1.
VEX.128.66.0F.WIG 6A/r VPUNPCKHDQ xmm1, xmm2, xmm3/m128BV/VAVXInterleave high-order doublewords from xmm2 and xmm3/m128 into xmm1.
VEX.128.66.0F.WIG 6D/r VPUNPCKHQDQ xmm1, xmm2, xmm3/m128BV/VAVXInterleave high-order quadword from xmm2 and xmm3/m128 into xmm1 register.
VEX.256.66.0F.WIG 68 /r VPUNPCKHBW ymm1, ymm2, ymm3/m256BV/VAVX2Interleave high-order bytes from ymm2 and ymm3/m256 into ymm1 register.
VEX.256.66.0F.WIG 69 /r VPUNPCKHWD ymm1, ymm2, ymm3/m256BV/VAVX2Interleave high-order words from ymm2 and ymm3/m256 into ymm1 register.
VEX.256.66.0F.WIG 6A /r VPUNPCKHDQ ymm1, ymm2, ymm3/m256BV/VAVX2Interleave high-order doublewords from ymm2 and ymm3/m256 into ymm1 register.
VEX.256.66.0F.WIG 6D /r VPUNPCKHQDQ ymm1, ymm2, ymm3/m256BV/VAVX2Interleave high-order quadword from ymm2 and ymm3/m256 into ymm1 register.
EVEX.128.66.0F.WIG 68 /r VPUNPCKHBW xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWInterleave high-order bytes from xmm2 and xmm3/m128 into xmm1 register using k1 write mask.
EVEX.128.66.0F.WIG 69 /r VPUNPCKHWD xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWInterleave high-order words from xmm2 and xmm3/m128 into xmm1 register using k1 write mask.
EVEX.128.66.0F.W0 6A /r VPUNPCKHDQ xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstDV/VAVX512VL AVX512FInterleave high-order doublewords from xmm2 and xmm3/m128/m32bcst into xmm1 register using k1 write mask.
EVEX.128.66.0F.W1 6D /r VPUNPCKHQDQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstDV/VAVX512VL AVX512FInterleave high-order quadword from xmm2 and xmm3/m128/m64bcst into xmm1 register using k1 write mask.
EVEX.256.66.0F.WIG 68 /r VPUNPCKHBW ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWInterleave high-order bytes from ymm2 and ymm3/m256 into ymm1 register using k1 write mask.
EVEX.256.66.0F.WIG 69 /r VPUNPCKHWD ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWInterleave high-order words from ymm2 and ymm3/m256 into ymm1 register using k1 write mask.
EVEX.256.66.0F.W0 6A /r VPUNPCKHDQ ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstDV/VAVX512VL AVX512FInterleave high-order doublewords from ymm2 and ymm3/m256/m32bcst into ymm1 register using k1 write mask.
EVEX.256.66.0F.W1 6D /r VPUNPCKHQDQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstDV/VAVX512VL AVX512FInterleave high-order quadword from ymm2 and ymm3/m256/m64bcst into ymm1 register using k1 write mask.
EVEX.512.66.0F.WIG 68/r VPUNPCKHBW zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWInterleave high-order bytes from zmm2 and zmm3/m512 into zmm1 register.
EVEX.512.66.0F.WIG 69/r VPUNPCKHWD zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWInterleave high-order words from zmm2 and zmm3/m512 into zmm1 register.
EVEX.512.66.0F.W0 6A /r VPUNPCKHDQ zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstDV/VAVX512FInterleave high-order doublewords from zmm2 and zmm3/m512/m32bcst into zmm1 register using k1 write mask.
EVEX.512.66.0F.W1 6D /r VPUNPCKHQDQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstDV/VAVX512FInterleave high-order quadword from zmm2 and zmm3/m512/m64bcst into zmm1 register using k1 write mask.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
DFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Unpacks and interleaves the high-order data elements (bytes, words, doublewords, or quadwords) of the destination operand (first operand) and source operand (second operand) into the destination operand. Figure 4-20 shows the unpack operation for bytes in 64-bit operands. The low-order data elements are ignored.

+
Figure 4-20. PUNPCKHBW Instruction Operation Using 64-bit Operands
Figure 4-21. 256-bit VPUNPCKHDQ Instruction Operation

When the source data comes from a 64-bit memory operand, the full 64-bit operand is accessed from memory, but the instruction uses only the high-order 32 bits. When the source data comes from a 128-bit memory operand, an implementation may fetch only the appropriate 64 bits; however, alignment to a 16-byte boundary and normal segment checking will still be enforced.

+

The (V)PUNPCKHBW instruction interleaves the high-order bytes of the source and destination operands, the (V)PUNPCKHWD instruction interleaves the high-order words of the source and destination operands, the (V)PUNPCKHDQ instruction interleaves the high-order doubleword (or doublewords) of the source and destination operands, and the (V)PUNPCKHQDQ instruction interleaves the high-order quadwords of the source and destination operands.

+

These instructions can be used to convert bytes to words, words to doublewords, doublewords to quadwords, and quadwords to double quadwords, respectively, by placing all 0s in the source operand. Here, if the source operand contains all 0s, the result (stored in the destination operand) contains zero extensions of the high-order data elements from the original value in the destination operand. For example, with the (V)PUNPCKHBW instruction the high-order bytes are zero extended (that is, unpacked into unsigned word integers), and with the (V)PUNPCKHWD instruction, the high-order words are zero extended (unpacked into unsigned doubleword integers).
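As a sketch of the byte-to-word zero extension just described, using the SSE2 intrinsic _mm_unpackhi_epi8 listed below:

#include <emmintrin.h>   /* SSE2 */

/* Zero-extend the high eight unsigned bytes of v into eight 16-bit words.
   With an all-zero second operand, each interleaved pair is (byte, 00H),
   i.e., the byte zero-extended to a word. */
static inline __m128i zext_high_bytes_to_words(__m128i v) {
    return _mm_unpackhi_epi8(v, _mm_setzero_si128());
}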

+

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

Legacy SSE versions 64-bit operand: The source operand can be an MMX technology register or a 64-bit memory location. The destination operand is an MMX technology register.

+

128-bit Legacy SSE versions: The second source operand is an XMM register or a 128-bit memory location. The first source operand and destination operands are XMM registers. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded versions: The second source operand is an XMM register or a 128-bit memory location. The first source operand and destination operands are XMM registers. Bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

VEX.256 encoded version: The second source operand is an YMM register or an 256-bit memory location. The first source operand and destination operands are YMM registers.

+

EVEX encoded VPUNPCKHDQ/QDQ: The second source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32/64-bit memory location. The first source operand and destination operands are ZMM/YMM/XMM registers. The destination is conditionally updated with writemask k1.

+

EVEX encoded VPUNPCKHWD/BW: The second source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location. The first source operand and destination operands are ZMM/YMM/XMM registers. The destination is conditionally updated with writemask k1.

+

Operation + ¶ +

+

PUNPCKHBW Instruction With 64-bit Operands: + ¶ +

+
DEST[7:0] := DEST[39:32];
+DEST[15:8] := SRC[39:32];
+DEST[23:16] := DEST[47:40];
+DEST[31:24] := SRC[47:40];
+DEST[39:32] := DEST[55:48];
+DEST[47:40] := SRC[55:48];
+DEST[55:48] := DEST[63:56];
+DEST[63:56] := SRC[63:56];
+
+

PUNPCKHWD Instruction With 64-bit Operands: + ¶ +

+
DEST[15:0] := DEST[47:32];
+DEST[31:16] := SRC[47:32];
+DEST[47:32] := DEST[63:48];
+DEST[63:48] := SRC[63:48];
+
+

PUNPCKHDQ Instruction With 64-bit Operands: + ¶ +

+
    DEST[31:0] := DEST[63:32];
+    DEST[63:32] := SRC[63:32];
+INTERLEAVE_HIGH_BYTES_512b (SRC1, SRC2)
+TMP_DEST[255:0] := INTERLEAVE_HIGH_BYTES_256b(SRC1[255:0], SRC2[255:0])
+TMP_DEST[511:256] := INTERLEAVE_HIGH_BYTES_256b(SRC1[511:256], SRC2[511:256])
+INTERLEAVE_HIGH_BYTES_256b (SRC1, SRC2)
+DEST[7:0] := SRC1[71:64]
+DEST[15:8] := SRC2[71:64]
+DEST[23:16] := SRC1[79:72]
+DEST[31:24] := SRC2[79:72]
+DEST[39:32] := SRC1[87:80]
+DEST[47:40] := SRC2[87:80]
+DEST[55:48] := SRC1[95:88]
+DEST[63:56] := SRC2[95:88]
+DEST[71:64] := SRC1[103:96]
+DEST[79:72] := SRC2[103:96]
+DEST[87:80] := SRC1[111:104]
+DEST[95:88] := SRC2[111:104]
+DEST[103:96] := SRC1[119:112]
+DEST[111:104] := SRC2[119:112]
+DEST[119:112] := SRC1[127:120]
+DEST[127:120] := SRC2[127:120]
+DEST[135:128] := SRC1[199:192]
+DEST[143:136] := SRC2[199:192]
+DEST[151:144] := SRC1[207:200]
+DEST[159:152] := SRC2[207:200]
+DEST[167:160] := SRC1[215:208]
+DEST[175:168] := SRC2[215:208]
+DEST[183:176] := SRC1[223:216]
+DEST[191:184] := SRC2[223:216]
+DEST[199:192] := SRC1[231:224]
+DEST[207:200] := SRC2[231:224]
+DEST[215:208] := SRC1[239:232]
+DEST[223:216] := SRC2[239:232]
+DEST[231:224] := SRC1[247:240]
+DEST[239:232] := SRC2[247:240]
+DEST[247:240] := SRC1[255:248]
+DEST[255:248] := SRC2[255:248]
+INTERLEAVE_HIGH_BYTES (SRC1, SRC2)
+DEST[7:0] := SRC1[71:64]
+DEST[15:8] := SRC2[71:64]
+DEST[23:16] := SRC1[79:72]
+DEST[31:24] := SRC2[79:72]
+DEST[39:32] := SRC1[87:80]
+DEST[47:40] := SRC2[87:80]
+DEST[55:48] := SRC1[95:88]
+DEST[63:56] := SRC2[95:88]
+DEST[71:64] := SRC1[103:96]
+DEST[79:72] := SRC2[103:96]
+DEST[87:80] := SRC1[111:104]
+DEST[95:88] := SRC2[111:104]
+DEST[103:96] := SRC1[119:112]
+DEST[111:104] := SRC2[119:112]
+DEST[119:112] := SRC1[127:120]
+DEST[127:120] := SRC2[127:120]
+INTERLEAVE_HIGH_WORDS_512b (SRC1, SRC2)
+TMP_DEST[255:0] := INTERLEAVE_HIGH_WORDS_256b(SRC1[255:0], SRC2[255:0])
+TMP_DEST[511:256] := INTERLEAVE_HIGH_WORDS_256b(SRC1[511:256], SRC2[511:256])
+INTERLEAVE_HIGH_WORDS_256b(SRC1, SRC2)
+DEST[15:0] := SRC1[79:64]
+DEST[31:16] := SRC2[79:64]
+DEST[47:32] := SRC1[95:80]
+DEST[63:48] := SRC2[95:80]
+DEST[79:64] := SRC1[111:96]
+DEST[95:80] := SRC2[111:96]
+DEST[111:96] := SRC1[127:112]
+DEST[127:112] := SRC2[127:112]
+DEST[143:128] := SRC1[207:192]
+DEST[159:144] := SRC2[207:192]
+DEST[175:160] := SRC1[223:208]
+DEST[191:176] := SRC2[223:208]
+DEST[207:192] := SRC1[239:224]
+DEST[223:208] := SRC2[239:224]
+DEST[239:224] := SRC1[255:240]
+DEST[255:240] := SRC2[255:240]
+INTERLEAVE_HIGH_WORDS (SRC1, SRC2)
+DEST[15:0] := SRC1[79:64]
+DEST[31:16] := SRC2[79:64]
+DEST[47:32] := SRC1[95:80]
+DEST[63:48] := SRC2[95:80]
+DEST[79:64] := SRC1[111:96]
+DEST[95:80] := SRC2[111:96]
+DEST[111:96] := SRC1[127:112]
+DEST[127:112] := SRC2[127:112]
+INTERLEAVE_HIGH_DWORDS_512b (SRC1, SRC2)
+TMP_DEST[255:0] := INTERLEAVE_HIGH_DWORDS_256b(SRC1[255:0], SRC2[255:0])
+TMP_DEST[511:256] := INTERLEAVE_HIGH_DWORDS_256b(SRC1[511:256], SRC2[511:256])
+INTERLEAVE_HIGH_DWORDS_256b(SRC1, SRC2)
+DEST[31:0] := SRC1[95:64]
+DEST[63:32] := SRC2[95:64]
+DEST[95:64] := SRC1[127:96]
+DEST[127:96] := SRC2[127:96]
+DEST[159:128] := SRC1[223:192]
+DEST[191:160] := SRC2[223:192]
+DEST[223:192] := SRC1[255:224]
+DEST[255:224] := SRC2[255:224]
+INTERLEAVE_HIGH_DWORDS(SRC1, SRC2)
+DEST[31:0] := SRC1[95:64]
+DEST[63:32] := SRC2[95:64]
+DEST[95:64] := SRC1[127:96]
+DEST[127:96] := SRC2[127:96]
+INTERLEAVE_HIGH_QWORDS_512b (SRC1, SRC2)
+TMP_DEST[255:0] := INTERLEAVE_HIGH_QWORDS_256b(SRC1[255:0], SRC2[255:0])
+TMP_DEST[511:256] := INTERLEAVE_HIGH_QWORDS_256b(SRC1[511:256], SRC2[511:256])
+INTERLEAVE_HIGH_QWORDS_256b(SRC1, SRC2)
+DEST[63:0] := SRC1[127:64]
+DEST[127:64] := SRC2[127:64]
+DEST[191:128] := SRC1[255:192]
+DEST[255:192] := SRC2[255:192]
+INTERLEAVE_HIGH_QWORDS(SRC1, SRC2)
+DEST[63:0] := SRC1[127:64]
+DEST[127:64] := SRC2[127:64]
+
+

PUNPCKHBW (128-bit Legacy SSE Version) + ¶ +

+
DEST[127:0] := INTERLEAVE_HIGH_BYTES(DEST, SRC)
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPUNPCKHBW (VEX.128 Encoded Version) + ¶ +

+
DEST[127:0] := INTERLEAVE_HIGH_BYTES(SRC1, SRC2)
+DEST[MAXVL-1:128] := 0
+
+

VPUNPCKHBW (VEX.256 Encoded Version) + ¶ +

+
DEST[255:0] := INTERLEAVE_HIGH_BYTES_256b(SRC1, SRC2)
+DEST[MAXVL-1:256] := 0
+
+

VPUNPCKHBW (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+IF VL = 128
+    TMP_DEST[VL-1:0] := INTERLEAVE_HIGH_BYTES(SRC1[VL-1:0], SRC2[VL-1:0])
+FI;
+IF VL = 256
+    TMP_DEST[VL-1:0] := INTERLEAVE_HIGH_BYTES_256b(SRC1[VL-1:0], SRC2[VL-1:0])
+FI;
+IF VL = 512
+    TMP_DEST[VL-1:0] := INTERLEAVE_HIGH_BYTES_512b(SRC1[VL-1:0], SRC2[VL-1:0])
+FI;
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := TMP_DEST[i+7:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+7:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

PUNPCKHWD (128-bit Legacy SSE Version) + ¶ +

+
DEST[127:0] := INTERLEAVE_HIGH_WORDS(DEST, SRC)
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPUNPCKHWD (VEX.128 Encoded Version) + ¶ +

+
DEST[127:0] := INTERLEAVE_HIGH_WORDS(SRC1, SRC2)
+DEST[MAXVL-1:128] := 0
+
+

VPUNPCKHWD (VEX.256 Encoded Version) + ¶ +

+
DEST[255:0] := INTERLEAVE_HIGH_WORDS_256b(SRC1, SRC2)
+DEST[MAXVL-1:256] := 0
+
+

VPUNPCKHWD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+IF VL = 128
+    TMP_DEST[VL-1:0] := INTERLEAVE_HIGH_WORDS(SRC1[VL-1:0], SRC2[VL-1:0])
+FI;
+IF VL = 256
+    TMP_DEST[VL-1:0] := INTERLEAVE_HIGH_WORDS_256b(SRC1[VL-1:0], SRC2[VL-1:0])
+FI;
+IF VL = 512
+    TMP_DEST[VL-1:0] := INTERLEAVE_HIGH_WORDS_512b(SRC1[VL-1:0], SRC2[VL-1:0])
+FI;
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := TMP_DEST[i+15:i]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

PUNPCKHDQ (128-bit Legacy SSE Version) + ¶ +

+
DEST[127:0] := INTERLEAVE_HIGH_DWORDS(DEST, SRC)
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPUNPCKHDQ (VEX.128 Encoded Version) + ¶ +

+
DEST[127:0] := INTERLEAVE_HIGH_DWORDS(SRC1, SRC2)
+DEST[MAXVL-1:128] := 0
+
+

VPUNPCKHDQ (VEX.256 Encoded Version) + ¶ +

+
DEST[255:0] := INTERLEAVE_HIGH_DWORDS_256b(SRC1, SRC2)
+DEST[MAXVL-1:256] := 0
+
+

VPUNPCKHDQ (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF (EVEX.b = 1) AND (SRC2 *is memory*)
+        THEN TMP_SRC2[i+31:i] := SRC2[31:0]
+        ELSE TMP_SRC2[i+31:i] := SRC2[i+31:i]
+    FI;
+ENDFOR;
+IF VL = 128
+    TMP_DEST[VL-1:0] := INTERLEAVE_HIGH_DWORDS(SRC1[VL-1:0], TMP_SRC2[VL-1:0])
+FI;
+IF VL = 256
+    TMP_DEST[VL-1:0] := INTERLEAVE_HIGH_DWORDS_256b(SRC1[VL-1:0], TMP_SRC2[VL-1:0])
+FI;
+IF VL = 512
+    TMP_DEST[VL-1:0] := INTERLEAVE_HIGH_DWORDS_512b(SRC1[VL-1:0], TMP_SRC2[VL-1:0])
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking* ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

PUNPCKHQDQ (128-bit Legacy SSE Version) + ¶ +

+
DEST[127:0] := INTERLEAVE_HIGH_QWORDS(DEST, SRC)
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPUNPCKHQDQ (VEX.128 Encoded Version) + ¶ +

+
DEST[127:0] := INTERLEAVE_HIGH_QWORDS(SRC1, SRC2)
+DEST[MAXVL-1:128] := 0
+
+

VPUNPCKHQDQ (VEX.256 Encoded Version) + ¶ +

+
DEST[255:0] := INTERLEAVE_HIGH_QWORDS_256b(SRC1, SRC2)
+DEST[MAXVL-1:256] := 0
+
+

VPUNPCKHQDQ (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF (EVEX.b = 1) AND (SRC2 *is memory*)
+        THEN TMP_SRC2[i+63:i] := SRC2[63:0]
+        ELSE TMP_SRC2[i+63:i] := SRC2[i+63:i]
+    FI;
+ENDFOR;
+IF VL = 128
+    TMP_DEST[VL-1:0] := INTERLEAVE_HIGH_QWORDS(SRC1[VL-1:0], TMP_SRC2[VL-1:0])
+FI;
+IF VL = 256
+    TMP_DEST[VL-1:0] := INTERLEAVE_HIGH_QWORDS_256b(SRC1[VL-1:0], TMP_SRC2[VL-1:0])
+FI;
+IF VL = 512
+    TMP_DEST[VL-1:0] := INTERLEAVE_HIGH_QWORDS_512b(SRC1[VL-1:0], TMP_SRC2[VL-1:0])
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPUNPCKHBW __m512i _mm512_unpackhi_epi8(__m512i a, __m512i b);
+
+
VPUNPCKHBW __m512i _mm512_mask_unpackhi_epi8(__m512i s, __mmask64 k, __m512i a, __m512i b);
+
+
VPUNPCKHBW __m512i _mm512_maskz_unpackhi_epi8( __mmask64 k, __m512i a, __m512i b);
+
+
VPUNPCKHBW __m256i _mm256_mask_unpackhi_epi8(__m256i s, __mmask32 k, __m256i a, __m256i b);
+
+
VPUNPCKHBW __m256i _mm256_maskz_unpackhi_epi8( __mmask32 k, __m256i a, __m256i b);
+
+
+VPUNPCKHBW __m128i _mm_mask_unpackhi_epi8(__m128i s, __mmask16 k, __m128i a, __m128i b);
+
+
VPUNPCKHBW __m128i _mm_maskz_unpackhi_epi8( __mmask16 k, __m128i a, __m128i b);
+
+
VPUNPCKHWD __m512i _mm512_unpackhi_epi16(__m512i a, __m512i b);
+
+
VPUNPCKHWD __m512i _mm512_mask_unpackhi_epi16(__m512i s, __mmask32 k, __m512i a, __m512i b);
+
+
VPUNPCKHWD __m512i _mm512_maskz_unpackhi_epi16( __mmask32 k, __m512i a, __m512i b);
+
+
VPUNPCKHWD __m256i _mm256_mask_unpackhi_epi16(__m256i s, __mmask16 k, __m256i a, __m256i b);
+
+
VPUNPCKHWD __m256i _mm256_maskz_unpackhi_epi16( __mmask16 k, __m256i a, __m256i b);
+
+
+VPUNPCKHWD __m128i _mm_mask_unpackhi_epi16(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPUNPCKHWD __m128i _mm_maskz_unpackhi_epi16( __mmask8 k, __m128i a, __m128i b);
+
+
VPUNPCKHDQ __m512i _mm512_unpackhi_epi32(__m512i a, __m512i b);
+
+
VPUNPCKHDQ __m512i _mm512_mask_unpackhi_epi32(__m512i s, __mmask16 k, __m512i a, __m512i b);
+
+
VPUNPCKHDQ __m512i _mm512_maskz_unpackhi_epi32( __mmask16 k, __m512i a, __m512i b);
+
+
+VPUNPCKHDQ __m256i _mm256_mask_unpackhi_epi32(__m256i s, __mmask8 k, __m256i a, __m256i b);
+
+
+VPUNPCKHDQ __m256i _mm256_maskz_unpackhi_epi32( __mmask8 k, __m256i a, __m256i b);
+
+
+VPUNPCKHDQ __m128i _mm_mask_unpackhi_epi32(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
+VPUNPCKHDQ __m128i _mm_maskz_unpackhi_epi32( __mmask8 k, __m128i a, __m128i b);
+
+
VPUNPCKHQDQ __m512i _mm512_unpackhi_epi64(__m512i a, __m512i b);
+
+
VPUNPCKHQDQ __m512i _mm512_mask_unpackhi_epi64(__m512i s, __mmask8 k, __m512i a, __m512i b);
+
+
VPUNPCKHQDQ __m512i _mm512_maskz_unpackhi_epi64( __mmask8 k, __m512i a, __m512i b);
+
+
+VPUNPCKHQDQ __m256i _mm256_mask_unpackhi_epi64(__m256i s, __mmask8 k, __m256i a, __m256i b);
+
+
+VPUNPCKHQDQ __m256i _mm256_maskz_unpackhi_epi64( __mmask8 k, __m256i a, __m256i b);
+
+
+VPUNPCKHQDQ __m128i _mm_mask_unpackhi_epi64(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
+VPUNPCKHQDQ __m128i _mm_maskz_unpackhi_epi64( __mmask8 k, __m128i a, __m128i b);
+
+
PUNPCKHBW __m64 _mm_unpackhi_pi8(__m64 m1, __m64 m2)
+
+
(V)PUNPCKHBW __m128i _mm_unpackhi_epi8(__m128i m1, __m128i m2)
+
+
VPUNPCKHBW __m256i _mm256_unpackhi_epi8(__m256i m1, __m256i m2)
+
+
PUNPCKHWD __m64 _mm_unpackhi_pi16(__m64 m1,__m64 m2)
+
+
(V)PUNPCKHWD __m128i _mm_unpackhi_epi16(__m128i m1,__m128i m2)
+
+
VPUNPCKHWD __m256i _mm256_unpackhi_epi16(__m256i m1,__m256i m2)
+
+
PUNPCKHDQ __m64 _mm_unpackhi_pi32(__m64 m1, __m64 m2)
+
+
(V)PUNPCKHDQ __m128i _mm_unpackhi_epi32(__m128i m1, __m128i m2)
+
+
VPUNPCKHDQ __m256i _mm256_unpackhi_epi32(__m256i m1, __m256i m2)
+
+
(V)PUNPCKHQDQ __m128i _mm_unpackhi_epi64 ( __m128i a, __m128i b)
+
+
VPUNPCKHQDQ __m256i _mm256_unpackhi_epi64 ( __m256i a, __m256i b)
+
+

Flags Affected + ¶ +

+

None.

+

Numeric Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded VPUNPCKHDQ/QDQ, see Table 2-50, “Type E4NF Class Exception Conditions.”

+

EVEX-encoded VPUNPCKHBW/WD, see Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/punpcklbw.punpcklwd.punpckldq.punpcklqdq.html b/x86/punpcklbw.punpcklwd.punpckldq.punpcklqdq.html new file mode 100644 index 0000000..24bd0f2 --- /dev/null +++ b/x86/punpcklbw.punpcklwd.punpckldq.punpcklqdq.html @@ -0,0 +1,896 @@ + +PUNPCKLBW/PUNPCKLWD/PUNPCKLDQ/PUNPCKLQDQ + — Unpack Low Data

PUNPCKLBW/PUNPCKLWD/PUNPCKLDQ/PUNPCKLQDQ + — Unpack Low Data

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 60 /r1 PUNPCKLBW mm, mm/m32AV/VMMXInterleave low-order bytes from mm and mm/m32 into mm.
66 0F 60 /r PUNPCKLBW xmm1, xmm2/m128AV/VSSE2Interleave low-order bytes from xmm1 and xmm2/m128 into xmm1.
NP 0F 61 /r1 PUNPCKLWD mm, mm/m32AV/VMMXInterleave low-order words from mm and mm/m32 into mm.
66 0F 61 /r PUNPCKLWD xmm1, xmm2/m128AV/VSSE2Interleave low-order words from xmm1 and xmm2/m128 into xmm1.
NP 0F 62 /r1 PUNPCKLDQ mm, mm/m32AV/VMMXInterleave low-order doublewords from mm and mm/m32 into mm.
66 0F 62 /r PUNPCKLDQ xmm1, xmm2/m128AV/VSSE2Interleave low-order doublewords from xmm1 and xmm2/m128 into xmm1.
66 0F 6C /r PUNPCKLQDQ xmm1, xmm2/m128AV/VSSE2Interleave low-order quadword from xmm1 and xmm2/m128 into xmm1 register.
VEX.128.66.0F.WIG 60/r VPUNPCKLBW xmm1,xmm2, xmm3/m128BV/VAVXInterleave low-order bytes from xmm2 and xmm3/m128 into xmm1.
VEX.128.66.0F.WIG 61/r VPUNPCKLWD xmm1,xmm2, xmm3/m128BV/VAVXInterleave low-order words from xmm2 and xmm3/m128 into xmm1.
VEX.128.66.0F.WIG 62/r VPUNPCKLDQ xmm1, xmm2, xmm3/m128BV/VAVXInterleave low-order doublewords from xmm2 and xmm3/m128 into xmm1.
VEX.128.66.0F.WIG 6C/r VPUNPCKLQDQ xmm1, xmm2, xmm3/m128BV/VAVXInterleave low-order quadword from xmm2 and xmm3/m128 into xmm1 register.
VEX.256.66.0F.WIG 60 /r VPUNPCKLBW ymm1, ymm2, ymm3/m256BV/VAVX2Interleave low-order bytes from ymm2 and ymm3/m256 into ymm1 register.
VEX.256.66.0F.WIG 61 /r VPUNPCKLWD ymm1, ymm2, ymm3/m256BV/VAVX2Interleave low-order words from ymm2 and ymm3/m256 into ymm1 register.
VEX.256.66.0F.WIG 62 /r VPUNPCKLDQ ymm1, ymm2, ymm3/m256BV/VAVX2Interleave low-order doublewords from ymm2 and ymm3/m256 into ymm1 register.
VEX.256.66.0F.WIG 6C /r VPUNPCKLQDQ ymm1, ymm2, ymm3/m256BV/VAVX2Interleave low-order quadword from ymm2 and ymm3/m256 into ymm1 register.
EVEX.128.66.0F.WIG 60 /r VPUNPCKLBW xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWInterleave low-order bytes from xmm2 and xmm3/m128 into xmm1 register subject to write mask k1.
EVEX.128.66.0F.WIG 61 /r VPUNPCKLWD xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWInterleave low-order words from xmm2 and xmm3/m128 into xmm1 register subject to write mask k1.
EVEX.128.66.0F.W0 62 /r VPUNPCKLDQ xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstDV/VAVX512VL AVX512FInterleave low-order doublewords from xmm2 and xmm3/m128/m32bcst into xmm1 register subject to write mask k1.
EVEX.128.66.0F.W1 6C /r VPUNPCKLQDQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstDV/VAVX512VL AVX512FInterleave low-order quadword from xmm2 and xmm3/m128/m64bcst into xmm1 register subject to write mask k1.
EVEX.256.66.0F.WIG 60 /r VPUNPCKLBW ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWInterleave low-order bytes from ymm2 and ymm3/m256 into ymm1 register subject to write mask k1.
EVEX.256.66.0F.WIG 61 /r VPUNPCKLWD ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWInterleave low-order words from ymm2 and ymm3/m256 into ymm1 register subject to write mask k1.
EVEX.256.66.0F.W0 62 /r VPUNPCKLDQ ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstDV/VAVX512VL AVX512FInterleave low-order doublewords from ymm2 and ymm3/m256/m32bcst into ymm1 register subject to write mask k1.
EVEX.256.66.0F.W1 6C /r VPUNPCKLQDQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstDV/VAVX512VL AVX512FInterleave low-order quadword from ymm2 and ymm3/m256/m64bcst into ymm1 register subject to write mask k1.
EVEX.512.66.0F.WIG 60/r VPUNPCKLBW zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWInterleave low-order bytes from zmm2 and zmm3/m512 into zmm1 register subject to write mask k1.
EVEX.512.66.0F.WIG 61/r VPUNPCKLWD zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWInterleave low-order words from zmm2 and zmm3/m512 into zmm1 register subject to write mask k1.
EVEX.512.66.0F.W0 62 /r VPUNPCKLDQ zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstDV/VAVX512FInterleave low-order doublewords from zmm2 and zmm3/m512/m32bcst into zmm1 register subject to write mask k1.
EVEX.512.66.0F.W1 6C /r VPUNPCKLQDQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstDV/VAVX512FInterleave low-order quadword from zmm2 and zmm3/m512/m64bcst into zmm1 register subject to write mask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
DFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Unpacks and interleaves the low-order data elements (bytes, words, doublewords, and quadwords) of the destination operand (first operand) and source operand (second operand) into the destination operand (Figure 4-22 shows the unpack operation for bytes in 64-bit operands). The high-order data elements are ignored.

+
Figure 4-22. PUNPCKLBW Instruction Operation Using 64-bit Operands
Figure 4-23. 256-bit VPUNPCKLDQ Instruction Operation

When the source data comes from a 128-bit memory operand, an implementation may fetch only the appropriate 64 bits; however, alignment to a 16-byte boundary and normal segment checking will still be enforced.

+

The (V)PUNPCKLBW instruction interleaves the low-order bytes of the source and destination operands, the (V)PUNPCKLWD instruction interleaves the low-order words of the source and destination operands, the (V)PUNPCKLDQ instruction interleaves the low-order doubleword (or doublewords) of the source and destination operands, and the (V)PUNPCKLQDQ instruction interleaves the low-order quadwords of the source and destination operands.

+

These instructions can be used to convert bytes to words, words to doublewords, doublewords to quadwords, and quadwords to double quadwords, respectively, by placing all 0s in the source operand. Here, if the source operand contains all 0s, the result (stored in the destination operand) contains zero extensions of the low-order data elements from the original value in the destination operand. For example, with the (V)PUNPCKLBW instruction the low-order bytes are zero extended (that is, unpacked into unsigned word integers), and with the (V)PUNPCKLWD instruction, the low-order words are zero extended (unpacked into unsigned doubleword integers).
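As a sketch of this low-half zero extension, using the SSE2 intrinsic _mm_unpacklo_epi8:

#include <emmintrin.h>   /* SSE2 */

/* Zero-extend the low eight unsigned bytes of v into eight 16-bit words:
   interleaving with zeros places 00H in every high byte position. */
static inline __m128i zext_low_bytes_to_words(__m128i v) {
    return _mm_unpacklo_epi8(v, _mm_setzero_si128());
}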

+

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

Legacy SSE versions 64-bit operand: The source operand can be an MMX technology register or a 32-bit memory location. The destination operand is an MMX technology register.

+

128-bit Legacy SSE versions: The second source operand is an XMM register or a 128-bit memory location. The first source operand and destination operands are XMM registers. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded versions: The second source operand is an XMM register or a 128-bit memory location. The first source operand and destination operands are XMM registers. Bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

VEX.256 encoded version: The second source operand is a YMM register or a 256-bit memory location. The first source operand and destination operands are YMM registers. Bits (MAXVL-1:256) of the corresponding ZMM register are zeroed.

+

EVEX encoded VPUNPCKLDQ/QDQ: The second source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 32/64-bit memory location. The first source operand and destination operands are ZMM/YMM/XMM registers. The destination is conditionally updated with writemask k1.

+

EVEX encoded VPUNPCKLWD/BW: The second source operand is a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The first source operand and destination operands are ZMM/YMM/XMM registers. The destination is conditionally updated with writemask k1.

+
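A minimal sketch of the writemask behavior, assuming a compiler with AVX-512F intrinsic support (the function name is illustrative): destination doublewords whose mask bit is 0 keep the value supplied in src (merging) instead of receiving the interleave result.

    #include <immintrin.h>          /* AVX-512F intrinsics */

    /* VPUNPCKLDQ zmm1 {k1}, zmm2, zmm3: lanes with k-bit = 1 receive the
       interleaved dwords; lanes with k-bit = 0 are merged from src. */
    static __m512i masked_unpacklo_dwords(__m512i src, __mmask16 k,
                                          __m512i a, __m512i b)
    {
        return _mm512_mask_unpacklo_epi32(src, k, a, b);
    }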

Operation + ¶ +

+

PUNPCKLBW Instruction With 64-bit Operands: + ¶ +

+
DEST[63:56] := SRC[31:24];
+DEST[55:48] := DEST[31:24];
+DEST[47:40] := SRC[23:16];
+DEST[39:32] := DEST[23:16];
+DEST[31:24] := SRC[15:8];
+DEST[23:16] := DEST[15:8];
+DEST[15:8] := SRC[7:0];
+DEST[7:0] := DEST[7:0];
+
+

PUNPCKLWD Instruction With 64-bit Operands: + ¶ +

+
DEST[63:48] := SRC[31:16];
+DEST[47:32] := DEST[31:16];
+DEST[31:16] := SRC[15:0];
+DEST[15:0] := DEST[15:0];
+
+

PUNPCKLDQ Instruction With 64-bit Operands: + ¶ +

+
    DEST[63:32] := SRC[31:0];
+    DEST[31:0] := DEST[31:0];
+INTERLEAVE_BYTES_512b (SRC1, SRC2)
+TMP_DEST[255:0] := INTERLEAVE_BYTES_256b(SRC1[255:0], SRC2[255:0])
+TMP_DEST[511:256] := INTERLEAVE_BYTES_256b(SRC1[511:256], SRC2[511:256])
+INTERLEAVE_BYTES_256b (SRC1, SRC2)
+DEST[7:0] := SRC1[7:0]
+DEST[15:8] := SRC2[7:0]
+DEST[23:16] := SRC1[15:8]
+DEST[31:24] := SRC2[15:8]
+DEST[39:32] := SRC1[23:16]
+DEST[47:40] := SRC2[23:16]
+DEST[55:48] := SRC1[31:24]
+DEST[63:56] := SRC2[31:24]
+DEST[71:64] := SRC1[39:32]
+DEST[79:72] := SRC2[39:32]
+DEST[87:80] := SRC1[47:40]
+DEST[95:88] := SRC2[47:40]
+DEST[103:96] := SRC1[55:48]
+DEST[111:104] := SRC2[55:48]
+DEST[119:112] := SRC1[63:56]
+DEST[127:120] := SRC2[63:56]
+DEST[135:128] := SRC1[135:128]
+DEST[143:136] := SRC2[135:128]
+DEST[151:144] := SRC1[143:136]
+DEST[159:152] := SRC2[143:136]
+DEST[167:160] := SRC1[151:144]
+DEST[175:168] := SRC2[151:144]
+DEST[183:176] := SRC1[159:152]
+DEST[191:184] := SRC2[159:152]
+DEST[199:192] := SRC1[167:160]
+DEST[207:200] := SRC2[167:160]
+DEST[215:208] := SRC1[175:168]
+DEST[223:216] := SRC2[175:168]
+DEST[231:224] := SRC1[183:176]
+DEST[239:232] := SRC2[183:176]
+DEST[247:240] := SRC1[191:184]
+DEST[255:248] := SRC2[191:184]
+INTERLEAVE_BYTES (SRC1, SRC2)
+DEST[7:0] := SRC1[7:0]
+DEST[15:8] := SRC2[7:0]
+DEST[23:16] := SRC1[15:8]
+DEST[31:24] := SRC2[15:8]
+DEST[39:32] := SRC1[23:16]
+DEST[47:40] := SRC2[23:16]
+DEST[55:48] := SRC1[31:24]
+DEST[63:56] := SRC2[31:24]
+DEST[71:64] := SRC1[39:32]
+DEST[79:72] := SRC2[39:32]
+DEST[87:80] := SRC1[47:40]
+DEST[95:88] := SRC2[47:40]
+DEST[103:96] := SRC1[55:48]
+DEST[111:104] := SRC2[55:48]
+DEST[119:112] := SRC1[63:56]
+DEST[127:120] := SRC2[63:56]
+INTERLEAVE_WORDS_512b (SRC1, SRC2)
+TMP_DEST[255:0] := INTERLEAVE_WORDS_256b(SRC1[255:0], SRC2[255:0])
+TMP_DEST[511:256] := INTERLEAVE_WORDS_256b(SRC1[511:256], SRC2[511:256])
+INTERLEAVE_WORDS_256b(SRC1, SRC2)
+DEST[15:0] := SRC1[15:0]
+DEST[31:16] := SRC2[15:0]
+DEST[47:32] := SRC1[31:16]
+DEST[63:48] := SRC2[31:16]
+DEST[79:64] := SRC1[47:32]
+DEST[95:80] := SRC2[47:32]
+DEST[111:96] := SRC1[63:48]
+DEST[127:112] := SRC2[63:48]
+DEST[143:128] := SRC1[143:128]
+DEST[159:144] := SRC2[143:128]
+DEST[175:160] := SRC1[159:144]
+DEST[191:176] := SRC2[159:144]
+DEST[207:192] := SRC1[175:160]
+DEST[223:208] := SRC2[175:160]
+DEST[239:224] := SRC1[191:176]
+DEST[255:240] := SRC2[191:176]
+INTERLEAVE_WORDS (SRC1, SRC2)
+DEST[15:0] := SRC1[15:0]
+DEST[31:16] := SRC2[15:0]
+DEST[47:32] := SRC1[31:16]
+DEST[63:48] := SRC2[31:16]
+DEST[79:64] := SRC1[47:32]
+DEST[95:80] := SRC2[47:32]
+DEST[111:96] := SRC1[63:48]
+DEST[127:112] := SRC2[63:48]
+INTERLEAVE_DWORDS_512b (SRC1, SRC2)
+TMP_DEST[255:0] := INTERLEAVE_DWORDS_256b(SRC1[255:0], SRC2[255:0])
+TMP_DEST[511:256] := INTERLEAVE_DWORDS_256b(SRC1[511:256], SRC2[511:256])
+INTERLEAVE_DWORDS_256b(SRC1, SRC2)
+DEST[31:0] := SRC1[31:0]
+DEST[63:32] := SRC2[31:0]
+DEST[95:64] := SRC1[63:32]
+DEST[127:96] := SRC2[63:32]
+DEST[159:128] := SRC1[159:128]
+DEST[191:160] := SRC2[159:128]
+DEST[223:192] := SRC1[191:160]
+DEST[255:224] := SRC2[191:160]
+INTERLEAVE_DWORDS(SRC1, SRC2)
+DEST[31:0] := SRC1[31:0]
+DEST[63:32] := SRC2[31:0]
+DEST[95:64] := SRC1[63:32]
+DEST[127:96] := SRC2[63:32]
+INTERLEAVE_QWORDS_512b (SRC1, SRC2)
+TMP_DEST[255:0] := INTERLEAVE_QWORDS_256b(SRC1[255:0], SRC2[255:0])
+TMP_DEST[511:256] := INTERLEAVE_QWORDS_256b(SRC1[511:256], SRC2[511:256])
+INTERLEAVE_QWORDS_256b(SRC1, SRC2)
+DEST[63:0] := SRC1[63:0]
+DEST[127:64] := SRC2[63:0]
+DEST[191:128] := SRC1[191:128]
+DEST[255:192] := SRC2[191:128]
+INTERLEAVE_QWORDS(SRC1, SRC2)
+DEST[63:0] := SRC1[63:0]
+DEST[127:64] := SRC2[63:0]
+
+

PUNPCKLBW + ¶ +

+
DEST[127:0] := INTERLEAVE_BYTES(DEST, SRC)
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPUNPCKLBW (VEX.128 Encoded Instruction) + ¶ +

+
DEST[127:0] := INTERLEAVE_BYTES(SRC1, SRC2)
+DEST[MAXVL-1:128] := 0
+
+

VPUNPCKLBW (VEX.256 Encoded Instruction) + ¶ +

+
DEST[255:0] := INTERLEAVE_BYTES_256b(SRC1, SRC2)
+DEST[MAXVL-1:256] := 0
+
+

VPUNPCKLBW (EVEX Encoded Instructions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+IF VL = 128
+    TMP_DEST[VL-1:0] := INTERLEAVE_BYTES(SRC1[VL-1:0], SRC2[VL-1:0])
+FI;
+IF VL = 256
+    TMP_DEST[VL-1:0] := INTERLEAVE_BYTES_256b(SRC1[VL-1:0], SRC2[VL-1:0])
+FI;
+IF VL = 512
+    TMP_DEST[VL-1:0] := INTERLEAVE_BYTES_512b(SRC1[VL-1:0], SRC2[VL-1:0])
+FI;
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := TMP_DEST[i+7:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+7:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+DEST[511:0] := INTERLEAVE_BYTES_512b(SRC1, SRC2)
+
+

PUNPCKLWD + ¶ +

+
DEST[127:0] := INTERLEAVE_WORDS(DEST, SRC)
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPUNPCKLWD (VEX.128 Encoded Instruction) + ¶ +

+
DEST[127:0] := INTERLEAVE_WORDS(SRC1, SRC2)
+DEST[MAXVL-1:128] := 0
+
+

VPUNPCKLWD (VEX.256 Encoded Instruction) + ¶ +

+
DEST[255:0] := INTERLEAVE_WORDS_256b(SRC1, SRC2)
+DEST[MAXVL-1:256] := 0
+
+

VPUNPCKLWD (EVEX Encoded Instructions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+IF VL = 128
+    TMP_DEST[VL-1:0] := INTERLEAVE_WORDS(SRC1[VL-1:0], SRC2[VL-1:0])
+FI;
+IF VL = 256
+    TMP_DEST[VL-1:0] := INTERLEAVE_WORDS_256b(SRC1[VL-1:0], SRC2[VL-1:0])
+FI;
+IF VL = 512
+    TMP_DEST[VL-1:0] := INTERLEAVE_WORDS_512b(SRC1[VL-1:0], SRC2[VL-1:0])
+FI;
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := TMP_DEST[i+15:i]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+DEST[511:0] := INTERLEAVE_WORDS_512b(SRC1, SRC2)
+
+

PUNPCKLDQ + ¶ +

+
DEST[127:0] := INTERLEAVE_DWORDS(DEST, SRC)
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPUNPCKLDQ (VEX.128 Encoded Instruction) + ¶ +

+
DEST[127:0] := INTERLEAVE_DWORDS(SRC1, SRC2)
+DEST[MAXVL-1:128] := 0
+
+

VPUNPCKLDQ (VEX.256 Encoded Instruction) + ¶ +

+
DEST[255:0] := INTERLEAVE_DWORDS_256b(SRC1, SRC2)
+DEST[MAXVL-1:256] := 0
+
+

VPUNPCKLDQ (EVEX Encoded Instructions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF (EVEX.b = 1) AND (SRC2 *is memory*)
+        THEN TMP_SRC2[i+31:i] := SRC2[31:0]
+        ELSE TMP_SRC2[i+31:i] := SRC2[i+31:i]
+    FI;
+ENDFOR;
+IF VL = 128
+    TMP_DEST[VL-1:0] := INTERLEAVE_DWORDS(SRC1[VL-1:0], TMP_SRC2[VL-1:0])
+FI;
+IF VL = 256
+    TMP_DEST[VL-1:0] := INTERLEAVE_DWORDS_256b(SRC1[VL-1:0], TMP_SRC2[VL-1:0])
+FI;
+IF VL = 512
+    TMP_DEST[VL-1:0] := INTERLEAVE_DWORDS_512b(SRC1[VL-1:0], TMP_SRC2[VL-1:0])
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking* ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[511:0] := INTERLEAVE_DWORDS_512b(SRC1, SRC2)
+DEST[MAXVL-1:VL] := 0
+
+

PUNPCKLQDQ + ¶ +

+
DEST[127:0] := INTERLEAVE_QWORDS(DEST, SRC)
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPUNPCKLQDQ (VEX.128 Encoded Instruction) + ¶ +

+
DEST[127:0] := INTERLEAVE_QWORDS(SRC1, SRC2)
+DEST[MAXVL-1:128] := 0
+
+

VPUNPCKLQDQ (VEX.256 Encoded Instruction) + ¶ +

+
DEST[255:0] := INTERLEAVE_QWORDS_256b(SRC1, SRC2)
+DEST[MAXVL-1:256] := 0
+
+

VPUNPCKLQDQ (EVEX Encoded Instructions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF (EVEX.b = 1) AND (SRC2 *is memory*)
+        THEN TMP_SRC2[i+63:i] := SRC2[63:0]
+        ELSE TMP_SRC2[i+63:i] := SRC2[i+63:i]
+    FI;
+ENDFOR;
+IF VL = 128
+    TMP_DEST[VL-1:0] := INTERLEAVE_QWORDS(SRC1[VL-1:0], TMP_SRC2[VL-1:0])
+FI;
+IF VL = 256
+    TMP_DEST[VL-1:0] := INTERLEAVE_QWORDS_256b(SRC1[VL-1:0], TMP_SRC2[VL-1:0])
+FI;
+IF VL = 512
+    TMP_DEST[VL-1:0] := INTERLEAVE_QWORDS_512b(SRC1[VL-1:0], TMP_SRC2[VL-1:0])
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPUNPCKLBW __m512i _mm512_unpacklo_epi8(__m512i a, __m512i b);
+
+
VPUNPCKLBW __m512i _mm512_mask_unpacklo_epi8(__m512i s, __mmask64 k, __m512i a, __m512i b);
+
+
VPUNPCKLBW __m512i _mm512_maskz_unpacklo_epi8( __mmask64 k, __m512i a, __m512i b);
+
+
VPUNPCKLBW __m256i _mm256_mask_unpacklo_epi8(__m256i s, __mmask32 k, __m256i a, __m256i b);
+
+
VPUNPCKLBW __m256i _mm256_maskz_unpacklo_epi8( __mmask32 k, __m256i a, __m256i b);
+
+
VPUNPCKLBW __m128i _mm_mask_unpacklo_epi8(__m128i s, __mmask16 k, __m128i a, __m128i b);
+
+
VPUNPCKLBW __m128i _mm_maskz_unpacklo_epi8( __mmask16 k, __m128i a, __m128i b);
+
+
VPUNPCKLWD __m512i _mm512_unpacklo_epi16(__m512i a, __m512i b);
+
+
VPUNPCKLWD __m512i _mm512_mask_unpacklo_epi16(__m512i s, __mmask32 k, __m512i a, __m512i b);
+
+
VPUNPCKLWD __m512i _mm512_maskz_unpacklo_epi16( __mmask32 k, __m512i a, __m512i b);
+
+
VPUNPCKLWD __m256i _mm256_mask_unpacklo_epi16(__m256i s, __mmask16 k, __m256i a, __m256i b);
+
+
VPUNPCKLWD __m256i _mm256_maskz_unpacklo_epi16( __mmask16 k, __m256i a, __m256i b);
+
+
VPUNPCKLWD __m128i _mm_mask_unpacklo_epi16(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPUNPCKLWD __m128i _mm_maskz_unpacklo_epi16( __mmask8 k, __m128i a, __m128i b);
+
+
VPUNPCKLDQ __m512i _mm512_unpacklo_epi32(__m512i a, __m512i b);
+
+
VPUNPCKLDQ __m512i _mm512_mask_unpacklo_epi32(__m512i s, __mmask16 k, __m512i a, __m512i b);
+
+
VPUNPCKLDQ __m512i _mm512_maskz_unpacklo_epi32( __mmask16 k, __m512i a, __m512i b);
+
+
VPUNPCKLDQ __m256i _mm256_mask_unpacklo_epi32(__m256i s, __mmask8 k, __m256i a, __m256i b);
+
+
VPUNPCKLDQ __m256i _mm256_maskz_unpacklo_epi32( __mmask8 k, __m256i a, __m256i b);
+
+
VPUNPCKLDQ __m128i _mm_mask_unpacklo_epi32(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPUNPCKLDQ __m128i _mm_maskz_unpacklo_epi32( __mmask8 k, __m128i a, __m128i b);
+
+
VPUNPCKLQDQ __m512i _mm512_unpacklo_epi64(__m512i a, __m512i b);
+
+
VPUNPCKLQDQ __m512i _mm512_mask_unpacklo_epi64(__m512i s, __mmask8 k, __m512i a, __m512i b);
+
+
VPUNPCKLQDQ __m512i _mm512_maskz_unpacklo_epi64( __mmask8 k, __m512i a, __m512i b);
+
+
VPUNPCKLQDQ __m256i _mm256_mask_unpacklo_epi64(__m256i s, __mmask8 k, __m256i a, __m256i b);
+
+
VPUNPCKLQDQ __m256i _mm256_maskz_unpacklo_epi64( __mmask8 k, __m256i a, __m256i b);
+
+
VPUNPCKLQDQ __m128i _mm_mask_unpacklo_epi64(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPUNPCKLQDQ __m128i _mm_maskz_unpacklo_epi64( __mmask8 k, __m128i a, __m128i b);
+
+
PUNPCKLBW __m64 _mm_unpacklo_pi8 (__m64 m1, __m64 m2)
+
+
(V)PUNPCKLBW __m128i _mm_unpacklo_epi8 (__m128i m1, __m128i m2)
+
+
VPUNPCKLBW __m256i _mm256_unpacklo_epi8 (__m256i m1, __m256i m2)
+
+
PUNPCKLWD __m64 _mm_unpacklo_pi16 (__m64 m1, __m64 m2)
+
+
(V)PUNPCKLWD __m128i _mm_unpacklo_epi16 (__m128i m1, __m128i m2)
+
+
VPUNPCKLWD __m256i _mm256_unpacklo_epi16 (__m256i m1, __m256i m2)
+
+
PUNPCKLDQ __m64 _mm_unpacklo_pi32 (__m64 m1, __m64 m2)
+
+
(V)PUNPCKLDQ __m128i _mm_unpacklo_epi32 (__m128i m1, __m128i m2)
+
+
VPUNPCKLDQ __m256i _mm256_unpacklo_epi32 (__m256i m1, __m256i m2)
+
+
(V)PUNPCKLQDQ __m128i _mm_unpacklo_epi64 (__m128i m1, __m128i m2)
+
+
VPUNPCKLQDQ __m256i _mm256_unpacklo_epi64 (__m256i m1, __m256i m2)
+
+

Flags Affected + ¶ +

+

None.

+

Numeric Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded VPUNPCKLDQ/QDQ, see Table 2-50, “Type E4NF Class Exception Conditions.”

+

EVEX-encoded VPUNPCKLBW/WD, see Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/push.html b/x86/push.html new file mode 100644 index 0000000..7a0eeac --- /dev/null +++ b/x86/push.html @@ -0,0 +1,317 @@ + +PUSH + — Push Word, Doubleword, or Quadword Onto the Stack

PUSH + — Push Word, Doubleword, or Quadword Onto the Stack

+ +

Opcode1

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
FF /6PUSH r/m16MValidValidPush r/m16.
FF /6PUSH r/m32MN.E.ValidPush r/m32.
FF /6PUSH r/m64MValidN.E.Push r/m64.
50+rwPUSH r16OValidValidPush r16.
50+rdPUSH r32ON.E.ValidPush r32.
50+rdPUSH r64OValidN.E.Push r64.
6A ibPUSH imm8IValidValidPush imm8.
68 iwPUSH imm16IValidValidPush imm16.
68 idPUSH imm32IValidValidPush imm32.
0EPUSH CSZOInvalidValidPush CS.
16PUSH SSZOInvalidValidPush SS.
1EPUSH DSZOInvalidValidPush DS.
06PUSH ESZOInvalidValidPush ES.
0F A0PUSH FSZOValidValidPush FS.
0F A8PUSH GSZOValidValidPush GS.
+
+

1. See the IA-32 Architecture Compatibility section below.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r)N/AN/AN/A
Oopcode + rd (r)N/AN/AN/A
Iimm8/16/32N/AN/AN/A
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Decrements the stack pointer and then stores the source operand on the top of the stack. Address and operand sizes are determined and used as follows:

+
    +
  • Address size. The D flag in the current code-segment descriptor determines the default address size; it may be overridden by an instruction prefix (67H).
+

The address size is used only when referencing a source operand in memory.

+
    +
  • Operand size. The D flag in the current code-segment descriptor determines the default operand size; it may be overridden by instruction prefixes (66H or REX.W).
+

The operand size (16, 32, or 64 bits) determines the amount by which the stack pointer is decremented (2, 4 or 8).

+

If the source operand is an immediate of size less than the operand size, a sign-extended value is pushed on the stack. If the source operand is a segment register (16 bits) and the operand size is 64-bits, a zero-extended value is pushed on the stack; if the operand size is 32-bits, either a zero-extended value is pushed on the stack or the segment selector is written on the stack using a 16-bit move. For the last case, all recent Intel Core and Intel Atom processors perform a 16-bit move, leaving the upper portion of the stack location unmodified.

+
    +
  • Stack-address size. Outside of 64-bit mode, the B flag in the current stack-segment descriptor determines the size of the stack pointer (16 or 32 bits); in 64-bit mode, the size of the stack pointer is always 64 bits.
+

The stack-address size determines the width of the stack pointer when writing to the stack in memory and when decrementing the stack pointer. (As stated above, the amount by which the stack pointer is decremented is determined by the operand size.)

+

If the operand size is less than the stack-address size, the PUSH instruction may result in a misaligned stack pointer (a stack pointer that is not aligned on a doubleword or quadword boundary).

+

The PUSH ESP instruction pushes the value of the ESP register as it existed before the instruction was executed. If a PUSH instruction uses a memory operand in which the ESP register is used for computing the operand address, the address of the operand is computed before the ESP register is decremented.

+

If the ESP or SP register is 1 when the PUSH instruction is executed in real-address mode, a stack-fault exception (#SS) is generated (because the limit of the stack segment is violated). Its delivery encounters a second stack-fault exception (for the same reason), causing generation of a double-fault exception (#DF). Delivery of the double-fault exception encounters a third stack-fault exception, and the logical processor enters shutdown mode. See the discussion of the double-fault exception in Chapter 6 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A.

+

IA-32 Architecture Compatibility + ¶ +

+

For IA-32 processors from the Intel 286 on, the PUSH ESP instruction pushes the value of the ESP register as it existed before the instruction was executed. (This is also true for Intel 64 architecture, real-address and virtual-8086 modes of IA-32 architecture.) For the Intel® 8086 processor, the PUSH SP instruction pushes the new value of the SP register (that is the value after it has been decremented by 2).

+

Operation + ¶ +

+
(* See Description section for possible sign-extension or zero-extension of source operand and for *)
+(* a case in which the size of the memory store may be
+                    smaller than the instruction’s operand size *)
+IF StackAddrSize = 64
+    THEN
+        IF OperandSize = 64
+            THEN
+                RSP := RSP – 8;
+                Memory[SS:RSP] := SRC;
+                    (* push quadword *)
+        ELSE IF OperandSize = 32
+            THEN
+                RSP := RSP – 4;
+                Memory[SS:RSP] := SRC;
+                    (* push dword *)
+            ELSE (* OperandSize = 16 *)
+                RSP := RSP – 2;
+                Memory[SS:RSP] := SRC;
+                    (* push word *)
+        FI;
+ELSE IF StackAddrSize = 32
+    THEN
+        IF OperandSize = 64
+            THEN
+                ESP := ESP – 8;
+                Memory[SS:ESP] := SRC;
+                    (* push quadword *)
+        ELSE IF OperandSize = 32
+            THEN
+                ESP := ESP – 4;
+                Memory[SS:ESP] := SRC;
+                    (* push dword *)
+            ELSE (* OperandSize = 16 *)
+                ESP := ESP – 2;
+                Memory[SS:ESP] := SRC;
+                    (* push word *)
+        FI;
+    ELSE (* StackAddrSize = 16 *)
+        IF OperandSize = 32
+            THEN
+                SP := SP – 4;
+                Memory[SS:SP] := SRC;
+                    (* push dword *)
+            ELSE (* OperandSize = 16 *)
+                SP := SP – 2;
+                Memory[SS:SP] := SRC;
+                    (* push word *)
+        FI;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
If the new value of the SP or ESP register is outside the stack segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If the memory address is in a non-canonical form.
#SS(0)If the stack address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
If the PUSH is of CS, SS, DS, or ES.
diff --git a/x86/pusha.pushad.html b/x86/pusha.pushad.html new file mode 100644 index 0000000..a5c8580 --- /dev/null +++ b/x86/pusha.pushad.html @@ -0,0 +1,141 @@ + +PUSHA/PUSHAD + — Push All General-Purpose Registers

PUSHA/PUSHAD + — Push All General-Purpose Registers

+ + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
60PUSHAZOInvalidValidPush AX, CX, DX, BX, original SP, BP, SI, and DI.
60PUSHADZOInvalidValidPush EAX, ECX, EDX, EBX, original ESP, EBP, ESI, and EDI.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Pushes the contents of the general-purpose registers onto the stack. The registers are stored on the stack in the following order: EAX, ECX, EDX, EBX, ESP (original value), EBP, ESI, and EDI (if the current operand-size attribute is 32) and AX, CX, DX, BX, SP (original value), BP, SI, and DI (if the operand-size attribute is 16). These instructions perform the reverse operation of the POPA/POPAD instructions. The value pushed for the ESP or SP register is its value prior to pushing the first register (see the “Operation” section below).

+

The PUSHA (push all) and PUSHAD (push all double) mnemonics reference the same opcode. The PUSHA instruction is intended for use when the operand-size attribute is 16 and the PUSHAD instruction for when the operand-size attribute is 32. Some assemblers may force the operand size to 16 when PUSHA is used and to 32 when PUSHAD is used. Others may treat these mnemonics as synonyms (PUSHA/PUSHAD) and use the current setting of the operand-size attribute to determine the size of values to be pushed onto the stack, regardless of the mnemonic used.

+

In the real-address mode, if the ESP or SP register is 1, 3, or 5 when PUSHA/PUSHAD executes: an #SS exception is generated but not delivered (the stack error reported prevents #SS delivery). Next, the processor generates a #DF exception and enters a shutdown state as described in the #DF discussion in Chapter 6 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A.

+

This instruction executes as described in compatibility mode and legacy mode. It is not valid in 64-bit mode.

+

Operation + ¶ +

+
IF 64-bit Mode
+    THEN #UD
+FI;
+IF OperandSize = 32 (* PUSHAD instruction *)
+    THEN
+        Temp := (ESP);
+        Push(EAX);
+        Push(ECX);
+        Push(EDX);
+        Push(EBX);
+        Push(Temp);
+        Push(EBP);
+        Push(ESI);
+        Push(EDI);
+    ELSE (* OperandSize = 16, PUSHA instruction *)
+        Temp := (SP);
+        Push(AX);
+        Push(CX);
+        Push(DX);
+        Push(BX);
+        Push(Temp);
+        Push(BP);
+        Push(SI);
+        Push(DI);
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#SS(0)If the starting or ending stack address is outside the stack segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If an unaligned memory reference is made while the current privilege level is 3 and alignment checking is enabled.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + +
#GPIf the ESP or SP register contains 7, 9, 11, 13, or 15.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If the ESP or SP register contains 7, 9, 11, 13, or 15.
#PF(fault-code)If a page fault occurs.
#AC(0)If an unaligned memory reference is made while alignment checking is enabled.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + +
#UDIf in 64-bit mode.
diff --git a/x86/pushf.pushfd.pushfq.html b/x86/pushf.pushfd.pushfq.html new file mode 100644 index 0000000..18e8d36 --- /dev/null +++ b/x86/pushf.pushfd.pushfq.html @@ -0,0 +1,161 @@ + +PUSHF/PUSHFD/PUSHFQ + — Push EFLAGS Register Onto the Stack

PUSHF/PUSHFD/PUSHFQ + — Push EFLAGS Register Onto the Stack

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
9CPUSHFZOValidValidPush lower 16 bits of EFLAGS.
9CPUSHFDZON.E.ValidPush EFLAGS.
9CPUSHFQZOValidN.E.Push RFLAGS.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Decrements the stack pointer by 4 (if the current operand-size attribute is 32) and pushes the entire contents of the EFLAGS register onto the stack, or decrements the stack pointer by 2 (if the operand-size attribute is 16) and pushes the lower 16 bits of the EFLAGS register (that is, the FLAGS register) onto the stack. These instructions reverse the operation of the POPF/POPFD instructions.

+

When copying the entire EFLAGS register to the stack, the VM and RF flags (bits 16 and 17) are not copied; instead, the values for these flags are cleared in the EFLAGS image stored on the stack. See Chapter 3 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for more information about the EFLAGS register.

+

The PUSHF (push flags) and PUSHFD (push flags double) mnemonics reference the same opcode. The PUSHF instruction is intended for use when the operand-size attribute is 16 and the PUSHFD instruction for when the operand-size attribute is 32. Some assemblers may force the operand size to 16 when PUSHF is used and to 32 when PUSHFD is used. Others may treat these mnemonics as synonyms (PUSHF/PUSHFD) and use the current setting of the operand-size attribute to determine the size of values to be pushed onto the stack, regardless of the mnemonic used.

+

In 64-bit mode, the instruction’s default operation is to decrement the stack pointer (RSP) by 8 and push RFLAGS onto the stack. 16-bit operation is supported using the operand size override prefix 66H. 32-bit operand size cannot be encoded in this mode. When copying RFLAGS to the stack, the VM and RF flags (bits 16 and 17) are not copied; instead, values for these flags are cleared in the RFLAGS image stored on the stack.

+

When operating in virtual-8086 mode (EFLAGS.VM = 1) without the virtual-8086 mode extensions (CR4.VME = 0), the PUSHF/PUSHFD instructions can be used only if IOPL = 3; otherwise, a general-protection exception (#GP) occurs. If the virtual-8086 mode extensions are enabled (CR4.VME = 1), PUSHF (but not PUSHFD) can be executed in virtual-8086 mode with IOPL < 3.

+

(The protected-mode virtual-interrupt feature — enabled by setting CR4.PVI — affects the CLI and STI instructions in the same manner as the virtual-8086 mode extensions. PUSHF, however, is not affected by CR4.PVI.)

+

In the real-address mode, if the ESP or SP register is 1 when PUSHF/PUSHFD instruction executes: an #SS exception is generated but not delivered (the stack error reported prevents #SS delivery). Next, the processor generates a #DF exception and enters a shutdown state as described in the #DF discussion in Chapter 6 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A.

+

Operation + ¶ +

+
IF (PE = 0) or (PE = 1 and ((VM = 0) or (VM = 1 and IOPL = 3)))
+(* Real-Address Mode, Protected mode, or Virtual-8086 mode with IOPL equal to 3 *)
+    THEN
+        IF OperandSize = 32
+            THEN
+                push (EFLAGS AND 00FCFFFFH);
+                (* VM and RF bits are cleared in image stored on the stack *)
+            ELSE
+                push (EFLAGS); (* Lower 16 bits only *)
+        FI;
+    ELSE IF 64-bit MODE (* In 64-bit Mode *)
+        IF OperandSize = 64
+            THEN
+                push (RFLAGS AND 00000000_00FCFFFFH);
+                (* VM and RF bits are cleared in image stored on the stack; *)
+            ELSE
+                push (EFLAGS); (* Lower 16 bits only *)
+        FI;
+    ELSE (* In Virtual-8086 Mode with IOPL less than 3 *)
+        IF (CR4.VME = 0) OR (OperandSize = 32)
+            THEN #GP(0); (* Trap to virtual-8086 monitor *)
+            ELSE
+                tempFLAGS := EFLAGS[15:0];
+                tempFLAGS[9] := EFLAGS[19]; (* VIF replaces IF *)
+                tempFLAGS[13:12] := 3; (* IOPL is set to 3 in the image stored on the stack *)
+                push (tempFLAGS);
+        FI;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#SS(0)If the new value of the ESP register is outside the stack segment boundary.
#PF(fault-code)If a page fault occurs.
#AC(0)If an unaligned memory reference is made while CPL = 3 and alignment checking is enabled.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If the I/O privilege level is less than 3.
#PF(fault-code)If a page fault occurs.
#AC(0)If an unaligned memory reference is made while alignment checking is enabled.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#SS(0)If the stack address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If an unaligned memory reference is made while CPL = 3 and alignment checking is enabled.
#UDIf the LOCK prefix is used.
diff --git a/x86/pxor.html b/x86/pxor.html new file mode 100644 index 0000000..54ab34c --- /dev/null +++ b/x86/pxor.html @@ -0,0 +1,247 @@ + +PXOR + — Logical Exclusive OR

PXOR + — Logical Exclusive OR

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F EF /r1 PXOR mm, mm/m64AV/VMMXBitwise XOR of mm/m64 and mm.
66 0F EF /r PXOR xmm1, xmm2/m128AV/VSSE2Bitwise XOR of xmm2/m128 and xmm1.
VEX.128.66.0F.WIG EF /r VPXOR xmm1, xmm2, xmm3/m128BV/VAVXBitwise XOR of xmm3/m128 and xmm2.
VEX.256.66.0F.WIG EF /r VPXOR ymm1, ymm2, ymm3/m256BV/VAVX2Bitwise XOR of ymm3/m256 and ymm2.
EVEX.128.66.0F.W0 EF /r VPXORD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512FBitwise XOR of packed doubleword integers in xmm2 and xmm3/m128 using writemask k1.
EVEX.256.66.0F.W0 EF /r VPXORD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512FBitwise XOR of packed doubleword integers in ymm2 and ymm3/m256 using writemask k1.
EVEX.512.66.0F.W0 EF /r VPXORD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstCV/VAVX512FBitwise XOR of packed doubleword integers in zmm2 and zmm3/m512/m32bcst using writemask k1.
EVEX.128.66.0F.W1 EF /r VPXORQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512FBitwise XOR of packed quadword integers in xmm2 and xmm3/m128 using writemask k1.
EVEX.256.66.0F.W1 EF /r VPXORQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FBitwise XOR of packed quadword integers in ymm2 and ymm3/m256 using writemask k1.
EVEX.512.66.0F.W1 EF /r VPXORQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstCV/VAVX512FBitwise XOR of packed quadword integers in zmm2 and zmm3/m512/m64bcst using writemask k1.
+
+

1. See note in Section 2.5, “Intel® AVX and Intel® SSE Instruction Exception Classification,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, and Section 23.25.3, “Exception Conditions of Legacy SIMD Instructions Operating on MMX Registers,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a bitwise logical exclusive-OR (XOR) operation on the source operand (second operand) and the destination operand (first operand) and stores the result in the destination operand. Each bit of the result is 1 if the corresponding bits of the two operands are different; each bit is 0 if the corresponding bits of the operands are the same.

+
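Two common uses follow directly from this definition: XOR-ing a register with itself clears it, and XOR-ing with a mask flips exactly the bits set in the mask. A minimal sketch with the SSE2 intrinsics listed below (function names are illustrative):

    #include <emmintrin.h>          /* SSE2 intrinsics */

    /* Flip the bits of v that are set in mask (PXOR). */
    static __m128i toggle_bits(__m128i v, __m128i mask)
    {
        return _mm_xor_si128(v, mask);
    }

    /* Compilers typically implement _mm_setzero_si128 as PXOR xmm, xmm,
       a dependency-breaking idiom for zeroing a register. */
    static __m128i zero_reg(void)
    {
        return _mm_setzero_si128();
    }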

In 64-bit mode and not encoded with VEX/EVEX, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

Legacy SSE instructions 64-bit operand: The source operand can be an MMX technology register or a 64-bit memory location. The destination operand is an MMX technology register.

+

128-bit Legacy SSE version: The second source operand is an XMM register or a 128-bit memory location. The first source operand and destination operands are XMM registers. Bits (MAXVL-1:128) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: The second source operand is an XMM register or a 128-bit memory location. The first source operand and destination operands are XMM registers. Bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the corresponding register destination are zeroed.

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32/64-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with write-mask k1.

+

Operation + ¶ +

+

PXOR (64-bit Operand) + ¶ +

+
DEST := DEST XOR SRC
+
+

PXOR (128-bit Legacy SSE Version) + ¶ +

+
DEST := DEST XOR SRC
+DEST[MAXVL-1:128] (Unmodified)
+
+

VPXOR (VEX.128 Encoded Version) + ¶ +

+
DEST := SRC1 XOR SRC2
+DEST[MAXVL-1:128] := 0
+
+

VPXOR (VEX.256 Encoded Version) + ¶ +

+
DEST := SRC1 XOR SRC2
+DEST[MAXVL-1:256] := 0
+
+

VPXORD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[i+31:i] := SRC1[i+31:i] BITWISE XOR SRC2[31:0]
+                ELSE DEST[i+31:i] := SRC1[i+31:i] BITWISE XOR SRC2[i+31:i]
+            FI;
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[i+31:i] remains unchanged*
+            ELSE
+                    ; zeroing-masking
+                DEST[i+31:i] := 0
+        FI;
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

VPXORQ (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[i+63:i] := SRC1[i+63:i] BITWISE XOR SRC2[63:0]
+                ELSE DEST[i+63:i] := SRC1[i+63:i] BITWISE XOR SRC2[i+63:i]
+            FI;
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[i+63:i] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[i+63:i] := 0
+        FI;
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPXORD __m512i _mm512_xor_epi32(__m512i a, __m512i b)
+
+
VPXORD __m512i _mm512_mask_xor_epi32(__m512i s, __mmask16 m, __m512i a, __m512i b)
+
+
VPXORD __m512i _mm512_maskz_xor_epi32( __mmask16 m, __m512i a, __m512i b)
+
+
VPXORD __m256i _mm256_xor_epi32(__m256i a, __m256i b)
+
+
VPXORD __m256i _mm256_mask_xor_epi32(__m256i s, __mmask8 m, __m256i a, __m256i b)
+
+
VPXORD __m256i _mm256_maskz_xor_epi32( __mmask8 m, __m256i a, __m256i b)
+
+
VPXORD __m128i _mm_xor_epi32(__m128i a, __m128i b)
+
+
VPXORD __m128i _mm_mask_xor_epi32(__m128i s, __mmask8 m, __m128i a, __m128i b)
+
+
VPXORD __m128i _mm_maskz_xor_epi32( __mmask8 m, __m128i a, __m128i b)
+
+
VPXORQ __m512i _mm512_xor_epi64( __m512i a, __m512i b);
+
+
VPXORQ __m512i _mm512_mask_xor_epi64(__m512i s, __mmask8 m, __m512i a, __m512i b);
+
+
VPXORQ __m512i _mm512_maskz_xor_epi64(__mmask8 m, __m512i a, __m512i b);
+
+
VPXORQ __m256i _mm256_xor_epi64( __m256i a, __m256i b);
+
+
VPXORQ __m256i _mm256_mask_xor_epi64(__m256i s, __mmask8 m, __m256i a, __m256i b);
+
+
VPXORQ __m256i _mm256_maskz_xor_epi64(__mmask8 m, __m256i a, __m256i b);
+
+
VPXORQ __m128i _mm_xor_epi64( __m128i a, __m128i b);
+
+
VPXORQ __m128i _mm_mask_xor_epi64(__m128i s, __mmask8 m, __m128i a, __m128i b);
+
+
VPXORQ __m128i _mm_maskz_xor_epi64(__mmask8 m, __m128i a, __m128i b);
+
+
PXOR:__m64 _mm_xor_si64 (__m64 m1, __m64 m2)
+
+
(V)PXOR:__m128i _mm_xor_si128 ( __m128i a, __m128i b)
+
+
VPXOR:__m256i _mm256_xor_si256 ( __m256i a, __m256i b)
+
+

Flags Affected + ¶ +

+

None.

+

Numeric Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/rcl.rcr.rol.ror.html b/x86/rcl.rcr.rol.ror.html new file mode 100644 index 0000000..30889ee --- /dev/null +++ b/x86/rcl.rcr.rol.ror.html @@ -0,0 +1,669 @@ + +RCL/RCR/ROL/ROR + — Rotate

RCL/RCR/ROL/ROR + — Rotate

+ + + + +

Opcode1

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
D0 /2RCL r/m8, 1M1ValidValidRotate 9 bits (CF, r/m8) left once.
REX + D0 /2RCL r/m82, 1M1ValidN.E.Rotate 9 bits (CF, r/m8) left once.
D2 /2RCL r/m8, CLMCValidValidRotate 9 bits (CF, r/m8) left CL times.
REX + D2 /2RCL r/m82, CLMCValidN.E.Rotate 9 bits (CF, r/m8) left CL times.
C0 /2 ibRCL r/m8, imm8MIValidValidRotate 9 bits (CF, r/m8) left imm8 times.
REX + C0 /2 ibRCL r/m82, imm8MIValidN.E.Rotate 9 bits (CF, r/m8) left imm8 times.
D1 /2RCL r/m16, 1M1ValidValidRotate 17 bits (CF, r/m16) left once.
D3 /2RCL r/m16, CLMCValidValidRotate 17 bits (CF, r/m16) left CL times.
C1 /2 ibRCL r/m16, imm8MIValidValidRotate 17 bits (CF, r/m16) left imm8 times.
D1 /2RCL r/m32, 1M1ValidValidRotate 33 bits (CF, r/m32) left once.
REX.W + D1 /2RCL r/m64, 1M1ValidN.E.Rotate 65 bits (CF, r/m64) left once. Uses a 6 bit count.
D3 /2RCL r/m32, CLMCValidValidRotate 33 bits (CF, r/m32) left CL times.
REX.W + D3 /2RCL r/m64, CLMCValidN.E.Rotate 65 bits (CF, r/m64) left CL times. Uses a 6 bit count.
C1 /2 ibRCL r/m32, imm8MIValidValidRotate 33 bits (CF, r/m32) left imm8 times.
REX.W + C1 /2 ibRCL r/m64, imm8MIValidN.E.Rotate 65 bits (CF, r/m64) left imm8 times. Uses a 6 bit count.
D0 /3RCR r/m8, 1M1ValidValidRotate 9 bits (CF, r/m8) right once.
REX + D0 /3RCR r/m82, 1M1ValidN.E.Rotate 9 bits (CF, r/m8) right once.
D2 /3RCR r/m8, CLMCValidValidRotate 9 bits (CF, r/m8) right CL times.
REX + D2 /3RCR r/m82, CLMCValidN.E.Rotate 9 bits (CF, r/m8) right CL times.
C0 /3 ibRCR r/m8, imm8MIValidValidRotate 9 bits (CF, r/m8) right imm8 times.
REX + C0 /3 ibRCR r/m82, imm8MIValidN.E.Rotate 9 bits (CF, r/m8) right imm8 times.
D1 /3RCR r/m16, 1M1ValidValidRotate 17 bits (CF, r/m16) right once.
D3 /3RCR r/m16, CLMCValidValidRotate 17 bits (CF, r/m16) right CL times.
C1 /3 ibRCR r/m16, imm8MIValidValidRotate 17 bits (CF, r/m16) right imm8 times.
D1 /3RCR r/m32, 1M1ValidValidRotate 33 bits (CF, r/m32) right once.
REX.W + D1 /3RCR r/m64, 1M1ValidN.E.Rotate 65 bits (CF, r/m64) right once. Uses a 6 bit count.
D3 /3RCR r/m32, CLMCValidValidRotate 33 bits (CF, r/m32) right CL times.
REX.W + D3 /3RCR r/m64, CLMCValidN.E.Rotate 65 bits (CF, r/m64) right CL times. Uses a 6 bit count.
C1 /3 ibRCR r/m32, imm8MIValidValidRotate 33 bits (CF, r/m32) right imm8 times.
REX.W + C1 /3 ibRCR r/m64, imm8MIValidN.E.Rotate 65 bits (CF, r/m64) right imm8 times. Uses a 6 bit count.
D0 /0ROL r/m8, 1M1ValidValidRotate 8 bits r/m8 left once.
REX + D0 /0ROL r/m82, 1M1ValidN.E.Rotate 8 bits r/m8 left once
D2 /0ROL r/m8, CLMCValidValidRotate 8 bits r/m8 left CL times.
REX + D2 /0ROL r/m82, CLMCValidN.E.Rotate 8 bits r/m8 left CL times.
C0 /0 ibROL r/m8, imm8MIValidValidRotate 8 bits r/m8 left imm8 times.
+

Opcode1

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
REX + C0 /0 ibROL r/m82, imm8MIValidN.E.Rotate 8 bits r/m8 left imm8 times.
D1 /0ROL r/m16, 1M1ValidValidRotate 16 bits r/m16 left once.
D3 /0ROL r/m16, CLMCValidValidRotate 16 bits r/m16 left CL times.
C1 /0 ibROL r/m16, imm8MIValidValidRotate 16 bits r/m16 left imm8 times.
D1 /0ROL r/m32, 1M1ValidValidRotate 32 bits r/m32 left once.
REX.W + D1 /0ROL r/m64, 1M1ValidN.E.Rotate 64 bits r/m64 left once. Uses a 6 bit count.
D3 /0ROL r/m32, CLMCValidValidRotate 32 bits r/m32 left CL times.
REX.W + D3 /0ROL r/m64, CLMCValidN.E.Rotate 64 bits r/m64 left CL times. Uses a 6 bit count.
C1 /0 ibROL r/m32, imm8MIValidValidRotate 32 bits r/m32 left imm8 times.
REX.W + C1 /0 ibROL r/m64, imm8MIValidN.E.Rotate 64 bits r/m64 left imm8 times. Uses a 6 bit count.
D0 /1ROR r/m8, 1M1ValidValidRotate 8 bits r/m8 right once.
REX + D0 /1ROR r/m82, 1M1ValidN.E.Rotate 8 bits r/m8 right once.
D2 /1ROR r/m8, CLMCValidValidRotate 8 bits r/m8 right CL times.
REX + D2 /1ROR r/m82, CLMCValidN.E.Rotate 8 bits r/m8 right CL times.
C0 /1 ibROR r/m8, imm8MIValidValidRotate 8 bits r/m8 right imm8 times.
REX + C0 /1 ibROR r/m82, imm8MIValidN.E.Rotate 8 bits r/m8 right imm8 times.
D1 /1ROR r/m16, 1M1ValidValidRotate 16 bits r/m16 right once.
D3 /1ROR r/m16, CLMCValidValidRotate 16 bits r/m16 right CL times.
C1 /1 ibROR r/m16, imm8MIValidValidRotate 16 bits r/m16 right imm8 times.
D1 /1ROR r/m32, 1M1ValidValidRotate 32 bits r/m32 right once.
REX.W + D1 /1ROR r/m64, 1M1ValidN.E.Rotate 64 bits r/m64 right once. Uses a 6 bit count.
D3 /1ROR r/m32, CLMCValidValidRotate 32 bits r/m32 right CL times.
REX.W + D3 /1ROR r/m64, CLMCValidN.E.Rotate 64 bits r/m64 right CL times. Uses a 6 bit count.
C1 /1 ibROR r/m32, imm8MIValidValidRotate 32 bits r/m32 right imm8 times.
REX.W + C1 /1 ibROR r/m64, imm8MIValidN.E.Rotate 64 bits r/m64 right imm8 times. Uses a 6 bit count.
+
+

1. See the IA-32 Architecture Compatibility section below.

+

2. In 64-bit mode, r/m8 can not be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
M1ModRM:r/m (w)1N/AN/A
MCModRM:r/m (w)CLN/AN/A
MIModRM:r/m (w)imm8N/AN/A
+

Description + ¶ +

+

Shifts (rotates) the bits of the first operand (destination operand) the number of bit positions specified in the second operand (count operand) and stores the result in the destination operand. The destination operand can be a register or a memory location; the count operand is an unsigned integer that can be an immediate or a value in the CL register. The count is masked to 5 bits (or 6 bits if in 64-bit mode and REX.W = 1).

+

The rotate left (ROL) and rotate through carry left (RCL) instructions shift all the bits toward more-significant bit positions, except for the most-significant bit, which is rotated to the least-significant bit location. The rotate right (ROR) and rotate through carry right (RCR) instructions shift all the bits toward less significant bit positions, except for the least-significant bit, which is rotated to the most-significant bit location.

+

The RCL and RCR instructions include the CF flag in the rotation. The RCL instruction shifts the CF flag into the least-significant bit and shifts the most-significant bit into the CF flag. The RCR instruction shifts the CF flag into the most-significant bit and shifts the least-significant bit into the CF flag. For the ROL and ROR instructions, the original value of the CF flag is not a part of the result, but the CF flag receives a copy of the bit that was shifted from one end to the other.

+
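For ROL and ROR the effect can be expressed in plain C; the following is a sketch (not from the manual) that masks the count the way the hardware does for 32-bit operands, so a count of 0 or 32 leaves the value unchanged and no undefined shift amounts occur. RCL and RCR have no direct C equivalent because they rotate through the CF flag.

    #include <stdint.h>

    /* 32-bit rotate left/right; compilers commonly recognize this pattern
       and emit a single ROL/ROR instruction. */
    static inline uint32_t rol32(uint32_t x, unsigned count)
    {
        count &= 31;                                 /* count masked to 5 bits */
        return (x << count) | (x >> ((32 - count) & 31));
    }

    static inline uint32_t ror32(uint32_t x, unsigned count)
    {
        count &= 31;
        return (x >> count) | (x << ((32 - count) & 31));
    }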

The OF flag is defined only for the 1-bit rotates; it is undefined in all other cases (except that, for the RCL and RCR instructions only, a zero-bit rotate does nothing, that is, it affects no flags). For left rotates, the OF flag is set to the exclusive OR of the CF bit (after the rotate) and the most-significant bit of the result. For right rotates, the OF flag is set to the exclusive OR of the two most-significant bits of the result.

+

In 64-bit mode, using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). Use of REX.W promotes the first operand to 64 bits and causes the count operand to become a 6-bit counter.

+

IA-32 Architecture Compatibility + ¶ +

+

The 8086 does not mask the rotation count. However, all other IA-32 processors (starting with the Intel 286 processor) do mask the rotation count to 5 bits, resulting in a maximum count of 31. This masking is done in all operating modes (including the virtual-8086 mode) to reduce the maximum execution time of the instructions.

+

Operation + ¶ +

+

(* RCL and RCR Instructions *) + ¶ +

+
SIZE := OperandSize;
+CASE (determine count) OF
+    SIZE := 8:
+        tempCOUNT := (COUNT AND 1FH) MOD 9;
+    SIZE := 16:
+        tempCOUNT := (COUNT AND 1FH) MOD 17;
+    SIZE := 32:
+        tempCOUNT := COUNT AND 1FH;
+    SIZE := 64:
+        tempCOUNT := COUNT AND 3FH;
+ESAC;
+IF OperandSize = 64
+    THEN COUNTMASK = 3FH;
+    ELSE COUNTMASK = 1FH;
+FI;
+
+

(* RCL Instruction Operation *) + ¶ +

+
WHILE (tempCOUNT ≠ 0)
+    DO
+        tempCF := MSB(DEST);
+        DEST := (DEST ∗ 2) + CF;
+        CF := tempCF;
+        tempCOUNT := tempCOUNT – 1;
+    OD;
+ELIHW;
+IF (COUNT & COUNTMASK) = 1
+    THEN OF := MSB(DEST) XOR CF;
+    ELSE OF is undefined;
+FI;
+
+

(* RCR Instruction Operation *) + ¶ +

+
IF (COUNT & COUNTMASK) = 1
+    THEN OF := MSB(DEST) XOR CF;
+    ELSE OF is undefined;
+FI;
+WHILE (tempCOUNT ≠ 0)
+    DO
+        tempCF := LSB(SRC);
+        DEST := (DEST / 2) + (CF ∗ 2^SIZE);
+        CF := tempCF;
+        tempCOUNT := tempCOUNT – 1;
+    OD;
+
+

(* ROL Instruction Operation *) + ¶ +

+
tempCOUNT := (COUNT & COUNTMASK) MOD SIZE
+WHILE (tempCOUNT ≠ 0)
+    DO
+        tempCF := MSB(DEST);
+        DEST := (DEST ∗ 2) + tempCF;
+        tempCOUNT := tempCOUNT – 1;
+    OD;
+ELIHW;
+IF (COUNT & COUNTMASK) ≠ 0
+    THEN CF := LSB(DEST);
+FI;
+IF (COUNT & COUNTMASK) = 1
+    THEN OF := MSB(DEST) XOR CF;
+    ELSE OF is undefined;
+FI;
+
+

(* ROR Instruction Operation *) + ¶ +

+
tempCOUNT := (COUNT & COUNTMASK) MOD SIZE
+WHILE (tempCOUNT ≠ 0)
+    DO
+        tempCF := LSB(SRC);
+        DEST := (DEST / 2) + (tempCF ∗ 2^SIZE);
+        tempCOUNT := tempCOUNT – 1;
+    OD;
+ELIHW;
+IF (COUNT & COUNTMASK) ≠ 0
+    THEN CF := MSB(DEST);
+FI;
+IF (COUNT & COUNTMASK) = 1
+    THEN OF := MSB(DEST) XOR MSB-1(DEST); (* MSB-1 is the next-to-most-significant bit *)
+    ELSE OF is undefined;
+FI;
+
+

Flags Affected + ¶ +

+

For RCL and RCR instructions, a zero-bit rotate does nothing, i.e., affects no flags. For ROL and ROR instructions, if the masked count is 0, the flags are not affected. If the masked count is 1, then the OF flag is affected, otherwise (masked count is greater than 1) the OF flag is undefined.

+

For all instructions, the CF flag is affected when the masked count is non-zero. The SF, ZF, AF, and PF flags are always unaffected.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If the source operand is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the source operand is located in a nonwritable segment.
If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/rcpps.html b/x86/rcpps.html new file mode 100644 index 0000000..a7baccb --- /dev/null +++ b/x86/rcpps.html @@ -0,0 +1,114 @@ + +RCPPS + — Compute Reciprocals of Packed Single Precision Floating-Point Values

RCPPS + — Compute Reciprocals of Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode*/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 53 /r RCPPS xmm1, xmm2/m128RMV/VSSEComputes the approximate reciprocals of the packed single precision floating-point values in xmm2/m128 and stores the results in xmm1.
VEX.128.0F.WIG 53 /r VRCPPS xmm1, xmm2/m128RMV/VAVXComputes the approximate reciprocals of packed single precision values in xmm2/mem and stores the results in xmm1.
VEX.256.0F.WIG 53 /r VRCPPS ymm1, ymm2/m256RMV/VAVXComputes the approximate reciprocals of packed single precision values in ymm2/mem and stores the results in ymm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Performs a SIMD computation of the approximate reciprocals of the four packed single precision floating-point values in the source operand (second operand) and stores the packed single precision floating-point results in the destination operand. The source operand can be an XMM register or a 128-bit memory location. The destination operand is an XMM register. See Figure 10-5 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for an illustration of a SIMD single precision floating-point operation.

+

The relative error for this approximation is:

+

|Relative Error| ≤ 1.5 ∗ 2^−12

+
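Because the approximation carries only about 12 bits of precision, a common follow-up (a sketch, not part of the manual; the helper name is illustrative) is one Newton-Raphson step, x1 = x0·(2 − a·x0), which roughly squares the relative error:

    #include <xmmintrin.h>          /* SSE intrinsics */

    /* Approximate 1/a with RCPPS, then refine with one Newton-Raphson step. */
    static __m128 fast_recip_ps(__m128 a)
    {
        __m128 x0 = _mm_rcp_ps(a);                       /* ~12-bit estimate */
        __m128 t  = _mm_sub_ps(_mm_set1_ps(2.0f),
                               _mm_mul_ps(a, x0));       /* 2 - a*x0         */
        return _mm_mul_ps(x0, t);                        /* ~23-bit estimate */
    }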

The RCPPS instruction is not affected by the rounding control bits in the MXCSR register. When a source value is a 0.0, an ∞ of the sign of the source value is returned. A denormal source value is treated as a 0.0 (of the same sign). Tiny results (see Section 4.9.1.5, “Numeric Underflow Exception (#U)” in Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1) are always flushed to 0.0, with the sign of the operand. (Input values less than or equal to |1.11111111110100000000000B ∗ 2^125| are guaranteed to not produce tiny results; input values greater than or equal to |1.00000000000110000000001B ∗ 2^126| are guaranteed to produce tiny results, which are in turn flushed to 0.0; and input values in between this range may or may not produce tiny results, depending on the implementation.) When a source value is an SNaN or QNaN, the SNaN is converted to a QNaN or the source QNaN is returned.

+

In 64-bit mode, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

128-bit Legacy SSE version: The second source can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding YMM register destination are unmodified.

+

VEX.128 encoded version: the first source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding YMM register destination are zeroed.

+

VEX.256 encoded version: The first source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+

Note: In VEX-encoded versions, VEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Operation + ¶ +

+

RCPPS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := APPROXIMATE(1/SRC[31:0])
+DEST[63:32] := APPROXIMATE(1/SRC[63:32])
+DEST[95:64] := APPROXIMATE(1/SRC[95:64])
+DEST[127:96] := APPROXIMATE(1/SRC[127:96])
+DEST[MAXVL-1:128] (Unmodified)
+
+

VRCPPS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := APPROXIMATE(1/SRC[31:0])
+DEST[63:32] := APPROXIMATE(1/SRC[63:32])
+DEST[95:64] := APPROXIMATE(1/SRC[95:64])
+DEST[127:96] := APPROXIMATE(1/SRC[127:96])
+DEST[MAXVL-1:128] := 0
+
+

VRCPPS (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := APPROXIMATE(1/SRC[31:0])
+DEST[63:32] := APPROXIMATE(1/SRC[63:32])
+DEST[95:64] := APPROXIMATE(1/SRC[95:64])
+DEST[127:96] := APPROXIMATE(1/SRC[127:96])
+DEST[159:128] := APPROXIMATE(1/SRC[159:128])
+DEST[191:160] := APPROXIMATE(1/SRC[191:160])
+DEST[223:192] := APPROXIMATE(1/SRC[223:192])
+DEST[255:224] := APPROXIMATE(1/SRC[255:224])
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
RCPPS __m128 _mm_rcp_ps(__m128 a)
+
+
RCPPS __m256 _mm256_rcp_ps (__m256 a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.vvvv ≠ 1111B.
diff --git a/x86/rcpss.html b/x86/rcpss.html new file mode 100644 index 0000000..1a7d0c2 --- /dev/null +++ b/x86/rcpss.html @@ -0,0 +1,89 @@ + +RCPSS + — Compute Reciprocal of Scalar Single Precision Floating-Point Values

RCPSS + — Compute Reciprocal of Scalar Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + +
Opcode*/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 53 /r RCPSS xmm1, xmm2/m32RMV/VSSEComputes the approximate reciprocal of the scalar single precision floating-point value in xmm2/m32 and stores the result in xmm1.
VEX.LIG.F3.0F.WIG 53 /r VRCPSS xmm1, xmm2, xmm3/m32RVMV/VAVXComputes the approximate reciprocal of the scalar single precision floating-point value in xmm3/m32 and stores the result in xmm1. Also, upper single precision floating-point values (bits[127:32]) from xmm2 are copied to xmm1[127:32].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
RVMModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Computes an approximate reciprocal of the low single precision floating-point value in the source operand (second operand) and stores the single precision floating-point result in the destination operand. The source operand can be an XMM register or a 32-bit memory location. The destination operand is an XMM register. The three high-order doublewords of the destination operand remain unchanged. See Figure 10-6 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for an illustration of a scalar single precision floating-point operation.

+

The relative error for this approximation is:

+

|Relative Error| ≤ 1.5 ∗ 2^−12

+

The RCPSS instruction is not affected by the rounding control bits in the MXCSR register. When a source value is a 0.0, an ∞ of the sign of the source value is returned. A denormal source value is treated as a 0.0 (of the same sign). Tiny results (see Section 4.9.1.5, “Numeric Underflow Exception (#U)” in Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1) are always flushed to 0.0, with the sign of the operand. (Input values greater than or equal to |1.11111111110100000000000B∗2^125| are guaranteed to not produce tiny results; input values less than or equal to |1.00000000000110000000001B∗2^126| are guaranteed to produce tiny results, which are in turn flushed to 0.0; and input values in between this range may or may not produce tiny results, depending on the implementation.) When a source value is an SNaN or QNaN, the SNaN is converted to a QNaN or the source QNaN is returned.

+

In 64-bit mode, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

128-bit Legacy SSE version: The first source operand and the destination operand are the same. Bits (MAXVL-1:32) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: Bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

Operation + ¶ +

+

RCPSS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := APPROXIMATE(1/SRC[31:0])
+DEST[MAXVL-1:32] (Unmodified)
+
+

VRCPSS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := APPROXIMATE(1/SRC2[31:0])
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
RCPSS __m128 _mm_rcp_ss(__m128 a)
+
+
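For illustration only: a minimal scalar wrapper around the intrinsic above; approx_rcp is a made-up helper name.

#include <xmmintrin.h>

/* Approximate 1/x for one float via RCPSS (about 12 bits of accuracy). */
static inline float approx_rcp(float x)
{
    return _mm_cvtss_f32(_mm_rcp_ss(_mm_set_ss(x)));
}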

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-22, “Type 5 Class Exception Conditions.”

diff --git a/x86/rdfsbase.rdgsbase.html b/x86/rdfsbase.rdgsbase.html new file mode 100644 index 0000000..344d550 --- /dev/null +++ b/x86/rdfsbase.rdgsbase.html @@ -0,0 +1,122 @@ + +RDFSBASE/RDGSBASE + — Read FS/GS Segment Base

RDFSBASE/RDGSBASE + — Read FS/GS Segment Base

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
F3 0F AE /0 RDFSBASE r32MV/IFSGSBASELoad the 32-bit destination register with the FS base address.
F3 REX.W 0F AE /0 RDFSBASE r64MV/IFSGSBASELoad the 64-bit destination register with the FS base address.
F3 0F AE /1 RDGSBASE r32MV/IFSGSBASELoad the 32-bit destination register with the GS base address.
F3 REX.W 0F AE /1 RDGSBASE r64MV/IFSGSBASELoad the 64-bit destination register with the GS base address.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (w)N/AN/AN/A
+

Description + ¶ +

+

Loads the general-purpose register indicated by the ModR/M:r/m field with the FS or GS segment base address.

+

The destination operand may be either a 32-bit or a 64-bit general-purpose register. The REX.W prefix indicates the operand size is 64 bits. If no REX.W prefix is used, the operand size is 32 bits; the upper 32 bits of the source base address (for FS or GS) are ignored and upper 32 bits of the destination register are cleared.

+

This instruction is supported only in 64-bit mode.

+

Operation + ¶ +

+
DEST := FS/GS segment base address;
+
+

Flags Affected + ¶ +

+

None.

+

C/C++ Compiler Intrinsic Equivalent + ¶ +

+
RDFSBASE unsigned int _readfsbase_u32(void );
+
+
RDFSBASE unsigned __int64 _readfsbase_u64(void );
+
+
RDGSBASE unsigned int _readgsbase_u32(void );
+
+
RDGSBASE unsigned __int64 _readgsbase_u64(void );
+
+
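For illustration only, a user-space sketch for GCC/Clang in 64-bit mode (compiled with -mfsgsbase). The CPUID check below only confirms hardware support; it assumes the OS has also set CR4.FSGSBASE.

#include <cpuid.h>       /* __get_cpuid_count (GCC/Clang) */
#include <immintrin.h>   /* _readgsbase_u64 */
#include <stdio.h>

int main(void)
{
    unsigned eax, ebx, ecx, edx;
    /* CPUID.(EAX=07H,ECX=0H):EBX.FSGSBASE[bit 0] reports hardware support;
       RDGSBASE still raises #UD unless the OS has set CR4.FSGSBASE[bit 16]. */
    if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx) && (ebx & 1))
        printf("GS base = %#llx\n", (unsigned long long)_readgsbase_u64());
    return 0;
}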

Protected Mode Exceptions + ¶ +

+ + + +
#UD The RDFSBASE and RDGSBASE instructions are not recognized in protected mode.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UD The RDFSBASE and RDGSBASE instructions are not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UD The RDFSBASE and RDGSBASE instructions are not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UD The RDFSBASE and RDGSBASE instructions are not recognized in compatibility mode.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + +
#UD If the LOCK prefix is used.
If CR4.FSGSBASE[bit 16] = 0.
If CPUID.07H.0H:EBX.FSGSBASE[bit 0] = 0.
diff --git a/x86/rdmsr.html b/x86/rdmsr.html new file mode 100644 index 0000000..fc3c3a9 --- /dev/null +++ b/x86/rdmsr.html @@ -0,0 +1,100 @@ + +RDMSR + — Read From Model Specific Register

RDMSR + — Read From Model Specific Register

+ +

Opcode1

+ + + + + + + + + + + + + + +
InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 32ValidValidRead MSR specified by ECX into EDX:EAX.
+

1. See the IA-32 Architecture Compatibility section below.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Reads the contents of a 64-bit model specific register (MSR) specified in the ECX register into registers EDX:EAX. (On processors that support the Intel 64 architecture, the high-order 32 bits of RCX are ignored.) The EDX register is loaded with the high-order 32 bits of the MSR and the EAX register is loaded with the low-order 32 bits. (On processors that support the Intel 64 architecture, the high-order 32 bits of each of RAX and RDX are cleared.) If fewer than 64 bits are implemented in the MSR being read, the values returned to EDX:EAX in unimplemented bit locations are undefined.

+

This instruction must be executed at privilege level 0 or in real-address mode; otherwise, a general protection exception #GP(0) will be generated. Specifying a reserved or unimplemented MSR address in ECX will also cause a general protection exception.

+

The MSRs control functions for testability, execution tracing, performance-monitoring, and machine check errors. Chapter 2, “Model-Specific Registers (MSRs)” of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 4, lists all the MSRs that can be read with this instruction and their addresses. Note that each processor family has its own set of MSRs.

+

The CPUID instruction should be used to determine whether MSRs are supported (CPUID.01H:EDX[5] = 1) before using this instruction.

+

IA-32 Architecture Compatibility + ¶ +

+

The MSRs and the ability to read them with the RDMSR instruction were introduced into the IA-32 Architecture with the Pentium processor. Execution of this instruction by an IA-32 processor earlier than the Pentium processor results in an invalid opcode exception #UD.

+

See “Changes to Instruction Behavior in VMX Non-Root Operation” in Chapter 26 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3C, for more information about the behavior of this instruction in VMX non-root operation.

+

Operation + ¶ +

+
EDX:EAX := MSR[ECX];
+
+
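For illustration only, a ring-0 style wrapper as it might appear in kernel code (GCC inline assembly); rdmsr64 is a made-up helper name.

#include <stdint.h>

/* Executes RDMSR: ECX selects the MSR, EDX:EAX receives its value.
   Usable only at CPL 0 (or in real-address mode). */
static inline uint64_t rdmsr64(uint32_t msr)
{
    uint32_t lo, hi;
    __asm__ volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
    return ((uint64_t)hi << 32) | lo;
}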

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + +
#GP(0) If the current privilege level is not 0.
If the value in ECX specifies a reserved or unimplemented MSR address.
#UD If the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + +
#GP If the value in ECX specifies a reserved or unimplemented MSR address.
#UD If the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#GP(0) The RDMSR instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/rdpid.html b/x86/rdpid.html new file mode 100644 index 0000000..d6703e0 --- /dev/null +++ b/x86/rdpid.html @@ -0,0 +1,84 @@ + +RDPID + — Read Processor ID

RDPID + — Read Processor ID

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
F3 0F C7 /7 RDPID r32RN.E./VRDPIDRead IA32_TSC_AUX into r32.
F3 0F C7 /7 RDPID r64RV/N.E.RDPIDRead IA32_TSC_AUX into r64.
+

Instruction Operand Encoding1 + ¶ +

+
+

1. ModRM.MOD = 011B required.

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RModRM:r/m (w)N/AN/AN/A
+

Description + ¶ +

+

Reads the value of the IA32_TSC_AUX MSR (address C0000103H) into the destination register. The value of CS.D and operand-size prefixes (66H and REX.W) do not affect the behavior of the RDPID instruction.

+

Operation + ¶ +

+
DEST := IA32_TSC_AUX
+
+
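For illustration only, assuming the _rdpid_u32 intrinsic provided by recent GCC/Clang (compile with -mrdpid); read_tsc_aux is a made-up helper name.

#include <immintrin.h>

/* Reads IA32_TSC_AUX from user space; on Linux the kernel conventionally
   stores the CPU (and node) number there, so this is a cheap getcpu(). */
unsigned read_tsc_aux(void)
{
    return _rdpid_u32();
}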

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + +
#UD If the LOCK prefix is used.
If CPUID.7H.0:ECX.RDPID[bit 22] = 0.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/rdpkru.html b/x86/rdpkru.html new file mode 100644 index 0000000..3dab5ee --- /dev/null +++ b/x86/rdpkru.html @@ -0,0 +1,93 @@ + +RDPKRU + — Read Protection Key Rights for User Pages

RDPKRU + — Read Protection Key Rights for User Pages

+ + + + + + + + + + + + + + + +
Opcode*InstructionOp/En64/32bit Mode SupportCPUID Feature FlagDescription
NP 0F 01 EERDPKRUZOV/VOSPKEReads PKRU into EAX.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Reads the value of PKRU into EAX and clears EDX. ECX must be 0 when RDPKRU is executed; otherwise, a general-protection exception (#GP) occurs.

+

RDPKRU can be executed only if CR4.PKE = 1; otherwise, an invalid-opcode exception (#UD) occurs. Software can discover the value of CR4.PKE by examining CPUID.(EAX=07H,ECX=0H):ECX.OSPKE [bit 4].

+

On processors that support the Intel 64 Architecture, the high-order 32-bits of RCX are ignored and the high-order 32-bits of RDX and RAX are cleared.

+

Operation + ¶ +

+
IF (ECX = 0)
+    THEN
+        EAX := PKRU;
+        EDX := 0;
+    ELSE #GP(0);
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

C/C++ Compiler Intrinsic Equivalent + ¶ +

+
RDPKRU uint32_t _rdpkru_u32(void);
+
+
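For illustration only (GCC/Clang, compiled with -mpku); key_is_writable is a made-up helper that decodes the two rights bits per protection key.

#include <immintrin.h>   /* _rdpkru_u32 */

/* PKRU holds two bits per key: bit 2k = access-disable (AD),
   bit 2k+1 = write-disable (WD).  Returns nonzero if key k allows writes. */
static int key_is_writable(unsigned k)
{
    unsigned pkru = _rdpkru_u32();
    return (pkru & (3u << (2 * k))) == 0;
}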

Protected Mode Exceptions + ¶ +

+ + + + + + + + +
#GP(0) If ECX ≠ 0.
#UD If the LOCK prefix is used.
If CR4.PKE = 0.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/rdpmc.html b/x86/rdpmc.html new file mode 100644 index 0000000..f912706 --- /dev/null +++ b/x86/rdpmc.html @@ -0,0 +1,128 @@ + +RDPMC + — Read Performance-Monitoring Counters

RDPMC + — Read Performance-Monitoring Counters

+ + + + + + + + + + + + + + + +
Opcode*InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 33RDPMCZOValidValidRead performance-monitoring counter specified by ECX into EDX:EAX.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Reads the contents of the performance monitoring counter (PMC) specified in ECX register into registers EDX:EAX. (On processors that support the Intel 64 architecture, the high-order 32 bits of RCX are ignored.) The EDX register is loaded with the high-order 32 bits of the PMC and the EAX register is loaded with the low-order 32 bits. (On processors that support the Intel 64 architecture, the high-order 32 bits of each of RAX and RDX are cleared.) If fewer than 64 bits are implemented in the PMC being read, unimplemented bits returned to EDX:EAX will have value zero.

+

The width of PMCs on processors supporting architectural performance monitoring (CPUID.0AH:EAX[7:0] ≠ 0) is reported by CPUID.0AH:EAX[23:16]. On processors that do not support architectural performance monitoring (CPUID.0AH:EAX[7:0]=0), the width of general-purpose performance PMCs is 40 bits, while the widths of special-purpose PMCs are implementation specific.

+

Use of ECX to specify a PMC depends on whether the processor supports architectural performance monitoring:

+
    +
  • If the processor does not support architectural performance monitoring (CPUID.0AH:EAX[7:0]=0), ECX[30:0] specifies the index of the PMC to be read. Setting ECX[31] selects “fast” read mode if supported. In this mode, RDPMC returns bits 31:0 of the PMC in EAX while clearing EDX to zero.
  • +
  • If the processor does support architectural performance monitoring (CPUID.0AH:EAX[7:0] ≠ 0), ECX[31:16] specifies type of PMC while ECX[15:0] specifies the index of the PMC to be read within that type. The following PMC types are currently defined: +
      +
    • General-purpose counters use type 0. The index x (to read IA32_PMCx) must be less than the value enumerated by CPUID.0AH.EAX[15:8] (thus ECX[15:8] must be zero).
    • +
    • Fixed-function counters use type 4000H. The index x (to read IA32_FIXED_CTRx) can be used if either CPUID.0AH.EDX[4:0] > x or CPUID.0AH.ECX[x] = 1 (thus ECX[15:5] must be 0).
    • +
    • Performance metrics use type 2000H. This type can be used only if IA32_PERF_CAPABILITIES.PERF_METRICS_AVAILABLE[bit 15]=1. For this type, the index in ECX[15:0] is implementation specific.
+

Specifying an unsupported PMC encoding will cause a general protection exception #GP(0). For PMC details see Chapter 20, “Performance Monitoring,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.

+

When in protected or virtual 8086 mode, the Performance-monitoring Counters Enabled (PCE) flag in register CR4 restricts the use of the RDPMC instruction. When the PCE flag is set, the RDPMC instruction can be executed at any privilege level; when the flag is clear, the instruction can only be executed at privilege level 0. (When in real-address mode, the RDPMC instruction is always enabled.) The PMCs can also be read with the RDMSR instruction, when executing at privilege level 0.

+

The RDPMC instruction is not a serializing instruction; that is, it does not imply that all the events caused by the preceding instructions have been completed or that events caused by subsequent instructions have not begun. If an exact event count is desired, software must insert a serializing instruction (such as the CPUID instruction) before and/or after the RDPMC instruction.

+

Back-to-back fast reads are not guaranteed to be monotonic. To guarantee monotonicity on back-to-back reads, a serializing instruction must be placed between the two RDPMC instructions.

+

The RDPMC instruction can execute in 16-bit addressing mode or virtual-8086 mode; however, the full contents of the ECX register are used to select the PMC, and the event count is stored in the full EAX and EDX registers.

+

The RDPMC instruction was introduced into the IA-32 Architecture in the Pentium Pro processor and the Pentium processor with MMX technology. The earlier Pentium processors have PMCs, but they must be read with the RDMSR instruction.

+

Operation + ¶ +

+
MSCB = Most Significant Counter Bit (* Model-specific *)
+IF (((CR4.PCE = 1) or (CPL = 0) or (CR0.PE = 0)) and (ECX indicates a supported counter))
+    THEN
+        EAX := counter[31:0];
+        EDX := ZeroExtend(counter[MSCB:32]);
+    ELSE (* ECX is not valid or CR4.PCE is 0 and CPL is 1, 2, or 3 and CR0.PE is 1 *)
+        #GP(0);
+FI;
+
+
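For illustration only, a GCC inline-assembly wrapper; the counter argument follows the ECX encoding described above (e.g., 0 for IA32_PMC0, or 40000000H | x for IA32_FIXED_CTRx on parts with architectural performance monitoring). It faults with #GP(0) unless CR4.PCE = 1 or CPL = 0; rdpmc64 is a made-up helper name.

#include <stdint.h>

static inline uint64_t rdpmc64(uint32_t counter)
{
    uint32_t lo, hi;
    __asm__ volatile("rdpmc" : "=a"(lo), "=d"(hi) : "c"(counter));
    return ((uint64_t)hi << 32) | lo;
}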

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + +
#GP(0) If the current privilege level is not 0 and the PCE flag in the CR4 register is clear.
If an invalid performance counter index is specified.
#UD If the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + +
#GP If an invalid performance counter index is specified.
#UD If the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + +
#GP(0) If the PCE flag in the CR4 register is clear.
If an invalid performance counter index is specified.
#UD If the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + +
#GP(0) If the current privilege level is not 0 and the PCE flag in the CR4 register is clear.
If an invalid performance counter index is specified.
#UD If the LOCK prefix is used.
diff --git a/x86/rdrand.html b/x86/rdrand.html new file mode 100644 index 0000000..d3f899d --- /dev/null +++ b/x86/rdrand.html @@ -0,0 +1,115 @@ + +RDRAND + — Read Random Number

RDRAND + — Read Random Number

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode*/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NFx 0F C7 /6 RDRAND r16MV/VRDRANDRead a 16-bit random number and store in the destination register.
NFx 0F C7 /6 RDRAND r32MV/VRDRANDRead a 32-bit random number and store in the destination register.
NFx REX.W + 0F C7 /6 RDRAND r64MV/IRDRANDRead a 64-bit random number and store in the destination register.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (w)N/AN/AN/A
+

Description + ¶ +

+

Loads a hardware generated random value and stores it in the destination register. The size of the random value is determined by the destination register size and operating mode. The Carry Flag indicates whether a random value is available at the time the instruction is executed. CF=1 indicates that the data in the destination is valid. Otherwise CF=0 and the data in the destination operand will be returned as zeros for the specified width. All other flags are forced to 0 in either situation. Software must check that CF=1 to determine whether a valid random value has been returned; otherwise it is expected to loop and retry execution of RDRAND (see Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, Section 7.3.17, “Random Number Generator Instructions”).

+

This instruction is available at all privilege levels.

+

In 64-bit mode, the instruction's default operand size is 32 bits. Using a REX prefix in the form of REX.B permits access to additional registers (R8-R15). Using a REX prefix in the form of REX.W promotes operation to 64 bit operands. See the summary chart at the beginning of this section for encoding data and limits.

+

Operation + ¶ +

+
IF HW_RND_GEN.ready = 1
+    THEN
+        CASE of
+            operand size is 64: DEST[63:0] := HW_RND_GEN.data;
+            operand size is 32: DEST[31:0] := HW_RND_GEN.data;
+            operand size is 16: DEST[15:0] := HW_RND_GEN.data;
+        ESAC
+        CF := 1;
+    ELSE
+        CASE of
+            operand size is 64: DEST[63:0] := 0;
+            operand size is 32: DEST[31:0] := 0;
+            operand size is 16: DEST[15:0] := 0;
+        ESAC
+        CF := 0;
+FI
+OF, SF, ZF, AF, PF := 0;
+
+

Flags Affected + ¶ +

+

The CF flag is set according to the result (see the “Operation” section above). The OF, SF, ZF, AF, and PF flags are set to 0.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
RDRAND int _rdrand16_step( unsigned short * );
+
+
RDRAND int _rdrand32_step( unsigned int * );
+
+
RDRAND int _rdrand64_step( unsigned __int64 *);
+
+
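For illustration only, the retry pattern the description above calls for (GCC/Clang, compiled with -mrdrnd; the GNU prototype takes unsigned long long*). The retry bound and helper name rdrand64_retry are arbitrary choices for this sketch.

#include <immintrin.h>   /* _rdrand64_step */

/* Returns 1 and fills *out when RDRAND set CF=1; gives up after a
   bounded number of retries if the generator is not ready. */
static int rdrand64_retry(unsigned long long *out)
{
    for (int i = 0; i < 10; i++)
        if (_rdrand64_step(out))
            return 1;
    return 0;
}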

Protected Mode Exceptions + ¶ +

+ + + + + +
#UD If the LOCK prefix is used.
If CPUID.01H:ECX.RDRAND[bit 30] = 0.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/rdseed.html b/x86/rdseed.html new file mode 100644 index 0000000..93b8299 --- /dev/null +++ b/x86/rdseed.html @@ -0,0 +1,135 @@ + +RDSEED + — Read Random SEED

RDSEED + — Read Random SEED

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NFx 0F C7 /7 RDSEED r16MV/VRDSEEDRead a 16-bit NIST SP800-90B & C compliant random value and store in the destination register.
NFx 0F C7 /7 RDSEED r32MV/VRDSEEDRead a 32-bit NIST SP800-90B & C compliant random value and store in the destination register.
NFx REX.W + 0F C7 /7 RDSEED r64MV/IRDSEEDRead a 64-bit NIST SP800-90B & C compliant random value and store in the destination register.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (w)N/AN/AN/A
+

Description + ¶ +

+

Loads a hardware generated random value and stores it in the destination register. The random value is generated from an Enhanced NRBG (Non Deterministic Random Bit Generator) that is compliant with NIST SP800-90B and NIST SP800-90C in the XOR construction mode. The size of the random value is determined by the destination register size and operating mode. The Carry Flag indicates whether a random value is available at the time the instruction is executed. CF=1 indicates that the data in the destination is valid. Otherwise CF=0 and the data in the destination operand will be returned as zeros for the specified width. All other flags are forced to 0 in either situation. Software must check that CF=1 to determine whether a valid random seed value has been returned; otherwise it is expected to loop and retry execution of RDSEED (see Section 1.2).

+

The RDSEED instruction is available at all privilege levels. The RDSEED instruction executes normally either inside or outside a transaction region.

+

In 64-bit mode, the instruction's default operand size is 32 bits. Using a REX prefix in the form of REX.B permits access to additional registers (R8-R15). Using a REX prefix in the form of REX.W promotes operation to 64 bit operands. See the summary chart at the beginning of this section for encoding data and limits.

+

Operation + ¶ +

+
IF HW_NRND_GEN.ready = 1
+    THEN
+        CASE of
+            operand size is 64: DEST[63:0] := HW_NRND_GEN.data;
+            operand size is 32: DEST[31:0] := HW_NRND_GEN.data;
+            operand size is 16: DEST[15:0] := HW_NRND_GEN.data;
+        ESAC;
+        CF := 1;
+    ELSE
+        CASE of
+            operand size is 64: DEST[63:0] := 0;
+            operand size is 32: DEST[31:0] := 0;
+            operand size is 16: DEST[15:0] := 0;
+        ESAC;
+        CF := 0;
+FI;
+OF, SF, ZF, AF, PF := 0;
+
+

Flags Affected + ¶ +

+

The CF flag is set according to the result (see the “Operation” section above). The OF, SF, ZF, AF, and PF flags are set to 0.

+

C/C++ Compiler Intrinsic Equivalent + ¶ +

+
RDSEED int _rdseed16_step( unsigned short * );
+
+
RDSEED int _rdseed32_step( unsigned int * );
+
+
RDSEED int _rdseed64_step( unsigned __int64 *);
+
+
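For illustration only (GCC/Clang, compiled with -mrdseed). RDSEED reports CF=0 more often than RDRAND, so pausing between retries is commonly recommended; the retry bound and helper name rdseed64_retry are arbitrary choices for this sketch.

#include <immintrin.h>   /* _rdseed64_step, _mm_pause */

static int rdseed64_retry(unsigned long long *out)
{
    for (int i = 0; i < 100; i++) {
        if (_rdseed64_step(out))
            return 1;            /* CF was 1: *out holds a seed value */
        _mm_pause();
    }
    return 0;
}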

Protected Mode Exceptions + ¶ +

+ + + + + +
#UD If the LOCK prefix is used.
If CPUID.(EAX=07H, ECX=0H):EBX.RDSEED[bit 18] = 0.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + +
#UD If the LOCK prefix is used.
If CPUID.(EAX=07H, ECX=0H):EBX.RDSEED[bit 18] = 0.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + +
#UD If the LOCK prefix is used.
If CPUID.(EAX=07H, ECX=0H):EBX.RDSEED[bit 18] = 0.
+

Compatibility Mode Exceptions + ¶ +

+ + + + + +
#UD If the LOCK prefix is used.
If CPUID.(EAX=07H, ECX=0H):EBX.RDSEED[bit 18] = 0.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + +
#UD If the LOCK prefix is used.
If CPUID.(EAX=07H, ECX=0H):EBX.RDSEED[bit 18] = 0.
diff --git a/x86/rdsspd.rdsspq.html b/x86/rdsspd.rdsspq.html new file mode 100644 index 0000000..50add51 --- /dev/null +++ b/x86/rdsspd.rdsspq.html @@ -0,0 +1,103 @@ + +RDSSPD/RDSSPQ + — Read Shadow Stack Pointer

RDSSPD/RDSSPQ + — Read Shadow Stack Pointer

+ + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 1E /1 (mod=11) RDSSPD r32RV/VCET_SSCopy low 32 bits of shadow stack pointer (SSP) to r32.
F3 REX.W 0F 1E /1 (mod=11) RDSSPQ r64RV/N.E.CET_SSCopies shadow stack pointer (SSP) to r64.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RModRM:r/m (w)N/AN/AN/A
+

Description + ¶ +

+

Copies the current shadow stack pointer (SSP) register to the register destination. This opcode is a NOP when CET shadow stacks are not enabled and on processors that do not support CET.

+

Operation + ¶ +

+
IF CPL = 3
+    IF CR4.CET & IA32_U_CET.SH_STK_EN
+        IF (operand size is 64 bit)
+            THEN
+                Dest := SSP;
+            ELSE
+                Dest := SSP[31:0];
+        FI;
+    FI;
+ELSE
+    IF CR4.CET & IA32_S_CET.SH_STK_EN
+        IF (operand size is 64 bit)
+            THEN
+                Dest := SSP;
+            ELSE
+                Dest := SSP[31:0];
+        FI;
+    FI;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

C/C++ Compiler Intrinsic Equivalent + ¶ +

+
RDSSPD__int32 _rdsspd_i32(void);
+
+
RDSSPQ__int64 _rdsspq_i64(void);
+
+
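For illustration only, an inline-assembly sketch for 64-bit code (requires an assembler that recognizes RDSSPQ). Because the instruction is a NOP when shadow stacks are not enabled, the register is pre-zeroed so that case reads back as 0; read_ssp is a made-up helper name.

#include <stdint.h>

static inline uint64_t read_ssp(void)
{
    uint64_t ssp = 0;                     /* stays 0 if RDSSPQ is a NOP */
    __asm__ volatile("rdsspq %0" : "+r"(ssp));
    return ssp;
}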

Protected Mode Exceptions + ¶ +

+

None.

+

Real-Address Mode Exceptions + ¶ +

+

None.

+

Virtual-8086 Mode Exceptions + ¶ +

+

None.

+

Compatibility Mode Exceptions + ¶ +

+

None.

+

64-Bit Mode Exceptions + ¶ +

+

None.

diff --git a/x86/rdtsc.html b/x86/rdtsc.html new file mode 100644 index 0000000..261a0fe --- /dev/null +++ b/x86/rdtsc.html @@ -0,0 +1,104 @@ + +RDTSC + — Read Time-Stamp Counter

RDTSC + — Read Time-Stamp Counter

+ + + + + + + + + + + + + + + +
Opcode*InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 31RDTSCZOValidValidRead time-stamp counter into EDX:EAX.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Reads the current value of the processor’s time-stamp counter (a 64-bit MSR) into the EDX:EAX registers. The EDX register is loaded with the high-order 32 bits of the MSR and the EAX register is loaded with the low-order 32 bits. (On processors that support the Intel 64 architecture, the high-order 32 bits of each of RAX and RDX are cleared.)

+

The processor monotonically increments the time-stamp counter MSR every clock cycle and resets it to 0 whenever the processor is reset. See “Time Stamp Counter” in Chapter 18 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B, for specific details of the time stamp counter behavior.

+

The time stamp disable (TSD) flag in register CR4 restricts the use of the RDTSC instruction as follows. When the flag is clear, the RDTSC instruction can be executed at any privilege level; when the flag is set, the instruction can only be executed at privilege level 0.

+

The time-stamp counter can also be read with the RDMSR instruction, when executing at privilege level 0.

+

The RDTSC instruction is not a serializing instruction. It does not necessarily wait until all previous instructions have been executed before reading the counter. Similarly, subsequent instructions may begin execution before the read operation is performed. The following items may guide software seeking to order executions of RDTSC:

+
    +
  • If software requires RDTSC to be executed only after all previous instructions have executed and all previous loads are globally visible,1 it can execute LFENCE immediately before RDTSC.
  • +
  • If software requires RDTSC to be executed only after all previous instructions have executed and all previous loads and stores are globally visible, it can execute the sequence MFENCE;LFENCE immediately before RDTSC.
  • +
  • If software requires RDTSC to be executed prior to execution of any subsequent instruction (including any memory accesses), it can execute the sequence LFENCE immediately after RDTSC.
+
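For illustration only, a fenced read following the guidance above (GCC/Clang); tsc_ordered is a made-up helper name.

#include <x86intrin.h>   /* __rdtsc, _mm_lfence, _mm_mfence */

/* MFENCE;LFENCE before RDTSC keeps earlier loads and stores out of the
   timed region; the trailing LFENCE keeps later work from starting early. */
static inline unsigned long long tsc_ordered(void)
{
    _mm_mfence();
    _mm_lfence();
    unsigned long long t = __rdtsc();
    _mm_lfence();
    return t;
}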

This instruction was introduced by the Pentium processor.

+

See “Changes to Instruction Behavior in VMX Non-Root Operation” in Chapter 26 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3C, for more information about the behavior of this instruction in VMX non-root operation.

+
+

1. A load is considered to become globally visible when the value to be loaded is determined.

+

Operation + ¶ +

+
IF (CR4.TSD = 0) or (CPL = 0) or (CR0.PE = 0)
+    THEN EDX:EAX := TimeStampCounter;
+    ELSE (* CR4.TSD = 1 and (CPL = 1, 2, or 3) and CR0.PE = 1 *)
+        #GP(0);
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + +
#GP(0) If the TSD flag in register CR4 is set and the CPL is greater than 0.
#UD If the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UD If the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + +
#GP(0) If the TSD flag in register CR4 is set.
#UD If the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/rdtscp.html b/x86/rdtscp.html new file mode 100644 index 0000000..56e7fb5 --- /dev/null +++ b/x86/rdtscp.html @@ -0,0 +1,109 @@ + +RDTSCP + — Read Time-Stamp Counter and Processor ID

RDTSCP + — Read Time-Stamp Counter and Processor ID

+ + + + + + + + + + + + + + + +
Opcode*InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 01 F9RDTSCPZOValidValidRead 64-bit time-stamp counter and IA32_TSC_AUX value into EDX:EAX and ECX.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Reads the current value of the processor’s time-stamp counter (a 64-bit MSR) into the EDX:EAX registers and also reads the value of the IA32_TSC_AUX MSR (address C0000103H) into the ECX register. The EDX register is loaded with the high-order 32 bits of the IA32_TSC MSR; the EAX register is loaded with the low-order 32 bits of the IA32_TSC MSR; and the ECX register is loaded with the low-order 32-bits of IA32_TSC_AUX MSR. On processors that support the Intel 64 architecture, the high-order 32 bits of each of RAX, RDX, and RCX are cleared.

+

The processor monotonically increments the time-stamp counter MSR every clock cycle and resets it to 0 whenever the processor is reset. See “Time Stamp Counter” in Chapter 18 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B, for specific details of the time stamp counter behavior.

+

The time stamp disable (TSD) flag in register CR4 restricts the use of the RDTSCP instruction as follows. When the flag is clear, the RDTSCP instruction can be executed at any privilege level; when the flag is set, the instruction can only be executed at privilege level 0.

+

The RDTSCP instruction is not a serializing instruction, but it does wait until all previous instructions have executed and all previous loads are globally visible.1 But it does not wait for previous stores to be globally visible, and subsequent instructions may begin execution before the read operation is performed. The following items may guide software seeking to order executions of RDTSCP:

+
    +
  • If software requires RDTSCP to be executed only after all previous stores are globally visible, it can execute MFENCE immediately before RDTSCP.
  • +
  • If software requires RDTSCP to be executed prior to execution of any subsequent instruction (including any memory accesses), it can execute LFENCE immediately after RDTSCP.
+
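For illustration only (GCC/Clang): RDTSCP already waits for earlier instructions, so only the trailing LFENCE from the guidance above is added; tscp_ordered is a made-up helper name.

#include <x86intrin.h>   /* __rdtscp, _mm_lfence */

static inline unsigned long long tscp_ordered(unsigned int *aux)
{
    unsigned long long t = __rdtscp(aux);   /* *aux := IA32_TSC_AUX */
    _mm_lfence();
    return t;
}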

See “Changes to Instruction Behavior in VMX Non-Root Operation” in Chapter 26 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3C, for more information about the behavior of this instruction in VMX non-root operation.

+
+

1. A load is considered to become globally visible when the value to be loaded is determined.

+

Operation + ¶ +

+
IF (CR4.TSD = 0) or (CPL = 0) or (CR0.PE = 0)
+    THEN
+        EDX:EAX := TimeStampCounter;
+        ECX := IA32_TSC_AUX[31:0];
+    ELSE (* CR4.TSD = 1 and (CPL = 1, 2, or 3) and CR0.PE = 1 *)
+        #GP(0);
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + +
#GP(0) If the TSD flag in register CR4 is set and the CPL is greater than 0.
#UD If the LOCK prefix is used.
If CPUID.80000001H:EDX.RDTSCP[bit 27] = 0.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + +
#UD If the LOCK prefix is used.
If CPUID.80000001H:EDX.RDTSCP[bit 27] = 0.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + +
#GP(0) If the TSD flag in register CR4 is set.
#UD If the LOCK prefix is used.
If CPUID.80000001H:EDX.RDTSCP[bit 27] = 0.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/rep.repe.repz.repne.repnz.html b/x86/rep.repe.repz.repne.repnz.html new file mode 100644 index 0000000..61a7ae1 --- /dev/null +++ b/x86/rep.repe.repz.repne.repnz.html @@ -0,0 +1,431 @@ + +REP/REPE/REPZ/REPNE/REPNZ + — Repeat String Operation Prefix

REP/REPE/REPZ/REPNE/REPNZ + — Repeat String Operation Prefix

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
F3 6CREP INS m8, DXZOValidValidInput (E)CX bytes from port DX into ES:[(E)DI].
F3 6CREP INS m8, DXZOValidN.E.Input RCX bytes from port DX into [RDI].
F3 6DREP INS m16, DXZOValidValidInput (E)CX words from port DX into ES:[(E)DI.]
F3 6DREP INS m32, DXZOValidValidInput (E)CX doublewords from port DX into ES:[(E)DI].
F3 6DREP INS r/m32, DXZOValidN.E.Input RCX default size from port DX into [RDI].
F3 A4REP MOVS m8, m8ZOValidValidMove (E)CX bytes from DS:[(E)SI] to ES:[(E)DI].
F3 REX.W A4REP MOVS m8, m8ZOValidN.E.Move RCX bytes from [RSI] to [RDI].
F3 A5REP MOVS m16, m16ZOValidValidMove (E)CX words from DS:[(E)SI] to ES:[(E)DI].
F3 A5REP MOVS m32, m32ZOValidValidMove (E)CX doublewords from DS:[(E)SI] to ES:[(E)DI].
F3 REX.W A5REP MOVS m64, m64ZOValidN.E.Move RCX quadwords from [RSI] to [RDI].
F3 6EREP OUTS DX, r/m8ZOValidValidOutput (E)CX bytes from DS:[(E)SI] to port DX.
F3 REX.W 6EREP OUTS DX, r/m81ZOValidN.E.Output RCX bytes from [RSI] to port DX.
F3 6FREP OUTS DX, r/m16ZOValidValidOutput (E)CX words from DS:[(E)SI] to port DX.
F3 6FREP OUTS DX, r/m32ZOValidValidOutput (E)CX doublewords from DS:[(E)SI] to port DX.
F3 REX.W 6FREP OUTS DX, r/m32ZOValidN.E.Output RCX default size from [RSI] to port DX.
F3 ACREP LODS ALZOValidValidLoad (E)CX bytes from DS:[(E)SI] to AL.
F3 REX.W ACREP LODS ALZOValidN.E.Load RCX bytes from [RSI] to AL.
F3 ADREP LODS AXZOValidValidLoad (E)CX words from DS:[(E)SI] to AX.
F3 ADREP LODS EAXZOValidValidLoad (E)CX doublewords from DS:[(E)SI] to EAX.
F3 REX.W ADREP LODS RAXZOValidN.E.Load RCX quadwords from [RSI] to RAX.
F3 AAREP STOS m8ZOValidValidFill (E)CX bytes at ES:[(E)DI] with AL.
F3 REX.W AAREP STOS m8ZOValidN.E.Fill RCX bytes at [RDI] with AL.
F3 ABREP STOS m16ZOValidValidFill (E)CX words at ES:[(E)DI] with AX.
F3 ABREP STOS m32ZOValidValidFill (E)CX doublewords at ES:[(E)DI] with EAX.
F3 REX.W ABREP STOS m64ZOValidN.E.Fill RCX quadwords at [RDI] with RAX.
F3 A6REPE CMPS m8, m8ZOValidValidFind nonmatching bytes in ES:[(E)DI] and DS:[(E)SI].
F3 REX.W A6REPE CMPS m8, m8ZOValidN.E.Find non-matching bytes in [RDI] and [RSI].
F3 A7REPE CMPS m16, m16ZOValidValidFind nonmatching words in ES:[(E)DI] and DS:[(E)SI].
F3 A7REPE CMPS m32, m32ZOValidValidFind nonmatching doublewords in ES:[(E)DI] and DS:[(E)SI].
F3 REX.W A7REPE CMPS m64, m64ZOValidN.E.Find non-matching quadwords in [RDI] and [RSI].
F3 AEREPE SCAS m8ZOValidValidFind non-AL byte starting at ES:[(E)DI].
F3 REX.W AEREPE SCAS m8ZOValidN.E.Find non-AL byte starting at [RDI].
F3 AFREPE SCAS m16ZOValidValidFind non-AX word starting at ES:[(E)DI].
F3 AFREPE SCAS m32ZOValidValidFind non-EAX doubleword starting at ES:[(E)DI].
F3 REX.W AFREPE SCAS m64ZOValidN.E.Find non-RAX quadword starting at [RDI].
F2 A6REPNE CMPS m8, m8ZOValidValidFind matching bytes in ES:[(E)DI] and DS:[(E)SI].
F2 REX.W A6REPNE CMPS m8, m8ZOValidN.E.Find matching bytes in [RDI] and [RSI].
F2 A7REPNE CMPS m16, m16ZOValidValidFind matching words in ES:[(E)DI] and DS:[(E)SI].
F2 A7REPNE CMPS m32, m32ZOValidValidFind matching doublewords in ES:[(E)DI] and DS:[(E)SI].
F2 REX.W A7REPNE CMPS m64, m64ZOValidN.E.Find matching quadwords in [RDI] and [RSI].
F2 AEREPNE SCAS m8ZOValidValidFind AL, starting at ES:[(E)DI].
F2 REX.W AEREPNE SCAS m8ZOValidN.E.Find AL, starting at [RDI].
F2 AFREPNE SCAS m16ZOValidValidFind AX, starting at ES:[(E)DI].
F2 AFREPNE SCAS m32ZOValidValidFind EAX, starting at ES:[(E)DI].
F2 REX.W AFREPNE SCAS m64ZOValidN.E.Find RAX, starting at [RDI].
+
+

1. In 64-bit mode, r/m8 can not be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Repeats a string instruction the number of times specified in the count register or until the indicated condition of the ZF flag is no longer met. The REP (repeat), REPE (repeat while equal), REPNE (repeat while not equal), REPZ (repeat while zero), and REPNZ (repeat while not zero) mnemonics are prefixes that can be added to one of the string instructions. The REP prefix can be added to the INS, OUTS, MOVS, LODS, and STOS instructions, and the REPE, REPNE, REPZ, and REPNZ prefixes can be added to the CMPS and SCAS instructions. (The REPZ and REPNZ prefixes are synonymous forms of the REPE and REPNE prefixes, respectively.) The F3H prefix is defined for the following instructions and undefined for the rest:

+
    +
  • F3H as REP/REPE/REPZ for string and input/output instruction.
  • +
  • F3H is a mandatory prefix for POPCNT, LZCNT, and ADOX.
+

The REP prefixes apply only to one string instruction at a time. To repeat a block of instructions, use the LOOP instruction or another looping construct. All of these repeat prefixes cause the associated instruction to be repeated until the count in register is decremented to 0. See Table 4-17.

+
+ + + + + + + + + + + + + + + + +
Repeat PrefixTermination Condition 1*Termination Condition 2
REPRCX or (E)CX = 0None
REPE/REPZRCX or (E)CX = 0ZF = 0
REPNE/REPNZRCX or (E)CX = 0ZF = 1
+
Table 4-17. Repeat Prefixes
+
+

* Count register is CX, ECX, or RCX by default, depending on attributes of the operating modes.

+

The REPE, REPNE, REPZ, and REPNZ prefixes also check the state of the ZF flag after each iteration and terminate the repeat loop if the ZF flag is not in the specified state. When both termination conditions are tested, the cause of a repeat termination can be determined either by testing the count register with a JECXZ instruction or by testing the ZF flag (with a JZ, JNZ, or JNE instruction).

+

When the REPE/REPZ and REPNE/REPNZ prefixes are used, the ZF flag does not require initialization because both the CMPS and SCAS instructions affect the ZF flag according to the results of the comparisons they make.

+

A repeating string operation can be suspended by an exception or interrupt. When this happens, the state of the registers is preserved to allow the string operation to be resumed upon a return from the exception or interrupt handler. The source and destination registers point to the next string elements to be operated on, the EIP register points to the string instruction, and the ECX register has the value it held following the last successful iteration of the instruction. This mechanism allows long string operations to proceed without affecting the interrupt response time of the system.

+

When a fault occurs during the execution of a CMPS or SCAS instruction that is prefixed with REPE or REPNE, the EFLAGS value is restored to the state prior to the execution of the instruction. Since the SCAS and CMPS instructions do not use EFLAGS as an input, the processor can resume the instruction after the page fault handler.

+

Use the REP INS and REP OUTS instructions with caution. Not all I/O ports can handle the rate at which these instructions execute. Note that a REP STOS instruction is the fastest way to initialize a large block of memory.

+
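For illustration only, a GCC inline-assembly sketch of REP STOS used as a block fill; rep_stosb is a made-up helper name.

#include <stddef.h>
#include <stdint.h>

/* Fill n bytes at dst with val: RCX holds the count, RDI the
   destination, AL the fill byte, as in the table above. */
static inline void rep_stosb(void *dst, uint8_t val, size_t n)
{
    __asm__ volatile("rep stosb"
                     : "+D"(dst), "+c"(n)
                     : "a"(val)
                     : "memory");
}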

In 64-bit mode, the operand size of the count register is associated with the address size attribute. Thus the default count register is RCX; REX.W has no effect on the address size and the count register. In 64-bit mode, if 67H is used to override address size attribute, the count register is ECX and any implicit source/destination operand will use the corresponding 32-bit index register. See the summary chart at the beginning of this section for encoding data and limits.

+

REP INS may read from the I/O port without writing to the memory location if an exception or VM exit occurs due to the write (e.g., #PF). If this would be problematic, for example because the I/O port read has side-effects, software should ensure the write to the memory location does not cause an exception or VM exit.

+

Operation + ¶ +

+
IF AddressSize = 16
+    THEN
+            Use CX for CountReg;
+            Implicit Source/Dest operand for memory use of SI/DI;
+    ELSE IF AddressSize = 64
+            THEN Use RCX for CountReg;
+            Implicit Source/Dest operand for memory use of RSI/RDI;
+    ELSE
+            Use ECX for CountReg;
+            Implicit Source/Dest operand for memory use of ESI/EDI;
+FI;
+WHILE CountReg ≠ 0
+        DO
+                Service pending interrupts (if any);
+                Execute associated string instruction;
+                CountReg := (CountReg – 1);
+                IF CountReg = 0
+                    THEN exit WHILE loop; FI;
+                IF (Repeat prefix is REPZ or REPE) and (ZF = 0)
+                or (Repeat prefix is REPNZ or REPNE) and (ZF = 1)
+                    THEN exit WHILE loop; FI;
+        OD;
+
+

Flags Affected + ¶ +

+

None; however, the CMPS and SCAS instructions do set the status flags in the EFLAGS register.

+

Exceptions (All Operating Modes) + ¶ +

+

Exceptions may be generated by an instruction associated with the prefix.

+

64-Bit Mode Exceptions + ¶ +

+ + + +
#GP(0) If the memory address is in a non-canonical form.
diff --git a/x86/ret.html b/x86/ret.html new file mode 100644 index 0000000..772960b --- /dev/null +++ b/x86/ret.html @@ -0,0 +1,745 @@ + +RET + — Return From Procedure

RET + — Return From Procedure

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode*InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
C3RETZOValidValidNear return to calling procedure.
CBRETZOValidValidFar return to calling procedure.
C2 iwRET imm16IValidValidNear return to calling procedure and pop imm16 bytes from stack.
CA iwRET imm16IValidValidFar return to calling procedure and pop imm16 bytes from stack.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
Iimm16N/AN/AN/A
+

Description + ¶ +

+

Transfers program control to a return address located on the top of the stack. The address is usually placed on the stack by a CALL instruction, and the return is made to the instruction that follows the CALL instruction.

+

The optional source operand specifies the number of stack bytes to be released after the return address is popped; the default is none. This operand can be used to release parameters from the stack that were passed to the called procedure and are no longer needed. It must be used when the CALL instruction used to switch to a new procedure uses a call gate with a non-zero word count to access the new procedure. Here, the source operand for the RET instruction must specify the same number of bytes as is specified in the word count field of the call gate.

+
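For illustration only: with a callee-cleanup convention such as 32-bit stdcall, the compiler releases the parameters itself by emitting RET imm16 (here RET 8); with cdecl it would emit a plain RET and leave the cleanup to the caller. The function below is a made-up example.

/* Compiled for 32-bit x86 with GCC/Clang; the attribute is ignored
   (with a warning) on other targets. */
int __attribute__((stdcall)) add2(int a, int b)
{
    return a + b;               /* epilogue ends in RET 8 */
}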

The RET instruction can be used to execute three different types of returns:

+
    +
  • Near return — A return to a calling procedure within the current code segment (the segment currently pointed to by the CS register), sometimes referred to as an intrasegment return.
  • +
  • Far return — A return to a calling procedure located in a different segment than the current code segment, sometimes referred to as an intersegment return.
  • +
  • Inter-privilege-level far return — A far return to a different privilege level than that of the currently executing program or procedure.
+

The inter-privilege-level return type can only be executed in protected mode. See the section titled “Calling Procedures Using Call and RET” in Chapter 6 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for detailed information on near, far, and inter-privilege-level returns.

+

When executing a near return, the processor pops the return instruction pointer (offset) from the top of the stack into the EIP register and begins program execution at the new instruction pointer. The CS register is unchanged.

+

When executing a far return, the processor pops the return instruction pointer from the top of the stack into the EIP register, then pops the segment selector from the top of the stack into the CS register. The processor then begins program execution in the new code segment at the new instruction pointer.

+

The mechanics of an inter-privilege-level far return are similar to an intersegment return, except that the processor examines the privilege levels and access rights of the code and stack segments being returned to determine if the control transfer is allowed to be made. The DS, ES, FS, and GS segment registers are cleared by the RET instruction during an inter-privilege-level return if they refer to segments that are not allowed to be accessed at the new privilege level. Since a stack switch also occurs on an inter-privilege level return, the ESP and SS registers are loaded from the stack.

+

If parameters are passed to the called procedure during an inter-privilege level call, the optional source operand must be used with the RET instruction to release the parameters on the return. Here, the parameters are released both from the called procedure’s stack and the calling procedure’s stack (that is, the stack being returned to).

+

In 64-bit mode, the default operation size of this instruction is the stack-address size, i.e., 64 bits. This applies to near returns, not far returns; the default operation size of far returns is 32 bits.

+

Refer to Chapter 6, “Procedure Calls, Interrupts, and Exceptions‚” and Chapter 17, “Control-flow Enforcement Technology (CET)‚” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for CET details.

+

Instruction ordering. Instructions following a far return may be fetched from memory before earlier instructions complete execution, but they will not execute (even speculatively) until all instructions prior to the far return have completed execution (the later instructions may execute before data stored by the earlier instructions have become globally visible).

+

Unlike near indirect CALL and near indirect JMP, the processor will not speculatively execute the next sequential instruction after a near RET unless that instruction is also the target of a jump or is a target in a branch predictor.

+

Operation + ¶ +

+
(* Near return *)
+IF instruction = near return
+    THEN;
+            IF OperandSize = 32
+                    THEN
+                        IF top 4 bytes of stack not within stack limits
+                            THEN #SS(0); FI;
+                        EIP := Pop();
+                        IF ShadowStackEnabled(CPL)
+                            tempSsEIP = ShadowStackPop4B();
+                            IF EIP != TempSsEIP
+                                THEN #CP(NEAR_RET); FI;
+                        FI;
+                    ELSE
+                        IF OperandSize = 64
+                            THEN
+                                IF top 8 bytes of stack not within stack limits
+                                    THEN #SS(0); FI;
+                                RIP := Pop();
+                                IF ShadowStackEnabled(CPL)
+                                    tempSsEIP = ShadowStackPop8B();
+                                    IF RIP != tempSsEIP
+                                        THEN #CP(NEAR_RET); FI;
+                                FI;
+                            ELSE (* OperandSize = 16 *)
+                                IF top 2 bytes of stack not within stack limits
+                                    THEN #SS(0); FI;
+                                tempEIP := Pop();
+                                tempEIP := tempEIP AND 0000FFFFH;
+                                IF tempEIP not within code segment limits
+                                    THEN #GP(0); FI;
+                                EIP := tempEIP;
+                                IF ShadowStackEnabled(CPL)
+                                    tempSsEip = ShadowStackPop4B();
+                                    IF EIP != tempSsEIP
+                                        THEN #CP(NEAR_RET); FI;
+                                FI;
+                        FI;
+            FI;
+    IF instruction has immediate operand
+            THEN (* Release parameters from stack *)
+                    IF StackAddressSize = 32
+                        THEN
+                            ESP := ESP + SRC;
+                        ELSE
+                            IF StackAddressSize = 64
+                                THEN
+                                    RSP := RSP + SRC;
+                                ELSE (* StackAddressSize = 16 *)
+                                    SP := SP + SRC;
+                            FI;
+                    FI;
+    FI;
+FI;
+(* Real-address mode or virtual-8086 mode *)
+IF ((PE = 0) or (PE = 1 AND VM = 1)) and instruction = far return
+    THEN
+            IF OperandSize = 32
+                    THEN
+                        IF top 8 bytes of stack not within stack limits
+                            THEN #SS(0); FI;
+                        EIP := Pop();
+                        CS := Pop(); (* 32-bit pop, high-order 16 bits discarded *)
+                    ELSE (* OperandSize = 16 *)
+                        IF top 4 bytes of stack not within stack limits
+                            THEN #SS(0); FI;
+                        tempEIP := Pop();
+                        tempEIP := tempEIP AND 0000FFFFH;
+                        IF tempEIP not within code segment limits
+                            THEN #GP(0); FI;
+                        EIP := tempEIP;
+                        CS := Pop(); (* 16-bit pop *)
+            FI;
+    IF instruction has immediate operand
+            THEN (* Release parameters from stack *)
+                    SP := SP + (SRC AND FFFFH);
+    FI;
+FI;
+(* Protected mode, not virtual-8086 mode *)
+IF (PE = 1 and VM = 0 and IA32_EFER.LMA = 0) and instruction = far return
+    THEN
+            IF OperandSize = 32
+                    THEN
+                        IF second doubleword on stack is not within stack limits
+                            THEN #SS(0); FI;
+                    ELSE (* OperandSize = 16 *)
+                        IF second word on stack is not within stack limits
+                            THEN #SS(0); FI;
+            FI;
+    IF return code segment selector is NULL
+            THEN #GP(0); FI;
+    IF return code segment selector addresses descriptor beyond descriptor table limit
+            THEN #GP(selector); FI;
+    Obtain descriptor to which return code segment selector points from descriptor table;
+    IF return code segment descriptor is not a code segment
+            THEN #GP(selector); FI;
+    IF return code segment selector RPL < CPL
+            THEN #GP(selector); FI;
+    IF return code segment descriptor is conforming
+    and return code segment DPL > return code segment selector RPL
+            THEN #GP(selector); FI;
+    IF return code segment descriptor is non-conforming and return code
+    segment DPL ≠ return code segment selector RPL
+            THEN #GP(selector); FI;
+    IF return code segment descriptor is not present
+            THEN #NP(selector); FI:
+    IF return code segment selector RPL > CPL
+            THEN GOTO RETURN-TO-OUTER-PRIVILEGE-LEVEL;
+            ELSE GOTO RETURN-TO-SAME-PRIVILEGE-LEVEL;
+    FI;
+FI;
+RETURN-TO-SAME-PRIVILEGE-LEVEL:
+    IF the return instruction pointer is not within the return code segment limit
+            THEN #GP(0); FI;
+    IF OperandSize = 32
+            THEN
+                    EIP := Pop();
+                    CS := Pop(); (* 32-bit pop, high-order 16 bits discarded *)
+            ELSE (* OperandSize = 16 *)
+                    EIP := Pop();
+                    EIP := EIP AND 0000FFFFH;
+                    CS := Pop(); (* 16-bit pop *)
+    FI;
+    IF instruction has immediate operand
+            THEN (* Release parameters from stack *)
+                    IF StackAddressSize = 32
+                        THEN
+                            ESP := ESP + SRC;
+                        ELSE (* StackAddressSize = 16 *)
+                            SP := SP + SRC;
+                    FI;
+    FI;
+    IF ShadowStackEnabled(CPL)
+            (* SSP must be 8 byte aligned *)
+            IF SSP AND 0x7 != 0
+                    THEN #CP(FAR-RET/IRET); FI;
+            tempSsCS = shadow_stack_load 8 bytes from SSP+16;
+            tempSsLIP = shadow_stack_load 8 bytes from SSP+8;
+            prevSSP = shadow_stack_load 8 bytes from SSP;
+            SSP = SSP + 24;
+            (* do a 64 bit-compare to check if any bits beyond bit 15 are set *)
+            tempCS = CS; (* zero pad to 64 bit *)
+            IF tempCS != tempSsCS
+                    THEN #CP(FAR-RET/IRET); FI;
+            (* do a 64 bit-compare; pad CSBASE+RIP with 0 for 32 bit LIP*)
+            IF CSBASE + RIP != tempSsLIP
+                    THEN #CP(FAR-RET/IRET); FI;
+            (* prevSSP must be 4 byte aligned *)
+            IF prevSSP AND 0x3 != 0
+                    THEN #CP(FAR-RET/IRET); FI;
+            (* In legacy mode SSP must be in low 4GB *)
+            IF prevSSP[63:32] != 0
+                    THEN #GP(0); FI;
+            SSP := prevSSP
+    FI;
+RETURN-TO-OUTER-PRIVILEGE-LEVEL:
+    IF top (16 + SRC) bytes of stack are not within stack limits (OperandSize = 32)
+    or top (8 + SRC) bytes of stack are not within stack limits (OperandSize = 16)
+                    THEN #SS(0); FI;
+    Read return segment selector;
+    IF stack segment selector is NULL
+            THEN #GP(0); FI;
+    IF return stack segment selector index is not within its descriptor table limits
+            THEN #GP(selector); FI;
+    Read segment descriptor pointed to by return segment selector;
+    IF stack segment selector RPL ≠ RPL of the return code segment selector
+    or stack segment is not a writable data segment
+    or stack segment descriptor DPL ≠ RPL of the return code segment selector
+                    THEN #GP(selector); FI;
+    IF stack segment not present
+            THEN #SS(StackSegmentSelector); FI;
+    IF the return instruction pointer is not within the return code segment limit
+            THEN #GP(0); FI;
+    IF OperandSize = 32
+            THEN
+                    EIP := Pop();
+                    CS := Pop(); (* 32-bit pop, high-order 16 bits discarded; segment descriptor loaded *)
+                    CS(RPL) := ReturnCodeSegmentSelector(RPL);
+                    IF instruction has immediate operand
+                        THEN (* Release parameters from called procedure’s stack *)
+                            IF StackAddressSize = 32
+                                THEN
+                                    ESP := ESP + SRC;
+                                ELSE (* StackAddressSize = 16 *)
+                                    SP := SP + SRC;
+                            FI;
+                    FI;
+                    tempESP := Pop();
+                    tempSS := Pop(); (* 32-bit pop, high-order 16 bits discarded; seg. descriptor loaded *)
+            ELSE (* OperandSize = 16 *)
+                    EIP := Pop();
+                    EIP := EIP AND 0000FFFFH;
+                    CS := Pop(); (* 16-bit pop; segment descriptor loaded *)
+                    CS(RPL) := ReturnCodeSegmentSelector(RPL);
+                    IF instruction has immediate operand
+                        THEN (* Release parameters from called procedure’s stack *)
+                            IF StackAddressSize = 32
+                                THEN
+                                    ESP := ESP + SRC;
+                                ELSE (* StackAddressSize = 16 *)
+                                    SP := SP + SRC;
+                            FI;
+                    FI;
+                    tempESP := Pop();
+                    tempSS := Pop(); (* 16-bit pop; segment descriptor loaded *)
+            FI;
+    IF ShadowStackEnabled(CPL)
+            (* check if 8 byte aligned *)
+            IF SSP AND 0x7 != 0
+                    THEN #CP(FAR-RET/IRET); FI;
+            IF ReturnCodeSegmentSelector(RPL) !=3
+                    THEN
+                        tempSsCS = shadow_stack_load 8 bytes from SSP+16;
+                        tempSsLIP = shadow_stack_load 8 bytes from SSP+8;
+                        tempSSP = shadow_stack_load 8 bytes from SSP;
+                        SSP = SSP + 24;
+                        (* Do 64 bit compare to detect bits beyond 15 being set *)
+                        tempCS = CS; (* zero extended to 64 bit *)
+                        IF tempCS != tempSsCS
+                            THEN #CP(FAR-RET/IRET); FI;
+                        (* Do 64 bit compare; pad CSBASE+RIP with 0 for 32 bit LA *)
+                        IF CSBASE + RIP != tempSsLIP
+                            THEN #CP(FAR-RET/IRET); FI;
+                        (* check if 4 byte aligned *)
+                        IF tempSSP AND 0x3 != 0
+                            THEN #CP(FAR-RET/IRET); FI;
+            FI;
+    FI;
+    tempOldCPL = CPL;
+    CPL := ReturnCodeSegmentSelector(RPL);
+    ESP := tempESP;
+    SS := tempSS;
+    tempOldSSP = SSP;
+    IF ShadowStackEnabled(CPL)
+            IF CPL = 3
+                    THEN tempSSP := IA32_PL3_SSP; FI;
+            IF tempSSP[63:32] != 0
+                    THEN #GP(0); FI;
+            SSP := tempSSP
+    FI;
+    (* Now past all faulting points; safe to free the token. The token free is done using the old SSP
+        * and using a supervisor override as old CPL was a supervisor privilege level *)
+    IF ShadowStackEnabled(tempOldCPL)
+            expected_token_value = tempOldSSP | BUSY_BIT (* busy bit - bit position 0 - must be set *)
+            new_token_value = tempOldSSP (* clear the busy bit *)
+            shadow_stack_lock_cmpxchg8b(tempOldSSP, new_token_value, expected_token_value)
+    FI;
+    FOR each SegReg in (ES, FS, GS, and DS)
+            DO
+                    tempDesc := descriptor cache for SegReg (* hidden part of segment register *)
+                    IF (SegmentSelector == NULL) OR (tempDesc(DPL) < CPL AND tempDesc(Type) is (data or non-conforming code))
+                        THEN (* Segment register invalid *)
+                            SegmentSelector := 0; (*Segment selector becomes null*)
+                    FI;
+            OD;
+    IF instruction has immediate operand
+            THEN (* Release parameters from calling procedure’s stack *)
+                    IF StackAddressSize = 32
+                        THEN
+                            ESP := ESP + SRC;
+                        ELSE (* StackAddressSize = 16 *)
+                            SP := SP + SRC;
+                    FI;
+    FI;
+(* IA-32e Mode *)
+    IF (PE = 1 and VM = 0 and IA32_EFER.LMA = 1) and instruction = far return
+            THEN
+                    IF OperandSize = 32
+                        THEN
+                            IF second doubleword on stack is not within stack limits
+                                THEN #SS(0); FI;
+                            IF first or second doubleword on stack is not in canonical space
+                                THEN #SS(0); FI;
+                        ELSE
+                            IF OperandSize = 16
+                                THEN
+                                    IF second word on stack is not within stack limits
+                                        THEN #SS(0); FI;
+                                    IF first or second word on stack is not in canonical space
+                                        THEN #SS(0); FI;
+                                ELSE (* OperandSize = 64 *)
+                                    IF first or second quadword on stack is not in canonical space
+                                        THEN #SS(0); FI;
+                            FI
+                    FI;
+            IF return code segment selector is NULL
+                    THEN #GP(0); FI;
+            IF return code segment selector addresses descriptor beyond descriptor table limit
+                    THEN #GP(selector); FI;
+            IF return code segment selector addresses descriptor in non-canonical space
+                    THEN #GP(selector); FI;
+            Obtain descriptor to which return code segment selector points from descriptor table;
+            IF return code segment descriptor is not a code segment
+                    THEN #GP(selector); FI;
+            IF return code segment descriptor has L-bit = 1 and D-bit = 1
+                    THEN #GP(selector); FI;
+            IF return code segment selector RPL < CPL
+                    THEN #GP(selector); FI;
+            IF return code segment descriptor is conforming
+            and return code segment DPL > return code segment selector RPL
+                    THEN #GP(selector); FI;
+            IF return code segment descriptor is non-conforming
+            and return code segment DPL ≠ return code segment selector RPL
+                    THEN #GP(selector); FI;
+            IF return code segment descriptor is not present
+                    THEN #NP(selector); FI;
+            IF return code segment selector RPL > CPL
+                    THEN GOTO IA-32E-MODE-RETURN-TO-OUTER-PRIVILEGE-LEVEL;
+                    ELSE GOTO IA-32E-MODE-RETURN-TO-SAME-PRIVILEGE-LEVEL;
+            FI;
+    FI;
+IA-32E-MODE-RETURN-TO-SAME-PRIVILEGE-LEVEL:
+IF the return instruction pointer is not within the return code segment limit
+    THEN #GP(0); FI;
+IF the return instruction pointer is not within canonical address space
+    THEN #GP(0); FI;
+IF OperandSize = 32
+    THEN
+            EIP := Pop();
+            CS := Pop(); (* 32-bit pop, high-order 16 bits discarded *)
+    ELSE
+            IF OperandSize = 16
+                    THEN
+                        EIP := Pop();
+                        EIP := EIP AND 0000FFFFH;
+                        CS := Pop(); (* 16-bit pop *)
+                    ELSE (* OperandSize = 64 *)
+                        RIP := Pop();
+                        CS := Pop(); (* 64-bit pop, high-order 48 bits discarded *)
+            FI;
+FI;
+IF instruction has immediate operand
+    THEN (* Release parameters from stack *)
+            IF StackAddressSize = 32
+                    THEN
+                        ESP := ESP + SRC;
+                    ELSE
+                        IF StackAddressSize = 16
+                            THEN
+                                SP := SP + SRC;
+                            ELSE (* StackAddressSize = 64 *)
+                                RSP := RSP + SRC;
+                        FI;
+            FI;
+FI;
+IF ShadowStackEnabled(CPL)
+    IF SSP AND 0x7 != 0 (* check if aligned to 8 bytes *)
+            THEN #CP(FAR-RET/IRET); FI;
+    tempSsCS = shadow_stack_load 8 bytes from SSP+16;
+    tempSsLIP = shadow_stack_load 8 bytes from SSP+8;
+    tempSSP = shadow_stack_load 8 bytes from SSP;
+    SSP = SSP + 24;
+    tempCS = CS; (* zero padded to 64 bit *)
+    IF tempCS != tempSsCS (* 64 bit compare; CS zero padded to 64 bits *)
+            THEN #CP(FAR-RET/IRET); FI;
+    IF CSBASE + RIP != tempSsLIP (* 64 bit compare *)
+            THEN #CP(FAR-RET/IRET); FI;
+    IF tempSSP AND 0x3 != 0 (* check if aligned to 4 bytes *)
+            THEN #CP(FAR-RET/IRET); FI;
+    IF (CS.L = 0 AND tempSSP[63:32] != 0) OR
+        (CS.L = 1 AND tempSSP is not canonical relative to the current paging mode)
+            THEN #GP(0); FI;
+    SSP := tempSSP
+FI;
+IA-32E-MODE-RETURN-TO-OUTER-PRIVILEGE-LEVEL:
+IF top (16 + SRC) bytes of stack are not within stack limits (OperandSize = 32)
+or top (8 + SRC) bytes of stack are not within stack limits (OperandSize = 16)
+    THEN #SS(0); FI;
+IF top (16 + SRC) bytes of stack are not in canonical address space (OperandSize =32)
+or top (8 + SRC) bytes of stack are not in canonical address space (OperandSize = 16)
+or top (32 + SRC) bytes of stack are not in canonical address space (OperandSize = 64)
+    THEN #SS(0); FI;
+Read return stack segment selector;
+IF stack segment selector is NULL
+    THEN
+            IF new CS descriptor L-bit = 0
+                    THEN #GP(selector); FI;
+            IF stack segment selector RPL = 3
+                    THEN #GP(selector); FI;
+FI;
+IF return stack segment descriptor is not within descriptor table limits
+            THEN #GP(selector); FI;
+IF return stack segment descriptor is in non-canonical address space
+            THEN #GP(selector); FI;
+Read segment descriptor pointed to by return segment selector;
+IF stack segment selector RPL ≠ RPL of the return code segment selector
+or stack segment is not a writable data segment
+or stack segment descriptor DPL ≠ RPL of the return code segment selector
+    THEN #GP(selector); FI;
+IF stack segment not present
+    THEN #SS(StackSegmentSelector); FI;
+IF the return instruction pointer is not within the return code segment limit
+    THEN #GP(0); FI;
+IF the return instruction pointer is not within canonical address space
+    THEN #GP(0); FI;
+IF OperandSize = 32
+    THEN
+            EIP := Pop();
+            CS := Pop(); (* 32-bit pop, high-order 16 bits discarded, segment descriptor loaded *)
+            CS(RPL) := ReturnCodeSegmentSelector(RPL);
+            IF instruction has immediate operand
+                    THEN (* Release parameters from called procedure’s stack *)
+                        IF StackAddressSize = 32
+                            THEN
+                                ESP := ESP + SRC;
+                            ELSE
+                                IF StackAddressSize = 16
+                                    THEN
+                                        SP := SP + SRC;
+                                    ELSE (* StackAddressSize = 64 *)
+                                        RSP := RSP + SRC;
+                                FI;
+                        FI;
+            FI;
+            tempESP := Pop();
+            tempSS := Pop(); (* 32-bit pop, high-order 16 bits discarded, segment descriptor loaded *)
+    ELSE
+            IF OperandSize = 16
+                    THEN
+                        EIP := Pop();
+                        EIP := EIP AND 0000FFFFH;
+                        CS := Pop(); (* 16-bit pop; segment descriptor loaded *)
+                        CS(RPL) := ReturnCodeSegmentSelector(RPL);
+                        IF instruction has immediate operand
+                            THEN (* Release parameters from called procedure’s stack *)
+                                IF StackAddressSize = 32
+                                    THEN
+                                        ESP := ESP + SRC;
+                                    ELSE
+                                        IF StackAddressSize = 16
+                                            THEN
+                                                SP := SP + SRC;
+                                            ELSE (* StackAddressSize = 64 *)
+                                                RSP := RSP + SRC;
+                                        FI;
+                                FI;
+                        FI;
+                        tempESP := Pop();
+                        tempSS := Pop(); (* 16-bit pop; segment descriptor loaded *)
+                    ELSE (* OperandSize = 64 *)
+                        RIP := Pop();
+                        CS := Pop(); (* 64-bit pop; high-order 48 bits discarded; seg. descriptor loaded *)
+                        CS(RPL) := ReturnCodeSegmentSelector(RPL);
+                        IF instruction has immediate operand
+                            THEN (* Release parameters from called procedure’s stack *)
+                                RSP := RSP + SRC;
+                        FI;
+                        tempESP := Pop();
+                        tempSS := Pop(); (* 64-bit pop; high-order 48 bits discarded; seg. desc. loaded *)
+            FI;
+FI;
+IF ShadowStackEnabled(CPL)
+    (* check if 8 byte aligned *)
+    IF SSP AND 0x7 != 0
+            THEN #CP(FAR-RET/IRET); FI;
+    IF ReturnCodeSegmentSelector(RPL) !=3
+            THEN
+                    tempSsCS = shadow_stack_load 8 bytes from SSP+16;
+                    tempSsLIP = shadow_stack_load 8 bytes from SSP+8;
+                    tempSSP = shadow_stack_load 8 bytes from SSP;
+                    SSP = SSP + 24;
+                    (* Do 64 bit compare to detect bits beyond 15 being set *)
+                    tempCS = CS; (* zero padded to 64 bit *)
+                    IF tempCS != tempSsCS
+                        THEN #CP(FAR-RET/IRET); FI;
+                    (* Do 64 bit compare; pad CSBASE+RIP with 0 for 32 bit LIP *)
+                    IF CSBASE + RIP != tempSsLIP
+                        THEN #CP(FAR-RET/IRET); FI;
+                    (* check if 4 byte aligned *)
+                    IF tempSSP AND 0x3 != 0
+                        THEN #CP(FAR-RET/IRET); FI;
+    FI;
+FI;
+tempOldCPL = CPL;
+CPL := ReturnCodeSegmentSelector(RPL);
+ESP := tempESP;
+SS := tempSS;
+tempOldSSP = SSP;
+IF ShadowStackEnabled(CPL)
+    IF CPL = 3
+            THEN tempSSP := IA32_PL3_SSP; FI;
+    IF (CS.L = 0 AND tempSSP[63:32] != 0) OR
+        (CS.L = 1 AND tempSSP is not canonical relative to the current paging mode)
+            THEN #GP(0); FI;
+    SSP := tempSSP
+FI;
+(* Now past all faulting points; safe to free the token. The token free is done using the old SSP
+* and using a supervisor override as old CPL was a supervisor privilege level *)
+IF ShadowStackEnabled(tempOldCPL)
+    expected_token_value = tempOldSSP | BUSY_BIT (* busy bit - bit position 0 - must be set *)
+    new_token_value = tempOldSSP (* clear the busy bit *)
+    shadow_stack_lock_cmpxchg8b(tempOldSSP, new_token_value, expected_token_value)
+FI;
+FOR each of segment register (ES, FS, GS, and DS)
+    DO
+            IF segment register points to data or non-conforming code segment
+            and CPL > segment descriptor DPL; (* DPL in hidden part of segment register *)
+                    THEN SegmentSelector := 0; (* SegmentSelector invalid *)
+            FI;
+    OD;
+IF instruction has immediate operand
+    THEN (* Release parameters from calling procedure’s stack *)
+            IF StackAddressSize = 32
+                    THEN
+                        ESP := ESP + SRC;
+                    ELSE
+                        IF StackAddressSize = 16
+                            THEN
+                                SP := SP + SRC;
+                            ELSE (* StackAddressSize = 64 *)
+                                RSP := RSP + SRC;
+                        FI;
+            FI;
+FI;
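The token release performed in the ShadowStackEnabled(tempOldCPL) blocks above amounts to a locked 64-bit compare-and-exchange that clears the busy bit (bit 0) of the token stored at the old SSP. A minimal C sketch of that check-and-clear step, assuming a hypothetical atomic view of the shadow-stack slot (real shadow-stack memory is not directly addressable like this):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Sketch of shadow_stack_lock_cmpxchg8b(tempOldSSP, new_token_value, expected_token_value):
 * the 8-byte token at the old SSP must currently be (old_ssp | BUSY_BIT); it is replaced
 * with old_ssp, i.e., the same token with the busy bit cleared. */
static bool release_busy_token(_Atomic uint64_t *token_slot, uint64_t old_ssp)
{
    uint64_t expected = old_ssp | 1u;   /* busy bit (bit position 0) must be set */
    uint64_t desired  = old_ssp;        /* clear the busy bit */
    return atomic_compare_exchange_strong(token_slot, &expected, desired);
}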
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the return code or stack segment selector is NULL.
If the return instruction pointer is not within the return code segment limit.
If returning to 32-bit or compatibility mode and the previous SSP from shadow stack (when returning to CPL <3) or from IA32_PL3_SSP (returning to CPL 3) is beyond 4GB.
#GP(selector)If the RPL of the return code segment selector is less than the CPL.
If the return code or stack segment selector index is not within its descriptor table limits.
If the return code segment descriptor does not indicate a code segment.
If the return code segment is non-conforming and the code segment’s DPL is not equal to the RPL of the code segment’s segment selector.
If the return code segment is conforming and the code segment’s DPL is greater than the RPL of the code segment’s segment selector.
If the stack segment is not a writable data segment.
If the stack segment selector RPL is not equal to the RPL of the return code segment selector.
If the stack segment descriptor DPL is not equal to the RPL of the return code segment selector.
#SS(0)If the top bytes of stack are not within stack limits.
If the return stack segment is not present.
#NP(selector)If the return code segment is not present.
#PF(fault-code)If a page fault occurs.
#AC(0)If an unaligned memory access occurs when the CPL is 3 and alignment checking is enabled.
#CP(Far-RET/IRET)If the previous SSP from shadow stack (when returning to CPL <3) or from IA32_PL3_SSP (returning to CPL 3) is not 4 byte aligned.
If return instruction pointer from stack and shadow stack do not match.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + +
#GPIf the return instruction pointer is not within the return code segment limit
#SSIf the top bytes of stack are not within stack limits.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If the return instruction pointer is not within the return code segment limit
#SS(0)If the top bytes of stack are not within stack limits.
#PF(fault-code)If a page fault occurs.
#AC(0)If an unaligned memory access occurs when alignment checking is enabled.
+

Compatibility Mode Exceptions + ¶ +

+

Same as 64-bit mode exceptions.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the return instruction pointer is non-canonical.
If the return instruction pointer is not within the return code segment limit.
If the stack segment selector is NULL going back to compatibility mode.
If the stack segment selector is NULL going back to CPL3 64-bit mode.
If a NULL stack segment selector RPL is not equal to CPL going back to non-CPL3 64-bit mode.
If the return code segment selector is NULL.
If returning to 32-bit or compatibility mode and the previous SSP from shadow stack (when returning to CPL <3) or from IA32_PL3_SSP (returning to CPL 3) is beyond 4GB.
#GP(selector)If the proposed segment descriptor for a code segment does not indicate it is a code segment.
If the proposed new code segment descriptor has both the D-bit and L-bit set.
If the DPL for a nonconforming-code segment is not equal to the RPL of the code segment selector.
If CPL is greater than the RPL of the code segment selector.
If the DPL of a conforming-code segment is greater than the return code segment selector RPL.
If a segment selector index is outside its descriptor table limits.
If a segment descriptor memory address is non-canonical.
If the stack segment is not a writable data segment.
If the stack segment descriptor DPL is not equal to the RPL of the return code segment selector.
If the stack segment selector RPL is not equal to the RPL of the return code segment selector.
#SS(0)If an attempt to pop a value off the stack violates the SS limit.
If an attempt to pop a value off the stack causes a non-canonical address to be referenced.
#NP(selector)If the return code or stack segment is not present.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#CP(Far-RET/IRET)If the previous SSP from shadow stack (when returning to CPL <3) or from IA32_PL3_SSP (returning to CPL 3) is not 4 byte aligned.
If return instruction pointer from stack and shadow stack do not match.
diff --git a/x86/rorx.html b/x86/rorx.html new file mode 100644 index 0000000..03b06df --- /dev/null +++ b/x86/rorx.html @@ -0,0 +1,77 @@ + +RORX + — Rotate Right Logical Without Affecting Flags

RORX + — Rotate Right Logical Without Affecting Flags

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
VEX.LZ.F2.0F3A.W0 F0 /r ib RORX r32, r/m32, imm8RMIV/VBMI2Rotate 32-bit r/m32 right imm8 times without affecting arithmetic flags.
VEX.LZ.F2.0F3A.W1 F0 /r ib RORX r64, r/m64, imm8RMIV/N.E.BMI2Rotate 64-bit r/m64 right imm8 times without affecting arithmetic flags.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMIModRM:reg (w)ModRM:r/m (r)imm8N/A
+

Description + ¶ +

+

Rotates the bits of second operand right by the count value specified in imm8 without affecting arithmetic flags. The RORX instruction does not read or write the arithmetic flags.

+

This instruction is not supported in real mode and virtual-8086 mode. The operand size is always 32 bits if not in 64-bit mode. In 64-bit mode operand size 64 requires VEX.W1. VEX.W1 is ignored in non-64-bit modes. An attempt to execute this instruction with VEX.L not equal to 0 will cause #UD.

+

Operation + ¶ +

+
IF (OperandSize = 32)
+    y := imm8 AND 1FH;
+    DEST := (SRC >> y) | (SRC << (32-y));
+ELSEIF (OperandSize = 64 )
+    y := imm8 AND 3FH;
+    DEST := (SRC >> y) | (SRC << (64-y));
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
Auto-generated from high-level language.
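A brief usage sketch, assuming a BMI2-capable compiler and the _rorx_u32/_rorx_u64 intrinsics from immintrin.h (compilers also commonly lower a plain rotate expression to RORX when BMI2 is enabled):

#include <stdint.h>
#include <immintrin.h>   /* _rorx_u32 / _rorx_u64; build with BMI2 enabled, e.g., -mbmi2 */

/* Rotate right by a compile-time count; arithmetic flags are untouched, unlike ROR. */
static inline uint32_t ror32_by_5(uint32_t x)
{
    return _rorx_u32(x, 5);             /* the count is masked to 5 bits, as in the Operation section */
}

/* Equivalent portable expression; BMI2-aware compilers typically emit RORX for it. */
static inline uint32_t ror32(uint32_t x, unsigned n)
{
    n &= 31;
    return (x >> n) | (x << ((32 - n) & 31));
}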
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-29, “Type 13 Class Exception Conditions.”

diff --git a/x86/roundpd.html b/x86/roundpd.html new file mode 100644 index 0000000..fef61f4 --- /dev/null +++ b/x86/roundpd.html @@ -0,0 +1,152 @@ + +ROUNDPD + — Round Packed Double Precision Floating-Point Values

ROUNDPD + — Round Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode*/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 3A 09 /r ib ROUNDPD xmm1, xmm2/m128, imm8RMIV/VSSE4_1Round packed double precision floating-point values in xmm2/m128 and place the result in xmm1. The rounding mode is determined by imm8.
VEX.128.66.0F3A.WIG 09 /r ib VROUNDPD xmm1, xmm2/m128, imm8RMIV/VAVXRound packed double precision floating-point values in xmm2/m128 and place the result in xmm1. The rounding mode is determined by imm8.
VEX.256.66.0F3A.WIG 09 /r ib VROUNDPD ymm1, ymm2/m256, imm8RMIV/VAVXRound packed double precision floating-point values in ymm2/m256 and place the result in ymm1. The rounding mode is determined by imm8.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMIModRM:reg (w)ModRM:r/m (r)imm8N/A
+

Description + ¶ +

+

Round the 2 double precision floating-point values in the source operand (second operand) using the rounding mode specified in the immediate operand (third operand) and place the results in the destination operand (first operand). The rounding process rounds each input floating-point value to an integer value and returns the integer result as a double precision floating-point value.

+

The immediate operand specifies control fields for the rounding operation; three bit fields are defined, as shown in Figure 4-24. Bit 3 of the immediate byte controls processor behavior for a precision exception, bit 2 selects the source of the rounding mode control, and bits 1:0 specify a non-sticky rounding-mode value (Table 4-18 lists the encoded values for the rounding-mode field).

+

The Precision Floating-Point Exception is signaled according to the immediate operand. If any source operand is an SNaN then it will be converted to a QNaN. If DAZ is set to ‘1 then denormals will be converted to zero before rounding.

+

128-bit Legacy SSE version: The second source can be an XMM register or 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding YMM register destination are unmodified.

+

VEX.128 encoded version: the source operand is an XMM register or a 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding YMM register destination are zeroed.

+

VEX.256 encoded version: The source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+

Note: In VEX-encoded versions, VEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+
+ + + + + + + + + + + + + + + + + + + + +
Figure 4-24. Bit Control Fields of Immediate Byte for ROUNDxx Instruction
imm8[7:4]: Reserved
imm8[3] (P): Precision Mask; 0: normal, 1: inexact (precision exception suppressed)
imm8[2] (RS): Rounding Select; 1: MXCSR.RC, 0: Imm8.RC
imm8[1:0] (RC): Rounding mode
+
+
Table 4-18. Rounding Modes and Encoding of Rounding Control (RC) Field
+

Rounding Mode (RC Field Setting): Description

+

Round to nearest (even) (00B): Rounded result is the closest to the infinitely precise result. If two values are equally close, the result is the even value (i.e., the integer value with the least-significant bit of zero).

+

Round down, toward −∞ (01B): Rounded result is closest to but no greater than the infinitely precise result.

+

Round up, toward +∞ (10B): Rounded result is closest to but no less than the infinitely precise result.

+

Round toward zero, truncate (11B): Rounded result is closest to but no greater in absolute value than the infinitely precise result.

+

Operation + ¶ +

+
IF (imm[2] = ‘1)
+    THEN // rounding mode is determined by MXCSR.RC
+        DEST[63:0] := ConvertDPFPToInteger_M(SRC[63:0]);
+        DEST[127:64] := ConvertDPFPToInteger_M(SRC[127:64]);
+    ELSE // rounding mode is determined by IMM8.RC
+        DEST[63:0] := ConvertDPFPToInteger_Imm(SRC[63:0]);
+        DEST[127:64] := ConvertDPFPToInteger_Imm(SRC[127:64]);
+FI
+
+

ROUNDPD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := RoundToInteger(SRC[63:0], ROUND_CONTROL)
+DEST[127:64] := RoundToInteger(SRC[127:64], ROUND_CONTROL)
+DEST[MAXVL-1:128] (Unmodified)
+
+

VROUNDPD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := RoundToInteger(SRC[63:0], ROUND_CONTROL)
+DEST[127:64] := RoundToInteger(SRC[127:64], ROUND_CONTROL)
+DEST[MAXVL-1:128] := 0
+
+

VROUNDPD (VEX.256 Encoded Version) + ¶ +

+
DEST[63:0] := RoundToInteger(SRC[63:0], ROUND_CONTROL)
+DEST[127:64] := RoundToInteger(SRC[127:64], ROUND_CONTROL)
+DEST[191:128] := RoundToInteger(SRC[191:128], ROUND_CONTROL)
+DEST[255:192] := RoundToInteger(SRC[255:192], ROUND_CONTROL)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
__m128d _mm_round_pd(__m128d s1, int iRoundMode);
+
+
__m128d _mm_floor_pd(__m128d s1);
+
+
__m128d _mm_ceil_pd(__m128d s1);
+
+
__m256d _mm256_round_pd(__m256d s1, int iRoundMode);
+
+
__m256d _mm256_floor_pd(__m256d s1);
+
+
__m256d _mm256_ceil_pd(__m256d s1);
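A short usage sketch, assuming the SSE4.1 intrinsics and the _MM_FROUND_* immediate macros from smmintrin.h (the macros simply build the imm8 fields described above; _MM_FROUND_NO_EXC sets bit 3 to suppress the precision exception):

#include <smmintrin.h>   /* SSE4.1: ROUNDPD via _mm_round_pd / _mm_floor_pd */

void roundpd_examples(void)
{
    __m128d v = _mm_set_pd(2.5, -1.5);

    /* imm8[2] = 0: use imm8[1:0] as the rounding mode; imm8[3] = 1: no precision exception. */
    __m128d nearest = _mm_round_pd(v, _MM_FROUND_TO_NEAREST_INT | _MM_FROUND_NO_EXC);

    /* _mm_floor_pd(v) behaves like _mm_round_pd(v, _MM_FROUND_TO_NEG_INF | _MM_FROUND_NO_EXC). */
    __m128d down = _mm_floor_pd(v);

    (void)nearest;
    (void)down;
}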
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (signaled only if SRC = SNaN).

+

Precision (signaled only if imm[3] = 0; if imm[3] = 1, then the Precision Mask in the MXCSR is ignored and the precision exception is not signaled.)

+

Note that Denormal is not signaled by ROUNDPD.

+

Other Exceptions + ¶ +

+

See Table 2-19, “Type 2 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.vvvv ≠ 1111B.
diff --git a/x86/roundps.html b/x86/roundps.html new file mode 100644 index 0000000..4079afc --- /dev/null +++ b/x86/roundps.html @@ -0,0 +1,135 @@ + +ROUNDPS + — Round Packed Single Precision Floating-Point Values

ROUNDPS + — Round Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode*/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 3A 08 /r ib ROUNDPS xmm1, xmm2/m128, imm8RMIV/VSSE4_1Round packed single precision floating-point values in xmm2/m128 and place the result in xmm1. The rounding mode is determined by imm8.
VEX.128.66.0F3A.WIG 08 /r ib VROUNDPS xmm1, xmm2/m128, imm8RMIV/VAVXRound packed single precision floating-point values in xmm2/m128 and place the result in xmm1. The rounding mode is determined by imm8.
VEX.256.66.0F3A.WIG 08 /r ib VROUNDPS ymm1, ymm2/m256, imm8RMIV/VAVXRound packed single precision floating-point values in ymm2/m256 and place the result in ymm1. The rounding mode is determined by imm8.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMIModRM:reg (w)ModRM:r/m (r)imm8N/A
+

Description + ¶ +

+

Round the 4 single precision floating-point values in the source operand (second operand) using the rounding mode specified in the immediate operand (third operand) and place the results in the destination operand (first operand). The rounding process rounds each input floating-point value to an integer value and returns the integer result as a single precision floating-point value.

+

The immediate operand specifies control fields for the rounding operation; three bit fields are defined, as shown in Figure 4-24. Bit 3 of the immediate byte controls processor behavior for a precision exception, bit 2 selects the source of the rounding mode control, and bits 1:0 specify a non-sticky rounding-mode value (Table 4-18 lists the encoded values for the rounding-mode field).

+

The Precision Floating-Point Exception is signaled according to the immediate operand. If any source operand is an SNaN then it will be converted to a QNaN. If DAZ is set to ‘1 then denormals will be converted to zero before rounding.

+

128-bit Legacy SSE version: The second source can be an XMM register or 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding YMM register destination are unmodified.

+

VEX.128 encoded version: the source operand is an XMM register or a 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding YMM register destination are zeroed.

+

VEX.256 encoded version: The source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+

Note: In VEX-encoded versions, VEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+
IF (imm[2] = ‘1)
+    THEN // rounding mode is determined by MXCSR.RC
+        DEST[31:0] := ConvertSPFPToInteger_M(SRC[31:0]);
+        DEST[63:32] := ConvertSPFPToInteger_M(SRC[63:32]);
+        DEST[95:64] := ConvertSPFPToInteger_M(SRC[95:64]);
+        DEST[127:96] := ConvertSPFPToInteger_M(SRC[127:96]);
+    ELSE // rounding mode is determined by IMM8.RC
+        DEST[31:0] := ConvertSPFPToInteger_Imm(SRC[31:0]);
+        DEST[63:32] := ConvertSPFPToInteger_Imm(SRC[63:32]);
+        DEST[95:64] := ConvertSPFPToInteger_Imm(SRC[95:64]);
+        DEST[127:96] := ConvertSPFPToInteger_Imm(SRC[127:96]);
+FI;
+
+

ROUNDPS(128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := RoundToInteger(SRC[31:0], ROUND_CONTROL)
+DEST[63:32] := RoundToInteger(SRC[63:32], ROUND_CONTROL)
+DEST[95:64] := RoundToInteger(SRC[95:64], ROUND_CONTROL)
+DEST[127:96] := RoundToInteger(SRC[127:96], ROUND_CONTROL)
+DEST[MAXVL-1:128] (Unmodified)
+
+

VROUNDPS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := RoundToInteger(SRC[31:0], ROUND_CONTROL)
+DEST[63:32] := RoundToInteger(SRC[63:32], ROUND_CONTROL)
+DEST[95:64] := RoundToInteger(SRC[95:64], ROUND_CONTROL)
+DEST[127:96] := RoundToInteger(SRC[127:96], ROUND_CONTROL)
+DEST[MAXVL-1:128] := 0
+
+

VROUNDPS (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := RoundToInteger(SRC[31:0], ROUND_CONTROL)
+DEST[63:32] := RoundToInteger(SRC[63:32], ROUND_CONTROL)
+DEST[95:64] := RoundToInteger(SRC[95:64], ROUND_CONTROL)
+DEST[127:96] := RoundToInteger(SRC[127:96], ROUND_CONTROL)
+DEST[159:128] := RoundToInteger(SRC[159:128], ROUND_CONTROL)
+DEST[191:160] := RoundToInteger(SRC[191:160], ROUND_CONTROL)
+DEST[223:192] := RoundToInteger(SRC[223:192], ROUND_CONTROL)
+DEST[255:224] := RoundToInteger(SRC[255:224], ROUND_CONTROL)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
__m128 _mm_round_ps(__m128 s1, int iRoundMode);
+
+
__m128 _mm_floor_ps(__m128 s1);
+
+
__m128 _mm_ceil_ps(__m128 s1)
+
+
__m256 _mm256_round_ps(__m256 s1, int iRoundMode);
+
+
__m256 _mm256_floor_ps(__m256 s1);
+
+
__m256 _mm256_ceil_ps(__m256 s1)
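As a small sketch, assuming AVX and immintrin.h: setting imm8[2] (the RS field) makes the instruction follow MXCSR.RC instead of the immediate, which is what the _MM_FROUND_CUR_DIRECTION macro encodes:

#include <immintrin.h>   /* AVX: VROUNDPS via _mm256_round_ps */

/* Round eight packed floats using whatever rounding mode MXCSR.RC currently selects. */
__m256 round_with_mxcsr(__m256 v)
{
    return _mm256_round_ps(v, _MM_FROUND_CUR_DIRECTION | _MM_FROUND_NO_EXC);
}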
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (signaled only if SRC = SNaN).

+

Precision (signaled only if imm[3] = 0; if imm[3] = 1, then the Precision Mask in the MXCSR is ignored and the precision exception is not signaled.)

+

Note that Denormal is not signaled by ROUNDPS.

+

Other Exceptions + ¶ +

+

See Table 2-19, “Type 2 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.vvvv ≠ 1111B.
diff --git a/x86/roundsd.html b/x86/roundsd.html new file mode 100644 index 0000000..ae99e6e --- /dev/null +++ b/x86/roundsd.html @@ -0,0 +1,101 @@ + +ROUNDSD + — Round Scalar Double Precision Floating-Point Values

ROUNDSD + — Round Scalar Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + +
Opcode*/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 3A 0B /r ib ROUNDSD xmm1, xmm2/m64, imm8RMIV/VSSE4_1Round the low packed double precision floating-point value in xmm2/m64 and place the result in xmm1. The rounding mode is determined by imm8.
VEX.LIG.66.0F3A.WIG 0B /r ib VROUNDSD xmm1, xmm2, xmm3/m64, imm8RVMIV/VAVXRound the low packed double precision floating-point value in xmm3/m64 and place the result in xmm1. The rounding mode is determined by imm8. Upper packed double precision floating-point value (bits[127:64]) from xmm2 is copied to xmm1[127:64].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMIModRM:reg (w)ModRM:r/m (r)imm8N/A
RVMIModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

Round the double precision floating-point value in the lower qword of the source operand (second operand) using the rounding mode specified in the immediate operand (third operand) and place the result in the destination operand (first operand). The rounding process rounds a double precision floating-point input to an integer value and returns the integer result as a double precision floating-point value in the lowest position. The upper double precision floating-point value in the destination is retained.

+

The immediate operand specifies control fields for the rounding operation; three bit fields are defined, as shown in Figure 4-24. Bit 3 of the immediate byte controls processor behavior for a precision exception, bit 2 selects the source of the rounding mode control, and bits 1:0 specify a non-sticky rounding-mode value (Table 4-18 lists the encoded values for the rounding-mode field).

+

The Precision Floating-Point Exception is signaled according to the immediate operand. If any source operand is an SNaN then it will be converted to a QNaN. If DAZ is set to ‘1 then denormals will be converted to zero before rounding.

+

128-bit Legacy SSE version: The first source operand and the destination operand are the same. Bits (MAXVL-1:64) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: Bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

Operation + ¶ +

+
IF (imm[2] = ‘1)
+    THEN // rounding mode is determined by MXCSR.RC
+        DEST[63:0] := ConvertDPFPToInteger_M(SRC[63:0]);
+    ELSE // rounding mode is determined by IMM8.RC
+        DEST[63:0] := ConvertDPFPToInteger_Imm(SRC[63:0]);
+FI;
+DEST[127:64] remains unchanged;
+
+

ROUNDSD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := RoundToInteger(SRC[63:0], ROUND_CONTROL)
+DEST[MAXVL-1:64] (Unmodified)
+
+

VROUNDSD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := RoundToInteger(SRC2[63:0], ROUND_CONTROL)
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
ROUNDSD __m128d _mm_round_sd(__m128d dst, __m128d s1, int iRoundMode);
+
+
ROUNDSD __m128d _mm_floor_sd(__m128d dst, __m128d s1);
+
+
ROUNDSD __m128d _mm_ceil_sd(__m128d dst, __m128d s1);
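A minimal sketch of the scalar merging behavior, assuming SSE4.1 and smmintrin.h: only the low double precision element of the second source is rounded, while the upper element is taken from the first source:

#include <smmintrin.h>   /* SSE4.1 */

__m128d truncate_low_lane(__m128d upper_src, __m128d low_src)
{
    /* result[63:0]   = low_src[63:0] rounded toward zero (truncated);
     * result[127:64] = upper_src[127:64], passed through unchanged. */
    return _mm_round_sd(upper_src, low_src, _MM_FROUND_TO_ZERO | _MM_FROUND_NO_EXC);
}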
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (signaled only if SRC = SNaN).

+

Precision (signaled only if imm[3] = 0; if imm[3] = 1, then the Precision Mask in the MXCSR is ignored and the precision exception is not signaled.)

+

Note that Denormal is not signaled by ROUNDSD.

+

Other Exceptions + ¶ +

+

See Table 2-20, “Type 3 Class Exception Conditions.”

diff --git a/x86/roundss.html b/x86/roundss.html new file mode 100644 index 0000000..f25e380 --- /dev/null +++ b/x86/roundss.html @@ -0,0 +1,101 @@ + +ROUNDSS + — Round Scalar Single Precision Floating-Point Values

ROUNDSS + — Round Scalar Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + +
Opcode*/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 3A 0A /r ib ROUNDSS xmm1, xmm2/m32, imm8RMIV/VSSE4_1Round the low packed single precision floating-point value in xmm2/m32 and place the result in xmm1. The rounding mode is determined by imm8.
VEX.LIG.66.0F3A.WIG 0A /r ib VROUNDSS xmm1, xmm2, xmm3/m32, imm8RVMIV/VAVXRound the low packed single precision floating-point value in xmm3/m32 and place the result in xmm1. The rounding mode is determined by imm8. Also, upper packed single precision floating-point values (bits[127:32]) from xmm2 are copied to xmm1[127:32].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMIModRM:reg (w)ModRM:r/m (r)imm8N/A
RVMIModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

Round the single precision floating-point value in the lowest dword of the source operand (second operand) using the rounding mode specified in the immediate operand (third operand) and place the result in the destination operand (first operand). The rounding process rounds a single precision floating-point input to an integer value and returns the result as a single precision floating-point value in the lowest position. The upper three single precision floating-point values in the destination are retained.

+

The immediate operand specifies control fields for the rounding operation; three bit fields are defined, as shown in Figure 4-24. Bit 3 of the immediate byte controls processor behavior for a precision exception, bit 2 selects the source of the rounding mode control, and bits 1:0 specify a non-sticky rounding-mode value (Table 4-18 lists the encoded values for the rounding-mode field).

+

The Precision Floating-Point Exception is signaled according to the immediate operand. If any source operand is an SNaN then it will be converted to a QNaN. If DAZ is set to ‘1 then denormals will be converted to zero before rounding.

+

128-bit Legacy SSE version: The first source operand and the destination operand are the same. Bits (MAXVL-1:32) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: Bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

Operation + ¶ +

+
IF (imm[2] = ‘1)
+    THEN // rounding mode is determined by MXCSR.RC
+        DEST[31:0] := ConvertSPFPToInteger_M(SRC[31:0]);
+    ELSE // rounding mode is determined by IMM8.RC
+        DEST[31:0] := ConvertSPFPToInteger_Imm(SRC[31:0]);
+FI;
+DEST[127:32] remains unchanged ;
+
+

ROUNDSS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := RoundToInteger(SRC[31:0], ROUND_CONTROL)
+DEST[MAXVL-1:32] (Unmodified)
+
+

VROUNDSS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := RoundToInteger(SRC2[31:0], ROUND_CONTROL)
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
ROUNDSS __m128 _mm_round_ss(__m128 dst, __m128 s1, int iRoundMode);
+
+
ROUNDSS __m128 _mm_floor_ss(__m128 dst, __m128 s1);
+
+
ROUNDSS __m128 _mm_ceil_ss(__m128 dst, __m128 s1);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (signaled only if SRC = SNaN).

+

Precision (signaled only if imm[3] = 0; if imm[3] = 1, then the Precision Mask in the MXCSR is ignored and the precision exception is not signaled.)

+

Note that Denormal is not signaled by ROUNDSS.

+

Other Exceptions + ¶ +

+

See Table 2-20, “Type 3 Class Exception Conditions.”

diff --git a/x86/rsm.html b/x86/rsm.html new file mode 100644 index 0000000..e0f92f1 --- /dev/null +++ b/x86/rsm.html @@ -0,0 +1,91 @@ + +RSM + — Resume From System Management Mode

RSM + — Resume From System Management Mode

+ + + + + + + + + + + + + + + +
Opcode*InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F AARSMZOValidValidResume operation of interrupted program.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Returns program control from system management mode (SMM) to the application program or operating-system procedure that was interrupted when the processor received an SMM interrupt. The processor’s state is restored from the dump created upon entering SMM. If the processor detects invalid state information during state restoration, it enters the shutdown state. The following invalid information can cause a shutdown:

+
    +
  • Any reserved bit of CR4 is set to 1.
  • Any illegal combination of bits in CR0, such as (PG=1 and PE=0) or (NW=1 and CD=0).
  • (Intel Pentium and Intel486 processors only.) The value stored in the state dump base field is not a 32-KByte aligned address.
+

The contents of the model-specific registers are not affected by a return from SMM.

+

The SMM state map used by RSM supports resuming processor context for non-64-bit modes and 64-bit mode.

+

See Chapter 32, “System Management Mode,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3C, for more information about SMM and the behavior of the RSM instruction.

+

Operation + ¶ +

+
ReturnFromSMM;
+IF (IA-32e mode supported) or (CPUID DisplayFamily_DisplayModel = 06H_0CH )
+    THEN
+        ProcessorState := Restore(SMMDump(IA-32e SMM STATE MAP));
+    Else
+        ProcessorState := Restore(SMMDump(Non-32-Bit-Mode SMM STATE MAP));
+FI
+
+

Flags Affected + ¶ +

+

All.

+

Protected Mode Exceptions + ¶ +

+ + + + + +
#UDIf an attempt is made to execute this instruction when the processor is not in SMM.
If the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/rsqrtps.html b/x86/rsqrtps.html new file mode 100644 index 0000000..7d91eb2 --- /dev/null +++ b/x86/rsqrtps.html @@ -0,0 +1,114 @@ + +RSQRTPS + — Compute Reciprocals of Square Roots of Packed Single Precision Floating-PointValues

RSQRTPS + — Compute Reciprocals of Square Roots of Packed Single Precision Floating-PointValues

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode*/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 52 /r RSQRTPS xmm1, xmm2/m128RMV/VSSEComputes the approximate reciprocals of the square roots of the packed single precision floating-point values in xmm2/m128 and stores the results in xmm1.
VEX.128.0F.WIG 52 /r VRSQRTPS xmm1, xmm2/m128RMV/VAVXComputes the approximate reciprocals of the square roots of packed single precision values in xmm2/mem and stores the results in xmm1.
VEX.256.0F.WIG 52 /r VRSQRTPS ymm1, ymm2/m256RMV/VAVXComputes the approximate reciprocals of the square roots of packed single precision values in ymm2/mem and stores the results in ymm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Performs a SIMD computation of the approximate reciprocals of the square roots of the four packed single precision floating-point values in the source operand (second operand) and stores the packed single precision floating-point results in the destination operand. The source operand can be an XMM register or a 128-bit memory location. The destination operand is an XMM register. See Figure 10-5 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for an illustration of a SIMD single precision floating-point operation.

+

The relative error for this approximation is:

+

|Relative Error| ≤ 1.5 ∗ 2−12

+

The RSQRTPS instruction is not affected by the rounding control bits in the MXCSR register. When a source value is a 0.0, an ∞ of the sign of the source value is returned. A denormal source value is treated as a 0.0 (of the same sign). When a source value is a negative value (other than −0.0), a floating-point indefinite is returned. When a source value is an SNaN or QNaN, the SNaN is converted to a QNaN or the source QNaN is returned.

+

In 64-bit mode, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

128-bit Legacy SSE version: The second source can be an XMM register or an 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding YMM register destination are unmodified.

+

VEX.128 encoded version: the first source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding YMM register destination are zeroed.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+

Note: In VEX-encoded versions, VEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Operation + ¶ +

+

RSQRTPS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := APPROXIMATE(1/SQRT(SRC[31:0]))
+DEST[63:32] := APPROXIMATE(1/SQRT(SRC[63:32]))
+DEST[95:64] := APPROXIMATE(1/SQRT(SRC[95:64]))
+DEST[127:96] := APPROXIMATE(1/SQRT(SRC[127:96]))
+DEST[MAXVL-1:128] (Unmodified)
+
+

VRSQRTPS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := APPROXIMATE(1/SQRT(SRC[31:0]))
+DEST[63:32] := APPROXIMATE(1/SQRT(SRC[63:32]))
+DEST[95:64] := APPROXIMATE(1/SQRT(SRC[95:64]))
+DEST[127:96] := APPROXIMATE(1/SQRT(SRC[127:96]))
+DEST[MAXVL-1:128] := 0
+
+

VRSQRTPS (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := APPROXIMATE(1/SQRT(SRC[31:0]))
+DEST[63:32] := APPROXIMATE(1/SQRT(SRC[63:32]))
+DEST[95:64] := APPROXIMATE(1/SQRT(SRC[95:64]))
+DEST[127:96] := APPROXIMATE(1/SQRT(SRC[127:96]))
+DEST[159:128] := APPROXIMATE(1/SQRT(SRC[159:128]))
+DEST[191:160] := APPROXIMATE(1/SQRT(SRC[191:160]))
+DEST[223:192] := APPROXIMATE(1/SQRT(SRC[223:192]))
+DEST[255:224] := APPROXIMATE(1/SQRT(SRC[255:224]))
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
RSQRTPS __m128 _mm_rsqrt_ps(__m128 a)
+
+
RSQRTPS __m256 _mm256_rsqrt_ps (__m256 a);
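Because the estimate is only accurate to about 12 bits, a common follow-up, shown here as a sketch rather than required usage, is one Newton-Raphson refinement step, which roughly doubles the number of accurate bits:

#include <xmmintrin.h>   /* SSE: _mm_rsqrt_ps and friends */

/* Refined reciprocal square root: x1 = x0 * (1.5 - 0.5 * a * x0 * x0),
 * where x0 is the RSQRTPS hardware estimate of 1/sqrt(a). */
static inline __m128 rsqrt_refined_ps(__m128 a)
{
    const __m128 half = _mm_set1_ps(0.5f);
    const __m128 three_halves = _mm_set1_ps(1.5f);
    __m128 x0 = _mm_rsqrt_ps(a);
    __m128 x0_sq = _mm_mul_ps(x0, x0);
    return _mm_mul_ps(x0, _mm_sub_ps(three_halves,
                                     _mm_mul_ps(half, _mm_mul_ps(a, x0_sq))));
}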
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.vvvv ≠ 1111B.
diff --git a/x86/rsqrtss.html b/x86/rsqrtss.html new file mode 100644 index 0000000..13a216b --- /dev/null +++ b/x86/rsqrtss.html @@ -0,0 +1,89 @@ + +RSQRTSS + — Compute Reciprocal of Square Root of Scalar Single Precision Floating-Point Value

RSQRTSS + — Compute Reciprocal of Square Root of Scalar Single Precision Floating-Point Value

+ + + + + + + + + + + + + + + + + + + +
Opcode*/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 52 /r RSQRTSS xmm1, xmm2/m32RMV/VSSEComputes the approximate reciprocal of the square root of the low single precision floating-point value in xmm2/m32 and stores the results in xmm1.
VEX.LIG.F3.0F.WIG 52 /r VRSQRTSS xmm1, xmm2, xmm3/m32RVMV/VAVXComputes the approximate reciprocal of the square root of the low single precision floating-point value in xmm3/m32 and stores the results in xmm1. Also, upper single precision floating-point values (bits[127:32]) from xmm2 are copied to xmm1[127:32].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
RVMModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Computes an approximate reciprocal of the square root of the low single precision floating-point value in the source operand (second operand) and stores the single precision floating-point result in the destination operand. The source operand can be an XMM register or a 32-bit memory location. The destination operand is an XMM register. The three high-order doublewords of the destination operand remain unchanged. See Figure 10-6 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for an illustration of a scalar single precision floating-point operation.

+

The relative error for this approximation is:

+

|Relative Error| ≤ 1.5 ∗ 2−12

+

The RSQRTSS instruction is not affected by the rounding control bits in the MXCSR register. When a source value is a 0.0, an ∞ of the sign of the source value is returned. A denormal source value is treated as a 0.0 (of the same sign). When a source value is a negative value (other than −0.0), a floating-point indefinite is returned. When a source value is an SNaN or QNaN, the SNaN is converted to a QNaN or the source QNaN is returned.

+

In 64-bit mode, using a REX prefix in the form of REX.R permits this instruction to access additional registers (XMM8-XMM15).

+

128-bit Legacy SSE version: The first source operand and the destination operand are the same. Bits (MAXVL-1:32) of the corresponding YMM destination register remain unchanged.

+

VEX.128 encoded version: Bits (MAXVL-1:128) of the destination YMM register are zeroed.

+

Operation + ¶ +

+

RSQRTSS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := APPROXIMATE(1/SQRT(SRC2[31:0]))
+DEST[MAXVL-1:32] (Unmodified)
+
+

VRSQRTSS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := APPROXIMATE(1/SQRT(SRC2[31:0]))
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
RSQRTSS __m128 _mm_rsqrt_ss(__m128 a)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-22, “Type 5 Class Exception Conditions.”

diff --git a/x86/rstorssp.html b/x86/rstorssp.html new file mode 100644 index 0000000..f316be9 --- /dev/null +++ b/x86/rstorssp.html @@ -0,0 +1,171 @@ + +RSTORSSP + — Restore Saved Shadow Stack Pointer

RSTORSSP + — Restore Saved Shadow Stack Pointer

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 01 /5 (mod!=11, /5, memory only) RSTORSSP m64MV/VCET_SSRestore SSP.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r, w)N/AN/AN/A
+

Description + ¶ +

+

Restores SSP from the shadow-stack-restore token pointed to by m64. If the SSP restore was successful then the instruction replaces the shadow-stack-restore token with a previous-ssp token. The instruction sets the CF flag to indicate whether the SSP address recorded in the shadow-stack-restore token that was processed was 4 byte aligned, i.e., whether an alignment hole was created when the restore-shadow-stack token was pushed on this shadow stack.

+

Following RSTORSSP if a restore-shadow-stack token needs to be saved on the previous shadow stack, use the SAVEPREVSSP instruction.

+

If pushing a restore-shadow-stack token on the previous shadow stack is not required, the previous-ssp token can be popped using the INCSSPQ instruction. If the CF flag was set to indicate presence of an alignment hole, an additional INCSSPD instruction is needed to advance the SSP past the alignment hole.
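A rough C sketch of the restore-token validation spelled out in the Operation section below (the layout and checks mirror that pseudocode; the helper is purely illustrative and cannot actually touch shadow-stack memory from C):

#include <stdbool.h>
#include <stdint.h>

/* Restore-token layout (per the Operation below): bit 0 holds the L flag (token was
 * created in 64-bit mode), bit 1 must be 0 in a shadow-stack-restore token, and the
 * remaining bits record the SSP at the time the token was pushed. */
static bool restore_token_is_valid(uint64_t token, uint64_t token_linear_addr, bool in_64bit_mode)
{
    if ((token & 0x3) != (in_64bit_mode ? 1u : 0u))
        return false;                              /* mode mismatch, or bit 1 set */
    if (!in_64bit_mode && (token >> 32) != 0)
        return false;                              /* legacy/compatibility mode: SSP must be below 4G */
    uint64_t top = (token & ~(uint64_t)0x1) - 8;   /* step back over the token slot */
    top &= ~(uint64_t)0x7;                         /* skip a possible 4-byte alignment hole */
    return top == token_linear_addr;               /* must land on the token's own address */
}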

+

Operation + ¶ +

+
IF CPL = 3
+        IF (CR4.CET & IA32_U_CET.SH_STK_EN) = 0
+            THEN #UD; FI;
+ELSE
+        IF (CR4.CET & IA32_S_CET.SH_STK_EN) = 0
+            THEN #UD; FI;
+FI;
+SSP_LA = Linear_Address(mem operand)
+IF SSP_LA not aligned to 8 bytes
+        THEN #GP(0); FI;
+previous_ssp_token = SSP | (IA32_EFER.LMA AND CS.L) | 0x02
+Start Atomic Execution
+restore_ssp_token = Locked shadow_stack_load 8 bytes from SSP_LA
+fault = 0
+IF ((restore_ssp_token & 0x03) != (IA32_EFER.LMA & CS.L))
+        THEN fault = 1; FI; (* If L flag in token does not match IA32_EFER.LMA & CS.L or bit 1 is not 0 *)
+IF ((IA32_EFER.LMA AND CS.L) = 0 AND restore_ssp_token[63:32] != 0)
+        THEN fault = 1; FI; (* If compatibility/legacy mode and SSP to be restored not below 4G *)
+TMP = restore_ssp_token & ~0x01
+TMP = (TMP - 8)
+TMP = TMP & ~0x07
+IF TMP != SSP_LA
+        THEN fault = 1; FI; (* If address in token does not match the requested top of stack *)
+TMP = (fault == 0) ? previous_ssp_token : restore_ssp_token
+shadow_stack_store 8 bytes of TMP to SSP_LA and release lock
+End Atomic Execution
+IF fault == 1
+    THEN #CP(RSTORSSP); FI;
+SSP = SSP_LA
+// Set the CF if the SSP in the restore token was 4 byte aligned, i.e., there is an alignment hole
+RFLAGS.CF = (restore_ssp_token & 0x04) ? 1 : 0;
+RFLAGS.ZF,PF,AF,OF,SF := 0;
+
+

Flags Affected + ¶ +

+

CF is set to indicate if the shadow stack pointer in the restore token was 4 byte aligned, else it is cleared. ZF, PF, AF, OF, and SF are cleared.

+

C/C++ Compiler Intrinsic Equivalent + ¶ +

+
RSTORSSP void _rstorssp(void *);
+
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If CR4.CET = 0.
IF CPL = 3 and IA32_U_CET.SH_STK_EN = 0.
IF CPL < 3 and IA32_S_CET.SH_STK_EN = 0.
#GP(0)If linear address of memory operand not 8 byte aligned.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If destination is located in a non-writeable segment.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#CP(rstorssp)If L bit in token does not match (IA32_EFER.LMA & CS.L).
If address in token does not match linear address of memory operand.
If in 32-bit or compatibility mode and the address in token is not below 4G.
#PF(fault-code)If a page fault occurs.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe RSTORSSP instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe RSTORSSP instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

Same as protected mode exceptions.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If CR4.CET = 0.
If CPL = 3 and IA32_U_CET.SH_STK_EN = 0.
If CPL < 3 and IA32_S_CET.SH_STK_EN = 0.
#GP(0)If linear address of memory operand not 8 byte aligned.
If a memory address is in a non-canonical form.
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#CP(rstorssp)If L bit in token does not match (IA32_EFER.LMA & CS.L).
If address in token does not match linear address of memory operand.
#PF(fault-code)If a page fault occurs.
diff --git a/x86/sahf.html b/x86/sahf.html new file mode 100644 index 0000000..93f6934 --- /dev/null +++ b/x86/sahf.html @@ -0,0 +1,92 @@ + +SAHF + — Store AH Into Flags

SAHF + — Store AH Into Flags

+ +

Opcode1

+ + + + + + + + + + + + + + +
InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
9ESAHFZOInvalid*ValidLoads SF, ZF, AF, PF, and CF from AH into the EFLAGS register.
+
+

1. Valid in specific steppings. See Description section.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Loads the SF, ZF, AF, PF, and CF flags of the EFLAGS register with values from the corresponding bits in the AH register (bits 7, 6, 4, 2, and 0, respectively). Bits 1, 3, and 5 of register AH are ignored; the corresponding reserved bits (1, 3, and 5) in the EFLAGS register remain as shown in the “Operation” section below.

+

This instruction executes as described above in compatibility mode and legacy mode. It is valid in 64-bit mode only if CPUID.80000001H:ECX.LAHF-SAHF[bit 0] = 1.
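As a minimal sketch of the bit layout described above (an illustration using GCC/Clang inline assembly rather than any documented intrinsic; assumes CPUID.80000001H:ECX.LAHF-SAHF = 1 when run in 64-bit mode), the following loads a byte into the flags with SAHF and reads it back with LAHF; bits 1, 3, and 5 come back as 1, 0, and 0 regardless of the input:

#include <stdint.h>

/* Hedged sketch: SAHF then LAHF through AH (bits 15:8 of EAX). */
static uint8_t sahf_lahf_roundtrip(uint8_t flags_in)
{
    unsigned int eax = (unsigned int)flags_in << 8;   /* place value in AH */
    __asm__ volatile ("sahf\n\tlahf" : "+a" (eax) : : "cc");
    return (uint8_t)(eax >> 8);                       /* SF:ZF:0:AF:0:PF:1:CF */
}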

+

Operation + ¶ +

+
+IF 64-Bit Mode
+    THEN
+        IF CPUID.80000001H.ECX[0] = 1;
+            THEN
+                RFLAGS(SF:ZF:0:AF:0:PF:1:CF) := AH;
+            ELSE
+                #UD;
+        FI
+    ELSE
+        EFLAGS(SF:ZF:0:AF:0:PF:1:CF) := AH;
+FI;
+
+

Flags Affected + ¶ +

+

The SF, ZF, AF, PF, and CF flags are loaded with values from the AH register. Bits 1, 3, and 5 of the EFLAGS register are unaffected, with the values remaining 1, 0, and 0, respectively.

+

Protected Mode Exceptions + ¶ +

+

None.

+

Real-Address Mode Exceptions + ¶ +

+

None.

+

Virtual-8086 Mode Exceptions + ¶ +

+

None.

+

Compatibility Mode Exceptions + ¶ +

+

None.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + +
#UDIf CPUID.80000001H.ECX[0] = 0.
If the LOCK prefix is used.
diff --git a/x86/sal.sar.shl.shr.html b/x86/sal.sar.shl.shr.html new file mode 100644 index 0000000..d84c5c6 --- /dev/null +++ b/x86/sal.sar.shl.shr.html @@ -0,0 +1,633 @@ + +SAL/SAR/SHL/SHR + — Shift

SAL/SAR/SHL/SHR + — Shift

+ + + + +

Opcode1

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
D0 /4SAL r/m8, 1M1ValidValidMultiply r/m8 by 2, once.
REX + D0 /4SAL r/m82, 1M1ValidN.E.Multiply r/m8 by 2, once.
D2 /4SAL r/m8, CLMCValidValidMultiply r/m8 by 2, CL times.
REX + D2 /4SAL r/m82, CLMCValidN.E.Multiply r/m8 by 2, CL times.
C0 /4 ibSAL r/m8, imm8MIValidValidMultiply r/m8 by 2, imm8 times.
REX + C0 /4 ibSAL r/m82, imm8MIValidN.E.Multiply r/m8 by 2, imm8 times.
D1 /4SAL r/m16, 1M1ValidValidMultiply r/m16 by 2, once.
D3 /4SAL r/m16, CLMCValidValidMultiply r/m16 by 2, CL times.
C1 /4 ibSAL r/m16, imm8MIValidValidMultiply r/m16 by 2, imm8 times.
D1 /4SAL r/m32, 1M1ValidValidMultiply r/m32 by 2, once.
REX.W + D1 /4SAL r/m64, 1M1ValidN.E.Multiply r/m64 by 2, once.
D3 /4SAL r/m32, CLMCValidValidMultiply r/m32 by 2, CL times.
REX.W + D3 /4SAL r/m64, CLMCValidN.E.Multiply r/m64 by 2, CL times.
C1 /4 ibSAL r/m32, imm8MIValidValidMultiply r/m32 by 2, imm8 times.
REX.W + C1 /4 ibSAL r/m64, imm8MIValidN.E.Multiply r/m64 by 2, imm8 times.
D0 /7SAR r/m8, 1M1ValidValidSigned divide3 r/m8 by 2, once.
REX + D0 /7SAR r/m82, 1M1ValidN.E.Signed divide3 r/m8 by 2, once.
D2 /7SAR r/m8, CLMCValidValidSigned divide3 r/m8 by 2, CL times.
REX + D2 /7SAR r/m82, CLMCValidN.E.Signed divide3 r/m8 by 2, CL times.
C0 /7 ibSAR r/m8, imm8MIValidValidSigned divide3 r/m8 by 2, imm8 times.
REX + C0 /7 ibSAR r/m82, imm8MIValidN.E.Signed divide3 r/m8 by 2, imm8 times.
D1 /7SAR r/m16,1M1ValidValidSigned divide3 r/m16 by 2, once.
D3 /7SAR r/m16, CLMCValidValidSigned divide3 r/m16 by 2, CL times.
C1 /7 ibSAR r/m16, imm8MIValidValidSigned divide3 r/m16 by 2, imm8 times.
D1 /7SAR r/m32, 1M1ValidValidSigned divide3 r/m32 by 2, once.
REX.W + D1 /7SAR r/m64, 1M1ValidN.E.Signed divide3 r/m64 by 2, once.
D3 /7SAR r/m32, CLMCValidValidSigned divide3 r/m32 by 2, CL times.
REX.W + D3 /7SAR r/m64, CLMCValidN.E.Signed divide3 r/m64 by 2, CL times.
C1 /7 ibSAR r/m32, imm8MIValidValidSigned divide3 r/m32 by 2, imm8 times.
REX.W + C1 /7 ibSAR r/m64, imm8MIValidN.E.Signed divide3 r/m64 by 2, imm8 times
D0 /4SHL r/m8, 1M1ValidValidMultiply r/m8 by 2, once.
REX + D0 /4SHL r/m82, 1M1ValidN.E.Multiply r/m8 by 2, once.
D2 /4SHL r/m8, CLMCValidValidMultiply r/m8 by 2, CL times.
REX + D2 /4SHL r/m82, CLMCValidN.E.Multiply r/m8 by 2, CL times.
C0 /4 ibSHL r/m8, imm8MIValidValidMultiply r/m8 by 2, imm8 times.
REX + C0 /4 ibSHL r/m82, imm8MIValidN.E.Multiply r/m8 by 2, imm8 times.
D1 /4SHL r/m16,1M1ValidValidMultiply r/m16 by 2, once.
D3 /4SHL r/m16, CLMCValidValidMultiply r/m16 by 2, CL times.
C1 /4 ibSHL r/m16, imm8MIValidValidMultiply r/m16 by 2, imm8 times.
D1 /4SHL r/m32,1M1ValidValidMultiply r/m32 by 2, once.
+

Opcode1

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
REX.W + D1 /4SHL r/m64,1M1ValidN.E.Multiply r/m64 by 2, once.
D3 /4SHL r/m32, CLMCValidValidMultiply r/m32 by 2, CL times.
REX.W + D3 /4SHL r/m64, CLMCValidN.E.Multiply r/m64 by 2, CL times.
C1 /4 ibSHL r/m32, imm8MIValidValidMultiply r/m32 by 2, imm8 times.
REX.W + C1 /4 ibSHL r/m64, imm8MIValidN.E.Multiply r/m64 by 2, imm8 times.
D0 /5SHR r/m8,1M1ValidValidUnsigned divide r/m8 by 2, once.
REX + D0 /5SHR r/m82, 1M1ValidN.E.Unsigned divide r/m8 by 2, once.
D2 /5SHR r/m8, CLMCValidValidUnsigned divide r/m8 by 2, CL times.
REX + D2 /5SHR r/m82, CLMCValidN.E.Unsigned divide r/m8 by 2, CL times.
C0 /5 ibSHR r/m8, imm8MIValidValidUnsigned divide r/m8 by 2, imm8 times.
REX + C0 /5 ibSHR r/m82, imm8MIValidN.E.Unsigned divide r/m8 by 2, imm8 times.
D1 /5SHR r/m16, 1M1ValidValidUnsigned divide r/m16 by 2, once.
D3 /5SHR r/m16, CLMCValidValidUnsigned divide r/m16 by 2, CL times
C1 /5 ibSHR r/m16, imm8MIValidValidUnsigned divide r/m16 by 2, imm8 times.
D1 /5SHR r/m32, 1M1ValidValidUnsigned divide r/m32 by 2, once.
REX.W + D1 /5SHR r/m64, 1M1ValidN.E.Unsigned divide r/m64 by 2, once.
D3 /5SHR r/m32, CLMCValidValidUnsigned divide r/m32 by 2, CL times.
REX.W + D3 /5SHR r/m64, CLMCValidN.E.Unsigned divide r/m64 by 2, CL times.
C1 /5 ibSHR r/m32, imm8MIValidValidUnsigned divide r/m32 by 2, imm8 times.
REX.W + C1 /5 ibSHR r/m64, imm8MIValidN.E.Unsigned divide r/m64 by 2, imm8 times.
+
+

1. See the IA-32 Architecture Compatibility section below.

+

2. In 64-bit mode, r/m8 can not be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

3. Not the same form of division as IDIV; rounding is toward negative infinity.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
M1ModRM:r/m (r, w)1N/AN/A
MCModRM:r/m (r, w)CLN/AN/A
MIModRM:r/m (r, w)imm8N/AN/A
+

Description + ¶ +

+

Shifts the bits in the first operand (destination operand) to the left or right by the number of bits specified in the second operand (count operand). Bits shifted beyond the destination operand boundary are first shifted into the CF flag, then discarded. At the end of the shift operation, the CF flag contains the last bit shifted out of the destination operand.

+

The destination operand can be a register or a memory location. The count operand can be an immediate value or the CL register. The count is masked to 5 bits (or 6 bits with a 64-bit operand). The count range is limited to 0 to 31 (or 63 with a 64-bit operand). A special opcode encoding is provided for a count of 1.
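For example (a hedged sketch using GCC/Clang inline assembly, so that the hardware masking rather than C shift semantics is what executes; the function name is an assumption for illustration), a 32-bit SHL with CL = 33 actually shifts by 33 AND 31 = 1:

#include <stdint.h>

/* Hedged sketch: the shift count in CL is masked to 5 bits for a 32-bit operand. */
static uint32_t shl32_by_cl(uint32_t value, uint8_t count)
{
    __asm__ ("shll %%cl, %0" : "+r" (value) : "c" (count) : "cc");
    return value;
}
/* shl32_by_cl(1, 33) == 2, because the count is masked to 33 & 31 = 1. */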

+

The shift arithmetic left (SAL) and shift logical left (SHL) instructions perform the same operation; they shift the bits in the destination operand to the left (toward more significant bit locations). For each shift count, the most significant bit of the destination operand is shifted into the CF flag, and the least significant bit is cleared (see Figure 7-7 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1).

+

The shift arithmetic right (SAR) and shift logical right (SHR) instructions shift the bits of the destination operand to the right (toward less significant bit locations). For each shift count, the least significant bit of the destination operand is shifted into the CF flag, and the most significant bit is either set or cleared depending on the instruction type. The SHR instruction clears the most significant bit (see Figure 7-8 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1); the SAR instruction sets or clears the most significant bit to correspond to the sign (most significant bit) of the original value in the destination operand. In effect, the SAR instruction fills the empty bit position’s shifted value with the sign of the unshifted value (see Figure 7-9 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1).

+

The SAR and SHR instructions can be used to perform signed or unsigned division, respectively, of the destination operand by powers of 2. For example, using the SAR instruction to shift a signed integer 1 bit to the right divides the value by 2.

+

Using the SAR instruction to perform a division operation does not produce the same result as the IDIV instruction. The quotient from the IDIV instruction is rounded toward zero, whereas the “quotient” of the SAR instruction is rounded toward negative infinity. This difference is apparent only for negative numbers. For example, when the IDIV instruction is used to divide -9 by 4, the result is -2 with a remainder of -1. If the SAR instruction is used to shift -9 right by two bits, the result is -3 and the “remainder” is +3; however, the SAR instruction stores only the most significant bit of the remainder (in the CF flag).
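The worked example above can be reproduced in C (a hedged sketch; right-shifting a negative int is assumed to behave as an arithmetic shift, which is implementation-defined before C23 but universal on x86 compilers):

#include <stdio.h>

int main(void)
{
    printf("-9 / 4  = %d\n", -9 / 4);    /* IDIV-style: rounds toward zero, gives -2 */
    printf("-9 >> 2 = %d\n", -9 >> 2);   /* SAR-style: rounds toward -infinity, -3   */
    return 0;
}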

+

The OF flag is affected only on 1-bit shifts. For left shifts, the OF flag is set to 0 if the most-significant bit of the result is the same as the CF flag (that is, the top two bits of the original operand were the same); otherwise, it is set to 1. For the SAR instruction, the OF flag is cleared for all 1-bit shifts. For the SHR instruction, the OF flag is set to the most-significant bit of the original operand.

+

In 64-bit mode, the instruction’s default operation size is 32 bits and the mask width for CL is 5 bits. Using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). Using a REX prefix in the form of REX.W promotes operation to 64-bits and sets the mask width for CL to 6 bits. See the summary chart at the beginning of this section for encoding data and limits.

+

IA-32 Architecture Compatibility + ¶ +

+

The 8086 does not mask the shift count. However, all other IA-32 processors (starting with the Intel 286 processor) do mask the shift count to 5 bits, resulting in a maximum count of 31. This masking is done in all operating modes (including the virtual-8086 mode) to reduce the maximum execution time of the instructions.

+

Operation + ¶ +

+
IF OperandSize = 64
+    THEN
+        countMASK := 3FH;
+    ELSE
+        countMASK := 1FH;
+FI
+tempCOUNT := (COUNT AND countMASK);
+tempDEST := DEST;
+WHILE (tempCOUNT ≠ 0)
+DO
+    IF instruction is SAL or SHL
+        THEN
+            CF := MSB(DEST);
+        ELSE (* Instruction is SAR or SHR *)
+            CF := LSB(DEST);
+    FI;
+    IF instruction is SAL or SHL
+        THEN
+            DEST := DEST ∗ 2;
+        ELSE
+            IF instruction is SAR
+                THEN
+                    DEST := DEST / 2; (* Signed divide, rounding toward negative infinity *)
+                ELSE (* Instruction is SHR *)
+                    DEST := DEST / 2 ; (* Unsigned divide *)
+            FI;
+    FI;
+    tempCOUNT := tempCOUNT – 1;
+OD;
+(* Determine overflow for the various instructions *)
+IF (COUNT and countMASK) = 1
+    THEN
+        IF instruction is SAL or SHL
+            THEN
+                OF := MSB(DEST) XOR CF;
+            ELSE
+                IF instruction is SAR
+                    THEN
+                        OF := 0;
+                    ELSE (* Instruction is SHR *)
+                        OF := MSB(tempDEST);
+                FI;
+        FI;
+    ELSE IF (COUNT AND countMASK) = 0
+        THEN
+            All flags unchanged;
+        ELSE (* COUNT not 1 or 0 *)
+            OF := undefined;
+    FI;
+FI;
+
+

Flags Affected + ¶ +

+

The CF flag contains the value of the last bit shifted out of the destination operand; it is undefined for SHL and SHR instructions where the count is greater than or equal to the size (in bits) of the destination operand. The OF flag is affected only for 1-bit shifts (see “Description” above); otherwise, it is undefined. The SF, ZF, and PF flags are set according to the result. If the count is 0, the flags are not affected. For a non-zero count, the AF flag is undefined.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/sarx.shlx.shrx.html b/x86/sarx.shlx.shrx.html new file mode 100644 index 0000000..e4affb0 --- /dev/null +++ b/x86/sarx.shlx.shrx.html @@ -0,0 +1,121 @@ + +SARX/SHLX/SHRX + — Shift Without Affecting Flags

SARX/SHLX/SHRX + — Shift Without Affecting Flags

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
VEX.LZ.F3.0F38.W0 F7 /r SARX r32a, r/m32, r32bRMVV/VBMI2Shift r/m32 arithmetically right with count specified in r32b.
VEX.LZ.66.0F38.W0 F7 /r SHLX r32a, r/m32, r32bRMVV/VBMI2Shift r/m32 logically left with count specified in r32b.
VEX.LZ.F2.0F38.W0 F7 /r SHRX r32a, r/m32, r32bRMVV/VBMI2Shift r/m32 logically right with count specified in r32b.
VEX.LZ.F3.0F38.W1 F7 /r SARX r64a, r/m64, r64bRMVV/N.E.BMI2Shift r/m64 arithmetically right with count specified in r64b.
VEX.LZ.66.0F38.W1 F7 /r SHLX r64a, r/m64, r64bRMVV/N.E.BMI2Shift r/m64 logically left with count specified in r64b.
VEX.LZ.F2.0F38.W1 F7 /r SHRX r64a, r/m64, r64bRMVV/N.E.BMI2Shift r/m64 logically right with count specified in r64b.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMVModRM:reg (w)ModRM:r/m (r)VEX.vvvv (r)N/A
+

Description + ¶ +

+

Shifts the bits of the first source operand (the second operand) to the left or right by a COUNT value specified in the second source operand (the third operand). The result is written to the destination operand (the first operand).

+

The shift arithmetic right (SARX) and shift logical right (SHRX) instructions shift the bits of the destination operand to the right (toward less significant bit locations). SARX keeps and propagates the most significant bit (sign bit) while shifting.

+

The logical shift left (SHLX) shifts the bits of the destination operand to the left (toward more significant bit locations).

+

This instruction is not supported in real mode and virtual-8086 mode. The operand size is always 32 bits if not in 64-bit mode. In 64-bit mode operand size 64 requires VEX.W1. VEX.W1 is ignored in non-64-bit modes. An attempt to execute this instruction with VEX.L not equal to 0 will cause #UD.

+

If the value specified in the second source operand exceeds OperandSize -1, the COUNT value is masked.

+

SARX,SHRX, and SHLX instructions do not update flags.
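Since there is no dedicated intrinsic, the usual way to reach these instructions from C is an ordinary variable-count shift compiled with BMI2 enabled (for example, -mbmi2). The sketch below reflects typical compiler behavior, not a guarantee; the function names are assumptions for illustration, and the arithmetic behavior of >> on a negative value is assumed.

#include <stdint.h>

/* Hedged sketch: with BMI2 enabled these usually compile to SHLX, SHRX, and
   SARX, since no flag updates are required. */
uint64_t shlx64(uint64_t x, uint64_t n) { return x << (n & 63); }
uint64_t shrx64(uint64_t x, uint64_t n) { return x >> (n & 63); }
int64_t  sarx64(int64_t  x, uint64_t n) { return x >> (n & 63); }   /* arithmetic shift assumed */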

+

Operation + ¶ +

+
TEMP := SRC1;
+IF VEX.W1 and CS.L = 1
+THEN
+    countMASK := 3FH;
+ELSE
+    countMASK := 1FH;
+FI
+COUNT := (SRC2 AND countMASK)
+DEST[OperandSize -1] = TEMP[OperandSize -1];
+DO WHILE (COUNT ≠ 0)
+    IF instruction is SHLX
+        THEN
+            DEST[] := DEST *2;
+        ELSE IF instruction is SHRX
+            THEN
+                DEST[] := DEST /2; //unsigned divide
+        ELSE // SARX
+                DEST[] := DEST /2; // signed divide, round toward negative infinity
+    FI;
+    COUNT := COUNT - 1;
+OD
+
+

Flags Affected + ¶ +

+

None.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
Auto-generated from high-level language.
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-29, “Type 13 Class Exception Conditions.”

diff --git a/x86/saveprevssp.html b/x86/saveprevssp.html new file mode 100644 index 0000000..a1f600f --- /dev/null +++ b/x86/saveprevssp.html @@ -0,0 +1,152 @@ + +SAVEPREVSSP + — Save Previous Shadow Stack Pointer

SAVEPREVSSP + — Save Previous Shadow Stack Pointer

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 01 EA (mod!=11, /5, RM=010) SAVEPREVSSPZOV/VCET_SSSave a restore-shadow-stack token on previous shadow stack.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Push a restore-shadow-stack token on the previous shadow stack at the next 8 byte aligned boundary. The previous SSP is obtained from the previous-ssp token at the top of the current shadow stack.

+

Operation + ¶ +

+
IF CPL = 3
+    IF (CR4.CET & IA32_U_CET.SH_STK_EN) = 0
+        THEN #UD; FI;
+ELSE
+    IF (CR4.CET & IA32_S_CET.SH_STK_EN) = 0
+        THEN #UD; FI;
+FI;
+IF SSP not aligned to 8 bytes
+    THEN #GP(0); FI;
+(* Pop the “previous-ssp” token from current shadow stack *)
+previous_ssp_token = ShadowStackPop8B(SSP)
+(* If the CF flag indicates there was a alignment hole on current shadow stack then pop that alignment hole *)
+(* Note that the alignment hole must be zero and can be present only when in legacy/compatibility mode *)
+IF RFLAGS.CF == 1 AND (IA32_EFER.LMA AND CS.L)
+    #GP(0)
+FI;
+IF RFLAGS.CF == 1
+    must_be_zero = ShadowStackPop4B(SSP)
+    IF must_be_zero != 0 THEN #GP(0)
+FI;
+(* Previous SSP token must have the bit 1 set *)
+IF ((previous_ssp_token & 0x02) == 0)
+    THEN #GP(0); (* bit 1 was 0 *)
+IF ((IA32_EFER.LMA AND CS.L) = 0 AND previous_ssp_token [63:32] != 0)
+THEN #GP(0); FI; (* If compatibility/legacy mode and SSP not in 4G *)
+(* Save Prev SSP from previous_ssp_token to the old shadow stack at next 8 byte aligned address *)
+old_SSP = previous_ssp_token & ~0x03
+temp := (old_SSP | (IA32_EFER.LMA & CS.L));
+Shadow_stack_store 4 bytes of 0 to (old_SSP - 4)
+old_SSP := old_SSP & ~0x07;
+Shadow_stack_store 8 bytes of temp to (old_SSP - 8)
+
+

Flags Affected + ¶ +

+

None.

+

C/C++ Compiler Intrinsic Equivalent + ¶ +

+
SAVEPREVSSP void _saveprevssp(void);
+
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If CR4.CET = 0.
IF CPL = 3 and IA32_U_CET.SH_STK_EN = 0.
IF CPL < 3 and IA32_S_CET.SH_STK_EN = 0.
#GP(0)If SSP not 8 byte aligned.
If alignment hole on shadow stack is not 0.
If bit 1 of the previous-ssp token is not set to 1.
If in 32-bit/compatibility mode and SSP recorded in previous-ssp token is beyond 4G.
#PF(fault-code)If a page fault occurs.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe SAVEPREVSSP instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe SAVEPREVSSP instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

Same as protected mode exceptions.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If CR4.CET = 0.
If CPL = 3 and IA32_U_CET.SH_STK_EN = 0.
If CPL < 3 and IA32_S_CET.SH_STK_EN = 0.
#GP(0)If SSP not 8 byte aligned.
If carry flag is set.
If bit 1 of the previous-ssp token is not set to 1.
#PF(fault-code)If a page fault occurs.
diff --git a/x86/sbb.html b/x86/sbb.html new file mode 100644 index 0000000..d7a205f --- /dev/null +++ b/x86/sbb.html @@ -0,0 +1,315 @@ + +SBB + — Integer Subtraction With Borrow

SBB + — Integer Subtraction With Borrow

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
1C ibSBB AL, imm8IValidValidSubtract with borrow imm8 from AL.
1D iwSBB AX, imm16IValidValidSubtract with borrow imm16 from AX.
1D idSBB EAX, imm32IValidValidSubtract with borrow imm32 from EAX.
REX.W + 1D idSBB RAX, imm32IValidN.E.Subtract with borrow sign-extended imm.32 to 64-bits from RAX.
80 /3 ibSBB r/m8, imm8MIValidValidSubtract with borrow imm8 from r/m8.
REX + 80 /3 ibSBB r/m81, imm8MIValidN.E.Subtract with borrow imm8 from r/m8.
81 /3 iwSBB r/m16, imm16MIValidValidSubtract with borrow imm16 from r/m16.
81 /3 idSBB r/m32, imm32MIValidValidSubtract with borrow imm32 from r/m32.
REX.W + 81 /3 idSBB r/m64, imm32MIValidN.E.Subtract with borrow sign-extended imm32 to 64-bits from r/m64.
83 /3 ibSBB r/m16, imm8MIValidValidSubtract with borrow sign-extended imm8 from r/m16.
83 /3 ibSBB r/m32, imm8MIValidValidSubtract with borrow sign-extended imm8 from r/m32.
REX.W + 83 /3 ibSBB r/m64, imm8MIValidN.E.Subtract with borrow sign-extended imm8 from r/m64.
18 /rSBB r/m8, r8MRValidValidSubtract with borrow r8 from r/m8.
REX + 18 /rSBB r/m81, r8MRValidN.E.Subtract with borrow r8 from r/m8.
19 /rSBB r/m16, r16MRValidValidSubtract with borrow r16 from r/m16.
19 /rSBB r/m32, r32MRValidValidSubtract with borrow r32 from r/m32.
REX.W + 19 /rSBB r/m64, r64MRValidN.E.Subtract with borrow r64 from r/m64.
1A /rSBB r8, r/m8RMValidValidSubtract with borrow r/m8 from r8.
REX + 1A /rSBB r81, r/m81RMValidN.E.Subtract with borrow r/m8 from r8.
1B /rSBB r16, r/m16RMValidValidSubtract with borrow r/m16 from r16.
1B /rSBB r32, r/m32RMValidValidSubtract with borrow r/m32 from r32.
REX.W + 1B /rSBB r64, r/m64RMValidN.E.Subtract with borrow r/m64 from r64.
+
+

1. In 64-bit mode, r/m8 can not be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
IAL/AX/EAX/RAXimm8/16/32N/AN/A
MIModRM:r/m (w)imm8/16/32N/AN/A
MRModRM:r/m (w)ModRM:reg (r)N/AN/A
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Adds the source operand (second operand) and the carry (CF) flag, and subtracts the result from the destination operand (first operand). The result of the subtraction is stored in the destination operand. The destination operand can be a register or a memory location; the source operand can be an immediate, a register, or a memory location.

+

(However, two memory operands cannot be used in one instruction.) The state of the CF flag represents a borrow from a previous subtraction.

+

When an immediate value is used as an operand, it is sign-extended to the length of the destination operand format.

+

The SBB instruction does not distinguish between signed or unsigned operands. Instead, the processor evaluates the result for both data types and sets the OF and CF flags to indicate a borrow in the signed or unsigned result, respectively. The SF flag indicates the sign of the signed result.

+

The SBB instruction is usually executed as part of a multibyte or multiword subtraction in which a SUB instruction is followed by a SBB instruction.
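For example, the multiword pattern described above (SUB on the low part, SBB on the high part) can be written with the _subborrow_u64 intrinsic listed later on this page. This is a hedged sketch: it assumes a GCC/Clang/MSVC toolchain where the intrinsic computes dst := src1 - src2 - borrow, and the <x86intrin.h> header name and sub128 function are assumptions for illustration.

#include <stdint.h>
#include <x86intrin.h>

/* Hedged sketch: 128-bit subtraction out := a - b built from two 64-bit limbs. */
static void sub128(const uint64_t a[2], const uint64_t b[2], uint64_t out[2])
{
    unsigned long long lo, hi;
    unsigned char borrow = _subborrow_u64(0, a[0], b[0], &lo);   /* SUB-like step */
    _subborrow_u64(borrow, a[1], b[1], &hi);                     /* SBB-like step */
    out[0] = lo;
    out[1] = hi;
}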

+

This instruction can be used with a LOCK prefix to allow the instruction to be executed atomically.

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). Using a REX prefix in the form of REX.W promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+

Operation + ¶ +

+
DEST := (DEST – (SRC + CF));
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
SBB extern unsigned char _subborrow_u8(unsigned char c_in, unsigned char src1, unsigned char src2, unsigned char *diff_out);
+
+
SBB extern unsigned char _subborrow_u16(unsigned char c_in, unsigned short src1, unsigned short src2, unsigned short *diff_out);
+
+
SBB extern unsigned char _subborrow_u32(unsigned char c_in, unsigned int src1, unsigned int src2, unsigned int *diff_out);
+
+
SBB extern unsigned char _subborrow_u64(unsigned char c_in, unsigned __int64 src1, unsigned __int64 src2, unsigned __int64 *diff_out);
+
+

Flags Affected + ¶ +

+

The OF, SF, ZF, AF, PF, and CF flags are set according to the result.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
diff --git a/x86/scas.scasb.scasw.scasd.html b/x86/scas.scasb.scasw.scasd.html new file mode 100644 index 0000000..6dea6b8 --- /dev/null +++ b/x86/scas.scasb.scasw.scasd.html @@ -0,0 +1,247 @@ + +SCAS/SCASB/SCASW/SCASD + — Scan String

SCAS/SCASB/SCASW/SCASD + — Scan String

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
AESCAS m8ZOValidValidCompare AL with byte at ES:(E)DI or RDI, then set status flags.1
AFSCAS m16ZOValidValidCompare AX with word at ES:(E)DI or RDI, then set status flags.1
AFSCAS m32ZOValidValidCompare EAX with doubleword at ES:(E)DI or RDI, then set status flags.1
REX.W + AFSCAS m64ZOValidN.E.Compare RAX with quadword at RDI or EDI then set status flags.
AESCASBZOValidValidCompare AL with byte at ES:(E)DI or RDI then set status flags.1
AFSCASWZOValidValidCompare AX with word at ES:(E)DI or RDI then set status flags.1
AFSCASDZOValidValidCompare EAX with doubleword at ES:(E)DI or RDI then set status flags.1
REX.W + AFSCASQZOValidN.E.Compare RAX with quadword at RDI or EDI then set status flags.
+
+

1. In 64-bit mode, only 64-bit (RDI) and 32-bit (EDI) address sizes are supported. In non-64-bit mode, only 32-bit (EDI) and 16-bit (DI) address sizes are supported.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

In non-64-bit modes and in default 64-bit mode: this instruction compares a byte, word, doubleword or quadword specified using a memory operand with the value in AL, AX, or EAX. It then sets status flags in EFLAGS recording the results. The memory operand address is read from ES:(E)DI register (depending on the address-size attribute of the instruction and the current operational mode). Note that ES cannot be overridden with a segment override prefix.

+

At the assembly-code level, two forms of this instruction are allowed: the explicit-operand form and the no-operands form. The explicit-operand form (specified using the SCAS mnemonic) allows a memory operand to be specified explicitly. The memory operand must be a symbol that indicates the size and location of the operand value. The register operand is then automatically selected to match the size of the memory operand (AL register for byte comparisons, AX for word comparisons, EAX for doubleword comparisons). The explicit-operand form is provided to allow documentation. Note that the documentation provided by this form can be misleading. That is, the memory operand symbol must specify the correct type (size) of the operand (byte, word, or doubleword) but it does not have to specify the correct location. The location is always specified by ES:(E)DI.

+

The no-operands form of the instruction uses a short form of SCAS. Again, ES:(E)DI is assumed to be the memory operand and AL, AX, or EAX is assumed to be the register operand. The size of operands is selected by the mnemonic: SCASB (byte comparison), SCASW (word comparison), or SCASD (doubleword comparison).

+

After the comparison, the (E)DI register is incremented or decremented automatically according to the setting of the DF flag in the EFLAGS register. If the DF flag is 0, the (E)DI register is incremented; if the DF flag is 1, the (E)DI register is decremented. The register is incremented or decremented by 1 for byte operations, by 2 for word operations, and by 4 for doubleword operations.

+

SCAS, SCASB, SCASW, SCASD, and SCASQ can be preceded by the REP prefix for block comparisons of ECX bytes, words, doublewords, or quadwords. Often, however, these instructions will be used in a LOOP construct that takes some action based on the setting of status flags. See “REP/REPE/REPZ /REPNE/REPNZ—Repeat String Operation Prefix” in this chapter for a description of the REP prefix.

+

In 64-bit mode, the instruction’s default address size is 64 bits; 32-bit address size is supported using the prefix 67H. Using a REX prefix in the form of REX.W promotes operation on doubleword operand to 64 bits. The 64-bit no-operand mnemonic is SCASQ. Address of the memory operand is specified in either RDI or EDI, and AL/AX/EAX/RAX may be used as the register operand. After a comparison, the destination register is incremented or decremented by the current operand size (depending on the value of the DF flag). See the summary chart at the beginning of this section for encoding data and limits.
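As an illustration of the REP-prefixed form described above, the following is a hedged sketch of a string-length routine built on REPNE SCASB, written with GCC/Clang extended inline assembly for x86-64. The function name and constraints are assumptions for illustration; DF is assumed clear, as the ABI guarantees at function entry.

#include <stddef.h>

static size_t scasb_strlen(const char *s)
{
    const char *p = s;
    size_t count = (size_t)-1;              /* RCX := maximum count          */
    __asm__ volatile ("repne scasb"
                      : "+D" (p), "+c" (count)
                      : "a" (0)             /* AL := 0, the byte to scan for */
                      : "cc", "memory");
    return (size_t)-1 - count - 1;          /* bytes scanned before the NUL  */
}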

+

Operation + ¶ +

+

Non-64-bit Mode: + ¶ +

+
IF (Byte comparison)
+    THEN
+        temp := AL − SRC;
+        SetStatusFlags(temp);
+        IF DF = 0
+                THEN (E)DI := (E)DI + 1;
+                ELSE (E)DI := (E)DI – 1; FI;
+    ELSE IF (Word comparison)
+        THEN
+            temp := AX − SRC;
+            SetStatusFlags(temp);
+            IF DF = 0
+                THEN (E)DI := (E)DI + 2;
+                ELSE (E)DI := (E)DI – 2; FI;
+        FI;
+    ELSE IF (Doubleword comparison)
+        THEN
+            temp := EAX – SRC;
+            SetStatusFlags(temp);
+            IF DF = 0
+                THEN (E)DI := (E)DI + 4;
+                ELSE (E)DI := (E)DI – 4; FI;
+        FI;
+FI;
+
+

64-bit Mode: + ¶ +

+
IF (Byte comparison)
+    THEN
+        temp := AL − SRC;
+        SetStatusFlags(temp);
+        IF DF = 0
+                THEN (R|E)DI := (R|E)DI + 1;
+                ELSE (R|E)DI := (R|E)DI – 1; FI;
+    ELSE IF (Word comparison)
+        THEN
+            temp := AX − SRC;
+            SetStatusFlags(temp);
+            IF DF = 0
+                THEN (R|E)DI := (R|E)DI + 2;
+                ELSE (R|E)DI := (R|E)DI – 2; FI;
+        FI;
+    ELSE IF (Doubleword comparison)
+        THEN
+            temp := EAX – SRC;
+            SetStatusFlags(temp);
+            IF DF = 0
+                THEN (R|E)DI := (R|E)DI + 4;
+                ELSE (R|E)DI := (R|E)DI – 4; FI;
+        FI;
+    ELSE IF (Quadword comparison using REX.W )
+        THEN
+            temp := RAX − SRC;
+            SetStatusFlags(temp);
+            IF DF = 0
+                THEN (R|E)DI := (R|E)DI + 8;
+                ELSE (R|E)DI := (R|E)DI – 8;
+            FI;
+    FI;
+FI;
+
+

Flags Affected + ¶ +

+

The OF, SF, ZF, AF, PF, and CF flags are set according to the temporary result of the comparison.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the limit of the ES segment.
If the ES register contains a NULL segment selector.
If an illegal memory operand effective address in the ES segment is given.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/senduipi.html b/x86/senduipi.html new file mode 100644 index 0000000..537df6a --- /dev/null +++ b/x86/senduipi.html @@ -0,0 +1,151 @@ + +SENDUIPI + — Send User Interprocessor Interrupt

SENDUIPI + — Send User Interprocessor Interrupt

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F C7 /6 SENDUIPI regAV/IUINTRSend interprocessor user interrupt.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r)N/AN/AN/A
+

Description + ¶ +

+

The SENDUIPI instruction sends the user interprocessor interrupt (IPI) indicated by its register operand. (The operand always has 64 bits; operand-size overrides such as the prefix 66 are ignored.)

+

SENDUIPI uses a data structure called the user-interrupt target table (UITT). This table is located at the linear address UITTADDR (in the IA32_UINTR_TT MSR); it comprises UITTSZ+1 16-byte entries, where UITTSZ = IA32_UINTR_MISC[31:0]. SENDUIPI uses the UITT entry (UITTE) indexed by the instruction's register operand. Each UITTE has the following format:

+
    +
  • Bit 0: V, a valid bit.
  • +
  • Bits 7:1 are reserved and must be 0.
  • +
  • Bits 15:8: UV, the user-interrupt vector (in the range 0–63, so bits 15:14 must be 0).
  • +
  • Bits 63:16 are reserved.
  • +
  • Bits 127:64: UPIDADDR, the linear address of a user posted-interrupt descriptor (UPID). (UPIDADDR is 64-byte aligned, so bits 69:64 of each UITTE must be 0.)
+

Each UPID has the following format (fields and bits not referenced are reserved):

+
    +
  • Bit 0 (ON) indicates an outstanding notification. If this bit is set, there is a notification outstanding for one or more user interrupts in PIR.
  • +
  • Bit 1 (SN) indicates that notifications should be suppressed. If this bit is set, agents (including SENDUIPI) should not send notifications when posting user interrupts in this descriptor.
  • +
  • Bits 23:16 (NV) contain the notification vector. This is used by agents sending user-interrupt notifications (including SENDUIPI).
  • +
  • Bits 63:32 (NDST) contain the notification destination. This is the target physical APIC ID (in xAPIC mode, bits 47:40 are the 8-bit APIC ID; in x2APIC mode, the entire field forms the 32-bit APIC ID).
  • +
  • Bits 127:64 (PIR) contain posted-interrupt requests. There is one bit for each user-interrupt vector. There is a user-interrupt request for a vector if the corresponding bit is 1.
+

Although SENDUIPI may be executed at any privilege level, all of the instruction’s memory accesses (to a UITTE and a UPID) are performed with supervisor privilege.

+

SENDUIPI sends a user interrupt by posting a user interrupt with vector UV in the UPID referenced by UPIDADDR and then sending, as an ordinary IPI, any notification interrupt specified in that UPID.
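As a hedged illustration only (these bit-field structs are an assumption made for readability, not a programming interface defined here), the 16-byte UITT entry and UPID layouts described above can be sketched in C as follows:

#include <stdint.h>

struct uitte {
    uint64_t valid     : 1;   /* bit 0: V                           */
    uint64_t reserved0 : 7;   /* bits 7:1, must be 0                */
    uint64_t uv        : 8;   /* bits 15:8: user-interrupt vector   */
    uint64_t reserved1 : 48;  /* bits 63:16                         */
    uint64_t upidaddr;        /* bits 127:64: UPID linear address,
                                 64-byte aligned                    */
};

struct upid {
    uint64_t on        : 1;   /* bit 0: outstanding notification    */
    uint64_t sn        : 1;   /* bit 1: suppress notifications      */
    uint64_t reserved0 : 14;
    uint64_t nv        : 8;   /* bits 23:16: notification vector    */
    uint64_t reserved1 : 8;
    uint64_t ndst      : 32;  /* bits 63:32: notification dest.     */
    uint64_t pir;             /* bits 127:64: one bit per vector    */
};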

+

Operation + ¶ +

+
IF reg > UITTSZ;
+    THEN #GP(0);
+FI;
+read tempUITTE from 16 bytes at UITTADDR+ (reg « 4);
+IF tempUITTE.V = 0 or tempUITTE sets any reserved bit
+    THEN #GP(0);
+FI;
+read tempUPID from 16 bytes at tempUITTE.UPIDADDR;// under lock
+IF tempUPID sets any reserved bits or bits that must be zero
+    THEN #GP(0); // release lock
+FI;
+tempUPID.PIR[tempUITTE.UV] := 1;
+IF tempUPID.SN = tempUPID.ON = 0
+    THEN
+        tempUPID.ON := 1;
+        sendNotify := 1;
+    ELSE sendNotify := 0;
+FI;
+write tempUPID to 16 bytes at tempUITTE.UPIDADDR;// release lock
+IF sendNotify = 1
+    THEN
+        IF local APIC is in x2APIC mode
+            THEN send ordinary IPI with vector tempUPID.NV
+                to 32-bit physical APIC ID tempUPID.NDST;
+            ELSE send ordinary IPI with vector tempUPID.NV
+                to 8-bit physical APIC ID tempUPID.NDST[15:8];
+        FI;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + +
#UDThe SENDUIPI instruction is not recognized in protected mode.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe SENDUIPI instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe SENDUIPI instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UDThe SENDUIPI instruction is not recognized in compatibility mode.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If executed inside an enclave.
If CR4.UINTR = 0.
If IA32_UINTR_TT[0] = 0.
If CPUID.07H.0H:EDX.UINTR[bit 5] = 0.
#PFIf a page fault occurs.
#GPIf the value of the register operand exceeds UITTSZ.
If the selected UITTE is not valid or sets any reserved bits.
If the selected UPID sets any reserved bits.
If there is an attempt to access memory using a linear address that is not canonical relative to the current paging mode.
diff --git a/x86/senter.html b/x86/senter.html new file mode 100644 index 0000000..7588bdd --- /dev/null +++ b/x86/senter.html @@ -0,0 +1,414 @@ + +GETSEC[SENTER] + — Enter a Measured Environment

GETSEC[SENTER] + — Enter a Measured Environment

+ + + + + + + + + +
OpcodeInstructionDescription
NP 0F 37 (EAX=4)GETSEC[SENTER]Launch a measured environment. EBX holds the SINIT authenticated code module physical base address. ECX holds the SINIT authenticated code module size (bytes). EDX controls the level of functionality supported by the measured environment launch.
+

Description + ¶ +

+

The GETSEC[SENTER] instruction initiates the launch of a measured environment and places the initiating logical processor (ILP) into the authenticated code execution mode. The SENTER leaf of GETSEC is selected with EAX set to 4 at execution. The physical base address of the AC module to be loaded and authenticated is specified in EBX. The size of the module in bytes is specified in ECX. EDX controls the level of functionality supported by the measured environment launch. To enable the full functionality of the protected environment launch, EDX must be initialized to zero.

+

The authenticated code base address and size parameters (in bytes) are passed to the GETSEC[SENTER] instruction using EBX and ECX respectively. The ILP evaluates the contents of these registers according to the rules for the AC module address in GETSEC[ENTERACCS]. AC module execution follows the same rules, as set by GETSEC[ENTERACCS].

+

The launching software must ensure that the TPM.ACCESS_0.activeLocality bit is clear before executing the GETSEC[SENTER] instruction.

+

There are restrictions enforced by the processor for execution of the GETSEC[SENTER] instruction:

+
    +
  • Execution is not allowed unless the processor is in protected mode or IA-32e mode with CPL = 0 and EFLAGS.VM = 0.
  • +
  • Processor cache must be available and not disabled using the CR0.CD and NW bits.
  • +
  • For enforcing consistency of operation with numeric exception reporting using Interrupt 16, CR0.NE must be set.
  • +
  • An Intel TXT-capable chipset must be present as communicated to the processor by sampling of the power-on configuration capability field after reset.
  • +
  • The processor can not be in authenticated code execution mode or already in a measured environment (as launched by a previous GETSEC[ENTERACCS] or GETSEC[SENTER] instruction).
  • +
  • To avoid potential operability conflicts between modes, the processor is not allowed to execute this instruction if it currently is in SMM or VMX operation.
  • +
  • To ensure consistent handling of SIPI messages, the processor executing the GETSEC[SENTER] instruction must also be designated the BSP (boot-strap processor) as defined by IA32_APIC_BASE.BSP (Bit 8).
  • +
  • EDX must be initialized to a setting supportable by the processor. Unless enumeration by the GETSEC[PARAMETERS] leaf reports otherwise, only a value of zero is supported.
+

Failure to abide by the above conditions results in the processor signaling a general protection violation.

+

This instruction leaf starts the launch of a measured environment by initiating a rendezvous sequence for all logical processors in the platform. The rendezvous sequence involves the initiating logical processor sending a message (by executing GETSEC[SENTER]) and other responding logical processors (RLPs) acknowledging the message, thus synchronizing the RLP(s) with the ILP.

+

In response to a message signaling the completion of rendezvous, RLPs clear the bootstrap processor indicator flag (IA32_APIC_BASE.BSP) and enter an SENTER sleep state. In this sleep state, RLPs enter an idle processor condition while waiting to be activated after a measured environment has been established by the system executive. RLPs in the SENTER sleep state can only be activated by the GETSEC leaf function WAKEUP in a measured environment.

+

A successful launch of the measured environment results in the initiating logical processor entering the authenticated code execution mode. Prior to reaching this point, the ILP performs the following steps internally:

+
    +
  • Inhibit processor response to the external events: INIT, A20M, NMI, and SMI.
  • +
  • Establish and check the location and size of the authenticated code module to be executed by the ILP.
  • +
  • Check for the existence of an Intel® TXT-capable chipset.
  • +
  • Verify the current power management configuration is acceptable.
  • +
  • Broadcast a message to enable protection of memory and I/O from activities from other processor agents.
  • +
  • Load the designated AC module into authenticated code execution area.
  • +
  • Isolate the content of authenticated code execution area from further state modification by external agents.
  • +
  • Authenticate the AC module.
  • +
  • Update the Trusted Platform Module (TPM) with the authenticated code module's hash.
  • +
  • Initialize processor state based on the authenticated code module header information.
  • +
  • Unlock the Intel® TXT-capable chipset private configuration register space and TPM locality 3 space.
  • +
  • Begin execution in the authenticated code module at the defined entry point.
+

As an integrity check for proper processor hardware operation, execution of GETSEC[SENTER] will also check the contents of all the machine check status registers (as reported by the MSRs IA32_MCi_STATUS) for any valid uncorrectable error condition. In addition, the global machine check status register IA32_MCG_STATUS MCIP bit must be cleared and the IERR processor package pin (or its equivalent) must not be asserted, indicating that no machine check exception processing is currently in-progress. These checks are performed twice: once by the ILP prior to the broadcast of the rendezvous message to RLPs, and later in response to RLPs acknowledging the rendezvous message. Any outstanding valid uncorrectable machine check error condition present in the machine check status registers at the first check point will result in the ILP signaling a general protection violation. If an outstanding valid uncorrectable machine check error condition is present at the second check point, then this will result in the corresponding logical processor signaling the more severe TXT-shutdown condition with an error code of 12.

+

Before loading and authentication of the target code module is performed, the processor also checks that the current voltage and bus ratio encodings correspond to known good values supportable by the processor. The MSR IA32_PERF_STATUS values are compared against either the processor supported maximum operating target setting, system reset setting, or the thermal monitor operating target. If the current settings do not meet any of these criteria then the SENTER function will attempt to change the voltage and bus ratio select controls in a processor-specific manner. This adjustment may be to the thermal monitor, minimum (if different), or maximum operating target depending on the processor.

+

This implies that some thermal operating target parameters configured by BIOS may be overridden by SENTER. The measured environment software may need to take responsibility for restoring such settings that are deemed to be safe, but not necessarily recognized by SENTER. If an adjustment is not possible when an out of range setting is discovered, then the processor will abort the measured launch. This may be the case for chipset controlled settings of these values or if the controllability is not enabled on the processor. In this case it is the responsibility of the external software to program the chipset voltage ID and/or bus ratio select settings to known good values recognized by the processor, prior to executing SENTER.

+
+

For a mobile processor, an adjustment can be made according to the thermal monitor operating target. For a quad-core processor the SENTER adjustment mechanism may result in a more conservative but non-uniform voltage setting, depending on the pre-SENTER settings per core.

+

The ILP and RLPs mask the response to the assertion of the external signals INIT#, A20M, NMI#, and SMI#. The purpose of this masking control is to prevent exposure to existing external event handlers until a protected handler has been put in place to directly handle these events. Masked external pin events may be unmasked conditionally or unconditionally via the GETSEC[EXITAC], GETSEC[SEXIT], GETSEC[SMCTRL] or for specific VMX related operations such as a VM entry or the VMXOFF instruction (see respective GETSEC leaves and Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3C, for more details). The state of the A20M pin is masked and forced internally to a de-asserted state so that external assertion is not recognized. A20M masking as set by GETSEC[SENTER] is undone only after taking down the measured environment with the GETSEC[SEXIT] instruction or processor reset. INTR is masked by simply clearing the EFLAGS.IF bit. It is the responsibility of system software to control the processor response to INTR through appropriate management of EFLAGS.

+

To prevent other (logical) processors from interfering with the ILP operating in authenticated code execution mode, memory (excluding implicit write-back transactions) and I/O activities originating from other processor agents are blocked. This protection starts when the ILP enters into authenticated code execution mode. Only memory and I/O transactions initiated from the ILP are allowed to proceed. Exiting authenticated code execution mode is done by executing GETSEC[EXITAC]. The protection of memory and I/O activities remains in effect until the ILP executes GETSEC[EXITAC].

+

Once the authenticated code module has been loaded into the authenticated code execution area, it is protected against further modification from external bus snoops. There is also a requirement that the memory type for the authenticated code module address range be WB (via initialization of the MTRRs prior to execution of this instruction). If this condition is not satisfied, it is a violation of security and the processor will force a TXT system reset (after writing an error code to the chipset LT.ERRORCODE register). This action is referred to as an Intel® TXT reset condition. It is performed when it is considered unreliable to signal an error through the conventional exception reporting mechanism.

+

To conform to the minimum granularity of MTRR MSRs for specifying the memory type, authenticated code RAM (ACRAM) is allocated to the processor in 4096 byte granular blocks. If an AC module size as specified in ECX is not a multiple of 4096 then the processor will allocate up to the next 4096 byte boundary for mapping as ACRAM with indeterminate data. This pad area will not be visible to the authenticated code module as external memory nor can it depend on the value of the data used to fill the pad area.

+

Once successful authentication has been completed by the ILP, the computed hash is stored in a trusted storage facility in the platform. The following trusted storage facilities are supported:

+
    +
  • If the platform register FTM_INTERFACE_ID.[bits 3:0] = 0, the computed hash is stored to the platform’s TPM at PCR17 after this register is implicitly reset. PCR17 is a dedicated register for holding the computed hash of the authenticated code module loaded and subsequently executed by GETSEC[SENTER]. As part of this process, the dynamic PCRs 18-22 are reset so they can be utilized by subsequent software for registration of code and data modules.
  • +
  • If the platform register FTM_INTERFACE_ID.[bits 3:0] = 1, the computed hash is stored in a firmware trusted module (FTM) using a modified protocol similar to the protocol used to write to TPM’s PCR17.
+

After successful execution of SENTER, either PCR17 (if FTM is not enabled) or the FTM (if enabled) contains the measurement of AC code and the SENTER launching parameters.

+

After authentication is completed successfully, the private configuration space of the Intel® TXT-capable chipset is unlocked so that the authenticated code module and measured environment software can gain access to this normally restricted chipset state. The Intel® TXT-capable chipset private configuration space can be locked later by software writing to the chipset LT.CMD.CLOSE-PRIVATE register or unconditionally using the GETSEC[SEXIT] instruction.

+

The SENTER leaf function also initializes some processor architecture state for the ILP from contents held in the header of the authenticated code module. Since the authenticated code module is relocatable, all address references are relative to the base address passed in via EBX. The ILP GDTR base value is initialized to EBX + [GDTBasePtr] and GDTR limit set to [GDTLimit]. The CS selector is initialized to the value held in the AC module header field SegSel, while the DS, SS, and ES selectors are initialized to CS+8. The segment descriptor fields are initialized implicitly with BASE=0, LIMIT=FFFFFh, G=1, D=1, P=1, S=1, read/write/accessed for DS, SS, and ES, while execute/read/accessed for CS. Execution in the authenticated code module for the ILP begins with the EIP set to EBX + [EntryPoint]. AC module defined fields used for initializing processor state are consistency checked, with a failure resulting in a TXT-shutdown condition.

+

Table 7-6 provides a summary of processor state initialization for the ILP and RLP(s) after successful completion of GETSEC[SENTER]. For both ILP and RLP(s), paging is disabled upon entry to the measured environment. It is up to the ILP to establish a trusted paging environment, with appropriate mappings, to meet protection requirements established during the launch of the measured environment. RLP state initialization is not completed until a subsequent wake-up has been signaled by execution of the GETSEC[WAKEUP] function by the ILP.

+
Register State | ILP after GETSEC[SENTER] | RLP after GETSEC[WAKEUP]
CR0 | PG←0, AM←0, WP←0; Others unchanged | PG←0, CD←0, NW←0, AM←0, WP←0; PE←1, NE←1
CR4 | 00004000H | 00004000H
EFLAGS | 00000002H | 00000002H
IA32_EFER | 0H | 0
EIP | [EntryPoint from MLE header1] | [LT.MLE.JOIN + 12]
EBX | Unchanged [SINIT.BASE] | Unchanged
EDX | SENTER control flags | Unchanged
EBP | SINIT.BASE | Unchanged
CS | Sel=[SINIT SegSel], base=0, limit=FFFFFh, G=1, D=1, AR=9BH | Sel = [LT.MLE.JOIN + 8], base = 0, limit = FFFFFH, G = 1, D = 1, AR = 9BH
DS, ES, SS | Sel=[SINIT SegSel] +8, base=0, limit=FFFFFh, G=1, D=1, AR=93H | Sel = [LT.MLE.JOIN + 8] +8, base = 0, limit = FFFFFH, G = 1, D = 1, AR = 93H
GDTR | Base= SINIT.base (EBX) + [SINIT.GDTBasePtr], Limit=[SINIT.GDTLimit] | Base = [LT.MLE.JOIN + 4], Limit = [LT.MLE.JOIN]
DR7 | 00000400H | 00000400H
IA32_DEBUGCTL | 0H | 0H
Performance counters and counter control registers | 0H | 0H
IA32_MISC_ENABLE | See Table 7-5 | See Table 7-5
IA32_SMM_MONITOR_CTL | Bit 2←0 | Bit 2←0

Table 7-6. Register State Initialization After GETSEC[SENTER] and GETSEC[WAKEUP]
+
+

1. See the Intel® Trusted Execution Technology Measured Launched Environment Programming Guide for MLE header format.

+

Segmentation related processor state that has not been initialized by GETSEC[SENTER] requires appropriate initialization before use. Since a new GDT context has been established, the previous state of the segment selector values held in FS, GS, TR, and LDTR may no longer be valid. The IDTR will also require reloading with a new IDT context after launching the measured environment before exceptions or the external interrupts INTR and NMI can be handled. In the meantime, the programmer must take care in not executing an INT n instruction or any other condition that would result in an exception or trap signaling.

+

Debug exception and trap related signaling is also disabled as part of execution of GETSEC[SENTER]. This is achieved by clearing DR7, TF in EFLAGs, and the MSR IA32_DEBUGCTL as defined in Table 7-6. These can be reenabled once supporting exception handler(s), descriptor tables, and debug registers have been properly re-initialized following SENTER. Also, any pending single-step trap condition will be cleared at the completion of SENTER for both the ILP and RLP(s).

+

Performance-related counters and counter control registers are cleared as part of execution of SENTER on both the ILP and RLP. This implies any active performance counters at the time of SENTER execution will be disabled. To reactivate the processor performance counters, this state must be re-initialized and re-enabled.

+

Since MCE along with all other state bits (with the exception of SMXE) are cleared in CR4 upon execution of SENTER processing, any enabled machine check error condition that occurs will result in the processor performing the TXT-shutdown action. This also applies to an RLP while in the SENTER sleep state. For each logical processor, CR4.MCE must be reestablished with a valid machine check exception handler to otherwise avoid a TXT-shutdown under such conditions.

+

The MSR IA32_EFER is also unconditionally cleared as part of the processor state initialized by SENTER for both the ILP and RLP. Since paging is disabled upon entering authenticated code execution mode, a new paging environment will have to be re-established if it is desired to enable IA-32e mode while operating in authenticated code execution mode.

+

The miscellaneous feature control MSR, IA32_MISC_ENABLE, is initialized as part of the measured environment launch. Certain bits of this MSR are preserved because preserving these bits may be important to maintain previously established platform settings. See the footnote for Table 7-5. The remaining bits are cleared for the purpose of establishing a more consistent environment for the execution of authenticated code modules. Among the effects of initializing this MSR, any previous condition established by the MONITOR instruction will be cleared.

+

Effect of the IA32_FEATURE_CONTROL MSR

+

Bits 15:8 of the IA32_FEATURE_CONTROL MSR affect the execution of GETSEC[SENTER]. These bits consist of two fields:

+
    +
  • Bit 15: a global enable control for execution of SENTER.
  • +
  • Bits 14:8: a parameter control field providing the ability to qualify SENTER execution based on the level of functionality specified with corresponding EDX parameter bits 6:0.
+

The layout of these fields in the IA32_FEATURE_CONTROL MSR is shown in Table 7-1.

+

Prior to the execution of GETSEC[SENTER], the lock bit of the IA32_FEATURE_CONTROL MSR must be set to affirm the settings to be used. Once the lock bit is set, only a power-up reset condition will clear this MSR. The IA32_FEATURE_CONTROL MSR must be configured in accordance with the intended usage at platform initialization. Note that this MSR is only available on SMX or VMX enabled processors. Otherwise, IA32_FEATURE_CONTROL is treated as reserved.
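As a rough illustration of the gating described above, the following C sketch (with a hypothetical read_msr helper standing in for a privileged RDMSR wrapper) evaluates whether a given EDX parameter value would pass the IA32_FEATURE_CONTROL checks applied by GETSEC[SENTER]; it mirrors, but does not replace, the pseudocode later in this section.

#include <stdbool.h>
#include <stdint.h>

#define IA32_FEATURE_CONTROL 0x3A

/* Hypothetical privileged helper that returns the MSR value. */
extern uint64_t read_msr(uint32_t msr);

/* True if the lock bit, the global SENTER enable (bit 15), and the
   per-parameter enables (bits 14:8, covering EDX bits 6:0) all permit
   the requested SENTER parameters. */
bool senter_feature_control_ok(uint32_t edx_params)
{
    uint64_t fc = read_msr(IA32_FEATURE_CONTROL);
    bool locked        = fc & (1ULL << 0);
    bool senter_enable = fc & (1ULL << 15);
    uint32_t param_en  = (uint32_t)((fc >> 8) & 0x7F);
    uint32_t requested = edx_params & 0x7F;
    return locked && senter_enable && ((param_en & requested) == requested);
}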

+

The Intel® Trusted Execution Technology Measured Launched Environment Programming Guide provides additional details and requirements for programming measured environment software to launch in an Intel TXT platform.

+

Operation in a Uni-Processor Platform + ¶ +

+

(* The state of the internal flag ACMODEFLAG and SENTERFLAG persist across instruction boundary *)

+

GETSEC[SENTER] (ILP Only):

+

IF (CR4.SMXE=0)

+

THEN #UD;

+

ELSE IF (in VMX non-root operation)

+

THEN VM Exit (reason=”GETSEC instruction”);

+

ELSE IF (GETSEC leaf unsupported)

+

THEN #UD;

+

ELSE IF ((in VMX root operation) or

+

(CR0.PE=0) or (CR0.CD=1) or (CR0.NW=1) or (CR0.NE=0) or

+

(CPL>0) or (EFLAGS.VM=1) or

+

(IA32_APIC_BASE.BSP=0) or (TXT chipset not present) or

+

(SENTERFLAG=1) or (ACMODEFLAG=1) or (IN_SMM=1) or

+

(TPM interface is not present) or

+

(EDX ≠ (SENTER_EDX_support_mask & EDX)) or

+

(IA32_FEATURE_CONTROL[0]=0) or (IA32_FEATURE_CONTROL[15]=0) or

+

((IA32_FEATURE_CONTROL[14:8] & EDX[6:0]) ≠ EDX[6:0]))

+

THEN #GP(0);

+

IF (GETSEC[PARAMETERS].Parameter_Type = 5, MCA_Handling (bit 6) = 0)

+

FOR I = 0 to IA32_MCG_CAP.COUNT-1 DO

+

IF IA32_MC[I]_STATUS = uncorrectable error

+

THEN #GP(0);

+

FI;

+

OD;

+

FI;

+

IF (IA32_MCG_STATUS.MCIP=1) or (IERR pin is asserted)

+

THEN #GP(0);

+

ACBASE := EBX;

+

ACSIZE := ECX;

+

IF (((ACBASE MOD 4096) ≠ 0) or ((ACSIZE MOD 64) ≠ 0 ) or (ACSIZE < minimum

+

module size) or (ACSIZE > AC RAM capacity) or ((ACBASE+ACSIZE) > (2^32 -1)))

+

THEN #GP(0);

+

Mask SMI, INIT, A20M, and NMI external pin events;

+

SignalTXTMsg(SENTER);

+

DO

+

WHILE (no SignalSENTER message);

+

TXT_SENTER__MSG_EVENT (ILP & RLP):

+

Mask and clear SignalSENTER event;

+

Unmask SignalSEXIT event;

+

IF (in VMX operation)

+

THEN TXT-SHUTDOWN(#IllegalEvent);

+

FOR I = 0 to IA32_MCG_CAP.COUNT-1 DO

+

IF IA32_MC[I]_STATUS = uncorrectable error

+

THEN TXT-SHUTDOWN(#UnrecovMCError);

+

FI;

+

OD;

+

IF (IA32_MCG_STATUS.MCIP=1) or (IERR pin is asserted)

+

THEN TXT-SHUTDOWN(#UnrecovMCError);

+

IF (Voltage or bus ratio status are NOT at a known good state)

+

THEN IF (Voltage select and bus ratio are internally adjustable)

+

THEN

+

Make product-specific adjustment on operating parameters;

+

ELSE

+

TXT-SHUTDOWN(#IllegalVIDBRatio);

+

FI;

+

IA32_MISC_ENABLE := (IA32_MISC_ENABLE & MASK_CONST*)

+

(* The hexadecimal value of MASK_CONST may vary due to processor implementations *)

+

A20M := 0;

+

IA32_DEBUGCTL := 0;

+

Invalidate processor TLB(s);

+

Drain outgoing transactions;

+

Clear performance monitor counters and control;

+

SENTERFLAG := 1;

+

SignalTXTMsg(SENTERAck);

+

IF (logical processor is not ILP)

+

THEN GOTO RLP_SENTER_ROUTINE;

+

(* ILP waits for all logical processors to ACK *)

+

DO

+

DONE := TXT.READ(LT.STS);

+

WHILE (not DONE);

+

SignalTXTMsg(SENTERContinue);

+

SignalTXTMsg(ProcessorHold);

+

FOR I=ACBASE to ACBASE+ACSIZE-1 DO

+

ACRAM[I-ACBASE].ADDR := I;

+

ACRAM[I-ACBASE].DATA := LOAD(I);

+

OD;

+

IF (ACRAM memory type ≠ WB)

+

THEN TXT-SHUTDOWN(#BadACMMType);

+

IF (AC module header version is not supported) OR (ACRAM[ModuleType] ≠ 2)

+

THEN TXT-SHUTDOWN(#UnsupportedACM);

+

KEY := GETKEY(ACRAM, ACBASE);

+

KEYHASH := HASH(KEY);

+

CSKEYHASH := LT.READ(LT.PUBLIC.KEY);

+

IF (KEYHASH ≠ CSKEYHASH)

+

THEN TXT-SHUTDOWN(#AuthenticateFail);

+

SIGNATURE := DECRYPT(ACRAM, ACBASE, KEY);

+

(* The value of SIGNATURE_LEN_CONST is implementation-specific*)

+

FOR I=0 to SIGNATURE_LEN_CONST - 1 DO

+

ACRAM[SCRATCH.I] := SIGNATURE[I];

+

COMPUTEDSIGNATURE := HASH(ACRAM, ACBASE, ACSIZE);

+

FOR I=0 to SIGNATURE_LEN_CONST - 1 DO

+

ACRAM[SCRATCH.SIGNATURE_LEN_CONST+I] := COMPUTEDSIGNATURE[I];

+

IF (SIGNATURE ≠ COMPUTEDSIGNATURE)

+

THEN TXT-SHUTDOWN(#AuthenticateFail);

+

ACMCONTROL := ACRAM[CodeControl];

+

IF ((ACMCONTROL.0 = 0) and (ACMCONTROL.1 = 1) and (snoop hit to modified line detected on ACRAM load))

+

THEN TXT-SHUTDOWN(#UnexpectedHITM);

+

IF (ACMCONTROL reserved bits are set)

+

THEN TXT-SHUTDOWN(#BadACMFormat);

+

IF ((ACRAM[GDTBasePtr] < (ACRAM[HeaderLen] * 4 + Scratch_size)) OR

+

((ACRAM[GDTBasePtr] + ACRAM[GDTLimit]) >= ACSIZE))

+

THEN TXT-SHUTDOWN(#BadACMFormat);

+

IF ((ACMCONTROL.0 = 1) and (ACMCONTROL.1 = 1) and (snoop hit to modified

+

line detected on ACRAM load))

+

THEN ACEntryPoint := ACBASE+ACRAM[ErrorEntryPoint];

+

ELSE

+

ACEntryPoint := ACBASE+ACRAM[EntryPoint];

+

IF ((ACEntryPoint >= ACSIZE) or (ACEntryPoint < (ACRAM[HeaderLen] * 4 + Scratch_size)))

+

THEN TXT-SHUTDOWN(#BadACMFormat);

+

IF ((ACRAM[SegSel] > (ACRAM[GDTLimit] - 15)) or (ACRAM[SegSel] < 8))

+

THEN TXT-SHUTDOWN(#BadACMFormat);

+

IF ((ACRAM[SegSel].TI=1) or (ACRAM[SegSel].RPL≠0))

+

THEN TXT-SHUTDOWN(#BadACMFormat);

+

IF (FTM_INTERFACE_ID.[3:0] = 1 ) (* Alternate FTM Interface has been enabled *)

+

THEN (* TPM_LOC_CTRL_4 is located at 0FED44008H, TPM_DATA_BUFFER_4 is located at 0FED44080H *)

+

WRITE(TPM_LOC_CTRL_4) := 01H; (* Modified HASH.START protocol *)

+

(* Write to firmware storage *)

+

WRITE(TPM_DATA_BUFFER_4) := SIGNATURE_LEN_CONST + 4;

+

FOR I=0 to SIGNATURE_LEN_CONST - 1 DO

+

WRITE(TPM_DATA_BUFFER_4 + 2 + I ) := ACRAM[SCRATCH.I];

+

WRITE(TPM_DATA_BUFFER_4 + 2 + SIGNATURE_LEN_CONST) := EDX;

+

WRITE(FTM.LOC_CTRL) := 06H; (* Modified protocol combining HASH.DATA and HASH.END *)

+

ELSE IF (FTM_INTERFACE_ID.[3:0] = 0 ) (* Use standard TPM Interface *)

+

ACRAM[SCRATCH.SIGNATURE_LEN_CONST] := EDX;

+

WRITE(TPM.HASH.START) := 0;

+

FOR I=0 to SIGNATURE_LEN_CONST + 3 DO

+

WRITE(TPM.HASH.DATA) := ACRAM[SCRATCH.I];

+

WRITE(TPM.HASH.END) := 0;

+

ACMODEFLAG := 1;

+

CR0.[PG.AM.WP] := 0;

+

CR4 := 00004000h;

+

EFLAGS := 00000002h;

+

IA32_EFER := 0;

+

EBP := ACBASE;

+

GDTR.BASE := ACBASE+ACRAM[GDTBasePtr];

+

GDTR.LIMIT := ACRAM[GDTLimit];

+

CS.SEL := ACRAM[SegSel];

+

CS.BASE := 0;

+

CS.LIMIT := FFFFFh;

+

CS.G := 1;

+

CS.D := 1;

+

CS.AR := 9Bh;

+

DS.SEL := ACRAM[SegSel]+8;

+

DS.BASE := 0;

+

DS.LIMIT := FFFFFh;

+

DS.G := 1;

+

DS.D := 1;

+

DS.AR := 93h;

+

SS := DS;

+

ES := DS;

+

DR7 := 00000400h;

+

IA32_DEBUGCTL := 0;

+

SignalTXTMsg(UnlockSMRAM);

+

SignalTXTMsg(OpenPrivate);

+

SignalTXTMsg(OpenLocality3);

+

EIP := ACEntryPoint;

+

END;

+

RLP_SENTER_ROUTINE: (RLP Only)

+

Mask SMI, INIT, A20M, and NMI external pin events

+

Unmask SignalWAKEUP event;

+

Wait for SignalSENTERContinue message;

+

IA32_APIC_BASE.BSP := 0;

+

GOTO SENTER sleep state;

+

END;

+

Flags Affected + ¶ +

+

All flags are cleared.

+

Use of Prefixes + ¶ +

+

LOCK Causes #UD.

+

REP* Cause #UD (includes REPNE/REPNZ and REP/REPE/REPZ).

+

Operand size Causes #UD.

+

NP 66/F2/F3 prefixes are not allowed.

+

Segment overrides Ignored.

+

Address size Ignored.

+

REX Ignored.

+

Protected Mode Exceptions + ¶ +

#UD | If CR4.SMXE = 0.
    | If GETSEC[SENTER] is not reported as supported by GETSEC[CAPABILITIES].
#GP(0) | If CR0.CD = 1 or CR0.NW = 1 or CR0.NE = 0 or CR0.PE = 0 or CPL > 0 or EFLAGS.VM = 1.
    | If in VMX root operation.
    | If the initiating processor is not designated as the bootstrap processor via the MSR bit IA32_APIC_BASE.BSP.
    | If an Intel® TXT-capable chipset is not present.
    | If an Intel® TXT-capable chipset interface to TPM is not detected as present.
    | If a protected partition is already active or the processor is already in authenticated code mode.
    | If the processor is in SMM.
    | If a valid uncorrectable machine check error is logged in IA32_MC[I]_STATUS.
    | If the authenticated code base is not on a 4096 byte boundary.
    | If the authenticated code size > processor's authenticated code execution area storage capacity.
    | If the authenticated code size is not modulo 64.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + +
#UDIf CR4.SMXE = 0.
If GETSEC[SENTER] is not reported as supported by GETSEC[CAPABILITIES].
#GP(0)GETSEC[SENTER] is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + +
#UDIf CR4.SMXE = 0.
If GETSEC[SENTER] is not reported as supported by GETSEC[CAPABILITIES].
#GP(0)GETSEC[SENTER] is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

All protected mode exceptions apply.

+ + + +
#GPIf AC code module does not reside in physical address below 2^32 -1.
+

64-Bit Mode Exceptions + ¶ +

+

All protected mode exceptions apply.

+ + + +
#GPIf AC code module does not reside in physical address below 2^32 -1.
+

VM-Exit Condition + ¶ +

+

Reason (GETSEC) If in VMX non-root operation.

diff --git a/x86/serialize.html b/x86/serialize.html new file mode 100644 index 0000000..bc0f526 --- /dev/null +++ b/x86/serialize.html @@ -0,0 +1,68 @@ + +SERIALIZE + — Serialize Instruction Execution

SERIALIZE + — Serialize Instruction Execution

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F 01 E8 SERIALIZE | ZO | V/V | SERIALIZE | Serialize instruction fetch and execution.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/AN/A
+

Description + ¶ +

+

Serializes instruction execution. Before the next instruction is fetched and executed, the SERIALIZE instruction ensures that all modifications to flags, registers, and memory by previous instructions are completed, draining all buffered writes to memory. This instruction is also a serializing instruction as defined in the section “Serializing Instructions” in Chapter 9 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A.

+

SERIALIZE does not modify registers, arithmetic flags, or memory.
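A minimal usage sketch in C, assuming a compiler that exposes the _serialize() intrinsic through immintrin.h and that support has already been confirmed via CPUID.(EAX=07H, ECX=0):EDX[bit 14]:

#include <immintrin.h>
#include <stdint.h>

/* One possible use: after patching an instruction byte, force the
   processor to complete the store and discard any stale prefetched
   instructions before execution continues, without the register
   side effects of CPUID. */
static void patch_and_serialize(volatile uint8_t *code, uint8_t new_byte)
{
    *code = new_byte;
    _serialize();
}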

+

Operation + ¶ +

+
Wait_On_Fetch_And_Execution_Of_Next_Instruction_Until(preceding_instructions_complete_and_preceding_stores_globally_visible);
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
SERIALIZE void _serialize(void);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+ + + + + +
#UDIf the LOCK prefix is used.
If CPUID.07H.0H:EDX.SERIALIZE[bit 14] = 0.
diff --git a/x86/setcc.html b/x86/setcc.html new file mode 100644 index 0000000..167b070 --- /dev/null +++ b/x86/setcc.html @@ -0,0 +1,545 @@ + +SETcc + — Set Byte on Condition

SETcc + — Set Byte on Condition

OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 97SETA r/m8MValidValidSet byte if above (CF=0 and ZF=0).
REX + 0F 97SETA r/m81MValidN.E.Set byte if above (CF=0 and ZF=0).
0F 93SETAE r/m8MValidValidSet byte if above or equal (CF=0).
REX + 0F 93SETAE r/m81MValidN.E.Set byte if above or equal (CF=0).
0F 92SETB r/m8MValidValidSet byte if below (CF=1).
REX + 0F 92SETB r/m81MValidN.E.Set byte if below (CF=1).
0F 96SETBE r/m8MValidValidSet byte if below or equal (CF=1 or ZF=1).
REX + 0F 96SETBE r/m81MValidN.E.Set byte if below or equal (CF=1 or ZF=1).
0F 92SETC r/m8MValidValidSet byte if carry (CF=1).
REX + 0F 92SETC r/m81MValidN.E.Set byte if carry (CF=1).
0F 94SETE r/m8MValidValidSet byte if equal (ZF=1).
REX + 0F 94SETE r/m81MValidN.E.Set byte if equal (ZF=1).
0F 9FSETG r/m8MValidValidSet byte if greater (ZF=0 and SF=OF).
REX + 0F 9FSETG r/m81MValidN.E.Set byte if greater (ZF=0 and SF=OF).
0F 9DSETGE r/m8MValidValidSet byte if greater or equal (SF=OF).
REX + 0F 9DSETGE r/m81MValidN.E.Set byte if greater or equal (SF=OF).
0F 9CSETL r/m8MValidValidSet byte if less (SF≠ OF).
REX + 0F 9CSETL r/m81MValidN.E.Set byte if less (SF≠ OF).
0F 9ESETLE r/m8MValidValidSet byte if less or equal (ZF=1 or SF≠ OF).
REX + 0F 9ESETLE r/m81MValidN.E.Set byte if less or equal (ZF=1 or SF≠ OF).
0F 96SETNA r/m8MValidValidSet byte if not above (CF=1 or ZF=1).
REX + 0F 96SETNA r/m81MValidN.E.Set byte if not above (CF=1 or ZF=1).
0F 92SETNAE r/m8MValidValidSet byte if not above or equal (CF=1).
REX + 0F 92SETNAE r/m81MValidN.E.Set byte if not above or equal (CF=1).
0F 93SETNB r/m8MValidValidSet byte if not below (CF=0).
REX + 0F 93SETNB r/m81MValidN.E.Set byte if not below (CF=0).
0F 97SETNBE r/m8MValidValidSet byte if not below or equal (CF=0 and ZF=0).
REX + 0F 97SETNBE r/m81MValidN.E.Set byte if not below or equal (CF=0 and ZF=0).
0F 93SETNC r/m8MValidValidSet byte if not carry (CF=0).
REX + 0F 93SETNC r/m81MValidN.E.Set byte if not carry (CF=0).
0F 95SETNE r/m8MValidValidSet byte if not equal (ZF=0).
REX + 0F 95SETNE r/m81MValidN.E.Set byte if not equal (ZF=0).
0F 9ESETNG r/m8MValidValidSet byte if not greater (ZF=1 or SF≠ OF)
REX + 0F 9ESETNG r/m81MValidN.E.Set byte if not greater (ZF=1 or SF≠ OF).
0F 9CSETNGE r/m8MValidValidSet byte if not greater or equal (SF≠ OF).
REX + 0F 9CSETNGE r/m81MValidN.E.Set byte if not greater or equal (SF≠ OF).
0F 9DSETNL r/m8MValidValidSet byte if not less (SF=OF).
REX + 0F 9DSETNL r/m81MValidN.E.Set byte if not less (SF=OF).
0F 9FSETNLE r/m8MValidValidSet byte if not less or equal (ZF=0 and SF=OF).
REX + 0F 9FSETNLE r/m81MValidN.E.Set byte if not less or equal (ZF=0 and SF=OF).
0F 91SETNO r/m8MValidValidSet byte if not overflow (OF=0).
REX + 0F 91SETNO r/m81MValidN.E.Set byte if not overflow (OF=0).
0F 9BSETNP r/m8MValidValidSet byte if not parity (PF=0).
REX + 0F 9BSETNP r/m81MValidN.E.Set byte if not parity (PF=0).
0F 99SETNS r/m8MValidValidSet byte if not sign (SF=0).
REX + 0F 99SETNS r/m81MValidN.E.Set byte if not sign (SF=0).
0F 95SETNZ r/m8MValidValidSet byte if not zero (ZF=0).
REX + 0F 95SETNZ r/m81MValidN.E.Set byte if not zero (ZF=0).
0F 90SETO r/m8MValidValidSet byte if overflow (OF=1)
REX + 0F 90SETO r/m81MValidN.E.Set byte if overflow (OF=1).
0F 9ASETP r/m8MValidValidSet byte if parity (PF=1).
REX + 0F 9ASETP r/m81MValidN.E.Set byte if parity (PF=1).
0F 9ASETPE r/m8MValidValidSet byte if parity even (PF=1).
REX + 0F 9ASETPE r/m81MValidN.E.Set byte if parity even (PF=1).
0F 9BSETPO r/m8MValidValidSet byte if parity odd (PF=0).
REX + 0F 9BSETPO r/m81MValidN.E.Set byte if parity odd (PF=0).
0F 98SETS r/m8MValidValidSet byte if sign (SF=1).
REX + 0F 98SETS r/m81MValidN.E.Set byte if sign (SF=1).
0F 94SETZ r/m8MValidValidSet byte if zero (ZF=1).
REX + 0F 94SETZ r/m81MValidN.E.Set byte if zero (ZF=1).
+
+

1. In 64-bit mode, r/m8 can not be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (w)N/AN/AN/A
+

Description + ¶ +

+

Sets the destination operand to 0 or 1 depending on the settings of the status flags (CF, SF, OF, ZF, and PF) in the EFLAGS register. The destination operand points to a byte register or a byte in memory. The condition code suffix (cc) indicates the condition being tested for.

+

The terms “above” and “below” are associated with the CF flag and refer to the relationship between two unsigned integer values. The terms “greater” and “less” are associated with the SF and OF flags and refer to the relationship between two signed integer values.

+

Many of the SETcc instruction opcodes have alternate mnemonics. For example, SETG (set byte if greater) and SETNLE (set if not less or equal) have the same opcode and test for the same condition: ZF equals 0 and SF equals OF. These alternate mnemonics are provided to make code more intelligible. Appendix B, “EFLAGS Condition Codes,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, shows the alternate mnemonics for various test conditions.

+

Some languages represent a logical one as an integer with all bits set. This representation can be obtained by choosing the logically opposite condition for the SETcc instruction, then decrementing the result. For example, to test for overflow, use the SETNO instruction, then decrement the result.
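The C sketch below models that trick: the comparison plays the role of the SETcc result (0 or 1 for the logically opposite condition), and the decrement turns it into an all-ones or all-zeros mask.

#include <stdint.h>
#include <stdio.h>

/* Returns 0xFFFFFFFF when a > b (signed), 0 otherwise: evaluate the
   opposite condition (a <= b) as 0 or 1, then decrement. */
static uint32_t mask_if_greater(int32_t a, int32_t b)
{
    return (uint32_t)(a <= b) - 1u;
}

int main(void)
{
    printf("%08x\n", mask_if_greater(5, 3)); /* ffffffff */
    printf("%08x\n", mask_if_greater(3, 5)); /* 00000000 */
    return 0;
}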

+

The reg field of the ModR/M byte is not used for the SETCC instruction and those opcode bits are ignored by the processor.

+

In 64-bit mode, the operand size is fixed at 8 bits. Use of a REX prefix enables uniform addressing to additional byte registers. Otherwise, this instruction’s operation is the same as in legacy mode and compatibility mode.

+

Operation + ¶ +

+
IF condition
+    THEN DEST := 1;
+    ELSE DEST := 0;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#GP(0)If the destination is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#UDIf the LOCK prefix is used.
diff --git a/x86/setssbsy.html b/x86/setssbsy.html new file mode 100644 index 0000000..98a2f5f --- /dev/null +++ b/x86/setssbsy.html @@ -0,0 +1,115 @@ + +SETSSBSY + — Mark Shadow Stack Busy

SETSSBSY + — Mark Shadow Stack Busy

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
F3 0F 01 E8 SETSSBSY | ZO | V/V | CET_SS | Set busy flag in supervisor shadow stack token referenced by IA32_PL0_SSP.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

The SETSSBSY instruction verifies the presence of a non-busy supervisor shadow stack token at the address in the IA32_PL0_SSP MSR and marks it busy. Following successful execution of the instruction, the SSP is set to the value of the IA32_PL0_SSP MSR.

+

Operation + ¶ +

+
IF (CR4.CET = 0)
+    THEN #UD; FI;
+IF (IA32_S_CET.SH_STK_EN = 0)
+    THEN #UD; FI;
+IF CPL > 0
+    THEN #GP(0); FI;
+SSP_LA = IA32_PL0_SSP
+If SSP_LA not aligned to 8 bytes
+    THEN #GP(0); FI;
+expected_token_value = SSP_LA
+new_token_value = SSP_LA | BUSY_BIT
+IF shadow_stack_lock_cmpxchg8B(SSP_LA, new_token_value, expected_token_value) != expected_token_value
+    THEN #CP(SETSSBSY); FI;
+SSP = SSP_LA
+
+

Flags Affected + ¶ +

+

None.

+

C/C++ Compiler Intrinsic Equivalent + ¶ +

+
SETSSBSYvoid _setssbsy(void);
+
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If CR4.CET = 0.
IF IA32_S_CET.SH_STK_EN = 0.
#GP(0)If IA32_PL0_SSP not aligned to 8 bytes.
If CPL is not 0.
#CP(setssbsy)If busy bit in token is set.
If in 32-bit or compatibility mode, and the address in token is not below 4G.
#PF(fault-code)If a page fault occurs.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe SETSSBSY instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe SETSSBSY instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

Same as protected mode exceptions.

+

64-Bit Mode Exceptions + ¶ +

+

Same as protected mode exceptions.

diff --git a/x86/sexit.html b/x86/sexit.html new file mode 100644 index 0000000..7550128 --- /dev/null +++ b/x86/sexit.html @@ -0,0 +1,164 @@ + +GETSEC[SEXIT] + — Exit Measured Environment

GETSEC[SEXIT] + — Exit Measured Environment

Opcode | Instruction | Description
NP 0F 37 (EAX=5) | GETSEC[SEXIT] | Exit measured environment.
+

Description + ¶ +

+

The GETSEC[SEXIT] instruction initiates an exit of a measured environment established by GETSEC[SENTER]. The SEXIT leaf of GETSEC is selected with EAX set to 5 at execution. This instruction leaf sends a message to all logical processors in the platform to signal the measured environment exit.

+

There are restrictions enforced by the processor for the execution of the GETSEC[SEXIT] instruction:

+
    +
  • Execution is not allowed unless the processor is in protected mode (CR0.PE = 1) with CPL = 0 and EFLAGS.VM = 0.
  • +
  • The processor must be in a measured environment as launched by a previous GETSEC[SENTER] instruction, but not still in authenticated code execution mode.
  • +
  • To avoid potential interoperability conflicts between modes, the processor is not allowed to execute this instruction if it currently is in SMM or in VMX operation.
  • +
  • To ensure consistent handling of SIPI messages, the processor executing the GETSEC[SEXIT] instruction must also be designated the BSP (bootstrap processor) as defined by the register bit IA32_APIC_BASE.BSP (bit 8).
+

Failure to abide by the above conditions results in the processor signaling a general protection violation.

+

This instruction initiates a sequence to rendezvous the RLPs with the ILP. It then clears the internal processor flag indicating the processor is operating in a measured environment.

+

In response to a message signaling the completion of rendezvous, all RLPs restart execution with the instruction that was to be executed at the time GETSEC[SEXIT] was recognized. This applies to all processor conditions, with the following exceptions:

+
    +
  • If an RLP executed HLT and was in this halt state at the time of the message initiated by GETSEC[SEXIT], then execution resumes in the halt state.
  • +
  • If an RLP was executing MWAIT, then a message initiated by GETSEC[SEXIT] causes an exit of the MWAIT state, falling through to the next instruction.
  • +
  • If an RLP was executing an intermediate iteration of a string instruction, then the processor resumes execution of the string instruction at the point which the message initiated by GETSEC[SEXIT] was recognized.
  • +
  • If an RLP is still in the SENTER sleep state (never awakened with GETSEC[WAKEUP]), it will be sent to the wait-for-SIPI state after first clearing the bootstrap processor indicator flag (IA32_APIC_BASE.BSP) and any pending SIPI state. In this case, such RLPs are initialized to an architectural state consistent with having taken a soft reset using the INIT# pin.
+

Prior to completion of the GETSEC[SEXIT] operation, both the ILP and any active RLPs unmask the response of the external event signals INIT#, A20M, NMI#, and SMI#. This unmasking is performed unconditionally to recognize pin events which are masked after a GETSEC[SENTER]. The state of A20M is unmasked, as the A20M pin is not recognized while the measured environment is active.

+

On a successful exit of the measured environment, the ILP re-locks the Intel® TXT-capable chipset private configuration space. GETSEC[SEXIT] does not affect the content of any PCR.

+

At completion of GETSEC[SEXIT] by the ILP, execution proceeds to the next instruction. Since EFLAGS and the debug register state are not modified by this instruction, a pending trap condition is free to be signaled if previously enabled.

+

Operation in a Uni-Processor Platform + ¶ +

+

(* The state of the internal flag ACMODEFLAG and SENTERFLAG persist across instruction boundary *)

+

GETSEC[SEXIT] (ILP Only):

+

IF (CR4.SMXE=0)

+

THEN #UD;

+

ELSE IF (in VMX non-root operation)

+

THEN VM Exit (reason=”GETSEC instruction”);

+

ELSE IF (GETSEC leaf unsupported)

+

THEN #UD;

+

ELSE IF ((in VMX root operation) or

+

(CR0.PE=0) or (CPL>0) or (EFLAGS.VM=1) or

+

(IA32_APIC_BASE.BSP=0) or

+

(TXT chipset not present) or

+

(SENTERFLAG=0) or (ACMODEFLAG=1) or (IN_SMM=1))

+

THEN #GP(0);

+

SignalTXTMsg(SEXIT);

+

DO

+

WHILE (no SignalSEXIT message);

+

TXT_SEXIT_MSG_EVENT (ILP & RLP):

+

Mask and clear SignalSEXIT event;

+

Clear MONITOR FSM;

+

Unmask SignalSENTER event;

+

IF (in VMX operation)

+

THEN TXT-SHUTDOWN(#IllegalEvent);

+

SignalTXTMsg(SEXITAck);

+

IF (logical processor is not ILP)

+

THEN GOTO RLP_SEXIT_ROUTINE;

+

(* ILP waits for all logical processors to ACK *)

+

DO

+

DONE := READ(LT.STS);

+

WHILE (NOT DONE);

+

SignalTXTMsg(SEXITContinue);

+

SignalTXTMsg(ClosePrivate);

+

SENTERFLAG := 0;

+

Unmask SMI, INIT, A20M, and NMI external pin events;

+

END;

+

RLP_SEXIT_ROUTINE (RLPs Only):

+

Wait for SignalSEXITContinue message;

+

Unmask SMI, INIT, A20M, and NMI external pin events;

+

IF (prior execution state = HLT)

+

THEN reenter HLT state;

+

IF (prior execution state = SENTER sleep)

+

THEN

+

IA32_APIC_BASE.BSP := 0;

+

Clear pending SIPI state;

+

Call INIT_PROCESSOR_STATE;

+

Unmask SIPI event;

+

GOTO WAIT-FOR-SIPI;

+

FI;

+

END;

+

Flags Affected + ¶ +

+

ILP: None.

+

RLPs: All flags are modified for an RLP returning to the wait-for-SIPI state; none otherwise.

+

Use of Prefixes + ¶ +

+

LOCK Causes #UD.

+

REP* Cause #UD (includes REPNE/REPNZ and REP/REPE/REPZ).

+

Operand size Causes #UD.

+

NP 66/F2/F3 prefixes are not allowed.

+

Segment overrides Ignored.

+

Address size Ignored.

+

REX Ignored.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#UDIf CR4.SMXE = 0.
If GETSEC[SEXIT] is not reported as supported by GETSEC[CAPABILITIES].
#GP(0)If CR0.PE = 0 or CPL > 0 or EFLAGS.VM = 1.
If in VMX root operation.
If the initiating processor is not designated via the MSR bit IA32_APIC_BASE.BSP.
If an Intel® TXT-capable chipset is not present.
If a protected partition is not already active or the processor is already in authenticated code mode.
If the processor is in SMM.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + +
#UDIf CR4.SMXE = 0.
If GETSEC[SEXIT] is not reported as supported by GETSEC[CAPABILITIES].
#GP(0)GETSEC[SEXIT] is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + +
#UDIf CR4.SMXE = 0.
If GETSEC[SEXIT] is not reported as supported by GETSEC[CAPABILITIES].
#GP(0)GETSEC[SEXIT] is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

All protected mode exceptions apply.

+

64-Bit Mode Exceptions + ¶ +

+

All protected mode exceptions apply.

+

VM-Exit Condition + ¶ +

+

Reason (GETSEC) If in VMX non-root operation.

diff --git a/x86/sfence.html b/x86/sfence.html new file mode 100644 index 0000000..d79c7a5 --- /dev/null +++ b/x86/sfence.html @@ -0,0 +1,62 @@ + +SFENCE + — Store Fence

SFENCE + — Store Fence

Opcode* | Instruction | Op/En | 64-Bit Mode | Compat/Leg Mode | Description
NP 0F AE F8 | SFENCE | ZO | Valid | Valid | Serializes store operations.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Orders processor execution relative to all memory stores prior to the SFENCE instruction. The processor ensures that every store prior to SFENCE is globally visible before any store after SFENCE becomes globally visible. The SFENCE instruction is ordered with respect to memory stores, other SFENCE instructions, MFENCE instructions, and any serializing instructions (such as the CPUID instruction). It is not ordered with respect to memory loads or the LFENCE instruction.

+

Weakly ordered memory types can be used to achieve higher processor performance through such techniques as out-of-order issue, write-combining, and write-collapsing. The degree to which a consumer of data recognizes or knows that the data is weakly ordered varies among applications and may be unknown to the producer of this data. The SFENCE instruction provides a performance-efficient way of ensuring store ordering between routines that produce weakly-ordered results and routines that consume this data.
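For example, a producer using weakly-ordered (non-temporal) stores can fence before publishing a flag; a minimal C sketch using the _mm_stream_si32 and _mm_sfence intrinsics:

#include <emmintrin.h>  /* _mm_stream_si32 (SSE2) */
#include <xmmintrin.h>  /* _mm_sfence (SSE) */

/* Write the payload with non-temporal stores, then make sure those
   stores are globally visible before the ready flag becomes visible. */
void publish(int *payload, const int *src, int n, volatile int *ready)
{
    for (int i = 0; i < n; i++)
        _mm_stream_si32(&payload[i], src[i]);
    _mm_sfence();
    *ready = 1;
}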

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Specification of the instruction's opcode above indicates a ModR/M byte of F8. For this instruction, the processor ignores the r/m field of the ModR/M byte. Thus, SFENCE is encoded by any opcode of the form 0F AE Fx, where x is in the range 8-F.

+

Operation + ¶ +

+
Wait_On_Following_Stores_Until(preceding_stores_globally_visible);
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
void _mm_sfence(void)
+
+

Exceptions (All Operating Modes) + ¶ +

+

#UD If CPUID.01H:EDX.SSE[bit 25] = 0.

+

If the LOCK prefix is used.

diff --git a/x86/sgdt.html b/x86/sgdt.html new file mode 100644 index 0000000..314ab6e --- /dev/null +++ b/x86/sgdt.html @@ -0,0 +1,156 @@ + +SGDT + — Store Global Descriptor Table Register

SGDT + — Store Global Descriptor Table Register

+ +

Opcode1 | Instruction | Op/En | 64-Bit Mode | Compat/Leg Mode | Description
0F 01 /0 | SGDT m | M | Valid | Valid | Store GDTR to m.
+

1. See the IA-32 Architecture Compatibility section below.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (w)N/AN/AN/A
+

Description + ¶ +

+

Stores the content of the global descriptor table register (GDTR) in the destination operand. The destination operand specifies a memory location.

+

In legacy or compatibility mode, the destination operand is a 6-byte memory location. If the operand-size attribute is 16 or 32 bits, the 16-bit limit field of the register is stored in the low 2 bytes of the memory location and the 32-bit base address is stored in the high 4 bytes.

+

In 64-bit mode, the operand size is fixed at 8+2 bytes. The instruction stores an 8-byte base and a 2-byte limit.

+

SGDT is useful only by operating-system software. However, it can be used in application programs without causing an exception to be generated if CR4.UMIP = 0. See “LGDT/LIDT—Load Global/Interrupt Descriptor Table Register” in Chapter 3, Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, for information on loading the GDTR and IDTR.
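A minimal sketch of reading the GDTR from a 64-bit user-mode program with GCC/Clang inline assembly (assuming CR4.UMIP = 0 so the instruction does not fault):

#include <stdint.h>
#include <stdio.h>

/* 10-byte image stored by SGDT in 64-bit mode: 2-byte limit, 8-byte base. */
struct __attribute__((packed)) gdtr_image {
    uint16_t limit;
    uint64_t base;
};

int main(void)
{
    struct gdtr_image gdtr;
    __asm__ volatile ("sgdt %0" : "=m"(gdtr));
    printf("GDTR base = 0x%016llx, limit = 0x%04x\n",
           (unsigned long long)gdtr.base, gdtr.limit);
    return 0;
}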

+

IA-32 Architecture Compatibility + ¶ +

+

The 16-bit form of the SGDT is compatible with the Intel 286 processor if the upper 8 bits are not referenced. The Intel 286 processor fills these bits with 1s; processor generations later than the Intel 286 processor fill these bits with 0s.

+

Operation + ¶ +

+
IF instruction is SGDT
+    IF OperandSize =16 or OperandSize = 32 (* Legacy or Compatibility Mode *)
+        THEN
+            DEST[0:15] := GDTR(Limit);
+            DEST[16:47] := GDTR(Base); (* Full 32-bit base address stored *)
+            FI;
+        ELSE (* 64-bit Mode *)
+            DEST[0:15] := GDTR(Limit);
+            DEST[16:79] := GDTR(Base); (* Full 64-bit base address stored *)
+    FI;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
#GP(0)If the destination is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
If CR4.UMIP = 1 and CPL > 0.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while CPL = 3.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#UDIf the LOCK prefix is used.
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If CR4.UMIP = 1.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#UDIf the LOCK prefix is used.
#GP(0)If the memory address is in a non-canonical form.
If CR4.UMIP = 1 and CPL > 0.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while CPL = 3.
diff --git a/x86/sha1msg1.html b/x86/sha1msg1.html new file mode 100644 index 0000000..f12395c --- /dev/null +++ b/x86/sha1msg1.html @@ -0,0 +1,74 @@ + +SHA1MSG1 + — Perform an Intermediate Calculation for the Next Four SHA1 Message Dwords

SHA1MSG1 + — Perform an Intermediate Calculation for the Next Four SHA1 Message Dwords

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F 38 C9 /r SHA1MSG1 xmm1, xmm2/m128 | RM | V/V | SHA | Performs an intermediate calculation for the next four SHA1 message dwords using previous message dwords from xmm1 and xmm2/m128, storing the result in xmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3
RMModRM:reg (r, w)ModRM:r/m (r)N/A
+

Description + ¶ +

+

The SHA1MSG1 instruction is one of two SHA1 message scheduling instructions. The instruction performs an intermediate calculation for the next four SHA1 message dwords.

+

Operation + ¶ +

+

SHA1MSG1 + ¶ +

+
W0 := SRC1[127:96] ;
+W1 := SRC1[95:64] ;
+W2 := SRC1[63: 32] ;
+W3 := SRC1[31: 0] ;
+W4 := SRC2[127:96] ;
+W5 := SRC2[95:64] ;
+DEST[127:96] := W2 XOR W0;
+DEST[95:64] := W3 XOR W1;
+DEST[63:32] := W4 XOR W2;
+DEST[31:0] := W5 XOR W3;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
SHA1MSG1 __m128i _mm_sha1msg1_epu32(__m128i, __m128i);
+
+

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions.”

diff --git a/x86/sha1msg2.html b/x86/sha1msg2.html new file mode 100644 index 0000000..63ad449 --- /dev/null +++ b/x86/sha1msg2.html @@ -0,0 +1,75 @@ + +SHA1MSG2 + — Perform a Final Calculation for the Next Four SHA1 Message Dwords

SHA1MSG2 + — Perform a Final Calculation for the Next Four SHA1 Message Dwords

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F 38 CA /r SHA1MSG2 xmm1, xmm2/m128 | RM | V/V | SHA | Performs the final calculation for the next four SHA1 message dwords using intermediate results from xmm1 and the previous message dwords from xmm2/m128, storing the result in xmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3
RMModRM:reg (r, w)ModRM:r/m (r)N/A
+

Description + ¶ +

+

The SHA1MSG2 instruction is one of two SHA1 message scheduling instructions. The instruction performs the final calculation to derive the next four SHA1 message dwords.
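Paired with SHA1MSG1, one common scheduling pattern looks like the hedged C sketch below (msg0..msg3 hold the previous sixteen message dwords W[i-16..i-13], W[i-12..i-9], W[i-8..i-5], and W[i-4..i-1]; compile with the SHA extensions enabled, e.g., -msha):

#include <immintrin.h>

/* Derive the next four SHA-1 message dwords W[i..i+3]. */
static __m128i sha1_next_w(__m128i msg0, __m128i msg1, __m128i msg2, __m128i msg3)
{
    __m128i tmp = _mm_sha1msg1_epu32(msg0, msg1); /* intermediate XOR terms */
    tmp = _mm_xor_si128(tmp, msg2);               /* fold in W[i-8..i-5]    */
    return _mm_sha1msg2_epu32(tmp, msg3);         /* final XOR and rotate   */
}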

+

Operation + ¶ +

+

SHA1MSG2 + ¶ +

+
W13 := SRC2[95:64] ;
+W14 := SRC2[63: 32] ;
+W15 := SRC2[31: 0] ;
+W16 := (SRC1[127:96] XOR W13 ) ROL 1;
+W17 := (SRC1[95:64] XOR W14) ROL 1;
+W18 := (SRC1[63: 32] XOR W15) ROL 1;
+W19 := (SRC1[31: 0] XOR W16) ROL 1;
+DEST[127:96] := W16;
+DEST[95:64] := W17;
+DEST[63:32] := W18;
+DEST[31:0] := W19;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
SHA1MSG2 __m128i _mm_sha1msg2_epu32(__m128i, __m128i);
+
+

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions.”

diff --git a/x86/sha1nexte.html b/x86/sha1nexte.html new file mode 100644 index 0000000..ac24d86 --- /dev/null +++ b/x86/sha1nexte.html @@ -0,0 +1,69 @@ + +SHA1NEXTE + — Calculate SHA1 State Variable E After Four Rounds

SHA1NEXTE + — Calculate SHA1 State Variable E After Four Rounds

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F 38 C8 /r SHA1NEXTE xmm1, xmm2/m128 | RM | V/V | SHA | Calculates SHA1 state variable E after four rounds of operation from the current SHA1 state variable A in xmm1. The calculated value of the SHA1 state variable E is added to the scheduled dwords in xmm2/m128, and stored with some of the scheduled dwords in xmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3
RMModRM:reg (r, w)ModRM:r/m (r)N/A
+

Description + ¶ +

+

The SHA1NEXTE calculates the SHA1 state variable E after four rounds of operation from the current SHA1 state variable A in the destination operand. The calculated value of the SHA1 state variable E is added to the source operand, which contains the scheduled dwords.

+

Operation + ¶ +

+

SHA1NEXTE + ¶ +

+
TMP := (SRC1[127:96] ROL 30);
+DEST[127:96] := SRC2[127:96] + TMP;
+DEST[95:64] := SRC2[95:64];
+DEST[63:32] := SRC2[63:32];
+DEST[31:0] := SRC2[31:0];
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
SHA1NEXTE __m128i _mm_sha1nexte_epu32(__m128i, __m128i);
+
+

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions.”

diff --git a/x86/sha1rnds4.html b/x86/sha1rnds4.html new file mode 100644 index 0000000..21677bc --- /dev/null +++ b/x86/sha1rnds4.html @@ -0,0 +1,106 @@ + +SHA1RNDS4 + — Perform Four Rounds of SHA1 Operation

SHA1RNDS4 + — Perform Four Rounds of SHA1 Operation

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F 3A CC /r ib SHA1RNDS4 xmm1, xmm2/m128, imm8 | RMI | V/V | SHA | Performs four rounds of SHA1 operation operating on SHA1 state (A,B,C,D) from xmm1, with a pre-computed sum of the next 4 round message dwords and state variable E from xmm2/m128. The immediate byte controls logic functions and round constants.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3
RMIModRM:reg (r, w)ModRM:r/m (r)imm8
+

Description + ¶ +

+

The SHA1RNDS4 instruction performs four rounds of SHA1 operation using an initial SHA1 state (A,B,C,D) from the first operand (which is a source operand and the destination operand) and some pre-computed sum of the next 4 round message dwords, and state variable E from the second operand (a source operand). The updated SHA1 state (A,B,C,D) after four rounds of processing is stored in the destination operand.
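Since the immediate selects the round function and constant, a full 80-round block is normally processed as five groups of four-round calls (imm8 = 0 for rounds 0-19, 1 for rounds 20-39, and so on), with SHA1NEXTE supplying the E term between calls. A hedged C sketch of the first two groups of four rounds:

#include <immintrin.h>

/* abcd holds SHA-1 state (A,B,C,D); e0 holds E in its high dword; msg0 and
   msg1 are the first two groups of four (byte-swapped) message dwords. */
static void sha1_rounds_0_to_7(__m128i *abcd, __m128i e0, __m128i msg0, __m128i msg1)
{
    __m128i st = *abcd;

    /* Rounds 0-3: for the first group, E is simply added to the schedule. */
    __m128i e_plus_w = _mm_add_epi32(e0, msg0);
    __m128i st_4     = _mm_sha1rnds4_epu32(st, e_plus_w, 0); /* imm8=0 selects f0/K0 */

    /* Rounds 4-7: SHA1NEXTE derives the new E from the saved state's A
       and adds it to the next schedule group. */
    __m128i e1 = _mm_sha1nexte_epu32(st, msg1);
    *abcd = _mm_sha1rnds4_epu32(st_4, e1, 0);                /* still within rounds 0-19 */
}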

+

Operation + ¶ +

+

SHA1RNDS4 + ¶ +

+
The function f() and Constant K are dependent on the value of the immediate.
+IF ( imm8[1:0] = 0 )
+    THEN f() := f0(),
+        K := K0;
+ELSE IF ( imm8[1:0] = 1 )
+    THEN f() := f1(),
+        K := K1;
+ELSE IF ( imm8[1:0] = 2 )
+    THEN f() := f2(),
+        K := K2;
+ELSE IF ( imm8[1:0] = 3 )
+    THEN f() := f3(),
+        K := K3;
+FI;
+A := SRC1[127:96];
+B := SRC1[95:64];
+C := SRC1[63:32];
+D := SRC1[31:0];
+W0E := SRC2[127:96];
+W1 := SRC2[95:64];
+W2 := SRC2[63:32];
+W3 := SRC2[31:0];
+Round i = 0 operation:
+A_1 := f (B, C, D) + (A ROL 5) +W0E +K;
+B_1 := A;
+C_1 := B ROL 30;
+D_1 := C;
+E_1 := D;
+FOR i = 1 to 3
+    A_(i +1) := f (B_i, C_i, D_i) + (A_i ROL 5) +Wi+ E_i +K;
+    B_(i +1) := A_i;
+    C_(i +1) := B_i ROL 30;
+    D_(i +1) := C_i;
+    E_(i +1) := D_i;
+ENDFOR
+DEST[127:96] := A_4;
+DEST[95:64] := B_4;
+DEST[63:32] := C_4;
+DEST[31:0] := D_4;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
SHA1RNDS4 __m128i _mm_sha1rnds4_epu32(__m128i, __m128i, const int);
+
+

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions.”

diff --git a/x86/sha256msg1.html b/x86/sha256msg1.html new file mode 100644 index 0000000..974451c --- /dev/null +++ b/x86/sha256msg1.html @@ -0,0 +1,73 @@ + +SHA256MSG1 + — Perform an Intermediate Calculation for the Next Four SHA256 MessageDwords

SHA256MSG1 + — Perform an Intermediate Calculation for the Next Four SHA256 Message Dwords

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F 38 CC /r SHA256MSG1 xmm1, xmm2/m128 | RM | V/V | SHA | Performs an intermediate calculation for the next four SHA256 message dwords using previous message dwords from xmm1 and xmm2/m128, storing the result in xmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3
RMModRM:reg (r, w)ModRM:r/m (r)N/A
+

Description + ¶ +

+

The SHA256MSG1 instruction is one of two SHA256 message scheduling instructions. The instruction performs an intermediate calculation for the next four SHA256 message dwords.

+

Operation + ¶ +

+

SHA256MSG1 + ¶ +

+
W4 := SRC2[31: 0] ;
+W3 := SRC1[127:96] ;
+W2 := SRC1[95:64] ;
+W1 := SRC1[63: 32] ;
+W0 := SRC1[31: 0] ;
+DEST[127:96] := W3 + σ0( W4);
+DEST[95:64] := W2 + σ0( W3);
+DEST[63:32] := W1 + σ0( W2);
+DEST[31:0] := W0 + σ0( W1);
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
SHA256MSG1 __m128i _mm_sha256msg1_epu32(__m128i, __m128i);
+
+

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions.”

diff --git a/x86/sha256msg2.html b/x86/sha256msg2.html new file mode 100644 index 0000000..a0aaa11 --- /dev/null +++ b/x86/sha256msg2.html @@ -0,0 +1,74 @@ + +SHA256MSG2 + — Perform a Final Calculation for the Next Four SHA256 Message Dwords

SHA256MSG2 + — Perform a Final Calculation for the Next Four SHA256 Message Dwords

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F 38 CD /r SHA256MSG2 xmm1, xmm2/m128 | RM | V/V | SHA | Performs the final calculation for the next four SHA256 message dwords using previous message dwords from xmm1 and xmm2/m128, storing the result in xmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3
RMModRM:reg (r, w)ModRM:r/m (r)N/A
+

Description + ¶ +

+

The SHA256MSG2 instruction is one of two SHA2 message scheduling instructions. The instruction performs the final calculation for the next four SHA256 message dwords.

+

Operation + ¶ +

+

SHA256MSG2 + ¶ +

+
W14 := SRC2[95:64] ;
+W15 := SRC2[127:96] ;
+W16 := SRC1[31: 0] + σ1( W14) ;
+W17 := SRC1[63: 32] + σ1( W15) ;
+W18 := SRC1[95: 64] + σ1( W16) ;
+W19 := SRC1[127: 96] + σ1( W17) ;
+DEST[127:96] := W19 ;
+DEST[95:64] := W18 ;
+DEST[63:32] := W17 ;
+DEST[31:0] := W16;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
SHA256MSG2 __m128i _mm_sha256msg2_epu32(__m128i, __m128i);
+
+

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions.”

diff --git a/x86/sha256rnds2.html b/x86/sha256rnds2.html new file mode 100644 index 0000000..d1e8d76 --- /dev/null +++ b/x86/sha256rnds2.html @@ -0,0 +1,97 @@ + +SHA256RNDS2 + — Perform Two Rounds of SHA256 Operation

SHA256RNDS2 + — Perform Two Rounds of SHA256 Operation

Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F 38 CB /r SHA256RNDS2 xmm1, xmm2/m128, <XMM0> | RMI | V/V | SHA | Perform 2 rounds of SHA256 operation using an initial SHA256 state (C,D,G,H) from xmm1, an initial SHA256 state (A,B,E,F) from xmm2/m128, and a pre-computed sum of the next 2 round message dwords and the corresponding round constants from the implicit operand XMM0, storing the updated SHA256 state (A,B,E,F) result in xmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3
RMIModRM:reg (r, w)ModRM:r/m (r)Implicit XMM0 (r)
+

Description + ¶ +

+

The SHA256RNDS2 instruction performs 2 rounds of SHA256 operation using an initial SHA256 state (C,D,G,H) from the first operand, an initial SHA256 state (A,B,E,F) from the second operand, and a pre-computed sum of the next 2 round message dwords and the corresponding round constants from the implicit operand xmm0. Note that only the two lower dwords of XMM0 are used by the instruction.

+

The updated SHA256 state (A,B,E,F) is written to the first operand, and the second operand can be used as the updated state (C,D,G,H) in later rounds.
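A hedged C sketch of four rounds, showing how the two halves of the state ping-pong between calls (abef holds (A,B,E,F), cdgh holds (C,D,G,H), and wk holds W[t]+K[t] for rounds 0-3, with the values for rounds 0-1 in the low quadword):

#include <immintrin.h>

static void sha256_four_rounds(__m128i *abef, __m128i *cdgh, __m128i wk)
{
    /* Rounds 0-1: returns updated (A,B,E,F); the old (A,B,E,F) becomes
       the new (C,D,G,H). */
    __m128i abef_2 = _mm_sha256rnds2_epu32(*cdgh, *abef, wk);

    /* Rounds 2-3: move the W+K pair for rounds 2-3 into the low quadword. */
    __m128i wk_hi  = _mm_shuffle_epi32(wk, 0x0E);
    __m128i abef_4 = _mm_sha256rnds2_epu32(*abef, abef_2, wk_hi);

    *cdgh = abef_2;
    *abef = abef_4;
}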

+

Operation + ¶ +

+

SHA256RNDS2 + ¶ +

+
A_0 := SRC2[127:96];
+B_0 := SRC2[95:64];
+C_0 := SRC1[127:96];
+D_0 := SRC1[95:64];
+E_0 := SRC2[63:32];
+F_0 := SRC2[31:0];
+G_0 := SRC1[63:32];
+H_0 := SRC1[31:0];
+WK0 := XMM0[31: 0];
+WK1 := XMM0[63: 32];
+FOR i = 0 to 1
+    A_(i +1) :=
+        Ch (E_i, F_i, G_i) +Σ1( E_i) +WKi+ H_i + Maj(A_i , B_i, C_i) +Σ0( A_i);
+    B_(i +1) :=
+        A_i;
+    C_(i +1) :=
+        B_i ;
+    D_(i +1) :=
+        C_i;
+    E_(i +1) :=
+        Ch (E_i, F_i, G_i) +Σ1( E_i) +WKi+ H_i + D_i;
+    F_(i +1) :=
+        E_i ;
+    G_(i +1) :=
+        F_i;
+    H_(i +1) :=
+        G_i;
+ENDFOR
+DEST[127:96] := A_2;
+DEST[95:64] := B_2;
+DEST[63:32] := E_2;
+DEST[31:0] := F_2;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
SHA256RNDS2 __m128i _mm_sha256rnds2_epu32(__m128i, __m128i, __m128i);
+
+

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions.”

diff --git a/x86/shld.html b/x86/shld.html new file mode 100644 index 0000000..66d9f5e --- /dev/null +++ b/x86/shld.html @@ -0,0 +1,201 @@ + +SHLD + — Double Precision Shift Left

SHLD + — Double Precision Shift Left

Opcode*InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F A4 /r ibSHLD r/m16, r16, imm8MRIValidValidShift r/m16 to left imm8 places while shifting bits from r16 in from the right.
0F A5 /rSHLD r/m16, r16, CLMRCValidValidShift r/m16 to left CL places while shifting bits from r16 in from the right.
0F A4 /r ibSHLD r/m32, r32, imm8MRIValidValidShift r/m32 to left imm8 places while shifting bits from r32 in from the right.
REX.W + 0F A4 /r ibSHLD r/m64, r64, imm8MRIValidN.E.Shift r/m64 to left imm8 places while shifting bits from r64 in from the right.
0F A5 /rSHLD r/m32, r32, CLMRCValidValidShift r/m32 to left CL places while shifting bits from r32 in from the right.
REX.W + 0F A5 /rSHLD r/m64, r64, CLMRCValidN.E.Shift r/m64 to left CL places while shifting bits from r64 in from the right.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MRIModRM:r/m (w)ModRM:reg (r)imm8N/A
MRCModRM:r/m (w)ModRM:reg (r)CLN/A
+

Description + ¶ +

+

The SHLD instruction is used for multi-precision shifts of 64 bits or more.

+

The instruction shifts the first operand (destination operand) to the left the number of bits specified by the third operand (count operand). The second operand (source operand) provides bits to shift in from the right (starting with bit 0 of the destination operand).

+

The destination operand can be a register or a memory location; the source operand is a register. The count operand is an unsigned integer that can be stored in an immediate byte or in the CL register. If the count operand is CL, the shift count is the logical AND of CL and a count mask. In non-64-bit modes and default 64-bit mode, only bits 0 through 4 of the count are used. This masks the count to a value between 0 and 31. If a count is greater than the operand size, the result is undefined.

+

If the count is 1 or greater, the CF flag is filled with the last bit shifted out of the destination operand. For a 1-bit shift, the OF flag is set if a sign change occurred; otherwise, it is cleared. If the count operand is 0, flags are not affected.

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). Using a REX prefix in the form of REX.W promotes operation to 64 bits (upgrading the count mask to 6 bits). See the summary chart at the beginning of this section for encoding data and limits.
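A rough C equivalent for a 32-bit destination and a nonzero count below 32 (a sketch that ignores the flag behavior described above), showing how SHLD composes a wide left shift from two halves:

#include <stdint.h>
#include <stdio.h>

static uint32_t shld32(uint32_t dest, uint32_t src, unsigned count)
{
    /* Valid for 0 < count < 32, matching the masked-count case. */
    return (dest << count) | (src >> (32 - count));
}

int main(void)
{
    /* Shift the 64-bit value 0x11223344AABBCCDD left by 8 as two halves. */
    uint32_t hi = 0x11223344u, lo = 0xAABBCCDDu;
    uint32_t new_hi = shld32(hi, lo, 8); /* 0x223344AA */
    uint32_t new_lo = lo << 8;           /* 0xBBCCDD00 */
    printf("%08x%08x\n", new_hi, new_lo);
    return 0;
}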

+

Operation + ¶ +

+
IF (In 64-Bit Mode and REX.W = 1)
+    THEN COUNT := COUNT MOD 64;
+    ELSE COUNT := COUNT MOD 32;
+FI
+SIZE := OperandSize;
+IF COUNT = 0
+    THEN
+        No operation;
+    ELSE
+        IF COUNT > SIZE
+            THEN (* Bad parameters *)
+                DEST is undefined;
+                CF, OF, SF, ZF, AF, PF are undefined;
+            ELSE (* Perform the shift *)
+                CF := BIT[DEST, SIZE – COUNT];
+                (* Last bit shifted out on exit *)
+                FOR i := SIZE – 1 DOWN TO COUNT
+                    DO
+                        Bit(DEST, i) := Bit(DEST, i – COUNT);
+                    OD;
+                FOR i := COUNT – 1 DOWN TO 0
+                    DO
+                        BIT[DEST, i] := BIT[SRC, i – COUNT + SIZE];
+                    OD;
+        FI;
+FI;
+
+

Flags Affected + ¶ +

+

If the count is 1 or greater, the CF flag is filled with the last bit shifted out of the destination operand and the SF, ZF, and PF flags are set according to the value of the result. For a 1-bit shift, the OF flag is set if a sign change occurred; otherwise, it is cleared. For shifts greater than 1 bit, the OF flag is undefined. If a shift occurs, the AF flag is undefined. If the count operand is 0, the flags are not affected. If the count is greater than the operand size, the flags are undefined.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/shrd.html b/x86/shrd.html new file mode 100644 index 0000000..f4de21e --- /dev/null +++ b/x86/shrd.html @@ -0,0 +1,200 @@ + +SHRD + — Double Precision Shift Right

SHRD + — Double Precision Shift Right

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode*InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F AC /r ibSHRD r/m16, r16, imm8MRIValidValidShift r/m16 to right imm8 places while shifting bits from r16 in from the left.
0F AD /rSHRD r/m16, r16, CLMRCValidValidShift r/m16 to right CL places while shifting bits from r16 in from the left.
0F AC /r ibSHRD r/m32, r32, imm8MRIValidValidShift r/m32 to right imm8 places while shifting bits from r32 in from the left.
REX.W + 0F AC /r ibSHRD r/m64, r64, imm8MRIValidN.E.Shift r/m64 to right imm8 places while shifting bits from r64 in from the left.
0F AD /rSHRD r/m32, r32, CLMRCValidValidShift r/m32 to right CL places while shifting bits from r32 in from the left.
REX.W + 0F AD /rSHRD r/m64, r64, CLMRCValidN.E.Shift r/m64 to right CL places while shifting bits from r64 in from the left.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MRIModRM:r/m (w)ModRM:reg (r)imm8N/A
MRCModRM:r/m (w)ModRM:reg (r)CLN/A
+

Description + ¶ +

+

The SHRD instruction is useful for multi-precision shifts of 64 bits or more.

+

The instruction shifts the first operand (destination operand) to the right the number of bits specified by the third operand (count operand). The second operand (source operand) provides bits to shift in from the left (starting with the most significant bit of the destination operand).

+

The destination operand can be a register or a memory location; the source operand is a register. The count operand is an unsigned integer that can be stored in an immediate byte or the CL register. If the count operand is CL, the shift count is the logical AND of CL and a count mask. In non-64-bit modes and default 64-bit mode, the width of the count mask is 5 bits. Only bits 0 through 4 of the count register are used (masking the count to a value between 0 and 31). If the count is greater than the operand size, the result is undefined.

+

If the count is 1 or greater, the CF flag is filled with the last bit shifted out of the destination operand. For a 1-bit shift, the OF flag is set if a sign change occurred; otherwise, it is cleared. If the count operand is 0, flags are not affected.

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). Using a REX prefix in the form of REX.W promotes operation to 64 bits (upgrading the count mask to 6 bits). See the summary chart at the beginning of this section for encoding data and limits.
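For illustration of the multi-precision use case, a minimal C sketch of the REX.W SHRD semantics; the helper name shrd64 is illustrative only, and a C99 compiler with <stdint.h> is assumed.

#include <stdint.h>

/* Illustrative sketch: SHRD with a 64-bit destination. dest is shifted right;
   src supplies the bits shifted in from the left. A 128-bit value {hi:lo} can
   therefore be shifted right by n (1..63) with: lo = shrd64(lo, hi, n); hi >>= n; */
static inline uint64_t shrd64(uint64_t dest, uint64_t src, unsigned count)
{
    count &= 63;                           /* COUNT := COUNT MOD 64 */
    if (count == 0)
        return dest;                       /* no operation; flags unchanged */
    return (dest >> count) | (src << (64 - count));
}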

+

Operation + ¶ +

+
IF (In 64-Bit Mode and REX.W = 1)
+    THEN COUNT := COUNT MOD 64;
+    ELSE COUNT := COUNT MOD 32;
+FI
+SIZE := OperandSize;
+IF COUNT = 0
+    THEN
+        No operation;
+    ELSE
+        IF COUNT > SIZE
+            THEN (* Bad parameters *)
+                DEST is undefined;
+                CF, OF, SF, ZF, AF, PF are undefined;
+            ELSE (* Perform the shift *)
+                CF := BIT[DEST, COUNT – 1]; (* Last bit shifted out on exit *)
+                FOR i := 0 TO SIZE – 1 – COUNT
+                    DO
+                        BIT[DEST, i] := BIT[DEST, i + COUNT];
+                    OD;
+                FOR i := SIZE – COUNT TO SIZE – 1
+                    DO
+                        BIT[DEST,i] := BIT[SRC, i + COUNT – SIZE];
+                    OD;
+        FI;
+FI;
+
+

Flags Affected + ¶ +

+

If the count is 1 or greater, the CF flag is filled with the last bit shifted out of the destination operand and the SF, ZF, and PF flags are set according to the value of the result. For a 1-bit shift, the OF flag is set if a sign change occurred; otherwise, it is cleared. For shifts greater than 1 bit, the OF flag is undefined. If a shift occurs, the AF flag is undefined. If the count operand is 0, the flags are not affected. If the count is greater than the operand size, the flags are undefined.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/shufpd.html b/x86/shufpd.html new file mode 100644 index 0000000..3746561 --- /dev/null +++ b/x86/shufpd.html @@ -0,0 +1,389 @@ + +SHUFPD + — Packed Interleave Shuffle of Pairs of Double Precision Floating-Point Values

SHUFPD + — Packed Interleave Shuffle of Pairs of Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F C6 /r ib SHUFPD xmm1, xmm2/m128, imm8AV/VSSE2Shuffle two pairs of double precision floating-point values from xmm1 and xmm2/m128 using imm8 to select from each pair, interleaved result is stored in xmm1.
VEX.128.66.0F.WIG C6 /r ib VSHUFPD xmm1, xmm2, xmm3/m128, imm8BV/VAVXShuffle two pairs of double precision floating-point values from xmm2 and xmm3/m128 using imm8 to select from each pair, interleaved result is stored in xmm1.
VEX.256.66.0F.WIG C6 /r ib VSHUFPD ymm1, ymm2, ymm3/m256, imm8BV/VAVXShuffle four pairs of double precision floating-point values from ymm2 and ymm3/m256 using imm8 to select from each pair, interleaved result is stored in ymm1.
EVEX.128.66.0F.W1 C6 /r ib VSHUFPD xmm1{k1}{z}, xmm2, xmm3/m128/m64bcst, imm8CV/VAVX512VL AVX512FShuffle two pairs of double precision floating-point values from xmm2 and xmm3/m128/m64bcst using imm8 to select from each pair. store interleaved results in xmm1 subject to writemask k1.
EVEX.256.66.0F.W1 C6 /r ib VSHUFPD ymm1{k1}{z}, ymm2, ymm3/m256/m64bcst, imm8CV/VAVX512VL AVX512FShuffle four pairs of double precision floating-point values from ymm2 and ymm3/m256/m64bcst using imm8 to select from each pair. store interleaved results in ymm1 subject to writemask k1.
EVEX.512.66.0F.W1 C6 /r ib VSHUFPD zmm1{k1}{z}, zmm2, zmm3/m512/m64bcst, imm8CV/VAVX512FShuffle eight pairs of double precision floating-point values from zmm2 and zmm3/m512/m64bcst using imm8 to select from each pair. store interleaved results in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)imm8N/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

Selects a double precision floating-point value from an input pair using a bit control and moves it to a designated element of the destination operand. The elements of the destination operand, in low-to-high order, are interleaved between the first source operand and the second source operand at the granularity of an input pair of 128 bits. Each bit in the imm8 byte, starting from bit 0, is the select control for the corresponding element of the destination to receive the shuffled result of an input pair.

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand is a ZMM/YMM/XMM register updated according to the writemask. The select controls are the lower 8/4/2 bits of the imm8 byte.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register. The select controls are bits 3:0 of the imm8 byte; imm8[7:4] are ignored.

+

VEX.128 encoded version: The first source operand is an XMM register. The second source operand can be an XMM register or a 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed. The select controls are bits 1:0 of the imm8 byte; imm8[7:2] are ignored.

+

128-bit Legacy SSE version: The second source can be an XMM register or a 128-bit memory location. The destination operand and the first source operand are the same XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified. The select controls are bits 1:0 of the imm8 byte; imm8[7:2] are ignored.
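For illustration, a minimal C sketch using the SSE2 intrinsic listed under “Intel C/C++ Compiler Intrinsic Equivalent” below; the input values and printed output are illustrative only.

#include <emmintrin.h>
#include <stdio.h>

int main(void)
{
    __m128d a = _mm_set_pd(11.0, 10.0);      /* a = {10.0, 11.0} (low, high) */
    __m128d b = _mm_set_pd(21.0, 20.0);      /* b = {20.0, 21.0} */
    /* imm8 = 1: bit 0 = 1 selects a[1] for the low result element,
                 bit 1 = 0 selects b[0] for the high result element. */
    __m128d r = _mm_shuffle_pd(a, b, 1);
    double out[2];
    _mm_storeu_pd(out, r);
    printf("%f %f\n", out[0], out[1]);       /* prints 11.000000 20.000000 */
    return 0;
}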

+
[Figure: SRC1 = {X3, X2, X1, X0}, SRC2 = {Y3, Y2, Y1, Y0}; DEST, from low to high, receives {X0 or X1, Y0 or Y1, X2 or X3, Y2 or Y3}.]
Figure 4-25. 256-bit VSHUFPD Operation of Four Pairs of Double Precision Floating-Point Values
+

Operation + ¶ +

+

VSHUFPD (EVEX Encoded Versions When SRC2 is a Vector Register) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF IMM0[0] = 0
+    THEN TMP_DEST[63:0] := SRC1[63:0]
+    ELSE TMP_DEST[63:0] := SRC1[127:64] FI;
+IF IMM0[1] = 0
+    THEN TMP_DEST[127:64] := SRC2[63:0]
+    ELSE TMP_DEST[127:64] := SRC2[127:64] FI;
+IF VL >= 256
+    IF IMM0[2] = 0
+        THEN TMP_DEST[191:128] := SRC1[191:128]
+        ELSE TMP_DEST[191:128] := SRC1[255:192] FI;
+    IF IMM0[3] = 0
+        THEN TMP_DEST[255:192] := SRC2[191:128]
+        ELSE TMP_DEST[255:192] := SRC2[255:192] FI;
+FI;
+IF VL >= 512
+    IF IMM0[4] = 0
+        THEN TMP_DEST[319:256] := SRC1[319:256]
+        ELSE TMP_DEST[319:256] := SRC1[383:320] FI;
+    IF IMM0[5] = 0
+        THEN TMP_DEST[383:320] := SRC2[319:256]
+        ELSE TMP_DEST[383:320] := SRC2[383:320] FI;
+    IF IMM0[6] = 0
+        THEN TMP_DEST[447:384] := SRC1[447:384]
+        ELSE TMP_DEST[447:384] := SRC1[511:448] FI;
+    IF IMM0[7] = 0
+        THEN TMP_DEST[511:448] := SRC2[447:384]
+        ELSE TMP_DEST[511:448] := SRC2[511:448] FI;
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VSHUFPD (EVEX Encoded Versions When SRC2 is Memory) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF (EVEX.b = 1)
+        THEN TMP_SRC2[i+63:i] := SRC2[63:0]
+        ELSE TMP_SRC2[i+63:i] := SRC2[i+63:i]
+    FI;
+ENDFOR;
+IF IMM0[0] = 0
+    THEN TMP_DEST[63:0] := SRC1[63:0]
+    ELSE TMP_DEST[63:0] := SRC1[127:64] FI;
+IF IMM0[1] = 0
+    THEN TMP_DEST[127:64] := TMP_SRC2[63:0]
+    ELSE TMP_DEST[127:64] := TMP_SRC2[127:64] FI;
+IF VL >= 256
+    IF IMM0[2] = 0
+        THEN TMP_DEST[191:128] := SRC1[191:128]
+        ELSE TMP_DEST[191:128] := SRC1[255:192] FI;
+    IF IMM0[3] = 0
+        THEN TMP_DEST[255:192] := TMP_SRC2[191:128]
+        ELSE TMP_DEST[255:192] := TMP_SRC2[255:192] FI;
+FI;
+IF VL >= 512
+    IF IMM0[4] = 0
+        THEN TMP_DEST[319:256] := SRC1[319:256]
+        ELSE TMP_DEST[319:256] := SRC1[383:320] FI;
+    IF IMM0[5] = 0
+        THEN TMP_DEST[383:320] := TMP_SRC2[319:256]
+        ELSE TMP_DEST[383:320] := TMP_SRC2[383:320] FI;
+    IF IMM0[6] = 0
+        THEN TMP_DEST[447:384] := SRC1[447:384]
+        ELSE TMP_DEST[447:384] := SRC1[511:448] FI;
+    IF IMM0[7] = 0
+        THEN TMP_DEST[511:448] := TMP_SRC2[447:384]
+        ELSE TMP_DEST[511:448] := TMP_SRC2[511:448] FI;
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VSHUFPD (VEX.256 Encoded Version) + ¶ +

+
IF IMM0[0] = 0
+    THEN DEST[63:0] := SRC1[63:0]
+    ELSE DEST[63:0] := SRC1[127:64] FI;
+IF IMM0[1] = 0
+    THEN DEST[127:64] := SRC2[63:0]
+    ELSE DEST[127:64] := SRC2[127:64] FI;
+IF IMM0[2] = 0
+    THEN DEST[191:128] := SRC1[191:128]
+    ELSE DEST[191:128] := SRC1[255:192] FI;
+IF IMM0[3] = 0
+    THEN DEST[255:192] := SRC2[191:128]
+    ELSE DEST[255:192] := SRC2[255:192] FI;
+DEST[MAXVL-1:256] (Unmodified)
+
+

VSHUFPD (VEX.128 Encoded Version) + ¶ +

+
IF IMM0[0] = 0
+    THEN DEST[63:0] := SRC1[63:0]
+    ELSE DEST[63:0] := SRC1[127:64] FI;
+IF IMM0[1] = 0
+    THEN DEST[127:64] := SRC2[63:0]
+    ELSE DEST[127:64] := SRC2[127:64] FI;
+DEST[MAXVL-1:128] := 0
+
+

VSHUFPD (128-bit Legacy SSE Version) + ¶ +

+
IF IMM0[0] = 0
+    THEN DEST[63:0] := SRC1[63:0]
+    ELSE DEST[63:0] := SRC1[127:64] FI;
+IF IMM0[1] = 0
+    THEN DEST[127:64] := SRC2[63:0]
+    ELSE DEST[127:64] := SRC2[127:64] FI;
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VSHUFPD __m512d _mm512_shuffle_pd(__m512d a, __m512d b, int imm);
+
+
VSHUFPD __m512d _mm512_mask_shuffle_pd(__m512d s, __mmask8 k, __m512d a, __m512d b, int imm);
+
+
VSHUFPD __m512d _mm512_maskz_shuffle_pd( __mmask8 k, __m512d a, __m512d b, int imm);
+
+
VSHUFPD __m256d _mm256_shuffle_pd (__m256d a, __m256d b, const int select);
+
+
VSHUFPD __m256d _mm256_mask_shuffle_pd(__m256d s, __mmask8 k, __m256d a, __m256d b, int imm);
+
+
VSHUFPD __m256d _mm256_maskz_shuffle_pd( __mmask8 k, __m256d a, __m256d b, int imm);
+
+
SHUFPD __m128d _mm_shuffle_pd (__m128d a, __m128d b, const int select);
+
+
VSHUFPD __m128d _mm_mask_shuffle_pd(__m128d s, __mmask8 k, __m128d a, __m128d b, int imm);
+
+
VSHUFPD __m128d _mm_maskz_shuffle_pd( __mmask8 k, __m128d a, __m128d b, int imm);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/shufps.html b/x86/shufps.html new file mode 100644 index 0000000..9c4d7ad --- /dev/null +++ b/x86/shufps.html @@ -0,0 +1,426 @@ + +SHUFPS + — Packed Interleave Shuffle of Quadruplets of Single Precision Floating-Point Values

SHUFPS + — Packed Interleave Shuffle of Quadruplets of Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F C6 /r ib SHUFPS xmm1, xmm3/m128, imm8AV/VSSESelect from quadruplet of single precision floating-point values in xmm1 and xmm2/m128 using imm8, interleaved result pairs are stored in xmm1.
VEX.128.0F.WIG C6 /r ib VSHUFPS xmm1, xmm2, xmm3/m128, imm8BV/VAVXSelect from quadruplet of single precision floating-point values in xmm1 and xmm2/m128 using imm8, interleaved result pairs are stored in xmm1.
VEX.256.0F.WIG C6 /r ib VSHUFPS ymm1, ymm2, ymm3/m256, imm8BV/VAVXSelect from quadruplet of single precision floating-point values in ymm2 and ymm3/m256 using imm8, interleaved result pairs are stored in ymm1.
EVEX.128.0F.W0 C6 /r ib VSHUFPS xmm1{k1}{z}, xmm2, xmm3/m128/m32bcst, imm8CV/VAVX512VL AVX512FSelect from quadruplet of single precision floating-point values in xmm1 and xmm2/m128 using imm8, interleaved result pairs are stored in xmm1, subject to writemask k1.
EVEX.256.0F.W0 C6 /r ib VSHUFPS ymm1{k1}{z}, ymm2, ymm3/m256/m32bcst, imm8CV/VAVX512VL AVX512FSelect from quadruplet of single precision floating-point values in ymm2 and ymm3/m256 using imm8, interleaved result pairs are stored in ymm1, subject to writemask k1.
EVEX.512.0F.W0 C6 /r ib VSHUFPS zmm1{k1}{z}, zmm2, zmm3/m512/m32bcst, imm8CV/VAVX512FSelect from quadruplet of single precision floating-point values in zmm2 and zmm3/m512 using imm8, interleaved result pairs are stored in zmm1, subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)imm8N/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

Selects a single precision floating-point value from an input quadruplet using a two-bit control and moves it to a designated element of the destination operand. Each 64-bit element-pair of a 128-bit lane of the destination operand is interleaved between the corresponding lane of the first source operand and the second source operand at the granularity of 128 bits. Each two-bit field in the imm8 byte, starting from bit 0, is the select control for the corresponding element of a 128-bit lane of the destination to receive the shuffled result of an input quadruplet. The two lower elements of a 128-bit lane in the destination receive shuffle results from the quadruplet of the first source operand. The next two elements of the destination receive shuffle results from the quadruplet of the second source operand.

+

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM/YMM/XMM register updated according to the writemask. imm8[7:0] provides 4 select controls for each applicable 128-bit lane of the destination.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register. Imm8[7:0] provides 4 select controls for the high and low 128-bit of the destination.

+

VEX.128 encoded version: The first source operand is a XMM register. The second source operand can be a XMM register or a 128-bit memory location. The destination operand is a XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed. Imm8[7:0] provides 4 select controls for each element of the destination.

+

128-bit Legacy SSE version: The source can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified. Imm8[7:0] provides 4 select controls for each element of the destination.
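For illustration, a minimal C sketch using the SSE intrinsic listed under “Intel C/C++ Compiler Intrinsic Equivalent” below; the _MM_SHUFFLE macro packs the four two-bit selectors (highest element first) into the imm8 byte. Input values and printed output are illustrative only.

#include <xmmintrin.h>
#include <stdio.h>

int main(void)
{
    __m128 a = _mm_set_ps(3.0f, 2.0f, 1.0f, 0.0f);   /* a = {0, 1, 2, 3} (low..high) */
    __m128 b = _mm_set_ps(7.0f, 6.0f, 5.0f, 4.0f);   /* b = {4, 5, 6, 7} */
    /* Low two result elements come from a, high two from b:
       r = { a[2], a[0], b[3], b[1] } */
    __m128 r = _mm_shuffle_ps(a, b, _MM_SHUFFLE(1, 3, 0, 2));
    float out[4];
    _mm_storeu_ps(out, r);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);  /* prints 2 0 7 5 */
    return 0;
}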

+
[Figure: SRC1 = {X7 .. X0}, SRC2 = {Y7 .. Y0}; within each 128-bit lane of DEST, the two low elements are selected from SRC1 (X3..X0 for the low lane, X7..X4 for the high lane) and the two high elements from SRC2 (Y3..Y0, Y7..Y4).]
Figure 4-26. 256-bit VSHUFPS Operation of Selection from Input Quadruplet and Pair-wise Interleaved Result
+

Operation + ¶ +

+
Select4(SRC, control) {
+CASE (control[1:0]) OF
+    0: TMP := SRC[31:0];
+    1: TMP := SRC[63:32];
+    2: TMP := SRC[95:64];
+    3: TMP := SRC[127:96];
+ESAC;
+RETURN TMP
+}
+
+

VSHUFPS (EVEX Encoded Versions When SRC2 is a Vector Register) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+TMP_DEST[31:0] := Select4(SRC1[127:0], imm8[1:0]);
+TMP_DEST[63:32] := Select4(SRC1[127:0], imm8[3:2]);
+TMP_DEST[95:64] := Select4(SRC2[127:0], imm8[5:4]);
+TMP_DEST[127:96] := Select4(SRC2[127:0], imm8[7:6]);
+IF VL >= 256
+    TMP_DEST[159:128] := Select4(SRC1[255:128], imm8[1:0]);
+    TMP_DEST[191:160] := Select4(SRC1[255:128], imm8[3:2]);
+    TMP_DEST[223:192] := Select4(SRC2[255:128], imm8[5:4]);
+    TMP_DEST[255:224] := Select4(SRC2[255:128], imm8[7:6]);
+FI;
+IF VL >= 512
+    TMP_DEST[287:256] := Select4(SRC1[383:256], imm8[1:0]);
+    TMP_DEST[319:288] := Select4(SRC1[383:256], imm8[3:2]);
+    TMP_DEST[351:320] := Select4(SRC2[383:256], imm8[5:4]);
+    TMP_DEST[383:352] := Select4(SRC2[383:256], imm8[7:6]);
+    TMP_DEST[415:384] := Select4(SRC1[511:384], imm8[1:0]);
+    TMP_DEST[447:416] := Select4(SRC1[511:384], imm8[3:2]);
+    TMP_DEST[479:448] := Select4(SRC2[511:384], imm8[5:4]);
+    TMP_DEST[511:480] := Select4(SRC2[511:384], imm8[7:6]);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VSHUFPS (EVEX Encoded Versions When SRC2 is Memory) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF (EVEX.b = 1)
+        THEN TMP_SRC2[i+31:i] := SRC2[31:0]
+        ELSE TMP_SRC2[i+31:i] := SRC2[i+31:i]
+    FI;
+ENDFOR;
+TMP_DEST[31:0] := Select4(SRC1[127:0], imm8[1:0]);
+TMP_DEST[63:32] := Select4(SRC1[127:0], imm8[3:2]);
+TMP_DEST[95:64] := Select4(TMP_SRC2[127:0], imm8[5:4]);
+TMP_DEST[127:96] := Select4(TMP_SRC2[127:0], imm8[7:6]);
+IF VL >= 256
+    TMP_DEST[159:128] := Select4(SRC1[255:128], imm8[1:0]);
+    TMP_DEST[191:160] := Select4(SRC1[255:128], imm8[3:2]);
+    TMP_DEST[223:192] := Select4(TMP_SRC2[255:128], imm8[5:4]);
+    TMP_DEST[255:224] := Select4(TMP_SRC2[255:128], imm8[7:6]);
+FI;
+IF VL >= 512
+    TMP_DEST[287:256] := Select4(SRC1[383:256], imm8[1:0]);
+    TMP_DEST[319:288] := Select4(SRC1[383:256], imm8[3:2]);
+    TMP_DEST[351:320] := Select4(TMP_SRC2[383:256], imm8[5:4]);
+    TMP_DEST[383:352] := Select4(TMP_SRC2[383:256], imm8[7:6]);
+    TMP_DEST[415:384] := Select4(SRC1[511:384], imm8[1:0]);
+    TMP_DEST[447:416] := Select4(SRC1[511:384], imm8[3:2]);
+    TMP_DEST[479:448] := Select4(TMP_SRC2[511:384], imm8[5:4]);
+    TMP_DEST[511:480] := Select4(TMP_SRC2[511:384], imm8[7:6]);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VSHUFPS (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := Select4(SRC1[127:0], imm8[1:0]);
+DEST[63:32] := Select4(SRC1[127:0], imm8[3:2]);
+DEST[95:64] := Select4(SRC2[127:0], imm8[5:4]);
+DEST[127:96] := Select4(SRC2[127:0], imm8[7:6]);
+DEST[159:128] := Select4(SRC1[255:128], imm8[1:0]);
+DEST[191:160] := Select4(SRC1[255:128], imm8[3:2]);
+DEST[223:192] := Select4(SRC2[255:128], imm8[5:4]);
+DEST[255:224] := Select4(SRC2[255:128], imm8[7:6]);
+DEST[MAXVL-1:256] := 0
+
+

VSHUFPS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := Select4(SRC1[127:0], imm8[1:0]);
+DEST[63:32] := Select4(SRC1[127:0], imm8[3:2]);
+DEST[95:64] := Select4(SRC2[127:0], imm8[5:4]);
+DEST[127:96] := Select4(SRC2[127:0], imm8[7:6]);
+DEST[MAXVL-1:128] := 0
+
+

SHUFPS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := Select4(SRC1[127:0], imm8[1:0]);
+DEST[63:32] := Select4(SRC1[127:0], imm8[3:2]);
+DEST[95:64] := Select4(SRC2[127:0], imm8[5:4]);
+DEST[127:96] := Select4(SRC2[127:0], imm8[7:6]);
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VSHUFPS __m512 _mm512_shuffle_ps(__m512 a, __m512 b, int imm);
+
+
VSHUFPS __m512 _mm512_mask_shuffle_ps(__m512 s, __mmask16 k, __m512 a, __m512 b, int imm);
+
+
VSHUFPS __m512 _mm512_maskz_shuffle_ps(__mmask16 k, __m512 a, __m512 b, int imm);
+
+
VSHUFPS __m256 _mm256_shuffle_ps (__m256 a, __m256 b, const int select);
+
+
VSHUFPS __m256 _mm256_mask_shuffle_ps(__m256 s, __mmask8 k, __m256 a, __m256 b, int imm);
+
+
VSHUFPS __m256 _mm256_maskz_shuffle_ps(__mmask8 k, __m256 a, __m256 b, int imm);
+
+
SHUFPS __m128 _mm_shuffle_ps (__m128 a, __m128 b, const int select);
+
+
VSHUFPS __m128 _mm_mask_shuffle_ps(__m128 s, __mmask8 k, __m128 a, __m128 b, int imm);
+
+
VSHUFPS __m128 _mm_maskz_shuffle_ps(__mmask8 k, __m128 a, __m128 b, int imm);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/sidt.html b/x86/sidt.html new file mode 100644 index 0000000..76ad07c --- /dev/null +++ b/x86/sidt.html @@ -0,0 +1,156 @@ + +SIDT + — Store Interrupt Descriptor Table Register

SIDT + — Store Interrupt Descriptor Table Register

+ +

Opcode1

+ + + + + + + + + + + + + + +
InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 01 /1SIDT mMValidValidStore IDTR to m.
+

1. See the IA-32 Architecture Compatibility section below.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (w)N/AN/AN/A
+

Description + ¶ +

+

Stores the content of the interrupt descriptor table register (IDTR) in the destination operand. The destination operand specifies a 6-byte memory location.

+

In non-64-bit modes, the 16-bit limit field of the register is stored in the low 2 bytes of the memory location and the 32-bit base address is stored in the high 4 bytes.

+

In 64-bit mode, the operand size is fixed at 8+2 bytes. The instruction stores an 8-byte base and a 2-byte limit.

+

SIDT is only useful in operating-system software; however, it can be used in application programs without causing an exception to be generated if CR4.UMIP = 0. See “LGDT/LIDT—Load Global/Interrupt Descriptor Table Register” in Chapter 3, Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2A, for information on loading the GDTR and IDTR.
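For illustration, a minimal sketch of reading the IDTR image described above, assuming a GCC/Clang-style compiler targeting x86-64 and an environment where CR4.UMIP = 0 (or CPL = 0), so the instruction does not fault.

#include <stdint.h>
#include <stdio.h>

/* 64-bit mode SIDT memory image: 2-byte limit followed by an 8-byte base. */
struct __attribute__((packed)) idtr_image {
    uint16_t limit;
    uint64_t base;
};

int main(void)
{
    struct idtr_image idtr;
    __asm__ volatile ("sidt %0" : "=m"(idtr));
    printf("IDT base = 0x%016llx, limit = 0x%04x\n",
           (unsigned long long)idtr.base, idtr.limit);
    return 0;
}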

+

IA-32 Architecture Compatibility + ¶ +

+

The 16-bit form of SIDT is compatible with the Intel 286 processor if the upper 8 bits are not referenced. The Intel 286 processor fills these bits with 1s; processor generations later than the Intel 286 processor fill these bits with 0s.

+

Operation + ¶ +

+
IF instruction is SIDT
+    THEN
+        IF OperandSize =16 or OperandSize = 32 (* Legacy or Compatibility Mode *)
+            THEN
+                DEST[0:15] := IDTR(Limit);
+                DEST[16:47] := IDTR(Base); FI; (* Full 32-bit base address stored *)
+            ELSE (* 64-bit Mode *)
+                DEST[0:15] := IDTR(Limit);
+                DEST[16:79] := IDTR(Base); (* Full 64-bit base address stored *)
+        FI;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
If CR4.UMIP = 1 and CPL > 0.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while CPL = 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If CR4.UMIP = 1.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#UDIf the LOCK prefix is used.
#GP(0)If the memory address is in a non-canonical form.
If CR4.UMIP = 1 and CPL > 0.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while CPL = 3.
diff --git a/x86/sldt.html b/x86/sldt.html new file mode 100644 index 0000000..5c1d3fc --- /dev/null +++ b/x86/sldt.html @@ -0,0 +1,120 @@ + +SLDT + — Store Local Descriptor Table Register

SLDT + — Store Local Descriptor Table Register

+ + + + + + + + + + + + + + + +
Opcode*InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 00 /0SLDT r/m16MValidValidStores segment selector from LDTR in r/m16.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (w)N/AN/AN/A
+

Description + ¶ +

+

Stores the segment selector from the local descriptor table register (LDTR) in the destination operand. The destination operand can be a general-purpose register or a memory location. The segment selector stored with this instruction points to the segment descriptor (located in the GDT) for the current LDT. This instruction can only be executed in protected mode.

+

Outside IA-32e mode, when the destination operand is a 32-bit register, the 16-bit segment selector is copied into the low-order 16 bits of the register. The high-order 16 bits of the register are cleared for the Pentium 4, Intel Xeon, and P6 family processors. They are undefined for Pentium, Intel486, and Intel386 processors. When the destination operand is a memory location, the segment selector is written to memory as a 16-bit quantity, regardless of the operand size.

+

In compatibility mode, when the destination operand is a 32-bit register, the 16-bit segment selector is copied into the low-order 16 bits of the register. The high-order 16 bits of the register are cleared. When the destination operand is a memory location, the segment selector is written to memory as a 16-bit quantity, regardless of the operand size.

+

In 64-bit mode, using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). The behavior of SLDT with a 64-bit register is to zero-extend the 16-bit selector and store it in the register. If the destination is memory and operand size is 64, SLDT will write the 16-bit selector to memory as a 16-bit quantity, regardless of the operand size.
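For illustration, a minimal sketch assuming a GCC/Clang-style compiler, execution in protected (or 64-bit) mode, and CR4.UMIP = 0, so SLDT may be executed at CPL 3.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t ldtr_sel;
    __asm__ volatile ("sldt %0" : "=r"(ldtr_sel));   /* SLDT r16 */
    printf("LDTR selector = 0x%04x\n", ldtr_sel);    /* typically 0 when no LDT is in use */
    return 0;
}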

+

Operation + ¶ +

+
DEST := LDTR(SegmentSelector);
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
If CR4.UMIP = 1 and CPL > 0.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while CPL = 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe SLDT instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe SLDT instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
If CR4.UMIP = 1 and CPL > 0.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while CPL = 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/smctrl.html b/x86/smctrl.html new file mode 100644 index 0000000..779b28d --- /dev/null +++ b/x86/smctrl.html @@ -0,0 +1,142 @@ + +GETSEC[SMCTRL] + — SMX Mode Control

GETSEC[SMCTRL] + — SMX Mode Control

+ + + + + + + + + +
OpcodeInstructionDescription
NP 0F 37 (EAX = 7)GETSEC[SMCTRL]Perform specified SMX mode control as selected with the input EBX.
+

Description + ¶ +

+

The GETSEC[SMCTRL] instruction is available for performing certain SMX specific mode control operations. The operation to be performed is selected through the input register EBX. Currently only an input value in EBX of 0 is supported. All other EBX settings will result in the signaling of a general protection violation.

+

If EBX is set to 0, then the SMCTRL leaf is used to re-enable SMI events. SMI is masked by the ILP executing the GETSEC[SENTER] instruction (SMI is also masked in the responding logical processors in response to SENTER rendezvous messages.). The determination of when this instruction is allowed and the events that are unmasked is dependent on the processor context (See Table 7-11). For brevity, the usage of SMCTRL where EBX=0 will be referred to as GETSEC[SMCTRL(0)].

+

As part of support for launching a measured environment, the SMI, NMI, and INIT events are masked after GETSEC[SENTER], and remain masked after exiting authenticated execution mode. Unmasking these events should be accompanied by securely enabling these event handlers. These security concerns can be addressed in VMX operation by a MVMM.

+

The VM monitor can choose two approaches:

+
    +
  • In a dual monitor approach, the executive software will set up an SMM monitor in parallel to the executive VMM (i.e., the MVMM), see Chapter 32, “System Management Mode‚” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3C. The SMM monitor is dedicated to handling SMI events without compromising the security of the MVMM. This usage model of handling SMI while a measured environment is active does not require the use of GETSEC[SMCTRL(0)] as event re-enabling after the VMX environment launch is handled implicitly and through separate VMX based controls.
  • +
  • If a dedicated SMM monitor will not be established and SMIs are to be handled within the measured environment, then GETSEC[SMCTRL(0)] can be used by the executive software to re-enable SMI that has been masked as a result of SENTER.
+

Table 7-11 defines the processor context in which GETSEC[SMCTRL(0)] can be used and which events will be unmasked. Note that the events that are unmasked are dependent upon the currently operating processor context.

+
+ + + + + + + + + + + + + + + + + + + + + +
ILP Mode of OperationSMCTRL execution action
In VMX non-root operationVM exit
SENTERFLAG = 0#GP(0), illegal context
In authenticated code execution mode (ACMODEFLAG = 1)#GP(0), illegal context
SENTERFLAG = 1, not in VMX operation, not in SMMUnmask SMI
SENTERFLAG = 1, in VMX root operation, not in SMMUnmask SMI if SMM monitor is not configured, otherwise #GP(0)
SENTERFLAG = 1, In VMX root operation, in SMM#GP(0), illegal context
+
Table 7-11. Supported Actions for GETSEC[SMCTRL(0)]
+

Operation + ¶ +

+
(* The state of the internal flag ACMODEFLAG and SENTERFLAG persist across instruction boundary *)
+IF (CR4.SMXE=0)
+    THEN #UD;
+ELSE IF (in VMX non-root operation)
+    THEN VM Exit (reason=”GETSEC instruction”);
+ELSE IF (GETSEC leaf unsupported)
+    THEN #UD;
+ELSE IF ((CR0.PE=0) or (CPL>0) OR (EFLAGS.VM=1))
+    THEN #GP(0);
+ELSE IF((EBX=0) and (SENTERFLAG=1) and (ACMODEFLAG=0) and (IN_SMM=0) and
+        (((in VMX root operation) and (SMM monitor not configured)) or (not in VMX operation)) )
+    THEN unmask SMI;
+ELSE
+    #GP(0);
+END
+
+

Flags Affected + ¶ +

+

None.

+

Use of Prefixes + ¶ +

+

LOCK Causes #UD.

+

REP* Causes #UD (includes REPNE/REPNZ and REP/REPE/REPZ).

+

Operand size Causes #UD.

+

NP 66/F2/F3 prefixes are not allowed.

+

Segment overrides Ignored.

+

Address size Ignored.

+

REX Ignored.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#UDIf CR4.SMXE = 0.
If GETSEC[SMCTRL] is not reported as supported by GETSEC[CAPABILITIES].
#GP(0)If CR0.PE = 0 or CPL > 0 or EFLAGS.VM = 1.
If in VMX root operation.
If a protected partition is not already active or the processor is currently in authenticated code mode.
If the processor is in SMM.
If the SMM monitor is not configured.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + +
#UDIf CR4.SMXE = 0.
If GETSEC[SMCTRL] is not reported as supported by GETSEC[CAPABILITIES].
#GP(0)GETSEC[SMCTRL] is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + +
#UDIf CR4.SMXE = 0.
If GETSEC[SMCTRL] is not reported as supported by GETSEC[CAPABILITIES].
#GP(0)GETSEC[SMCTRL] is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

All protected mode exceptions apply.

+

64-Bit Mode Exceptions + ¶ +

+

All protected mode exceptions apply.

+

VM-exit Condition + ¶ +

+

Reason (GETSEC) If in VMX non-root operation.

diff --git a/x86/smsw.html b/x86/smsw.html new file mode 100644 index 0000000..9b852d7 --- /dev/null +++ b/x86/smsw.html @@ -0,0 +1,163 @@ + +SMSW + — Store Machine Status Word

SMSW + — Store Machine Status Word

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode*InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 01 /4SMSW r/m16MValidValidStore machine status word to r/m16.
0F 01 /4SMSW r32/m16MValidValidStore machine status word in low-order 16 bits of r32/m16; high-order 16 bits of r32 are undefined.
REX.W + 0F 01 /4SMSW r64/m16MValidValidStore machine status word in low-order 16 bits of r64/m16; high-order 16 bits of r32 are undefined.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (w)N/AN/AN/A
+

Description + ¶ +

+

Stores the machine status word (bits 0 through 15 of control register CR0) into the destination operand. The destination operand can be a general-purpose register or a memory location.

+

In non-64-bit modes, when the destination operand is a 32-bit register, the low-order 16 bits of register CR0 are copied into the low-order 16 bits of the register and the high-order 16 bits are undefined. When the destination operand is a memory location, the low-order 16 bits of register CR0 are written to memory as a 16-bit quantity, regardless of the operand size.

+

In 64-bit mode, the behavior of the SMSW instruction is defined by the following examples:

+
    +
  • SMSW r16 operand size 16, store CR0[15:0] in r16
  • +
  • SMSW r32 operand size 32, zero-extend CR0[31:0], and store in r32
  • +
  • SMSW r64 operand size 64, zero-extend CR0[63:0], and store in r64
  • +
  • SMSW m16 operand size 16, store CR0[15:0] in m16
  • +
  • SMSW m16 operand size 32, store CR0[15:0] in m16 (not m32)
  • +
  • SMSW m16 operand size 64, store CR0[15:0] in m16 (not m64)
+

SMSW is only useful in operating-system software. However, it is not a privileged instruction and can be used in application programs if CR4.UMIP = 0. It is provided for compatibility with the Intel 286 processor. Programs and procedures intended to run on IA-32 and Intel 64 processors beginning with the Intel386 processors should use the MOV CR instruction to load the machine status word.
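For illustration only (system software should read CR0 directly with MOV), a minimal sketch assuming a GCC/Clang-style compiler and CR4.UMIP = 0.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t msw;
    __asm__ volatile ("smsw %0" : "=r"(msw));        /* SMSW r16: CR0[15:0] */
    printf("MSW = 0x%04x (PE=%u MP=%u EM=%u TS=%u)\n",
           msw, msw & 1, (msw >> 1) & 1, (msw >> 2) & 1, (msw >> 3) & 1);
    return 0;
}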

+

See “Changes to Instruction Behavior in VMX Non-Root Operation” in Chapter 26 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3C, for more information about the behavior of this instruction in VMX non-root operation.

+

Operation + ¶ +

+
DEST := CR0[15:0];
+(* Machine status word *)
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
If CR4.UMIP = 1 and CPL > 0.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while CPL = 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If CR4.UMIP = 1.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
If CR4.UMIP = 1 and CPL > 0.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while CPL = 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/sqrtpd.html b/x86/sqrtpd.html new file mode 100644 index 0000000..205ad0a --- /dev/null +++ b/x86/sqrtpd.html @@ -0,0 +1,178 @@ + +SQRTPD + — Square Root of Double Precision Floating-Point Values

SQRTPD + — Square Root of Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 51 /r SQRTPD xmm1, xmm2/m128AV/VSSE2Computes Square Roots of the packed double precision floating-point values in xmm2/m128 and stores the result in xmm1.
VEX.128.66.0F.WIG 51 /r VSQRTPD xmm1, xmm2/m128AV/VAVXComputes Square Roots of the packed double precision floating-point values in xmm2/m128 and stores the result in xmm1.
VEX.256.66.0F.WIG 51 /r VSQRTPD ymm1, ymm2/m256AV/VAVXComputes Square Roots of the packed double precision floating-point values in ymm2/m256 and stores the result in ymm1.
EVEX.128.66.0F.W1 51 /r VSQRTPD xmm1 {k1}{z}, xmm2/m128/m64bcstBV/VAVX512VL AVX512FComputes Square Roots of the packed double precision floating-point values in xmm2/m128/m64bcst and stores the result in xmm1 subject to writemask k1.
EVEX.256.66.0F.W1 51 /r VSQRTPD ymm1 {k1}{z}, ymm2/m256/m64bcstBV/VAVX512VL AVX512FComputes Square Roots of the packed double precision floating-point values in ymm2/m256/m64bcst and stores the result in ymm1 subject to writemask k1.
EVEX.512.66.0F.W1 51 /r VSQRTPD zmm1 {k1}{z}, zmm2/m512/m64bcst{er}BV/VAVX512FComputes Square Roots of the packed double precision floating-point values in zmm2/m512/m64bcst and stores the result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Performs a SIMD computation of the square roots of the two, four, or eight packed double precision floating-point values in the source operand (the second operand) and stores the packed double precision floating-point results in the destination operand (the first operand).

+

EVEX encoded versions: The source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand is a ZMM/YMM/XMM register updated according to the writemask.

+

VEX.256 encoded version: The source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the corresponding ZMM register destination are zeroed.

+

VEX.128 encoded version: The source operand is an XMM register or a 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The second source can be an XMM register or 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified.

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b; otherwise, instructions will #UD.
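For illustration, a minimal C sketch using the SSE2 intrinsic listed under “Intel C/C++ Compiler Intrinsic Equivalent” below; input values and output are illustrative only.

#include <emmintrin.h>
#include <stdio.h>

int main(void)
{
    __m128d v = _mm_set_pd(16.0, 2.0);       /* v = {2.0, 16.0} (low, high) */
    __m128d r = _mm_sqrt_pd(v);              /* SQRTPD: element-wise square root */
    double out[2];
    _mm_storeu_pd(out, r);
    printf("%f %f\n", out[0], out[1]);       /* prints 1.414214 4.000000 */
    return 0;
}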

+

Operation + ¶ +

+

VSQRTPD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1) AND (SRC *is register*)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC *is memory*)
+                THEN DEST[i+63:i] := SQRT(SRC[63:0])
+                ELSE DEST[i+63:i] := SQRT(SRC[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VSQRTPD (VEX.256 Encoded Version) + ¶ +

+
DEST[63:0] := SQRT(SRC[63:0])
+DEST[127:64] := SQRT(SRC[127:64])
+DEST[191:128] := SQRT(SRC[191:128])
+DEST[255:192] := SQRT(SRC[255:192])
+DEST[MAXVL-1:256] := 0
+
+

VSQRTPD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := SQRT(SRC[63:0])
+DEST[127:64] := SQRT(SRC[127:64])
+DEST[MAXVL-1:128] := 0
+
+

SQRTPD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := SQRT(SRC[63:0])
+DEST[127:64] := SQRT(SRC[127:64])
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VSQRTPD __m512d _mm512_sqrt_round_pd(__m512d a, int r);
+
+
VSQRTPD __m512d _mm512_mask_sqrt_round_pd(__m512d s, __mmask8 k, __m512d a, int r);
+
+
VSQRTPD __m512d _mm512_maskz_sqrt_round_pd( __mmask8 k, __m512d a, int r);
+
+
VSQRTPD __m256d _mm256_sqrt_pd (__m256d a);
+
+
VSQRTPD __m256d _mm256_mask_sqrt_pd(__m256d s, __mmask8 k, __m256d a, int r);
+
+
VSQRTPD __m256d _mm256_maskz_sqrt_pd( __mmask8 k, __m256d a, int r);
+
+
SQRTPD __m128d _mm_sqrt_pd (__m128d a);
+
+
VSQRTPD __m128d _mm_mask_sqrt_pd(__m128d s, __mmask8 k, __m128d a, int r);
+
+
VSQRTPD __m128d _mm_maskz_sqrt_pd( __mmask8 k, __m128d a, int r);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-19, “Type 2 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.vvvv != 1111B.
+

EVEX-encoded instruction, see Table 2-46, “Type E2 Class Exception Conditions,” additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/sqrtps.html b/x86/sqrtps.html new file mode 100644 index 0000000..5d1f096 --- /dev/null +++ b/x86/sqrtps.html @@ -0,0 +1,184 @@ + +SQRTPS + — Square Root of Single Precision Floating-Point Values

SQRTPS + — Square Root of Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 51 /r SQRTPS xmm1, xmm2/m128AV/VSSEComputes Square Roots of the packed single precision floating-point values in xmm2/m128 and stores the result in xmm1.
VEX.128.0F.WIG 51 /r VSQRTPS xmm1, xmm2/m128AV/VAVXComputes Square Roots of the packed single precision floating-point values in xmm2/m128 and stores the result in xmm1.
VEX.256.0F.WIG 51/r VSQRTPS ymm1, ymm2/m256AV/VAVXComputes Square Roots of the packed single precision floating-point values in ymm2/m256 and stores the result in ymm1.
EVEX.128.0F.W0 51 /r VSQRTPS xmm1 {k1}{z}, xmm2/m128/m32bcstBV/VAVX512VL AVX512FComputes Square Roots of the packed single precision floating-point values in xmm2/m128/m32bcst and stores the result in xmm1 subject to writemask k1.
EVEX.256.0F.W0 51 /r VSQRTPS ymm1 {k1}{z}, ymm2/m256/m32bcstBV/VAVX512VL AVX512FComputes Square Roots of the packed single precision floating-point values in ymm2/m256/m32bcst and stores the result in ymm1 subject to writemask k1.
EVEX.512.0F.W0 51/r VSQRTPS zmm1 {k1}{z}, zmm2/m512/m32bcst{er}BV/VAVX512FComputes Square Roots of the packed single precision floating-point values in zmm2/m512/m32bcst and stores the result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Performs a SIMD computation of the square roots of the four, eight, or sixteen packed single precision floating-point values in the source operand (second operand) and stores the packed single precision floating-point results in the destination operand.

+

EVEX.512 encoded versions: The source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM/YMM/XMM register updated according to the writemask.

+

VEX.256 encoded version: The source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register. The upper bits (MAXVL-1:256) of the corresponding ZMM register destination are zeroed.

+

VEX.128 encoded version: The source operand is an XMM register or a 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The second source can be an XMM register or 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified.

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b; otherwise, instructions will #UD.
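For illustration, a minimal C sketch of the VEX.256 form using the AVX intrinsic listed under “Intel C/C++ Compiler Intrinsic Equivalent” below; it assumes <immintrin.h> and a compiler invoked with AVX code generation enabled (e.g., -mavx).

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m256 v = _mm256_set_ps(64.f, 49.f, 36.f, 25.f, 16.f, 9.f, 4.f, 1.f);
    __m256 r = _mm256_sqrt_ps(v);            /* VSQRTPS ymm1, ymm2 */
    float out[8];
    _mm256_storeu_ps(out, r);
    for (int i = 0; i < 8; i++)
        printf("%g ", out[i]);               /* prints 1 2 3 4 5 6 7 8 */
    printf("\n");
    return 0;
}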

+

Operation + ¶ +

+

VSQRTPS (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1) AND (SRC *is register*)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC *is memory*)
+                THEN DEST[i+31:i] := SQRT(SRC[31:0])
+                ELSE DEST[i+31:i] := SQRT(SRC[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VSQRTPS (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := SQRT(SRC[31:0])
+DEST[63:32] := SQRT(SRC[63:32])
+DEST[95:64] := SQRT(SRC[95:64])
+DEST[127:96] := SQRT(SRC[127:96])
+DEST[159:128] := SQRT(SRC[159:128])
+DEST[191:160] := SQRT(SRC[191:160])
+DEST[223:192] := SQRT(SRC[223:192])
+DEST[255:224] := SQRT(SRC[255:224])
+
+

VSQRTPS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := SQRT(SRC[31:0])
+DEST[63:32] := SQRT(SRC[63:32])
+DEST[95:64] := SQRT(SRC[95:64])
+DEST[127:96] := SQRT(SRC[127:96])
+DEST[MAXVL-1:128] := 0
+
+

SQRTPS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := SQRT(SRC[31:0])
+DEST[63:32] := SQRT(SRC[63:32])
+DEST[95:64] := SQRT(SRC[95:64])
+DEST[127:96] := SQRT(SRC[127:96])
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VSQRTPS __m512 _mm512_sqrt_round_ps(__m512 a, int r);
+
+
VSQRTPS __m512 _mm512_mask_sqrt_round_ps(__m512 s, __mmask16 k, __m512 a, int r);
+
+
VSQRTPS __m512 _mm512_maskz_sqrt_round_ps( __mmask16 k, __m512 a, int r);
+
+
VSQRTPS __m256 _mm256_sqrt_ps (__m256 a);
+
+
VSQRTPS __m256 _mm256_mask_sqrt_ps(__m256 s, __mmask8 k, __m256 a, int r);
+
+
VSQRTPS __m256 _mm256_maskz_sqrt_ps( __mmask8 k, __m256 a, int r);
+
+
SQRTPS __m128 _mm_sqrt_ps (__m128 a);
+
+
VSQRTPS __m128 _mm_mask_sqrt_ps(__m128 s, __mmask8 k, __m128 a, int r);
+
+
VSQRTPS __m128 _mm_maskz_sqrt_ps( __mmask8 k, __m128 a, int r);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-19, “Type 2 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.vvvv != 1111B.
+

EVEX-encoded instruction, see Table 2-46, “Type E2 Class Exception Conditions,” additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/sqrtsd.html b/x86/sqrtsd.html new file mode 100644 index 0000000..0a4c463 --- /dev/null +++ b/x86/sqrtsd.html @@ -0,0 +1,131 @@ + +SQRTSD + — Compute Square Root of Scalar Double Precision Floating-Point Value

SQRTSD + — Compute Square Root of Scalar Double Precision Floating-Point Value

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F2 0F 51/r SQRTSD xmm1,xmm2/m64AV/VSSE2Computes square root of the low double precision floating-point value in xmm2/m64 and stores the results in xmm1.
VEX.LIG.F2.0F.WIG 51/r VSQRTSD xmm1,xmm2, xmm3/m64BV/VAVXComputes square root of the low double precision floating-point value in xmm3/m64 and stores the results in xmm1. Also, upper double precision floating-point value (bits[127:64]) from xmm2 is copied to xmm1[127:64].
EVEX.LLIG.F2.0F.W1 51/r VSQRTSD xmm1 {k1}{z}, xmm2, xmm3/m64{er}CV/VAVX512FComputes square root of the low double precision floating-point value in xmm3/m64 and stores the results in xmm1 under writemask k1. Also, upper double precision floating-point value (bits[127:64]) from xmm2 is copied to xmm1[127:64].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CTuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Computes the square root of the low double precision floating-point value in the second source operand and stores the double precision floating-point result in the destination operand. The second source operand can be an XMM register or a 64-bit memory location. The first source and destination operands are XMM registers.

+

128-bit Legacy SSE version: The first source operand and the destination operand are the same. The quadword at bits 127:64 of the destination operand remains unchanged. Bits (MAXVL-1:64) of the corresponding destination register remain unchanged.

+

VEX.128 and EVEX encoded versions: Bits 127:64 of the destination operand are copied from the corresponding bits of the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.

+

EVEX encoded version: The low quadword element of the destination operand is updated according to the write-mask.

+

Software should ensure VSQRTSD is encoded with VEX.L=0. Encoding VSQRTSD with VEX.L=1 may encounter unpredictable behavior across different processor generations.
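For illustration, a minimal C sketch using the SSE2 intrinsic listed under “Intel C/C++ Compiler Intrinsic Equivalent” below, showing that only the low element is computed while the upper element comes from the first source operand; input values are illustrative only.

#include <emmintrin.h>
#include <stdio.h>

int main(void)
{
    __m128d a = _mm_set_pd(99.0, 7.0);       /* a = {7.0, 99.0} (low, high) */
    __m128d b = _mm_set_pd(-1.0, 25.0);      /* b = {25.0, -1.0} */
    __m128d r = _mm_sqrt_sd(a, b);           /* r[0] = sqrt(b[0]); r[1] = a[1] */
    double out[2];
    _mm_storeu_pd(out, r);
    printf("%f %f\n", out[0], out[1]);       /* prints 5.000000 99.000000 */
    return 0;
}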

+

Operation + ¶ +

+

VSQRTSD (EVEX Encoded Version) + ¶ +

+
IF (EVEX.b = 1) AND (SRC2 *is register*)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[63:0] := SQRT(SRC2[63:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[63:0] := 0
+        FI;
+FI;
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VSQRTSD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := SQRT(SRC2[63:0])
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

SQRTSD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := SQRT(SRC[63:0])
+DEST[MAXVL-1:64] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VSQRTSD __m128d _mm_sqrt_round_sd(__m128d a, __m128d b, int r);
+
+
VSQRTSD __m128d _mm_mask_sqrt_round_sd(__m128d s, __mmask8 k, __m128d a, __m128d b, int r);
+
+
VSQRTSD __m128d _mm_maskz_sqrt_round_sd(__mmask8 k, __m128d a, __m128d b, int r);
+
+
SQRTSD __m128d _mm_sqrt_sd (__m128d a, __m128d b)
+
+
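Not part of the SDM text: a minimal C sketch of the legacy-SSE form via _mm_sqrt_sd, which mirrors the merge behavior described above (the upper quadword of the result comes from the first argument). Assumes SSE2 support.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128d upper = _mm_set_pd(99.0, 0.0);   /* element 1 (99.0) is passed through */
    __m128d src   = _mm_set_pd(0.0, 2.0);    /* square root is taken of the low 2.0 */
    double out[2];
    _mm_storeu_pd(out, _mm_sqrt_sd(upper, src));
    printf("low=%f high=%f\n", out[0], out[1]);   /* low=1.414214 high=99.000000 */
    return 0;
}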

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/sqrtss.html b/x86/sqrtss.html new file mode 100644 index 0000000..f24a15b --- /dev/null +++ b/x86/sqrtss.html @@ -0,0 +1,131 @@ + +SQRTSS + — Compute Square Root of Scalar Single Precision Value

SQRTSS + — Compute Square Root of Scalar Single Precision Value

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 51 /r SQRTSS xmm1, xmm2/m32AV/VSSEComputes square root of the low single precision floating-point value in xmm2/m32 and stores the results in xmm1.
VEX.LIG.F3.0F.WIG 51 /r VSQRTSS xmm1, xmm2, xmm3/m32BV/VAVXComputes square root of the low single precision floating-point value in xmm3/m32 and stores the results in xmm1. Also, upper single precision floating-point values (bits[127:32]) from xmm2 are copied to xmm1[127:32].
EVEX.LLIG.F3.0F.W0 51 /r VSQRTSS xmm1 {k1}{z}, xmm2, xmm3/m32{er}CV/VAVX512FComputes square root of the low single precision floating-point value in xmm3/m32 and stores the results in xmm1 under writemask k1. Also, upper single precision floating-point values (bits[127:32]) from xmm2 are copied to xmm1[127:32].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CTuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Computes the square root of the low single precision floating-point value in the second source operand and stores the single precision floating-point result in the destination operand. The second source operand can be an XMM register or a 32-bit memory location. The first source and destination operands are XMM registers.

+

128-bit Legacy SSE version: The first source operand and the destination operand are the same. Bits (MAXVL-1:32) of the corresponding YMM destination register remain unchanged.

+

VEX.128 and EVEX encoded versions: Bits 127:32 of the destination operand are copied from the corresponding bits of the first source operand. Bits (MAXVL-1:128) of the destination ZMM register are zeroed.

+

EVEX encoded version: The low doubleword element of the destination operand is updated according to the write-mask.

+

Software should ensure VSQRTSS is encoded with VEX.L=0. Encoding VSQRTSS with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+

VSQRTSS (EVEX Encoded Version) + ¶ +

+
IF (EVEX.b = 1) AND (SRC2 *is register*)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[31:0] := SQRT(SRC2[31:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[31:0] := 0
+        FI;
+FI;
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VSQRTSS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := SQRT(SRC2[31:0])
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

SQRTSS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := SQRT(SRC2[31:0])
+DEST[MAXVL-1:32] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VSQRTSS __m128 _mm_sqrt_round_ss(__m128 a, __m128 b, int r);
+
+
VSQRTSS __m128 _mm_mask_sqrt_round_ss(__m128 s, __mmask8 k, __m128 a, __m128 b, int r);
+
+
VSQRTSS __m128 _mm_maskz_sqrt_round_ss( __mmask8 k, __m128 a, __m128 b, int r);
+
+
SQRTSS __m128 _mm_sqrt_ss(__m128 a)
+
+
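Not part of the SDM text: a minimal C sketch using _mm_sqrt_ss; only the low element is replaced, the upper three floats pass through from the argument. Assumes SSE support.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128 v = _mm_set_ps(4.0f, 3.0f, 2.0f, 9.0f);   /* low element is 9.0f */
    __m128 r = _mm_sqrt_ss(v);                        /* SQRTSS: only bits 31:0 change */
    printf("%f\n", _mm_cvtss_f32(r));                 /* prints 3.000000 */
    return 0;
}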

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/stac.html b/x86/stac.html new file mode 100644 index 0000000..e8e22c1 --- /dev/null +++ b/x86/stac.html @@ -0,0 +1,101 @@ + +STAC + — Set AC Flag in EFLAGS Register

STAC + — Set AC Flag in EFLAGS Register

+ + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 01 CB STACZOV/VSMAPSet the AC flag in the EFLAGS register.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Sets the AC flag bit in EFLAGS register. This may enable alignment checking of user-mode data accesses. This allows explicit supervisor-mode data accesses to user-mode pages even if the SMAP bit is set in the CR4 register.

+

This instruction's operation is the same in non-64-bit modes and 64-bit mode. Attempts to execute STAC when CPL > 0 cause #UD.

+

Operation + ¶ +

+
EFLAGS.AC := 1;
+
+

Flags Affected + ¶ +

+

AC set. Other flags are unaffected.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + +
#UDIf the LOCK prefix is used.
If the CPL > 0.
If CPUID.(EAX=07H, ECX=0H):EBX.SMAP[bit 20] = 0.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + +
#UDIf the LOCK prefix is used.
If CPUID.(EAX=07H, ECX=0H):EBX.SMAP[bit 20] = 0.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe STAC instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+ + + + + + + +
#UDIf the LOCK prefix is used.
If the CPL > 0.
If CPUID.(EAX=07H, ECX=0H):EBX.SMAP[bit 20] = 0.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + +
#UDIf the LOCK prefix is used.
If the CPL > 0.
If CPUID.(EAX=07H, ECX=0H):EBX.SMAP[bit 20] = 0.
diff --git a/x86/stc.html b/x86/stc.html new file mode 100644 index 0000000..2ee4e37 --- /dev/null +++ b/x86/stc.html @@ -0,0 +1,57 @@ + +STC + — Set Carry Flag

STC + — Set Carry Flag

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
F9STCZOValidValidSet CF flag.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Sets the CF flag in the EFLAGS register. Operation is the same in all modes.

+
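Not part of the SDM text: a minimal sketch, assuming GCC/Clang extended inline assembly, that sets CF with STC and lets an ADC consume it.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t v = 5;
    __asm__("stc\n\t"              /* CF := 1 */
            "adc $0, %[v]"         /* v := v + 0 + CF */
            : [v] "+r"(v)
            :
            : "cc");
    printf("%llu\n", (unsigned long long)v);   /* prints 6 */
    return 0;
}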

Operation + ¶ +

+
CF := 1;
+
+

Flags Affected + ¶ +

+

The CF flag is set. The OF, ZF, SF, AF, and PF flags are unaffected.

+

Exceptions (All Operating Modes) + ¶ +

+

#UD If the LOCK prefix is used.

diff --git a/x86/std.html b/x86/std.html new file mode 100644 index 0000000..36769ce --- /dev/null +++ b/x86/std.html @@ -0,0 +1,57 @@ + +STD + — Set Direction Flag

STD + — Set Direction Flag

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-bit ModeCompat/Leg ModeDescription
FDSTDZOValidValidSet DF flag.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Sets the DF flag in the EFLAGS register. When the DF flag is set to 1, string operations decrement the index registers (ESI and/or EDI). Operation is the same in all modes.

+
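Not part of the SDM text: a minimal sketch, assuming GCC/Clang extended inline assembly on x86-64, showing that with DF set a REP STOSB fill walks downward through memory. DF is cleared again before returning because the System V ABI expects DF = 0 at call boundaries.

#include <stdio.h>

int main(void) {
    char buf[8] = {0};
    char *dst = buf + 7;            /* start at the last byte */
    unsigned long count = 8;
    __asm__ volatile("std\n\t"      /* DF := 1: string ops decrement (R|E)DI */
                     "rep stosb\n\t"
                     "cld"          /* restore DF = 0 */
                     : "+D"(dst), "+c"(count)
                     : "a"('x')
                     : "memory", "cc");
    printf("%.8s\n", buf);          /* prints xxxxxxxx */
    return 0;
}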

Operation + ¶ +

+
DF := 1;
+
+

Flags Affected + ¶ +

+

The DF flag is set. The CF, OF, ZF, SF, AF, and PF flags are unaffected.

+

Exceptions (All Operating Modes) + ¶ +

+

#UD If the LOCK prefix is used.

diff --git a/x86/sti.html b/x86/sti.html new file mode 100644 index 0000000..db40511 --- /dev/null +++ b/x86/sti.html @@ -0,0 +1,174 @@ + +STI + — Set Interrupt Flag

STI + — Set Interrupt Flag

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
FBSTIZOValidValidSet interrupt flag; external, maskable interrupts enabled at the end of the next instruction.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

In most cases, STI sets the interrupt flag (IF) in the EFLAGS register. This allows the processor to respond to maskable hardware interrupts.

+

If IF = 0, maskable hardware interrupts remain inhibited on the instruction boundary following an execution of STI. (The delayed effect of this instruction is provided to allow interrupts to be enabled just before returning from a procedure or subroutine. For instance, if an STI instruction is followed by an RET instruction, the RET instruction is allowed to execute before external interrupts are recognized. No interrupts can be recognized if an execution of CLI immediately follows such an execution of STI.) The inhibition ends after delivery of another event (e.g., exception) or the execution of the next instruction.

+

The IF flag and the STI and CLI instructions do not prohibit the generation of exceptions and nonmaskable interrupts (NMIs). However, NMIs (and system-management interrupts) may be inhibited on the instruction boundary following an execution of STI that begins with IF = 0.

+

Operation is different in two modes defined as follows:

+
    +
  • PVI mode (protected-mode virtual interrupts): CR0.PE = 1, EFLAGS.VM = 0, CPL = 3, and CR4.PVI = 1;
  • +
  • VME mode (virtual-8086 mode extensions): CR0.PE = 1, EFLAGS.VM = 1, and CR4.VME = 1.
+

If IOPL < 3, EFLAGS.VIP = 1, and either VME mode or PVI mode is active, STI sets the VIF flag in the EFLAGS register, leaving IF unaffected.

+

Table 4-19 indicates the action of the STI instruction depending on the processor operating mode, IOPL, CPL, and EFLAGS.VIP.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Mode                      | IOPL  | EFLAGS.VIP | STI Result
Real-address              | X [1] | X          | IF = 1
Protected, not PVI [2]    | ≥ CPL | X          | IF = 1
Protected, not PVI [2]    | < CPL | X          | #GP fault
Protected, PVI [3]        | 3     | X          | IF = 1
Protected, PVI [3]        | 0–2   | 0          | VIF = 1
Protected, PVI [3]        | 0–2   | 1          | #GP fault
Virtual-8086, not VME [3] | 3     | X          | IF = 1
Virtual-8086, not VME [3] | 0–2   | X          | #GP fault
Virtual-8086, VME [3]     | 3     | X          | IF = 1
Virtual-8086, VME [3]     | 0–2   | 0          | VIF = 1
Virtual-8086, VME [3]     | 0–2   | 1          | #GP fault
+
Table 4-19. Decision Table for STI Results
+
+

1. X = This setting has no effect on instruction operation.

+

2. For this table, “protected mode” applies whenever CR0.PE = 1 and EFLAGS.VM = 0; it includes compatibility mode and 64-bit mode.

+

3. PVI mode and virtual-8086 mode each imply CPL = 3.

+

Operation + ¶ +

+
IF CR0.PE = 0 (* Executing in real-address mode *)
+    THEN IF := 1; (* Set Interrupt Flag *)
+    ELSE
+        IF IOPL ≥ CPL (* CPL = 3 if EFLAGS.VM = 1 *)
+            THEN IF := 1; (* Set Interrupt Flag *)
+            ELSE
+                IF VME mode OR PVI mode
+                    THEN
+                        IF EFLAGS.VIP = 0
+                            THEN VIF := 1; (* Set Virtual Interrupt Flag *)
+                            ELSE #GP(0);
+                        FI;
+                    ELSE #GP(0);
+                FI;
+        FI;
+FI;
+
+

Flags Affected + ¶ +

+

Either the IF flag or the VIF flag is set to 1. Other flags are unaffected.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + +
#GP(0)If CPL is greater than IOPL and PVI mode is not active.
If CPL is greater than IOPL and EFLAGS.VIP = 1.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + +
#GP(0)If IOPL is less than 3 and VME mode is not active.
If IOPL is less than 3 and EFLAGS.VIP = 1.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/stmxcsr.html b/x86/stmxcsr.html new file mode 100644 index 0000000..35113aa --- /dev/null +++ b/x86/stmxcsr.html @@ -0,0 +1,75 @@ + +STMXCSR + — Store MXCSR Register State

STMXCSR + — Store MXCSR Register State

+ + + + + + + + + + + + + + + + + + + +
Opcode*/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F AE /3 STMXCSR m32MV/VSSEStore contents of MXCSR register to m32.
VEX.LZ.0F.WIG AE /3 VSTMXCSR m32MV/VAVXStore contents of MXCSR register to m32.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (w)N/AN/AN/A
+

Description + ¶ +

+

Stores the contents of the MXCSR control and status register to the destination operand. The destination operand is a 32-bit memory location. The reserved bits in the MXCSR register are stored as 0s.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

VEX.L must be 0, otherwise instructions will #UD.

+

Note: In VEX-encoded versions, VEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Operation + ¶ +

+
m32 := MXCSR;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
_mm_getcsr(void)
+
+
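Not part of the SDM text: a minimal C sketch using _mm_getcsr(), which compilers implement with STMXCSR through a scratch doubleword; it reads MXCSR and extracts the flush-to-zero bit (bit 15) and the rounding-control field (bits 14:13). Assumes SSE support.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    unsigned int mxcsr = _mm_getcsr();       /* STMXCSR m32, then a load */
    printf("MXCSR = 0x%08x, FTZ = %u, RC = %u\n",
           mxcsr, (mxcsr >> 15) & 1u, (mxcsr >> 13) & 3u);
    return 0;
}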

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-22, “Type 5 Class Exception Conditions,” additionally:

+ + + + + +
#UD If VEX.L = 1.
If VEX.vvvv ≠ 1111B.
diff --git a/x86/stos.stosb.stosw.stosd.stosq.html b/x86/stos.stosb.stosw.stosd.stosq.html new file mode 100644 index 0000000..982f493 --- /dev/null +++ b/x86/stos.stosb.stosw.stosd.stosq.html @@ -0,0 +1,241 @@ + +STOS/STOSB/STOSW/STOSD/STOSQ + — Store String

STOS/STOSB/STOSW/STOSD/STOSQ + — Store String

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
AASTOS m8ZOValidValidFor legacy mode, store AL at address ES:(E)DI; For 64-bit mode store AL at address RDI or EDI.
ABSTOS m16ZOValidValidFor legacy mode, store AX at address ES:(E)DI; For 64-bit mode store AX at address RDI or EDI.
ABSTOS m32ZOValidValidFor legacy mode, store EAX at address ES:(E)DI; For 64-bit mode store EAX at address RDI or EDI.
REX.W + ABSTOS m64ZOValidN.E.Store RAX at address RDI or EDI.
AASTOSBZOValidValidFor legacy mode, store AL at address ES:(E)DI; For 64-bit mode store AL at address RDI or EDI.
ABSTOSWZOValidValidFor legacy mode, store AX at address ES:(E)DI; For 64-bit mode store AX at address RDI or EDI.
ABSTOSDZOValidValidFor legacy mode, store EAX at address ES:(E)DI; For 64-bit mode store EAX at address RDI or EDI.
REX.W + ABSTOSQZOValidN.E.Store RAX at address RDI or EDI.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

In non-64-bit and default 64-bit mode, stores a byte, word, or doubleword from the AL, AX, or EAX register (respectively) into the destination operand. The destination operand is a memory location, the address of which is read from either the ES:EDI or ES:DI register (depending on the address-size attribute of the instruction and the mode of operation). The ES segment cannot be overridden with a segment override prefix.

+

At the assembly-code level, two forms of the instruction are allowed: the “explicit-operands” form and the “no-operands” form. The explicit-operands form (specified with the STOS mnemonic) allows the destination operand to be specified explicitly. Here, the destination operand should be a symbol that indicates the size and location of the destination value. The source operand is then automatically selected to match the size of the destination operand (the AL register for byte operands, AX for word operands, EAX for doubleword operands). The explicit-operands form is provided to allow documentation; however, note that the documentation provided by this form can be misleading. That is, the destination operand symbol must specify the correct type (size) of the operand (byte, word, or doubleword), but it does not have to specify the correct location. The location is always specified by the ES:(E)DI register. These must be loaded correctly before the store string instruction is executed.

+

The no-operands form provides “short forms” of the byte, word, doubleword, and quadword versions of the STOS instructions. Here also ES:(E)DI is assumed to be the destination operand and AL, AX, or EAX is assumed to be the source operand. The size of the destination and source operands is selected by the mnemonic: STOSB (byte read from register AL), STOSW (word from AX), STOSD (doubleword from EAX).

+

After the byte, word, or doubleword is transferred from the register to the memory location, the (E)DI register is incremented or decremented according to the setting of the DF flag in the EFLAGS register. If the DF flag is 0, the register is incremented; if the DF flag is 1, the register is decremented (the register is incremented or decremented by 1 for byte operations, by 2 for word operations, by 4 for doubleword operations).

+
+

To improve performance, more recent processors support modifications to the processor’s operation during the string store operations initiated with STOS and STOSB. See Section 7.3.9.3 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for additional information on fast-string operation.

+

In 64-bit mode, the default address size is 64 bits, 32-bit address size is supported using the prefix 67H. Using a REX prefix in the form of REX.W promotes operation on doubleword operand to 64 bits. The promoted no-operand mnemonic is STOSQ. STOSQ (and its explicit operands variant) store a quadword from the RAX register into the destination addressed by RDI or EDI. See the summary chart at the beginning of this section for encoding data and limits.

+

The STOS, STOSB, STOSW, STOSD, STOSQ instructions can be preceded by the REP prefix for block stores of ECX bytes, words, or doublewords. More often, however, these instructions are used within a LOOP construct because data needs to be moved into the AL, AX, or EAX register before it can be stored. See “REP/REPE/REPZ /REPNE/REPNZ—Repeat String Operation Prefix” in this chapter for a description of the REP prefix.

+
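Not part of the SDM text: a minimal sketch, assuming GCC/Clang extended inline assembly on x86-64, of the REP STOSB fill described above (AL is stored RCX times at increasing RDI with DF = 0).

#include <stdio.h>

int main(void) {
    char buf[16];
    void *dst = buf;
    unsigned long count = sizeof buf;
    __asm__ volatile("rep stosb"             /* store AL, advance RDI, until RCX = 0 */
                     : "+D"(dst), "+c"(count)
                     : "a"('A')
                     : "memory");
    printf("%.16s\n", buf);                  /* prints sixteen 'A's */
    return 0;
}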

Operation + ¶ +

+

Non-64-bit Mode: + ¶ +

+
IF (Byte store)
+    THEN
+        DEST := AL;
+            IF DF = 0
+                THEN (E)DI := (E)DI + 1;
+                ELSE (E)DI := (E)DI – 1;
+            FI;
+    ELSE IF (Word store)
+        THEN
+            DEST := AX;
+                IF DF = 0
+                    THEN (E)DI := (E)DI + 2;
+                    ELSE (E)DI := (E)DI – 2;
+                FI;
+        FI;
+    ELSE IF (Doubleword store)
+        THEN
+            DEST := EAX;
+                IF DF = 0
+                    THEN (E)DI := (E)DI + 4;
+                    ELSE (E)DI := (E)DI – 4;
+                FI;
+        FI;
+FI;
+
+

64-bit Mode: + ¶ +

+
IF (Byte store)
+    THEN
+        DEST := AL;
+            IF DF = 0
+                THEN (R|E)DI := (R|E)DI + 1;
+                ELSE (R|E)DI := (R|E)DI – 1;
+            FI;
+    ELSE IF (Word store)
+        THEN
+            DEST := AX;
+                IF DF = 0
+                    THEN (R|E)DI := (R|E)DI + 2;
+                    ELSE (R|E)DI := (R|E)DI – 2;
+                FI;
+        FI;
+    ELSE IF (Doubleword store)
+        THEN
+            DEST := EAX;
+                IF DF = 0
+                    THEN (R|E)DI := (R|E)DI + 4;
+                    ELSE (R|E)DI := (R|E)DI – 4;
+                FI;
+        FI;
+    ELSE IF (Quadword store using REX.W )
+        THEN
+            DEST := RAX;
+                IF DF = 0
+                    THEN (R|E)DI := (R|E)DI + 8;
+                    ELSE (R|E)DI := (R|E)DI – 8;
+                FI;
+        FI;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#GP(0)If the destination is located in a non-writable segment.
If a memory operand effective address is outside the limit of the ES segment.
If the ES register contains a NULL segment selector.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + +
#GPIf a memory operand effective address is outside the ES segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the ES segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/str.html b/x86/str.html new file mode 100644 index 0000000..75c527d --- /dev/null +++ b/x86/str.html @@ -0,0 +1,118 @@ + +STR + — Store Task Register

STR + — Store Task Register

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 00 /1STR r/m16MValidValidStores segment selector from TR in r/m16.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (w)N/AN/AN/A
+

Description + ¶ +

+

Stores the segment selector from the task register (TR) in the destination operand. The destination operand can be a general-purpose register or a memory location. The segment selector stored with this instruction points to the task state segment (TSS) for the currently running task.

+

When the destination operand is a 32-bit register, the 16-bit segment selector is copied into the lower 16 bits of the register and the upper 16 bits of the register are cleared. When the destination operand is a memory location, the segment selector is written to memory as a 16-bit quantity, regardless of operand size.

+

In 64-bit mode, operation is the same. The size of the memory operand is fixed at 16 bits. In register stores, the 2-byte TR is zero extended if stored to a 64-bit register.

+

The STR instruction is useful only in operating-system software. It can only be executed in protected mode.

+

Operation + ¶ +

+
DEST := TR(SegmentSelector);
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination is a memory operand that is located in a non-writable segment or if the effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
If CR4.UMIP = 1 and CPL > 0.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe STR instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe STR instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If the memory address is in a non-canonical form.
If CR4.UMIP = 1 and CPL > 0.
#SS(0)If the stack address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/sttilecfg.html b/x86/sttilecfg.html new file mode 100644 index 0000000..e9a83b6 --- /dev/null +++ b/x86/sttilecfg.html @@ -0,0 +1,86 @@ + +STTILECFG + — Store Tile Configuration

STTILECFG + — Store Tile Configuration

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W0 49 !(11):000:bbb STTILECFG m512AV/N.E.AMX-TILEStore tile configuration in m512.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:r/m (w)N/AN/AN/A
+

Description + ¶ +

+

The STTILECFG instruction takes a pointer to a 64-byte memory location (described in Table 3-10 in the “LDTILECFG—Load Tile Configuration” entry) that will, after successful execution of this instruction, contain the description of the tiles that were configured. In order to configure tiles, the AMX-TILE bit in CPUID must be set and the operating system has to have enabled the tiles architecture.

+

If the tiles are not configured, then STTILECFG stores 64B of zeros to the indicated memory location.

+

Any attempt to execute the STTILECFG instruction inside an Intel TSX transaction will result in a transaction abort.

+

Operation + ¶ +

+

STTILECFG mem + ¶ +

+
if TILES_CONFIGURED == 0:
+    //write 64 bytes of zeros at mem pointer
+    buf[0..63] := 0
+    write_memory(mem, 64, buf)
+else:
+    buf.byte[0] := tilecfg.palette_id
+    buf.byte[1] := tilecfg.start_row
+    buf.byte[2..15] := 0
+    p := 16
+    for n in 0 ... palette_table[tilecfg.palette_id].max_names-1:
+        buf.word[p/2] := tilecfg.t[n].colsb
+        p := p + 2
+    if p < 47:
+        buf.byte[p..47] := 0
+    p := 48
+    for n in 0 ... palette_table[tilecfg.palette_id].max_names-1:
+        buf.byte[p++] := tilecfg.t[n].rows
+    if p < 63:
+        buf.byte[p..63] := 0
+    write_memory(mem, 64, buf)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
STTILECFG void _tile_storeconfig(void *);
+
+
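Not part of the SDM text: a minimal C sketch pairing _tile_loadconfig with _tile_storeconfig. It assumes an AMX-TILE processor, a compiler invoked with -mamx-tile, and an operating system that has enabled AMX state for the process (on Linux this typically means requesting permission via arch_prctl first); the palette-1 layout used here (one 16-row, 64-byte-per-row tile) is only an illustration.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    unsigned char cfg[64] __attribute__((aligned(64))) = {0};
    cfg[0]  = 1;                     /* palette_id = 1 */
    cfg[16] = 64;                    /* tile 0: colsb = 64 bytes per row (low byte) */
    cfg[48] = 16;                    /* tile 0: 16 rows */
    _tile_loadconfig(cfg);           /* LDTILECFG */

    unsigned char out[64] __attribute__((aligned(64)));
    _tile_storeconfig(out);          /* STTILECFG: write back the live configuration */
    printf("palette=%u rows=%u colsb=%u\n",
           out[0], out[48], out[16] | (out[17] << 8));

    _tile_release();                 /* return tiles to the unconfigured state */
    return 0;
}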

Flags Affected + ¶ +

+

None.

+

Exceptions + ¶ +

+

AMX-E2; see Section 2.10, “Intel® AMX Instruction Exception Classes,” for details.

diff --git a/x86/stui.html b/x86/stui.html new file mode 100644 index 0000000..3e7fbd2 --- /dev/null +++ b/x86/stui.html @@ -0,0 +1,95 @@ + +STUI + — Set User Interrupt Flag

STUI + — Set User Interrupt Flag

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 01 EF STUIZOV/IUINTRSet user interrupt flag.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/AN/A
+

Description + ¶ +

+

STUI sets the user interrupt flag (UIF). Its effect takes place immediately; a user interrupt may be delivered on the instruction boundary following STUI. (This is in contrast with STI, whose effect is delayed by one instruction).

+

An execution of STUI inside a transactional region causes a transactional abort; the abort loads EAX as it would have had it been due to an execution of STI.

+

Operation + ¶ +

+
UIF := 1;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + +
#UDThe STUI instruction is not recognized in protected mode.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe STUI instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe STUI instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UDThe STUI instruction is not recognized in compatibility mode.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + +
#UDIf the LOCK prefix is used.
If executed inside an enclave.
If CR4.UINTR = 0.
If CPUID.07H.0H:EDX.UINTR[bit 5] = 0.
diff --git a/x86/style.css.html b/x86/style.css.html new file mode 100644 index 0000000..0772f35 --- /dev/null +++ b/x86/style.css.html @@ -0,0 +1,89 @@ +html, body { + padding-top: 0; + margin-top: 0; +} + +table, th, td { + border-collapse: collapse; + border: 1px #ccc solid; +} + +table { + margin: 10pt; +} + +th, td { + padding: 2pt 8pt; +} + +h2, h3, h4, h5, h6 { + border-bottom: 1px #ddd dashed; +} + +header { + border-bottom: 1px #ddd dashed; + font-size: 8pt; +} + +footer { + border-top: 1px #ddd dashed; + font-size: 8pt; +} + +svg { + display: block; +} + +figure { + margin: 1em; +} + +blockquote { + padding: 1em; + margin: 0 1em; + background-color: rgba(0%, 0%, 0%, 0.05); + font-size: 10pt; +} + +blockquote > p { + margin: 0; +} + +blockquote > p + p { + margin-top: 1em; +} + +nav > ul { + list-style-type: none; + font-size: 9pt; + padding: 0; + margin: 0.5em 0 0.2em 0; +} + +nav > ul > li { + margin: 0; + width: 50%; + display: inline-block; +} + +nav > ul > li:nth-child(2) { + text-align: right; +} + +.anchor { + margin-left: 0.5em; + text-decoration: none; + color: #ddd; +} + +.anchor:hover { + color: #00b; +} + +.exceptions + table td { + vertical-align: top; +} + +.not-imported { + color: #900; +} diff --git a/x86/sub.html b/x86/sub.html new file mode 100644 index 0000000..0b48fc9 --- /dev/null +++ b/x86/sub.html @@ -0,0 +1,301 @@ + +SUB + — Subtract

SUB + — Subtract

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
2C ibSUB AL, imm8IValidValidSubtract imm8 from AL.
2D iwSUB AX, imm16IValidValidSubtract imm16 from AX.
2D idSUB EAX, imm32IValidValidSubtract imm32 from EAX.
REX.W + 2D idSUB RAX, imm32IValidN.E.Subtract imm32 sign-extended to 64-bits from RAX.
80 /5 ibSUB r/m8, imm8MIValidValidSubtract imm8 from r/m8.
REX + 80 /5 ibSUB r/m81, imm8MIValidN.E.Subtract imm8 from r/m8.
81 /5 iwSUB r/m16, imm16MIValidValidSubtract imm16 from r/m16.
81 /5 idSUB r/m32, imm32MIValidValidSubtract imm32 from r/m32.
REX.W + 81 /5 idSUB r/m64, imm32MIValidN.E.Subtract imm32 sign-extended to 64-bits from r/m64.
83 /5 ibSUB r/m16, imm8MIValidValidSubtract sign-extended imm8 from r/m16.
83 /5 ibSUB r/m32, imm8MIValidValidSubtract sign-extended imm8 from r/m32.
REX.W + 83 /5 ibSUB r/m64, imm8MIValidN.E.Subtract sign-extended imm8 from r/m64.
28 /rSUB r/m8, r8MRValidValidSubtract r8 from r/m8.
REX + 28 /rSUB r/m81, r81MRValidN.E.Subtract r8 from r/m8.
29 /rSUB r/m16, r16MRValidValidSubtract r16 from r/m16.
29 /rSUB r/m32, r32MRValidValidSubtract r32 from r/m32.
REX.W + 29 /rSUB r/m64, r64MRValidN.E.Subtract r64 from r/m64.
2A /rSUB r8, r/m8RMValidValidSubtract r/m8 from r8.
REX + 2A /rSUB r81, r/m81RMValidN.E.Subtract r/m8 from r8.
2B /rSUB r16, r/m16RMValidValidSubtract r/m16 from r16.
2B /rSUB r32, r/m32RMValidValidSubtract r/m32 from r32.
REX.W + 2B /rSUB r64, r/m64RMValidN.E.Subtract r/m64 from r64.
+
+

1. In 64-bit mode, r/m8 can not be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
IAL/AX/EAX/RAXimm8/16/32N/AN/A
MIModRM:r/m (r, w)imm8/16/32N/AN/A
MRModRM:r/m (r, w)ModRM:reg (r)N/AN/A
RMModRM:reg (r, w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Subtracts the second operand (source operand) from the first operand (destination operand) and stores the result in the destination operand. The destination operand can be a register or a memory location; the source operand can be an immediate, register, or memory location. (However, two memory operands cannot be used in one instruction.) When an immediate value is used as an operand, it is sign-extended to the length of the destination operand format.

+

The SUB instruction performs integer subtraction. It evaluates the result for both signed and unsigned integer operands and sets the OF and CF flags to indicate an overflow in the signed or unsigned result, respectively. The SF flag indicates the sign of the signed result.

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). Using a REX prefix in the form of REX.W promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+

This instruction can be used with a LOCK prefix to allow the instruction to be executed atomically.

+
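Not part of the SDM text: a minimal sketch, assuming GCC/Clang extended inline assembly, that performs a SUB and captures CF with SETC to show the unsigned-borrow behavior described above.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t dst = 1, src = 2;
    uint8_t cf;
    __asm__("sub %[s], %[d]\n\t"     /* dst := dst - src, flags updated */
            "setc %[c]"              /* capture CF (unsigned borrow) */
            : [d] "+r"(dst), [c] "=r"(cf)
            : [s] "r"(src)
            : "cc");
    printf("result=%u CF=%u\n", dst, cf);   /* result=4294967295 CF=1 */
    return 0;
}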

Operation + ¶ +

+
DEST := (DEST – SRC);
+
+

Flags Affected + ¶ +

+

The OF, SF, ZF, AF, PF, and CF flags are set according to the result.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
diff --git a/x86/subpd.html b/x86/subpd.html new file mode 100644 index 0000000..35c2b96 --- /dev/null +++ b/x86/subpd.html @@ -0,0 +1,199 @@ + +SUBPD + — Subtract Packed Double Precision Floating-Point Values

SUBPD + — Subtract Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/E n64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 5C /r SUBPD xmm1, xmm2/m128AV/VSSE2Subtract packed double precision floating-point values in xmm2/mem from xmm1 and store result in xmm1.
VEX.128.66.0F.WIG 5C /r VSUBPD xmm1,xmm2, xmm3/m128BV/VAVXSubtract packed double precision floating-point values in xmm3/mem from xmm2 and store result in xmm1.
VEX.256.66.0F.WIG 5C /r VSUBPD ymm1, ymm2, ymm3/m256BV/VAVXSubtract packed double precision floating-point values in ymm3/mem from ymm2 and store result in ymm1.
EVEX.128.66.0F.W1 5C /r VSUBPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512FSubtract packed double precision floating-point values from xmm3/m128/m64bcst to xmm2 and store result in xmm1 with writemask k1.
EVEX.256.66.0F.W1 5C /r VSUBPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FSubtract packed double precision floating-point values from ymm3/m256/m64bcst to ymm2 and store result in ymm1 with writemask k1.
EVEX.512.66.0F.W1 5C /r VSUBPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er}CV/VAVX512FSubtract packed double precision floating-point values from zmm3/m512/m64bcst to zmm2 and store result in zmm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD subtract of the two, four or eight packed double precision floating-point values of the second Source operand from the first Source operand, and stores the packed double precision floating-point results in the destination operand.

+

VEX.128 and EVEX.128 encoded versions: The second source operand is an XMM register or an 128-bit memory location. The first source operand and destination operands are XMM registers. Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

VEX.256 and EVEX.256 encoded versions: The second source operand is an YMM register or an 256-bit memory location. The first source operand and destination operands are YMM registers. Bits (MAXVL-1:256) of the corresponding destination register are zeroed.

+

EVEX.512 encoded version: The second source operand is a ZMM register, a 512-bit memory location or a 512-bit vector broadcasted from a 64-bit memory location. The first source operand and destination operands are ZMM registers. The destination operand is conditionally updated according to the writemask.

+

128-bit Legacy SSE version: The second source can be an XMM register or an 128-bit memory location. The destination is not distinct from the first source XMM register and the upper Bits (MAXVL-1:128) of the corresponding register destination are unmodified.

+

Operation + ¶ +

+

VSUBPD (EVEX Encoded Versions When SRC2 Operand is a Vector Register) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := SRC1[i+63:i] - SRC2[i+63:i]
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[i+63:i] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[i+63:i] := 0
+        FI;
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VSUBPD (EVEX Encoded Versions When SRC2 Operand is a Memory Source) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1)
+                THEN DEST[i+63:i] := SRC1[i+63:i] - SRC2[63:0];
+                ELSE DEST[i+63:i] := SRC1[i+63:i] - SRC2[i+63:i];
+            FI;
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[i+63:i] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[i+63:i] := 0
+        FI;
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VSUBPD (VEX.256 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0] - SRC2[63:0]
+DEST[127:64] := SRC1[127:64] - SRC2[127:64]
+DEST[191:128] := SRC1[191:128] - SRC2[191:128]
+DEST[255:192] := SRC1[255:192] - SRC2[255:192]
+DEST[MAXVL-1:256] := 0
+
+

VSUBPD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0] - SRC2[63:0]
+DEST[127:64] := SRC1[127:64] - SRC2[127:64]
+DEST[MAXVL-1:128] := 0
+
+

SUBPD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := DEST[63:0] - SRC[63:0]
+DEST[127:64] := DEST[127:64] - SRC[127:64]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VSUBPD __m512d _mm512_sub_pd (__m512d a, __m512d b);
+
+
VSUBPD __m512d _mm512_mask_sub_pd (__m512d s, __mmask8 k, __m512d a, __m512d b);
+
+
VSUBPD __m512d _mm512_maskz_sub_pd (__mmask8 k, __m512d a, __m512d b);
+
+
VSUBPD __m512d _mm512_sub_round_pd (__m512d a, __m512d b, int);
+
+
VSUBPD __m512d _mm512_mask_sub_round_pd (__m512d s, __mmask8 k, __m512d a, __m512d b, int);
+
+
VSUBPD __m512d _mm512_maskz_sub_round_pd (__mmask8 k, __m512d a, __m512d b, int);
+
+
VSUBPD __m256d _mm256_sub_pd (__m256d a, __m256d b);
+
+
VSUBPD __m256d _mm256_mask_sub_pd (__m256d s, __mmask8 k, __m256d a, __m256d b);
+
+
VSUBPD __m256d _mm256_maskz_sub_pd (__mmask8 k, __m256d a, __m256d b);
+
+
SUBPD __m128d _mm_sub_pd (__m128d a, __m128d b);
+
+
VSUBPD __m128d _mm_mask_sub_pd (__m128d s, __mmask8 k, __m128d a, __m128d b);
+
+
VSUBPD __m128d _mm_maskz_sub_pd (__mmask8 k, __m128d a, __m128d b);
+
+
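Not part of the SDM text: a minimal C sketch of the VEX.256 form via _mm256_sub_pd from the list above. Assumes AVX support and a compiler invoked with AVX enabled (e.g., -mavx).

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m256d a = _mm256_set_pd(8.0, 6.0, 4.0, 2.0);
    __m256d b = _mm256_set1_pd(1.0);
    double out[4];
    _mm256_storeu_pd(out, _mm256_sub_pd(a, b));   /* VSUBPD: a - b per element */
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);   /* 1 3 5 7 */
    return 0;
}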

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/subps.html b/x86/subps.html new file mode 100644 index 0000000..fb9b36d --- /dev/null +++ b/x86/subps.html @@ -0,0 +1,207 @@ + +SUBPS + — Subtract Packed Single Precision Floating-Point Values

SUBPS + — Subtract Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/E n64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 5C /r SUBPS xmm1, xmm2/m128AV/VSSESubtract packed single precision floating-point values in xmm2/mem from xmm1 and store result in xmm1.
VEX.128.0F.WIG 5C /r VSUBPS xmm1,xmm2, xmm3/m128BV/VAVXSubtract packed single precision floating-point values in xmm3/mem from xmm2 and stores result in xmm1.
VEX.256.0F.WIG 5C /r VSUBPS ymm1, ymm2, ymm3/m256BV/VAVXSubtract packed single precision floating-point values in ymm3/mem from ymm2 and stores result in ymm1.
EVEX.128.0F.W0 5C /r VSUBPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512FSubtract packed single precision floating-point values from xmm3/m128/m32bcst to xmm2 and stores result in xmm1 with writemask k1.
EVEX.256.0F.W0 5C /r VSUBPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512FSubtract packed single precision floating-point values from ymm3/m256/m32bcst to ymm2 and stores result in ymm1 with writemask k1.
EVEX.512.0F.W0 5C /r VSUBPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{er}CV/VAVX512FSubtract packed single precision floating-point values in zmm3/m512/m32bcst from zmm2 and stores result in zmm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD subtract of the packed single precision floating-point values in the second Source operand from the First Source operand, and stores the packed single precision floating-point results in the destination operand.

+

VEX.128 and EVEX.128 encoded versions: The second source operand is an XMM register or an 128-bit memory location. The first source operand and destination operands are XMM registers. Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

VEX.256 and EVEX.256 encoded versions: The second source operand is an YMM register or an 256-bit memory location. The first source operand and destination operands are YMM registers. Bits (MAXVL-1:256) of the corresponding destination register are zeroed.

+

EVEX.512 encoded version: The second source operand is a ZMM register, a 512-bit memory location or a 512-bit vector broadcasted from a 32-bit memory location. The first source operand and destination operands are ZMM registers. The destination operand is conditionally updated according to the writemask.

+

128-bit Legacy SSE version: The second source can be an XMM register or an 128-bit memory location. The destination is not distinct from the first source XMM register and the upper Bits (MAXVL-1:128) of the corresponding register destination are unmodified.

+

Operation + ¶ +

+

VSUBPS (EVEX Encoded Versions When SRC2 Operand is a Vector Register) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SRC1[i+31:i] - SRC2[i+31:i]
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[i+31:i] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[i+31:i] := 0
+        FI;
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

VSUBPS (EVEX Encoded Versions When SRC2 Operand is a Memory Source) + ¶ +

+
(KL, VL) = (4, 128), (8, 256),(16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1)
+                THEN DEST[i+31:i] := SRC1[i+31:i] - SRC2[31:0];
+                ELSE DEST[i+31:i] := SRC1[i+31:i] - SRC2[i+31:i];
+            FI;
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[i+31:i] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[i+31:i] := 0
+        FI;
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

VSUBPS (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] - SRC2[31:0]
+DEST[63:32] := SRC1[63:32] - SRC2[63:32]
+DEST[95:64] := SRC1[95:64] - SRC2[95:64]
+DEST[127:96] := SRC1[127:96] - SRC2[127:96]
+DEST[159:128] := SRC1[159:128] - SRC2[159:128]
+DEST[191:160] := SRC1[191:160] - SRC2[191:160]
+DEST[223:192] := SRC1[223:192] - SRC2[223:192]
+DEST[255:224] := SRC1[255:224] - SRC2[255:224].
+DEST[MAXVL-1:256] := 0
+
+

VSUBPS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] - SRC2[31:0]
+DEST[63:32] := SRC1[63:32] - SRC2[63:32]
+DEST[95:64] := SRC1[95:64] - SRC2[95:64]
+DEST[127:96] := SRC1[127:96] - SRC2[127:96]
+DEST[MAXVL-1:128] := 0
+
+

SUBPS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] - SRC2[31:0]
+DEST[63:32] := SRC1[63:32] - SRC2[63:32]
+DEST[95:64] := SRC1[95:64] - SRC2[95:64]
+DEST[127:96] := SRC1[127:96] - SRC2[127:96]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VSUBPS __m512 _mm512_sub_ps (__m512 a, __m512 b);
+
+
VSUBPS __m512 _mm512_mask_sub_ps (__m512 s, __mmask16 k, __m512 a, __m512 b);
+
+
VSUBPS __m512 _mm512_maskz_sub_ps (__mmask16 k, __m512 a, __m512 b);
+
+
VSUBPS __m512 _mm512_sub_round_ps (__m512 a, __m512 b, int);
+
+
VSUBPS __m512 _mm512_mask_sub_round_ps (__m512 s, __mmask16 k, __m512 a, __m512 b, int);
+
+
VSUBPS __m512 _mm512_maskz_sub_round_ps (__mmask16 k, __m512 a, __m512 b, int);
+
+
VSUBPS __m256 _mm256_sub_ps (__m256 a, __m256 b);
+
+
VSUBPS __m256 _mm256_mask_sub_ps (__m256 s, __mmask8 k, __m256 a, __m256 b);
+
+
VSUBPS __m256 _mm256_maskz_sub_ps (__mmask8 k, __m256 a, __m256 b);
+
+
SUBPS __m128 _mm_sub_ps (__m128 a, __m128 b);
+
+
VSUBPS __m128 _mm_mask_sub_ps (__m128 s, __mmask8 k, __m128 a, __m128 b);
+
+
VSUBPS __m128 _mm_maskz_sub_ps (__mmask8 k, __m128 a, __m128 b);
+
+
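Not part of the SDM text: a minimal C sketch of the legacy-SSE form via _mm_sub_ps (four single precision lanes of the first operand minus the second). Assumes SSE support.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128 a = _mm_set_ps(10.0f, 20.0f, 30.0f, 40.0f);
    __m128 b = _mm_set_ps( 1.0f,  2.0f,  3.0f,  4.0f);
    float out[4];
    _mm_storeu_ps(out, _mm_sub_ps(a, b));          /* SUBPS: a - b per element */
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);   /* 36 27 18 9 */
    return 0;
}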

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/subsd.html b/x86/subsd.html new file mode 100644 index 0000000..1200264 --- /dev/null +++ b/x86/subsd.html @@ -0,0 +1,136 @@ + +SUBSD + — Subtract Scalar Double Precision Floating-Point Value

SUBSD + — Subtract Scalar Double Precision Floating-Point Value

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F2 0F 5C /r SUBSD xmm1, xmm2/m64AV/VSSE2Subtract the low double precision floating-point value in xmm2/m64 from xmm1 and store the result in xmm1.
VEX.LIG.F2.0F.WIG 5C /r VSUBSD xmm1,xmm2, xmm3/m64BV/VAVXSubtract the low double precision floating-point value in xmm3/m64 from xmm2 and store the result in xmm1.
EVEX.LLIG.F2.0F.W1 5C /r VSUBSD xmm1 {k1}{z}, xmm2, xmm3/m64{er}CV/VAVX512FSubtract the low double precision floating-point value in xmm3/m64 from xmm2 and store the result in xmm1 under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CTuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Subtracts the low double precision floating-point value in the second source operand from the first source operand and stores the double precision floating-point result in the low quadword of the destination operand.

+

The second source operand can be an XMM register or a 64-bit memory location. The first source and destination operands are XMM registers.

+

128-bit Legacy SSE version: The destination and first source operand are the same. Bits (MAXVL-1:64) of the corresponding destination register remain unchanged.

+

VEX.128 and EVEX encoded versions: Bits (127:64) of the XMM register destination are copied from corresponding bits in the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.

+

EVEX encoded version: The low quadword element of the destination operand is updated according to the write-mask.

+

Software should ensure VSUBSD is encoded with VEX.L=0. Encoding VSUBSD with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+

VSUBSD (EVEX Encoded Version) + ¶ +

+
IF (SRC2 *is register*) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[63:0] := SRC1[63:0] - SRC2[63:0]
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[63:0] := 0
+        FI;
+FI;
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VSUBSD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0] - SRC2[63:0]
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

SUBSD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := DEST[63:0] - SRC[63:0]
+DEST[MAXVL-1:64] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VSUBSD __m128d _mm_mask_sub_sd (__m128d s, __mmask8 k, __m128d a, __m128d b);
+
+
VSUBSD __m128d _mm_maskz_sub_sd (__mmask8 k, __m128d a, __m128d b);
+
+
VSUBSD __m128d _mm_sub_round_sd (__m128d a, __m128d b, int);
+
+
VSUBSD __m128d _mm_mask_sub_round_sd (__m128d s, __mmask8 k, __m128d a, __m128d b, int);
+
+
VSUBSD __m128d _mm_maskz_sub_round_sd (__mmask8 k, __m128d a, __m128d b, int);
+
+
SUBSD __m128d _mm_sub_sd (__m128d a, __m128d b);
+
+
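Not part of the SDM text: a minimal C sketch using _mm_sub_sd; only the low quadword is computed, the upper double comes from the first operand, as described above. Assumes SSE2 support.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128d a = _mm_set_pd(7.0, 5.0);   /* low = 5.0, high = 7.0 */
    __m128d b = _mm_set_pd(9.0, 1.5);   /* only the low 1.5 participates */
    double out[2];
    _mm_storeu_pd(out, _mm_sub_sd(a, b));
    printf("low=%g high=%g\n", out[0], out[1]);   /* low=3.5 high=7 */
    return 0;
}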

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/subss.html b/x86/subss.html new file mode 100644 index 0000000..0d2b29a --- /dev/null +++ b/x86/subss.html @@ -0,0 +1,136 @@ + +SUBSS + — Subtract Scalar Single Precision Floating-Point Value

SUBSS + — Subtract Scalar Single Precision Floating-Point Value

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 5C /r SUBSS xmm1, xmm2/m32AV/VSSESubtract the low single precision floating-point value in xmm2/m32 from xmm1 and store the result in xmm1.
VEX.LIG.F3.0F.WIG 5C /r VSUBSS xmm1,xmm2, xmm3/m32BV/VAVXSubtract the low single precision floating-point value in xmm3/m32 from xmm2 and store the result in xmm1.
EVEX.LLIG.F3.0F.W0 5C /r VSUBSS xmm1 {k1}{z}, xmm2, xmm3/m32{er}CV/VAVX512FSubtract the low single precision floating-point value in xmm3/m32 from xmm2 and store the result in xmm1 under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CTuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Subtracts the low single precision floating-point value in the second source operand from the first source operand and stores the single precision floating-point result in the low doubleword of the destination operand.

+

The second source operand can be an XMM register or a 32-bit memory location. The first source and destination operands are XMM registers.

+

128-bit Legacy SSE version: The destination and first source operand are the same. Bits (MAXVL-1:32) of the corresponding destination register remain unchanged.

+

VEX.128 and EVEX encoded versions: Bits (127:32) of the XMM register destination are copied from corresponding bits in the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.

+

EVEX encoded version: The low doubleword element of the destination operand is updated according to the write-mask.

+

Software should ensure VSUBSS is encoded with VEX.L=0. Encoding VSUBSS with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+

VSUBSS (EVEX Encoded Version) + ¶ +

+
IF (SRC2 *is register*) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[31:0] := SRC1[31:0] - SRC2[31:0]
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[31:0] := 0
+        FI;
+FI;
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VSUBSS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] - SRC2[31:0]
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

SUBSS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := DEST[31:0] - SRC[31:0]
+DEST[MAXVL-1:32] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VSUBSS __m128 _mm_mask_sub_ss (__m128 s, __mmask8 k, __m128 a, __m128 b);
+
+
VSUBSS __m128 _mm_maskz_sub_ss (__mmask8 k, __m128 a, __m128 b);
+
+
VSUBSS __m128 _mm_sub_round_ss (__m128 a, __m128 b, int);
+
+
VSUBSS __m128 _mm_mask_sub_round_ss (__m128 s, __mmask8 k, __m128 a, __m128 b, int);
+
+
VSUBSS __m128 _mm_maskz_sub_round_ss (__mmask8 k, __m128 a, __m128 b, int);
+
+
SUBSS __m128 _mm_sub_ss (__m128 a, __m128 b);
+
+
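Not part of the SDM text: a minimal C sketch using _mm_sub_ss; the low lane is a[0] - b[0], lanes 1 through 3 come from the first operand. Assumes SSE support.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128 a = _mm_set_ps(0.0f, 0.0f, 0.0f, 5.5f);
    __m128 b = _mm_set_ps(9.0f, 9.0f, 9.0f, 2.0f);
    printf("%g\n", _mm_cvtss_f32(_mm_sub_ss(a, b)));   /* prints 3.5 */
    return 0;
}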

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/swapgs.html b/x86/swapgs.html new file mode 100644 index 0000000..929a5b7 --- /dev/null +++ b/x86/swapgs.html @@ -0,0 +1,101 @@ + +SWAPGS + — Swap GS Base Register

SWAPGS + — Swap GS Base Register

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 01 F8SWAPGSZOValidInvalidExchanges the current GS base register value with the value contained in MSR address C0000102H.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

SWAPGS exchanges the current GS base register value with the value contained in MSR address C0000102H (IA32_KERNEL_GS_BASE). The SWAPGS instruction is a privileged instruction intended for use by system software.

+

When using SYSCALL to implement system calls, there is no kernel stack at the OS entry point. Neither is there a straightforward method to obtain a pointer to kernel structures from which the kernel stack pointer could be read. Thus, the kernel cannot save general purpose registers or reference memory.

+

By design, SWAPGS does not require any general purpose registers or memory operands. No registers need to be saved before using the instruction. SWAPGS exchanges the CPL 0 data pointer from the IA32_KERNEL_GS_BASE MSR with the GS base register. The kernel can then use the GS prefix on normal memory references to access kernel data structures. Similarly, when the OS kernel is entered using an interrupt or exception (where the kernel stack is already set up), SWAPGS can be used to quickly get a pointer to the kernel data structures.

+

The IA32_KERNEL_GS_BASE MSR itself is only accessible using RDMSR/WRMSR instructions. Those instructions are only accessible at privilege level 0. The WRMSR instruction ensures that the IA32_KERNEL_GS_BASE MSR contains a canonical address.

+

Operation + ¶ +

+
IF CS.L ≠ 1 (* Not in 64-Bit Mode *)
+    THEN
+        #UD; FI;
+IF CPL ≠ 0
+    THEN #GP(0); FI;
+tmp := GS.base;
+GS.base := IA32_KERNEL_GS_BASE;
+IA32_KERNEL_GS_BASE := tmp;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + +
#UDIf Mode ≠ 64-Bit.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDIf Mode ≠ 64-Bit.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDIf Mode ≠ 64-Bit.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UDIf Mode ≠ 64-Bit.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + +
#GP(0)If CPL ≠ 0.
#UDIf the LOCK prefix is used.
diff --git a/x86/syscall.html b/x86/syscall.html new file mode 100644 index 0000000..6e536f8 --- /dev/null +++ b/x86/syscall.html @@ -0,0 +1,150 @@ + +SYSCALL + — Fast System Call

SYSCALL + — Fast System Call

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 05SYSCALLZOValidInvalidFast call to privilege level 0 system procedures.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

SYSCALL invokes an OS system-call handler at privilege level 0. It does so by loading RIP from the IA32_LSTAR MSR (after saving the address of the instruction following SYSCALL into RCX). (The WRMSR instruction ensures that the IA32_LSTAR MSR always contains a canonical address.)

+

SYSCALL also saves RFLAGS into R11 and then masks RFLAGS using the IA32_FMASK MSR (MSR address C0000084H); specifically, the processor clears in RFLAGS every bit corresponding to a bit that is set in the IA32_FMASK MSR.

+
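For illustration only (not part of the reference text): a minimal user-mode sketch, assuming Linux on x86-64 and GCC/Clang extended inline assembly, that issues SYSCALL directly. RCX and R11 are listed as clobbers because, as described above, SYSCALL overwrites them with the return RIP and the saved RFLAGS; the call number 1 (write) and the register convention are Linux ABI assumptions.

/* Compile on Linux x86-64 with GCC or Clang. */
static long raw_write(int fd, const void *buf, unsigned long len) {
    long ret = 1;                                   /* RAX = 1: __NR_write on Linux x86-64 (assumption) */
    register long           r_fd  __asm__("rdi") = fd;
    register const void    *r_buf __asm__("rsi") = buf;
    register unsigned long  r_len __asm__("rdx") = len;
    __asm__ volatile ("syscall"
                      : "+a"(ret)                   /* RAX: call number in, return value out */
                      : "r"(r_fd), "r"(r_buf), "r"(r_len)
                      : "rcx", "r11", "memory");    /* SYSCALL clobbers RCX (return RIP) and R11 (RFLAGS) */
    return ret;
}

int main(void) {
    raw_write(1, "hello\n", 6);
    return 0;
}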

SYSCALL loads the CS and SS selectors with values derived from bits 47:32 of the IA32_STAR MSR. However, the CS and SS descriptor caches are not loaded from the descriptors (in GDT or LDT) referenced by those selectors. Instead, the descriptor caches are loaded with fixed values. See the Operation section for details. It is the responsibility of OS software to ensure that the descriptors (in GDT or LDT) referenced by those selector values correspond to the fixed values loaded into the descriptor caches; the SYSCALL instruction does not ensure this correspondence.

+

The SYSCALL instruction does not save the stack pointer (RSP). If the OS system-call handler will change the stack pointer, it is the responsibility of software to save the previous value of the stack pointer. This might be done prior to executing SYSCALL, with software restoring the stack pointer with the instruction following SYSCALL (which will be executed after SYSRET). Alternatively, the OS system-call handler may save the stack pointer and restore it before executing SYSRET.

+

When shadow stacks are enabled at a privilege level where the SYSCALL instruction is invoked, the SSP is saved to the IA32_PL3_SSP MSR. If shadow stacks are enabled at privilege level 0, the SSP is loaded with 0. Refer to Chapter 6, “Procedure Calls, Interrupts, and Exceptions,” and Chapter 17, “Control-flow Enforcement Technology (CET),” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for additional CET details.

+

Instruction ordering. Instructions following a SYSCALL may be fetched from memory before earlier instructions complete execution, but they will not execute (even speculatively) until all instructions prior to the SYSCALL have completed execution (the later instructions may execute before data stored by the earlier instructions have become globally visible).

+

Operation + ¶ +

+
IF (CS.L ≠ 1 ) or (IA32_EFER.LMA ≠ 1) or (IA32_EFER.SCE ≠ 1)
+(* Not in 64-Bit Mode or SYSCALL/SYSRET not enabled in IA32_EFER *)
+    THEN #UD;
+FI;
+RCX := RIP; (* Will contain address of next instruction *)
+RIP := IA32_LSTAR;
+R11 := RFLAGS;
+RFLAGS := RFLAGS AND NOT(IA32_FMASK);
+CS.Selector := IA32_STAR[47:32] AND FFFCH (* Operating system provides CS; RPL forced to 0 *)
+(* Set rest of CS to a fixed value *)
+CS.Base := 0;
+                (* Flat segment *)
+CS.Limit := FFFFFH;
+                (* With 4-KByte granularity, implies a 4-GByte limit *)
+CS.Type := 11;
+                (* Execute/read code, accessed *)
+CS.S := 1;
+CS.DPL := 0;
+CS.P := 1;
+CS.L := 1;
+                (* Entry is to 64-bit mode *)
+CS.D := 0;
+                (* Required if CS.L = 1 *)
+CS.G := 1;
+                (* 4-KByte granularity *)
+IF ShadowStackEnabled(CPL)
+    THEN (* adjust so bits 63:N get the value of bit N–1, where N is the CPU’s maximum linear-address width *)
+        IA32_PL3_SSP := LA_adjust(SSP);
+            (* With shadow stacks enabled the system call is supported from Ring 3 to Ring 0 *)
+            (* OS supporting Ring 0 to Ring 0 system calls or Ring 1/2 to ring 0 system call *)
+            (* Must preserve the contents of IA32_PL3_SSP to avoid losing ring 3 state *)
+FI;
+CPL := 0;
+IF ShadowStackEnabled(CPL)
+    SSP := 0;
+FI;
+IF EndbranchEnabled(CPL)
+    IA32_S_CET.TRACKER = WAIT_FOR_ENDBRANCH
+    IA32_S_CET.SUPPRESS = 0
+FI;
+SS.Selector := IA32_STAR[47:32] + 8;
+                (* SS just above CS *)
+(* Set rest of SS to a fixed value *)
+SS.Base := 0;
+                (* Flat segment *)
+SS.Limit := FFFFFH;
+                (* With 4-KByte granularity, implies a 4-GByte limit *)
+SS.Type := 3;
+                (* Read/write data, accessed *)
+SS.S := 1;
+SS.DPL := 0;
+SS.P := 1;
+SS.B := 1;
+                (* 32-bit stack segment *)
+SS.G := 1;
+                (* 4-KByte granularity *)
+
+

Flags Affected + ¶ +

+

All.

+

Protected Mode Exceptions + ¶ +

+ + + +
#UDThe SYSCALL instruction is not recognized in protected mode.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe SYSCALL instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe SYSCALL instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UDThe SYSCALL instruction is not recognized in compatibility mode.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + +
#UDIf IA32_EFER.SCE = 0.
If the LOCK prefix is used.
diff --git a/x86/sysenter.html b/x86/sysenter.html new file mode 100644 index 0000000..305d88c --- /dev/null +++ b/x86/sysenter.html @@ -0,0 +1,181 @@ + +SYSENTER + — Fast System Call

SYSENTER + — Fast System Call

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 34SYSENTERZOValidValidFast call to privilege level 0 system procedures.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Executes a fast call to a level 0 system procedure or routine. SYSENTER is a companion instruction to SYSEXIT. The instruction is optimized to provide the maximum performance for system calls from user code running at privilege level 3 to operating system or executive procedures running at privilege level 0.

+

When executed in IA-32e mode, the SYSENTER instruction transitions the logical processor to 64-bit mode; otherwise, the logical processor remains in protected mode.

+

Prior to executing the SYSENTER instruction, software must specify the privilege level 0 code segment and code entry point, and the privilege level 0 stack segment and stack pointer by writing values to the following MSRs:

+
    +
  • IA32_SYSENTER_CS (MSR address 174H) — The lower 16 bits of this MSR are the segment selector for the privilege level 0 code segment. This value is also used to determine the segment selector of the privilege level 0 stack segment (see the Operation section). This value cannot indicate a null selector.
  • +
  • IA32_SYSENTER_EIP (MSR address 176H) — The value of this MSR is loaded into RIP (thus, this value references the first instruction of the selected operating procedure or routine). In protected mode, only bits 31:0 are loaded.
  • +
  • IA32_SYSENTER_ESP (MSR address 175H) — The value of this MSR is loaded into RSP (thus, this value contains the stack pointer for the privilege level 0 stack). This value cannot represent a non-canonical address. In protected mode, only bits 31:0 are loaded.
+

These MSRs can be read from and written to using RDMSR/WRMSR. The WRMSR instruction ensures that the IA32_SYSENTER_EIP and IA32_SYSENTER_ESP MSRs always contain canonical addresses.

+

While SYSENTER loads the CS and SS selectors with values derived from the IA32_SYSENTER_CS MSR, the CS and SS descriptor caches are not loaded from the descriptors (in GDT or LDT) referenced by those selectors. Instead, the descriptor caches are loaded with fixed values. See the Operation section for details. It is the responsibility of OS software to ensure that the descriptors (in GDT or LDT) referenced by those selector values correspond to the fixed values loaded into the descriptor caches; the SYSENTER instruction does not ensure this correspondence.

+

The SYSENTER instruction can be invoked from all operating modes except real-address mode.

+

The SYSENTER and SYSEXIT instructions are companion instructions, but they do not constitute a call/return pair. When executing a SYSENTER instruction, the processor does not save state information for the user code (e.g., the instruction pointer), and neither the SYSENTER nor the SYSEXIT instruction supports passing parameters on the stack.

+

To use the SYSENTER and SYSEXIT instructions as companion instructions for transitions between privilege level 3 code and privilege level 0 operating system procedures, the following conventions must be followed:

+
    +
  • The segment descriptors for the privilege level 0 code and stack segments and for the privilege level 3 code and stack segments must be contiguous in a descriptor table. This convention allows the processor to compute the segment selectors from the value entered in the SYSENTER_CS_MSR MSR.
  • +
  • The fast system call “stub” routines executed by user code (typically in shared libraries or DLLs) must save the required return IP and processor state information if a return to the calling procedure is required. Likewise, the operating system or executive procedures called with SYSENTER instructions must have access to and use this saved return and state information when returning to the user code.
+

The SYSENTER and SYSEXIT instructions were introduced into the IA-32 architecture in the Pentium II processor. The availability of these instructions on a processor is indicated with the SYSENTER/SYSEXIT present (SEP) feature flag returned to the EDX register by the CPUID instruction. An operating system that qualifies the SEP flag must also qualify the processor family and model to ensure that the SYSENTER/SYSEXIT instructions are actually present. For example:

+

IF CPUID SEP bit is set
    THEN IF (Family = 6) and (Model < 3) and (Stepping < 3)
        THEN SYSENTER/SYSEXIT_Not_Supported; FI;
    ELSE SYSENTER/SYSEXIT_Supported; FI;
FI;

+

When the CPUID instruction is executed on the Pentium Pro processor (model 1), the processor returns the SEP flag as set, but does not support the SYSENTER/SYSEXIT instructions.

+

When shadow stacks are enabled at the privilege level where the SYSENTER instruction is invoked, the SSP is saved to the IA32_PL3_SSP MSR. If shadow stacks are enabled at privilege level 0, the SSP is loaded with 0. Refer to Chapter 6, “Procedure Calls, Interrupts, and Exceptions,” and Chapter 17, “Control-flow Enforcement Technology (CET),” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for additional CET details.

+

Instruction ordering. Instructions following a SYSENTER may be fetched from memory before earlier instructions complete execution, but they will not execute (even speculatively) until all instructions prior to the SYSENTER have completed execution (the later instructions may execute before data stored by the earlier instructions have become globally visible).

+

Operation + ¶ +

+
IF CR0.PE = 0 OR IA32_SYSENTER_CS[15:2] = 0 THEN #GP(0); FI;
+RFLAGS.VM := 0;
+                    (* Ensures protected mode execution *)
+RFLAGS.IF := 0;
+                    (* Mask interrupts *)
+IF in IA-32e mode
+    THEN
+        RSP := IA32_SYSENTER_ESP;
+        RIP := IA32_SYSENTER_EIP;
+ELSE
+        ESP := IA32_SYSENTER_ESP[31:0];
+        EIP := IA32_SYSENTER_EIP[31:0];
+FI;
+CS.Selector := IA32_SYSENTER_CS[15:0] AND
+                    FFFCH;
+                    (* Operating system provides CS; RPL forced to 0 *)
+(* Set rest of CS to a fixed value *)
+CS.Base := 0;
+                    (* Flat segment *)
+CS.Limit := FFFFFH;
+                    (* With 4-KByte granularity, implies a 4-GByte limit *)
+CS.Type := 11;
+                    (* Execute/read code, accessed *)
+CS.S := 1;
+CS.DPL := 0;
+CS.P := 1;
+IF in IA-32e mode
+    THEN
+        CS.L := 1;
+                    (* Entry is to 64-bit mode *)
+        CS.D := 0;
+                    (* Required if CS.L = 1 *)
+    ELSE
+        CS.L := 0;
+        CS.D := 1;
+                    (* 32-bit code segment*)
+FI;
+CS.G := 1;
+                    (* 4-KByte granularity *)
+IF ShadowStackEnabled(CPL)
+    THEN
+        IF IA32_EFER.LMA = 0
+            THEN IA32_PL3_SSP := SSP;
+            ELSE (* adjust so bits 63:N get the value of bit N–1, where N is the CPU’s maximum linear-address width *)
+                IA32_PL3_SSP := LA_adjust(SSP);
+        FI;
+FI;
+CPL := 0;
+IF ShadowStackEnabled(CPL)
+    SSP := 0;
+FI;
+IF EndbranchEnabled(CPL)
+    IA32_S_CET.TRACKER = WAIT_FOR_ENDBRANCH
+    IA32_S_CET.SUPPRESS = 0
+FI;
+SS.Selector := CS.Selector + 8;
+                    (* SS just above CS *)
+(* Set rest of SS to a fixed value *)
+SS.Base := 0;
+                    (* Flat segment *)
+SS.Limit := FFFFFH;
+                    (* With 4-KByte granularity, implies a 4-GByte limit *)
+SS.Type := 3;
+                    (* Read/write data, accessed *)
+SS.S := 1;
+SS.DPL := 0;
+SS.P := 1;
+SS.B := 1;
+                    (* 32-bit stack segment*)
+SS.G := 1;
+                    (* 4-KByte granularity *)
+
+

Flags Affected + ¶ +

+

VM, IF (see Operation above).

+

Protected Mode Exceptions + ¶ +

+ + + + + + +
#GP(0)If IA32_SYSENTER_CS[15:2] = 0.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + +
#GPThe SYSENTER instruction is not recognized in real-address mode.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/sysexit.html b/x86/sysexit.html new file mode 100644 index 0000000..d1db8a6 --- /dev/null +++ b/x86/sysexit.html @@ -0,0 +1,182 @@ + +SYSEXIT + — Fast Return from Fast System Call

SYSEXIT + — Fast Return from Fast System Call

+ + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 35SYSEXITZOValidValidFast return to privilege level 3 user code.
REX.W + 0F 35SYSEXITZOValidValidFast return to 64-bit mode privilege level 3 user code.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Executes a fast return to privilege level 3 user code. SYSEXIT is a companion instruction to the SYSENTER instruction. The instruction is optimized to provide the maximum performance for returns from system procedures executing at protection level 0 to user procedures executing at protection level 3. It must be executed from code executing at privilege level 0.

+

With a 64-bit operand size, SYSEXIT remains in 64-bit mode; otherwise, it either enters compatibility mode (if the logical processor is in IA-32e mode) or remains in protected mode (if it is not).

+

Prior to executing SYSEXIT, software must specify the privilege level 3 code segment and code entry point, and the privilege level 3 stack segment and stack pointer by writing values into the following MSR and general-purpose registers:

+
    +
  • IA32_SYSENTER_CS (MSR address 174H) — Contains a 32-bit value that is used to determine the segment selectors for the privilege level 3 code and stack segments (see the Operation section)
  • +
  • RDX — The canonical address in this register is loaded into RIP (thus, this value references the first instruction to be executed in the user code). If the return is not to 64-bit mode, only bits 31:0 are loaded.
  • +
  • ECX — The canonical address in this register is loaded into RSP (thus, this value contains the stack pointer for the privilege level 3 stack). If the return is not to 64-bit mode, only bits 31:0 are loaded.
+

The IA32_SYSENTER_CS MSR can be read from and written to using RDMSR and WRMSR.

+

While SYSEXIT loads the CS and SS selectors with values derived from the IA32_SYSENTER_CS MSR, the CS and SS descriptor caches are not loaded from the descriptors (in GDT or LDT) referenced by those selectors. Instead, the descriptor caches are loaded with fixed values. See the Operation section for details. It is the responsibility of OS software to ensure that the descriptors (in GDT or LDT) referenced by those selector values correspond to the fixed values loaded into the descriptor caches; the SYSEXIT instruction does not ensure this correspondence.

+

The SYSEXIT instruction can be invoked from all operating modes except real-address mode and virtual-8086 mode.

+

The SYSENTER and SYSEXIT instructions were introduced into the IA-32 architecture in the Pentium II processor. The availability of these instructions on a processor is indicated with the SYSENTER/SYSEXIT present (SEP) feature flag returned to the EDX register by the CPUID instruction. An operating system that qualifies the SEP flag must also qualify the processor family and model to ensure that the SYSENTER/SYSEXIT instructions are actually present. For example:

+

IF CPUID SEP bit is set
    THEN IF (Family = 6) and (Model < 3) and (Stepping < 3)
        THEN SYSENTER/SYSEXIT_Not_Supported; FI;
    ELSE SYSENTER/SYSEXIT_Supported; FI;
FI;

+

When the CPUID instruction is executed on the Pentium Pro processor (model 1), the processor returns the SEP flag as set, but does not support the SYSENTER/SYSEXIT instructions.

+

When shadow stacks are enabled at privilege level 3, the instruction loads SSP with the value from the IA32_PL3_SSP MSR. Refer to Chapter 6, “Interrupt and Exception Handling,” and Chapter 17, “Control-flow Enforcement Technology (CET),” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for additional CET details.

+

Instruction ordering. Instructions following a SYSEXIT may be fetched from memory before earlier instructions complete execution, but they will not execute (even speculatively) until all instructions prior to the SYSEXIT have completed execution (the later instructions may execute before data stored by the earlier instructions have become globally visible).

+

Operation + ¶ +

+
IF IA32_SYSENTER_CS[15:2] = 0 OR CR0.PE = 0 OR CPL ≠ 0 THEN #GP(0); FI;
+IF operand size is 64-bit
+    THEN (* Return to 64-bit mode *)
+        RSP := RCX;
+        RIP := RDX;
+    ELSE (* Return to protected mode or compatibility mode *)
+        RSP := ECX;
+        RIP := EDX;
+FI;
+IF operand size is 64-bit (* Operating system provides CS; RPL forced to 3 *)
+    THEN CS.Selector := IA32_SYSENTER_CS[15:0] + 32;
+    ELSE CS.Selector := IA32_SYSENTER_CS[15:0] + 16;
+FI;
+CS.Selector := CS.Selector OR 3;
+            (* RPL forced to 3 *)
+(* Set rest of CS to a fixed value *)
+CS.Base := 0;
+            (* Flat segment *)
+CS.Limit := FFFFFH;
+            (* With 4-KByte granularity, implies a 4-GByte limit *)
+CS.Type := 11;
+            (* Execute/read code, accessed *)
+CS.S := 1;
+CS.DPL := 3;
+CS.P := 1;
+IF operand size is 64-bit
+    THEN (* return to 64-bit mode *)
+        CS.L := 1;
+            (* 64-bit code segment *)
+        CS.D := 0;
+    ELSE (* return to protected mode or compatibility mode *)
+        CS.L := 0;
+        CS.D := 1;
+            (* 32-bit code segment*)
+FI;
+CS.G := 1;
+            (* 4-KByte granularity *)
+CPL := 3;
+IF ShadowStackEnabled(CPL)
+    THEN SSP := IA32_PL3_SSP;
+FI;
+SS.Selector := CS.Selector + 8;
+            (* SS just above CS *)
+(* Set rest of SS to a fixed value *)
+SS.Base := 0;
+            (* Flat segment *)
+SS.Limit := FFFFFH;
+            (* With 4-KByte granularity, implies a 4-GByte limit *)
+SS.Type := 3;
+            (* Read/write data, accessed *)
+SS.S := 1;
+SS.DPL := 3;
+SS.P := 1;
+SS.B := 1;
+            (* 32-bit stack segment*)
+SS.G := 1; (* 4-KByte granularity *)
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + +
#GP(0)If IA32_SYSENTER_CS[15:2] = 0.
If CPL ≠ 0.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + +
#GPThe SYSEXIT instruction is not recognized in real-address mode.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#GP(0)The SYSEXIT instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + +
#GP(0)If IA32_SYSENTER_CS = 0.
If CPL ≠ 0.
If RCX or RDX contains a non-canonical address.
#UDIf the LOCK prefix is used.
diff --git a/x86/sysret.html b/x86/sysret.html new file mode 100644 index 0000000..086b945 --- /dev/null +++ b/x86/sysret.html @@ -0,0 +1,182 @@ + +SYSRET + — Return From Fast System Call

SYSRET + — Return From Fast System Call

+ + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 07SYSRETZOValidInvalidReturn to compatibility mode from fast system call.
REX.W + 0F 07SYSRETZOValidInvalidReturn to 64-bit mode from fast system call.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

SYSRET is a companion instruction to the SYSCALL instruction. It returns from an OS system-call handler to user code at privilege level 3. It does so by loading RIP from RCX and loading RFLAGS from R11.1 With a 64-bit operand size, SYSRET remains in 64-bit mode; otherwise, it enters compatibility mode and only the low 32 bits of the registers are loaded.

+

SYSRET loads the CS and SS selectors with values derived from bits 63:48 of the IA32_STAR MSR. However, the CS and SS descriptor caches are not loaded from the descriptors (in GDT or LDT) referenced by those selectors. Instead, the descriptor caches are loaded with fixed values. See the Operation section for details. It is the responsibility of OS software to ensure that the descriptors (in GDT or LDT) referenced by those selector values correspond to the fixed values loaded into the descriptor caches; the SYSRET instruction does not ensure this correspondence.

+

The SYSRET instruction does not modify the stack pointer (ESP or RSP). For that reason, it is necessary for software to switch to the user stack. The OS may load the user stack pointer (if it was saved after SYSCALL) before executing SYSRET; alternatively, user code may load the stack pointer (if it was saved before SYSCALL) after receiving control from SYSRET.

+

If the OS loads the stack pointer before executing SYSRET, it must ensure that the handler of any interrupt or exception delivered between restoring the stack pointer and successful execution of SYSRET is not invoked with the user stack. It can do so using approaches such as the following:

+
    +
  • External interrupts. The OS can prevent an external interrupt from being delivered by clearing EFLAGS.IF before loading the user stack pointer.
  • +
  • Nonmaskable interrupts (NMIs). The OS can ensure that the NMI handler is invoked with the correct stack by using the interrupt stack table (IST) mechanism for gate 2 (NMI) in the IDT (see Section 6.14.5, “Interrupt Stack Table,” in Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A).
  • +
  • General-protection exceptions (#GP). The SYSRET instruction generates #GP(0) if the value of RCX is not canonical. The OS can address this possibility using one or more of the following approaches: +
      +
    • Confirming that the value of RCX is canonical before executing SYSRET.
    • +
    • Using paging to ensure that the SYSCALL instruction will never save a non-canonical value into RCX.
    • +
    • Using the IST mechanism for gate 13 (#GP) in the IDT.
+

When shadow stacks are enabled at privilege level 3, the instruction loads SSP with the value from the IA32_PL3_SSP MSR. Refer to Chapter 6, “Procedure Calls, Interrupts, and Exceptions,” and Chapter 17, “Control-flow Enforcement Technology (CET),” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for additional CET details.

+
+

1. Regardless of the value of R11, the RF and VM flags are always 0 in RFLAGS after execution of SYSRET. In addition, all reserved bits in RFLAGS retain the fixed values.

+

Instruction ordering. Instructions following a SYSRET may be fetched from memory before earlier instructions complete execution, but they will not execute (even speculatively) until all instructions prior to the SYSRET have completed execution (the later instructions may execute before data stored by the earlier instructions have become globally visible).

+

Operation + ¶ +

+
IF (CS.L ≠ 1 ) or (IA32_EFER.LMA ≠ 1) or (IA32_EFER.SCE ≠ 1)
+(* Not in 64-Bit Mode or SYSCALL/SYSRET not enabled in IA32_EFER *)
+    THEN #UD; FI;
+IF (CPL ≠ 0) THEN #GP(0); FI;
+IF (operand size is 64-bit)
+    THEN (* Return to 64-Bit Mode *)
+        IF (RCX is not canonical) THEN #GP(0);
+        RIP := RCX;
+    ELSE (* Return to Compatibility Mode *)
+        RIP := ECX;
+FI;
+RFLAGS := (R11 & 3C7FD7H) | 2; (*
+                Clear RF, VM, reserved bits; set bit 1 *)
+IF (operand size is 64-bit)
+    THEN CS.Selector := IA32_STAR[63:48]+16;
+    ELSE CS.Selector := IA32_STAR[63:48];
+FI;
+CS.Selector := CS.Selector OR 3;
+            (* RPL forced to 3 *)
+(* Set rest of CS to a fixed value *)
+CS.Base := 0;
+            (* Flat segment *)
+CS.Limit := FFFFFH;
+            (* With 4-KByte granularity, implies a 4-GByte limit *)
+CS.Type := 11;
+            (* Execute/read code, accessed *)
+CS.S := 1;
+CS.DPL := 3;
+CS.P := 1;
+IF (operand size is 64-bit)
+    THEN (* Return to 64-Bit Mode *)
+        CS.L := 1;
+            (* 64-bit code segment *)
+        CS.D := 0;
+            (* Required if CS.L = 1 *)
+    ELSE (* Return to Compatibility Mode *)
+        CS.L := 0;
+            (* Compatibility mode *)
+        CS.D := 1;
+            (* 32-bit code segment *)
+FI;
+CS.G := 1;
+            (* 4-KByte granularity *)
+CPL := 3;
+IF ShadowStackEnabled(CPL)
+    SSP := IA32_PL3_SSP;
+FI;
+SS.Selector := (IA32_STAR[63:48]+8) OR 3;
+            (* RPL forced to 3 *)
+(* Set rest of SS to a fixed value *)
+SS.Base := 0;
+            (* Flat segment *)
+SS.Limit := FFFFFH;
+            (* With 4-KByte granularity, implies a 4-GByte limit *)
+SS.Type := 3;
+            (* Read/write data, accessed *)
+SS.S := 1;
+SS.DPL := 3;
+SS.P := 1;
+SS.B := 1;
+            (* 32-bit stack segment*)
+SS.G := 1;
+            (* 4-KByte granularity *)
+
+

Flags Affected + ¶ +

+

All.

+

Protected Mode Exceptions + ¶ +

+ + + +
#UDThe SYSRET instruction is not recognized in protected mode.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe SYSRET instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe SYSRET instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UDThe SYSRET instruction is not recognized in compatibility mode.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + +
#UDIf IA32_EFER.SCE = 0.
If the LOCK prefix is used.
#GP(0)If CPL ≠ 0.
If the return is to 64-bit mode and RCX contains a non-canonical address.
diff --git a/x86/tdpbf16ps.html b/x86/tdpbf16ps.html new file mode 100644 index 0000000..2844049 --- /dev/null +++ b/x86/tdpbf16ps.html @@ -0,0 +1,97 @@ + +TDPBF16PS + — Dot Product of BF16 Tiles Accumulated into Packed Single Precision Tile

TDPBF16PS + — Dot Product of BF16 Tiles Accumulated into Packed Single Precision Tile

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.128.F3.0F38.W0 5C 11:rrr:bbb TDPBF16PS tmm1, tmm2, tmm3AV/N.E.AMX-BF16Matrix multiply BF16 elements from tmm2 and tmm3, and accumulate the packed single precision elements in tmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)VEX.vvvv (r)N/A
+

Description + ¶ +

+

This instruction performs a set of SIMD dot-products of two BF16 elements and accumulates the results into a packed single precision tile. Each dword element in input tiles tmm2 and tmm3 is interpreted as a BF16 pair. For each possible combination of (row of tmm2, column of tmm3), the instruction performs a set of SIMD dot-products on all corresponding BF16 pairs (one pair from tmm2 and one pair from tmm3), adds the results of those dot-products, and then accumulates the result into the corresponding row and column of tmm1.

+

“Round to nearest even” rounding mode is used for each accumulation of the FMA. Output denormals are always flushed to zero and input denormals are always treated as zero. MXCSR is neither consulted nor updated.

+

Any attempt to execute the TDPBF16PS instruction inside a TSX transaction will result in a transaction abort.

+

Operation + ¶ +

+
define make_fp32(x):
+    // The x parameter is bfloat16. Pack it in to upper 16b of a dword.
+    // The bit pattern is a legal fp32 value. Return that bit pattern.
+    dword: = 0
+    dword[31:16] := x
+return dword
+
+
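A C rendering of the make_fp32 helper above (an illustrative sketch, not from the reference text): the bfloat16 bit pattern is placed in the upper 16 bits of a 32-bit word and the result is reinterpreted as an IEEE single precision value.

#include <stdint.h>
#include <string.h>

/* Widen a bfloat16 bit pattern (uint16_t) to float, mirroring make_fp32. */
static float make_fp32(uint16_t bf16) {
    uint32_t dword = (uint32_t)bf16 << 16;   /* bfloat16 occupies bits 31:16 */
    float f;
    memcpy(&f, &dword, sizeof f);            /* reinterpret the bit pattern as fp32 */
    return f;
}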

TDPBF16PS tsrcdest, tsrc1, tsrc2 + ¶ +

+
// C = m x n (tsrcdest), A = m x k (tsrc1), B = k x n (tsrc2)
+# src1 and src2 elements are pairs of bfloat16
+elements_src1 := tsrc1.colsb / 4
+elements_src2 := tsrc2.colsb / 4
+elements_dest := tsrcdest.colsb / 4
+elements_temp := tsrcdest.colsb / 2
+for m in 0 ... tsrcdest.rows-1:
+    temp1[ 0 ... elements_temp-1 ] := 0
+    for k in 0 ... elements_src1-1:
+        for n in 0 ... elements_dest-1:
+            // FP32 FMA with DAZ=FTZ=1, RNE rounding.
+            // MXCSR is neither consulted nor updated.
+            // No exceptions raised or denoted.
+            temp1.fp32[2*n+0] += make_fp32(tsrc1.row[m].bfloat16[2*k+0]) * make_fp32(tsrc2.row[k].bfloat16[2*n+0])
+            temp1.fp32[2*n+1] += make_fp32(tsrc1.row[m].bfloat16[2*k+1]) * make_fp32(tsrc2.row[k].bfloat16[2*n+1])
+    for n in 0 ... elements_dest-1:
+        // DAZ=FTZ=1, RNE rounding.
+        // MXCSR is neither consulted nor updated.
+        // No exceptions raised or denoted.
+        tmpf32 := temp1.fp32[2*n] + temp1.fp32[2*n+1]
+        tsrcdest.row[m].fp32[n] := tsrcdest.row[m].fp32[n] + tmpf32
+    write_row_and_zero(tsrcdest, m, tmp, tsrcdest.colsb)
+zero_upper_rows(tsrcdest, tsrcdest.rows)
+zero_tilecfg_start()
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
TDPBF16PS void _tile_dpbf16ps(__tile dst, __tile src1, __tile src2);
+
+

Flags Affected + ¶ +

+

None.

+

Exceptions + ¶ +

+

AMX-E4; see Section 2.10, “Intel® AMX Instruction Exception Classes,” for details.

diff --git a/x86/tdpbssd.tdpbsud.tdpbusd.tdpbuud.html b/x86/tdpbssd.tdpbsud.tdpbusd.tdpbuud.html new file mode 100644 index 0000000..9cf6730 --- /dev/null +++ b/x86/tdpbssd.tdpbsud.tdpbusd.tdpbuud.html @@ -0,0 +1,119 @@ + +TDPBSSD/TDPBSUD/TDPBUSD/TDPBUUD + — Dot Product of Signed/Unsigned Bytes with DwordAccumulation

TDPBSSD/TDPBSUD/TDPBUSD/TDPBUUD + — Dot Product of Signed/Unsigned Bytes with DwordAccumulation

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.128.F2.0F38.W0 5E 11:rrr:bbb TDPBSSD tmm1, tmm2, tmm3AV/N.E.AMX-INT8Matrix multiply signed byte elements from tmm2 by signed byte elements from tmm3 and accumulate the dword elements in tmm1.
VEX.128.F3.0F38.W0 5E 11:rrr:bbb TDPBSUD tmm1, tmm2, tmm3AV/N.E.AMX-INT8Matrix multiply signed byte elements from tmm2 by unsigned byte elements from tmm3 and accumulate the dword elements in tmm1.
VEX.128.66.0F38.W0 5E 11:rrr:bbb TDPBUSD tmm1, tmm2, tmm3AV/N.E.AMX-INT8Matrix multiply unsigned byte elements from tmm2 by signed byte elements from tmm3 and accumulate the dword elements in tmm1.
VEX.128.NP.0F38.W0 5E 11:rrr:bbb TDPBUUD tmm1, tmm2, tmm3AV/N.E.AMX-INT8Matrix multiply unsigned byte elements from tmm2 by unsigned byte elements from tmm3 and accumulate the dword elements in tmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)VEX.vvvv (r)N/A
+

Description + ¶ +

+

For each possible combination of (row of tmm2, column of tmm3), the instruction performs a set of SIMD dot-products on all corresponding four byte elements, one from tmm2 and one from tmm3, adds the results of those dot-products, and then accumulates the result into the corresponding row and column of tmm1. Each dword in input tiles tmm2 and tmm3 is interpreted as four byte elements. These may be signed or unsigned. Each letter in the two-letter pattern SU, US, SS, UU indicates the signed/unsigned nature of the values in tmm2 and tmm3, respectively.

+

Any attempt to execute the TDPBSSD/TDPBSUD/TDPBUSD/TDPBUUD instructions inside an Intel TSX transaction will result in a transaction abort.

+

Operation + ¶ +

+
define DPBD(c,x,y):// arguments are dwords
+    if *x operand is signed*:
+        extend_src1 := SIGN_EXTEND
+    else:
+        extend_src1 := ZERO_EXTEND
+    if *y operand is signed*:
+        extend_src2 := SIGN_EXTEND
+    else:
+        extend_src2 := ZERO_EXTEND
+    p0dword := extend_src1(x.byte[0]) * extend_src2(y.byte[0])
+    p1dword := extend_src1(x.byte[1]) * extend_src2(y.byte[1])
+    p2dword := extend_src1(x.byte[2]) * extend_src2(y.byte[2])
+    p3dword := extend_src1(x.byte[3]) * extend_src2(y.byte[3])
+    c := c + p0dword + p1dword + p2dword + p3dword
+
+
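A hedged C model of the DPBD helper above for the signed-by-signed case (TDPBSSD); the function name and scalar loop are illustrative assumptions, and the 32-bit accumulator is assumed to wrap with no saturation.

#include <stdint.h>

/* Dot product of four signed byte pairs, accumulated into a dword (TDPBSSD case). */
static int32_t dpbd_ssd(int32_t c, uint32_t x, uint32_t y) {
    int64_t sum = c;
    for (int i = 0; i < 4; i++) {
        int32_t xb = (int8_t)(x >> (8 * i));   /* sign-extend byte i of x */
        int32_t yb = (int8_t)(y >> (8 * i));   /* sign-extend byte i of y */
        sum += (int64_t)xb * yb;               /* p0..p3 dword products */
    }
    return (int32_t)(uint32_t)sum;             /* accumulate modulo 2^32 (wraparound assumed) */
}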

TDPBSSD, TDPBSUD, TDPBUSD, TDPBUUD tsrcdest, tsrc1, tsrc2 (Register Only Version) + ¶ +

+
// C = m x n (tsrcdest), A = m x k (tsrc1), B = k x n (tsrc2)
+tsrc1_elements_per_row := tsrc1.colsb / 4
+tsrc2_elements_per_row := tsrc2.colsb / 4
+tsrcdest_elements_per_row := tsrcdest.colsb / 4
+for m in 0 ... tsrcdest.rows-1:
+    tmp := tsrcdest.row[m]
+    for k in 0 ... tsrc1_elements_per_row-1:
+        for n in 0 ... tsrcdest_elements_per_row-1:
+            DPBD( tmp.dword[n], tsrc1.row[m].dword[k], tsrc2.row[k].dword[n] )
+    write_row_and_zero(tsrcdest, m, tmp, tsrcdest.colsb)
+zero_upper_rows(tsrcdest, tsrcdest.rows)
+zero_tilecfg_start()
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
TDPBSSD void _tile_dpbssd(__tile dst, __tile src1, __tile src2);
+
+
TDPBSUD void _tile_dpbsud(__tile dst, __tile src1, __tile src2);
+
+
TDPBUSD void _tile_dpbusd(__tile dst, __tile src1, __tile src2);
+
+
TDPBUUD void _tile_dpbuud(__tile dst, __tile src1, __tile src2);
+
+

Flags Affected + ¶ +

+

None.

+

Exceptions + ¶ +

+

AMX-E4; see Section 2.10, “Intel® AMX Instruction Exception Classes,” for details.

diff --git a/x86/test.html b/x86/test.html new file mode 100644 index 0000000..905dd43 --- /dev/null +++ b/x86/test.html @@ -0,0 +1,244 @@ + +TEST + — Logical Compare

TEST + — Logical Compare

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
A8 ibTEST AL, imm8IValidValidAND imm8 with AL; set SF, ZF, PF according to result.
A9 iwTEST AX, imm16IValidValidAND imm16 with AX; set SF, ZF, PF according to result.
A9 idTEST EAX, imm32IValidValidAND imm32 with EAX; set SF, ZF, PF according to result.
REX.W + A9 idTEST RAX, imm32IValidN.E.AND imm32 sign-extended to 64-bits with RAX; set SF, ZF, PF according to result.
F6 /0 ibTEST r/m8, imm8MIValidValidAND imm8 with r/m8; set SF, ZF, PF according to result.
REX + F6 /0 ibTEST r/m81, imm8MIValidN.E.AND imm8 with r/m8; set SF, ZF, PF according to result.
F7 /0 iwTEST r/m16, imm16MIValidValidAND imm16 with r/m16; set SF, ZF, PF according to result.
F7 /0 idTEST r/m32, imm32MIValidValidAND imm32 with r/m32; set SF, ZF, PF according to result.
REX.W + F7 /0 idTEST r/m64, imm32MIValidN.E.AND imm32 sign-extended to 64-bits with r/m64; set SF, ZF, PF according to result.
84 /rTEST r/m8, r8MRValidValidAND r8 with r/m8; set SF, ZF, PF according to result.
REX + 84 /rTEST r/m81, r81MRValidN.E.AND r8 with r/m8; set SF, ZF, PF according to result.
85 /rTEST r/m16, r16MRValidValidAND r16 with r/m16; set SF, ZF, PF according to result.
85 /rTEST r/m32, r32MRValidValidAND r32 with r/m32; set SF, ZF, PF according to result.
REX.W + 85 /rTEST r/m64, r64MRValidN.E.AND r64 with r/m64; set SF, ZF, PF according to result.
+
+

1. In 64-bit mode, r/m8 can not be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
IAL/AX/EAX/RAXimm8/16/32N/AN/A
MIModRM:r/m (r)imm8/16/32N/AN/A
MRModRM:r/m (r)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Computes the bit-wise logical AND of the first operand (source 1 operand) and the second operand (source 2 operand) and sets the SF, ZF, and PF status flags according to the result. The result is then discarded.

+

In 64-bit mode, using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). Using a REX prefix in the form of REX.W promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+
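For illustration (not part of the reference text), a small C sketch, assuming GCC/Clang extended inline assembly on x86-64, that executes TEST on a register with itself and reads ZF with SETZ:

#include <stdint.h>
#include <stdio.h>

/* Returns 1 if x is zero, using TEST to set ZF and SETZ to read it back. */
static int is_zero_via_test(uint64_t x) {
    uint8_t zf;
    __asm__ ("test %1, %1\n\t"
             "setz %0"
             : "=r"(zf)
             : "r"(x)
             : "cc");          /* TEST writes the status flags */
    return zf;
}

int main(void) {
    printf("%d %d\n", is_zero_via_test(0), is_zero_via_test(42));  /* prints: 1 0 */
    return 0;
}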

Operation + ¶ +

+
TEMP := SRC1 AND SRC2;
+SF := MSB(TEMP);
+IF TEMP = 0
+    THEN ZF := 1;
+    ELSE ZF := 0;
+FI;
+PF := BitwiseXNOR(TEMP[0:7]);
+CF := 0;
+OF := 0;
+(* AF is undefined *)
+
+

Flags Affected + ¶ +

+

The OF and CF flags are set to 0. The SF, ZF, and PF flags are set according to the result (see the “Operation” section above). The state of the AF flag is undefined.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/testui.html b/x86/testui.html new file mode 100644 index 0000000..4750963 --- /dev/null +++ b/x86/testui.html @@ -0,0 +1,93 @@ + +TESTUI + — Determine User Interrupt Flag

TESTUI + — Determine User Interrupt Flag

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 01 ED TESTUIZOV/IUINTRCopies the current value of UIF into EFLAGS.CF.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/AN/A
+

Description + ¶ +

+

TESTUI copies the current value of the user interrupt flag (UIF) into EFLAGS.CF. This instruction can be executed regardless of CPL.

+

TESTUI may be executed normally inside a transactional region.

+

Operation + ¶ +

+
CF := UIF;
+ZF := AF := OF := PF := SF := 0;
+
+

Flags Affected + ¶ +

+

The ZF, OF, AF, PF, and SF flags are cleared; the CF flag is set to the value of the user interrupt flag.

+

Protected Mode Exceptions + ¶ +

+ + + +
#UDThe TESTUI instruction is not recognized in protected mode.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe TESTUI instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe TESTUI instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UDThe TESTUI instruction is not recognized in compatibility mode.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + +
#UDIf the LOCK prefix is used.
If executed inside an enclave.
If CR4.UINTR = 0.
If CPUID.07H.0H:EDX.UINTR[bit 5] = 0.
diff --git a/x86/tileloadd.tileloaddt1.html b/x86/tileloadd.tileloaddt1.html new file mode 100644 index 0000000..c047731 --- /dev/null +++ b/x86/tileloadd.tileloaddt1.html @@ -0,0 +1,87 @@ + +TILELOADD/TILELOADDT1 + — Load Tile

TILELOADD/TILELOADDT1 + — Load Tile

+ + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.128.F2.0F38.W0 4B !(11):rrr:100 TILELOADD tmm1, sibmemAV/N.E.AMX-TILELoad data into tmm1 as specified by information in sibmem.
VEX.128.66.0F38.W0 4B !(11):rrr:100 TILELOADDT1 tmm1, sibmemAV/N.E.AMX-TILELoad data into tmm1 as specified by information in sibmem with hint to optimize data caching.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction is required to use SIB addressing. The index register serves as a stride indicator. If the SIB encoding omits an index register, the value zero is assumed for the content of the index register.

+

This instruction loads a tile destination with rows and columns as specified by the tile configuration. The “T1” version provides a hint to the implementation that the data would be reused but does not need to be resident in the nearest cache levels.

+

The TILECFG.start_row in the TILECFG data should be initialized to '0' in order to load the entire tile and is set to zero on successful completion of the TILELOADD instruction. TILELOADD is a restartable instruction and the TILECFG.start_row will be non-zero when restartable events occur during the instruction execution.

+

Only memory operands are supported and they can only be accessed using a SIB addressing mode, similar to the V[P]GATHER*/V[P]SCATTER* instructions.

+

Any attempt to execute the TILELOADD/TILELOADDT1 instructions inside an Intel TSX transaction will result in a transaction abort.

+

Operation + ¶ +

+
TILELOADD[,T1] tdest, tsib
+start := tilecfg.start_row
+zero_upper_rows(tdest,start)
+membegin := tsib.base + displacement
+// if no index register in the SIB encoding, the value zero is used.
+stride := tsib.index << tsib.scale
+nbytes := tdest.colsb
+while start < tdest.rows:
+    memptr := membegin + start * stride
+    write_row_and_zero(tdest, start, read_memory(memptr, nbytes), nbytes)
+    start := start + 1
+zero_tilecfg_start()
+// In the case of a memory fault in the middle of an instruction, the tilecfg.start_row := start
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
TILELOADD void _tile_loadd(__tile dst, const void *base, int stride);
+
+
TILELOADDT1 void _tile_stream_loadd(__tile dst, const void *base, int stride);
+
+

Flags Affected + ¶ +

+

None.

+

Exceptions + ¶ +

+

AMX-E3; see Section 2.10, “Intel® AMX Instruction Exception Classes,” for details.

diff --git a/x86/tilerelease.html b/x86/tilerelease.html new file mode 100644 index 0000000..b2e9821 --- /dev/null +++ b/x86/tilerelease.html @@ -0,0 +1,65 @@ + +TILERELEASE + — Release Tile

TILERELEASE + — Release Tile

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.128.NP.0F38.W0 49 C0 TILERELEASEAV/N.E.AMX-TILEInitialize TILECFG and TILEDATA.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AN/AN/AN/AN/A
+

Description + ¶ +

+

This instruction returns TILECFG and TILEDATA to the INIT state.

+

Any attempt to execute the TILERELEASE instruction inside an Intel TSX transaction will result in a transaction abort.

+

Operation + ¶ +

+
zero_all_tile_data()
+tilecfg := 0// equivalent to 64B of zeros
+TILES_CONFIGURED := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
TILERELEASE void _tile_release(void);
+
+

Flags Affected + ¶ +

+

None.

+

Exceptions + ¶ +

+

AMX-E6; see Section 2.10, “Intel® AMX Instruction Exception Classes,” for details.

diff --git a/x86/tilestored.html b/x86/tilestored.html new file mode 100644 index 0000000..c298cff --- /dev/null +++ b/x86/tilestored.html @@ -0,0 +1,76 @@ + +TILESTORED + — Store Tile

TILESTORED + — Store Tile

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.128.F3.0F38.W0 4B !(11):rrr:100 TILESTORED sibmem, tmm1AV/N.E.AMX-TILEStore a tile in sibmem as specified in tmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

This instruction is required to use SIB addressing. The index register serves as a stride indicator. If the SIB encoding omits an index register, the value zero is assumed for the content of the index register.

+

This instruction stores a tile source of rows and columns as specified by the tile configuration.

+

The TILECFG.start_row in the TILECFG data should be initialized to '0' in order to store the entire tile and is set to zero on successful completion of the TILESTORED instruction. TILESTORED is a restartable instruction and the TILECFG.start_row will be non-zero when restartable events occur during the instruction execution.

+

Only memory operands are supported and they can only be accessed using a SIB addressing mode, similar to the V[P]GATHER*/V[P]SCATTER* instructions.

+

Any attempt to execute the TILESTORED instruction inside an Intel TSX transaction will result in a transaction abort.

+

Operation + ¶ +

+
TILESTORED tsib, tsrc
+start := tilecfg.start_row
+membegin := tsib.base + displacement
+// if no index register in the SIB encoding, the value zero is used.
+stride := tsib.index << tsib.scale
+while start < tsrc.rows:
+    memptr := membegin + start * stride
+    write_memory(memptr, tsrc.colsb, tsrc.row[start])
+    start := start + 1
+zero_tilecfg_start()
+// In the case of a memory fault in the middle of an instruction, the tilecfg.start_row := start
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
TILESTORED void _tile_stored(__tile src, void *base, int stride);
+
+

Flags Affected + ¶ +

+

None.

+

Exceptions + ¶ +

+

AMX-E3; see Section 2.10, “Intel® AMX Instruction Exception Classes,” for details.

diff --git a/x86/tilezero.html b/x86/tilezero.html new file mode 100644 index 0000000..3397b4a --- /dev/null +++ b/x86/tilezero.html @@ -0,0 +1,68 @@ + +TILEZERO + — Zero Tile

TILEZERO + — Zero Tile

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.128.F2.0F38.W0 49 11:rrr:000 TILEZERO tmm1AV/N.E.AMX-TILEZero the destination tile.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)N/AN/AN/A
+

Description + ¶ +

+

This instruction zeroes the destination tile.

+

Any attempt to execute the TILEZERO instruction inside an Intel TSX transaction will result in a transaction abort.

+

Operation + ¶ +

+
TILEZERO tdest
+nbytes := palette_table[palette_id].bytes_per_row
+for i in 0 ... palette_table[palette_id].max_rows-1:
+    for j in 0 ... nbytes-1:
+        tdest.row[i].byte[j] := 0
+zero_tilecfg_start()
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
TILEZERO void _tile_zero(__tile dst);
+
+

Flags Affected + ¶ +

+

None.

+

Exceptions + ¶ +

+

AMX-E5; see Section 2.10, “Intel® AMX Instruction Exception Classes,” for details.

diff --git a/x86/tpause.html b/x86/tpause.html new file mode 100644 index 0000000..e7bba35 --- /dev/null +++ b/x86/tpause.html @@ -0,0 +1,119 @@ + +TPAUSE + — Timed PAUSE

TPAUSE + — Timed PAUSE

+ + + + + + + + + + + + + +
Opcode / InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F AE /6 TPAUSE r32, <edx>, <eax>AV/VWAITPKGDirects the processor to enter an implementation-dependent optimized state until the TSC reaches the value in EDX:EAX.
+

Instruction Operand Encoding1 + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:r/m (r)N/AN/AN/A
+

Description + ¶ +

+

TPAUSE instructs the processor to enter an implementation-dependent optimized state. There are two such optimized states to choose from: light-weight power/performance optimized state, and improved power/performance optimized state. The selection between the two is governed by the explicit input register bit[0] source operand.

+

TPAUSE is available when CPUID.7.0:ECX.WAITPKG[bit 5] is enumerated as 1. TPAUSE may be executed at any privilege level. This instruction’s operation is the same in non-64-bit modes and in 64-bit mode.

+

Unlike PAUSE, the TPAUSE instruction will not cause an abort when used inside a transactional region, described in Chapter 16, “Programming with Intel® Transactional Synchronization Extensions,” of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1.

+
+

1. The Mod field of the ModR/M byte must have value 11B.

+

The input register contains information such as the preferred optimized state the processor should enter as described in the following table. Bits other than bit 0 are reserved and will result in #GP if non-zero.

+
+ + + + + + + + + + + + + + + + + + + + + + + + +
Bit ValueState NameWakeup TimePower SavingsOther Benefits
bit[0] = 0C0.2SlowerLargerImproves performance of the other SMT thread(s) on the same core.
bit[0] = 1C0.1FasterSmallerN/A
bits[31:1]N/AN/AN/AReserved
+
Table 4-20. TPAUSE Input Register Bit Definitions
+

The instruction execution wakes up when the time-stamp counter reaches or exceeds the implicit EDX:EAX 64-bit input value.

+

Prior to executing the TPAUSE instruction, an operating system may specify the maximum delay it allows the processor to suspend its operation. It can do so by writing TSC-quanta value to the following 32-bit MSR (IA32_UMWAIT_CONTROL at MSR index E1H):

+
    +
  • IA32_UMWAIT_CONTROL[31:2] — Determines the maximum time in TSC-quanta that the processor can reside in either C0.1 or C0.2. A zero value indicates no maximum time. The maximum time value is a 32-bit value where the upper 30 bits come from this field and the lower two bits are zero.
  • +
  • IA32_UMWAIT_CONTROL[1] — Reserved.
  • +
  • IA32_UMWAIT_CONTROL[0] — C0.2 is not allowed by the OS. Value of “1” means all C0.2 requests revert to C0.1.
+

If the processor that executed a TPAUSE instruction wakes due to the expiration of the operating system time-limit, the instruction sets RFLAGS.CF; otherwise, that flag is cleared.

+

The following additional events cause the processor to exit the implementation-dependent optimized state: a store to the read-set range within the transactional region, an NMI or SMI, a debug exception, a machine check exception, the BINIT# signal, the INIT# signal, and the RESET# signal.

+

Other implementation-dependent events may cause the processor to exit the implementation-dependent optimized state and proceed to the instruction following TPAUSE. In addition, an external interrupt causes the processor to exit the implementation-dependent optimized state regardless of whether maskable interrupts are inhibited (EFLAGS.IF = 0). Note that if maskable interrupts are inhibited, execution proceeds to the instruction following TPAUSE.

+

Operation + ¶ +

+
os_deadline := TSC+(IA32_UMWAIT_CONTROL[31:2]<<2)
+instr_deadline := UINT64(EDX:EAX)
+IF os_deadline < instr_deadline:
+    deadline := os_deadline
+    using_os_deadline := 1
+ELSE:
+    deadline := instr_deadline
+    using_os_deadline := 0
+WHILE TSC < deadline:
+    implementation_dependent_optimized_state(Source register, deadline, IA32_UMWAIT_CONTROL[0])
+IF using_os_deadline AND TSC ≥ deadline:
+    RFLAGS.CF := 1
+ELSE:
+    RFLAGS.CF := 0
+RFLAGS.AF,PF,SF,ZF,OF := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
TPAUSE uint8_t _tpause(uint32_t control, uint64_t counter);
+
+
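A minimal usage sketch of the _tpause intrinsic listed above (illustrative, not from the reference text; assumes a GCC/Clang toolchain with WAITPKG support, e.g., built with -mwaitpkg): the caller computes an absolute TSC deadline and requests the lighter C0.1 state by passing 1 in bit 0 of the control operand.

#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc, _tpause (WAITPKG); -mwaitpkg assumed */

/* Pause for roughly `cycles` TSC ticks from now in C0.1.
   Returns CF: 1 if the OS limit in IA32_UMWAIT_CONTROL expired first. */
static unsigned char tpause_for(uint64_t cycles) {
    uint64_t deadline = __rdtsc() + cycles;   /* implicit EDX:EAX deadline */
    return _tpause(1u /* bit[0]=1: request C0.1 */, deadline);
}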

Numeric Exceptions + ¶ +

+

None.

+

Exceptions (All Operating Modes) + ¶ +

+

#GP(0) If src[31:1] != 0.

+

If CR4.TSD = 1 and CPL != 0.

+

#UD If CPUID.7.0:ECX.WAITPKG[bit 5]=0.

diff --git a/x86/tzcnt.html b/x86/tzcnt.html new file mode 100644 index 0000000..3b9239b --- /dev/null +++ b/x86/tzcnt.html @@ -0,0 +1,161 @@ + +TZCNT + — Count the Number of Trailing Zero Bits

TZCNT + — Count the Number of Trailing Zero Bits

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
F3 0F BC /r TZCNT r16, r/m16AV/VBMI1Count the number of trailing zero bits in r/m16, return result in r16.
F3 0F BC /r TZCNT r32, r/m32AV/VBMI1Count the number of trailing zero bits in r/m32, return result in r32.
F3 REX.W 0F BC /r TZCNT r64, r/m64AV/N.E.BMI1Count the number of trailing zero bits in r/m64, return result in r64.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
AModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

TZCNT counts the number of trailing least significant zero bits in the source operand (second operand) and returns the result in the destination operand (first operand). TZCNT is an extension of the BSF instruction. The key difference between TZCNT and BSF is that TZCNT returns the operand size when the source operand is zero, whereas BSF leaves the contents of the destination operand undefined in that case. On processors that do not support TZCNT, the instruction byte encoding is executed as BSF.

+

Operation + ¶ +

+
temp := 0
+DEST := 0
+DO WHILE ( (temp < OperandSize) and (SRC[ temp] = 0) )
+    temp := temp +1
+    DEST := DEST+ 1
+OD
+IF DEST = OperandSize
+    CF := 1
+ELSE
+    CF := 0
+FI
+IF DEST = 0
+    ZF := 1
+ELSE
+    ZF := 0
+FI
+
+

Flags Affected + ¶ +

+

ZF is set to 1 in case of zero output (the least significant bit of the source is set), and to 0 otherwise. CF is set to 1 if the input was zero and cleared otherwise. The OF, SF, PF, and AF flags are undefined.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
TZCNT unsigned __int32 _tzcnt_u32(unsigned __int32 src);
+
+
TZCNT unsigned __int64 _tzcnt_u64(unsigned __int64 src);
+
+
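
A small usage sketch (not part of the manual text; assumes a compiler with BMI1 support, e.g., -mbmi):

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    printf("%u\n", _tzcnt_u32(0x18));  /* 0b11000 -> 3 trailing zero bits */
    printf("%u\n", _tzcnt_u32(0));     /* zero source -> operand size, 32 */
    return 0;
}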

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#GP(0)For an illegal memory operand effective address in the CS, DS, ES, FS or GS segments.
If the DS, ES, FS, or GS register is used to access memory and it contains a null segment selector.
#SS(0)For an illegal address in the SS segment.
#PF(fault-code) For a page fault.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GP(0)If any part of the operand lies outside of the effective address space from 0 to 0FFFFH.
#SS(0)For an illegal address in the SS segment.
#UDIf LOCK prefix is used.
+

Virtual 8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If any part of the operand lies outside of the effective address space from 0 to 0FFFFH.
#SS(0)For an illegal address in the SS segment.
#PF(fault-code) For a page fault.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in Protected Mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If the memory address is in a non-canonical form.
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#PF(fault-code) For a page fault.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf LOCK prefix is used.
diff --git a/x86/ucomisd.html b/x86/ucomisd.html new file mode 100644 index 0000000..b1a7baf --- /dev/null +++ b/x86/ucomisd.html @@ -0,0 +1,114 @@ + +UCOMISD + — Unordered Compare Scalar Double Precision Floating-Point Values and Set EFLAGS

UCOMISD + — Unordered Compare Scalar Double Precision Floating-Point Values and Set EFLAGS

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 2E /r UCOMISD xmm1, xmm2/m64AV/VSSE2Compare low double precision floating-point values in xmm1 and xmm2/mem64 and set the EFLAGS flags accordingly.
VEX.LIG.66.0F.WIG 2E /r VUCOMISD xmm1, xmm2/m64AV/VAVXCompare low double precision floating-point values in xmm1 and xmm2/mem64 and set the EFLAGS flags accordingly.
EVEX.LLIG.66.0F.W1 2E /r VUCOMISD xmm1, xmm2/m64{sae}BV/VAVX512FCompare low double precision floating-point values in xmm1 and xmm2/m64 and set the EFLAGS flags accordingly.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r)ModRM:r/m (r)N/AN/A
BTuple1 ScalarModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Performs an unordered compare of the double precision floating-point values in the low quadwords of operand 1 (first operand) and operand 2 (second operand), and sets the ZF, PF, and CF flags in the EFLAGS register according to the result (unordered, greater than, less than, or equal). The OF, SF, and AF flags in the EFLAGS register are set to 0. The unordered result is returned if either source operand is a NaN (QNaN or SNaN).

+

Operand 1 is an XMM register; operand 2 can be an XMM register or a 64 bit memory location.

+

The UCOMISD instruction differs from the COMISD instruction in that it signals a SIMD floating-point invalid operation exception (#I) only when a source operand is an SNaN. The COMISD instruction signals an invalid operation exception when a source operand is either an SNaN or a QNaN.

+

The EFLAGS register is not updated if an unmasked SIMD floating-point exception is generated.

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b, otherwise instructions will #UD.

+

Software should ensure VUCOMISD is encoded with VEX.L=0. Encoding VUCOMISD with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+

(V)UCOMISD (All Versions) + ¶ +

+
RESULT := UnorderedCompare(DEST[63:0] <> SRC[63:0]) {
+(* Set EFLAGS *) CASE (RESULT) OF
+    UNORDERED: ZF,PF,CF := 111;
+    GREATER_THAN: ZF,PF,CF := 000;
+    LESS_THAN: ZF,PF,CF := 001;
+    EQUAL: ZF,PF,CF := 100;
+ESAC;
+OF, AF, SF := 0; }
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VUCOMISD int _mm_comi_round_sd(__m128d a, __m128d b, int imm, int sae);
+
+
UCOMISD int _mm_ucomieq_sd(__m128d a, __m128d b)
+
+
UCOMISD int _mm_ucomilt_sd(__m128d a, __m128d b)
+
+
UCOMISD int _mm_ucomile_sd(__m128d a, __m128d b)
+
+
UCOMISD int _mm_ucomigt_sd(__m128d a, __m128d b)
+
+
UCOMISD int _mm_ucomige_sd(__m128d a, __m128d b)
+
+
UCOMISD int _mm_ucomineq_sd(__m128d a, __m128d b)
+
+
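
A minimal usage sketch of the comparison intrinsics (not part of the manual text; SSE2 assumed):

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128d a = _mm_set_sd(1.0);
    __m128d b = _mm_set_sd(2.0);

    /* Each intrinsic compares only the low double-precision elements
     * and returns 0 or 1 derived from the EFLAGS result of UCOMISD. */
    printf("lt: %d\n", _mm_ucomilt_sd(a, b));  /* 1 */
    printf("ge: %d\n", _mm_ucomige_sd(a, b));  /* 0 */
    return 0;
}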

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (if SNaN operands), Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.vvvv != 1111B.
+

EVEX-encoded instructions, see Table 2-48, “Type E3NF Class Exception Conditions.”

diff --git a/x86/ucomiss.html b/x86/ucomiss.html new file mode 100644 index 0000000..32b1973 --- /dev/null +++ b/x86/ucomiss.html @@ -0,0 +1,113 @@ + +UCOMISS + — Unordered Compare Scalar Single Precision Floating-Point Values and Set EFLAGS

UCOMISS + — Unordered Compare Scalar Single Precision Floating-Point Values and Set EFLAGS

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 2E /r UCOMISS xmm1, xmm2/m32AV/VSSECompare low single precision floating-point values in xmm1 and xmm2/mem32 and set the EFLAGS flags accordingly.
VEX.LIG.0F.WIG 2E /r VUCOMISS xmm1, xmm2/m32AV/VAVXCompare low single precision floating-point values in xmm1 and xmm2/mem32 and set the EFLAGS flags accordingly.
EVEX.LLIG.0F.W0 2E /r VUCOMISS xmm1, xmm2/m32{sae}BV/VAVX512FCompare low single precision floating-point values in xmm1 and xmm2/mem32 and set the EFLAGS flags accordingly.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r)ModRM:r/m (r)N/AN/A
BTuple1 ScalarModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Performs an unordered compare of the single precision floating-point values in the low doublewords of operand 1 (first operand) and operand 2 (second operand), and sets the ZF, PF, and CF flags in the EFLAGS register according to the result (unordered, greater than, less than, or equal). The OF, SF, and AF flags in the EFLAGS register are set to 0. The unordered result is returned if either source operand is a NaN (QNaN or SNaN).

+

Operand 1 is an XMM register; operand 2 can be an XMM register or a 32 bit memory location.

+

The UCOMISS instruction differs from the COMISS instruction in that it signals a SIMD floating-point invalid operation exception (#I) only if a source operand is an SNaN. The COMISS instruction signals an invalid operation exception when a source operand is either a QNaN or SNaN.

+

The EFLAGS register is not updated if an unmasked SIMD floating-point exception is generated.

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b, otherwise instructions will #UD.

+

Software should ensure VUCOMISS is encoded with VEX.L=0. Encoding VUCOMISS with VEX.L=1 may encounter unpredictable behavior across different processor generations.

+

Operation + ¶ +

+

(V)UCOMISS (All Versions) + ¶ +

+
RESULT := UnorderedCompare(DEST[31:0] <> SRC[31:0]) {
+(* Set EFLAGS *) CASE (RESULT) OF
+    UNORDERED: ZF,PF,CF := 111;
+    GREATER_THAN: ZF,PF,CF := 000;
+    LESS_THAN: ZF,PF,CF := 001;
+    EQUAL: ZF,PF,CF := 100;
+ESAC;
+OF, AF, SF := 0; }
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VUCOMISS int _mm_comi_round_ss(__m128 a, __m128 b, int imm, int sae);
+
+
UCOMISS int _mm_ucomieq_ss(__m128 a, __m128 b);
+
+
UCOMISS int _mm_ucomilt_ss(__m128 a, __m128 b);
+
+
UCOMISS int _mm_ucomile_ss(__m128 a, __m128 b);
+
+
UCOMISS int _mm_ucomigt_ss(__m128 a, __m128 b);
+
+
UCOMISS int _mm_ucomige_ss(__m128 a, __m128 b);
+
+
UCOMISS int _mm_ucomineq_ss(__m128 a, __m128 b);
+
+
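
A minimal usage sketch of the comparison intrinsics (not part of the manual text; SSE assumed):

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128 a = _mm_set_ss(3.0f);
    __m128 b = _mm_set_ss(2.0f);

    /* Each intrinsic compares only the low single-precision elements
     * and returns 0 or 1 derived from the EFLAGS result of UCOMISS. */
    printf("gt: %d\n", _mm_ucomigt_ss(a, b));  /* 1 */
    printf("le: %d\n", _mm_ucomile_ss(a, b));  /* 0 */
    return 0;
}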

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (if SNaN Operands), Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions,” additionally:

+ + + +
#UDIf VEX.vvvv != 1111B.
+

EVEX-encoded instructions, see Table 2-48, “Type E3NF Class Exception Conditions.”

diff --git a/x86/ud.html b/x86/ud.html new file mode 100644 index 0000000..05df17f --- /dev/null +++ b/x86/ud.html @@ -0,0 +1,82 @@ + +UD + — Undefined Instruction

UD + — Undefined Instruction

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F FF /rUD01 r32, r/m32RMValidValidRaise invalid opcode exception.
0F B9 /rUD1 r32, r/m32RMValidValidRaise invalid opcode exception.
0F 0BUD2ZOValidValidRaise invalid opcode exception.
+
+

1. Some processors decode the UD0 instruction without a ModR/M byte. As a result, those processors would deliver an invalid-opcode exception instead of a fault on instruction fetch when the instruction with a ModR/M byte (and any implied bytes) would cross a page or segment boundary.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
RMModRM:reg (r)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Generates an invalid opcode exception. This instruction is provided for software testing to explicitly generate an invalid opcode exception. The opcodes for this instruction are reserved for this purpose.

+

Other than raising the invalid opcode exception, this instruction has no effect on processor state or memory.

+

Even though it is the execution of the UD instruction that causes the invalid opcode exception, the instruction pointer saved by delivery of the exception references the UD instruction (and not the following instruction).

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
#UD (* Generates invalid opcode exception *);
+
+
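
For illustration only (not part of the manual text; GCC/Clang inline-assembly syntax assumed), UD2 can be used from C to force a #UD at a known point. Compilers commonly emit UD2 themselves for __builtin_trap() and unreachable code paths.

#include <stdio.h>

int main(void)
{
    puts("about to execute UD2");
    __asm__ volatile ("ud2");   /* raises #UD; typically reported to the process as SIGILL */
    return 0;                   /* never reached */
}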

Flags Affected + ¶ +

+

None.

+

Exceptions (All Operating Modes) + ¶ +

+

#UD Raises an invalid opcode exception in all operating modes.

diff --git a/x86/uiret.html b/x86/uiret.html new file mode 100644 index 0000000..035a0a2 --- /dev/null +++ b/x86/uiret.html @@ -0,0 +1,130 @@ + +UIRET + — User-Interrupt Return

UIRET + — User-Interrupt Return

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 01 EC UIRETZOV/IUINTRReturn from handling a user interrupt.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/AN/A
+

Description + ¶ +

+

UIRET returns from the handling of a user interrupt. It can be executed regardless of CPL.

+

Execution of UIRET inside a transactional region causes a transactional abort; the abort loads EAX as it would had the abort been due to an execution of IRET.

+

UIRET can be tracked by Architectural Last Branch Records (LBRs), Intel Processor Trace (Intel PT), and Performance Monitoring. For both Intel PT and LBRs, UIRET is recorded in precisely the same manner as IRET. Hence for LBRs, UIRETs fall into the OTHER_BRANCH category, which implies that IA32_LBR_CTL.OTHER_BRANCH[bit 22] must be set to record user-interrupt delivery, and that the IA32_LBR_x_INFO.BR_TYPE field will indicate OTHER_BRANCH for any recorded user interrupt. For Intel PT, control flow tracing must be enabled by setting IA32_RTIT_CTL.BranchEn[bit 13].

+

UIRET will also increment performance counters for which counting BR_INST_RETIRED.FAR_BRANCH is enabled.

+

Operation + ¶ +

+
Pop tempRIP;
+Pop tempRFLAGS; // see below for how this is used to load RFLAGS
+Pop tempRSP;
+IF tempRIP is not canonical in current paging mode
+    THEN #GP(0);
+FI;
+IF ShadowStackEnabled(CPL)
+    THEN
+        PopShadowStack SSRIP;
+        IF SSRIP ≠ tempRIP
+            THEN #CP (FAR-RET/IRET);
+        FI;
+FI;
+RIP := tempRIP;
+// update in RFLAGS only CF, PF, AF, ZF, SF, TF, DF, OF, NT, RF, AC, and ID
+RFLAGS := (RFLAGS & ~254DD5H) | (tempRFLAGS & 254DD5H);
+RSP := tempRSP;
+UIF := 1;
+Clear any cache-line monitoring established by MONITOR or UMONITOR;
+
+
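
For illustration only (not part of the manual text; helper name is illustrative), the masked RFLAGS merge above can be written in C as follows; the constant 254DD5H selects exactly the CF, PF, AF, ZF, SF, TF, DF, OF, NT, RF, AC, and ID bits:

#include <stdint.h>

#define UIRET_RFLAGS_MASK 0x254DD5ull  /* CF|PF|AF|ZF|SF|TF|DF|OF|NT|RF|AC|ID */

/* Only the masked bits are taken from the value popped off the stack;
 * every other RFLAGS bit is preserved. */
static uint64_t uiret_merge_rflags(uint64_t current, uint64_t popped)
{
    return (current & ~UIRET_RFLAGS_MASK) | (popped & UIRET_RFLAGS_MASK);
}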

Flags Affected + ¶ +

+

See the Operation section.

+

Protected Mode Exceptions + ¶ +

+ + + +
#UDThe UIRET instruction is not recognized in protected mode.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe UIRET instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe UIRET instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UDThe UIRET instruction is not recognized in compatibility mode.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the return instruction pointer is non-canonical.
#SS(0)If an attempt to pop a value off the stack causes a non-canonical address to be referenced.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#CPIf return instruction pointer from stack and shadow stack do not match.
#UDIf the LOCK prefix is used.
If executed inside an enclave.
If CR4.UINTR = 0.
If CPUID.07H.0H:EDX.UINTR[bit 5] = 0.
diff --git a/x86/umonitor.html b/x86/umonitor.html new file mode 100644 index 0000000..f1fdc0d --- /dev/null +++ b/x86/umonitor.html @@ -0,0 +1,127 @@ + +UMONITOR + — User Level Set Up Monitor Address

UMONITOR + — User Level Set Up Monitor Address

+ + + + + + + + + + + + + +
Opcode / InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F AE /6 UMONITOR r16/r32/r64AV/VWAITPKGSets up a linear address range to be monitored by hardware and activates the monitor. The address range should be a write-back memory caching type. The address is contained in r16/r32/r64.
+

Instruction Operand Encoding1 + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:r/m (r)N/AN/AN/A
+

Description + ¶ +

+

The UMONITOR instruction arms address monitoring hardware using an address specified in the source register (the address range that the monitoring hardware checks for store operations can be determined by using the CPUID monitor leaf function, EAX=05H). A store to an address within the specified address range triggers the monitoring hardware. The state of monitor hardware is used by UMWAIT.

+

The content of the source register is an effective address. By default, the DS segment is used to create a linear address that is monitored. Segment overrides can be used. The address range must use memory of the write-back type. Only write-back memory is guaranteed to correctly trigger the monitoring hardware. Additional information on determining what address range to use in order to prevent false wake-ups is described in Chapter 9, “Multiple-Processor Management,” of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A.

+
+

1. The Mod field of the ModR/M byte must have value 11B.

+

The UMONITOR instruction is ordered as a load operation with respect to other memory transactions. The instruction is subject to the permission checking and faults associated with a byte load. Like a load, UMONITOR sets the A-bit but not the D-bit in page tables.

+

UMONITOR and UMWAIT are available when CPUID.7.0:ECX.WAITPKG[bit 5] is enumerated as 1. UMONITOR and UMWAIT may be executed at any privilege level. Except for the width of the source register, the instruction’s operation is the same in non-64-bit modes and in 64-bit mode.

+

UMONITOR does not interoperate with the legacy MWAIT instruction. If UMONITOR was executed prior to executing MWAIT and following the most recent execution of the legacy MONITOR instruction, MWAIT will not enter an optimized state. Execution will continue to the instruction following MWAIT.

+

The UMONITOR instruction causes a transactional abort when used inside a transactional region.

+

The width of the source register (16-bit, 32-bit, or 64-bit) is determined by the effective addressing width, which is affected in the standard way by the machine mode settings and the 67H address-size prefix.

+

Operation + ¶ +

+
UMONITOR sets up an address range for the monitor hardware using the content of source register as an effective address and puts the monitor hardware in armed state. A store to the specified address range will trigger the monitor hardware.
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
UMONITOR void _umonitor(void *address);
+
+
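
A minimal sketch (not part of the manual text; assumes GCC/Clang WAITPKG intrinsics, e.g., -mwaitpkg; the helper name arm_monitor is illustrative):

#include <immintrin.h>

/* Arm the monitoring hardware on the cache line containing *flag.
 * A subsequent _umwait on the same logical processor can then wait
 * for a store to that line (see UMWAIT). */
static void arm_monitor(volatile int *flag)
{
    _umonitor((void *)flag);
}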

Numeric Exceptions + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + +
#GP(0)If the specified segment is not SS and the source register is outside the specified segment limit.
If the specified segment register contains a NULL segment selector.
#SS(0)If the specified segment is SS and the source register is outside the SS segment limit.
#PF(fault-code)For a page fault.
#UDIf CPUID.7.0:ECX.WAITPKG[bit 5]=0.
+

Real Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf the specified segment is not SS and the source register is outside of the effective address space from 0 to FFFFH.
#SSIf the specified segment is SS and the source register is outside of the effective address space from 0 to FFFFH.
#UDIf CPUID.7.0:ECX.WAITPKG[bit 5]=0.
+

Virtual 8086 Mode Exceptions + ¶ +

+

Same exceptions as in real address mode; additionally:

+ + + +
#PF(fault-code)For a page fault.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If the specified segment is not SS and the linear address is in non-canonical form.
#SS(0)If the specified segment is SS and the source register is in non-canonical form.
#PF(fault-code)For a page fault.
#UDIf CPUID.7.0:ECX.WAITPKG[bit 5]=0.
diff --git a/x86/umwait.html b/x86/umwait.html new file mode 100644 index 0000000..8bfade8 --- /dev/null +++ b/x86/umwait.html @@ -0,0 +1,125 @@ + +UMWAIT + — User Level Monitor Wait

UMWAIT + — User Level Monitor Wait

+ + + + + + + + + + + + + +
Opcode / InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
F2 0F AE /6 UMWAIT r32, <edx>, <eax>AV/VWAITPKGA hint that allows the processor to stop instruction execution and enter an implementation-dependent optimized state until occurrence of a class of events.
+

Instruction Operand Encoding1 + ¶ +

+
+

1. The Mod field of the ModR/M byte must have value 11B.

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:r/m (r)N/AN/AN/A
+

Description + ¶ +

+

UMWAIT instructs the processor to enter an implementation-dependent optimized state while monitoring a range of addresses. The optimized state may be either a light-weight power/performance optimized state or an improved power/performance optimized state. The selection between the two states is governed by the explicit input register bit[0] source operand.

+

UMWAIT is available when CPUID.7.0:ECX.WAITPKG[bit 5] is enumerated as 1. UMWAIT may be executed at any privilege level. This instruction’s operation is the same in non-64-bit modes and in 64-bit mode.

+

The input register contains information such as the preferred optimized state the processor should enter as described in the following table. Bits other than bit 0 are reserved and will result in #GP if nonzero.

+
+ + + + + + + + + + + + + + + + + + + + + + + + +
Bit ValueState NameWakeup TimePower SavingsOther Benefits
bit[0] = 0C0.2SlowerLargerImproves performance of the other SMT thread(s) on the same core.
bit[0] = 1C0.1FasterSmallerN/A
bits[31:1]N/AN/AN/AReserved
+
Table 4-21. UMWAIT Input Register Bit Definitions
+

The instruction wakes up when the time-stamp counter reaches or exceeds the implicit EDX:EAX 64-bit input value (if the monitoring hardware did not trigger beforehand).

+

Prior to executing the UMWAIT instruction, an operating system may specify the maximum delay it allows the processor to suspend its operation. It can do so by writing a TSC-quanta value to the following 32-bit MSR (IA32_UMWAIT_CONTROL at MSR index E1H):

+
    +
  • IA32_UMWAIT_CONTROL[31:2] — Determines the maximum time in TSC-quanta that the processor can reside in either C0.1 or C0.2. A zero value indicates no maximum time. The maximum time value is a 32-bit value where the upper 30 bits come from this field and the lower two bits are zero.
  • +
  • IA32_UMWAIT_CONTROL[1] — Reserved.
  • +
  • IA32_UMWAIT_CONTROL[0] — C0.2 is not allowed by the OS. Value of “1” means all C0.2 requests revert to C0.1.
+

If the processor that executed a UMWAIT instruction wakes due to the expiration of the operating system time limit, the instruction sets RFLAGS.CF; otherwise, that flag is cleared.

+

The UMWAIT instruction causes a transactional abort when used inside a transactional region.

+

The UMWAIT instruction operates with the UMONITOR instruction. The two instructions allow the definition of an address at which to wait (UMONITOR) and an implementation-dependent optimized operation to perform while waiting (UMWAIT). The execution of UMWAIT is a hint to the processor that it can enter an implementation-dependent-optimized state while waiting for an event or a store operation to the address range armed by UMONITOR. The UMWAIT instruction will not wait (will not enter an implementation-dependent optimized state) if any of the following instructions were executed before UMWAIT and after the most recent execution of UMONITOR: IRET, MONITOR, SYSEXIT, SYSRET, and far RET (the last if it is changing CPL).

+

The following additional events cause the processor to exit the implementation-dependent optimized state: a store to the address range armed by the UMONITOR instruction, an NMI or SMI, a debug exception, a machine check exception, the BINIT# signal, the INIT# signal, and the RESET# signal. Other implementation-dependent events may also cause the processor to exit the implementation-dependent optimized state.

+

In addition, an external interrupt causes the processor to exit the implementation-dependent optimized state regardless of whether maskable-interrupts are inhibited (EFLAGS.IF =0).

+

Following exit from the implementation-dependent-optimized state, control passes to the instruction after the UMWAIT instruction. A pending interrupt that is not masked (including an NMI or an SMI) may be delivered before execution of that instruction.

+

Unlike the HLT instruction, the UMWAIT instruction does not restart at the UMWAIT instruction following the handling of an SMI.

+

If the preceding UMONITOR instruction did not successfully arm an address range or if UMONITOR was not executed prior to executing UMWAIT and following the most recent execution of the legacy MONITOR instruction (UMWAIT does not interoperate with MONITOR), then the processor will not enter an optimized state. Execution will continue to the instruction following UMWAIT.

+

A store to the address range armed by the UMONITOR instruction will cause the processor to exit UMWAIT if either the store was originated by other processor agents or the store was originated by a non-processor agent.

+

Operation + ¶ +

+
os_deadline := TSC+(IA32_UMWAIT_CONTROL[31:2]<<2)
+instr_deadline := UINT64(EDX:EAX)
+IF os_deadline < instr_deadline:
+    deadline := os_deadline
+    using_os_deadline := 1
+ELSE:
+    deadline := instr_deadline
+    using_os_deadline := 0
+WHILE monitor hardware armed AND TSC < deadline:
+    implementation_dependent_optimized_state(Source register, deadline, IA32_UMWAIT_CONTROL[0] )
+IF using_os_deadline AND TSC ≥ deadline:
+    RFLAGS.CF := 1
+ELSE:
+    RFLAGS.CF := 0
+RFLAGS.AF,PF,SF,ZF,OF := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
UMWAIT uint8_t _umwait(uint32_t control, uint64_t counter);
+
+
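
A combined UMONITOR/UMWAIT wait loop might look like the following sketch (not part of the manual text; assumes GCC/Clang WAITPKG intrinsics with -mwaitpkg, and that the timeout is short enough for TSC wrap-around to be ignored; the helper name wait_for_flag is illustrative):

#include <immintrin.h>
#include <x86intrin.h>   /* __rdtsc() on GCC/Clang */

/* Wait until *flag becomes nonzero or roughly timeout_cycles TSC ticks pass. */
static void wait_for_flag(volatile int *flag, unsigned long long timeout_cycles)
{
    unsigned long long deadline = __rdtsc() + timeout_cycles;
    while (!*flag && __rdtsc() < deadline) {
        _umonitor((void *)flag);   /* arm the address range                       */
        if (*flag)                 /* re-check to close the race with the store   */
            break;
        _umwait(0, deadline);      /* bit 0 = 0 requests C0.2; wakes early on a
                                      store, interrupt, or the OS time limit      */
    }
}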

Numeric Exceptions + ¶ +

+

None.

+

Exceptions (All Operating Modes) + ¶ +

+

#GP(0) If src[31:1] != 0.

+

If CR4.TSD = 1 and CPL != 0.

+

#UD If CPUID.7.0:ECX.WAITPKG[bit 5]=0.

diff --git a/x86/unpckhpd.html b/x86/unpckhpd.html new file mode 100644 index 0000000..07c0944 --- /dev/null +++ b/x86/unpckhpd.html @@ -0,0 +1,225 @@ + +UNPCKHPD + — Unpack and Interleave High Packed Double Precision Floating-Point Values

UNPCKHPD + — Unpack and Interleave High Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 15 /r UNPCKHPD xmm1, xmm2/m128AV/VSSE2Unpacks and Interleaves double precision floating-point values from high quadwords of xmm1 and xmm2/m128.
VEX.128.66.0F.WIG 15 /r VUNPCKHPD xmm1,xmm2, xmm3/m128BV/VAVXUnpacks and Interleaves double precision floating-point values from high quadwords of xmm2 and xmm3/m128.
VEX.256.66.0F.WIG 15 /r VUNPCKHPD ymm1,ymm2, ymm3/m256BV/VAVXUnpacks and Interleaves double precision floating-point values from high quadwords of ymm2 and ymm3/m256.
EVEX.128.66.0F.W1 15 /r VUNPCKHPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512FUnpacks and Interleaves double precision floating-point values from high quadwords of xmm2 and xmm3/m128/m64bcst subject to writemask k1.
EVEX.256.66.0F.W1 15 /r VUNPCKHPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FUnpacks and Interleaves double precision floating-point values from high quadwords of ymm2 and ymm3/m256/m64bcst subject to writemask k1.
EVEX.512.66.0F.W1 15 /r VUNPCKHPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstCV/VAVX512FUnpacks and Interleaves double precision floating-point values from high quadwords of zmm2 and zmm3/m512/m64bcst subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs an interleaved unpack of the high double precision floating-point values from the first source operand and the second source operand. See Figure 4-15 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 2B.

+

128-bit Legacy SSE version: The second source can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified. When unpacking from a memory operand, an implementation may fetch only the appropriate 64 bits; however, alignment to 16-byte boundary and normal segment checking will still be enforced.

+

VEX.128 encoded version: The first source operand is a XMM register. The second source operand can be a XMM register or a 128-bit memory location. The destination operand is a XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+

EVEX.512 encoded version: The first source operand is a ZMM register. The second source operand is a ZMM register, a 512-bit memory location, or a 512-bit vector broadcasted from a 64-bit memory location. The destination operand is a ZMM register, conditionally updated using writemask k1.

+

EVEX.256 encoded version: The first source operand is a YMM register. The second source operand is a YMM register, a 256-bit memory location, or a 256-bit vector broadcasted from a 64-bit memory location. The destination operand is a YMM register, conditionally updated using writemask k1.

+

EVEX.128 encoded version: The first source operand is a XMM register. The second source operand is a XMM register, a 128-bit memory location, or a 128-bit vector broadcasted from a 64-bit memory location. The destination operand is a XMM register, conditionally updated using writemask k1.

+

Operation + ¶ +

+

VUNPCKHPD (EVEX Encoded Versions When SRC2 is a Register) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF VL >= 128
+    TMP_DEST[63:0] := SRC1[127:64]
+    TMP_DEST[127:64] := SRC2[127:64]
+FI;
+IF VL >= 256
+    TMP_DEST[191:128] := SRC1[255:192]
+    TMP_DEST[255:192] := SRC2[255:192]
+FI;
+IF VL >= 512
+    TMP_DEST[319:256] := SRC1[383:320]
+    TMP_DEST[383:320] := SRC2[383:320]
+    TMP_DEST[447:384] := SRC1[511:448]
+    TMP_DEST[511:448] := SRC2[511:448]
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VUNPCKHPD (EVEX Encoded Version When SRC2 is Memory) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF (EVEX.b = 1)
+        THEN TMP_SRC2[i+63:i] := SRC2[63:0]
+        ELSE TMP_SRC2[i+63:i] := SRC2[i+63:i]
+    FI;
+ENDFOR;
+IF VL >= 128
+    TMP_DEST[63:0] := SRC1[127:64]
+    TMP_DEST[127:64] := TMP_SRC2[127:64]
+FI;
+IF VL >= 256
+    TMP_DEST[191:128] := SRC1[255:192]
+    TMP_DEST[255:192] := TMP_SRC2[255:192]
+FI;
+IF VL >= 512
+    TMP_DEST[319:256] := SRC1[383:320]
+    TMP_DEST[383:320] := TMP_SRC2[383:320]
+    TMP_DEST[447:384] := SRC1[511:448]
+    TMP_DEST[511:448] := TMP_SRC2[511:448]
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VUNPCKHPD (VEX.256 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[127:64]
+DEST[127:64] := SRC2[127:64]
+DEST[191:128] := SRC1[255:192]
+DEST[255:192] := SRC2[255:192]
+DEST[MAXVL-1:256] := 0
+
+

VUNPCKHPD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[127:64]
+DEST[127:64] := SRC2[127:64]
+DEST[MAXVL-1:128] := 0
+
+

UNPCKHPD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := SRC1[127:64]
+DEST[127:64] := SRC2[127:64]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VUNPCKHPD __m512d _mm512_unpackhi_pd( __m512d a, __m512d b);
+
+
VUNPCKHPD __m512d _mm512_mask_unpackhi_pd(__m512d s, __mmask8 k, __m512d a, __m512d b);
+
+
VUNPCKHPD __m512d _mm512_maskz_unpackhi_pd(__mmask8 k, __m512d a, __m512d b);
+
+
VUNPCKHPD __m256d _mm256_unpackhi_pd(__m256d a, __m256d b)
+
+
VUNPCKHPD __m256d _mm256_mask_unpackhi_pd(__m256d s, __mmask8 k, __m256d a, __m256d b);
+
+
VUNPCKHPD __m256d _mm256_maskz_unpackhi_pd(__mmask8 k, __m256d a, __m256d b);
+
+
UNPCKHPD __m128d _mm_unpackhi_pd(__m128d a, __m128d b)
+
+
VUNPCKHPD __m128d _mm_mask_unpackhi_pd(__m128d s, __mmask8 k, __m128d a, __m128d b);
+
+
VUNPCKHPD __m128d _mm_maskz_unpackhi_pd(__mmask8 k, __m128d a, __m128d b);
+
+
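
A worked example of the 128-bit form (not part of the manual text; SSE2 assumed; function name is illustrative):

#include <immintrin.h>

/* High-quadword interleave: a = {a0, a1}, b = {b0, b1} -> {a1, b1}.
 * Note that _mm_set_pd lists the high element first. */
__m128d hi_pair(void)
{
    __m128d a = _mm_set_pd(11.0, 10.0);  /* a = {10.0, 11.0} */
    __m128d b = _mm_set_pd(21.0, 20.0);  /* b = {20.0, 21.0} */
    return _mm_unpackhi_pd(a, b);        /* result = {11.0, 21.0} */
}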

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instructions, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/unpckhps.html b/x86/unpckhps.html new file mode 100644 index 0000000..c7c92ab --- /dev/null +++ b/x86/unpckhps.html @@ -0,0 +1,418 @@ + +UNPCKHPS + — Unpack and Interleave High Packed Single Precision Floating-Point Values

UNPCKHPS + — Unpack and Interleave High Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 15 /r UNPCKHPS xmm1, xmm2/m128AV/VSSEUnpacks and Interleaves single precision floating-point values from high quadwords of xmm1 and xmm2/m128.
VEX.128.0F.WIG 15 /r VUNPCKHPS xmm1, xmm2, xmm3/m128BV/VAVXUnpacks and Interleaves single precision floating-point values from high quadwords of xmm2 and xmm3/m128.
VEX.256.0F.WIG 15 /r VUNPCKHPS ymm1, ymm2, ymm3/m256BV/VAVXUnpacks and Interleaves single precision floating-point values from high quadwords of ymm2 and ymm3/m256.
EVEX.128.0F.W0 15 /r VUNPCKHPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512FUnpacks and Interleaves single precision floating-point values from high quadwords of xmm2 and xmm3/m128/m32bcst and write result to xmm1 subject to writemask k1.
EVEX.256.0F.W0 15 /r VUNPCKHPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512FUnpacks and Interleaves single precision floating-point values from high quadwords of ymm2 and ymm3/m256/m32bcst and write result to ymm1 subject to writemask k1.
EVEX.512.0F.W0 15 /r VUNPCKHPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstCV/VAVX512FUnpacks and Interleaves single precision floating-point values from high quadwords of zmm2 and zmm3/m512/m32bcst and write result to zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs an interleaved unpack of the high single precision floating-point values from the first source operand and the second source operand.

+

128-bit Legacy SSE version: The second source can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified. When unpacking from a memory operand, an implementation may fetch only the appropriate 64 bits; however, alignment to 16-byte boundary and normal segment checking will still be enforced.

+

VEX.128 encoded version: The first source operand is a XMM register. The second source operand can be a XMM register or a 128-bit memory location. The destination operand is a XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

VEX.256 encoded version: The second source operand is a YMM register or a 256-bit memory location. The first source operand and destination operands are YMM registers.

+
[Figure: SRC1 = X7 X6 X5 X4 X3 X2 X1 X0, SRC2 = Y7 Y6 Y5 Y4 Y3 Y2 Y1 Y0, DEST = Y7 X7 Y6 X6 Y3 X3 Y2 X2]
Figure 4-27. VUNPCKHPS Operation
+

EVEX.512 encoded version: The first source operand is a ZMM register. The second source operand is a ZMM register, a 512-bit memory location, or a 512-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM register, conditionally updated using writemask k1.

+

EVEX.256 encoded version: The first source operand is a YMM register. The second source operand is a YMM register, a 256-bit memory location, or a 256-bit vector broadcasted from a 32-bit memory location. The destination operand is a YMM register, conditionally updated using writemask k1.

+

EVEX.128 encoded version: The first source operand is a XMM register. The second source operand is a XMM register, a 128-bit memory location, or a 128-bit vector broadcasted from a 32-bit memory location. The destination operand is a XMM register, conditionally updated using writemask k1.

+

Operation + ¶ +

+

VUNPCKHPS (EVEX Encoded Version When SRC2 is a Register) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF VL >= 128
+    TMP_DEST[31:0] := SRC1[95:64]
+    TMP_DEST[63:32] := SRC2[95:64]
+    TMP_DEST[95:64] := SRC1[127:96]
+    TMP_DEST[127:96] := SRC2[127:96]
+FI;
+IF VL >= 256
+    TMP_DEST[159:128] := SRC1[223:192]
+    TMP_DEST[191:160] := SRC2[223:192]
+    TMP_DEST[223:192] := SRC1[255:224]
+    TMP_DEST[255:224] := SRC2[255:224]
+FI;
+IF VL >= 512
+    TMP_DEST[287:256] := SRC1[351:320]
+    TMP_DEST[319:288] := SRC2[351:320]
+    TMP_DEST[351:320] := SRC1[383:352]
+    TMP_DEST[383:352] := SRC2[383:352]
+    TMP_DEST[415:384] := SRC1[479:448]
+    TMP_DEST[447:416] := SRC2[479:448]
+    TMP_DEST[479:448] := SRC1[511:480]
+    TMP_DEST[511:480] := SRC2[511:480]
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VUNPCKHPS (EVEX Encoded Version When SRC2 is Memory) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF (EVEX.b = 1)
+        THEN TMP_SRC2[i+31:i] := SRC2[31:0]
+        ELSE TMP_SRC2[i+31:i] := SRC2[i+31:i]
+    FI;
+ENDFOR;
+IF VL >= 128
+    TMP_DEST[31:0] := SRC1[95:64]
+    TMP_DEST[63:32] := TMP_SRC2[95:64]
+    TMP_DEST[95:64] := SRC1[127:96]
+    TMP_DEST[127:96] := TMP_SRC2[127:96]
+FI;
+IF VL >= 256
+    TMP_DEST[159:128] := SRC1[223:192]
+    TMP_DEST[191:160] := TMP_SRC2[223:192]
+    TMP_DEST[223:192] := SRC1[255:224]
+    TMP_DEST[255:224] := TMP_SRC2[255:224]
+FI;
+IF VL >= 512
+    TMP_DEST[287:256] := SRC1[351:320]
+    TMP_DEST[319:288] := TMP_SRC2[351:320]
+    TMP_DEST[351:320] := SRC1[383:352]
+    TMP_DEST[383:352] := TMP_SRC2[383:352]
+    TMP_DEST[415:384] := SRC1[479:448]
+    TMP_DEST[447:416] := TMP_SRC2[479:448]
+    TMP_DEST[479:448] := SRC1[511:480]
+    TMP_DEST[511:480] := TMP_SRC2[511:480]
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking* ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI;
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VUNPCKHPS (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[95:64]
+DEST[63:32] := SRC2[95:64]
+DEST[95:64] := SRC1[127:96]
+DEST[127:96] := SRC2[127:96]
+DEST[159:128] := SRC1[223:192]
+DEST[191:160] := SRC2[223:192]
+DEST[223:192] := SRC1[255:224]
+DEST[255:224] := SRC2[255:224]
+DEST[MAXVL-1:256] := 0
+
+

VUNPCKHPS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[95:64]
+DEST[63:32] := SRC2[95:64]
+DEST[95:64] := SRC1[127:96]
+DEST[127:96] := SRC2[127:96]
+DEST[MAXVL-1:128] := 0
+
+

UNPCKHPS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := SRC1[95:64]
+DEST[63:32] := SRC2[95:64]
+DEST[95:64] := SRC1[127:96]
+DEST[127:96] := SRC2[127:96]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VUNPCKHPS __m512 _mm512_unpackhi_ps( __m512 a, __m512 b);
+
+
VUNPCKHPS __m512 _mm512_mask_unpackhi_ps(__m512 s, __mmask16 k, __m512 a, __m512 b);
+
+
VUNPCKHPS __m512 _mm512_maskz_unpackhi_ps(__mmask16 k, __m512 a, __m512 b);
+
+
VUNPCKHPS __m256 _mm256_unpackhi_ps (__m256 a, __m256 b);
+
+
VUNPCKHPS __m256 _mm256_mask_unpackhi_ps(__m256 s, __mmask8 k, __m256 a, __m256 b);
+
+
VUNPCKHPS __m256 _mm256_maskz_unpackhi_ps(__mmask8 k, __m256 a, __m256 b);
+
+
UNPCKHPS __m128 _mm_unpackhi_ps (__m128 a, __m128 b);
+
+
VUNPCKHPS __m128 _mm_mask_unpackhi_ps(__m128 s, __mmask8 k, __m128 a, __m128 b);
+
+
VUNPCKHPS __m128 _mm_maskz_unpackhi_ps(__mmask8 k, __m128 a, __m128 b);
+
+
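
A worked example of the 128-bit form (not part of the manual text; SSE assumed; function name is illustrative):

#include <immintrin.h>

/* High-half interleave: a = {a0,a1,a2,a3}, b = {b0,b1,b2,b3} -> {a2, b2, a3, b3}. */
__m128 hi_quads(void)
{
    __m128 a = _mm_setr_ps(0.f, 1.f, 2.f, 3.f);
    __m128 b = _mm_setr_ps(10.f, 11.f, 12.f, 13.f);
    return _mm_unpackhi_ps(a, b);   /* result = {2, 12, 3, 13} */
}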

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instructions, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/unpcklpd.html b/x86/unpcklpd.html new file mode 100644 index 0000000..4d002c6 --- /dev/null +++ b/x86/unpcklpd.html @@ -0,0 +1,225 @@ + +UNPCKLPD + — Unpack and Interleave Low Packed Double Precision Floating-Point Values

UNPCKLPD + — Unpack and Interleave Low Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 14 /r UNPCKLPD xmm1, xmm2/m128AV/VSSE2Unpacks and Interleaves double precision floating-point values from low quadwords of xmm1 and xmm2/m128.
VEX.128.66.0F.WIG 14 /r VUNPCKLPD xmm1,xmm2, xmm3/m128BV/VAVXUnpacks and Interleaves double precision floating-point values from low quadwords of xmm2 and xmm3/m128.
VEX.256.66.0F.WIG 14 /r VUNPCKLPD ymm1,ymm2, ymm3/m256BV/VAVXUnpacks and Interleaves double precision floating-point values from low quadwords of ymm2 and ymm3/m256.
EVEX.128.66.0F.W1 14 /r VUNPCKLPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512FUnpacks and Interleaves double precision floating-point values from low quadwords of xmm2 and xmm3/m128/m64bcst subject to write mask k1.
EVEX.256.66.0F.W1 14 /r VUNPCKLPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FUnpacks and Interleaves double precision floating-point values from low quadwords of ymm2 and ymm3/m256/m64bcst subject to write mask k1.
EVEX.512.66.0F.W1 14 /r VUNPCKLPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstCV/VAVX512FUnpacks and Interleaves double precision floating-point values from low quadwords of zmm2 and zmm3/m512/m64bcst subject to write mask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs an interleaved unpack of the low double precision floating-point values from the first source operand and the second source operand.

+

128-bit Legacy SSE version: The second source can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified. When unpacking from a memory operand, an implementation may fetch only the appropriate 64 bits; however, alignment to 16-byte boundary and normal segment checking will still be enforced.

+

VEX.128 encoded version: The first source operand is a XMM register. The second source operand can be a XMM register or a 128-bit memory location. The destination operand is a XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+

EVEX.512 encoded version: The first source operand is a ZMM register. The second source operand is a ZMM register, a 512-bit memory location, or a 512-bit vector broadcasted from a 64-bit memory location. The destination operand is a ZMM register, conditionally updated using writemask k1.

+

EVEX.256 encoded version: The first source operand is a YMM register. The second source operand is a YMM register, a 256-bit memory location, or a 256-bit vector broadcasted from a 64-bit memory location. The destination operand is a YMM register, conditionally updated using writemask k1.

+

EVEX.128 encoded version: The first source operand is an XMM register. The second source operand is a XMM register, a 128-bit memory location, or a 128-bit vector broadcasted from a 64-bit memory location. The destination operand is a XMM register, conditionally updated using writemask k1.

+

Operation + ¶ +

+

VUNPCKLPD (EVEX Encoded Versions When SRC2 is a Register) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF VL >= 128
+    TMP_DEST[63:0] := SRC1[63:0]
+    TMP_DEST[127:64] := SRC2[63:0]
+FI;
+IF VL >= 256
+    TMP_DEST[191:128] := SRC1[191:128]
+    TMP_DEST[255:192] := SRC2[191:128]
+FI;
+IF VL >= 512
+    TMP_DEST[319:256] := SRC1[319:256]
+    TMP_DEST[383:320] := SRC2[319:256]
+    TMP_DEST[447:384] := SRC1[447:384]
+    TMP_DEST[511:448] := SRC2[447:384]
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VUNPCKLPD (EVEX Encoded Version When SRC2 is Memory) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF (EVEX.b = 1)
+        THEN TMP_SRC2[i+63:i] := SRC2[63:0]
+        ELSE TMP_SRC2[i+63:i] := SRC2[i+63:i]
+    FI;
+ENDFOR;
+IF VL >= 128
+    TMP_DEST[63:0] := SRC1[63:0]
+    TMP_DEST[127:64] := TMP_SRC2[63:0]
+FI;
+IF VL >= 256
+    TMP_DEST[191:128] := SRC1[191:128]
+    TMP_DEST[255:192] := TMP_SRC2[191:128]
+FI;
+IF VL >= 512
+    TMP_DEST[319:256] := SRC1[319:256]
+    TMP_DEST[383:320] := TMP_SRC2[319:256]
+    TMP_DEST[447:384] := SRC1[447:384]
+    TMP_DEST[511:448] := TMP_SRC2[447:384]
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VUNPCKLPD (VEX.256 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0]
+DEST[127:64] := SRC2[63:0]
+DEST[191:128] := SRC1[191:128]
+DEST[255:192] := SRC2[191:128]
+DEST[MAXVL-1:256] := 0
+
+

VUNPCKLPD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0]
+DEST[127:64] := SRC2[63:0]
+DEST[MAXVL-1:128] := 0
+
+

UNPCKLPD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := SRC1[63:0]
+DEST[127:64] := SRC2[63:0]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VUNPCKLPD __m512d _mm512_unpacklo_pd( __m512d a, __m512d b);
+
+
VUNPCKLPD __m512d _mm512_mask_unpacklo_pd(__m512d s, __mmask8 k, __m512d a, __m512d b);
+
+
VUNPCKLPD __m512d _mm512_maskz_unpacklo_pd(__mmask8 k, __m512d a, __m512d b);
+
+
VUNPCKLPD __m256d _mm256_unpacklo_pd(__m256d a, __m256d b)
+
+
VUNPCKLPD __m256d _mm256_mask_unpacklo_pd(__m256d s, __mmask8 k, __m256d a, __m256d b);
+
+
VUNPCKLPD __m256d _mm256_maskz_unpacklo_pd(__mmask8 k, __m256d a, __m256d b);
+
+
UNPCKLPD __m128d _mm_unpacklo_pd(__m128d a, __m128d b)
+
+
VUNPCKLPD __m128d _mm_mask_unpacklo_pd(__m128d s, __mmask8 k, __m128d a, __m128d b);
+
+
VUNPCKLPD __m128d _mm_maskz_unpacklo_pd(__mmask8 k, __m128d a, __m128d b);
+
+
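
A worked example of the 128-bit form (not part of the manual text; SSE2 assumed; function name is illustrative):

#include <immintrin.h>

/* Low-quadword interleave: a = {a0, a1}, b = {b0, b1} -> {a0, b0}. */
__m128d lo_pair(void)
{
    __m128d a = _mm_set_pd(11.0, 10.0);  /* a = {10.0, 11.0} */
    __m128d b = _mm_set_pd(21.0, 20.0);  /* b = {20.0, 21.0} */
    return _mm_unpacklo_pd(a, b);        /* result = {10.0, 20.0} */
}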

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instructions, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/unpcklps.html b/x86/unpcklps.html new file mode 100644 index 0000000..8016105 --- /dev/null +++ b/x86/unpcklps.html @@ -0,0 +1,417 @@ + +UNPCKLPS + — Unpack and Interleave Low Packed Single Precision Floating-Point Values

UNPCKLPS + — Unpack and Interleave Low Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 14 /r UNPCKLPS xmm1, xmm2/m128AV/VSSEUnpacks and Interleaves single precision floating-point values from low quadwords of xmm1 and xmm2/m128.
VEX.128.0F.WIG 14 /r VUNPCKLPS xmm1,xmm2, xmm3/m128BV/VAVXUnpacks and Interleaves single precision floating-point values from low quadwords of xmm2 and xmm3/m128.
VEX.256.0F.WIG 14 /r VUNPCKLPS ymm1,ymm2,ymm3/m256BV/VAVXUnpacks and Interleaves single precision floating-point values from low quadwords of ymm2 and ymm3/m256.
EVEX.128.0F.W0 14 /r VUNPCKLPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512FUnpacks and Interleaves single precision floating-point values from low quadwords of xmm2 and xmm3/mem and write result to xmm1 subject to write mask k1.
EVEX.256.0F.W0 14 /r VUNPCKLPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512FUnpacks and Interleaves single precision floating-point values from low quadwords of ymm2 and ymm3/mem and write result to ymm1 subject to write mask k1.
EVEX.512.0F.W0 14 /r VUNPCKLPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstCV/VAVX512FUnpacks and Interleaves single precision floating-point values from low quadwords of zmm2 and zmm3/m512/m32bcst and write result to zmm1 subject to write mask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs an interleaved unpack of the low single precision floating-point values from the first source operand and the second source operand.

+

128-bit Legacy SSE version: The second source can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified. When unpacking from a memory operand, an implementation may fetch only the appropriate 64 bits; however, alignment to 16-byte boundary and normal segment checking will still be enforced.

+

VEX.128 encoded version: The first source operand is a XMM register. The second source operand can be a XMM register or a 128-bit memory location. The destination operand is a XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+
[Figure: SRC1 = X7 X6 X5 X4 X3 X2 X1 X0, SRC2 = Y7 Y6 Y5 Y4 Y3 Y2 Y1 Y0, DEST = Y5 X5 Y4 X4 Y1 X1 Y0 X0]
Figure 4-28. VUNPCKLPS Operation
+

EVEX.512 encoded version: The first source operand is a ZMM register. The second source operand is a ZMM register, a 512-bit memory location, or a 512-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM register, conditionally updated using writemask k1.

+

EVEX.256 encoded version: The first source operand is a YMM register. The second source operand is a YMM register, a 256-bit memory location, or a 256-bit vector broadcasted from a 32-bit memory location. The destination operand is a YMM register, conditionally updated using writemask k1.

+

EVEX.128 encoded version: The first source operand is an XMM register. The second source operand is a XMM register, a 128-bit memory location, or a 128-bit vector broadcasted from a 32-bit memory location. The destination operand is a XMM register, conditionally updated using writemask k1.

+

Operation + ¶ +

+

VUNPCKLPS (EVEX Encoded Version When SRC2 is a ZMM Register) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF VL >= 128
+    TMP_DEST[31:0] := SRC1[31:0]
+    TMP_DEST[63:32] := SRC2[31:0]
+    TMP_DEST[95:64] := SRC1[63:32]
+    TMP_DEST[127:96] := SRC2[63:32]
+FI;
+IF VL >= 256
+    TMP_DEST[159:128] := SRC1[159:128]
+    TMP_DEST[191:160] := SRC2[159:128]
+    TMP_DEST[223:192] := SRC1[191:160]
+    TMP_DEST[255:224] := SRC2[191:160]
+FI;
+IF VL >= 512
+    TMP_DEST[287:256] := SRC1[287:256]
+    TMP_DEST[319:288] := SRC2[287:256]
+    TMP_DEST[351:320] := SRC1[319:288]
+    TMP_DEST[383:352] := SRC2[319:288]
+    TMP_DEST[415:384] := SRC1[415:384]
+    TMP_DEST[447:416] := SRC2[415:384]
+    TMP_DEST[479:448] := SRC1[447:416]
+    TMP_DEST[511:480] := SRC2[447:416]
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VUNPCKLPS (EVEX Encoded Version When SRC2 is Memory) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF (EVEX.b = 1)
+        THEN TMP_SRC2[i+31:i] := SRC2[31:0]
+        ELSE TMP_SRC2[i+31:i] := SRC2[i+31:i]
+    FI;
+ENDFOR;
+IF VL >= 128
+TMP_DEST[31:0] := SRC1[31:0]
+TMP_DEST[63:32] := TMP_SRC2[31:0]
+TMP_DEST[95:64] := SRC1[63:32]
+TMP_DEST[127:96] := TMP_SRC2[63:32]
+FI;
+IF VL >= 256
+    TMP_DEST[159:128] := SRC1[159:128]
+    TMP_DEST[191:160] := TMP_SRC2[159:128]
+    TMP_DEST[223:192] := SRC1[191:160]
+    TMP_DEST[255:224] := TMP_SRC2[191:160]
+FI;
+IF VL >= 512
+    TMP_DEST[287:256] := SRC1[287:256]
+    TMP_DEST[319:288] := TMP_SRC2[287:256]
+    TMP_DEST[351:320] := SRC1[319:288]
+    TMP_DEST[383:352] := TMP_SRC2[319:288]
+    TMP_DEST[415:384] := SRC1[415:384]
+    TMP_DEST[447:416] := TMP_SRC2[415:384]
+    TMP_DEST[479:448] := SRC1[447:416]
+    TMP_DEST[511:480] := TMP_SRC2[447:416]
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking* ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VUNPCKLPS (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0]
+DEST[63:32] := SRC2[31:0]
+DEST[95:64] := SRC1[63:32]
+DEST[127:96] := SRC2[63:32]
+DEST[159:128] := SRC1[159:128]
+DEST[191:160] := SRC2[159:128]
+DEST[223:192] := SRC1[191:160]
+DEST[255:224] := SRC2[191:160]
+DEST[MAXVL-1:256] := 0
+
+

VUNPCKLPS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0]
+DEST[63:32] := SRC2[31:0]
+DEST[95:64] := SRC1[63:32]
+DEST[127:96] := SRC2[63:32]
+DEST[MAXVL-1:128] := 0
+
+

UNPCKLPS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := SRC1[31:0]
+DEST[63:32] := SRC2[31:0]
+DEST[95:64] := SRC1[63:32]
+DEST[127:96] := SRC2[63:32]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VUNPCKLPS __m512 _mm512_unpacklo_ps(__m512 a, __m512 b);
+
+
VUNPCKLPS __m512 _mm512_mask_unpacklo_ps(__m512 s, __mmask16 k, __m512 a, __m512 b);
+
+
VUNPCKLPS __m512 _mm512_maskz_unpacklo_ps(__mmask16 k, __m512 a, __m512 b);
+
+
VUNPCKLPS __m256 _mm256_unpacklo_ps (__m256 a, __m256 b);
+
+
VUNPCKLPS __m256 _mm256_mask_unpacklo_ps(__m256 s, __mmask8 k, __m256 a, __m256 b);
+
+
VUNPCKLPS __m256 _mm256_maskz_unpacklo_ps(__mmask8 k, __m256 a, __m256 b);
+
+
UNPCKLPS __m128 _mm_unpacklo_ps (__m128 a, __m128 b);
+
+
VUNPCKLPS __m128 _mm_mask_unpacklo_ps(__m128 s, __mmask8 k, __m128 a, __m128 b);
+
+
VUNPCKLPS __m128 _mm_maskz_unpacklo_ps(__mmask8 k, __m128 a, __m128 b);
+
+
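
A worked example of the 128-bit form (not part of the manual text; SSE assumed; function name is illustrative):

#include <immintrin.h>

/* Low-half interleave: a = {a0,a1,a2,a3}, b = {b0,b1,b2,b3} -> {a0, b0, a1, b1}. */
__m128 lo_quads(void)
{
    __m128 a = _mm_setr_ps(0.f, 1.f, 2.f, 3.f);
    __m128 b = _mm_setr_ps(10.f, 11.f, 12.f, 13.f);
    return _mm_unpacklo_ps(a, b);   /* result = {0, 10, 1, 11} */
}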

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instructions, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/v4fmaddps.v4fnmaddps.html b/x86/v4fmaddps.v4fnmaddps.html new file mode 100644 index 0000000..83ade13 --- /dev/null +++ b/x86/v4fmaddps.v4fnmaddps.html @@ -0,0 +1,110 @@ + +V4FMADDPS/V4FNMADDPS + — Packed Single Precision Floating-Point Fused Multiply-Add(4-Iterations)

V4FMADDPS/V4FNMADDPS + — Packed Single Precision Floating-Point Fused Multiply-Add(4-Iterations)

+ + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.512.F2.0F38.W0 9A /r V4FMADDPS zmm1{k1}{z}, zmm2+3, m128AV/VAVX512_4FMAPSMultiply packed single-precision floating-point values from source register block indicated by zmm2 by values from m128 and accumulate the result in zmm1.
EVEX.512.F2.0F38.W0 AA /r V4FNMADDPS zmm1{k1}{z}, zmm2+3, m128AV/VAVX512_4FMAPSMultiply and negate packed single-precision floating-point values from source register block indicated by zmm2 by values from m128 and accumulate the result in zmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/En Tuple Operand 1 Operand 2 Operand 3 Operand 4
A Tuple1_4X ModRM:reg (r, w) EVEX.vvvv (r) ModRM:r/m (r) N/A
+

Description + ¶ +

+

This instruction computes 4 sequential packed fused single-precision floating-point multiply-add instructions with a sequentially selected memory operand in each of the four steps.

+

In the opcode box above, the “+3” notation denotes that the instruction accesses 4 source registers based on that operand; the sources are consecutive registers that start at a multiple-of-4 boundary and include the encoded register operand.

+

This instruction supports memory fault suppression. The entire memory operand is loaded if any of the 16 least significant mask bits is set to 1 or if a “no masking” encoding is used.

+

The tuple type Tuple1_4X implies that four 32-bit elements (16 bytes) are referenced by the memory operation portion of this instruction.

+

Rounding is performed at every FMA (fused multiply and add) boundary. Exceptions are also taken sequentially. Pre- and post-computational exceptions of the first FMA take priority over the pre- and post-computational exceptions of the second FMA, etc.

+

Operation + ¶ +

+
src_reg_id is the 5-bit index of the vector register specified in the instruction as the src1 register.
+define NFMA_PS(kl, vl, dest, k1, msrc, regs_loaded, src_base, posneg):
+    tmpdest := dest
+    // reg[] is an array representing the SIMD register file.
+    FOR j := 0 to regs_loaded-1:
+        FOR i := 0 to kl-1:
+            IF k1[i] or *no writemask*:
+                IF posneg = 0:
+                    tmpdest.single[i] := RoundFPControl_MXCSR(tmpdest.single[i] - reg[src_base + j ].single[i] * msrc.single[j])
+                ELSE:
+                    tmpdest.single[i] := RoundFPControl_MXCSR(tmpdest.single[i] + reg[src_base + j ].single[i] * msrc.single[j])
+            ELSE IF *zeroing*:
+                tmpdest.single[i] := 0
+    dest := tmpdest
+    dest[MAX_VL-1:VL] := 0
+V4FMADDPS and V4FNMADDPS dest{k1}, src1, msrc (AVX512)
+KL, VL = (16,512)
+regs_loaded := 4
+src_base := src_reg_id & ~3 // for src1 operand
+posneg := 0 if negative form, 1 otherwise
+NFMA_PS(kl, vl, dest, k1, msrc, regs_loaded, src_base, posneg)
+
+
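For readers who prefer plain C, the following is a minimal scalar sketch of the NFMA_PS loop above for the positive form with no masking; the function name and array layout are illustrative only (src models the block of four consecutive source registers, msrc the four 32-bit memory elements):

/* Scalar reference model: dest[i] accumulates four multiply-add steps, one per source register.
 * Note: plain C rounds the product and the sum separately; the hardware fuses them
 * with a single rounding per FMA step. */
void v4fmaddps_ref(float dest[16], const float src[4][16], const float msrc[4])
{
    for (int j = 0; j < 4; j++)          /* regs_loaded = 4 sequential iterations */
        for (int i = 0; i < 16; i++)     /* KL = 16 lanes for VL = 512 */
            dest[i] = dest[i] + src[j][i] * msrc[j];
}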

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
V4FMADDPS __m512 _mm512_4fmadd_ps( __m512, __m512x4, __m128 *);
+
+
V4FMADDPS __m512 _mm512_mask_4fmadd_ps(__m512, __mmask16, __m512x4, __m128 *);
+
+
V4FMADDPS __m512 _mm512_maskz_4fmadd_ps(__mmask16, __m512, __m512x4, __m128 *);
+
+
V4FNMADDPS __m512 _mm512_4fnmadd_ps(__m512, __m512x4, __m128 *);
+
+
V4FNMADDPS __m512 _mm512_mask_4fnmadd_ps(__m512, __mmask16, __m512x4, __m128 *);
+
+
V4FNMADDPS __m512 _mm512_maskz_4fnmadd_ps(__mmask16, __m512, __m512x4, __m128 *);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

See Type E2; additionally:

+ + + + + + +
#UDIf the EVEX broadcast bit is set to 1.
#UDIf the MODRM.mod = 0b11.
diff --git a/x86/v4fmaddss.v4fnmaddss.html b/x86/v4fmaddss.v4fnmaddss.html new file mode 100644 index 0000000..4855a45 --- /dev/null +++ b/x86/v4fmaddss.v4fnmaddss.html @@ -0,0 +1,112 @@ + +V4FMADDSS/V4FNMADDSS + — Scalar Single Precision Floating-Point Fused Multiply-Add(4-Iterations)

V4FMADDSS/V4FNMADDSS + — Scalar Single Precision Floating-Point Fused Multiply-Add(4-Iterations)

+ + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.F2.0F38.W0 9B /r V4FMADDSS xmm1{k1}{z}, xmm2+3, m128AV/VAVX512_4FMAPSMultiply scalar single-precision floating-point values from source register block indicated by xmm2 by values from m128 and accumulate the result in xmm1.
EVEX.LLIG.F2.0F38.W0 AB /r V4FNMADDSS xmm1{k1}{z}, xmm2+3, m128AV/VAVX512_4FMAPSMultiply and negate scalar single-precision floating-point values from source register block indicated by xmm2 by values from m128 and accumulate the result in xmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/En Tuple Operand 1 Operand 2 Operand 3 Operand 4
A Tuple1_4X ModRM:reg (r, w) EVEX.vvvv (r) ModRM:r/m (r) N/A
+

Description + ¶ +

+

This instruction computes 4 sequential scalar fused single-precision floating-point multiply-add instructions with a sequentially selected memory operand in each of the four steps.

+

In the opcode box above, the “+3” notation denotes that the instruction accesses 4 source registers based on that operand; the sources are consecutive registers that start at a multiple-of-4 boundary and include the encoded register operand.

+

This instruction supports memory fault suppression. The entire memory operand is loaded if the least significant mask bit is set to 1 or if a “no masking” encoding is used.

+

The tuple type Tuple1_4X implies that four 32-bit elements (16 bytes) are referenced by the memory operation portion of this instruction.

+

Rounding is performed at every FMA boundary. Exceptions are also taken sequentially. Pre- and post-computational exceptions of the first FMA take priority over the pre- and post-computational exceptions of the second FMA, etc.

+

Operation + ¶ +

+
src_reg_id is the 5-bit index of the vector register specified in the instruction as the src1 register.
+define NFMA_SS(vl, dest, k1, msrc, regs_loaded, src_base, posneg):
+    tmpdest := dest
+    // reg[] is an array representing the SIMD register file.
+    IF k1[0] or *no writemask*:
+        FOR j := 0 to regs_loaded - 1:
+            IF posneg = 0:
+                tmpdest.single[0] := RoundFPControl_MXCSR(tmpdest.single[0] - reg[src_base + j ].single[0] * msrc.single[j])
+            ELSE:
+                tmpdest.single[0] := RoundFPControl_MXCSR(tmpdest.single[0] + reg[src_base + j ].single[0] * msrc.single[j])
+    ELSE IF *zeroing*:
+        tmpdest.single[0] := 0
+    dest := tmpdest
+    dest[MAX_VL-1:VL] := 0
+
+

V4FMADDSS and V4FNMADDSS dest{k1}, src1, msrc (AVX512) + ¶ +

+
VL = 128
+regs_loaded := 4
+src_base := src_reg_id & ~3 // for src1 operand
+posneg := 0 if negative form, 1 otherwise
+NFMA_SS(vl, dest, k1, msrc, regs_loaded, src_base, posneg)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
V4FMADDSS __m128 _mm_4fmadd_ss(__m128, __m128x4, __m128 *);
+
+
V4FMADDSS __m128 _mm_mask_4fmadd_ss(__m128, __mmask8, __m128x4, __m128 *);
+
+
V4FMADDSS __m128 _mm_maskz_4fmadd_ss(__mmask8, __m128, __m128x4, __m128 *);
+
+
V4FNMADDSS __m128 _mm_4fnmadd_ss(__m128, __m128x4, __m128 *);
+
+
V4FNMADDSS __m128 _mm_mask_4fnmadd_ss(__m128, __mmask8, __m128x4, __m128 *);
+
+
V4FNMADDSS __m128 _mm_maskz_4fnmadd_ss(__mmask8, __m128, __m128x4, __m128 *);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

See Type E2; additionally:

+ + + + + + +
#UDIf the EVEX broadcast bit is set to 1.
#UDIf the MODRM.mod = 0b11.
diff --git a/x86/vaddph.html b/x86/vaddph.html new file mode 100644 index 0000000..eef4c61 --- /dev/null +++ b/x86/vaddph.html @@ -0,0 +1,131 @@ + +VADDPH + — Add Packed FP16 Values

VADDPH + — Add Packed FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Instruction Op/En 64/32 Bit Mode Support CPUID Feature Flag Description
EVEX.128.NP.MAP5.W0 58 /r VADDPH xmm1{k1}{z}, xmm2, xmm3/m128/m16bcstAV/VAVX512-FP16 AVX512VLAdd packed FP16 value from xmm3/m128/m16bcst to xmm2, and store result in xmm1 subject to writemask k1.
EVEX.256.NP.MAP5.W0 58 /r VADDPH ymm1{k1}{z}, ymm2, ymm3/m256/m16bcstAV/VAVX512-FP16 AVX512VLAdd packed FP16 value from ymm3/m256/m16bcst to ymm2, and store result in ymm1 subject to writemask k1.
EVEX.512.NP.MAP5.W0 58 /r VADDPH zmm1{k1}{z}, zmm2, zmm3/m512/m16bcst {er}AV/VAVX512-FP16Add packed FP16 value from zmm3/m512/m16bcst to zmm2, and store result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction adds packed FP16 values from source operands and stores the packed FP16 result in the destination operand. The destination elements are updated according to the writemask.

+

Operation + ¶ +

+

VADDPH (EVEX Encoded Versions) When SRC2 Operand is a Register + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+IF (VL = 512) AND (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        DEST.fp16[j] := SRC1.fp16[j] + SRC2.fp16[j]
+    ELSEIF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

VADDPH (EVEX Encoded Versions) When SRC2 Operand is a Memory Source + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF EVEX.b = 1:
+            DEST.fp16[j] := SRC1.fp16[j] + SRC2.fp16[0]
+        ELSE:
+            DEST.fp16[j] := SRC1.fp16[j] + SRC2.fp16[j]
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VADDPH __m128h _mm_add_ph (__m128h a, __m128h b);
+
+
VADDPH __m128h _mm_mask_add_ph (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VADDPH __m128h _mm_maskz_add_ph (__mmask8 k, __m128h a, __m128h b);
+
+
VADDPH __m256h _mm256_add_ph (__m256h a, __m256h b);
+
+
VADDPH __m256h _mm256_mask_add_ph (__m256h src, __mmask16 k, __m256h a, __m256h b);
+
+
VADDPH __m256h _mm256_maskz_add_ph (__mmask16 k, __m256h a, __m256h b);
+
+
VADDPH __m512h _mm512_add_ph (__m512h a, __m512h b);
+
+
VADDPH __m512h _mm512_mask_add_ph (__m512h src, __mmask32 k, __m512h a, __m512h b);
+
+
VADDPH __m512h _mm512_maskz_add_ph (__mmask32 k, __m512h a, __m512h b);
+
+
VADDPH __m512h _mm512_add_round_ph (__m512h a, __m512h b, int rounding);
+
+
VADDPH __m512h _mm512_mask_add_round_ph (__m512h src, __mmask32 k, __m512h a, __m512h b, int rounding);
+
+
VADDPH __m512h _mm512_maskz_add_round_ph (__mmask32 k, __m512h a, __m512h b, int rounding);
+
+
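A usage sketch of the merge-masking form (assumes a compiler and CPU with AVX512-FP16 support, e.g., building with -mavx512fp16; the helper name is illustrative):

#include <immintrin.h>

/* Lanes whose bit in k is 1 receive a + b; the remaining lanes keep the value from src. */
__m512h masked_add_ph(__m512h src, __mmask32 k, __m512h a, __m512h b)
{
    return _mm512_mask_add_ph(src, k, a, b);
}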

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Overflow, Precision, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vaddsh.html b/x86/vaddsh.html new file mode 100644 index 0000000..5c4dd57 --- /dev/null +++ b/x86/vaddsh.html @@ -0,0 +1,88 @@ + +VADDSH + — Add Scalar FP16 Values

VADDSH + — Add Scalar FP16 Values

+ + + + + + + + + + + + + +
Instruction Op/En 64/32 Bit Mode Support CPUID Feature Flag Description
EVEX.LLIG.F3.MAP5.W0 58 /r VADDSH xmm1{k1}{z}, xmm2, xmm3/m16 {er}AV/VAVX512-FP16Add the low FP16 value from xmm3/m16 to xmm2, and store the result in xmm1 subject to writemask k1. Bits 127:16 of xmm2 are copied to xmm1[127:16].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction adds the low FP16 value from the source operands and stores the FP16 result in the destination operand.

+

Bits 127:16 of the destination operand are copied from the corresponding bits of the first source operand. Bits MAXVL-1:128 of the destination operand are zeroed. The low FP16 element of the destination is updated according to the writemask.

+

Operation + ¶ +

+

VADDSH (EVEX Encoded Versions) + ¶ +

+
IF EVEX.b = 1 and SRC2 is a register:
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+IF k1[0] OR *no writemask*:
+    DEST.fp16[0] := SRC1.fp16[0] + SRC2.fp16[0]
+ELSEIF *zeroing*:
+    DEST.fp16[0] := 0
+// else dest.fp16[0] remains unchanged
+DEST[127:16] := SRC1[127:16]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VADDSH __m128h _mm_add_round_sh (__m128h a, __m128h b, int rounding);
+
+
VADDSH __m128h _mm_mask_add_round_sh (__m128h src, __mmask8 k, __m128h a, __m128h b, int rounding);
+
+
VADDSH __m128h _mm_maskz_add_round_sh (__mmask8 k, __m128h a, __m128h b, int rounding);
+
+
VADDSH __m128h _mm_add_sh (__m128h a, __m128h b);
+
+
VADDSH __m128h _mm_mask_add_sh (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VADDSH __m128h _mm_maskz_add_sh (__mmask8 k, __m128h a, __m128h b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Overflow, Precision, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/valignd.valignq.html b/x86/valignd.valignq.html new file mode 100644 index 0000000..5172163 --- /dev/null +++ b/x86/valignd.valignq.html @@ -0,0 +1,194 @@ + +VALIGND/VALIGNQ + — Align Doubleword/Quadword Vectors

VALIGND/VALIGNQ + — Align Doubleword/Quadword Vectors

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F3A.W0 03 /r ib VALIGND xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst, imm8AV/VAVX512VL AVX512FShift right and merge vectors xmm2 and xmm3/m128/m32bcst with double-word granularity using imm8 as number of elements to shift, and store the final result in xmm1, under writemask.
EVEX.128.66.0F3A.W1 03 /r ib VALIGNQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst, imm8AV/VAVX512VL AVX512FShift right and merge vectors xmm2 and xmm3/m128/m64bcst with quad-word granularity using imm8 as number of elements to shift, and store the final result in xmm1, under writemask.
EVEX.256.66.0F3A.W0 03 /r ib VALIGND ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst, imm8AV/VAVX512VL AVX512FShift right and merge vectors ymm2 and ymm3/m256/m32bcst with double-word granularity using imm8 as number of elements to shift, and store the final result in ymm1, under writemask.
EVEX.256.66.0F3A.W1 03 /r ib VALIGNQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst, imm8AV/VAVX512VL AVX512FShift right and merge vectors ymm2 and ymm3/m256/m64bcst with quad-word granularity using imm8 as number of elements to shift, and store the final result in ymm1, under writemask.
EVEX.512.66.0F3A.W0 03 /r ib VALIGND zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst, imm8AV/VAVX512FShift right and merge vectors zmm2 and zmm3/m512/m32bcst with double-word granularity using imm8 as number of elements to shift, and store the final result in zmm1, under writemask.
EVEX.512.66.0F3A.W1 03 /r ib VALIGNQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst, imm8AV/VAVX512FShift right and merge vectors zmm2 and zmm3/m512/m64bcst with quad-word granularity using imm8 as number of elements to shift, and store the final result in zmm1, under writemask.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Concatenates and shifts right doubleword/quadword elements of the first source operand (the second operand) and the second source operand (the third operand) into a 1024/512/256-bit intermediate vector. The low 512/256/128-bit of the intermediate vector is written to the destination operand (the first operand) using the writemask k1. The destination and first source operands are ZMM/YMM/XMM registers. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32/64-bit memory location.

+

This instruction is writemasked, so only those elements with the corresponding bit set in vector mask register k1 are computed and stored into zmm1. Elements in zmm1 with the corresponding bit clear in k1 retain their previous values (merging-masking) or are set to 0 (zeroing-masking).

+

Operation + ¶ +

+

VALIGND (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (SRC2 *is memory*) (AND EVEX.b = 1)
+    THEN
+        FOR j := 0 TO KL-1
+            i := j * 32
+            src[i+31:i] := SRC2[31:0]
+        ENDFOR;
+    ELSE src := SRC2
+FI
+; Concatenate sources
+tmp[VL-1:0] := src[VL-1:0]
+tmp[2VL-1:VL] := SRC1[VL-1:0]
+; Shift right doubleword elements
+IF VL = 128
+    THEN SHIFT = imm8[1:0]
+    ELSE
+        IF VL = 256
+            THEN SHIFT = imm8[2:0]
+            ELSE SHIFT = imm8[3:0]
+        FI
+FI;
+tmp[2VL-1:0] := tmp[2VL-1:0] >> (32*SHIFT)
+; Apply writemask
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := tmp[i+31:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

VALIGNQ (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256),(8, 512)
+IF (SRC2 *is memory*) (AND EVEX.b = 1)
+    THEN
+        FOR j := 0 TO KL-1
+            i := j * 64
+            src[i+63:i] := SRC2[63:0]
+        ENDFOR;
+    ELSE src := SRC2
+FI
+; Concatenate sources
+tmp[VL-1:0] := src[VL-1:0]
+tmp[2VL-1:VL] := SRC1[VL-1:0]
+; Shift right quadword elements
+IF VL = 128
+    THEN SHIFT = imm8[0]
+    ELSE
+        IF VL = 256
+            THEN SHIFT = imm8[1:0]
+            ELSE SHIFT = imm8[2:0]
+        FI
+FI;
+tmp[2VL-1:0] := tmp[2VL-1:0] >> (64*SHIFT)
+; Apply writemask
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := tmp[i+63:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VALIGND __m512i _mm512_alignr_epi32( __m512i a, __m512i b, int cnt);
+
+
VALIGND __m512i _mm512_mask_alignr_epi32(__m512i s, __mmask16 k, __m512i a, __m512i b, int cnt);
+
+
VALIGND __m512i _mm512_maskz_alignr_epi32( __mmask16 k, __m512i a, __m512i b, int cnt);
+
+
VALIGND __m256i _mm256_mask_alignr_epi32(__m256i s, __mmask8 k, __m256i a, __m256i b, int cnt);
+
+
VALIGND __m256i _mm256_maskz_alignr_epi32( __mmask8 k, __m256i a, __m256i b, int cnt);
+
+
VALIGND __m128i _mm_mask_alignr_epi32(__m128i s, __mmask8 k, __m128i a, __m128i b, int cnt);
+
+
VALIGND __m128i _mm_maskz_alignr_epi32( __mmask8 k, __m128i a, __m128i b, int cnt);
+
+
VALIGNQ __m512i _mm512_alignr_epi64( __m512i a, __m512i b, int cnt);
+
+
VALIGNQ __m512i _mm512_mask_alignr_epi64(__m512i s, __mmask8 k, __m512i a, __m512i b, int cnt);
+
+
VALIGNQ __m512i _mm512_maskz_alignr_epi64( __mmask8 k, __m512i a, __m512i b, int cnt);
+
+
VALIGNQ __m256i _mm256_mask_alignr_epi64(__m256i s, __mmask8 k, __m256i a, __m256i b, int cnt);
+
+
VALIGNQ __m256i _mm256_maskz_alignr_epi64( __mmask8 k, __m256i a, __m256i b, int cnt);
+
+
VALIGNQ __m128i _mm_mask_alignr_epi64(__m128i s, __mmask8 k, __m128i a, __m128i b, int cnt);
+
+
VALIGNQ __m128i _mm_maskz_alignr_epi64( __mmask8 k, __m128i a, __m128i b, int cnt);
+
+
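A small sketch of the concatenate-and-shift behavior using the AVX-512F intrinsic listed above (assumes an AVX-512 build; the values are chosen only to make the element selection visible):

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    int av[16], bv[16], out[16];
    for (int i = 0; i < 16; i++) { av[i] = 100 + i; bv[i] = i; }
    __m512i a = _mm512_loadu_si512(av);
    __m512i b = _mm512_loadu_si512(bv);
    /* Concatenate a (high) : b (low), shift right by 3 dwords, keep the low 16 dwords:
       out[i] = (i + 3 < 16) ? bv[i + 3] : av[i - 13]. */
    __m512i r = _mm512_alignr_epi32(a, b, 3);
    _mm512_storeu_si512(out, r);
    for (int i = 0; i < 16; i++) printf("%d ", out[i]);   /* 3 4 ... 15 100 101 102 */
    printf("\n");
    return 0;
}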

Exceptions + ¶ +

+

See Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/vblendmpd.vblendmps.html b/x86/vblendmpd.vblendmps.html new file mode 100644 index 0000000..c16a3df --- /dev/null +++ b/x86/vblendmpd.vblendmps.html @@ -0,0 +1,152 @@ + +VBLENDMPD/VBLENDMPS + — Blend Float64/Float32 Vectors Using an OpMask Control

VBLENDMPD/VBLENDMPS + — Blend Float64/Float32 Vectors Using an OpMask Control

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W1 65 /r VBLENDMPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstAV/VAVX512VL AVX512FBlend double precision vector xmm2 and double precision vector xmm3/m128/m64bcst and store the result in xmm1, under control mask.
EVEX.256.66.0F38.W1 65 /r VBLENDMPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstAV/VAVX512VL AVX512FBlend double precision vector ymm2 and double precision vector ymm3/m256/m64bcst and store the result in ymm1, under control mask.
EVEX.512.66.0F38.W1 65 /r VBLENDMPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstAV/VAVX512FBlend double precision vector zmm2 and double precision vector zmm3/m512/m64bcst and store the result in zmm1, under control mask.
EVEX.128.66.0F38.W0 65 /r VBLENDMPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstAV/VAVX512VL AVX512FBlend single precision vector xmm2 and single precision vector xmm3/m128/m32bcst and store the result in xmm1, under control mask.
EVEX.256.66.0F38.W0 65 /r VBLENDMPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstAV/VAVX512VL AVX512FBlend single precision vector ymm2 and single precision vector ymm3/m256/m32bcst and store the result in ymm1, under control mask.
EVEX.512.66.0F38.W0 65 /r VBLENDMPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstAV/VAVX512FBlend single precision vector zmm2 and single precision vector zmm3/m512/m32bcst using k1 as select control and store the result in zmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs an element-by-element blending of the float64/float32 elements in the first source operand (the second operand) with the elements in the second source operand (the third operand), using an opmask register as select control. The blended result is written to the destination register.

+

The destination and first source operands are ZMM/YMM/XMM registers. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 64-bit memory location.

+

The opmask register is not used as a writemask for this instruction. Instead, the mask is used as an element selector: every element of the destination is conditionally selected between first source or second source using the value of the related mask bit (0 for first source operand, 1 for second source operand).

+

If EVEX.z is set, the elements with corresponding mask bit value of 0 in the destination operand are zeroed.

+

Operation + ¶ +

+

VBLENDMPD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no controlmask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN
+                    DEST[i+63:i] := SRC2[63:0]
+                ELSE
+                    DEST[i+63:i] := SRC2[i+63:i]
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN DEST[i+63:i] := SRC1[i+63:i]
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI;
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VBLENDMPS (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no controlmask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN
+                    DEST[i+31:i] := SRC2[31:0]
+                ELSE
+                    DEST[i+31:i] := SRC2[i+31:i]
+            FI;
+        ELSE
+            IF *merging-masking*
+                THEN DEST[i+31:i] := SRC1[i+31:i]
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI;
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VBLENDMPD __m512d _mm512_mask_blend_pd(__mmask8 k, __m512d a, __m512d b);
+
+
VBLENDMPD __m256d _mm256_mask_blend_pd(__mmask8 k, __m256d a, __m256d b);
+
+
VBLENDMPD __m128d _mm_mask_blend_pd(__mmask8 k, __m128d a, __m128d b);
+
+
VBLENDMPS __m512 _mm512_mask_blend_ps(__mmask16 k, __m512 a, __m512 b);
+
+
VBLENDMPS __m256 _mm256_mask_blend_ps(__mmask8 k, __m256 a, __m256 b);
+
+
VBLENDMPS __m128 _mm_mask_blend_ps(__mmask8 k, __m128 a, __m128 b);
+
+
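A minimal sketch of the selector semantics (assumes AVX-512F; the wrapper name is illustrative): each destination lane takes b when its mask bit is 1 and a when it is 0.

#include <immintrin.h>

/* dst.lane[i] = ((k >> i) & 1) ? b.lane[i] : a.lane[i] */
__m512d blend_pd(__mmask8 k, __m512d a, __m512d b)
{
    return _mm512_mask_blend_pd(k, a, b);   /* e.g., k = 0xAA picks the odd lanes from b */
}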

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vbroadcast.html b/x86/vbroadcast.html new file mode 100644 index 0000000..894bad4 --- /dev/null +++ b/x86/vbroadcast.html @@ -0,0 +1,789 @@ + +VBROADCAST + — Load with Broadcast Floating-Point Data

VBROADCAST + — Load with Broadcast Floating-Point Data

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W0 18 /r VBROADCASTSS xmm1, m32AV/VAVXBroadcast single precision floating-point element in mem to four locations in xmm1.
VEX.256.66.0F38.W0 18 /r VBROADCASTSS ymm1, m32AV/VAVXBroadcast single precision floating-point element in mem to eight locations in ymm1.
VEX.256.66.0F38.W0 19 /r VBROADCASTSD ymm1, m64AV/VAVXBroadcast double precision floating-point element in mem to four locations in ymm1.
VEX.256.66.0F38.W0 1A /r VBROADCASTF128 ymm1, m128AV/VAVXBroadcast 128 bits of floating-point data in mem to low and high 128-bits in ymm1.
VEX.128.66.0F38.W0 18/r VBROADCASTSS xmm1, xmm2AV/VAVX2Broadcast the low single precision floating-point element in the source operand to four locations in xmm1.
VEX.256.66.0F38.W0 18 /r VBROADCASTSS ymm1, xmm2AV/VAVX2Broadcast low single precision floating-point element in the source operand to eight locations in ymm1.
VEX.256.66.0F38.W0 19 /r VBROADCASTSD ymm1, xmm2AV/VAVX2Broadcast low double precision floating-point element in the source operand to four locations in ymm1.
EVEX.256.66.0F38.W1 19 /r VBROADCASTSD ymm1 {k1}{z}, xmm2/m64BV/VAVX512VL AVX512FBroadcast low double precision floating-point element in xmm2/m64 to four locations in ymm1 using writemask k1.
EVEX.512.66.0F38.W1 19 /r VBROADCASTSD zmm1 {k1}{z}, xmm2/m64BV/VAVX512FBroadcast low double precision floating-point element in xmm2/m64 to eight locations in zmm1 using writemask k1.
EVEX.256.66.0F38.W0 19 /r VBROADCASTF32X2 ymm1 {k1}{z}, xmm2/m64CV/VAVX512VL AVX512DQBroadcast two single precision floating-point elements in xmm2/m64 to locations in ymm1 using writemask k1.
EVEX.512.66.0F38.W0 19 /r VBROADCASTF32X2 zmm1 {k1}{z}, xmm2/m64CV/VAVX512DQBroadcast two single precision floating-point elements in xmm2/m64 to locations in zmm1 using writemask k1.
EVEX.128.66.0F38.W0 18 /r VBROADCASTSS xmm1 {k1}{z}, xmm2/m32BV/VAVX512VL AVX512FBroadcast low single precision floating-point element in xmm2/m32 to all locations in xmm1 using writemask k1.
EVEX.256.66.0F38.W0 18 /r VBROADCASTSS ymm1 {k1}{z}, xmm2/m32BV/VAVX512VL AVX512FBroadcast low single precision floating-point element in xmm2/m32 to all locations in ymm1 using writemask k1.
EVEX.512.66.0F38.W0 18 /r VBROADCASTSS zmm1 {k1}{z}, xmm2/m32BV/VAVX512FBroadcast low single precision floating-point element in xmm2/m32 to all locations in zmm1 using writemask k1.
EVEX.256.66.0F38.W0 1A /r VBROADCASTF32X4 ymm1 {k1}{z}, m128DV/VAVX512VL AVX512FBroadcast 128 bits of 4 single precision floating-point data in mem to locations in ymm1 using writemask k1.
EVEX.512.66.0F38.W0 1A /r VBROADCASTF32X4 zmm1 {k1}{z}, m128DV/VAVX512FBroadcast 128 bits of 4 single precision floating-point data in mem to locations in zmm1 using writemask k1.
EVEX.256.66.0F38.W1 1A /r VBROADCASTF64X2 ymm1 {k1}{z}, m128CV/VAVX512VL AVX512DQBroadcast 128 bits of 2 double precision floating-point data in mem to locations in ymm1 using writemask k1.
EVEX.512.66.0F38.W1 1A /r VBROADCASTF64X2 zmm1 {k1}{z}, m128CV/VAVX512DQBroadcast 128 bits of 2 double precision floating-point data in mem to locations in zmm1 using writemask k1.
EVEX.512.66.0F38.W0 1B /r VBROADCASTF32X8 zmm1 {k1}{z}, m256EV/VAVX512DQBroadcast 256 bits of 8 single precision floating-point data in mem to locations in zmm1 using writemask k1.
EVEX.512.66.0F38.W1 1B /r VBROADCASTF64X4 zmm1 {k1}{z}, m256DV/VAVX512FBroadcast 256 bits of 4 double precision floating-point data in mem to locations in zmm1 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BTuple1 ScalarModRM:reg (w)ModRM:r/m (r)N/AN/A
CTuple2ModRM:reg (w)ModRM:r/m (r)N/AN/A
DTuple4ModRM:reg (w)ModRM:r/m (r)N/AN/A
ETuple8ModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

VBROADCASTSD/VBROADCASTSS/VBROADCASTF128 load floating-point values as one tuple from the source operand (second operand) in memory and broadcast to all elements of the destination operand (first operand).

+

VEX256-encoded versions: The destination operand is a YMM register. The source operand is either a 32-bit, 64-bit, or 128-bit memory location. Register source encodings are reserved and will #UD. Bits (MAXVL-1:256) of the destination register are zeroed.

+

EVEX-encoded versions: The destination operand is a ZMM/YMM/XMM register and updated according to the writemask k1. The source operand is either a 32-bit, 64-bit memory location or the low doubleword/quadword element of an XMM register.

+

VBROADCASTF32X2/VBROADCASTF32X4/VBROADCASTF64X2/VBROADCASTF32X8/VBROADCASTF64X4 load floating-point values as tuples from the source operand (the second operand) in memory or register and broadcast to all elements of the destination operand (the first operand). The destination operand is a YMM/ZMM register updated according to the writemask k1. The source operand is either a register or 64-bit/128-bit/256-bit memory location.

+

VBROADCASTSD, VBROADCASTF128, VBROADCASTF32X4, and VBROADCASTF64X2 are supported only in 256-bit and 512-bit wide versions. VBROADCASTSS is supported in 128-bit, 256-bit, and 512-bit wide versions. VBROADCASTF32X8 and VBROADCASTF64X4 are supported only in 512-bit wide versions.

+

VBROADCASTF32X2/VBROADCASTF32X4/VBROADCASTF32X8 have 32-bit granularity. VBROADCASTF64X2 and VBROADCASTF64X4 have 64-bit granularity.

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b otherwise instructions will #UD.

+

An attempt to execute VBROADCASTSD or VBROADCASTF128 encoded with VEX.L = 0 will cause a #UD exception.

+
Figure 5-1. VBROADCASTSS Operation (VEX.256 encoded version)
+
Figure 5-2. VBROADCASTSS Operation (VEX.128-bit version)
+
Figure 5-3. VBROADCASTSD Operation (VEX.256-bit version)
+
Figure 5-4. VBROADCASTF128 Operation (VEX.256-bit version)
+
Figure 5-5. VBROADCASTF64X4 Operation (512-bit version with writemask all 1s)
+

Operation + ¶ +

+

VBROADCASTSS (128-bit Version VEX and Legacy) + ¶ +

+
temp := SRC[31:0]
+DEST[31:0] := temp
+DEST[63:32] := temp
+DEST[95:64] := temp
+DEST[127:96] := temp
+DEST[MAXVL-1:128] := 0
+
+

VBROADCASTSS (VEX.256 Encoded Version) + ¶ +

+
temp := SRC[31:0]
+DEST[31:0] := temp
+DEST[63:32] := temp
+DEST[95:64] := temp
+DEST[127:96] := temp
+DEST[159:128] := temp
+DEST[191:160] := temp
+DEST[223:192] := temp
+DEST[255:224] := temp
+DEST[MAXVL-1:256] := 0
+
+

VBROADCASTSS (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SRC[31:0]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VBROADCASTSD (VEX.256 Encoded Version) + ¶ +

+
temp := SRC[63:0]
+DEST[63:0] := temp
+DEST[127:64] := temp
+DEST[191:128] := temp
+DEST[255:192] := temp
+DEST[MAXVL-1:256] := 0
+
+

VBROADCASTSD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := SRC[63:0]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VBROADCASTF32x2 (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    n := (j mod 2) * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SRC[n+31:n]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VBROADCASTF128 (VEX.256 Encoded Version) + ¶ +

+
temp := SRC[127:0]
+DEST[127:0] := temp
+DEST[255:128] := temp
+DEST[MAXVL-1:256] := 0
+
+

VBROADCASTF32X4 (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j* 32
+    n := (j modulo 4) * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SRC[n+31:n]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VBROADCASTF64X2 (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    n := (j modulo 2) * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := SRC[n+63:n]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

VBROADCASTF32X8 (EVEX.U1.512 Encoded Version) + ¶ +

+
FOR j := 0 TO 15
+    i := j * 32
+    n := (j modulo 8) * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SRC[n+31:n]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VBROADCASTF64X4 (EVEX.512 Encoded Version) + ¶ +

+
FOR j := 0 TO 7
+    i := j * 64
+    n := (j modulo 4) * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := SRC[n+63:n]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VBROADCASTF32x2 __m512 _mm512_broadcast_f32x2( __m128 a);
+
+
VBROADCASTF32x2 __m512 _mm512_mask_broadcast_f32x2(__m512 s, __mmask16 k, __m128 a);
+
+
VBROADCASTF32x2 __m512 _mm512_maskz_broadcast_f32x2( __mmask16 k, __m128 a);
+
+
VBROADCASTF32x2 __m256 _mm256_broadcast_f32x2( __m128 a);
+
+
VBROADCASTF32x2 __m256 _mm256_mask_broadcast_f32x2(__m256 s, __mmask8 k, __m128 a);
+
+
VBROADCASTF32x2 __m256 _mm256_maskz_broadcast_f32x2( __mmask8 k, __m128 a);
+
+
VBROADCASTF32x4 __m512 _mm512_broadcast_f32x4( __m128 a);
+
+
VBROADCASTF32x4 __m512 _mm512_mask_broadcast_f32x4(__m512 s, __mmask16 k, __m128 a);
+
+
VBROADCASTF32x4 __m512 _mm512_maskz_broadcast_f32x4( __mmask16 k, __m128 a);
+
+
VBROADCASTF32x4 __m256 _mm256_broadcast_f32x4( __m128 a);
+
+
VBROADCASTF32x4 __m256 _mm256_mask_broadcast_f32x4(__m256 s, __mmask8 k, __m128 a);
+
+
VBROADCASTF32x4 __m256 _mm256_maskz_broadcast_f32x4( __mmask8 k, __m128 a);
+
+
VBROADCASTF32x8 __m512 _mm512_broadcast_f32x8( __m256 a);
+
+
VBROADCASTF32x8 __m512 _mm512_mask_broadcast_f32x8(__m512 s, __mmask16 k, __m256 a);
+
+
VBROADCASTF32x8 __m512 _mm512_maskz_broadcast_f32x8( __mmask16 k, __m256 a);
+
+
VBROADCASTF64x2 __m512d _mm512_broadcast_f64x2( __m128d a);
+
+
VBROADCASTF64x2 __m512d _mm512_mask_broadcast_f64x2(__m512d s, __mmask8 k, __m128d a);
+
+
VBROADCASTF64x2 __m512d _mm512_maskz_broadcast_f64x2( __mmask8 k, __m128d a);
+
+
VBROADCASTF64x2 __m256d _mm256_broadcast_f64x2( __m128d a);
+
+
VBROADCASTF64x2 __m256d _mm256_mask_broadcast_f64x2(__m256d s, __mmask8 k, __m128d a);
+
+
VBROADCASTF64x2 __m256d _mm256_maskz_broadcast_f64x2( __mmask8 k, __m128d a);
+
+
VBROADCASTF64x4 __m512d _mm512_broadcast_f64x4( __m256d a);
+
+
VBROADCASTF64x4 __m512d _mm512_mask_broadcast_f64x4(__m512d s, __mmask8 k, __m256d a);
+
+
VBROADCASTF64x4 __m512d _mm512_maskz_broadcast_f64x4( __mmask8 k, __m256d a);
+
+
VBROADCASTSD __m512d _mm512_broadcastsd_pd( __m128d a);
+
+
VBROADCASTSD __m512d _mm512_mask_broadcastsd_pd(__m512d s, __mmask8 k, __m128d a);
+
+
VBROADCASTSD __m512d _mm512_maskz_broadcastsd_pd(__mmask8 k, __m128d a);
+
+
VBROADCASTSD __m256d _mm256_broadcastsd_pd(__m128d a);
+
+
VBROADCASTSD __m256d _mm256_mask_broadcastsd_pd(__m256d s, __mmask8 k, __m128d a);
+
+
VBROADCASTSD __m256d _mm256_maskz_broadcastsd_pd( __mmask8 k, __m128d a);
+
+
VBROADCASTSD __m256d _mm256_broadcast_sd(double *a);
+
+
VBROADCASTSS __m512 _mm512_broadcastss_ps( __m128 a);
+
+
VBROADCASTSS __m512 _mm512_mask_broadcastss_ps(__m512 s, __mmask16 k, __m128 a);
+
+
VBROADCASTSS __m512 _mm512_maskz_broadcastss_ps( __mmask16 k, __m128 a);
+
+
VBROADCASTSS __m256 _mm256_broadcastss_ps(__m128 a);
+
+
VBROADCASTSS __m256 _mm256_mask_broadcastss_ps(__m256 s, __mmask8 k, __m128 a);
+
+
VBROADCASTSS __m256 _mm256_maskz_broadcastss_ps( __mmask8 k, __m128 a);
+
+
VBROADCASTSS __m128 _mm_broadcastss_ps(__m128 a);
+
+
VBROADCASTSS __m128 _mm_mask_broadcastss_ps(__m128 s, __mmask8 k, __m128 a);
+
+
VBROADCASTSS __m128 _mm_maskz_broadcastss_ps( __mmask8 k, __m128 a);
+
+
VBROADCASTSS __m128 _mm_broadcast_ss(float *a);
+
+
VBROADCASTSS __m256 _mm256_broadcast_ss(float *a);
+
+
VBROADCASTF128 __m256 _mm256_broadcast_ps(__m128 * a);
+
+
VBROADCASTF128 __m256d _mm256_broadcast_pd(__m128d * a);
+
+
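A short usage sketch of the memory-source broadcast (assumes an AVX build, e.g., -mavx):

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float x = 3.5f;
    __m256 v = _mm256_broadcast_ss(&x);   /* replicate the 32-bit value into all 8 lanes */
    float out[8];
    _mm256_storeu_ps(out, v);
    printf("%g %g\n", out[0], out[7]);    /* 3.5 3.5 */
    return 0;
}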

Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-23, “Type 6 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-53, “Type E6 Class Exception Conditions.”

+

Additionally:

+ + + + + + +
#UD
If EVEX.L’L = 0 for VBROADCASTSD/VBROADCASTF32X2/VBROADCASTF32X4/VBROADCASTF64X2.
If EVEX.L’L < 10b for VBROADCASTF32X8/VBROADCASTF64X4.
diff --git a/x86/vcmpph.html b/x86/vcmpph.html new file mode 100644 index 0000000..db25acc --- /dev/null +++ b/x86/vcmpph.html @@ -0,0 +1,139 @@ + +VCMPPH + — Compare Packed FP16 Values

VCMPPH + — Compare Packed FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Instruction Op/En 64/32 Bit Mode Support CPUID Feature Flag Description
EVEX.128.NP.0F3A.W0 C2 /r /ib VCMPPH k1{k2}, xmm2, xmm3/m128/m16bcst, imm8AV/VAVX512-FP16 AVX512VLCompare packed FP16 values in xmm3/m128/m16bcst and xmm2 using bits 4:0 of imm8 as a comparison predicate subject to writemask k2, and store the result in mask register k1.
EVEX.256.NP.0F3A.W0 C2 /r /ib VCMPPH k1{k2}, ymm2, ymm3/m256/m16bcst, imm8AV/VAVX512-FP16 AVX512VLCompare packed FP16 values in ymm3/m256/m16bcst and ymm2 using bits 4:0 of imm8 as a comparison predicate subject to writemask k2, and store the result in mask register k1.
EVEX.512.NP.0F3A.W0 C2 /r /ib VCMPPH k1{k2}, zmm2, zmm3/m512/m16bcst {sae}, imm8AV/VAVX512-FP16Compare packed FP16 values in zmm3/m512/m16bcst and zmm2 using bits 4:0 of imm8 as a comparison predicate subject to writemask k2, and store the result in mask register k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8 (r)
+

Description + ¶ +

+

This instruction compares packed FP16 values from source operands and stores the result in the destination mask operand. The comparison predicate operand (immediate byte bits 4:0) specifies the type of comparison performed on each of the pairs of packed values. The destination elements are updated according to the writemask.

+

Operation + ¶ +

+
CASE (imm8 & 0x1F) OF
+0: CMP_OPERATOR := EQ_OQ;
+1: CMP_OPERATOR := LT_OS;
+2: CMP_OPERATOR := LE_OS;
+3: CMP_OPERATOR := UNORD_Q;
+4: CMP_OPERATOR := NEQ_UQ;
+5: CMP_OPERATOR := NLT_US;
+6: CMP_OPERATOR := NLE_US;
+7: CMP_OPERATOR := ORD_Q;
+8: CMP_OPERATOR := EQ_UQ;
+9: CMP_OPERATOR := NGE_US;
+10: CMP_OPERATOR := NGT_US;
+11: CMP_OPERATOR := FALSE_OQ;
+12: CMP_OPERATOR := NEQ_OQ;
+13: CMP_OPERATOR := GE_OS;
+14: CMP_OPERATOR := GT_OS;
+15: CMP_OPERATOR := TRUE_UQ;
+16: CMP_OPERATOR := EQ_OS;
+17: CMP_OPERATOR := LT_OQ;
+18: CMP_OPERATOR := LE_OQ;
+19: CMP_OPERATOR := UNORD_S;
+20: CMP_OPERATOR := NEQ_US;
+21: CMP_OPERATOR := NLT_UQ;
+22: CMP_OPERATOR := NLE_UQ;
+23: CMP_OPERATOR := ORD_S;
+24: CMP_OPERATOR := EQ_US;
+25: CMP_OPERATOR := NGE_UQ;
+26: CMP_OPERATOR := NGT_UQ;
+27: CMP_OPERATOR := FALSE_OS;
+28: CMP_OPERATOR := NEQ_OS;
+29: CMP_OPERATOR := GE_OQ;
+30: CMP_OPERATOR := GT_OQ;
+31: CMP_OPERATOR := TRUE_US;
+ESAC
+
+

VCMPPH (EVEX Encoded Versions) + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR j := 0 TO KL-1:
+    IF k2[j] OR *no writemask*:
+        IF EVEX.b = 1:
+            tsrc2 := SRC2.fp16[0]
+        ELSE:
+            tsrc2 := SRC2.fp16[j]
+        DEST.bit[j] := SRC1.fp16[j] CMP_OPERATOR tsrc2
+    ELSE
+        DEST.bit[j] := 0
+DEST[MAXKL-1:KL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCMPPH __mmask8 _mm_cmp_ph_mask (__m128h a, __m128h b, const int imm8);
+
+
VCMPPH __mmask8 _mm_mask_cmp_ph_mask (__mmask8 k1, __m128h a, __m128h b, const int imm8);
+
+
VCMPPH __mmask16 _mm256_cmp_ph_mask (__m256h a, __m256h b, const int imm8);
+
+
VCMPPH __mmask16 _mm256_mask_cmp_ph_mask (__mmask16 k1, __m256h a, __m256h b, const int imm8);
+
+
VCMPPH __mmask32 _mm512_cmp_ph_mask (__m512h a, __m512h b, const int imm8);
+
+
VCMPPH __mmask32 _mm512_mask_cmp_ph_mask (__mmask32 k1, __m512h a, __m512h b, const int imm8);
+
+
VCMPPH __mmask32 _mm512_cmp_round_ph_mask (__m512h a, __m512h b, const int imm8, const int sae);
+
+
VCMPPH __mmask32 _mm512_mask_cmp_round_ph_mask (__mmask32 k1, __m512h a, __m512h b, const int imm8, const int sae);
+
+
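A sketch of predicate selection (assumes AVX512-FP16 support; _CMP_LT_OS corresponds to predicate 1, LT_OS, in the table above):

#include <immintrin.h>

/* Bit j of the result is set iff a.fp16[j] < b.fp16[j] (ordered, signaling). */
__mmask32 cmp_lt_ph(__m512h a, __m512h b)
{
    return _mm512_cmp_ph_mask(a, b, _CMP_LT_OS);
}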

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vcmpsh.html b/x86/vcmpsh.html new file mode 100644 index 0000000..11e218f --- /dev/null +++ b/x86/vcmpsh.html @@ -0,0 +1,112 @@ + +VCMPSH + — Compare Scalar FP16 Values

VCMPSH + — Compare Scalar FP16 Values

+ + + + + + + + + + + + + +
Instruction Op/En 64/32 Bit Mode Support CPUID Feature Flag Description
EVEX.LLIG.F3.0F3A.W0 C2 /r /ib VCMPSH k1{k2}, xmm2, xmm3/m16 {sae}, imm8AV/VAVX512-FP16Compare low FP16 values in xmm3/m16 and xmm2 using bits 4:0 of imm8 as a comparison predicate subject to writemask k2, and store the result in mask register k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8 (r)
+

Description + ¶ +

+

This instruction compares the FP16 values from the lowest element of the source operands and stores the result in the destination mask operand. The comparison predicate operand (immediate byte bits 4:0) specifies the type of comparison performed on the pair of packed FP16 values. The low destination bit is updated according to the writemask. Bits MAXKL-1:1 of the destination operand are zeroed.

+

Operation + ¶ +

+
CASE (imm8 & 0x1F) OF
+0: CMP_OPERATOR := EQ_OQ;
+1: CMP_OPERATOR := LT_OS;
+2: CMP_OPERATOR := LE_OS;
+3: CMP_OPERATOR := UNORD_Q;
+4: CMP_OPERATOR := NEQ_UQ;
+5: CMP_OPERATOR := NLT_US;
+6: CMP_OPERATOR := NLE_US;
+7: CMP_OPERATOR := ORD_Q;
+8: CMP_OPERATOR := EQ_UQ;
+9: CMP_OPERATOR := NGE_US;
+10: CMP_OPERATOR := NGT_US;
+11: CMP_OPERATOR := FALSE_OQ;
+12: CMP_OPERATOR := NEQ_OQ;
+13: CMP_OPERATOR := GE_OS;
+14: CMP_OPERATOR := GT_OS;
+15: CMP_OPERATOR := TRUE_UQ;
+16: CMP_OPERATOR := EQ_OS;
+17: CMP_OPERATOR := LT_OQ;
+18: CMP_OPERATOR := LE_OQ;
+19: CMP_OPERATOR := UNORD_S;
+20: CMP_OPERATOR := NEQ_US;
+21: CMP_OPERATOR := NLT_UQ;
+22: CMP_OPERATOR := NLE_UQ;
+23: CMP_OPERATOR := ORD_S;
+24: CMP_OPERATOR := EQ_US;
+25: CMP_OPERATOR := NGE_UQ;
+26: CMP_OPERATOR := NGT_UQ;
+27: CMP_OPERATOR := FALSE_OS;
+28: CMP_OPERATOR := NEQ_OS;
+29: CMP_OPERATOR := GE_OQ;
+30: CMP_OPERATOR := GT_OQ;
+31: CMP_OPERATOR := TRUE_US;
+ESAC
+
+

VCMPSH (EVEX Encoded Versions) + ¶ +

+
IF k2[0] OR *no writemask*:
+    DEST.bit[0] := SRC1.fp16[0] CMP_OPERATOR SRC2.fp16[0]
+ELSE
+    DEST.bit[0] := 0
+DEST[MAXKL-1:1] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCMPSH __mmask8 _mm_cmp_round_sh_mask (__m128h a, __m128h b, const int imm8, const int sae);
+
+
VCMPSH __mmask8 _mm_mask_cmp_round_sh_mask (__mmask8 k1, __m128h a, __m128h b, const int imm8, const int sae);
+
+
VCMPSH __mmask8 _mm_cmp_sh_mask (__m128h a, __m128h b, const int imm8);
+
+
VCMPSH __mmask8 _mm_mask_cmp_sh_mask (__mmask8 k1, __m128h a, __m128h b, const int imm8);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vcomish.html b/x86/vcomish.html new file mode 100644 index 0000000..5c388c3 --- /dev/null +++ b/x86/vcomish.html @@ -0,0 +1,99 @@ + +VCOMISH + — Compare Scalar Ordered FP16 Values and Set EFLAGS

VCOMISH + — Compare Scalar Ordered FP16 Values and Set EFLAGS

+ + + + + + + + + + + + + +
Instruction Op/En 64/32 Bit Mode Support CPUID Feature Flag Description
EVEX.LLIG.NP.MAP5.W0 2F /r VCOMISH xmm1, xmm2/m16 {sae} A V/V AVX512-FP16 Compare low FP16 values in xmm1 and xmm2/m16, and set the EFLAGS flags accordingly.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (r)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction compares the FP16 values in the low word of operand 1 (first operand) and operand 2 (second operand), and sets the ZF, PF, and CF flags in the EFLAGS register according to the result (unordered, greater than, less than, or equal). The OF, SF and AF flags in the EFLAGS register are set to 0. The unordered result is returned if either source operand is a NaN (QNaN or SNaN).

+

Operand 1 is an XMM register; operand 2 can be an XMM register or a 16-bit memory location.

+

The VCOMISH instruction differs from the VUCOMISH instruction in that it signals a SIMD floating-point invalid operation exception (#I) when a source operand is either a QNaN or SNaN. The VUCOMISH instruction signals an invalid numeric exception only if a source operand is an SNaN.

+

The EFLAGS register is not updated if an unmasked SIMD floating-point exception is generated. EVEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Operation + ¶ +

+

VCOMISH SRC1, SRC2 + ¶ +

+
RESULT := OrderedCompare(SRC1.fp16[0],SRC2.fp16[0])
+IF RESULT is UNORDERED:
+    ZF, PF, CF := 1, 1, 1
+ELSE IF RESULT is GREATER_THAN:
+    ZF, PF, CF := 0, 0, 0
+ELSE IF RESULT is LESS_THAN:
+    ZF, PF, CF := 0, 0, 1
+ELSE: // RESULT is EQUALS
+    ZF, PF, CF := 1, 0, 0
+OF, AF, SF := 0, 0, 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCOMISH int _mm_comi_round_sh (__m128h a, __m128h b, const int imm8, const int sae);
+
+
VCOMISH int _mm_comi_sh (__m128h a, __m128h b, const int imm8);
+
+
VCOMISH int _mm_comieq_sh (__m128h a, __m128h b);
+
+
VCOMISH int _mm_comige_sh (__m128h a, __m128h b);
+
+
VCOMISH int _mm_comigt_sh (__m128h a, __m128h b);
+
+
VCOMISH int _mm_comile_sh (__m128h a, __m128h b);
+
+
VCOMISH int _mm_comilt_sh (__m128h a, __m128h b);
+
+
VCOMISH int _mm_comineq_sh (__m128h a, __m128h b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-48, “Type E3NF Class Exception Conditions.”

diff --git a/x86/vcompresspd.html b/x86/vcompresspd.html new file mode 100644 index 0000000..702cddf --- /dev/null +++ b/x86/vcompresspd.html @@ -0,0 +1,133 @@ + +VCOMPRESSPD + — Store Sparse Packed Double Precision Floating-Point Values Into DenseMemory

VCOMPRESSPD + — Store Sparse Packed Double Precision Floating-Point Values Into DenseMemory

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W1 8A /r VCOMPRESSPD xmm1/m128 {k1}{z}, xmm2AV/VAVX512VL AVX512FCompress packed double precision floating-point values from xmm2 to xmm1/m128 using writemask k1.
EVEX.256.66.0F38.W1 8A /r VCOMPRESSPD ymm1/m256 {k1}{z}, ymm2AV/VAVX512VL AVX512FCompress packed double precision floating-point values from ymm2 to ymm1/m256 using writemask k1.
EVEX.512.66.0F38.W1 8A /r VCOMPRESSPD zmm1/m512 {k1}{z}, zmm2AV/VAVX512FCompress packed double precision floating-point values from zmm2 using control mask k1 to zmm1/m512.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Compress (store) up to 8 double precision floating-point values from the source operand (the second operand) as a contiguous vector to the destination operand (the first operand). The source operand is a ZMM/YMM/XMM register; the destination operand can be a ZMM/YMM/XMM register or a 512/256/128-bit memory location.

+

The opmask register k1 selects the active elements (partial vector or possibly non-contiguous if less than 8 active elements) from the source operand to compress into a contiguous vector. The contiguous vector is written to the destination starting from the low element of the destination operand.

+

Memory destination version: Only the contiguous vector is written to the destination memory location. EVEX.z must be zero.

+

Register destination version: If the vector length of the contiguous vector is less than that of the input vector in the source operand, the upper bits of the destination register are unmodified if EVEX.z is not set, otherwise the upper bits are zeroed.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Note that the compressed displacement assumes a pre-scaling (N) corresponding to the size of one single element instead of the size of the full vector.

+

Operation + ¶ +

+

VCOMPRESSPD (EVEX Encoded Versions) Store Form + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+SIZE := 64
+k := 0
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            DEST[k+SIZE-1:k] := SRC[i+63:i]
+            k := k + SIZE
+    FI;
+ENDFOR
+
+

VCOMPRESSPD (EVEX Encoded Versions) Reg-Reg Form + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+SIZE := 64
+k := 0
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+                DEST[k+SIZE-1:k] := SRC[i+63:i]
+                k := k + SIZE
+    FI;
+ENDFOR
+IF *merging-masking*
+            THEN *DEST[VL-1:k] remains unchanged*
+            ELSE DEST[VL-1:k] := 0
+FI
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCOMPRESSPD __m512d _mm512_mask_compress_pd( __m512d s, __mmask8 k, __m512d a);
+
+
VCOMPRESSPD __m512d _mm512_maskz_compress_pd( __mmask8 k, __m512d a);
+
+
VCOMPRESSPD void _mm512_mask_compressstoreu_pd( void * d, __mmask8 k, __m512d a);
+
+
VCOMPRESSPD __m256d _mm256_mask_compress_pd( __m256d s, __mmask8 k, __m256d a);
+
+
VCOMPRESSPD __m256d _mm256_maskz_compress_pd( __mmask8 k, __m256d a);
+
+
VCOMPRESSPD void _mm256_mask_compressstoreu_pd( void * d, __mmask8 k, __m256d a);
+
+
VCOMPRESSPD __m128d _mm_mask_compress_pd( __m128d s, __mmask8 k, __m128d a);
+
+
VCOMPRESSPD __m128d _mm_maskz_compress_pd( __mmask8 k, __m128d a);
+
+
VCOMPRESSPD void _mm_mask_compressstoreu_pd( void * d, __mmask8 k, __m128d a);
+
+
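A minimal sketch of the register (merge) form using the AVX-512F intrinsic above; the wrapper name is illustrative:

#include <immintrin.h>

/* Lanes of a selected by k are packed into the low lanes of the result;
   the remaining high lanes are taken from src (merge form). */
__m512d compress_pd(__m512d src, __mmask8 k, __m512d a)
{
    return _mm512_mask_compress_pd(src, k, a);
}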

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vcompressps.html b/x86/vcompressps.html new file mode 100644 index 0000000..017b11e --- /dev/null +++ b/x86/vcompressps.html @@ -0,0 +1,133 @@ + +VCOMPRESSPS + — Store Sparse Packed Single Precision Floating-Point Values Into Dense Memory

VCOMPRESSPS + — Store Sparse Packed Single Precision Floating-Point Values Into Dense Memory

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 8A /r VCOMPRESSPS xmm1/m128 {k1}{z}, xmm2AV/VAVX512VL AVX512FCompress packed single precision floating-point values from xmm2 to xmm1/m128 using writemask k1.
EVEX.256.66.0F38.W0 8A /r VCOMPRESSPS ymm1/m256 {k1}{z}, ymm2AV/VAVX512VL AVX512FCompress packed single precision floating-point values from ymm2 to ymm1/m256 using writemask k1.
EVEX.512.66.0F38.W0 8A /r VCOMPRESSPS zmm1/m512 {k1}{z}, zmm2AV/VAVX512FCompress packed single precision floating-point values from zmm2 using control mask k1 to zmm1/m512.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Compress (store) up to 16 single precision floating-point values from the source operand (the second operand) to the destination operand (the first operand). The source operand is a ZMM/YMM/XMM register; the destination operand can be a ZMM/YMM/XMM register or a 512/256/128-bit memory location.

+

The opmask register k1 selects the active elements (a partial vector or possibly non-contiguous if less than 16 active elements) from the source operand to compress into a contiguous vector. The contiguous vector is written to the destination starting from the low element of the destination operand.

+

Memory destination version: Only the contiguous vector is written to the destination memory location. EVEX.z must be zero.

+

Register destination version: If the vector length of the contiguous vector is less than that of the input vector in the source operand, the upper bits of the destination register are unmodified if EVEX.z is not set, otherwise the upper bits are zeroed.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Note that the compressed displacement assumes a pre-scaling (N) corresponding to the size of one single element instead of the size of the full vector.
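As a hedged sketch of a common use of the memory form, the following C fragment compacts the non-negative lanes of a vector (stream compaction); it assumes AVX-512F plus POPCNT hardware and a GCC/Clang build with -mavx512f -mpopcnt:

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    float in[16]  = { -1, 2, -3, 4, -5, 6, -7, 8, -9, 10, -11, 12, -13, 14, -15, 16 };
    float out[16] = { 0 };

    __m512 v = _mm512_loadu_ps(in);
    /* Build the writemask from a comparison: keep the non-negative lanes. */
    __mmask16 keep = _mm512_cmp_ps_mask(v, _mm512_setzero_ps(), _CMP_GE_OQ);

    /* Memory form of VCOMPRESSPS: the selected lanes are stored contiguously
       starting at out[0]; nothing else in out[] is touched. */
    _mm512_mask_compressstoreu_ps(out, keep, v);

    int n = _mm_popcnt_u32((unsigned)keep);      /* number of elements written */
    for (int i = 0; i < n; i++)
        printf("%g ", out[i]);
    printf("\n");
    return 0;
}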

+

Operation + ¶ +

+

VCOMPRESSPS (EVEX Encoded Versions) Store Form + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+SIZE := 32
+k := 0
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            DEST[k+SIZE-1:k] := SRC[i+31:i]
+            k := k + SIZE
+    FI;
+ENDFOR;
+
+

VCOMPRESSPS (EVEX Encoded Versions) Reg-Reg Form + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+SIZE := 32
+k := 0
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            DEST[k+SIZE-1:k] := SRC[i+31:i]
+            k := k + SIZE
+    FI;
+ENDFOR
+IF *merging-masking*
+    THEN *DEST[VL-1:k] remains unchanged*
+    ELSE DEST[VL-1:k] := 0
+FI
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCOMPRESSPS __m512 _mm512_mask_compress_ps( __m512 s, __mmask16 k, __m512 a);
+
+
VCOMPRESSPS __m512 _mm512_maskz_compress_ps( __mmask16 k, __m512 a);
+
+
VCOMPRESSPS void _mm512_mask_compressstoreu_ps( void * d, __mmask16 k, __m512 a);
+
+
VCOMPRESSPS __m256 _mm256_mask_compress_ps( __m256 s, __mmask8 k, __m256 a);
+
+
VCOMPRESSPS __m256 _mm256_maskz_compress_ps( __mmask8 k, __m256 a);
+
+
VCOMPRESSPS void _mm256_mask_compressstoreu_ps( void * d, __mmask8 k, __m256 a);
+
+
VCOMPRESSPS __m128 _mm_mask_compress_ps( __m128 s, __mmask8 k, __m128 a);
+
+
VCOMPRESSPS __m128 _mm_maskz_compress_ps( __mmask8 k, __m128 a);
+
+
VCOMPRESSPS void _mm_mask_compressstoreu_ps( void * d, __mmask8 k, __m128 a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Exceptions Type E4.nb. in Table 2-49, “Type E4 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vcvtdq2ph.html b/x86/vcvtdq2ph.html new file mode 100644 index 0000000..38f20d5 --- /dev/null +++ b/x86/vcvtdq2ph.html @@ -0,0 +1,120 @@ + +VCVTDQ2PH + — Convert Packed Signed Doubleword Integers to Packed FP16 Values

VCVTDQ2PH + — Convert Packed Signed Doubleword Integers to Packed FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.NP.MAP5.W0 5B /r VCVTDQ2PH xmm1{k1}{z}, xmm2/m128/m32bcstAV/VAVX512-FP16 AVX512VLConvert four packed signed doubleword integers from xmm2/m128/m32bcst to four packed FP16 values, and store the result in xmm1 subject to writemask k1.
EVEX.256.NP.MAP5.W0 5B /r VCVTDQ2PH xmm1{k1}{z}, ymm2/m256/m32bcstAV/VAVX512-FP16 AVX512VLConvert eight packed signed doubleword integers from ymm2/m256/m32bcst to eight packed FP16 values, and store the result in xmm1 subject to writemask k1.
EVEX.512.NP.MAP5.W0 5B /r VCVTDQ2PH ymm1{k1}{z}, zmm2/m512/m32bcst {er}AV/VAVX512-FP16Convert sixteen packed signed doubleword integers from zmm2/m512/m32bcst to sixteen packed FP16 values, and store the result in ymm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts four, eight, or sixteen packed signed doubleword integers in the source operand to four, eight, or sixteen packed FP16 values in the destination operand.

+

EVEX encoded versions: The source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcast from a 32-bit memory location. The destination operand is a YMM/XMM register conditionally updated with writemask k1.

+

EVEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

If the result of the convert operation is overflow and MXCSR.OM=0 then a SIMD exception will be raised with OE=1, PE=1.
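A small C sketch of the 512-bit form with an explicit rounding operand, using _mm512_cvt_roundepi32_ph from the list below; it assumes AVX512-FP16 (with AVX512VL) hardware and a compiler with _Float16 support (e.g., -mavx512fp16 -mavx512vl):

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m512i ints = _mm512_setr_epi32(1, -2, 65504, 70000,
                                     0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0);

    /* 512-bit form with embedded rounding ({er}); 70000 exceeds the largest
       finite FP16 value (65504) and overflows to +Inf under round-to-nearest. */
    __m256h h = _mm512_cvt_roundepi32_ph(ints,
                    _MM_FROUND_TO_NEAREST_INT | _MM_FROUND_NO_EXC);

    _Float16 out[16];
    _mm256_storeu_ph(out, h);
    printf("%g %g %g %g\n", (double)out[0], (double)out[1],
                            (double)out[2], (double)out[3]);
    return 0;
}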

+

Operation + ¶ +

+

VCVTDQ2PH DEST, SRC + ¶ +

+
VL = 128, 256 or 512
+KL := VL / 32
+IF *SRC is a register* and (VL = 512) AND (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE:
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *SRC is memory* and EVEX.b = 1:
+            tsrc := SRC.dword[0]
+        ELSE
+            tsrc := SRC.dword[j]
+        DEST.fp16[j] := Convert_integer32_to_fp16(tsrc)
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL/2] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTDQ2PH __m256h _mm512_cvt_roundepi32_ph (__m512i a, int rounding);
+
+
VCVTDQ2PH __m256h _mm512_mask_cvt_roundepi32_ph (__m256h src, __mmask16 k, __m512i a, int rounding);
+
+
VCVTDQ2PH __m256h _mm512_maskz_cvt_roundepi32_ph (__mmask16 k, __m512i a, int rounding);
+
+
VCVTDQ2PH __m128h _mm_cvtepi32_ph (__m128i a);
+
+
VCVTDQ2PH __m128h _mm_mask_cvtepi32_ph (__m128h src, __mmask8 k, __m128i a);
+
+
VCVTDQ2PH __m128h _mm_maskz_cvtepi32_ph (__mmask8 k, __m128i a);
+
+
VCVTDQ2PH __m128h _mm256_cvtepi32_ph (__m256i a);
+
+
VCVTDQ2PH __m128h _mm256_mask_cvtepi32_ph (__m128h src, __mmask8 k, __m256i a);
+
+
VCVTDQ2PH __m128h _mm256_maskz_cvtepi32_ph (__mmask8 k, __m256i a);
+
+
VCVTDQ2PH __m256h _mm512_cvtepi32_ph (__m512i a);
+
+
VCVTDQ2PH __m256h _mm512_mask_cvtepi32_ph (__m256h src, __mmask16 k, __m512i a);
+
+
VCVTDQ2PH __m256h _mm512_maskz_cvtepi32_ph (__mmask16 k, __m512i a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vcvtne2ps2bf16.html b/x86/vcvtne2ps2bf16.html new file mode 100644 index 0000000..85bc89a --- /dev/null +++ b/x86/vcvtne2ps2bf16.html @@ -0,0 +1,113 @@ + +VCVTNE2PS2BF16 + — Convert Two Packed Single Data to One Packed BF16 Data

VCVTNE2PS2BF16 + — Convert Two Packed Single Data to One Packed BF16 Data

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.F2.0F38.W0 72 /r VCVTNE2PS2BF16 xmm1{k1}{z}, xmm2, xmm3/m128/m32bcstAV/VAVX512VL AVX512_BF16Convert packed single data from xmm2 and xmm3/m128/m32bcst to packed BF16 data in xmm1 with writemask k1.
EVEX.256.F2.0F38.W0 72 /r VCVTNE2PS2BF16 ymm1{k1}{z}, ymm2, ymm3/m256/m32bcstAV/VAVX512VL AVX512_BF16Convert packed single data from ymm2 and ymm3/m256/m32bcst to packed BF16 data in ymm1 with writemask k1.
EVEX.512.F2.0F38.W0 72 /r VCVTNE2PS2BF16 zmm1{k1}{z}, zmm2, zmm3/m512/m32bcstAV/VAVX512F AVX512_BF16Convert packed single data from zmm2 and zmm3/m512/m32bcst to packed BF16 data in zmm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Converts two SIMD registers of packed single data into a single register of packed BF16 data.

+

This instruction does not support memory fault suppression.

+

This instruction uses “Round to nearest (even)” rounding mode. Output denormals are always flushed to zero and input denormals are always treated as zero. MXCSR is not consulted nor updated. No floating-point exceptions are generated.
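A minimal C sketch of the 512-bit form, assuming AVX512_BF16/AVX512F hardware and a -mavx512bf16 build; the bit patterns in the comments follow from the round-to-nearest-even conversion described above:

#include <immintrin.h>
#include <stdint.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    __m512 lo = _mm512_set1_ps(1.5f);    /* becomes the low 16 BF16 words  */
    __m512 hi = _mm512_set1_ps(-2.0f);   /* becomes the high 16 BF16 words */

    /* src2 (the second argument) fills the low half, src1 the high half. */
    __m512bh packed = _mm512_cvtne2ps_pbh(hi, lo);

    uint16_t w[32];
    memcpy(w, &packed, sizeof w);
    printf("w[0]=0x%04X w[16]=0x%04X\n", w[0], w[16]);  /* 0x3FC0, 0xC000 */
    return 0;
}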

+

Operation + ¶ +

+

VCVTNE2PS2BF16 dest, src1, src2 + ¶ +

+
VL = (128, 256, 512)
+KL = VL/16
+origdest := dest
+FOR i := 0 to KL-1:
+    IF k1[ i ] or *no writemask*:
+        IF i < KL/2:
+            IF src2 is memory and evex.b == 1:
+                t := src2.fp32[0]
+            ELSE:
+                t := src2.fp32[ i ]
+        ELSE:
+            t := src1.fp32[ i-KL/2]
+        // See VCVTNEPS2BF16 for definition of convert helper function
+        dest.word[i] := convert_fp32_to_bfloat16(t)
+    ELSE IF *zeroing*:
+        dest.word[ i ] := 0
+    ELSE: // Merge masking, dest element unchanged
+        dest.word[ i ] := origdest.word[ i ]
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTNE2PS2BF16 __m128bh _mm_cvtne2ps_pbh (__m128, __m128);
+
+
VCVTNE2PS2BF16 __m128bh _mm_mask_cvtne2ps_pbh (__m128bh, __mmask8, __m128, __m128);
+
+
VCVTNE2PS2BF16 __m128bh _mm_maskz_cvtne2ps_pbh (__mmask8, __m128, __m128);
+
+
VCVTNE2PS2BF16 __m256bh _mm256_cvtne2ps_pbh (__m256, __m256);
+
+
VCVTNE2PS2BF16 __m256bh _mm256_mask_cvtne2ps_pbh (__m256bh, __mmask16, __m256, __m256);
+
+
+VCVTNE2PS2BF16 __m256bh _mm256_maskz_cvtne2ps_pbh (__mmask16, __m256, __m256);
+
+
VCVTNE2PS2BF16 __m512bh _mm512_cvtne2ps_pbh (__m512, __m512);
+
+
VCVTNE2PS2BF16 __m512bh _mm512_mask_cvtne2ps_pbh (__m512bh, __mmask32, __m512, __m512);
+
+
VCVTNE2PS2BF16 __m512bh _mm512_maskz_cvtne2ps_pbh (__mmask32, __m512, __m512);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/vcvtneps2bf16.html b/x86/vcvtneps2bf16.html new file mode 100644 index 0000000..5be17e8 --- /dev/null +++ b/x86/vcvtneps2bf16.html @@ -0,0 +1,125 @@ + +VCVTNEPS2BF16 + — Convert Packed Single Data to Packed BF16 Data

VCVTNEPS2BF16 + — Convert Packed Single Data to Packed BF16 Data

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.F3.0F38.W0 72 /r VCVTNEPS2BF16 xmm1{k1}{z}, xmm2/m128/m32bcstAV/VAVX512VL AVX512_BF16Convert packed single data from xmm2/m128 to packed BF16 data in xmm1 with writemask k1.
EVEX.256.F3.0F38.W0 72 /r VCVTNEPS2BF16 xmm1{k1}{z}, ymm2/m256/m32bcstAV/VAVX512VL AVX512_BF16Convert packed single data from ymm2/m256 to packed BF16 data in xmm1 with writemask k1.
EVEX.512.F3.0F38.W0 72 /r VCVTNEPS2BF16 ymm1{k1}{z}, zmm2/m512/m32bcstAV/VAVX512F AVX512_BF16Convert packed single data from zmm2/m512 to packed BF16 data in ymm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts one SIMD register of packed single data into a single register of packed BF16 data.

+

This instruction uses “Round to nearest (even)” rounding mode. Output denormals are always flushed to zero and input denormals are always treated as zero. MXCSR is not consulted nor updated.

+

As the instruction operand encoding table shows, the EVEX.vvvv field is not used for encoding an operand. EVEX.vvvv is reserved and must be 0b1111 otherwise instructions will #UD.

+

Operation + ¶ +

+
Define convert_fp32_to_bfloat16(x):
+    IF x is zero or denormal:
+        dest[15] := x[31] // sign preserving zero (denormals go to zero)
+        dest[14:0] := 0
+    ELSE IF x is infinity:
+        dest[15:0] := x[31:16]
+    ELSE IF x is NAN:
+        dest[15:0] := x[31:16] // truncate and set MSB of the mantissa to force QNAN
+        dest[6] := 1
+    ELSE // normal number
+        LSB := x[16]
+        rounding_bias := 0x00007FFF + LSB
+        temp[31:0] := x[31:0] + rounding_bias // integer add
+        dest[15:0] := temp[31:16]
+    RETURN dest
+
+
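For reference, a scalar C rendering of the convert_fp32_to_bfloat16 helper above (a sketch in plain C99, not the hardware implementation):

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Scalar sketch of the convert_fp32_to_bfloat16 helper above. */
static uint16_t fp32_to_bf16(float f)
{
    uint32_t x;
    memcpy(&x, &f, sizeof x);                 /* reinterpret the FP32 bits */

    uint32_t exp = (x >> 23) & 0xFFu;
    uint32_t man = x & 0x7FFFFFu;

    if (exp == 0)                             /* zero or denormal: sign-preserving zero */
        return (uint16_t)((x >> 16) & 0x8000u);
    if (exp == 0xFFu) {                       /* infinity or NaN */
        uint16_t r = (uint16_t)(x >> 16);     /* truncate */
        if (man != 0)
            r |= 0x40;                        /* set mantissa MSB to force a QNaN */
        return r;
    }
    uint32_t lsb = (x >> 16) & 1u;            /* round to nearest even via integer add */
    return (uint16_t)((x + 0x7FFFu + lsb) >> 16);
}

int main(void)
{
    float v[] = { 1.0f, 3.14159f, -2.0f, 1e-40f };
    for (int i = 0; i < 4; i++)
        printf("%g -> 0x%04X\n", (double)v[i], fp32_to_bf16(v[i]));
    return 0;
}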

VCVTNEPS2BF16 dest, src + ¶ +

+
VL = (128, 256, 512)
+KL = VL/16
+origdest := dest
+FOR i := 0 to KL/2-1:
+    IF k1[ i ] or *no writemask*:
+        IF src is memory and evex.b == 1:
+            t := src.fp32[0]
+        ELSE:
+            t := src.fp32[ i ]
+        dest.word[i] := convert_fp32_to_bfloat16(t)
+    ELSE IF *zeroing*:
+        dest.word[ i ] := 0
+    ELSE: // Merge masking, dest element unchanged
+        dest.word[ i ] := origdest.word[ i ]
+DEST[MAXVL-1:VL/2] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTNEPS2BF16 __m128bh _mm_cvtneps_pbh (__m128);
+
+
VCVTNEPS2BF16 __m128bh _mm_mask_cvtneps_pbh (__m128bh, __mmask8, __m128);
+
+
VCVTNEPS2BF16 __m128bh _mm_maskz_cvtneps_pbh (__mmask8, __m128);
+
+
VCVTNEPS2BF16 __m128bh _mm256_cvtneps_pbh (__m256);
+
+
VCVTNEPS2BF16 __m128bh _mm256_mask_cvtneps_pbh (__m128bh, __mmask8, __m256);
+
+
VCVTNEPS2BF16 __m128bh _mm256_maskz_cvtneps_pbh (__mmask8, __m256);
+
+
VCVTNEPS2BF16 __m256bh _mm512_cvtneps_pbh (__m512);
+
+
VCVTNEPS2BF16 __m256bh _mm512_mask_cvtneps_pbh (__m256bh, __mmask16, __m512);
+
+
VCVTNEPS2BF16 __m256bh _mm512_maskz_cvtneps_pbh (__mmask16, __m512);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vcvtpd2ph.html b/x86/vcvtpd2ph.html new file mode 100644 index 0000000..65a4082 --- /dev/null +++ b/x86/vcvtpd2ph.html @@ -0,0 +1,120 @@ + +VCVTPD2PH + — Convert Packed Double Precision FP Values to Packed FP16 Values

VCVTPD2PH + — Convert Packed Double Precision FP Values to Packed FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.MAP5.W1 5A /r VCVTPD2PH xmm1{k1}{z}, xmm2/m128/m64bcstAV/VAVX512-FP16 AVX512VLConvert two packed double precision floating-point values in xmm2/m128/m64bcst to two packed FP16 values, and store the result in xmm1 subject to writemask k1.
EVEX.256.66.MAP5.W1 5A /r VCVTPD2PH xmm1{k1}{z}, ymm2/m256/m64bcstAV/VAVX512-FP16 AVX512VLConvert four packed double precision floating-point values in ymm2/m256/m64bcst to four packed FP16 values, and store the result in xmm1 subject to writemask k1.
EVEX.512.66.MAP5.W1 5A /r VCVTPD2PH xmm1{k1}{z}, zmm2/m512/m64bcst {er}AV/VAVX512-FP16Convert eight packed double precision floating-point values in zmm2/m512/m64bcst to eight packed FP16 values, and store the result in ymm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts two, four, or eight packed double precision floating-point values in the source operand (second operand) to two, four, or eight packed FP16 values in the destination operand (first operand). When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or the embedded rounding control bits.

+

EVEX encoded versions: The source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcast from a 64-bit memory location. The destination operand is a XMM register conditionally updated with writemask k1. The upper bits (MAXVL-1:128/64/32) of the corresponding destination are zeroed.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

This instruction uses MXCSR.DAZ for handling FP64 inputs. FP16 outputs can be normal or denormal, and are not conditionally flushed to zero.
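A short C sketch of the 128-bit form using _mm_cvtpd_ph from the list below; it assumes AVX512-FP16 with AVX512VL and a compiler with _Float16 support (-mavx512fp16 -mavx512vl):

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    /* element 0 = 1e5 (overflows FP16 to +Inf), element 1 = 0.1 (rounded). */
    __m128d d = _mm_set_pd(0.1, 1e5);

    __m128h h = _mm_cvtpd_ph(d);   /* result occupies only bits 31:0 of the XMM result */

    _Float16 out[8];
    _mm_storeu_ph(out, h);
    printf("%g %g\n", (double)out[0], (double)out[1]);   /* inf 0.0999756 */
    return 0;
}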

+

Operation + ¶ +

+

VCVTPD2PH DEST, SRC + ¶ +

+
VL = 128, 256 or 512
+KL := VL / 64
+IF *SRC is a register* and (VL = 512) AND (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE:
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *SRC is memory* and EVEX.b = 1:
+            tsrc := SRC.double[0]
+        ELSE
+            tsrc := SRC.double[j]
+        DEST.fp16[j] := Convert_fp64_to_fp16(tsrc)
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL/4] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTPD2PH __m128h _mm512_cvt_roundpd_ph (__m512d a, int rounding);
+
+
VCVTPD2PH __m128h _mm512_mask_cvt_roundpd_ph (__m128h src, __mmask8 k, __m512d a, int rounding);
+
+
VCVTPD2PH __m128h _mm512_maskz_cvt_roundpd_ph (__mmask8 k, __m512d a, int rounding);
+
+
VCVTPD2PH __m128h _mm_cvtpd_ph (__m128d a);
+
+
VCVTPD2PH __m128h _mm_mask_cvtpd_ph (__m128h src, __mmask8 k, __m128d a);
+
+
VCVTPD2PH __m128h _mm_maskz_cvtpd_ph (__mmask8 k, __m128d a);
+
+
VCVTPD2PH __m128h _mm256_cvtpd_ph (__m256d a);
+
+
VCVTPD2PH __m128h _mm256_mask_cvtpd_ph (__m128h src, __mmask8 k, __m256d a);
+
+
VCVTPD2PH __m128h _mm256_maskz_cvtpd_ph (__mmask8 k, __m256d a);
+
+
VCVTPD2PH __m128h _mm512_cvtpd_ph (__m512d a);
+
+
VCVTPD2PH __m128h _mm512_mask_cvtpd_ph (__m128h src, __mmask8 k, __m512d a);
+
+
VCVTPD2PH __m128h _mm512_maskz_cvtpd_ph (__mmask8 k, __m512d a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Overflow, Precision, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vcvtpd2qq.html b/x86/vcvtpd2qq.html new file mode 100644 index 0000000..001728b --- /dev/null +++ b/x86/vcvtpd2qq.html @@ -0,0 +1,152 @@ + +VCVTPD2QQ + — Convert Packed Double Precision Floating-Point Values to Packed QuadwordIntegers

VCVTPD2QQ + — Convert Packed Double Precision Floating-Point Values to Packed QuadwordIntegers

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F.W1 7B /r VCVTPD2QQ xmm1 {k1}{z}, xmm2/m128/m64bcstAV/VAVX512VL AVX512DQConvert two packed double precision floating-point values from xmm2/m128/m64bcst to two packed quadword integers in xmm1 with writemask k1.
EVEX.256.66.0F.W1 7B /r VCVTPD2QQ ymm1 {k1}{z}, ymm2/m256/m64bcstAV/VAVX512VL AVX512DQConvert four packed double precision floating-point values from ymm2/m256/m64bcst to four packed quadword integers in ymm1 with writemask k1.
EVEX.512.66.0F.W1 7B /r VCVTPD2QQ zmm1 {k1}{z}, zmm2/m512/m64bcst{er}AV/VAVX512DQConvert eight packed double precision floating-point values from zmm2/m512/m64bcst to eight packed quadword integers in zmm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts packed double precision floating-point values in the source operand (second operand) to packed quadword integers in the destination operand (first operand).

+

EVEX encoded versions: The source operand is a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or the embedded rounding control bits. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the indefinite integer value (2^(w-1), where w represents the number of bits in the destination format) is returned.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.
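A brief C sketch contrasting embedded rounding modes on the 512-bit form, using _mm512_cvt_roundpd_epi64 from the list below; it assumes AVX512DQ hardware and a -mavx512dq build:

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    __m512d v = _mm512_set1_pd(3.5);

    /* Embedded rounding overrides MXCSR.RC for this instruction only. */
    __m512i down = _mm512_cvt_roundpd_epi64(v,
                       _MM_FROUND_TO_NEG_INF | _MM_FROUND_NO_EXC);
    __m512i near = _mm512_cvt_roundpd_epi64(v,
                       _MM_FROUND_TO_NEAREST_INT | _MM_FROUND_NO_EXC);

    int64_t a[8], b[8];
    _mm512_storeu_si512((void *)a, down);
    _mm512_storeu_si512((void *)b, near);
    printf("toward -inf: %lld   nearest-even: %lld\n",
           (long long)a[0], (long long)b[0]);   /* 3 and 4 */
    return 0;
}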

+

Operation + ¶ +

+

VCVTPD2QQ (EVEX Encoded Version) When SRC Operand is a Register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL == 512) AND (EVEX.b == 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] :=
+            Convert_Double_Precision_Floating_Point_To_QuadInteger(SRC[i+63:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VCVTPD2QQ (EVEX Encoded Version) When SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b == 1)
+                THEN
+                    DEST[i+63:i] :=
+                        Convert_Double_Precision_Floating_Point_To_QuadInteger(SRC[63:0])
+                ELSE
+                    DEST[i+63:i] := Convert_Double_Precision_Floating_Point_To_QuadInteger(SRC[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTPD2QQ __m512i _mm512_cvtpd_epi64( __m512d a);
+
+
VCVTPD2QQ __m512i _mm512_mask_cvtpd_epi64( __m512i s, __mmask8 k, __m512d a);
+
+
VCVTPD2QQ __m512i _mm512_maskz_cvtpd_epi64( __mmask8 k, __m512d a);
+
+
VCVTPD2QQ __m512i _mm512_cvt_roundpd_epi64( __m512d a, int r);
+
+
VCVTPD2QQ __m512i _mm512_mask_cvt_roundpd_epi64( __m512i s, __mmask8 k, __m512d a, int r);
+
+
VCVTPD2QQ __m512i _mm512_maskz_cvt_roundpd_epi64( __mmask8 k, __m512d a, int r);
+
+
VCVTPD2QQ __m256i _mm256_mask_cvtpd_epi64( __m256i s, __mmask8 k, __m256d a);
+
+
VCVTPD2QQ __m256i _mm256_maskz_cvtpd_epi64( __mmask8 k, __m256d a);
+
+
VCVTPD2QQ __m128i _mm_mask_cvtpd_epi64( __m128i s, __mmask8 k, __m128d a);
+
+
VCVTPD2QQ __m128i _mm_maskz_cvtpd_epi64( __mmask8 k, __m128d a);
+
+
VCVTPD2QQ __m256i _mm256_cvtpd_epi64 (__m256d src)
+
+
VCVTPD2QQ __m128i _mm_cvtpd_epi64 (__m128d src)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vcvtpd2udq.html b/x86/vcvtpd2udq.html new file mode 100644 index 0000000..22fd821 --- /dev/null +++ b/x86/vcvtpd2udq.html @@ -0,0 +1,152 @@ + +VCVTPD2UDQ + — Convert Packed Double Precision Floating-Point Values to Packed UnsignedDoubleword Integers

VCVTPD2UDQ + — Convert Packed Double Precision Floating-Point Values to Packed UnsignedDoubleword Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.0F.W1 79 /r VCVTPD2UDQ xmm1 {k1}{z}, xmm2/m128/m64bcstAV/VAVX512VL AVX512FConvert two packed double precision floating-point values in xmm2/m128/m64bcst to two unsigned doubleword integers in xmm1 subject to writemask k1.
EVEX.256.0F.W1 79 /r VCVTPD2UDQ xmm1 {k1}{z}, ymm2/m256/m64bcstAV/VAVX512VL AVX512FConvert four packed double precision floating-point values in ymm2/m256/m64bcst to four unsigned doubleword integers in xmm1 subject to writemask k1.
EVEX.512.0F.W1 79 /r VCVTPD2UDQ ymm1 {k1}{z}, zmm2/m512/m64bcst{er}AV/VAVX512FConvert eight packed double precision floating-point values in zmm2/m512/m64bcst to eight unsigned doubleword integers in ymm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts packed double precision floating-point values in the source operand (the second operand) to packed unsigned doubleword integers in the destination operand (the first operand).

+

When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or the embedded rounding control bits. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the integer value 2^w – 1 is returned, where w represents the number of bits in the destination format.

+

The source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1. The upper bits (MAXVL-1:256) of the corresponding destination are zeroed.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.
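A short C sketch of the behavior described above for an unrepresentable (negative) input, using _mm512_cvtpd_epu32 from the list below; it assumes AVX512F hardware, a -mavx512f build, and the default masked invalid exception:

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* element 0 = -1.0 (not representable as unsigned), element 1 = 1.5. */
    __m512d v = _mm512_set_pd(8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 1.5, -1.0);

    __m256i u = _mm512_cvtpd_epu32(v);      /* narrows 8 doubles to 8 dwords */

    uint32_t out[8];
    _mm256_storeu_si256((__m256i *)out, u);
    printf("%u %u\n", out[0], out[1]);      /* 4294967295 (2^32 - 1) and 2 */
    return 0;
}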

+

Operation + ¶ +

+

VCVTPD2UDQ (EVEX Encoded Versions) When SRC2 Operand is a Register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    k := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            DEST[i+31:i] :=
+            Convert_Double_Precision_Floating_Point_To_UInteger(SRC[k+63:k])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/2] := 0
+
+

VCVTPD2UDQ (EVEX Encoded Versions) When SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    k := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+31:i] :=
+            Convert_Double_Precision_Floating_Point_To_UInteger(SRC[63:0])
+                ELSE
+                    DEST[i+31:i] :=
+            Convert_Double_Precision_Floating_Point_To_UInteger(SRC[k+63:k])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/2] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTPD2UDQ __m256i _mm512_cvtpd_epu32( __m512d a);
+
+
VCVTPD2UDQ __m256i _mm512_mask_cvtpd_epu32( __m256i s, __mmask8 k, __m512d a);
+
+
VCVTPD2UDQ __m256i _mm512_maskz_cvtpd_epu32( __mmask8 k, __m512d a);
+
+
VCVTPD2UDQ __m256i _mm512_cvt_roundpd_epu32( __m512d a, int r);
+
+
VCVTPD2UDQ __m256i _mm512_mask_cvt_roundpd_epu32( __m256i s, __mmask8 k, __m512d a, int r);
+
+
VCVTPD2UDQ __m256i _mm512_maskz_cvt_roundpd_epu32( __mmask8 k, __m512d a, int r);
+
+
VCVTPD2UDQ __m128i _mm256_mask_cvtpd_epu32( __m128i s, __mmask8 k, __m256d a);
+
+
VCVTPD2UDQ __m128i _mm256_maskz_cvtpd_epu32( __mmask8 k, __m256d a);
+
+
VCVTPD2UDQ __m128i _mm_mask_cvtpd_epu32( __m128i s, __mmask8 k, __m128d a);
+
+
VCVTPD2UDQ __m128i _mm_maskz_cvtpd_epu32( __mmask8 k, __m128d a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vcvtpd2uqq.html b/x86/vcvtpd2uqq.html new file mode 100644 index 0000000..2fe082b --- /dev/null +++ b/x86/vcvtpd2uqq.html @@ -0,0 +1,153 @@ + +VCVTPD2UQQ + — Convert Packed Double Precision Floating-Point Values to Packed UnsignedQuadword Integers

VCVTPD2UQQ + — Convert Packed Double Precision Floating-Point Values to Packed UnsignedQuadword Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F.W1 79 /r VCVTPD2UQQ xmm1 {k1}{z}, xmm2/m128/m64bcstAV/VAVX512VL AVX512DQConvert two packed double precision floating-point values from xmm2/mem to two packed unsigned quadword integers in xmm1 with writemask k1.
EVEX.256.66.0F.W1 79 /r VCVTPD2UQQ ymm1 {k1}{z}, ymm2/m256/m64bcstAV/VAVX512VL AVX512DQConvert four packed double precision floating-point values from ymm2/mem to four packed unsigned quadword integers in ymm1 with writemask k1.
EVEX.512.66.0F.W1 79 /r VCVTPD2UQQ zmm1 {k1}{z}, zmm2/m512/m64bcst{er}AV/VAVX512DQConvert eight packed double precision floating-point values from zmm2/mem to eight packed unsigned quadword integers in zmm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts packed double precision floating-point values in the source operand (second operand) to packed unsigned quadword integers in the destination operand (first operand).

+

When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or the embedded rounding control bits. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the integer value 2^w – 1 is returned, where w represents the number of bits in the destination format.

+

The source operand is a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+

VCVTPD2UQQ (EVEX Encoded Versions) When SRC Operand is a Register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL == 512) AND (EVEX.b == 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] :=
+            Convert_Double_Precision_Floating_Point_To_UQuadInteger(SRC[i+63:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VCVTPD2UQQ (EVEX Encoded Versions) When SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b == 1)
+                THEN
+                    DEST[i+63:i] :=
+            Convert_Double_Precision_Floating_Point_To_UQuadInteger(SRC[63:0])
+                ELSE
+                    DEST[i+63:i] :=
+            Convert_Double_Precision_Floating_Point_To_UQuadInteger(SRC[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTPD2UQQ __m512i _mm512_cvtpd_epu64( __m512d a);
+
+
VCVTPD2UQQ __m512i _mm512_mask_cvtpd_epu64( __m512i s, __mmask8 k, __m512d a);
+
+
VCVTPD2UQQ __m512i _mm512_maskz_cvtpd_epu64( __mmask8 k, __m512d a);
+
+
VCVTPD2UQQ __m512i _mm512_cvt_roundpd_epu64( __m512d a, int r);
+
+
VCVTPD2UQQ __m512i _mm512_mask_cvt_roundpd_epu64( __m512i s, __mmask8 k, __m512d a, int r);
+
+
VCVTPD2UQQ __m512i _mm512_maskz_cvt_roundpd_epu64( __mmask8 k, __m512d a, int r);
+
+
VCVTPD2UQQ __m256i _mm256_mask_cvtpd_epu64( __m256i s, __mmask8 k, __m256d a);
+
+
VCVTPD2UQQ __m256i _mm256_maskz_cvtpd_epu64( __mmask8 k, __m256d a);
+
+
VCVTPD2UQQ __m128i _mm_mask_cvtpd_epu64( __m128i s, __mmask8 k, __m128d a);
+
+
VCVTPD2UQQ __m128i _mm_maskz_cvtpd_epu64( __mmask8 k, __m128d a);
+
+
VCVTPD2UQQ __m256i _mm256_cvtpd_epu64 (__m256d src)
+
+
VCVTPD2UQQ __m128i _mm_cvtpd_epu64 (__m128d src)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vcvtph2dq.html b/x86/vcvtph2dq.html new file mode 100644 index 0000000..ff62e9b --- /dev/null +++ b/x86/vcvtph2dq.html @@ -0,0 +1,119 @@ + +VCVTPH2DQ + — Convert Packed FP16 Values to Signed Doubleword Integers

VCVTPH2DQ + — Convert Packed FP16 Values to Signed Doubleword Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.MAP5.W0 5B /r VCVTPH2DQ xmm1{k1}{z}, xmm2/m64/m16bcstAV/VAVX512-FP16 AVX512VLConvert four packed FP16 values in xmm2/m64/m16bcst to four signed doubleword integers, and store the result in xmm1 subject to writemask k1.
EVEX.256.66.MAP5.W0 5B /r VCVTPH2DQ ymm1{k1}{z}, xmm2/m128/m16bcstAV/VAVX512-FP16 AVX512VLConvert eight packed FP16 values in xmm2/m128/m16bcst to eight signed doubleword integers, and store the result in ymm1 subject to writemask k1.
EVEX.512.66.MAP5.W0 5B /r VCVTPH2DQ zmm1{k1}{z}, ymm2/m256/m16bcst {er}AV/VAVX512-FP16Convert sixteen packed FP16 values in ymm2/m256/m16bcst to sixteen signed doubleword integers, and store the result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AHalfModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts packed FP16 values in the source operand to signed doubleword integers in the destination operand.

+

When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or the embedded rounding control bits. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the indefinite integer value is returned.

+

The destination elements are updated according to the writemask.
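A small C sketch of the 128-bit masked form, using _mm_mask_cvtph_epi32 from the list below; it assumes AVX512-FP16 with AVX512VL and a compiler with _Float16 support (-mavx512fp16 -mavx512vl):

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    _Float16 in[8] = { 1.5f, -2.5f, 100.0f, 7.0f, 0, 0, 0, 0 };
    __m128h h = _mm_loadu_ph(in);     /* the low 64 bits hold the four FP16 inputs */

    /* Masked form: only lanes 0 and 2 are converted; lanes 1 and 3 keep the
       pass-through values (merging) because their mask bits are clear. */
    __m128i pass = _mm_set1_epi32(-1);
    __m128i r = _mm_mask_cvtph_epi32(pass, 0x5, h);

    int32_t out[4];
    _mm_storeu_si128((__m128i *)out, r);
    printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);  /* 2 -1 100 -1 */
    return 0;
}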

+

Operation + ¶ +

+

VCVTPH2DQ DEST, SRC + ¶ +

+
VL = 128, 256 or 512
+KL := VL / 32
+IF *SRC is a register* and (VL = 512) and (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE:
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *SRC is memory* and EVEX.b = 1:
+            tsrc := SRC.fp16[0]
+        ELSE
+            tsrc := SRC.fp16[j]
+        DEST.dword[j] := Convert_fp16_to_integer32(tsrc)
+    ELSE IF *zeroing*:
+        DEST.dword[j] := 0
+    // else dest.dword[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTPH2DQ __m512i _mm512_cvt_roundph_epi32 (__m256h a, int rounding);
+
+
VCVTPH2DQ __m512i _mm512_mask_cvt_roundph_epi32 (__m512i src, __mmask16 k, __m256h a, int rounding);
+
+
VCVTPH2DQ __m512i _mm512_maskz_cvt_roundph_epi32 (__mmask16 k, __m256h a, int rounding);
+
+
VCVTPH2DQ __m128i _mm_cvtph_epi32 (__m128h a);
+
+
VCVTPH2DQ __m128i _mm_mask_cvtph_epi32 (__m128i src, __mmask8 k, __m128h a);
+
+
VCVTPH2DQ __m128i _mm_maskz_cvtph_epi32 (__mmask8 k, __m128h a);
+
+
VCVTPH2DQ __m256i _mm256_cvtph_epi32 (__m128h a);
+
+
VCVTPH2DQ __m256i _mm256_mask_cvtph_epi32 (__m256i src, __mmask8 k, __m128h a);
+
+
VCVTPH2DQ __m256i _mm256_maskz_cvtph_epi32 (__mmask8 k, __m128h a);
+
+
VCVTPH2DQ __m512i _mm512_cvtph_epi32 (__m256h a);
+
+
VCVTPH2DQ __m512i _mm512_mask_cvtph_epi32 (__m512i src, __mmask16 k, __m256h a);
+
+
VCVTPH2DQ __m512i _mm512_maskz_cvtph_epi32 (__mmask16 k, __m256h a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vcvtph2pd.html b/x86/vcvtph2pd.html new file mode 100644 index 0000000..78590ad --- /dev/null +++ b/x86/vcvtph2pd.html @@ -0,0 +1,114 @@ + +VCVTPH2PD + — Convert Packed FP16 Values to FP64 Values

VCVTPH2PD + — Convert Packed FP16 Values to FP64 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.NP.MAP5.W0 5A /r VCVTPH2PD xmm1{k1}{z}, xmm2/m32/m16bcstAV/VAVX512-FP16 AVX512VLConvert packed FP16 values in xmm2/m32/m16bcst to FP64 values, and store result in xmm1 subject to writemask k1.
EVEX.256.NP.MAP5.W0 5A /r VCVTPH2PD ymm1{k1}{z}, xmm2/m64/m16bcstAV/VAVX512-FP16 AVX512VLConvert packed FP16 values in xmm2/m64/m16bcst to FP64 values, and store result in ymm1 subject to writemask k1.
EVEX.512.NP.MAP5.W0 5A /r VCVTPH2PD zmm1{k1}{z}, xmm2/m128/m16bcst {sae}AV/VAVX512-FP16Convert packed FP16 values in xmm2/m128/m16bcst to FP64 values, and store result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AQuarterModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts packed FP16 values to FP64 values in the destination register. The destination elements are updated according to the writemask.

+

This instruction handles both normal and denormal FP16 inputs.
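A short C sketch showing that an FP16 denormal input is converted rather than flushed, using _mm256_cvtph_pd from the list below; it assumes AVX512-FP16 with AVX512VL and a -mavx512fp16 -mavx512vl build:

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    /* 6e-8 is below the smallest normal FP16 (~6.1e-5), so it is stored as an
       FP16 denormal; the widening conversion converts it instead of flushing
       it to zero (every FP16 value is exactly representable in FP64). */
    _Float16 in[8] = { 6e-8f, 1.0f, 0.333251953125f, -2.0f, 0, 0, 0, 0 };

    __m256d d = _mm256_cvtph_pd(_mm_loadu_ph(in));   /* four FP16 -> four FP64 */

    double out[4];
    _mm256_storeu_pd(out, d);
    printf("%.10g %.10g %.10g %.10g\n", out[0], out[1], out[2], out[3]);
    return 0;
}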

+

Operation + ¶ +

+

VCVTPH2PD DEST, SRC + ¶ +

+
VL = 128, 256, or 512
+KL := VL/64
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *SRC is memory* and EVEX.b = 1:
+            tsrc := SRC.fp16[0]
+        ELSE
+            tsrc := SRC.fp16[j]
+        DEST.fp64[j] := Convert_fp16_to_fp64(tsrc)
+    ELSE IF *zeroing*:
+        DEST.fp64[j] := 0
+    // else dest.fp64[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTPH2PD __m512d _mm512_cvt_roundph_pd (__m128h a, int sae);
+
+
VCVTPH2PD __m512d _mm512_mask_cvt_roundph_pd (__m512d src, __mmask8 k, __m128h a, int sae);
+
+
VCVTPH2PD __m512d _mm512_maskz_cvt_roundph_pd (__mmask8 k, __m128h a, int sae);
+
+
VCVTPH2PD __m128d _mm_cvtph_pd (__m128h a);
+
+
VCVTPH2PD __m128d _mm_mask_cvtph_pd (__m128d src, __mmask8 k, __m128h a);
+
+
VCVTPH2PD __m128d _mm_maskz_cvtph_pd (__mmask8 k, __m128h a);
+
+
VCVTPH2PD __m256d _mm256_cvtph_pd (__m128h a);
+
+
VCVTPH2PD __m256d _mm256_mask_cvtph_pd (__m256d src, __mmask8 k, __m128h a);
+
+
VCVTPH2PD __m256d _mm256_maskz_cvtph_pd (__mmask8 k, __m128h a);
+
+
VCVTPH2PD __m512d _mm512_cvtph_pd (__m128h a);
+
+
VCVTPH2PD __m512d _mm512_mask_cvtph_pd (__m512d src, __mmask8 k, __m128h a);
+
+
VCVTPH2PD __m512d _mm512_maskz_cvtph_pd (__mmask8 k, __m128h a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vcvtph2ps.vcvtph2psx.html b/x86/vcvtph2ps.vcvtph2psx.html new file mode 100644 index 0000000..4d803f6 --- /dev/null +++ b/x86/vcvtph2ps.vcvtph2psx.html @@ -0,0 +1,264 @@ + +VCVTPH2PS/VCVTPH2PSX + — Convert Packed FP16 Values to Single Precision Floating-PointValues

VCVTPH2PS/VCVTPH2PSX + — Convert Packed FP16 Values to Single Precision Floating-PointValues

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W0 13 /r VCVTPH2PS xmm1, xmm2/m64AV/VF16CConvert four packed FP16 values in xmm2/m64 to packed single precision floating-point value in xmm1.
VEX.256.66.0F38.W0 13 /r VCVTPH2PS ymm1, xmm2/m128AV/VF16CConvert eight packed FP16 values in xmm2/m128 to packed single precision floating-point value in ymm1.
EVEX.128.66.0F38.W0 13 /r VCVTPH2PS xmm1 {k1}{z}, xmm2/m64BV/VAVX512VL AVX512FConvert four packed FP16 values in xmm2/m64 to packed single precision floating-point values in xmm1 subject to writemask k1.
EVEX.256.66.0F38.W0 13 /r VCVTPH2PS ymm1 {k1}{z}, xmm2/m128BV/VAVX512VL AVX512FConvert eight packed FP16 values in xmm2/m128 to packed single precision floating-point values in ymm1 subject to writemask k1.
EVEX.512.66.0F38.W0 13 /r VCVTPH2PS zmm1 {k1}{z}, ymm2/m256 {sae}BV/VAVX512FConvert sixteen packed FP16 values in ymm2/m256 to packed single precision floating-point values in zmm1 subject to writemask k1.
EVEX.128.66.MAP6.W0 13 /r VCVTPH2PSX xmm1{k1}{z}, xmm2/m64/m16bcstCV/VAVX512-FP16 AVX512VLConvert four packed FP16 values in xmm2/m64/m16bcst to four packed single precision floating-point values, and store result in xmm1 subject to writemask k1.
EVEX.256.66.MAP6.W0 13 /r VCVTPH2PSX ymm1{k1}{z}, xmm2/m128/m16bcstCV/VAVX512-FP16 AVX512VLConvert eight packed FP16 values in xmm2/m128/m16bcst to eight packed single precision floating-point values, and store result in ymm1 subject to writemask k1.
EVEX.512.66.MAP6.W0 13 /r VCVTPH2PSX zmm1{k1}{z}, ymm2/m256/m16bcst {sae}CV/VAVX512-FP16Convert sixteen packed FP16 values in ymm2/m256/m16bcst to sixteen packed single precision floating-point values, and store result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BHalf MemModRM:reg (w)ModRM:r/m (r)N/AN/A
CHalfModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts packed half precision (16-bits) floating-point values in the low-order bits of the source operand (the second operand) to packed single precision floating-point values and writes the converted values into the destination operand (the first operand).

+

In case of a denormal operand, the correct normal result is returned. MXCSR.DAZ is ignored and treated as if it were 0. No denormal exception is reported in MXCSR.

+

VEX.128 version: The source operand is a XMM register or 64-bit memory location. The destination operand is a XMM register. The upper bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

VEX.256 version: The source operand is a XMM register or 128-bit memory location. The destination operand is a YMM register. Bits (MAXVL-1:256) of the corresponding destination register are zeroed.

+

EVEX encoded versions: The source operand is a YMM/XMM/XMM (low 64-bits) register or a 256/128/64-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

The diagram below illustrates how data is converted from four packed half precision (in 64 bits) to four single precision (in 128 bits) floating-point values.

+

Note: VEX.vvvv and EVEX.vvvv are reserved (must be 1111b).

+
+ + + + + + + + +
[Figure: the four packed FP16 values VH3..VH0 in xmm2/mem64 are each converted to the single precision values VS3..VS0 in xmm1.]
+
Figure 5-6. VCVTPH2PS (128-bit Version)
+

The VCVTPH2PSX instruction is a new form of the PH to PS conversion instruction, encoded in map 6. The previous version of the instruction, VCVTPH2PS, that is present in AVX512F (encoded in map 2, 0F38) does not support embedded broadcasting. The VCVTPH2PSX instruction has the embedded broadcasting option available.

+

The instructions associated with AVX512_FP16 always handle FP16 denormal number inputs; denormal inputs are not treated as zero.
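A minimal C sketch of the VEX.256 form using the F16C intrinsic _mm256_cvtph_ps listed below (the VCVTPH2PSX forms additionally allow masking and an embedded broadcast); it assumes F16C hardware and a -mf16c build:

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    /* FP16 values packed as raw 16-bit patterns in an XMM register:
       0x3C00 = 1.0, 0xC000 = -2.0, 0x3555 ~ 0.3333, 0x7C00 = +Inf. */
    __m128i halves = _mm_setr_epi16(0x3C00, (short)0xC000, 0x3555, 0x7C00,
                                    0, 0, 0, 0);

    __m256 singles = _mm256_cvtph_ps(halves);   /* VCVTPH2PS (VEX.256 form) */

    float out[8];
    _mm256_storeu_ps(out, singles);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}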

+

Operation + ¶ +

+
vCvt_h2s(SRC1[15:0])
+{
+RETURN Cvt_Half_Precision_To_Single_Precision(SRC1[15:0]);
+}
+
+

VCVTPH2PS (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    k := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] :=
+            vCvt_h2s(SRC[k+15:k])
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VCVTPH2PS (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := vCvt_h2s(SRC1[15:0]);
+DEST[63:32] := vCvt_h2s(SRC1[31:16]);
+DEST[95:64] := vCvt_h2s(SRC1[47:32]);
+DEST[127:96] := vCvt_h2s(SRC1[63:48]);
+DEST[159:128] := vCvt_h2s(SRC1[79:64]);
+DEST[191:160] := vCvt_h2s(SRC1[95:80]);
+DEST[223:192] := vCvt_h2s(SRC1[111:96]);
+DEST[255:224] := vCvt_h2s(SRC1[127:112]);
+DEST[MAXVL-1:256] := 0
+
+

VCVTPH2PS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := vCvt_h2s(SRC1[15:0]);
+DEST[63:32] := vCvt_h2s(SRC1[31:16]);
+DEST[95:64] := vCvt_h2s(SRC1[47:32]);
+DEST[127:96] := vCvt_h2s(SRC1[63:48]);
+DEST[MAXVL-1:128] := 0
+
+

VCVTPH2PSX DEST, SRC + ¶ +

+
VL = 128, 256, or 512
+KL := VL/32
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *SRC is memory* and EVEX.b = 1:
+            tsrc := SRC.fp16[0]
+        ELSE
+            tsrc := SRC.fp16[j]
+        DEST.fp32[j] := Convert_fp16_to_fp32(tsrc)
+    ELSE IF *zeroing*:
+        DEST.fp32[j] := 0
+    // else dest.fp32[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Flags Affected + ¶ +

+

None.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTPH2PS __m512 _mm512_cvtph_ps( __m256i a);
+
+
VCVTPH2PS __m512 _mm512_mask_cvtph_ps(__m512 s, __mmask16 k, __m256i a);
+
+
VCVTPH2PS __m512 _mm512_maskz_cvtph_ps(__mmask16 k, __m256i a);
+
+
VCVTPH2PS __m512 _mm512_cvt_roundph_ps( __m256i a, int sae);
+
+
VCVTPH2PS __m512 _mm512_mask_cvt_roundph_ps(__m512 s, __mmask16 k, __m256i a, int sae);
+
+
VCVTPH2PS __m512 _mm512_maskz_cvt_roundph_ps( __mmask16 k, __m256i a, int sae);
+
+
VCVTPH2PS __m256 _mm256_mask_cvtph_ps(__m256 s, __mmask8 k, __m128i a);
+
+
VCVTPH2PS __m256 _mm256_maskz_cvtph_ps(__mmask8 k, __m128i a);
+
+
VCVTPH2PS __m128 _mm_mask_cvtph_ps(__m128 s, __mmask8 k, __m128i a);
+
+
VCVTPH2PS __m128 _mm_maskz_cvtph_ps(__mmask8 k, __m128i a);
+
+
VCVTPH2PS __m128 _mm_cvtph_ps ( __m128i m1);
+
+
VCVTPH2PS __m256 _mm256_cvtph_ps ( __m128i m1)
+
+
VCVTPH2PSX __m512 _mm512_cvtx_roundph_ps (__m256h a, int sae);
+
+
VCVTPH2PSX __m512 _mm512_mask_cvtx_roundph_ps (__m512 src, __mmask16 k, __m256h a, int sae);
+
+
VCVTPH2PSX __m512 _mm512_maskz_cvtx_roundph_ps (__mmask16 k, __m256h a, int sae);
+
+
VCVTPH2PSX __m128 _mm_cvtxph_ps (__m128h a);
+
+
VCVTPH2PSX __m128 _mm_mask_cvtxph_ps (__m128 src, __mmask8 k, __m128h a);
+
+
VCVTPH2PSX __m128 _mm_maskz_cvtxph_ps (__mmask8 k, __m128h a);
+
+
VCVTPH2PSX __m256 _mm256_cvtxph_ps (__m128h a);
+
+
VCVTPH2PSX __m256 _mm256_mask_cvtxph_ps (__m256 src, __mmask8 k, __m128h a);
+
+
VCVTPH2PSX __m256 _mm256_maskz_cvtxph_ps (__mmask8 k, __m128h a);
+
+
VCVTPH2PSX __m512 _mm512_cvtxph_ps (__m256h a);
+
+
VCVTPH2PSX __m512 _mm512_mask_cvtxph_ps (__m512 src, __mmask16 k, __m256h a);
+
+
VCVTPH2PSX __m512 _mm512_maskz_cvtxph_ps (__mmask16 k, __m256h a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

VEX-encoded instructions: Invalid.

+

EVEX-encoded instructions: Invalid.

+

EVEX-encoded instructions with broadcast (VCVTPH2PSX): Invalid, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-26, “Type 11 Class Exception Conditions” (do not report #AC).

+

EVEX-encoded instructions, see Table 2-60, “Type E11 Class Exception Conditions.”

+

EVEX-encoded instructions with broadcast (VCVTPH2PSX), see Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + + + + +
#UDIf VEX.W=1.
#UDIf VEX.vvvv != 1111B or EVEX.vvvv != 1111B.
diff --git a/x86/vcvtph2qq.html b/x86/vcvtph2qq.html new file mode 100644 index 0000000..a159cd2 --- /dev/null +++ b/x86/vcvtph2qq.html @@ -0,0 +1,119 @@ + +VCVTPH2QQ + — Convert Packed FP16 Values to Signed Quadword Integer Values

VCVTPH2QQ + — Convert Packed FP16 Values to Signed Quadword Integer Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.MAP5.W0 7B /r VCVTPH2QQ xmm1{k1}{z}, xmm2/m32/m16bcstAV/VAVX512-FP16 AVX512VLConvert two packed FP16 values in xmm2/m32/m16bcst to two signed quadword integers, and store the result in xmm1 subject to writemask k1.
EVEX.256.66.MAP5.W0 7B /r VCVTPH2QQ ymm1{k1}{z}, xmm2/m64/m16bcstAV/VAVX512-FP16 AVX512VLConvert four packed FP16 values in xmm2/m64/m16bcst to four signed quadword integers, and store the result in ymm1 subject to writemask k1.
EVEX.512.66.MAP5.W0 7B /r VCVTPH2QQ zmm1{k1}{z}, xmm2/m128/m16bcst {er}AV/VAVX512-FP16Convert eight packed FP16 values in xmm2/m128/m16bcst to eight signed quadword integers, and store the result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AQuarterModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts packed FP16 values in the source operand to signed quadword integers in the destination operand.

+

When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or the embedded rounding control bits. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the indefinite integer value is returned.

+

The destination elements are updated according to the writemask.

+

Operation + ¶ +

+

VCVTPH2QQ DEST, SRC + ¶ +

+
VL = 128, 256 or 512
+KL := VL / 64
+IF *SRC is a register* and (VL = 512) and (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE:
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *SRC is memory* and EVEX.b = 1:
+            tsrc := SRC.fp16[0]
+        ELSE
+            tsrc := SRC.fp16[j]
+        DEST.qword[j] := Convert_fp16_to_integer64(tsrc)
+    ELSE IF *zeroing*:
+        DEST.qword[j] := 0
+    // else dest.qword[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTPH2QQ __m512i _mm512_cvt_roundph_epi64 (__m128h a, int rounding);
+
+
VCVTPH2QQ __m512i _mm512_mask_cvt_roundph_epi64 (__m512i src, __mmask8 k, __m128h a, int rounding);
+
+
VCVTPH2QQ __m512i _mm512_maskz_cvt_roundph_epi64 (__mmask8 k, __m128h a, int rounding);
+
+
VCVTPH2QQ __m128i _mm_cvtph_epi64 (__m128h a);
+
+
VCVTPH2QQ __m128i _mm_mask_cvtph_epi64 (__m128i src, __mmask8 k, __m128h a);
+
+
VCVTPH2QQ __m128i _mm_maskz_cvtph_epi64 (__mmask8 k, __m128h a);
+
+
VCVTPH2QQ __m256i _mm256_cvtph_epi64 (__m128h a);
+
+
VCVTPH2QQ __m256i _mm256_mask_cvtph_epi64 (__m256i src, __mmask8 k, __m128h a);
+
+
VCVTPH2QQ __m256i _mm256_maskz_cvtph_epi64 (__mmask8 k, __m128h a);
+
+
VCVTPH2QQ __m512i _mm512_cvtph_epi64 (__m128h a);
+
+
VCVTPH2QQ __m512i _mm512_mask_cvtph_epi64 (__m512i src, __mmask8 k, __m128h a);
+
+
VCVTPH2QQ __m512i _mm512_maskz_cvtph_epi64 (__mmask8 k, __m128h a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vcvtph2udq.html b/x86/vcvtph2udq.html new file mode 100644 index 0000000..b73eb02 --- /dev/null +++ b/x86/vcvtph2udq.html @@ -0,0 +1,119 @@ + +VCVTPH2UDQ + — Convert Packed FP16 Values to Unsigned Doubleword Integers

VCVTPH2UDQ + — Convert Packed FP16 Values to Unsigned Doubleword Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.NP.MAP5.W0 79 /r VCVTPH2UDQ xmm1{k1}{z}, xmm2/m64/m16bcstAV/VAVX512-FP16 AVX512VLConvert four packed FP16 values in xmm2/m64/m16bcst to four unsigned doubleword integers, and store the result in xmm1 subject to writemask k1.
EVEX.256.NP.MAP5.W0 79 /r VCVTPH2UDQ ymm1{k1}{z}, xmm2/m128/m16bcstAV/VAVX512-FP16 AVX512VLConvert eight packed FP16 values in xmm2/m128/m16bcst to eight unsigned doubleword integers, and store the result in ymm1 subject to writemask k1.
EVEX.512.NP.MAP5.W0 79 /r VCVTPH2UDQ zmm1{k1}{z}, ymm2/m256/m16bcst {er}AV/VAVX512-FP16Convert sixteen packed FP16 values in ymm2/m256/m16bcst to sixteen unsigned doubleword integers, and store the result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AHalfModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts packed FP16 values in the source operand to unsigned doubleword integers in the destination operand.

+

When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or the embedded rounding control bits. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the indefinite integer value is returned.

+

The destination elements are updated according to the writemask.

+

Operation + ¶ +

+

VCVTPH2UDQ DEST, SRC + ¶ +

+
VL = 128, 256 or 512
+KL := VL / 32
+IF *SRC is a register* and (VL = 512) and (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE:
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *SRC is memory* and EVEX.b = 1:
+            tsrc := SRC.fp16[0]
+        ELSE
+            tsrc := SRC.fp16[j]
+        DEST.dword[j] := Convert_fp16_to_unsigned_integer32(tsrc)
+    ELSE IF *zeroing*:
+        DEST.dword[j] := 0
+    // else dest.dword[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTPH2UDQ __m512i _mm512_cvt_roundph_epu32 (__m256h a, int rounding);
+
+
VCVTPH2UDQ __m512i _mm512_mask_cvt_roundph_epu32 (__m512i src, __mmask16 k, __m256h a, int rounding);
+
+
VCVTPH2UDQ __m512i _mm512_maskz_cvt_roundph_epu32 (__mmask16 k, __m256h a, int rounding);
+
+
VCVTPH2UDQ __m128i _mm_cvtph_epu32 (__m128h a);
+
+
VCVTPH2UDQ __m128i _mm_mask_cvtph_epu32 (__m128i src, __mmask8 k, __m128h a);
+
+
VCVTPH2UDQ __m128i _mm_maskz_cvtph_epu32 (__mmask8 k, __m128h a);
+
+
VCVTPH2UDQ __m256i _mm256_cvtph_epu32 (__m128h a);
+
+
VCVTPH2UDQ __m256i _mm256_mask_cvtph_epu32 (__m256i src, __mmask8 k, __m128h a);
+
+
VCVTPH2UDQ __m256i _mm256_maskz_cvtph_epu32 (__mmask8 k, __m128h a);
+
+
VCVTPH2UDQ __m512i _mm512_cvtph_epu32 (__m256h a);
+
+
VCVTPH2UDQ __m512i _mm512_mask_cvtph_epu32 (__m512i src, __mmask16 k, __m256h a);
+
+
VCVTPH2UDQ __m512i _mm512_maskz_cvtph_epu32 (__mmask16 k, __m256h a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vcvtph2uqq.html b/x86/vcvtph2uqq.html new file mode 100644 index 0000000..4d68311 --- /dev/null +++ b/x86/vcvtph2uqq.html @@ -0,0 +1,119 @@ + +VCVTPH2UQQ + — Convert Packed FP16 Values to Unsigned Quadword Integers

VCVTPH2UQQ + — Convert Packed FP16 Values to Unsigned Quadword Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.MAP5.W0 79 /r VCVTPH2UQQ xmm1{k1}{z}, xmm2/m32/m16bcstAV/VAVX512-FP16 AVX512VLConvert two packed FP16 values in xmm2/m32/m16bcst to two unsigned quadword integers, and store the result in xmm1 subject to writemask k1.
EVEX.256.66.MAP5.W0 79 /r VCVTPH2UQQ ymm1{k1}{z}, xmm2/m64/m16bcstAV/VAVX512-FP16 AVX512VLConvert four packed FP16 values in xmm2/m64/m16bcst to four unsigned quadword integers, and store the result in ymm1 subject to writemask k1.
EVEX.512.66.MAP5.W0 79 /r VCVTPH2UQQ zmm1{k1}{z}, xmm2/m128/m16bcst {er}AV/VAVX512-FP16Convert eight packed FP16 values in xmm2/m128/m16bcst to eight unsigned quadword integers, and store the result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AQuarterModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts packed FP16 values in the source operand to unsigned quadword integers in the destination operand.

+

When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or the embedded rounding control bits. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the indefinite integer value is returned.

+

The destination elements are updated according to the writemask.

+

Operation + ¶ +

+

VCVTPH2UQQ DEST, SRC + ¶ +

+
VL = 128, 256 or 512
+KL := VL / 64
+IF *SRC is a register* and (VL = 512) and (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE:
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *SRC is memory* and EVEX.b = 1:
+            tsrc := SRC.fp16[0]
+        ELSE
+            tsrc := SRC.fp16[j]
+        DEST.qword[j] := Convert_fp16_to_unsigned_integer64(tsrc)
+    ELSE IF *zeroing*:
+        DEST.qword[j] := 0
+    // else dest.qword[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTPH2UQQ __m512i _mm512_cvt_roundph_epu64 (__m128h a, int rounding);
+
+
VCVTPH2UQQ __m512i _mm512_mask_cvt_roundph_epu64 (__m512i src, __mmask8 k, __m128h a, int rounding);
+
+
VCVTPH2UQQ __m512i _mm512_maskz_cvt_roundph_epu64 (__mmask8 k, __m128h a, int rounding);
+
+
VCVTPH2UQQ __m128i _mm_cvtph_epu64 (__m128h a);
+
+
VCVTPH2UQQ __m128i _mm_mask_cvtph_epu64 (__m128i src, __mmask8 k, __m128h a);
+
+
VCVTPH2UQQ __m128i _mm_maskz_cvtph_epu64 (__mmask8 k, __m128h a);
+
+
VCVTPH2UQQ __m256i _mm256_cvtph_epu64 (__m128h a);
+
+
VCVTPH2UQQ __m256i _mm256_mask_cvtph_epu64 (__m256i src, __mmask8 k, __m128h a);
+
+
VCVTPH2UQQ __m256i _mm256_maskz_cvtph_epu64 (__mmask8 k, __m128h a);
+
+
VCVTPH2UQQ __m512i _mm512_cvtph_epu64 (__m128h a);
+
+
VCVTPH2UQQ __m512i _mm512_mask_cvtph_epu64 (__m512i src, __mmask8 k, __m128h a);
+
+
VCVTPH2UQQ __m512i _mm512_maskz_cvtph_epu64 (__mmask8 k, __m128h a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vcvtph2uw.html b/x86/vcvtph2uw.html new file mode 100644 index 0000000..70ff103 --- /dev/null +++ b/x86/vcvtph2uw.html @@ -0,0 +1,119 @@ + +VCVTPH2UW + — Convert Packed FP16 Values to Unsigned Word Integers

VCVTPH2UW + — Convert Packed FP16 Values to Unsigned Word Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Instruction Op/En 64/32 Bit Mode Support CPUID Feature Flag Description
EVEX.128.NP.MAP5.W0 7D /r VCVTPH2UW xmm1{k1}{z}, xmm2/m128/m16bcstAV/VAVX512-FP16 AVX512VLConvert packed FP16 values in xmm2/m128/m16bcst to unsigned word integers, and store the result in xmm1.
EVEX.256.NP.MAP5.W0 7D /r VCVTPH2UW ymm1{k1}{z}, ymm2/m256/m16bcstAV/VAVX512-FP16 AVX512VLConvert packed FP16 values in ymm2/m256/m16bcst to unsigned word integers, and store the result in ymm1.
EVEX.512.NP.MAP5.W0 7D /r VCVTPH2UW zmm1{k1}{z}, zmm2/m512/m16bcst {er}AV/VAVX512-FP16Convert packed FP16 values in zmm2/m512/m16bcst to unsigned word integers, and store the result in zmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts packed FP16 values in the source operand to unsigned word integers in the destination operand.

+

When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or the embedded rounding control bits. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the indefinite integer value is returned.

+

The destination elements are updated according to the writemask.

+

Operation + ¶ +

+

VCVTPH2UW DEST, SRC + ¶ +

+
VL = 128, 256 or 512
+KL := VL / 16
+IF *SRC is a register* and (VL = 512) and (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE:
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *SRC is memory* and EVEX.b = 1:
+            tsrc := SRC.fp16[0]
+        ELSE
+            tsrc := SRC.fp16[j]
+        DEST.word[j] := Convert_fp16_to_unsigned_integer16(tsrc)
+    ELSE IF *zeroing*:
+        DEST.word[j] := 0
+    // else dest.word[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTPH2UW __m512i _mm512_cvt_roundph_epu16 (__m512h a, int sae);
+
+
VCVTPH2UW __m512i _mm512_mask_cvt_roundph_epu16 (__m512i src, __mmask32 k, __m512h a, int sae);
+
+
VCVTPH2UW __m512i _mm512_maskz_cvt_roundph_epu16 (__mmask32 k, __m512h a, int sae);
+
+
VCVTPH2UW __m128i _mm_cvtph_epu16 (__m128h a);
+
+
VCVTPH2UW __m128i _mm_mask_cvtph_epu16 (__m128i src, __mmask8 k, __m128h a);
+
+
VCVTPH2UW __m128i _mm_maskz_cvtph_epu16 (__mmask8 k, __m128h a);
+
+
VCVTPH2UW __m256i _mm256_cvtph_epu16 (__m256h a);
+
+
VCVTPH2UW __m256i _mm256_mask_cvtph_epu16 (__m256i src, __mmask16 k, __m256h a);
+
+
VCVTPH2UW __m256i _mm256_maskz_cvtph_epu16 (__mmask16 k, __m256h a);
+
+
VCVTPH2UW __m512i _mm512_cvtph_epu16 (__m512h a);
+
+
VCVTPH2UW __m512i _mm512_mask_cvtph_epu16 (__m512i src, __mmask32 k, __m512h a);
+
+
VCVTPH2UW __m512i _mm512_maskz_cvtph_epu16 (__mmask32 k, __m512h a);
+
+
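The zeroing-masked behavior described above can be seen with the 128-bit intrinsic. A minimal sketch, assuming AVX512-FP16/AVX512VL support, availability of the _mm_loadu_ph load intrinsic, and made-up input data:

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    _Float16 in[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    __m128h  src = _mm_loadu_ph(in);
    __mmask8 k   = 0x0F;                          /* convert elements 0..3 only */
    __m128i  dst = _mm_maskz_cvtph_epu16(k, src); /* VCVTPH2UW {k1}{z}          */

    uint16_t out[8];
    _mm_storeu_si128((__m128i *)out, dst);
    for (int i = 0; i < 8; i++)
        printf("%u ", out[i]);                    /* expect: 1 2 3 4 0 0 0 0 */
    printf("\n");
    return 0;
}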

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vcvtph2w.html b/x86/vcvtph2w.html new file mode 100644 index 0000000..58e2878 --- /dev/null +++ b/x86/vcvtph2w.html @@ -0,0 +1,119 @@ + +VCVTPH2W + — Convert Packed FP16 Values to Signed Word Integers

VCVTPH2W + — Convert Packed FP16 Values to Signed Word Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Instruction Op/En 64/32 Bit Mode Support CPUID Feature Flag Description
EVEX.128.66.MAP5.W0 7D /r VCVTPH2W xmm1{k1}{z}, xmm2/m128/m16bcstAV/VAVX512-FP16 AVX512VLConvert packed FP16 values in xmm2/m128/m16bcst to signed word integers, and store the result in xmm1.
EVEX.256.66.MAP5.W0 7D /r VCVTPH2W ymm1{k1}{z}, ymm2/m256/m16bcstAV/VAVX512-FP16 AVX512VLConvert packed FP16 values in ymm2/m256/m16bcst to signed word integers, and store the result in ymm1.
EVEX.512.66.MAP5.W0 7D /r VCVTPH2W zmm1{k1}{z}, zmm2/m512/m16bcst {er}AV/VAVX512-FP16Convert packed FP16 values in zmm2/m512/m16bcst to signed word integers, and store the result in zmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts packed FP16 values in the source operand to signed word integers in the destination operand.

+

When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or the embedded rounding control bits. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the indefinite integer value is returned.

+

The destination elements are updated according to the writemask.

+

Operation + ¶ +

+

VCVTPH2W DEST, SRC + ¶ +

+
VL = 128, 256 or 512
+KL := VL / 16
+IF *SRC is a register* and (VL = 512) and (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE:
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *SRC is memory* and EVEX.b = 1:
+            tsrc := SRC.fp16[0]
+        ELSE
+            tsrc := SRC.fp16[j]
+        DEST.word[j] := Convert_fp16_to_integer16(tsrc)
+    ELSE IF *zeroing*:
+        DEST.word[j] := 0
+    // else dest.word[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTPH2W __m512i _mm512_cvt_roundph_epi16 (__m512h a, int rounding);
+
+
VCVTPH2W __m512i _mm512_mask_cvt_roundph_epi16 (__m512i src, __mmask32 k, __m512h a, int rounding);
+
+
VCVTPH2W __m512i _mm512_maskz_cvt_roundph_epi16 (__mmask32 k, __m512h a, int rounding);
+
+
VCVTPH2W __m128i _mm_cvtph_epi16 (__m128h a);
+
+
VCVTPH2W __m128i _mm_mask_cvtph_epi16 (__m128i src, __mmask8 k, __m128h a);
+
+
VCVTPH2W __m128i _mm_maskz_cvtph_epi16 (__mmask8 k, __m128h a);
+
+
VCVTPH2W __m256i _mm256_cvtph_epi16 (__m256h a);
+
+
VCVTPH2W __m256i _mm256_mask_cvtph_epi16 (__m256i src, __mmask16 k, __m256h a);
+
+
VCVTPH2W __m256i _mm256_maskz_cvtph_epi16 (__mmask16 k, __m256h a);
+
+
VCVTPH2W __m512i _mm512_cvtph_epi16 (__m512h a);
+
+
VCVTPH2W __m512i _mm512_mask_cvtph_epi16 (__m512i src, __mmask32 k, __m512h a);
+
+
VCVTPH2W __m512i _mm512_maskz_cvtph_epi16 (__mmask32 k, __m512h a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vcvtps2ph.html b/x86/vcvtps2ph.html new file mode 100644 index 0000000..3911b0f --- /dev/null +++ b/x86/vcvtps2ph.html @@ -0,0 +1,300 @@ + +VCVTPS2PH + — Convert Single-Precision FP Value to 16-bit FP Value

VCVTPS2PH + — Convert Single-Precision FP Value to 16-bit FP Value

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F3A.W0 1D /r ib VCVTPS2PH xmm1/m64, xmm2, imm8AV/VF16CConvert four packed single-precision floating-point values in xmm2 to packed half-precision (16-bit) floating-point values in xmm1/m64. Imm8 provides rounding controls.
VEX.256.66.0F3A.W0 1D /r ib VCVTPS2PH xmm1/m128, ymm2, imm8AV/VF16CConvert eight packed single-precision floating-point values in ymm2 to packed half-precision (16-bit) floating-point values in xmm1/m128. Imm8 provides rounding controls.
EVEX.128.66.0F3A.W0 1D /r ib VCVTPS2PH xmm1/m64 {k1}{z}, xmm2, imm8BV/VAVX512VL AVX512FConvert four packed single-precision floating-point values in xmm2 to packed half-precision (16-bit) floating-point values in xmm1/m64. Imm8 provides rounding controls.
EVEX.256.66.0F3A.W0 1D /r ib VCVTPS2PH xmm1/m128 {k1}{z}, ymm2, imm8BV/VAVX512VL AVX512FConvert eight packed single-precision floating-point values in ymm2 to packed half-precision (16-bit) floating-point values in xmm1/m128. Imm8 provides rounding controls.
EVEX.512.66.0F3A.W0 1D /r ib VCVTPS2PH ymm1/m256 {k1}{z}, zmm2{sae}, imm8BV/VAVX512FConvert sixteen packed single-precision floating-point values in zmm2 to packed half-precision (16-bit) floating-point values in ymm1/m256. Imm8 provides rounding controls.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:r/m (w)ModRM:reg (r)imm8N/A
BHalf MemModRM:r/m (w)ModRM:reg (r)imm8N/A
+

Description + ¶ +

+

Convert packed single-precision floating-point values in the source operand to half-precision (16-bit) floating-point values and store them to the destination operand. The rounding mode is specified using the immediate field (imm8).

+

Underflow results (i.e., tiny results) are converted to denormals. MXCSR.FTZ is ignored. If a source element is denormal relative to the input format with DM masked and at least one of PM or UM unmasked, a SIMD exception will be raised with DE, UE, and PE set.

+
+[Figure 5-7 illustration: for VCVTPS2PH xmm1/mem64, xmm2, imm8, the four packed FP32 elements VS3..VS0 of xmm2 are each converted to the packed FP16 elements VH3..VH0 stored in the low 64 bits of xmm1/mem64.]
Figure 5-7. VCVTPS2PH (128-bit Version)
+

The immediate byte defines several bit fields that control the rounding operation. The effect and encoding of the RC field are listed in Table 5-13.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Bits | Field Name/Value | Description | Comment
Imm[1:0] | RC=00B | Round to nearest even | If Imm[2] = 0
Imm[1:0] | RC=01B | Round down | If Imm[2] = 0
Imm[1:0] | RC=10B | Round up | If Imm[2] = 0
Imm[1:0] | RC=11B | Truncate | If Imm[2] = 0
Imm[2] | MS1=0 | Use imm[1:0] for rounding | Ignore MXCSR.RC
Imm[2] | MS1=1 | Use MXCSR.RC for rounding |
Imm[7:3] | Ignored | Ignored by processor |
+
Table 5-13. Immediate Byte Encoding for 16-bit Floating-Point Conversion Instructions
+

VEX.128 version: The source operand is a XMM register. The destination operand is a XMM register or 64-bit memory location. If the destination operand is a register then the upper bits (MAXVL-1:64) of corresponding register are zeroed.

+

VEX.256 version: The source operand is a YMM register. The destination operand is a XMM register or 128-bit memory location. If the destination operand is a register, the upper bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

Note: VEX.vvvv and EVEX.vvvv are reserved (must be 1111b).

+

EVEX encoded versions: The source operand is a ZMM/YMM/XMM register. The destination operand is a YMM/XMM/XMM (low 64-bits) register or a 256/128/64-bit memory location, conditionally updated with writemask k1. Bits (MAXVL-1:256/128/64) of the corresponding destination register are zeroed.

+

Operation + ¶ +

+
vCvt_s2h(SRC1[31:0])
+{
+IF Imm[2] = 0
+THEN ; using Imm[1:0] for rounding control, see Table 5-13
+    RETURN Cvt_Single_Precision_To_Half_Precision_FP_Imm(SRC1[31:0]);
+ELSE ; using MXCSR.RC for rounding control
+    RETURN Cvt_Single_Precision_To_Half_Precision_FP_Mxcsr(SRC1[31:0]);
+FI;
+}
+
+

VCVTPS2PH (EVEX Encoded Versions) When DEST is a Register + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    k := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] :=
+            vCvt_s2h(SRC[k+31:k])
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/2] := 0
+
+

VCVTPS2PH (EVEX Encoded Versions) When DEST is Memory + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    k := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] :=
+            vCvt_s2h(SRC[k+31:k])
+        ELSE
+            *DEST[i+15:i] remains unchanged*
+                ; merging-masking
+    FI;
+ENDFOR
+
+

VCVTPS2PH (VEX.256 Encoded Version) + ¶ +

+
DEST[15:0] := vCvt_s2h(SRC1[31:0]);
+DEST[31:16] := vCvt_s2h(SRC1[63:32]);
+DEST[47:32] := vCvt_s2h(SRC1[95:64]);
+DEST[63:48] := vCvt_s2h(SRC1[127:96]);
+DEST[79:64] := vCvt_s2h(SRC1[159:128]);
+DEST[95:80] := vCvt_s2h(SRC1[191:160]);
+DEST[111:96] := vCvt_s2h(SRC1[223:192]);
+DEST[127:112] := vCvt_s2h(SRC1[255:224]);
+DEST[MAXVL-1:128] := 0
+
+

VCVTPS2PH (VEX.128 Encoded Version) + ¶ +

+
DEST[15:0] := vCvt_s2h(SRC1[31:0]);
+DEST[31:16] := vCvt_s2h(SRC1[63:32]);
+DEST[47:32] := vCvt_s2h(SRC1[95:64]);
+DEST[63:48] := vCvt_s2h(SRC1[127:96]);
+DEST[MAXVL-1:64] := 0
+
+

Flags Affected + ¶ +

+

None.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTPS2PH __m256i _mm512_cvtps_ph(__m512 a);
+
+
VCVTPS2PH __m256i _mm512_mask_cvtps_ph(__m256i s, __mmask16 k,__m512 a);
+
+
VCVTPS2PH __m256i _mm512_maskz_cvtps_ph(__mmask16 k,__m512 a);
+
+
VCVTPS2PH __m256i _mm512_cvt_roundps_ph(__m512 a, const int imm);
+
+
VCVTPS2PH __m256i _mm512_mask_cvt_roundps_ph(__m256i s, __mmask16 k,__m512 a, const int imm);
+
+
VCVTPS2PH __m256i _mm512_maskz_cvt_roundps_ph(__mmask16 k,__m512 a, const int imm);
+
+
VCVTPS2PH __m128i _mm256_mask_cvtps_ph(__m128i s, __mmask8 k,__m256 a);
+
+
VCVTPS2PH __m128i _mm256_maskz_cvtps_ph(__mmask8 k,__m256 a);
+
+
VCVTPS2PH __m128i _mm_mask_cvtps_ph(__m128i s, __mmask8 k,__m128 a);
+
+
VCVTPS2PH __m128i _mm_maskz_cvtps_ph(__mmask8 k,__m128 a);
+
+
VCVTPS2PH __m128i _mm_cvtps_ph ( __m128 m1, const int imm);
+
+
VCVTPS2PH __m128i _mm256_cvtps_ph(__m256 m1, const int imm);
+
+
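For a concrete view of the imm8 rounding control described in Table 5-13, the following F16C sketch narrows with round-toward-zero and then widens back with the companion _mm_cvtph_ps intrinsic (VCVTPH2PS); the input values and the -mf16c build flag are assumptions for the example.

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128 a = _mm_set_ps(65504.0f, 3.14159f, -1.5f, 0.1f);
    /* imm[1:0] = 11B (truncate), imm[2] = 0 so MXCSR.RC is ignored */
    __m128i h = _mm_cvtps_ph(a, _MM_FROUND_TO_ZERO | _MM_FROUND_NO_EXC);
    __m128  b = _mm_cvtph_ps(h);                  /* widen back to FP32 */

    float out[4];
    _mm_storeu_ps(out, b);
    for (int i = 0; i < 4; i++)
        printf("%g\n", out[i]);                   /* values after FP16 truncation */
    return 0;
}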

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Overflow, Precision, Denormal (if MXCSR.DAZ=0).

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-26, “Type 11 Class Exception Conditions” (do not report #AC);

+

EVEX-encoded instructions, see Table 2-60, “Type E11 Class Exception Conditions.”

+

Additionally:

+ + + + + + +
#UDIf VEX.W=1.
#UDIf VEX.vvvv != 1111B or EVEX.vvvv != 1111B.
diff --git a/x86/vcvtps2phx.html b/x86/vcvtps2phx.html new file mode 100644 index 0000000..829e052 --- /dev/null +++ b/x86/vcvtps2phx.html @@ -0,0 +1,129 @@ + +VCVTPS2PHX + — Convert Packed Single Precision Floating-Point Values to Packed FP16 Values

VCVTPS2PHX + — Convert Packed Single Precision Floating-Point Values to Packed FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.MAP5.W0 1D /r VCVTPS2PHX xmm1{k1}{z}, xmm2/m128/m32bcstAV/VAVX512-FP16 AVX512VLConvert four packed single precision floating-point values in xmm2/m128/m32bcst to packed FP16 values, and store the result in xmm1 subject to writemask k1.
EVEX.256.66.MAP5.W0 1D /r VCVTPS2PHX xmm1{k1}{z}, ymm2/m256/m32bcstAV/VAVX512-FP16 AVX512VLConvert eight packed single precision floating-point values in ymm2/m256/m32bcst to packed FP16 values, and store the result in xmm1 subject to writemask k1.
EVEX.512.66.MAP5.W0 1D /r VCVTPS2PHX ymm1{k1}{z}, zmm2/m512/m32bcst {er}AV/VAVX512-FP16Convert sixteen packed single precision floating-point values in zmm2 /m512/m32bcst to packed FP16 values, and store the result in ymm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts packed single precision floating-point values in the source operand to FP16 values and stores them to the destination operand.

+

The VCVTPS2PHX instruction supports broadcasting.

+

This instruction uses MXCSR.DAZ for handling FP32 inputs. FP16 outputs can be normal or denormal numbers, and are not conditionally flushed based on MXCSR settings.

+

Operation + ¶ +

+

VCVTPS2PHX DEST, SRC (AVX512_FP16 Load Version With Broadcast Support) + ¶ +

+
VL = 128, 256, or 512
+KL := VL / 32
+IF *SRC is a register* and (VL == 512) and (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE:
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *SRC is memory* and EVEX.b = 1:
+            tsrc := SRC.fp32[0]
+        ELSE
+            tsrc := SRC.fp32[j]
+        DEST.fp16[j] := Convert_fp32_to_fp16(tsrc)
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL/2] := 0
+
+

Flags Affected + ¶ +

+

None.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTPS2PHX __m256h _mm512_cvtx_roundps_ph (__m512 a, int rounding);
+
+
VCVTPS2PHX __m256h _mm512_mask_cvtx_roundps_ph (__m256h src, __mmask16 k, __m512 a, int rounding);
+
+
VCVTPS2PHX __m256h _mm512_maskz_cvtx_roundps_ph (__mmask16 k, __m512 a, int rounding);
+
+
VCVTPS2PHX __m128h _mm_cvtxps_ph (__m128 a);
+
+
VCVTPS2PHX __m128h _mm_mask_cvtxps_ph (__m128h src, __mmask8 k, __m128 a);
+
+
VCVTPS2PHX __m128h _mm_maskz_cvtxps_ph (__mmask8 k, __m128 a);
+
+
VCVTPS2PHX __m128h _mm256_cvtxps_ph (__m256 a);
+
+
VCVTPS2PHX __m128h _mm256_mask_cvtxps_ph (__m128h src, __mmask8 k, __m256 a);
+
+
VCVTPS2PHX __m128h _mm256_maskz_cvtxps_ph (__mmask8 k, __m256 a);
+
+
VCVTPS2PHX __m256h _mm512_cvtxps_ph (__m512 a);
+
+
VCVTPS2PHX __m256h _mm512_mask_cvtxps_ph (__m256h src, __mmask16 k, __m512 a);
+
+
VCVTPS2PHX __m256h _mm512_maskz_cvtxps_ph (__mmask16 k, __m512 a);
+
+
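A minimal sketch of the 512-bit form with the {er} embedded-rounding override, using the _mm512_cvtx_roundps_ph intrinsic listed above; the wrapper name is made up for the example and AVX512-FP16 support is assumed.

#include <immintrin.h>

/* Convert 16 FP32 values to FP16 with round-toward-zero, independent of MXCSR.RC. */
__m256h cvtps_ph_truncate(__m512 a)
{
    return _mm512_cvtx_roundps_ph(a, _MM_FROUND_TO_ZERO | _MM_FROUND_NO_EXC);
}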

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Overflow, Precision, Denormal (if MXCSR.DAZ=0).

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + + + + +
#UDIf VEX.W=1.
#UDIf VEX.vvvv != 1111B or EVEX.vvvv != 1111B.
diff --git a/x86/vcvtps2qq.html b/x86/vcvtps2qq.html new file mode 100644 index 0000000..b638d7a --- /dev/null +++ b/x86/vcvtps2qq.html @@ -0,0 +1,155 @@ + +VCVTPS2QQ + — Convert Packed Single Precision Floating-Point Values to Packed SignedQuadword Integer Values

VCVTPS2QQ + — Convert Packed Single Precision Floating-Point Values to Packed Signed Quadword Integer Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F.W0 7B /r VCVTPS2QQ xmm1 {k1}{z}, xmm2/m64/m32bcstAV/VAVX512VL AVX512DQConvert two packed single precision floating-point values from xmm2/m64/m32bcst to two packed signed quadword values in xmm1 subject to writemask k1.
EVEX.256.66.0F.W0 7B /r VCVTPS2QQ ymm1 {k1}{z}, xmm2/m128/m32bcstAV/VAVX512VL AVX512DQConvert four packed single precision floating-point values from xmm2/m128/m32bcst to four packed signed quadword values in ymm1 subject to writemask k1.
EVEX.512.66.0F.W0 7B /r VCVTPS2QQ zmm1 {k1}{z}, ymm2/m256/m32bcst{er}AV/VAVX512DQConvert eight packed single precision floating-point values from ymm2/m256/m32bcst to eight packed signed quadword values in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AHalfModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts eight packed single precision floating-point values in the source operand to eight signed quadword integers in the destination operand.

+

When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or the embedded rounding control bits. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the indefinite integer value (2^(w-1), where w represents the number of bits in the destination format) is returned.

+

The source operand is a YMM/XMM/XMM (low 64 bits) register or a 256/128/64-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

Note: EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+

VCVTPS2QQ (EVEX Encoded Versions) When SRC Operand is a Register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL == 512) AND (EVEX.b == 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] :=
+            Convert_Single_Precision_To_QuadInteger(SRC[k+31:k])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VCVTPS2QQ (EVEX Encoded Versions) When SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b == 1)
+                THEN
+                    DEST[i+63:i] :=
+            Convert_Single_Precision_To_QuadInteger(SRC[31:0])
+                ELSE
+                    DEST[i+63:i] :=
+            Convert_Single_Precision_To_QuadInteger(SRC[k+31:k])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTPS2QQ __m512i _mm512_cvtps_epi64( __m512 a);
+
+
VCVTPS2QQ __m512i _mm512_mask_cvtps_epi64( __m512i s, __mmask16 k, __m512 a);
+
+
VCVTPS2QQ __m512i _mm512_maskz_cvtps_epi64( __mmask16 k, __m512 a);
+
+
VCVTPS2QQ __m512i _mm512_cvt_roundps_epi64( __m512 a, int r);
+
+
VCVTPS2QQ __m512i _mm512_mask_cvt_roundps_epi64( __m512i s, __mmask16 k, __m512 a, int r);
+
+
VCVTPS2QQ __m512i _mm512_maskz_cvt_roundps_epi64( __mmask16 k, __m512 a, int r);
+
+
VCVTPS2QQ __m256i _mm256_cvtps_epi64( __m256 a);
+
+
VCVTPS2QQ __m256i _mm256_mask_cvtps_epi64( __m256i s, __mmask8 k, __m256 a);
+
+
VCVTPS2QQ __m256i _mm256_maskz_cvtps_epi64( __mmask8 k, __m256 a);
+
+
VCVTPS2QQ __m128i _mm_cvtps_epi64( __m128 a);
+
+
VCVTPS2QQ __m128i _mm_mask_cvtps_epi64( __m128i s, __mmask8 k, __m128 a);
+
+
VCVTPS2QQ __m128i _mm_maskz_cvtps_epi64( __mmask8 k, __m128 a);
+
+
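A short sketch of the 256-bit form using the _mm256_cvtps_epi64 intrinsic listed above; AVX512DQ + AVX512VL support is assumed and the input values are arbitrary.

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    __m128  f = _mm_set_ps(-2.5f, 1e9f, 7.0f, -0.75f);   /* lanes 3..0 */
    __m256i q = _mm256_cvtps_epi64(f);                   /* VCVTPS2QQ ymm, xmm */

    int64_t out[4];
    _mm256_storeu_si256((__m256i *)out, q);
    /* default round-to-nearest-even: -0.75 -> -1, 7 -> 7, 1e9 -> 1000000000, -2.5 -> -2 */
    for (int i = 0; i < 4; i++)
        printf("%lld\n", (long long)out[i]);
    return 0;
}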

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vcvtps2udq.html b/x86/vcvtps2udq.html new file mode 100644 index 0000000..018224e --- /dev/null +++ b/x86/vcvtps2udq.html @@ -0,0 +1,153 @@ + +VCVTPS2UDQ + — Convert Packed Single Precision Floating-Point Values to Packed UnsignedDoubleword Integer Values

VCVTPS2UDQ + — Convert Packed Single Precision Floating-Point Values to Packed Unsigned Doubleword Integer Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.0F.W0 79 /r VCVTPS2UDQ xmm1 {k1}{z}, xmm2/m128/m32bcstAV/VAVX512VL AVX512FConvert four packed single precision floating-point values from xmm2/m128/m32bcst to four packed unsigned doubleword values in xmm1 subject to writemask k1.
EVEX.256.0F.W0 79 /r VCVTPS2UDQ ymm1 {k1}{z}, ymm2/m256/m32bcstAV/VAVX512VL AVX512FConvert eight packed single precision floating-point values from ymm2/m256/m32bcst to eight packed unsigned doubleword values in ymm1 subject to writemask k1.
EVEX.512.0F.W0 79 /r VCVTPS2UDQ zmm1 {k1}{z}, zmm2/m512/m32bcst{er}AV/VAVX512FConvert sixteen packed single precision floating-point values from zmm2/m512/m32bcst to sixteen packed unsigned doubleword values in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts sixteen packed single precision floating-point values in the source operand to sixteen unsigned doubleword integers in the destination operand.

+

When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or the embedded rounding control bits. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the integer value 2^w − 1 is returned, where w represents the number of bits in the destination format.

+

The source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

Note: EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+

VCVTPS2UDQ (EVEX Encoded Versions) When SRC Operand is a Register + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] :=
+            Convert_Single_Precision_Floating_Point_To_UInteger(SRC[i+31:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VCVTPS2UDQ (EVEX Encoded Versions) When SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+31:i] :=
+            Convert_Single_Precision_Floating_Point_To_UInteger(SRC[31:0])
+                ELSE
+                    DEST[i+31:i] :=
+            Convert_Single_Precision_Floating_Point_To_UInteger(SRC[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTPS2UDQ __m512i _mm512_cvtps_epu32( __m512 a);
+
+
VCVTPS2UDQ __m512i _mm512_mask_cvtps_epu32( __m512i s, __mmask16 k, __m512 a);
+
+
VCVTPS2UDQ __m512i _mm512_maskz_cvtps_epu32( __mmask16 k, __m512 a);
+
+
VCVTPS2UDQ __m512i _mm512_cvt_roundps_epu32( __m512 a, int r);
+
+
VCVTPS2UDQ __m512i _mm512_mask_cvt_roundps_epu32( __m512i s, __mmask16 k, __m512 a, int r);
+
+
VCVTPS2UDQ __m512i _mm512_maskz_cvt_roundps_epu32( __mmask16 k, __m512 a, int r);
+
+
VCVTPS2UDQ __m256i _mm256_cvtps_epu32( __m256 a);
+
+
VCVTPS2UDQ __m256i _mm256_mask_cvtps_epu32( __m256i s, __mmask8 k, __m256 a);
+
+
VCVTPS2UDQ __m256i _mm256_maskz_cvtps_epu32( __mmask8 k, __m256 a);
+
+
VCVTPS2UDQ __m128i _mm_cvtps_epu32( __m128 a);
+
+
VCVTPS2UDQ __m128i _mm_mask_cvtps_epu32( __m128i s, __mmask8 k, __m128 a);
+
+
VCVTPS2UDQ __m128i _mm_maskz_cvtps_epu32( __mmask8 k, __m128 a);
+
+
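The rounding and indefinite-value behavior described above can be demonstrated with the 128-bit intrinsic listed above. A hedged sketch, assuming AVX512F + AVX512VL support and default MXCSR settings:

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    __m128  f = _mm_set_ps(3.5f, 2.5f, -1.0f, 0.5f);  /* lanes 3..0 */
    __m128i u = _mm_cvtps_epu32(f);                   /* VCVTPS2UDQ xmm, xmm */

    uint32_t out[4];
    _mm_storeu_si128((__m128i *)out, u);
    /* nearest-even: 0.5 -> 0, 2.5 -> 2, 3.5 -> 4; -1.0 is not representable,
       so with #I masked the result is the indefinite value 2^32 - 1 (0xFFFFFFFF). */
    for (int i = 0; i < 4; i++)
        printf("%u\n", out[i]);
    return 0;
}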

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vcvtps2uqq.html b/x86/vcvtps2uqq.html new file mode 100644 index 0000000..ddfc1a9 --- /dev/null +++ b/x86/vcvtps2uqq.html @@ -0,0 +1,155 @@ + +VCVTPS2UQQ + — Convert Packed Single Precision Floating-Point Values to Packed UnsignedQuadword Integer Values

VCVTPS2UQQ + — Convert Packed Single Precision Floating-Point Values to Packed Unsigned Quadword Integer Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F.W0 79 /r VCVTPS2UQQ xmm1 {k1}{z}, xmm2/m64/m32bcstAV/VAVX512VL AVX512DQConvert two packed single precision floating-point values from xmm2/m64/m32bcst to two packed unsigned quadword values in xmm1 subject to writemask k1.
EVEX.256.66.0F.W0 79 /r VCVTPS2UQQ ymm1 {k1}{z}, xmm2/m128/m32bcstAV/VAVX512VL AVX512DQConvert four packed single precision floating-point values from xmm2/m128/m32bcst to four packed unsigned quadword values in ymm1 subject to writemask k1.
EVEX.512.66.0F.W0 79 /r VCVTPS2UQQ zmm1 {k1}{z}, ymm2/m256/m32bcst{er}AV/VAVX512DQConvert eight packed single precision floating-point values from ymm2/m256/m32bcst to eight packed unsigned quadword values in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AHalfModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts up to eight packed single precision floating-point values in the source operand to unsigned quadword integers in the destination operand.

+

When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or the embedded rounding control bits. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the integer value 2^w − 1 is returned, where w represents the number of bits in the destination format.

+

The source operand is a YMM/XMM/XMM (low 64 bits) register or a 256/128/64-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+

VCVTPS2UQQ (EVEX Encoded Versions) When SRC Operand is a Register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL == 512) AND (EVEX.b == 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] :=
+            Convert_Single_Precision_To_UQuadInteger(SRC[k+31:k])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VCVTPS2UQQ (EVEX Encoded Versions) When SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b == 1)
+                THEN
+                    DEST[i+63:i] :=
+            Convert_Single_Precision_To_UQuadInteger(SRC[31:0])
+                ELSE
+                    DEST[i+63:i] :=
+            Convert_Single_Precision_To_UQuadInteger(SRC[k+31:k])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTPS2UQQ __m512i _mm512_cvtps_epu64( __m512 a);
+
+
VCVTPS2UQQ __m512i _mm512_mask_cvtps_epu64( __m512i s, __mmask16 k, __m512 a);
+
+
VCVTPS2UQQ __m512i _mm512_maskz_cvtps_epu64( __mmask16 k, __m512 a);
+
+
VCVTPS2UQQ __m512i _mm512_cvt_roundps_epu64( __m512 a, int r);
+
+
VCVTPS2UQQ __m512i _mm512_mask_cvt_roundps_epu64( __m512i s, __mmask16 k, __m512 a, int r);
+
+
VCVTPS2UQQ __m512i _mm512_maskz_cvt_roundps_epu64( __mmask16 k, __m512 a, int r);
+
+
VCVTPS2UQQ __m256i _mm256_cvtps_epu64( __m256 a);
+
+
VCVTPS2UQQ __m256i _mm256_mask_cvtps_epu64( __m256i s, __mmask8 k, __m256 a);
+
+
VCVTPS2UQQ __m256i _mm256_maskz_cvtps_epu64( __mmask8 k, __m256 a);
+
+
VCVTPS2UQQ __m128i _mm_cvtps_epu64( __m128 a);
+
+
VCVTPS2UQQ __m128i _mm_mask_cvtps_epu64( __m128i s, __mmask8 k, __m128 a);
+
+
VCVTPS2UQQ __m128i _mm_maskz_cvtps_epu64( __mmask8 k, __m128 a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vcvtqq2pd.html b/x86/vcvtqq2pd.html new file mode 100644 index 0000000..c99b509 --- /dev/null +++ b/x86/vcvtqq2pd.html @@ -0,0 +1,149 @@ + +VCVTQQ2PD + — Convert Packed Quadword Integers to Packed Double Precision Floating-PointValues

VCVTQQ2PD + — Convert Packed Quadword Integers to Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.F3.0F.W1 E6 /r VCVTQQ2PD xmm1 {k1}{z}, xmm2/m128/m64bcstAV/VAVX512VL AVX512DQConvert two packed quadword integers from xmm2/m128/m64bcst to packed double precision floating-point values in xmm1 with writemask k1.
EVEX.256.F3.0F.W1 E6 /r VCVTQQ2PD ymm1 {k1}{z}, ymm2/m256/m64bcstAV/VAVX512VL AVX512DQConvert four packed quadword integers from ymm2/m256/m64bcst to packed double precision floating-point values in ymm1 with writemask k1.
EVEX.512.F3.0F.W1 E6 /r VCVTQQ2PD zmm1 {k1}{z}, zmm2/m512/m64bcst{er}AV/VAVX512DQConvert eight packed quadword integers from zmm2/m512/m64bcst to eight packed double precision floating-point values in zmm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts packed quadword integers in the source operand (second operand) to packed double precision floating-point values in the destination operand (first operand).

+

The source operand is a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+

VCVTQQ2PD (EVEX Encoded Versions) When SRC Operand is a Register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL == 512) AND (EVEX.b == 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] :=
+            Convert_QuadInteger_To_Double_Precision_Floating_Point(SRC[i+63:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VCVTQQ2PD (EVEX Encoded Versions) when SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b == 1)
+                THEN
+                    DEST[i+63:i] :=
+            Convert_QuadInteger_To_Double_Precision_Floating_Point(SRC[63:0])
+                ELSE
+                    DEST[i+63:i] :=
+            Convert_QuadInteger_To_Double_Precision_Floating_Point(SRC[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTQQ2PD __m512d _mm512_cvtepi64_pd( __m512i a);
+
+
VCVTQQ2PD __m512d _mm512_mask_cvtepi64_pd( __m512d s, __mmask16 k, __m512i a);
+
+
VCVTQQ2PD __m512d _mm512_maskz_cvtepi64_pd( __mmask16 k, __m512i a);
+
+
VCVTQQ2PD __m512d _mm512_cvt_roundepi64_pd( __m512i a, int r);
+
+
VCVTQQ2PD __m512d _mm512_mask_cvt_roundepi64_pd( __m512d s, __mmask8 k, __m512i a, int r);
+
+
VCVTQQ2PD __m512d _mm512_maskz_cvt_roundepi64_pd( __mmask8 k, __m512i a, int r);
+
+
VCVTQQ2PD __m256d _mm256_mask_cvtepi64_pd( __m256d s, __mmask8 k, __m256i a);
+
+
VCVTQQ2PD __m256d _mm256_maskz_cvtepi64_pd( __mmask8 k, __m256i a);
+
+
VCVTQQ2PD __m128d _mm_mask_cvtepi64_pd( __m128d s, __mmask8 k, __m128i a);
+
+
VCVTQQ2PD __m128d _mm_maskz_cvtepi64_pd( __mmask8 k, __m128i a);
+
+
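A sketch of the 512-bit form using the _mm512_cvtepi64_pd intrinsic listed above; AVX512DQ support is assumed and the inputs are chosen to show where rounding starts.

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Magnitudes up to 2^53 convert exactly; larger ones round per MXCSR.RC. */
    int64_t in[8] = { 0, 1, -1, 42, 1LL << 53, (1LL << 53) + 1, INT64_MAX, INT64_MIN };
    __m512i q = _mm512_loadu_si512(in);
    __m512d d = _mm512_cvtepi64_pd(q);                /* VCVTQQ2PD zmm, zmm */

    double out[8];
    _mm512_storeu_pd(out, d);
    for (int i = 0; i < 8; i++)
        printf("%.1f\n", out[i]);   /* (1<<53)+1 rounds to 2^53; Precision is raised (masked) */
    return 0;
}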

SIMD Floating-Point Exceptions + ¶ +

+

Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vcvtqq2ph.html b/x86/vcvtqq2ph.html new file mode 100644 index 0000000..8e550f4 --- /dev/null +++ b/x86/vcvtqq2ph.html @@ -0,0 +1,119 @@ + +VCVTQQ2PH + — Convert Packed Signed Quadword Integers to Packed FP16 Values

VCVTQQ2PH + — Convert Packed Signed Quadword Integers to Packed FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Instruction Op/En 64/32 Bit Mode Support CPUID Feature Flag Description
EVEX.128.NP.MAP5.W1 5B /r VCVTQQ2PH xmm1{k1}{z}, xmm2/m128/m64bcstAV/VAVX512-FP16 AVX512VLConvert two packed signed quadword integers in xmm2/m128/m64bcst to packed FP16 values, and store the result in xmm1 subject to writemask k1.
EVEX.256.NP.MAP5.W1 5B /r VCVTQQ2PH xmm1{k1}{z}, ymm2/m256/m64bcstAV/VAVX512-FP16 AVX512VLConvert four packed signed quadword integers in ymm2/m256/m64bcst to packed FP16 values, and store the result in xmm1 subject to writemask k1.
EVEX.512.NP.MAP5.W1 5B /r VCVTQQ2PH xmm1{k1}{z}, zmm2/m512/m64bcst {er}AV/VAVX512-FP16Convert eight packed signed quadword integers in zmm2/m512/m64bcst to packed FP16 values, and store the result in xmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts packed signed quadword integers in the source operand to packed FP16 values in the destination operand. The destination elements are updated according to the writemask.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

If the result of the conversion overflows and MXCSR.OM=0, then a SIMD exception will be raised with OE=1, PE=1.

+

Operation + ¶ +

+

VCVTQQ2PH DEST, SRC + ¶ +

+
VL = 128, 256 or 512
+KL := VL / 64
+IF *SRC is a register* and (VL = 512) AND (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE:
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *SRC is memory* and EVEX.b = 1:
+            tsrc := SRC.qword[0]
+        ELSE
+            tsrc := SRC.qword[j]
+        DEST.fp16[j] := Convert_integer64_to_fp16(tsrc)
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL/4] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTQQ2PH __m128h _mm512_cvt_roundepi64_ph (__m512i a, int rounding);
+
+
VCVTQQ2PH __m128h _mm512_mask_cvt_roundepi64_ph (__m128h src, __mmask8 k, __m512i a, int rounding);
+
+
VCVTQQ2PH __m128h _mm512_maskz_cvt_roundepi64_ph (__mmask8 k, __m512i a, int rounding);
+
+
VCVTQQ2PH __m128h _mm_cvtepi64_ph (__m128i a);
+
+
VCVTQQ2PH __m128h _mm_mask_cvtepi64_ph (__m128h src, __mmask8 k, __m128i a);
+
+
VCVTQQ2PH __m128h _mm_maskz_cvtepi64_ph (__mmask8 k, __m128i a);
+
+
VCVTQQ2PH __m128h _mm256_cvtepi64_ph (__m256i a);
+
+
VCVTQQ2PH __m128h _mm256_mask_cvtepi64_ph (__m128h src, __mmask8 k, __m256i a);
+
+
VCVTQQ2PH __m128h _mm256_maskz_cvtepi64_ph (__mmask8 k, __m256i a);
+
+
VCVTQQ2PH __m128h _mm512_cvtepi64_ph (__m512i a);
+
+
VCVTQQ2PH __m128h _mm512_mask_cvtepi64_ph (__m128h src, __mmask8 k, __m512i a);
+
+
VCVTQQ2PH __m128h _mm512_maskz_cvtepi64_ph (__mmask8 k, __m512i a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vcvtqq2ps.html b/x86/vcvtqq2ps.html new file mode 100644 index 0000000..455e4b1 --- /dev/null +++ b/x86/vcvtqq2ps.html @@ -0,0 +1,149 @@ + +VCVTQQ2PS + — Convert Packed Quadword Integers to Packed Single Precision Floating-PointValues

VCVTQQ2PS + — Convert Packed Quadword Integers to Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.0F.W1 5B /r VCVTQQ2PS xmm1 {k1}{z}, xmm2/m128/m64bcstAV/VAVX512VL AVX512DQConvert two packed quadword integers from xmm2/mem to packed single precision floating-point values in xmm1 with writemask k1.
EVEX.256.0F.W1 5B /r VCVTQQ2PS xmm1 {k1}{z}, ymm2/m256/m64bcstAV/VAVX512VL AVX512DQConvert four packed quadword integers from ymm2/mem to packed single precision floating-point values in xmm1 with writemask k1.
EVEX.512.0F.W1 5B /r VCVTQQ2PS ymm1 {k1}{z}, zmm2/m512/m64bcst{er}AV/VAVX512DQConvert eight packed quadword integers from zmm2/mem to eight packed single precision floating-point values in ymm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts packed quadword integers in the source operand (second operand) to packed single precision floating-point values in the destination operand (first operand).

+

The source operand is a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination operand is a YMM/XMM/XMM (lower 64 bits) register conditionally updated with writemask k1.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+

VCVTQQ2PS (EVEX Encoded Versions) When SRC Operand is a Register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[k+31:k] :=
+            Convert_QuadInteger_To_Single_Precision_Floating_Point(SRC[i+63:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[k+31:k] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[k+31:k] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/2] := 0
+
+

VCVTQQ2PS (EVEX Encoded Versions) When SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b == 1)
+                THEN
+                    DEST[k+31:k] :=
+            Convert_QuadInteger_To_Single_Precision_Floating_Point(SRC[63:0])
+                ELSE
+                    DEST[k+31:k] :=
+            Convert_QuadInteger_To_Single_Precision_Floating_Point(SRC[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[k+31:k] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[k+31:k] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/2] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTQQ2PS __m256 _mm512_cvtepi64_ps( __m512i a);
+
+
VCVTQQ2PS __m256 _mm512_mask_cvtepi64_ps( __m256 s, __mmask16 k, __m512i a);
+
+
VCVTQQ2PS __m256 _mm512_maskz_cvtepi64_ps( __mmask16 k, __m512i a);
+
+
VCVTQQ2PS __m256 _mm512_cvt_roundepi64_ps( __m512i a, int r);
+
+
VCVTQQ2PS __m256 _mm512_mask_cvt_roundepi64_ps( __m256 s, __mmask8 k, __m512i a, int r);
+
+
VCVTQQ2PS __m256 _mm512_maskz_cvt_roundepi64_ps( __mmask8 k, __m512i a, int r);
+
+
VCVTQQ2PS __m128 _mm256_cvtepi64_ps( __m256i a);
+
+
VCVTQQ2PS __m128 _mm256_mask_cvtepi64_ps( __m128 s, __mmask8 k, __m256i a);
+
+
VCVTQQ2PS __m128 _mm256_maskz_cvtepi64_ps( __mmask8 k, __m256i a);
+
+
VCVTQQ2PS __m128 _mm_cvtepi64_ps( __m128i a);
+
+
VCVTQQ2PS __m128 _mm_mask_cvtepi64_ps( __m128 s, __mmask8 k, __m128i a);
+
+
VCVTQQ2PS __m128 _mm_maskz_cvtepi64_ps( __mmask8 k, __m128i a);
+
+
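A short sketch of the narrowing 256-bit to 128-bit form using the _mm256_cvtepi64_ps intrinsic listed above; AVX512DQ + AVX512VL support is assumed and the inputs are arbitrary.

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int64_t in[4] = { -7, 1, (1LL << 24) + 1, 3000000000LL };
    __m256i q = _mm256_loadu_si256((const __m256i *)in);
    __m128  f = _mm256_cvtepi64_ps(q);               /* VCVTQQ2PS xmm, ymm */

    float out[4];
    _mm_storeu_ps(out, f);
    /* 2^24 + 1 is not representable in FP32 and rounds to 2^24 (Precision, masked). */
    for (int i = 0; i < 4; i++)
        printf("%.1f\n", out[i]);
    return 0;
}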

SIMD Floating-Point Exceptions + ¶ +

+

Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vcvtsd2sh.html b/x86/vcvtsd2sh.html new file mode 100644 index 0000000..fbcbf7d --- /dev/null +++ b/x86/vcvtsd2sh.html @@ -0,0 +1,89 @@ + +VCVTSD2SH + — Convert Low FP64 Value to an FP16 Value

VCVTSD2SH + — Convert Low FP64 Value to an FP16 Value

+ + + + + + + + + + + + + +
Instruction Op/En 64/32 Bit Mode Support CPUID Feature Flag Description
EVEX.LLIG.F2.MAP5.W1 5A /r VCVTSD2SH xmm1{k1}{z}, xmm2, xmm3/m64 {er}AV/VAVX512-FP16Convert the low FP64 value in xmm3/m64 to an FP16 value and store the result in the low element of xmm1 subject to writemask k1. Bits 127:16 of xmm2 are copied to xmm1[127:16].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction converts the low FP64 value in the second source operand to an FP16 value, and stores the result in the low element of the destination operand.

+

When the conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register.

+

Bits 127:16 of the destination operand are copied from the corresponding bits of the first source operand. Bits MAXVL-1:128 of the destination operand are zeroed. The low FP16 element of the destination is updated according to the writemask.

+

Operation + ¶ +

+

VCVTSD2SH dest, src1, src2 + ¶ +

+
IF *SRC2 is a register* and (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE:
+    SET_RM(MXCSR.RC)
+IF k1[0] OR *no writemask*:
+    DEST.fp16[0] := Convert_fp64_to_fp16(SRC2.fp64[0])
+ELSE IF *zeroing*:
+    DEST.fp16[0] := 0
+// else dest.fp16[0] remains unchanged
+DEST[127:16] := SRC1[127:16]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTSD2SH __m128h _mm_cvt_roundsd_sh (__m128h a, __m128d b, const int rounding);
+
+
VCVTSD2SH __m128h _mm_mask_cvt_roundsd_sh (__m128h src, __mmask8 k, __m128h a, __m128d b, const int rounding);
+
+
VCVTSD2SH __m128h _mm_maskz_cvt_roundsd_sh (__mmask8 k, __m128h a, __m128d b, const int rounding);
+
+
VCVTSD2SH __m128h _mm_cvtsd_sh (__m128h a, __m128d b);
+
+
VCVTSD2SH __m128h _mm_mask_cvtsd_sh (__m128h src, __mmask8 k, __m128h a, __m128d b);
+
+
VCVTSD2SH __m128h _mm_maskz_cvtsd_sh (__mmask8 k, __m128h a, __m128d b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Overflow, Precision, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vcvtsd2usi.html b/x86/vcvtsd2usi.html new file mode 100644 index 0000000..2f633f4 --- /dev/null +++ b/x86/vcvtsd2usi.html @@ -0,0 +1,89 @@ + +VCVTSD2USI + — Convert Scalar Double Precision Floating-Point Value to Unsigned DoublewordInteger

VCVTSD2USI + — Convert Scalar Double Precision Floating-Point Value to Unsigned Doubleword Integer

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.F2.0F.W0 79 /r VCVTSD2USI r32, xmm1/m64{er}AV/VAVX512FConvert one double precision floating-point value from xmm1/m64 to one unsigned doubleword integer r32.
EVEX.LLIG.F2.0F.W1 79 /r VCVTSD2USI r64, xmm1/m64{er}AV/N.E.1AVX512FConvert one double precision floating-point value from xmm1/m64 to one unsigned quadword integer zero-extended into r64.
+
+

1. EVEX.W1 in non-64 bit is ignored; the instruction behaves as if the W0 version is used.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 FixedModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts a double precision floating-point value in the source operand (the second operand) to an unsigned doubleword integer in the destination operand (the first operand). The source operand can be an XMM register or a 64-bit memory location. The destination operand is a general-purpose register. When the source operand is an XMM register, the double precision floating-point value is contained in the low quadword of the register.

+

When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or the embedded rounding control bits. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the integer value 2^w − 1 is returned, where w represents the number of bits in the destination format.

+

Operation + ¶ +

+

VCVTSD2USI (EVEX Encoded Version) + ¶ +

+
IF (SRC *is register*) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF 64-Bit Mode and OperandSize = 64
+    THEN DEST[63:0] := Convert_Double_Precision_Floating_Point_To_UInteger(SRC[63:0]);
+    ELSE DEST[31:0] := Convert_Double_Precision_Floating_Point_To_UInteger(SRC[63:0]);
+FI
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTSD2USI unsigned int _mm_cvtsd_u32(__m128d);
+
+
VCVTSD2USI unsigned int _mm_cvt_roundsd_u32(__m128d, int r);
+
+
VCVTSD2USI unsigned __int64 _mm_cvtsd_u64(__m128d);
+
+
VCVTSD2USI unsigned __int64 _mm_cvt_roundsd_u64(__m128d, int r);
+
+
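A scalar sketch using the intrinsics listed above; AVX512F support is assumed, the input value is made up, and the r64 form (_mm_cvtsd_u64) is only available in 64-bit mode.

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128d d = _mm_set_sd(3000000000.0);              /* larger than INT32_MAX   */
    unsigned int       u32 = _mm_cvtsd_u32(d);         /* VCVTSD2USI r32, xmm/m64 */
    unsigned long long u64 = _mm_cvtsd_u64(d);         /* VCVTSD2USI r64, xmm/m64 */
    printf("%u %llu\n", u32, u64);                     /* 3000000000 3000000000   */
    return 0;
}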

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-48, “Type E3NF Class Exception Conditions.”

diff --git a/x86/vcvtsh2sd.html b/x86/vcvtsh2sd.html new file mode 100644 index 0000000..84bb2a9 --- /dev/null +++ b/x86/vcvtsh2sd.html @@ -0,0 +1,84 @@ + +VCVTSH2SD + — Convert Low FP16 Value to an FP64 Value

VCVTSH2SD + — Convert Low FP16 Value to an FP64 Value

+ + + + + + + + + + + + + +
Instruction Op/En 64/32 Bit Mode Support CPUID Feature Flag Description
EVEX.LLIG.F3.MAP5.W0 5A /r VCVTSH2SD xmm1{k1}{z}, xmm2, xmm3/m16 {sae}AV/VAVX512-FP16Convert the low FP16 value in xmm3/m16 to an FP64 value and store the result in the low element of xmm1 subject to writemask k1. Bits 127:64 of xmm2 are copied to xmm1[127:64].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction converts the low FP16 element in the second source operand to an FP64 element in the low element of the destination operand.

+

Bits 127:64 of the destination operand are copied from the corresponding bits of the first source operand. Bits MAXVL-1:128 of the destination operand are zeroed. The low FP64 element of the destination is updated according to the writemask.

+

Operation + ¶ +

+

VCVTSH2SD dest, src1, src2 + ¶ +

+
IF k1[0] OR *no writemask*:
+    DEST.fp64[0] := Convert_fp16_to_fp64(SRC2.fp16[0])
+ELSE IF *zeroing*:
+    DEST.fp64[0] := 0
+// else dest.fp64[0] remains unchanged
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTSH2SD __m128d _mm_cvt_roundsh_sd (__m128d a, __m128h b, const int sae);
+
+
VCVTSH2SD __m128d _mm_mask_cvt_roundsh_sd (__m128d src, __mmask8 k, __m128d a, __m128h b, const int sae);
+
+
VCVTSH2SD __m128d _mm_maskz_cvt_roundsh_sd (__mmask8 k, __m128d a, __m128h b, const int sae);
+
+
VCVTSH2SD __m128d _mm_cvtsh_sd (__m128d a, __m128h b);
+
+
VCVTSH2SD __m128d _mm_mask_cvtsh_sd (__m128d src, __mmask8 k, __m128d a, __m128h b);
+
+
VCVTSH2SD __m128d _mm_maskz_cvtsh_sd (__mmask8 k, __m128d a, __m128h b);
+
+
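A minimal scalar sketch using the _mm_cvtsh_sd intrinsic listed above; AVX512-FP16 support is assumed and the wrapper name is made up. Since every FP16 value is exactly representable in FP64, no rounding is involved in the widening.

#include <immintrin.h>

/* Widen the low FP16 element of b to FP64, keeping bits 127:64 of a. */
static inline __m128d widen_low_sh_to_sd(__m128d a, __m128h b)
{
    return _mm_cvtsh_sd(a, b);                        /* VCVTSH2SD xmm1, xmm2, xmm3 */
}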

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vcvtsh2si.html b/x86/vcvtsh2si.html new file mode 100644 index 0000000..0b63eff --- /dev/null +++ b/x86/vcvtsh2si.html @@ -0,0 +1,89 @@ + +VCVTSH2SI + — Convert Low FP16 Value to Signed Integer

VCVTSH2SI + — Convert Low FP16 Value to Signed Integer

+ + + + + + + + + + + + + + + + + + + +
Instruction Op/En 64/32 Bit Mode Support CPUID Feature Flag Description
EVEX.LLIG.F3.MAP5.W0 2D /r VCVTSH2SI r32, xmm1/m16 {er}AV/V1AVX512-FP16Convert the low FP16 element in xmm1/m16 to a signed integer and store the result in r32.
EVEX.LLIG.F3.MAP5.W1 2D /r VCVTSH2SI r64, xmm1/m16 {er}AV/N.E.AVX512-FP16Convert the low FP16 element in xmm1/m16 to a signed integer and store the result in r64.
+
+

1. Outside of 64b mode, the EVEX.W field is ignored. The instruction behaves as if W=0 was used.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts the low FP16 element in the source operand to a signed integer in the destination general purpose register.

+

When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or the embedded rounding control bits. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the integer indefinite value is returned.

+

Operation + ¶ +

+

VCVTSH2SI dest, src + ¶ +

+
IF *SRC is a register* and (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE:
+    SET_RM(MXCSR.RC)
+IF 64-mode and OperandSize == 64:
+    DEST.qword := Convert_fp16_to_integer64(SRC.fp16[0])
+ELSE:
+    DEST.dword := Convert_fp16_to_integer32(SRC.fp16[0])
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTSH2SI int _mm_cvt_roundsh_i32 (__m128h a, int rounding);
+
+
VCVTSH2SI __int64 _mm_cvt_roundsh_i64 (__m128h a, int rounding);
+
+
VCVTSH2SI int _mm_cvtsh_i32 (__m128h a);
+
+
VCVTSH2SI __int64 _mm_cvtsh_i64 (__m128h a);
+
+
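A sketch contrasting MXCSR rounding with an embedded rounding override, under the same AVX512-FP16 toolchain assumptions as above (_mm_set1_ph is an assumed helper, not part of this page).

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128h h = _mm_set1_ph((_Float16)(-1.7f));
    int nearest = _mm_cvtsh_i32(h);  /* MXCSR default (nearest-even): -2 */
    int to_zero = _mm_cvt_roundsh_i32(h, _MM_FROUND_TO_ZERO | _MM_FROUND_NO_EXC); /* -1 */
    printf("%d %d\n", nearest, to_zero);
    return 0;
}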

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-48, “Type E3NF Class Exception Conditions.”

diff --git a/x86/vcvtsh2ss.html b/x86/vcvtsh2ss.html new file mode 100644 index 0000000..af4843d --- /dev/null +++ b/x86/vcvtsh2ss.html @@ -0,0 +1,84 @@ + +VCVTSH2SS + — Convert Low FP16 Value to FP32 Value

VCVTSH2SS + — Convert Low FP16 Value to FP32 Value

+ + + + + + + + + + + + + +
InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.NP.MAP6.W0 13 /r VCVTSH2SS xmm1{k1}{z}, xmm2, xmm3/m16 {sae}AV/VAVX512-FP16Convert the low FP16 element in xmm3/m16 to an FP32 value and store in the low element of xmm1 subject to writemask k1. Bits 127:32 of xmm2 are copied to xmm1[127:32].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction converts the low FP16 element in the second source operand to the low FP32 element of the destination operand.

+

Bits 127:32 of the destination operand are copied from the corresponding bits of the first source operand. Bits MAXVL-1:128 of the destination operand are zeroed. The low FP32 element of the destination is updated according to the writemask.

+

Operation + ¶ +

+

VCVTSH2SS dest, src1, src2 + ¶ +

+
IF k1[0] OR *no writemask*:
+    DEST.fp32[0] := Convert_fp16_to_fp32(SRC2.fp16[0])
+ELSE IF *zeroing*:
+    DEST.fp32[0] := 0
+// else dest.fp32[0] remains unchanged
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTSH2SS __m128 _mm_cvt_roundsh_ss (__m128 a, __m128h b, const int sae);
+
+
VCVTSH2SS __m128 _mm_mask_cvt_roundsh_ss (__m128 src, __mmask8 k, __m128 a, __m128h b, const int sae);
+
+
VCVTSH2SS __m128 _mm_maskz_cvt_roundsh_ss (__mmask8 k, __m128 a, __m128h b, const int sae);
+
+
VCVTSH2SS __m128 _mm_cvtsh_ss (__m128 a, __m128h b);
+
+
VCVTSH2SS __m128 _mm_mask_cvtsh_ss (__m128 src, __mmask8 k, __m128 a, __m128h b);
+
+
VCVTSH2SS __m128 _mm_maskz_cvtsh_ss (__mmask8 k, __m128 a, __m128h b);
+
+
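A minimal sketch of the unmasked form, with the upper FP32 elements taken from the first source; assumes AVX512-FP16 compiler support and treats _mm_set1_ph, _mm_set1_ps, and _mm_cvtss_f32 as helpers outside this page.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128h h  = _mm_set1_ph((_Float16)0.25f);
    __m128  up = _mm_set1_ps(9.0f);         /* bits 127:32 come from this operand */
    __m128  r  = _mm_cvtsh_ss(up, h);       /* r = { 0.25, 9, 9, 9 } */
    printf("%f\n", _mm_cvtss_f32(r));       /* expected: 0.250000 */
    return 0;
}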

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vcvtsh2usi.html b/x86/vcvtsh2usi.html new file mode 100644 index 0000000..2ecd46c --- /dev/null +++ b/x86/vcvtsh2usi.html @@ -0,0 +1,90 @@ + +VCVTSH2USI + — Convert Low FP16 Value to Unsigned Integer

VCVTSH2USI + — Convert Low FP16 Value to Unsigned Integer

+ + + + + + + + + + + + + + + + + + + +
InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.F3.MAP5.W0 79 /r VCVTSH2USI r32, xmm1/m16 {er}AV/V1AVX512-FP16Convert the low FP16 element in xmm1/m16 to an unsigned integer and store the result in r32.
EVEX.LLIG.F3.MAP5.W1 79 /r VCVTSH2USI r64, xmm1/m16 {er}AV/N.E.AVX512-FP16Convert the low FP16 element in xmm1/m16 to an unsigned integer and store the result in r64.
+
+

1. Outside of 64b mode, the EVEX.W field is ignored. The instruction behaves as if W=0 was used.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts the low FP16 element in the source operand to an unsigned integer in the destination general purpose register.

+

When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or the embedded rounding control bits. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the integer indefinite value is returned.

+

Operation + ¶ +

+

VCVTSH2USI dest, src + ¶ +

+
// SET_RM() sets the rounding mode used for this instruction.
+IF *SRC is a register* and (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE:
+    SET_RM(MXCSR.RC)
+IF 64-mode and OperandSize == 64:
+    DEST.qword := Convert_fp16_to_unsigned_integer64(SRC.fp16[0])
+ELSE:
+    DEST.dword := Convert_fp16_to_unsigned_integer32(SRC.fp16[0])
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTSH2USI unsigned int _mm_cvt_roundsh_u32 (__m128h a, int rounding);
+
+
VCVTSH2USI unsigned __int64 _mm_cvt_roundsh_u64 (__m128h a, int rounding);
+
+
VCVTSH2USI unsigned int _mm_cvtsh_u32 (__m128h a);
+
+
VCVTSH2USI unsigned __int64 _mm_cvtsh_u64 (__m128h a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-48, “Type E3NF Class Exception Conditions.”

diff --git a/x86/vcvtsi2sh.html b/x86/vcvtsi2sh.html new file mode 100644 index 0000000..e8bbd68 --- /dev/null +++ b/x86/vcvtsi2sh.html @@ -0,0 +1,92 @@ + +VCVTSI2SH + — Convert a Signed Doubleword/Quadword Integer to an FP16 Value

VCVTSI2SH + — Convert a Signed Doubleword/Quadword Integer to an FP16 Value

+ + + + + + + + + + + + + + + + + + + +
InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.F3.MAP5.W0 2A /r VCVTSI2SH xmm1, xmm2, r32/m32 {er}AV/V1AVX512-FP16Convert the signed doubleword integer in r32/m32 to an FP16 value and store the result in xmm1. Bits 127:16 of xmm2 are copied to xmm1[127:16].
EVEX.LLIG.F3.MAP5.W1 2A /r VCVTSI2SH xmm1, xmm2, r64/m64 {er}AV/N.E.AVX512-FP16Convert the signed quadword integer in r64/m64 to an FP16 value and store the result in xmm1. Bits 127:16 of xmm2 are copied to xmm1[127:16].
+
+

1. Outside of 64b mode, the EVEX.W field is ignored. The instruction behaves as if W=0 was used.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction converts a signed doubleword integer (or signed quadword integer if operand size is 64 bits) in the second source operand to an FP16 value in the destination operand. The result is stored in the low word of the destination operand. When conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or embedded rounding controls.

+

The second source operand can be a general-purpose register or a 32/64-bit memory location. The first source and destination operands are XMM registers. Bits 127:16 of the XMM register destination are copied from corresponding bits in the first source operand. Bits MAXVL-1:128 of the destination register are zeroed.

+

If the result of the conversion overflows and MXCSR.OM=0, then a SIMD exception is raised with OE=1 and PE=1.

+

Operation + ¶ +

+

VCVTSI2SH dest, src1, src2 + ¶ +

+
IF *SRC2 is a register* and (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE:
+    SET_RM(MXCSR.RC)
+IF 64-mode and OperandSize == 64:
+    DEST.fp16[0] := Convert_integer64_to_fp16(SRC2.qword)
+ELSE:
+    DEST.fp16[0] := Convert_integer32_to_fp16(SRC2.dword)
+DEST[127:16] := SRC1[127:16]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTSI2SH __m128h _mm_cvt_roundi32_sh (__m128h a, int b, int rounding);
+
+
VCVTSI2SH __m128h _mm_cvt_roundi64_sh (__m128h a, __int64 b, int rounding);
+
+
VCVTSI2SH __m128h _mm_cvti32_sh (__m128h a, int b);
+
+
VCVTSI2SH __m128h _mm_cvti64_sh (__m128h a, __int64 b);
+
+
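A round-trip sketch: convert an int32 to FP16 and read it back with the VCVTSH2SI intrinsic documented earlier in this set. Assumes AVX512-FP16 compiler support; _mm_setzero_ph is an assumed helper used only to supply the upper bits of the first source.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128h h = _mm_cvti32_sh(_mm_setzero_ph(), 42); /* int32 -> low FP16 (exact up to 2048) */
    printf("%d\n", _mm_cvtsh_i32(h));                /* expected: 42 */
    return 0;
}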

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-48, “Type E3NF Class Exception Conditions.”

diff --git a/x86/vcvtss2sh.html b/x86/vcvtss2sh.html new file mode 100644 index 0000000..1552705 --- /dev/null +++ b/x86/vcvtss2sh.html @@ -0,0 +1,89 @@ + +VCVTSS2SH + — Convert Low FP32 Value to an FP16 Value

VCVTSS2SH + — Convert Low FP32 Value to an FP16 Value

+ + + + + + + + + + + + + +
InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.NP.MAP5.W0 1D /r VCVTSS2SH xmm1{k1}{z}, xmm2, xmm3/m32 {er}AV/VAVX512-FP16Convert low FP32 value in xmm3/m32 to an FP16 value and store in the low element of xmm1 subject to writemask k1. Bits 127:16 from xmm2 are copied to xmm1[127:16].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction converts the low FP32 value in the second source operand to a FP16 value in the low element of the destination operand.

+

When the conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or the embedded rounding control bits.

+

Bits 127:16 of the destination operand are copied from the corresponding bits of the first source operand. Bits MAXVL-1:128 of the destination operand are zeroed. The low FP16 element of the destination is updated according to the writemask.

+

Operation + ¶ +

+

VCVTSS2SH dest, src1, src2 + ¶ +

+
IF *SRC2 is a register* and (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE:
+    SET_RM(MXCSR.RC)
+IF k1[0] OR *no writemask*:
+    DEST.fp16[0] := Convert_fp32_to_fp16(SRC2.fp32[0])
+ELSE IF *zeroing*:
+    DEST.fp16[0] := 0
+// else dest.fp16[0] remains unchanged
+DEST[127:16] := SRC1[127:16]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTSS2SH __m128h _mm_cvt_roundss_sh (__m128h a, __m128 b, const int rounding);
+
+
VCVTSS2SH __m128h _mm_mask_cvt_roundss_sh (__m128h src, __mmask8 k, __m128h a, __m128 b, const int rounding);
+
+
VCVTSS2SH __m128h _mm_maskz_cvt_roundss_sh (__mmask8 k, __m128h a, __m128 b, const int rounding);
+
+
VCVTSS2SH __m128h _mm_cvtss_sh (__m128h a, __m128 b);
+
+
VCVTSS2SH __m128h _mm_mask_cvtss_sh (__m128h src, __mmask8 k, __m128h a, __m128 b);
+
+
VCVTSS2SH __m128h _mm_maskz_cvtss_sh (__mmask8 k, __m128h a, __m128 b);
+
+
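A sketch that narrows an FP32 value to FP16 and widens it back (via the VCVTSH2SS intrinsic above) to make the precision loss visible. Assumes an AVX512-FP16 toolchain; _mm_set_ss, _mm_setzero_ph, _mm_setzero_ps, and _mm_cvtss_f32 are helpers not taken from this page.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128  s = _mm_set_ss(3.14159f);
    __m128h h = _mm_cvtss_sh(_mm_setzero_ph(), s);   /* FP32 -> FP16, rounded per MXCSR */
    __m128  r = _mm_cvtsh_ss(_mm_setzero_ps(), h);   /* widen back to inspect the rounding */
    printf("%f\n", _mm_cvtss_f32(r));                /* ~3.140625: FP16 keeps ~11 bits of mantissa */
    return 0;
}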

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Overflow, Precision, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vcvtss2usi.html b/x86/vcvtss2usi.html new file mode 100644 index 0000000..e766052 --- /dev/null +++ b/x86/vcvtss2usi.html @@ -0,0 +1,93 @@ + +VCVTSS2USI + — Convert Scalar Single Precision Floating-Point Value to Unsigned DoublewordInteger

VCVTSS2USI + — Convert Scalar Single Precision Floating-Point Value to Unsigned Doubleword Integer

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.F3.0F.W0 79 /r VCVTSS2USI r32, xmm1/m32{er}AV/VAVX512FConvert one single precision floating-point value from xmm1/m32 to one unsigned doubleword integer in r32.
EVEX.LLIG.F3.0F.W1 79 /r VCVTSS2USI r64, xmm1/m32{er}AV/N.E.1AVX512FConvert one single precision floating-point value from xmm1/m32 to one unsigned quadword integer in r64.
+
+

1. EVEX.W1 in non-64 bit is ignored; the instruction behaves as if the W0 version is used.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 FixedModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts a single precision floating-point value in the source operand (the second operand) to an unsigned double-word integer (or unsigned quadword integer if operand size is 64 bits) in the destination operand (the first operand). The source operand can be an XMM register or a memory location. The destination operand is a general-purpose register. When the source operand is an XMM register, the single precision floating-point value is contained in the low doubleword of the register.

+

When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or the embedded rounding control bits. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the integer value 2^w – 1 is returned, where w represents the number of bits in the destination format.

+

EVEX.W1 version: promotes the instruction to produce 64-bit data in 64-bit mode.

+

Note: EVEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Operation + ¶ +

+

VCVTSS2USI (EVEX Encoded Version) + ¶ +

+
IF (SRC *is register*) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF 64-bit Mode and OperandSize = 64
+THEN
+    DEST[63:0] := Convert_Single_Precision_Floating_Point_To_UInteger(SRC[31:0]);
+ELSE
+    DEST[31:0] := Convert_Single_Precision_Floating_Point_To_UInteger(SRC[31:0]);
+FI;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTSS2USI unsigned _mm_cvtss_u32( __m128 a);
+
+
VCVTSS2USI unsigned _mm_cvt_roundss_u32( __m128 a, int r);
+
+
VCVTSS2USI unsigned __int64 _mm_cvtss_u64( __m128 a);
+
+
VCVTSS2USI unsigned __int64 _mm_cvt_roundss_u64( __m128 a, int r);
+
+
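A minimal sketch of the scalar intrinsics above, assuming an AVX512F toolchain (e.g., -mavx512f); _mm_set_ss is plain SSE and not part of this page, and the first result assumes the default MXCSR rounding mode.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128 v = _mm_set_ss(2.5f);
    unsigned a = _mm_cvtss_u32(v);   /* nearest-even: 2 */
    unsigned b = _mm_cvt_roundss_u32(v, _MM_FROUND_TO_POS_INF | _MM_FROUND_NO_EXC); /* 3 */
    printf("%u %u\n", a, b);
    return 0;
}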

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-48, “Type E3NF Class Exception Conditions.”

diff --git a/x86/vcvttpd2qq.html b/x86/vcvttpd2qq.html new file mode 100644 index 0000000..ff53581 --- /dev/null +++ b/x86/vcvttpd2qq.html @@ -0,0 +1,142 @@ + +VCVTTPD2QQ + — Convert With Truncation Packed Double Precision Floating-Point Values toPacked Quadword Integers

VCVTTPD2QQ + — Convert With Truncation Packed Double Precision Floating-Point Values to Packed Quadword Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F.W1 7A /r VCVTTPD2QQ xmm1 {k1}{z}, xmm2/m128/m64bcstAV/VAVX512VL AVX512DQConvert two packed double precision floating-point values from xmm2/m128/m64bcst to two packed quadword integers in xmm1 using truncation with writemask k1.
EVEX.256.66.0F.W1 7A /r VCVTTPD2QQ ymm1 {k1}{z}, ymm2/m256/m64bcstAV/VAVX512VL AVX512DQConvert four packed double precision floating-point values from ymm2/m256/m64bcst to four packed quadword integers in ymm1 using truncation with writemask k1.
EVEX.512.66.0F.W1 7A /r VCVTTPD2QQ zmm1 {k1}{z}, zmm2/m512/m64bcst{sae}AV/VAVX512DQConvert eight packed double precision floating-point values from zmm2/m512 to eight packed quadword integers in zmm1 using truncation with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts with truncation packed double precision floating-point values in the source operand (second operand) to packed quadword integers in the destination operand (first operand).

+

EVEX encoded versions: The source operand is a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

When a conversion is inexact, a truncated (round toward zero) value is returned. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the indefinite integer value (2^(w-1), where w represents the number of bits in the destination format) is returned.

+

Note: EVEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Operation + ¶ +

+

VCVTTPD2QQ (EVEX Encoded Version) When SRC Operand is a Register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] :=
+            Convert_Double_Precision_Floating_Point_To_QuadInteger_Truncate(SRC[i+63:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VCVTTPD2QQ (EVEX Encoded Version) When SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b == 1)
+                THEN
+                    DEST[i+63:i] :=
+                        Convert_Double_Precision_Floating_Point_To_QuadInteger_Truncate(SRC[63:0])
+                ELSE
+                    DEST[i+63:i] := Convert_Double_Precision_Floating_Point_To_QuadInteger_Truncate(SRC[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTTPD2QQ __m512i _mm512_cvttpd_epi64( __m512d a);
+
+
VCVTTPD2QQ __m512i _mm512_mask_cvttpd_epi64( __m512i s, __mmask8 k, __m512d a);
+
+
VCVTTPD2QQ __m512i _mm512_maskz_cvttpd_epi64( __mmask8 k, __m512d a);
+
+
VCVTTPD2QQ __m512i _mm512_cvtt_roundpd_epi64( __m512d a, int sae);
+
+
VCVTTPD2QQ __m512i _mm512_mask_cvtt_roundpd_epi64( __m512i s, __mmask8 k, __m512d a, int sae);
+
+
VCVTTPD2QQ __m512i _mm512_maskz_cvtt_roundpd_epi64( __mmask8 k, __m512d a, int sae);
+
+
VCVTTPD2QQ __m256i _mm256_mask_cvttpd_epi64( __m256i s, __mmask8 k, __m256d a);
+
+
VCVTTPD2QQ __m256i _mm256_maskz_cvttpd_epi64( __mmask8 k, __m256d a);
+
+
VCVTTPD2QQ __m128i _mm_mask_cvttpd_epi64( __m128i s, __mmask8 k, __m128d a);
+
+
VCVTTPD2QQ __m128i _mm_maskz_cvttpd_epi64( __mmask8 k, __m128d a);
+
+
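A sketch of the 512-bit unmasked and merge-masked forms, assuming an AVX512DQ toolchain (e.g., -mavx512dq); the set/store helpers are standard AVX512F intrinsics and not part of this page.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m512d v = _mm512_set1_pd(-3.9);
    long long out[8];

    __m512i q = _mm512_cvttpd_epi64(v);   /* truncation toward zero: -3 in every lane */
    /* Merge-masked form: only lanes selected by the mask are written, the rest
       keep the values from the pass-through operand. */
    __m512i merged = _mm512_mask_cvttpd_epi64(_mm512_set1_epi64(99), 0x0F, v);

    _mm512_storeu_si512((__m512i *)out, q);
    printf("lane0 = %lld\n", out[0]);                          /* -3 */
    _mm512_storeu_si512((__m512i *)out, merged);
    printf("lane0 = %lld, lane7 = %lld\n", out[0], out[7]);    /* -3, 99 */
    return 0;
}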

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + +
#UD If EVEX.vvvv != 1111B.
diff --git a/x86/vcvttpd2udq.html b/x86/vcvttpd2udq.html new file mode 100644 index 0000000..0676121 --- /dev/null +++ b/x86/vcvttpd2udq.html @@ -0,0 +1,146 @@ + +VCVTTPD2UDQ + — Convert With Truncation Packed Double Precision Floating-Point Values toPacked Unsigned Doubleword Integers

VCVTTPD2UDQ + — Convert With Truncation Packed Double Precision Floating-Point Values to Packed Unsigned Doubleword Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.0F.W1 78 /r VCVTTPD2UDQ xmm1 {k1}{z}, xmm2/m128/m64bcstAV/VAVX512VL AVX512FConvert two packed double precision floating-point values in xmm2/m128/m64bcst to two unsigned doubleword integers in xmm1 using truncation subject to writemask k1.
EVEX.256.0F.W1 78 /r VCVTTPD2UDQ xmm1 {k1}{z}, ymm2/m256/m64bcstAV/VAVX512VL AVX512FConvert four packed double precision floating-point values in ymm2/m256/m64bcst to four unsigned doubleword integers in xmm1 using truncation subject to writemask k1.
EVEX.512.0F.W1 78 /r VCVTTPD2UDQ ymm1 {k1}{z}, zmm2/m512/m64bcst{sae}AV/VAVX512FConvert eight packed double precision floating-point values in zmm2/m512/m64bcst to eight unsigned doubleword integers in ymm1 using truncation subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts with truncation packed double precision floating-point values in the source operand (the second operand) to packed unsigned doubleword integers in the destination operand (the first operand).

+

When a conversion is inexact, a truncated (round toward zero) value is returned. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the integer value 2^w – 1 is returned, where w represents the number of bits in the destination format.

+

The source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand is a YMM/XMM/XMM (low 64 bits) register conditionally updated with writemask k1. The upper bits (MAXVL-1:256) of the corresponding destination are zeroed.

+

Note: EVEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Operation + ¶ +

+

VCVTTPD2UDQ (EVEX Encoded Versions) When SRC2 Operand is a Register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    k := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            DEST[i+31:i] :=
+            Convert_Double_Precision_Floating_Point_To_UInteger_Truncate(SRC[k+63:k])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/2] := 0
+
+

VCVTTPD2UDQ (EVEX Encoded Versions) When SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (2, 128), (4, 256),(8, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    k := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+31:i] :=
+            Convert_Double_Precision_Floating_Point_To_UInteger_Truncate(SRC[63:0])
+                ELSE
+                    DEST[i+31:i] :=
+            Convert_Double_Precision_Floating_Point_To_UInteger_Truncate(SRC[k+63:k])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/2] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTTPD2UDQ __m256i _mm512_cvttpd_epu32( __m512d a);
+
+
VCVTTPD2UDQ __m256i _mm512_mask_cvttpd_epu32( __m256i s, __mmask8 k, __m512d a);
+
+
VCVTTPD2UDQ __m256i _mm512_maskz_cvttpd_epu32( __mmask8 k, __m512d a);
+
+
VCVTTPD2UDQ __m256i _mm512_cvtt_roundpd_epu32( __m512d a, int sae);
+
+
VCVTTPD2UDQ __m256i _mm512_mask_cvtt_roundpd_epu32( __m256i s, __mmask8 k, __m512d a, int sae);
+
+
VCVTTPD2UDQ __m256i _mm512_maskz_cvtt_roundpd_epu32( __mmask8 k, __m512d a, int sae);
+
+
VCVTTPD2UDQ __m128i _mm256_mask_cvttpd_epu32( __m128i s, __mmask8 k, __m256d a);
+
+
VCVTTPD2UDQ __m128i _mm256_maskz_cvttpd_epu32( __mmask8 k, __m256d a);
+
+
VCVTTPD2UDQ __m128i _mm_mask_cvttpd_epu32( __m128i s, __mmask8 k, __m128d a);
+
+
VCVTTPD2UDQ __m128i _mm_maskz_cvttpd_epu32( __mmask8 k, __m128d a);
+
+
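A narrowing sketch (8 doubles in a ZMM register to 8 unsigned dwords in a YMM register), assuming an AVX512F toolchain; the set/store helpers are not part of this page.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m512d v = _mm512_set1_pd(7.99);
    __m256i u = _mm512_cvttpd_epu32(v);        /* truncation: 7 in each of the 8 dwords */
    unsigned out[8];
    _mm256_storeu_si256((__m256i *)out, u);
    printf("%u\n", out[0]);                    /* expected: 7 */
    return 0;
}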

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + +
#UD If EVEX.vvvv != 1111B.
diff --git a/x86/vcvttpd2uqq.html b/x86/vcvttpd2uqq.html new file mode 100644 index 0000000..ec35870 --- /dev/null +++ b/x86/vcvttpd2uqq.html @@ -0,0 +1,143 @@ + +VCVTTPD2UQQ + — Convert With Truncation Packed Double Precision Floating-Point Values toPacked Unsigned Quadword Integers

VCVTTPD2UQQ + — Convert With Truncation Packed Double Precision Floating-Point Values to Packed Unsigned Quadword Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F.W1 78 /r VCVTTPD2UQQ xmm1 {k1}{z}, xmm2/m128/m64bcstAV/VAVX512VL AVX512DQConvert two packed double precision floating-point values from xmm2/m128/m64bcst to two packed unsigned quadword integers in xmm1 using truncation with writemask k1.
EVEX.256.66.0F.W1 78 /r VCVTTPD2UQQ ymm1 {k1}{z}, ymm2/m256/m64bcstAV/VAVX512VL AVX512DQConvert four packed double precision floating-point values from ymm2/m256/m64bcst to four packed unsigned quadword integers in ymm1 using truncation with writemask k1.
EVEX.512.66.0F.W1 78 /r VCVTTPD2UQQ zmm1 {k1}{z}, zmm2/m512/m64bcst{sae}AV/VAVX512DQConvert eight packed double precision floating-point values from zmm2/mem to eight packed unsigned quadword integers in zmm1 using truncation with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts with truncation packed double precision floating-point values in the source operand (second operand) to packed unsigned quadword integers in the destination operand (first operand).

+

When a conversion is inexact, a truncated (round toward zero) value is returned. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the integer value 2^w – 1 is returned, where w represents the number of bits in the destination format.

+

EVEX encoded versions: The source operand is a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

Note: EVEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Operation + ¶ +

+

VCVTTPD2UQQ (EVEX Encoded Versions) When SRC Operand is a Register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] :=
+            Convert_Double_Precision_Floating_Point_To_UQuadInteger_Truncate(SRC[i+63:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VCVTTPD2UQQ (EVEX Encoded Versions) When SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b == 1)
+                THEN
+                    DEST[i+63:i] :=
+            Convert_Double_Precision_Floating_Point_To_UQuadInteger_Truncate(SRC[63:0])
+                ELSE
+                    DEST[i+63:i] :=
+            Convert_Double_Precision_Floating_Point_To_UQuadInteger_Truncate(SRC[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTTPD2UQQ _mm<size>[_mask[z]]_cvtt[_round]pd_epu64
VCVTTPD2UQQ __m512i _mm512_cvttpd_epu64( __m512d a);
+
+
VCVTTPD2UQQ __m512i _mm512_mask_cvttpd_epu64( __m512i s, __mmask8 k, __m512d a);
+
+
VCVTTPD2UQQ __m512i _mm512_maskz_cvttpd_epu64( __mmask8 k, __m512d a);
+
+
VCVTTPD2UQQ __m512i _mm512_cvtt_roundpd_epu64( __m512d a, int sae);
+
+
VCVTTPD2UQQ __m512i _mm512_mask_cvtt_roundpd_epu64( __m512i s, __mmask8 k, __m512d a, int sae);
+
+
VCVTTPD2UQQ __m512i _mm512_maskz_cvtt_roundpd_epu64( __mmask8 k, __m512d a, int sae);
+
+
VCVTTPD2UQQ __m256i _mm256_mask_cvttpd_epu64( __m256i s, __mmask8 k, __m256d a);
+
+
VCVTTPD2UQQ __m256i _mm256_maskz_cvttpd_epu64( __mmask8 k, __m256d a);
+
+
VCVTTPD2UQQ __m128i _mm_mask_cvttpd_epu64( __m128i s, __mmask8 k, __m128d a);
+
+
VCVTTPD2UQQ __m128i _mm_maskz_cvttpd_epu64( __mmask8 k, __m128d a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + +
#UD If EVEX.vvvv != 1111B.
diff --git a/x86/vcvttph2dq.html b/x86/vcvttph2dq.html new file mode 100644 index 0000000..f15c3df --- /dev/null +++ b/x86/vcvttph2dq.html @@ -0,0 +1,115 @@ + +VCVTTPH2DQ + — Convert with Truncation Packed FP16 Values to Signed Doubleword Integers

VCVTTPH2DQ + — Convert with Truncation Packed FP16 Values to Signed Doubleword Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + +
InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.F3.MAP5.W0 5B /r VCVTTPH2DQ xmm1{k1}{z}, xmm2/m64/m16bcstAV/VAVX512-FP16 AVX512VLConvert four packed FP16 values in xmm2/m64/m16bcst to four signed doubleword integers, and store the result in xmm1 using truncation subject to writemask k1.
EVEX.256.F3.MAP5.W0 5B /r VCVTTPH2DQ ymm1{k1}{z}, xmm2/m128/m16bcstAV/VAVX512-FP16 AVX512VLConvert eight packed FP16 values in xmm2/m128/m16bcst to eight signed doubleword integers, and store the result in ymm1 using truncation subject to writemask k1.
EVEX.512.F3.MAP5.W0 5B /r VCVTTPH2DQ zmm1{k1}{z}, ymm2/m256/m16bcst {sae}AV/VAVX512-FP16Convert sixteen packed FP16 values in ymm2/m256/m16bcst to sixteen signed doubleword integers, and store the result in zmm1 using truncation subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AHalfModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts packed FP16 values in the source operand to signed doubleword integers in the destination operand.

+

When a conversion is inexact, a truncated (round toward zero) value is returned. If a converted result is larger than the maximum signed doubleword integer, the floating-point invalid exception is raised, and if this exception is masked, the indefinite integer value is returned.

+

The destination elements are updated according to the writemask.

+

Operation + ¶ +

+

VCVTTPH2DQ dest, src + ¶ +

+
VL = 128, 256 or 512
+KL := VL / 32
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *SRC is memory* and EVEX.b = 1:
+            tsrc := SRC.fp16[0]
+        ELSE
+            tsrc := SRC.fp16[j]
+        DEST.dword[j] := Convert_fp16_to_integer32_truncate(tsrc)
+    ELSE IF *zeroing*:
+        DEST.dword[j] := 0
+    // else dest.dword[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTTPH2DQ __m512i _mm512_cvtt_roundph_epi32 (__m256h a, int sae);
+
+
VCVTTPH2DQ __m512i _mm512_mask_cvtt_roundph_epi32 (__m512i src, __mmask16 k, __m256h a, int sae);
+
+
VCVTTPH2DQ __m512i _mm512_maskz_cvtt_roundph_epi32 (__mmask16 k, __m256h a, int sae);
+
+
VCVTTPH2DQ __m128i _mm_cvttph_epi32 (__m128h a);
+
+
VCVTTPH2DQ __m128i _mm_mask_cvttph_epi32 (__m128i src, __mmask8 k, __m128h a);
+
+
VCVTTPH2DQ __m128i _mm_maskz_cvttph_epi32 (__mmask8 k, __m128h a);
+
+
VCVTTPH2DQ __m256i _mm256_cvttph_epi32 (__m128h a);
+
+
VCVTTPH2DQ __m256i _mm256_mask_cvttph_epi32 (__m256i src, __mmask8 k, __m128h a);
+
+
VCVTTPH2DQ __m256i _mm256_maskz_cvttph_epi32 (__mmask8 k, __m128h a);
+
+
VCVTTPH2DQ __m512i _mm512_cvttph_epi32 (__m256h a);
+
+
VCVTTPH2DQ __m512i _mm512_mask_cvttph_epi32 (__m512i src, __mmask16 k, __m256h a);
+
+
VCVTTPH2DQ __m512i _mm512_maskz_cvttph_epi32 (__mmask16 k, __m256h a);
+
+
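A 128-bit sketch (the low four FP16 elements become four int32 lanes), assuming AVX512-FP16 plus AVX512VL compiler support; _mm_set1_ph and _mm_storeu_si128 are helpers not taken from this page.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128h h = _mm_set1_ph((_Float16)5.75f);
    __m128i d = _mm_cvttph_epi32(h);           /* truncation: 5 in each dword */
    int out[4];
    _mm_storeu_si128((__m128i *)out, d);
    printf("%d\n", out[0]);                    /* expected: 5 */
    return 0;
}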

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vcvttph2qq.html b/x86/vcvttph2qq.html new file mode 100644 index 0000000..3435750 --- /dev/null +++ b/x86/vcvttph2qq.html @@ -0,0 +1,115 @@ + +VCVTTPH2QQ + — Convert with Truncation Packed FP16 Values to Signed Quadword Integers

VCVTTPH2QQ + — Convert with Truncation Packed FP16 Values to Signed Quadword Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + +
InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.MAP5.W0 7A /r VCVTTPH2QQ xmm1{k1}{z}, xmm2/m32/m16bcstAV/VAVX512-FP16 AVX512VLConvert two packed FP16 values in xmm2/m32/m16bcst to two signed quadword integers, and store the result in xmm1 using truncation subject to writemask k1.
EVEX.256.66.MAP5.W0 7A /r VCVTTPH2QQ ymm1{k1}{z}, xmm2/m64/m16bcstAV/VAVX512-FP16 AVX512VLConvert four packed FP16 values in xmm2/m64/m16bcst to four signed quadword integers, and store the result in ymm1 using truncation subject to writemask k1.
EVEX.512.66.MAP5.W0 7A /r VCVTTPH2QQ zmm1{k1}{z}, xmm2/m128/m16bcst {sae}AV/VAVX512-FP16Convert eight packed FP16 values in xmm2/m128/m16bcst to eight signed quadword integers, and store the result in zmm1 using truncation subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AQuarterModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts packed FP16 values in the source operand to signed quadword integers in the destination operand.

+

When a conversion is inexact, a truncated (round toward zero) value is returned. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the indefinite integer value is returned.

+

The destination elements are updated according to the writemask.

+

Operation + ¶ +

+

VCVTTPH2QQ dest, src + ¶ +

+
VL = 128, 256 or 512
+KL := VL / 64
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *SRC is memory* and EVEX.b = 1:
+            tsrc := SRC.fp16[0]
+        ELSE
+            tsrc := SRC.fp16[j]
+        DEST.qword[j] := Convert_fp16_to_integer64_truncate(tsrc)
+    ELSE IF *zeroing*:
+        DEST.qword[j] := 0
+    // else dest.qword[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTTPH2QQ __m512i _mm512_cvtt_roundph_epi64 (__m128h a, int sae);
+
+
VCVTTPH2QQ __m512i _mm512_mask_cvtt_roundph_epi64 (__m512i src, __mmask8 k, __m128h a, int sae);
+
+
VCVTTPH2QQ __m512i _mm512_maskz_cvtt_roundph_epi64 (__mmask8 k, __m128h a, int sae);
+
+
VCVTTPH2QQ __m128i _mm_cvttph_epi64 (__m128h a);
+
+
VCVTTPH2QQ __m128i _mm_mask_cvttph_epi64 (__m128i src, __mmask8 k, __m128h a);
+
+
VCVTTPH2QQ __m128i _mm_maskz_cvttph_epi64 (__mmask8 k, __m128h a);
+
+
VCVTTPH2QQ __m256i _mm256_cvttph_epi64 (__m128h a);
+
+
VCVTTPH2QQ __m256i _mm256_mask_cvttph_epi64 (__m256i src, __mmask8 k, __m128h a);
+
+
VCVTTPH2QQ __m256i _mm256_maskz_cvttph_epi64 (__mmask8 k, __m128h a);
+
+
VCVTTPH2QQ __m512i _mm512_cvttph_epi64 (__m128h a);
+
+
VCVTTPH2QQ __m512i _mm512_mask_cvttph_epi64 (__m512i src, __mmask8 k, __m128h a);
+
+
VCVTTPH2QQ __m512i _mm512_maskz_cvttph_epi64 (__mmask8 k, __m128h a);
+
+
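A widening sketch (eight FP16 elements from an XMM source become eight int64 lanes in a ZMM destination), under the same AVX512-FP16 assumptions as the earlier FP16 sketches.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128h h = _mm_set1_ph((_Float16)(-9.5f));
    __m512i q = _mm512_cvttph_epi64(h);        /* truncation toward zero: -9 */
    long long out[8];
    _mm512_storeu_si512((__m512i *)out, q);
    printf("%lld\n", out[0]);                  /* expected: -9 */
    return 0;
}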

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vcvttph2udq.html b/x86/vcvttph2udq.html new file mode 100644 index 0000000..ca3fdad --- /dev/null +++ b/x86/vcvttph2udq.html @@ -0,0 +1,115 @@ + +VCVTTPH2UDQ + — Convert with Truncation Packed FP16 Values to Unsigned DoublewordIntegers

VCVTTPH2UDQ + — Convert with Truncation Packed FP16 Values to Unsigned Doubleword Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + +
InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.NP.MAP5.W0 78 /r VCVTTPH2UDQ xmm1{k1}{z}, xmm2/m64/m16bcstAV/VAVX512-FP16 AVX512VLConvert four packed FP16 values in xmm2/m64/m16bcst to four unsigned doubleword integers, and store the result in xmm1 using truncation subject to writemask k1.
EVEX.256.NP.MAP5.W0 78 /r VCVTTPH2UDQ ymm1{k1}{z}, xmm2/m128/m16bcstAV/VAVX512-FP16 AVX512VLConvert eight packed FP16 values in xmm2/m128/m16bcst to eight unsigned doubleword integers, and store the result in ymm1 using truncation subject to writemask k1.
EVEX.512.NP.MAP5.W0 78 /r VCVTTPH2UDQ zmm1{k1}{z}, ymm2/m256/m16bcst {sae}AV/VAVX512-FP16Convert sixteen packed FP16 values in ymm2/m256/m16bcst to sixteen unsigned doubleword integers, and store the result in zmm1 using truncation subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AHalfModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts packed FP16 values in the source operand to unsigned doubleword integers in the destination operand.

+

When a conversion is inexact, a truncated (round toward zero) value is returned. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the integer indefinite value is returned.

+

The destination elements are updated according to the writemask.

+

Operation + ¶ +

+

VCVTTPH2UDQ dest, src + ¶ +

+
VL = 128, 256 or 512
+KL := VL / 32
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *SRC is memory* and EVEX.b = 1:
+            tsrc := SRC.fp16[0]
+        ELSE
+            tsrc := SRC.fp16[j]
+        DEST.dword[j] := Convert_fp16_to_unsigned_integer32_truncate(tsrc)
+    ELSE IF *zeroing*:
+        DEST.dword[j] := 0
+    // else dest.dword[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTTPH2UDQ __m512i _mm512_cvtt_roundph_epu32 (__m256h a, int sae);
+
+
VCVTTPH2UDQ __m512i _mm512_mask_cvtt_roundph_epu32 (__m512i src, __mmask16 k, __m256h a, int sae);
+
+
VCVTTPH2UDQ __m512i _mm512_maskz_cvtt_roundph_epu32 (__mmask16 k, __m256h a, int sae);
+
+
VCVTTPH2UDQ __m128i _mm_cvttph_epu32 (__m128h a);
+
+
VCVTTPH2UDQ __m128i _mm_mask_cvttph_epu32 (__m128i src, __mmask8 k, __m128h a);
+
+
VCVTTPH2UDQ __m128i _mm_maskz_cvttph_epu32 (__mmask8 k, __m128h a);
+
+
VCVTTPH2UDQ __m256i _mm256_cvttph_epu32 (__m128h a);
+
+
VCVTTPH2UDQ __m256i _mm256_mask_cvttph_epu32 (__m256i src, __mmask8 k, __m128h a);
+
+
VCVTTPH2UDQ __m256i _mm256_maskz_cvttph_epu32 (__mmask8 k, __m128h a);
+
+
VCVTTPH2UDQ __m512i _mm512_cvttph_epu32 (__m256h a);
+
+
VCVTTPH2UDQ __m512i _mm512_mask_cvttph_epu32 (__m512i src, __mmask16 k, __m256h a);
+
+
VCVTTPH2UDQ __m512i _mm512_maskz_cvttph_epu32 (__mmask16 k, __m256h a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vcvttph2uqq.html b/x86/vcvttph2uqq.html new file mode 100644 index 0000000..a0f186a --- /dev/null +++ b/x86/vcvttph2uqq.html @@ -0,0 +1,115 @@ + +VCVTTPH2UQQ + — Convert with Truncation Packed FP16 Values to Unsigned Quadword Integers

VCVTTPH2UQQ + — Convert with Truncation Packed FP16 Values to Unsigned Quadword Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + +
InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.MAP5.W0 78 /r VCVTTPH2UQQ xmm1{k1}{z}, xmm2/m32/m16bcstAV/VAVX512-FP16 AVX512VLConvert two packed FP16 values in xmm2/m32/m16bcst to two unsigned quadword integers, and store the result in xmm1 using truncation subject to writemask k1.
EVEX.256.66.MAP5.W0 78 /r VCVTTPH2UQQ ymm1{k1}{z}, xmm2/m64/m16bcstAV/VAVX512-FP16 AVX512VLConvert four packed FP16 values in xmm2/m64/m16bcst to four unsigned quadword integers, and store the result in ymm1 using truncation subject to writemask k1.
EVEX.512.66.MAP5.W0 78 /r VCVTTPH2UQQ zmm1{k1}{z}, xmm2/m128/m16bcst {sae}AV/VAVX512-FP16Convert eight packed FP16 values in xmm2/m128/m16bcst to eight unsigned quadword integers, and store the result in zmm1 using truncation subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AQuarterModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts packed FP16 values in the source operand to unsigned quadword integers in the destination operand.

+

When a conversion is inexact, a truncated (round toward zero) value is returned. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the integer indefinite value is returned.

+

The destination elements are updated according to the writemask.

+

Operation + ¶ +

+

VCVTTPH2UQQ dest, src + ¶ +

+
VL = 128, 256 or 512
+KL := VL / 64
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *SRC is memory* and EVEX.b = 1:
+            tsrc := SRC.fp16[0]
+        ELSE
+            tsrc := SRC.fp16[j]
+        DEST.qword[j] := Convert_fp16_to_unsigned_integer64_truncate(tsrc)
+    ELSE IF *zeroing*:
+        DEST.qword[j] := 0
+    // else dest.qword[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTTPH2UQQ __m512i _mm512_cvtt_roundph_epu64 (__m128h a, int sae);
+
+
VCVTTPH2UQQ __m512i _mm512_mask_cvtt_roundph_epu64 (__m512i src, __mmask8 k, __m128h a, int sae);
+
+
VCVTTPH2UQQ __m512i _mm512_maskz_cvtt_roundph_epu64 (__mmask8 k, __m128h a, int sae);
+
+
VCVTTPH2UQQ __m128i _mm_cvttph_epu64 (__m128h a);
+
+
VCVTTPH2UQQ __m128i _mm_mask_cvttph_epu64 (__m128i src, __mmask8 k, __m128h a);
+
+
VCVTTPH2UQQ __m128i _mm_maskz_cvttph_epu64 (__mmask8 k, __m128h a);
+
+
VCVTTPH2UQQ __m256i _mm256_cvttph_epu64 (__m128h a);
+
+
VCVTTPH2UQQ __m256i _mm256_mask_cvttph_epu64 (__m256i src, __mmask8 k, __m128h a);
+
+
VCVTTPH2UQQ __m256i _mm256_maskz_cvttph_epu64 (__mmask8 k, __m128h a);
+
+
VCVTTPH2UQQ __m512i _mm512_cvttph_epu64 (__m128h a);
+
+
VCVTTPH2UQQ __m512i _mm512_mask_cvttph_epu64 (__m512i src, __mmask8 k, __m128h a);
+
+
VCVTTPH2UQQ __m512i _mm512_maskz_cvttph_epu64 (__mmask8 k, __m128h a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vcvttph2uw.html b/x86/vcvttph2uw.html new file mode 100644 index 0000000..6335084 --- /dev/null +++ b/x86/vcvttph2uw.html @@ -0,0 +1,115 @@ + +VCVTTPH2UW + — Convert Packed FP16 Values to Unsigned Word Integers

VCVTTPH2UW + — Convert Packed FP16 Values to Unsigned Word Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + +
InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.NP.MAP5.W0 7C /r VCVTTPH2UW xmm1{k1}{z}, xmm2/m128/m16bcstAV/VAVX512-FP16 AVX512VLConvert eight packed FP16 values in xmm2/m128/m16bcst to eight unsigned word integers, and store the result in xmm1 using truncation subject to writemask k1.
EVEX.256.NP.MAP5.W0 7C /r VCVTTPH2UW ymm1{k1}{z}, ymm2/m256/m16bcstAV/VAVX512-FP16 AVX512VLConvert sixteen packed FP16 values in ymm2/m256/m16bcst to sixteen unsigned word integers, and store the result in ymm1 using truncation subject to writemask k1.
EVEX.512.NP.MAP5.W0 7C /r VCVTTPH2UW zmm1{k1}{z}, zmm2/m512/m16bcst {sae}AV/VAVX512-FP16Convert thirty-two packed FP16 values in zmm2/m512/m16bcst to thirty-two unsigned word integers, and store the result in zmm1 using truncation subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts packed FP16 values in the source operand to unsigned word integers in the destination operand.

+

When a conversion is inexact, a truncated (round toward zero) value is returned. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the integer indefinite value is returned.

+

The destination elements are updated according to the writemask.

+

Operation + ¶ +

+

VCVTTPH2UW dest, src + ¶ +

+
VL = 128, 256 or 512
+KL := VL / 16
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *SRC is memory* and EVEX.b = 1:
+            tsrc := SRC.fp16[0]
+        ELSE
+            tsrc := SRC.fp16[j]
+        DEST.word[j] := Convert_fp16_to_unsigned_integer16_truncate(tsrc)
+    ELSE IF *zeroing*:
+        DEST.word[j] := 0
+    // else dest.word[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTTPH2UW __m512i _mm512_cvtt_roundph_epu16 (__m512h a, int sae);
+
+
VCVTTPH2UW __m512i _mm512_mask_cvtt_roundph_epu16 (__m512i src, __mmask32 k, __m512h a, int sae);
+
+
VCVTTPH2UW __m512i _mm512_maskz_cvtt_roundph_epu16 (__mmask32 k, __m512h a, int sae);
+
+
VCVTTPH2UW __m128i _mm_cvttph_epu16 (__m128h a);
+
+
VCVTTPH2UW __m128i _mm_mask_cvttph_epu16 (__m128i src, __mmask8 k, __m128h a);
+
+
VCVTTPH2UW __m128i _mm_maskz_cvttph_epu16 (__mmask8 k, __m128h a);
+
+
VCVTTPH2UW __m256i _mm256_cvttph_epu16 (__m256h a);
+
+
VCVTTPH2UW __m256i _mm256_mask_cvttph_epu16 (__m256i src, __mmask16 k, __m256h a);
+
+
VCVTTPH2UW __m256i _mm256_maskz_cvttph_epu16 (__mmask16 k, __m256h a);
+
+
VCVTTPH2UW __m512i _mm512_cvttph_epu16 (__m512h a);
+
+
VCVTTPH2UW __m512i _mm512_mask_cvttph_epu16 (__m512i src, __mmask32 k, __m512h a);
+
+
VCVTTPH2UW __m512i _mm512_maskz_cvttph_epu16 (__mmask32 k, __m512h a);
+
+
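A full-width sketch (32 FP16 elements to 32 unsigned words), assuming AVX512-FP16 support; _mm512_set1_ph is an assumed helper and the stored lanes are read back as unsigned short.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m512h h = _mm512_set1_ph((_Float16)3.9f);
    __m512i w = _mm512_cvttph_epu16(h);        /* truncation: 3 in each word */
    unsigned short out[32];
    _mm512_storeu_si512((__m512i *)out, w);
    printf("%u\n", out[0]);                    /* expected: 3 */
    return 0;
}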

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vcvttph2w.html b/x86/vcvttph2w.html new file mode 100644 index 0000000..0e1c9b7 --- /dev/null +++ b/x86/vcvttph2w.html @@ -0,0 +1,115 @@ + +VCVTTPH2W + — Convert Packed FP16 Values to Signed Word Integers

VCVTTPH2W + — Convert Packed FP16 Values to Signed Word Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + +
InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.MAP5.W0 7C /r VCVTTPH2W xmm1{k1}{z}, xmm2/m128/m16bcstAV/VAVX512-FP16 AVX512VLConvert eight packed FP16 values in xmm2/m128/m16bcst to eight signed word integers, and store the result in xmm1 using truncation subject to writemask k1.
EVEX.256.66.MAP5.W0 7C /r VCVTTPH2W ymm1{k1}{z}, ymm2/m256/m16bcstAV/VAVX512-FP16 AVX512VLConvert sixteen packed FP16 values in ymm2/m256/m16bcst to sixteen signed word integers, and store the result in ymm1 using truncation subject to writemask k1.
EVEX.512.66.MAP5.W0 7C /r VCVTTPH2W zmm1{k1}{z}, zmm2/m512/m16bcst {sae}AV/VAVX512-FP16Convert thirty-two packed FP16 values in zmm2/m512/m16bcst to thirty-two signed word integers, and store the result in zmm1 using truncation subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts packed FP16 values in the source operand to signed word integers in the destination operand.

+

When a conversion is inexact, a truncated (round toward zero) value is returned. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the integer indefinite value is returned.

+

The destination elements are updated according to the writemask.

+

Operation + ¶ +

+

VCVTTPH2W dest, src + ¶ +

+
VL = 128, 256 or 512
+KL := VL / 16
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *SRC is memory* and EVEX.b = 1:
+            tsrc := SRC.fp16[0]
+        ELSE
+            tsrc := SRC.fp16[j]
+        DEST.word[j] := Convert_fp16_to_integer16_truncate(tsrc)
+    ELSE IF *zeroing*:
+        DEST.word[j] := 0
+    // else dest.word[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTTPH2W __m512i _mm512_cvtt_roundph_epi16 (__m512h a, int sae);
+
+
VCVTTPH2W __m512i _mm512_mask_cvtt_roundph_epi16 (__m512i src, __mmask32 k, __m512h a, int sae);
+
+
VCVTTPH2W __m512i _mm512_maskz_cvtt_roundph_epi16 (__mmask32 k, __m512h a, int sae);
+
+
VCVTTPH2W __m128i _mm_cvttph_epi16 (__m128h a);
+
+
VCVTTPH2W __m128i _mm_mask_cvttph_epi16 (__m128i src, __mmask8 k, __m128h a);
+
+
VCVTTPH2W __m128i _mm_maskz_cvttph_epi16 (__mmask8 k, __m128h a);
+
+
VCVTTPH2W __m256i _mm256_cvttph_epi16 (__m256h a);
+
+
VCVTTPH2W __m256i _mm256_mask_cvttph_epi16 (__m256i src, __mmask16 k, __m256h a);
+
+
VCVTTPH2W __m256i _mm256_maskz_cvttph_epi16 (__mmask16 k, __m256h a);
+
+
VCVTTPH2W __m512i _mm512_cvttph_epi16 (__m512h a);
+
+
VCVTTPH2W __m512i _mm512_mask_cvttph_epi16 (__m512i src, __mmask32 k, __m512h a);
+
+
VCVTTPH2W __m512i _mm512_maskz_cvttph_epi16 (__mmask32 k, __m512h a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vcvttps2qq.html b/x86/vcvttps2qq.html new file mode 100644 index 0000000..57e0fbf --- /dev/null +++ b/x86/vcvttps2qq.html @@ -0,0 +1,145 @@ + +VCVTTPS2QQ + — Convert With Truncation Packed Single Precision Floating-Point Values toPacked Signed Quadword Integer Values

VCVTTPS2QQ + — Convert With Truncation Packed Single Precision Floating-Point Values to Packed Signed Quadword Integer Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F.W0 7A /r VCVTTPS2QQ xmm1 {k1}{z}, xmm2/m64/m32bcstAV/VAVX512VL AVX512DQConvert two packed single precision floating-point values from xmm2/m64/m32bcst to two packed signed quadword values in xmm1 using truncation subject to writemask k1.
EVEX.256.66.0F.W0 7A /r VCVTTPS2QQ ymm1 {k1}{z}, xmm2/m128/m32bcstAV/VAVX512VL AVX512DQConvert four packed single precision floating-point values from xmm2/m128/m32bcst to four packed signed quadword values in ymm1 using truncation subject to writemask k1.
EVEX.512.66.0F.W0 7A /r VCVTTPS2QQ zmm1 {k1}{z}, ymm2/m256/m32bcst{sae}AV/VAVX512DQConvert eight packed single precision floating-point values from ymm2/m256/m32bcst to eight packed signed quadword values in zmm1 using truncation subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AHalfModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts with truncation packed single precision floating-point values in the source operand to eight signed quadword integers in the destination operand.

+

When a conversion is inexact, a truncated (round toward zero) value is returned. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the indefinite integer value (2^(w-1), where w represents the number of bits in the destination format) is returned.

+

EVEX encoded versions: The source operand is a YMM/XMM/XMM (low 64 bits) register or a 256/128/64-bit memory location. The destination operand is a vector register conditionally updated with writemask k1.

+

Note: EVEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Operation + ¶ +

+

VCVTTPS2QQ (EVEX Encoded Versions) When SRC Operand is a Register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] :=
+            Convert_Single_Precision_To_QuadInteger_Truncate(SRC[k+31:k])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VCVTTPS2QQ (EVEX Encoded Versions) When SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b == 1)
+                THEN
+                    DEST[i+63:i] :=
+            Convert_Single_Precision_To_QuadInteger_Truncate(SRC[31:0])
+                ELSE
+                    DEST[i+63:i] :=
+            Convert_Single_Precision_To_QuadInteger_Truncate(SRC[k+31:k])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTTPS2QQ __m512i _mm512_cvttps_epi64( __m256 a);
+
+
VCVTTPS2QQ __m512i _mm512_mask_cvttps_epi64( __m512i s, __mmask16 k, __m256 a);
+
+
VCVTTPS2QQ __m512i _mm512_maskz_cvttps_epi64( __mmask16 k, __m256 a);
+
+
VCVTTPS2QQ __m512i _mm512_cvtt_roundps_epi64( __m256 a, int sae);
+
+
VCVTTPS2QQ __m512i _mm512_mask_cvtt_roundps_epi64( __m512i s, __mmask8 k, __m256 a, int sae);
+
+
VCVTTPS2QQ __m512i _mm512_maskz_cvtt_roundps_epi64( __mmask8 k, __m256 a, int sae);
+
+
VCVTTPS2QQ __m256i _mm256_mask_cvttps_epi64( __m256i s, __mmask8 k, __m128 a);
+
+
VCVTTPS2QQ __m256i _mm256_maskz_cvttps_epi64( __mmask8 k, __m128 a);
+
+
VCVTTPS2QQ __m128i _mm_mask_cvttps_epi64( __m128i s, __mmask8 k, __m128 a);
+
+
VCVTTPS2QQ __m128i _mm_maskz_cvttps_epi64( __mmask8 k, __m128 a);
+
+
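Illustrative usage sketch (not part of the reference; the helper name and buffer layout are assumptions, and an AVX512DQ-capable compiler is assumed): the unmasked 512-bit form truncates eight floats to eight signed quadwords.

#include <immintrin.h>
/* Truncates 8 single precision values to 8 signed 64-bit integers (VCVTTPS2QQ zmm, ymm). */
void truncate_ps_to_epi64(const float *src, long long *dst) {
    __m256 v = _mm256_loadu_ps(src);          /* eight single precision inputs  */
    __m512i q = _mm512_cvttps_epi64(v);       /* round-toward-zero conversion   */
    _mm512_storeu_si512((void *)dst, q);      /* eight signed quadword results  */
}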

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vcvttps2udq.html b/x86/vcvttps2udq.html new file mode 100644 index 0000000..e16cc6f --- /dev/null +++ b/x86/vcvttps2udq.html @@ -0,0 +1,143 @@ + +VCVTTPS2UDQ + — Convert With Truncation Packed Single Precision Floating-Point Values toPacked Unsigned Doubleword Integer Values

VCVTTPS2UDQ + — Convert With Truncation Packed Single Precision Floating-Point Values to Packed Unsigned Doubleword Integer Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.0F.W0 78 /r VCVTTPS2UDQ xmm1 {k1}{z}, xmm2/m128/m32bcstAV/VAVX512VL AVX512FConvert four packed single precision floating-point values from xmm2/m128/m32bcst to four packed unsigned doubleword values in xmm1 using truncation subject to writemask k1.
EVEX.256.0F.W0 78 /r VCVTTPS2UDQ ymm1 {k1}{z}, ymm2/m256/m32bcstAV/VAVX512VL AVX512FConvert eight packed single precision floating-point values from ymm2/m256/m32bcst to eight packed unsigned doubleword values in ymm1 using truncation subject to writemask k1.
EVEX.512.0F.W0 78 /r VCVTTPS2UDQ zmm1 {k1}{z}, zmm2/m512/m32bcst{sae}AV/VAVX512FConvert sixteen packed single precision floating-point values from zmm2/m512/m32bcst to sixteen packed unsigned doubleword values in zmm1 using truncation subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts with truncation up to sixteen packed single precision floating-point values in the source operand to unsigned doubleword integers in the destination operand.

+

When a conversion is inexact, a truncated (round toward zero) value is returned. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the integer value 2^w – 1 is returned, where w represents the number of bits in the destination format.

+

EVEX encoded versions: The source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

Note: EVEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Operation + ¶ +

+

VCVTTPS2UDQ (EVEX Encoded Versions) When SRC Operand is a Register + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] :=
+            Convert_Single_Precision_Floating_Point_To_UInteger_Truncate(SRC[i+31:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VCVTTPS2UDQ (EVEX Encoded Versions) When SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+31:i] :=
+            Convert_Single_Precision_Floating_Point_To_UInteger_Truncate(SRC[31:0])
+                ELSE
+                    DEST[i+31:i] :=
+            Convert_Single_Precision_Floating_Point_To_UInteger_Truncate(SRC[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTTPS2UDQ __m512i _mm512_cvttps_epu32( __m512 a);
+
+
VCVTTPS2UDQ __m512i _mm512_mask_cvttps_epu32( __m512i s, __mmask16 k, __m512 a);
+
+
VCVTTPS2UDQ __m512i _mm512_maskz_cvttps_epu32( __mmask16 k, __m512 a);
+
+
VCVTTPS2UDQ __m512i _mm512_cvtt_roundps_epu32( __m512 a, int sae);
+
+
VCVTTPS2UDQ __m512i _mm512_mask_cvtt_roundps_epu32( __m512i s, __mmask16 k, __m512 a, int sae);
+
+
VCVTTPS2UDQ __m512i _mm512_maskz_cvtt_roundps_epu32( __mmask16 k, __m512 a, int sae);
+
+
VCVTTPS2UDQ __m256i _mm256_mask_cvttps_epu32( __m256i s, __mmask8 k, __m256 a);
+
+
VCVTTPS2UDQ __m256i _mm256_maskz_cvttps_epu32( __mmask8 k, __m256 a);
+
+
VCVTTPS2UDQ __m128i _mm_mask_cvttps_epu32( __m128i s, __mmask8 k, __m128 a);
+
+
VCVTTPS2UDQ __m128i _mm_maskz_cvttps_epu32( __mmask8 k, __m128 a);
+
+
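Illustrative usage sketch (not part of the reference; the helper name is an assumption): zeroing-masked truncation of sixteen floats to unsigned doublewords, so lanes cleared in the mask become zero.

#include <immintrin.h>
/* Truncates 16 floats to 16 unsigned 32-bit integers, zeroing masked-off lanes (VCVTTPS2UDQ zmm{k1}{z}, zmm). */
void truncate_ps_to_epu32(const float *src, unsigned *dst, __mmask16 keep) {
    __m512 v = _mm512_loadu_ps(src);
    __m512i u = _mm512_maskz_cvttps_epu32(keep, v);
    _mm512_storeu_si512((void *)dst, u);
}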

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vcvttps2uqq.html b/x86/vcvttps2uqq.html new file mode 100644 index 0000000..296ba08 --- /dev/null +++ b/x86/vcvttps2uqq.html @@ -0,0 +1,145 @@ + +VCVTTPS2UQQ + — Convert With Truncation Packed Single Precision Floating-Point Values toPacked Unsigned Quadword Integer Values

VCVTTPS2UQQ + — Convert With Truncation Packed Single Precision Floating-Point Values to Packed Unsigned Quadword Integer Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F.W0 78 /r VCVTTPS2UQQ xmm1 {k1}{z}, xmm2/m64/m32bcstAV/VAVX512VL AVX512DQConvert two packed single precision floating-point values from xmm2/m64/m32bcst to two packed unsigned quadword values in xmm1 using truncation subject to writemask k1.
EVEX.256.66.0F.W0 78 /r VCVTTPS2UQQ ymm1 {k1}{z}, xmm2/m128/m32bcstAV/VAVX512VL AVX512DQConvert four packed single precision floating-point values from xmm2/m128/m32bcst to four packed unsigned quadword values in ymm1 using truncation subject to writemask k1.
EVEX.512.66.0F.W0 78 /r VCVTTPS2UQQ zmm1 {k1}{z}, ymm2/m256/m32bcst{sae}AV/VAVX512DQConvert eight packed single precision floating-point values from ymm2/m256/m32bcst to eight packed unsigned quadword values in zmm1 using truncation subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AHalfModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts with truncation up to eight packed single precision floating-point values in the source operand to unsigned quadword integers in the destination operand.

+

When a conversion is inexact, a truncated (round toward zero) value is returned. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the integer value 2^w – 1 is returned, where w represents the number of bits in the destination format.

+

EVEX encoded versions: The source operand is a YMM/XMM/XMM (low 64 bits) register or a 256/128/64-bit memory location. The destination operand is a vector register conditionally updated with writemask k1.

+

Note: EVEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Operation + ¶ +

+

VCVTTPS2UQQ (EVEX Encoded Versions) When SRC Operand is a Register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] :=
+            Convert_Single_Precision_To_UQuadInteger_Truncate(SRC[k+31:k])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VCVTTPS2UQQ (EVEX Encoded Versions) When SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b == 1)
+                THEN
+                    DEST[i+63:i] :=
+            Convert_Single_Precision_To_UQuadInteger_Truncate(SRC[31:0])
+                ELSE
+                    DEST[i+63:i] :=
+            Convert_Single_Precision_To_UQuadInteger_Truncate(SRC[k+31:k])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTTPS2UQQ _mm<size>[_mask[z]]_cvtt[_round]ps_epu64
VCVTTPS2UQQ __m512i _mm512_cvttps_epu64( __m256 a);
+
+
VCVTTPS2UQQ __m512i _mm512_mask_cvttps_epu64( __m512i s, __mmask8 k, __m256 a);
+
+
VCVTTPS2UQQ __m512i _mm512_maskz_cvttps_epu64( __mmask8 k, __m256 a);
+
+
VCVTTPS2UQQ __m512i _mm512_cvtt_roundps_epu64( __m256 a, int sae);
+
+
VCVTTPS2UQQ __m512i _mm512_mask_cvtt_roundps_epu64( __m512i s, __mmask8 k, __m256 a, int sae);
+
+
VCVTTPS2UQQ __m512i _mm512_maskz_cvtt_roundps_epu64( __mmask8 k, __m256 a, int sae);
+
+
VCVTTPS2UQQ __m256i _mm256_mask_cvttps_epu64( __m256i s, __mmask8 k, __m128 a);
+
+
VCVTTPS2UQQ __m256i _mm256_maskz_cvttps_epu64( __mmask8 k, __m128 a);
+
+
VCVTTPS2UQQ __m128i _mm_mask_cvttps_epu64( __m128i s, __mmask8 k, __m128 a);
+
+
VCVTTPS2UQQ __m128i _mm_maskz_cvttps_epu64( __mmask8 k, __m128 a);
+
+
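Illustrative usage sketch (not part of the reference; the helper name is an assumption, AVX512DQ support assumed): unmasked truncation of eight floats to unsigned quadwords.

#include <immintrin.h>
/* Truncates 8 floats to 8 unsigned 64-bit integers (VCVTTPS2UQQ zmm, ymm). */
void truncate_ps_to_epu64(const float *src, unsigned long long *dst) {
    __m256 v = _mm256_loadu_ps(src);
    __m512i u = _mm512_cvttps_epu64(v);
    _mm512_storeu_si512((void *)dst, u);
}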

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vcvttsd2usi.html b/x86/vcvttsd2usi.html new file mode 100644 index 0000000..7d9e15c --- /dev/null +++ b/x86/vcvttsd2usi.html @@ -0,0 +1,84 @@ + +VCVTTSD2USI + — Convert With Truncation Scalar Double Precision Floating-Point Value toUnsigned Integer

VCVTTSD2USI + — Convert With Truncation Scalar Double Precision Floating-Point Value to Unsigned Integer

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.F2.0F.W0 78 /r VCVTTSD2USI r32, xmm1/m64{sae}AV/VAVX512FConvert one double precision floating-point value from xmm1/m64 to one unsigned doubleword integer r32 using truncation.
EVEX.LLIG.F2.0F.W1 78 /r VCVTTSD2USI r64, xmm1/m64{sae}AV/N.E.1AVX512FConvert one double precision floating-point value from xmm1/m64 to one unsigned quadword integer zero-extended into r64 using truncation.
+
+

1. For this specific instruction, EVEX.W in non-64 bit is ignored; the instruction behaves as if the W0 version is used.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 FixedModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts with truncation a double precision floating-point value in the source operand (the second operand) to an unsigned doubleword integer (or unsigned quadword integer if operand size is 64 bits) in the destination operand (the first operand). The source operand can be an XMM register or a 64-bit memory location. The destination operand is a general-purpose register. When the source operand is an XMM register, the double precision floating-point value is contained in the low quadword of the register.

+

When a conversion is inexact, a truncated (round toward zero) value is returned. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the integer value 2^w – 1 is returned, where w represents the number of bits in the destination format.

+

EVEX.W1 version: promotes the instruction to produce 64-bit data in 64-bit mode.

+

Operation + ¶ +

+

VCVTTSD2USI (EVEX Encoded Version) + ¶ +

+
IF 64-Bit Mode and OperandSize = 64
+    THEN DEST[63:0] := Convert_Double_Precision_Floating_Point_To_UInteger_Truncate(SRC[63:0]);
+    ELSE DEST[31:0] := Convert_Double_Precision_Floating_Point_To_UInteger_Truncate(SRC[63:0]);
+FI
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTTSD2USI unsigned int _mm_cvttsd_u32(__m128d);
+
+
VCVTTSD2USI unsigned int _mm_cvtt_roundsd_u32(__m128d, int sae);
+
+
VCVTTSD2USI unsigned __int64 _mm_cvttsd_u64(__m128d);
+
+
VCVTTSD2USI unsigned __int64 _mm_cvtt_roundsd_u64(__m128d, int sae);
+
+
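Illustrative usage sketch (not part of the reference; the helper name is an assumption): the 32-bit scalar form via the intrinsic listed above.

#include <immintrin.h>
/* Truncates one double to an unsigned 32-bit integer (VCVTTSD2USI r32, xmm). */
unsigned truncate_sd_to_u32(double x) {
    return _mm_cvttsd_u32(_mm_set_sd(x));
}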

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-48, “Type E3NF Class Exception Conditions.”

diff --git a/x86/vcvttsh2si.html b/x86/vcvttsh2si.html new file mode 100644 index 0000000..a98609f --- /dev/null +++ b/x86/vcvttsh2si.html @@ -0,0 +1,85 @@ + +VCVTTSH2SI + — Convert with Truncation Low FP16 Value to a Signed Integer

VCVTTSH2SI + — Convert with Truncation Low FP16 Value to a Signed Integer

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.F3.MAP5.W0 2C /r VCVTTSH2SI r32, xmm1/m16 {sae}AV/V1AVX512-FP16Convert FP16 value in the low element of xmm1/m16 to a signed integer and store the result in r32 using truncation.
EVEX.LLIG.F3.MAP5.W1 2C /r VCVTTSH2SI r64, xmm1/m16 {sae}AV/N.E.AVX512-FP16Convert FP16 value in the low element of xmm1/m16 to a signed integer and store the result in r64 using truncation.
+
+

1. Outside of 64b mode, the EVEX.W field is ignored. The instruction behaves as if W=0 was used.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts the low FP16 element in the source operand to a signed integer in the destination general purpose register.

+

When a conversion is inexact, a truncated (round toward zero) value is returned. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the integer indefinite value is returned.

+

Operation + ¶ +

+

VCVTTSH2SI dest, src + ¶ +

+
IF 64-mode and OperandSize == 64:
+    DEST.qword := Convert_fp16_to_integer64_truncate(SRC.fp16[0])
+ELSE:
+    DEST.dword := Convert_fp16_to_integer32_truncate(SRC.fp16[0])
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTTSH2SI int _mm_cvtt_roundsh_i32 (__m128h a, int sae);
+
+
VCVTTSH2SI __int64 _mm_cvtt_roundsh_i64 (__m128h a, int sae);
+
+
VCVTTSH2SI int _mm_cvttsh_i32 (__m128h a);
+
+
VCVTTSH2SI __int64 _mm_cvttsh_i64 (__m128h a);
+
+
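Illustrative usage sketch (not part of the reference; the helper name is an assumption, and _Float16 plus AVX512-FP16 compiler support are assumed): truncating the low FP16 element to a signed 32-bit integer.

#include <immintrin.h>
/* Truncates one FP16 value to a signed 32-bit integer (VCVTTSH2SI r32, xmm). */
int truncate_sh_to_i32(_Float16 x) {
    return _mm_cvttsh_i32(_mm_set_sh(x));
}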

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-48, “Type E3NF Class Exception Conditions.”

diff --git a/x86/vcvttsh2usi.html b/x86/vcvttsh2usi.html new file mode 100644 index 0000000..edf1e11 --- /dev/null +++ b/x86/vcvttsh2usi.html @@ -0,0 +1,85 @@ + +VCVTTSH2USI + — Convert with Truncation Low FP16 Value to an Unsigned Integer

VCVTTSH2USI + — Convert with Truncation Low FP16 Value to an Unsigned Integer

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.F3.MAP5.W0 78 /r VCVTTSH2USI r32, xmm1/m16 {sae}AV/V1AVX512-FP16Convert FP16 value in the low element of xmm1/m16 to an unsigned integer and store the result in r32 using truncation.
EVEX.LLIG.F3.MAP5.W1 78 /r VCVTTSH2USI r64, xmm1/m16 {sae}AV/N.E.AVX512-FP16Convert FP16 value in the low element of xmm1/m16 to an unsigned integer and store the result in r64 using truncation.
+
+

1. Outside of 64b mode, the EVEX.W field is ignored. The instruction behaves as if W=0 was used.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts the low FP16 element in the source operand to an unsigned integer in the destination general purpose register.

+

When a conversion is inexact, a truncated (round toward zero) value is returned. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the integer indefinite value is returned.

+

Operation + ¶ +

+

VCVTTSH2USI dest, src + ¶ +

+
IF 64-mode and OperandSize == 64:
+    DEST.qword := Convert_fp16_to_unsigned_integer64_truncate(SRC.fp16[0])
+ELSE:
+    DEST.dword := Convert_fp16_to_unsigned_integer32_truncate(SRC.fp16[0])
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTTSH2USI unsigned int _mm_cvtt_roundsh_u32 (__m128h a, int sae);
+
+
VCVTTSH2USI unsigned __int64 _mm_cvtt_roundsh_u64 (__m128h a, int sae);
+
+
VCVTTSH2USI unsigned int _mm_cvttsh_u32 (__m128h a);
+
+
VCVTTSH2USI unsigned __int64 _mm_cvttsh_u64 (__m128h a);
+
+
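Illustrative usage sketch (not part of the reference; the helper name is an assumption, and _Float16 plus AVX512-FP16 compiler support are assumed): the unsigned counterpart of the previous scalar conversion.

#include <immintrin.h>
/* Truncates one FP16 value to an unsigned 32-bit integer (VCVTTSH2USI r32, xmm). */
unsigned truncate_sh_to_u32(_Float16 x) {
    return _mm_cvttsh_u32(_mm_set_sh(x));
}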

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-48, “Type E3NF Class Exception Conditions.”

diff --git a/x86/vcvttss2usi.html b/x86/vcvttss2usi.html new file mode 100644 index 0000000..319f5e4 --- /dev/null +++ b/x86/vcvttss2usi.html @@ -0,0 +1,87 @@ + +VCVTTSS2USI + — Convert With Truncation Scalar Single Precision Floating-Point Value toUnsigned Integer

VCVTTSS2USI + — Convert With Truncation Scalar Single Precision Floating-Point Value to Unsigned Integer

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.F3.0F.W0 78 /r VCVTTSS2USI r32, xmm1/m32{sae}AV/VAVX512FConvert one single precision floating-point value from xmm1/m32 to one unsigned doubleword integer in r32 using truncation.
EVEX.LLIG.F3.0F.W1 78 /r VCVTTSS2USI r64, xmm1/m32{sae}AV/N.E.1AVX512FConvert one single precision floating-point value from xmm1/m32 to one unsigned quadword integer in r64 using truncation.
+
+

1. For this specific instruction, EVEX.W in non-64 bit is ignored; the instruction behaves as if the W0 version is used.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 FixedModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts with truncation a single precision floating-point value in the source operand (the second operand) to an unsigned doubleword integer (or unsigned quadword integer if operand size is 64 bits) in the destination operand (the first operand). The source operand can be an XMM register or a memory location. The destination operand is a general-purpose register. When the source operand is an XMM register, the single precision floating-point value is contained in the low doubleword of the register.

+

When a conversion is inexact, a truncated (round toward zero) value is returned. If a converted result cannot be represented in the destination format, the floating-point invalid exception is raised, and if this exception is masked, the integer value 2^w – 1 is returned, where w represents the number of bits in the destination format.

+

EVEX.W1 version: promotes the instruction to produce 64-bit data in 64-bit mode.

+

Note: EVEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Operation + ¶ +

+

VCVTTSS2USI (EVEX Encoded Version) + ¶ +

+
IF 64-bit Mode and OperandSize = 64
+THEN
+    DEST[63:0] := Convert_Single_Precision_Floating_Point_To_UInteger_Truncate(SRC[31:0]);
+ELSE
+    DEST[31:0] := Convert_Single_Precision_Floating_Point_To_UInteger_Truncate(SRC[31:0]);
+FI;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTTSS2USI unsigned int _mm_cvttss_u32( __m128 a);
+
+
VCVTTSS2USI unsigned int _mm_cvtt_roundss_u32( __m128 a, int sae);
+
+
VCVTTSS2USI unsigned __int64 _mm_cvttss_u64( __m128 a);
+
+
VCVTTSS2USI unsigned __int64 _mm_cvtt_roundss_u64( __m128 a, int sae);
+
+
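Illustrative usage sketch (not part of the reference; the helper name is an assumption): the 32-bit scalar form.

#include <immintrin.h>
/* Truncates one float to an unsigned 32-bit integer (VCVTTSS2USI r32, xmm). */
unsigned truncate_ss_to_u32(float x) {
    return _mm_cvttss_u32(_mm_set_ss(x));
}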

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-48, “Type E3NF Class Exception Conditions.”

diff --git a/x86/vcvtudq2pd.html b/x86/vcvtudq2pd.html new file mode 100644 index 0000000..7547eb7 --- /dev/null +++ b/x86/vcvtudq2pd.html @@ -0,0 +1,143 @@ + +VCVTUDQ2PD + — Convert Packed Unsigned Doubleword Integers to Packed Double PrecisionFloating-Point Values

VCVTUDQ2PD + — Convert Packed Unsigned Doubleword Integers to Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.F3.0F.W0 7A /r VCVTUDQ2PD xmm1 {k1}{z}, xmm2/m64/m32bcstAV/VAVX512VL AVX512FConvert two packed unsigned doubleword integers from xmm2/m64/m32bcst to packed double precision floating-point values in xmm1 with writemask k1.
EVEX.256.F3.0F.W0 7A /r VCVTUDQ2PD ymm1 {k1}{z}, xmm2/m128/m32bcstAV/VAVX512VL AVX512FConvert four packed unsigned doubleword integers from xmm2/m128/m32bcst to packed double precision floating-point values in ymm1 with writemask k1.
EVEX.512.F3.0F.W0 7A /r VCVTUDQ2PD zmm1 {k1}{z}, ymm2/m256/m32bcstAV/VAVX512FConvert eight packed unsigned doubleword integers from ymm2/m256/m32bcst to eight packed double precision floating-point values in zmm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AHalfModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts packed unsigned doubleword integers in the source operand (second operand) to packed double precision floating-point values in the destination operand (first operand).

+

The source operand is a YMM/XMM/XMM (low 64 bits) register, a 256/128/64-bit memory location or a 256/128/64-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

Attempt to encode this instruction with EVEX embedded rounding is ignored.

+

Note: EVEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Operation + ¶ +

+

VCVTUDQ2PD (EVEX Encoded Versions) When SRC Operand is a Register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] :=
+            Convert_UInteger_To_Double_Precision_Floating_Point(SRC[k+31:k])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VCVTUDQ2PD (EVEX Encoded Versions) When SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+63:i] :=
+            Convert_UInteger_To_Double_Precision_Floating_Point(SRC[31:0])
+                ELSE
+                    DEST[i+63:i] :=
+            Convert_UInteger_To_Double_Precision_Floating_Point(SRC[k+31:k])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTUDQ2PD __m512d _mm512_cvtepu32_pd( __m256i a);
+
+
VCVTUDQ2PD __m512d _mm512_mask_cvtepu32_pd( __m512d s, __mmask8 k, __m256i a);
+
+
VCVTUDQ2PD __m512d _mm512_maskz_cvtepu32_pd( __mmask8 k, __m256i a);
+
+
VCVTUDQ2PD __m256d _mm256_cvtepu32_pd( __m128i a);
+
+
VCVTUDQ2PD __m256d _mm256_mask_cvtepu32_pd( __m256d s, __mmask8 k, __m128i a);
+
+
VCVTUDQ2PD __m256d _mm256_maskz_cvtepu32_pd( __mmask8 k, __m128i a);
+
+
VCVTUDQ2PD __m128d _mm_cvtepu32_pd( __m128i a);
+
+
VCVTUDQ2PD __m128d _mm_mask_cvtepu32_pd( __m128d s, __mmask8 k, __m128i a);
+
+
VCVTUDQ2PD __m128d _mm_maskz_cvtepu32_pd( __mmask8 k, __m128i a);
+
+
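Illustrative usage sketch (not part of the reference; the helper name and memory layout are assumptions): the 512-bit unmasked form widening eight unsigned doublewords to doubles.

#include <immintrin.h>
/* Converts 8 unsigned 32-bit integers to 8 doubles (VCVTUDQ2PD zmm, ymm). */
void cvt_epu32_to_pd(const unsigned *src, double *dst) {
    __m256i v = _mm256_loadu_si256((const __m256i *)src);
    _mm512_storeu_pd(dst, _mm512_cvtepu32_pd(v));
}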

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-51, “Type E5 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vcvtudq2ph.html b/x86/vcvtudq2ph.html new file mode 100644 index 0000000..7549aad --- /dev/null +++ b/x86/vcvtudq2ph.html @@ -0,0 +1,119 @@ + +VCVTUDQ2PH + — Convert Packed Unsigned Doubleword Integers to Packed FP16 Values

VCVTUDQ2PH + — Convert Packed Unsigned Doubleword Integers to Packed FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.F2.MAP5.W0 7A /r VCVTUDQ2PH xmm1{k1}{z}, xmm2/m128/m32bcstAV/VAVX512-FP16 AVX512VLConvert four packed unsigned doubleword integers from xmm2/m128/m32bcst to packed FP16 values, and store the result in xmm1 subject to writemask k1.
EVEX.256.F2.MAP5.W0 7A /r VCVTUDQ2PH xmm1{k1}{z}, ymm2/m256/m32bcstAV/VAVX512-FP16 AVX512VLConvert eight packed unsigned doubleword integers from ymm2/m256/m32bcst to packed FP16 values, and store the result in xmm1 subject to writemask k1.
EVEX.512.F2.MAP5.W0 7A /r VCVTUDQ2PH ymm1{k1}{z}, zmm2/m512/m32bcst {er}AV/VAVX512-FP16Convert sixteen packed unsigned doubleword integers from zmm2/m512/m32bcst to packed FP16 values, and store the result in ymm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts packed unsigned doubleword integers in the source operand to packed FP16 values in the destination operand. The destination elements are updated according to the writemask.

+

EVEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

If the result of the conversion overflows and MXCSR.OM=0, then a SIMD exception will be raised with OE=1, PE=1.

+

Operation + ¶ +

+

VCVTUDQ2PH dest, src + ¶ +

+
VL = 128, 256 or 512
+KL := VL / 32
+IF *SRC is a register* and (VL = 512) AND (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE:
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *SRC is memory* and EVEX.b = 1:
+            tsrc := SRC.dword[0]
+        ELSE
+            tsrc := SRC.dword[j]
+        DEST.fp16[j] := Convert_unsigned_integer32_to_fp16(tsrc)
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL/2] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTUDQ2PH __m256h _mm512_cvt_roundepu32_ph (__m512i a, int rounding);
+
+
VCVTUDQ2PH __m256h _mm512_mask_cvt_roundepu32_ph (__m256h src, __mmask16 k, __m512i a, int rounding);
+
+
VCVTUDQ2PH __m256h _mm512_maskz_cvt_roundepu32_ph (__mmask16 k, __m512i a, int rounding);
+
+
VCVTUDQ2PH __m128h _mm_cvtepu32_ph (__m128i a);
+
+
VCVTUDQ2PH __m128h _mm_mask_cvtepu32_ph (__m128h src, __mmask8 k, __m128i a);
+
+
VCVTUDQ2PH __m128h _mm_maskz_cvtepu32_ph (__mmask8 k, __m128i a);
+
+
VCVTUDQ2PH __m128h _mm256_cvtepu32_ph (__m256i a);
+
+
VCVTUDQ2PH __m128h _mm256_mask_cvtepu32_ph (__m128h src, __mmask8 k, __m256i a);
+
+
VCVTUDQ2PH __m128h _mm256_maskz_cvtepu32_ph (__mmask8 k, __m256i a);
+
+
VCVTUDQ2PH __m256h _mm512_cvtepu32_ph (__m512i a);
+
+
VCVTUDQ2PH __m256h _mm512_mask_cvtepu32_ph (__m256h src, __mmask16 k, __m512i a);
+
+
VCVTUDQ2PH __m256h _mm512_maskz_cvtepu32_ph (__mmask16 k, __m512i a);
+
+
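Illustrative usage sketch (not part of the reference; the helper name is an assumption, and _Float16 with AVX512-FP16 compiler support is assumed): the 512-bit unmasked form narrowing sixteen unsigned doublewords to FP16.

#include <immintrin.h>
/* Converts 16 unsigned 32-bit integers to 16 FP16 values (VCVTUDQ2PH ymm, zmm). */
void cvt_epu32_to_ph(const unsigned *src, _Float16 *dst) {
    __m512i v = _mm512_loadu_si512((const void *)src);
    _mm256_storeu_ph(dst, _mm512_cvtepu32_ph(v));
}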

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vcvtudq2ps.html b/x86/vcvtudq2ps.html new file mode 100644 index 0000000..8259f6f --- /dev/null +++ b/x86/vcvtudq2ps.html @@ -0,0 +1,152 @@ + +VCVTUDQ2PS + — Convert Packed Unsigned Doubleword Integers to Packed Single PrecisionFloating-Point Values

VCVTUDQ2PS + — Convert Packed Unsigned Doubleword Integers to Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.F2.0F.W0 7A /r VCVTUDQ2PS xmm1 {k1}{z}, xmm2/m128/m32bcstAV/VAVX512VL AVX512FConvert four packed unsigned doubleword integers from xmm2/m128/m32bcst to packed single precision floating-point values in xmm1 with writemask k1.
EVEX.256.F2.0F.W0 7A /r VCVTUDQ2PS ymm1 {k1}{z}, ymm2/m256/m32bcstAV/VAVX512VL AVX512FConvert eight packed unsigned doubleword integers from ymm2/m256/m32bcst to packed single precision floating-point values in zmm1 with writemask k1.
EVEX.512.F2.0F.W0 7A /r VCVTUDQ2PS zmm1 {k1}{z}, zmm2/m512/m32bcst{er}AV/VAVX512FConvert sixteen packed unsigned doubleword integers from zmm2/m512/m32bcst to sixteen packed single precision floating-point values in zmm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts packed unsigned doubleword integers in the source operand (second operand) to single precision floating-point values in the destination operand (first operand).

+

The source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

Note: EVEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Operation + ¶ +

+

VCVTUDQ2PS (EVEX Encoded Version) When SRC Operand is a Register + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] :=
+            Convert_UInteger_To_Single_Precision_Floating_Point(SRC[i+31:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VCVTUDQ2PS (EVEX Encoded Version) When SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+31:i] :=
+            Convert_UInteger_To_Single_Precision_Floating_Point(SRC[31:0])
+                ELSE
+                    DEST[i+31:i] :=
+            Convert_UInteger_To_Single_Precision_Floating_Point(SRC[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTUDQ2PS __m512 _mm512_cvtepu32_ps( __m512i a);
+
+
VCVTUDQ2PS __m512 _mm512_mask_cvtepu32_ps( __m512 s, __mmask16 k, __m512i a);
+
+
VCVTUDQ2PS __m512 _mm512_maskz_cvtepu32_ps( __mmask16 k, __m512i a);
+
+
VCVTUDQ2PS __m512 _mm512_cvt_roundepu32_ps( __m512i a, int r);
+
+
VCVTUDQ2PS __m512 _mm512_mask_cvt_roundepu32_ps( __m512 s, __mmask16 k, __m512i a, int r);
+
+
VCVTUDQ2PS __m512 _mm512_maskz_cvt_roundepu32_ps( __mmask16 k, __m512i a, int r);
+
+
VCVTUDQ2PS __m256 _mm256_cvtepu32_ps( __m256i a);
+
+
VCVTUDQ2PS __m256 _mm256_mask_cvtepu32_ps( __m256 s, __mmask8 k, __m256i a);
+
+
VCVTUDQ2PS __m256 _mm256_maskz_cvtepu32_ps( __mmask8 k, __m256i a);
+
+
VCVTUDQ2PS __m128 _mm_cvtepu32_ps( __m128i a);
+
+
VCVTUDQ2PS __m128 _mm_mask_cvtepu32_ps( __m128 s, __mmask8 k, __m128i a);
+
+
VCVTUDQ2PS __m128 _mm_maskz_cvtepu32_ps( __mmask8 k, __m128i a);
+
+
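Illustrative usage sketch (not part of the reference; the helper name is an assumption): the 512-bit unmasked form.

#include <immintrin.h>
/* Converts 16 unsigned 32-bit integers to 16 floats (VCVTUDQ2PS zmm, zmm). */
void cvt_epu32_to_ps(const unsigned *src, float *dst) {
    __m512i v = _mm512_loadu_si512((const void *)src);
    _mm512_storeu_ps(dst, _mm512_cvtepu32_ps(v));
}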

SIMD Floating-Point Exceptions + ¶ +

+

Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vcvtuqq2pd.html b/x86/vcvtuqq2pd.html new file mode 100644 index 0000000..dfad42d --- /dev/null +++ b/x86/vcvtuqq2pd.html @@ -0,0 +1,152 @@ + +VCVTUQQ2PD + — Convert Packed Unsigned Quadword Integers to Packed Double PrecisionFloating-Point Values

VCVTUQQ2PD + — Convert Packed Unsigned Quadword Integers to Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.F3.0F.W1 7A /r VCVTUQQ2PD xmm1 {k1}{z}, xmm2/m128/m64bcstAV/VAVX512VL AVX512DQConvert two packed unsigned quadword integers from xmm2/m128/m64bcst to two packed double precision floating-point values in xmm1 with writemask k1.
EVEX.256.F3.0F.W1 7A /r VCVTUQQ2PD ymm1 {k1}{z}, ymm2/m256/m64bcstAV/VAVX512VL AVX512DQConvert four packed unsigned quadword integers from ymm2/m256/m64bcst to packed double precision floating-point values in ymm1 with writemask k1.
EVEX.512.F3.0F.W1 7A /r VCVTUQQ2PD zmm1 {k1}{z}, zmm2/m512/m64bcst{er}AV/VAVX512DQConvert eight packed unsigned quadword integers from zmm2/m512/m64bcst to eight packed double precision floating-point values in zmm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts packed unsigned quadword integers in the source operand (second operand) to packed double precision floating-point values in the destination operand (first operand).

+

The source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

Note: EVEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Operation + ¶ +

+

VCVTUQQ2PD (EVEX Encoded Version) When SRC Operand is a Register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL == 512) AND (EVEX.b == 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] :=
+            Convert_UQuadInteger_To_Double_Precision_Floating_Point(SRC[i+63:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VCVTUQQ2PD (EVEX Encoded Version) When SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b == 1)
+                THEN
+                    DEST[i+63:i] :=
+            Convert_UQuadInteger_To_Double_Precision_Floating_Point(SRC[63:0])
+                ELSE
+                    DEST[i+63:i] :=
+            Convert_UQuadInteger_To_Double_Precision_Floating_Point(SRC[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTUQQ2PD __m512d _mm512_cvtepu64_pd( __m512i a);
+
+
VCVTUQQ2PD __m512d _mm512_mask_cvtepu64_pd( __m512d s, __mmask8 k, __m512i a);
+
+
VCVTUQQ2PD __m512d _mm512_maskz_cvtepu64_pd( __mmask8 k, __m512i a);
+
+
VCVTUQQ2PD __m512d _mm512_cvt_roundepu64_pd( __m512i a, int r);
+
+
VCVTUQQ2PD __m512d _mm512_mask_cvt_roundepu64_pd( __m512d s, __mmask8 k, __m512i a, int r);
+
+
VCVTUQQ2PD __m512d _mm512_maskz_cvt_roundepu64_pd( __mmask8 k, __m512i a, int r);
+
+
VCVTUQQ2PD __m256d _mm256_cvtepu64_pd( __m256i a);
+
+
VCVTUQQ2PD __m256d _mm256_mask_cvtepu64_pd( __m256d s, __mmask8 k, __m256i a);
+
+
VCVTUQQ2PD __m256d _mm256_maskz_cvtepu64_pd( __mmask8 k, __m256i a);
+
+
VCVTUQQ2PD __m128d _mm_cvtepu64_pd( __m128i a);
+
+
VCVTUQQ2PD __m128d _mm_mask_cvtepu64_pd( __m128d s, __mmask8 k, __m128i a);
+
+
VCVTUQQ2PD __m128d _mm_maskz_cvtepu64_pd( __mmask8 k, __m128i a);
+
+
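Illustrative usage sketch (not part of the reference; the helper name is an assumption, AVX512DQ support assumed): the 512-bit unmasked form.

#include <immintrin.h>
/* Converts 8 unsigned 64-bit integers to 8 doubles (VCVTUQQ2PD zmm, zmm). */
void cvt_epu64_to_pd(const unsigned long long *src, double *dst) {
    __m512i v = _mm512_loadu_si512((const void *)src);
    _mm512_storeu_pd(dst, _mm512_cvtepu64_pd(v));
}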

SIMD Floating-Point Exceptions + ¶ +

+

Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vcvtuqq2ph.html b/x86/vcvtuqq2ph.html new file mode 100644 index 0000000..5548227 --- /dev/null +++ b/x86/vcvtuqq2ph.html @@ -0,0 +1,119 @@ + +VCVTUQQ2PH + — Convert Packed Unsigned Quadword Integers to Packed FP16 Values

VCVTUQQ2PH + — Convert Packed Unsigned Quadword Integers to Packed FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.F2.MAP5.W1 7A /r VCVTUQQ2PH xmm1{k1}{z}, xmm2/m128/m64bcstAV/VAVX512-FP16 AVX512VLConvert two packed unsigned quadword integers from xmm2/m128/m64bcst to packed FP16 values, and store the result in xmm1 subject to writemask k1.
EVEX.256.F2.MAP5.W1 7A /r VCVTUQQ2PH xmm1{k1}{z}, ymm2/m256/m64bcstAV/VAVX512-FP16 AVX512VLConvert four packed unsigned quadword integers from ymm2/m256/m64bcst to packed FP16 values, and store the result in xmm1 subject to writemask k1.
EVEX.512.F2.MAP5.W1 7A /r VCVTUQQ2PH xmm1{k1}{z}, zmm2/m512/m64bcst {er}AV/VAVX512-FP16Convert eight packed unsigned quadword integers from zmm2/m512/m64bcst to packed FP16 values, and store the result in xmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts packed unsigned quadword integers in the source operand to packed FP16 values in the destination operand. The destination elements are updated according to the writemask.

+

EVEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

If the result of the conversion overflows and MXCSR.OM=0, then a SIMD exception will be raised with OE=1, PE=1.

+

Operation + ¶ +

+

VCVTUQQ2PH dest, src + ¶ +

+
VL = 128, 256 or 512
+KL := VL / 64
+IF *SRC is a register* and (VL = 512) AND (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE:
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *SRC is memory* and EVEX.b = 1:
+            tsrc := SRC.qword[0]
+        ELSE
+            tsrc := SRC.qword[j]
+        DEST.fp16[j] := Convert_unsigned_integer64_to_fp16(tsrc)
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL/4] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTUQQ2PH __m128h _mm512_cvt_roundepu64_ph (__m512i a, int rounding);
+
+
VCVTUQQ2PH __m128h _mm512_mask_cvt_roundepu64_ph (__m128h src, __mmask8 k, __m512i a, int rounding);
+
+
VCVTUQQ2PH __m128h _mm512_maskz_cvt_roundepu64_ph (__mmask8 k, __m512i a, int rounding);
+
+
VCVTUQQ2PH __m128h _mm_cvtepu64_ph (__m128i a);
+
+
VCVTUQQ2PH __m128h _mm_mask_cvtepu64_ph (__m128h src, __mmask8 k, __m128i a);
+
+
VCVTUQQ2PH __m128h _mm_maskz_cvtepu64_ph (__mmask8 k, __m128i a);
+
+
VCVTUQQ2PH __m128h _mm256_cvtepu64_ph (__m256i a);
+
+
VCVTUQQ2PH __m128h _mm256_mask_cvtepu64_ph (__m128h src, __mmask8 k, __m256i a);
+
+
VCVTUQQ2PH __m128h _mm256_maskz_cvtepu64_ph (__mmask8 k, __m256i a);
+
+
VCVTUQQ2PH __m128h _mm512_cvtepu64_ph (__m512i a);
+
+
VCVTUQQ2PH __m128h _mm512_mask_cvtepu64_ph (__m128h src, __mmask8 k, __m512i a);
+
+
VCVTUQQ2PH __m128h _mm512_maskz_cvtepu64_ph (__mmask8 k, __m512i a);
+
+
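Illustrative usage sketch (not part of the reference; the helper name is an assumption, and _Float16 with AVX512-FP16 compiler support is assumed): the 512-bit unmasked form, whose eight FP16 results fit in an XMM register.

#include <immintrin.h>
/* Converts 8 unsigned 64-bit integers to 8 FP16 values (VCVTUQQ2PH xmm, zmm). */
void cvt_epu64_to_ph(const unsigned long long *src, _Float16 *dst) {
    __m512i v = _mm512_loadu_si512((const void *)src);
    _mm_storeu_ph(dst, _mm512_cvtepu64_ph(v));
}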

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vcvtuqq2ps.html b/x86/vcvtuqq2ps.html new file mode 100644 index 0000000..2cd5bf9 --- /dev/null +++ b/x86/vcvtuqq2ps.html @@ -0,0 +1,154 @@ + +VCVTUQQ2PS + — Convert Packed Unsigned Quadword Integers to Packed Single PrecisionFloating-Point Values

VCVTUQQ2PS + — Convert Packed Unsigned Quadword Integers to Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.F2.0F.W1 7A /r VCVTUQQ2PS xmm1 {k1}{z}, xmm2/m128/m64bcstAV/VAVX512VL AVX512DQConvert two packed unsigned quadword integers from xmm2/m128/m64bcst to packed single precision floating-point values in xmm1 with writemask k1.
EVEX.256.F2.0F.W1 7A /r VCVTUQQ2PS xmm1 {k1}{z}, ymm2/m256/m64bcstAV/VAVX512VL AVX512DQConvert four packed unsigned quadword integers from ymm2/m256/m64bcst to packed single precision floating-point values in xmm1 with writemask k1.
EVEX.512.F2.0F.W1 7A /r VCVTUQQ2PS ymm1 {k1}{z}, zmm2/m512/m64bcst{er}AV/VAVX512DQConvert eight packed unsigned quadword integers from zmm2/m512/m64bcst to eight packed single precision floating-point values in ymm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts packed unsigned quadword integers in the source operand (second operand) to single precision floating-point values in the destination operand (first operand).

+

EVEX encoded versions: The source operand is a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination operand is a YMM/XMM/XMM (low 64 bits) register conditionally updated with writemask k1.

+

Note: EVEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Operation + ¶ +

+

VCVTUQQ2PS (EVEX Encoded Version) When SRC Operand is a Register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    k := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] :=
+            Convert_UQuadInteger_To_Single_Precision_Floating_Point(SRC[k+63:k])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/2] := 0
+
+

VCVTUQQ2PS (EVEX Encoded Version) When SRC Operand is a Memory Source + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    k := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+31:i] :=
+            Convert_UQuadInteger_To_Single_Precision_Floating_Point(SRC[63:0])
+                ELSE
+                    DEST[i+31:i] :=
+            Convert_UQuadInteger_To_Single_Precision_Floating_Point(SRC[k+63:k])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/2] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTUQQ2PS __m256 _mm512_cvtepu64_ps( __m512i a);
+
+
VCVTUQQ2PS __m256 _mm512_mask_cvtepu64_ps( __m256 s, __mmask8 k, __m512i a);
+
+
VCVTUQQ2PS __m256 _mm512_maskz_cvtepu64_ps( __mmask8 k, __m512i a);
+
+
VCVTUQQ2PS __m256 _mm512_cvt_roundepu64_ps( __m512i a, int r);
+
+
VCVTUQQ2PS __m256 _mm512_mask_cvt_roundepu64_ps( __m256 s, __mmask8 k, __m512i a, int r);
+
+
VCVTUQQ2PS __m256 _mm512_maskz_cvt_roundepu64_ps( __mmask8 k, __m512i a, int r);
+
+
VCVTUQQ2PS __m128 _mm256_cvtepu64_ps( __m256i a);
+
+
VCVTUQQ2PS __m128 _mm256_mask_cvtepu64_ps( __m128 s, __mmask8 k, __m256i a);
+
+
VCVTUQQ2PS __m128 _mm256_maskz_cvtepu64_ps( __mmask8 k, __m256i a);
+
+
VCVTUQQ2PS __m128 _mm_cvtepu64_ps( __m128i a);
+
+
VCVTUQQ2PS __m128 _mm_mask_cvtepu64_ps( __m128 s, __mmask8 k, __m128i a);
+
+
VCVTUQQ2PS __m128 _mm_maskz_cvtepu64_ps( __mmask8 k, __m128i a);
+
+
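Illustrative usage sketch (not part of the reference; the helper name is an assumption, AVX512DQ support assumed): the 512-bit unmasked form, whose eight float results fit in a YMM register.

#include <immintrin.h>
/* Converts 8 unsigned 64-bit integers to 8 floats (VCVTUQQ2PS ymm, zmm). */
void cvt_epu64_to_ps(const unsigned long long *src, float *dst) {
    __m512i v = _mm512_loadu_si512((const void *)src);
    _mm256_storeu_ps(dst, _mm512_cvtepu64_ps(v));
}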

SIMD Floating-Point Exceptions + ¶ +

+

Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vcvtusi2sd.html b/x86/vcvtusi2sd.html new file mode 100644 index 0000000..0cd45ed --- /dev/null +++ b/x86/vcvtusi2sd.html @@ -0,0 +1,93 @@ + +VCVTUSI2SD + — Convert Unsigned Integer to Scalar Double Precision Floating-Point Value

VCVTUSI2SD + — Convert Unsigned Integer to Scalar Double Precision Floating-Point Value

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.F2.0F.W0 7B /r VCVTUSI2SD xmm1, xmm2, r/m32AV/VAVX512FConvert one unsigned doubleword integer from r/m32 to one double precision floating-point value in xmm1.
EVEX.LLIG.F2.0F.W1 7B /r VCVTUSI2SD xmm1, xmm2, r/m64{er}AV/N.E.1AVX512FConvert one unsigned quadword integer from r/m64 to one double precision floating-point value in xmm1.
+
+

1. For this specific instruction, EVEX.W in non-64 bit is ignored; the instruction behaves as if the W0 version is used.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Converts an unsigned doubleword integer (or unsigned quadword integer if operand size is 64 bits) in the second source operand to a double precision floating-point value in the destination operand. The result is stored in the low quadword of the destination operand. When conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register.

+

The second source operand can be a general-purpose register or a 32/64-bit memory location. The first source and destination operands are XMM registers. Bits (127:64) of the XMM register destination are copied from corresponding bits in the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.

+

EVEX.W1 version: promotes the instruction to use 64-bit input value in 64-bit mode.

+

EVEX.W0 version: attempt to encode this instruction with EVEX embedded rounding is ignored.

+

Operation + ¶ +

+

VCVTUSI2SD (EVEX Encoded Version) + ¶ +

+
IF (SRC2 *is register*) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF 64-Bit Mode And OperandSize = 64
+THEN
+    DEST[63:0] := Convert_UInteger_To_Double_Precision_Floating_Point(SRC2[63:0]);
+ELSE
+    DEST[63:0] := Convert_UInteger_To_Double_Precision_Floating_Point(SRC2[31:0]);
+FI;
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTUSI2SD __m128d _mm_cvtu32_sd( __m128d s, unsigned a);
+
+
VCVTUSI2SD __m128d _mm_cvtu64_sd( __m128d s, unsigned __int64 a);
+
+
VCVTUSI2SD __m128d _mm_cvt_roundu64_sd( __m128d s, unsigned __int64 a, int r);
+
+
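Illustrative usage sketch (not part of the reference; the helper name is an assumption, and the 64-bit intrinsic is only available in 64-bit mode): converting an unsigned quadword to the low double of an XMM register.

#include <immintrin.h>
/* Converts an unsigned 64-bit integer to a double (VCVTUSI2SD xmm, xmm, r64). */
double cvt_u64_to_sd(unsigned long long x) {
    __m128d r = _mm_cvtu64_sd(_mm_setzero_pd(), x);  /* upper quadword copied from first source */
    return _mm_cvtsd_f64(r);
}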

SIMD Floating-Point Exceptions + ¶ +

+

Precision.

+

Other Exceptions + ¶ +

+

See Table 2-48, “Type E3NF Class Exception Conditions” if W1; otherwise, see Table 2-59, “Type E10NF Class Exception Conditions.”

diff --git a/x86/vcvtusi2sh.html b/x86/vcvtusi2sh.html new file mode 100644 index 0000000..7058881 --- /dev/null +++ b/x86/vcvtusi2sh.html @@ -0,0 +1,92 @@ + +VCVTUSI2SH + — Convert Unsigned Doubleword Integer to an FP16 Value

VCVTUSI2SH + — Convert Unsigned Doubleword Integer to an FP16 Value

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.F3.MAP5.W0 7B /r VCVTUSI2SH xmm1, xmm2, r32/m32 {er}AV/V1AVX512-FP16Convert an unsigned doubleword integer from r32/m32 to an FP16 value, and store the result in xmm1. Bits 127:16 from xmm2 are copied to xmm1[127:16].
EVEX.LLIG.F3.MAP5.W1 7B /r VCVTUSI2SH xmm1, xmm2, r64/m64 {er}AV/N.E.AVX512-FP16Convert an unsigned quadword integer from r64/m64 to an FP16 value, and store the result in xmm1. Bits 127:16 from xmm2 are copied to xmm1[127:16].
+
+

1. Outside of 64b mode, the EVEX.W field is ignored. The instruction behaves as if W=0 was used.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction converts an unsigned doubleword integer (or unsigned quadword integer if operand size is 64 bits) in the second source operand to an FP16 value in the destination operand. The result is stored in the low word of the destination operand. When conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or embedded rounding controls.

+

The second source operand can be a general-purpose register or a 32/64-bit memory location. The first source and destination operands are XMM registers. Bits 127:16 of the XMM register destination are copied from corresponding bits in the first source operand. Bits MAXVL-1:128 of the destination register are zeroed.

+

If the result of the conversion overflows and MXCSR.OM=0, then a SIMD exception will be raised with OE=1, PE=1.

+

Operation + ¶ +

+

VCVTUSI2SH dest, src1, src2 + ¶ +

+
IF *SRC2 is a register* and (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE:
+    SET_RM(MXCSR.RC)
+IF 64-mode and OperandSize == 64:
+    DEST.fp16[0] := Convert_unsigned_integer64_to_fp16(SRC2.qword)
+ELSE:
+    DEST.fp16[0] := Convert_unsigned_integer32_to_fp16(SRC2.dword)
+DEST[127:16] := SRC1[127:16]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTUSI2SH __m128h _mm_cvt_roundu32_sh (__m128h a, unsigned int b, int rounding);
+
+
VCVTUSI2SH __m128h _mm_cvt_roundu64_sh (__m128h a, unsigned __int64 b, int rounding);
+
+
VCVTUSI2SH __m128h _mm_cvtu32_sh (__m128h a, unsigned int b);
+
+
VCVTUSI2SH __m128h _mm_cvtu64_sh (__m128h a, unsigned __int64 b);
+
+
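Illustrative usage sketch (not part of the reference; the helper name is an assumption, and _Float16 with AVX512-FP16 compiler support is assumed): the 32-bit form writing the low FP16 element.

#include <immintrin.h>
/* Converts an unsigned 32-bit integer to an FP16 value (VCVTUSI2SH xmm, xmm, r32). */
_Float16 cvt_u32_to_sh(unsigned x) {
    __m128h r = _mm_cvtu32_sh(_mm_setzero_ph(), x);  /* bits 127:16 copied from first source */
    return _mm_cvtsh_h(r);
}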

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-48, “Type E3NF Class Exception Conditions.”

diff --git a/x86/vcvtusi2ss.html b/x86/vcvtusi2ss.html new file mode 100644 index 0000000..fdb1aef --- /dev/null +++ b/x86/vcvtusi2ss.html @@ -0,0 +1,94 @@ + +VCVTUSI2SS + — Convert Unsigned Integer to Scalar Single Precision Floating-Point Value

VCVTUSI2SS + — Convert Unsigned Integer to Scalar Single Precision Floating-Point Value

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.F3.0F.W0 7B /r VCVTUSI2SS xmm1, xmm2, r/m32{er}AV/VAVX512FConvert one unsigned doubleword integer from r/m32 to one single precision floating-point value in xmm1.
EVEX.LLIG.F3.0F.W1 7B /r VCVTUSI2SS xmm1, xmm2, r/m64{er}AV/N.E.1AVX512FConvert one unsigned quadword integer from r/m64 to one single precision floating-point value in xmm1.
+
+

1. For this specific instruction, EVEX.W in non-64 bit is ignored; the instruction behaves as if the W0 version is used.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Converts an unsigned doubleword integer (or unsigned quadword integer if operand size is 64 bits) in the source operand (second operand) to a single precision floating-point value in the destination operand (first operand). The source operand can be a general-purpose register or a memory location. The destination operand is an XMM register. The result is stored in the low doubleword of the destination operand. When a conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or the embedded rounding control bits.

+

The second source operand can be a general-purpose register or a 32/64-bit memory location. The first source and destination operands are XMM registers. Bits (127:32) of the XMM register destination are copied from corresponding bits in the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.

+

EVEX.W1 version: promotes the instruction to use 64-bit input value in 64-bit mode.

+

Operation + ¶ +

+

VCVTUSI2SS (EVEX Encoded Version) + ¶ +

+
IF (SRC2 *is register*) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF 64-Bit Mode And OperandSize = 64
+THEN
+    DEST[31:0] := Convert_UInteger_To_Single_Precision_Floating_Point(SRC[63:0]);
+ELSE
+    DEST[31:0] := Convert_UInteger_To_Single_Precision_Floating_Point(SRC[31:0]);
+FI;
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTUSI2SS __m128 _mm_cvtu32_ss( __m128 s, unsigned a);
+
+
VCVTUSI2SS __m128 _mm_cvt_roundu32_ss( __m128 s, unsigned a, int r);
+
+
VCVTUSI2SS __m128 _mm_cvtu64_ss( __m128 s, unsigned __int64 a);
+
+
VCVTUSI2SS __m128 _mm_cvt_roundu64_ss( __m128 s, unsigned __int64 a, int r);
+
+
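Illustrative usage sketch (not part of the reference; the helper name is an assumption): the 32-bit form writing the low float.

#include <immintrin.h>
/* Converts an unsigned 32-bit integer to a float (VCVTUSI2SS xmm, xmm, r32). */
float cvt_u32_to_ss(unsigned x) {
    __m128 r = _mm_cvtu32_ss(_mm_setzero_ps(), x);   /* bits 127:32 copied from first source */
    return _mm_cvtss_f32(r);
}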

SIMD Floating-Point Exceptions + ¶ +

+

Precision.

+

Other Exceptions + ¶ +

+

See Table 2-48, “Type E3NF Class Exception Conditions.”

diff --git a/x86/vcvtuw2ph.html b/x86/vcvtuw2ph.html new file mode 100644 index 0000000..3817ba8 --- /dev/null +++ b/x86/vcvtuw2ph.html @@ -0,0 +1,119 @@ + +VCVTUW2PH + — Convert Packed Unsigned Word Integers to FP16 Values

VCVTUW2PH + — Convert Packed Unsigned Word Integers to FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.F2.MAP5.W0 7D /r VCVTUW2PH xmm1{k1}{z}, xmm2/m128/m16bcstAV/VAVX512-FP16 AVX512VLConvert eight packed unsigned word integers from xmm2/m128/m16bcst to FP16 values, and store the result in xmm1 subject to writemask k1.
EVEX.256.F2.MAP5.W0 7D /r VCVTUW2PH ymm1{k1}{z}, ymm2/m256/m16bcstAV/VAVX512-FP16 AVX512VLConvert sixteen packed unsigned word integers from ymm2/m256/m16bcst to FP16 values, and store the result in ymm1 subject to writemask k1.
EVEX.512.F2.MAP5.W0 7D /r VCVTUW2PH zmm1{k1}{z}, zmm2/m512/m16bcst {er}AV/VAVX512-FP16Convert thirty-two packed unsigned word integers from zmm2/m512/m16bcst to FP16 values, and store the result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts packed unsigned word integers in the source operand to FP16 values in the destination operand. When conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or embedded rounding controls.

+

The destination elements are updated according to the writemask.

+

If the result of the conversion overflows and MXCSR.OM=0, a SIMD exception is raised with OE=1, PE=1.

+

Operation + ¶ +

+

VCVTUW2PH dest, src + ¶ +

+
VL = 128, 256 or 512
+KL := VL / 16
+IF *SRC is a register* and (VL = 512) AND (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE:
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *SRC is memory* and EVEX.b = 1:
+            tsrc := SRC.word[0]
+        ELSE
+            tsrc := SRC.word[j]
+        DEST.fp16[j] := Convert_unsigned_integer16_to_fp16(tsrc)
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTUW2PH __m512h _mm512_cvt_roundepu16_ph (__m512i a, int rounding);
+
+
VCVTUW2PH __m512h _mm512_mask_cvt_roundepu16_ph (__m512h src, __mmask32 k, __m512i a, int rounding);
+
+
VCVTUW2PH __m512h _mm512_maskz_cvt_roundepu16_ph (__mmask32 k, __m512i a, int rounding);
+
+
VCVTUW2PH __m128h _mm_cvtepu16_ph (__m128i a);
+
+
VCVTUW2PH __m128h _mm_mask_cvtepu16_ph (__m128h src, __mmask8 k, __m128i a);
+
+
VCVTUW2PH __m128h _mm_maskz_cvtepu16_ph (__mmask8 k, __m128i a);
+
+
VCVTUW2PH __m256h _mm256_cvtepu16_ph (__m256i a);
+
+
VCVTUW2PH __m256h _mm256_mask_cvtepu16_ph (__m256h src, __mmask16 k, __m256i a);
+
+
VCVTUW2PH __m256h _mm256_maskz_cvtepu16_ph (__mmask16 k, __m256i a);
+
+
VCVTUW2PH __m512h _mm512_cvtepu16_ph (__m512i a);
+
+
VCVTUW2PH __m512h _mm512_mask_cvtepu16_ph (__m512h src, __mmask32 k, __m512i a);
+
+
VCVTUW2PH __m512h _mm512_maskz_cvtepu16_ph (__mmask32 k, __m512i a);
+
+
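A usage sketch (illustration only), assuming AVX512-FP16 and AVX512VL support in both compiler and CPU (e.g., -mavx512fp16 -mavx512vl), the compiler's native _Float16 type, and the _mm_setr_epi16/_mm_storeu_ph helpers from the standard intrinsic set:

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    /* Eight word integers, interpreted as unsigned by the conversion. */
    __m128i w = _mm_setr_epi16((short)65535, 1, 2, 3, 4, 5, 6, 7);
    __m128h h = _mm_cvtepu16_ph(w);   /* VCVTUW2PH xmm, xmm */

    _Float16 out[8];
    _mm_storeu_ph(out, h);
    /* 65535 exceeds the largest finite FP16 value (65504), so element 0
       overflows to +Inf and sets OE/PE as described above. */
    for (int i = 0; i < 8; i++)
        printf("%g ", (double)out[i]);
    printf("\n");
    return 0;
}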

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vcvtw2ph.html b/x86/vcvtw2ph.html new file mode 100644 index 0000000..51e1189 --- /dev/null +++ b/x86/vcvtw2ph.html @@ -0,0 +1,118 @@ + +VCVTW2PH + — Convert Packed Signed Word Integers to FP16 Values

VCVTW2PH + — Convert Packed Signed Word Integers to FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.F3.MAP5.W0 7D /r VCVTW2PH xmm1{k1}{z}, xmm2/m128/m16bcstAV/VAVX512-FP16 AVX512VLConvert eight packed signed word integers from xmm2/m128/m16bcst to FP16 values, and store the result in xmm1 subject to writemask k1.
EVEX.256.F3.MAP5.W0 7D /r VCVTW2PH ymm1{k1}{z}, ymm2/m256/m16bcstAV/VAVX512-FP16 AVX512VLConvert sixteen packed signed word integers from ymm2/m256/m16bcst to FP16 values, and store the result in ymm1 subject to writemask k1.
EVEX.512.F3.MAP5.W0 7D /r VCVTW2PH zmm1{k1}{z}, zmm2/m512/m16bcst {er}AV/VAVX512-FP16Convert thirty-two packed signed word integers from zmm2/m512/m16bcst to FP16 values, and store the result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction converts packed signed word integers in the source operand to FP16 values in the destination operand. When conversion is inexact, the value returned is rounded according to the rounding control bits in the MXCSR register or embedded rounding controls.

+

The destination elements are updated according to the writemask.

+

Operation + ¶ +

+

VCVTW2PH dest, src + ¶ +

+
VL = 128, 256 or 512
+KL := VL / 16
+IF *SRC is a register* and (VL = 512) AND (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE:
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *SRC is memory* and EVEX.b = 1:
+            tsrc := SRC.word[0]
+        ELSE
+            tsrc := SRC.word[j]
+        DEST.fp16[j] := Convert_integer16_to_fp16(tsrc)
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VCVTW2PH __m512h _mm512_cvt_roundepi16_ph (__m512i a, int rounding);
+
+
VCVTW2PH __m512h _mm512_mask_cvt_roundepi16_ph (__m512h src, __mmask32 k, __m512i a, int rounding);
+
+
VCVTW2PH __m512h _mm512_maskz_cvt_roundepi16_ph (__mmask32 k, __m512i a, int rounding);
+
+
VCVTW2PH __m128h _mm_cvtepi16_ph (__m128i a);
+
+
VCVTW2PH __m128h _mm_mask_cvtepi16_ph (__m128h src, __mmask8 k, __m128i a);
+
+
VCVTW2PH __m128h _mm_maskz_cvtepi16_ph (__mmask8 k, __m128i a);
+
+
VCVTW2PH __m256h _mm256_cvtepi16_ph (__m256i a);
+
+
VCVTW2PH __m256h _mm256_mask_cvtepi16_ph (__m256h src, __mmask16 k, __m256i a);
+
+
VCVTW2PH __m256h _mm256_maskz_cvtepi16_ph (__mmask16 k, __m256i a);
+
+
VCVTW2PH __m512h _mm512_cvtepi16_ph (__m512i a);
+
+
VCVTW2PH __m512h _mm512_mask_cvtepi16_ph (__m512h src, __mmask32 k, __m512i a);
+
+
VCVTW2PH __m512h _mm512_maskz_cvtepi16_ph (__mmask32 k, __m512i a);
+
+
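A brief masked-conversion sketch (illustration only), under the same AVX512-FP16/AVX512VL assumptions as above; the mask constant and function name are illustrative:

#include <immintrin.h>

/* Convert signed words to FP16 only in the even lanes, merging the odd
   lanes from 'src' (VCVTW2PH with merge masking). */
__m128h cvt_even_words(__m128h src, __m128i w)
{
    return _mm_mask_cvtepi16_ph(src, (__mmask8)0x55, w);
}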

SIMD Floating-Point Exceptions + ¶ +

+

Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vdbpsadbw.html b/x86/vdbpsadbw.html new file mode 100644 index 0000000..9cc0c94 --- /dev/null +++ b/x86/vdbpsadbw.html @@ -0,0 +1,669 @@ + +VDBPSADBW + — Double Block Packed Sum-Absolute-Differences (SAD) on Unsigned Bytes

VDBPSADBW + — Double Block Packed Sum-Absolute-Differences (SAD) on Unsigned Bytes

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F3A.W0 42 /r ib VDBPSADBW xmm1 {k1}{z}, xmm2, xmm3/m128, imm8AV/VAVX512VL AVX512BWCompute packed SAD word results of unsigned bytes in dword block from xmm2 with unsigned bytes of dword blocks transformed from xmm3/m128 using the shuffle controls in imm8. Results are written to xmm1 under the writemask k1.
EVEX.256.66.0F3A.W0 42 /r ib VDBPSADBW ymm1 {k1}{z}, ymm2, ymm3/m256, imm8AV/VAVX512VL AVX512BWCompute packed SAD word results of unsigned bytes in dword block from ymm2 with unsigned bytes of dword blocks transformed from ymm3/m256 using the shuffle controls in imm8. Results are written to ymm1 under the writemask k1.
EVEX.512.66.0F3A.W0 42 /r ib VDBPSADBW zmm1 {k1}{z}, zmm2, zmm3/m512, imm8AV/VAVX512BWCompute packed SAD word results of unsigned bytes in dword block from zmm2 with unsigned bytes of dword blocks transformed from zmm3/m512 using the shuffle controls in imm8. Results are written to zmm1 under the writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

Compute packed SAD (sum of absolute differences) word results of unsigned bytes from two 32-bit dword elements. Packed SAD word results are calculated in multiples of qword superblocks, producing 4 SAD word results in each 64-bit superblock of the destination register.

+

Within each super block of packed word results, the SAD results from two 32-bit dword elements are calculated as follows:

+
    +
  • The lower two word results are each calculated from a SAD operation between a sliding dword element within a qword superblock of an intermediate vector and a stationary dword element in the corresponding qword superblock of the first source operand. The intermediate vector, see “Tmp1” in Figure 5-8, is constructed from the second source operand using the imm8 byte as shuffle control to select dword elements within a 128-bit lane of the second source operand. The two sliding dword elements in a qword superblock of Tmp1 are located at byte offsets 0 and 1 within the superblock, respectively. The stationary dword element in the qword superblock from the first source operand is located at byte offset 0.
  • +
  • The next two word results are each calculated from a SAD operation between a sliding dword element within a qword superblock of the intermediate vector Tmp1 and a second stationary dword element in the corresponding qword superblock of the first source operand. The two sliding dword elements in a qword superblock of Tmp1 are located at byte offsets 2 and 3 within the superblock, respectively. The stationary dword element in the qword superblock from the first source operand is located at byte offset 4.
  • +
  • The intermediate vector is constructed in 128-bit lanes. Within each 128-bit lane, each dword element of the intermediate vector is selected by a two-bit field within the imm8 byte from the corresponding 128 bits of the second source operand. The imm8 byte serves as the dword shuffle control within each 128-bit lane of the intermediate vector and the second source operand, similarly to PSHUFD.
+

The first source operand is a ZMM/YMM/XMM register. The second source operand is a ZMM/YMM/XMM register, or a 512/256/128-bit memory location. The destination operand is conditionally updated based on writemask k1 at 16-bit word granularity.

+
Figure 5-8. 64-bit Super Block of SAD Operation in VDBPSADBW
+

Operation + ¶ +

+

VDBPSADBW (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+Selection of quadruplets:
+FOR I = 0 to VL step 128
+    TMP1[I+31:I] := select (SRC2[I+127: I], imm8[1:0])
+    TMP1[I+63: I+32] := select (SRC2[I+127: I], imm8[3:2])
+    TMP1[I+95: I+64] := select (SRC2[I+127: I], imm8[5:4])
+    TMP1[I+127: I+96] := select (SRC2[I+127: I], imm8[7:6])
+END FOR
+SAD of quadruplets:
+FOR I =0 to VL step 64
+    TMP_DEST[I+15:I] := ABS(SRC1[I+7: I] - TMP1[I+7: I]) +
+        ABS(SRC1[I+15: I+8]- TMP1[I+15: I+8]) +
+        ABS(SRC1[I+23: I+16]- TMP1[I+23: I+16]) +
+        ABS(SRC1[I+31: I+24]- TMP1[I+31: I+24])
+    TMP_DEST[I+31: I+16] := ABS(SRC1[I+7: I] - TMP1[I+15: I+8]) +
+        ABS(SRC1[I+15: I+8]- TMP1[I+23: I+16]) +
+        ABS(SRC1[I+23: I+16]- TMP1[I+31: I+24]) +
+        ABS(SRC1[I+31: I+24]- TMP1[I+39: I+32])
+    TMP_DEST[I+47: I+32] := ABS(SRC1[I+39: I+32] - TMP1[I+23: I+16]) +
+        ABS(SRC1[I+47: I+40]- TMP1[I+31: I+24]) +
+        ABS(SRC1[I+55: I+48]- TMP1[I+39: I+32]) +
+        ABS(SRC1[I+63: I+56]- TMP1[I+47: I+40])
+    TMP_DEST[I+63: I+48] := ABS(SRC1[I+39: I+32] - TMP1[I+31: I+24]) +
+        ABS(SRC1[I+47: I+40] - TMP1[I+39: I+32]) +
+        ABS(SRC1[I+55: I+48] - TMP1[I+47: I+40]) +
+        ABS(SRC1[I+63: I+56] - TMP1[I+55: I+48])
+ENDFOR
+FOR j := 0 TO KL-1
+    i := j*16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := TMP_DEST[i+15:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VDBPSADBW __m512i _mm512_dbsad_epu8(__m512i a, __m512i b, int imm8);
+
+
VDBPSADBW __m512i _mm512_mask_dbsad_epu8(__m512i s, __mmask32 m, __m512i a, __m512i b, int imm8);
+
+
VDBPSADBW __m512i _mm512_maskz_dbsad_epu8(__mmask32 m, __m512i a, __m512i b, int imm8);
+
+
VDBPSADBW __m256i _mm256_dbsad_epu8(__m256i a, __m256i b, int imm8);
+
+
VDBPSADBW __m256i _mm256_mask_dbsad_epu8(__m256i s, __mmask16 m, __m256i a, __m256i b, int imm8);
+
+
VDBPSADBW __m256i _mm256_maskz_dbsad_epu8(__mmask16 m, __m256i a, __m256i b, int imm8);
+
+
VDBPSADBW __m128i _mm_dbsad_epu8(__m128i a, __m128i b, int imm8);
+
+
VDBPSADBW __m128i _mm_mask_dbsad_epu8(__m128i s, __mmask8 m, __m128i a, __m128i b, int imm8);
+
+
VDBPSADBW __m128i _mm_maskz_dbsad_epu8(__mmask8 m, __m128i a, __m128i b, int imm8);
+
+
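A minimal sketch of the 128-bit form (illustration only), assuming AVX512BW and AVX512VL support (e.g., -mavx512bw -mavx512vl); the immediate must be a compile-time constant, and 0x94 (selecting dwords 0,1,1,2 of the second source) is just one common choice for motion-estimation-style searches:

#include <immintrin.h>
#include <stdint.h>

static inline __m128i dbsad_0_1_1_2(const uint8_t *pa, const uint8_t *pb)
{
    __m128i a = _mm_loadu_si128((const __m128i *)pa);
    __m128i b = _mm_loadu_si128((const __m128i *)pb);
    /* Four word SADs per 64-bit superblock, per the operation above. */
    return _mm_dbsad_epu8(a, b, 0x94);   /* VDBPSADBW xmm, xmm, xmm, 0x94 */
}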

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/vdivph.html b/x86/vdivph.html new file mode 100644 index 0000000..351a8cc --- /dev/null +++ b/x86/vdivph.html @@ -0,0 +1,129 @@ + +VDIVPH + — Divide Packed FP16 Values

VDIVPH + — Divide Packed FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.NP.MAP5.W0 5E /r VDIVPH xmm1{k1}{z}, xmm2, xmm3/m128/m16bcstAV/VAVX512-FP16 AVX512VLDivide packed FP16 values in xmm2 by packed FP16 values in xmm3/m128/m16bcst, and store the result in xmm1 subject to writemask k1.
EVEX.256.NP.MAP5.W0 5E /r VDIVPH ymm1{k1}{z}, ymm2, ymm3/m256/m16bcstAV/VAVX512-FP16 AVX512VLDivide packed FP16 values in ymm2 by packed FP16 values in ymm3/m256/m16bcst, and store the result in ymm1 subject to writemask k1.
EVEX.512.NP.MAP5.W0 5E /r VDIVPH zmm1{k1}{z}, zmm2, zmm3/m512/m16bcst {er}AV/VAVX512-FP16Divide packed FP16 values in zmm2 by packed FP16 values in zmm3/m512/m16bcst, and store the result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction divides packed FP16 values from the first source operand by the corresponding elements in the second source operand, storing the packed FP16 result in the destination operand. The destination elements are updated according to the writemask.

+

Operation + ¶ +

+

VDIVPH (EVEX Encoded Versions) When SRC2 Operand is a Register + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+IF (VL = 512) AND (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        DEST.fp16[j] := SRC1.fp16[j] / SRC2.fp16[j]
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

VDIVPH (EVEX Encoded Versions) When SRC2 Operand is a Memory Source + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF EVEX.b = 1:
+            DEST.fp16[j] := SRC1.fp16[j] / SRC2.fp16[0]
+        ELSE:
+            DEST.fp16[j] := SRC1.fp16[j] / SRC2.fp16[j]
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VDIVPH __m128h _mm_div_ph (__m128h a, __m128h b);
+
+
VDIVPH __m128h _mm_mask_div_ph (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VDIVPH __m128h _mm_maskz_div_ph (__mmask8 k, __m128h a, __m128h b);
+
+
VDIVPH __m256h _mm256_div_ph (__m256h a, __m256h b);
+
+
VDIVPH __m256h _mm256_mask_div_ph (__m256h src, __mmask16 k, __m256h a, __m256h b);
+
+
VDIVPH __m256h _mm256_maskz_div_ph (__mmask16 k, __m256h a, __m256h b);
+
+
VDIVPH __m512h _mm512_div_ph (__m512h a, __m512h b);
+
+
VDIVPH __m512h _mm512_mask_div_ph (__m512h src, __mmask32 k, __m512h a, __m512h b);
+
+
VDIVPH __m512h _mm512_maskz_div_ph (__mmask32 k, __m512h a, __m512h b);
+
+
VDIVPH __m512h _mm512_div_round_ph (__m512h a, __m512h b, int rounding);
+
+
VDIVPH __m512h _mm512_mask_div_round_ph (__m512h src, __mmask32 k, __m512h a, __m512h b, int rounding);
+
+
VDIVPH __m512h _mm512_maskz_div_round_ph (__mmask32 k, __m512h a, __m512h b, int rounding);
+
+
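A usage sketch (illustration only), assuming AVX512-FP16 support (e.g., -mavx512fp16); the rounding macros are the standard _MM_FROUND_* values:

#include <immintrin.h>

/* Element-wise FP16 division over 32 lanes. */
__m512h div_ph(__m512h a, __m512h b)
{
    return _mm512_div_ph(a, b);                /* VDIVPH zmm, zmm, zmm */
}

/* Same, with embedded round-to-nearest-even and exceptions suppressed. */
__m512h div_ph_rne(__m512h a, __m512h b)
{
    return _mm512_div_round_ph(a, b, _MM_FROUND_TO_NEAREST_INT | _MM_FROUND_NO_EXC);
}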

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Overflow, Precision, Denormal, Zero.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vdivsh.html b/x86/vdivsh.html new file mode 100644 index 0000000..010781f --- /dev/null +++ b/x86/vdivsh.html @@ -0,0 +1,87 @@ + +VDIVSH + — Divide Scalar FP16 Values

VDIVSH + — Divide Scalar FP16 Values

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.F3.MAP5.W0 5E /r VDIVSH xmm1{k1}{z}, xmm2, xmm3/m16 {er}AV/VAVX512-FP16Divide low FP16 value in xmm2 by low FP16 value in xmm3/m16, and store the result in xmm1 subject to writemask k1. Bits 127:16 of xmm2 are copied to xmm1[127:16].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction divides the low FP16 value from the first source operand by the corresponding value in the second source operand, storing the FP16 result in the destination operand. Bits 127:16 of the destination operand are copied from the corresponding bits of the first source operand. Bits MAXVL-1:128 of the destination operand are zeroed. The low FP16 element of the destination is updated according to the writemask.

+

Operation + ¶ +

+

VDIVSH (EVEX Encoded Versions) + ¶ +

+
IF EVEX.b = 1 and SRC2 is a register:
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+IF k1[0] OR *no writemask*:
+    DEST.fp16[0] := SRC1.fp16[0] / SRC2.fp16[0]
+ELSE IF *zeroing*:
+    DEST.fp16[0] := 0
+// else dest.fp16[0] remains unchanged
+DEST[127:16] := SRC1[127:16]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VDIVSH __m128h _mm_div_round_sh (__m128h a, __m128h b, int rounding);
+
+
VDIVSH __m128h _mm_mask_div_round_sh (__m128h src, __mmask8 k, __m128h a, __m128h b, int rounding);
+
+
VDIVSH __m128h _mm_maskz_div_round_sh (__mmask8 k, __m128h a, __m128h b, int rounding);
+
+
VDIVSH __m128h _mm_div_sh (__m128h a, __m128h b);
+
+
VDIVSH __m128h _mm_mask_div_sh (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VDIVSH __m128h _mm_maskz_div_sh (__mmask8 k, __m128h a, __m128h b);
+
+
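A scalar sketch (illustration only), assuming AVX512-FP16 support and the _mm_set_sh/_mm_storeu_ph helpers from the standard FP16 intrinsic set:

#include <immintrin.h>

/* Scalar FP16 divide: only lane 0 is computed; lanes 127:16 come from the
   first source, and the upper bits of the register are zeroed. */
static inline _Float16 div_sh(_Float16 x, _Float16 y)
{
    _Float16 out[8];
    _mm_storeu_ph(out, _mm_div_sh(_mm_set_sh(x), _mm_set_sh(y)));  /* VDIVSH */
    return out[0];
}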

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Overflow, Precision, Denormal, Zero.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vdpbf16ps.html b/x86/vdpbf16ps.html new file mode 100644 index 0000000..84b4e5f --- /dev/null +++ b/x86/vdpbf16ps.html @@ -0,0 +1,142 @@ + +VDPBF16PS + — Dot Product of BF16 Pairs Accumulated Into Packed Single Precision

VDPBF16PS + — Dot Product of BF16 Pairs Accumulated Into Packed Single Precision

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.F3.0F38.W0 52 /r VDPBF16PS xmm1{k1}{z}, xmm2, xmm3/m128/m32bcstAV/VAVX512VL AVX512_BF16Multiply BF16 pairs from xmm2 and xmm3/m128, and accumulate the resulting packed single precision results in xmm1 with writemask k1.
EVEX.256.F3.0F38.W0 52 /r VDPBF16PS ymm1{k1}{z}, ymm2, ymm3/m256/m32bcstAV/VAVX512VL AVX512_BF16Multiply BF16 pairs from ymm2 and ymm3/m256, and accumulate the resulting packed single precision results in ymm1 with writemask k1.
EVEX.512.F3.0F38.W0 52 /r VDPBF16PS zmm1{k1}{z}, zmm2, zmm3/m512/m32bcstAV/VAVX512F AVX512_BF16Multiply BF16 pairs from zmm2 and zmm3/m512, and accumulate the resulting packed single precision results in zmm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction performs a SIMD dot-product of two BF16 pairs and accumulates into a packed single precision register.

+

“Round to nearest even” rounding mode is used when doing each accumulation of the FMA. Output denormals are always flushed to zero and input denormals are always treated as zero. MXCSR is not consulted nor updated.

+

NaN propagation priorities are described in Table 5-1.

+
+ + + + + + + + + + + + + + + + + + + + + + +
NaN PriorityDescriptionComments
1src1 low is NaNLower part has priority over upper part, i.e., it overrides the upper part.
2src2 low is NaN
3src1 high is NaNUpper part may be overridden if lower has NaN.
4src2 high is NaN
5srcdest is NaNDest is propagated if no NaN is encountered by src2.
+
Table 5-1. NaN Propagation Priorities
+

Operation + ¶ +

+
Define make_fp32(x):
+    // The x parameter is bfloat16. Pack it in to upper 16b of a dword. The bit pattern is a legal fp32 value. Return that bit pattern.
+    dword := 0
+    dword[31:16] := x
+    RETURN dword
+
+

VDPBF16PS srcdest, src1, src2 + ¶ +

+
VL = (128, 256, 512)
+KL = VL/32
+origdest := srcdest
+FOR i := 0 to KL-1:
+    IF k1[ i ] or *no writemask*:
+        IF src2 is memory and evex.b == 1:
+            t := src2.dword[0]
+        ELSE:
+            t := src2.dword[ i ]
+        // FP32 FMA with daz in, ftz out and RNE rounding. MXCSR neither consulted nor updated.
+        srcdest.fp32[ i ] += make_fp32(src1.bfloat16[2*i+1]) * make_fp32(t.bfloat16[1])
+        srcdest.fp32[ i ] += make_fp32(src1.bfloat16[2*i+0]) * make_fp32(t.bfloat16[0])
+    ELSE IF *zeroing*:
+        srcdest.dword[ i ] := 0
+    ELSE: // merge masking, dest element unchanged
+        srcdest.dword[ i ] := origdest.dword[ i ]
+srcdest[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VDPBF16PS __m128 _mm_dpbf16_ps(__m128, __m128bh, __m128bh);
+
+
VDPBF16PS __m128 _mm_mask_dpbf16_ps( __m128, __mmask8, __m128bh, __m128bh);
+
+
VDPBF16PS __m128 _mm_maskz_dpbf16_ps(__mmask8, __m128, __m128bh, __m128bh);
+
+
VDPBF16PS __m256 _mm256_dpbf16_ps(__m256, __m256bh, __m256bh);
+
+
VDPBF16PS __m256 _mm256_mask_dpbf16_ps(__m256, __mmask8, __m256bh, __m256bh);
+
+
VDPBF16PS __m256 _mm256_maskz_dpbf16_ps(__mmask8, __m256, __m256bh, __m256bh);
+
+
VDPBF16PS __m512 _mm512_dpbf16_ps(__m512, __m512bh, __m512bh);
+
+
VDPBF16PS __m512 _mm512_mask_dpbf16_ps(__m512, __mmask16, __m512bh, __m512bh);
+
+
VDPBF16PS __m512 _mm512_maskz_dpbf16_ps(__mmask16, __m512, __m512bh, __m512bh);
+
+
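A sketch of one accumulation step (illustration only), assuming AVX512_BF16 support (e.g., -mavx512bf16) and the standard _mm512_cvtne2ps_pbh intrinsic to pack two FP32 vectors into one BF16 vector:

#include <immintrin.h>

/* Each FP32 lane of the result accumulates the product-sum of one pair of
   adjacent BF16 elements from 'a' and 'b' (VDPBF16PS). */
__m512 bf16_dot_step(__m512 acc, __m512 a_lo, __m512 a_hi,
                     __m512 b_lo, __m512 b_hi)
{
    __m512bh a = _mm512_cvtne2ps_pbh(a_hi, a_lo);  /* 32 BF16 values */
    __m512bh b = _mm512_cvtne2ps_pbh(b_hi, b_lo);
    return _mm512_dpbf16_ps(acc, a, b);
}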

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/verr.verw.html b/x86/verr.verw.html new file mode 100644 index 0000000..5acfc24 --- /dev/null +++ b/x86/verr.verw.html @@ -0,0 +1,147 @@ + +VERR/VERW + — Verify a Segment for Reading or Writing

VERR/VERW + — Verify a Segment for Reading or Writing

+ + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 00 /4 VERR r/m16MValidValidSet ZF=1 if segment specified with r/m16 can be read.
0F 00 /5 VERW r/m16MValidValidSet ZF=1 if segment specified with r/m16 can be written.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r)N/AN/AN/A
+

Description + ¶ +

+

Verifies whether the code or data segment specified with the source operand is readable (VERR) or writable (VERW) from the current privilege level (CPL). The source operand is a 16-bit register or a memory location that contains the segment selector for the segment to be verified. If the segment is accessible and readable (VERR) or writable (VERW), the ZF flag is set; otherwise, the ZF flag is cleared. Code segments are never verified as writable. This check cannot be performed on system segments.

+

To set the ZF flag, the following conditions must be met:

+
    +
  • The segment selector is not NULL.
  • +
  • The selector must denote a descriptor within the bounds of the descriptor table (GDT or LDT).
  • +
  • The selector must denote the descriptor of a code or data segment (not that of a system segment or gate).
  • +
  • For the VERR instruction, the segment must be readable.
  • +
  • For the VERW instruction, the segment must be a writable data segment.
  • +
  • If the segment is not a conforming code segment, the segment’s DPL must be greater than or equal to (have less or the same privilege as) both the CPL and the segment selector's RPL.
+

The validation performed is the same as is performed when a segment selector is loaded into the DS, ES, FS, or GS register, and the indicated access (read or write) is performed. The segment selector's value cannot result in a protection exception, enabling the software to anticipate possible segment access problems.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode. The operand size is fixed at 16 bits.

+
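There is no C intrinsic for VERR/VERW, so the check is typically done with inline assembly. A minimal sketch for GCC/Clang-style extended asm follows (illustration only); the flag-output constraint "=@ccz" (GCC 6 and later, recent Clang) captures ZF directly:

#include <stdint.h>

/* Returns nonzero if the segment selected by 'sel' is readable at the current CPL. */
static inline int segment_is_readable(uint16_t sel)
{
    int readable;
    __asm__ volatile ("verr %1"
                      : "=@ccz" (readable)   /* readable := ZF */
                      : "rm" (sel));
    return readable;
}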

Operation + ¶ +

+
IF SRC(Offset) > (GDTR(Limit) or LDTR(Limit))
+    THEN ZF := 0; FI;
+Read segment descriptor;
+IF SegmentDescriptor(DescriptorType) = 0 (* System segment *)
+or (SegmentDescriptor(Type) ≠ conforming code segment)
+and (CPL > DPL) or (RPL > DPL)
+    THEN
+        ZF := 0;
+    ELSE
+        IF ((Instruction = VERR) and (Segment readable))
+        or ((Instruction = VERW) and (Segment writable))
+            THEN
+                ZF := 1;
+            ELSE
+                ZF := 0;
+        FI;
+FI;
+
+

Flags Affected + ¶ +

+

The ZF flag is set to 1 if the segment is accessible and readable (VERR) or writable (VERW); otherwise, it is set to 0.

+

Protected Mode Exceptions + ¶ +

+

The only exceptions generated for these instructions are those related to illegal addressing of the source operand.

+ + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + +
#UDThe VERR and VERW instructions are not recognized in real-address mode.
If the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + +
#UDThe VERR and VERW instructions are not recognized in virtual-8086 mode.
If the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used.
diff --git a/x86/vexp2pd.html b/x86/vexp2pd.html new file mode 100644 index 0000000..84170bd --- /dev/null +++ b/x86/vexp2pd.html @@ -0,0 +1,120 @@ + +VEXP2PD + — Approximation to the Exponential 2^x of Packed Double Precision Floating-PointValues With Less Than 2^-23 Relative Error

VEXP2PD + — Approximation to the Exponential 2^x of Packed Double Precision Floating-Point Values With Less Than 2^-23 Relative Error

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.512.66.0F38.W1 C8 /r VEXP2PD zmm1 {k1}{z}, zmm2/m512/m64bcst {sae}AV/VAVX512ERComputes approximations to the exponential 2^x (with less than 2^-23 of maximum relative error) of the packed double precision floating-point values from zmm2/m512/m64bcst and stores the floating-point result in zmm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/En Tuple Type Operand 1 Operand 2 Operand 3 Operand 4
A Full ModRM:reg (r, w) ModRM:r/m (r) N/A N/A
+

Description + ¶ +

+

Computes the approximate base-2 exponential evaluation of the double precision floating-point values in the source operand (the second operand) and stores the results to the destination operand (the first operand) using the writemask k1. The approximate base-2 exponential is evaluated with less than 2^-23 of relative error.

+

Denormal input values are treated as zeros and do not signal #DE, irrespective of MXCSR.DAZ. Denormal results are flushed to zeros and do not signal #UE, irrespective of MXCSR.FTZ.

+

The source operand is a ZMM register, a 512-bit memory location or a 512-bit vector broadcasted from a 64-bit memory location. The destination operand is a ZMM register, conditionally updated using writemask k1.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

A numerically exact implementation of VEXP2xx can be found at https://software.intel.com/en-us/articles/reference-implementations-for-IA-approximation-instructions-vrcp14-vrsqrt14-vrcp28-vrsqrt28-vexp2.

+

Operation + ¶ +

+

VEXP2PD + ¶ +

+
(KL, VL) = (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC *is memory*)
+                THEN DEST[i+63:i] := EXP2_23_DP(SRC[63:0])
+                ELSE DEST[i+63:i] := EXP2_23_DP(SRC[i+63:i])
+            FI;
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[i+63:i] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[i+63:i] := 0
+        FI;
+    FI;
+ENDFOR;
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + +
Source InputResultComments
NaNQNaN(src)If (SRC = SNaN) then #I
+∞+∞
+/-01.0fExact result
-∞+0.0f
Integral value N2^ (N)Exact result
+
Table 6-44. Special Values Behavior
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VEXP2PD __m512d _mm512_exp2a23_round_pd (__m512d a, int sae);
+
+
VEXP2PD __m512d _mm512_mask_exp2a23_round_pd (__m512d a, __mmask8 m, __m512d b, int sae);
+
+
VEXP2PD __m512d _mm512_maskz_exp2a23_round_pd ( __mmask8 m, __m512d b, int sae);
+
+
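A usage sketch (illustration only); AVX512ER is implemented only on some processors (e.g., the Xeon Phi x200 family), so this assumes both compiler and target support it (e.g., -mavx512er):

#include <immintrin.h>

/* Approximate 2^x per double precision lane, relative error below 2^-23;
   SAE suppresses floating-point exceptions as in the {sae} form above. */
__m512d exp2_approx_pd(__m512d x)
{
    return _mm512_exp2a23_round_pd(x, _MM_FROUND_NO_EXC);   /* VEXP2PD */
}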

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (if SNaN input), Overflow.

+

Other Exceptions + ¶ +

+

See Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vexp2ps.html b/x86/vexp2ps.html new file mode 100644 index 0000000..2ad55cd --- /dev/null +++ b/x86/vexp2ps.html @@ -0,0 +1,120 @@ + +VEXP2PS + — Approximation to the Exponential 2^x of Packed Single Precision Floating-PointValues With Less Than 2^-23 Relative Error

VEXP2PS + — Approximation to the Exponential 2^x of Packed Single Precision Floating-Point Values With Less Than 2^-23 Relative Error

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.512.66.0F38.W0 C8 /r VEXP2PS zmm1 {k1}{z}, zmm2/m512/m32bcst {sae}AV/VAVX512ERComputes approximations to the exponential 2^x (with less than 2^-23 of maximum relative error) of the packed single-precision floating-point values from zmm2/m512/m32bcst and stores the floating-point result in zmm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/En Tuple Type Operand 1 Operand 2 Operand 3 Operand 4
A Full ModRM:reg (r, w) ModRM:r/m (r) N/A N/A
+

Description + ¶ +

+

Computes the approximate base-2 exponential evaluation of the single-precision floating-point values in the source operand (the second operand) and stores the results in the destination operand (the first operand) using the writemask k1. The approximate base-2 exponential is evaluated with less than 2^-23 of relative error.

+

Denormal input values are treated as zeros and do not signal #DE, irrespective of MXCSR.DAZ. Denormal results are flushed to zeros and do not signal #UE, irrespective of MXCSR.FTZ.

+

The source operand is a ZMM register, a 512-bit memory location, or a 512-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM register, conditionally updated using writemask k1.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

A numerically exact implementation of VEXP2xx can be found at https://software.intel.com/en-us/articles/reference-implementations-for-IA-approximation-instructions-vrcp14-vrsqrt14-vrcp28-vrsqrt28-vexp2.

+

Operation + ¶ +

+

VEXP2PS + ¶ +

+
(KL, VL) = (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC *is memory*)
+                THEN DEST[i+31:i] := EXP2_23_SP(SRC[31:0])
+                ELSE DEST[i+31:i] := EXP2_23_SP(SRC[i+31:i])
+            FI;
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[i+31:i] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[i+31:i] := 0
+        FI;
+    FI;
+ENDFOR;
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + +
Source InputResultComments
NaNQNaN(src)If (SRC = SNaN) then #I
+∞+∞
+/-01.0fExact result
-∞+0.0f
Integral value N2^ (N)Exact result
+
Table 6-45. Special Values Behavior
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VEXP2PS __m512 _mm512_exp2a23_round_ps (__m512 a, int sae);
+
+
VEXP2PS __m512 _mm512_mask_exp2a23_round_ps (__m512 a, __mmask16 m, __m512 b, int sae);
+
+
VEXP2PS __m512 _mm512_maskz_exp2a23_round_ps (__mmask16 m, __m512 b, int sae);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (if SNaN input), Overflow.

+

Other Exceptions + ¶ +

+

See Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vexpandpd.html b/x86/vexpandpd.html new file mode 100644 index 0000000..7ea1662 --- /dev/null +++ b/x86/vexpandpd.html @@ -0,0 +1,126 @@ + +VEXPANDPD + — Load Sparse Packed Double Precision Floating-Point Values From Dense Memory

VEXPANDPD + — Load Sparse Packed Double Precision Floating-Point Values From Dense Memory

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W1 88 /r VEXPANDPD xmm1 {k1}{z}, xmm2/m128AV/VAVX512VL AVX512FExpand packed double precision floating-point values from xmm2/m128 to xmm1 using writemask k1.
EVEX.256.66.0F38.W1 88 /r VEXPANDPD ymm1 {k1}{z}, ymm2/m256AV/VAVX512VL AVX512FExpand packed double precision floating-point values from ymm2/m256 to ymm1 using writemask k1.
EVEX.512.66.0F38.W1 88 /r VEXPANDPD zmm1 {k1}{z}, zmm2/m512AV/VAVX512FExpand packed double precision floating-point values from zmm2/m512 to zmm1 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Expand (load) up to 8/4/2, contiguous, double precision floating-point values of the input vector in the source operand (the second operand) to sparse elements in the destination operand (the first operand) selected by the writemask k1.

+

The destination operand is a ZMM/YMM/XMM register, the source operand can be a ZMM/YMM/XMM register or a 512/256/128-bit memory location.

+

The input vector starts from the lowest element in the source operand. The writemask register k1 selects the destination elements (a partial vector or sparse elements if less than 8 elements) to be replaced by the ascending elements in the input vector. Destination elements not selected by the writemask k1 are either unmodified or zeroed, depending on EVEX.z.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Note that the compressed displacement assumes a pre-scaling (N) corresponding to the size of one single element instead of the size of the full vector.

+

Operation + ¶ +

+

VEXPANDPD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+k := 0
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            DEST[i+63:i] := SRC[k+63:k];
+            k := k + 64
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VEXPANDPD __m512d _mm512_mask_expand_pd( __m512d s, __mmask8 k, __m512d a);
+
+
VEXPANDPD __m512d _mm512_maskz_expand_pd( __mmask8 k, __m512d a);
+
+
VEXPANDPD __m512d _mm512_mask_expandloadu_pd( __m512d s, __mmask8 k, void * a);
+
+
VEXPANDPD __m512d _mm512_maskz_expandloadu_pd( __mmask8 k, void * a);
+
+
VEXPANDPD __m256d _mm256_mask_expand_pd( __m256d s, __mmask8 k, __m256d a);
+
+
VEXPANDPD __m256d _mm256_maskz_expand_pd( __mmask8 k, __m256d a);
+
+
VEXPANDPD __m256d _mm256_mask_expandloadu_pd( __m256d s, __mmask8 k, void * a);
+
+
VEXPANDPD __m256d _mm256_maskz_expandloadu_pd( __mmask8 k, void * a);
+
+
VEXPANDPD __m128d _mm_mask_expand_pd( __m128d s, __mmask8 k, __m128d a);
+
+
VEXPANDPD __m128d _mm_maskz_expand_pd( __mmask8 k, __m128d a);
+
+
VEXPANDPD __m128d _mm_mask_expandloadu_pd( __m128d s, __mmask8 k, void * a);
+
+
VEXPANDPD __m128d _mm_maskz_expandloadu_pd( __mmask8 k, void * a);
+
+
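A usage sketch of the expand-load form (illustration only), assuming AVX512F support (e.g., -mavx512f):

#include <immintrin.h>

/* Read popcount(m) contiguous doubles from 'src' and place them, in order,
   into the result lanes selected by 'm'; unselected lanes are zeroed. */
__m512d expand_load_pd(double *src, __mmask8 m)
{
    return _mm512_maskz_expandloadu_pd(m, src);   /* VEXPANDPD zmm {k}{z}, m512 */
}

For example, with m = 0x0B (bits 0, 1, and 3 set), lanes 0, 1, and 3 of the result receive src[0], src[1], and src[2].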

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vexpandps.html b/x86/vexpandps.html new file mode 100644 index 0000000..3a96553 --- /dev/null +++ b/x86/vexpandps.html @@ -0,0 +1,126 @@ + +VEXPANDPS + — Load Sparse Packed Single Precision Floating-Point Values From Dense Memory

VEXPANDPS + — Load Sparse Packed Single Precision Floating-Point Values From Dense Memory

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 88 /r VEXPANDPS xmm1 {k1}{z}, xmm2/m128AV/VAVX512VL AVX512FExpand packed single precision floating-point values from xmm2/m128 to xmm1 using writemask k1.
EVEX.256.66.0F38.W0 88 /r VEXPANDPS ymm1 {k1}{z}, ymm2/m256AV/VAVX512VL AVX512FExpand packed single precision floating-point values from ymm2/m256 to ymm1 using writemask k1.
EVEX.512.66.0F38.W0 88 /r VEXPANDPS zmm1 {k1}{z}, zmm2/m512AV/VAVX512FExpand packed single precision floating-point values from zmm2/m512 to zmm1 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Expand (load) up to 16/8/4, contiguous, single precision floating-point values of the input vector in the source operand (the second operand) to sparse elements of the destination operand (the first operand) selected by the writemask k1.

+

The destination operand is a ZMM/YMM/XMM register, the source operand can be a ZMM/YMM/XMM register or a 512/256/128-bit memory location.

+

The input vector starts from the lowest element in the source operand. The writemask k1 selects the destination elements (a partial vector or sparse elements if less than 16 elements) to be replaced by the ascending elements in the input vector. Destination elements not selected by the writemask k1 are either unmodified or zeroed, depending on EVEX.z.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Note that the compressed displacement assumes a pre-scaling (N) corresponding to the size of one single element instead of the size of the full vector.

+

Operation + ¶ +

+

VEXPANDPS (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+k := 0
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            DEST[i+31:i] := SRC[k+31:k];
+            k := k + 32
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VEXPANDPS __m512 _mm512_mask_expand_ps( __m512 s, __mmask16 k, __m512 a);
+
+
VEXPANDPS __m512 _mm512_maskz_expand_ps( __mmask16 k, __m512 a);
+
+
VEXPANDPS __m512 _mm512_mask_expandloadu_ps( __m512 s, __mmask16 k, void * a);
+
+
VEXPANDPS __m512 _mm512_maskz_expandloadu_ps( __mmask16 k, void * a);
+
+
VEXPANDPS __m256 _mm256_mask_expand_ps( __m256 s, __mmask8 k, __m256 a);
+
+
VEXPANDPS __m256 _mm256_maskz_expand_ps( __mmask8 k, __m256 a);
+
+
VEXPANDPS __m256 _mm256_mask_expandloadu_ps( __m256 s, __mmask8 k, void * a);
+
+
VEXPANDPS __m256 _mm256_maskz_expandloadu_ps( __mmask8 k, void * a);
+
+
VEXPANDPS __m128 _mm_mask_expand_ps( __m128 s, __mmask8 k, __m128 a);
+
+
VEXPANDPS __m128 _mm_maskz_expand_ps( __mmask8 k, __m128 a);
+
+
VEXPANDPS __m128 _mm_mask_expandloadu_ps( __m128 s, __mmask8 k, void * a);
+
+
VEXPANDPS __m128 _mm_maskz_expandloadu_ps( __mmask8 k, void * a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vextractf128.vextractf32x4.vextractf64x2.vextractf32x8.vextractf64x4.html b/x86/vextractf128.vextractf32x4.vextractf64x2.vextractf32x8.vextractf64x4.html new file mode 100644 index 0000000..3844230 --- /dev/null +++ b/x86/vextractf128.vextractf32x4.vextractf64x2.vextractf32x8.vextractf64x4.html @@ -0,0 +1,393 @@ + +VEXTRACTF128/VEXTRACTF32x4/VEXTRACTF64x2/VEXTRACTF32x8/VEXTRACTF64x4 + — Extract Packed Floating-Point Values

VEXTRACTF128/VEXTRACTF32x4/VEXTRACTF64x2/VEXTRACTF32x8/VEXTRACTF64x4 + — Extract Packed Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.256.66.0F3A.W0 19 /r ib VEXTRACTF128 xmm1/m128, ymm2, imm8AV/VAVXExtract 128 bits of packed floating-point values from ymm2 and store results in xmm1/m128.
EVEX.256.66.0F3A.W0 19 /r ib VEXTRACTF32X4 xmm1/m128 {k1}{z}, ymm2, imm8CV/VAVX512VL AVX512FExtract 128 bits of packed single precision floating-point values from ymm2 and store results in xmm1/m128 subject to writemask k1.
EVEX.512.66.0F3A.W0 19 /r ib VEXTRACTF32x4 xmm1/m128 {k1}{z}, zmm2, imm8CV/VAVX512FExtract 128 bits of packed single precision floating-point values from zmm2 and store results in xmm1/m128 subject to writemask k1.
EVEX.256.66.0F3A.W1 19 /r ib VEXTRACTF64X2 xmm1/m128 {k1}{z}, ymm2, imm8BV/VAVX512VL AVX512DQExtract 128 bits of packed double precision floating-point values from ymm2 and store results in xmm1/m128 subject to writemask k1.
EVEX.512.66.0F3A.W1 19 /r ib VEXTRACTF64X2 xmm1/m128 {k1}{z}, zmm2, imm8BV/VAVX512DQExtract 128 bits of packed double precision floating-point values from zmm2 and store results in xmm1/m128 subject to writemask k1.
EVEX.512.66.0F3A.W0 1B /r ib VEXTRACTF32X8 ymm1/m256 {k1}{z}, zmm2, imm8DV/VAVX512DQExtract 256 bits of packed single precision floating-point values from zmm2 and store results in ymm1/m256 subject to writemask k1.
EVEX.512.66.0F3A.W1 1B /r ib VEXTRACTF64x4 ymm1/m256 {k1}{z}, zmm2, imm8CV/VAVX512FExtract 256 bits of packed double precision floating-point values from zmm2 and store results in ymm1/m256 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:r/m (w)ModRM:reg (r)imm8N/A
BTuple2ModRM:r/m (w)ModRM:reg (r)imm8N/A
CTuple4ModRM:r/m (w)ModRM:reg (r)imm8N/A
DTuple8ModRM:r/m (w)ModRM:reg (r)imm8N/A
+

Description + ¶ +

+

VEXTRACTF128, VEXTRACTF32x4, and VEXTRACTF64x2 extract 128 bits of packed floating-point values from the source operand (the second operand) and store them to the low 128 bits of the destination operand (the first operand). The 128-bit data extraction occurs at a 128-bit granular offset specified by imm8[0] (256-bit source) or imm8[1:0] (512-bit source) as the multiply factor. The destination may be either a vector register or a 128-bit memory location.

+

VEXTRACTF32x4: The low 128-bit of the destination operand is updated at 32-bit granularity according to the writemask.

+

VEXTRACTF32x8 and VEXTRACTF64x4 extract 256 bits of packed floating-point values from the source operand (second operand) and store them to the low 256 bits of the destination operand (the first operand). The 256-bit data extraction occurs at a 256-bit granular offset specified by imm8[0] as the multiply factor. The destination may be either a vector register or a 256-bit memory location.

+

VEXTRACTF64x4: The low 256-bit of the destination operand is updated at 64-bit granularity according to the writemask.

+

VEX.vvvv and EVEX.vvvv are reserved and must be 1111b otherwise instructions will #UD.

+

The high 6 bits of the immediate are ignored.

+

An attempt to execute VEXTRACTF128 encoded with VEX.L = 0 will cause an #UD exception.

+

Operation + ¶ +

+

VEXTRACTF32x4 (EVEX Encoded Versions) When Destination is a Register + ¶ +

+
VL = 256, 512
+IF VL = 256
+    CASE (imm8[0]) OF
+        0: TMP_DEST[127:0] := SRC1[127:0]
+        1: TMP_DEST[127:0] := SRC1[255:128]
+    ESAC.
+FI;
+IF VL = 512
+    CASE (imm8[1:0]) OF
+        00: TMP_DEST[127:0] := SRC1[127:0]
+        01: TMP_DEST[127:0] := SRC1[255:128]
+        10: TMP_DEST[127:0] := SRC1[383:256]
+        11: TMP_DEST[127:0] := SRC1[511:384]
+    ESAC.
+FI;
+FOR j := 0 TO 3
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:128] := 0
+
+

VEXTRACTF32x4 (EVEX Encoded Versions) When Destination is Memory + ¶ +

+
VL = 256, 512
+IF VL = 256
+    CASE (imm8[0]) OF
+        0: TMP_DEST[127:0] := SRC1[127:0]
+        1: TMP_DEST[127:0] := SRC1[255:128]
+    ESAC.
+FI;
+IF VL = 512
+    CASE (imm8[1:0]) OF
+        00: TMP_DEST[127:0] := SRC1[127:0]
+        01: TMP_DEST[127:0] := SRC1[255:128]
+        10: TMP_DEST[127:0] := SRC1[383:256]
+        11: TMP_DEST[127:0] := SRC1[511:384]
+    ESAC.
+FI;
+FOR j := 0 TO 3
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE *DEST[i+31:i] remains unchanged* ; merging-masking
+    FI;
+ENDFOR
+
+

VEXTRACTF64x2 (EVEX Encoded Versions) When Destination is a Register + ¶ +

+
VL = 256, 512
+IF VL = 256
+    CASE (imm8[0]) OF
+        0: TMP_DEST[127:0] := SRC1[127:0]
+        1: TMP_DEST[127:0] := SRC1[255:128]
+    ESAC.
+FI;
+IF VL = 512
+    CASE (imm8[1:0]) OF
+        00: TMP_DEST[127:0] := SRC1[127:0]
+        01: TMP_DEST[127:0] := SRC1[255:128]
+        10: TMP_DEST[127:0] := SRC1[383:256]
+        11: TMP_DEST[127:0] := SRC1[511:384]
+    ESAC.
+FI;
+FOR j := 0 TO 1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:128] := 0
+
+

VEXTRACTF64x2 (EVEX Encoded Versions) When Destination is Memory + ¶ +

+
VL = 256, 512
+IF VL = 256
+    CASE (imm8[0]) OF
+        0: TMP_DEST[127:0] := SRC1[127:0]
+        1: TMP_DEST[127:0] := SRC1[255:128]
+    ESAC.
+FI;
+IF VL = 512
+    CASE (imm8[1:0]) OF
+        00: TMP_DEST[127:0] := SRC1[127:0]
+        01: TMP_DEST[127:0] := SRC1[255:128]
+        10: TMP_DEST[127:0] := SRC1[383:256]
+        11: TMP_DEST[127:0] := SRC1[511:384]
+    ESAC.
+FI;
+FOR j := 0 TO 1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE *DEST[i+63:i] remains unchanged*
+            ; merging-masking
+    FI;
+ENDFOR
+
+

VEXTRACTF32x8 (EVEX.U1.512 Encoded Version) When Destination is a Register + ¶ +

+
VL = 512
+CASE (imm8[0]) OF
+    0: TMP_DEST[255:0] := SRC1[255:0]
+    1: TMP_DEST[255:0] := SRC1[511:256]
+ESAC.
+FOR j := 0 TO 7
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:256] := 0
+
+

VEXTRACTF32x8 (EVEX.U1.512 Encoded Version) When Destination is Memory + ¶ +

+
CASE (imm8[0]) OF
+    0: TMP_DEST[255:0] := SRC1[255:0]
+    1: TMP_DEST[255:0] := SRC1[511:256]
+ESAC.
+FOR j := 0 TO 7
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE *DEST[i+31:i] remains unchanged*
+            ; merging-masking
+    FI;
+ENDFOR
+
+

VEXTRACTF64x4 (EVEX.512 Encoded Version) When Destination is a Register + ¶ +

+
VL = 512
+CASE (imm8[0]) OF
+    0: TMP_DEST[255:0] := SRC1[255:0]
+    1: TMP_DEST[255:0] := SRC1[511:256]
+ESAC.
+FOR j := 0 TO 3
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:256] := 0
+
+

VEXTRACTF64x4 (EVEX.512 Encoded Version) When Destination is Memory + ¶ +

+
CASE (imm8[0]) OF
+    0: TMP_DEST[255:0] := SRC1[255:0]
+    1: TMP_DEST[255:0] := SRC1[511:256]
+ESAC.
+FOR j := 0 TO 3
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE ; merging-masking
+            *DEST[i+63:i] remains unchanged*
+    FI;
+ENDFOR
+
+

VEXTRACTF128 (Memory Destination Form) + ¶ +

+
CASE (imm8[0]) OF
+    0: DEST[127:0] := SRC1[127:0]
+    1: DEST[127:0] := SRC1[255:128]
+ESAC.
+
+

VEXTRACTF128 (Register Destination Form) + ¶ +

+
CASE (imm8[0]) OF
+    0: DEST[127:0] := SRC1[127:0]
+    1: DEST[127:0] := SRC1[255:128]
+ESAC.
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VEXTRACTF32x4 __m128 _mm512_extractf32x4_ps(__m512 a, const int nidx);
+
+
VEXTRACTF32x4 __m128 _mm512_mask_extractf32x4_ps(__m128 s, __mmask8 k, __m512 a, const int nidx);
+
+
VEXTRACTF32x4 __m128 _mm512_maskz_extractf32x4_ps( __mmask8 k, __m512 a, const int nidx);
+
+
VEXTRACTF32x4 __m128 _mm256_extractf32x4_ps(__m256 a, const int nidx);
+
+
VEXTRACTF32x4 __m128 _mm256_mask_extractf32x4_ps(__m128 s, __mmask8 k, __m256 a, const int nidx);
+
+
VEXTRACTF32x4 __m128 _mm256_maskz_extractf32x4_ps( __mmask8 k, __m256 a, const int nidx);
+
+
VEXTRACTF32x8 __m256 _mm512_extractf32x8_ps(__m512 a, const int nidx);
+
+
VEXTRACTF32x8 __m256 _mm512_mask_extractf32x8_ps(__m256 s, __mmask8 k, __m512 a, const int nidx);
+
+
VEXTRACTF32x8 __m256 _mm512_maskz_extractf32x8_ps( __mmask8 k, __m512 a, const int nidx);
+
+
VEXTRACTF64x2 __m128d _mm512_extractf64x2_pd(__m512d a, const int nidx);
+
+
VEXTRACTF64x2 __m128d _mm512_mask_extractf64x2_pd(__m128d s, __mmask8 k, __m512d a, const int nidx);
+
+
VEXTRACTF64x2 __m128d _mm512_maskz_extractf64x2_pd( __mmask8 k, __m512d a, const int nidx);
+
+
VEXTRACTF64x2 __m128d _mm256_extractf64x2_pd(__m256d a, const int nidx);
+
+
VEXTRACTF64x2 __m128d _mm256_mask_extractf64x2_pd(__m128d s, __mmask8 k, __m256d a, const int nidx);
+
+
VEXTRACTF64x2 __m128d _mm256_maskz_extractf64x2_pd( __mmask8 k, __m256d a, const int nidx);
+
+
VEXTRACTF64x4 __m256d _mm512_extractf64x4_pd( __m512d a, const int nidx);
+
+
VEXTRACTF64x4 __m256d _mm512_mask_extractf64x4_pd(__m256d s, __mmask8 k, __m512d a, const int nidx);
+
+
VEXTRACTF64x4 __m256d _mm512_maskz_extractf64x4_pd( __mmask8 k, __m512d a, const int nidx);
+
+
VEXTRACTF128 __m128 _mm256_extractf128_ps (__m256 a, int offset);
+
+
VEXTRACTF128 __m128d _mm256_extractf128_pd (__m256d a, int offset);
+
+
VEXTRACTF128 __m128i _mm256_extractf128_si256(__m256i a, int offset);
+
+
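A usage sketch (illustration only), assuming AVX (for VEXTRACTF128) and AVX512F support; the lane index must be a compile-time constant:

#include <immintrin.h>

/* Upper 128-bit lane (bits 255:128) of a 256-bit vector. */
__m128 upper_lane_256(__m256 v)
{
    return _mm256_extractf128_ps(v, 1);     /* VEXTRACTF128 xmm, ymm, 1 */
}

/* Third 128-bit lane (bits 383:256) of a 512-bit vector. */
__m128 lane2_512(__m512 v)
{
    return _mm512_extractf32x4_ps(v, 2);    /* VEXTRACTF32X4 xmm, zmm, 2 */
}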

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-23, “Type 6 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-54, “Type E6NF Class Exception Conditions.”

+

Additionally:

+ + + + + + +
#UDIf VEX.L = 0.
#UDIf VEX.vvvv != 1111B or EVEX.vvvv != 1111B.
diff --git a/x86/vextracti128.vextracti32x4.vextracti64x2.vextracti32x8.vextracti64x4.html b/x86/vextracti128.vextracti32x4.vextracti64x2.vextracti32x8.vextracti64x4.html new file mode 100644 index 0000000..89b4cef --- /dev/null +++ b/x86/vextracti128.vextracti32x4.vextracti64x2.vextracti32x8.vextracti64x4.html @@ -0,0 +1,393 @@ + +VEXTRACTI128/VEXTRACTI32x4/VEXTRACTI64x2/VEXTRACTI32x8/VEXTRACTI64x4 + — ExtractPacked Integer Values

VEXTRACTI128/VEXTRACTI32x4/VEXTRACTI64x2/VEXTRACTI32x8/VEXTRACTI64x4 + — ExtractPacked Integer Values

Opcode/Instruction | Op/En | 64/32 Bit Mode Support | CPUID Feature Flag | Description
VEX.256.66.0F3A.W0 39 /r ib VEXTRACTI128 xmm1/m128, ymm2, imm8 | A | V/V | AVX2 | Extract 128 bits of integer data from ymm2 and store results in xmm1/m128.
EVEX.256.66.0F3A.W0 39 /r ib VEXTRACTI32X4 xmm1/m128 {k1}{z}, ymm2, imm8 | C | V/V | AVX512VL AVX512F | Extract 128 bits of double-word integer values from ymm2 and store results in xmm1/m128 subject to writemask k1.
EVEX.512.66.0F3A.W0 39 /r ib VEXTRACTI32x4 xmm1/m128 {k1}{z}, zmm2, imm8 | C | V/V | AVX512F | Extract 128 bits of double-word integer values from zmm2 and store results in xmm1/m128 subject to writemask k1.
EVEX.256.66.0F3A.W1 39 /r ib VEXTRACTI64X2 xmm1/m128 {k1}{z}, ymm2, imm8 | B | V/V | AVX512VL AVX512DQ | Extract 128 bits of quad-word integer values from ymm2 and store results in xmm1/m128 subject to writemask k1.
EVEX.512.66.0F3A.W1 39 /r ib VEXTRACTI64X2 xmm1/m128 {k1}{z}, zmm2, imm8 | B | V/V | AVX512DQ | Extract 128 bits of quad-word integer values from zmm2 and store results in xmm1/m128 subject to writemask k1.
EVEX.512.66.0F3A.W0 3B /r ib VEXTRACTI32X8 ymm1/m256 {k1}{z}, zmm2, imm8 | D | V/V | AVX512DQ | Extract 256 bits of double-word integer values from zmm2 and store results in ymm1/m256 subject to writemask k1.
EVEX.512.66.0F3A.W1 3B /r ib VEXTRACTI64x4 ymm1/m256 {k1}{z}, zmm2, imm8 | C | V/V | AVX512F | Extract 256 bits of quad-word integer values from zmm2 and store results in ymm1/m256 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | N/A | ModRM:r/m (w) | ModRM:reg (r) | imm8 | N/A
B | Tuple2 | ModRM:r/m (w) | ModRM:reg (r) | imm8 | N/A
C | Tuple4 | ModRM:r/m (w) | ModRM:reg (r) | imm8 | N/A
D | Tuple8 | ModRM:r/m (w) | ModRM:reg (r) | imm8 | N/A
+

Description + ¶ +

+

VEXTRACTI128/VEXTRACTI32x4 and VEXTRACTI64x2 extract 128 bits of packed integer values from the source operand (the second operand) and store them to the low 128 bits of the destination operand (the first operand). The 128-bit extraction occurs at a 128-bit granular offset specified by imm8[0] (for a 256-bit source) or imm8[1:0] (for a 512-bit source) as the multiply factor. The destination may be either a vector register or a 128-bit memory location.

+

VEXTRACTI32x4: The low 128-bit of the destination operand is updated at 32-bit granularity according to the writemask.

+

VEXTRACTI64x2: The low 128-bit of the destination operand is updated at 64-bit granularity according to the writemask.

+

VEXTRACTI32x8 and VEXTRACTI64x4 extract 256 bits of packed integer values from the source operand (the second operand) and store them to the low 256 bits of the destination operand (the first operand). The 256-bit extraction occurs at a 256-bit granular offset specified by imm8[0] as the multiply factor. The destination may be either a vector register or a 256-bit memory location.

+

VEXTRACTI32x8: The low 256-bit of the destination operand is updated at 32-bit granularity according to the writemask.

+

VEXTRACTI64x4: The low 256-bit of the destination operand is updated at 64-bit granularity according to the writemask.

+

VEX.vvvv and EVEX.vvvv are reserved and must be 1111b; otherwise the instruction will #UD.

+

The high 7 bits (6 bits in EVEX.512) of the immediate are ignored.

+

An attempt to execute VEXTRACTI128 encoded with VEX.L = 0 will cause an #UD exception.

+
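To make the imm8 lane selection concrete, the following C sketch (an illustration only, not part of the reference text) uses the _mm512_extracti32x4_epi32 intrinsic listed later on this page to pull the third 128-bit lane (imm8[1:0] = 10b, source bits 383:256) out of a 512-bit integer vector. It assumes an AVX512F-capable compiler and <immintrin.h>; the values are arbitrary.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    /* Sixteen packed dwords; element i holds the value i. */
    __m512i v = _mm512_set_epi32(15, 14, 13, 12, 11, 10, 9, 8,
                                  7,  6,  5,  4,  3,  2, 1, 0);

    /* nidx = 2 corresponds to imm8[1:0] = 10b, i.e., source bits 383:256. */
    __m128i lane2 = _mm512_extracti32x4_epi32(v, 2);

    int out[4];
    _mm_storeu_si128((__m128i *)out, lane2);
    printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]); /* prints: 8 9 10 11 */
    return 0;
}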

Operation + ¶ +

+

VEXTRACTI32x4 (EVEX encoded versions) when destination is a register + ¶ +

+
VL = 256, 512
+IF VL = 256
+    CASE (imm8[0]) OF
+        0: TMP_DEST[127:0] := SRC1[127:0]
+        1: TMP_DEST[127:0] := SRC1[255:128]
+    ESAC.
+FI;
+IF VL = 512
+    CASE (imm8[1:0]) OF
+        00: TMP_DEST[127:0] := SRC1[127:0]
+        01: TMP_DEST[127:0] := SRC1[255:128]
+        10: TMP_DEST[127:0] := SRC1[383:256]
+        11: TMP_DEST[127:0] := SRC1[511:384]
+    ESAC.
+FI;
+FOR j := 0 TO 3
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:128] := 0
+
+

VEXTRACTI32x4 (EVEX encoded versions) when destination is memory + ¶ +

+
VL = 256, 512
+IF VL = 256
+    CASE (imm8[0]) OF
+        0: TMP_DEST[127:0] := SRC1[127:0]
+        1: TMP_DEST[127:0] := SRC1[255:128]
+    ESAC.
+FI;
+IF VL = 512
+    CASE (imm8[1:0]) OF
+        00: TMP_DEST[127:0] := SRC1[127:0]
+        01: TMP_DEST[127:0] := SRC1[255:128]
+        10: TMP_DEST[127:0] := SRC1[383:256]
+        11: TMP_DEST[127:0] := SRC1[511:384]
+    ESAC.
+FI;
+FOR j := 0 TO 3
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE *DEST[i+31:i] remains unchanged*
+            ; merging-masking
+    FI;
+ENDFOR
+
+

VEXTRACTI64x2 (EVEX encoded versions) when destination is a register + ¶ +

+
VL = 256, 512
+IF VL = 256
+    CASE (imm8[0]) OF
+        0: TMP_DEST[127:0] := SRC1[127:0]
+        1: TMP_DEST[127:0] := SRC1[255:128]
+    ESAC.
+FI;
+IF VL = 512
+    CASE (imm8[1:0]) OF
+        00: TMP_DEST[127:0] := SRC1[127:0]
+        01: TMP_DEST[127:0] := SRC1[255:128]
+        10: TMP_DEST[127:0] := SRC1[383:256]
+        11: TMP_DEST[127:0] := SRC1[511:384]
+    ESAC.
+FI;
+FOR j := 0 TO 1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:128] := 0
+
+

VEXTRACTI64x2 (EVEX encoded versions) when destination is memory + ¶ +

+
VL = 256, 512
+IF VL = 256
+    CASE (imm8[0]) OF
+        0: TMP_DEST[127:0] := SRC1[127:0]
+        1: TMP_DEST[127:0] := SRC1[255:128]
+    ESAC.
+FI;
+IF VL = 512
+    CASE (imm8[1:0]) OF
+        00: TMP_DEST[127:0] := SRC1[127:0]
+        01: TMP_DEST[127:0] := SRC1[255:128]
+        10: TMP_DEST[127:0] := SRC1[383:256]
+        11: TMP_DEST[127:0] := SRC1[511:384]
+    ESAC.
+FI;
+FOR j := 0 TO 1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE *DEST[i+63:i] remains unchanged*
+            ; merging-masking
+    FI;
+ENDFOR
+
+

VEXTRACTI32x8 (EVEX.U1.512 encoded version) when destination is a register + ¶ +

+
VL = 512
+CASE (imm8[0]) OF
+    0: TMP_DEST[255:0] := SRC1[255:0]
+    1: TMP_DEST[255:0] := SRC1[511:256]
+ESAC.
+FOR j := 0 TO 7
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:256] := 0
+
+

VEXTRACTI32x8 (EVEX.U1.512 encoded version) when destination is memory + ¶ +

+
CASE (imm8[0]) OF
+    0: TMP_DEST[255:0] := SRC1[255:0]
+    1: TMP_DEST[255:0] := SRC1[511:256]
+ESAC.
+FOR j := 0 TO 7
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE *DEST[i+31:i] remains unchanged*
+            ; merging-masking
+    FI;
+ENDFOR
+
+

VEXTRACTI64x4 (EVEX.512 encoded version) when destination is a register + ¶ +

+
VL = 512
+CASE (imm8[0]) OF
+    0: TMP_DEST[255:0] := SRC1[255:0]
+    1: TMP_DEST[255:0] := SRC1[511:256]
+ESAC.
+FOR j := 0 TO 3
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:256] := 0
+
+

VEXTRACTI64x4 (EVEX.512 encoded version) when destination is memory + ¶ +

+
CASE (imm8[0]) OF
+    0: TMP_DEST[255:0] := SRC1[255:0]
+    1: TMP_DEST[255:0] := SRC1[511:256]
+ESAC.
+FOR j := 0 TO 3
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE *DEST[i+63:i] remains unchanged*
+            ; merging-masking
+    FI;
+ENDFOR
+
+

VEXTRACTI128 (memory destination form) + ¶ +

+
CASE (imm8[0]) OF
+    0: DEST[127:0] := SRC1[127:0]
+    1: DEST[127:0] := SRC1[255:128]
+ESAC.
+
+

VEXTRACTI128 (register destination form) + ¶ +

+
CASE (imm8[0]) OF
+    0: DEST[127:0] := SRC1[127:0]
+    1: DEST[127:0] := SRC1[255:128]
+ESAC.
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VEXTRACTI32x4 __m128i _mm512_extracti32x4_epi32(__m512i a, const int nidx);
+
+
VEXTRACTI32x4 __m128i _mm512_mask_extracti32x4_epi32(__m128i s, __mmask8 k, __m512i a, const int nidx);
+
+
VEXTRACTI32x4 __m128i _mm512_maskz_extracti32x4_epi32( __mmask8 k, __m512i a, const int nidx);
+
+
VEXTRACTI32x4 __m128i _mm256_extracti32x4_epi32(__m256i a, const int nidx);
+
+
VEXTRACTI32x4 __m128i _mm256_mask_extracti32x4_epi32(__m128i s, __mmask8 k, __m256i a, const int nidx);
+
+
VEXTRACTI32x4 __m128i _mm256_maskz_extracti32x4_epi32( __mmask8 k, __m256i a, const int nidx);
+
+
VEXTRACTI32x8 __m256i _mm512_extracti32x8_epi32(__m512i a, const int nidx);
+
+
VEXTRACTI32x8 __m256i _mm512_mask_extracti32x8_epi32(__m256i s, __mmask8 k, __m512i a, const int nidx);
+
+
VEXTRACTI32x8 __m256i _mm512_maskz_extracti32x8_epi32( __mmask8 k, __m512i a, const int nidx);
+
+
VEXTRACTI64x2 __m128i _mm512_extracti64x2_epi64(__m512i a, const int nidx);
+
+
VEXTRACTI64x2 __m128i _mm512_mask_extracti64x2_epi64(__m128i s, __mmask8 k, __m512i a, const int nidx);
+
+
VEXTRACTI64x2 __m128i _mm512_maskz_extracti64x2_epi64( __mmask8 k, __m512i a, const int nidx);
+
+
VEXTRACTI64x2 __m128i _mm256_extracti64x2_epi64(__m256i a, const int nidx);
+
+
VEXTRACTI64x2 __m128i _mm256_mask_extracti64x2_epi64(__m128i s, __mmask8 k, __m256i a, const int nidx);
+
+
VEXTRACTI64x2 __m128i _mm256_maskz_extracti64x2_epi64( __mmask8 k, __m256i a, const int nidx);
+
+
VEXTRACTI64x4 __m256i _mm512_extracti64x4_epi64(__m512i a, const int nidx);
+
+
VEXTRACTI64x4 __m256i _mm512_mask_extracti64x4_epi64(__m256i s, __mmask8 k, __m512i a, const int nidx);
+
+
VEXTRACTI64x4 __m256i _mm512_maskz_extracti64x4_epi64( __mmask8 k, __m512i a, const int nidx);
+
+
VEXTRACTI128 __m128i _mm256_extracti128_si256(__m256i a, int offset);
+
+
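A hedged usage sketch of the zero-masking form listed above: with writemask 0b0101, only dword elements 0 and 2 of the extracted lane are kept and the other elements are zeroed. It assumes an AVX512F-capable compiler and <immintrin.h>; the values are arbitrary.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m512i v = _mm512_set_epi32(15, 14, 13, 12, 11, 10, 9, 8,
                                  7,  6,  5,  4,  3,  2, 1, 0);

    /* Zero-masking: k1 bits that are 0 force the corresponding dwords to 0. */
    __m128i lane1 = _mm512_maskz_extracti32x4_epi32((__mmask8)0x5, v, 1);

    int out[4];
    _mm_storeu_si128((__m128i *)out, lane1);
    printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]); /* prints: 4 0 6 0 */
    return 0;
}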

SIMD Floating-Point Exceptions + ¶ +

+

None

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-23, “Type 6 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-54, “Type E6NF Class Exception Conditions.”

+

Additionally:

#UD | If VEX.L = 0.
#UD | If VEX.vvvv != 1111B or EVEX.vvvv != 1111B.
diff --git a/x86/vfcmaddcph.vfmaddcph.html b/x86/vfcmaddcph.vfmaddcph.html new file mode 100644 index 0000000..cdd8cb4 --- /dev/null +++ b/x86/vfcmaddcph.vfmaddcph.html @@ -0,0 +1,216 @@ + +VFCMADDCPH/VFMADDCPH + — Complex Multiply and Accumulate FP16 Values

VFCMADDCPH/VFMADDCPH + — Complex Multiply and Accumulate FP16 Values

Opcode/Instruction | Op/En | 64/32 Bit Mode Support | CPUID Feature Flag | Description
EVEX.128.F2.MAP6.W0 56 /r VFCMADDCPH xmm1{k1}{z}, xmm2, xmm3/m128/m32bcst | A | V/V | AVX512-FP16 AVX512VL | Complex multiply a pair of FP16 values from xmm2 and complex conjugate of xmm3/m128/m32bcst, add to xmm1 and store the result in xmm1 subject to writemask k1.
EVEX.256.F2.MAP6.W0 56 /r VFCMADDCPH ymm1{k1}{z}, ymm2, ymm3/m256/m32bcst | A | V/V | AVX512-FP16 AVX512VL | Complex multiply a pair of FP16 values from ymm2 and complex conjugate of ymm3/m256/m32bcst, add to ymm1 and store the result in ymm1 subject to writemask k1.
EVEX.512.F2.MAP6.W0 56 /r VFCMADDCPH zmm1{k1}{z}, zmm2, zmm3/m512/m32bcst {er} | A | V/V | AVX512-FP16 | Complex multiply a pair of FP16 values from zmm2 and complex conjugate of zmm3/m512/m32bcst, add to zmm1 and store the result in zmm1 subject to writemask k1.
EVEX.128.F3.MAP6.W0 56 /r VFMADDCPH xmm1{k1}{z}, xmm2, xmm3/m128/m32bcst | A | V/V | AVX512-FP16 AVX512VL | Complex multiply a pair of FP16 values from xmm2 and xmm3/m128/m32bcst, add to xmm1 and store the result in xmm1 subject to writemask k1.
EVEX.256.F3.MAP6.W0 56 /r VFMADDCPH ymm1{k1}{z}, ymm2, ymm3/m256/m32bcst | A | V/V | AVX512-FP16 AVX512VL | Complex multiply a pair of FP16 values from ymm2 and ymm3/m256/m32bcst, add to ymm1 and store the result in ymm1 subject to writemask k1.
EVEX.512.F3.MAP6.W0 56 /r VFMADDCPH zmm1{k1}{z}, zmm2, zmm3/m512/m32bcst {er} | A | V/V | AVX512-FP16 | Complex multiply a pair of FP16 values from zmm2 and zmm3/m512/m32bcst, add to zmm1 and store the result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

Op/En | Tuple | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | Full | ModRM:reg (r, w) | VEX.vvvv (r) | ModRM:r/m (r) | N/A
+

Description + ¶ +

+

This instruction performs a complex multiply and accumulate operation. There are normal and complex conjugate forms of the operation.

+

The broadcasting and masking for this operation is done on 32-bit quantities representing a pair of FP16 values.

+

Rounding is performed at every FMA (fused multiply and add) boundary. Execution occurs as if all MXCSR exceptions are masked. MXCSR status bits are updated to reflect exceptional conditions.

+
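The arithmetic applied to each complex pair can be written out in plain scalar C. The sketch below mirrors the Operation pseudocode for one pair; it is only an illustration of the math (it uses float rather than FP16, so it ignores the per-FMA intermediate rounding described above) and is not part of the reference text.

#include <stdio.h>

/* One complex element of VFMADDCPH / VFCMADDCPH, modeled in scalar float.
   The conjugate form effectively negates the imaginary part of src2. */
typedef struct { float re, im; } cplx;

static cplx fmaddc(cplx dest, cplx src1, cplx src2, int conjugate) {
    float s2im = conjugate ? -src2.im : src2.im;
    cplx r;
    r.re = dest.re + src1.re * src2.re - src1.im * s2im;
    r.im = dest.im + src1.im * src2.re + src1.re * s2im;
    return r;
}

int main(void) {
    cplx acc = {1.0f, 0.0f}, a = {2.0f, 3.0f}, b = {4.0f, 5.0f};
    cplx n = fmaddc(acc, a, b, 0);   /* VFMADDCPH-style:  acc + a*b       */
    cplx c = fmaddc(acc, a, b, 1);   /* VFCMADDCPH-style: acc + a*conj(b) */
    printf("normal: %g%+gi  conjugate: %g%+gi\n", n.re, n.im, c.re, c.im);
    return 0;   /* normal: -6+22i  conjugate: 24+2i */
}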

Operation + ¶ +

+

VFCMADDCPH dest{k1}, src1, src2 (AVX512) + ¶ +

+
VL = 128, 256, 512
+KL := VL / 32
+FOR i := 0 to KL-1:
+    IF k1[i] or *no writemask*:
+        IF broadcasting and src2 is memory:
+            tsrc2.fp16[2*i+0] := src2.fp16[0]
+            tsrc2.fp16[2*i+1] := src2.fp16[1]
+        ELSE:
+            tsrc2.fp16[2*i+0] := src2.fp16[2*i+0]
+            tsrc2.fp16[2*i+1] := src2.fp16[2*i+1]
+FOR i := 0 to KL-1:
+    IF k1[i] or *no writemask*:
+        tmp[2*i+0] := dest.fp16[2*i+0] + src1.fp16[2*i+0] * tsrc2.fp16[2*i+0]
+        tmp[2*i+1] := dest.fp16[2*i+1] + src1.fp16[2*i+1] * tsrc2.fp16[2*i+0]
+FOR i := 0 to KL-1:
+    IF k1[i] or *no writemask*:
+        // conjugate version subtracts odd final term
+        dest.fp16[2*i+0] := tmp[2*i+0] + src1.fp16[2*i+1] * tsrc2.fp16[2*i+1]
+        dest.fp16[2*i+1] := tmp[2*i+1] - src1.fp16[2*i+0] * tsrc2.fp16[2*i+1]
+    ELSE IF *zeroing*:
+        dest.fp16[2*i+0] := 0
+        dest.fp16[2*i+1] := 0
+DEST[MAXVL-1:VL] := 0
+
+

VFMADDCPH dest{k1}, src1, src2 (AVX512) + ¶ +

+
VL = 128, 256, 512
+KL := VL / 32
+FOR i := 0 to KL-1:
+    IF k1[i] or *no writemask*:
+        IF broadcasting and src2 is memory:
+            tsrc2.fp16[2*i+0] := src2.fp16[0]
+            tsrc2.fp16[2*i+1] := src2.fp16[1]
+        ELSE:
+            tsrc2.fp16[2*i+0] := src2.fp16[2*i+0]
+            tsrc2.fp16[2*i+1] := src2.fp16[2*i+1]
+FOR i := 0 to KL-1:
+    IF k1[i] or *no writemask*:
+        tmp[2*i+0] := dest.fp16[2*i+0] + src1.fp16[2*i+0] * tsrc2.fp16[2*i+0]
+        tmp[2*i+1] := dest.fp16[2*i+1] + src1.fp16[2*i+1] * tsrc2.fp16[2*i+0]
+FOR i := 0 to KL-1:
+    IF k1[i] or *no writemask*:
+        // non-conjugate version subtracts even term
+        dest.fp16[2*i+0] := tmp[2*i+0] - src1.fp16[2*i+1] * tsrc2.fp16[2*i+1]
+        dest.fp16[2*i+1] := tmp[2*i+1] + src1.fp16[2*i+0] * tsrc2.fp16[2*i+1]
+    ELSE IF *zeroing*:
+        dest.fp16[2*i+0] := 0
+        dest.fp16[2*i+1] := 0
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFCMADDCPH __m128h _mm_fcmadd_pch (__m128h a, __m128h b, __m128h c);
+
+
VFCMADDCPH __m128h _mm_mask_fcmadd_pch (__m128h a, __mmask8 k, __m128h b, __m128h c);
+
+
VFCMADDCPH __m128h _mm_mask3_fcmadd_pch (__m128h a, __m128h b, __m128h c, __mmask8 k);
+
+
VFCMADDCPH __m128h _mm_maskz_fcmadd_pch (__mmask8 k, __m128h a, __m128h b, __m128h c);
+
+
VFCMADDCPH __m256h _mm256_fcmadd_pch (__m256h a, __m256h b, __m256h c);
+
+
VFCMADDCPH __m256h _mm256_mask_fcmadd_pch (__m256h a, __mmask8 k, __m256h b, __m256h c);
+
+
VFCMADDCPH __m256h _mm256_mask3_fcmadd_pch (__m256h a, __m256h b, __m256h c, __mmask8 k);
+
+
VFCMADDCPH __m256h _mm256_maskz_fcmadd_pch (__mmask8 k, __m256h a, __m256h b, __m256h c);
+
+
VFCMADDCPH __m512h _mm512_fcmadd_pch (__m512h a, __m512h b, __m512h c);
+
+
VFCMADDCPH __m512h _mm512_mask_fcmadd_pch (__m512h a, __mmask16 k, __m512h b, __m512h c);
+
+
VFCMADDCPH __m512h _mm512_mask3_fcmadd_pch (__m512h a, __m512h b, __m512h c, __mmask16 k);
+
+
VFCMADDCPH __m512h _mm512_maskz_fcmadd_pch (__mmask16 k, __m512h a, __m512h b, __m512h c);
+
+
VFCMADDCPH __m512h _mm512_fcmadd_round_pch (__m512h a, __m512h b, __m512h c, const int rounding);
+
+
VFCMADDCPH __m512h _mm512_mask_fcmadd_round_pch (__m512h a, __mmask16 k, __m512h b, __m512h c, const int rounding);
+
+
VFCMADDCPH __m512h _mm512_mask3_fcmadd_round_pch (__m512h a, __m512h b, __m512h c, __mmask16 k, const int rounding);
+
+
VFCMADDCPH __m512h _mm512_maskz_fcmadd_round_pch (__mmask16 k, __m512h a, __m512h b, __m512h c, const int rounding);
+
+
VFMADDCPH __m128h _mm_fmadd_pch (__m128h a, __m128h b, __m128h c);
+
+
VFMADDCPH __m128h _mm_mask_fmadd_pch (__m128h a, __mmask8 k, __m128h b, __m128h c);
+
+
VFMADDCPH __m128h _mm_mask3_fmadd_pch (__m128h a, __m128h b, __m128h c, __mmask8 k);
+
+
VFMADDCPH __m128h _mm_maskz_fmadd_pch (__mmask8 k, __m128h a, __m128h b, __m128h c);
+
+
VFMADDCPH __m256h _mm256_fmadd_pch (__m256h a, __m256h b, __m256h c);
+
+
VFMADDCPH __m256h _mm256_mask_fmadd_pch (__m256h a, __mmask8 k, __m256h b, __m256h c);
+
+
VFMADDCPH __m256h _mm256_mask3_fmadd_pch (__m256h a, __m256h b, __m256h c, __mmask8 k);
+
+
VFMADDCPH __m256h _mm256_maskz_fmadd_pch (__mmask8 k, __m256h a, __m256h b, __m256h c);
+
+
VFMADDCPH __m512h _mm512_fmadd_pch (__m512h a, __m512h b, __m512h c);
+
+
VFMADDCPH __m512h _mm512_mask_fmadd_pch (__m512h a, __mmask16 k, __m512h b, __m512h c);
+
+
VFMADDCPH __m512h _mm512_mask3_fmadd_pch (__m512h a, __m512h b, __m512h c, __mmask16 k);
+
+
VFMADDCPH __m512h _mm512_maskz_fmadd_pch (__mmask16 k, __m512h a, __m512h b, __m512h c);
+
+
VFMADDCPH __m512h _mm512_fmadd_round_pch (__m512h a, __m512h b, __m512h c, const int rounding);
+
+
VFMADDCPH __m512h _mm512_mask_fmadd_round_pch (__m512h a, __mmask16 k, __m512h b, __m512h c, const int rounding);
+
+
VFMADDCPH __m512h _mm512_mask3_fmadd_round_pch (__m512h a, __m512h b, __m512h c, __mmask16 k, const int rounding);
+
+
VFMADDCPH __m512h _mm512_maskz_fmadd_round_pch (__mmask16 k, __m512h a, __m512h b, __m512h c, const int rounding);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Overflow, Precision, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-49, “Type E4 Class Exception Conditions.”

+

Additionally:

#UD | If (dest_reg == src1_reg) or (dest_reg == src2_reg).
diff --git a/x86/vfcmaddcsh.vfmaddcsh.html b/x86/vfcmaddcsh.vfmaddcsh.html new file mode 100644 index 0000000..9e851cb --- /dev/null +++ b/x86/vfcmaddcsh.vfmaddcsh.html @@ -0,0 +1,137 @@ + +VFCMADDCSH/VFMADDCSH + — Complex Multiply and Accumulate Scalar FP16 Values

VFCMADDCSH/VFMADDCSH + — Complex Multiply and Accumulate Scalar FP16 Values

Opcode/Instruction | Op/En | 64/32 Bit Mode Support | CPUID Feature Flag | Description
EVEX.LLIG.F2.MAP6.W0 57 /r VFCMADDCSH xmm1{k1}{z}, xmm2, xmm3/m32 {er} | A | V/V | AVX512-FP16 | Complex multiply a pair of FP16 values from xmm2 and complex conjugate of xmm3/m32, add to xmm1 and store the result in xmm1 subject to writemask k1. Bits 127:32 of xmm2 are copied to xmm1[127:32].
EVEX.LLIG.F3.MAP6.W0 57 /r VFMADDCSH xmm1{k1}{z}, xmm2, xmm3/m32 {er} | A | V/V | AVX512-FP16 | Complex multiply a pair of FP16 values from xmm2 and xmm3/m32, add to xmm1 and store the result in xmm1 subject to writemask k1. Bits 127:32 of xmm2 are copied to xmm1[127:32].
+

Instruction Operand Encoding + ¶ +

Op/En | Tuple | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | Scalar | ModRM:reg (r, w) | VEX.vvvv (r) | ModRM:r/m (r) | N/A
+

Description + ¶ +

+

This instruction performs a complex multiply and accumulate operation. There are normal and complex conjugate forms of the operation.

+

The masking for this operation is done on 32-bit quantities representing a pair of FP16 values.

+

Bits 127:32 of the destination operand are copied from the corresponding bits of the first source operand. Bits MAXVL-1:128 of the destination operand are zeroed. The low FP16 element of the destination is updated according to the writemask.

+

Rounding is performed at every FMA (fused multiply and add) boundary. Execution occurs as if all MXCSR exceptions are masked. MXCSR status bits are updated to reflect exceptional conditions.

+

Operation + ¶ +

+

VFCMADDCSH dest{k1}, src1, src2 (AVX512) + ¶ +

+
IF k1[0] or *no writemask*:
+    tmp[0] := dest.fp16[0] + src1.fp16[0] * src2.fp16[0]
+    tmp[1] := dest.fp16[1] + src1.fp16[1] * src2.fp16[0]
+    // conjugate version subtracts odd final term
+    dest.fp16[0] := tmp[0] + src1.fp16[1] * src2.fp16[1]
+    dest.fp16[1] := tmp[1] - src1.fp16[0] * src2.fp16[1]
+ELSE IF *zeroing*:
+    dest.fp16[0] := 0
+    dest.fp16[1] := 0
+DEST[127:32] := src1[127:32] // copy upper part of src1
+DEST[MAXVL-1:128] := 0
+
+

VFMADDCSH dest{k1}, src1, src2 (AVX512) + ¶ +

+
IF k1[0] or *no writemask*:
+    tmp[0] := dest.fp16[0] + src1.fp16[0] * src2.fp16[0]
+    tmp[1] := dest.fp16[1] + src1.fp16[1] * src2.fp16[0]
+    // non-conjugate version subtracts last even term
+    dest.fp16[0] := tmp[0] - src1.fp16[1] * src2.fp16[1]
+    dest.fp16[1] := tmp[1] + src1.fp16[0] * src2.fp16[1]
+ELSE IF *zeroing*:
+    dest.fp16[0] := 0
+    dest.fp16[1] := 0
+DEST[127:32] := src1[127:32] // copy upper part of src1
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFCMADDCSH __m128h _mm_fcmadd_round_sch (__m128h a, __m128h b, __m128h c, const int rounding);
+
+
VFCMADDCSH __m128h _mm_mask_fcmadd_round_sch (__m128h a, __mmask8 k, __m128h b, __m128h c, const int rounding);
+
+
VFCMADDCSH __m128h _mm_mask3_fcmadd_round_sch (__m128h a, __m128h b, __m128h c, __mmask8 k, const int rounding);
+
+
VFCMADDCSH __m128h _mm_maskz_fcmadd_round_sch (__mmask8 k, __m128h a, __m128h b, __m128h c, const int rounding);
+
+
VFCMADDCSH __m128h _mm_fcmadd_sch (__m128h a, __m128h b, __m128h c);
+
+
VFCMADDCSH __m128h _mm_mask_fcmadd_sch (__m128h a, __mmask8 k, __m128h b, __m128h c);
+
+
VFCMADDCSH __m128h _mm_mask3_fcmadd_sch (__m128h a, __m128h b, __m128h c, __mmask8 k);
+
+
VFCMADDCSH __m128h _mm_maskz_fcmadd_sch (__mmask8 k, __m128h a, __m128h b, __m128h c);
+
+
VFMADDCSH __m128h _mm_fmadd_round_sch (__m128h a, __m128h b, __m128h c, const int rounding);
+
+
VFMADDCSH __m128h _mm_mask_fmadd_round_sch (__m128h a, __mmask8 k, __m128h b, __m128h c, const int rounding);
+
+
VFMADDCSH __m128h _mm_mask3_fmadd_round_sch (__m128h a, __m128h b, __m128h c, __mmask8 k, const int rounding);
+
+
VFMADDCSH __m128h _mm_maskz_fmadd_round_sch (__mmask8 k, __m128h a, __m128h b, __m128h c, const int rounding);
+
+
VFMADDCSH __m128h _mm_fmadd_sch (__m128h a, __m128h b, __m128h c);
+
+
VFMADDCSH __m128h _mm_mask_fmadd_sch (__m128h a, __mmask8 k, __m128h b, __m128h c);
+
+
VFMADDCSH __m128h _mm_mask3_fmadd_sch (__m128h a, __m128h b, __m128h c, __mmask8 k);
+
+
VFMADDCSH __m128h _mm_maskz_fmadd_sch (__mmask8 k, __m128h a, __m128h b, __m128h c);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Overflow, Precision, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-58, “Type E10 Class Exception Conditions.”

+

Additionally:

#UD | If (dest_reg == src1_reg) or (dest_reg == src2_reg).
diff --git a/x86/vfcmulcph.vfmulcph.html b/x86/vfcmulcph.vfmulcph.html new file mode 100644 index 0000000..b7f0dd7 --- /dev/null +++ b/x86/vfcmulcph.vfmulcph.html @@ -0,0 +1,247 @@ + +VFCMULCPH/VFMULCPH + — Complex Multiply FP16 Values

VFCMULCPH/VFMULCPH + — Complex Multiply FP16 Values

Opcode/Instruction | Op/En | 64/32 Bit Mode Support | CPUID Feature Flag | Description
EVEX.128.F2.MAP6.W0 D6 /r VFCMULCPH xmm1{k1}{z}, xmm2, xmm3/m128/m32bcst | A | V/V | AVX512-FP16 AVX512VL | Complex multiply a pair of FP16 values from xmm2 and complex conjugate of xmm3/m128/m32bcst, and store the result in xmm1 subject to writemask k1.
EVEX.256.F2.MAP6.W0 D6 /r VFCMULCPH ymm1{k1}{z}, ymm2, ymm3/m256/m32bcst | A | V/V | AVX512-FP16 AVX512VL | Complex multiply a pair of FP16 values from ymm2 and complex conjugate of ymm3/m256/m32bcst, and store the result in ymm1 subject to writemask k1.
EVEX.512.F2.MAP6.W0 D6 /r VFCMULCPH zmm1{k1}{z}, zmm2, zmm3/m512/m32bcst {er} | A | V/V | AVX512-FP16 | Complex multiply a pair of FP16 values from zmm2 and complex conjugate of zmm3/m512/m32bcst, and store the result in zmm1 subject to writemask k1.
EVEX.128.F3.MAP6.W0 D6 /r VFMULCPH xmm1{k1}{z}, xmm2, xmm3/m128/m32bcst | A | V/V | AVX512-FP16 AVX512VL | Complex multiply a pair of FP16 values from xmm2 and xmm3/m128/m32bcst, and store the result in xmm1 subject to writemask k1.
EVEX.256.F3.MAP6.W0 D6 /r VFMULCPH ymm1{k1}{z}, ymm2, ymm3/m256/m32bcst | A | V/V | AVX512-FP16 AVX512VL | Complex multiply a pair of FP16 values from ymm2 and ymm3/m256/m32bcst, and store the result in ymm1 subject to writemask k1.
EVEX.512.F3.MAP6.W0 D6 /r VFMULCPH zmm1{k1}{z}, zmm2, zmm3/m512/m32bcst {er} | A | V/V | AVX512-FP16 | Complex multiply a pair of FP16 values from zmm2 and zmm3/m512/m32bcst, and store the result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

Op/En | Tuple | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | Full | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | N/A
+

Description + ¶ +

+

This instruction performs a complex multiply operation. There are normal and complex conjugate forms of the operation. The broadcasting and masking for this operation is done on 32-bit quantities representing a pair of FP16 values.

+

Rounding is performed at every FMA (fused multiply and add) boundary. Execution occurs as if all MXCSR exceptions are masked. MXCSR status bits are updated to reflect exceptional conditions.

+
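For one complex pair the result is simply the complex product of the first source and the second source (or its conjugate in the VFCMULCPH form). A short scalar C illustration, again using float and ignoring the FP16 intermediate rounding described above (this is a sketch, not part of the reference text):

#include <stdio.h>

/* One complex element of VFMULCPH / VFCMULCPH, modeled in scalar float. */
typedef struct { float re, im; } cplx;

static cplx fmulc(cplx a, cplx b, int conjugate) {
    float bim = conjugate ? -b.im : b.im;   /* VFCMULCPH uses conj(b) */
    cplx r = { a.re * b.re - a.im * bim,
               a.im * b.re + a.re * bim };
    return r;
}

int main(void) {
    cplx a = {2.0f, 3.0f}, b = {4.0f, 5.0f};
    cplx n = fmulc(a, b, 0), c = fmulc(a, b, 1);
    printf("normal: %g%+gi  conjugate: %g%+gi\n", n.re, n.im, c.re, c.im);
    return 0;   /* normal: -7+22i  conjugate: 23+2i */
}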

Operation + ¶ +

+

VFCMULCPH dest{k1}, src1, src2 (AVX512) + ¶ +

+
VL = 128, 256 or 512
+KL := VL/32
+FOR i := 0 to KL-1:
+    IF k1[i] or *no writemask*:
+        IF broadcasting and src2 is memory:
+            tsrc2.fp16[2*i+0] := src2.fp16[0]
+            tsrc2.fp16[2*i+1] := src2.fp16[1]
+        ELSE:
+            tsrc2.fp16[2*i+0] := src2.fp16[2*i+0]
+            tsrc2.fp16[2*i+1] := src2.fp16[2*i+1]
+FOR i := 0 to KL-1:
+    IF k1[i] or *no writemask*:
+        tmp.fp16[2*i+0] := src1.fp16[2*i+0] * tsrc2.fp16[2*i+0]
+        tmp.fp16[2*i+1] := src1.fp16[2*i+1] * tsrc2.fp16[2*i+0]
+FOR i := 0 to KL-1:
+    IF k1[i] or *no writemask*:
+        // conjugate version subtracts odd final term
+        dest.fp16[2*i+0] := tmp.fp16[2*i+0] + src1.fp16[2*i+1] * tsrc2.fp16[2*i+1]
+        dest.fp16[2*i+1] := tmp.fp16[2*i+1] - src1.fp16[2*i+0] * tsrc2.fp16[2*i+1]
+    ELSE IF *zeroing*:
+        dest.fp16[2*i+0] := 0
+        dest.fp16[2*i+1] := 0
+DEST[MAXVL-1:VL] := 0
+
+

VFMULCPH dest{k1}, src1, src2 (AVX512) + ¶ +

+
VL = 128, 256 or 512
+KL := VL/32
+FOR i := 0 to KL-1:
+    IF k1[i] or *no writemask*:
+        IF broadcasting and src2 is memory:
+            tsrc2.fp16[2*i+0] := src2.fp16[0]
+            tsrc2.fp16[2*i+1] := src2.fp16[1]
+        ELSE:
+            tsrc2.fp16[2*i+0] := src2.fp16[2*i+0]
+            tsrc2.fp16[2*i+1] := src2.fp16[2*i+1]
+FOR i := 0 to KL-1:
+    IF k1[i] or *no writemask*:
+        tmp.fp16[2*i+0] := src1.fp16[2*i+0] * tsrc2.fp16[2*i+0]
+        tmp.fp16[2*i+1] := src1.fp16[2*i+1] * tsrc2.fp16[2*i+0]
+FOR i := 0 to KL-1:
+    IF k1[i] or *no writemask*:
+        // non-conjugate version subtracts last even term
+        dest.fp16[2*i+0] := tmp.fp16[2*i+0] - src1.fp16[2*i+1] * tsrc2.fp16[2*i+1]
+        dest.fp16[2*i+1] := tmp.fp16[2*i+1] + src1.fp16[2*i+0] * tsrc2.fp16[2*i+1]
+    ELSE IF *zeroing*:
+        dest.fp16[2*i+0] := 0
+        dest.fp16[2*i+1] := 0
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFCMULCPH __m128h _mm_cmul_pch (__m128h a, __m128h b);
+
+
VFCMULCPH __m128h _mm_mask_cmul_pch (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VFCMULCPH __m128h _mm_maskz_cmul_pch (__mmask8 k, __m128h a, __m128h b);
+
+
VFCMULCPH __m256h _mm256_cmul_pch (__m256h a, __m256h b);
+
+
VFCMULCPH __m256h _mm256_mask_cmul_pch (__m256h src, __mmask8 k, __m256h a, __m256h b);
+
+
VFCMULCPH __m256h _mm256_maskz_cmul_pch (__mmask8 k, __m256h a, __m256h b);
+
+
VFCMULCPH __m512h _mm512_cmul_pch (__m512h a, __m512h b);
+
+
VFCMULCPH __m512h _mm512_mask_cmul_pch (__m512h src, __mmask16 k, __m512h a, __m512h b);
+
+
VFCMULCPH __m512h _mm512_maskz_cmul_pch (__mmask16 k, __m512h a, __m512h b);
+
+
VFCMULCPH __m512h _mm512_cmul_round_pch (__m512h a, __m512h b, const int rounding);
+
+
VFCMULCPH __m512h _mm512_mask_cmul_round_pch (__m512h src, __mmask16 k, __m512h a, __m512h b, const int rounding);
+
+
VFCMULCPH __m512h _mm512_maskz_cmul_round_pch (__mmask16 k, __m512h a, __m512h b, const int rounding);
+
+
VFCMULCPH __m128h _mm_fcmul_pch (__m128h a, __m128h b);
+
+
VFCMULCPH __m128h _mm_mask_fcmul_pch (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VFCMULCPH __m128h _mm_maskz_fcmul_pch (__mmask8 k, __m128h a, __m128h b);
+
+
VFCMULCPH __m256h _mm256_fcmul_pch (__m256h a, __m256h b);
+
+
VFCMULCPH __m256h _mm256_mask_fcmul_pch (__m256h src, __mmask8 k, __m256h a, __m256h b);
+
+
VFCMULCPH __m256h _mm256_maskz_fcmul_pch (__mmask8 k, __m256h a, __m256h b);
+
+
VFCMULCPH __m512h _mm512_fcmul_pch (__m512h a, __m512h b);
+
+
VFCMULCPH __m512h _mm512_mask_fcmul_pch (__m512h src, __mmask16 k, __m512h a, __m512h b);
+
+
VFCMULCPH __m512h _mm512_maskz_fcmul_pch (__mmask16 k, __m512h a, __m512h b);
+
+
VFCMULCPH __m512h _mm512_fcmul_round_pch (__m512h a, __m512h b, const int rounding);
+
+
VFCMULCPH __m512h _mm512_mask_fcmul_round_pch (__m512h src, __mmask16 k, __m512h a, __m512h b, const int rounding);
+
+
VFCMULCPH __m512h _mm512_maskz_fcmul_round_pch (__mmask16 k, __m512h a, __m512h b, const int rounding);
+
+
VFMULCPH __m128h _mm_fmul_pch (__m128h a, __m128h b);
+
+
VFMULCPH __m128h _mm_mask_fmul_pch (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VFMULCPH __m128h _mm_maskz_fmul_pch (__mmask8 k, __m128h a, __m128h b);
+
+
VFMULCPH __m256h _mm256_fmul_pch (__m256h a, __m256h b);
+
+
VFMULCPH __m256h _mm256_mask_fmul_pch (__m256h src, __mmask8 k, __m256h a, __m256h b);
+
+
VFMULCPH __m256h _mm256_maskz_fmul_pch (__mmask8 k, __m256h a, __m256h b);
+
+
VFMULCPH __m512h _mm512_fmul_pch (__m512h a, __m512h b);
+
+
VFMULCPH __m512h _mm512_mask_fmul_pch (__m512h src, __mmask16 k, __m512h a, __m512h b);
+
+
VFMULCPH __m512h _mm512_maskz_fmul_pch (__mmask16 k, __m512h a, __m512h b);
+
+
VFMULCPH __m512h _mm512_fmul_round_pch (__m512h a, __m512h b, const int rounding);
+
+
VFMULCPH __m512h _mm512_mask_fmul_round_pch (__m512h src, __mmask16 k, __m512h a, __m512h b, const int rounding);
+
+
VFMULCPH __m512h _mm512_maskz_fmul_round_pch (__mmask16 k, __m512h a, __m512h b, const int rounding);
+
+
VFMULCPH __m128h _mm_mask_mul_pch (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VFMULCPH __m128h _mm_maskz_mul_pch (__mmask8 k, __m128h a, __m128h b);
+
+
VFMULCPH __m128h _mm_mul_pch (__m128h a, __m128h b);
+
+
VFMULCPH __m256h _mm256_mask_mul_pch (__m256h src, __mmask8 k, __m256h a, __m256h b);
+
+
VFMULCPH __m256h _mm256_maskz_mul_pch (__mmask8 k, __m256h a, __m256h b);
+
+
VFMULCPH __m256h _mm256_mul_pch (__m256h a, __m256h b);
+
+
VFMULCPH __m512h _mm512_mask_mul_pch (__m512h src, __mmask16 k, __m512h a, __m512h b);
+
+
VFMULCPH __m512h _mm512_maskz_mul_pch (__mmask16 k, __m512h a, __m512h b);
+
+
VFMULCPH __m512h _mm512_mul_pch (__m512h a, __m512h b);
+
+
VFMULCPH __m512h _mm512_mask_mul_round_pch (__m512h src, __mmask16 k, __m512h a, __m512h b, const int rounding);
+
+
VFMULCPH __m512h _mm512_maskz_mul_round_pch (__mmask16 k, __m512h a, __m512h b, const int rounding);
+
+
VFMULCPH __m512h _mm512_mul_round_pch (__m512h a, __m512h b, const int rounding);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Overflow, Precision, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-49, “Type E4 Class Exception Conditions.”

+

Additionally:

#UD | If (dest_reg == src1_reg) or (dest_reg == src2_reg).
diff --git a/x86/vfcmulcsh.vfmulcsh.html b/x86/vfcmulcsh.vfmulcsh.html new file mode 100644 index 0000000..5a28412 --- /dev/null +++ b/x86/vfcmulcsh.vfmulcsh.html @@ -0,0 +1,154 @@ + +VFCMULCSH/VFMULCSH + — Complex Multiply Scalar FP16 Values

VFCMULCSH/VFMULCSH + — Complex Multiply Scalar FP16 Values

Opcode/Instruction | Op/En | 64/32 Bit Mode Support | CPUID Feature Flag | Description
EVEX.LLIG.F2.MAP6.W0 D7 /r VFCMULCSH xmm1{k1}{z}, xmm2, xmm3/m32 {er} | A | V/V | AVX512-FP16 | Complex multiply a pair of FP16 values from xmm2 and complex conjugate of xmm3/m32, and store the result in xmm1 subject to writemask k1. Bits 127:32 of xmm2 are copied to xmm1[127:32].
EVEX.LLIG.F3.MAP6.W0 D7 /r VFMULCSH xmm1{k1}{z}, xmm2, xmm3/m32 {er} | A | V/V | AVX512-FP16 | Complex multiply a pair of FP16 values from xmm2 and xmm3/m32, and store the result in xmm1 subject to writemask k1. Bits 127:32 of xmm2 are copied to xmm1[127:32].
+

Instruction Operand Encoding + ¶ +

Op/En | Tuple | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | Scalar | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | N/A
+

Description + ¶ +

+

This instruction performs a complex multiply operation. There are normal and complex conjugate forms of the operation. The masking for this operation is done on 32-bit quantities representing a pair of FP16 values.

+

Bits 127:32 of the destination operand are copied from the corresponding bits of the first source operand. Bits MAXVL-1:128 of the destination operand are zeroed. The low FP16 element of the destination is updated according to the writemask.

+

Rounding is performed at every FMA (fused multiply and add) boundary. Execution occurs as if all MXCSR exceptions are masked. MXCSR status bits are updated to reflect exceptional conditions.

+

Operation + ¶ +

+

VFCMULCSH dest{k1}, src1, src2 (AVX512) + ¶ +

+
KL := VL / 32
+IF k1[0] or *no writemask*:
+    tmp.fp16[0] := src1.fp16[0] * src2.fp16[0]
+    tmp.fp16[1] := src1.fp16[1] * src2.fp16[0]
+    // conjugate version subtracts odd final term
+    dest.fp16[0] := tmp.fp16[0] + src1.fp16[1] * src2.fp16[1]
+    dest.fp16[1] := tmp.fp16[1] - src1.fp16[0] * src2.fp16[1]
+ELSE IF *zeroing*:
+    dest.fp16[0] := 0
+    dest.fp16[1] := 0
+DEST[127:32] := src1[127:32] // copy upper part of src1
+DEST[MAXVL-1:128] := 0
+
+

VFMULCSH dest{k1}, src1, src2 (AVX512) + ¶ +

+
KL := VL / 32
+IF k1[0] or *no writemask*:
+    // non-conjugate version subtracts last even term
+    tmp.fp16[0] := src1.fp16[0] * src2.fp16[0]
+    tmp.fp16[1] := src1.fp16[1] * src2.fp16[0]
+    dest.fp16[0] := tmp.fp16[0] - src1.fp16[1] * src2.fp16[1]
+    dest.fp16[1] := tmp.fp16[1] + src1.fp16[0] * src2.fp16[1]
+ELSE IF *zeroing*:
+    dest.fp16[0] := 0
+    dest.fp16[1] := 0
+DEST[127:32] := src1[127:32] // copy upper part of src1
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFCMULCSH __m128h _mm_cmul_round_sch (__m128h a, __m128h b, const int rounding);
+
+
VFCMULCSH __m128h _mm_mask_cmul_round_sch (__m128h src, __mmask8 k, __m128h a, __m128h b, const int rounding);
+
+
VFCMULCSH __m128h _mm_maskz_cmul_round_sch (__mmask8 k, __m128h a, __m128h b, const int rounding);
+
+
VFCMULCSH __m128h _mm_cmul_sch (__m128h a, __m128h b);
+
+
VFCMULCSH __m128h _mm_mask_cmul_sch (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VFCMULCSH __m128h _mm_maskz_cmul_sch (__mmask8 k, __m128h a, __m128h b);
+
+
VFCMULCSH __m128h _mm_fcmul_round_sch (__m128h a, __m128h b, const int rounding);
+
+
VFCMULCSH __m128h _mm_mask_fcmul_round_sch (__m128h src, __mmask8 k, __m128h a, __m128h b, const int rounding);
+
+
VFCMULCSH __m128h _mm_maskz_fcmul_round_sch (__mmask8 k, __m128h a, __m128h b, const int rounding);
+
+
VFCMULCSH __m128h _mm_fcmul_sch (__m128h a, __m128h b);
+
+
VFCMULCSH __m128h _mm_mask_fcmul_sch (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VFCMULCSH __m128h _mm_maskz_fcmul_sch (__mmask8 k, __m128h a, __m128h b);
+
+
VFMULCSH __m128h _mm_fmul_round_sch (__m128h a, __m128h b, const int rounding);
+
+
VFMULCSH __m128h _mm_mask_fmul_round_sch (__m128h src, __mmask8 k, __m128h a, __m128h b, const int rounding);
+
+
VFMULCSH __m128h _mm_maskz_fmul_round_sch (__mmask8 k, __m128h a, __m128h b, const int rounding);
+
+
VFMULCSH __m128h _mm_fmul_sch (__m128h a, __m128h b);
+
+
VFMULCSH __m128h _mm_mask_fmul_sch (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VFMULCSH __m128h _mm_maskz_fmul_sch (__mmask8 k, __m128h a, __m128h b);
+
+
VFMULCSH __m128h _mm_mask_mul_round_sch (__m128h src, __mmask8 k, __m128h a, __m128h b, const int rounding);
+
+
VFMULCSH __m128h _mm_maskz_mul_round_sch (__mmask8 k, __m128h a, __m128h b, const int rounding);
+
+
VFMULCSH __m128h _mm_mul_round_sch (__m128h a, __m128h b, const int rounding);
+
+
VFMULCSH __m128h _mm_mask_mul_sch (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VFMULCSH __m128h _mm_maskz_mul_sch (__mmask8 k, __m128h a, __m128h b);
+
+
VFMULCSH __m128h _mm_mul_sch (__m128h a, __m128h b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Overflow, Precision, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-58, “Type E10 Class Exception Conditions.”

+

Additionally:

#UD | If (dest_reg == src1_reg) or (dest_reg == src2_reg).
diff --git a/x86/vfixupimmpd.html b/x86/vfixupimmpd.html new file mode 100644 index 0000000..3cfebf0 --- /dev/null +++ b/x86/vfixupimmpd.html @@ -0,0 +1,252 @@ + +VFIXUPIMMPD + — Fix Up Special Packed Float64 Values

VFIXUPIMMPD + — Fix Up Special Packed Float64 Values

Opcode/Instruction | Op/En | 64/32 Bit Mode Support | CPUID Feature Flag | Description
EVEX.128.66.0F3A.W1 54 /r ib VFIXUPIMMPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst, imm8 | A | V/V | AVX512VL AVX512F | Fix up special numbers in float64 vector xmm1, float64 vector xmm2 and int64 vector xmm3/m128/m64bcst and store the result in xmm1, under writemask.
EVEX.256.66.0F3A.W1 54 /r ib VFIXUPIMMPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst, imm8 | A | V/V | AVX512VL AVX512F | Fix up special numbers in float64 vector ymm1, float64 vector ymm2 and int64 vector ymm3/m256/m64bcst and store the result in ymm1, under writemask.
EVEX.512.66.0F3A.W1 54 /r ib VFIXUPIMMPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{sae}, imm8 | A | V/V | AVX512F | Fix up elements of float64 vector in zmm2 using int64 vector table in zmm3/m512/m64bcst, combine with preserved elements from zmm1, and store the result in zmm1.
+

Instruction Operand Encoding + ¶ +

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | Full | ModRM:reg (r, w) | EVEX.vvvv (r) | ModRM:r/m (r) | imm8
+

Description + ¶ +

+

Perform fix-up of quad-word elements encoded in double precision floating-point format in the first source operand (the second operand) using a 32-bit, two-level look-up table specified in the corresponding quadword element of the second source operand (the third operand) with exception reporting specifier imm8. The elements that are fixed-up are selected by mask bits of 1 specified in the opmask k1. Mask bits of 0 in the opmask k1 or table response action of 0000b preserves the corresponding element of the first operand. The fixed-up elements from the first source operand and the preserved element in the first operand are combined as the final results in the destination operand (the first operand).

+

The destination and the first source operands are ZMM/YMM/XMM registers. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 64-bit memory location.

+

The two-level look-up table performs a fix-up of each double precision floating-point input in the first source operand by decoding the input data encoding into 8 token types. A response table is defined for each token type that converts the input encoding in the first source operand to one of 16 response actions.

+

This instruction is specifically intended for use in fixing up the results of arithmetic calculations involving one source so that they match the spec, although it is generally useful for fixing up the results of multiple-instruction sequences to reflect special-number inputs. For example, consider rcp(0). Input 0 to rcp, and you should get INF according to the DX10 spec. However, evaluating rcp via Newton-Raphson, where x=approx(1/0), yields an incorrect result. To deal with this, VFIXUPIMMPD can be used after the N-R reciprocal sequence to set the result to the correct value (i.e., INF when the input is 0).

+

If MXCSR.DAZ is not set, denormal input elements in the first source operand are considered as normal inputs and do not trigger any fixup nor fault reporting.

+

Imm8 is used to set the required flags reporting. It supports #ZE and #IE fault reporting (see details below).

+

MXCSR mask bits are ignored and are treated as if all mask bits are set to masked response. If any of the imm8 bits is set and the condition is met for fault reporting, MXCSR.IE or MXCSR.ZE might be updated.

+

This instruction is writemasked, so only those elements with the corresponding bit set in vector mask register k1 are computed and stored into zmm1. Elements in the destination with the corresponding bit clear in k1 retain their previous values or are set to 0.

+
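As a hedged illustration of the rcp(0) scenario above, the sketch below builds a per-element response table that maps the ZERO_VALUE_TOKEN (token 2) to the +INF action (0101b) and leaves every other token at 0000b (keep the destination value), then applies the 128-bit intrinsic listed later on this page. It assumes an AVX512F/AVX512VL-capable compiler and <immintrin.h>; the input values are arbitrary examples, not from the reference text.

#include <immintrin.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    /* 'input' is the original operand; 'approx' stands in for the result of a
       multi-instruction sequence (e.g., a Newton-Raphson reciprocal) that
       produced a wrong value for the zero element. */
    __m128d input  = _mm_set_pd(2.0, 0.0);   /* element 0 is 0.0 */
    __m128d approx = _mm_set_pd(0.5, NAN);   /* element 0 should have been +INF */

    /* token_response[3:0] for token j lives at table bits [4*j+3:4*j];
       ZERO_VALUE_TOKEN is j = 2, and response 0101b selects +INF. */
    __m128i tbl = _mm_set1_epi64x(0x5 << 8);

    /* imm8 = 0: no #ZE/#IE fault reporting requested. */
    __m128d fixed = _mm_fixupimm_pd(approx, input, tbl, 0);

    double out[2];
    _mm_storeu_pd(out, fixed);
    printf("%g %g\n", out[0], out[1]);  /* prints: inf 0.5 */
    return 0;
}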

Operation + ¶ +

+
enum TOKEN_TYPE
+{
+    QNAN_TOKEN := 0,
+    SNAN_TOKEN := 1,
+    ZERO_VALUE_TOKEN := 2,
+    POS_ONE_VALUE_TOKEN := 3,
+    NEG_INF_TOKEN := 4,
+    POS_INF_TOKEN := 5,
+    NEG_VALUE_TOKEN := 6,
+    POS_VALUE_TOKEN := 7
+}
+FIXUPIMM_DP (dest[63:0], src1[63:0],tbl3[63:0], imm8 [7:0]){
+    tsrc[63:0] := ((src1[62:52] = 0) AND (MXCSR.DAZ =1)) ? 0.0 : src1[63:0]
+    CASE(tsrc[63:0] of TOKEN_TYPE) {
+        QNAN_TOKEN: j := 0;
+        SNAN_TOKEN: j := 1;
+        ZERO_VALUE_TOKEN: j := 2;
+        POS_ONE_VALUE_TOKEN: j := 3;
+        NEG_INF_TOKEN: j := 4;
+        POS_INF_TOKEN: j := 5;
+        NEG_VALUE_TOKEN: j := 6;
+        POS_VALUE_TOKEN: j := 7;
+    } ; end source special CASE(tsrc...)
+    ; The required response from src3 table is extracted
+    token_response[3:0] = tbl3[3+4*j:4*j];
+    CASE(token_response[3:0]) {
+        0000: dest[63:0] := dest[63:0];
+                ; preserve content of DEST
+        0001: dest[63:0] := tsrc[63:0];
+                ; pass through src1 normal input value, denormal as zero
+        0010: dest[63:0] := QNaN(tsrc[63:0]);
+        0011: dest[63:0] := QNAN_Indefinite;
+        0100: dest[63:0] := -INF;
+        0101: dest[63:0] := +INF;
+        0110: dest[63:0] := tsrc.sign? -INF : +INF;
+        0111: dest[63:0] := -0;
+        1000: dest[63:0] := +0;
+        1001: dest[63:0] := -1;
+        1010: dest[63:0] := +1;
+        1011: dest[63:0] := 1⁄2;
+        1100: dest[63:0] := 90.0;
+        1101: dest[63:0] := PI/2;
+        1110: dest[63:0] := MAX_FLOAT;
+        1111: dest[63:0] := -MAX_FLOAT;
+    }
+            ; end of token_response CASE
+    ; The required fault reporting from imm8 is extracted
+    ; TOKENs are mutually exclusive and TOKENs priority defines the order.
+    ; Multiple faults related to a single token can occur simultaneously.
+    IF (tsrc[63:0] of TOKEN_TYPE: ZERO_VALUE_TOKEN) AND imm8[0] then set #ZE;
+    IF (tsrc[63:0] of TOKEN_TYPE: ZERO_VALUE_TOKEN) AND imm8[1] then set #IE;
+    IF (tsrc[63:0] of TOKEN_TYPE: ONE_VALUE_TOKEN) AND imm8[2] then set #ZE;
+    IF (tsrc[63:0] of TOKEN_TYPE: ONE_VALUE_TOKEN) AND imm8[3] then set #IE;
+    IF (tsrc[63:0] of TOKEN_TYPE: SNAN_TOKEN) AND imm8[4] then set #IE;
+    IF (tsrc[63:0] of TOKEN_TYPE: NEG_INF_TOKEN) AND imm8[5] then set #IE;
+    IF (tsrc[63:0] of TOKEN_TYPE: NEG_VALUE_TOKEN) AND imm8[6] then set #IE;
+    IF (tsrc[63:0] of TOKEN_TYPE: POS_INF_TOKEN) AND imm8[7] then set #IE;
+        ; end fault reporting
+    return dest[63:0];
+}
+        ; end of FIXUPIMM_DP()
+
+

VFIXUPIMMPD + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN
+                    DEST[i+63:i] := FIXUPIMM_DP(DEST[i+63:i], SRC1[i+63:i], SRC2[63:0], imm8 [7:0])
+                ELSE
+                    DEST[i+63:i] := FIXUPIMM_DP(DEST[i+63:i], SRC1[i+63:i], SRC2[i+63:i], imm8 [7:0])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE DEST[i+63:i] := 0
+                        ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+Immediate Control Description:
+
+
imm8[7]: +INF #IE
imm8[6]: -VE #IE
imm8[5]: -INF #IE
imm8[4]: SNaN #IE
imm8[3]: ONE #IE
imm8[2]: ONE #ZE
imm8[1]: ZERO #IE
imm8[0]: ZERO #ZE
Figure 5-9. VFIXUPIMMPD Immediate Control Description
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFIXUPIMMPD __m512d _mm512_fixupimm_pd( __m512d a, __m512i tbl, int imm);
+
+
VFIXUPIMMPD __m512d _mm512_mask_fixupimm_pd(__m512d s, __mmask8 k, __m512d a, __m512i tbl, int imm);
+
+
VFIXUPIMMPD __m512d _mm512_maskz_fixupimm_pd( __mmask8 k, __m512d a, __m512i tbl, int imm);
+
+
VFIXUPIMMPD __m512d _mm512_fixupimm_round_pd( __m512d a, __m512i tbl, int imm, int sae);
+
+
VFIXUPIMMPD __m512d _mm512_mask_fixupimm_round_pd(__m512d s, __mmask8 k, __m512d a, __m512i tbl, int imm, int sae);
+
+
VFIXUPIMMPD __m512d _mm512_maskz_fixupimm_round_pd( __mmask8 k, __m512d a, __m512i tbl, int imm, int sae);
+
+
VFIXUPIMMPD __m256d _mm256_fixupimm_pd( __m256d a, __m256d b, __m256i c, int imm8);
+
+
VFIXUPIMMPD __m256d _mm256_mask_fixupimm_pd(__m256d a, __mmask8 k, __m256d b, __m256i c, int imm8);
+
+
VFIXUPIMMPD __m256d _mm256_maskz_fixupimm_pd( __mmask8 k, __m256d a, __m256d b, __m256i c, int imm8);
+
+
VFIXUPIMMPD __m128d _mm_fixupimm_pd( __m128d a, __m128d b, __m128i c, int imm8);
+
+
VFIXUPIMMPD __m128d _mm_mask_fixupimm_pd(__m128d a, __mmask8 k, __m128d b, __m128i c, int imm8);
+
+
VFIXUPIMMPD __m128d _mm_maskz_fixupimm_pd( __mmask8 k, __m128d a, __m128d b, __m128i c, int imm8);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Zero, Invalid.

+

Other Exceptions + ¶ +

+

See Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vfixupimmps.html b/x86/vfixupimmps.html new file mode 100644 index 0000000..56350f4 --- /dev/null +++ b/x86/vfixupimmps.html @@ -0,0 +1,251 @@ + +VFIXUPIMMPS + — Fix Up Special Packed Float32 Values

VFIXUPIMMPS + — Fix Up Special Packed Float32 Values

Opcode/Instruction | Op/En | 64/32 Bit Mode Support | CPUID Feature Flag | Description
EVEX.128.66.0F3A.W0 54 /r VFIXUPIMMPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst, imm8 | A | V/V | AVX512VL AVX512F | Fix up special numbers in float32 vector xmm1, float32 vector xmm2 and int32 vector xmm3/m128/m32bcst and store the result in xmm1, under writemask.
EVEX.256.66.0F3A.W0 54 /r VFIXUPIMMPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst, imm8 | A | V/V | AVX512VL AVX512F | Fix up special numbers in float32 vector ymm1, float32 vector ymm2 and int32 vector ymm3/m256/m32bcst and store the result in ymm1, under writemask.
EVEX.512.66.0F3A.W0 54 /r ib VFIXUPIMMPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{sae}, imm8 | A | V/V | AVX512F | Fix up elements of float32 vector in zmm2 using int32 vector table in zmm3/m512/m32bcst, combine with preserved elements from zmm1, and store the result in zmm1.
+

Instruction Operand Encoding + ¶ +

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | Full | ModRM:reg (r, w) | EVEX.vvvv (r) | ModRM:r/m (r) | imm8
+

Description + ¶ +

+

Perform fix-up of doubleword elements encoded in single precision floating-point format in the first source operand (the second operand) using a 32-bit, two-level look-up table specified in the corresponding doubleword element of the second source operand (the third operand) with exception reporting specifier imm8. The elements that are fixed-up are selected by mask bits of 1 specified in the opmask k1. Mask bits of 0 in the opmask k1 or table response action of 0000b preserves the corresponding element of the first operand. The fixed-up elements from the first source operand and the preserved element in the first operand are combined as the final results in the destination operand (the first operand).

+

The destination and the first source operands are ZMM/YMM/XMM registers. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 64-bit memory location.

+

The two-level look-up table performs a fix-up of each single precision floating-point input in the first source operand by decoding the input data encoding into 8 token types. A response table is defined for each token type that converts the input encoding in the first source operand to one of 16 response actions.

+

This instruction is specifically intended for use in fixing up the results of arithmetic calculations involving one source so that they match the spec, although it is generally useful for fixing up the results of multiple-instruction sequences to reflect special-number inputs. For example, consider rcp(0). Input 0 to rcp, and you should get INF according to the DX10 spec. However, evaluating rcp via Newton-Raphson, where x=approx(1/0), yields an incorrect result. To deal with this, VFIXUPIMMPS can be used after the N-R reciprocal sequence to set the result to the correct value (i.e., INF when the input is 0).

+

If MXCSR.DAZ is not set, denormal input elements in the first source operand are considered as normal inputs and do not trigger any fixup nor fault reporting.

+

Imm8 is used to set the required flags reporting. It supports #ZE and #IE fault reporting (see details below).

+

MXCSR.DAZ is used and refers to zmm2 only (i.e., zmm1 is not considered as zero when MXCSR.DAZ is set).

+

MXCSR mask bits are ignored and are treated as if all mask bits are set to masked response. If any of the imm8 bits is set and the condition is met for fault reporting, MXCSR.IE or MXCSR.ZE might be updated.

+
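The imm8 fault-reporting bits (summarized in the immediate control description later on this page) can be composed with simple masks. The C fragment below is only a naming sketch; the macro names are made up here for illustration and are not defined by the reference or any header.

/* Illustrative names (not from the reference): imm8 bit assignments for
   VFIXUPIMMPS/VFIXUPIMMPD fault reporting, per the immediate control description. */
#define FIXUP_ZERO_ZE  (1 << 0)   /* input 0.0      -> report #ZE */
#define FIXUP_ZERO_IE  (1 << 1)   /* input 0.0      -> report #IE */
#define FIXUP_ONE_ZE   (1 << 2)   /* input +1.0     -> report #ZE */
#define FIXUP_ONE_IE   (1 << 3)   /* input +1.0     -> report #IE */
#define FIXUP_SNAN_IE  (1 << 4)   /* SNaN input     -> report #IE */
#define FIXUP_NINF_IE  (1 << 5)   /* -INF input     -> report #IE */
#define FIXUP_NEG_IE   (1 << 6)   /* negative input -> report #IE */
#define FIXUP_PINF_IE  (1 << 7)   /* +INF input     -> report #IE */

/* Example: request #ZE on zero inputs and #IE on SNaN inputs. */
enum { EXAMPLE_IMM8 = FIXUP_ZERO_ZE | FIXUP_SNAN_IE };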

Operation + ¶ +

+
enum TOKEN_TYPE
+{
+    QNAN_TOKEN := 0,
+    SNAN_TOKEN := 1,
+    ZERO_VALUE_TOKEN := 2,
+    POS_ONE_VALUE_TOKEN := 3,
+    NEG_INF_TOKEN := 4,
+    POS_INF_TOKEN := 5,
+    NEG_VALUE_TOKEN := 6,
+    POS_VALUE_TOKEN := 7
+}
+FIXUPIMM_SP ( dest[31:0], src1[31:0],tbl3[31:0], imm8 [7:0]){
+    tsrc[31:0] := ((src1[30:23] = 0) AND (MXCSR.DAZ =1)) ? 0.0 : src1[31:0]
+    CASE(tsrc[31:0] of TOKEN_TYPE) {
+        QNAN_TOKEN: j := 0;
+        SNAN_TOKEN: j := 1;
+        ZERO_VALUE_TOKEN: j := 2;
+        POS_ONE_VALUE_TOKEN: j := 3;
+        NEG_INF_TOKEN: j := 4;
+        POS_INF_TOKEN: j := 5;
+        NEG_VALUE_TOKEN: j := 6;
+        POS_VALUE_TOKEN: j := 7;
+    }
+            ; end source special CASE(tsrc...)
+    ; The required response from src3 table is extracted
+    token_response[3:0] = tbl3[3+4*j:4*j];
+    CASE(token_response[3:0]) {
+        0000: dest[31:0] := dest[31:0];
+        0001: dest[31:0] := tsrc[31:0];
+        0010: dest[31:0] := QNaN(tsrc[31:0]);
+        0011: dest[31:0] := QNAN_Indefinite;
+        0100: dest[31:0] := -INF;
+        0101: dest[31:0] := +INF;
+        0110: dest[31:0] := tsrc.sign? -INF : +INF;
+        0111: dest[31:0] := -0;
+        1000: dest[31:0] := +0;
+        1001: dest[31:0] := -1;
+        1010: dest[31:0] := +1;
+        1011: dest[31:0] := 1⁄2;
+        1100: dest[31:0] := 90.0;
+        1101: dest[31:0] := PI/2;
+        1110: dest[31:0] := MAX_FLOAT;
+        1111: dest[31:0] := -MAX_FLOAT;
+    }
+            ; end of token_response CASE
+    ; The required fault reporting from imm8 is extracted
+    ; TOKENs are mutually exclusive and TOKENs priority defines the order.
+    ; Multiple faults related to a single token can occur simultaneously.
+    IF (tsrc[31:0] of TOKEN_TYPE: ZERO_VALUE_TOKEN) AND imm8[0] then set #ZE;
+    IF (tsrc[31:0] of TOKEN_TYPE: ZERO_VALUE_TOKEN) AND imm8[1] then set #IE;
+    IF (tsrc[31:0] of TOKEN_TYPE: ONE_VALUE_TOKEN) AND imm8[2] then set #ZE;
+    IF (tsrc[31:0] of TOKEN_TYPE: ONE_VALUE_TOKEN) AND imm8[3] then set #IE;
+    IF (tsrc[31:0] of TOKEN_TYPE: SNAN_TOKEN) AND imm8[4] then set #IE;
+    IF (tsrc[31:0] of TOKEN_TYPE: NEG_INF_TOKEN) AND imm8[5] then set #IE;
+    IF (tsrc[31:0] of TOKEN_TYPE: NEG_VALUE_TOKEN) AND imm8[6] then set #IE;
+    IF (tsrc[31:0] of TOKEN_TYPE: POS_INF_TOKEN) AND imm8[7] then set #IE;
+        ; end fault reporting
+    return dest[31:0];
+}
+        ; end of FIXUPIMM_SP()
+
+

VFIXUPIMMPS (EVEX) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN
+                    DEST[i+31:i] := FIXUPIMM_SP(DEST[i+31:i], SRC1[i+31:i], SRC2[31:0], imm8 [7:0])
+                ELSE
+                    DEST[i+31:i] := FIXUPIMM_SP(DEST[i+31:i], SRC1[i+31:i], SRC2[i+31:i], imm8 [7:0])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE DEST[i+31:i] := 0
+                        ; zeroing-masking
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+Immediate Control Description:
+
+
imm8[7]: +INF #IE
imm8[6]: -VE #IE
imm8[5]: -INF #IE
imm8[4]: SNaN #IE
imm8[3]: ONE #IE
imm8[2]: ONE #ZE
imm8[1]: ZERO #IE
imm8[0]: ZERO #ZE
Figure 5-10. VFIXUPIMMPS Immediate Control Description
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFIXUPIMMPS __m512 _mm512_fixupimm_ps( __m512 a, __m512i tbl, int imm);
+
+
VFIXUPIMMPS __m512 _mm512_mask_fixupimm_ps(__m512 s, __mmask16 k, __m512 a, __m512i tbl, int imm);
+
+
VFIXUPIMMPS __m512 _mm512_maskz_fixupimm_ps( __mmask16 k, __m512 a, __m512i tbl, int imm);
+
+
VFIXUPIMMPS __m512 _mm512_fixupimm_round_ps( __m512 a, __m512i tbl, int imm, int sae);
+
+
VFIXUPIMMPS __m512 _mm512_mask_fixupimm_round_ps(__m512 s, __mmask16 k, __m512 a, __m512i tbl, int imm, int sae);
+
+
VFIXUPIMMPS __m512 _mm512_maskz_fixupimm_round_ps( __mmask16 k, __m512 a, __m512i tbl, int imm, int sae);
+
+
VFIXUPIMMPS __m256 _mm256_fixupimm_ps( __m256 a, __m256 b, __m256i c, int imm8);
+
+
VFIXUPIMMPS __m256 _mm256_mask_fixupimm_ps(__m256 a, __mmask8 k, __m256 b, __m256i c, int imm8);
+
+
VFIXUPIMMPS __m256 _mm256_maskz_fixupimm_ps( __mmask8 k, __m256 a, __m256 b, __m256i c, int imm8);
+
+
VFIXUPIMMPS __m128 _mm_fixupimm_ps( __m128 a, __m128 b, __m128i c, int imm8);
+
+
VFIXUPIMMPS __m128 _mm_mask_fixupimm_ps(__m128 a, __mmask8 k, __m128 b, __m128i c, int imm8);
+
+
VFIXUPIMMPS __m128 _mm_maskz_fixupimm_ps( __mmask8 k, __m128 a, __m128 b, __m128i c, int imm8);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Zero, Invalid.

+

Other Exceptions + ¶ +

+

See Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vfixupimmsd.html b/x86/vfixupimmsd.html new file mode 100644 index 0000000..d80e42e --- /dev/null +++ b/x86/vfixupimmsd.html @@ -0,0 +1,217 @@ + +VFIXUPIMMSD + — Fix Up Special Scalar Float64 Value

VFIXUPIMMSD + — Fix Up Special Scalar Float64 Value

Opcode/Instruction | Op/En | 64/32 Bit Mode Support | CPUID Feature Flag | Description
EVEX.LLIG.66.0F3A.W1 55 /r ib VFIXUPIMMSD xmm1 {k1}{z}, xmm2, xmm3/m64{sae}, imm8 | A | V/V | AVX512F | Fix up a float64 number in the low quadword element of xmm2 using scalar int32 table in xmm3/m64 and store the result in xmm1.
+

Instruction Operand Encoding + ¶ +

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | Tuple1 Scalar | ModRM:reg (r, w) | EVEX.vvvv (r) | ModRM:r/m (r) | imm8
+

Description + ¶ +

+

Perform a fix-up of the low quadword element encoded in double precision floating-point format in the first source operand (the second operand) using a 32-bit, two-level look-up table specified in the low quadword element of the second source operand (the third operand) with exception reporting specifier imm8. The element that is fixed-up is selected by mask bit of 1 specified in the opmask k1. Mask bit of 0 in the opmask k1 or table response action of 0000b preserves the corresponding element of the first operand. The fixed-up element from the first source operand or the preserved element in the first operand becomes the low quadword element of the destination operand (the first operand). Bits 127:64 of the destination operand are copied from the corresponding bits of the first source operand. The destination and first source operands are XMM registers. The second source operand can be an XMM register or a 64-bit memory location.

+

The two-level look-up table performs a fix-up of the double precision floating-point input data in the first source operand by decoding the input data encoding into 8 token types. A response table is defined for each token type that converts the input encoding in the first source operand into one of 16 response actions.

+

This instruction is specifically intended for use in fixing up the results of arithmetic calculations involving one source so that they match the spec, although it is generally useful for fixing up the results of multiple-instruction sequences to reflect special-number inputs. For example, consider rcp(0). Input 0 to rcp, and you should get INF according to the DX10 spec. However, evaluating rcp via Newton-Raphson, where x=approx(1/0), yields an incorrect result. To deal with this, VFIXUPIMMSD can be used after the N-R reciprocal sequence to set the result to the correct value (i.e., INF when the input is 0).

+

If MXCSR.DAZ is not set, denormal input elements in the first source operand are considered as normal inputs and do not trigger any fixup nor fault reporting.

+

Imm8 is used to set the required flags reporting. It supports #ZE and #IE fault reporting (see details below).

+

MXCSR.DAZ is used and refers to xmm2 only (i.e., xmm1 is not considered as zero in case MXCSR.DAZ is set).

+

MXCSR mask bits are ignored (treated as if all mask bits are set to masked response). If any of the imm8 bits is set and the condition for fault reporting is met, MXCSR.IE or MXCSR.ZE might be updated.
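
A minimal C sketch (illustrative, not part of the reference) of how the 32-bit table operand described above could be constructed from the token and response encodings given in the Operation section below; the identifier names are assumptions, and the resulting constant would be placed in the low quadword element of the second source operand:

#include <stdint.h>

/* Token indices, matching the TOKEN_TYPE enumeration in the Operation section below. */
enum {
    QNAN_TOKEN = 0, SNAN_TOKEN = 1, ZERO_VALUE_TOKEN = 2, POS_ONE_VALUE_TOKEN = 3,
    NEG_INF_TOKEN = 4, POS_INF_TOKEN = 5, NEG_VALUE_TOKEN = 6, POS_VALUE_TOKEN = 7
};

/* Token j selects the 4-bit response in tbl[4*j+3:4*j].  Responses used here:
 * 0101b = +INF, 0010b = QNaN(src); every other nibble stays 0000b (preserve dest). */
static const uint32_t RCP_FIXUP_TABLE =
    (0x5u << (4 * ZERO_VALUE_TOKEN)) |   /* zero input -> +INF */
    (0x2u << (4 * SNAN_TOKEN));          /* SNaN input -> QNaN */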

+

Operation + ¶ +

+
enum TOKEN_TYPE
+{
+    QNAN_TOKEN := 0,
+    SNAN_TOKEN := 1,
+    ZERO_VALUE_TOKEN := 2,
+    POS_ONE_VALUE_TOKEN := 3,
+    NEG_INF_TOKEN := 4,
+    POS_INF_TOKEN := 5,
+    NEG_VALUE_TOKEN := 6,
+    POS_VALUE_TOKEN := 7
+}
+FIXUPIMM_DP (dest[63:0], src1[63:0],tbl3[63:0], imm8 [7:0]){
+    tsrc[63:0] := ((src1[62:52] = 0) AND (MXCSR.DAZ =1)) ? 0.0 : src1[63:0]
+    CASE(tsrc[63:0] of TOKEN_TYPE) {
+        QNAN_TOKEN: j := 0;
+        SNAN_TOKEN: j := 1;
+        ZERO_VALUE_TOKEN: j := 2;
+        POS_ONE_VALUE_TOKEN: j := 3;
+        NEG_INF_TOKEN: j := 4;
+        POS_INF_TOKEN: j := 5;
+        NEG_VALUE_TOKEN: j := 6;
+        POS_VALUE_TOKEN: j := 7;
+    }
+            ; end source special CASE(tsrc...)
+    ; The required response from src3 table is extracted
+    token_response[3:0] = tbl3[3+4*j:4*j];
+    CASE(token_response[3:0]) {
+        0000: dest[63:0] := dest[63:0];
+        0001: dest[63:0] := tsrc[63:0];
+        0010: dest[63:0] := QNaN(tsrc[63:0]);
+        0011: dest[63:0] := QNAN_Indefinite;
+        0100: dest[63:0] := -INF;
+        0101: dest[63:0] := +INF;
+        0110: dest[63:0] := tsrc.sign? -INF : +INF;
+        0111: dest[63:0] := -0;
+        1000: dest[63:0] := +0;
+        1001: dest[63:0] := -1;
+        1010: dest[63:0] := +1;
+        1011: dest[63:0] := 1⁄2;
+        1100: dest[63:0] := 90.0;
+        1101: dest[63:0] := PI/2;
+        1110: dest[63:0] := MAX_FLOAT;
+        1111: dest[63:0] := -MAX_FLOAT;
+    }
+            ; end of token_response CASE
+    ; The required fault reporting from imm8 is extracted
+    ; TOKENs are mutually exclusive and TOKENs priority defines the order.
+    ; Multiple faults related to a single token can occur simultaneously.
+    IF (tsrc[63:0] of TOKEN_TYPE: ZERO_VALUE_TOKEN) AND imm8[0] then set #ZE;
+    IF (tsrc[63:0] of TOKEN_TYPE: ZERO_VALUE_TOKEN) AND imm8[1] then set #IE;
+    IF (tsrc[63:0] of TOKEN_TYPE: POS_ONE_VALUE_TOKEN) AND imm8[2] then set #ZE;
+    IF (tsrc[63:0] of TOKEN_TYPE: POS_ONE_VALUE_TOKEN) AND imm8[3] then set #IE;
+    IF (tsrc[63:0] of TOKEN_TYPE: SNAN_TOKEN) AND imm8[4] then set #IE;
+    IF (tsrc[63:0] of TOKEN_TYPE: NEG_INF_TOKEN) AND imm8[5] then set #IE;
+    IF (tsrc[63:0] of TOKEN_TYPE: NEG_VALUE_TOKEN) AND imm8[6] then set #IE;
+    IF (tsrc[63:0] of TOKEN_TYPE: POS_INF_TOKEN) AND imm8[7] then set #IE;
+        ; end fault reporting
+    return dest[63:0];
+}
+        ; end of FIXUPIMM_DP()
+
+

VFIXUPIMMSD (EVEX encoded version) + ¶ +

+
IF k1[0] OR *no writemask*
+    THEN DEST[63:0] := FIXUPIMM_DP(DEST[63:0], SRC1[63:0], SRC2[63:0], imm8 [7:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE DEST[63:0] := 0
+                ; zeroing-masking
+        FI
+FI;
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+Immediate Control Description:
+
+
+Bit 7: +INF #IE | Bit 6: -VE #IE | Bit 5: -INF #IE | Bit 4: SNaN #IE | Bit 3: ONE #IE | Bit 2: ONE #ZE | Bit 1: ZERO #IE | Bit 0: ZERO #ZE
Figure 5-11. VFIXUPIMMSD Immediate Control Description
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFIXUPIMMSD __m128d _mm_fixupimm_sd( __m128d a, __m128i tbl, int imm);
+
+
VFIXUPIMMSD __m128d _mm_mask_fixupimm_sd(__m128d s, __mmask8 k, __m128d a, __m128i tbl, int imm);
+
+
VFIXUPIMMSD __m128d _mm_maskz_fixupimm_sd( __mmask8 k, __m128d a, __m128i tbl, int imm);
+
+
VFIXUPIMMSD __m128d _mm_fixupimm_round_sd( __m128d a, __m128i tbl, int imm, int sae);
+
+
VFIXUPIMMSD __m128d _mm_mask_fixupimm_round_sd(__m128d s, __mmask8 k, __m128d a, __m128i tbl, int imm, int sae);
+
+
VFIXUPIMMSD __m128d _mm_maskz_fixupimm_round_sd( __mmask8 k, __m128d a, __m128i tbl, int imm, int sae);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Zero, Invalid

+

Other Exceptions + ¶ +

+

See Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vfixupimmss.html b/x86/vfixupimmss.html new file mode 100644 index 0000000..071a2a7 --- /dev/null +++ b/x86/vfixupimmss.html @@ -0,0 +1,216 @@ + +VFIXUPIMMSS + — Fix Up Special Scalar Float32 Value

VFIXUPIMMSS + — Fix Up Special Scalar Float32 Value

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.0F3A.W0 55 /r ib VFIXUPIMMSS xmm1 {k1}{z}, xmm2, xmm3/m32{sae}, imm8AV/VAVX512FFix up a float32 number in the low doubleword element in xmm2 using scalar int32 table in xmm3/m32 and store the result in xmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

Perform a fix-up of the low doubleword element encoded in single precision floating-point format in the first source operand (the second operand) using a 32-bit, two-level look-up table specified in the low doubleword element of the second source operand (the third operand) with exception reporting specifier imm8. The element that is fixed-up is selected by mask bit of 1 specified in the opmask k1. Mask bit of 0 in the opmask k1 or a table response action of 0000b preserves the corresponding element of the first operand. The fixed-up element from the first source operand or the preserved element in the first operand becomes the low doubleword element of the destination operand (the first operand). Bits 127:32 of the destination operand are copied from the corresponding bits of the first source operand. The destination and first source operands are XMM registers. The second source operand can be an XMM register or a 32-bit memory location.

+

The two-level look-up table performs a fix-up of the single precision floating-point input data in the first source operand by decoding the input data encoding into 8 token types. A response table is defined for each token type that converts the input encoding in the first source operand into one of 16 response actions.

+

This instruction is specifically intended for use in fixing up the results of arithmetic calculations involving one source so that they match the spec, although it is generally useful for fixing up the results of multiple-instruction sequences to reflect special-number inputs. For example, consider rcp(0). Input 0 to rcp, and you should get INF according to the DX10 spec. However, evaluating rcp via Newton-Raphson, where x=approx(1/0), yields an incorrect result. To deal with this, VFIXUPIMMSS can be used after the N-R reciprocal sequence to set the result to the correct value (i.e., INF when the input is 0).

+

If MXCSR.DAZ is not set, denormal input elements in the first source operand are considered as normal inputs and do not trigger any fixup nor fault reporting.

+

Imm8 is used to set the required flags reporting. It supports #ZE and #IE fault reporting (see details below).

+

MXCSR.DAZ is used and refers to xmm2 only (i.e., xmm1 is not considered as zero in case MXCSR.DAZ is set).

+

MXCSR mask bits are ignored (treated as if all mask bits are set to masked response). If any of the imm8 bits is set and the condition for fault reporting is met, MXCSR.IE or MXCSR.ZE might be updated.
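
A small C sketch (illustrative only) of how the imm8 fault-reporting bits described above map to the immediate control layout of Figure 5-12 and the fault-reporting pseudocode below; the macro names are assumptions:

/* imm8 bit positions, per Figure 5-12 and the fault-reporting pseudocode below. */
#define FIXUP_ZERO_ZE      (1u << 0)   /* zero input reports #ZE     */
#define FIXUP_ZERO_IE      (1u << 1)   /* zero input reports #IE     */
#define FIXUP_ONE_ZE       (1u << 2)   /* +1.0 input reports #ZE     */
#define FIXUP_ONE_IE       (1u << 3)   /* +1.0 input reports #IE     */
#define FIXUP_SNAN_IE      (1u << 4)   /* SNaN input reports #IE     */
#define FIXUP_NEG_INF_IE   (1u << 5)   /* -INF input reports #IE     */
#define FIXUP_NEG_VALUE_IE (1u << 6)   /* negative input reports #IE */
#define FIXUP_POS_INF_IE   (1u << 7)   /* +INF input reports #IE     */

/* Example: report both #ZE and #IE when the source element is zero. */
#define RCP_FIXUP_IMM8  (FIXUP_ZERO_ZE | FIXUP_ZERO_IE)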

+

Operation + ¶ +

+
enum TOKEN_TYPE
+{
+    QNAN_TOKEN := 0,
+    SNAN_TOKEN := 1,
+    ZERO_VALUE_TOKEN := 2,
+    POS_ONE_VALUE_TOKEN := 3,
+    NEG_INF_TOKEN := 4,
+    POS_INF_TOKEN := 5,
+    NEG_VALUE_TOKEN := 6,
+    POS_VALUE_TOKEN := 7
+}
+FIXUPIMM_SP (dest[31:0], src1[31:0],tbl3[31:0], imm8 [7:0]){
+    tsrc[31:0] := ((src1[30:23] = 0) AND (MXCSR.DAZ =1)) ? 0.0 : src1[31:0]
+    CASE(tsrc[31:0] of TOKEN_TYPE) {
+        QNAN_TOKEN: j := 0;
+        SNAN_TOKEN: j := 1;
+        ZERO_VALUE_TOKEN: j := 2;
+        POS_ONE_VALUE_TOKEN: j := 3;
+        NEG_INF_TOKEN: j := 4;
+        POS_INF_TOKEN: j := 5;
+        NEG_VALUE_TOKEN: j := 6;
+        POS_VALUE_TOKEN: j := 7;
+    }
+            ; end source special CASE(tsrc...)
+    ; The required response from src3 table is extracted
+    token_response[3:0] = tbl3[3+4*j:4*j];
+    CASE(token_response[3:0]) {
+        0000: dest[31:0] := dest[31:0];
+        0001: dest[31:0] := tsrc[31:0];
+        0010: dest[31:0] := QNaN(tsrc[31:0]);
+        0011: dest[31:0] := QNAN_Indefinite;
+        0100: dest[31:0] := -INF;
+        0101: dest[31:0] := +INF;
+        0110: dest[31:0] := tsrc.sign? -INF : +INF;
+        0111: dest[31:0] := -0;
+        1000: dest[31:0] := +0;
+        1001: dest[31:0] := -1;
+        1010: dest[31:0] := +1;
+        1011: dest[31:0] := 1⁄2;
+        1100: dest[31:0] := 90.0;
+        1101: dest[31:0] := PI/2;
+        1110: dest[31:0] := MAX_FLOAT;
+        1111: dest[31:0] := -MAX_FLOAT;
+    }
+            ; end of token_response CASE
+    ; The required fault reporting from imm8 is extracted
+    ; TOKENs are mutually exclusive and TOKENs priority defines the order.
+    ; Multiple faults related to a single token can occur simultaneously.
+    IF (tsrc[31:0] of TOKEN_TYPE: ZERO_VALUE_TOKEN) AND imm8[0] then set #ZE;
+    IF (tsrc[31:0] of TOKEN_TYPE: ZERO_VALUE_TOKEN) AND imm8[1] then set #IE;
+    IF (tsrc[31:0] of TOKEN_TYPE: POS_ONE_VALUE_TOKEN) AND imm8[2] then set #ZE;
+    IF (tsrc[31:0] of TOKEN_TYPE: POS_ONE_VALUE_TOKEN) AND imm8[3] then set #IE;
+    IF (tsrc[31:0] of TOKEN_TYPE: SNAN_TOKEN) AND imm8[4] then set #IE;
+    IF (tsrc[31:0] of TOKEN_TYPE: NEG_INF_TOKEN) AND imm8[5] then set #IE;
+    IF (tsrc[31:0] of TOKEN_TYPE: NEG_VALUE_TOKEN) AND imm8[6] then set #IE;
+    IF (tsrc[31:0] of TOKEN_TYPE: POS_INF_TOKEN) AND imm8[7] then set #IE;
+        ; end fault reporting
+    return dest[31:0];
+} ; end of FIXUPIMM_SP()
+
+

VFIXUPIMMSS (EVEX encoded version) + ¶ +

+
IF k1[0] OR *no writemask*
+    THEN DEST[31:0] := FIXUPIMM_SP(DEST[31:0], SRC1[31:0], SRC2[31:0], imm8 [7:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE DEST[31:0] := 0
+                ; zeroing-masking
+        FI
+FI;
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+Immediate Control Description:
+
+
+Bit 7: +INF #IE | Bit 6: -VE #IE | Bit 5: -INF #IE | Bit 4: SNaN #IE | Bit 3: ONE #IE | Bit 2: ONE #ZE | Bit 1: ZERO #IE | Bit 0: ZERO #ZE
Figure 5-12. VFIXUPIMMSS Immediate Control Description
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFIXUPIMMSS __m128 _mm_fixupimm_ss( __m128 a, __m128i tbl, int imm);
+
+
VFIXUPIMMSS __m128 _mm_mask_fixupimm_ss(__m128 s, __mmask8 k, __m128 a, __m128i tbl, int imm);
+
+
VFIXUPIMMSS __m128 _mm_maskz_fixupimm_ss( __mmask8 k, __m128 a, __m128i tbl, int imm);
+
+
VFIXUPIMMSS __m128 _mm_fixupimm_round_ss( __m128 a, __m128i tbl, int imm, int sae);
+
+
VFIXUPIMMSS __m128 _mm_mask_fixupimm_round_ss(__m128 s, __mmask8 k, __m128 a, __m128i tbl, int imm, int sae);
+
+
VFIXUPIMMSS __m128 _mm_maskz_fixupimm_round_ss( __mmask8 k, __m128 a, __m128i tbl, int imm, int sae);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Zero, Invalid

+

Other Exceptions + ¶ +

+

See Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vfmadd132pd.vfmadd213pd.vfmadd231pd.html b/x86/vfmadd132pd.vfmadd213pd.vfmadd231pd.html new file mode 100644 index 0000000..609b496 --- /dev/null +++ b/x86/vfmadd132pd.vfmadd213pd.vfmadd231pd.html @@ -0,0 +1,400 @@ + +VFMADD132PD/VFMADD213PD/VFMADD231PD + — Fused Multiply-Add of Packed DoublePrecision Floating-Point Values

VFMADD132PD/VFMADD213PD/VFMADD231PD + — Fused Multiply-Add of Packed DoublePrecision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W1 98 /r VFMADD132PD xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed double precision floating-point values from xmm1 and xmm3/mem, add to xmm2 and put result in xmm1.
VEX.128.66.0F38.W1 A8 /r VFMADD213PD xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed double precision floating-point values from xmm1 and xmm2, add to xmm3/mem and put result in xmm1.
VEX.128.66.0F38.W1 B8 /r VFMADD231PD xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed double precision floating-point values from xmm2 and xmm3/mem, add to xmm1 and put result in xmm1.
VEX.256.66.0F38.W1 98 /r VFMADD132PD ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed double precision floating-point values from ymm1 and ymm3/mem, add to ymm2 and put result in ymm1.
VEX.256.66.0F38.W1 A8 /r VFMADD213PD ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed double precision floating-point values from ymm1 and ymm2, add to ymm3/mem and put result in ymm1.
VEX.256.66.0F38.W1 B8 /r VFMADD231PD ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed double precision floating-point values from ymm2 and ymm3/mem, add to ymm1 and put result in ymm1.
EVEX.128.66.0F38.W1 98 /r VFMADD132PD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from xmm1 and xmm3/m128/m64bcst, add to xmm2 and put result in xmm1.
EVEX.128.66.0F38.W1 A8 /r VFMADD213PD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from xmm1 and xmm2, add to xmm3/m128/m64bcst and put result in xmm1.
EVEX.128.66.0F38.W1 B8 /r VFMADD231PD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from xmm2 and xmm3/m128/m64bcst, add to xmm1 and put result in xmm1.
EVEX.256.66.0F38.W1 98 /r VFMADD132PD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from ymm1 and ymm3/m256/m64bcst, add to ymm2 and put result in ymm1.
EVEX.256.66.0F38.W1 A8 /r VFMADD213PD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from ymm1 and ymm2, add to ymm3/m256/m64bcst and put result in ymm1.
EVEX.256.66.0F38.W1 B8 /r VFMADD231PD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from ymm2 and ymm3/m256/m64bcst, add to ymm1 and put result in ymm1.
EVEX.512.66.0F38.W1 98 /r VFMADD132PD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er}BV/VAVX512FMultiply packed double precision floating-point values from zmm1 and zmm3/m512/m64bcst, add to zmm2 and put result in zmm1.
EVEX.512.66.0F38.W1 A8 /r VFMADD213PD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er}BV/VAVX512FMultiply packed double precision floating-point values from zmm1 and zmm2, add to zmm3/m512/m64bcst and put result in zmm1.
EVEX.512.66.0F38.W1 B8 /r VFMADD231PD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er}BV/VAVX512FMultiply packed double precision floating-point values from zmm2 and zmm3/m512/m64bcst, add to zmm1 and put result in zmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
BFullModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a set of SIMD multiply-add computations on packed double precision floating-point values using three source operands and writes the multiply-add results to the destination operand. The destination operand is also the first source operand. The second operand must be a SIMD register. The third source operand can be a SIMD register or a memory location.

+

VFMADD132PD: Multiplies the two, four or eight packed double precision floating-point values from the first source operand to the two, four or eight packed double precision floating-point values in the third source operand, adds the infinite precision intermediate result to the two, four or eight packed double precision floating-point values in the second source operand, performs rounding and stores the resulting two, four or eight packed double precision floating-point values to the destination operand (first source operand).

+

VFMADD213PD: Multiplies the two, four or eight packed double precision floating-point values from the second source operand to the two, four or eight packed double precision floating-point values in the first source operand, adds the infinite precision intermediate result to the two, four or eight packed double precision floating-point values in the third source operand, performs rounding and stores the resulting two, four or eight packed double precision floating-point values to the destination operand (first source operand).

+

VFMADD231PD: Multiplies the two, four or eight packed double precision floating-point values from the second source to the two, four or eight packed double precision floating-point values in the third source operand, adds the infinite precision intermediate result to the two, four or eight packed double precision floating-point values in the first source operand, performs rounding and stores the resulting two, four or eight packed double precision floating-point values to the destination operand (first source operand).

+

EVEX encoded versions: The destination operand (also first source operand) is a ZMM register and encoded in reg_field. The second source operand is a ZMM register and encoded in EVEX.vvvv. The third source operand is a ZMM register, a 512-bit memory location, or a 512-bit vector broadcasted from a 64-bit memory location. The destination operand is conditionally updated with write mask k1.

+

VEX.256 encoded version: The destination operand (also first source operand) is a YMM register and encoded in reg_field. The second source operand is a YMM register and encoded in VEX.vvvv. The third source operand is a YMM register or a 256-bit memory location and encoded in rm_field.

+

VEX.128 encoded version: The destination operand (also first source operand) is a XMM register and encoded in reg_field. The second source operand is a XMM register and encoded in VEX.vvvv. The third source operand is a XMM register or a 128-bit memory location and encoded in rm_field. The upper 128 bits of the YMM destination register are zeroed.
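
From C, all of the 132/213/231 forms are reached through a single fused multiply-add intrinsic (listed under the intrinsic equivalents below); the compiler selects the concrete form based on register allocation. A minimal sketch, assuming an AVX512F-enabled compiler:

#include <immintrin.h>

/* dst[i] := a[i]*b[i] + c[i], rounded once after the infinitely precise product-sum. */
__m512d fma_pd(__m512d a, __m512d b, __m512d c)
{
    return _mm512_fmadd_pd(a, b, c);
}

/* Merging-masking: elements whose mask bit is 0 keep the value from a. */
__m512d fma_pd_masked(__m512d a, __mmask8 k, __m512d b, __m512d c)
{
    return _mm512_mask_fmadd_pd(a, k, b, c);
}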

+

Operation + ¶ +

+
In the operations below, “*” and “+” symbols represent multiplication and addition with infinite precision inputs and outputs (no
+rounding).
+
+

VFMADD132PD DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 2
+ELSEIF (VEX.256)
+    MAXNUM := 4
+FI
+For i = 0 to MAXNUM-1 {
+    n := 64*i;
+    DEST[n+63:n] := RoundFPControl_MXCSR(DEST[n+63:n]*SRC3[n+63:n] + SRC2[n+63:n])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFMADD213PD DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 2
+ELSEIF (VEX.256)
+    MAXNUM := 4
+FI
+For i = 0 to MAXNUM-1 {
+    n := 64*i;
+    DEST[n+63:n] := RoundFPControl_MXCSR(SRC2[n+63:n]*DEST[n+63:n] + SRC3[n+63:n])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFMADD231PD DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 2
+ELSEIF (VEX.256)
+    MAXNUM := 4
+FI
+For i = 0 to MAXNUM-1 {
+    n := 64*i;
+    DEST[n+63:n] := RoundFPControl_MXCSR(SRC2[n+63:n]*SRC3[n+63:n] + DEST[n+63:n])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFMADD132PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] :=
+            RoundFPControl(DEST[i+63:i]*SRC3[i+63:i] + SRC2[i+63:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMADD132PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+63:i] :=
+            RoundFPControl_MXCSR(DEST[i+63:i]*SRC3[63:0] + SRC2[i+63:i])
+                ELSE
+                    DEST[i+63:i] :=
+            RoundFPControl_MXCSR(DEST[i+63:i]*SRC3[i+63:i] + SRC2[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMADD213PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] :=
+            RoundFPControl(SRC2[i+63:i]*DEST[i+63:i] + SRC3[i+63:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMADD213PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+63:i] :=
+            RoundFPControl_MXCSR(SRC2[i+63:i]*DEST[i+63:i] + SRC3[63:0])
+                ELSE
+                    DEST[i+63:i] :=
+            RoundFPControl_MXCSR(SRC2[i+63:i]*DEST[i+63:i] + SRC3[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMADD231PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] :=
+            RoundFPControl(SRC2[i+63:i]*SRC3[i+63:i] + DEST[i+63:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMADD231PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+63:i] :=
+            RoundFPControl_MXCSR(SRC2[i+63:i]*SRC3[63:0] + DEST[i+63:i])
+                ELSE
+                    DEST[i+63:i] :=
+            RoundFPControl_MXCSR(SRC2[i+63:i]*SRC3[i+63:i] + DEST[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFMADDxxxPD __m512d _mm512_fmadd_pd(__m512d a, __m512d b, __m512d c);
+
+
VFMADDxxxPD __m512d _mm512_fmadd_round_pd(__m512d a, __m512d b, __m512d c, int r);
+
+
VFMADDxxxPD __m512d _mm512_mask_fmadd_pd(__m512d a, __mmask8 k, __m512d b, __m512d c);
+
+
VFMADDxxxPD __m512d _mm512_maskz_fmadd_pd(__mmask8 k, __m512d a, __m512d b, __m512d c);
+
+
VFMADDxxxPD __m512d _mm512_mask3_fmadd_pd(__m512d a, __m512d b, __m512d c, __mmask8 k);
+
+
VFMADDxxxPD __m512d _mm512_mask_fmadd_round_pd(__m512d a, __mmask8 k, __m512d b, __m512d c, int r);
+
+
VFMADDxxxPD __m512d _mm512_maskz_fmadd_round_pd(__mmask8 k, __m512d a, __m512d b, __m512d c, int r);
+
+
VFMADDxxxPD __m512d _mm512_mask3_fmadd_round_pd(__m512d a, __m512d b, __m512d c, __mmask8 k, int r);
+
+
VFMADDxxxPD __m256d _mm256_mask_fmadd_pd(__m256d a, __mmask8 k, __m256d b, __m256d c);
+
+
VFMADDxxxPD __m256d _mm256_maskz_fmadd_pd(__mmask8 k, __m256d a, __m256d b, __m256d c);
+
+
VFMADDxxxPD __m256d _mm256_mask3_fmadd_pd(__m256d a, __m256d b, __m256d c, __mmask8 k);
+
+
VFMADDxxxPD __m128d _mm_mask_fmadd_pd(__m128d a, __mmask8 k, __m128d b, __m128d c);
+
+
VFMADDxxxPD __m128d _mm_maskz_fmadd_pd(__mmask8 k, __m128d a, __m128d b, __m128d c);
+
+
VFMADDxxxPD __m128d _mm_mask3_fmadd_pd(__m128d a, __m128d b, __m128d c, __mmask8 k);
+
+
VFMADDxxxPD __m128d _mm_fmadd_pd (__m128d a, __m128d b, __m128d c);
+
+
VFMADDxxxPD __m256d _mm256_fmadd_pd (__m256d a, __m256d b, __m256d c);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vfmadd132ph.vfnmadd132ph.vfmadd213ph.vfnmadd213ph.vfmadd231ph.vfnmadd231ph.html b/x86/vfmadd132ph.vfnmadd132ph.vfmadd213ph.vfnmadd213ph.vfmadd231ph.vfnmadd231ph.html new file mode 100644 index 0000000..0b955e1 --- /dev/null +++ b/x86/vfmadd132ph.vfnmadd132ph.vfmadd213ph.vfnmadd213ph.vfmadd231ph.vfnmadd231ph.html @@ -0,0 +1,367 @@ + +VFMADD132PH/VFNMADD132PH/VFMADD213PH/VFNMADD213PH/VFMADD231PH/VFNMADD231PH + — Fused Multiply-Add of Packed FP16 Values

VFMADD132PH/VFNMADD132PH/VFMADD213PH/VFNMADD213PH/VFMADD231PH/VFNMADD231PH + — Fused Multiply-Add of Packed FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.MAP6.W0 98 /r VFMADD132PH xmm1{k1}{z}, xmm2, xmm3/m128/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from xmm1 and xmm3/m128/m16bcst, add to xmm2, and store the result in xmm1.
EVEX.256.66.MAP6.W0 98 /r VFMADD132PH ymm1{k1}{z}, ymm2, ymm3/m256/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from ymm1 and ymm3/m256/m16bcst, add to ymm2, and store the result in ymm1.
EVEX.512.66.MAP6.W0 98 /r VFMADD132PH zmm1{k1}{z}, zmm2, zmm3/m512/m16bcst {er}AV/VAVX512-FP16Multiply packed FP16 values from zmm1 and zmm3/m512/m16bcst, add to zmm2, and store the result in zmm1.
EVEX.128.66.MAP6.W0 A8 /r VFMADD213PH xmm1{k1}{z}, xmm2, xmm3/m128/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from xmm1 and xmm2, add to xmm3/m128/m16bcst, and store the result in xmm1.
EVEX.256.66.MAP6.W0 A8 /r VFMADD213PH ymm1{k1}{z}, ymm2, ymm3/m256/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from ymm1 and ymm2, add to ymm3/m256/m16bcst, and store the result in ymm1.
EVEX.512.66.MAP6.W0 A8 /r VFMADD213PH zmm1{k1}{z}, zmm2, zmm3/m512/m16bcst {er}AV/VAVX512-FP16Multiply packed FP16 values from zmm1 and zmm2, add to zmm3/m512/m16bcst, and store the result in zmm1.
EVEX.128.66.MAP6.W0 B8 /r VFMADD231PH xmm1{k1}{z}, xmm2, xmm3/m128/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from xmm2 and xmm3/m128/m16bcst, add to xmm1, and store the result in xmm1.
EVEX.256.66.MAP6.W0 B8 /r VFMADD231PH ymm1{k1}{z}, ymm2, ymm3/m256/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from ymm2 and ymm3/m256/m16bcst, add to ymm1, and store the result in ymm1.
EVEX.512.66.MAP6.W0 B8 /r VFMADD231PH zmm1{k1}{z}, zmm2, zmm3/m512/m16bcst {er}AV/VAVX512-FP16Multiply packed FP16 values from zmm2 and zmm3/m512/m16bcst, add to zmm1, and store the result in zmm1.
EVEX.128.66.MAP6.W0 9C /r VFNMADD132PH xmm1{k1}{z}, xmm2, xmm3/m128/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from xmm1 and xmm3/m128/m16bcst, and negate the value. Add this value to xmm2, and store the result in xmm1.
EVEX.256.66.MAP6.W0 9C /r VFNMADD132PH ymm1{k1}{z}, ymm2, ymm3/m256/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from ymm1 and ymm3/m256/m16bcst, and negate the value. Add this value to ymm2, and store the result in ymm1.
EVEX.512.66.MAP6.W0 9C /r VFNMADD132PH zmm1{k1}{z}, zmm2, zmm3/m512/m16bcst {er}AV/VAVX512-FP16Multiply packed FP16 values from zmm1 and zmm3/m512/m16bcst, and negate the value. Add this value to zmm2, and store the result in zmm1.
EVEX.128.66.MAP6.W0 AC /r VFNMADD213PH xmm1{k1}{z}, xmm2, xmm3/m128/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from xmm1 and xmm2, and negate the value. Add this value to xmm3/m128/m16bcst, and store the result in xmm1.
EVEX.256.66.MAP6.W0 AC /r VFNMADD213PH ymm1{k1}{z}, ymm2, ymm3/m256/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from ymm1 and ymm2, and negate the value. Add this value to ymm3/m256/m16bcst, and store the result in ymm1.
EVEX.512.66.MAP6.W0 AC /r VFNMADD213PH zmm1{k1}{z}, zmm2, zmm3/m512/m16bcst {er}AV/VAVX512-FP16Multiply packed FP16 values from zmm1 and zmm2, and negate the value. Add this value to zmm3/m512/m16bcst, and store the result in zmm1.
EVEX.128.66.MAP6.W0 BC /r VFNMADD231PH xmm1{k1}{z}, xmm2, xmm3/m128/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from xmm2 and xmm3/m128/m16bcst, and negate the value. Add this value to xmm1, and store the result in xmm1.
EVEX.256.66.MAP6.W0 BC /r VFNMADD231PH ymm1{k1}{z}, ymm2, ymm3/m256/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from ymm2 and ymm3/m256/m16bcst, and negate the value. Add this value to ymm1, and store the result in ymm1.
EVEX.512.66.MAP6.W0 BC /r VFNMADD231PH zmm1{k1}{z}, zmm2, zmm3/m512/m16bcst {er}AV/VAVX512-FP16Multiply packed FP16 values from zmm2 and zmm3/m512/m16bcst, and negate the value. Add this value to zmm1, and store the result in zmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction performs a packed multiply-add or negated multiply-add computation on FP16 values using three source operands and writes the results to the destination operand. The destination operand is also the first source operand. The “N” (negated) forms of this instruction add the negated infinite precision intermediate product to the corresponding remaining operand. The notations “132”, “213”, and “231” indicate the use of the operands in ±A * B + C, where each digit corresponds to the operand number, with the destination being operand 1; see Table 5-2.

+

The destination elements are updated according to the writemask.

+
+ + + + + + + + + + + + +
NotationOperands
132dest = ± dest*src3+src2
231dest = ± src2*src3+dest
213dest = ± src2*dest+src3
+
Table 5-2. VF[,N]MADD[132,213,231]PH Notation for Operands
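
A minimal C sketch of the packed FP16 forms via the intrinsics listed below, assuming an AVX512-FP16-enabled compiler (each __m512h holds 32 FP16 elements); the wrapper names are assumptions:

#include <immintrin.h>

__m512h madd_ph (__m512h a, __m512h b, __m512h c) { return _mm512_fmadd_ph(a, b, c);  } /*  (a*b) + c */
__m512h nmadd_ph(__m512h a, __m512h b, __m512h c) { return _mm512_fnmadd_ph(a, b, c); } /* -(a*b) + c */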
+

Operation + ¶ +

+

VF[,N]MADD132PH DEST, SRC2, SRC3 (EVEX encoded versions) when src3 operand is a register + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+IF (VL = 512) AND (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *negative form*:
+            DEST.fp16[j] := RoundFPControl(-DEST.fp16[j]*SRC3.fp16[j] + SRC2.fp16[j])
+        ELSE:
+            DEST.fp16[j] := RoundFPControl(DEST.fp16[j]*SRC3.fp16[j] + SRC2.fp16[j])
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

VF[,N]MADD132PH DEST, SRC2, SRC3 (EVEX encoded versions) when src3 operand is a memory source + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF EVEX.b = 1:
+            t3 := SRC3.fp16[0]
+        ELSE:
+            t3 := SRC3.fp16[j]
+        IF *negative form*:
+            DEST.fp16[j] := RoundFPControl(-DEST.fp16[j] * t3 + SRC2.fp16[j])
+        ELSE:
+            DEST.fp16[j] := RoundFPControl(DEST.fp16[j] * t3 + SRC2.fp16[j])
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

VF[,N]MADD213PH DEST, SRC2, SRC3 (EVEX encoded versions) when src3 operand is a register + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+IF (VL = 512) AND (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *negative form*:
+            DEST.fp16[j] := RoundFPControl(-SRC2.fp16[j]*DEST.fp16[j] + SRC3.fp16[j])
+        ELSE
+            DEST.fp16[j] := RoundFPControl(SRC2.fp16[j]*DEST.fp16[j] + SRC3.fp16[j])
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

VF[,N]MADD213PH DEST, SRC2, SRC3 (EVEX encoded versions) when src3 operand is a memory source + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF EVEX.b = 1:
+            t3 := SRC3.fp16[0]
+        ELSE:
+            t3 := SRC3.fp16[j]
+        IF *negative form*:
+            DEST.fp16[j] := RoundFPControl(-SRC2.fp16[j] * DEST.fp16[j] + t3 )
+        ELSE:
+            DEST.fp16[j] := RoundFPControl(SRC2.fp16[j] * DEST.fp16[j] + t3 )
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

VF[,N]MADD231PH DEST, SRC2, SRC3 (EVEX encoded versions) when src3 operand is a register + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+IF (VL = 512) AND (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *negative form:
+            DEST.fp16[j] := RoundFPControl(-SRC2.fp16[j]*SRC3.fp16[j] + DEST.fp16[j])
+        ELSE:
+            DEST.fp16[j] := RoundFPControl(SRC2.fp16[j]*SRC3.fp16[j] + DEST.fp16[j])
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

VF[,N]MADD231PH DEST, SRC2, SRC3 (EVEX encoded versions) when src3 operand is a memory source + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF EVEX.b = 1:
+            t3 := SRC3.fp16[0]
+        ELSE:
+            t3 := SRC3.fp16[j]
+        IF *negative form*:
+            DEST.fp16[j] := RoundFPControl(-SRC2.fp16[j] * t3 + DEST.fp16[j] )
+        ELSE:
+            DEST.fp16[j] := RoundFPControl(SRC2.fp16[j] * t3 + DEST.fp16[j] )
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFMADD132PH, VFMADD213PH , and VFMADD231PH: __m128h _mm_fmadd_ph (__m128h a, __m128h b, __m128h c);
+
+
__m128h _mm_mask_fmadd_ph (__m128h a, __mmask8 k, __m128h b, __m128h c);
+
+
__m128h _mm_mask3_fmadd_ph (__m128h a, __m128h b, __m128h c, __mmask8 k);
+
+
__m128h _mm_maskz_fmadd_ph (__mmask8 k, __m128h a, __m128h b, __m128h c);
+
+
__m256h _mm256_fmadd_ph (__m256h a, __m256h b, __m256h c);
+
+
__m256h _mm256_mask_fmadd_ph (__m256h a, __mmask16 k, __m256h b, __m256h c);
+
+
__m256h _mm256_mask3_fmadd_ph (__m256h a, __m256h b, __m256h c, __mmask16 k);
+
+
__m256h _mm256_maskz_fmadd_ph (__mmask16 k, __m256h a, __m256h b, __m256h c);
+
+
__m512h _mm512_fmadd_ph (__m512h a, __m512h b, __m512h c);
+
+
__m512h _mm512_mask_fmadd_ph (__m512h a, __mmask32 k, __m512h b, __m512h c);
+
+
__m512h _mm512_mask3_fmadd_ph (__m512h a, __m512h b, __m512h c, __mmask32 k);
+
+
__m512h _mm512_maskz_fmadd_ph (__mmask32 k, __m512h a, __m512h b, __m512h c);
+
+
__m512h _mm512_fmadd_round_ph (__m512h a, __m512h b, __m512h c, const int rounding);
+
+
__m512h _mm512_mask_fmadd_round_ph (__m512h a, __mmask32 k, __m512h b, __m512h c, const int rounding);
+
+
__m512h _mm512_mask3_fmadd_round_ph (__m512h a, __m512h b, __m512h c, __mmask32 k, const int rounding);
+
+
__m512h _mm512_maskz_fmadd_round_ph (__mmask32 k, __m512h a, __m512h b, __m512h c, const int rounding);
+
+
VFNMADD132PH, VFNMADD213PH, and VFNMADD231PH: __m128h _mm_fnmadd_ph (__m128h a, __m128h b, __m128h c);
+
+
__m128h _mm_mask_fnmadd_ph (__m128h a, __mmask8 k, __m128h b, __m128h c);
+
+
__m128h _mm_mask3_fnmadd_ph (__m128h a, __m128h b, __m128h c, __mmask8 k);
+
+
__m128h _mm_maskz_fnmadd_ph (__mmask8 k, __m128h a, __m128h b, __m128h c);
+
+
__m256h _mm256_fnmadd_ph (__m256h a, __m256h b, __m256h c);
+
+
__m256h _mm256_mask_fnmadd_ph (__m256h a, __mmask16 k, __m256h b, __m256h c);
+
+
__m256h _mm256_mask3_fnmadd_ph (__m256h a, __m256h b, __m256h c, __mmask16 k);
+
+
__m256h _mm256_maskz_fnmadd_ph (__mmask16 k, __m256h a, __m256h b, __m256h c);
+
+
__m512h _mm512_fnmadd_ph (__m512h a, __m512h b, __m512h c);
+
+
__m512h _mm512_mask_fnmadd_ph (__m512h a, __mmask32 k, __m512h b, __m512h c);
+
+
__m512h _mm512_mask3_fnmadd_ph (__m512h a, __m512h b, __m512h c, __mmask32 k);
+
+
__m512h _mm512_maskz_fnmadd_ph (__mmask32 k, __m512h a, __m512h b, __m512h c);
+
+
__m512h _mm512_fnmadd_round_ph (__m512h a, __m512h b, __m512h c, const int rounding);
+
+
__m512h _mm512_mask_fnmadd_round_ph (__m512h a, __mmask32 k, __m512h b, __m512h c, const int rounding);
+
+
__m512h _mm512_mask3_fnmadd_round_ph (__m512h a, __m512h b, __m512h c, __mmask32 k, const int rounding);
+
+
__m512h _mm512_maskz_fnmadd_round_ph (__mmask32 k, __m512h a, __m512h b, __m512h c, const int rounding);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Overflow, Precision, Denormal

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vfmadd132ps.vfmadd213ps.vfmadd231ps.html b/x86/vfmadd132ps.vfmadd213ps.vfmadd231ps.html new file mode 100644 index 0000000..dc1bfff --- /dev/null +++ b/x86/vfmadd132ps.vfmadd213ps.vfmadd231ps.html @@ -0,0 +1,400 @@ + +VFMADD132PS/VFMADD213PS/VFMADD231PS + — Fused Multiply-Add of Packed SinglePrecision Floating-Point Values

VFMADD132PS/VFMADD213PS/VFMADD231PS + — Fused Multiply-Add of Packed SinglePrecision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W0 98 /r VFMADD132PS xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed single precision floating-point values from xmm1 and xmm3/mem, add to xmm2 and put result in xmm1.
VEX.128.66.0F38.W0 A8 /r VFMADD213PS xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed single precision floating-point values from xmm1 and xmm2, add to xmm3/mem and put result in xmm1.
VEX.128.66.0F38.W0 B8 /r VFMADD231PS xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed single precision floating-point values from xmm2 and xmm3/mem, add to xmm1 and put result in xmm1.
VEX.256.66.0F38.W0 98 /r VFMADD132PS ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed single precision floating-point values from ymm1 and ymm3/mem, add to ymm2 and put result in ymm1.
VEX.256.66.0F38.W0 A8 /r VFMADD213PS ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed single precision floating-point values from ymm1 and ymm2, add to ymm3/mem and put result in ymm1.
VEX.256.66.0F38.W0 B8 /r VFMADD231PS ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed single precision floating-point values from ymm2 and ymm3/mem, add to ymm1 and put result in ymm1.
EVEX.128.66.0F38.W0 98 /r VFMADD132PS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from xmm1 and xmm3/m128/m32bcst, add to xmm2 and put result in xmm1.
EVEX.128.66.0F38.W0 A8 /r VFMADD213PS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from xmm1 and xmm2, add to xmm3/m128/m32bcst and put result in xmm1.
EVEX.128.66.0F38.W0 B8 /r VFMADD231PS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from xmm2 and xmm3/m128/m32bcst, add to xmm1 and put result in xmm1.
EVEX.256.66.0F38.W0 98 /r VFMADD132PS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from ymm1 and ymm3/m256/m32bcst, add to ymm2 and put result in ymm1.
EVEX.256.66.0F38.W0 A8 /r VFMADD213PS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from ymm1 and ymm2, add to ymm3/m256/m32bcst and put result in ymm1.
EVEX.256.66.0F38.W0 B8 /r VFMADD231PS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from ymm2 and ymm3/m256/m32bcst, add to ymm1 and put result in ymm1.
EVEX.512.66.0F38.W0 98 /r VFMADD132PS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{er}BV/VAVX512FMultiply packed single precision floating-point values from zmm1 and zmm3/m512/m32bcst, add to zmm2 and put result in zmm1.
EVEX.512.66.0F38.W0 A8 /r VFMADD213PS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{er}BV/VAVX512FMultiply packed single precision floating-point values from zmm1 and zmm2, add to zmm3/m512/m32bcst and put result in zmm1.
EVEX.512.66.0F38.W0 B8 /r VFMADD231PS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{er}BV/VAVX512FMultiply packed single precision floating-point values from zmm2 and zmm3/m512/m32bcst, add to zmm1 and put result in zmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
BFullModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a set of SIMD multiply-add computations on packed single precision floating-point values using three source operands and writes the multiply-add results to the destination operand. The destination operand is also the first source operand. The second operand must be a SIMD register. The third source operand can be a SIMD register or a memory location.

+

VFMADD132PS: Multiplies the four, eight or sixteen packed single precision floating-point values from the first source operand to the four, eight or sixteen packed single precision floating-point values in the third source operand, adds the infinite precision intermediate result to the four, eight or sixteen packed single precision floating-point values in the second source operand, performs rounding and stores the resulting four, eight or sixteen packed single precision floating-point values to the destination operand (first source operand).

+

VFMADD213PS: Multiplies the four, eight or sixteen packed single precision floating-point values from the second source operand to the four, eight or sixteen packed single precision floating-point values in the first source operand, adds the infinite precision intermediate result to the four, eight or sixteen packed single precision floating-point values in the third source operand, performs rounding and stores the resulting the four, eight or sixteen packed single precision floating-point values to the destination operand (first source operand).

+

VFMADD231PS: Multiplies the four, eight or sixteen packed single precision floating-point values from the second source operand to the four, eight or sixteen packed single precision floating-point values in the third source operand, adds the infinite precision intermediate result to the four, eight or sixteen packed single precision floating-point values in the first source operand, performs rounding and stores the resulting four, eight or sixteen packed single precision floating-point values to the destination operand (first source operand).

+

EVEX encoded versions: The destination operand (also first source operand) is a ZMM register and encoded in reg_field. The second source operand is a ZMM register and encoded in EVEX.vvvv. The third source operand is a ZMM register, a 512-bit memory location, or a 512-bit vector broadcasted from a 32-bit memory location. The destination operand is conditionally updated with write mask k1.

+

VEX.256 encoded version: The destination operand (also first source operand) is a YMM register and encoded in reg_field. The second source operand is a YMM register and encoded in VEX.vvvv. The third source operand is a YMM register or a 256-bit memory location and encoded in rm_field.

+

VEX.128 encoded version: The destination operand (also first source operand) is a XMM register and encoded in reg_field. The second source operand is a XMM register and encoded in VEX.vvvv. The third source operand is a XMM register or a 128-bit memory location and encoded in rm_field. The upper 128 bits of the YMM destination register are zeroed.
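
A short C sketch contrasting the fused form with a separate multiply and add, assuming an FMA-enabled compiler; the fused intrinsic rounds once, after the infinitely precise a*b + c, while the unfused pair rounds twice:

#include <immintrin.h>

__m256 fused  (__m256 a, __m256 b, __m256 c) { return _mm256_fmadd_ps(a, b, c); }              /* one rounding  */
__m256 unfused(__m256 a, __m256 b, __m256 c) { return _mm256_add_ps(_mm256_mul_ps(a, b), c); } /* two roundings */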

+

Operation + ¶ +

+
In the operations below, “*” and “+” symbols represent multiplication and addition with infinite precision inputs and outputs (no
+rounding).
+
+

VFMADD132PS DEST, SRC2, SRC3 + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 4
+ELSEIF (VEX.256)
+    MAXNUM := 8
+FI
+For i = 0 to MAXNUM-1 {
+    n := 32*i;
+    DEST[n+31:n] := RoundFPControl_MXCSR(DEST[n+31:n]*SRC3[n+31:n] + SRC2[n+31:n])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFMADD213PS DEST, SRC2, SRC3 + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 4
+ELSEIF (VEX.256)
+    MAXNUM := 8
+FI
+For i = 0 to MAXNUM-1 {
+    n := 32*i;
+    DEST[n+31:n] := RoundFPControl_MXCSR(SRC2[n+31:n]*DEST[n+31:n] + SRC3[n+31:n])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFMADD231PS DEST, SRC2, SRC3 + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 4
+ELSEIF (VEX.256)
+    MAXNUM := 8
+FI
+For i = 0 to MAXNUM-1 {
+    n := 32*i;
+    DEST[n+31:n] := RoundFPControl_MXCSR(SRC2[n+31:n]*SRC3[n+31:n] + DEST[n+31:n])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFMADD132PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] :=
+            RoundFPControl(DEST[i+31:i]*SRC3[i+31:i] + SRC2[i+31:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMADD132PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+31:i] :=
+            RoundFPControl_MXCSR(DEST[i+31:i]*SRC3[31:0] + SRC2[i+31:i])
+                ELSE
+                    DEST[i+31:i] :=
+            RoundFPControl_MXCSR(DEST[i+31:i]*SRC3[i+31:i] + SRC2[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMADD213PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] :=
+            RoundFPControl(SRC2[i+31:i]*DEST[i+31:i] + SRC3[i+31:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMADD213PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+31:i] :=
+            RoundFPControl_MXCSR(SRC2[i+31:i]*DEST[i+31:i] + SRC3[31:0])
+                ELSE
+                    DEST[i+31:i] :=
+            RoundFPControl_MXCSR(SRC2[i+31:i]*DEST[i+31:i] + SRC3[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMADD231PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] :=
+            RoundFPControl(SRC2[i+31:i]*SRC3[i+31:i] + DEST[i+31:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMADD231PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+31:i] :=
+            RoundFPControl_MXCSR(SRC2[i+31:i]*SRC3[31:0] + DEST[i+31:i])
+                ELSE
+                    DEST[i+31:i] :=
+            RoundFPControl_MXCSR(SRC2[i+31:i]*SRC3[i+31:i] + DEST[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFMADDxxxPS __m512 _mm512_fmadd_ps(__m512 a, __m512 b, __m512 c);
+
+
VFMADDxxxPS __m512 _mm512_fmadd_round_ps(__m512 a, __m512 b, __m512 c, int r);
+
+
VFMADDxxxPS __m512 _mm512_mask_fmadd_ps(__m512 a, __mmask16 k, __m512 b, __m512 c);
+
+
VFMADDxxxPS __m512 _mm512_maskz_fmadd_ps(__mmask16 k, __m512 a, __m512 b, __m512 c);
+
+
VFMADDxxxPS __m512 _mm512_mask3_fmadd_ps(__m512 a, __m512 b, __m512 c, __mmask16 k);
+
+
VFMADDxxxPS __m512 _mm512_mask_fmadd_round_ps(__m512 a, __mmask16 k, __m512 b, __m512 c, int r);
+
+
VFMADDxxxPS __m512 _mm512_maskz_fmadd_round_ps(__mmask16 k, __m512 a, __m512 b, __m512 c, int r);
+
+
VFMADDxxxPS __m512 _mm512_mask3_fmadd_round_ps(__m512 a, __m512 b, __m512 c, __mmask16 k, int r);
+
+
VFMADDxxxPS __m256 _mm256_mask_fmadd_ps(__m256 a, __mmask8 k, __m256 b, __m256 c);
+
+
VFMADDxxxPS __m256 _mm256_maskz_fmadd_ps(__mmask8 k, __m256 a, __m256 b, __m256 c);
+
+
VFMADDxxxPS __m256 _mm256_mask3_fmadd_ps(__m256 a, __m256 b, __m256 c, __mmask8 k);
+
+
VFMADDxxxPS __m128 _mm_mask_fmadd_ps(__m128 a, __mmask8 k, __m128 b, __m128 c);
+
+
VFMADDxxxPS __m128 _mm_maskz_fmadd_ps(__mmask8 k, __m128 a, __m128 b, __m128 c);
+
+
VFMADDxxxPS __m128 _mm_mask3_fmadd_ps(__m128 a, __m128 b, __m128 c, __mmask8 k);
+
+
VFMADDxxxPS __m128 _mm_fmadd_ps (__m128 a, __m128 b, __m128 c);
+
+
VFMADDxxxPS __m256 _mm256_fmadd_ps (__m256 a, __m256 b, __m256 c);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vfmadd132sd.vfmadd213sd.vfmadd231sd.html b/x86/vfmadd132sd.vfmadd213sd.vfmadd231sd.html new file mode 100644 index 0000000..37eebcc --- /dev/null +++ b/x86/vfmadd132sd.vfmadd213sd.vfmadd231sd.html @@ -0,0 +1,206 @@ + +VFMADD132SD/VFMADD213SD/VFMADD231SD + — Fused Multiply-Add of Scalar DoublePrecision Floating-Point Values

VFMADD132SD/VFMADD213SD/VFMADD231SD + — Fused Multiply-Add of Scalar DoublePrecision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.LIG.66.0F38.W1 99 /r VFMADD132SD xmm1, xmm2, xmm3/m64AV/VFMAMultiply scalar double precision floating-point value from xmm1 and xmm3/m64, add to xmm2 and put result in xmm1.
VEX.LIG.66.0F38.W1 A9 /r VFMADD213SD xmm1, xmm2, xmm3/m64AV/VFMAMultiply scalar double precision floating-point value from xmm1 and xmm2, add to xmm3/m64 and put result in xmm1.
VEX.LIG.66.0F38.W1 B9 /r VFMADD231SD xmm1, xmm2, xmm3/m64AV/VFMAMultiply scalar double precision floating-point value from xmm2 and xmm3/m64, add to xmm1 and put result in xmm1.
EVEX.LLIG.66.0F38.W1 99 /r VFMADD132SD xmm1 {k1}{z}, xmm2, xmm3/m64{er}BV/VAVX512FMultiply scalar double precision floating-point value from xmm1 and xmm3/m64, add to xmm2 and put result in xmm1.
EVEX.LLIG.66.0F38.W1 A9 /r VFMADD213SD xmm1 {k1}{z}, xmm2, xmm3/m64{er}BV/VAVX512FMultiply scalar double precision floating-point value from xmm1 and xmm2, add to xmm3/m64 and put result in xmm1.
EVEX.LLIG.66.0F38.W1 B9 /r VFMADD231SD xmm1 {k1}{z}, xmm2, xmm3/m64{er}BV/VAVX512FMultiply scalar double precision floating-point value from xmm2 and xmm3/m64, add to xmm1 and put result in xmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
BTuple1 ScalarModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD multiply-add computation on the low double precision floating-point values using three source operands and writes the multiply-add result to the destination operand. The destination operand is also the first source operand. The first and second operands are XMM registers. The third source operand can be an XMM register or a 64-bit memory location.

+

VFMADD132SD: Multiplies the low double precision floating-point value from the first source operand to the low double precision floating-point value in the third source operand, adds the infinite precision intermediate result to the low double precision floating-point values in the second source operand, performs rounding and stores the resulting double precision floating-point value to the destination operand (first source operand).

+

VFMADD213SD: Multiplies the low double precision floating-point value from the second source operand to the low double precision floating-point value in the first source operand, adds the infinite precision intermediate result to the low double precision floating-point value in the third source operand, performs rounding and stores the resulting double precision floating-point value to the destination operand (first source operand).

+

VFMADD231SD: Multiplies the low double precision floating-point value from the second source to the low double precision floating-point value in the third source operand, adds the infinite precision intermediate result to the low double precision floating-point value in the first source operand, performs rounding and stores the resulting double precision floating-point value to the destination operand (first source operand).

+

VEX.128 and EVEX encoded version: The destination operand (also first source operand) is encoded in reg_field. The second source operand is encoded in VEX.vvvv/EVEX.vvvv. The third source operand is encoded in rm_field. Bits 127:64 of the destination are unchanged. Bits MAXVL-1:128 of the destination register are zeroed.

+

EVEX encoded version: The low quadword element of the destination is updated according to the writemask.

+
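As a reading aid for the three operand orderings described above, the plain-C sketch below restates them with fma() from <math.h>; the names dest, src2, and src3 are illustrative stand-ins for the first, second, and third operands, not part of the instruction definition.

#include <math.h>

/* 132 form: dest := dest*src3 + src2 */
double vfmadd132sd_model(double dest, double src2, double src3) { return fma(dest, src3, src2); }

/* 213 form: dest := src2*dest + src3 */
double vfmadd213sd_model(double dest, double src2, double src3) { return fma(src2, dest, src3); }

/* 231 form: dest := src2*src3 + dest */
double vfmadd231sd_model(double dest, double src2, double src3) { return fma(src2, src3, dest); }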

Operation + ¶ +

+
In the operations below, “*” and “+” symbols represent multiplication and addition with infinite precision inputs and outputs (no
+rounding).
+
+

VFMADD132SD DEST, SRC2, SRC3 (EVEX encoded version) + ¶ +

+
IF (EVEX.b = 1) and SRC3 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[63:0] := RoundFPControl(DEST[63:0]*SRC3[63:0] + SRC2[63:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[63:0] := 0
+        FI;
+FI;
+DEST[127:64] := DEST[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VFMADD213SD DEST, SRC2, SRC3 (EVEX encoded version) + ¶ +

+
IF (EVEX.b = 1) and SRC3 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[63:0] := RoundFPControl(SRC2[63:0]*DEST[63:0] + SRC3[63:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[63:0] := 0
+        FI;
+FI;
+DEST[127:64] := DEST[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VFMADD231SD DEST, SRC2, SRC3 (EVEX encoded version) + ¶ +

+
IF (EVEX.b = 1) and SRC3 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[63:0] := RoundFPControl(SRC2[63:0]*SRC3[63:0] + DEST[63:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[63:0] := 0
+        FI;
+FI;
+DEST[127:64] := DEST[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VFMADD132SD DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
DEST[63:0] := RoundFPControl_MXCSR(DEST[63:0]*SRC3[63:0] + SRC2[63:0])
+DEST[127:64] := DEST[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VFMADD213SD DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
DEST[63:0] := RoundFPControl_MXCSR(SRC2[63:0]*DEST[63:0] + SRC3[63:0])
+DEST[127:64] := DEST[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VFMADD231SD DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
DEST[63:0] := RoundFPControl_MXCSR(SRC2[63:0]*SRC3[63:0] + DEST[63:0])
+DEST[127:64] := DEST[127:64]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFMADDxxxSD __m128d _mm_fmadd_round_sd(__m128d a, __m128d b, __m128d c, int r);
+
+
VFMADDxxxSD __m128d _mm_mask_fmadd_sd(__m128d a, __mmask8 k, __m128d b, __m128d c);
+
+
VFMADDxxxSD __m128d _mm_maskz_fmadd_sd(__mmask8 k, __m128d a, __m128d b, __m128d c);
+
+
VFMADDxxxSD __m128d _mm_mask3_fmadd_sd(__m128d a, __m128d b, __m128d c, __mmask8 k);
+
+
VFMADDxxxSD __m128d _mm_mask_fmadd_round_sd(__m128d a, __mmask8 k, __m128d b, __m128d c, int r);
+
+
VFMADDxxxSD __m128d _mm_maskz_fmadd_round_sd(__mmask8 k, __m128d a, __m128d b, __m128d c, int r);
+
+
VFMADDxxxSD __m128d _mm_mask3_fmadd_round_sd(__m128d a, __m128d b, __m128d c, __mmask8 k, int r);
+
+
VFMADDxxxSD __m128d _mm_fmadd_sd (__m128d a, __m128d b, __m128d c);
+
+
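A minimal usage sketch of the unmasked scalar intrinsic, assuming FMA hardware and a compiler flag such as -mfma; values are illustrative.

#include <immintrin.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    __m128d a = _mm_set_pd(99.0, 2.0);  /* high element 99.0, low element 2.0 */
    __m128d b = _mm_set_pd(0.0, 3.0);
    __m128d c = _mm_set_pd(0.0, 4.0);

    /* Low element: a*b + c with one rounding; high element copied from a. */
    __m128d r = _mm_fmadd_sd(a, b, c);

    double out[2];
    _mm_storeu_pd(out, r);
    printf("low = %f (scalar fma: %f), high = %f (expect 99.0)\n",
           out[0], fma(2.0, 3.0, 4.0), out[1]);
    return 0;
}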

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vfmadd132sh.vfnmadd132sh.vfmadd213sh.vfnmadd213sh.vfmadd231sh.vfnmadd231sh.html b/x86/vfmadd132sh.vfnmadd132sh.vfmadd213sh.vfnmadd213sh.vfmadd231sh.vfnmadd231sh.html new file mode 100644 index 0000000..301e0f3 --- /dev/null +++ b/x86/vfmadd132sh.vfnmadd132sh.vfmadd213sh.vfnmadd213sh.vfmadd231sh.vfnmadd231sh.html @@ -0,0 +1,197 @@ + +VFMADD132SH/VFNMADD132SH/VFMADD213SH/VFNMADD213SH/VFMADD231SH/VFNMADD231SH + — Fused Multiply-Add of Scalar FP16 Values

VFMADD132SH/VFNMADD132SH/VFMADD213SH/VFNMADD213SH/VFMADD231SH/VFNMADD231SH + — Fused Multiply-Add of Scalar FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.MAP6.W0 99 /r VFMADD132SH xmm1{k1}{z}, xmm2, xmm3/m16 {er}AV/VAVX512-FP16Multiply FP16 values from xmm1 and xmm3/m16, add to xmm2, and store the result in xmm1.
EVEX.LLIG.66.MAP6.W0 A9 /r VFMADD213SH xmm1{k1}{z}, xmm2, xmm3/m16 {er}AV/VAVX512-FP16Multiply FP16 values from xmm1 and xmm2, add to xmm3/m16, and store the result in xmm1.
EVEX.LLIG.66.MAP6.W0 B9 /r VFMADD231SH xmm1{k1}{z}, xmm2, xmm3/m16 {er}AV/VAVX512-FP16Multiply FP16 values from xmm2 and xmm3/m16, add to xmm1, and store the result in xmm1.
EVEX.LLIG.66.MAP6.W0 9D /r VFNMADD132SH xmm1{k1}{z}, xmm2, xmm3/m16 {er}AV/VAVX512-FP16Multiply FP16 values from xmm1 and xmm3/m16, and negate the value. Add this value to xmm2, and store the result in xmm1.
EVEX.LLIG.66.MAP6.W0 AD /r VFNMADD213SH xmm1{k1}{z}, xmm2, xmm3/m16 {er}AV/VAVX512-FP16Multiply FP16 values from xmm1 and xmm2, and negate the value. Add this value to xmm3/m16, and store the result in xmm1.
EVEX.LLIG.66.MAP6.W0 BD /r VFNMADD231SH xmm1{k1}{z}, xmm2, xmm3/m16 {er}AV/VAVX512-FP16Multiply FP16 values from xmm2 and xmm3/m16, and negate the value. Add this value to xmm1, and store the result in xmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a scalar multiply-add or negated multiply-add computation on the low FP16 values using three source operands and writes the result in the destination operand. The destination operand is also the first source operand. The “N” (negated) forms of this instruction add the negated infinite precision intermediate product to the corresponding remaining operand. The notations “132”, “213”, and “231” indicate the use of the operands in ±A * B + C, where each digit corresponds to the operand number, with the destination being operand 1; see Table 5-4.

+

Bits 127:16 of the destination operand are preserved. Bits MAXVL-1:128 of the destination operand are zeroed. The low FP16 element of the destination is updated according to the writemask.

+
+ + + + + + + + + + + + +
NotationOperands
132dest = ± dest*src3+src2
231dest = ± src2*src3+dest
213dest = ± src2*dest+src3
+
Table 5-4. VF[,N]MADD[132,213,231]SH Notation for Operands
+

Operation + ¶ +

+

VF[,N]MADD132SH DEST, SRC2, SRC3 (EVEX encoded versions) + ¶ +

+
IF EVEX.b = 1 and SRC3 is a register:
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+IF k1[0] OR *no writemask*:
+    IF *negative form*:
+        DEST.fp16[0] := RoundFPControl(-DEST.fp16[0]*SRC3.fp16[0] + SRC2.fp16[0])
+    ELSE:
+        DEST.fp16[0] := RoundFPControl(DEST.fp16[0]*SRC3.fp16[0] + SRC2.fp16[0])
+ELSE IF *zeroing*:
+    DEST.fp16[0] := 0
+// else DEST.fp16[0] remains unchanged
+//DEST[127:16] remains unchanged
+DEST[MAXVL-1:128] := 0
+
+

VF[,N]MADD213SH DEST, SRC2, SRC3 (EVEX encoded versions) + ¶ +

+
IF EVEX.b = 1 and SRC3 is a register:
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+IF k1[0] OR *no writemask*:
+    IF *negative form*:
+        DEST.fp16[0] := RoundFPControl(-SRC2.fp16[0]*DEST.fp16[0] + SRC3.fp16[0])
+    ELSE:
+        DEST.fp16[0] := RoundFPControl(SRC2.fp16[0]*DEST.fp16[0] + SRC3.fp16[0])
+ELSE IF *zeroing*:
+    DEST.fp16[0] := 0
+// else DEST.fp16[0] remains unchanged
+//DEST[127:16] remains unchanged
+DEST[MAXVL-1:128] := 0
+
+

VF[,N]MADD231SH DEST, SRC2, SRC3 (EVEX encoded versions) + ¶ +

+
IF EVEX.b = 1 and SRC3 is a register:
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+IF k1[0] OR *no writemask*:
+    IF *negative form*:
+        DEST.fp16[0] := RoundFPControl(-SRC2.fp16[0]*SRC3.fp16[0] + DEST.fp16[0])
+    ELSE:
+        DEST.fp16[0] := RoundFPControl(SRC2.fp16[0]*SRC3.fp16[0] + DEST.fp16[0])
+ELSE IF *zeroing*:
+    DEST.fp16[0] := 0
+// else DEST.fp16[0] remains unchanged
+//DEST[127:16] remains unchanged
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFMADD132SH, VFMADD213SH, and VFMADD231SH: __m128h _mm_fmadd_round_sh (__m128h a, __m128h b, __m128h c, const int rounding);
+
+
__m128h _mm_mask_fmadd_round_sh (__m128h a, __mmask8 k, __m128h b, __m128h c, const int rounding);
+
+
__m128h _mm_mask3_fmadd_round_sh (__m128h a, __m128h b, __m128h c, __mmask8 k, const int rounding);
+
+
__m128h _mm_maskz_fmadd_round_sh (__mmask8 k, __m128h a, __m128h b, __m128h c, const int rounding);
+
+
__m128h _mm_fmadd_sh (__m128h a, __m128h b, __m128h c);
+
+
__m128h _mm_mask_fmadd_sh (__m128h a, __mmask8 k, __m128h b, __m128h c);
+
+
__m128h _mm_mask3_fmadd_sh (__m128h a, __m128h b, __m128h c, __mmask8 k);
+
+
__m128h _mm_maskz_fmadd_sh (__mmask8 k, __m128h a, __m128h b, __m128h c);
+
+
VFNMADD132SH, VFNMADD213SH, and VFNMADD231SH: __m128h _mm_fnmadd_round_sh (__m128h a, __m128h b, __m128h c, const int rounding);
+
+
__m128h _mm_mask_fnmadd_round_sh (__m128h a, __mmask8 k, __m128h b, __m128h c, const int rounding);
+
+
__m128h _mm_mask3_fnmadd_round_sh (__m128h a, __m128h b, __m128h c, __mmask8 k, const int rounding);
+
+
__m128h _mm_maskz_fnmadd_round_sh (__mmask8 k, __m128h a, __m128h b, __m128h c, const int rounding);
+
+
__m128h _mm_fnmadd_sh (__m128h a, __m128h b, __m128h c);
+
+
__m128h _mm_mask_fnmadd_sh (__m128h a, __mmask8 k, __m128h b, __m128h c);
+
+
__m128h _mm_mask3_fnmadd_sh (__m128h a, __m128h b, __m128h c, __mmask8 k);
+
+
__m128h _mm_maskz_fnmadd_sh (__mmask8 k, __m128h a, __m128h b, __m128h c);
+
+
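A minimal usage sketch of the unmasked scalar FP16 forms, assuming AVX512-FP16 hardware, a compiler with _Float16 support (for example gcc or clang with -mavx512fp16), and that the helper intrinsics _mm_set_sh and _mm_cvtsh_h are available; values are illustrative.

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128h a = _mm_set_sh((_Float16)2.0f);
    __m128h b = _mm_set_sh((_Float16)3.0f);
    __m128h c = _mm_set_sh((_Float16)4.0f);

    __m128h r1 = _mm_fmadd_sh(a, b, c);   /* low element:  a*b + c = 10 */
    __m128h r2 = _mm_fnmadd_sh(a, b, c);  /* low element: -a*b + c = -2 */

    printf("fmadd_sh = %f, fnmadd_sh = %f\n",
           (double)_mm_cvtsh_h(r1), (double)_mm_cvtsh_h(r2));
    return 0;
}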

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Overflow, Precision, Denormal

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vfmadd132ss.vfmadd213ss.vfmadd231ss.html b/x86/vfmadd132ss.vfmadd213ss.vfmadd231ss.html new file mode 100644 index 0000000..ad93294 --- /dev/null +++ b/x86/vfmadd132ss.vfmadd213ss.vfmadd231ss.html @@ -0,0 +1,207 @@ + +VFMADD132SS/VFMADD213SS/VFMADD231SS + — Fused Multiply-Add of Scalar Single PrecisionFloating-Point Values

VFMADD132SS/VFMADD213SS/VFMADD231SS + — Fused Multiply-Add of Scalar Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.LIG.66.0F38.W0 99 /r VFMADD132SS xmm1, xmm2, xmm3/m32AV/VFMAMultiply scalar single precision floating-point value from xmm1 and xmm3/m32, add to xmm2 and put result in xmm1.
VEX.LIG.66.0F38.W0 A9 /r VFMADD213SS xmm1, xmm2, xmm3/m32AV/VFMAMultiply scalar single precision floating-point value from xmm1 and xmm2, add to xmm3/m32 and put result in xmm1.
VEX.LIG.66.0F38.W0 B9 /r VFMADD231SS xmm1, xmm2, xmm3/m32AV/VFMAMultiply scalar single precision floating-point value from xmm2 and xmm3/m32, add to xmm1 and put result in xmm1.
EVEX.LLIG.66.0F38.W0 99 /r VFMADD132SS xmm1 {k1}{z}, xmm2, xmm3/m32{er}BV/VAVX512FMultiply scalar single precision floating-point value from xmm1 and xmm3/m32, add to xmm2 and put result in xmm1.
EVEX.LLIG.66.0F38.W0 A9 /r VFMADD213SS xmm1 {k1}{z}, xmm2, xmm3/m32{er}BV/VAVX512FMultiply scalar single precision floating-point value from xmm1 and xmm2, add to xmm3/m32 and put result in xmm1.
EVEX.LLIG.66.0F38.W0 B9 /r VFMADD231SS xmm1 {k1}{z}, xmm2, xmm3/m32{er}BV/VAVX512FMultiply scalar single precision floating-point value from xmm2 and xmm3/m32, add to xmm1 and put result in xmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
BTuple1 ScalarModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD multiply-add computation on single precision floating-point values using three source operands and writes the multiply-add results in the destination operand. The destination operand is also the first source operand. The first and second operands are XMM registers. The third source operand can be a XMM register or a 32-bit memory location.

+

VFMADD132SS: Multiplies the low single precision floating-point value from the first source operand to the low single precision floating-point value in the third source operand, adds the infinite precision intermediate result to the low single precision floating-point value in the second source operand, performs rounding and stores the resulting single precision floating-point value to the destination operand (first source operand).

+

VFMADD213SS: Multiplies the low single precision floating-point value from the second source operand to the low single precision floating-point value in the first source operand, adds the infinite precision intermediate result to the low single precision floating-point value in the third source operand, performs rounding and stores the resulting single precision floating-point value to the destination operand (first source operand).

+

VFMADD231SS: Multiplies the low single precision floating-point value from the second source operand to the low single precision floating-point value in the third source operand, adds the infinite precision intermediate result to the low single precision floating-point value in the first source operand, performs rounding and stores the resulting single precision floating-point value to the destination operand (first source operand).

+

VEX.128 and EVEX encoded version: The destination operand (also first source operand) is encoded in reg_field. The second source operand is encoded in VEX.vvvv/EVEX.vvvv. The third source operand is encoded in rm_field. Bits 127:32 of the destination are unchanged. Bits MAXVL-1:128 of the destination register are zeroed.

+

EVEX encoded version: The low doubleword element of the destination is updated according to the writemask.

+

Compiler tools may optionally support a complementary mnemonic for each instruction mnemonic listed in the opcode/instruction column of the summary table. The behavior of the complementary mnemonic in situations involving NaNs is governed by the definition of the instruction mnemonic defined in the opcode/instruction column.

+

Operation + ¶ +

+
In the operations below, “*” and “+” symbols represent multiplication and addition with infinite precision inputs and outputs (no
+rounding).
+
+

VFMADD132SS DEST, SRC2, SRC3 (EVEX encoded version) + ¶ +

+
IF (EVEX.b = 1) and SRC3 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[31:0] := RoundFPControl(DEST[31:0]*SRC3[31:0] + SRC2[31:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[31:0] := 0
+        FI;
+FI;
+DEST[127:32] := DEST[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VFMADD213SS DEST, SRC2, SRC3 (EVEX encoded version) + ¶ +

+
IF (EVEX.b = 1) and SRC3 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[31:0] := RoundFPControl(SRC2[31:0]*DEST[31:0] + SRC3[31:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[31:0] := 0
+        FI;
+FI;
+DEST[127:32] := DEST[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VFMADD231SS DEST, SRC2, SRC3 (EVEX encoded version) + ¶ +

+
IF (EVEX.b = 1) and SRC3 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[31:0] := RoundFPControl(SRC2[31:0]*SRC3[31:0] + DEST[31:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[31:0] := 0
+        FI;
+FI;
+DEST[127:32] := DEST[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VFMADD132SS DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
DEST[31:0] := RoundFPControl_MXCSR(DEST[31:0]*SRC3[31:0] + SRC2[31:0])
+DEST[127:32] := DEST[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VFMADD213SS DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
DEST[31:0] := RoundFPControl_MXCSR(SRC2[31:0]*DEST[31:0] + SRC3[31:0])
+DEST[127:32] := DEST[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VFMADD231SS DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
DEST[31:0] := RoundFPControl_MXCSR(SRC2[31:0]*SRC3[31:0] + DEST[31:0])
+DEST[127:32] := DEST[127:32]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFMADDxxxSS __m128 _mm_fmadd_round_ss(__m128 a, __m128 b, __m128 c, int r);
+
+
VFMADDxxxSS __m128 _mm_mask_fmadd_ss(__m128 a, __mmask8 k, __m128 b, __m128 c);
+
+
VFMADDxxxSS __m128 _mm_maskz_fmadd_ss(__mmask8 k, __m128 a, __m128 b, __m128 c);
+
+
VFMADDxxxSS __m128 _mm_mask3_fmadd_ss(__m128 a, __m128 b, __m128 c, __mmask8 k);
+
+
VFMADDxxxSS __m128 _mm_mask_fmadd_round_ss(__m128 a, __mmask8 k, __m128 b, __m128 c, int r);
+
+
VFMADDxxxSS __m128 _mm_maskz_fmadd_round_ss(__mmask8 k, __m128 a, __m128 b, __m128 c, int r);
+
+
VFMADDxxxSS __m128 _mm_mask3_fmadd_round_ss(__m128 a, __m128 b, __m128 c, __mmask8 k, int r);
+
+
VFMADDxxxSS __m128 _mm_fmadd_ss (__m128 a, __m128 b, __m128 c);
+
+
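A minimal usage sketch of the unmasked scalar single precision intrinsic, assuming FMA hardware and a compiler flag such as -mfma; values are illustrative.

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128 a = _mm_set_ps(7.0f, 7.0f, 7.0f, 2.0f);  /* low element = 2.0 */
    __m128 b = _mm_set_ss(3.0f);
    __m128 c = _mm_set_ss(4.0f);

    /* Low element: a*b + c; the three upper elements are copied from a. */
    __m128 r = _mm_fmadd_ss(a, b, c);

    printf("low element = %f (expect 10.0)\n", _mm_cvtss_f32(r));
    return 0;
}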

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vfmaddrnd231pd.html b/x86/vfmaddrnd231pd.html new file mode 100644 index 0000000..63b0877 --- /dev/null +++ b/x86/vfmaddrnd231pd.html @@ -0,0 +1,130 @@ + +VFMADDRND231PD + — Fused Multiply-Add of Packed Double-Precision Floating-Point Valueswith rounding control

VFMADDRND231PD + — Fused Multiply-Add of Packed Double-Precision Floating-Point Values with Rounding Control

+ + + +
Opcode/InstructionMode SupportCPUID Feature FlagDescription
VEX.DDS.128.66.0F3A.W1 B8 /r /ib VFMADDRND231PD xmm0, xmm1, xmm2/m128, imm8V/VFMAMultiply packed double-precision floating-point values from xmm1 and xmm2/mem, add to xmm0 and put result in xmm0.
VEX.DDS.256.66.0F3A.W1 B8 /r /ib VFMADDRND231PD ymm0, ymm1, ymm2/m256, imm8V/VFMAMultiply packed double-precision floating-point values from ymm1 and ymm2/mem, add to ymm0 and put result in ymm0.
+

Description + ¶ +

+

Multiplies the two or four packed double-precision floating-point values from the second source operand to the two or four packed double-precision floating-point values in the third source operand, adds the infinite precision intermediate result to the two or four packed double-precision floating-point values in the first source operand, performs rounding and stores the resulting two or four packed double-precision floating-point values to the destination operand (first source operand).

+

The immediate byte defines several bit fields that control rounding, DAZ, FTZ, and exception suppression (see Table 5-3). The rounding mode specified in MXCSR.RC may be bypassed if the immediate bit called MS1 (MXCSR.RC Override) is set. Likewise, MXCSR.FTZ and MXCSR.DAZ may be bypassed if the immediate bit called MS2 (MXCSR.FTZ/DAZ Override) is set. If the SAE (Suppress All Exceptions) bit is set (i.e., imm8[3] = 1), the status flags in MXCSR are not updated and no SIMD floating-point exceptions are raised. When the SAE bit is not set (i.e., imm8[3] = 0), SIMD floating-point exceptions are signaled according to the MXCSR. If any result operand is an SNaN, it is converted to a QNaN.

+

VEX.256 encoded version: The destination operand (also first source operand) is a YMM register and encoded in reg_field. The second source operand is a YMM register and encoded in VEX.vvvv. The third source operand is a YMM register or a 256-bit memory location and encoded in rm_field.

+

VEX.128 encoded version: The destination operand (also first source operand) is a XMM register and encoded in reg_field. The second source operand is a XMM register and encoded in VEX.vvvv. The third source operand is a XMM register or a 128-bit memory location and encoded in rm_field. The upper 128 bits of the YMM destination register are zeroed.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
BitsField Name/valueDescriptionComment
Imm8[1:0]RC=0Round to nearest evenIf Imm8[2] = 1
RC=1Round down
RC=2Round up
RC=3Truncate
Imm8[2]MS1=0Use MXCSR.RC for rounding
MS1=1Use Imm8[1:0] for roundingIgnore MXCSR.RC
Imm8[3]SAE=0Use MXCSR Exception Mask settings
SAE=1Suppress all Exception signalingNumerical result is computed as if FP exceptions are masked.
Imm8[4]MS2=0Use MXCSR.DAZ and MXCSR.FTZ
MS2=1Use Imm8[6:5] to control DAZ/FTZ operationIgnore MXCSR.DAZ and MXCSR.FTZ
Imm8[5]DAZControl DAZIF MS2 = 1
Imm8[6]FTZControl FTZIF MS2 = 1
Imm8[7]MBZMust be zero
+
Table 5-3. Immediate Byte Encoding
+
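Since this instruction form and its intrinsics may not be available in shipping toolchains, the sketch below only illustrates how the Table 5-3 bit fields could be packed into the immediate control byte; the helper name make_fmaddrnd_imm8 is hypothetical.

#include <stdint.h>

/* Pack the Table 5-3 fields: imm8[1:0]=RC, imm8[2]=MS1, imm8[3]=SAE,
 * imm8[4]=MS2, imm8[5]=DAZ, imm8[6]=FTZ, imm8[7] must be zero. */
static inline uint8_t make_fmaddrnd_imm8(unsigned rc, unsigned ms1, unsigned sae,
                                         unsigned ms2, unsigned daz, unsigned ftz)
{
    return (uint8_t)((rc & 0x3)          /* rounding control                 */
                   | ((ms1 & 1) << 2)    /* use imm8 RC instead of MXCSR.RC  */
                   | ((sae & 1) << 3)    /* suppress all exceptions          */
                   | ((ms2 & 1) << 4)    /* use imm8 DAZ/FTZ overrides       */
                   | ((daz & 1) << 5)
                   | ((ftz & 1) << 6));
}

/* Example: round up, imm8 rounding override, suppress exceptions -> 0x0E. */
/* uint8_t ctrl = make_fmaddrnd_imm8(2, 1, 1, 0, 0, 0); */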

Compiler tools may optionally support the complementary mnemonic VFMADDRND321PD. The behavior of the complementary mnemonic in situations involving NaNs is governed by the definition of the instruction mnemonic defined in the opcode/instruction column. See also Section 2.3.1, “FMA Instruction Operand Order and Arithmetic Behavior.”

+

Operation + ¶ +

+
In the operations below, “*” and “+” symbols represent multiplication and addition with infinite precision inputs and outputs (no rounding)
+
+

VFMADDRND231PD DEST, SRC2, SRC3, imm8 + ¶ +

+
IF (VEX.128) THEN
+    MAXVL = 2
+ELSEIF (VEX.256)
+    MAXVL = 4
+FI
+IF (imm8[3] = 1) THEN
+    Suppress_SIMD_Exception_Signaling_Reporting();
+FI
+For i = 0 to MAXVL-1 {
+    n = 64*i;
+    DEST[n+63:n]←RoundFPControl_Imm((SRC2[n+63:n]*SRC3[n+63:n] + DEST[n+63:n]), imm8)
+}
+IF (VEX.128) THEN
+DEST[255:128] ← 0
+FI
+IF (imm8[3] = 1) THEN
+    Resume_SIMD_Exception_Signaling_Reporting();
+FI
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFMADDRND231PD __m128d _mm_fmaddround_pd (__m128d a, __m128d b, __m128d c, const int ctrl);
+
+
VFMADDRND231PD __m256d _mm256_fmaddround_pd (__m256d a, __m256d b, __m256d c, const int ctrl);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

IF imm[3] = 1 Then

+

None

+

Else

+

Overflow, Underflow, Invalid, Precision, Denormal

+

FI

+

Other Exceptions + ¶ +

+

See Exceptions Type 2

diff --git a/x86/vfmaddsub132pd.vfmaddsub213pd.vfmaddsub231pd.html b/x86/vfmaddsub132pd.vfmaddsub213pd.vfmaddsub231pd.html new file mode 100644 index 0000000..3a9dd44 --- /dev/null +++ b/x86/vfmaddsub132pd.vfmaddsub213pd.vfmaddsub231pd.html @@ -0,0 +1,439 @@ + +VFMADDSUB132PD/VFMADDSUB213PD/VFMADDSUB231PD + — Fused Multiply-AlternatingAdd/Subtract of Packed Double Precision Floating-Point Values

VFMADDSUB132PD/VFMADDSUB213PD/VFMADDSUB231PD + — Fused Multiply-Alternating Add/Subtract of Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W1 96 /r VFMADDSUB132PD xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed double precision floating-point values from xmm1 and xmm3/mem, add/subtract elements in xmm2 and put result in xmm1.
VEX.128.66.0F38.W1 A6 /r VFMADDSUB213PD xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed double precision floating-point values from xmm1 and xmm2, add/subtract elements in xmm3/mem and put result in xmm1.
VEX.128.66.0F38.W1 B6 /r VFMADDSUB231PD xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed double precision floating-point values from xmm2 and xmm3/mem, add/subtract elements in xmm1 and put result in xmm1.
VEX.256.66.0F38.W1 96 /r VFMADDSUB132PD ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed double precision floating-point values from ymm1 and ymm3/mem, add/subtract elements in ymm2 and put result in ymm1.
VEX.256.66.0F38.W1 A6 /r VFMADDSUB213PD ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed double precision floating-point values from ymm1 and ymm2, add/subtract elements in ymm3/mem and put result in ymm1.
VEX.256.66.0F38.W1 B6 /r VFMADDSUB231PD ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed double precision floating-point values from ymm2 and ymm3/mem, add/subtract elements in ymm1 and put result in ymm1.
EVEX.128.66.0F38.W1 A6 /r VFMADDSUB213PD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from xmm1 and xmm2, add/subtract elements in xmm3/m128/m64bcst and put result in xmm1 subject to writemask k1.
EVEX.128.66.0F38.W1 B6 /r VFMADDSUB231PD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from xmm2 and xmm3/m128/m64bcst, add/subtract elements in xmm1 and put result in xmm1 subject to writemask k1.
EVEX.128.66.0F38.W1 96 /r VFMADDSUB132PD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from xmm1 and xmm3/m128/m64bcst, add/subtract elements in xmm2 and put result in xmm1 subject to writemask k1.
EVEX.256.66.0F38.W1 A6 /r VFMADDSUB213PD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from ymm1 and ymm2, add/subtract elements in ymm3/m256/m64bcst and put result in ymm1 subject to writemask k1.
EVEX.256.66.0F38.W1 B6 /r VFMADDSUB231PD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from ymm2 and ymm3/m256/m64bcst, add/subtract elements in ymm1 and put result in ymm1 subject to writemask k1.
EVEX.256.66.0F38.W1 96 /r VFMADDSUB132PD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from ymm1 and ymm3/m256/m64bcst, add/subtract elements in ymm2 and put result in ymm1 subject to writemask k1.
EVEX.512.66.0F38.W1 A6 /r VFMADDSUB213PD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er}BV/VAVX512FMultiply packed double precision floating-point values from zmm1 and zmm2, add/subtract elements in zmm3/m512/m64bcst and put result in zmm1 subject to writemask k1.
EVEX.512.66.0F38.W1 B6 /r VFMADDSUB231PD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er}BV/VAVX512FMultiply packed double precision floating-point values from zmm2 and zmm3/m512/m64bcst, add/subtract elements in zmm1 and put result in zmm1 subject to writemask k1.
EVEX.512.66.0F38.W1 96 /r VFMADDSUB132PD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er}BV/VAVX512FMultiply packed double precision floating-point values from zmm1 and zmm3/m512/m64bcst, add/subtract elements in zmm2 and put result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
BFullModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

VFMADDSUB132PD: Multiplies the two, four, or eight packed double precision floating-point values from the first source operand to the two, four, or eight packed double precision floating-point values in the third source operand. From the infinite precision intermediate result, adds the odd double precision floating-point elements and subtracts the even double precision floating-point values in the second source operand, performs rounding and stores the resulting two, four, or eight packed double precision floating-point values to the destination operand (first source operand).

+

VFMADDSUB213PD: Multiplies the two, four, or eight packed double precision floating-point values from the second source operand to the two, four, or eight packed double precision floating-point values in the first source operand. From the infinite precision intermediate result, adds the odd double precision floating-point elements and subtracts the even double precision floating-point values in the third source operand, performs rounding and stores the resulting two, four, or eight packed double precision floating-point values to the destination operand (first source operand).

+

VFMADDSUB231PD: Multiplies the two, four, or eight packed double precision floating-point values from the second source operand to the two, four, or eight packed double precision floating-point values in the third source operand. From the infinite precision intermediate result, adds the odd double precision floating-point elements and subtracts the even double precision floating-point values in the first source operand, performs rounding and stores the resulting two, four, or eight packed double precision floating-point values to the destination operand (first source operand).

+

EVEX encoded versions: The destination operand (also first source operand) and the second source operand are ZMM/YMM/XMM registers. The third source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand is conditionally updated with write mask k1.

+

VEX.256 encoded version: The destination operand (also first source operand) is a YMM register and encoded in reg_field. The second source operand is a YMM register and encoded in VEX.vvvv. The third source operand is a YMM register or a 256-bit memory location and encoded in rm_field.

+

VEX.128 encoded version: The destination operand (also first source operand) is a XMM register and encoded in reg_field. The second source operand is a XMM register and encoded in VEX.vvvv. The third source operand is a XMM register or a 128-bit memory location and encoded in rm_field. The upper 128 bits of the YMM destination register are zeroed.

+

Compiler tools may optionally support a complementary mnemonic for each instruction mnemonic listed in the opcode/instruction column of the summary table. The behavior of the complementary mnemonic in situations involving NaNs is governed by the definition of the instruction mnemonic defined in the opcode/instruction column.

+

Operation + ¶ +

+
In the operations below, “*”, “+”, and “-” symbols represent multiplication, addition, and subtraction with infinite precision inputs and outputs (no
+rounding).
+
+

VFMADDSUB132PD DEST, SRC2, SRC3 + ¶ +

+
IF (VEX.128) THEN
+    DEST[63:0] := RoundFPControl_MXCSR(DEST[63:0]*SRC3[63:0] - SRC2[63:0])
+    DEST[127:64] := RoundFPControl_MXCSR(DEST[127:64]*SRC3[127:64] + SRC2[127:64])
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[63:0] := RoundFPControl_MXCSR(DEST[63:0]*SRC3[63:0] - SRC2[63:0])
+    DEST[127:64] := RoundFPControl_MXCSR(DEST[127:64]*SRC3[127:64] + SRC2[127:64])
+    DEST[191:128] := RoundFPControl_MXCSR(DEST[191:128]*SRC3[191:128] - SRC2[191:128])
+    DEST[255:192] := RoundFPControl_MXCSR(DEST[255:192]*SRC3[255:192] + SRC2[255:192])
+FI
+
+

VFMADDSUB213PD DEST, SRC2, SRC3 + ¶ +

+
IF (VEX.128) THEN
+    DEST[63:0] := RoundFPControl_MXCSR(SRC2[63:0]*DEST[63:0] - SRC3[63:0])
+    DEST[127:64] := RoundFPControl_MXCSR(SRC2[127:64]*DEST[127:64] + SRC3[127:64])
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[63:0] := RoundFPControl_MXCSR(SRC2[63:0]*DEST[63:0] - SRC3[63:0])
+    DEST[127:64] := RoundFPControl_MXCSR(SRC2[127:64]*DEST[127:64] + SRC3[127:64])
+    DEST[191:128] := RoundFPControl_MXCSR(SRC2[191:128]*DEST[191:128] - SRC3[191:128])
+    DEST[255:192] := RoundFPControl_MXCSR(SRC2[255:192]*DEST[255:192] + SRC3[255:192])
+FI
+
+

VFMADDSUB231PD DEST, SRC2, SRC3 + ¶ +

+
IF (VEX.128) THEN
+    DEST[63:0] := RoundFPControl_MXCSR(SRC2[63:0]*SRC3[63:0] - DEST[63:0])
+    DEST[127:64] := RoundFPControl_MXCSR(SRC2[127:64]*SRC3[127:64] + DEST[127:64])
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[63:0] := RoundFPControl_MXCSR(SRC2[63:0]*SRC3[63:0] - DEST[63:0])
+    DEST[127:64] := RoundFPControl_MXCSR(SRC2[127:64]*SRC3[127:64] + DEST[127:64])
+    DEST[191:128] := RoundFPControl_MXCSR(SRC2[191:128]*SRC3[191:128] - DEST[191:128])
+    DEST[255:192] := RoundFPControl_MXCSR(SRC2[255:192]*SRC3[255:192] + DEST[255:192])
+FI
+
+

VFMADDSUB132PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF j *is even*
+                THEN DEST[i+63:i] :=
+                    RoundFPControl(DEST[i+63:i]*SRC3[i+63:i] - SRC2[i+63:i])
+                ELSE DEST[i+63:i] :=
+                    RoundFPControl(DEST[i+63:i]*SRC3[i+63:i] + SRC2[i+63:i])
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMADDSUB132PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF j *is even*
+                THEN
+                    IF (EVEX.b = 1)
+                        THEN
+                            DEST[i+63:i] :=
+                    RoundFPControl_MXCSR(DEST[i+63:i]*SRC3[63:0] - SRC2[i+63:i])
+                        ELSE
+                            DEST[i+63:i] :=
+                    RoundFPControl_MXCSR(DEST[i+63:i]*SRC3[i+63:i] - SRC2[i+63:i])
+                FI;
+                ELSE
+                    IF (EVEX.b = 1)
+                        THEN
+                            DEST[i+63:i] :=
+                    RoundFPControl_MXCSR(DEST[i+63:i]*SRC3[63:0] + SRC2[i+63:i])
+                        ELSE
+                            DEST[i+63:i] :=
+                    RoundFPControl_MXCSR(DEST[i+63:i]*SRC3[i+63:i] + SRC2[i+63:i])
+                FI;
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMADDSUB213PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF j *is even*
+                THEN DEST[i+63:i] :=
+                    RoundFPControl(SRC2[i+63:i]*DEST[i+63:i] - SRC3[i+63:i])
+                ELSE DEST[i+63:i] :=
+                    RoundFPControl(SRC2[i+63:i]*DEST[i+63:i] + SRC3[i+63:i])
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMADDSUB213PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF j *is even*
+                THEN
+                    IF (EVEX.b = 1)
+                        THEN
+                            DEST[i+63:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+63:i]*DEST[i+63:i] - SRC3[63:0])
+                        ELSE
+                            DEST[i+63:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+63:i]*DEST[i+63:i] - SRC3[i+63:i])
+                    FI;
+                ELSE
+                    IF (EVEX.b = 1)
+                        THEN
+                            DEST[i+63:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+63:i]*DEST[i+63:i] + SRC3[63:0])
+                        ELSE
+                            DEST[i+63:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+63:i]*DEST[i+63:i] + SRC3[i+63:i])
+                    FI;
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMADDSUB231PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF j *is even*
+                THEN DEST[i+63:i] :=
+                    RoundFPControl(SRC2[i+63:i]*SRC3[i+63:i] - DEST[i+63:i])
+                ELSE DEST[i+63:i] :=
+                    RoundFPControl(SRC2[i+63:i]*SRC3[i+63:i] + DEST[i+63:i])
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMADDSUB231PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF j *is even*
+                THEN
+                    IF (EVEX.b = 1)
+                        THEN
+                            DEST[i+63:i] :=
+                        RoundFPControl_MXCSR(SRC2[i+63:i]*SRC3[63:0] - DEST[i+63:i])
+                        ELSE
+                            DEST[i+63:i] :=
+                        RoundFPControl_MXCSR(SRC2[i+63:i]*SRC3[i+63:i] - DEST[i+63:i])
+                    FI;
+                ELSE
+                    IF (EVEX.b = 1)
+                        THEN
+                            DEST[i+63:i] :=
+                        RoundFPControl_MXCSR(SRC2[i+63:i]*SRC3[63:0] + DEST[i+63:i])
+                        ELSE
+                            DEST[i+63:i] :=
+                        RoundFPControl_MXCSR(SRC2[i+63:i]*SRC3[i+63:i] + DEST[i+63:i])
+                    FI;
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFMADDSUBxxxPD __m512d _mm512_fmaddsub_pd(__m512d a, __m512d b, __m512d c);
+
+
VFMADDSUBxxxPD __m512d _mm512_fmaddsub_round_pd(__m512d a, __m512d b, __m512d c, int r);
+
+
VFMADDSUBxxxPD __m512d _mm512_mask_fmaddsub_pd(__m512d a, __mmask8 k, __m512d b, __m512d c);
+
+
VFMADDSUBxxxPD __m512d _mm512_maskz_fmaddsub_pd(__mmask8 k, __m512d a, __m512d b, __m512d c);
+
+
VFMADDSUBxxxPD __m512d _mm512_mask3_fmaddsub_pd(__m512d a, __m512d b, __m512d c, __mmask8 k);
+
+
VFMADDSUBxxxPD __m512d _mm512_mask_fmaddsub_round_pd(__m512d a, __mmask8 k, __m512d b, __m512d c, int r);
+
+
VFMADDSUBxxxPD __m512d _mm512_maskz_fmaddsub_round_pd(__mmask8 k, __m512d a, __m512d b, __m512d c, int r);
+
+
VFMADDSUBxxxPD __m512d _mm512_mask3_fmaddsub_round_pd(__m512d a, __m512d b, __m512d c, __mmask8 k, int r);
+
+
VFMADDSUBxxxPD __m256d _mm256_mask_fmaddsub_pd(__m256d a, __mmask8 k, __m256d b, __m256d c);
+
+
VFMADDSUBxxxPD __m256d _mm256_maskz_fmaddsub_pd(__mmask8 k, __m256d a, __m256d b, __m256d c);
+
+
VFMADDSUBxxxPD __m256d _mm256_mask3_fmaddsub_pd(__m256d a, __m256d b, __m256d c, __mmask8 k);
+
+
VFMADDSUBxxxPD __m128d _mm_mask_fmaddsub_pd(__m128d a, __mmask8 k, __m128d b, __m128d c);
+
+
VFMADDSUBxxxPD __m128d _mm_maskz_fmaddsub_pd(__mmask8 k, __m128d a, __m128d b, __m128d c);
+
+
VFMADDSUBxxxPD __m128d _mm_mask3_fmaddsub_pd(__m128d a, __m128d b, __m128d c, __mmask8 k);
+
+
VFMADDSUBxxxPD __m128d _mm_fmaddsub_pd (__m128d a, __m128d b, __m128d c);
+
+
VFMADDSUBxxxPD __m256d _mm256_fmaddsub_pd (__m256d a, __m256d b, __m256d c);
+
+
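A minimal usage sketch of the unmasked 128-bit intrinsic, assuming FMA hardware and a compiler flag such as -mfma; it makes the alternating pattern visible (even-indexed element subtracts, odd-indexed element adds).

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128d a = _mm_set_pd(2.0, 2.0);
    __m128d b = _mm_set_pd(3.0, 3.0);
    __m128d c = _mm_set_pd(1.0, 1.0);

    /* Element 0 (even): a*b - c = 5; element 1 (odd): a*b + c = 7. */
    __m128d r = _mm_fmaddsub_pd(a, b, c);

    double out[2];
    _mm_storeu_pd(out, r);
    printf("element 0 = %f (expect 5.0), element 1 = %f (expect 7.0)\n", out[0], out[1]);
    return 0;
}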

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vfmaddsub132ph.vfmaddsub213ph.vfmaddsub231ph.html b/x86/vfmaddsub132ph.vfmaddsub213ph.vfmaddsub231ph.html new file mode 100644 index 0000000..dc4c757 --- /dev/null +++ b/x86/vfmaddsub132ph.vfmaddsub213ph.vfmaddsub231ph.html @@ -0,0 +1,282 @@ + +VFMADDSUB132PH/VFMADDSUB213PH/VFMADDSUB231PH + — Fused Multiply-AlternatingAdd/Subtract of Packed FP16 Values

VFMADDSUB132PH/VFMADDSUB213PH/VFMADDSUB231PH + — Fused Multiply-Alternating Add/Subtract of Packed FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.MAP6.W0 96 /r VFMADDSUB132PH xmm1{k1}{z}, xmm2, xmm3/m128/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from xmm1 and xmm3/m128/m16bcst, add/subtract elements in xmm2, and store the result in xmm1 subject to writemask k1.
EVEX.256.66.MAP6.W0 96 /r VFMADDSUB132PH ymm1{k1}{z}, ymm2, ymm3/m256/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from ymm1 and ymm3/m256/m16bcst, add/subtract elements in ymm2, and store the result in ymm1 subject to writemask k1.
EVEX.512.66.MAP6.W0 96 /r VFMADDSUB132PH zmm1{k1}{z}, zmm2, zmm3/m512/m16bcst {er}AV/VAVX512-FP16Multiply packed FP16 values from zmm1 and zmm3/m512/m16bcst, add/subtract elements in zmm2, and store the result in zmm1 subject to writemask k1.
EVEX.128.66.MAP6.W0 A6 /r VFMADDSUB213PH xmm1{k1}{z}, xmm2, xmm3/m128/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from xmm1 and xmm2, add/subtract elements in xmm3/m128/m16bcst, and store the result in xmm1 subject to writemask k1.
EVEX.256.66.MAP6.W0 A6 /r VFMADDSUB213PH ymm1{k1}{z}, ymm2, ymm3/m256/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from ymm1 and ymm2, add/subtract elements in ymm3/m256/m16bcst, and store the result in ymm1 subject to writemask k1.
EVEX.512.66.MAP6.W0 A6 /r VFMADDSUB213PH zmm1{k1}{z}, zmm2, zmm3/m512/m16bcst {er}AV/VAVX512-FP16Multiply packed FP16 values from zmm1 and zmm2, add/subtract elements in zmm3/m512/m16bcst, and store the result in zmm1 subject to writemask k1.
EVEX.128.66.MAP6.W0 B6 /r VFMADDSUB231PH xmm1{k1}{z}, xmm2, xmm3/m128/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from xmm2 and xmm3/m128/m16bcst, add/subtract elements in xmm1, and store the result in xmm1 subject to writemask k1.
EVEX.256.66.MAP6.W0 B6 /r VFMADDSUB231PH ymm1{k1}{z}, ymm2, ymm3/m256/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from ymm2 and ymm3/m256/m16bcst, add/subtract elements in ymm1, and store the result in ymm1 subject to writemask k1.
EVEX.512.66.MAP6.W0 B6 /r VFMADDSUB231PH zmm1{k1}{z}, zmm2, zmm3/m512/m16bcst {er}AV/VAVX512-FP16Multiply packed FP16 values from zmm2 and zmm3/m512/m16bcst, add/subtract elements in zmm1, and store the result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction performs a packed multiply-add (odd elements) or multiply-subtract (even elements) computation on FP16 values using three source operands and writes the results in the destination operand. The destination operand is also the first source operand. The notations “132”, “213”, and “231” indicate the use of the operands in A * B ± C, where each digit corresponds to the operand number, with the destination being operand 1; see Table 5-5.

+

The destination elements are updated according to the writemask.

+
+ + + + + + + + + + + + + + + + +
NotationOdd ElementsEven Elements
132dest = dest*src3+src2dest = dest*src3-src2
231dest = src2*src3+destdest = src2*src3-dest
213dest = src2*dest+src3dest = src2*dest-src3
+
Table 5-5. VFMADDSUB[132,213,231]PH Notation for Odd and Even Elements
+

Operation + ¶ +

+

VFMADDSUB132PH DEST, SRC2, SRC3 (EVEX encoded versions) when src3 operand is a register + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+IF (VL = 512) AND (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *j is even*:
+            DEST.fp16[j] := RoundFPControl(DEST.fp16[j] * SRC3.fp16[j] - SRC2.fp16[j])
+        ELSE:
+            DEST.fp16[j] := RoundFPControl(DEST.fp16[j] * SRC3.fp16[j] + SRC2.fp16[j])
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+// else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

VFMADDSUB132PH DEST, SRC2, SRC3 (EVEX encoded versions) when src3 operand is a memory source + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF EVEX.b = 1:
+            t3 := SRC3.fp16[0]
+        ELSE:
+            t3 := SRC3.fp16[j]
+        IF *j is even*:
+            DEST.fp16[j] := RoundFPControl(DEST.fp16[j] * t3 - SRC2.fp16[j])
+        ELSE:
+            DEST.fp16[j] := RoundFPControl(DEST.fp16[j] * t3 + SRC2.fp16[j])
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

VFMADDSUB213PH DEST, SRC2, SRC3 (EVEX encoded versions) when src3 operand is a register + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+IF (VL = 512) AND (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *j is even*:
+            DEST.fp16[j] := RoundFPControl(SRC2.fp16[j]*DEST.fp16[j] - SRC3.fp16[j])
+        ELSE
+            DEST.fp16[j] := RoundFPControl(SRC2.fp16[j]*DEST.fp16[j] + SRC3.fp16[j])
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

VFMADDSUB213PH DEST, SRC2, SRC3 (EVEX encoded versions) when src3 operand is a memory source + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF EVEX.b = 1:
+            t3 := SRC3.fp16[0]
+        ELSE:
+            t3 := SRC3.fp16[j]
+        IF *j is even*:
+            DEST.fp16[j] := RoundFPControl(SRC2.fp16[j] * DEST.fp16[j] - t3)
+        ELSE:
+            DEST.fp16[j] := RoundFPControl(SRC2.fp16[j] * DEST.fp16[j] + t3)
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

VFMADDSUB231PH DEST, SRC2, SRC3 (EVEX encoded versions) when src3 operand is a register + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+IF (VL = 512) AND (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *j is even*:
+            DEST.fp16[j] := RoundFPControl(SRC2.fp16[j] * SRC3.fp16[j] - DEST.fp16[j])
+        ELSE:
+            DEST.fp16[j] := RoundFPControl(SRC2.fp16[j] * SRC3.fp16[j] + DEST.fp16[j])
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

VFMADDSUB231PH DEST, SRC2, SRC3 (EVEX encoded versions) when src3 operand is a memory source + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF EVEX.b = 1:
+            t3 := SRC3.fp16[0]
+        ELSE:
+            t3 := SRC3.fp16[j]
+        IF *j is even*:
+            DEST.fp16[j] := RoundFPControl(SRC2.fp16[j] * t3 - DEST.fp16[j])
+        ELSE:
+            DEST.fp16[j] := RoundFPControl(SRC2.fp16[j] * t3 + DEST.fp16[j])
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFMADDSUB132PH, VFMADDSUB213PH, and VFMADDSUB231PH: __m128h _mm_fmaddsub_ph (__m128h a, __m128h b, __m128h c);
+
+
__m128h _mm_mask_fmaddsub_ph (__m128h a, __mmask8 k, __m128h b, __m128h c);
+
+
__m128h _mm_mask3_fmaddsub_ph (__m128h a, __m128h b, __m128h c, __mmask8 k);
+
+
__m128h _mm_maskz_fmaddsub_ph (__mmask8 k, __m128h a, __m128h b, __m128h c);
+
+
__m256h _mm256_fmaddsub_ph (__m256h a, __m256h b, __m256h c);
+
+
__m256h _mm256_mask_fmaddsub_ph (__m256h a, __mmask16 k, __m256h b, __m256h c);
+
+
__m256h _mm256_mask3_fmaddsub_ph (__m256h a, __m256h b, __m256h c, __mmask16 k);
+
+
__m256h _mm256_maskz_fmaddsub_ph (__mmask16 k, __m256h a, __m256h b, __m256h c);
+
+
__m512h _mm512_fmaddsub_ph (__m512h a, __m512h b, __m512h c);
+
+
__m512h _mm512_mask_fmaddsub_ph (__m512h a, __mmask32 k, __m512h b, __m512h c);
+
+
__m512h _mm512_mask3_fmaddsub_ph (__m512h a, __m512h b, __m512h c, __mmask32 k);
+
+
__m512h _mm512_maskz_fmaddsub_ph (__mmask32 k, __m512h a, __m512h b, __m512h c);
+
+
__m512h _mm512_fmaddsub_round_ph (__m512h a, __m512h b, __m512h c, const int rounding);
+
+
__m512h _mm512_mask_fmaddsub_round_ph (__m512h a, __mmask32 k, __m512h b, __m512h c, const int rounding);
+
+
__m512h _mm512_mask3_fmaddsub_round_ph (__m512h a, __m512h b, __m512h c, __mmask32 k, const int rounding);
+
+
__m512h _mm512_maskz_fmaddsub_round_ph (__mmask32 k, __m512h a, __m512h b, __m512h c, const int rounding);
+
+
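A minimal usage sketch of the unmasked 128-bit FP16 intrinsic, assuming AVX512-FP16 plus AVX512VL hardware and a compiler with _Float16 support (for example -mavx512fp16 -mavx512vl), and that the helpers _mm_set1_ph and _mm_storeu_ph are available; values are illustrative.

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128h a = _mm_set1_ph((_Float16)2.0f);
    __m128h b = _mm_set1_ph((_Float16)3.0f);
    __m128h c = _mm_set1_ph((_Float16)1.0f);

    /* Even-indexed elements: a*b - c = 5; odd-indexed elements: a*b + c = 7. */
    __m128h r = _mm_fmaddsub_ph(a, b, c);

    _Float16 out[8];
    _mm_storeu_ph(out, r);
    printf("out[0] = %f (expect 5.0), out[1] = %f (expect 7.0)\n",
           (double)out[0], (double)out[1]);
    return 0;
}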

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Overflow, Precision, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vfmaddsub132ps.vfmaddsub213ps.vfmaddsub231ps.html b/x86/vfmaddsub132ps.vfmaddsub213ps.vfmaddsub231ps.html new file mode 100644 index 0000000..1a19610 --- /dev/null +++ b/x86/vfmaddsub132ps.vfmaddsub213ps.vfmaddsub231ps.html @@ -0,0 +1,454 @@ + +VFMADDSUB132PS/VFMADDSUB213PS/VFMADDSUB231PS + — Fused Multiply-AlternatingAdd/Subtract of Packed Single Precision Floating-Point Values

VFMADDSUB132PS/VFMADDSUB213PS/VFMADDSUB231PS + — Fused Multiply-Alternating Add/Subtract of Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W0 96 /r VFMADDSUB132PS xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed single precision floating-point values from xmm1 and xmm3/mem, add/subtract elements in xmm2 and put result in xmm1.
VEX.128.66.0F38.W0 A6 /r VFMADDSUB213PS xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed single precision floating-point values from xmm1 and xmm2, add/subtract elements in xmm3/mem and put result in xmm1.
VEX.128.66.0F38.W0 B6 /r VFMADDSUB231PS xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed single precision floating-point values from xmm2 and xmm3/mem, add/subtract elements in xmm1 and put result in xmm1.
VEX.256.66.0F38.W0 96 /r VFMADDSUB132PS ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed single precision floating-point values from ymm1 and ymm3/mem, add/subtract elements in ymm2 and put result in ymm1.
VEX.256.66.0F38.W0 A6 /r VFMADDSUB213PS ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed single precision floating-point values from ymm1 and ymm2, add/subtract elements in ymm3/mem and put result in ymm1.
VEX.256.66.0F38.W0 B6 /r VFMADDSUB231PS ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed single precision floating-point values from ymm2 and ymm3/mem, add/subtract elements in ymm1 and put result in ymm1.
EVEX.128.66.0F38.W0 A6 /r VFMADDSUB213PS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from xmm1 and xmm2, add/subtract elements in xmm3/m128/m32bcst and put result in xmm1 subject to writemask k1.
EVEX.128.66.0F38.W0 B6 /r VFMADDSUB231PS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from xmm2 and xmm3/m128/m32bcst, add/subtract elements in xmm1 and put result in xmm1 subject to writemask k1.
EVEX.128.66.0F38.W0 96 /r VFMADDSUB132PS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from xmm1 and xmm3/m128/m32bcst, add/subtract elements in xmm2 and put result in xmm1 subject to writemask k1.
EVEX.256.66.0F38.W0 A6 /r VFMADDSUB213PS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from ymm1 and ymm2, add/subtract elements in ymm3/m256/m32bcst and put result in ymm1 subject to writemask k1.
EVEX.256.66.0F38.W0 B6 /r VFMADDSUB231PS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from ymm2 and ymm3/m256/m32bcst, add/subtract elements in ymm1 and put result in ymm1 subject to writemask k1.
EVEX.256.66.0F38.W0 96 /r VFMADDSUB132PS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from ymm1 and ymm3/m256/m32bcst, add/subtract elements in ymm2 and put result in ymm1 subject to writemask k1.
EVEX.512.66.0F38.W0 A6 /r VFMADDSUB213PS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{er}BV/VAVX512FMultiply packed single precision floating-point values from zmm1 and zmm2, add/subtract elements in zmm3/m512/m32bcst and put result in zmm1 subject to writemask k1.
EVEX.512.66.0F38.W0 B6 /r VFMADDSUB231PS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{er}BV/VAVX512FMultiply packed single precision floating-point values from zmm2 and zmm3/m512/m32bcst, add/subtract elements in zmm1 and put result in zmm1 subject to writemask k1.
EVEX.512.66.0F38.W0 96 /r VFMADDSUB132PS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{er}BV/VAVX512FMultiply packed single precision floating-point values from zmm1 and zmm3/m512/m32bcst, add/subtract elements in zmm2 and put result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
BFullModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

VFMADDSUB132PS: Multiplies the four, eight or sixteen packed single precision floating-point values from the first source operand to the corresponding packed single precision floating-point values in the third source operand. From the infinite precision intermediate result, adds the odd single precision floating-point elements and subtracts the even single precision floating-point values in the second source operand, performs rounding and stores the resulting packed single precision floating-point values to the destination operand (first source operand).

+

VFMADDSUB213PS: Multiplies the four, eight or sixteen packed single precision floating-point values from the second source operand to the corresponding packed single precision floating-point values in the first source operand. From the infinite precision intermediate result, adds the odd single precision floating-point elements and subtracts the even single precision floating-point values in the third source operand, performs rounding and stores the resulting packed single precision floating-point values to the destination operand (first source operand).

+

VFMADDSUB231PS: Multiplies the four, eight or sixteen packed single precision floating-point values from the second source operand to the corresponding packed single precision floating-point values in the third source operand. From the infinite precision intermediate result, adds the odd single precision floating-point elements and subtracts the even single precision floating-point values in the first source operand, performs rounding and stores the resulting packed single precision floating-point values to the destination operand (first source operand).

+

EVEX encoded versions: The destination operand (also first source operand) and the second source operand are ZMM/YMM/XMM registers. The third source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand is conditionally updated with write mask k1.

+

VEX.256 encoded version: The destination operand (also first source operand) is a YMM register and encoded in reg_field. The second source operand is a YMM register and encoded in VEX.vvvv. The third source operand is a YMM register or a 256-bit memory location and encoded in rm_field.

+

VEX.128 encoded version: The destination operand (also first source operand) is a XMM register and encoded in reg_field. The second source operand is a XMM register and encoded in VEX.vvvv. The third source operand is a XMM register or a 128-bit memory location and encoded in rm_field. The upper 128 bits of the YMM destination register are zeroed.

+

Compiler tools may optionally support a complementary mnemonic for each instruction mnemonic listed in the opcode/instruction column of the summary table. The behavior of the complementary mnemonic in situations involving NaNs is governed by the definition of the instruction mnemonic defined in the opcode/instruction column.
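The even-subtract/odd-add interleaving is easier to see in a scalar reference model. The C sketch below is illustrative only and is not the SDM pseudocode; the names fmaddsub132_ps_ref, dest, src2, src3, and n are invented here, and the double-precision intermediate is merely a stand-in for the infinite-precision product described above.

#include <stddef.h>

/* Reference model of the 132 form: dest[i] = dest[i]*src3[i] -/+ src2[i].
   Even-indexed elements subtract src2, odd-indexed elements add src2;
   hardware rounds once from the infinite-precision product. */
static void fmaddsub132_ps_ref(float *dest, const float *src2,
                               const float *src3, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        double prod = (double)dest[i] * (double)src3[i]; /* approximation only */
        dest[i] = (i % 2 == 0) ? (float)(prod - (double)src2[i])
                               : (float)(prod + (double)src2[i]);
    }
}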

+

Operation + ¶ +

+
In the operations below, “*” and “+” symbols represent multiplication and addition with infinite precision inputs and outputs (no
+rounding).
+
+

VFMADDSUB132PS DEST, SRC2, SRC3 + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 2
+ELSEIF (VEX.256)
+    MAXNUM := 4
+FI
+For i = 0 to MAXNUM -1{
+    n := 64*i;
+    DEST[n+31:n] := RoundFPControl_MXCSR(DEST[n+31:n]*SRC3[n+31:n] - SRC2[n+31:n])
+    DEST[n+63:n+32] := RoundFPControl_MXCSR(DEST[n+63:n+32]*SRC3[n+63:n+32] + SRC2[n+63:n+32])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFMADDSUB213PS DEST, SRC2, SRC3 + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 2
+ELSEIF (VEX.256)
+    MAXNUM := 4
+FI
+For i = 0 to MAXNUM -1{
+    n := 64*i;
+    DEST[n+31:n] := RoundFPControl_MXCSR(SRC2[n+31:n]*DEST[n+31:n] - SRC3[n+31:n])
+    DEST[n+63:n+32] := RoundFPControl_MXCSR(SRC2[n+63:n+32]*DEST[n+63:n+32] + SRC3[n+63:n+32])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFMADDSUB231PS DEST, SRC2, SRC3 + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 2
+ELSEIF (VEX.256)
+    MAXNUM := 4
+FI
+For i = 0 to MAXNUM -1{
+    n := 64*i;
+    DEST[n+31:n] := RoundFPControl_MXCSR(SRC2[n+31:n]*SRC3[n+31:n] - DEST[n+31:n])
+    DEST[n+63:n+32] := RoundFPControl_MXCSR(SRC2[n+63:n+32]*SRC3[n+63:n+32] + DEST[n+63:n+32])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFMADDSUB132PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF j *is even*
+                THEN DEST[i+31:i] :=
+                    RoundFPControl(DEST[i+31:i]*SRC3[i+31:i] - SRC2[i+31:i])
+                ELSE DEST[i+31:i] :=
+                    RoundFPControl(DEST[i+31:i]*SRC3[i+31:i] + SRC2[i+31:i])
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMADDSUB132PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF j *is even*
+                THEN
+                    IF (EVEX.b = 1)
+                        THEN
+                            DEST[i+31:i] :=
+                    RoundFPControl_MXCSR(DEST[i+31:i]*SRC3[31:0] - SRC2[i+31:i])
+                        ELSE
+                            DEST[i+31:i] :=
+                    RoundFPControl_MXCSR(DEST[i+31:i]*SRC3[i+31:i] - SRC2[i+31:i])
+                    FI;
+                ELSE
+                    IF (EVEX.b = 1)
+                        THEN
+                            DEST[i+31:i] :=
+                    RoundFPControl_MXCSR(DEST[i+31:i]*SRC3[31:0] + SRC2[i+31:i])
+                        ELSE
+                            DEST[i+31:i] :=
+                    RoundFPControl_MXCSR(DEST[i+31:i]*SRC3[i+31:i] + SRC2[i+31:i])
+                    FI;
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMADDSUB213PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF j *is even*
+                THEN DEST[i+31:i] :=
+                    RoundFPControl(SRC2[i+31:i]*DEST[i+31:i] - SRC3[i+31:i])
+                ELSE DEST[i+31:i] :=
+                    RoundFPControl(SRC2[i+31:i]*DEST[i+31:i] + SRC3[i+31:i])
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMADDSUB213PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF j *is even*
+                THEN
+                    IF (EVEX.b = 1)
+                        THEN
+                            DEST[i+31:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+31:i]*DEST[i+31:i] - SRC3[31:0])
+                        ELSE
+                            DEST[i+31:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+31:i]*DEST[i+31:i] - SRC3[i+31:i])
+                    FI;
+                ELSE
+                    IF (EVEX.b = 1)
+                        THEN
+                            DEST[i+31:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+31:i]*DEST[i+31:i] + SRC3[31:0])
+                        ELSE
+                            DEST[i+31:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+31:i]*DEST[i+31:i] + SRC3[i+31:i])
+                    FI;
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMADDSUB231PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF j *is even*
+                THEN DEST[i+31:i] :=
+                    RoundFPControl(SRC2[i+31:i]*SRC3[i+31:i] - DEST[i+31:i])
+                ELSE DEST[i+31:i] :=
+                    RoundFPControl(SRC2[i+31:i]*SRC3[i+31:i] + DEST[i+31:i])
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMADDSUB231PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF j *is even*
+                THEN
+                    IF (EVEX.b = 1)
+                        THEN
+                            DEST[i+31:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+31:i]*SRC3[31:0] - DEST[i+31:i])
+                        ELSE
+                            DEST[i+31:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+31:i]*SRC3[i+31:i] - DEST[i+31:i])
+                    FI;
+                ELSE
+                    IF (EVEX.b = 1)
+                        THEN
+                            DEST[i+31:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+31:i]*SRC3[31:0] + DEST[i+31:i])
+                        ELSE
+                            DEST[i+31:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+31:i]*SRC3[i+31:i] + DEST[i+31:i])
+                    FI;
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFMADDSUBxxxPS __m512 _mm512_fmaddsub_ps(__m512 a, __m512 b, __m512 c);
+
+
VFMADDSUBxxxPS __m512 _mm512_fmaddsub_round_ps(__m512 a, __m512 b, __m512 c, int r);
+
+
VFMADDSUBxxxPS __m512 _mm512_mask_fmaddsub_ps(__m512 a, __mmask16 k, __m512 b, __m512 c);
+
+
VFMADDSUBxxxPS __m512 _mm512_maskz_fmaddsub_ps(__mmask16 k, __m512 a, __m512 b, __m512 c);
+
+
VFMADDSUBxxxPS __m512 _mm512_mask3_fmaddsub_ps(__m512 a, __m512 b, __m512 c, __mmask16 k);
+
+
VFMADDSUBxxxPS __m512 _mm512_mask_fmaddsub_round_ps(__m512 a, __mmask16 k, __m512 b, __m512 c, int r);
+
+
VFMADDSUBxxxPS __m512 _mm512_maskz_fmaddsub_round_ps(__mmask16 k, __m512 a, __m512 b, __m512 c, int r);
+
+
VFMADDSUBxxxPS __m512 _mm512_mask3_fmaddsub_round_ps(__m512 a, __m512 b, __m512 c, __mmask16 k, int r);
+
+
VFMADDSUBxxxPS __m256 _mm256_mask_fmaddsub_ps(__m256 a, __mmask8 k, __m256 b, __m256 c);
+
+
VFMADDSUBxxxPS __m256 _mm256_maskz_fmaddsub_ps(__mmask8 k, __m256 a, __m256 b, __m256 c);
+
+
VFMADDSUBxxxPS __m256 _mm256_mask3_fmaddsub_ps(__m256 a, __m256 b, __m256 c, __mmask8 k);
+
+
VFMADDSUBxxxPS __m128 _mm_mask_fmaddsub_ps(__m128 a, __mmask8 k, __m128 b, __m128 c);
+
+
VFMADDSUBxxxPS __m128 _mm_maskz_fmaddsub_ps(__mmask8 k, __m128 a, __m128 b, __m128 c);
+
+
VFMADDSUBxxxPS __m128 _mm_mask3_fmaddsub_ps(__m128 a, __m128 b, __m128 c, __mmask8 k);
+
+
VFMADDSUBxxxPS __m128 _mm_fmaddsub_ps (__m128 a, __m128 b, __m128 c);
+
+
VFMADDSUBxxxPS __m256 _mm256_fmaddsub_ps (__m256 a, __m256 b, __m256 c);
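As a usage sketch (not taken from the SDM), the 256-bit form can be reached through the _mm256_fmaddsub_ps intrinsic listed above; the function name fmaddsub_demo, the array arguments, and the loop bounds are invented for illustration, and compilation assumes an FMA-capable target (for example, gcc/clang with -mfma).

#include <immintrin.h>

/* Illustrative only: out[i] = a[i]*b[i] - c[i] for even lanes,
   out[i] = a[i]*b[i] + c[i] for odd lanes, eight floats per step. */
void fmaddsub_demo(float *out, const float *a, const float *b,
                   const float *c, int n)
{
    for (int i = 0; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        __m256 vc = _mm256_loadu_ps(c + i);
        _mm256_storeu_ps(out + i, _mm256_fmaddsub_ps(va, vb, vc));
    }
    /* any remaining n % 8 elements would need a scalar or masked tail */
}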
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vfmsub132pd.vfmsub213pd.vfmsub231pd.html b/x86/vfmsub132pd.vfmsub213pd.vfmsub231pd.html new file mode 100644 index 0000000..2f81e86 --- /dev/null +++ b/x86/vfmsub132pd.vfmsub213pd.vfmsub231pd.html @@ -0,0 +1,401 @@ + +VFMSUB132PD/VFMSUB213PD/VFMSUB231PD + — Fused Multiply-Subtract of Packed DoublePrecision Floating-Point Values

VFMSUB132PD/VFMSUB213PD/VFMSUB231PD + — Fused Multiply-Subtract of Packed Double Precision Floating-Point Values

Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W1 9A /r VFMSUB132PD xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed double precision floating-point values from xmm1 and xmm3/mem, subtract xmm2 and put result in xmm1.
VEX.128.66.0F38.W1 AA /r VFMSUB213PD xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed double precision floating-point values from xmm1 and xmm2, subtract xmm3/mem and put result in xmm1.
VEX.128.66.0F38.W1 BA /r VFMSUB231PD xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed double precision floating-point values from xmm2 and xmm3/mem, subtract xmm1 and put result in xmm1.
VEX.256.66.0F38.W1 9A /r VFMSUB132PD ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed double precision floating-point values from ymm1 and ymm3/mem, subtract ymm2 and put result in ymm1.
VEX.256.66.0F38.W1 AA /r VFMSUB213PD ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed double precision floating-point values from ymm1 and ymm2, subtract ymm3/mem and put result in ymm1.
VEX.256.66.0F38.W1 BA /r VFMSUB231PD ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed double precision floating-point values from ymm2 and ymm3/mem, subtract ymm1 and put result in ymm1.
EVEX.128.66.0F38.W1 9A /r VFMSUB132PD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from xmm1 and xmm3/m128/m64bcst, subtract xmm2 and put result in xmm1 subject to writemask k1.
EVEX.128.66.0F38.W1 AA /r VFMSUB213PD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from xmm1 and xmm2, subtract xmm3/m128/m64bcst and put result in xmm1 subject to writemask k1.
EVEX.128.66.0F38.W1 BA /r VFMSUB231PD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from xmm2 and xmm3/m128/m64bcst, subtract xmm1 and put result in xmm1 subject to writemask k1.
EVEX.256.66.0F38.W1 9A /r VFMSUB132PD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from ymm1 and ymm3/m256/m64bcst, subtract ymm2 and put result in ymm1 subject to writemask k1.
EVEX.256.66.0F38.W1 AA /r VFMSUB213PD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from ymm1 and ymm2, subtract ymm3/m256/m64bcst and put result in ymm1 subject to writemask k1.
EVEX.256.66.0F38.W1 BA /r VFMSUB231PD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from ymm2 and ymm3/m256/m64bcst, subtract ymm1 and put result in ymm1 subject to writemask k1.
EVEX.512.66.0F38.W1 9A /r VFMSUB132PD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er}BV/VAVX512FMultiply packed double precision floating-point values from zmm1 and zmm3/m512/m64bcst, subtract zmm2 and put result in zmm1 subject to writemask k1.
EVEX.512.66.0F38.W1 AA /r VFMSUB213PD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er}BV/VAVX512FMultiply packed double precision floating-point values from zmm1 and zmm2, subtract zmm3/m512/m64bcst and put result in zmm1 subject to writemask k1.
EVEX.512.66.0F38.W1 BA /r VFMSUB231PD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er}BV/VAVX512FMultiply packed double precision floating-point values from zmm2 and zmm3/m512/m64bcst, subtract zmm1 and put result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
BFullModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a set of SIMD multiply-subtract computations on packed double precision floating-point values using three source operands and writes the multiply-subtract results in the destination operand. The destination operand is also the first source operand. The second operand must be a SIMD register. The third source operand can be a SIMD register or a memory location.

+

VFMSUB132PD: Multiplies the two, four or eight packed double precision floating-point values from the first source operand to the two, four or eight packed double precision floating-point values in the third source operand. From the infinite precision intermediate result, subtracts the two, four or eight packed double precision floating-point values in the second source operand, performs rounding and stores the resulting two, four or eight packed double precision floating-point values to the destination operand (first source operand).

+

VFMSUB213PD: Multiplies the two, four or eight packed double precision floating-point values from the second source operand to the two, four or eight packed double precision floating-point values in the first source operand. From the infinite precision intermediate result, subtracts the two, four or eight packed double precision floating-point values in the third source operand, performs rounding and stores the resulting two, four or eight packed double precision floating-point values to the destination operand (first source operand).

+

VFMSUB231PD: Multiplies the two, four or eight packed double precision floating-point values from the second source to the two, four or eight packed double precision floating-point values in the third source operand. From the infinite precision intermediate result, subtracts the two, four or eight packed double precision floating-point values in the first source operand, performs rounding and stores the resulting two, four or eight packed double precision floating-point values to the destination operand (first source operand).

+

EVEX encoded versions: The destination operand (also first source operand) and the second source operand are ZMM/YMM/XMM registers. The third source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand is conditionally updated with write mask k1.

+

VEX.256 encoded version: The destination operand (also first source operand) is a YMM register and encoded in reg_field. The second source operand is a YMM register and encoded in VEX.vvvv. The third source operand is a YMM register or a 256-bit memory location and encoded in rm_field.

+

VEX.128 encoded version: The destination operand (also first source operand) is a XMM register and encoded in reg_field. The second source operand is a XMM register and encoded in VEX.vvvv. The third source operand is a XMM register or a 128-bit memory location and encoded in rm_field. The upper 128 bits of the YMM destination register are zeroed.
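Because the product feeds the subtraction at infinite precision, the fused form rounds once where a separate multiply and subtract round twice. The sketch below is illustrative only (both function names are invented, and it is not SDM material); it assumes an FMA-capable compile target.

#include <immintrin.h>

/* r = a*b - c with a single rounding, via the intrinsic listed later. */
__m256d fused_residual(__m256d a, __m256d b, __m256d c)
{
    return _mm256_fmsub_pd(a, b, c);
}

/* The same expression with two roundings; the low bits can differ. */
__m256d unfused_residual(__m256d a, __m256d b, __m256d c)
{
    return _mm256_sub_pd(_mm256_mul_pd(a, b), c);
}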

+

Operation + ¶ +

+
In the operations below, “*” and “-” symbols represent multiplication and subtraction with infinite precision inputs and outputs (no
+rounding).
+
+

VFMSUB132PD DEST, SRC2, SRC3 (VEX encoded versions) + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 2
+ELSEIF (VEX.256)
+    MAXNUM := 4
+FI
+For i = 0 to MAXNUM-1 {
+    n := 64*i;
+    DEST[n+63:n] := RoundFPControl_MXCSR(DEST[n+63:n]*SRC3[n+63:n] - SRC2[n+63:n])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFMSUB213PD DEST, SRC2, SRC3 (VEX encoded versions) + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 2
+ELSEIF (VEX.256)
+    MAXNUM := 4
+FI
+For i = 0 to MAXNUM-1 {
+    n := 64*i;
+    DEST[n+63:n] := RoundFPControl_MXCSR(SRC2[n+63:n]*DEST[n+63:n] - SRC3[n+63:n])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFMSUB231PD DEST, SRC2, SRC3 (VEX encoded versions) + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 2
+ELSEIF (VEX.256)
+    MAXNUM := 4
+FI
+For i = 0 to MAXNUM-1 {
+    n := 64*i;
+    DEST[n+63:n] := RoundFPControl_MXCSR(SRC2[n+63:n]*SRC3[n+63:n] - DEST[n+63:n])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFMSUB132PD DEST, SRC2, SRC3 (EVEX encoded versions, when src3 operand is a register) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] :=
+            RoundFPControl(DEST[i+63:i]*SRC3[i+63:i] - SRC2[i+63:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMSUB132PD DEST, SRC2, SRC3 (EVEX encoded versions, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+63:i] :=
+            RoundFPControl_MXCSR(DEST[i+63:i]*SRC3[63:0] - SRC2[i+63:i])
+                ELSE
+                    DEST[i+63:i] :=
+            RoundFPControl_MXCSR(DEST[i+63:i]*SRC3[i+63:i] - SRC2[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMSUB213PD DEST, SRC2, SRC3 (EVEX encoded versions, when src3 operand is a register) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] :=
+            RoundFPControl(SRC2[i+63:i]*DEST[i+63:i] - SRC3[i+63:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMSUB213PD DEST, SRC2, SRC3 (EVEX encoded versions, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+63:i] :=
+            RoundFPControl_MXCSR(SRC2[i+63:i]*DEST[i+63:i] - SRC3[63:0])
+                ELSE
+                    DEST[i+63:i] :=
+            RoundFPControl_MXCSR(SRC2[i+63:i]*DEST[i+63:i] - SRC3[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMSUB231PD DEST, SRC2, SRC3 (EVEX encoded versions, when src3 operand is a register) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] :=
+            RoundFPControl(SRC2[i+63:i]*SRC3[i+63:i] - DEST[i+63:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMSUB231PD DEST, SRC2, SRC3 (EVEX encoded versions, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+63:i] :=
+            RoundFPControl_MXCSR(SRC2[i+63:i]*SRC3[63:0] - DEST[i+63:i])
+                ELSE
+                    DEST[i+63:i] :=
+            RoundFPControl_MXCSR(SRC2[i+63:i]*SRC3[i+63:i] - DEST[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFMSUBxxxPD __m512d _mm512_fmsub_pd(__m512d a, __m512d b, __m512d c);
+
+
VFMSUBxxxPD __m512d _mm512_fmsub_round_pd(__m512d a, __m512d b, __m512d c, int r);
+
+
VFMSUBxxxPD __m512d _mm512_mask_fmsub_pd(__m512d a, __mmask8 k, __m512d b, __m512d c);
+
+
VFMSUBxxxPD __m512d _mm512_maskz_fmsub_pd(__mmask8 k, __m512d a, __m512d b, __m512d c);
+
+
VFMSUBxxxPD __m512d _mm512_mask3_fmsub_pd(__m512d a, __m512d b, __m512d c, __mmask8 k);
+
+
VFMSUBxxxPD __m512d _mm512_mask_fmsub_round_pd(__m512d a, __mmask8 k, __m512d b, __m512d c, int r);
+
+
VFMSUBxxxPD __m512d _mm512_maskz_fmsub_round_pd(__mmask8 k, __m512d a, __m512d b, __m512d c, int r);
+
+
VFMSUBxxxPD __m512d _mm512_mask3_fmsub_round_pd(__m512d a, __m512d b, __m512d c, __mmask8 k, int r);
+
+
VFMSUBxxxPD __m256d _mm256_mask_fmsub_pd(__m256d a, __mmask8 k, __m256d b, __m256d c);
+
+
VFMSUBxxxPD __m256d _mm256_maskz_fmsub_pd(__mmask8 k, __m256d a, __m256d b, __m256d c);
+
+
VFMSUBxxxPD __m256d _mm256_mask3_fmsub_pd(__m256d a, __m256d b, __m256d c, __mmask8 k);
+
+
VFMSUBxxxPD __m128d _mm_mask_fmsub_pd(__m128d a, __mmask8 k, __m128d b, __m128d c);
+
+
VFMSUBxxxPD __m128d _mm_maskz_fmsub_pd(__mmask8 k, __m128d a, __m128d b, __m128d c);
+
+
VFMSUBxxxPD __m128d _mm_mask3_fmsub_pd(__m128d a, __m128d b, __m128d c, __mmask8 k);
+
+
VFMSUBxxxPD __m128d _mm_fmsub_pd (__m128d a, __m128d b, __m128d c);
+
+
VFMSUBxxxPD __m256d _mm256_fmsub_pd (__m256d a, __m256d b, __m256d c);
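As an example of the masked forms above (the function name masked_fmsub and its arguments are invented; it assumes an AVX-512F target), merging-masking keeps the previous destination value in lanes whose mask bit is clear:

#include <immintrin.h>

/* For each set bit k[i]: dest[i] = dest[i]*b[i] - c[i];
   for each clear bit the old dest[i] is kept (merging-masking). */
__m512d masked_fmsub(__m512d dest, __mmask8 k, __m512d b, __m512d c)
{
    return _mm512_mask_fmsub_pd(dest, k, b, c);
}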
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vfmsub132ph.vfnmsub132ph.vfmsub213ph.vfnmsub213ph.vfmsub231ph.vfnmsub231ph.html b/x86/vfmsub132ph.vfnmsub132ph.vfmsub213ph.vfnmsub213ph.vfmsub231ph.vfnmsub231ph.html new file mode 100644 index 0000000..c030bc8 --- /dev/null +++ b/x86/vfmsub132ph.vfnmsub132ph.vfmsub213ph.vfnmsub213ph.vfmsub231ph.vfnmsub231ph.html @@ -0,0 +1,367 @@ + +VFMSUB132PH/VFNMSUB132PH/VFMSUB213PH/VFNMSUB213PH/VFMSUB231PH/VFNMSUB231PH + — Fused Multiply-Subtract of Packed FP16 Values

VFMSUB132PH/VFNMSUB132PH/VFMSUB213PH/VFNMSUB213PH/VFMSUB231PH/VFNMSUB231PH + — Fused Multiply-Subtract of Packed FP16 Values

InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.MAP6.W0 9A /r VFMSUB132PH xmm1{k1}{z}, xmm2, xmm3/m128/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from xmm1 and xmm3/m128/m16bcst, subtract xmm2, and store the result in xmm1 subject to writemask k1.
EVEX.256.66.MAP6.W0 9A /r VFMSUB132PH ymm1{k1}{z}, ymm2, ymm3/m256/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from ymm1 and ymm3/m256/m16bcst, subtract ymm2, and store the result in ymm1 subject to writemask k1.
EVEX.512.66.MAP6.W0 9A /r VFMSUB132PH zmm1{k1}{z}, zmm2, zmm3/m512/m16bcst {er}AV/VAVX512-FP16Multiply packed FP16 values from zmm1 and zmm3/m512/m16bcst, subtract zmm2, and store the result in zmm1 subject to writemask k1.
EVEX.128.66.MAP6.W0 AA /r VFMSUB213PH xmm1{k1}{z}, xmm2, xmm3/m128/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from xmm1 and xmm2, subtract xmm3/m128/m16bcst, and store the result in xmm1 subject to writemask k1.
EVEX.256.66.MAP6.W0 AA /r VFMSUB213PH ymm1{k1}{z}, ymm2, ymm3/m256/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from ymm1 and ymm2, subtract ymm3/m256/m16bcst, and store the result in ymm1 subject to writemask k1.
EVEX.512.66.MAP6.W0 AA /r VFMSUB213PH zmm1{k1}{z}, zmm2, zmm3/m512/m16bcst {er}AV/VAVX512-FP16Multiply packed FP16 values from zmm1 and zmm2, subtract zmm3/m512/m16bcst, and store the result in zmm1 subject to writemask k1.
EVEX.128.66.MAP6.W0 BA /r VFMSUB231PH xmm1{k1}{z}, xmm2, xmm3/m128/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from xmm2 and xmm3/m128/m16bcst, subtract xmm1, and store the result in xmm1 subject to writemask k1.
EVEX.256.66.MAP6.W0 BA /r VFMSUB231PH ymm1{k1}{z}, ymm2, ymm3/m256/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from ymm2 and ymm3/m256/m16bcst, subtract ymm1, and store the result in ymm1 subject to writemask k1.
EVEX.512.66.MAP6.W0 BA /r VFMSUB231PH zmm1{k1}{z}, zmm2, zmm3/m512/m16bcst {er}AV/VAVX512-FP16Multiply packed FP16 values from zmm2 and zmm3/m512/m16bcst, subtract zmm1, and store the result in zmm1 subject to writemask k1.
EVEX.128.66.MAP6.W0 9E /r VFNMSUB132PH xmm1{k1}{z}, xmm2, xmm3/m128/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from xmm1 and xmm3/m128/m16bcst, and negate the value. Subtract xmm2 from this value, and store the result in xmm1 subject to writemask k1.
EVEX.256.66.MAP6.W0 9E /r VFNMSUB132PH ymm1{k1}{z}, ymm2, ymm3/m256/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from ymm1 and ymm3/m256/m16bcst, and negate the value. Subtract ymm2 from this value, and store the result in ymm1 subject to writemask k1.
EVEX.512.66.MAP6.W0 9E /r VFNMSUB132PH zmm1{k1}{z}, zmm2, zmm3/m512/m16bcst {er}AV/VAVX512-FP16Multiply packed FP16 values from zmm1 and zmm3/m512/m16bcst, and negate the value. Subtract zmm2 from this value, and store the result in zmm1 subject to writemask k1.
EVEX.128.66.MAP6.W0 AE /r VFNMSUB213PH xmm1{k1}{z}, xmm2, xmm3/m128/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from xmm1 and xmm2, and negate the value. Subtract xmm3/m128/m16bcst from this value, and store the result in xmm1 subject to writemask k1.
EVEX.256.66.MAP6.W0 AE /r VFNMSUB213PH ymm1{k1}{z}, ymm2, ymm3/m256/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from ymm1 and ymm2, and negate the value. Subtract ymm3/m256/m16bcst from this value, and store the result in ymm1 subject to writemask k1.
EVEX.512.66.MAP6.W0 AE /r VFNMSUB213PH zmm1{k1}{z}, zmm2, zmm3/m512/m16bcst {er}AV/VAVX512-FP16Multiply packed FP16 values from zmm1 and zmm2, and negate the value. Subtract zmm3/m512/m16bcst from this value, and store the result in zmm1 subject to writemask k1.
EVEX.128.66.MAP6.W0 BE /r VFNMSUB231PH xmm1{k1}{z}, xmm2, xmm3/m128/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from xmm2 and xmm3/m128/m16bcst, and negate the value. Subtract xmm1 from this value, and store the result in xmm1 subject to writemask k1.
EVEX.256.66.MAP6.W0 BE /r VFNMSUB231PH ymm1{k1}{z}, ymm2, ymm3/m256/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from ymm2 and ymm3/m256/m16bcst, and negate the value. Subtract ymm1 from this value, and store the result in ymm1 subject to writemask k1.
EVEX.512.66.MAP6.W0 BE /r VFNMSUB231PH zmm1{k1}{z}, zmm2, zmm3/m512/m16bcst {er}AV/VAVX512-FP16Multiply packed FP16 values from zmm2 and zmm3/m512/m16bcst, and negate the value. Subtract zmm1 from this value, and store the result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction performs a packed multiply-subtract or a negated multiply-subtract computation on FP16 values using three source operands and writes the results in the destination operand. The destination operand is also the first source operand. The “N” (negated) forms of this instruction subtract the remaining operand from the negated infinite precision intermediate product. The notations “132”, “213”, and “231” indicate the use of the operands in ±A * B − C, where each digit corresponds to the operand number, with the destination being operand 1; see Table 5-6.

+

The destination elements are updated according to the writemask.

+
NotationOperands
132dest = ± dest*src3-src2
231dest = ± src2*src3-dest
213dest = ± src2*dest-src3
+
Table 5-6. VF[,N]MSUB[132,213,231]PH Notation for Operands
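The orderings in Table 5-6 can be restated as a scalar C sketch (illustrative only; float stands in for FP16, and the helper names are invented). Each form differs only in which two operands are multiplied and which one is subtracted; the negated forms negate the product before the subtraction.

/* 132: dest = dest*src3 - src2    213: dest = src2*dest - src3
   231: dest = src2*src3 - dest    N-forms: negate the product first. */
static float msub132 (float dest, float src2, float src3) { return  dest * src3 - src2; }
static float msub213 (float dest, float src2, float src3) { return  src2 * dest - src3; }
static float msub231 (float dest, float src2, float src3) { return  src2 * src3 - dest; }
static float nmsub231(float dest, float src2, float src3) { return -(src2 * src3) - dest; }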
+

Operation + ¶ +

+

VF[,N]MSUB132PH DEST, SRC2, SRC3 (EVEX encoded versions) when src3 operand is a register + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+IF (VL = 512) AND (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *negative form*:
+            DEST.fp16[j] := RoundFPControl(-DEST.fp16[j]*SRC3.fp16[j] - SRC2.fp16[j])
+        ELSE:
+            DEST.fp16[j] := RoundFPControl(DEST.fp16[j]*SRC3.fp16[j] - SRC2.fp16[j])
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

VF[,N]MSUB132PH DEST, SRC2, SRC3 (EVEX encoded versions) when src3 operand is a memory source + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF EVEX.b = 1:
+            t3 := SRC3.fp16[0]
+        ELSE:
+            t3 := SRC3.fp16[j]
+        IF *negative form*:
+            DEST.fp16[j] := RoundFPControl(-DEST.fp16[j] * t3 - SRC2.fp16[j])
+        ELSE:
+            DEST.fp16[j] := RoundFPControl(DEST.fp16[j] * t3 - SRC2.fp16[j])
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

VF[,N]MSUB213PH DEST, SRC2, SRC3 (EVEX encoded versions) when src3 operand is a register + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+IF (VL = 512) AND (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *negative form*:
+            DEST.fp16[j] := RoundFPControl(-SRC2.fp16[j]*DEST.fp16[j] - SRC3.fp16[j])
+        ELSE
+            DEST.fp16[j] := RoundFPControl(SRC2.fp16[j]*DEST.fp16[j] - SRC3.fp16[j])
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

VF[,N]MSUB213PH DEST, SRC2, SRC3 (EVEX encoded versions) when src3 operand is a memory source + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF EVEX.b = 1:
+            t3 := SRC3.fp16[0]
+        ELSE:
+            t3 := SRC3.fp16[j]
+        IF *negative form*:
+            DEST.fp16[j] := RoundFPControl(-SRC2.fp16[j] * DEST.fp16[j] - t3 )
+        ELSE:
+            DEST.fp16[j] := RoundFPControl(SRC2.fp16[j] * DEST.fp16[j] - t3 )
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

VF[,N]MSUB231PH DEST, SRC2, SRC3 (EVEX encoded versions) when src3 operand is a register + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+IF (VL = 512) AND (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *negative form*:
+            DEST.fp16[j] := RoundFPControl(-SRC2.fp16[j]*SRC3.fp16[j] - DEST.fp16[j])
+        ELSE:
+            DEST.fp16[j] := RoundFPControl(SRC2.fp16[j]*SRC3.fp16[j] - DEST.fp16[j])
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

VF[,N]MSUB231PH DEST, SRC2, SRC3 (EVEX encoded versions) when src3 operand is a memory source + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF EVEX.b = 1:
+            t3 := SRC3.fp16[0]
+        ELSE:
+            t3 := SRC3.fp16[j]
+        IF *negative form*:
+            DEST.fp16[j] := RoundFPControl(-SRC2.fp16[j] * t3 - DEST.fp16[j] )
+        ELSE:
+            DEST.fp16[j] := RoundFPControl(SRC2.fp16[j] * t3 - DEST.fp16[j] )
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFMSUB132PH, VFMSUB213PH, and VFMSUB231PH: __m128h _mm_fmsub_ph (__m128h a, __m128h b, __m128h c);
+
+
__m128h _mm_mask_fmsub_ph (__m128h a, __mmask8 k, __m128h b, __m128h c);
+
+
__m128h _mm_mask3_fmsub_ph (__m128h a, __m128h b, __m128h c, __mmask8 k);
+
+
__m128h _mm_maskz_fmsub_ph (__mmask8 k, __m128h a, __m128h b, __m128h c);
+
+
__m256h _mm256_fmsub_ph (__m256h a, __m256h b, __m256h c);
+
+
__m256h _mm256_mask_fmsub_ph (__m256h a, __mmask16 k, __m256h b, __m256h c);
+
+
__m256h _mm256_mask3_fmsub_ph (__m256h a, __m256h b, __m256h c, __mmask16 k);
+
+
__m256h _mm256_maskz_fmsub_ph (__mmask16 k, __m256h a, __m256h b, __m256h c);
+
+
__m512h _mm512_fmsub_ph (__m512h a, __m512h b, __m512h c);
+
+
__m512h _mm512_mask_fmsub_ph (__m512h a, __mmask32 k, __m512h b, __m512h c);
+
+
__m512h _mm512_mask3_fmsub_ph (__m512h a, __m512h b, __m512h c, __mmask32 k);
+
+
__m512h _mm512_maskz_fmsub_ph (__mmask32 k, __m512h a, __m512h b, __m512h c);
+
+
__m512h _mm512_fmsub_round_ph (__m512h a, __m512h b, __m512h c, const int rounding);
+
+
__m512h _mm512_mask_fmsub_round_ph (__m512h a, __mmask32 k, __m512h b, __m512h c, const int rounding);
+
+
__m512h _mm512_mask3_fmsub_round_ph (__m512h a, __m512h b, __m512h c, __mmask32 k, const int rounding);
+
+
__m512h _mm512_maskz_fmsub_round_ph (__mmask32 k, __m512h a, __m512h b, __m512h c, const int rounding);
+
+
VFNMSUB132PH, VFNMSUB213PH, and VFNMSUB231PH: __m128h _mm_fnmsub_ph (__m128h a, __m128h b, __m128h c);
+
+
__m128h _mm_mask_fnmsub_ph (__m128h a, __mmask8 k, __m128h b, __m128h c);
+
+
__m128h _mm_mask3_fnmsub_ph (__m128h a, __m128h b, __m128h c, __mmask8 k);
+
+
__m128h _mm_maskz_fnmsub_ph (__mmask8 k, __m128h a, __m128h b, __m128h c);
+
+
__m256h _mm256_fnmsub_ph (__m256h a, __m256h b, __m256h c);
+
+
__m256h _mm256_mask_fnmsub_ph (__m256h a, __mmask16 k, __m256h b, __m256h c);
+
+
__m256h _mm256_mask3_fnmsub_ph (__m256h a, __m256h b, __m256h c, __mmask16 k);
+
+
__m256h _mm256_maskz_fnmsub_ph (__mmask16 k, __m256h a, __m256h b, __m256h c);
+
+
__m512h _mm512_fnmsub_ph (__m512h a, __m512h b, __m512h c);
+
+
__m512h _mm512_mask_fnmsub_ph (__m512h a, __mmask32 k, __m512h b, __m512h c);
+
+
__m512h _mm512_mask3_fnmsub_ph (__m512h a, __m512h b, __m512h c, __mmask32 k);
+
+
__m512h _mm512_maskz_fnmsub_ph (__mmask32 k, __m512h a, __m512h b, __m512h c);
+
+
__m512h _mm512_fnmsub_round_ph (__m512h a, __m512h b, __m512h c, const int rounding);
+
+
__m512h _mm512_mask_fnmsub_round_ph (__m512h a, __mmask32 k, __m512h b, __m512h c, const int rounding);
+
+
__m512h _mm512_mask3_fnmsub_round_ph (__m512h a, __m512h b, __m512h c, __mmask32 k, const int rounding);
+
+
__m512h _mm512_maskz_fnmsub_round_ph (__mmask32 k, __m512h a, __m512h b, __m512h c, const int rounding);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Overflow, Precision, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vfmsub132ps.vfmsub213ps.vfmsub231ps.html b/x86/vfmsub132ps.vfmsub213ps.vfmsub231ps.html new file mode 100644 index 0000000..6a6484d --- /dev/null +++ b/x86/vfmsub132ps.vfmsub213ps.vfmsub231ps.html @@ -0,0 +1,400 @@ + +VFMSUB132PS/VFMSUB213PS/VFMSUB231PS + — Fused Multiply-Subtract of Packed SinglePrecision Floating-Point Values

VFMSUB132PS/VFMSUB213PS/VFMSUB231PS + — Fused Multiply-Subtract of Packed Single Precision Floating-Point Values

Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W0 9A /r VFMSUB132PS xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed single precision floating-point values from xmm1 and xmm3/mem, subtract xmm2 and put result in xmm1.
VEX.128.66.0F38.W0 AA /r VFMSUB213PS xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed single precision floating-point values from xmm1 and xmm2, subtract xmm3/mem and put result in xmm1.
VEX.128.66.0F38.W0 BA /r VFMSUB231PS xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed single precision floating-point values from xmm2 and xmm3/mem, subtract xmm1 and put result in xmm1.
VEX.256.66.0F38.W0 9A /r VFMSUB132PS ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed single precision floating-point values from ymm1 and ymm3/mem, subtract ymm2 and put result in ymm1.
VEX.256.66.0F38.W0 AA /r VFMSUB213PS ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed single precision floating-point values from ymm1 and ymm2, subtract ymm3/mem and put result in ymm1.
VEX.256.66.0F38.W0 BA /r VFMSUB231PS ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed single precision floating-point values from ymm2 and ymm3/mem, subtract ymm1 and put result in ymm1.
EVEX.128.66.0F38.W0 9A /r VFMSUB132PS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from xmm1 and xmm3/m128/m32bcst, subtract xmm2 and put result in xmm1.
EVEX.128.66.0F38.W0 AA /r VFMSUB213PS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from xmm1 and xmm2, subtract xmm3/m128/m32bcst and put result in xmm1.
EVEX.128.66.0F38.W0 BA /r VFMSUB231PS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from xmm2 and xmm3/m128/m32bcst, subtract xmm1 and put result in xmm1.
EVEX.256.66.0F38.W0 9A /r VFMSUB132PS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from ymm1 and ymm3/m256/m32bcst, subtract ymm2 and put result in ymm1.
EVEX.256.66.0F38.W0 AA /r VFMSUB213PS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from ymm1 and ymm2, subtract ymm3/m256/m32bcst and put result in ymm1.
EVEX.256.66.0F38.W0 BA /r VFMSUB231PS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from ymm2 and ymm3/m256/m32bcst, subtract ymm1 and put result in ymm1.
EVEX.512.66.0F38.W0 9A /r VFMSUB132PS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{er}BV/VAVX512FMultiply packed single precision floating-point values from zmm1 and zmm3/m512/m32bcst, subtract zmm2 and put result in zmm1.
EVEX.512.66.0F38.W0 AA /r VFMSUB213PS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{er}BV/VAVX512FMultiply packed single precision floating-point values from zmm1 and zmm2, subtract zmm3/m512/m32bcst and put result in zmm1.
EVEX.512.66.0F38.W0 BA /r VFMSUB231PS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{er}BV/VAVX512FMultiply packed single precision floating-point values from zmm2 and zmm3/m512/m32bcst, subtract zmm1 and put result in zmm1.
+

Instruction Operand Encoding + ¶ +

Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
BFullModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a set of SIMD multiply-subtract computations on packed single precision floating-point values using three source operands and writes the multiply-subtract results in the destination operand. The destination operand is also the first source operand. The second operand must be a SIMD register. The third source operand can be a SIMD register or a memory location.

+

VFMSUB132PS: Multiplies the four, eight or sixteen packed single precision floating-point values from the first source operand to the four, eight or sixteen packed single precision floating-point values in the third source operand. From the infinite precision intermediate result, subtracts the four, eight or sixteen packed single precision floating-point values in the second source operand, performs rounding and stores the resulting four, eight or sixteen packed single precision floating-point values to the destination operand (first source operand).

+

VFMSUB213PS: Multiplies the four, eight or sixteen packed single precision floating-point values from the second source operand to the four, eight or sixteen packed single precision floating-point values in the first source operand. From the infinite precision intermediate result, subtracts the four, eight or sixteen packed single precision floating-point values in the third source operand, performs rounding and stores the resulting four, eight or sixteen packed single precision floating-point values to the destination operand (first source operand).

+

VFMSUB231PS: Multiplies the four, eight or sixteen packed single precision floating-point values from the second source to the four, eight or sixteen packed single precision floating-point values in the third source operand. From the infinite precision intermediate result, subtracts the four, eight or sixteen packed single precision floating-point values in the first source operand, performs rounding and stores the resulting four, eight or sixteen packed single precision floating-point values to the destination operand (first source operand).

+

EVEX encoded versions: The destination operand (also first source operand) and the second source operand are ZMM/YMM/XMM registers. The third source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand is conditionally updated with write mask k1.

+

VEX.256 encoded version: The destination operand (also first source operand) is a YMM register and encoded in reg_field. The second source operand is a YMM register and encoded in VEX.vvvv. The third source operand is a YMM register or a 256-bit memory location and encoded in rm_field.

+

VEX.128 encoded version: The destination operand (also first source operand) is a XMM register and encoded in reg_field. The second source operand is a XMM register and encoded in VEX.vvvv. The third source operand is a XMM register or a 128-bit memory location and encoded in rm_field. The upper 128 bits of the YMM destination register are zeroed.
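A common use of the fused multiply-subtract is forming the error term of an iterative refinement step. The sketch below is illustrative only (the function name recip_refine is invented, and it also uses a fused negative multiply-add, which is a separate instruction); it assumes an FMA-capable target.

#include <immintrin.h>

/* One Newton-Raphson step for 1/a: x1 = x0*(2 - a*x0).
   The error e = a*x0 - 1 is computed with a single rounding. */
__m128 recip_refine(__m128 a)
{
    __m128 x0 = _mm_rcp_ps(a);                           /* ~12-bit estimate */
    __m128 e  = _mm_fmsub_ps(a, x0, _mm_set1_ps(1.0f));  /* a*x0 - 1 */
    return _mm_fnmadd_ps(e, x0, x0);                     /* x0 - e*x0 */
}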

+

Operation + ¶ +

+
In the operations below, “*” and “-” symbols represent multiplication and subtraction with infinite precision inputs and outputs (no
+rounding).
+
+

VFMSUB132PS DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 4
+ELSEIF (VEX.256)
+    MAXNUM := 8
+FI
+For i = 0 to MAXNUM-1 {
+    n := 32*i;
+    DEST[n+31:n] := RoundFPControl_MXCSR(DEST[n+31:n]*SRC3[n+31:n] - SRC2[n+31:n])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFMSUB213PS DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 4
+ELSEIF (VEX.256)
+    MAXNUM := 8
+FI
+For i = 0 to MAXNUM-1 {
+    n := 32*i;
+    DEST[n+31:n] := RoundFPControl_MXCSR(SRC2[n+31:n]*DEST[n+31:n] - SRC3[n+31:n])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFMSUB231PS DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 4
+ELSEIF (VEX.256)
+    MAXNUM := 8
+FI
+For i = 0 to MAXNUM-1 {
+    n := 32*i;
+    DEST[n+31:n] := RoundFPControl_MXCSR(SRC2[n+31:n]*SRC3[n+31:n] - DEST[n+31:n])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFMSUB132PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] :=
+            RoundFPControl(DEST[i+31:i]*SRC3[i+31:i] - SRC2[i+31:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMSUB132PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+31:i] :=
+            RoundFPControl_MXCSR(DEST[i+31:i]*SRC3[31:0] - SRC2[i+31:i])
+                ELSE
+                    DEST[i+31:i] :=
+            RoundFPControl_MXCSR(DEST[i+31:i]*SRC3[i+31:i] - SRC2[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMSUB213PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] :=
+            RoundFPControl(SRC2[i+31:i]*DEST[i+31:i] - SRC3[i+31:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMSUB213PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+31:i] :=
+            RoundFPControl_MXCSR(SRC2[i+31:i]*DEST[i+31:i] - SRC3[31:0])
+                ELSE
+                    DEST[i+31:i] :=
+            RoundFPControl_MXCSR(SRC2[i+31:i]*DEST[i+31:i] - SRC3[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMSUB231PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] :=
+            RoundFPControl(SRC2[i+31:i]*SRC3[i+31:i] - DEST[i+31:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMSUB231PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+31:i] :=
+            RoundFPControl_MXCSR(SRC2[i+31:i]*SRC3[31:0] - DEST[i+31:i])
+                ELSE
+                    DEST[i+31:i] :=
+            RoundFPControl_MXCSR(SRC2[i+31:i]*SRC3[i+31:i] - DEST[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFMSUBxxxPS __m512 _mm512_fmsub_ps(__m512 a, __m512 b, __m512 c);
+
+
VFMSUBxxxPS __m512 _mm512_fmsub_round_ps(__m512 a, __m512 b, __m512 c, int r);
+
+
VFMSUBxxxPS __m512 _mm512_mask_fmsub_ps(__m512 a, __mmask16 k, __m512 b, __m512 c);
+
+
VFMSUBxxxPS __m512 _mm512_maskz_fmsub_ps(__mmask16 k, __m512 a, __m512 b, __m512 c);
+
+
VFMSUBxxxPS __m512 _mm512_mask3_fmsub_ps(__m512 a, __m512 b, __m512 c, __mmask16 k);
+
+
VFMSUBxxxPS __m512 _mm512_mask_fmsub_round_ps(__m512 a, __mmask16 k, __m512 b, __m512 c, int r);
+
+
VFMSUBxxxPS __m512 _mm512_maskz_fmsub_round_ps(__mmask16 k, __m512 a, __m512 b, __m512 c, int r);
+
+
VFMSUBxxxPS __m512 _mm512_mask3_fmsub_round_ps(__m512 a, __m512 b, __m512 c, __mmask16 k, int r);
+
+
VFMSUBxxxPS __m256 _mm256_mask_fmsub_ps(__m256 a, __mmask8 k, __m256 b, __m256 c);
+
+
VFMSUBxxxPS __m256 _mm256_maskz_fmsub_ps(__mmask8 k, __m256 a, __m256 b, __m256 c);
+
+
VFMSUBxxxPS __m256 _mm256_mask3_fmsub_ps(__m256 a, __m256 b, __m256 c, __mmask8 k);
+
+
VFMSUBxxxPS __m128 _mm_mask_fmsub_ps(__m128 a, __mmask8 k, __m128 b, __m128 c);
+
+
VFMSUBxxxPS __m128 _mm_maskz_fmsub_ps(__mmask8 k, __m128 a, __m128 b, __m128 c);
+
+
VFMSUBxxxPS __m128 _mm_mask3_fmsub_ps(__m128 a, __m128 b, __m128 c, __mmask8 k);
+
+
VFMSUBxxxPS __m128 _mm_fmsub_ps (__m128 a, __m128 b, __m128 c);
+
+
VFMSUBxxxPS __m256 _mm256_fmsub_ps (__m256 a, __m256 b, __m256 c);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vfmsub132sd.vfmsub213sd.vfmsub231sd.html b/x86/vfmsub132sd.vfmsub213sd.vfmsub231sd.html new file mode 100644 index 0000000..7678092 --- /dev/null +++ b/x86/vfmsub132sd.vfmsub213sd.vfmsub231sd.html @@ -0,0 +1,207 @@ + +VFMSUB132SD/VFMSUB213SD/VFMSUB231SD + — Fused Multiply-Subtract of Scalar DoublePrecision Floating-Point Values

VFMSUB132SD/VFMSUB213SD/VFMSUB231SD + — Fused Multiply-Subtract of Scalar Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.LIG.66.0F38.W1 9B /r VFMSUB132SD xmm1, xmm2, xmm3/m64AV/VFMAMultiply scalar double precision floating-point value from xmm1 and xmm3/m64, subtract xmm2 and put result in xmm1.
VEX.LIG.66.0F38.W1 AB /r VFMSUB213SD xmm1, xmm2, xmm3/m64AV/VFMAMultiply scalar double precision floating-point value from xmm1 and xmm2, subtract xmm3/m64 and put result in xmm1.
VEX.LIG.66.0F38.W1 BB /r VFMSUB231SD xmm1, xmm2, xmm3/m64AV/VFMAMultiply scalar double precision floating-point value from xmm2 and xmm3/m64, subtract xmm1 and put result in xmm1.
EVEX.LLIG.66.0F38.W1 9B /r VFMSUB132SD xmm1 {k1}{z}, xmm2, xmm3/m64{er}BV/VAVX512FMultiply scalar double precision floating-point value from xmm1 and xmm3/m64, subtract xmm2 and put result in xmm1.
EVEX.LLIG.66.0F38.W1 AB /r VFMSUB213SD xmm1 {k1}{z}, xmm2, xmm3/m64{er}BV/VAVX512FMultiply scalar double precision floating-point value from xmm1 and xmm2, subtract xmm3/m64 and put result in xmm1.
EVEX.LLIG.66.0F38.W1 BB /r VFMSUB231SD xmm1 {k1}{z}, xmm2, xmm3/m64{er}BV/VAVX512FMultiply scalar double precision floating-point value from xmm2 and xmm3/m64, subtract xmm1 and put result in xmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
BTuple1 ScalarModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD multiply-subtract computation on the low packed double precision floating-point values using three source operands and writes the multiply-subtract result in the destination operand. The destination operand is also the first source operand. The second operand must be an XMM register. The third source operand can be an XMM register or a 64-bit memory location.

+

VFMSUB132SD: Multiplies the low packed double precision floating-point value from the first source operand to the low packed double precision floating-point value in the third source operand. From the infinite precision intermediate result, subtracts the low packed double precision floating-point values in the second source operand, performs rounding and stores the resulting packed double precision floating-point value to the destination operand (first source operand).

+

VFMSUB213SD: Multiplies the low packed double precision floating-point value from the second source operand to the low packed double precision floating-point value in the first source operand. From the infinite precision intermediate result, subtracts the low packed double precision floating-point value in the third source operand, performs rounding and stores the resulting packed double precision floating-point value to the destination operand (first source operand).

+

VFMSUB231SD: Multiplies the low packed double precision floating-point value from the second source to the low packed double precision floating-point value in the third source operand. From the infinite precision intermediate result, subtracts the low packed double precision floating-point value in the first source operand, performs rounding and stores the resulting packed double precision floating-point value to the destination operand (first source operand).

+

VEX.128 and EVEX encoded version: The destination operand (also first source operand) is encoded in reg_field. The second source operand is encoded in VEX.vvvv/EVEX.vvvv. The third source operand is encoded in rm_field. Bits 127:64 of the destination are unchanged. Bits MAXVL-1:128 of the destination register are zeroed.

+

EVEX encoded version: The low quadword element of the destination is updated according to the writemask.

+

Compiler tools may optionally support a complementary mnemonic for each instruction mnemonic listed in the opcode/instruction column of the summary table. The behavior of the complementary mnemonic in situations involving NaNs is governed by the definition of the instruction mnemonic defined in the opcode/instruction column.

+

Operation + ¶ +

+
In the operations below, “*” and “-” symbols represent multiplication and subtraction with infinite precision inputs and outputs (no
+rounding).
+
+

VFMSUB132SD DEST, SRC2, SRC3 (EVEX encoded version) + ¶ +

+
IF (EVEX.b = 1) and SRC3 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[63:0] := RoundFPControl(DEST[63:0]*SRC3[63:0] - SRC2[63:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[63:0] := 0
+        FI;
+FI;
+DEST[127:64] := DEST[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VFMSUB213SD DEST, SRC2, SRC3 (EVEX encoded version) + ¶ +

+
IF (EVEX.b = 1) and SRC3 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[63:0] := RoundFPControl(SRC2[63:0]*DEST[63:0] - SRC3[63:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[63:0] := 0
+        FI;
+FI;
+DEST[127:64] := DEST[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VFMSUB231SD DEST, SRC2, SRC3 (EVEX encoded version) + ¶ +

+
IF (EVEX.b = 1) and SRC3 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[63:0] := RoundFPControl(SRC2[63:0]*SRC3[63:0] - DEST[63:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[63:0] := 0
+        FI;
+FI;
+DEST[127:64] := DEST[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VFMSUB132SD DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
DEST[63:0] := RoundFPControl_MXCSR(DEST[63:0]*SRC3[63:0] - SRC2[63:0])
+DEST[127:64] := DEST[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VFMSUB213SD DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
DEST[63:0] := RoundFPControl_MXCSR(SRC2[63:0]*DEST[63:0] - SRC3[63:0])
+DEST[127:64] := DEST[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VFMSUB231SD DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
DEST[63:0] := RoundFPControl_MXCSR(SRC2[63:0]*SRC3[63:0] - DEST[63:0])
+DEST[127:64] := DEST[127:64]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFMSUBxxxSD __m128d _mm_fmsub_round_sd(__m128d a, __m128d b, __m128d c, int r);
+
+
VFMSUBxxxSD __m128d _mm_mask_fmsub_sd(__m128d a, __mmask8 k, __m128d b, __m128d c);
+
+
VFMSUBxxxSD __m128d _mm_maskz_fmsub_sd(__mmask8 k, __m128d a, __m128d b, __m128d c);
+
+
VFMSUBxxxSD __m128d _mm_mask3_fmsub_sd(__m128d a, __m128d b, __m128d c, __mmask8 k);
+
+
VFMSUBxxxSD __m128d _mm_mask_fmsub_round_sd(__m128d a, __mmask8 k, __m128d b, __m128d c, int r);
+
+
VFMSUBxxxSD __m128d _mm_maskz_fmsub_round_sd(__mmask8 k, __m128d a, __m128d b, __m128d c, int r);
+
+
VFMSUBxxxSD __m128d _mm_mask3_fmsub_round_sd(__m128d a, __m128d b, __m128d c, __mmask8 k, int r);
+
+
VFMSUBxxxSD __m128d _mm_fmsub_sd (__m128d a, __m128d b, __m128d c);
+
+
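A minimal sketch (illustrative values, FMA-capable toolchain assumed): the scalar form operates on the low double only, while bits 127:64 of the result are carried over from the first operand, as described above.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128d a = _mm_set_pd(99.0, 2.0);   /* element 1 = 99.0, element 0 = 2.0 */
    __m128d b = _mm_set_sd(3.0);
    __m128d c = _mm_set_sd(1.0);
    /* Low element: 2.0*3.0 - 1.0 = 5.0; high element keeps 99.0 from a. */
    __m128d r = _mm_fmsub_sd(a, b, c);
    double out[2];
    _mm_storeu_pd(out, r);
    printf("low=%g high=%g\n", out[0], out[1]);   /* low=5 high=99 */
    return 0;
}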

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vfmsub132sh.vfnmsub132sh.vfmsub213sh.vfnmsub213sh.vfmsub231sh.vfnmsub231sh.html b/x86/vfmsub132sh.vfnmsub132sh.vfmsub213sh.vfnmsub213sh.vfmsub231sh.vfnmsub231sh.html new file mode 100644 index 0000000..0531624 --- /dev/null +++ b/x86/vfmsub132sh.vfnmsub132sh.vfmsub213sh.vfnmsub213sh.vfmsub231sh.vfnmsub231sh.html @@ -0,0 +1,197 @@ + +VFMSUB132SH/VFNMSUB132SH/VFMSUB213SH/VFNMSUB213SH/VFMSUB231SH/VFNMSUB231SH + — Fused Multiply-Subtract of Scalar FP16 Values

VFMSUB132SH/VFNMSUB132SH/VFMSUB213SH/VFNMSUB213SH/VFMSUB231SH/VFNMSUB231SH + — Fused Multiply-Subtract of Scalar FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.MAP6.W0 9B /r VFMSUB132SH xmm1{k1}{z}, xmm2, xmm3/m16 {er}AV/VAVX512-FP16Multiply FP16 values from xmm1 and xmm3/m16, subtract xmm2, and store the result in xmm1 subject to writemask k1.
EVEX.LLIG.66.MAP6.W0 AB /r VFMSUB213SH xmm1{k1}{z}, xmm2, xmm3/m16 {er}AV/VAVX512-FP16Multiply FP16 values from xmm1 and xmm2, subtract xmm3/m16, and store the result in xmm1 subject to writemask k1.
EVEX.LLIG.66.MAP6.W0 BB /r VFMSUB231SH xmm1{k1}{z}, xmm2, xmm3/m16 {er}AV/VAVX512-FP16Multiply FP16 values from xmm2 and xmm3/m16, subtract xmm1, and store the result in xmm1 subject to writemask k1.
EVEX.LLIG.66.MAP6.W0 9F /r VFNMSUB132SH xmm1{k1}{z}, xmm2, xmm3/m16 {er}AV/VAVX512-FP16Multiply FP16 values from xmm1 and xmm3/m16, and negate the value. Subtract xmm2 from this value, and store the result in xmm1 subject to writemask k1.
EVEX.LLIG.66.MAP6.W0 AF /r VFNMSUB213SH xmm1{k1}{z}, xmm2, xmm3/m16 {er}AV/VAVX512-FP16Multiply FP16 values from xmm1 and xmm2, and negate the value. Subtract xmm3/m16 from this value, and store the result in xmm1 subject to writemask k1.
EVEX.LLIG.66.MAP6.W0 BF /r VFNMSUB231SH xmm1{k1}{z}, xmm2, xmm3/m16 {er}AV/VAVX512-FP16Multiply FP16 values from xmm2 and xmm3/m16, and negate the value. Subtract xmm1 from this value, and store the result in xmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction performs a scalar multiply-subtract or negated multiply-subtract computation on the low FP16 values using three source operands and writes the result in the destination operand. The destination operand is also the first source operand. The “N” (negated) forms of this instruction subtract the remaining operand from the negated infinite precision intermediate product. The notations “132”, “213” and “231” indicate the use of the operands in ±A * B − C, where each digit corresponds to the operand number, with the destination being operand 1; see Table 5-7.

+

Bits 127:16 of the destination operand are preserved. Bits MAXVL-1:128 of the destination operand are zeroed. The low FP16 element of the destination is updated according to the writemask.

+
+ + + + + + + + + + + + +
NotationOperands
132dest = ± dest*src3-src2
231dest = ± src2*src3-dest
213dest = ± src2*dest-src3
+
Table 5-7. VF[,N]MSUB[132,213,231]SH Notation for Operands
+

Operation + ¶ +

+

VF[,N]MSUB132SH DEST, SRC2, SRC3 (EVEX encoded versions) + ¶ +

+
IF EVEX.b = 1 and SRC3 is a register:
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+IF k1[0] OR *no writemask*:
+    IF *negative form*:
+        DEST.fp16[0] := RoundFPControl(-DEST.fp16[0]*SRC3.fp16[0] - SRC2.fp16[0])
+    ELSE:
+        DEST.fp16[0] := RoundFPControl(DEST.fp16[0]*SRC3.fp16[0] - SRC2.fp16[0])
+ELSE IF *zeroing*:
+    DEST.fp16[0] := 0
+// else DEST.fp16[0] remains unchanged
+//DEST[127:16] remains unchanged
+DEST[MAXVL-1:128] := 0
+
+

VF[,N]MSUB213SH DEST, SRC2, SRC3 (EVEX encoded versions) + ¶ +

+
IF EVEX.b = 1 and SRC3 is a register:
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+IF k1[0] OR *no writemask*:
+    IF *negative form*:
+        DEST.fp16[0] := RoundFPControl(-SRC2.fp16[0]*DEST.fp16[0] - SRC3.fp16[0])
+    ELSE:
+        DEST.fp16[0] := RoundFPControl(SRC2.fp16[0]*DEST.fp16[0] - SRC3.fp16[0])
+ELSE IF *zeroing*:
+    DEST.fp16[0] := 0
+// else DEST.fp16[0] remains unchanged
+//DEST[127:16] remains unchanged
+DEST[MAXVL-1:128] := 0
+
+

VF[,N]MSUB231SH DEST, SRC2, SRC3 (EVEX encoded versions) + ¶ +

+
IF EVEX.b = 1 and SRC3 is a register:
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+IF k1[0] OR *no writemask*:
+    IF *negative form*:
+        DEST.fp16[0] := RoundFPControl(-SRC2.fp16[0]*SRC3.fp16[0] - DEST.fp16[0])
+    ELSE:
+        DEST.fp16[0] := RoundFPControl(SRC2.fp16[0]*SRC3.fp16[0] - DEST.fp16[0])
+ELSE IF *zeroing*:
+    DEST.fp16[0] := 0
+// else DEST.fp16[0] remains unchanged
+//DEST[127:16] remains unchanged
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFMSUB132SH, VFMSUB213SH, and VFMSUB231SH: __m128h _mm_fmsub_round_sh (__m128h a, __m128h b, __m128h c, const int rounding);
+
+
__m128h _mm_mask_fmsub_round_sh (__m128h a, __mmask8 k, __m128h b, __m128h c, const int rounding);
+
+
__m128h _mm_mask3_fmsub_round_sh (__m128h a, __m128h b, __m128h c, __mmask8 k, const int rounding);
+
+
__m128h _mm_maskz_fmsub_round_sh (__mmask8 k, __m128h a, __m128h b, __m128h c, const int rounding);
+
+
__m128h _mm_fmsub_sh (__m128h a, __m128h b, __m128h c);
+
+
__m128h _mm_mask_fmsub_sh (__m128h a, __mmask8 k, __m128h b, __m128h c);
+
+
__m128h _mm_mask3_fmsub_sh (__m128h a, __m128h b, __m128h c, __mmask8 k);
+
+
__m128h _mm_maskz_fmsub_sh (__mmask8 k, __m128h a, __m128h b, __m128h c);
+
+
VFNMSUB132SH, VFNMSUB213SH, and VFNMSUB231SH: __m128h _mm_fnmsub_round_sh (__m128h a, __m128h b, __m128h c, const int rounding);
+
+
__m128h _mm_mask_fnmsub_round_sh (__m128h a, __mmask8 k, __m128h b, __m128h c, const int rounding);
+
+
__m128h _mm_mask3_fnmsub_round_sh (__m128h a, __m128h b, __m128h c, __mmask8 k, const int rounding);
+
+
__m128h _mm_maskz_fnmsub_round_sh (__mmask8 k, __m128h a, __m128h b, __m128h c, const int rounding);
+
+
__m128h _mm_fnmsub_sh (__m128h a, __m128h b, __m128h c);
+
+
__m128h _mm_mask_fnmsub_sh (__m128h a, __mmask8 k, __m128h b, __m128h c);
+
+
__m128h _mm_mask3_fnmsub_sh (__m128h a, __m128h b, __m128h c, __mmask8 k);
+
+
__m128h _mm_maskz_fnmsub_sh (__mmask8 k, __m128h a, __m128h b, __m128h c);
+
+
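A hedged sketch of the scalar FP16 intrinsics listed above, assuming a toolchain with AVX512-FP16 support (the _Float16 type, <immintrin.h>, compiled with e.g. -mavx512fp16 -mavx512vl); the helper intrinsics _mm_set_sh and _mm_cvtsh_h are the usual scalar FP16 accessors and are not part of this page.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128h a = _mm_set_sh((_Float16)2.0f);
    __m128h b = _mm_set_sh((_Float16)3.0f);
    __m128h c = _mm_set_sh((_Float16)1.0f);
    /* Low FP16 element only: fmsub -> 2*3 - 1 = 5, fnmsub -> -(2*3) - 1 = -7. */
    _Float16 r1 = _mm_cvtsh_h(_mm_fmsub_sh(a, b, c));
    _Float16 r2 = _mm_cvtsh_h(_mm_fnmsub_sh(a, b, c));
    printf("fmsub=%g fnmsub=%g\n", (double)r1, (double)r2);
    return 0;
}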

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Overflow, Precision, Denormal

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vfmsub132ss.vfmsub213ss.vfmsub231ss.html b/x86/vfmsub132ss.vfmsub213ss.vfmsub231ss.html new file mode 100644 index 0000000..ef90dcd --- /dev/null +++ b/x86/vfmsub132ss.vfmsub213ss.vfmsub231ss.html @@ -0,0 +1,207 @@ + +VFMSUB132SS/VFMSUB213SS/VFMSUB231SS + — Fused Multiply-Subtract of Scalar SinglePrecision Floating-Point Values

VFMSUB132SS/VFMSUB213SS/VFMSUB231SS + — Fused Multiply-Subtract of Scalar Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.LIG.66.0F38.W0 9B /r VFMSUB132SS xmm1, xmm2, xmm3/m32AV/VFMAMultiply scalar single precision floating-point value from xmm1 and xmm3/m32, subtract xmm2 and put result in xmm1.
VEX.LIG.66.0F38.W0 AB /r VFMSUB213SS xmm1, xmm2, xmm3/m32AV/VFMAMultiply scalar single precision floating-point value from xmm1 and xmm2, subtract xmm3/m32 and put result in xmm1.
VEX.LIG.66.0F38.W0 BB /r VFMSUB231SS xmm1, xmm2, xmm3/m32AV/VFMAMultiply scalar single precision floating-point value from xmm2 and xmm3/m32, subtract xmm1 and put result in xmm1.
EVEX.LLIG.66.0F38.W0 9B /r VFMSUB132SS xmm1 {k1}{z}, xmm2, xmm3/m32{er}BV/VAVX512FMultiply scalar single precision floating-point value from xmm1 and xmm3/m32, subtract xmm2 and put result in xmm1.
EVEX.LLIG.66.0F38.W0 AB /r VFMSUB213SS xmm1 {k1}{z}, xmm2, xmm3/m32{er}BV/VAVX512FMultiply scalar single precision floating-point value from xmm1 and xmm2, subtract xmm3/m32 and put result in xmm1.
EVEX.LLIG.66.0F38.W0 BB /r VFMSUB231SS xmm1 {k1}{z}, xmm2, xmm3/m32{er}BV/VAVX512FMultiply scalar single precision floating-point value from xmm2 and xmm3/m32, subtract xmm1 and put result in xmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
BTuple1 ScalarModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD multiply-subtract computation on the low packed single precision floating-point values using three source operands and writes the multiply-subtract result in the destination operand. The destination operand is also the first source operand. The second operand must be an XMM register. The third source operand can be an XMM register or a 32-bit memory location.

+

VFMSUB132SS: Multiplies the low packed single precision floating-point value from the first source operand to the low packed single precision floating-point value in the third source operand. From the infinite precision intermediate result, subtracts the low packed single precision floating-point values in the second source operand, performs rounding and stores the resulting packed single precision floating-point value to the destination operand (first source operand).

+

VFMSUB213SS: Multiplies the low packed single precision floating-point value from the second source operand to the low packed single precision floating-point value in the first source operand. From the infinite precision intermediate result, subtracts the low packed single precision floating-point value in the third source operand, performs rounding and stores the resulting packed single precision floating-point value to the destination operand (first source operand).

+

VFMSUB231SS: Multiplies the low packed single precision floating-point value from the second source to the low packed single precision floating-point value in the third source operand. From the infinite precision intermediate result, subtracts the low packed single precision floating-point value in the first source operand, performs rounding and stores the resulting packed single precision floating-point value to the destination operand (first source operand).

+

VEX.128 and EVEX encoded version: The destination operand (also first source operand) is encoded in reg_field. The second source operand is encoded in VEX.vvvv/EVEX.vvvv. The third source operand is encoded in rm_field. Bits 127:32 of the destination are unchanged. Bits MAXVL-1:128 of the destination register are zeroed.

+

EVEX encoded version: The low doubleword element of the destination is updated according to the writemask.

+

Compiler tools may optionally support a complementary mnemonic for each instruction mnemonic listed in the opcode/instruction column of the summary table. The behavior of the complementary mnemonic in situations involving NaNs is governed by the definition of the instruction mnemonic defined in the opcode/instruction column.

+

Operation + ¶ +

+
In the operations below, “*” and “-” symbols represent multiplication and subtraction with infinite precision inputs and outputs (no
+rounding).
+
+

VFMSUB132SS DEST, SRC2, SRC3 (EVEX encoded version) + ¶ +

+
IF (EVEX.b = 1) and SRC3 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[31:0] := RoundFPControl(DEST[31:0]*SRC3[31:0] - SRC2[31:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[31:0] := 0
+        FI;
+FI;
+DEST[127:32] := DEST[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VFMSUB213SS DEST, SRC2, SRC3 (EVEX encoded version) + ¶ +

+
IF (EVEX.b = 1) and SRC3 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[31:0] := RoundFPControl(SRC2[31:0]*DEST[31:0] - SRC3[31:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[31:0] := 0
+        FI;
+FI;
+DEST[127:32] := DEST[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VFMSUB231SS DEST, SRC2, SRC3 (EVEX encoded version) + ¶ +

+
IF (EVEX.b = 1) and SRC3 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[31:0] := RoundFPControl(SRC2[31:0]*SRC3[31:0] - DEST[31:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[31:0] := 0
+        FI;
+FI;
+DEST[127:32] := DEST[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VFMSUB132SS DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
DEST[31:0] := RoundFPControl_MXCSR(DEST[31:0]*SRC3[31:0] - SRC2[31:0])
+DEST[127:32] := DEST[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VFMSUB213SS DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
DEST[31:0] := RoundFPControl_MXCSR(SRC2[31:0]*DEST[31:0] - SRC3[31:0])
+DEST[127:32] := DEST[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VFMSUB231SS DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
DEST[31:0] := RoundFPControl_MXCSR(SRC2[31:0]*SRC3[31:0] - DEST[31:0])
+DEST[127:32] := DEST[127:32]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFMSUBxxxSS __m128 _mm_fmsub_round_ss(__m128 a, __m128 b, __m128 c, int r);
+
+
VFMSUBxxxSS __m128 _mm_mask_fmsub_ss(__m128 a, __mmask8 k, __m128 b, __m128 c);
+
+
VFMSUBxxxSS __m128 _mm_maskz_fmsub_ss(__mmask8 k, __m128 a, __m128 b, __m128 c);
+
+
VFMSUBxxxSS __m128 _mm_mask3_fmsub_ss(__m128 a, __m128 b, __m128 c, __mmask8 k);
+
+
VFMSUBxxxSS __m128 _mm_mask_fmsub_round_ss(__m128 a, __mmask8 k, __m128 b, __m128 c, int r);
+
+
VFMSUBxxxSS __m128 _mm_maskz_fmsub_round_ss(__mmask8 k, __m128 a, __m128 b, __m128 c, int r);
+
+
VFMSUBxxxSS __m128 _mm_mask3_fmsub_round_ss(__m128 a, __m128 b, __m128 c, __mmask8 k, int r);
+
+
VFMSUBxxxSS __m128 _mm_fmsub_ss (__m128 a, __m128 b, __m128 c);
+
+
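As a sketch (illustrative values; assumes an FMA- and AVX512F-capable toolchain, e.g. -mfma -mavx512f): the plain intrinsic rounds according to MXCSR, while the _round_ variant maps to the EVEX {er} form, which overrides MXCSR for this one instruction and suppresses exceptions (SAE). Here 2*3 − 1 = 5 is exact, so both calls return the same value; the point is only the API shape.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128 a = _mm_set_ss(2.0f);
    __m128 b = _mm_set_ss(3.0f);
    __m128 c = _mm_set_ss(1.0f);
    /* Rounding taken from MXCSR.RC. */
    float r0 = _mm_cvtss_f32(_mm_fmsub_ss(a, b, c));
    /* EVEX {er} form: per-instruction round-toward-zero, all exceptions suppressed. */
    float r1 = _mm_cvtss_f32(_mm_fmsub_round_ss(a, b, c,
                                 _MM_FROUND_TO_ZERO | _MM_FROUND_NO_EXC));
    printf("%g %g\n", r0, r1);   /* 5 5 */
    return 0;
}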

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vfmsubadd132pd.vfmsubadd213pd.vfmsubadd231pd.html b/x86/vfmsubadd132pd.vfmsubadd213pd.vfmsubadd231pd.html new file mode 100644 index 0000000..928640c --- /dev/null +++ b/x86/vfmsubadd132pd.vfmsubadd213pd.vfmsubadd231pd.html @@ -0,0 +1,437 @@ + +VFMSUBADD132PD/VFMSUBADD213PD/VFMSUBADD231PD + — Fused Multiply-AlternatingSubtract/Add of Packed Double Precision Floating-Point Values

VFMSUBADD132PD/VFMSUBADD213PD/VFMSUBADD231PD + — Fused Multiply-Alternating Subtract/Add of Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W1 97 /r VFMSUBADD132PD xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed double precision floating-point values from xmm1 and xmm3/mem, subtract/add elements in xmm2 and put result in xmm1.
VEX.128.66.0F38.W1 A7 /r VFMSUBADD213PD xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed double precision floating-point values from xmm1 and xmm2, subtract/add elements in xmm3/mem and put result in xmm1.
VEX.128.66.0F38.W1 B7 /r VFMSUBADD231PD xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed double precision floating-point values from xmm2 and xmm3/mem, subtract/add elements in xmm1 and put result in xmm1.
VEX.256.66.0F38.W1 97 /r VFMSUBADD132PD ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed double precision floating-point values from ymm1 and ymm3/mem, subtract/add elements in ymm2 and put result in ymm1.
VEX.256.66.0F38.W1 A7 /r VFMSUBADD213PD ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed double precision floating-point values from ymm1 and ymm2, subtract/add elements in ymm3/mem and put result in ymm1.
VEX.256.66.0F38.W1 B7 /r VFMSUBADD231PD ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed double precision floating-point values from ymm2 and ymm3/mem, subtract/add elements in ymm1 and put result in ymm1.
EVEX.128.66.0F38.W1 97 /r VFMSUBADD132PD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from xmm1 and xmm3/m128/m64bcst, subtract/add elements in xmm2 and put result in xmm1 subject to writemask k1.
EVEX.128.66.0F38.W1 A7 /r VFMSUBADD213PD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from xmm1 and xmm2, subtract/add elements in xmm3/m128/m64bcst and put result in xmm1 subject to writemask k1.
EVEX.128.66.0F38.W1 B7 /r VFMSUBADD231PD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from xmm2 and xmm3/m128/m64bcst, subtract/add elements in xmm1 and put result in xmm1 subject to writemask k1.
EVEX.256.66.0F38.W1 97 /r VFMSUBADD132PD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from ymm1 and ymm3/m256/m64bcst, subtract/add elements in ymm2 and put result in ymm1 subject to writemask k1.
EVEX.256.66.0F38.W1 A7 /r VFMSUBADD213PD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from ymm1 and ymm2, subtract/add elements in ymm3/m256/m64bcst and put result in ymm1 subject to writemask k1.
EVEX.256.66.0F38.W1 B7 /r VFMSUBADD231PD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from ymm2 and ymm3/m256/m64bcst, subtract/add elements in ymm1 and put result in ymm1 subject to writemask k1.
EVEX.512.66.0F38.W1 97 /r VFMSUBADD132PD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er}BV/VAVX512FMultiply packed double precision floating-point values from zmm1 and zmm3/m512/m64bcst, subtract/add elements in zmm2 and put result in zmm1 subject to writemask k1.
EVEX.512.66.0F38.W1 A7 /r VFMSUBADD213PD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er}BV/VAVX512FMultiply packed double precision floating-point values from zmm1 and zmm2, subtract/add elements in zmm3/m512/m64bcst and put result in zmm1 subject to writemask k1.
EVEX.512.66.0F38.W1 B7 /r VFMSUBADD231PD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er}BV/VAVX512FMultiply packed double precision floating-point values from zmm2 and zmm3/m512/m64bcst, subtract/add elements in zmm1 and put result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
BFullModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

VFMSUBADD132PD: Multiplies the two, four, or eight packed double precision floating-point values from the first source operand to the two, four, or eight packed double precision floating-point values in the third source operand. From the infinite precision intermediate result, subtracts the odd double precision floating-point elements and adds the even double precision floating-point values in the second source operand, performs rounding and stores the resulting two, four, or eight packed double precision floating-point values to the destination operand (first source operand).

+

VFMSUBADD213PD: Multiplies the two, four, or eight packed double precision floating-point values from the second source operand to the two, four, or eight packed double precision floating-point values in the first source operand. From the infinite precision intermediate result, subtracts the odd double precision floating-point elements and adds the even double precision floating-point values in the third source operand, performs rounding and stores the resulting two, four, or eight packed double precision floating-point values to the destination operand (first source operand).

+

VFMSUBADD231PD: Multiplies the two, four, or eight packed double precision floating-point values from the second source operand to the two, four, or eight packed double precision floating-point values in the third source operand. From the infinite precision intermediate result, subtracts the odd double precision floating-point elements and adds the even double precision floating-point values in the first source operand, performs rounding and stores the resulting two, four, or eight packed double precision floating-point values to the destination operand (first source operand).

+

EVEX encoded versions: The destination operand (also first source operand) and the second source operand are ZMM/YMM/XMM registers. The third source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand is conditionally updated with write mask k1.

+

VEX.256 encoded version: The destination operand (also first source operand) is a YMM register and encoded in reg_field. The second source operand is a YMM register and encoded in VEX.vvvv. The third source operand is a YMM register or a 256-bit memory location and encoded in rm_field.

+

VEX.128 encoded version: The destination operand (also first source operand) is a XMM register and encoded in reg_field. The second source operand is a XMM register and encoded in VEX.vvvv. The third source operand is a XMM register or a 128-bit memory location and encoded in rm_field. The upper 128 bits of the YMM destination register are zeroed.

+

Compiler tools may optionally support a complementary mnemonic for each instruction mnemonic listed in the opcode/instruction column of the summary table. The behavior of the complementary mnemonic in situations involving NaNs is governed by the definition of the instruction mnemonic defined in the opcode/instruction column.

+

Operation + ¶ +

+
In the operations below, “*” and “+” symbols represent multiplication and addition with infinite precision inputs and outputs (no
+rounding).
+
+

VFMSUBADD132PD DEST, SRC2, SRC3 + ¶ +

+
IF (VEX.128) THEN
+    DEST[63:0] := RoundFPControl_MXCSR(DEST[63:0]*SRC3[63:0] + SRC2[63:0])
+    DEST[127:64] := RoundFPControl_MXCSR(DEST[127:64]*SRC3[127:64] - SRC2[127:64])
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[63:0] := RoundFPControl_MXCSR(DEST[63:0]*SRC3[63:0] + SRC2[63:0])
+    DEST[127:64] := RoundFPControl_MXCSR(DEST[127:64]*SRC3[127:64] - SRC2[127:64])
+    DEST[191:128] := RoundFPControl_MXCSR(DEST[191:128]*SRC3[191:128] + SRC2[191:128])
+    DEST[255:192] := RoundFPControl_MXCSR(DEST[255:192]*SRC3[255:192] - SRC2[255:192])
+FI
+
+

VFMSUBADD213PD DEST, SRC2, SRC3 + ¶ +

+
IF (VEX.128) THEN
+    DEST[63:0] := RoundFPControl_MXCSR(SRC2[63:0]*DEST[63:0] + SRC3[63:0])
+    DEST[127:64] := RoundFPControl_MXCSR(SRC2[127:64]*DEST[127:64] - SRC3[127:64])
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[63:0] := RoundFPControl_MXCSR(SRC2[63:0]*DEST[63:0] + SRC3[63:0])
+    DEST[127:64] := RoundFPControl_MXCSR(SRC2[127:64]*DEST[127:64] - SRC3[127:64])
+    DEST[191:128] := RoundFPControl_MXCSR(SRC2[191:128]*DEST[191:128] + SRC3[191:128])
+    DEST[255:192] := RoundFPControl_MXCSR(SRC2[255:192]*DEST[255:192] - SRC3[255:192])
+FI
+
+

VFMSUBADD231PD DEST, SRC2, SRC3 + ¶ +

+
IF (VEX.128) THEN
+    DEST[63:0] := RoundFPControl_MXCSR(SRC2[63:0]*SRC3[63:0] + DEST[63:0])
+    DEST[127:64] := RoundFPControl_MXCSR(SRC2[127:64]*SRC3[127:64] - DEST[127:64])
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[63:0] := RoundFPControl_MXCSR(SRC2[63:0]*SRC3[63:0] + DEST[63:0])
+    DEST[127:64] := RoundFPControl_MXCSR(SRC2[127:64]*SRC3[127:64] - DEST[127:64])
+    DEST[191:128] := RoundFPControl_MXCSR(SRC2[191:128]*SRC3[191:128] + DEST[191:128])
+    DEST[255:192] := RoundFPControl_MXCSR(SRC2[255:192]*SRC3[255:192] - DEST[255:192])
+FI
+
+

VFMSUBADD132PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF j *is even*
+                THEN DEST[i+63:i] :=
+                    RoundFPControl(DEST[i+63:i]*SRC3[i+63:i] + SRC2[i+63:i])
+                ELSE DEST[i+63:i] :=
+                    RoundFPControl(DEST[i+63:i]*SRC3[i+63:i] - SRC2[i+63:i])
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMSUBADD132PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF j *is even*
+                THEN
+                    IF (EVEX.b = 1)
+                        THEN
+                            DEST[i+63:i] :=
+                    RoundFPControl_MXCSR(DEST[i+63:i]*SRC3[63:0] + SRC2[i+63:i])
+                        ELSE
+                            DEST[i+63:i] :=
+                    RoundFPControl_MXCSR(DEST[i+63:i]*SRC3[i+63:i] + SRC2[i+63:i])
+                    FI;
+                ELSE
+                    IF (EVEX.b = 1)
+                        THEN
+                            DEST[i+63:i] :=
+                    RoundFPControl_MXCSR(DEST[i+63:i]*SRC3[63:0] - SRC2[i+63:i])
+                        ELSE
+                            DEST[i+63:i] :=
+                    RoundFPControl_MXCSR(DEST[i+63:i]*SRC3[i+63:i] - SRC2[i+63:i])
+                    FI;
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMSUBADD213PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF j *is even*
+                THEN DEST[i+63:i] :=
+                    RoundFPControl(SRC2[i+63:i]*DEST[i+63:i] + SRC3[i+63:i])
+                ELSE DEST[i+63:i] :=
+                    RoundFPControl(SRC2[i+63:i]*DEST[i+63:i] - SRC3[i+63:i])
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMSUBADD213PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF j *is even*
+                THEN
+                    IF (EVEX.b = 1)
+                        THEN
+                            DEST[i+63:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+63:i]*DEST[i+63:i] + SRC3[63:0])
+                        ELSE
+                            DEST[i+63:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+63:i]*DEST[i+63:i] + SRC3[i+63:i])
+                    FI;
+                ELSE
+                    IF (EVEX.b = 1)
+                        THEN
+                            DEST[i+63:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+63:i]*DEST[i+63:i] - SRC3[63:0])
+                        ELSE
+                            DEST[i+63:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+63:i]*DEST[i+63:i] - SRC3[i+63:i])
+                    FI;
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMSUBADD231PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF j *is even*
+                THEN DEST[i+63:i] :=
+                    RoundFPControl(SRC2[i+63:i]*SRC3[i+63:i] + DEST[i+63:i])
+                ELSE DEST[i+63:i] :=
+                    RoundFPControl(SRC2[i+63:i]*SRC3[i+63:i] - DEST[i+63:i])
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMSUBADD231PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF j *is even*
+                THEN
+                    IF (EVEX.b = 1)
+                        THEN
+                            DEST[i+63:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+63:i]*SRC3[63:0] + DEST[i+63:i])
+                        ELSE
+                            DEST[i+63:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+63:i]*SRC3[i+63:i] + DEST[i+63:i])
+                    FI;
+                ELSE
+                    IF (EVEX.b = 1)
+                        THEN
+                            DEST[i+63:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+63:i]*SRC3[63:0] - DEST[i+63:i])
+                        ELSE
+                            DEST[i+63:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+63:i]*SRC3[i+63:i] - DEST[i+63:i])
+                    FI;
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFMSUBADDxxxPD __m512d _mm512_fmsubadd_pd(__m512d a, __m512d b, __m512d c);
+
+
VFMSUBADDxxxPD __m512d _mm512_fmsubadd_round_pd(__m512d a, __m512d b, __m512d c, int r);
+
+
VFMSUBADDxxxPD __m512d _mm512_mask_fmsubadd_pd(__m512d a, __mmask8 k, __m512d b, __m512d c);
+
+
VFMSUBADDxxxPD __m512d _mm512_maskz_fmsubadd_pd(__mmask8 k, __m512d a, __m512d b, __m512d c);
+
+
VFMSUBADDxxxPD __m512d _mm512_mask3_fmsubadd_pd(__m512d a, __m512d b, __m512d c, __mmask8 k);
+
+
VFMSUBADDxxxPD __m512d _mm512_mask_fmsubadd_round_pd(__m512d a, __mmask8 k, __m512d b, __m512d c, int r);
+
+
VFMSUBADDxxxPD __m512d _mm512_maskz_fmsubadd_round_pd(__mmask8 k, __m512d a, __m512d b, __m512d c, int r);
+
+
VFMSUBADDxxxPD __m512d _mm512_mask3_fmsubadd_round_pd(__m512d a, __m512d b, __m512d c, __mmask8 k, int r);
+
+
VFMSUBADDxxxPD __m256d _mm256_mask_fmsubadd_pd(__m256d a, __mmask8 k, __m256d b, __m256d c);
+
+
VFMSUBADDxxxPD __m256d _mm256_maskz_fmsubadd_pd(__mmask8 k, __m256d a, __m256d b, __m256d c);
+
+
VFMSUBADDxxxPD __m256d _mm256_mask3_fmsubadd_pd(__m256d a, __m256d b, __m256d c, __mmask8 k);
+
+
VFMSUBADDxxxPD __m128d _mm_mask_fmsubadd_pd(__m128d a, __mmask8 k, __m128d b, __m128d c);
+
+
VFMSUBADDxxxPD __m128d _mm_maskz_fmsubadd_pd(__mmask8 k, __m128d a, __m128d b, __m128d c);
+
+
VFMSUBADDxxxPD __m128d _mm_mask3_fmsubadd_pd(__m128d a, __m128d b, __m128d c, __mmask8 k);
+
+
VFMSUBADDxxxPD __m128d _mm_fmsubadd_pd (__m128d a, __m128d b, __m128d c);
+
+
VFMSUBADDxxxPD __m256d _mm256_fmsubadd_pd (__m256d a, __m256d b, __m256d c);
+
+
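A short sketch of the alternating pattern (illustrative values, FMA-capable toolchain assumed): even-indexed elements get a*b + c and odd-indexed elements get a*b − c, matching the pseudocode above.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m256d a = _mm256_set1_pd(2.0);
    __m256d b = _mm256_set1_pd(3.0);
    __m256d c = _mm256_setr_pd(1.0, 1.0, 10.0, 10.0);
    /* Elements 0,2 (even): 2*3 + c -> 7, 16.  Elements 1,3 (odd): 2*3 - c -> 5, -4. */
    __m256d r = _mm256_fmsubadd_pd(a, b, c);
    double out[4];
    _mm256_storeu_pd(out, r);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);   /* 7 5 16 -4 */
    return 0;
}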

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vfmsubadd132ph.vfmsubadd213ph.vfmsubadd231ph.html b/x86/vfmsubadd132ph.vfmsubadd213ph.vfmsubadd231ph.html new file mode 100644 index 0000000..0b83b42 --- /dev/null +++ b/x86/vfmsubadd132ph.vfmsubadd213ph.vfmsubadd231ph.html @@ -0,0 +1,282 @@ + +VFMSUBADD132PH/VFMSUBADD213PH/VFMSUBADD231PH + — Fused Multiply-AlternatingSubtract/Add of Packed FP16 Values

VFMSUBADD132PH/VFMSUBADD213PH/VFMSUBADD231PH + — Fused Multiply-Alternating Subtract/Add of Packed FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.MAP6.W0 97 /r VFMSUBADD132PH xmm1{k1}{z}, xmm2, xmm3/m128/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from xmm1 and xmm3/m128/m16bcst, subtract/add elements in xmm2, and store the result in xmm1 subject to writemask k1.
EVEX.256.66.MAP6.W0 97 /r VFMSUBADD132PH ymm1{k1}{z}, ymm2, ymm3/m256/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from ymm1 and ymm3/m256/m16bcst, subtract/add elements in ymm2, and store the result in ymm1 subject to writemask k1.
EVEX.512.66.MAP6.W0 97 /r VFMSUBADD132PH zmm1{k1}{z}, zmm2, zmm3/m512/m16bcst {er}AV/VAVX512-FP16Multiply packed FP16 values from zmm1 and zmm3/m512/m16bcst, subtract/add elements in zmm2, and store the result in zmm1 subject to writemask k1.
EVEX.128.66.MAP6.W0 A7 /r VFMSUBADD213PH xmm1{k1}{z}, xmm2, xmm3/m128/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from xmm1 and xmm2, subtract/add elements in xmm3/m128/m16bcst, and store the result in xmm1 subject to writemask k1.
EVEX.256.66.MAP6.W0 A7 /r VFMSUBADD213PH ymm1{k1}{z}, ymm2, ymm3/m256/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from ymm1 and ymm2, subtract/add elements in ymm3/m256/m16bcst, and store the result in ymm1 subject to writemask k1.
EVEX.512.66.MAP6.W0 A7 /r VFMSUBADD213PH zmm1{k1}{z}, zmm2, zmm3/m512/m16bcst {er}AV/VAVX512-FP16Multiply packed FP16 values from zmm1 and zmm2, subtract/add elements in zmm3/m512/m16bcst, and store the result in zmm1 subject to writemask k1.
EVEX.128.66.MAP6.W0 B7 /r VFMSUBADD231PH xmm1{k1}{z}, xmm2, xmm3/m128/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from xmm2 and xmm3/m128/m16bcst, subtract/add elements in xmm1, and store the result in xmm1 subject to writemask k1.
EVEX.256.66.MAP6.W0 B7 /r VFMSUBADD231PH ymm1{k1}{z}, ymm2, ymm3/m256/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from ymm2 and ymm3/m256/m16bcst, subtract/add elements in ymm1, and store the result in ymm1 subject to writemask k1.
EVEX.512.66.MAP6.W0 B7 /r VFMSUBADD231PH zmm1{k1}{z}, zmm2, zmm3/m512/m16bcst {er}AV/VAVX512-FP16Multiply packed FP16 values from zmm2 and zmm3/m512/m16bcst, subtract/add elements in zmm1, and store the result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction performs a packed multiply-add (even elements) or multiply-subtract (odd elements) computation on FP16 values using three source operands and writes the results in the destination operand. The destination operand is also the first source operand. The notation “132”, “213” and “231” indicate the use of the operands in A * B ± C, where each digit corresponds to the operand number, with the destination being operand 1; see Table 5-8.

+

The destination elements are updated according to the writemask.

+
+ + + + + + + + + + + + + + + + +
NotationOdd ElementsEven Elements
132dest = dest*src3-src2dest = dest*src3+src2
231dest = src2*src3-destdest = src2*src3+dest
213dest = src2*dest-src3dest = src2*dest+src3
+
Table 5-8. VFMSUBADD[132,213,231]PH Notation for Odd and Even Elements
+

Operation + ¶ +

+

VFMSUBADD132PH DEST, SRC2, SRC3 (EVEX encoded versions) when src3 operand is a register + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+IF (VL = 512) AND (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *j is even*:
+            DEST.fp16[j] := RoundFPControl(DEST.fp16[j]*SRC3.fp16[j] + SRC2.fp16[j])
+        ELSE:
+            DEST.fp16[j] := RoundFPControl(DEST.fp16[j]*SRC3.fp16[j] - SRC2.fp16[j])
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

VFMSUBADD132PH DEST, SRC2, SRC3 (EVEX encoded versions) when src3 operand is a memory source + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF EVEX.b = 1:
+            t3 := SRC3.fp16[0]
+        ELSE:
+            t3 := SRC3.fp16[j]
+        IF *j is even*:
+            DEST.fp16[j] := RoundFPControl(DEST.fp16[j] * t3 + SRC2.fp16[j])
+        ELSE:
+            DEST.fp16[j] := RoundFPControl(DEST.fp16[j] * t3 - SRC2.fp16[j])
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

VFMSUBADD213PH DEST, SRC2, SRC3 (EVEX encoded versions) when src3 operand is a register + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+IF (VL = 512) AND (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *j is even*:
+            DEST.fp16[j] := RoundFPControl(SRC2.fp16[j]*DEST.fp16[j] + SRC3.fp16[j])
+        ELSE
+            DEST.fp16[j] := RoundFPControl(SRC2.fp16[j]*DEST.fp16[j] - SRC3.fp16[j])
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

VFMSUBADD213PH DEST, SRC2, SRC3 (EVEX encoded versions) when src3 operand is a memory source + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF EVEX.b = 1:
+            t3 := SRC3.fp16[0]
+        ELSE:
+            t3 := SRC3.fp16[j]
+        IF *j is even*:
+            DEST.fp16[j] := RoundFPControl(SRC2.fp16[j] * DEST.fp16[j] + t3 )
+        ELSE:
+            DEST.fp16[j] := RoundFPControl(SRC2.fp16[j] * DEST.fp16[j] - t3 )
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

VFMSUBADD231PH DEST, SRC2, SRC3 (EVEX encoded versions) when src3 operand is a register + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+IF (VL = 512) AND (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF *j is even*:
+            DEST.fp16[j] := RoundFPControl(SRC2.fp16[j]*SRC3.fp16[j] + DEST.fp16[j])
+        ELSE:
+            DEST.fp16[j] := RoundFPControl(SRC2.fp16[j]*SRC3.fp16[j] - DEST.fp16[j])
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

VFMSUBADD231PH DEST, SRC2, SRC3 (EVEX encoded versions) when src3 operand is a memory source + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF EVEX.b = 1:
+            t3 := SRC3.fp16[0]
+        ELSE:
+            t3 := SRC3.fp16[j]
+        IF *j is even*:
+            DEST.fp16[j] := RoundFPControl(SRC2.fp16[j] * t3 + DEST.fp16[j] )
+        ELSE:
+            DEST.fp16[j] := RoundFPControl(SRC2.fp16[j] * t3 - DEST.fp16[j] )
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFMSUBADD132PH, VFMSUBADD213PH, and VFMSUBADD231PH: __m128h _mm_fmsubadd_ph (__m128h a, __m128h b, __m128h c);
+
+
__m128h _mm_mask_fmsubadd_ph (__m128h a, __mmask8 k, __m128h b, __m128h c);
+
+
__m128h _mm_mask3_fmsubadd_ph (__m128h a, __m128h b, __m128h c, __mmask8 k);
+
+
__m128h _mm_maskz_fmsubadd_ph (__mmask8 k, __m128h a, __m128h b, __m128h c);
+
+
__m256h _mm256_fmsubadd_ph (__m256h a, __m256h b, __m256h c);
+
+
__m256h _mm256_mask_fmsubadd_ph (__m256h a, __mmask16 k, __m256h b, __m256h c);
+
+
__m256h _mm256_mask3_fmsubadd_ph (__m256h a, __m256h b, __m256h c, __mmask16 k);
+
+
__m256h _mm256_maskz_fmsubadd_ph (__mmask16 k, __m256h a, __m256h b, __m256h c);
+
+
__m512h _mm512_fmsubadd_ph (__m512h a, __m512h b, __m512h c);
+
+
__m512h _mm512_mask_fmsubadd_ph (__m512h a, __mmask32 k, __m512h b, __m512h c);
+
+
__m512h _mm512_mask3_fmsubadd_ph (__m512h a, __m512h b, __m512h c, __mmask32 k);
+
+
__m512h _mm512_maskz_fmsubadd_ph (__mmask32 k, __m512h a, __m512h b, __m512h c);
+
+
__m512h _mm512_fmsubadd_round_ph (__m512h a, __m512h b, __m512h c, const int rounding);
+
+
__m512h _mm512_mask_fmsubadd_round_ph (__m512h a, __mmask32 k, __m512h b, __m512h c, const int rounding);
+
+
__m512h _mm512_mask3_fmsubadd_round_ph (__m512h a, __m512h b, __m512h c, __mmask32 k, const int rounding);
+
+
__m512h _mm512_maskz_fmsubadd_round_ph (__mmask32 k, __m512h a, __m512h b, __m512h c, const int rounding);
+
+
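A hedged FP16 sketch of the same alternating pattern, assuming a toolchain with AVX512-FP16 and AVX512VL intrinsics (e.g. -mavx512fp16 -mavx512vl); _mm256_set1_ph and _mm256_storeu_ph are the usual FP16 helper intrinsics and are not part of this page.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m256h a = _mm256_set1_ph((_Float16)2.0f);
    __m256h b = _mm256_set1_ph((_Float16)3.0f);
    __m256h c = _mm256_set1_ph((_Float16)1.0f);
    /* Even elements: 2*3 + 1 = 7; odd elements: 2*3 - 1 = 5. */
    __m256h r = _mm256_fmsubadd_ph(a, b, c);
    _Float16 out[16];
    _mm256_storeu_ph(out, r);
    for (int i = 0; i < 16; i++)
        printf("%d:%g ", i, (double)out[i]);
    printf("\n");
    return 0;
}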

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Overflow, Precision, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vfmsubadd132ps.vfmsubadd213ps.vfmsubadd231ps.html b/x86/vfmsubadd132ps.vfmsubadd213ps.vfmsubadd231ps.html new file mode 100644 index 0000000..7613a9d --- /dev/null +++ b/x86/vfmsubadd132ps.vfmsubadd213ps.vfmsubadd231ps.html @@ -0,0 +1,454 @@ + +VFMSUBADD132PS/VFMSUBADD213PS/VFMSUBADD231PS + — Fused Multiply-AlternatingSubtract/Add of Packed Single Precision Floating-Point Values

VFMSUBADD132PS/VFMSUBADD213PS/VFMSUBADD231PS + — Fused Multiply-Alternating Subtract/Add of Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W0 97 /r VFMSUBADD132PS xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed single precision floating-point values from xmm1 and xmm3/mem, subtract/add elements in xmm2 and put result in xmm1.
VEX.128.66.0F38.W0 A7 /r VFMSUBADD213PS xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed single precision floating-point values from xmm1 and xmm2, subtract/add elements in xmm3/mem and put result in xmm1.
VEX.128.66.0F38.W0 B7 /r VFMSUBADD231PS xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed single precision floating-point values from xmm2 and xmm3/mem, subtract/add elements in xmm1 and put result in xmm1.
VEX.256.66.0F38.W0 97 /r VFMSUBADD132PS ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed single precision floating-point values from ymm1 and ymm3/mem, subtract/add elements in ymm2 and put result in ymm1.
VEX.256.66.0F38.W0 A7 /r VFMSUBADD213PS ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed single precision floating-point values from ymm1 and ymm2, subtract/add elements in ymm3/mem and put result in ymm1.
VEX.256.66.0F38.W0 B7 /r VFMSUBADD231PS ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed single precision floating-point values from ymm2 and ymm3/mem, subtract/add elements in ymm1 and put result in ymm1.
EVEX.128.66.0F38.W0 97 /r VFMSUBADD132PS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from xmm1 and xmm3/m128/m32bcst, subtract/add elements in xmm2 and put result in xmm1 subject to writemask k1.
EVEX.128.66.0F38.W0 A7 /r VFMSUBADD213PS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from xmm1 and xmm2, subtract/add elements in xmm3/m128/m32bcst and put result in xmm1 subject to writemask k1.
EVEX.128.66.0F38.W0 B7 /r VFMSUBADD231PS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from xmm2 and xmm3/m128/m32bcst, subtract/add elements in xmm1 and put result in xmm1 subject to writemask k1.
EVEX.256.66.0F38.W0 97 /r VFMSUBADD132PS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from ymm1 and ymm3/m256/m32bcst, subtract/add elements in ymm2 and put result in ymm1 subject to writemask k1.
EVEX.256.66.0F38.W0 A7 /r VFMSUBADD213PS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from ymm1 and ymm2, subtract/add elements in ymm3/m256/m32bcst and put result in ymm1 subject to writemask k1.
EVEX.256.66.0F38.W0 B7 /r VFMSUBADD231PS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from ymm2 and ymm3/m256/m32bcst, subtract/add elements in ymm1 and put result in ymm1 subject to writemask k1.
EVEX.512.66.0F38.W0 97 /r VFMSUBADD132PS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{er}BV/VAVX512FMultiply packed single precision floating-point values from zmm1 and zmm3/m512/m32bcst, subtract/add elements in zmm2 and put result in zmm1 subject to writemask k1.
EVEX.512.66.0F38.W0 A7 /r VFMSUBADD213PS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{er}BV/VAVX512FMultiply packed single precision floating-point values from zmm1 and zmm2, subtract/add elements in zmm3/m512/m32bcst and put result in zmm1 subject to writemask k1.
EVEX.512.66.0F38.W0 B7 /r VFMSUBADD231PS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{er}BV/VAVX512FMultiply packed single precision floating-point values from zmm2 and zmm3/m512/m32bcst, subtract/add elements in zmm1 and put result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
BFullModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

VFMSUBADD132PS: Multiplies the four, eight or sixteen packed single precision floating-point values from the first source operand to the corresponding packed single precision floating-point values in the third source operand. From the infinite precision intermediate result, subtracts the odd single precision floating-point elements and adds the even single precision floating-point values in the second source operand, performs rounding and stores the resulting packed single precision floating-point values to the destination operand (first source operand).

+

VFMSUBADD213PS: Multiplies the four, eight or sixteen packed single precision floating-point values from the second source operand to the corresponding packed single precision floating-point values in the first source operand. From the infinite precision intermediate result, subtracts the odd single precision floating-point elements and adds the even single precision floating-point values in the third source operand, performs rounding and stores the resulting packed single precision floating-point values to the destination operand (first source operand).

+

VFMSUBADD231PS: Multiplies the four, eight or sixteen packed single precision floating-point values from the second source operand to the corresponding packed single precision floating-point values in the third source operand. From the infinite precision intermediate result, subtracts the odd single precision floating-point elements and adds the even single precision floating-point values in the first source operand, performs rounding and stores the resulting packed single precision floating-point values to the destination operand (first source operand).

+

EVEX encoded versions: The destination operand (also first source operand) and the second source operand are ZMM/YMM/XMM registers. The third source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand is conditionally updated with write mask k1.
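
The broadcast form is rarely written by hand; when one factor is a scalar, a compiler may fold it into the embedded broadcast operand. A minimal C sketch of that pattern, using the 512-bit intrinsic listed later on this page (the helper name is illustrative, and whether the broadcast encoding is actually emitted is a compiler decision, not guaranteed here):

#include <immintrin.h>

/* Multiply a vector by a scalar loaded from memory, then subtract/add an
   accumulator. The _mm512_set1_ps(*s) operand is a candidate for the
   m32bcst form of this instruction; emitting it is up to the compiler. */
static inline __m512 scale_fmsubadd(__m512 v, const float *s, __m512 acc)
{
    return _mm512_fmsubadd_ps(v, _mm512_set1_ps(*s), acc);
}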

+

VEX.256 encoded version: The destination operand (also first source operand) is a YMM register and encoded in reg_field. The second source operand is a YMM register and encoded in VEX.vvvv. The third source operand is a YMM register or a 256-bit memory location and encoded in rm_field.

+

VEX.128 encoded version: The destination operand (also first source operand) is a XMM register and encoded in reg_field. The second source operand is a XMM register and encoded in VEX.vvvv. The third source operand is a XMM register or a 128-bit memory location and encoded in rm_field. The upper 128 bits of the YMM destination register are zeroed.

+

Compiler tools may optionally support a complementary mnemonic for each instruction mnemonic listed in the opcode/instruction column of the summary table. The behavior of the complementary mnemonic in situations involving NaNs is governed by the definition of the instruction mnemonic defined in the opcode/instruction column.
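
The even/odd interleaving is easier to see in scalar form. Below is a minimal C reference model of the 231 variant (function name and memory layout are illustrative, not part of this manual); it follows the pseudocode in the Operation section: even-indexed elements take the added term, odd-indexed elements the subtracted one.

#include <stddef.h>

/* Scalar sketch of VFMSUBADD231PS over n packed floats:
   dest[i] = src2[i]*src3[i] + dest[i]  for even i,
   dest[i] = src2[i]*src3[i] - dest[i]  for odd  i.
   Hardware forms the product at infinite precision and rounds once;
   the double accumulation here only approximates that behavior. */
static void fmsubadd231ps_ref(float *dest, const float *src2,
                              const float *src3, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        double prod = (double)src2[i] * (double)src3[i];
        dest[i] = (float)((i % 2 == 0) ? prod + dest[i] : prod - dest[i]);
    }
}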

+

Operation + ¶ +

+
In the operations below, “*” and “+” symbols represent multiplication and addition with infinite precision inputs and outputs (no
+rounding).
+
+

VFMSUBADD132PS DEST, SRC2, SRC3 + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 2
+ELSEIF (VEX.256)
+    MAXNUM := 4
+FI
+For i = 0 to MAXNUM -1{
+    n := 64*i;
+    DEST[n+31:n] := RoundFPControl_MXCSR(DEST[n+31:n]*SRC3[n+31:n] + SRC2[n+31:n])
+    DEST[n+63:n+32] := RoundFPControl_MXCSR(DEST[n+63:n+32]*SRC3[n+63:n+32] -SRC2[n+63:n+32])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFMSUBADD213PS DEST, SRC2, SRC3 + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 2
+ELSEIF (VEX.256)
+    MAXNUM := 4
+FI
+For i = 0 to MAXNUM -1{
+    n := 64*i;
+    DEST[n+31:n] := RoundFPControl_MXCSR(SRC2[n+31:n]*DEST[n+31:n] +SRC3[n+31:n])
+    DEST[n+63:n+32] := RoundFPControl_MXCSR(SRC2[n+63:n+32]*DEST[n+63:n+32] -SRC3[n+63:n+32])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFMSUBADD231PS DEST, SRC2, SRC3 + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 2
+ELSEIF (VEX.256)
+    MAXNUM := 4
+FI
+For i = 0 to MAXNUM -1{
+    n := 64*i;
+    DEST[n+31:n] := RoundFPControl_MXCSR(SRC2[n+31:n]*SRC3[n+31:n] + DEST[n+31:n])
+    DEST[n+63:n+32] := RoundFPControl_MXCSR(SRC2[n+63:n+32]*SRC3[n+63:n+32] -DEST[n+63:n+32])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFMSUBADD132PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF j *is even*
+                THEN DEST[i+31:i] :=
+                    RoundFPControl(DEST[i+31:i]*SRC3[i+31:i] + SRC2[i+31:i])
+                ELSE DEST[i+31:i] :=
+                    RoundFPControl(DEST[i+31:i]*SRC3[i+31:i] - SRC2[i+31:i])
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMSUBADD132PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF j *is even*
+                THEN
+                    IF (EVEX.b = 1)
+                        THEN
+                            DEST[i+31:i] :=
+                        RoundFPControl_MXCSR(DEST[i+31:i]*SRC3[31:0] + SRC2[i+31:i])
+                        ELSE
+                            DEST[i+31:i] :=
+                        RoundFPControl_MXCSR(DEST[i+31:i]*SRC3[i+31:i] + SRC2[i+31:i])
+                    FI;
+                ELSE
+                    IF (EVEX.b = 1)
+                        THEN
+                            DEST[i+31:i] :=
+                    RoundFPControl_MXCSR(DEST[i+31:i]*SRC3[31:0] - SRC2[i+31:i])
+                        ELSE
+                            DEST[i+31:i] :=
+                    RoundFPControl_MXCSR(DEST[i+31:i]*SRC3[i+31:i] - SRC2[i+31:i])
+                    FI;
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMSUBADD213PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF j *is even*
+                THEN DEST[i+31:i] :=
+                    RoundFPControl(SRC2[i+31:i]*DEST[i+31:i] + SRC3[i+31:i])
+                ELSE DEST[i+31:i] :=
+                    RoundFPControl(SRC2[i+31:i]*DEST[i+31:i] - SRC3[i+31:i])
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMSUBADD213PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF j *is even*
+                THEN
+                    IF (EVEX.b = 1)
+                        THEN
+                            DEST[i+31:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+31:i]*DEST[i+31:i] + SRC3[31:0])
+                        ELSE
+                            DEST[i+31:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+31:i]*DEST[i+31:i] + SRC3[i+31:i])
+                    FI;
+                ELSE
+                    IF (EVEX.b = 1)
+                        THEN
+                            DEST[i+31:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+31:i]*DEST[i+31:i] - SRC3[31:0])
+                        ELSE
+                            DEST[i+31:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+31:i]*DEST[i+31:i] - SRC3[i+31:i])
+                    FI;
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMSUBADD231PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF j *is even*
+                THEN DEST[i+31:i] :=
+                    RoundFPControl(SRC2[i+31:i]*SRC3[i+31:i] + DEST[i+31:i])
+                ELSE DEST[i+31:i] :=
+                    RoundFPControl(SRC2[i+31:i]*SRC3[i+31:i] - DEST[i+31:i])
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFMSUBADD231PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF j *is even*
+                THEN
+                    IF (EVEX.b = 1)
+                        THEN
+                            DEST[i+31:i] :=
+                        RoundFPControl_MXCSR(SRC2[i+31:i]*SRC3[31:0] + DEST[i+31:i])
+                        ELSE
+                            DEST[i+31:i] :=
+                        RoundFPControl_MXCSR(SRC2[i+31:i]*SRC3[i+31:i] + DEST[i+31:i])
+                    FI;
+                ELSE
+                    IF (EVEX.b = 1)
+                        THEN
+                            DEST[i+31:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+31:i]*SRC3[31:0] - DEST[i+31:i])
+                        ELSE
+                            DEST[i+31:i] :=
+                    RoundFPControl_MXCSR(SRC2[i+31:i]*SRC3[i+31:i] - DEST[i+31:i])
+                    FI;
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFMSUBADDxxxPS __m512 _mm512_fmsubadd_ps(__m512 a, __m512 b, __m512 c);
+
+
VFMSUBADDxxxPS __m512 _mm512_fmsubadd_round_ps(__m512 a, __m512 b, __m512 c, int r);
+
+
VFMSUBADDxxxPS __m512 _mm512_mask_fmsubadd_ps(__m512 a, __mmask16 k, __m512 b, __m512 c);
+
+
VFMSUBADDxxxPS __m512 _mm512_maskz_fmsubadd_ps(__mmask16 k, __m512 a, __m512 b, __m512 c);
+
+
VFMSUBADDxxxPS __m512 _mm512_mask3_fmsubadd_ps(__m512 a, __m512 b, __m512 c, __mmask16 k);
+
+
VFMSUBADDxxxPS __m512 _mm512_mask_fmsubadd_round_ps(__m512 a, __mmask16 k, __m512 b, __m512 c, int r);
+
+
VFMSUBADDxxxPS __m512 _mm512_maskz_fmsubadd_round_ps(__mmask16 k, __m512 a, __m512 b, __m512 c, int r);
+
+
VFMSUBADDxxxPS __m512 _mm512_mask3_fmsubadd_round_ps(__m512 a, __m512 b, __m512 c, __mmask16 k, int r);
+
+
VFMSUBADDxxxPS __m256 _mm256_mask_fmsubadd_ps(__m256 a, __mmask8 k, __m256 b, __m256 c);
+
+
VFMSUBADDxxxPS __m256 _mm256_maskz_fmsubadd_ps(__mmask8 k, __m256 a, __m256 b, __m256 c);
+
+
VFMSUBADDxxxPS __m256 _mm256_mask3_fmsubadd_ps(__m256 a, __m256 b, __m256 c, __mmask8 k);
+
+
VFMSUBADDxxxPS __m128 _mm_mask_fmsubadd_ps(__m128 a, __mmask8 k, __m128 b, __m128 c);
+
+
VFMSUBADDxxxPS __m128 _mm_maskz_fmsubadd_ps(__mmask8 k, __m128 a, __m128 b, __m128 c);
+
+
VFMSUBADDxxxPS __m128 _mm_mask3_fmsubadd_ps(__m128 a, __m128 b, __m128 c, __mmask8 k);
+
+
VFMSUBADDxxxPS __m128 _mm_fmsubadd_ps (__m128 a, __m128 b, __m128 c);
+
+
VFMSUBADDxxxPS __m256 _mm256_fmsubadd_ps (__m256 a, __m256 b, __m256 c);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vfnmadd132pd.vfnmadd213pd.vfnmadd231pd.html b/x86/vfnmadd132pd.vfnmadd213pd.vfnmadd231pd.html new file mode 100644 index 0000000..b2e88cc --- /dev/null +++ b/x86/vfnmadd132pd.vfnmadd213pd.vfnmadd231pd.html @@ -0,0 +1,399 @@ + +VFNMADD132PD/VFNMADD213PD/VFNMADD231PD + — Fused Negative Multiply-Add of PackedDouble Precision Floating-Point Values

VFNMADD132PD/VFNMADD213PD/VFNMADD231PD + — Fused Negative Multiply-Add of Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W1 9C /r VFNMADD132PD xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed double precision floating-point values from xmm1 and xmm3/mem, negate the multiplication result and add to xmm2 and put result in xmm1.
VEX.128.66.0F38.W1 AC /r VFNMADD213PD xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed double precision floating-point values from xmm1 and xmm2, negate the multiplication result and add to xmm3/mem and put result in xmm1.
VEX.128.66.0F38.W1 BC /r VFNMADD231PD xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed double precision floating-point values from xmm2 and xmm3/mem, negate the multiplication result and add to xmm1 and put result in xmm1.
VEX.256.66.0F38.W1 9C /r VFNMADD132PD ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed double precision floating-point values from ymm1 and ymm3/mem, negate the multiplication result and add to ymm2 and put result in ymm1.
VEX.256.66.0F38.W1 AC /r VFNMADD213PD ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed double precision floating-point values from ymm1 and ymm2, negate the multiplication result and add to ymm3/mem and put result in ymm1.
VEX.256.66.0F38.W1 BC /r VFNMADD231PD ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed double precision floating-point values from ymm2 and ymm3/mem, negate the multiplication result and add to ymm1 and put result in ymm1.
EVEX.128.66.0F38.W1 9C /r VFNMADD132PD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from xmm1 and xmm3/m128/m64bcst, negate the multiplication result and add to xmm2 and put result in xmm1.
EVEX.128.66.0F38.W1 AC /r VFNMADD213PD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from xmm1 and xmm2, negate the multiplication result and add to xmm3/m128/m64bcst and put result in xmm1.
EVEX.128.66.0F38.W1 BC /r VFNMADD231PD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from xmm2 and xmm3/m128/m64bcst, negate the multiplication result and add to xmm1 and put result in xmm1.
EVEX.256.66.0F38.W1 9C /r VFNMADD132PD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from ymm1 and ymm3/m256/m64bcst, negate the multiplication result and add to ymm2 and put result in ymm1.
EVEX.256.66.0F38.W1 AC /r VFNMADD213PD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from ymm1 and ymm2, negate the multiplication result and add to ymm3/m256/m64bcst and put result in ymm1.
EVEX.256.66.0F38.W1 BC /r VFNMADD231PD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from ymm2 and ymm3/m256/m64bcst, negate the multiplication result and add to ymm1 and put result in ymm1.
EVEX.512.66.0F38.W1 9C /r VFNMADD132PD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er}BV/VAVX512FMultiply packed double precision floating-point values from zmm1 and zmm3/m512/m64bcst, negate the multiplication result and add to zmm2 and put result in zmm1.
EVEX.512.66.0F38.W1 AC /r VFNMADD213PD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er}BV/VAVX512FMultiply packed double precision floating-point values from zmm1 and zmm2, negate the multiplication result and add to zmm3/m512/m64bcst and put result in zmm1.
EVEX.512.66.0F38.W1 BC /r VFNMADD231PD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er}BV/VAVX512FMultiply packed double precision floating-point values from zmm2 and zmm3/m512/m64bcst, negate the multiplication result and add to zmm1 and put result in zmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
BFullModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

VFNMADD132PD: Multiplies the two, four or eight packed double precision floating-point values from the first source operand to the two, four or eight packed double precision floating-point values in the third source operand, adds the negated infinite precision intermediate result to the two, four or eight packed double precision floating-point values in the second source operand, performs rounding and stores the resulting two, four or eight packed double precision floating-point values to the destination operand (first source operand).

+

VFNMADD213PD: Multiplies the two, four or eight packed double precision floating-point values from the second source operand to the two, four or eight packed double precision floating-point values in the first source operand, adds the negated infinite precision intermediate result to the two, four or eight packed double precision floating-point values in the third source operand, performs rounding and stores the resulting two, four or eight packed double precision floating-point values to the destination operand (first source operand).

+

VFNMADD231PD: Multiplies the two, four or eight packed double precision floating-point values from the second source operand to the two, four or eight packed double precision floating-point values in the third source operand, adds the negated infinite precision intermediate result to the two, four or eight packed double precision floating-point values in the first source operand, performs rounding and stores the resulting two, four or eight packed double precision floating-point values to the destination operand (first source operand).

+

EVEX encoded versions: The destination operand (also first source operand) and the second source operand are ZMM/YMM/XMM registers. The third source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand is conditionally updated with write mask k1.

+

VEX.256 encoded version: The destination operand (also first source operand) is a YMM register and encoded in reg_field. The second source operand is a YMM register and encoded in VEX.vvvv. The third source operand is a YMM register or a 256-bit memory location and encoded in rm_field.

+

VEX.128 encoded version: The destination operand (also first source operand) is a XMM register and encoded in reg_field. The second source operand is a XMM register and encoded in VEX.vvvv. The third source operand is a XMM register or a 128-bit memory location and encoded in rm_field. The upper 128 bits of the YMM destination register are zeroed.
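
In intrinsic terms the operation is simply c - a*b with a single rounding. A minimal sketch using the 128-bit intrinsic listed later on this page (assumes an FMA-capable target; the values are illustrative):

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128d a = _mm_set_pd(3.0, 2.0);     /* elements 1, 0 */
    __m128d b = _mm_set_pd(5.0, 4.0);
    __m128d c = _mm_set_pd(100.0, 10.0);

    /* Each element: -(a*b) + c, rounded once. */
    __m128d r = _mm_fnmadd_pd(a, b, c);

    double out[2];
    _mm_storeu_pd(out, r);
    printf("%g %g\n", out[0], out[1]);    /* expected: 2 85 */
    return 0;
}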

+

Operation + ¶ +

+
In the operations below, “*” and “+” symbols represent multiplication and addition with infinite precision inputs and outputs (no
+rounding).
+
+

VFNMADD132PD DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 2
+ELSEIF (VEX.256)
+    MAXNUM := 4
+FI
+For i = 0 to MAXNUM-1 {
+    n := 64*i;
+    DEST[n+63:n] := RoundFPControl_MXCSR(-(DEST[n+63:n]*SRC3[n+63:n]) + SRC2[n+63:n])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFNMADD213PD DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 2
+ELSEIF (VEX.256)
+    MAXNUM := 4
+FI
+For i = 0 to MAXNUM-1 {
+    n := 64*i;
+    DEST[n+63:n] := RoundFPControl_MXCSR(-(SRC2[n+63:n]*DEST[n+63:n]) + SRC3[n+63:n])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFNMADD231PD DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 2
+ELSEIF (VEX.256)
+    MAXNUM := 4
+FI
+For i = 0 to MAXNUM-1 {
+    n := 64*i;
+    DEST[n+63:n] := RoundFPControl_MXCSR(-(SRC2[n+63:n]*SRC3[n+63:n]) + DEST[n+63:n])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFNMADD132PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] :=
+            RoundFPControl(-(DEST[i+63:i]*SRC3[i+63:i]) + SRC2[i+63:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFNMADD132PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+63:i] :=
+            RoundFPControl_MXCSR(-(DEST[i+63:i]*SRC3[63:0]) + SRC2[i+63:i])
+                ELSE
+                    DEST[i+63:i] :=
+            RoundFPControl_MXCSR(-(DEST[i+63:i]*SRC3[i+63:i]) + SRC2[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFNMADD213PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] :=
+            RoundFPControl(-(SRC2[i+63:i]*DEST[i+63:i]) + SRC3[i+63:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFNMADD213PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+63:i] :=
+            RoundFPControl_MXCSR(-(SRC2[i+63:i]*DEST[i+63:i]) + SRC3[63:0])
+                ELSE
+                    DEST[i+63:i] :=
+            RoundFPControl_MXCSR(-(SRC2[i+63:i]*DEST[i+63:i]) + SRC3[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFNMADD231PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] :=
+            RoundFPControl(-(SRC2[i+63:i]*SRC3[i+63:i]) + DEST[i+63:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFNMADD231PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+63:i] :=
+            RoundFPControl_MXCSR(-(SRC2[i+63:i]*SRC3[63:0]) + DEST[i+63:i])
+                ELSE
+                    DEST[i+63:i] :=
+            RoundFPControl_MXCSR(-(SRC2[i+63:i]*SRC3[i+63:i]) + DEST[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFNMADDxxxPD __m512d _mm512_fnmadd_pd(__m512d a, __m512d b, __m512d c);
+
+
VFNMADDxxxPD __m512d _mm512_fnmadd_round_pd(__m512d a, __m512d b, __m512d c, int r);
+
+
VFNMADDxxxPD __m512d _mm512_mask_fnmadd_pd(__m512d a, __mmask8 k, __m512d b, __m512d c);
+
+
VFNMADDxxxPD __m512d _mm512_maskz_fnmadd_pd(__mmask8 k, __m512d a, __m512d b, __m512d c);
+
+
VFNMADDxxxPD __m512d _mm512_mask3_fnmadd_pd(__m512d a, __m512d b, __m512d c, __mmask8 k);
+
+
VFNMADDxxxPD __m512d _mm512_mask_fnmadd_round_pd(__m512d a, __mmask8 k, __m512d b, __m512d c, int r);
+
+
VFNMADDxxxPD __m512d _mm512_maskz_fnmadd_round_pd(__mmask8 k, __m512d a, __m512d b, __m512d c, int r);
+
+
VFNMADDxxxPD __m512d _mm512_mask3_fnmadd_round_pd(__m512d a, __m512d b, __m512d c, __mmask8 k, int r);
+
+
VFNMADDxxxPD __m256d _mm256_mask_fnmadd_pd(__m256d a, __mmask8 k, __m256d b, __m256d c);
+
+
VFNMADDxxxPD __m256d _mm256_maskz_fnmadd_pd(__mmask8 k, __m256d a, __m256d b, __m256d c);
+
+
VFNMADDxxxPD __m256d _mm256_mask3_fnmadd_pd(__m256d a, __m256d b, __m256d c, __mmask8 k);
+
+
VFNMADDxxxPD __m128d _mm_mask_fnmadd_pd(__m128d a, __mmask8 k, __m128d b, __m128d c);
+
+
VFNMADDxxxPD __m128d _mm_maskz_fnmadd_pd(__mmask8 k, __m128d a, __m128d b, __m128d c);
+
+
VFNMADDxxxPD __m128d _mm_mask3_fnmadd_pd(__m128d a, __m128d b, __m128d c, __mmask8 k);
+
+
VFNMADDxxxPD __m128d _mm_fnmadd_pd (__m128d a, __m128d b, __m128d c);
+
+
VFNMADDxxxPD __m256d _mm256_fnmadd_pd (__m256d a, __m256d b, __m256d c);
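
The {er} forms of the 512-bit encodings are reached through the _round_ intrinsics above; the rounding argument must be a compile-time constant and is combined with _MM_FROUND_NO_EXC. A minimal sketch, assuming AVX512F support (the helper name is illustrative):

#include <immintrin.h>

/* c - a*b with round-toward-negative-infinity for this one operation,
   overriding MXCSR.RC via the EVEX embedded rounding control. */
static inline __m512d fnmadd_round_down(__m512d a, __m512d b, __m512d c)
{
    return _mm512_fnmadd_round_pd(a, b, c,
                                  _MM_FROUND_TO_NEG_INF | _MM_FROUND_NO_EXC);
}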
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vfnmadd132ps.vfnmadd213ps.vfnmadd231ps.html b/x86/vfnmadd132ps.vfnmadd213ps.vfnmadd231ps.html new file mode 100644 index 0000000..2730c72 --- /dev/null +++ b/x86/vfnmadd132ps.vfnmadd213ps.vfnmadd231ps.html @@ -0,0 +1,399 @@ + +VFNMADD132PS/VFNMADD213PS/VFNMADD231PS + — Fused Negative Multiply-Add of PackedSingle Precision Floating-Point Values

VFNMADD132PS/VFNMADD213PS/VFNMADD231PS + — Fused Negative Multiply-Add of Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W0 9C /r VFNMADD132PS xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed single precision floating-point values from xmm1 and xmm3/mem, negate the multiplication result and add to xmm2 and put result in xmm1.
VEX.128.66.0F38.W0 AC /r VFNMADD213PS xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed single precision floating-point values from xmm1 and xmm2, negate the multiplication result and add to xmm3/mem and put result in xmm1.
VEX.128.66.0F38.W0 BC /r VFNMADD231PS xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed single precision floating-point values from xmm2 and xmm3/mem, negate the multiplication result and add to xmm1 and put result in xmm1.
VEX.256.66.0F38.W0 9C /r VFNMADD132PS ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed single precision floating-point values from ymm1 and ymm3/mem, negate the multiplication result and add to ymm2 and put result in ymm1.
VEX.256.66.0F38.W0 AC /r VFNMADD213PS ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed single precision floating-point values from ymm1 and ymm2, negate the multiplication result and add to ymm3/mem and put result in ymm1.
VEX.256.66.0F38.W0 BC /r VFNMADD231PS ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed single precision floating-point values from ymm2 and ymm3/mem, negate the multiplication result and add to ymm1 and put result in ymm1.
EVEX.128.66.0F38.W0 9C /r VFNMADD132PS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from xmm1 and xmm3/m128/m32bcst, negate the multiplication result and add to xmm2 and put result in xmm1.
EVEX.128.66.0F38.W0 AC /r VFNMADD213PS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from xmm1 and xmm2, negate the multiplication result and add to xmm3/m128/m32bcst and put result in xmm1.
EVEX.128.66.0F38.W0 BC /r VFNMADD231PS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from xmm2 and xmm3/m128/m32bcst, negate the multiplication result and add to xmm1 and put result in xmm1.
EVEX.256.66.0F38.W0 9C /r VFNMADD132PS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from ymm1 and ymm3/m256/m32bcst, negate the multiplication result and add to ymm2 and put result in ymm1.
EVEX.256.66.0F38.W0 AC /r VFNMADD213PS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from ymm1 and ymm2, negate the multiplication result and add to ymm3/m256/m32bcst and put result in ymm1.
EVEX.256.66.0F38.W0 BC /r VFNMADD231PS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FMultiply packed single precision floating-point values from ymm2 and ymm3/m256/m32bcst, negate the multiplication result and add to ymm1 and put result in ymm1.
EVEX.512.66.0F38.W0 9C /r VFNMADD132PS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{er}BV/VAVX512FMultiply packed single precision floating-point values from zmm1 and zmm3/m512/m32bcst, negate the multiplication result and add to zmm2 and put result in zmm1.
EVEX.512.66.0F38.W0 AC /r VFNMADD213PS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{er}BV/VAVX512FMultiply packed single precision floating-point values from zmm1 and zmm2, negate the multiplication result and add to zmm3/m512/m32bcst and put result in zmm1.
EVEX.512.66.0F38.W0 BC /r VFNMADD231PS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{er}BV/VAVX512FMultiply packed single precision floating-point values from zmm2 and zmm3/m512/m32bcst, negate the multiplication result and add to zmm1 and put result in zmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
BFullModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

VFNMADD132PS: Multiplies the four, eight or sixteen packed single precision floating-point values from the first source operand to the four, eight or sixteen packed single precision floating-point values in the third source operand, adds the negated infinite precision intermediate result to the four, eight or sixteen packed single precision floating-point values in the second source operand, performs rounding and stores the resulting four, eight or sixteen packed single precision floating-point values to the destination operand (first source operand).

+

VFNMADD213PS: Multiplies the four, eight or sixteen packed single precision floating-point values from the second source operand to the four, eight or sixteen packed single precision floating-point values in the first source operand, adds the negated infinite precision intermediate result to the four, eight or sixteen packed single precision floating-point values in the third source operand, performs rounding and stores the resulting four, eight or sixteen packed single precision floating-point values to the destination operand (first source operand).

+

VFNMADD231PS: Multiplies the four, eight or sixteen packed single precision floating-point values from the second source operand to the four, eight or sixteen packed single precision floating-point values in the third source operand, adds the negated infinite precision intermediate result to the four, eight or sixteen packed single precision floating-point values in the first source operand, performs rounding and stores the resulting four, eight or sixteen packed single precision floating-point values to the destination operand (first source operand).

+

EVEX encoded versions: The destination operand (also first source operand) and the second source operand are ZMM/YMM/XMM registers. The third source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand is conditionally updated with write mask k1.

+

VEX.256 encoded version: The destination operand (also first source operand) is a YMM register and encoded in reg_field. The second source operand is a YMM register and encoded in VEX.vvvv. The third source operand is a YMM register or a 256-bit memory location and encoded in rm_field.

+

VEX.128 encoded version: The destination operand (also first source operand) is a XMM register and encoded in reg_field. The second source operand is a XMM register and encoded in VEX.vvvv. The third source operand is a XMM register or a 128-bit memory location and encoded in rm_field. The upper 128 bits of the YMM destination register are zeroed.
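
A short sketch of the write-mask behavior described for the EVEX forms, using the 256-bit zeroing-masked intrinsic listed below (assumes AVX512F and AVX512VL; the mask value and helper name are illustrative):

#include <immintrin.h>

/* Elements whose k1 bit is set receive -(a*b) + c; with zeroing-masking
   the remaining elements are cleared to 0. */
static inline __m256 fnmadd_keep_low_half(__m256 a, __m256 b, __m256 c)
{
    __mmask8 k = 0x0F;                        /* keep elements 0..3 only */
    return _mm256_maskz_fnmadd_ps(k, a, b, c);
}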

+

Operation + ¶ +

+
In the operations below, “*” and “+” symbols represent multiplication and addition with infinite precision inputs and outputs (no
+rounding).
+
+

VFNMADD132PS DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 4
+ELSEIF (VEX.256)
+    MAXNUM := 8
+FI
+For i = 0 to MAXNUM-1 {
+    n := 32*i;
+    DEST[n+31:n] := RoundFPControl_MXCSR(- (DEST[n+31:n]*SRC3[n+31:n]) + SRC2[n+31:n])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFNMADD213PS DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 4
+ELSEIF (VEX.256)
+    MAXNUM := 8
+FI
+For i = 0 to MAXNUM-1 {
+    n := 32*i;
+    DEST[n+31:n] := RoundFPControl_MXCSR(- (SRC2[n+31:n]*DEST[n+31:n]) + SRC3[n+31:n])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFNMADD231PS DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 4
+ELSEIF (VEX.256)
+    MAXNUM := 8
+FI
+For i = 0 to MAXNUM-1 {
+    n := 32*i;
+    DEST[n+31:n] := RoundFPControl_MXCSR(- (SRC2[n+31:n]*SRC3[n+31:n]) + DEST[n+31:n])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFNMADD132PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] :=
+            RoundFPControl(-(DEST[i+31:i]*SRC3[i+31:i]) + SRC2[i+31:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFNMADD132PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+31:i] :=
+            RoundFPControl_MXCSR(-(DEST[i+31:i]*SRC3[31:0]) + SRC2[i+31:i])
+                ELSE
+                    DEST[i+31:i] :=
+            RoundFPControl_MXCSR(-(DEST[i+31:i]*SRC3[i+31:i]) + SRC2[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFNMADD213PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] :=
+            RoundFPControl(-(SRC2[i+31:i]*DEST[i+31:i]) + SRC3[i+31:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFNMADD213PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+31:i] :=
+            RoundFPControl_MXCSR(-(SRC2[i+31:i]*DEST[i+31:i]) + SRC3[31:0])
+                ELSE
+                    DEST[i+31:i] :=
+            RoundFPControl_MXCSR(-(SRC2[i+31:i]*DEST[i+31:i]) + SRC3[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFNMADD231PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] :=
+            RoundFPControl(-(SRC2[i+31:i]*SRC3[i+31:i]) + DEST[i+31:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFNMADD231PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+31:i] :=
+            RoundFPControl_MXCSR(-(SRC2[i+31:i]*SRC3[31:0]) + DEST[i+31:i])
+                ELSE
+                    DEST[i+31:i] :=
+            RoundFPControl_MXCSR(-(SRC2[i+31:i]*SRC3[i+31:i]) + DEST[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFNMADDxxxPS __m512 _mm512_fnmadd_ps(__m512 a, __m512 b, __m512 c);
+
+
VFNMADDxxxPS __m512 _mm512_fnmadd_round_ps(__m512 a, __m512 b, __m512 c, int r);
+
+
VFNMADDxxxPS __m512 _mm512_mask_fnmadd_ps(__m512 a, __mmask16 k, __m512 b, __m512 c);
+
+
VFNMADDxxxPS __m512 _mm512_maskz_fnmadd_ps(__mmask16 k, __m512 a, __m512 b, __m512 c);
+
+
VFNMADDxxxPS __m512 _mm512_mask3_fnmadd_ps(__m512 a, __m512 b, __m512 c, __mmask16 k);
+
+
VFNMADDxxxPS __m512 _mm512_mask_fnmadd_round_ps(__m512 a, __mmask16 k, __m512 b, __m512 c, int r);
+
+
VFNMADDxxxPS __m512 _mm512_maskz_fnmadd_round_ps(__mmask16 k, __m512 a, __m512 b, __m512 c, int r);
+
+
VFNMADDxxxPS __m512 _mm512_mask3_fnmadd_round_ps(__m512 a, __m512 b, __m512 c, __mmask16 k, int r);
+
+
VFNMADDxxxPS __m256 _mm256_mask_fnmadd_ps(__m256 a, __mmask8 k, __m256 b, __m256 c);
+
+
VFNMADDxxxPS __m256 _mm256_maskz_fnmadd_ps(__mmask8 k, __m256 a, __m256 b, __m256 c);
+
+
VFNMADDxxxPS __m256 _mm256_mask3_fnmadd_ps(__m256 a, __m256 b, __m256 c, __mmask8 k);
+
+
VFNMADDxxxPS __m128 _mm_mask_fnmadd_ps(__m128 a, __mmask8 k, __m128 b, __m128 c);
+
+
VFNMADDxxxPS __m128 _mm_maskz_fnmadd_ps(__mmask8 k, __m128 a, __m128 b, __m128 c);
+
+
VFNMADDxxxPS __m128 _mm_mask3_fnmadd_ps(__m128 a, __m128 b, __m128 c, __mmask8 k);
+
+
VFNMADDxxxPS __m128 _mm_fnmadd_ps (__m128 a, __m128 b, __m128 c);
+
+
VFNMADDxxxPS __m256 _mm256_fnmadd_ps (__m256 a, __m256 b, __m256 c);
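
A common use of fnmadd is the Newton-Raphson refinement of a reciprocal estimate, x1 = x0*(2 - a*x0), since the parenthesized term is exactly -(a*x0) + 2. A minimal sketch with the 256-bit intrinsics listed above (assumes AVX and FMA support; the helper names are illustrative):

#include <immintrin.h>

/* One Newton-Raphson step for 1/a: x1 = x0 * (2 - a*x0),
   with (2 - a*x0) computed by a single fnmadd. */
static inline __m256 refine_recip(__m256 a, __m256 x0)
{
    __m256 two = _mm256_set1_ps(2.0f);
    __m256 t   = _mm256_fnmadd_ps(a, x0, two);   /* -(a*x0) + 2 */
    return _mm256_mul_ps(x0, t);
}

/* Example: start from the hardware estimate and refine it once. */
static inline __m256 fast_recip(__m256 a)
{
    return refine_recip(a, _mm256_rcp_ps(a));
}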
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vfnmadd132sd.vfnmadd213sd.vfnmadd231sd.html b/x86/vfnmadd132sd.vfnmadd213sd.vfnmadd231sd.html new file mode 100644 index 0000000..f85be2c --- /dev/null +++ b/x86/vfnmadd132sd.vfnmadd213sd.vfnmadd231sd.html @@ -0,0 +1,206 @@ + +VFNMADD132SD/VFNMADD213SD/VFNMADD231SD + — Fused Negative Multiply-Add of ScalarDouble Precision Floating-Point Values

VFNMADD132SD/VFNMADD213SD/VFNMADD231SD + — Fused Negative Multiply-Add of Scalar Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.LIG.66.0F38.W1 9D /r VFNMADD132SD xmm1, xmm2, xmm3/m64AV/VFMAMultiply scalar double precision floating-point value from xmm1 and xmm3/mem, negate the multiplication result and add to xmm2 and put result in xmm1.
VEX.LIG.66.0F38.W1 AD /r VFNMADD213SD xmm1, xmm2, xmm3/m64AV/VFMAMultiply scalar double precision floating-point value from xmm1 and xmm2, negate the multiplication result and add to xmm3/mem and put result in xmm1.
VEX.LIG.66.0F38.W1 BD /r VFNMADD231SD xmm1, xmm2, xmm3/m64AV/VFMAMultiply scalar double precision floating-point value from xmm2 and xmm3/mem, negate the multiplication result and add to xmm1 and put result in xmm1.
EVEX.LLIG.66.0F38.W1 9D /r VFNMADD132SD xmm1 {k1}{z}, xmm2, xmm3/m64{er}BV/VAVX512FMultiply scalar double precision floating-point value from xmm1 and xmm3/m64, negate the multiplication result and add to xmm2 and put result in xmm1.
EVEX.LLIG.66.0F38.W1 AD /r VFNMADD213SD xmm1 {k1}{z}, xmm2, xmm3/m64{er}BV/VAVX512FMultiply scalar double precision floating-point value from xmm1 and xmm2, negate the multiplication result and add to xmm3/m64 and put result in xmm1.
EVEX.LLIG.66.0F38.W1 BD /r VFNMADD231SD xmm1 {k1}{z}, xmm2, xmm3/m64{er}BV/VAVX512FMultiply scalar double precision floating-point value from xmm2 and xmm3/m64, negate the multiplication result and add to xmm1 and put result in xmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
BTuple1 ScalarModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

VFNMADD132SD: Multiplies the low packed double precision floating-point value from the first source operand to the low packed double precision floating-point value in the third source operand, adds the negated infinite precision intermediate result to the low packed double precision floating-point value in the second source operand, performs rounding and stores the resulting packed double precision floating-point value to the destination operand (first source operand).

+

VFNMADD213SD: Multiplies the low packed double precision floating-point value from the second source operand to the low packed double precision floating-point value in the first source operand, adds the negated infinite precision intermediate result to the low packed double precision floating-point value in the third source operand, performs rounding and stores the resulting packed double precision floating-point value to the destination operand (first source operand).

+

VFNMADD231SD: Multiplies the low packed double precision floating-point value from the second source operand to the low packed double precision floating-point value in the third source operand, adds the negated infinite precision intermediate result to the low packed double precision floating-point value in the first source operand, performs rounding and stores the resulting packed double precision floating-point value to the destination operand (first source operand).

+

VEX.128 and EVEX encoded version: The destination operand (also first source operand) is encoded in reg_field. The second source operand is encoded in VEX.vvvv/EVEX.vvvv. The third source operand is encoded in rm_field. Bits 127:64 of the destination are unchanged. Bits MAXVL-1:128 of the destination register are zeroed.

+

EVEX encoded version: The low quadword element of the destination is updated according to the writemask.

+

Compiler tools may optionally support a complementary mnemonic for each instruction mnemonic listed in the opcode/instruction column of the summary table. The behavior of the complementary mnemonic in situations involving NaNs is governed by the definition of the instruction mnemonic defined in the opcode/instruction column.
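
A small sketch of the scalar behavior described above: only the low element is computed, and the upper element of the destination is copied from the first operand (assumes an FMA-capable target; the values are illustrative):

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128d a = _mm_set_pd(7.0, 2.0);    /* high = 7, low = 2 */
    __m128d b = _mm_set_pd(8.0, 3.0);
    __m128d c = _mm_set_pd(9.0, 4.0);

    /* Low element: -(2*3) + 4 = -2; high element copied from a. */
    __m128d r = _mm_fnmadd_sd(a, b, c);

    double out[2];
    _mm_storeu_pd(out, r);
    printf("low=%g high=%g\n", out[0], out[1]);   /* low=-2 high=7 */
    return 0;
}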

+

Operation + ¶ +

+
In the operations below, “*” and “+” symbols represent multiplication and addition with infinite precision inputs and outputs (no
+rounding).
+
+

VFNMADD132SD DEST, SRC2, SRC3 (EVEX encoded version) + ¶ +

+
IF (EVEX.b = 1) and SRC3 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[63:0] := RoundFPControl(-(DEST[63:0]*SRC3[63:0]) + SRC2[63:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[63:0] := 0
+        FI;
+FI;
+DEST[127:64] := DEST[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VFNMADD213SD DEST, SRC2, SRC3 (EVEX encoded version) + ¶ +

+
IF (EVEX.b = 1) and SRC3 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[63:0] := RoundFPControl(-(SRC2[63:0]*DEST[63:0]) + SRC3[63:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[63:0] := 0
+        FI;
+FI;
+DEST[127:64] := DEST[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VFNMADD231SD DEST, SRC2, SRC3 (EVEX encoded version) + ¶ +

+
IF (EVEX.b = 1) and SRC3 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[63:0] := RoundFPControl(-(SRC2[63:0]*SRC3[63:0]) + DEST[63:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[63:0] := 0
+        FI;
+FI;
+DEST[127:64] := DEST[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VFNMADD132SD DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
DEST[63:0] := RoundFPControl_MXCSR(- (DEST[63:0]*SRC3[63:0]) + SRC2[63:0])
+DEST[127:64] := DEST[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VFNMADD213SD DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
DEST[63:0] := RoundFPControl_MXCSR(- (SRC2[63:0]*DEST[63:0]) + SRC3[63:0])
+DEST[127:64] := DEST[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VFNMADD231SD DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
DEST[63:0] := RoundFPControl_MXCSR(- (SRC2[63:0]*SRC3[63:0]) + DEST[63:0])
+DEST[127:64] := DEST[127:64]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFNMADDxxxSD __m128d _mm_fnmadd_round_sd(__m128d a, __m128d b, __m128d c, int r);
+
+
VFNMADDxxxSD __m128d _mm_mask_fnmadd_sd(__m128d a, __mmask8 k, __m128d b, __m128d c);
+
+
VFNMADDxxxSD __m128d _mm_maskz_fnmadd_sd(__mmask8 k, __m128d a, __m128d b, __m128d c);
+
+
VFNMADDxxxSD __m128d _mm_mask3_fnmadd_sd(__m128d a, __m128d b, __m128d c, __mmask8 k);
+
+
VFNMADDxxxSD __m128d _mm_mask_fnmadd_round_sd(__m128d a, __mmask8 k, __m128d b, __m128d c, int r);
+
+
VFNMADDxxxSD __m128d _mm_maskz_fnmadd_round_sd(__mmask8 k, __m128d a, __m128d b, __m128d c, int r);
+
+
VFNMADDxxxSD __m128d _mm_mask3_fnmadd_round_sd(__m128d a, __m128d b, __m128d c, __mmask8 k, int r);
+
+
VFNMADDxxxSD __m128d _mm_fnmadd_sd (__m128d a, __m128d b, __m128d c);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vfnmadd132ss.vfnmadd213ss.vfnmadd231ss.html b/x86/vfnmadd132ss.vfnmadd213ss.vfnmadd231ss.html new file mode 100644 index 0000000..159fc8b --- /dev/null +++ b/x86/vfnmadd132ss.vfnmadd213ss.vfnmadd231ss.html @@ -0,0 +1,206 @@ + +VFNMADD132SS/VFNMADD213SS/VFNMADD231SS + — Fused Negative Multiply-Add of ScalarSingle Precision Floating-Point Values

VFNMADD132SS/VFNMADD213SS/VFNMADD231SS + — Fused Negative Multiply-Add of Scalar Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.LIG.66.0F38.W0 9D /r VFNMADD132SS xmm1, xmm2, xmm3/m32AV/VFMAMultiply scalar single-precision floating-point value from xmm1 and xmm3/m32, negate the multiplication result and add to xmm2 and put result in xmm1.
VEX.LIG.66.0F38.W0 AD /r VFNMADD213SS xmm1, xmm2, xmm3/m32AV/VFMAMultiply scalar single-precision floating-point value from xmm1 and xmm2, negate the multiplication result and add to xmm3/m32 and put result in xmm1.
VEX.LIG.66.0F38.W0 BD /r VFNMADD231SS xmm1, xmm2, xmm3/m32AV/VFMAMultiply scalar single-precision floating-point value from xmm2 and xmm3/m32, negate the multiplication result and add to xmm1 and put result in xmm1.
EVEX.LLIG.66.0F38.W0 9D /r VFNMADD132SS xmm1 {k1}{z}, xmm2, xmm3/m32{er}BV/VAVX512FMultiply scalar single-precision floating-point value from xmm1 and xmm3/m32, negate the multiplication result and add to xmm2 and put result in xmm1.
EVEX.LLIG.66.0F38.W0 AD /r VFNMADD213SS xmm1 {k1}{z}, xmm2, xmm3/m32{er}BV/VAVX512FMultiply scalar single-precision floating-point value from xmm1 and xmm2, negate the multiplication result and add to xmm3/m32 and put result in xmm1.
EVEX.LLIG.66.0F38.W0 BD /r VFNMADD231SS xmm1 {k1}{z}, xmm2, xmm3/m32{er}BV/VAVX512FMultiply scalar single-precision floating-point value from xmm2 and xmm3/m32, negate the multiplication result and add to xmm1 and put result in xmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
BTuple1 ScalarModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

VFNMADD132SS: Multiplies the low packed single-precision floating-point value from the first source operand to the low packed single-precision floating-point value in the third source operand, adds the negated infinite precision intermediate result to the low packed single-precision floating-point value in the second source operand, performs rounding and stores the resulting packed single-precision floating-point value to the destination operand (first source operand).

+

VFNMADD213SS: Multiplies the low packed single-precision floating-point value from the second source operand to the low packed single-precision floating-point value in the first source operand, adds the negated infinite precision intermediate result to the low packed single-precision floating-point value in the third source operand, performs rounding and stores the resulting packed single-precision floating-point value to the destination operand (first source operand).

+

VFNMADD231SS: Multiplies the low packed single-precision floating-point value from the second source operand to the low packed single-precision floating-point value in the third source operand, adds the negated infinite precision intermediate result to the low packed single-precision floating-point value in the first source operand, performs rounding and stores the resulting packed single-precision floating-point value to the destination operand (first source operand).

+

VEX.128 and EVEX encoded version: The destination operand (also first source operand) is encoded in reg_field. The second source operand is encoded in VEX.vvvv/EVEX.vvvv. The third source operand is encoded in rm_field. Bits 127:32 of the destination are unchanged. Bits MAXVL-1:128 of the destination register are zeroed.

+

EVEX encoded version: The low doubleword element of the destination is updated according to the writemask.

+

Compiler tools may optionally support a complementary mnemonic for each instruction mnemonic listed in the opcode/instruction column of the summary table. The behavior of the complementary mnemonic in situations involving NaNs is governed by the definition of the instruction mnemonic defined in the opcode/instruction column.
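
As with the double precision form, only the low element participates and the remaining elements come from the first operand. A minimal sketch (assumes an FMA-capable target; the values are illustrative):

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128 a = _mm_set_ps(40.0f, 30.0f, 20.0f, 2.0f);  /* low element = 2 */
    __m128 b = _mm_set_ps(0.0f, 0.0f, 0.0f, 5.0f);     /* low element = 5 */
    __m128 c = _mm_set_ps(0.0f, 0.0f, 0.0f, 3.0f);     /* low element = 3 */

    /* Low element: -(2*5) + 3 = -7; elements 1..3 copied from a. */
    __m128 r = _mm_fnmadd_ss(a, b, c);

    float out[4];
    _mm_storeu_ps(out, r);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    /* expected: -7 20 30 40 */
    return 0;
}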

+

Operation + ¶ +

+
In the operations below, “*” and “+” symbols represent multiplication and addition with infinite precision inputs and outputs (no
+rounding).
+
+

VFNMADD132SS DEST, SRC2, SRC3 (EVEX encoded version) + ¶ +

+
IF (EVEX.b = 1) and SRC3 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[31:0] := RoundFPControl(-(DEST[31:0]*SRC3[31:0]) + SRC2[31:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[31:0] := 0
+        FI;
+FI;
+DEST[127:32] := DEST[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VFNMADD213SS DEST, SRC2, SRC3 (EVEX encoded version) + ¶ +

+
IF (EVEX.b = 1) and SRC3 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[31:0] := RoundFPControl(-(SRC2[31:0]*DEST[31:0]) + SRC3[31:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[31:0] := 0
+        FI;
+FI;
+DEST[127:32] := DEST[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VFNMADD231SS DEST, SRC2, SRC3 (EVEX encoded version) + ¶ +

+
IF (EVEX.b = 1) and SRC3 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[31:0] := RoundFPControl(-(SRC2[31:0]*SRC3[31:0]) + DEST[31:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[31:0] := 0
+        FI;
+FI;
+DEST[127:32] := DEST[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VFNMADD132SS DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
DEST[31:0] := RoundFPControl_MXCSR(- (DEST[31:0]*SRC3[31:0]) + SRC2[31:0])
+DEST[127:32] := DEST[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VFNMADD213SS DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
DEST[31:0] := RoundFPControl_MXCSR(- (SRC2[31:0]*DEST[31:0]) + SRC3[31:0])
+DEST[127:32] := DEST[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VFNMADD231SS DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
DEST[31:0] := RoundFPControl_MXCSR(- (SRC2[31:0]*SRC3[31:0]) + DEST[31:0])
+DEST[127:32] := DEST[127:32]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFNMADDxxxSS __m128 _mm_fnmadd_round_ss(__m128 a, __m128 b, __m128 c, int r);
+
+
VFNMADDxxxSS __m128 _mm_mask_fnmadd_ss(__m128 a, __mmask8 k, __m128 b, __m128 c);
+
+
VFNMADDxxxSS __m128 _mm_maskz_fnmadd_ss(__mmask8 k, __m128 a, __m128 b, __m128 c);
+
+
VFNMADDxxxSS __m128 _mm_mask3_fnmadd_ss(__m128 a, __m128 b, __m128 c, __mmask8 k);
+
+
VFNMADDxxxSS __m128 _mm_mask_fnmadd_round_ss(__m128 a, __mmask8 k, __m128 b, __m128 c, int r);
+
+
VFNMADDxxxSS __m128 _mm_maskz_fnmadd_round_ss(__mmask8 k, __m128 a, __m128 b, __m128 c, int r);
+
+
VFNMADDxxxSS __m128 _mm_mask3_fnmadd_round_ss(__m128 a, __m128 b, __m128 c, __mmask8 k, int r);
+
+
VFNMADDxxxSS __m128 _mm_fnmadd_ss (__m128 a, __m128 b, __m128 c);
+
+
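A usage sketch, not part of the reference text above: it exercises the unmasked FMA-level intrinsic _mm_fnmadd_ss listed above and assumes a compiler with FMA support (e.g., GCC/Clang with -mfma).

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    /* Low elements: a = 3.0, b = 4.0, c = 5.0. */
    __m128 a = _mm_set_ss(3.0f);
    __m128 b = _mm_set_ss(4.0f);
    __m128 c = _mm_set_ss(5.0f);

    /* VFNMADD*SS computes -(a*b) + c in the low element with a single
       rounding: -(3*4) + 5 = -7. Bits 127:32 of the result come from a. */
    __m128 r = _mm_fnmadd_ss(a, b, c);

    printf("%f\n", _mm_cvtss_f32(r)); /* prints -7.000000 */
    return 0;
}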

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vfnmsub132pd.vfnmsub213pd.vfnmsub231pd.html b/x86/vfnmsub132pd.vfnmsub213pd.vfnmsub231pd.html new file mode 100644 index 0000000..9b71930 --- /dev/null +++ b/x86/vfnmsub132pd.vfnmsub213pd.vfnmsub231pd.html @@ -0,0 +1,399 @@ + +VFNMSUB132PD/VFNMSUB213PD/VFNMSUB231PD + — Fused Negative Multiply-Subtract ofPacked Double Precision Floating-Point Values

VFNMSUB132PD/VFNMSUB213PD/VFNMSUB231PD + — Fused Negative Multiply-Subtract of Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W1 9E /r VFNMSUB132PD xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed double precision floating-point values from xmm1 and xmm3/mem, negate the multiplication result and subtract xmm2 and put result in xmm1.
VEX.128.66.0F38.W1 AE /r VFNMSUB213PD xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed double precision floating-point values from xmm1 and xmm2, negate the multiplication result and subtract xmm3/mem and put result in xmm1.
VEX.128.66.0F38.W1 BE /r VFNMSUB231PD xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed double precision floating-point values from xmm2 and xmm3/mem, negate the multiplication result and subtract xmm1 and put result in xmm1.
VEX.256.66.0F38.W1 9E /r VFNMSUB132PD ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed double precision floating-point values from ymm1 and ymm3/mem, negate the multiplication result and subtract ymm2 and put result in ymm1.
VEX.256.66.0F38.W1 AE /r VFNMSUB213PD ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed double precision floating-point values from ymm1 and ymm2, negate the multiplication result and subtract ymm3/mem and put result in ymm1.
VEX.256.66.0F38.W1 BE /r VFNMSUB231PD ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed double precision floating-point values from ymm2 and ymm3/mem, negate the multiplication result and subtract ymm1 and put result in ymm1.
EVEX.128.66.0F38.W1 9E /r VFNMSUB132PD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from xmm1 and xmm3/m128/m64bcst, negate the multiplication result and subtract xmm2 and put result in xmm1.
EVEX.128.66.0F38.W1 AE /r VFNMSUB213PD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from xmm1 and xmm2, negate the multiplication result and subtract xmm3/m128/m64bcst and put result in xmm1.
EVEX.128.66.0F38.W1 BE /r VFNMSUB231PD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from xmm2 and xmm3/m128/m64bcst, negate the multiplication result and subtract xmm1 and put result in xmm1.
EVEX.256.66.0F38.W1 9E /r VFNMSUB132PD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from ymm1 and ymm3/m256/m64bcst, negate the multiplication result and subtract ymm2 and put result in ymm1.
EVEX.256.66.0F38.W1 AE /r VFNMSUB213PD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from ymm1 and ymm2, negate the multiplication result and subtract ymm3/m256/m64bcst and put result in ymm1.
EVEX.256.66.0F38.W1 BE /r VFNMSUB231PD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FMultiply packed double precision floating-point values from ymm2 and ymm3/m256/m64bcst, negate the multiplication result and subtract ymm1 and put result in ymm1.
EVEX.512.66.0F38.W1 9E /r VFNMSUB132PD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er}BV/VAVX512FMultiply packed double precision floating-point values from zmm1 and zmm3/m512/m64bcst, negate the multiplication result and subtract zmm2 and put result in zmm1.
EVEX.512.66.0F38.W1 AE /r VFNMSUB213PD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er}BV/VAVX512FMultiply packed double precision floating-point values from zmm1 and zmm2, negate the multiplication result and subtract zmm3/m512/m64bcst and put result in zmm1.
EVEX.512.66.0F38.W1 BE /r VFNMSUB231PD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er}BV/VAVX512FMultiply packed double precision floating-point values from zmm2 and zmm3/m512/m64bcst, negate the multiplication result and subtract zmm1 and put result in zmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
BFullModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

VFNMSUB132PD: Multiplies the two, four or eight packed double precision floating-point values from the first source operand to the two, four or eight packed double precision floating-point values in the third source operand. From negated infinite precision intermediate results, subtracts the two, four or eight packed double precision floating-point values in the second source operand, performs rounding and stores the resulting two, four or eight packed double precision floating-point values to the destination operand (first source operand).

+

VFNMSUB213PD: Multiplies the two, four or eight packed double precision floating-point values from the second source operand to the two, four or eight packed double precision floating-point values in the first source operand. From negated infinite precision intermediate results, subtracts the two, four or eight packed double precision floating-point values in the third source operand, performs rounding and stores the resulting two, four or eight packed double precision floating-point values to the destination operand (first source operand).

+

VFNMSUB231PD: Multiplies the two, four or eight packed double precision floating-point values from the second source to the two, four or eight packed double precision floating-point values in the third source operand. From negated infinite precision intermediate results, subtracts the two, four or eight packed double precision floating-point values in the first source operand, performs rounding and stores the resulting two, four or eight packed double precision floating-point values to the destination operand (first source operand).

+

EVEX encoded versions: The destination operand (also first source operand) and the second source operand are ZMM/YMM/XMM registers. The third source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand is conditionally updated with write mask k1.

+

VEX.256 encoded version: The destination operand (also first source operand) is a YMM register and encoded in reg_field. The second source operand is a YMM register and encoded in VEX.vvvv. The third source operand is a YMM register or a 256-bit memory location and encoded in rm_field.

+

VEX.128 encoded version: The destination operand (also first source operand) is a XMM register and encoded in reg_field. The second source operand is a XMM register and encoded in VEX.vvvv. The third source operand is a XMM register or a 128-bit memory location and encoded in rm_field. The upper 128 bits of the YMM destination register are zeroed.

+

Operation + ¶ +

+
In the operations below, “*” and “-” symbols represent multiplication and subtraction with infinite precision inputs and outputs (no
+rounding).
+
+

VFNMSUB132PD DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 2
+ELSEIF (VEX.256)
+    MAXNUM := 4
+FI
+For i = 0 to MAXNUM-1 {
+    n := 64*i;
+    DEST[n+63:n] := RoundFPControl_MXCSR( - (DEST[n+63:n]*SRC3[n+63:n]) - SRC2[n+63:n])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFNMSUB213PD DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 2
+ELSEIF (VEX.256)
+    MAXNUM := 4
+FI
+For i = 0 to MAXNUM-1 {
+    n := 64*i;
+    DEST[n+63:n] := RoundFPControl_MXCSR( - (SRC2[n+63:n]*DEST[n+63:n]) - SRC3[n+63:n])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFNMSUB231PD DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 2
+ELSEIF (VEX.256)
+    MAXNUM := 4
+FI
+For i = 0 to MAXNUM-1 {
+    n := 64*i;
+    DEST[n+63:n] := RoundFPControl_MXCSR( - (SRC2[n+63:n]*SRC3[n+63:n]) - DEST[n+63:n])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFNMSUB132PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] :=
+            RoundFPControl(-(DEST[i+63:i]*SRC3[i+63:i]) - SRC2[i+63:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFNMSUB132PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+63:i] :=
+            RoundFPControl_MXCSR(-(DEST[i+63:i]*SRC3[63:0]) - SRC2[i+63:i])
+                ELSE
+                    DEST[i+63:i] :=
+            RoundFPControl_MXCSR(-(DEST[i+63:i]*SRC3[i+63:i]) - SRC2[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFNMSUB213PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] :=
+            RoundFPControl(-(SRC2[i+63:i]*DEST[i+63:i]) - SRC3[i+63:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFNMSUB213PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+63:i] :=
+            RoundFPControl_MXCSR(-(SRC2[i+63:i]*DEST[i+63:i]) - SRC3[63:0])
+                ELSE
+                    DEST[i+63:i] :=
+            RoundFPControl_MXCSR(-(SRC2[i+63:i]*DEST[i+63:i]) - SRC3[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFNMSUB231PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] :=
+            RoundFPControl(-(SRC2[i+63:i]*SRC3[i+63:i]) - DEST[i+63:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFNMSUB231PD DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+63:i] :=
+            RoundFPControl_MXCSR(-(SRC2[i+63:i]*SRC3[63:0]) - DEST[i+63:i])
+                ELSE
+                    DEST[i+63:i] :=
+            RoundFPControl_MXCSR(-(SRC2[i+63:i]*SRC3[i+63:i]) - DEST[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFNMSUBxxxPD __m512d _mm512_fnmsub_pd(__m512d a, __m512d b, __m512d c);
+
+
VFNMSUBxxxPD __m512d _mm512_fnmsub_round_pd(__m512d a, __m512d b, __m512d c, int r);
+
+
VFNMSUBxxxPD __m512d _mm512_mask_fnmsub_pd(__m512d a, __mmask8 k, __m512d b, __m512d c);
+
+
VFNMSUBxxxPD __m512d _mm512_maskz_fnmsub_pd(__mmask8 k, __m512d a, __m512d b, __m512d c);
+
+
VFNMSUBxxxPD __m512d _mm512_mask3_fnmsub_pd(__m512d a, __m512d b, __m512d c, __mmask8 k);
+
+
VFNMSUBxxxPD __m512d _mm512_mask_fnmsub_round_pd(__m512d a, __mmask8 k, __m512d b, __m512d c, int r);
+
+
VFNMSUBxxxPD __m512d _mm512_maskz_fnmsub_round_pd(__mmask8 k, __m512d a, __m512d b, __m512d c, int r);
+
+
VFNMSUBxxxPD __m512d _mm512_mask3_fnmsub_round_pd(__m512d a, __m512d b, __m512d c, __mmask8 k, int r);
+
+
VFNMSUBxxxPD __m256d _mm256_mask_fnmsub_pd(__m256d a, __mmask8 k, __m256d b, __m256d c);
+
+
VFNMSUBxxxPD __m256d _mm256_maskz_fnmsub_pd(__mmask8 k, __m256d a, __m256d b, __m256d c);
+
+
VFNMSUBxxxPD __m256d _mm256_mask3_fnmsub_pd(__m256d a, __m256d b, __m256d c, __mmask8 k);
+
+
VFNMSUBxxxPD __m128d _mm_mask_fnmsub_pd(__m128d a, __mmask8 k, __m128d b, __m128d c);
+
+
VFNMSUBxxxPD __m128d _mm_maskz_fnmsub_pd(__mmask8 k, __m128d a, __m128d b, __m128d c);
+
+
VFNMSUBxxxPD __m128d _mm_mask3_fnmsub_pd(__m128d a, __m128d b, __m128d c, __mmask8 k);
+
+
VFNMSUBxxxPD __m128d _mm_fnmsub_pd (__m128d a, __m128d b, __m128d c);
+
+
VFNMSUBxxxPD __m256d _mm256_fnmsub_pd (__m256d a, __m256d b, __m256d c);
+
+
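As a hedged usage sketch (not part of the reference text), the unmasked 256-bit intrinsic _mm256_fnmsub_pd listed above can be exercised as follows; it assumes a compiler with FMA support (e.g., -mfma).

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m256d a = _mm256_setr_pd(1.0, 2.0, 3.0, 4.0);
    __m256d b = _mm256_set1_pd(10.0);
    __m256d c = _mm256_set1_pd(1.0);

    /* Each lane computes -(a*b) - c with one rounding;
       lane 0 is -(1*10) - 1 = -11, lane 3 is -(4*10) - 1 = -41. */
    __m256d r = _mm256_fnmsub_pd(a, b, c);

    double out[4];
    _mm256_storeu_pd(out, r);
    printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);
    return 0;
}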

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vfnmsub132ps.vfnmsub213ps.vfnmsub231ps.html b/x86/vfnmsub132ps.vfnmsub213ps.vfnmsub231ps.html new file mode 100644 index 0000000..30bb6cf --- /dev/null +++ b/x86/vfnmsub132ps.vfnmsub213ps.vfnmsub231ps.html @@ -0,0 +1,399 @@ + +VFNMSUB132PS/VFNMSUB213PS/VFNMSUB231PS + — Fused Negative Multiply-Subtract ofPacked Single Precision Floating-Point Values

VFNMSUB132PS/VFNMSUB213PS/VFNMSUB231PS + — Fused Negative Multiply-Subtract of Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W0 9E /r VFNMSUB132PS xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed single-precision floating-point values from xmm1 and xmm3/mem, negate the multiplication result and subtract xmm2 and put result in xmm1.
VEX.128.66.0F38.W0 AE /r VFNMSUB213PS xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed single-precision floating-point values from xmm1 and xmm2, negate the multiplication result and subtract xmm3/mem and put result in xmm1.
VEX.128.66.0F38.W0 BE /r VFNMSUB231PS xmm1, xmm2, xmm3/m128AV/VFMAMultiply packed single-precision floating-point values from xmm2 and xmm3/mem, negate the multiplication result and subtract xmm1 and put result in xmm1.
VEX.256.66.0F38.W0 9E /r VFNMSUB132PS ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed single-precision floating-point values from ymm1 and ymm3/mem, negate the multiplication result and subtract ymm2 and put result in ymm1.
VEX.256.66.0F38.W0 AE /r VFNMSUB213PS ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed single-precision floating-point values from ymm1 and ymm2, negate the multiplication result and subtract ymm3/mem and put result in ymm1.
VEX.256.66.0F38.W0 BE /r VFNMSUB231PS ymm1, ymm2, ymm3/m256AV/VFMAMultiply packed single-precision floating-point values from ymm2 and ymm3/mem, negate the multiplication result and subtract ymm1 and put result in ymm1.
EVEX.128.66.0F38.W0 9E /r VFNMSUB132PS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FMultiply packed single-precision floating-point values from xmm1 and xmm3/m128/m32bcst, negate the multiplication result and subtract xmm2 and put result in xmm1.
EVEX.128.66.0F38.W0 AE /r VFNMSUB213PS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FMultiply packed single-precision floating-point values from xmm1 and xmm2, negate the multiplication result and subtract xmm3/m128/m32bcst and put result in xmm1.
EVEX.128.66.0F38.W0 BE /r VFNMSUB231PS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FMultiply packed single-precision floating-point values from xmm2 and xmm3/m128/m32bcst, negate the multiplication result and subtract xmm1 and put result in xmm1.
EVEX.256.66.0F38.W0 9E /r VFNMSUB132PS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FMultiply packed single-precision floating-point values from ymm1 and ymm3/m256/m32bcst, negate the multiplication result and subtract ymm2 and put result in ymm1.
EVEX.256.66.0F38.W0 AE /r VFNMSUB213PS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FMultiply packed single-precision floating-point values from ymm1 and ymm2, negate the multiplication result and subtract ymm3/m256/m32bcst and put result in ymm1.
EVEX.256.66.0F38.W0 BE /r VFNMSUB231PS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FMultiply packed single-precision floating-point values from ymm2 and ymm3/m256/m32bcst, negate the multiplication result and subtract ymm1 and put result in ymm1.
EVEX.512.66.0F38.W0 9E /r VFNMSUB132PS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{er}BV/VAVX512FMultiply packed single-precision floating-point values from zmm1 and zmm3/m512/m32bcst, negate the multiplication result and subtract zmm2 and put result in zmm1.
EVEX.512.66.0F38.W0 AE /r VFNMSUB213PS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{er}BV/VAVX512FMultiply packed single-precision floating-point values from zmm1 and zmm2, negate the multiplication result and subtract zmm3/m512/m32bcst and put result in zmm1.
EVEX.512.66.0F38.W0 BE /r VFNMSUB231PS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{er}BV/VAVX512FMultiply packed single-precision floating-point values from zmm2 and zmm3/m512/m32bcst, negate the multiplication result and subtract zmm1 and put result in zmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
BFullModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

VFNMSUB132PS: Multiplies the four, eight or sixteen packed single-precision floating-point values from the first source operand to the four, eight or sixteen packed single-precision floating-point values in the third source operand. From negated infinite precision intermediate results, subtracts the four, eight or sixteen packed single-precision floating-point values in the second source operand, performs rounding and stores the resulting four, eight or sixteen packed single-precision floating-point values to the destination operand (first source operand).

+

VFNMSUB213PS: Multiplies the four, eight or sixteen packed single-precision floating-point values from the second source operand to the four, eight or sixteen packed single-precision floating-point values in the first source operand. From negated infinite precision intermediate results, subtracts the four, eight or sixteen packed single-precision floating-point values in the third source operand, performs rounding and stores the resulting four, eight or sixteen packed single-precision floating-point values to the destination operand (first source operand).

+

VFNMSUB231PS: Multiplies the four, eight or sixteen packed single-precision floating-point values from the second source to the four, eight or sixteen packed single-precision floating-point values in the third source operand. From negated infinite precision intermediate results, subtracts the four, eight or sixteen packed single-precision floating-point values in the first source operand, performs rounding and stores the resulting four, eight or sixteen packed single-precision floating-point values to the destination operand (first source operand).

+

EVEX encoded versions: The destination operand (also first source operand) and the second source operand are ZMM/YMM/XMM registers. The third source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand is conditionally updated with write mask k1.

+

VEX.256 encoded version: The destination operand (also first source operand) is a YMM register and encoded in reg_field. The second source operand is a YMM register and encoded in VEX.vvvv. The third source operand is a YMM register or a 256-bit memory location and encoded in rm_field.

+

VEX.128 encoded version: The destination operand (also first source operand) is a XMM register and encoded in reg_field. The second source operand is a XMM register and encoded in VEX.vvvv. The third source operand is a XMM register or a 128-bit memory location and encoded in rm_field. The upper 128 bits of the YMM destination register are zeroed.

+

Operation + ¶ +

+
In the operations below, “*” and “-” symbols represent multiplication and subtraction with infinite precision inputs and outputs (no
+rounding).
+
+

VFNMSUB132PS DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 2
+ELSEIF (VEX.256)
+    MAXNUM := 4
+FI
+For i = 0 to MAXNUM-1 {
+    n := 32*i;
+    DEST[n+31:n] := RoundFPControl_MXCSR( - (DEST[n+31:n]*SRC3[n+31:n]) - SRC2[n+31:n])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFNMSUB213PS DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 2
+ELSEIF (VEX.256)
+    MAXNUM := 4
+FI
+For i = 0 to MAXNUM-1 {
+    n := 32*i;
+    DEST[n+31:n] := RoundFPControl_MXCSR( - (SRC2[n+31:n]*DEST[n+31:n]) - SRC3[n+31:n])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFNMSUB231PS DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
IF (VEX.128) THEN
+    MAXNUM := 2
+ELSEIF (VEX.256)
+    MAXNUM := 4
+FI
+For i = 0 to MAXNUM-1 {
+    n := 32*i;
+    DEST[n+31:n] := RoundFPControl_MXCSR( - (SRC2[n+31:n]*SRC3[n+31:n]) - DEST[n+31:n])
+}
+IF (VEX.128) THEN
+    DEST[MAXVL-1:128] := 0
+ELSEIF (VEX.256)
+    DEST[MAXVL-1:256] := 0
+FI
+
+

VFNMSUB132PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] :=
+            RoundFPControl(-(DEST[i+31:i]*SRC3[i+31:i]) - SRC2[i+31:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFNMSUB132PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+31:i] :=
+            RoundFPControl_MXCSR(-(DEST[i+31:i]*SRC3[31:0]) - SRC2[i+31:i])
+                ELSE
+                    DEST[i+31:i] :=
+            RoundFPControl_MXCSR(-(DEST[i+31:i]*SRC3[i+31:i]) - SRC2[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFNMSUB213PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] :=
+            RoundFPControl(-(SRC2[i+31:i]*DEST[i+31:i]) - SRC3[i+31:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFNMSUB213PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+31:i] :=
+            RoundFPControl_MXCSR(-(SRC2[i+31:i]*DEST[i+31:i]) - SRC3[31:0])
+                ELSE
+                    DEST[i+31:i] :=
+            RoundFPControl_MXCSR(-(SRC2[i+31:i]*DEST[i+31:i]) - SRC3[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFNMSUB231PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a register) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] :=
+            RoundFPControl(-(SRC2[i+31:i]*SRC3[i+31:i]) - DEST[i+31:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VFNMSUB231PS DEST, SRC2, SRC3 (EVEX encoded version, when src3 operand is a memory source) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1)
+                THEN
+                    DEST[i+31:i] :=
+            RoundFPControl_MXCSR(-(SRC2[i+31:i]*SRC3[31:0]) - DEST[i+31:i])
+                ELSE
+                    DEST[i+31:i] :=
+            RoundFPControl_MXCSR(-(SRC2[i+31:i]*SRC3[i+31:i]) - DEST[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFNMSUBxxxPS __m512 _mm512_fnmsub_ps(__m512 a, __m512 b, __m512 c);
+
+
VFNMSUBxxxPS __m512 _mm512_fnmsub_round_ps(__m512 a, __m512 b, __m512 c, int r);
+
+
VFNMSUBxxxPS __m512 _mm512_mask_fnmsub_ps(__m512 a, __mmask16 k, __m512 b, __m512 c);
+
+
VFNMSUBxxxPS __m512 _mm512_maskz_fnmsub_ps(__mmask16 k, __m512 a, __m512 b, __m512 c);
+
+
VFNMSUBxxxPS __m512 _mm512_mask3_fnmsub_ps(__m512 a, __m512 b, __m512 c, __mmask16 k);
+
+
VFNMSUBxxxPS __m512 _mm512_mask_fnmsub_round_ps(__m512 a, __mmask16 k, __m512 b, __m512 c, int r);
+
+
VFNMSUBxxxPS __m512 _mm512_maskz_fnmsub_round_ps(__mmask16 k, __m512 a, __m512 b, __m512 c, int r);
+
+
VFNMSUBxxxPS __m512 _mm512_mask3_fnmsub_round_ps(__m512 a, __m512 b, __m512 c, __mmask16 k, int r);
+
+
VFNMSUBxxxPS __m256 _mm256_mask_fnmsub_ps(__m256 a, __mmask8 k, __m256 b, __m256 c);
+
+
VFNMSUBxxxPS __m256 _mm256_maskz_fnmsub_ps(__mmask8 k, __m256 a, __m256 b, __m256 c);
+
+
VFNMSUBxxxPS __m256 _mm256_mask3_fnmsub_ps(__m256 a, __m256 b, __m256 c, __mmask8 k);
+
+
VFNMSUBxxxPS __m128 _mm_mask_fnmsub_ps(__m128 a, __mmask8 k, __m128 b, __m128 c);
+
+
VFNMSUBxxxPS __m128 _mm_maskz_fnmsub_ps(__mmask8 k, __m128 a, __m128 b, __m128 c);
+
+
VFNMSUBxxxPS __m128 _mm_mask3_fnmsub_ps(__m128 a, __m128 b, __m128 c, __mmask8 k);
+
+
VFNMSUBxxxPS __m128 _mm_fnmsub_ps (__m128 a, __m128 b, __m128 c);
+
+
VFNMSUBxxxPS __m256 _mm256_fnmsub_ps (__m256 a, __m256 b, __m256 c);
+
+
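As a hedged usage sketch (not part of the reference text), the zero-masked 512-bit intrinsic _mm512_maskz_fnmsub_ps listed above can be exercised as follows; it assumes a compiler and CPU with AVX512F support (e.g., -mavx512f).

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m512 a = _mm512_set1_ps(2.0f);
    __m512 b = _mm512_set1_ps(3.0f);
    __m512 c = _mm512_set1_ps(1.0f);

    /* Zeroing-masked form: lanes whose mask bit is 0 are zeroed, the rest
       get -(a*b) - c = -(2*3) - 1 = -7. Mask 0x00FF keeps the low 8 lanes. */
    __m512 r = _mm512_maskz_fnmsub_ps((__mmask16)0x00FF, a, b, c);

    float out[16];
    _mm512_storeu_ps(out, r);
    printf("lane0=%f lane15=%f\n", out[0], out[15]); /* -7.000000 and 0.000000 */
    return 0;
}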

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-19, “Type 2 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vfnmsub132sd.vfnmsub213sd.vfnmsub231sd.html b/x86/vfnmsub132sd.vfnmsub213sd.vfnmsub231sd.html new file mode 100644 index 0000000..dda9e32 --- /dev/null +++ b/x86/vfnmsub132sd.vfnmsub213sd.vfnmsub231sd.html @@ -0,0 +1,206 @@ + +VFNMSUB132SD/VFNMSUB213SD/VFNMSUB231SD + — Fused Negative Multiply-Subtract ofScalar Double Precision Floating-Point Values

VFNMSUB132SD/VFNMSUB213SD/VFNMSUB231SD + — Fused Negative Multiply-Subtract of Scalar Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.LIG.66.0F38.W1 9F /r VFNMSUB132SD xmm1, xmm2, xmm3/m64AV/VFMAMultiply scalar double precision floating-point value from xmm1 and xmm3/mem, negate the multiplication result and subtract xmm2 and put result in xmm1.
VEX.LIG.66.0F38.W1 AF /r VFNMSUB213SD xmm1, xmm2, xmm3/m64AV/VFMAMultiply scalar double precision floating-point value from xmm1 and xmm2, negate the multiplication result and subtract xmm3/mem and put result in xmm1.
VEX.LIG.66.0F38.W1 BF /r VFNMSUB231SD xmm1, xmm2, xmm3/m64AV/VFMAMultiply scalar double precision floating-point value from xmm2 and xmm3/mem, negate the multiplication result and subtract xmm1 and put result in xmm1.
EVEX.LLIG.66.0F38.W1 9F /r VFNMSUB132SD xmm1 {k1}{z}, xmm2, xmm3/m64{er}BV/VAVX512FMultiply scalar double precision floating-point value from xmm1 and xmm3/m64, negate the multiplication result and subtract xmm2 and put result in xmm1.
EVEX.LLIG.66.0F38.W1 AF /r VFNMSUB213SD xmm1 {k1}{z}, xmm2, xmm3/m64{er}BV/VAVX512FMultiply scalar double precision floating-point value from xmm1 and xmm2, negate the multiplication result and subtract xmm3/m64 and put result in xmm1.
EVEX.LLIG.66.0F38.W1 BF /r VFNMSUB231SD xmm1 {k1}{z}, xmm2, xmm3/m64{er}BV/VAVX512FMultiply scalar double precision floating-point value from xmm2 and xmm3/m64, negate the multiplication result and subtract xmm1 and put result in xmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
BTuple1 ScalarModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

VFNMSUB132SD: Multiplies the low packed double precision floating-point value from the first source operand to the low packed double precision floating-point value in the third source operand. From negated infinite precision intermediate result, subtracts the low double precision floating-point value in the second source operand, performs rounding and stores the resulting packed double precision floating-point value to the destination operand (first source operand).

+

VFNMSUB213SD: Multiplies the low packed double precision floating-point value from the second source operand to the low packed double precision floating-point value in the first source operand. From negated infinite precision intermediate result, subtracts the low double precision floating-point value in the third source operand, performs rounding and stores the resulting packed double precision floating-point value to the destination operand (first source operand).

+

VFNMSUB231SD: Multiplies the low packed double precision floating-point value from the second source to the low packed double precision floating-point value in the third source operand. From negated infinite precision intermediate result, subtracts the low double precision floating-point value in the first source operand, performs rounding and stores the resulting packed double precision floating-point value to the destination operand (first source operand).

+

VEX.128 and EVEX encoded version: The destination operand (also first source operand) is encoded in reg_field. The second source operand is encoded in VEX.vvvv/EVEX.vvvv. The third source operand is encoded in rm_field. Bits 127:64 of the destination are unchanged. Bits MAXVL-1:128 of the destination register are zeroed.

+

EVEX encoded version: The low quadword element of the destination is updated according to the writemask.

+

Compiler tools may optionally support a complementary mnemonic for each instruction mnemonic listed in the opcode/instruction column of the summary table. The behavior of the complementary mnemonic in situations involving NaNs is governed by the definition of the instruction mnemonic defined in the opcode/instruction column.

+

Operation + ¶ +

+
In the operations below, “*” and “-” symbols represent multiplication and subtraction with infinite precision inputs and outputs (no
+rounding).
+
+

VFNMSUB132SD DEST, SRC2, SRC3 (EVEX encoded version) + ¶ +

+
IF (EVEX.b = 1) and SRC3 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[63:0] := RoundFPControl(-(DEST[63:0]*SRC3[63:0]) - SRC2[63:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[63:0] := 0
+        FI;
+FI;
+DEST[127:64] := DEST[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VFNMSUB213SD DEST, SRC2, SRC3 (EVEX encoded version) + ¶ +

+
IF (EVEX.b = 1) and SRC3 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[63:0] := RoundFPControl(-(SRC2[63:0]*DEST[63:0]) - SRC3[63:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[63:0] := 0
+        FI;
+FI;
+DEST[127:64] := DEST[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VFNMSUB231SD DEST, SRC2, SRC3 (EVEX encoded version) + ¶ +

+
IF (EVEX.b = 1) and SRC3 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[63:0] := RoundFPControl(-(SRC2[63:0]*SRC3[63:0]) - DEST[63:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[63:0] := 0
+        FI;
+FI;
+DEST[127:64] := DEST[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VFNMSUB132SD DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
DEST[63:0] := RoundFPControl_MXCSR(- (DEST[63:0]*SRC3[63:0]) - SRC2[63:0])
+DEST[127:64] := DEST[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VFNMSUB213SD DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
DEST[63:0] := RoundFPControl_MXCSR(- (SRC2[63:0]*DEST[63:0]) - SRC3[63:0])
+DEST[127:64] := DEST[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VFNMSUB231SD DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
DEST[63:0] := RoundFPControl_MXCSR(- (SRC2[63:0]*SRC3[63:0]) - DEST[63:0])
+DEST[127:64] := DEST[127:64]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFNMSUBxxxSD __m128d _mm_fnmsub_round_sd(__m128d a, __m128d b, __m128d c, int r);
+
+
VFNMSUBxxxSD __m128d _mm_mask_fnmsub_sd(__m128d a, __mmask8 k, __m128d b, __m128d c);
+
+
VFNMSUBxxxSD __m128d _mm_maskz_fnmsub_sd(__mmask8 k, __m128d a, __m128d b, __m128d c);
+
+
VFNMSUBxxxSD __m128d _mm_mask3_fnmsub_sd(__m128d a, __m128d b, __m128d c, __mmask8 k);
+
+
VFNMSUBxxxSD __m128d _mm_mask_fnmsub_round_sd(__m128d a, __mmask8 k, __m128d b, __m128d c, int r);
+
+
VFNMSUBxxxSD __m128d _mm_maskz_fnmsub_round_sd(__mmask8 k, __m128d a, __m128d b, __m128d c, int r);
+
+
VFNMSUBxxxSD __m128d _mm_mask3_fnmsub_round_sd(__m128d a, __m128d b, __m128d c, __mmask8 k, int r);
+
+
VFNMSUBxxxSD __m128d _mm_fnmsub_sd (__m128d a, __m128d b, __m128d c);
+
+
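A usage sketch (not part of the reference text) for the unmasked scalar intrinsic _mm_fnmsub_sd listed above, assuming a compiler with FMA support (-mfma):

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128d a = _mm_set_sd(1.5);
    __m128d b = _mm_set_sd(2.0);
    __m128d c = _mm_set_sd(0.25);

    /* Low element: -(1.5*2.0) - 0.25 = -3.25; bits 127:64 come from a. */
    __m128d r = _mm_fnmsub_sd(a, b, c);

    printf("%f\n", _mm_cvtsd_f64(r)); /* prints -3.250000 */
    return 0;
}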

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vfnmsub132ss.vfnmsub213ss.vfnmsub231ss.html b/x86/vfnmsub132ss.vfnmsub213ss.vfnmsub231ss.html new file mode 100644 index 0000000..2eef1f5 --- /dev/null +++ b/x86/vfnmsub132ss.vfnmsub213ss.vfnmsub231ss.html @@ -0,0 +1,207 @@ + +VFNMSUB132SS/VFNMSUB213SS/VFNMSUB231SS + — Fused Negative Multiply-Subtract ofScalar Single Precision Floating-Point Values

VFNMSUB132SS/VFNMSUB213SS/VFNMSUB231SS + — Fused Negative Multiply-Subtract of Scalar Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.LIG.66.0F38.W0 9F /r VFNMSUB132SS xmm1, xmm2, xmm3/m32AV/VFMAMultiply scalar single-precision floating-point value from xmm1 and xmm3/m32, negate the multiplication result and subtract xmm2 and put result in xmm1.
VEX.LIG.66.0F38.W0 AF /r VFNMSUB213SS xmm1, xmm2, xmm3/m32AV/VFMAMultiply scalar single-precision floating-point value from xmm1 and xmm2, negate the multiplication result and subtract xmm3/m32 and put result in xmm1.
VEX.LIG.66.0F38.W0 BF /r VFNMSUB231SS xmm1, xmm2, xmm3/m32AV/VFMAMultiply scalar single-precision floating-point value from xmm2 and xmm3/m32, negate the multiplication result and subtract xmm1 and put result in xmm1.
EVEX.LLIG.66.0F38.W0 9F /r VFNMSUB132SS xmm1 {k1}{z}, xmm2, xmm3/m32{er}BV/VAVX512FMultiply scalar single-precision floating-point value from xmm1 and xmm3/m32, negate the multiplication result and subtract xmm2 and put result in xmm1.
EVEX.LLIG.66.0F38.W0 AF /r VFNMSUB213SS xmm1 {k1}{z}, xmm2, xmm3/m32{er}BV/VAVX512FMultiply scalar single-precision floating-point value from xmm1 and xmm2, negate the multiplication result and subtract xmm3/m32 and put result in xmm1.
EVEX.LLIG.66.0F38.W0 BF /r VFNMSUB231SS xmm1 {k1}{z}, xmm2, xmm3/m32{er}BV/VAVX512FMultiply scalar single-precision floating-point value from xmm2 and xmm3/m32, negate the multiplication result and subtract xmm1 and put result in xmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
BTuple1 ScalarModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

VFNMSUB132SS: Multiplies the low packed single-precision floating-point value from the first source operand to the low packed single-precision floating-point value in the third source operand. From negated infinite precision intermediate result, subtracts the low single-precision floating-point value in the second source operand, performs rounding and stores the resulting packed single-precision floating-point value to the destination operand (first source operand).

+

VFNMSUB213SS: Multiplies the low packed single-precision floating-point value from the second source operand to the low packed single-precision floating-point value in the first source operand. From negated infinite precision intermediate result, subtracts the low single-precision floating-point value in the third source operand, performs rounding and stores the resulting packed single-precision floating-point value to the destination operand (first source operand).

+

VFNMSUB231SS: Multiplies the low packed single-precision floating-point value from the second source to the low packed single-precision floating-point value in the third source operand. From negated infinite precision intermediate result, subtracts the low single-precision floating-point value in the first source operand, performs rounding and stores the resulting packed single-precision floating-point value to the destination operand (first source operand).

+

VEX.128 and EVEX encoded version: The destination operand (also first source operand) is encoded in reg_field. The second source operand is encoded in VEX.vvvv/EVEX.vvvv. The third source operand is encoded in rm_field. Bits 127:32 of the destination are unchanged. Bits MAXVL-1:128 of the destination register are zeroed.

+

EVEX encoded version: The low doubleword element of the destination is updated according to the writemask.

+

Compiler tools may optionally support a complementary mnemonic for each instruction mnemonic listed in the opcode/instruction column of the summary table. The behavior of the complementary mnemonic in situations involving NaNs is governed by the definition of the instruction mnemonic defined in the opcode/instruction column.

+

Operation + ¶ +

+
In the operations below, “*” and “-” symbols represent multiplication and subtraction with infinite precision inputs and outputs (no
+rounding).
+
+

VFNMSUB132SS DEST, SRC2, SRC3 (EVEX encoded version) + ¶ +

+
IF (EVEX.b = 1) and SRC3 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[31:0] := RoundFPControl(-(DEST[31:0]*SRC3[31:0]) - SRC2[31:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[31:0] := 0
+        FI;
+FI;
+DEST[127:32] := DEST[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VFNMSUB213SS DEST, SRC2, SRC3 (EVEX encoded version) + ¶ +

+
IF (EVEX.b = 1) and SRC3 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[31:0] := RoundFPControl(-(SRC2[31:0]*DEST[31:0]) - SRC3[31:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[31:0] := 0
+        FI;
+FI;
+DEST[127:32] := DEST[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VFNMSUB231SS DEST, SRC2, SRC3 (EVEX encoded version) + ¶ +

+
IF (EVEX.b = 1) and SRC3 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] or *no writemask*
+    THEN DEST[31:0] := RoundFPControl(-(SRC2[31:0]*SRC3[31:0]) - DEST[31:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[31:0] := 0
+        FI;
+FI;
+DEST[127:32] := DEST[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VFNMSUB132SS DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
DEST[31:0] := RoundFPControl_MXCSR(- (DEST[31:0]*SRC3[31:0]) - SRC2[31:0])
+DEST[127:32] := DEST[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VFNMSUB213SS DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
DEST[31:0] := RoundFPControl_MXCSR(- (SRC2[31:0]*DEST[31:0]) - SRC3[31:0])
+DEST[127:32] := DEST[127:32]
+DEST[MAXVL-1:128] := 0
+
+

VFNMSUB231SS DEST, SRC2, SRC3 (VEX encoded version) + ¶ +

+
DEST[31:0] := RoundFPControl_MXCSR(- (SRC2[31:0]*SRC3[31:0]) - DEST[31:0])
+DEST[127:32] := DEST[127:32]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFNMSUBxxxSS __m128 _mm_fnmsub_round_ss(__m128 a, __m128 b, __m128 c, int r);
+
+
VFNMSUBxxxSS __m128 _mm_mask_fnmsub_ss(__m128 a, __mmask8 k, __m128 b, __m128 c);
+
+
VFNMSUBxxxSS __m128 _mm_maskz_fnmsub_ss(__mmask8 k, __m128 a, __m128 b, __m128 c);
+
+
VFNMSUBxxxSS __m128 _mm_mask3_fnmsub_ss(__m128 a, __m128 b, __m128 c, __mmask8 k);
+
+
VFNMSUBxxxSS __m128 _mm_mask_fnmsub_round_ss(__m128 a, __mmask8 k, __m128 b, __m128 c, int r);
+
+
VFNMSUBxxxSS __m128 _mm_maskz_fnmsub_round_ss(__mmask8 k, __m128 a, __m128 b, __m128 c, int r);
+
+
VFNMSUBxxxSS __m128 _mm_mask3_fnmsub_round_ss(__m128 a, __m128 b, __m128 c, __mmask8 k, int r);
+
+
VFNMSUBxxxSS __m128 _mm_fnmsub_ss (__m128 a, __m128 b, __m128 c);
+
+
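A usage sketch (not part of the reference text) comparing the fused scalar intrinsic _mm_fnmsub_ss listed above with the equivalent plain C expression, assuming a compiler with FMA support (-mfma):

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    float a = 0.5f, b = 8.0f, c = 2.0f;

    __m128 r = _mm_fnmsub_ss(_mm_set_ss(a), _mm_set_ss(b), _mm_set_ss(c));

    /* The fused form rounds once; the plain expression rounds the product and
       the subtraction separately. Both give -6 here because 0.5*8 is exact. */
    printf("fused=%f separate=%f\n", _mm_cvtss_f32(r), -(a * b) - c);
    return 0;
}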

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vfpclasspd.html b/x86/vfpclasspd.html new file mode 100644 index 0000000..b982d5f --- /dev/null +++ b/x86/vfpclasspd.html @@ -0,0 +1,164 @@ + +VFPCLASSPD + — Tests Types of Packed Float64 Values

VFPCLASSPD + — Tests Types of Packed Float64 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F3A.W1 66 /r ib VFPCLASSPD k2 {k1}, xmm2/m128/m64bcst, imm8AV/VAVX512VL AVX512DQTests the input for the following categories: NaN, +0, -0, +Infinity, -Infinity, denormal, finite negative. The immediate field provides a mask bit for each of these category tests. The masked test results are OR-ed together to form a mask result.
EVEX.256.66.0F3A.W1 66 /r ib VFPCLASSPD k2 {k1}, ymm2/m256/m64bcst, imm8AV/VAVX512VL AVX512DQTests the input for the following categories: NaN, +0, -0, +Infinity, -Infinity, denormal, finite negative. The immediate field provides a mask bit for each of these category tests. The masked test results are OR-ed together to form a mask result.
EVEX.512.66.0F3A.W1 66 /r ib VFPCLASSPD k2 {k1}, zmm2/m512/m64bcst, imm8AV/VAVX512DQTests the input for the following categories: NaN, +0, -0, +Infinity, -Infinity, denormal, finite negative. The immediate field provides a mask bit for each of these category tests. The masked test results are OR-ed together to form a mask result.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

The FPCLASSPD instruction checks the packed double precision floating-point values for special categories, specified by the set bits in the imm8 byte. Each set bit in imm8 specifies a category of floating-point values that the input data element is classified against. The classified results of all specified categories of an input value are ORed together to form the final boolean result for the input element. The result of each element is written to the corresponding bit in a mask register k2 according to the writemask k1. Bits [MAX_KL-1:8/4/2] of the destination are cleared.

+

The classification categories specified by imm8 are shown in Figure 5-13. The classification test for each category is listed in Table 5-14.

+
+ + + + + + + + + + + + + + + + + + + + +
76543210
SNaNNeg. FiniteDenormalNeg. INF+INFNeg. 0+0QNaN
+
Figure 5-13. Imm8 Byte Specifier of Special Case Floating-Point Values for VFPCLASSPD/SD/PS/SS
+
Table 5-14. Classifier Operations for VFPCLASSPD/SD/PS/SS
+

Bits       Category   Classifier
Imm8[0]    QNAN       Checks for QNaN
Imm8[1]    PosZero    Checks for +0
Imm8[2]    NegZero    Checks for -0
Imm8[3]    PosINF     Checks for +INF
Imm8[4]    NegINF     Checks for -INF
Imm8[5]    Denormal   Checks for Denormal
Imm8[6]    Negative   Checks for Negative finite
Imm8[7]    SNAN       Checks for SNaN

+

The source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 64-bit memory location.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+
CheckFPClassDP (tsrc[63:0], imm8[7:0]){
+    //* Start checking the source operand for special type *//
+    NegNum :=tsrc[63];
+    IF (tsrc[62:52]=07FFh) Then ExpAllOnes := 1; FI;
+    IF (tsrc[62:52]=0h) Then ExpAllZeros := 1;
+    IF (ExpAllZeros AND MXCSR.DAZ) Then
+        MantAllZeros := 1;
+    ELSIF (tsrc[51:0]=0h) Then
+        MantAllZeros := 1;
+    FI;
+    ZeroNumber := ExpAllZeros AND MantAllZeros
+    SignalingBit := tsrc[51];
+    sNaN_res := ExpAllOnes AND NOT(MantAllZeros) AND NOT(SignalingBit); // sNaN
+    qNaN_res := ExpAllOnes AND NOT(MantAllZeros) AND SignalingBit; // qNaN
+    Pzero_res := NOT(NegNum) AND ExpAllZeros AND MantAllZeros; // +0
+    Nzero_res := NegNum AND ExpAllZeros AND MantAllZeros; // -0
+    PInf_res := NOT(NegNum) AND ExpAllOnes AND MantAllZeros; // +Inf
+    NInf_res := NegNum AND ExpAllOnes AND MantAllZeros; // -Inf
+    Denorm_res := ExpAllZeros AND NOT(MantAllZeros); // denorm
+    FinNeg_res := NegNum AND NOT(ExpAllOnes) AND NOT(ZeroNumber); // -finite
+    bResult = ( imm8[0] AND qNaN_res ) OR (imm8[1] AND Pzero_res ) OR
+            ( imm8[2] AND Nzero_res ) OR ( imm8[3] AND PInf_res ) OR
+            ( imm8[4] AND NInf_res ) OR ( imm8[5] AND Denorm_res ) OR
+            ( imm8[6] AND FinNeg_res ) OR ( imm8[7] AND sNaN_res );
+    Return bResult;
+} //* end of CheckFPClassDP() *//
+
+
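For illustration only (not part of the reference text), the classifier above can be written as plain C over the raw bit pattern of one double; MXCSR.DAZ handling is deliberately omitted, so this sketch treats denormal inputs as denormals regardless of DAZ.

#include <stdint.h>
#include <string.h>

/* Classify one double against an imm8 category mask, mirroring CheckFPClassDP. */
static int check_fp_class_dp(double x, unsigned imm8)
{
    uint64_t bits;
    memcpy(&bits, &x, sizeof bits);

    int neg           = (bits >> 63) & 1;
    int exp_all_ones  = ((bits >> 52) & 0x7FF) == 0x7FF;
    int exp_all_zeros = ((bits >> 52) & 0x7FF) == 0;
    int mant_zero     = (bits & 0xFFFFFFFFFFFFFull) == 0;
    int zero          = exp_all_zeros && mant_zero;
    int quiet_bit     = (bits >> 51) & 1;       /* tsrc[51] in the pseudocode */

    int qnan   = exp_all_ones && !mant_zero && quiet_bit;
    int snan   = exp_all_ones && !mant_zero && !quiet_bit;
    int denorm = exp_all_zeros && !mant_zero;
    int negfin = neg && !exp_all_ones && !zero; /* negative finite, non-zero */

    return ((imm8 >> 0) & 1 ? qnan                                : 0) |
           ((imm8 >> 1) & 1 ? (!neg && zero)                      : 0) |
           ((imm8 >> 2) & 1 ? ( neg && zero)                      : 0) |
           ((imm8 >> 3) & 1 ? (!neg && exp_all_ones && mant_zero) : 0) |
           ((imm8 >> 4) & 1 ? ( neg && exp_all_ones && mant_zero) : 0) |
           ((imm8 >> 5) & 1 ? denorm                              : 0) |
           ((imm8 >> 6) & 1 ? negfin                              : 0) |
           ((imm8 >> 7) & 1 ? snan                                : 0);
}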

VFPCLASSPD (EVEX Encoded versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b == 1) AND (SRC *is memory*)
+                THEN
+                    DEST[j] := CheckFPClassDP(SRC1[63:0], imm8[7:0]);
+                ELSE
+                    DEST[j] := CheckFPClassDP(SRC1[i+63:i], imm8[7:0]);
+            FI;
+        ELSE DEST[j] := 0
+                        ; zeroing-masking only
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFPCLASSPD __mmask8 _mm512_fpclass_pd_mask( __m512d a, int c);
+
+
VFPCLASSPD __mmask8 _mm512_mask_fpclass_pd_mask( __mmask8 m, __m512d a, int c)
+
+
VFPCLASSPD __mmask8 _mm256_fpclass_pd_mask( __m256d a, int c)
+
+
VFPCLASSPD __mmask8 _mm256_mask_fpclass_pd_mask( __mmask8 m, __m256d a, int c)
+
+
VFPCLASSPD __mmask8 _mm_fpclass_pd_mask( __m128d a, int c)
+
+
VFPCLASSPD __mmask8 _mm_mask_fpclass_pd_mask( __mmask8 m, __m128d a, int c)
+
+
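As a hedged usage sketch (not part of the reference text), the 512-bit intrinsic _mm512_fpclass_pd_mask listed above can be used as follows; it assumes AVX512DQ compiler and CPU support (e.g., -mavx512dq) and that the NAN macro from <math.h> expands to a quiet NaN, as it does on mainstream implementations.

#include <immintrin.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Element 0 of _mm512_set_pd is the last argument, so the NaN below
       ends up in element 7 of the vector. */
    __m512d v = _mm512_set_pd(NAN, -0.0, 1.0, -1.0, INFINITY, -INFINITY,
                              5e-324, 2.0);

    /* imm8 = 0x81 sets bit 0 (QNaN) and bit 7 (SNaN): "is any kind of NaN". */
    __mmask8 m = _mm512_fpclass_pd_mask(v, 0x81);

    printf("mask = 0x%02x\n", (unsigned)m); /* expected 0x80: only element 7 */
    return 0;
}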

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-49, “Type E4 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vfpclassph.html b/x86/vfpclassph.html new file mode 100644 index 0000000..1aa4153 --- /dev/null +++ b/x86/vfpclassph.html @@ -0,0 +1,163 @@ + +VFPCLASSPH + — Test Types of Packed FP16 Values

VFPCLASSPH + — Test Types of Packed FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.NP.0F3A.W0 66 /r /ib VFPCLASSPH k1{k2}, xmm1/m128/m16bcst, imm8AV/VAVX512-FP16 AVX512VLTest the input for the following categories: NaN, +0, -0, +Infinity, -Infinity, denormal, finite negative. The immediate field provides a mask bit for each of these category tests. The masked test results are OR-ed together to form a mask result.
EVEX.256.NP.0F3A.W0 66 /r /ib VFPCLASSPH k1{k2}, ymm1/m256/m16bcst, imm8AV/VAVX512-FP16 AVX512VLTest the input for the following categories: NaN, +0, -0, +Infinity, -Infinity, denormal, finite negative. The immediate field provides a mask bit for each of these category tests. The masked test results are OR-ed together to form a mask result.
EVEX.512.NP.0F3A.W0 66 /r /ib VFPCLASSPH k1{k2}, zmm1/m512/m16bcst, imm8AV/VAVX512-FP16Test the input for the following categories: NaN, +0, -0, +Infinity, -Infinity, denormal, finite negative. The immediate field provides a mask bit for each of these category tests. The masked test results are OR-ed together to form a mask result.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)imm8 (r)N/A
+

Description + ¶ +

+

This instruction checks the packed FP16 values in the source operand for special categories, specified by the set bits in the imm8 byte. Each set bit in imm8 specifies a category of floating-point values that the input data element is classified against; see Table 5-9 for the categories. The classified results of all specified categories of an input value are ORed together to form the final boolean result for the input element. The result is written to the corresponding bits in the destination mask register according to the writemask.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
BitsCategoryClassifier
imm8[0]QNANChecks for QNAN
imm8[1]PosZeroChecks +0
imm8[2]NegZeroChecks for -0
imm8[3]PosINFChecks for +∞
imm8[4]NegINFChecks for −∞
imm8[5]DenormalChecks for Denormal
imm8[6]NegativeChecks for Negative finite
imm8[7]SNANChecks for SNAN
+
Table 5-9. Classifier Operations for VFPCLASSPH/VFPCLASSSH
+

Operation + ¶ +

+
def check_fp_class_fp16(tsrc, imm8):
+    negative := tsrc[15]
+    exponent_all_ones := (tsrc[14:10] == 0x1F)
+    exponent_all_zeros := (tsrc[14:10] == 0)
+    mantissa_all_zeros := (tsrc[9:0] == 0)
+    zero := exponent_all_zeros and mantissa_all_zeros
+    signaling_bit := tsrc[9]
+    snan := exponent_all_ones and not(mantissa_all_zeros) and not(signaling_bit)
+    qnan := exponent_all_ones and not(mantissa_all_zeros) and signaling_bit
+    positive_zero := not(negative) and zero
+    negative_zero := negative and zero
+    positive_infinity := not(negative) and exponent_all_ones and mantissa_all_zeros
+    negative_infinity := negative and exponent_all_ones and mantissa_all_zeros
+    denormal := exponent_all_zeros and not(mantissa_all_zeros)
+    finite_negative := negative and not(exponent_all_ones) and not(zero)
+    return (imm8[0] and qnan) OR
+        (imm8[1] and positive_zero) OR
+        (imm8[2] and negative_zero) OR
+        (imm8[3] and positive_infinity) OR
+        (imm8[4] and negative_infinity) OR
+        (imm8[5] and denormal) OR
+        (imm8[6] and finite_negative) OR
+        (imm8[7] and snan)
+
+
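For illustration only (not part of the reference text), the same classification can be written in plain C over the raw 16-bit encoding of one FP16 value:

#include <stdint.h>

/* Classify one IEEE binary16 encoding against an imm8 category mask,
   mirroring check_fp_class_fp16 above. */
static int check_fp_class_fp16_bits(uint16_t bits, unsigned imm8)
{
    int neg           = (bits >> 15) & 1;
    int exp_all_ones  = ((bits >> 10) & 0x1F) == 0x1F;
    int exp_all_zeros = ((bits >> 10) & 0x1F) == 0;
    int mant_zero     = (bits & 0x3FF) == 0;
    int zero          = exp_all_zeros && mant_zero;
    int quiet_bit     = (bits >> 9) & 1;

    int qnan   = exp_all_ones && !mant_zero && quiet_bit;
    int snan   = exp_all_ones && !mant_zero && !quiet_bit;
    int denorm = exp_all_zeros && !mant_zero;
    int negfin = neg && !exp_all_ones && !zero;

    return ((imm8 >> 0) & 1 ? qnan                                : 0) |
           ((imm8 >> 1) & 1 ? (!neg && zero)                      : 0) |
           ((imm8 >> 2) & 1 ? ( neg && zero)                      : 0) |
           ((imm8 >> 3) & 1 ? (!neg && exp_all_ones && mant_zero) : 0) |
           ((imm8 >> 4) & 1 ? ( neg && exp_all_ones && mant_zero) : 0) |
           ((imm8 >> 5) & 1 ? denorm                              : 0) |
           ((imm8 >> 6) & 1 ? negfin                              : 0) |
           ((imm8 >> 7) & 1 ? snan                                : 0);
}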

VFPCLASSPH dest{k2}, src, imm8 + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR i := 0 to KL-1:
+    IF k2[i] or *no writemask*:
+        IF SRC is memory and (EVEX.b = 1):
+            tsrc := SRC.fp16[0]
+        ELSE:
+            tsrc := SRC.fp16[i]
+        DEST.bit[i] := check_fp_class_fp16(tsrc, imm8)
+    ELSE:
+        DEST.bit[i] := 0
+DEST[MAXKL-1:KL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFPCLASSPH __mmask8 _mm_fpclass_ph_mask (__m128h a, int imm8);
+
+
VFPCLASSPH __mmask8 _mm_mask_fpclass_ph_mask (__mmask8 k1, __m128h a, int imm8);
+
+
VFPCLASSPH __mmask16 _mm256_fpclass_ph_mask (__m256h a, int imm8);
+
+
VFPCLASSPH __mmask16 _mm256_mask_fpclass_ph_mask (__mmask16 k1, __m256h a, int imm8);
+
+
VFPCLASSPH __mmask32 _mm512_fpclass_ph_mask (__m512h a, int imm8);
+
+
VFPCLASSPH __mmask32 _mm512_mask_fpclass_ph_mask (__mmask32 k1, __m512h a, int imm8);
+
+
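The classifier above can also be modelled on the host without AVX512-FP16 hardware. The following C sketch mirrors check_fp_class_fp16 bit-for-bit on a raw FP16 bit pattern; the function and variable names are illustrative, not part of the ISA.

#include <stdbool.h>
#include <stdint.h>

/* Host-side model of check_fp_class_fp16: classify one FP16 bit pattern
   against the imm8 category bits of Table 5-9. */
static bool fp16_class(uint16_t bits, uint8_t imm8)
{
    bool neg       = (bits >> 15) & 1;
    bool exp_ones  = ((bits >> 10) & 0x1F) == 0x1F;
    bool exp_zeros = ((bits >> 10) & 0x1F) == 0;
    bool man_zeros = (bits & 0x3FF) == 0;
    bool zero      = exp_zeros && man_zeros;
    bool quiet_bit = (bits >> 9) & 1;          /* MSB of the mantissa */

    bool qnan  = exp_ones && !man_zeros && quiet_bit;
    bool snan  = exp_ones && !man_zeros && !quiet_bit;
    bool pzero = !neg && zero;
    bool nzero =  neg && zero;
    bool pinf  = !neg && exp_ones && man_zeros;
    bool ninf  =  neg && exp_ones && man_zeros;
    bool denrm = exp_zeros && !man_zeros;
    bool fneg  = neg && !exp_ones && !zero;    /* finite negative */

    return ((imm8 & 0x01) && qnan)  || ((imm8 & 0x02) && pzero) ||
           ((imm8 & 0x04) && nzero) || ((imm8 & 0x08) && pinf)  ||
           ((imm8 & 0x10) && ninf)  || ((imm8 & 0x20) && denrm) ||
           ((imm8 & 0x40) && fneg)  || ((imm8 & 0x80) && snan);
}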

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vfpclassps.html b/x86/vfpclassps.html new file mode 100644 index 0000000..33f38da --- /dev/null +++ b/x86/vfpclassps.html @@ -0,0 +1,136 @@ + +VFPCLASSPS + — Tests Types of Packed Float32 Values

VFPCLASSPS + — Tests Types of Packed Float32 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F3A.W0 66 /r ib VFPCLASSPS k2 {k1}, xmm2/m128/m32bcst, imm8AV/VAVX512VL AVX512DQTests the input for the following categories: NaN, +0, -0, +Infinity, -Infinity, denormal, finite negative. The immediate field provides a mask bit for each of these category tests. The masked test results are OR-ed together to form a mask result.
EVEX.256.66.0F3A.W0 66 /r ib VFPCLASSPS k2 {k1}, ymm2/m256/m32bcst, imm8AV/VAVX512VL AVX512DQTests the input for the following categories: NaN, +0, -0, +Infinity, -Infinity, denormal, finite negative. The immediate field provides a mask bit for each of these category tests. The masked test results are OR-ed together to form a mask result.
EVEX.512.66.0F3A.W0 66 /r ib VFPCLASSPS k2 {k1}, zmm2/m512/m32bcst, imm8AV/VAVX512DQTests the input for the following categories: NaN, +0, -0, +Infinity, -Infinity, denormal, finite negative. The immediate field provides a mask bit for each of these category tests. The masked test results are OR-ed together to form a mask result.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)Imm8N/A
+

Description + ¶ +

+

The FPCLASSPS instruction checks the packed single-precision floating-point values for special categories, specified by the set bits in the imm8 byte. Each set bit in imm8 specifies a category of floating-point values that the input data element is classified against. The classified results of all specified categories of an input value are ORed together to form the final boolean result for the input element. The result of each element is written to the corresponding bit in a mask register k2 according to the writemask k1. Bits [MAX_KL-1:16/8/4] of the destination are cleared.

+

The classification categories specified by imm8 are shown in Figure 5-13. The classification test for each category is listed in Table 5-14.

+

The source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 32-bit memory location.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+
CheckFPClassSP (tsrc[31:0], imm8[7:0]){
+    //* Start checking the source operand for special type *//
+    NegNum :=tsrc[31];
+    IF (tsrc[30:23]=0FFh) Then ExpAllOnes := 1; FI;
+    IF (tsrc[30:23]=0h) Then ExpAllZeros := 1;
+    IF (ExpAllZeros AND MXCSR.DAZ) Then
+        MantAllZeros := 1;
+    ELSIF (tsrc[22:0]=0h) Then
+        MantAllZeros := 1;
+    FI;
+    ZeroNumber := ExpAllZeros AND MantAllZeros;
+    SignalingBit := tsrc[22];
+    sNaN_res := ExpAllOnes AND NOT(MantAllZeros) AND NOT(SignalingBit); // sNaN
+    qNaN_res := ExpAllOnes AND NOT(MantAllZeros) AND SignalingBit; // qNaN
+    Pzero_res := NOT(NegNum) AND ExpAllZeros AND MantAllZeros; // +0
+    Nzero_res := NegNum AND ExpAllZeros AND MantAllZeros; // -0
+    PInf_res := NOT(NegNum) AND ExpAllOnes AND MantAllZeros; // +Inf
+    NInf_res := NegNum AND ExpAllOnes AND MantAllZeros; // -Inf
+    Denorm_res := ExpAllZeros AND NOT(MantAllZeros); // denorm
+    FinNeg_res := NegNum AND NOT(ExpAllOnes) AND NOT(ZeroNumber); // -finite
+    bResult := ( imm8[0] AND qNaN_res ) OR (imm8[1] AND Pzero_res ) OR
+            ( imm8[2] AND Nzero_res ) OR ( imm8[3] AND PInf_res ) OR
+            ( imm8[4] AND NInf_res ) OR ( imm8[5] AND Denorm_res ) OR
+            ( imm8[6] AND FinNeg_res ) OR ( imm8[7] AND sNaN_res );
+    Return bResult;
+} //* end of CheckFPClassSP() *//
+
+

VFPCLASSPS (EVEX encoded versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b == 1) AND (SRC *is memory*)
+                THEN
+                    DEST[j] := CheckFPClassSP(SRC1[31:0], imm8[7:0]);
+                ELSE
+                    DEST[j] := CheckFPClassSP(SRC1[i+31:i], imm8[7:0]);
+            FI;
+        ELSE DEST[j] := 0 ; zeroing-masking only
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFPCLASSPS __mmask16 _mm512_fpclass_ps_mask( __m512 a, int c);
+
+
VFPCLASSPS __mmask16 _mm512_mask_fpclass_ps_mask( __mmask16 m, __m512 a, int c)
+
+
VFPCLASSPS __mmask8 _mm256_fpclass_ps_mask( __m256 a, int c)
+
+
VFPCLASSPS __mmask8 _mm256_mask_fpclass_ps_mask( __mmask8 m, __m256 a, int c)
+
+
VFPCLASSPS __mmask8 _mm_fpclass_ps_mask( __m128 a, int c)
+
+
VFPCLASSPS __mmask8 _mm_mask_fpclass_ps_mask( __mmask8 m, __m128 a, int c)
+
+
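As a usage sketch of the packed intrinsic (assuming an AVX512DQ-capable compiler and CPU; the helper name is illustrative), the mask below flags lanes holding NaN or ±Infinity so they can be excluded from a later reduction; the constant sets imm8 bits 0, 3, 4, and 7 per the classifier table.

#include <immintrin.h>

/* Flag lanes that are QNaN, SNaN, +Inf, or -Inf (imm8 = 0x01|0x08|0x10|0x80). */
__mmask16 special_lanes(__m512 v)
{
    return _mm512_fpclass_ps_mask(v, 0x01 | 0x08 | 0x10 | 0x80);
}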

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-49, “Type E4 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vfpclasssd.html b/x86/vfpclasssd.html new file mode 100644 index 0000000..695ffd9 --- /dev/null +++ b/x86/vfpclasssd.html @@ -0,0 +1,105 @@ + +VFPCLASSSD + — Tests Type of a Scalar Float64 Value

VFPCLASSSD + — Tests Type of a Scalar Float64 Value

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.0F3A.W1 67 /r ib VFPCLASSSD k2 {k1}, xmm2/m64, imm8AV/VAVX512DQTests the input for the following categories: NaN, +0, -0, +Infinity, -Infinity, denormal, finite negative. The immediate field provides a mask bit for each of these category tests. The masked test results are OR-ed together to form a mask result.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)ModRM:r/m (r)Imm8N/A
+

Description + ¶ +

+

The FPCLASSSD instruction checks the low double precision floating-point value in the source operand for special categories, specified by the set bits in the imm8 byte. Each set bit in imm8 specifies a category of floating-point values that the input data element is classified against. The classified results of all specified categories of an input value are ORed together to form the final boolean result for the input element. The result is written to the low bit in a mask register k2 according to the writemask k1. Bits MAX_KL-1: 1 of the destination are cleared.

+

The classification categories specified by imm8 are shown in Figure 5-13. The classification test for each category is listed in Table 5-14.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+
CheckFPClassDP (tsrc[63:0], imm8[7:0]){
+    NegNum :=tsrc[63];
+    IF (tsrc[62:52]=07FFh) Then ExpAllOnes := 1; FI;
+    IF (tsrc[62:52]=0h) Then ExpAllZeros := 1;
+    IF (ExpAllZeros AND MXCSR.DAZ) Then
+        MantAllZeros := 1;
+    ELSIF (tsrc[51:0]=0h) Then
+        MantAllZeros := 1;
+    FI;
+    ZeroNumber := ExpAllZeros AND MantAllZeros
+    SignalingBit := tsrc[51];
+    sNaN_res := ExpAllOnes AND NOT(MantAllZeros) AND NOT(SignalingBit); // sNaN
+    qNaN_res := ExpAllOnes AND NOT(MantAllZeros) AND SignalingBit; // qNaN
+    Pzero_res := NOT(NegNum) AND ExpAllZeros AND MantAllZeros; // +0
+    Nzero_res := NegNum AND ExpAllZeros AND MantAllZeros; // -0
+    PInf_res := NOT(NegNum) AND ExpAllOnes AND MantAllZeros; // +Inf
+    NInf_res := NegNum AND ExpAllOnes AND MantAllZeros; // -Inf
+    Denorm_res := ExpAllZeros AND NOT(MantAllZeros); // denorm
+    FinNeg_res := NegNum AND NOT(ExpAllOnes) AND NOT(ZeroNumber); // -finite
+    bResult = ( imm8[0] AND qNaN_res ) OR (imm8[1] AND Pzero_res ) OR
+            ( imm8[2] AND Nzero_res ) OR ( imm8[3] AND PInf_res ) OR
+            ( imm8[4] AND NInf_res ) OR ( imm8[5] AND Denorm_res ) OR
+            ( imm8[6] AND FinNeg_res ) OR ( imm8[7] AND sNaN_res );
+    Return bResult;
+} //* end of CheckFPClassDP() *//
+
+

VFPCLASSSD (EVEX encoded version) + ¶ +

+
IF k1[0] OR *no writemask*
+    THEN DEST[0] :=
+        CheckFPClassDP(SRC1[63:0], imm8[7:0])
+    ELSE DEST[0] := 0 ; zeroing-masking only
+FI;
+DEST[MAX_KL-1:1] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFPCLASSSD __mmask8 _mm_fpclass_sd_mask( __m128d a, int c)
+
+
VFPCLASSSD __mmask8 _mm_mask_fpclass_sd_mask( __mmask8 m, __m128d a, int c)
+
+
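A minimal usage sketch of the scalar intrinsic, assuming AVX512DQ support and an illustrative helper name; imm8 bit 5 selects the denormal test and bit 2 selects -0.

#include <immintrin.h>

/* Nonzero when the low double of x is denormal or negative zero. */
int low_is_denormal_or_negzero(__m128d x)
{
    return (int)_mm_fpclass_sd_mask(x, 0x20 | 0x04);
}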

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-53, “Type E6 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vfpclasssh.html b/x86/vfpclasssh.html new file mode 100644 index 0000000..187993a --- /dev/null +++ b/x86/vfpclasssh.html @@ -0,0 +1,74 @@ + +VFPCLASSSH + — Test Types of Scalar FP16 Values

VFPCLASSSH + — Test Types of Scalar FP16 Values

+ + + + + + + + + + + + + +
InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.NP.0F3A.W0 67 /r /ib VFPCLASSSH k1{k2}, xmm1/m16, imm8AV/VAVX512-FP16Test the input for the following categories: NaN, +0, -0, +Infinity, -Infinity, denormal, finite negative. The immediate field provides a mask bit for each of these category tests. The masked test results are OR-ed together to form a mask result.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)ModRM:r/m (r)imm8 (r)N/A
+

Description + ¶ +

+

This instruction checks the low FP16 value in the source operand for special categories, specified by the set bits in the imm8 byte. Each set bit in imm8 specifies a category of floating-point values that the input data element is classified against; see Table 5-9 for the categories. The classified results of all specified categories of an input value are ORed together to form the final boolean result for the input element. The result is written to the low bit in the destination mask register according to the writemask. The other bits in the destination mask register are zeroed.

+

Operation + ¶ +

+

VFPCLASSSH dest{k2}, src, imm8 + ¶ +

+
IF k2[0] or *no writemask*:
+    DEST.bit[0] := check_fp_class_fp16(src.fp16[0], imm8)
+        // see VFPCLASSPH
+ELSE:
+    DEST.bit[0] := 0
+DEST[MAXKL-1:1] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFPCLASSSH __mmask8 _mm_fpclass_sh_mask (__m128h a, int imm8);
+
+
VFPCLASSSH __mmask8 _mm_mask_fpclass_sh_mask (__mmask8 k1, __m128h a, int imm8);
+
+
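A minimal usage sketch, assuming an AVX512-FP16 toolchain and an illustrative helper name; imm8 bits 1 and 2 select the +0 and -0 tests, so the result is nonzero exactly when the low FP16 element is zero of either sign.

#include <immintrin.h>

/* Nonzero when the low FP16 element of x is +0 or -0. */
int low_fp16_is_zero(__m128h x)
{
    return (int)_mm_fpclass_sh_mask(x, 0x02 | 0x04);
}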

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-58, “Type E10 Class Exception Conditions.”

diff --git a/x86/vfpclassss.html b/x86/vfpclassss.html new file mode 100644 index 0000000..09cf2f2 --- /dev/null +++ b/x86/vfpclassss.html @@ -0,0 +1,106 @@ + +VFPCLASSSS + — Tests Type of a Scalar Float32 Value

VFPCLASSSS + — Tests Type of a Scalar Float32 Value

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.0F3A.W0 67 /r ib VFPCLASSSS k2 {k1}, xmm2/m32, imm8AV/VAVX512DQTests the input for the following categories: NaN, +0, -0, +Infinity, -Infinity, denormal, finite negative. The immediate field provides a mask bit for each of these category tests. The masked test results are OR-ed together to form a mask result.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)ModRM:r/m (r)Imm8N/A
+

Description + ¶ +

+

The FPCLASSSS instruction checks the low single-precision floating-point value in the source operand for special categories, specified by the set bits in the imm8 byte. Each set bit in imm8 specifies a category of floating-point values that the input data element is classified against. The classified results of all specified categories of an input value are ORed together to form the final boolean result for the input element. The result is written to the low bit in a mask register k2 according to the writemask k1. Bits MAX_KL-1: 1 of the destination are cleared.

+

The classification categories specified by imm8 are shown in Figure 5-13. The classification test for each category is listed in Table 5-14.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+
CheckFPClassSP (tsrc[31:0], imm8[7:0]){
+    //* Start checking the source operand for special type *//
+    NegNum :=tsrc[31];
+    IF (tsrc[30:23]=0FFh) Then ExpAllOnes := 1; FI;
+    IF (tsrc[30:23]=0h) Then ExpAllZeros := 1;
+    IF (ExpAllZeros AND MXCSR.DAZ) Then
+        MantAllZeros := 1;
+    ELSIF (tsrc[22:0]=0h) Then
+        MantAllZeros := 1;
+    FI;
+    ZeroNumber := ExpAllZeros AND MantAllZeros;
+    SignalingBit := tsrc[22];
+    sNaN_res := ExpAllOnes AND NOT(MantAllZeros) AND NOT(SignalingBit); // sNaN
+    qNaN_res := ExpAllOnes AND NOT(MantAllZeros) AND SignalingBit; // qNaN
+    Pzero_res := NOT(NegNum) AND ExpAllZeros AND MantAllZeros; // +0
+    Nzero_res := NegNum AND ExpAllZeros AND MantAllZeros; // -0
+    PInf_res := NOT(NegNum) AND ExpAllOnes AND MantAllZeros; // +Inf
+    NInf_res := NegNum AND ExpAllOnes AND MantAllZeros; // -Inf
+    Denorm_res := ExpAllZeros AND NOT(MantAllZeros); // denorm
+    FinNeg_res := NegNum AND NOT(ExpAllOnes) AND NOT(ZeroNumber); // -finite
+    bResult := ( imm8[0] AND qNaN_res ) OR (imm8[1] AND Pzero_res ) OR
+            ( imm8[2] AND Nzero_res ) OR ( imm8[3] AND PInf_res ) OR
+            ( imm8[4] AND NInf_res ) OR ( imm8[5] AND Denorm_res ) OR
+            ( imm8[6] AND FinNeg_res ) OR ( imm8[7] AND sNaN_res );
+    Return bResult;
+} //* end of CheckFPClassSP() *//
+
+

VFPCLASSSS (EVEX encoded version) + ¶ +

+
IF k1[0] OR *no writemask*
+    THEN DEST[0] :=
+        CheckFPClassSP(SRC1[31:0], imm8[7:0])
+    ELSE DEST[0] := 0 ; zeroing-masking only
+FI;
+DEST[MAX_KL-1:1] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VFPCLASSSS __mmask8 _mm_fpclass_ss_mask( __m128 a, int c)
+
+
VFPCLASSSS __mmask8 _mm_mask_fpclass_ss_mask( __mmask8 m, __m128 a, int c)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-53, “Type E6 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vgatherdpd.vgatherqpd.html b/x86/vgatherdpd.vgatherqpd.html new file mode 100644 index 0000000..b2ce7a6 --- /dev/null +++ b/x86/vgatherdpd.vgatherqpd.html @@ -0,0 +1,205 @@ + +VGATHERDPD/VGATHERQPD + — Gather Packed Double Precision Floating-Point Values UsingSigned Dword/Qword Indices

VGATHERDPD/VGATHERQPD + — Gather Packed Double Precision Floating-Point Values Using Signed Dword/Qword Indices

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W1 92 /r VGATHERDPD xmm1, vm32x, xmm2RMVV/VAVX2Using dword indices specified in vm32x, gather double precision floating-point values from memory conditioned on mask specified by xmm2. Conditionally gathered elements are merged into xmm1.
VEX.128.66.0F38.W1 93 /r VGATHERQPD xmm1, vm64x, xmm2RMVV/VAVX2Using qword indices specified in vm64x, gather double precision floating-point values from memory conditioned on mask specified by xmm2. Conditionally gathered elements are merged into xmm1.
VEX.256.66.0F38.W1 92 /r VGATHERDPD ymm1, vm32x, ymm2RMVV/VAVX2Using dword indices specified in vm32x, gather double precision floating-point values from memory conditioned on mask specified by ymm2. Conditionally gathered elements are merged into ymm1.
VEX.256.66.0F38.W1 93 /r VGATHERQPD ymm1, vm64y, ymm2RMVV/VAVX2Using qword indices specified in vm64y, gather double precision floating-point values from memory conditioned on mask specified by ymm2. Conditionally gathered elements are merged into ymm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMVModRM:reg (r,w)BaseReg (R): VSIB:base, VectorReg(R): VSIB:indexVEX.vvvv (r, w)N/A
+

Description + ¶ +

+

The instruction conditionally loads up to 2 or 4 double precision floating-point values from memory addresses specified by the memory operand (the second operand) and using qword indices. The memory operand uses the VSIB form of the SIB byte to specify a general purpose register operand as the common base, a vector register for an array of indices relative to the base and a constant scale factor.

+

The mask operand (the third operand) specifies the conditional load operation from each memory address and the corresponding update of each data element of the destination operand (the first operand). Conditionality is specified by the most significant bit of each data element of the mask register. If an element’s mask bit is not set, the corresponding element of the destination register is left unchanged. The width of data element in the destination register and mask register are identical. The entire mask register will be set to zero by this instruction unless the instruction causes an exception.

+

Using dword indices in the lower half of the mask register, the instruction conditionally loads up to 2 or 4 double precision floating-point values from the VSIB addressing memory operand, and updates the destination register.

+

This instruction can be suspended by an exception if at least one element is already gathered (i.e., if the exception is triggered by an element other than the rightmost one with its mask bit set). When this happens, the destination register and the mask operand are partially updated; those elements that have been gathered are placed into the destination register and have their mask bits set to zero. If any traps or interrupts are pending from already gathered elements, they will be delivered in lieu of the exception; in this case, EFLAG.RF is set to one so an instruction breakpoint is not re-triggered when the instruction is continued.

+

If the data size and index size are different, part of the destination register and part of the mask register do not correspond to any elements being gathered. This instruction sets those parts to zero. It may do this to one or both of those registers even if the instruction triggers an exception, and even if the instruction triggers the exception before gathering any elements.

+

VEX.128 version: The instruction will gather two double precision floating-point values. For dword indices, only the lower two indices in the vector index register are used.

+

VEX.256 version: The instruction will gather four double precision floating-point values. For dword indices, only the lower four indices in the vector index register are used.

+

Note that:

+
    +
  • If any pair of the index, mask, or destination registers are the same, this instruction results in a #UD fault.
  • +
  • The values may be read from memory in any order. Memory ordering with other instructions follows the Intel-64 memory-ordering model.
  • +
  • Faults are delivered in a right-to-left manner. That is, if a fault is triggered by an element and delivered, all elements closer to the LSB of the destination will be completed (and non-faulting). Individual elements closer to the MSB may or may not be completed. If a given element triggers multiple faults, they are delivered in the conventional order.
  • +
  • Elements may be gathered in any order, but faults must be delivered in a right-to-left order; thus, elements to the left of a faulting one may be gathered before the fault is delivered. A given implementation of this instruction is repeatable - given the same input values and architectural state, the same set of elements to the left of the faulting one will be gathered.
  • +
  • This instruction does not perform AC checks, and so will never deliver an AC fault.
  • +
  • This instruction will cause a #UD if the address size attribute is 16-bit.
  • +
  • This instruction will cause a #UD if the memory operand is encoded without the SIB byte.
  • +
  • This instruction should not be used to access memory mapped I/O as the ordering of the individual loads it does is implementation specific, and some implementations may use loads larger than the data element size or load elements an indeterminate number of times.
  • +
  • The scaled index may require more bits to represent than the address bits used by the processor (e.g., in 32-bit mode, if the scale is greater than one). In this case, the most significant bits beyond the number of address bits are ignored.
+

Operation + ¶ +

+
DEST := SRC1;
+BASE_ADDR: base register encoded in VSIB addressing;
+VINDEX: the vector index register encoded by VSIB addressing;
+SCALE: scale factor encoded by SIB:[7:6];
+DISP: optional 1, 4 byte displacement;
+MASK := SRC3;
+
+

VGATHERDPD (VEX.128 version) + ¶ +

+
MASK[MAXVL-1:128] := 0;
+FOR j := 0 to 1
+    i := j * 64;
+    IF MASK[63+i] THEN
+        MASK[i +63:i] := FFFFFFFF_FFFFFFFFH; // extend from most significant bit
+    ELSE
+        MASK[i +63:i] := 0;
+    FI;
+ENDFOR
+FOR j := 0 to 1
+    k := j * 32;
+    i := j * 64;
+    DATA_ADDR := BASE_ADDR + SignExtend(VINDEX[k+31:k])*SCALE + DISP;
+    IF MASK[63+i] THEN
+        DEST[i +63:i] := FETCH_64BITS(DATA_ADDR); // a fault exits the instruction
+    FI;
+    MASK[i +63: i] := 0;
+ENDFOR
+DEST[MAXVL-1:128] := 0;
+
+

VGATHERQPD (VEX.128 version) + ¶ +

+
MASK[MAXVL-1:128] := 0;
+FOR j := 0 to 1
+    i := j * 64;
+    IF MASK[63+i] THEN
+        MASK[i +63:i] := FFFFFFFF_FFFFFFFFH; // extend from most significant bit
+    ELSE
+        MASK[i +63:i] := 0;
+    FI;
+ENDFOR
+FOR j := 0 to 1
+    i := j * 64;
+    DATA_ADDR := BASE_ADDR + SignExtend(VINDEX[i+63:i])*SCALE + DISP;
+    IF MASK[63+i] THEN
+        DEST[i +63:i] := FETCH_64BITS(DATA_ADDR); // a fault exits this instruction
+    FI;
+    MASK[i +63: i] := 0;
+ENDFOR
+DEST[MAXVL-1:128] := 0;
+
+

VGATHERQPD (VEX.256 version) + ¶ +

+
MASK[MAXVL-1:256] := 0;
+FOR j := 0 to 3
+    i := j * 64;
+    IF MASK[63+i] THEN
+        MASK[i +63:i] := FFFFFFFF_FFFFFFFFH; // extend from most significant bit
+    ELSE
+        MASK[i +63:i] := 0;
+    FI;
+ENDFOR
+FOR j := 0 to 3
+    i := j * 64;
+    DATA_ADDR := BASE_ADDR + SignExtend(VINDEX[i+63:i])*SCALE + DISP;
+    IF MASK[63+i] THEN
+        DEST[i +63:i] := FETCH_64BITS(DATA_ADDR); // a fault exits the instruction
+    FI;
+    MASK[i +63: i] := 0;
+ENDFOR
+DEST[MAXVL-1:256] := 0;
+
+

VGATHERDPD (VEX.256 version) + ¶ +

+
MASK[MAXVL-1:256] := 0;
+FOR j := 0 to 3
+    i := j * 64;
+    IF MASK[63+i] THEN
+        MASK[i +63:i] := FFFFFFFF_FFFFFFFFH; // extend from most significant bit
+    ELSE
+        MASK[i +63:i] := 0;
+    FI;
+ENDFOR
+FOR j := 0 to 3
+    k := j * 32;
+    i := j * 64;
+    DATA_ADDR := BASE_ADDR + SignExtend(VINDEX[k+31:k])*SCALE + DISP;
+    IF MASK[63+i] THEN
+        DEST[i +63:i] := FETCH_64BITS(DATA_ADDR); // a fault exits the instruction
+    FI;
+    MASK[i +63:i] := 0;
+ENDFOR
+DEST[MAXVL-1:256] := 0;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VGATHERDPD: __m128d _mm_i32gather_pd (double const * base, __m128i index, const int scale);
+
+
VGATHERDPD: __m128d _mm_mask_i32gather_pd (__m128d src, double const * base, __m128i index, __m128d mask, const int scale);
+
+
VGATHERDPD: __m256d _mm256_i32gather_pd (double const * base, __m128i index, const int scale);
+
+
VGATHERDPD: __m256d _mm256_mask_i32gather_pd (__m256d src, double const * base, __m128i index, __m256d mask, const int scale);
+
+
VGATHERQPD: __m128d _mm_i64gather_pd (double const * base, __m128i index, const int scale);
+
+
VGATHERQPD: __m128d _mm_mask_i64gather_pd (__m128d src, double const * base, __m128i index, __m128d mask, const int scale);
+
+
VGATHERQPD: __m256d _mm256_i64gather_pd (double const * base, __m256i index, const int scale);
+
+
VGATHERQPD: __m256d _mm256_mask_i64gather_pd (__m256d src, double const * base, __m256i index, __m256d mask, const int scale);
+
+
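As an unmasked usage sketch of the AVX2 intrinsic form (names such as table and idx are illustrative, not part of the ISA), the helper below gathers four doubles addressed by 32-bit indices; the scale of 8 matches sizeof(double).

#include <immintrin.h>

/* Gather table[idx[0]]..table[idx[3]] into one __m256d (AVX2). */
__m256d gather4_pd(const double *table, const int idx[4])
{
    __m128i vindex = _mm_loadu_si128((const __m128i *)idx);
    return _mm256_i32gather_pd(table, vindex, 8);
}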

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-27, “Type 12 Class Exception Conditions.”

diff --git a/x86/vgatherdps.vgatherdpd.html b/x86/vgatherdps.vgatherdpd.html new file mode 100644 index 0000000..5bce29b --- /dev/null +++ b/x86/vgatherdps.vgatherdpd.html @@ -0,0 +1,157 @@ + +VGATHERDPS/VGATHERDPD + — Gather Packed Single, Packed Double with Signed Dword Indices

VGATHERDPS/VGATHERDPD + — Gather Packed Single, Packed Double with Signed Dword Indices

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 92 /vsib VGATHERDPS xmm1 {k1}, vm32xAV/VAVX512VL AVX512FUsing signed dword indices, gather single-precision floating-point values from memory using k1 as completion mask.
EVEX.256.66.0F38.W0 92 /vsib VGATHERDPS ymm1 {k1}, vm32yAV/VAVX512VL AVX512FUsing signed dword indices, gather single-precision floating-point values from memory using k1 as completion mask.
EVEX.512.66.0F38.W0 92 /vsib VGATHERDPS zmm1 {k1}, vm32zAV/VAVX512FUsing signed dword indices, gather single-precision floating-point values from memory using k1 as completion mask.
EVEX.128.66.0F38.W1 92 /vsib VGATHERDPD xmm1 {k1}, vm32xAV/VAVX512VL AVX512FUsing signed dword indices, gather float64 vector into float64 vector xmm1 using k1 as completion mask.
EVEX.256.66.0F38.W1 92 /vsib VGATHERDPD ymm1 {k1}, vm32xAV/VAVX512VL AVX512FUsing signed dword indices, gather float64 vector into float64 vector ymm1 using k1 as completion mask.
EVEX.512.66.0F38.W1 92 /vsib VGATHERDPD zmm1 {k1}, vm32yAV/VAVX512FUsing signed dword indices, gather float64 vector into float64 vector zmm1 using k1 as completion mask.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)BaseReg (R): VSIB:base, VectorReg(R): VSIB:indexN/AN/A
+

Description + ¶ +

+

A set of single-precision/double precision floating-point memory locations pointed to by base address BASE_ADDR and index vector V_INDEX with scale SCALE are gathered. The result is written into a vector register. The elements are specified via the VSIB (i.e., the index register is a vector register, holding packed indices). Elements will only be loaded if their corresponding mask bit is one. If an element’s mask bit is not set, the corresponding element of the destination register is left unchanged. The entire mask register will be set to zero by this instruction unless it triggers an exception.

+

This instruction can be suspended by an exception if at least one element is already gathered (i.e., if the exception is triggered by an element other than the right most one with its mask bit set). When this happens, the destination register and the mask register (k1) are partially updated; those elements that have been gathered are placed into the destination register and have their mask bits set to zero. If any traps or interrupts are pending from already gathered elements, they will be delivered in lieu of the exception; in this case, EFLAG.RF is set to one so an instruction breakpoint is not re-triggered when the instruction is continued.

+

If the data element size is less than the index element size, the higher part of the destination register and the mask register do not correspond to any elements being gathered. This instruction sets those higher parts to zero. It may update these unused parts of one or both of those registers even if the instruction triggers an exception, and even if the instruction triggers the exception before gathering any elements.

+

Note that:

+
    +
  • The values may be read from memory in any order. Memory ordering with other instructions follows the Intel-64 memory-ordering model.
  • +
  • Faults are delivered in a right-to-left manner. That is, if a fault is triggered by an element and delivered, all elements closer to the LSB of the destination zmm will be completed (and non-faulting). Individual elements closer to the MSB may or may not be completed. If a given element triggers multiple faults, they are delivered in the conventional order.
  • +
  • Elements may be gathered in any order, but faults must be delivered in a right-to left order; thus, elements to the left of a faulting one may be gathered before the fault is delivered. A given implementation of this instruction is repeatable - given the same input values and architectural state, the same set of elements to the left of the faulting one will be gathered.
  • +
  • This instruction does not perform AC checks, and so will never deliver an AC fault.
  • +
  • Not valid with 16-bit effective addresses. Will deliver a #UD fault.
+

Note that the presence of VSIB byte is enforced in this instruction. Hence, the instruction will #UD fault if ModRM.rm is different than 100b.

+

This instruction has special disp8*N and alignment rules. N is considered to be the size of a single vector element.

+

The scaled index may require more bits to represent than the address bits used by the processor (e.g., in 32-bit mode, if the scale is greater than one). In this case, the most significant bits beyond the number of address bits are ignored.

+

The instruction will #UD fault if the destination vector zmm1 is the same as index vector VINDEX. The instruction will #UD fault if the k0 mask register is specified.

+

Operation + ¶ +

+
BASE_ADDR stands for the memory operand base address (a GPR); may not exist
+VINDEX stands for the memory operand vector of indices (a vector register)
+SCALE stands for the memory operand scalar (1, 2, 4 or 8)
+DISP is the optional 1 or 4 byte displacement
+
+

VGATHERDPS (EVEX encoded version) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j]
+        THEN DEST[i+31:i] :=
+            MEM[BASE_ADDR +
+                SignExtend(VINDEX[i+31:i]) * SCALE + DISP]
+            k1[j] := 0
+        ELSE *DEST[i+31:i] := remains unchanged*
+    FI;
+ENDFOR
+k1[MAX_KL-1:KL] := 0
+DEST[MAXVL-1:VL] := 0
+
+

VGATHERDPD (EVEX encoded version) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 32
+    IF k1[j]
+        THEN DEST[i+63:i] := MEM[BASE_ADDR +
+                SignExtend(VINDEX[k+31:k]) * SCALE + DISP]
+            k1[j] := 0
+        ELSE *DEST[i+63:i] := remains unchanged*
+    FI;
+ENDFOR
+k1[MAX_KL-1:KL] := 0
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VGATHERDPD __m512d _mm512_i32gather_pd( __m256i vdx, void * base, int scale);
+
+
VGATHERDPD __m512d _mm512_mask_i32gather_pd(__m512d s, __mmask8 k, __m256i vdx, void * base, int scale);
+
+
VGATHERDPD __m256d _mm256_mmask_i32gather_pd(__m256d s, __mmask8 k, __m128i vdx, void * base, int scale);
+
+
VGATHERDPD __m128d _mm_mmask_i32gather_pd(__m128d s, __mmask8 k, __m128i vdx, void * base, int scale);
+
+
VGATHERDPS __m512 _mm512_i32gather_ps( __m512i vdx, void * base, int scale);
+
+
VGATHERDPS __m512 _mm512_mask_i32gather_ps(__m512 s, __mmask16 k, __m512i vdx, void * base, int scale);
+
+
VGATHERDPS __m256 _mm256_mmask_i32gather_ps(__m256 s, __mmask8 k, __m256i vdx, void * base, int scale);
+
+
VGATHERDPS __m128 _mm_mmask_i32gather_ps(__m128 s, __mmask8 k, __m128i vdx, void * base, int scale);
+
+
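A masked usage sketch of the AVX-512F form (helper name illustrative). Lanes whose bit in k is clear keep the corresponding lane of src; note that while the hardware clears each k1 bit as an element completes, the intrinsic receives the mask by value, so the caller's mask variable is not updated.

#include <immintrin.h>

/* Masked gather of up to 16 floats through signed dword indices. */
__m512 gather16_masked(__m512 src, __mmask16 k, const float *base, __m512i vindex)
{
    return _mm512_mask_i32gather_ps(src, k, vindex, base, 4); /* scale 4 = sizeof(float) */
}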

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-61, “Type E12 Class Exception Conditions.”

diff --git a/x86/vgatherdps.vgatherqps.html b/x86/vgatherdps.vgatherqps.html new file mode 100644 index 0000000..f38e906 --- /dev/null +++ b/x86/vgatherdps.vgatherqps.html @@ -0,0 +1,205 @@ + +VGATHERDPS/VGATHERQPS + — Gather Packed Single Precision Floating-Point Values UsingSigned Dword/Qword Indices

VGATHERDPS/VGATHERQPS + — Gather Packed Single Precision Floating-Point Values Using Signed Dword/Qword Indices

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W0 92 /r VGATHERDPS xmm1, vm32x, xmm2AV/VAVX2Using dword indices specified in vm32x, gather single-precision floating-point values from memory conditioned on mask specified by xmm2. Conditionally gathered elements are merged into xmm1.
VEX.128.66.0F38.W0 93 /r VGATHERQPS xmm1, vm64x, xmm2AV/VAVX2Using qword indices specified in vm64x, gather single-precision floating-point values from memory conditioned on mask specified by xmm2. Conditionally gathered elements are merged into xmm1.
VEX.256.66.0F38.W0 92 /r VGATHERDPS ymm1, vm32y, ymm2AV/VAVX2Using dword indices specified in vm32y, gather single-precision floating-point values from memory conditioned on mask specified by ymm2. Conditionally gathered elements are merged into ymm1.
VEX.256.66.0F38.W0 93 /r VGATHERQPS xmm1, vm64y, xmm2AV/VAVX2Using qword indices specified in vm64y, gather single-precision floating-point values from memory conditioned on mask specified by xmm2. Conditionally gathered elements are merged into xmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
AModRM:reg (r,w)BaseReg (R): VSIB:base, VectorReg(R): VSIB:indexVEX.vvvv (r, w)N/A
+

Description + ¶ +

+

The instruction conditionally loads up to 4 or 8 single-precision floating-point values from memory addresses specified by the memory operand (the second operand) and using dword indices. The memory operand uses the VSIB form of the SIB byte to specify a general purpose register operand as the common base, a vector register for an array of indices relative to the base and a constant scale factor.

+

The mask operand (the third operand) specifies the conditional load operation from each memory address and the corresponding update of each data element of the destination operand (the first operand). Conditionality is specified by the most significant bit of each data element of the mask register. If an element’s mask bit is not set, the corresponding element of the destination register is left unchanged. The width of data element in the destination register and mask register are identical. The entire mask register will be set to zero by this instruction unless the instruction causes an exception.

+

Using qword indices, the instruction conditionally loads up to 2 or 4 single-precision floating-point values from the VSIB addressing memory operand, and updates the lower half of the destination register. The upper 128 or 256 bits of the destination register are zero’ed with qword indices.

+

This instruction can be suspended by an exception if at least one element is already gathered (i.e., if the exception is triggered by an element other than the rightmost one with its mask bit set). When this happens, the destination register and the mask operand are partially updated; those elements that have been gathered are placed into the destination register and have their mask bits set to zero. If any traps or interrupts are pending from already gathered elements, they will be delivered in lieu of the exception; in this case, EFLAG.RF is set to one so an instruction breakpoint is not re-triggered when the instruction is continued.

+

If the data size and index size are different, part of the destination register and part of the mask register do not correspond to any elements being gathered. This instruction sets those parts to zero. It may do this to one or both of those registers even if the instruction triggers an exception, and even if the instruction triggers the exception before gathering any elements.

+

VEX.128 version: For dword indices, the instruction will gather four single-precision floating-point values. For qword indices, the instruction will gather two values and zero the upper 64 bits of the destination.

+

VEX.256 version: For dword indices, the instruction will gather eight single-precision floating-point values. For qword indices, the instruction will gather four values and zero the upper 128 bits of the destination.

+

Note that:

+
    +
  • If any pair of the index, mask, or destination registers are the same, this instruction results in a #UD fault.
  • +
  • The values may be read from memory in any order. Memory ordering with other instructions follows the Intel-64 memory-ordering model.
  • +
  • Faults are delivered in a right-to-left manner. That is, if a fault is triggered by an element and delivered, all elements closer to the LSB of the destination will be completed (and non-faulting). Individual elements closer to the MSB may or may not be completed. If a given element triggers multiple faults, they are delivered in the conventional order.
  • +
  • Elements may be gathered in any order, but faults must be delivered in a right-to-left order; thus, elements to the left of a faulting one may be gathered before the fault is delivered. A given implementation of this instruction is repeatable - given the same input values and architectural state, the same set of elements to the left of the faulting one will be gathered.
  • +
  • This instruction does not perform AC checks, and so will never deliver an AC fault.
  • +
  • This instruction will cause a #UD if the address size attribute is 16-bit.
  • +
  • This instruction will cause a #UD if the memory operand is encoded without the SIB byte.
  • +
  • This instruction should not be used to access memory mapped I/O as the ordering of the individual loads it does is implementation specific, and some implementations may use loads larger than the data element size or load elements an indeterminate number of times.
  • +
  • The scaled index may require more bits to represent than the address bits used by the processor (e.g., in 32-bit mode, if the scale is greater than one). In this case, the most significant bits beyond the number of address bits are ignored.
+

Operation + ¶ +

+
DEST := SRC1;
+BASE_ADDR: base register encoded in VSIB addressing;
+VINDEX: the vector index register encoded by VSIB addressing;
+SCALE: scale factor encoded by SIB:[7:6];
+DISP: optional 1, 4 byte displacement;
+MASK := SRC3;
+
+

VGATHERDPS (VEX.128 version) + ¶ +

+
MASK[MAXVL-1:128] := 0;
+FOR j := 0 to 3
+    i := j * 32;
+    IF MASK[31+i] THEN
+        MASK[i +31:i] := FFFFFFFFH; // extend from most significant bit
+    ELSE
+        MASK[i +31:i] := 0;
+    FI;
+ENDFOR
+FOR j := 0 to 3
+    i := j * 32;
+    DATA_ADDR := BASE_ADDR + SignExtend(VINDEX[i+31:i])*SCALE + DISP;
+    IF MASK[31+i] THEN
+        DEST[i +31:i] := FETCH_32BITS(DATA_ADDR); // a fault exits the instruction
+    FI;
+    MASK[i +31:i] := 0;
+ENDFOR
+DEST[MAXVL-1:128] := 0;
+
+

VGATHERQPS (VEX.128 version) + ¶ +

+
MASK[MAXVL-1:64] := 0;
+FOR j := 0 to 3
+    i := j * 32;
+    IF MASK[31+i] THEN
+        MASK[i +31:i] := FFFFFFFFH; // extend from most significant bit
+    ELSE
+        MASK[i +31:i] := 0;
+    FI;
+ENDFOR
+FOR j := 0 to 1
+    k := j * 64;
+    i := j * 32;
+    DATA_ADDR := BASE_ADDR + SignExtend(VINDEX[k+63:k])*SCALE + DISP;
+    IF MASK[31+i] THEN
+        DEST[i +31:i] := FETCH_32BITS(DATA_ADDR); // a fault exits the instruction
+    FI;
+    MASK[i +31:i] := 0;
+ENDFOR
+DEST[MAXVL-1:64] := 0;
+
+

VGATHERDPS (VEX.256 version) + ¶ +

+
MASK[MAXVL-1:256] := 0;
+FOR j := 0 to 7
+    i := j * 32;
+    IF MASK[31+i] THEN
+        MASK[i +31:i] := FFFFFFFFH; // extend from most significant bit
+    ELSE
+        MASK[i +31:i] := 0;
+    FI;
+ENDFOR
+FOR j := 0 to 7
+    i := j * 32;
+    DATA_ADDR := BASE_ADDR + SignExtend(VINDEX[i+31:i])*SCALE + DISP;
+    IF MASK[31+i] THEN
+        DEST[i +31:i] := FETCH_32BITS(DATA_ADDR); // a fault exits the instruction
+    FI;
+    MASK[i +31:i] := 0;
+ENDFOR
+DEST[MAXVL-1:256] := 0;
+
+

VGATHERQPS (VEX.256 version) + ¶ +

+
MASK[MAXVL-1:128] := 0;
+FOR j := 0 to 7
+    i := j * 32;
+    IF MASK[31+i] THEN
+        MASK[i +31:i] := FFFFFFFFH; // extend from most significant bit
+    ELSE
+        MASK[i +31:i] := 0;
+    FI;
+ENDFOR
+FOR j := 0 to 3
+    k := j * 64;
+    i := j * 32;
+    DATA_ADDR := BASE_ADDR + SignExtend(VINDEX[k+63:k])*SCALE + DISP;
+    IF MASK[31+i] THEN
+        DEST[i +31:i] := FETCH_32BITS(DATA_ADDR); // a fault exits the instruction
+    FI;
+    MASK[i +31:i] := 0;
+ENDFOR
+DEST[MAXVL-1:128] := 0;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VGATHERDPS: __m128 _mm_i32gather_ps (float const * base, __m128i index, const int scale);
+
+
VGATHERDPS: __m128 _mm_mask_i32gather_ps (__m128 src, float const * base, __m128i index, __m128 mask, const int scale);
+
+
VGATHERDPS: __m256 _mm256_i32gather_ps (float const * base, __m256i index, const int scale);
+
+
VGATHERDPS: __m256 _mm256_mask_i32gather_ps (__m256 src, float const * base, __m256i index, __m256 mask, const int scale);
+
+
VGATHERQPS: __m128 _mm_i64gather_ps (float const * base, __m128i index, const int scale);
+
+
VGATHERQPS: __m128 _mm_mask_i64gather_ps (__m128 src, float const * base, __m128i index, __m128 mask, const int scale);
+
+
VGATHERQPS: __m128 _mm256_i64gather_ps (float const * base, __m256i index, const int scale);
+
+
VGATHERQPS: __m128 _mm256_mask_i64gather_ps (__m128 src, float const * base, __m256i index, __m128 mask, const int scale);
+
+
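A conditional-gather sketch of the AVX2 form (names illustrative). Here the mask is a vector operand, and only lanes whose mask element has its sign (most significant) bit set are loaded; the others keep the corresponding lane of src.

#include <immintrin.h>

/* Conditionally gather 8 floats; lanes with the mask MSB clear keep src. */
__m256 gather8_cond(__m256 src, const float *base, __m256i vindex, __m256 mask)
{
    return _mm256_mask_i32gather_ps(src, base, vindex, mask, 4);
}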

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-27, “Type 12 Class Exception Conditions.”

diff --git a/x86/vgatherpf0dps.vgatherpf0qps.vgatherpf0dpd.vgatherpf0qpd.html b/x86/vgatherpf0dps.vgatherpf0qps.vgatherpf0dpd.vgatherpf0qpd.html new file mode 100644 index 0000000..14a47bc --- /dev/null +++ b/x86/vgatherpf0dps.vgatherpf0qps.vgatherpf0dpd.vgatherpf0qpd.html @@ -0,0 +1,153 @@ + +VGATHERPF0DPS/VGATHERPF0QPS/VGATHERPF0DPD/VGATHERPF0QPD + — Sparse PrefetchPacked SP/DP Data Values With Signed Dword, Signed Qword Indices Using T0 Hint

VGATHERPF0DPS/VGATHERPF0QPS/VGATHERPF0DPD/VGATHERPF0QPD + — Sparse Prefetch Packed SP/DP Data Values With Signed Dword, Signed Qword Indices Using T0 Hint

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.512.66.0F38.W0 C6 /1 /vsib VGATHERPF0DPS vm32z {k1}AV/VAVX512PFUsing signed dword indices, prefetch sparse byte memory locations containing single-precision data using opmask k1 and T0 hint.
EVEX.512.66.0F38.W0 C7 /1 /vsib VGATHERPF0QPS vm64z {k1}AV/VAVX512PFUsing signed qword indices, prefetch sparse byte memory locations containing single-precision data using opmask k1 and T0 hint.
EVEX.512.66.0F38.W1 C6 /1 /vsib VGATHERPF0DPD vm32y {k1}AV/VAVX512PFUsing signed dword indices, prefetch sparse byte memory locations containing double precision data using opmask k1 and T0 hint.
EVEX.512.66.0F38.W1 C7 /1 /vsib VGATHERPF0QPD vm64z {k1}AV/VAVX512PFUsing signed qword indices, prefetch sparse byte memory locations containing double precision data using opmask k1 and T0 hint.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarBaseReg (R): VSIB:base, VectorReg(R): VSIB:indexN/AN/AN/A
+

Description + ¶ +

+

The instruction conditionally prefetches up to sixteen 32-bit or eight 64-bit integer byte data elements. The elements are specified via the VSIB (i.e., the index register is a zmm, holding packed indices). Elements will only be prefetched if their corresponding mask bit is one.

+

Lines prefetched are loaded into a location in the cache hierarchy specified by a locality hint (T0):

+
    +
  • T0 (temporal data)—prefetch data into the first level cache.
+

[PS data] For dword indices, the instruction will prefetch sixteen memory locations. For qword indices, the instruction will prefetch eight values.

+

[PD data] For dword and qword indices, the instruction will prefetch eight memory locations.

+

Note that:

+

(1) The prefetches may happen in any order (or not at all). The instruction is a hint.

+

(2) The mask is left unchanged.

+

(3) Not valid with 16-bit effective addresses. Will deliver a #UD fault.

+

(4) No FP nor memory faults may be produced by this instruction.

+

(5) Prefetches do not handle cache line splits.

+

(6) A #UD is signaled if the memory operand is encoded without the SIB byte.

+

Operation + ¶ +

+
BASE_ADDR stands for the memory operand base address (a GPR); may not exist.
+VINDEX stands for the memory operand vector of indices (a vector register).
+SCALE stands for the memory operand scalar (1, 2, 4 or 8).
+DISP is the optional 1, 2 or 4 byte displacement.
+PREFETCH(mem, Level, State) Prefetches a byte memory location pointed to by ‘mem’ into the cache level specified by ‘Level’; a request
+for exclusive ownership is made if ‘State’ is 1. Note that the prefetch ignores cache line splits. This operation is considered a
+hint for the processor and may be skipped depending on implementation.
+
+

VGATHERPF0DPS (EVEX Encoded Version) + ¶ +

+
(KL, VL) = (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j]
+        Prefetch( [BASE_ADDR + SignExtend(VINDEX[i+31:i]) * SCALE + DISP], Level=0, RFO = 0)
+    FI;
+ENDFOR
+
+

VGATHERPF0DPD (EVEX Encoded Version) + ¶ +

+
(KL, VL) = (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 32
+    IF k1[j]
+        Prefetch( [BASE_ADDR + SignExtend(VINDEX[k+31:k]) * SCALE + DISP], Level=0, RFO = 0)
+    FI;
+ENDFOR
+
+

VGATHERPF0QPS (EVEX Encoded Version) + ¶ +

+
(KL, VL) = (8, 256)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j]
+        Prefetch( [BASE_ADDR + SignExtend(VINDEX[i+63:i]) * SCALE + DISP], Level=0, RFO = 0)
+    FI;
+ENDFOR
+
+

VGATHERPF0QPD (EVEX Encoded Version) + ¶ +

+
(KL, VL) = (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 64
+    IF k1[j]
+        Prefetch( [BASE_ADDR + SignExtend(VINDEX[k+63:k]) * SCALE + DISP], Level=0, RFO = 0)
+    FI;
+ENDFOR
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VGATHERPF0DPD void _mm512_mask_prefetch_i32gather_pd(__m256i vdx, __mmask8 m, void * base, int scale, int hint);
+
+
VGATHERPF0DPS void _mm512_mask_prefetch_i32gather_ps(__m512i vdx, __mmask16 m, void * base, int scale, int hint);
+
+
VGATHERPF0QPD void _mm512_mask_prefetch_i64gather_pd(__m512i vdx, __mmask8 m, void * base, int scale, int hint);
+
+
VGATHERPF0QPS void _mm512_mask_prefetch_i64gather_ps(__m512i vdx, __mmask8 m, void * base, int scale, int hint);
+
+
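A usage sketch of the T0 prefetch intrinsic, assuming an AVX512PF target (this feature shipped only on Xeon Phi processors) and illustrative names. The call is purely a hint: it warms the cache but modifies no register and raises no memory fault.

#include <immintrin.h>

/* Prefetch up to 16 sparse floats into the first-level cache ahead of a gather. */
void prefetch_gather_t0(void *base, __m512i vindex, __mmask16 k)
{
    _mm512_mask_prefetch_i32gather_ps(vindex, k, base, 4, _MM_HINT_T0);
}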

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-62, “Type E12NP Class Exception Conditions.”

diff --git a/x86/vgatherpf1dps.vgatherpf1qps.vgatherpf1dpd.vgatherpf1qpd.html b/x86/vgatherpf1dps.vgatherpf1qps.vgatherpf1dpd.vgatherpf1qpd.html new file mode 100644 index 0000000..314d781 --- /dev/null +++ b/x86/vgatherpf1dps.vgatherpf1qps.vgatherpf1dpd.vgatherpf1qpd.html @@ -0,0 +1,153 @@ + +VGATHERPF1DPS/VGATHERPF1QPS/VGATHERPF1DPD/VGATHERPF1QPD + — Sparse PrefetchPacked SP/DP Data Values With Signed Dword, Signed Qword Indices Using T1 Hint

VGATHERPF1DPS/VGATHERPF1QPS/VGATHERPF1DPD/VGATHERPF1QPD + — Sparse Prefetch Packed SP/DP Data Values With Signed Dword, Signed Qword Indices Using T1 Hint

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.512.66.0F38.W0 C6 /2 /vsib VGATHERPF1DPS vm32z {k1}AV/VAVX512PFUsing signed dword indices, prefetch sparse byte memory locations containing single-precision data using opmask k1 and T1 hint.
EVEX.512.66.0F38.W0 C7 /2 /vsib VGATHERPF1QPS vm64z {k1}AV/VAVX512PFUsing signed qword indices, prefetch sparse byte memory locations containing single-precision data using opmask k1 and T1 hint.
EVEX.512.66.0F38.W1 C6 /2 /vsib VGATHERPF1DPD vm32y {k1}AV/VAVX512PFUsing signed dword indices, prefetch sparse byte memory locations containing double precision data using opmask k1 and T1 hint.
EVEX.512.66.0F38.W1 C7 /2 /vsib VGATHERPF1QPD vm64z {k1}AV/VAVX512PFUsing signed qword indices, prefetch sparse byte memory locations containing double precision data using opmask k1 and T1 hint.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarBaseReg (R): VSIB:base, VectorReg(R): VSIB:indexN/AN/AN/A
+

Description + ¶ +

+

The instruction conditionally prefetches up to sixteen 32-bit or eight 64-bit integer byte data elements. The elements are specified via the VSIB (i.e., the index register is a zmm, holding packed indices). Elements will only be prefetched if their corresponding mask bit is one.

+

Lines prefetched are loaded into a location in the cache hierarchy specified by a locality hint (T1):

+
    +
  • T1 (temporal data)—prefetch data into the second level cache.
+

[PS data] For dword indices, the instruction will prefetch sixteen memory locations. For qword indices, the instruction will prefetch eight values.

+

[PD data] For dword and qword indices, the instruction will prefetch eight memory locations.

+

Note that:

+

(1) The prefetches may happen in any order (or not at all). The instruction is a hint.

+

(2) The mask is left unchanged.

+

(3) Not valid with 16-bit effective addresses. Will deliver a #UD fault.

+

(4) No FP nor memory faults may be produced by this instruction.

+

(5) Prefetches do not handle cache line splits.

+

(6) A #UD is signaled if the memory operand is encoded without the SIB byte.

+

Operation + ¶ +

+
BASE_ADDR stands for the memory operand base address (a GPR); may not exist.
+VINDEX stands for the memory operand vector of indices (a vector register).
+SCALE stands for the memory operand scalar (1, 2, 4 or 8).
+DISP is the optional 1, 2 or 4 byte displacement.
+PREFETCH(mem, Level, State) Prefetches a byte memory location pointed to by ‘mem’ into the cache level specified by ‘Level’; a request
+for exclusive ownership is made if ‘State’ is 1. Note that the prefetch ignores cache line splits. This operation is considered a
+hint for the processor and may be skipped depending on implementation.
+
+

VGATHERPF1DPS (EVEX Encoded Version) + ¶ +

+
(KL, VL) = (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j]
+        Prefetch( [BASE_ADDR + SignExtend(VINDEX[i+31:i]) * SCALE + DISP], Level=1, RFO = 0)
+    FI;
+ENDFOR
+
+

VGATHERPF1DPD (EVEX Encoded Version) + ¶ +

+
(KL, VL) = (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 32
+    IF k1[j]
+        Prefetch( [BASE_ADDR + SignExtend(VINDEX[k+31:k]) * SCALE + DISP], Level=1, RFO = 0)
+    FI;
+ENDFOR
+
+

VGATHERPF1QPS (EVEX Encoded Version) + ¶ +

+
(KL, VL) = (8, 256)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j]
+        Prefetch( [BASE_ADDR + SignExtend(VINDEX[i+63:i]) * SCALE + DISP], Level=1, RFO = 0)
+    FI;
+ENDFOR
+
+

VGATHERPF1QPD (EVEX Encoded Version) + ¶ +

+
(KL, VL) = (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 64
+    IF k1[j]
+        Prefetch( [BASE_ADDR + SignExtend(VINDEX[k+63:k]) * SCALE + DISP], Level=1, RFO = 0)
+    FI;
+ENDFOR
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VGATHERPF1DPD void _mm512_mask_prefetch_i32gather_pd(__m256i vdx, __mmask8 m, void * base, int scale, int hint);
+
+
VGATHERPF1DPS void _mm512_mask_prefetch_i32gather_ps(__m512i vdx, __mmask16 m, void * base, int scale, int hint);
+
+
VGATHERPF1QPD void _mm512_mask_prefetch_i64gather_pd(__m512i vdx, __mmask8 m, void * base, int scale, int hint);
+
+
VGATHERPF1QPS void _mm512_mask_prefetch_i64gather_ps(__m512i vdx, __mmask8 m, void * base, int scale, int hint);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-62, “Type E12NP Class Exception Conditions.”

diff --git a/x86/vgatherqps.vgatherqpd.html b/x86/vgatherqps.vgatherqpd.html new file mode 100644 index 0000000..ab30146 --- /dev/null +++ b/x86/vgatherqps.vgatherqpd.html @@ -0,0 +1,155 @@ + +VGATHERQPS/VGATHERQPD + — Gather Packed Single, Packed Double with Signed Qword Indices

VGATHERQPS/VGATHERQPD + — Gather Packed Single, Packed Double with Signed Qword Indices

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 93 /vsib VGATHERQPS xmm1 {k1}, vm64xAV/VAVX512VL AVX512FUsing signed qword indices, gather single-precision floating-point values from memory using k1 as completion mask.
EVEX.256.66.0F38.W0 93 /vsib VGATHERQPS xmm1 {k1}, vm64yAV/VAVX512VL AVX512FUsing signed qword indices, gather single-precision floating-point values from memory using k1 as completion mask.
EVEX.512.66.0F38.W0 93 /vsib VGATHERQPS ymm1 {k1}, vm64zAV/VAVX512FUsing signed qword indices, gather single-precision floating-point values from memory using k1 as completion mask.
EVEX.128.66.0F38.W1 93 /vsib VGATHERQPD xmm1 {k1}, vm64xAV/VAVX512VL AVX512FUsing signed qword indices, gather float64 vector into float64 vector xmm1 using k1 as completion mask.
EVEX.256.66.0F38.W1 93 /vsib VGATHERQPD ymm1 {k1}, vm64yAV/VAVX512VL AVX512FUsing signed qword indices, gather float64 vector into float64 vector ymm1 using k1 as completion mask.
EVEX.512.66.0F38.W1 93 /vsib VGATHERQPD zmm1 {k1}, vm64zAV/VAVX512FUsing signed qword indices, gather float64 vector into float64 vector zmm1 using k1 as completion mask.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)BaseReg (R): VSIB:base, VectorReg(R): VSIB:indexN/AN/A
+

Description + ¶ +

+

A set of 8 single-precision/double precision floating-point memory locations pointed to by base address BASE_ADDR and index vector V_INDEX with scale SCALE are gathered. The result is written into a vector register. The elements are specified via the VSIB (i.e., the index register is a vector register, holding packed indices). Elements will only be loaded if their corresponding mask bit is one. If an element’s mask bit is not set, the corresponding element of the destination register is left unchanged. The entire mask register will be set to zero by this instruction unless it triggers an exception.

+

This instruction can be suspended by an exception if at least one element is already gathered (i.e., if the exception is triggered by an element other than the rightmost one with its mask bit set). When this happens, the destination register and the mask register (k1) are partially updated; those elements that have been gathered are placed into the destination register and have their mask bits set to zero. If any traps or interrupts are pending from already gathered elements, they will be delivered in lieu of the exception; in this case, EFLAG.RF is set to one so an instruction breakpoint is not re-triggered when the instruction is continued.

+

If the data element size is less than the index element size, the higher part of the destination register and the mask register do not correspond to any elements being gathered. This instruction sets those higher parts to zero. It may update these unused parts of one or both of those registers even if the instruction triggers an exception, and even if the instruction triggers the exception before gathering any elements.

+

Note that:

+
    +
  • The values may be read from memory in any order. Memory ordering with other instructions follows the Intel-64 memory-ordering model.
  • +
  • Faults are delivered in a right-to-left manner. That is, if a fault is triggered by an element and delivered, all elements closer to the LSB of the destination zmm will be completed (and non-faulting). Individual elements closer to the MSB may or may not be completed. If a given element triggers multiple faults, they are delivered in the conventional order.
  • +
  • Elements may be gathered in any order, but faults must be delivered in a right-to-left order; thus, elements to the left of a faulting one may be gathered before the fault is delivered. A given implementation of this instruction is repeatable: given the same input values and architectural state, the same set of elements to the left of the faulting one will be gathered.
  • +
  • This instruction does not perform AC checks, and so will never deliver an AC fault.
  • +
  • Not valid with 16-bit effective addresses. Will deliver a #UD fault.
+

Note that the presence of the VSIB byte is enforced in this instruction. Hence, the instruction will #UD fault if ModRM.rm is different than 100b.

+

This instruction has special disp8*N and alignment rules. N is considered to be the size of a single vector element.
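For example, with VGATHERQPD the element size is 8 bytes, so an encoded disp8 value of 16 corresponds to an effective displacement of 16 * 8 = 128 bytes; a displacement that is not a multiple of the element size cannot use the compressed disp8 form and must be encoded with a 4-byte displacement.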

+

The scaled index may require more bits to represent than the address bits used by the processor (e.g., in 32-bit mode, if the scale is greater than one). In this case, the most significant bits beyond the number of address bits are ignored.

+

The instruction will #UD fault if the destination vector zmm1 is the same as index vector VINDEX. The instruction will #UD fault if the k0 mask register is specified.

+

Operation + ¶ +

+
BASE_ADDR stands for the memory operand base address (a GPR); may not exist
+VINDEX stands for the memory operand vector of indices (a ZMM register)
+SCALE stands for the memory operand scalar (1, 2, 4 or 8)
+DISP is the optional 1 or 4 byte displacement
+
+

VGATHERQPS (EVEX encoded version) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    k := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] :=
+            MEM[BASE_ADDR + (VINDEX[k+63:k]) * SCALE + DISP]
+            k1[j] := 0
+        ELSE *DEST[i+31:i] := remains unchanged*
+    FI;
+ENDFOR
+k1[MAX_KL-1:KL] := 0
+DEST[MAXVL-1:VL/2] := 0
+
+

VGATHERQPD (EVEX encoded version) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := MEM[BASE_ADDR + (VINDEX[i+63:i]) * SCALE + DISP]
+            k1[j] := 0
+        ELSE *DEST[i+63:i] := remains unchanged*
+    FI;
+ENDFOR
+k1[MAX_KL-1:KL] := 0
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VGATHERQPD __m512d _mm512_i64gather_pd( __m512i vdx, void * base, int scale);
+
+
VGATHERQPD __m512d _mm512_mask_i64gather_pd(__m512d s, __mmask8 k, __m512i vdx, void * base, int scale);
+
+
VGATHERQPD __m256d _mm256_mask_i64gather_pd(__m256d s, __mmask8 k, __m256i vdx, void * base, int scale);
+
+
VGATHERQPD __m128d _mm_mask_i64gather_pd(__m128d s, __mmask8 k, __m128i vdx, void * base, int scale);
+
+
VGATHERQPS __m256 _mm512_i64gather_ps( __m512i vdx, void * base, int scale);
+
+
VGATHERQPS __m256 _mm512_mask_i64gather_ps(__m256 s, __mmask8 k, __m512i vdx, void * base, int scale);
+
+
VGATHERQPS __m128 _mm256_mask_i64gather_ps(__m128 s, __mmask8 k, __m256i vdx, void * base, int scale);
+
+
VGATHERQPS __m128 _mm_mask_i64gather_ps(__m128 s, __mmask8 k, __m128i vdx, void * base, int scale);
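The following fragment is an illustration only, not part of this reference: it shows the masked 512-bit gather form, where the table, indices, and mask are placeholder inputs and the scale of 8 reflects the 8-byte element width.

#include <immintrin.h>

/* Gather 8 doubles from 'table' at the 64-bit element indices in 'idx'.
   Elements whose bit in 'k' is clear keep the value supplied in 'src';
   scale = 8 because each table element is 8 bytes wide. */
static __m512d gather_selected(const double *table, __m512i idx, __mmask8 k)
{
    __m512d src = _mm512_setzero_pd();
    return _mm512_mask_i64gather_pd(src, k, idx, table, 8);
}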
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-61, “Type E12 Class Exception Conditions.”

diff --git a/x86/vgetexppd.html b/x86/vgetexppd.html new file mode 100644 index 0000000..05be01b --- /dev/null +++ b/x86/vgetexppd.html @@ -0,0 +1,209 @@ + +VGETEXPPD + — Convert Exponents of Packed Double Precision Floating-Point Values to DoublePrecision Floating-Point Values

VGETEXPPD + — Convert Exponents of Packed Double Precision Floating-Point Values to DoublePrecision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W1 42 /r VGETEXPPD xmm1 {k1}{z}, xmm2/m128/m64bcstAV/VAVX512VL AVX512FConvert the exponent of packed double precision floating-point values in the source operand to double precision floating-point results representing unbiased integer exponents and stores the results in the destination register.
EVEX.256.66.0F38.W1 42 /r VGETEXPPD ymm1 {k1}{z}, ymm2/m256/m64bcstAV/VAVX512VL AVX512FConvert the exponent of packed double precision floating-point values in the source operand to double precision floating-point results representing unbiased integer exponents and stores the results in the destination register.
EVEX.512.66.0F38.W1 42 /r VGETEXPPD zmm1 {k1}{z}, zmm2/m512/m64bcst{sae}AV/VAVX512FConvert the exponent of packed double precision floating-point values in the source operand to double precision floating-point results representing unbiased integer exponents and stores the results in the destination under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Extracts the biased exponents from the normalized double precision floating-point representation of each qword data element of the source operand (the second operand) as unbiased signed integer value, or convert the denormal representation of input data to unbiased negative integer values. Each integer value of the unbiased exponent is converted to double precision floating-point value and written to the corresponding qword elements of the destination operand (the first operand) as double precision floating-point numbers.

+

The destination operand is a ZMM/YMM/XMM register and updated under the writemask. The source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 64-bit memory location.

+

EVEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Each GETEXP operation converts the exponent value into a floating-point number (permitting input value in denormal representation). Special cases of input values are listed in Table 5-15.

+

The formula is:

+

GETEXP(x) = floor(log2(|x|))

+

Notation floor(x) stands for the greatest integer not exceeding real number x.
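For example, GETEXP(20.0) = floor(log2(20.0)) = 4.0 and GETEXP(0.1) = floor(log2(0.1)) = -4.0; for a denormal input (with MXCSR.DAZ = 0) the result is the unbiased exponent of the normalized form, e.g., GETEXP(2^-1074) = -1074.0.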

+
+ + + + + + + + + + + + + + + + + +
Input OperandResultComments
src1 = NaNQNaN(src1)If (SRC = SNaN) then #IE If (SRC = denormal) then #DE
0 < |src1| < INFfloor(log2(|src1|))
| src1| = +INF+INF
| src1| = 0-INF
+
Table 5-15. VGETEXPPD/SD Special Cases
+

Operation + ¶ +

+
NormalizeExpTinyDPFP(SRC[63:0])
+{
+    // Jbit is the hidden integral bit of a floating-point number. In case of denormal number it has the value of ZERO.
+    Src.Jbit := 0;
+    Dst.exp := 1;
+    Dst.fraction := SRC[51:0];
+    WHILE(Src.Jbit = 0)
+    {
+        Src.Jbit := Dst.fraction[51];
+                        // Get the fraction MSB
+        Dst.fraction := Dst.fraction << 1 ;
+                            // One bit shift left
+        Dst.exp-- ;
+                // Decrement the exponent
+    }
+    Dst.fraction := 0;
+    Dst.sign := 1;
+    TMP[63:0] := MXCSR.DAZ? 0 : (Dst.sign << 63) OR (Dst.exp << 52) OR (Dst.fraction) ;
+    Return (TMP[63:0]);
+}
+ConvertExpDPFP(SRC[63:0])
+{
+    Src.sign := 0;
+                // Zero out sign bit
+    Src.exp := SRC[62:52];
+    Src.fraction := SRC[51:0];
+    // Check for NaN
+    IF (SRC = NaN)
+    {
+        IF ( SRC = SNAN ) SET IE;
+        Return QNAN(SRC);
+    }
+    // Check for +INF
+    IF (Src = +INF) RETURN (Src);
+    // check if zero operand
+    IF ((Src.exp = 0) AND ((Src.fraction = 0) OR (MXCSR.DAZ = 1)))
+    { Return (-INF); }
+    ELSE // check if denormal operand (notice that MXCSR.DAZ = 0)
+    {
+        IF ((Src.exp = 0) AND (Src.fraction != 0))
+        {
+            TMP[63:0] := NormalizeExpTinyDPFP(SRC[63:0]) ;
+                                // Get Normalized Exponent
+            Set #DE
+        }
+        ELSE // exponent value is correct
+        {
+            TMP[63:0] := (Src.sign << 63) OR (Src.exp << 52) OR (Src.fraction) ;
+        }
+        TMP := SAR(TMP, 52) ;
+                    // Shift Arithmetic Right
+        TMP := TMP – 1023;
+                    // Subtract Bias
+        Return CvtI2D(TMP);
+                    // Convert INT to double precision floating-point number
+    }
+}
+
+

VGETEXPPD (EVEX encoded versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC *is memory*)
+                THEN
+                    DEST[i+63:i] :=
+            ConvertExpDPFP(SRC[63:0])
+                ELSE
+                    DEST[i+63:i] :=
+            ConvertExpDPFP(SRC[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VGETEXPPD __m512d _mm512_getexp_pd(__m512d a);
+
+
VGETEXPPD __m512d _mm512_mask_getexp_pd(__m512d s, __mmask8 k, __m512d a);
+
+
VGETEXPPD __m512d _mm512_maskz_getexp_pd( __mmask8 k, __m512d a);
+
+
VGETEXPPD __m512d _mm512_getexp_round_pd(__m512d a, int sae);
+
+
VGETEXPPD __m512d _mm512_mask_getexp_round_pd(__m512d s, __mmask8 k, __m512d a, int sae);
+
+
VGETEXPPD __m512d _mm512_maskz_getexp_round_pd( __mmask8 k, __m512d a, int sae);
+
+
VGETEXPPD __m256d _mm256_getexp_pd(__m256d a);
+
+
VGETEXPPD __m256d _mm256_mask_getexp_pd(__m256d s, __mmask8 k, __m256d a);
+
+
VGETEXPPD __m256d _mm256_maskz_getexp_pd( __mmask8 k, __m256d a);
+
+
VGETEXPPD __m128d _mm_getexp_pd(__m128d a);
+
+
VGETEXPPD __m128d _mm_mask_getexp_pd(__m128d s, __mmask8 k, __m128d a);
+
+
VGETEXPPD __m128d _mm_maskz_getexp_pd( __mmask8 k, __m128d a);
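As a usage sketch only (the choice of the zero-masking form and the inputs are assumptions), lanes whose mask bit is clear are forced to 0.0 instead of being merged from a source vector:

#include <immintrin.h>

/* Compute floor(log2(|x|)) for each of 8 doubles; lanes with a clear
   bit in 'k' are zeroed rather than merged. */
static __m512d exponents_of(__m512d x, __mmask8 k)
{
    return _mm512_maskz_getexp_pd(k, x);
}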
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Denormal.

+

Other Exceptions + ¶ +

+

See Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vgetexpph.html b/x86/vgetexpph.html new file mode 100644 index 0000000..b54df3b --- /dev/null +++ b/x86/vgetexpph.html @@ -0,0 +1,180 @@ + +VGETEXPPH + — Convert Exponents of Packed FP16 Values to FP16 Values

VGETEXPPH + — Convert Exponents of Packed FP16 Values to FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.MAP6.W0 42 /r VGETEXPPH xmm1{k1}{z}, xmm2/m128/m16bcstAV/VAVX512-FP16 AVX512VLConvert the exponent of FP16 values in the source operand to FP16 results representing unbiased integer exponents and stores the results in the destination register subject to writemask k1.
EVEX.256.66.MAP6.W0 42 /r VGETEXPPH ymm1{k1}{z}, ymm2/m256/m16bcstAV/VAVX512-FP16 AVX512VLConvert the exponent of FP16 values in the source operand to FP16 results representing unbiased integer exponents and stores the results in the destination register subject to writemask k1.
EVEX.512.66.MAP6.W0 42 /r VGETEXPPH zmm1{k1}{z}, zmm2/m512/m16bcst {sae}AV/VAVX512-FP16Convert the exponent of FP16 values in the source operand to FP16 results representing unbiased integer exponents and stores the results in the destination register subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction extracts the biased exponents from the normalized FP16 representation of each word element of the source operand (the second operand) as unbiased signed integer value, or convert the denormal representation of input data to unbiased negative integer values. Each integer value of the unbiased exponent is converted to an FP16 value and written to the corresponding word elements of the destination operand (the first operand) as FP16 numbers.

+

The destination elements are updated according to the writemask.

+

Each GETEXP operation converts the exponent value into a floating-point number (permitting input value in denormal representation). Special cases of input values are listed in Table 5-16.

+

The formula is:

+

GETEXP(x) = floor(log2(|x|))

+

Notation floor(x) stands for maximal integer not exceeding real number x.

+

Software usage of the VGETEXPxx and VGETMANTxx instructions generally involves a combination of the GETEXP operation and the GETMANT operation (see VGETMANTPH). Thus, the VGETEXPPH instruction does not require software to handle SIMD floating-point exceptions.

+
+ + + + + + + + + + + + + + + + + +
Input OperandResultComments
src1 = NaNQNaN(src1)If (SRC = SNaN), then #IE. If (SRC = denormal), then #DE.
0 < |src1| < INFfloor(log2(|src1|))
| src1| = +INF+INF
| src1| = 0-INF
+
Table 5-16. VGETEXPPH/VGETEXPSH Special Cases
+

Operation + ¶ +

+
def normalize_exponent_tiny_fp16(src):
+    jbit := 0
+    // src & dst are FP16 numbers with sign(1b), exp(5b) and fraction (10b) fields
+    dst.exp := 1 // write bits 14:10
+    dst.fraction := src.fraction // copy bits 9:0
+    while jbit == 0:
+        jbit := dst.fraction[9] // msb of the fraction
+        dst.fraction := dst.fraction << 1
+        dst.exp := dst.exp - 1
+    dst.fraction := 0
+    return dst
+def getexp_fp16(src):
+    src.sign := 0 // make positive
+    exponent_all_ones := (src[14:10] == 0x1F)
+    exponent_all_zeros := (src[14:10] == 0)
+    mantissa_all_zeros := (src[9:0] == 0)
+    zero := exponent_all_zeros and mantissa_all_zeros
+    signaling_bit := src[9]
+    nan := exponent_all_ones and not(mantissa_all_zeros)
+    snan := nan and not(signaling_bit)
+    qnan := nan and signaling_bit
+    positive_infinity := exponent_all_ones and mantissa_all_zeros // sign was cleared above, so a -INF input is treated as +INF here
+    denormal := exponent_all_zeros and not(mantissa_all_zeros)
+    if nan:
+        if snan:
+            MXCSR.IE := 1
+        return qnan(src)
+                // convert snan to a qnan
+    if positive_infinity:
+        return src
+    if zero:
+        return -INF
+    if denormal:
+        tmp := normalize_exponent_tiny_fp16(src)
+        MXCSR.DE := 1
+    else:
+        tmp := src
+    tmp := SAR(tmp, 10) // shift arithmetic right
+    tmp := tmp - 15 // subtract bias
+    return convert_integer_to_fp16(tmp)
+
+

VGETEXPPH dest{k1}, src + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR i := 0 to KL-1:
+    IF k1[i] or *no writemask*:
+        IF SRC is memory and (EVEX.b = 1):
+            tsrc := src.fp16[0]
+        ELSE:
+            tsrc := src.fp16[i]
+        DEST.fp16[i] := getexp_fp16(tsrc)
+    ELSE IF *zeroing*:
+        DEST.fp16[i] := 0
+    //else DEST.fp16[i] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VGETEXPPH __m128h _mm_getexp_ph (__m128h a);
+
+
VGETEXPPH __m128h _mm_mask_getexp_ph (__m128h src, __mmask8 k, __m128h a);
+
+
VGETEXPPH __m128h _mm_maskz_getexp_ph (__mmask8 k, __m128h a);
+
+
VGETEXPPH __m256h _mm256_getexp_ph (__m256h a);
+
+
VGETEXPPH __m256h _mm256_mask_getexp_ph (__m256h src, __mmask16 k, __m256h a);
+
+
VGETEXPPH __m256h _mm256_maskz_getexp_ph (__mmask16 k, __m256h a);
+
+
VGETEXPPH __m512h _mm512_getexp_ph (__m512h a);
+
+
VGETEXPPH __m512h _mm512_mask_getexp_ph (__m512h src, __mmask32 k, __m512h a);
+
+
VGETEXPPH __m512h _mm512_maskz_getexp_ph (__mmask32 k, __m512h a);
+
+
VGETEXPPH __m512h _mm512_getexp_round_ph (__m512h a, const int sae);
+
+
VGETEXPPH __m512h _mm512_mask_getexp_round_ph (__m512h src, __mmask32 k, __m512h a, const int sae);
+
+
VGETEXPPH __m512h _mm512_maskz_getexp_round_ph (__mmask32 k, __m512h a, const int sae);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vgetexpps.html b/x86/vgetexpps.html new file mode 100644 index 0000000..fb26c99 --- /dev/null +++ b/x86/vgetexpps.html @@ -0,0 +1,639 @@ + +VGETEXPPS + — Convert Exponents of Packed Single Precision Floating-Point Values to SinglePrecision Floating-Point Values

VGETEXPPS + — Convert Exponents of Packed Single Precision Floating-Point Values to SinglePrecision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 42 /r VGETEXPPS xmm1 {k1}{z}, xmm2/m128/m32bcstAV/VAVX512VL AVX512FConvert the exponent of packed single-precision floating-point values in the source operand to single-precision floating-point results representing unbiased integer exponents and stores the results in the destination register.
EVEX.256.66.0F38.W0 42 /r VGETEXPPS ymm1 {k1}{z}, ymm2/m256/m32bcstAV/VAVX512VL AVX512FConvert the exponent of packed single-precision floating-point values in the source operand to single-precision floating-point results representing unbiased integer exponents and stores the results in the destination register.
EVEX.512.66.0F38.W0 42 /r VGETEXPPS zmm1 {k1}{z}, zmm2/m512/m32bcst{sae}AV/VAVX512FConvert the exponent of packed single-precision floating-point values in the source operand to single-precision floating-point results representing unbiased integer exponents and stores the results in the destination register.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Extracts the biased exponents from the normalized single-precision floating-point representation of each dword element of the source operand (the second operand) as unbiased signed integer value, or convert the denormal representation of input data to unbiased negative integer values. Each integer value of the unbiased exponent is converted to single-precision floating-point value and written to the corresponding dword elements of the destination operand (the first operand) as single-precision floating-point numbers.

+

The destination operand is a ZMM/YMM/XMM register and updated under the writemask. The source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 32-bit memory location.

+

EVEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Each GETEXP operation converts the exponent value into a floating-point number (permitting input value in denormal representation). Special cases of input values are listed in Table 5-17.

+

The formula is:

+

GETEXP(x) = floor(log2(|x|))

+

Notation floor(x) stands for maximal integer not exceeding real number x.

+

Software usage of the VGETEXPxx and VGETMANTxx instructions generally involves a combination of the GETEXP operation and the GETMANT operation (see VGETMANTPD). Thus, the VGETEXPxx instructions do not require software to handle SIMD floating-point exceptions.
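A minimal sketch of that pairing is shown below; recombining the parts through _mm512_getmant_ps and _mm512_scalef_ps (VSCALEFPS) is an assumption of typical usage, not a requirement of this page:

#include <immintrin.h>

/* Split x into mantissas in [1, 2) and unbiased exponents, then rebuild
   x as mant * 2^exp; for finite, nonzero inputs the round trip
   reproduces x. */
static __m512 split_and_rebuild(__m512 x)
{
    __m512 mant = _mm512_getmant_ps(x, _MM_MANT_NORM_1_2, _MM_MANT_SIGN_src);
    __m512 exp  = _mm512_getexp_ps(x);
    return _mm512_scalef_ps(mant, exp);
}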

+
+ + + + + + + + + + + + + + + + + +
Input OperandResultComments
src1 = NaNQNaN(src1)If (SRC = SNaN) then #IE If (SRC = denormal) then #DE
0 < |src1| < INFfloor(log2(|src1|))
| src1| = +INF+INF
| src1| = 0-INF
+
Table 5-17. VGETEXPPS/SS Special Cases
+

Figure 5-14 illustrates the VGETEXPPS functionality on input values with normalized representation.

+
[Figure 5-14 illustration: for a normal input Src = 2^1, SAR Src, 23 yields the biased exponent 080h; subtracting the bias gives Tmp - Bias = 1; Cvt_PI2PS(01h) produces the single-precision result 2^0.]
Figure 5-14. VGETEXPPS Functionality On Normal Input values
+

Operation + ¶ +

+
NormalizeExpTinySPFP(SRC[31:0])
+{
+    // Jbit is the hidden integral bit of a floating-point number. In case of denormal number it has the value of ZERO.
+    Src.Jbit := 0;
+    Dst.exp := 1;
+    Dst.fraction := SRC[22:0];
+    WHILE(Src.Jbit = 0)
+    {
+        Src.Jbit := Dst.fraction[22];
+                        // Get the fraction MSB
+        Dst.fraction := Dst.fraction << 1 ;
+                        // One bit shift left
+        Dst.exp-- ;
+                // Decrement the exponent
+    }
+    Dst.fraction := 0;
+    Dst.sign := 1;
+    TMP[31:0] := MXCSR.DAZ? 0 : (Dst.sign << 31) OR (Dst.exp << 23) OR (Dst.fraction) ;
+    Return (TMP[31:0]);
+}
+ConvertExpSPFP(SRC[31:0])
+{
+    Src.sign := 0;
+                // Zero out sign bit
+    Src.exp := SRC[30:23];
+    Src.fraction := SRC[22:0];
+    // Check for NaN
+    IF (SRC = NaN)
+    {
+        IF ( SRC = SNAN ) SET IE;
+        Return QNAN(SRC);
+    }
+    // Check for +INF
+    IF (Src = +INF) RETURN (Src);
+    // check if zero operand
+    IF ((Src.exp = 0) AND ((Src.fraction = 0) OR (MXCSR.DAZ = 1)))
+    { Return (-INF); }
+    ELSE // check if denormal operand (notice that MXCSR.DAZ = 0)
+    {
+        IF ((Src.exp = 0) AND (Src.fraction != 0))
+        {
+            TMP[31:0] := NormalizeExpTinySPFP(SRC[31:0]) ;
+                            // Get Normalized Exponent
+            Set #DE
+        }
+        ELSE // exponent value is correct
+        {
+            TMP[31:0] := (Src.sign << 31) OR (Src.exp << 23) OR (Src.fraction) ;
+        }
+        TMP := SAR(TMP, 23) ;
+                    // Shift Arithmetic Right
+        TMP := TMP – 127;
+                    // Subtract Bias
+        Return CvtI2S(TMP);
+                    // Convert INT to single precision floating-point number
+    }
+}
+
+

VGETEXPPS (EVEX encoded versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC *is memory*)
+                THEN
+                    DEST[i+31:i] :=
+            ConvertExpSPFP(SRC[31:0])
+                ELSE
+                    DEST[i+31:i] :=
+            ConvertExpSPFP(SRC[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VGETEXPPS __m512 _mm512_getexp_ps( __m512 a);
+
+
VGETEXPPS __m512 _mm512_mask_getexp_ps(__m512 s, __mmask16 k, __m512 a);
+
+
VGETEXPPS __m512 _mm512_maskz_getexp_ps( __mmask16 k, __m512 a);
+
+
VGETEXPPS __m512 _mm512_getexp_round_ps( __m512 a, int sae);
+
+
VGETEXPPS __m512 _mm512_mask_getexp_round_ps(__m512 s, __mmask16 k, __m512 a, int sae);
+
+
VGETEXPPS __m512 _mm512_maskz_getexp_round_ps( __mmask16 k, __m512 a, int sae);
+
+
VGETEXPPS __m256 _mm256_getexp_ps(__m256 a);
+
+
VGETEXPPS __m256 _mm256_mask_getexp_ps(__m256 s, __mmask8 k, __m256 a);
+
+
VGETEXPPS __m256 _mm256_maskz_getexp_ps( __mmask8 k, __m256 a);
+
+
VGETEXPPS __m128 _mm_getexp_ps(__m128 a);
+
+
VGETEXPPS __m128 _mm_mask_getexp_ps(__m128 s, __mmask8 k, __m128 a);
+
+
VGETEXPPS __m128 _mm_maskz_getexp_ps( __mmask8 k, __m128 a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Denormal.

+

Other Exceptions + ¶ +

+

See Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vgetexpsd.html b/x86/vgetexpsd.html new file mode 100644 index 0000000..215ffab --- /dev/null +++ b/x86/vgetexpsd.html @@ -0,0 +1,95 @@ + +VGETEXPSD + — Convert Exponents of Scalar Double Precision Floating-Point Value to DoublePrecision Floating-Point Value

VGETEXPSD + — Convert Exponents of Scalar Double Precision Floating-Point Value to DoublePrecision Floating-Point Value

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.0F38.W1 43 /r VGETEXPSD xmm1 {k1}{z}, xmm2, xmm3/m64{sae}AV/VAVX512FConvert the biased exponent (bits 62:52) of the low double precision floating-point value in xmm3/m64 to a double precision floating-point value representing unbiased integer exponent. Stores the result to the low 64-bit of xmm1 under the writemask k1 and merge with the other elements of xmm2.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Extracts the biased exponent from the normalized double precision floating-point representation of the low qword data element of the source operand (the third operand) as unbiased signed integer value, or convert the denormal representation of input data to unbiased negative integer values. The integer value of the unbiased exponent is converted to double precision floating-point value and written to the destination operand (the first operand) as double precision floating-point numbers. Bits (127:64) of the XMM register destination are copied from corresponding bits in the first source operand.

+

The destination must be a XMM register, the source operand can be a XMM register or a float64 memory location.

+

If writemasking is used, the low quadword element of the destination operand is conditionally updated depending on the value of writemask register k1. If writemasking is not used, the low quadword element of the destination operand is unconditionally updated.

+

Each GETEXP operation converts the exponent value into a floating-point number (permitting input value in denormal representation). Special cases of input values are listed in Table 5-15.

+

The formula is:

+

GETEXP(x) = floor(log2(|x|))

+

Notation floor(x) stands for maximal integer not exceeding real number x.

+

Operation + ¶ +

+
// NormalizeExpTinyDPFP(SRC[63:0]) is defined in the Operation section of VGETEXPPD
+// ConvertExpDPFP(SRC[63:0]) is defined in the Operation section of VGETEXPPD
+
+

VGETEXPSD (EVEX encoded version) + ¶ +

+
IF k1[0] OR *no writemask*
+    THEN DEST[63:0] :=
+            ConvertExpDPFP(SRC2[63:0])
+    ELSE
+        IF *merging-masking*
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[63:0] := 0
+        FI
+FI;
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VGETEXPSD __m128d _mm_getexp_sd( __m128d a, __m128d b);
+
+
VGETEXPSD __m128d _mm_mask_getexp_sd(__m128d s, __mmask8 k, __m128d a, __m128d b);
+
+
VGETEXPSD __m128d _mm_maskz_getexp_sd( __mmask8 k, __m128d a, __m128d b);
+
+
VGETEXPSD __m128d _mm_getexp_round_sd( __m128d a, __m128d b, int sae);
+
+
VGETEXPSD __m128d _mm_mask_getexp_round_sd(__m128d s, __mmask8 k, __m128d a, __m128d b, int sae);
+
+
VGETEXPSD __m128d _mm_maskz_getexp_round_sd( __mmask8 k, __m128d a, __m128d b, int sae);
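For illustration only (the inputs are arbitrary), the scalar intrinsic mirrors the merging behavior described above: only the low element is replaced, and the upper element of the first source passes through.

#include <immintrin.h>

/* Low lane becomes floor(log2(|low lane of b|)); the upper lane is
   copied from a, matching the DEST[127:64] := SRC1[127:64] step above. */
static __m128d low_exponent(__m128d a, __m128d b)
{
    return _mm_getexp_sd(a, b);
}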
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Denormal

+

Other Exceptions + ¶ +

+

See Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vgetexpsh.html b/x86/vgetexpsh.html new file mode 100644 index 0000000..ad3db34 --- /dev/null +++ b/x86/vgetexpsh.html @@ -0,0 +1,89 @@ + +VGETEXPSH + — Convert Exponents of Scalar FP16 Values to FP16 Values

VGETEXPSH + — Convert Exponents of Scalar FP16 Values to FP16 Values

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.MAP6.W0 43 /r VGETEXPSH xmm1{k1}{z}, xmm2, xmm3/m16 {sae}AV/VAVX512-FP16Convert the exponent of FP16 values in the low word of the source operand to FP16 results representing unbiased integer exponents, and stores the results in the low word of the destination register subject to writemask k1. Bits 127:16 of xmm2 are copied to xmm1[127:16].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction extracts the biased exponents from the normalized FP16 representation of the low word element of the source operand (the second operand) as unbiased signed integer value, or convert the denormal representation of input data to an unbiased negative integer value. The integer value of the unbiased exponent is converted to an FP16 value and written to the low word element of the destination operand (the first operand) as an FP16 number.

+

Bits 127:16 of the destination operand are copied from the corresponding bits of the first source operand. Bits MAXVL-1:128 of the destination operand are zeroed. The low FP16 element of the destination is updated according to the writemask.

+

Each GETEXP operation converts the exponent value into a floating-point number (permitting input value in denormal representation). Special cases of input values are listed in Table 5-16.

+

The formula is:

+

GETEXP(x) = floor(log2(|x|))

+

Notation floor(x) stands for maximal integer not exceeding real number x.

+

Software usage of the VGETEXPxx and VGETMANTxx instructions generally involves a combination of the GETEXP operation and the GETMANT operation (see VGETMANTSH). Thus, the VGETEXPSH instruction does not require software to handle SIMD floating-point exceptions.

+

Operation + ¶ +

+

VGETEXPSH dest{k1}, src1, src2 + ¶ +

+
IF k1[0] or *no writemask*:
+    DEST.fp16[0] := getexp_fp16(src2.fp16[0]) // see VGETEXPPH
+ELSE IF *zeroing*:
+    DEST.fp16[0] := 0
+//else DEST.fp16[0] remains unchanged
+DEST[127:16] := src1[127:16]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VGETEXPSH __m128h _mm_getexp_round_sh (__m128h a, __m128h b, const int sae);
+
+
VGETEXPSH __m128h _mm_mask_getexp_round_sh (__m128h src, __mmask8 k, __m128h a, __m128h b, const int sae);
+
+
VGETEXPSH __m128h _mm_maskz_getexp_round_sh (__mmask8 k, __m128h a, __m128h b, const int sae);
+
+
VGETEXPSH __m128h _mm_getexp_sh (__m128h a, __m128h b);
+
+
VGETEXPSH __m128h _mm_mask_getexp_sh (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VGETEXPSH __m128h _mm_maskz_getexp_sh (__mmask8 k, __m128h a, __m128h b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Denormal

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vgetexpss.html b/x86/vgetexpss.html new file mode 100644 index 0000000..f1d9e64 --- /dev/null +++ b/x86/vgetexpss.html @@ -0,0 +1,97 @@ + +VGETEXPSS + — Convert Exponents of Scalar Single Precision Floating-Point Value to SinglePrecision Floating-Point Value

VGETEXPSS + — Convert Exponents of Scalar Single Precision Floating-Point Value to SinglePrecision Floating-Point Value

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.0F38.W0 43 /r VGETEXPSS xmm1 {k1}{z}, xmm2, xmm3/m32{sae}AV/VAVX512FConvert the biased exponent (bits 30:23) of the low single-precision floating-point value in xmm3/m32 to a single-precision floating-point value representing unbiased integer exponent. Stores the result to xmm1 under the writemask k1 and merge with the other elements of xmm2.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Extracts the biased exponent from the normalized single-precision floating-point representation of the low double-word data element of the source operand (the third operand) as unbiased signed integer value, or convert the denormal representation of input data to unbiased negative integer values. The integer value of the unbiased exponent is converted to single-precision floating-point value and written to the destination operand (the first operand) as single-precision floating-point numbers. Bits (127:32) of the XMM register destination are copied from corresponding bits in the first source operand.

+

The destination must be a XMM register, the source operand can be a XMM register or a float32 memory location.

+

If writemasking is used, the low doubleword element of the destination operand is conditionally updated depending on the value of writemask register k1. If writemasking is not used, the low doubleword element of the destination operand is unconditionally updated.

+

Each GETEXP operation converts the exponent value into a floating-point number (permitting input value in denormal representation). Special cases of input values are listed in Table 5-17.

+

The formula is:

+

GETEXP(x) = floor(log2(|x|))

+

Notation floor(x) stands for maximal integer not exceeding real number x.

+

Software usage of the VGETEXPxx and VGETMANTxx instructions generally involves a combination of the GETEXP operation and the GETMANT operation (see VGETMANTPD). Thus, the VGETEXPxx instructions do not require software to handle SIMD floating-point exceptions.

+

Operation + ¶ +

+
// NormalizeExpTinySPFP(SRC[31:0]) is defined in the Operation section of VGETEXPPS
+// ConvertExpSPFP(SRC[31:0]) is defined in the Operation section of VGETEXPPS
+
+

VGETEXPSS (EVEX encoded version) + ¶ +

+
IF k1[0] OR *no writemask*
+    THEN DEST[31:0] :=
+            ConvertExpSPFP(SRC2[31:0])
+    ELSE
+        IF *merging-masking*
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[31:0] := 0
+        FI
+FI;
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VGETEXPSS __m128 _mm_getexp_ss( __m128 a, __m128 b);
+
+
VGETEXPSS __m128 _mm_mask_getexp_ss(__m128 s, __mmask8 k, __m128 a, __m128 b);
+
+
VGETEXPSS __m128 _mm_maskz_getexp_ss( __mmask8 k, __m128 a, __m128 b);
+
+
VGETEXPSS __m128 _mm_getexp_round_ss( __m128 a, __m128 b, int sae);
+
+
VGETEXPSS __m128 _mm_mask_getexp_round_ss(__m128 s, __mmask8 k, __m128 a, __m128 b, int sae);
+
+
VGETEXPSS __m128 _mm_maskz_getexp_round_ss( __mmask8 k, __m128 a, __m128 b, int sae);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Denormal

+

Other Exceptions + ¶ +

+

See Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vgetmantpd.html b/x86/vgetmantpd.html new file mode 100644 index 0000000..5249cf4 --- /dev/null +++ b/x86/vgetmantpd.html @@ -0,0 +1,250 @@ + +VGETMANTPD + — Extract Float64 Vector of Normalized Mantissas From Float64 Vector

VGETMANTPD + — Extract Float64 Vector of Normalized Mantissas From Float64 Vector

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F3A.W1 26 /r ib VGETMANTPD xmm1 {k1}{z}, xmm2/m128/m64bcst, imm8AV/VAVX512VL AVX512FGet Normalized Mantissa from float64 vector xmm2/m128/m64bcst and store the result in xmm1, using imm8 for sign control and mantissa interval normalization, under writemask.
EVEX.256.66.0F3A.W1 26 /r ib VGETMANTPD ymm1 {k1}{z}, ymm2/m256/m64bcst, imm8AV/VAVX512VL AVX512FGet Normalized Mantissa from float64 vector ymm2/m256/m64bcst and store the result in ymm1, using imm8 for sign control and mantissa interval normalization, under writemask.
EVEX.512.66.0F3A.W1 26 /r ib VGETMANTPD zmm1 {k1}{z}, zmm2/m512/m64bcst{sae}, imm8AV/VAVX512FGet Normalized Mantissa from float64 vector zmm2/m512/m64bcst and store the result in zmm1, using imm8 for sign control and mantissa interval normalization, under writemask.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)imm8N/A
+

Description + ¶ +

+

Convert double precision floating-point values in the source operand (the second operand) to double precision floating-point values with the mantissa normalization and sign control specified by the imm8 byte, see Figure 5-15. The converted results are written to the destination operand (the first operand) using writemask k1. The normalized mantissa is specified by interv (imm8[1:0]) and the sign control (sc) is specified by bits 3:2 of the immediate byte.

+

The destination operand is a ZMM/YMM/XMM register updated under the writemask. The source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 64-bit memory location.

+
imm8[7:4]: must be zero. Sign Control (SC), imm8[3:2]: 00b = sign(SRC); 01b = 0; imm8[3] = 1b = qNaN_Indefinite if sign(SRC) != 0, regardless of imm8[2]. Normalization interval, imm8[1:0]: 00b = [1, 2); 01b = [1/2, 2); 10b = [1/2, 1); 11b = [3/4, 3/2).
Figure 5-15. Imm8 Controls for VGETMANTPD/SD/PS/SS
+

For each input double precision floating-point value x, The conversion operation is:

+

GetMant(x) = ±2k|x.significand|

+

where:

+

1 <= |x.significand| < 2

+

Unbiased exponent k can be either 0 or -1, depending on the interval range defined by interv, the range of the significand, and whether the exponent of the source is even or odd. The sign of the final result is determined by sc and the source sign. The encoded value of imm8[1:0] and sign control are shown in Figure 5-15.
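For example, 5.0 = 1.25 * 2^2: with interv = 00b ([1, 2)) the result is 1.25, with interv = 10b ([1/2, 1)) it is 0.625, and with interv = 01b ([1/2, 2)) it is 1.25 because the source exponent (2) is even.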

+

Each converted double precision floating-point result is encoded according to the sign control, the unbiased exponent k (adding bias) and a mantissa normalized to the range specified by interv.

+

The GetMant() function follows Table 5-18 when dealing with floating-point special numbers.

+

This instruction is writemasked, so only those elements with the corresponding bit set in vector mask register k1 are computed and stored into the destination. Elements in zmm1 with the corresponding bit clear in k1 retain their previous values.

+

Note: EVEX.vvvv is reserved and must be 1111b; otherwise instructions will #UD.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
InputResultExceptions / Comments
NaNQNaN(SRC)Ignore interv If (SRC = SNaN) then #IE
+∞1.0Ignore interv
+01.0Ignore interv
-0IF (SC[0]) THEN +1.0 ELSE -1.0Ignore interv
-∞IF (SC[1]) THEN {QNaN_Indefinite} ELSE { IF (SC[0]) THEN +1.0 ELSE -1.0 }Ignore interv If (SC[1]) then #IE
negativeSC[1] ? QNaN_Indefinite : Getmant(SRC)1If (SC[1]) then #IE
+
Table 5-18. GetMant() Special Float Values Behavior
+
+

1. In case SC[1]==0, the sign of Getmant(SRC) is declared according to SC[0].

+

Operation + ¶ +

+
def getmant_fp64(src, sign_control, normalization_interval):
+    bias := 1023
+    dst.sign := sign_control[0] ? 0 : src.sign
+    signed_one := sign_control[0] ? +1.0 : -1.0
+    dst.exp := src.exp
+    dst.fraction := src.fraction
+    zero := (dst.exp = 0) and ((dst.fraction = 0) or (MXCSR.DAZ=1))
+    denormal := (dst.exp = 0) and (dst.fraction != 0) and (MXCSR.DAZ=0)
+    infinity := (dst.exp = 0x7FF) and (dst.fraction = 0)
+    nan := (dst.exp = 0x7FF) and (dst.fraction != 0)
+    src_signaling := src.fraction[51]
+    snan := nan and (src_signaling = 0)
+    positive := (src.sign = 0)
+    negative := (src.sign = 1)
+    if nan:
+        if snan:
+            MXCSR.IE := 1
+        return qnan(src)
+    if positive and (zero or infinity):
+        return 1.0
+    if negative:
+        if zero:
+            return signed_one
+        if infinity:
+            if sign_control[1]:
+                MXCSR.IE := 1
+                return QNaN_Indefinite
+            return signed_one
+        if sign_control[1]:
+            MXCSR.IE := 1
+            return QNaN_Indefinite
+    if denormal:
+        jbit := 0
+        dst.exp := bias
+        while jbit = 0:
+            jbit := dst.fraction[51]
+            dst.fraction := dst.fraction << 1
+            dst.exp := dst.exp - 1
+        MXCSR.DE := 1
+    unbiased_exp := dst.exp - bias
+    odd_exp := unbiased_exp[0]
+    signaling_bit := dst.fraction[51]
+    if normalization_interval = 0b00:
+        dst.exp := bias
+    else if normalization_interval = 0b01:
+        dst.exp := odd_exp ? bias-1 : bias
+    else if normalization_interval = 0b10:
+        dst.exp := bias-1
+    else if normalization_interval = 0b11:
+        dst.exp := signaling_bit ? bias-1 : bias
+    return dst
+
+

VGETMANTPD (EVEX Encoded Versions) + ¶ +

+
VGETMANTPD dest{k1}, src, imm8
+VL = 128, 256, or 512
+KL := VL / 64
+sign_control := imm8[3:2]
+normalization_interval := imm8[1:0]
+FOR i := 0 to KL-1:
+    IF k1[i] or *no writemask*:
+        IF SRC is memory and (EVEX.b = 1):
+            tsrc := src.double[0]
+        ELSE:
+            tsrc := src.double[i]
+        DEST.double[i] := getmant_fp64(tsrc, sign_control, normalization_interval)
+    ELSE IF *zeroing*:
+        DEST.double[i] := 0
+    //else DEST.double[i] remains unchanged
+DEST[MAX_VL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VGETMANTPD __m512d _mm512_getmant_pd( __m512d a, enum intv, enum sgn);
+
+
VGETMANTPD __m512d _mm512_mask_getmant_pd(__m512d s, __mmask8 k, __m512d a, enum intv, enum sgn);
+
+
VGETMANTPD __m512d _mm512_maskz_getmant_pd( __mmask8 k, __m512d a, enum intv, enum sgn);
+
+
VGETMANTPD __m512d _mm512_getmant_round_pd( __m512d a, enum intv, enum sgn, int r);
+
+
VGETMANTPD __m512d _mm512_mask_getmant_round_pd(__m512d s, __mmask8 k, __m512d a, enum intv, enum sgn, int r);
+
+
VGETMANTPD __m512d _mm512_maskz_getmant_round_pd( __mmask8 k, __m512d a, enum intv, enum sgn, int r);
+
+
VGETMANTPD __m256d _mm256_getmant_pd( __m256d a, enum intv, enum sgn);
+
+
VGETMANTPD __m256d _mm256_mask_getmant_pd(__m256d s, __mmask8 k, __m256d a, enum intv, enum sgn);
+
+
VGETMANTPD __m256d _mm256_maskz_getmant_pd( __mmask8 k, __m256d a, enum intv, enum sgn);
+
+
VGETMANTPD __m128d _mm_getmant_pd( __m128d a, enum intv, enum sgn);
+
+
VGETMANTPD __m128d _mm_mask_getmant_pd(__m128d s, __mmask8 k, __m128d a, enum intv, enum sgn);
+
+
VGETMANTPD __m128d _mm_maskz_getmant_pd( __mmask8 k, __m128d a, enum intv, enum sgn);
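As a usage sketch only, the enumerators _MM_MANT_NORM_p5_1 and _MM_MANT_SIGN_zero below are assumed to be the intrinsic-level spellings of interv = 10b and SC = 01b from Figure 5-15:

#include <immintrin.h>

/* Normalize each mantissa into [1/2, 1) and force the sign to positive,
   i.e., imm8[1:0] = 10b and imm8[3:2] = 01b. */
static __m512d mantissas_half_to_one(__m512d x)
{
    return _mm512_getmant_pd(x, _MM_MANT_NORM_p5_1, _MM_MANT_SIGN_zero);
}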
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Denormal, Invalid.

+

Other Exceptions + ¶ +

+

See Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vgetmantph.html b/x86/vgetmantph.html new file mode 100644 index 0000000..6ebd0c1 --- /dev/null +++ b/x86/vgetmantph.html @@ -0,0 +1,224 @@ + +VGETMANTPH + — Extract FP16 Vector of Normalized Mantissas from FP16 Vector

VGETMANTPH + — Extract FP16 Vector of Normalized Mantissas from FP16 Vector

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.NP.0F3A.W0 26 /r /ib VGETMANTPH xmm1{k1}{z}, xmm2/m128/m16bcst, imm8AV/VAVX512-FP16 AVX512VLGet normalized mantissa from FP16 vector xmm2/m128/m16bcst and store the result in xmm1, using imm8 for sign control and mantissa interval normalization, subject to writemask k1.
EVEX.256.NP.0F3A.W0 26 /r /ib VGETMANTPH ymm1{k1}{z}, ymm2/m256/m16bcst, imm8AV/VAVX512-FP16 AVX512VLGet normalized mantissa from FP16 vector ymm2/m256/m16bcst and store the result in ymm1, using imm8 for sign control and mantissa interval normalization, subject to writemask k1.
EVEX.512.NP.0F3A.W0 26 /r /ib VGETMANTPH zmm1{k1}{z}, zmm2/m512/m16bcst {sae}, imm8AV/VAVX512-FP16Get normalized mantissa from FP16 vector zmm2/m512/m16bcst and store the result in zmm1, using imm8 for sign control and mantissa interval normalization, subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)imm8 (r)N/A
+

Description + ¶ +

+

This instruction converts the FP16 values in the source operand (the second operand) to FP16 values with the mantissa normalization and sign control specified by the imm8 byte, see Table 5-19. The converted results are written to the destination operand (the first operand) using writemask k1. The normalized mantissa is specified by interv (imm8[1:0]) and the sign control (SC) is specified by bits 3:2 of the immediate byte.

+

The destination elements are updated according to the writemask.

+
+ + + + + + + + + + + + +
imm8 BitsDefinition
imm8[7:4]Must be zero.
imm8[3:2]Sign Control (SC) 0b00: Sign(SRC) 0b01: 0 0b1x: QNaN_Indefinite if sign(SRC)!=0
imm8[1:0]Interv 0b00: Interval is [1, 2) 0b01: Interval is [1/2, 2) 0b10: Interval is [1/2, 1) 0b11: Interval is [3/4, 3/2)
+
Table 5-19. imm8 Controls for VGETMANTPH/VGETMANTSH
+

For each input FP16 value x, The conversion operation is:

+

GetMant(x) = ±2k|x.significand|

+

where:

+

1 ≤ |x.significand| < 2

+

Unbiased exponent k depends on the interval range defined by interv and whether the exponent of the source is even or odd. The sign of the final result is determined by the sign control and the source sign and the leading fraction bit.

+

The encoded value of imm8[1:0] and sign control are shown in Table 5-19.

+

Each converted FP16 result is encoded according to the sign control, the unbiased exponent k (adding bias) and a mantissa normalized to the range specified by interv.

+

The GetMant() function follows Table 5-20 when dealing with floating-point special numbers.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
InputResultExceptions / Comments
NaNQNaN(SRC)Ignore interv. If (SRC = SNaN), then #IE.
+∞1.0Ignore interv.
+01.0Ignore interv.
-0IF (SC[0]) THEN +1.0 ELSE -1.0Ignore interv.
-∞IF (SC[1]) THEN {QNaN_Indefinite} ELSE { IF (SC[0]) THEN +1.0 ELSE -1.0 }Ignore interv. If (SC[1]), then #IE.
negativeSC[1] ? QNaN_Indefinite : Getmant(SRC)1If (SC[1]), then #IE.
+
Table 5-20. GetMant() Special Float Values Behavior
+
+

1. In case SC[1]==0, the sign of Getmant(SRC) is declared according to SC[0].

+

Operation + ¶ +

+
def getmant_fp16(src, sign_control, normalization_interval):
+    bias := 15
+    dst.sign := sign_control[0] ? 0 : src.sign
+    signed_one := sign_control[0] ? +1.0 : -1.0
+    dst.exp := src.exp
+    dst.fraction := src.fraction
+    zero := (dst.exp = 0) and (dst.fraction = 0)
+    denormal := (dst.exp = 0) and (dst.fraction != 0)
+    infinity := (dst.exp = 0x1F) and (dst.fraction = 0)
+    nan := (dst.exp = 0x1F) and (dst.fraction != 0)
+    src_signaling := src.fraction[9]
+    snan := nan and (src_signaling = 0)
+    positive := (src.sign = 0)
+    negative := (src.sign = 1)
+    if nan:
+        if snan:
+            MXCSR.IE := 1
+        return qnan(src)
+    if positive and (zero or infinity):
+        return 1.0
+    if negative:
+        if zero:
+            return signed_one
+        if infinity:
+            if sign_control[1]:
+                MXCSR.IE := 1
+                return QNaN_Indefinite
+            return signed_one
+        if sign_control[1]:
+            MXCSR.IE := 1
+            return QNaN_Indefinite
+    if denormal:
+        jbit := 0
+        dst.exp := bias // set exponent to bias value
+        while jbit = 0:
+            jbit := dst.fraction[9]
+            dst.fraction := dst.fraction << 1
+            dst.exp := dst.exp - 1
+        MXCSR.DE := 1
+    unbiased_exp := dst.exp - bias
+    odd_exp := unbiased_exp[0]
+    signaling_bit := dst.fraction[9]
+    if normalization_interval = 0b00:
+        dst.exp := bias
+    else if normalization_interval = 0b01:
+        dst.exp := odd_exp ? bias-1 : bias
+    else if normalization_interval = 0b10:
+        dst.exp := bias-1
+    else if normalization_interval = 0b11:
+        dst.exp := signaling_bit ? bias-1 : bias
+    return dst
+
+

VGETMANTPH dest{k1}, src, imm8 + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+sign_control := imm8[3:2]
+normalization_interval := imm8[1:0]
+FOR i := 0 to KL-1:
+    IF k1[i] or *no writemask*:
+        IF SRC is memory and (EVEX.b = 1):
+            tsrc := src.fp16[0]
+        ELSE:
+            tsrc := src.fp16[i]
+        DEST.fp16[i] := getmant_fp16(tsrc, sign_control, normalization_interval)
+    ELSE IF *zeroing*:
+        DEST.fp16[i] := 0
+    //else DEST.fp16[i] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VGETMANTPH __m128h _mm_getmant_ph (__m128h a, _MM_MANTISSA_NORM_ENUM norm, _MM_MANTISSA_SIGN_ENUM sign);
+
+
VGETMANTPH __m128h _mm_mask_getmant_ph (__m128h src, __mmask8 k, __m128h a, _MM_MANTISSA_NORM_ENUM norm, _MM_MANTISSA_SIGN_ENUM sign);
+
+
VGETMANTPH __m128h _mm_maskz_getmant_ph (__mmask8 k, __m128h a, _MM_MANTISSA_NORM_ENUM norm, _MM_MANTISSA_SIGN_ENUM sign);
+
+
VGETMANTPH __m256h _mm256_getmant_ph (__m256h a, _MM_MANTISSA_NORM_ENUM norm, _MM_MANTISSA_SIGN_ENUM sign);
+
+
VGETMANTPH __m256h _mm256_mask_getmant_ph (__m256h src, __mmask16 k, __m256h a, _MM_MANTISSA_NORM_ENUM norm, _MM_MANTISSA_SIGN_ENUM sign);
+
+
VGETMANTPH __m256h _mm256_maskz_getmant_ph (__mmask16 k, __m256h a, _MM_MANTISSA_NORM_ENUM norm, _MM_MANTISSA_SIGN_ENUM sign);
+
+
VGETMANTPH __m512h _mm512_getmant_ph (__m512h a, _MM_MANTISSA_NORM_ENUM norm, _MM_MANTISSA_SIGN_ENUM sign);
+
+
VGETMANTPH __m512h _mm512_mask_getmant_ph (__m512h src, __mmask32 k, __m512h a, _MM_MANTISSA_NORM_ENUM norm, _MM_MANTISSA_SIGN_ENUM sign);
+
+
VGETMANTPH __m512h _mm512_maskz_getmant_ph (__mmask32 k, __m512h a, _MM_MANTISSA_NORM_ENUM norm, _MM_MANTISSA_SIGN_ENUM sign);
+
+
VGETMANTPH __m512h _mm512_getmant_round_ph (__m512h a, _MM_MANTISSA_NORM_ENUM norm, _MM_MANTISSA_SIGN_ENUM sign, const int sae);
+
+
VGETMANTPH __m512h _mm512_mask_getmant_round_ph (__m512h src, __mmask32 k, __m512h a, _MM_MANTISSA_NORM_ENUM norm, _MM_MANTISSA_SIGN_ENUM sign, const int sae);
+
+
VGETMANTPH __m512h _mm512_maskz_getmant_round_ph (__mmask32 k, __m512h a, _MM_MANTISSA_NORM_ENUM norm, _MM_MANTISSA_SIGN_ENUM sign, const int sae);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vgetmantps.html b/x86/vgetmantps.html new file mode 100644 index 0000000..0670c23 --- /dev/null +++ b/x86/vgetmantps.html @@ -0,0 +1,181 @@ + +VGETMANTPS + — Extract Float32 Vector of Normalized Mantissas From Float32 Vector

VGETMANTPS + — Extract Float32 Vector of Normalized Mantissas From Float32 Vector

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F3A.W0 26 /r ib VGETMANTPS xmm1 {k1}{z}, xmm2/m128/m32bcst, imm8AV/VAVX512VL AVX512FGet normalized mantissa from float32 vector xmm2/m128/m32bcst and store the result in xmm1, using imm8 for sign control and mantissa interval normalization, under writemask.
EVEX.256.66.0F3A.W0 26 /r ib VGETMANTPS ymm1 {k1}{z}, ymm2/m256/m32bcst, imm8AV/VAVX512VL AVX512FGet normalized mantissa from float32 vector ymm2/m256/m32bcst and store the result in ymm1, using imm8 for sign control and mantissa interval normalization, under writemask.
EVEX.512.66.0F3A.W0 26 /r ib VGETMANTPS zmm1 {k1}{z}, zmm2/m512/m32bcst{sae}, imm8AV/VAVX512FGet normalized mantissa from float32 vector zmm2/m512/m32bcst and store the result in zmm1, using imm8 for sign control and mantissa interval normalization, under writemask.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)imm8N/A
+

Description + ¶ +

+

Convert single-precision floating-point values in the source operand (the second operand) to single-precision floating-point values with the mantissa normalization and sign control specified by the imm8 byte, see Figure 5-15. The converted results are written to the destination operand (the first operand) using writemask k1. The normalized mantissa is specified by interv (imm8[1:0]) and the sign control (sc) is specified by bits 3:2 of the immediate byte.

+

The destination operand is a ZMM/YMM/XMM register updated under the writemask. The source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 32-bit memory location.

+

For each input single-precision floating-point value x, The conversion operation is:

+

GetMant(x) = ±2k|x.significand|

+

where:

+

1 <= |x.significand| < 2

+

Unbiased exponent k can be either 0 or -1, depending on the interval range defined by interv, the range of the significand and whether the exponent of the source is even or odd. The sign of the final result is determined by sc and the source sign. The encoded value of imm8[1:0] and sign control are shown in Figure 5-15.

+

Each converted single-precision floating-point result is encoded according to the sign control, the unbiased exponent k (adding bias) and a mantissa normalized to the range specified by interv.

+

The GetMant() function follows Table 5-18 when dealing with floating-point special numbers.

+

This instruction is writemasked, so only those elements with the corresponding bit set in vector mask register k1 are computed and stored into the destination. Elements in zmm1 with the corresponding bit clear in k1 retain their previous values.

+

Note: EVEX.vvvv is reserved and must be 1111b, VEX.L must be 0; otherwise instructions will #UD.

+

Operation + ¶ +

+
def getmant_fp32(src, sign_control, normalization_interval):
+    bias := 127
+    dst.sign := sign_control[0] ? 0 : src.sign
+    signed_one := sign_control[0] ? +1.0 : -1.0
+    dst.exp := src.exp
+    dst.fraction := src.fraction
+    zero := (dst.exp = 0) and ((dst.fraction = 0) or (MXCSR.DAZ=1))
+    denormal := (dst.exp = 0) and (dst.fraction != 0) and (MXCSR.DAZ=0)
+    infinity := (dst.exp = 0xFF) and (dst.fraction = 0)
+    nan := (dst.exp = 0xFF) and (dst.fraction != 0)
+    src_signaling := src.fraction[22]
+    snan := nan and (src_signaling = 0)
+    positive := (src.sign = 0)
+    negative := (src.sign = 1)
+    if nan:
+        if snan:
+            MXCSR.IE := 1
+        return qnan(src)
+    if positive and (zero or infinity):
+        return 1.0
+    if negative:
+        if zero:
+            return signed_one
+        if infinity:
+            if sign_control[1]:
+                MXCSR.IE := 1
+                return QNaN_Indefinite
+            return signed_one
+        if sign_control[1]:
+            MXCSR.IE := 1
+            return QNaN_Indefinite
+    if denormal:
+        jbit := 0
+        dst.exp := bias
+        while jbit = 0:
+            jbit := dst.fraction[22]
+            dst.fraction := dst.fraction << 1
+            dst.exp := dst.exp - 1
+        MXCSR.DE := 1
+    unbiased_exp := dst.exp - bias
+    odd_exp := unbiased_exp[0]
+    signaling_bit := dst.fraction[22]
+    if normalization_interval = 0b00:
+        dst.exp := bias
+    else if normalization_interval = 0b01:
+        dst.exp := odd_exp ? bias-1 : bias
+    else if normalization_interval = 0b10:
+        dst.exp := bias-1
+    else if normalization_interval = 0b11:
+        dst.exp := signaling_bit ? bias-1 : bias
+    return dst
+
+

VGETMANTPS (EVEX encoded versions) + ¶ +

+
VGETMANTPS dest{k1}, src, imm8
+VL = 128, 256, or 512
+KL := VL / 32
+sign_control := imm8[3:2]
+normalization_interval := imm8[1:0]
+FOR i := 0 to KL-1:
+    IF k1[i] or *no writemask*:
+        IF SRC is memory and (EVEX.b = 1):
+            tsrc := src.float[0]
+        ELSE:
+            tsrc := src.float[i]
+        DEST.float[i] := getmant_fp32(tsrc, sign_control, normalization_interval)
+    ELSE IF *zeroing*:
+        DEST.float[i] := 0
+    //else DEST.float[i] remains unchanged
+DEST[MAX_VL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VGETMANTPS __m512 _mm512_getmant_ps( __m512 a, enum intv, enum sgn);
+
+
VGETMANTPS __m512 _mm512_mask_getmant_ps(__m512 s, __mmask16 k, __m512 a, enum intv, enum sgn);
+
+
VGETMANTPS __m512 _mm512_maskz_getmant_ps(__mmask16 k, __m512 a, enum intv, enum sgn);
+
+
VGETMANTPS __m512 _mm512_getmant_round_ps( __m512 a, enum intv, enum sgn, int r);
+
+
VGETMANTPS __m512 _mm512_mask_getmant_round_ps(__m512 s, __mmask16 k, __m512 a, enum intv, enum sgn, int r);
+
+
VGETMANTPS __m512 _mm512_maskz_getmant_round_ps(__mmask16 k, __m512 a, enum intv, enum sgn, int r);
+
+
VGETMANTPS __m256 _mm256_getmant_ps( __m256 a, enum intv, enum sgn);
+
+
VGETMANTPS __m256 _mm256_mask_getmant_ps(__m256 s, __mmask8 k, __m256 a, enum intv, enum sgn);
+
+
VGETMANTPS __m256 _mm256_maskz_getmant_ps( __mmask8 k, __m256 a, enum intv, enum sgn);
+
+
VGETMANTPS __m128 _mm_getmant_ps( __m128 a, enum intv, enum sgn);
+
+
VGETMANTPS __m128 _mm_mask_getmant_ps(__m128 s, __mmask8 k, __m128 a, enum intv, enum sgn);
+
+
VGETMANTPS __m128 _mm_maskz_getmant_ps( __mmask8 k, __m128 a, enum intv, enum sgn);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Denormal, Invalid.

+

Other Exceptions + ¶ +

+

See Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + +
#UD If EVEX.vvvv != 1111B.
diff --git a/x86/vgetmantsd.html b/x86/vgetmantsd.html new file mode 100644 index 0000000..6aaa5ac --- /dev/null +++ b/x86/vgetmantsd.html @@ -0,0 +1,98 @@ + +VGETMANTSD + — Extract Float64 of Normalized Mantissa From Float64 Scalar

VGETMANTSD + — Extract Float64 of Normalized Mantissa From Float64 Scalar

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.0F3A.W1 27 /r ib VGETMANTSD xmm1 {k1}{z}, xmm2, xmm3/m64{sae}, imm8AV/VAVX512FExtract the normalized mantissa of the low float64 element in xmm3/m64 using imm8 for sign control and mantissa interval normalization. Store the mantissa to xmm1 under the writemask k1 and merge with the other elements of xmm2.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Convert the double precision floating-point value in the low quadword element of the second source operand (the third operand) to a double precision floating-point value with the mantissa normalization and sign control specified by the imm8 byte, see Figure 5-15. The converted result is written to the low quadword element of the destination operand (the first operand) using writemask k1. Bits (127:64) of the XMM register destination are copied from corresponding bits in the first source operand. The normalized mantissa is specified by interv (imm8[1:0]) and the sign control (sc) is specified by bits 3:2 of the immediate byte.

+

The conversion operation is:

+

GetMant(x) = ±2^k |x.significand|

+

where:

+

1 <= |x.significand| < 2

+

Unbiased exponent k can be either 0 or -1, depending on the interval range defined by interv, the range of the significand and whether the exponent of the source is even or odd. The sign of the final result is determined by sc and the source sign. The encoded value of imm8[1:0] and sign control are shown in Figure 5-15.

+

The converted double precision floating-point result is encoded according to the sign control, the unbiased exponent k (adding bias) and a mantissa normalized to the range specified by interv.

+

The GetMant() function follows Table 5-18 when dealing with floating-point special numbers.

+

If writemasking is used, the low quadword element of the destination operand is conditionally updated depending on the value of writemask register k1. If writemasking is not used, the low quadword element of the destination operand is unconditionally updated.
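
A minimal sketch of the unmasked scalar form using the _mm_getmant_sd intrinsic listed below (AVX-512F assumed): the low result element is GetMant of the second source's low element, and the upper element is carried over from the first source.

#include <immintrin.h>

/* GetMant of -24.0 = -1.5 * 2^4: with interval [1,2) and the sign taken from
   the source, the low result element is -1.5; the upper element comes from a. */
double getmant_sd_demo(void) {
    __m128d a = _mm_set_pd(99.0, 0.0);   /* a[1] = 99.0 is carried through */
    __m128d b = _mm_set_sd(-24.0);
    __m128d r = _mm_getmant_sd(a, b, _MM_MANT_NORM_1_2, _MM_MANT_SIGN_src);
    return _mm_cvtsd_f64(r);             /* -1.5 */
}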

+

Operation + ¶ +

+
// getmant_fp64(src, sign_control, normalization_interval) is defined in the operation section of VGETMANTPD
+
+

VGETMANTSD (EVEX encoded version) + ¶ +

+
SignCtrl[1:0] := IMM8[3:2];
+Interv[1:0] := IMM8[1:0];
+IF k1[0] OR *no writemask*
+    THEN DEST[63:0] :=
+            getmant_fp64(SRC2[63:0], SignCtrl, Interv)
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[63:0] := 0
+        FI
+FI;
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VGETMANTSD __m128d _mm_getmant_sd( __m128d a, __m128 b, enum intv, enum sgn);
+
+
VGETMANTSD __m128d _mm_mask_getmant_sd(__m128d s, __mmask8 k, __m128d a, __m128d b, enum intv, enum sgn);
+
+
VGETMANTSD __m128d _mm_maskz_getmant_sd( __mmask8 k, __m128 a, __m128d b, enum intv, enum sgn);
+
+
VGETMANTSD __m128d _mm_getmant_round_sd( __m128d a, __m128 b, enum intv, enum sgn, int r);
+
+
VGETMANTSD __m128d _mm_mask_getmant_round_sd(__m128d s, __mmask8 k, __m128d a, __m128d b, enum intv, enum sgn, int r);
+
+
VGETMANTSD __m128d _mm_maskz_getmant_round_sd( __mmask8 k, __m128d a, __m128d b, enum intv, enum sgn, int r);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Denormal, Invalid

+

Other Exceptions + ¶ +

+

See Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vgetmantsh.html b/x86/vgetmantsh.html new file mode 100644 index 0000000..da92335 --- /dev/null +++ b/x86/vgetmantsh.html @@ -0,0 +1,96 @@ + +VGETMANTSH + — Extract FP16 of Normalized Mantissa from FP16 Scalar

VGETMANTSH + — Extract FP16 of Normalized Mantissa from FP16 Scalar

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.NP.0F3A.W0 27 /r /ib VGETMANTSH xmm1{k1}{z}, xmm2, xmm3/m16 {sae}, imm8AV/VAVX512-FP16Extract the normalized mantissa of the low FP16 element in xmm3/m16 using imm8 for sign control and mantissa interval normalization. Store the mantissa to xmm1 subject to writemask k1 and merge with the other elements of xmm2. Bits 127:16 of xmm2 are copied to xmm1[127:16].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8 (r)
+

Description + ¶ +

+

This instruction converts the FP16 value in the low element of the second source operand to FP16 values with the mantissa normalization and sign control specified by the imm8 byte, see Table 5-19. The converted result is written to the low element of the destination operand using writemask k1. The normalized mantissa is specified by interv (imm8[1:0]) and the sign control (SC) is specified by bits 3:2 of the immediate byte.

+

Bits 127:16 of the destination operand are copied from the corresponding bits of the first source operand. Bits MAXVL-1:128 of the destination operand are zeroed. The low FP16 element of the destination is updated according to the writemask.

+

For each input FP16 value x, the conversion operation is:

+

GetMant(x) = ±2^k |x.significand|

+

where:

+

1 ≤ |x.significand| < 2

+

Unbiased exponent k depends on the interval range defined by interv and whether the exponent of the source is even or odd. The sign of the final result is determined by the sign control and the source sign and the leading fraction bit.

+

The encoded value of imm8[1:0] and sign control are shown in Table 5-19.

+

Each converted FP16 result is encoded according to the sign control, the unbiased exponent k (adding bias) and a mantissa normalized to the range specified by interv.

+

The GetMant() function follows Table 5-20 when dealing with floating-point special numbers.
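
A short sketch with the scalar FP16 intrinsics listed further below. It assumes AVX512-FP16 hardware and a compiler with _Float16 support; the numeric values are illustrative only.

#include <immintrin.h>

/* 6.0 = 1.5 * 2^2, so with interval [1,2) and the sign taken from the source
   the low FP16 result element is 1.5; the upper elements come from a.         */
_Float16 getmant_sh_demo(void) {
    __m128h a = _mm_set_sh((_Float16)1.0f);
    __m128h b = _mm_set_sh((_Float16)6.0f);
    __m128h r = _mm_getmant_sh(a, b, _MM_MANT_NORM_1_2, _MM_MANT_SIGN_src);
    return _mm_cvtsh_h(r);   /* 1.5 */
}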

+

Operation + ¶ +

+

VGETMANTSH dest{k1}, src1, src2, imm8 + ¶ +

+
sign_control := imm8[3:2]
+normalization_interval := imm8[1:0]
+IF k1[0] or *no writemask*:
+    dest.fp16[0] := getmant_fp16(src2.fp16[0],
+        sign_control,       // see VGETMANTPH
+        normalization_interval)
+ELSE IF *zeroing*:
+    dest.fp16[0] := 0
+//else dest.fp16[0] remains unchanged
+DEST[127:16] := src1[127:16]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VGETMANTSH __m128h _mm_getmant_round_sh (__m128h a, __m128h b, _MM_MANTISSA_NORM_ENUM norm, _MM_MANTISSA_SIGN_ENUM sign, const int sae);
+
+
VGETMANTSH __m128h _mm_mask_getmant_round_sh (__m128h src, __mmask8 k, __m128h a, __m128h b, _MM_MANTISSA_NORM_ENUM norm, _MM_MANTISSA_SIGN_ENUM sign, const int sae);
+
+
VGETMANTSH __m128h _mm_maskz_getmant_round_sh (__mmask8 k, __m128h a, __m128h b, _MM_MANTISSA_NORM_ENUM norm, _MM_MANTISSA_SIGN_ENUM sign, const int sae);
+
+
VGETMANTSH __m128h _mm_getmant_sh (__m128h a, __m128h b, _MM_MANTISSA_NORM_ENUM norm, _MM_MANTISSA_SIGN_ENUM sign);
+
+
VGETMANTSH __m128h _mm_mask_getmant_sh (__m128h src, __mmask8 k, __m128h a, __m128h b, _MM_MANTISSA_NORM_ENUM norm, _MM_MANTISSA_SIGN_ENUM sign);
+
+
VGETMANTSH __m128h _mm_maskz_getmant_sh (__mmask8 k, __m128h a, __m128h b, _MM_MANTISSA_NORM_ENUM norm, _MM_MANTISSA_SIGN_ENUM sign);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Denormal

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vgetmantss.html b/x86/vgetmantss.html new file mode 100644 index 0000000..1076e73 --- /dev/null +++ b/x86/vgetmantss.html @@ -0,0 +1,98 @@ + +VGETMANTSS + — Extract Float32 Vector of Normalized Mantissa From Float32 Scalar

VGETMANTSS + — Extract Float32 Vector of Normalized Mantissa From Float32 Scalar

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.0F3A.W0 27 /r ib VGETMANTSS xmm1 {k1}{z}, xmm2, xmm3/m32{sae}, imm8AV/VAVX512FExtract the normalized mantissa from the low float32 element of xmm3/m32 using imm8 for sign control and mantissa interval normalization, store the mantissa to xmm1 under the writemask k1 and merge with the other elements of xmm2.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Convert the single-precision floating-point value in the low doubleword element of the second source operand (the third operand) to a single-precision floating-point value with the mantissa normalization and sign control specified by the imm8 byte, see Figure 5-15. The converted result is written to the low doubleword element of the destination operand (the first operand) using writemask k1. Bits (127:32) of the XMM register destination are copied from corresponding bits in the first source operand. The normalized mantissa is specified by interv (imm8[1:0]) and the sign control (sc) is specified by bits 3:2 of the immediate byte.

+

The conversion operation is:

+

GetMant(x) = ±2^k |x.significand|

+

where:

+

1 <= |x.significand| < 2

+

Unbiased exponent k can be either 0 or -1, depending on the interval range defined by interv, the range of the significand and whether the exponent of the source is even or odd. The sign of the final result is determined by sc and the source sign. The encoded value of imm8[1:0] and sign control are shown in Figure 5-15.

+

The converted single-precision floating-point result is encoded according to the sign control, the unbiased exponent k (adding bias) and a mantissa normalized to the range specified by interv.

+

The GetMant() function follows Table 5-18 when dealing with floating-point special numbers.

+

If writemasking is used, the low doubleword element of the destination operand is conditionally updated depending on the value of writemask register k1. If writemasking is not used, the low doubleword element of the destination operand is unconditionally updated.
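
The merging masked form listed below makes this conditional update concrete. A hedged sketch (AVX-512F assumed): when the low mask bit is clear, the low result element is taken from the s operand; the upper elements always come from a.

#include <immintrin.h>

__m128 getmant_ss_masked(__m128 s, __m128 a, __m128 b, __mmask8 k) {
    /* Low element: GetMant(b[0]) if k[0] = 1, otherwise s[0] (merging form).
       Elements 1..3 are always copied from a.                                 */
    return _mm_mask_getmant_ss(s, k, a, b, _MM_MANT_NORM_1_2, _MM_MANT_SIGN_src);
}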

+

Operation + ¶ +

+
// getmant_fp32(src, sign_control, normalization_interval) is defined in the operation section of VGETMANTPS
+
+

VGETMANTSS (EVEX encoded version) + ¶ +

+
SignCtrl[1:0] := IMM8[3:2];
+Interv[1:0] := IMM8[1:0];
+IF k1[0] OR *no writemask*
+    THEN DEST[31:0] :=
+            getmant_fp32(SRC2[31:0], SignCtrl, Interv)
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[31:0] := 0
+        FI
+FI;
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VGETMANTSS __m128 _mm_getmant_ss( __m128 a, __m128 b, enum intv, enum sgn);
+
+
VGETMANTSS __m128 _mm_mask_getmant_ss(__m128 s, __mmask8 k, __m128 a, __m128 b, enum intv, enum sgn);
+
+
VGETMANTSS __m128 _mm_maskz_getmant_ss( __mmask8 k, __m128 a, __m128 b, enum intv, enum sgn);
+
+
VGETMANTSS __m128 _mm_getmant_round_ss( __m128 a, __m128 b, enum intv, enum sgn, int r);
+
+
VGETMANTSS __m128 _mm_mask_getmant_round_ss(__m128 s, __mmask8 k, __m128 a, __m128 b, enum intv, enum sgn, int r);
+
+
VGETMANTSS __m128 _mm_maskz_getmant_round_ss( __mmask8 k, __m128 a, __m128 b, enum intv, enum sgn, int r);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Denormal, Invalid

+

Other Exceptions + ¶ +

+

See Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vinsertf128.vinsertf32x4.vinsertf64x2.vinsertf32x8.vinsertf64x4.html b/x86/vinsertf128.vinsertf32x4.vinsertf64x2.vinsertf32x8.vinsertf64x4.html new file mode 100644 index 0000000..c7d824e --- /dev/null +++ b/x86/vinsertf128.vinsertf32x4.vinsertf64x2.vinsertf32x8.vinsertf64x4.html @@ -0,0 +1,294 @@ + +VINSERTF128/VINSERTF32x4/VINSERTF64x2/VINSERTF32x8/VINSERTF64x4 + — Insert PackedFloating-Point Values

VINSERTF128/VINSERTF32x4/VINSERTF64x2/VINSERTF32x8/VINSERTF64x4 + — Insert PackedFloating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.256.66.0F3A.W0 18 /r ib VINSERTF128 ymm1, ymm2, xmm3/m128, imm8AV/VAVXInsert 128 bits of packed floating-point values from xmm3/m128 and the remaining values from ymm2 into ymm1.
EVEX.256.66.0F3A.W0 18 /r ib VINSERTF32X4 ymm1 {k1}{z}, ymm2, xmm3/m128, imm8CV/VAVX512VL AVX512FInsert 128 bits of packed single-precision floating-point values from xmm3/m128 and the remaining values from ymm2 into ymm1 under writemask k1.
EVEX.512.66.0F3A.W0 18 /r ib VINSERTF32X4 zmm1 {k1}{z}, zmm2, xmm3/m128, imm8CV/VAVX512FInsert 128 bits of packed single-precision floating-point values from xmm3/m128 and the remaining values from zmm2 into zmm1 under writemask k1.
EVEX.256.66.0F3A.W1 18 /r ib VINSERTF64X2 ymm1 {k1}{z}, ymm2, xmm3/m128, imm8BV/VAVX512VL AVX512DQInsert 128 bits of packed double precision floating-point values from xmm3/m128 and the remaining values from ymm2 into ymm1 under writemask k1.
EVEX.512.66.0F3A.W1 18 /r ib VINSERTF64X2 zmm1 {k1}{z}, zmm2, xmm3/m128, imm8BV/VAVX512DQInsert 128 bits of packed double precision floating-point values from xmm3/m128 and the remaining values from zmm2 into zmm1 under writemask k1.
EVEX.512.66.0F3A.W0 1A /r ib VINSERTF32X8 zmm1 {k1}{z}, zmm2, ymm3/m256, imm8DV/VAVX512DQInsert 256 bits of packed single-precision floating-point values from ymm3/m256 and the remaining values from zmm2 into zmm1 under writemask k1.
EVEX.512.66.0F3A.W1 1A /r ib VINSERTF64X4 zmm1 {k1}{z}, zmm2, ymm3/m256, imm8CV/VAVX512FInsert 256 bits of packed double precision floating-point values from ymm3/m256 and the remaining values from zmm2 into zmm1 under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8
BTuple2ModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8
CTuple4ModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8
DTuple8ModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

VINSERTF128/VINSERTF32x4 and VINSERTF64x2 insert 128 bits of packed floating-point values from the second source operand (the third operand) into the destination operand (the first operand) at a 128-bit granular offset specified by imm8[0] (256-bit destination) or imm8[1:0] (512-bit destination). The remaining portions of the destination operand are copied from the corresponding fields of the first source operand (the second operand). The second source operand can be either an XMM register or a 128-bit memory location. The destination and first source operands are vector registers.

+

VINSERTF32x4: The destination operand is a ZMM/YMM register and updated at 32-bit granularity according to the writemask. The high 6/7 bits of the immediate are ignored.

+

VINSERTF64x2: The destination operand is a ZMM/YMM register and updated at 64-bit granularity according to the writemask. The high 6/7 bits of the immediate are ignored.

+

VINSERTF32x8 and VINSERTF64x4 insert 256 bits of packed floating-point values from the second source operand (the third operand) into the destination operand (the first operand) at a 256-bit granular offset specified by imm8[0]. The remaining portions of the destination are copied from the corresponding fields of the first source operand (the second operand). The second source operand can be either a YMM register or a 256-bit memory location. The high 7 bits of the immediate are ignored. The destination operand is a ZMM register and updated at 32/64-bit granularity according to the writemask.
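
A common use is assembling a wide vector from 128-bit pieces. The sketch below relies on the _mm512_insertf32x4 and _mm256_insertf128_ps intrinsics listed later on this page (AVX-512F assumed for the 512-bit form, AVX for the 256-bit form); the function names are illustrative.

#include <immintrin.h>

/* Build a 512-bit vector from four 128-bit lanes with VINSERTF32x4. */
__m512 gather_lanes(__m128 p0, __m128 p1, __m128 p2, __m128 p3) {
    __m512 v = _mm512_castps128_ps512(p0);   /* lane 0; upper lanes undefined until inserted */
    v = _mm512_insertf32x4(v, p1, 1);
    v = _mm512_insertf32x4(v, p2, 2);
    v = _mm512_insertf32x4(v, p3, 3);
    return v;
}

/* The 256-bit VEX form: place 'hi' into the upper half of a YMM value. */
__m256 join_halves(__m128 lo, __m128 hi) {
    return _mm256_insertf128_ps(_mm256_castps128_ps256(lo), hi, 1);
}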

+

Operation + ¶ +

+

VINSERTF32x4 (EVEX encoded versions) + ¶ +

+
(KL, VL) = (8, 256), (16, 512)
+TMP_DEST[VL-1:0] := SRC1[VL-1:0]
+IF VL = 256
+    CASE (imm8[0]) OF
+        0: TMP_DEST[127:0] := SRC2[127:0]
+        1: TMP_DEST[255:128] := SRC2[127:0]
+    ESAC.
+FI;
+IF VL = 512
+    CASE (imm8[1:0]) OF
+        00: TMP_DEST[127:0]:=SRC2[127:0]
+        01: TMP_DEST[255:128]:=SRC2[127:0]
+        10: TMP_DEST[383:256]:=SRC2[127:0]
+        11: TMP_DEST[511:384]:=SRC2[127:0]
+    ESAC.
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VINSERTF64x2 (EVEX encoded versions) + ¶ +

+
(KL, VL) = (4, 256), (8, 512)
+TMP_DEST[VL-1:0] := SRC1[VL-1:0]
+IF VL = 256
+    CASE (imm8[0]) OF
+        0: TMP_DEST[127:0] := SRC2[127:0]
+        1: TMP_DEST[255:128] := SRC2[127:0]
+    ESAC.
+FI;
+IF VL = 512
+    CASE (imm8[1:0]) OF
+        00: TMP_DEST[127:0]:=SRC2[127:0]
+        01: TMP_DEST[255:128]:=SRC2[127:0]
+        10: TMP_DEST[383:256]:=SRC2[127:0]
+        11: TMP_DEST[511:384]:=SRC2[127:0]
+    ESAC.
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VINSERTF32x8 (EVEX.U1.512 encoded version) + ¶ +

+
+TMP_DEST[VL-1:0] := SRC1[VL-1:0]
+CASE (imm8[0]) OF
+    0: TMP_DEST[255:0] := SRC2[255:0]
+    1: TMP_DEST[511:256] := SRC2[255:0]
+ESAC.
+FOR j := 0 TO 15
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VINSERTF64x4 (EVEX.512 encoded version) + ¶ +

+
VL = 512
+TMP_DEST[VL-1:0] := SRC1[VL-1:0]
+CASE (imm8[0]) OF
+    0: TMP_DEST[255:0] := SRC2[255:0]
+    1: TMP_DEST[511:256] := SRC2[255:0]
+ESAC.
+FOR j := 0 TO 7
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VINSERTF128 (VEX encoded version) + ¶ +

+
TEMP[255:0] := SRC1[255:0]
+CASE (imm8[0]) OF
+    0: TEMP[127:0] := SRC2[127:0]
+    1: TEMP[255:128] := SRC2[127:0]
+ESAC
+DEST := TEMP
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VINSERTF32x4 __m512 _mm512_insertf32x4( __m512 a, __m128 b, int imm);
+
+
VINSERTF32x4 __m512 _mm512_mask_insertf32x4(__m512 s, __mmask16 k, __m512 a, __m128 b, int imm);
+
+
VINSERTF32x4 __m512 _mm512_maskz_insertf32x4( __mmask16 k, __m512 a, __m128 b, int imm);
+
+
VINSERTF32x4 __m256 _mm256_insertf32x4( __m256 a, __m128 b, int imm);
+
+
VINSERTF32x4 __m256 _mm256_mask_insertf32x4(__m256 s, __mmask8 k, __m256 a, __m128 b, int imm);
+
+
VINSERTF32x4 __m256 _mm256_maskz_insertf32x4( __mmask8 k, __m256 a, __m128 b, int imm);
+
+
VINSERTF32x8 __m512 _mm512_insertf32x8( __m512 a, __m256 b, int imm);
+
+
VINSERTF32x8 __m512 _mm512_mask_insertf32x8(__m512 s, __mmask16 k, __m512 a, __m256 b, int imm);
+
+
VINSERTF32x8 __m512 _mm512_maskz_insertf32x8( __mmask16 k, __m512 a, __m256 b, int imm);
+
+
VINSERTF64x2 __m512d _mm512_insertf64x2( __m512d a, __m128d b, int imm);
+
+
VINSERTF64x2 __m512d _mm512_mask_insertf64x2(__m512d s, __mmask8 k, __m512d a, __m128d b, int imm);
+
+
VINSERTF64x2 __m512d _mm512_maskz_insertf64x2( __mmask8 k, __m512d a, __m128d b, int imm);
+
+
VINSERTF64x2 __m256d _mm256_insertf64x2( __m256d a, __m128d b, int imm);
+
+
VINSERTF64x2 __m256d _mm256_mask_insertf64x2(__m256d s, __mmask8 k, __m256d a, __m128d b, int imm);
+
+
VINSERTF64x2 __m256d _mm256_maskz_insertf64x2( __mmask8 k, __m256d a, __m128d b, int imm);
+
+
VINSERTF64x4 __m512d _mm512_insertf64x4( __m512d a, __m256d b, int imm);
+
+
VINSERTF64x4 __m512d _mm512_mask_insertf64x4(__m512d s, __mmask8 k, __m512d a, __m256d b, int imm);
+
+
VINSERTF64x4 __m512d _mm512_maskz_insertf64x4( __mmask8 k, __m512d a, __m256d b, int imm);
+
+
VINSERTF128 __m256 _mm256_insertf128_ps (__m256 a, __m128 b, int offset);
+
+
VINSERTF128 __m256d _mm256_insertf128_pd (__m256d a, __m128d b, int offset);
+
+
VINSERTF128 __m256i _mm256_insertf128_si256 (__m256i a, __m128i b, int offset);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None

+

Other Exceptions + ¶ +

+

VEX-encoded instruction, see Table 2-23, “Type 6 Class Exception Conditions.”

+

Additionally:

+ + + +
#UD If VEX.L = 0.
+

EVEX-encoded instruction, see Table 2-54, “Type E6NF Class Exception Conditions.”

diff --git a/x86/vinserti128.vinserti32x4.vinserti64x2.vinserti32x8.vinserti64x4.html b/x86/vinserti128.vinserti32x4.vinserti64x2.vinserti32x8.vinserti64x4.html new file mode 100644 index 0000000..27f3efe --- /dev/null +++ b/x86/vinserti128.vinserti32x4.vinserti64x2.vinserti32x8.vinserti64x4.html @@ -0,0 +1,290 @@ + +VINSERTI128/VINSERTI32x4/VINSERTI64x2/VINSERTI32x8/VINSERTI64x4 + — Insert PackedInteger Values

VINSERTI128/VINSERTI32x4/VINSERTI64x2/VINSERTI32x8/VINSERTI64x4 + — Insert PackedInteger Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 Bit Mode SupportCPUID Feature FlagDescription
VEX.256.66.0F3A.W0 38 /r ib VINSERTI128 ymm1, ymm2, xmm3/m128, imm8AV/VAVX2Insert 128 bits of integer data from xmm3/m128 and the remaining values from ymm2 into ymm1.
EVEX.256.66.0F3A.W0 38 /r ib VINSERTI32X4 ymm1 {k1}{z}, ymm2, xmm3/m128, imm8CV/VAVX512VL AVX512FInsert 128 bits of packed doubleword integer values from xmm3/m128 and the remaining values from ymm2 into ymm1 under writemask k1.
EVEX.512.66.0F3A.W0 38 /r ib VINSERTI32X4 zmm1 {k1}{z}, zmm2, xmm3/m128, imm8CV/VAVX512FInsert 128 bits of packed doubleword integer values from xmm3/m128 and the remaining values from zmm2 into zmm1 under writemask k1.
EVEX.256.66.0F3A.W1 38 /r ib VINSERTI64X2 ymm1 {k1}{z}, ymm2, xmm3/m128, imm8BV/VAVX512VL AVX512DQInsert 128 bits of packed quadword integer values from xmm3/m128 and the remaining values from ymm2 into ymm1 under writemask k1.
EVEX.512.66.0F3A.W1 38 /r ib VINSERTI64X2 zmm1 {k1}{z}, zmm2, xmm3/m128, imm8BV/VAVX512DQInsert 128 bits of packed quadword integer values from xmm3/m128 and the remaining values from zmm2 into zmm1 under writemask k1.
EVEX.512.66.0F3A.W0 3A /r ib VINSERTI32X8 zmm1 {k1}{z}, zmm2, ymm3/m256, imm8DV/VAVX512DQInsert 256 bits of packed doubleword integer values from ymm3/m256 and the remaining values from zmm2 into zmm1 under writemask k1.
EVEX.512.66.0F3A.W1 3A /r ib VINSERTI64X4 zmm1 {k1}{z}, zmm2, ymm3/m256, imm8CV/VAVX512FInsert 256 bits of packed quadword integer values from ymm3/m256 and the remaining values from zmm2 into zmm1 under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8
BTuple2ModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8
CTuple4ModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8
DTuple8ModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

VINSERTI32x4 and VINSERTI64x2 insert 128 bits of packed integer values from the second source operand (the third operand) into the destination operand (the first operand) at a 128-bit granular offset specified by imm8[0] (256-bit destination) or imm8[1:0] (512-bit destination). The remaining portions of the destination are copied from the corresponding fields of the first source operand (the second operand). The second source operand can be either an XMM register or a 128-bit memory location. The high 6/7 bits of the immediate are ignored. The destination operand is a ZMM/YMM register and is updated at 32-bit or 64-bit granularity, respectively, according to the writemask.

+

VINSERTI32x8 and VINSERTI64x4 insert 256 bits of packed integer values from the second source operand (the third operand) into the destination operand (the first operand) at a 256-bit granular offset specified by imm8[0]. The remaining portions of the destination are copied from the corresponding fields of the first source operand (the second operand). The second source operand can be either a YMM register or a 256-bit memory location. The upper bits of the immediate are ignored. The destination operand is a ZMM register and is updated at 32-bit or 64-bit granularity, respectively, according to the writemask.

+

VINSERTI128 inserts 128 bits of packed integer data from the second source operand (the third operand) into the destination operand (the first operand) at a 128-bit granular offset specified by imm8[0]. The remaining portions of the destination are copied from the corresponding fields of the first source operand (the second operand). The second source operand can be either an XMM register or a 128-bit memory location. The high 7 bits of the immediate are ignored. VEX.L must be 1; otherwise, an attempt to execute this instruction with VEX.L=0 will cause #UD.
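
A short sketch of the 256-bit form. _mm256_inserti128_si256 is the AVX2 intrinsic that maps to VINSERTI128 (the older _mm256_insertf128_si256 listed below behaves the same way for this purpose); AVX2 support is assumed.

#include <immintrin.h>

/* Concatenate two 128-bit integer vectors into one 256-bit vector:
   'lo' becomes bits 127:0 and 'hi' becomes bits 255:128.            */
__m256i concat128(__m128i lo, __m128i hi) {
    return _mm256_inserti128_si256(_mm256_castsi128_si256(lo), hi, 1);
}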

+

Operation + ¶ +

+

VINSERTI32x4 (EVEX encoded versions) + ¶ +

+
(KL, VL) = (8, 256), (16, 512)
+TMP_DEST[VL-1:0] := SRC1[VL-1:0]
+IF VL = 256
+    CASE (imm8[0]) OF
+        0: TMP_DEST[127:0] := SRC2[127:0]
+        1: TMP_DEST[255:128] := SRC2[127:0]
+    ESAC.
+FI;
+IF VL = 512
+    CASE (imm8[1:0]) OF
+        00: TMP_DEST[127:0]:=SRC2[127:0]
+        01: TMP_DEST[255:128]:=SRC2[127:0]
+        10: TMP_DEST[383:256]:=SRC2[127:0]
+        11: TMP_DEST[511:384]:=SRC2[127:0]
+    ESAC.
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VINSERTI64x2 (EVEX encoded versions) + ¶ +

+
(KL, VL) = (4, 256), (8, 512)
+TMP_DEST[VL-1:0] := SRC1[VL-1:0]
+IF VL = 256
+    CASE (imm8[0]) OF
+        0: TMP_DEST[127:0] := SRC2[127:0]
+        1: TMP_DEST[255:128] := SRC2[127:0]
+    ESAC.
+FI;
+IF VL = 512
+    CASE (imm8[1:0]) OF
+        00: TMP_DEST[127:0]:=SRC2[127:0]
+        01: TMP_DEST[255:128]:=SRC2[127:0]
+        10: TMP_DEST[383:256]:=SRC2[127:0]
+        11: TMP_DEST[511:384]:=SRC2[127:0]
+    ESAC.
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VINSERTI32x8 (EVEX.U1.512 encoded version) + ¶ +

+
+TMP_DEST[VL-1:0] := SRC1[VL-1:0]
+CASE (imm8[0]) OF
+    0: TMP_DEST[255:0] := SRC2[255:0]
+    1: TMP_DEST[511:256] := SRC2[255:0]
+ESAC.
+FOR j := 0 TO 15
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VINSERTI64x4 (EVEX.512 encoded version) + ¶ +

+
VL = 512
+TMP_DEST[VL-1:0] := SRC1[VL-1:0]
+CASE (imm8[0]) OF
+    0: TMP_DEST[255:0] := SRC2[255:0]
+    1: TMP_DEST[511:256] := SRC2[255:0]
+ESAC.
+FOR j := 0 TO 7
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VINSERTI128 + ¶ +

+
TEMP[255:0] := SRC1[255:0]
+CASE (imm8[0]) OF
+    0: TEMP[127:0] := SRC2[127:0]
+    1: TEMP[255:128] := SRC2[127:0]
+ESAC
+DEST := TEMP
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
+VINSERTI32x4 __m512i _mm512_inserti32x4( __m512i a, __m128i b, int imm);
+
+
+VINSERTI32x4 __m512i _mm512_mask_inserti32x4(__m512i s, __mmask16 k, __m512i a, __m128i b, int imm);
+
+
+VINSERTI32x4 __m512i _mm512_maskz_inserti32x4( __mmask16 k, __m512i a, __m128i b, int imm);
+
+
VINSERTI32x4 __m256i _mm256_inserti32x4( __m256i a, __m128i b, int imm);
+
+
VINSERTI32x4 __m256i _mm256_mask_inserti32x4(__m256i s, __mmask8 k, __m256i a, __m128i b, int imm);
+
+
VINSERTI32x4 __m256i _mm256_maskz_inserti32x4( __mmask8 k, __m256i a, __m128i b, int imm);
+
+
VINSERTI32x8 __m512i _mm512_inserti32x8( __m512i a, __m256i b, int imm);
+
+
VINSERTI32x8 __m512i _mm512_mask_inserti32x8(__m512i s, __mmask16 k, __m512i a, __m256i b, int imm);
+
+
VINSERTI32x8 __m512i _mm512_maskz_inserti32x8( __mmask16 k, __m512i a, __m256i b, int imm);
+
+
VINSERTI64x2 __m512i _mm512_inserti64x2( __m512i a, __m128i b, int imm);
+
+
VINSERTI64x2 __m512i _mm512_mask_inserti64x2(__m512i s, __mmask8 k, __m512i a, __m128i b, int imm);
+
+
VINSERTI64x2 __m512i _mm512_maskz_inserti64x2( __mmask8 k, __m512i a, __m128i b, int imm);
+
+
VINSERTI64x2 __m256i _mm256_inserti64x2( __m256i a, __m128i b, int imm);
+
+
VINSERTI64x2 __m256i _mm256_mask_inserti64x2(__m256i s, __mmask8 k, __m256i a, __m128i b, int imm);
+
+
VINSERTI64x2 __m256i _mm256_maskz_inserti64x2( __mmask8 k, __m256i a, __m128i b, int imm);
+
+
+VINSERTI64x4 __m512i _mm512_inserti64x4( __m512i a, __m256i b, int imm);
+
+
+VINSERTI64x4 __m512i _mm512_mask_inserti64x4(__m512i s, __mmask8 k, __m512i a, __m256i b, int imm);
+
+
+VINSERTI64x4 __m512i _mm512_maskz_inserti64x4( __mmask8 k, __m512i a, __m256i b, int imm);
+
+
VINSERTI128 __m256i _mm256_insertf128_si256 (__m256i a, __m128i b, int offset);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

VEX-encoded instruction, see Table 2-23, “Type 6 Class Exception Conditions.”

+

Additionally:

+ + + +
#UD If VEX.L = 0.
+

EVEX-encoded instruction, see Table 2-54, “Type E6NF Class Exception Conditions.”

diff --git a/x86/vmaskmov.html b/x86/vmaskmov.html new file mode 100644 index 0000000..de992d7 --- /dev/null +++ b/x86/vmaskmov.html @@ -0,0 +1,204 @@ + +VMASKMOV + — Conditional SIMD Packed Loads and Stores

VMASKMOV + — Conditional SIMD Packed Loads and Stores

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
VEX.128.66.0F38.W0 2C /r VMASKMOVPS xmm1, xmm2, m128RVMV/VAVXConditionally load packed single-precision values from m128 using mask in xmm2 and store in xmm1.
VEX.256.66.0F38.W0 2C /r VMASKMOVPS ymm1, ymm2, m256RVMV/VAVXConditionally load packed single-precision values from m256 using mask in ymm2 and store in ymm1.
VEX.128.66.0F38.W0 2D /r VMASKMOVPD xmm1, xmm2, m128RVMV/VAVXConditionally load packed double precision values from m128 using mask in xmm2 and store in xmm1.
VEX.256.66.0F38.W0 2D /r VMASKMOVPD ymm1, ymm2, m256RVMV/VAVXConditionally load packed double precision values from m256 using mask in ymm2 and store in ymm1.
VEX.128.66.0F38.W0 2E /r VMASKMOVPS m128, xmm1, xmm2MVRV/VAVXConditionally store packed single-precision values from xmm2 using mask in xmm1.
VEX.256.66.0F38.W0 2E /r VMASKMOVPS m256, ymm1, ymm2MVRV/VAVXConditionally store packed single-precision values from ymm2 using mask in ymm1.
VEX.128.66.0F38.W0 2F /r VMASKMOVPD m128, xmm1, xmm2MVRV/VAVXConditionally store packed double precision values from xmm2 using mask in xmm1.
VEX.256.66.0F38.W0 2F /r VMASKMOVPD m256, ymm1, ymm2MVRV/VAVXConditionally store packed double precision values from ymm2 using mask in ymm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RVMModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
MVRModRM:r/m (w)VEX.vvvv (r)ModRM:reg (r)N/A
+

Description + ¶ +

+

Conditionally moves packed data elements from the second source operand into the corresponding data element of the destination operand, depending on the mask bits associated with each data element. The mask bits are specified in the first source operand.

+

The mask bit for each data element is the most significant bit of that element in the first source operand. If a mask is 1, the corresponding data element is copied from the second source operand to the destination operand. If the mask is 0, the corresponding data element is set to zero in the load form of these instructions, and unmodified in the store form.

+

The second source operand is a memory address for the load form of these instruction. The destination operand is a memory address for the store form of these instructions. The other operands are both XMM registers (for VEX.128 version) or YMM registers (for VEX.256 version).

+

Faults occur only for memory accesses that are required by set mask bits. Faults will not occur due to referencing any memory location if the corresponding mask bit for that memory location is 0. For example, no faults will be detected if the mask bits are all zero.
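
This fault behavior makes the instructions useful for handling array tails without reading or writing past the end of a buffer. A hedged sketch using the intrinsics listed below (AVX2 is assumed here only to build the mask with a packed compare; the function name is illustrative):

#include <immintrin.h>

/* Copy the first n floats (0 <= n <= 8) of src to dst. Lanes whose mask bit is
   clear are neither read nor written, so no fault occurs past the valid tail.  */
void copy_tail(float *dst, const float *src, int n) {
    __m256i idx  = _mm256_setr_epi32(0, 1, 2, 3, 4, 5, 6, 7);
    __m256i mask = _mm256_cmpgt_epi32(_mm256_set1_epi32(n), idx); /* MSB set where i < n */
    __m256  v    = _mm256_maskload_ps(src, mask);   /* masked-off lanes load as 0.0      */
    _mm256_maskstore_ps(dst, mask, v);               /* masked-off lanes left untouched   */
}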

+

Unlike previous MASKMOV instructions (MASKMOVQ and MASKMOVDQU), a nontemporal hint is not applied to these instructions.

+

Instruction behavior on alignment check reporting with mask bits of less than all 1s is the same as with mask bits of all 1s.

+

VMASKMOV should not be used to access memory mapped I/O and un-cached memory as the access and the ordering of the individual loads or stores it does is implementation specific.

+

In cases where mask bits indicate that data should not be loaded or stored, paging A and D bits will be set in an implementation-dependent way. However, A and D bits are always set for pages where data is actually loaded/stored.

+

Note: for load forms, the first source (the mask) is encoded in VEX.vvvv; the second source is encoded in rm_field, and the destination register is encoded in reg_field.

+

Note: for store forms, the first source (the mask) is encoded in VEX.vvvv; the second source register is encoded in reg_field, and the destination memory location is encoded in rm_field.

+

Operation + ¶ +

+

VMASKMOVPS - 128-bit load + ¶ +

+
DEST[31:0] := IF (SRC1[31]) Load_32(mem) ELSE 0
+DEST[63:32] := IF (SRC1[63]) Load_32(mem + 4) ELSE 0
+DEST[95:64] := IF (SRC1[95]) Load_32(mem + 8) ELSE 0
+DEST[127:96] := IF (SRC1[127]) Load_32(mem + 12) ELSE 0
+DEST[MAXVL-1:128] := 0
+
+

VMASKMOVPS - 256-bit load + ¶ +

+
DEST[31:0] := IF (SRC1[31]) Load_32(mem) ELSE 0
+DEST[63:32] := IF (SRC1[63]) Load_32(mem + 4) ELSE 0
+DEST[95:64] := IF (SRC1[95]) Load_32(mem + 8) ELSE 0
+DEST[127:96] := IF (SRC1[127]) Load_32(mem + 12) ELSE 0
+DEST[159:128] := IF (SRC1[159]) Load_32(mem + 16) ELSE 0
+DEST[191:160] := IF (SRC1[191]) Load_32(mem + 20) ELSE 0
+DEST[223:192] := IF (SRC1[223]) Load_32(mem + 24) ELSE 0
+DEST[255:224] := IF (SRC1[255]) Load_32(mem + 28) ELSE 0
+
+

VMASKMOVPD - 128-bit load + ¶ +

+
DEST[63:0] := IF (SRC1[63]) Load_64(mem) ELSE 0
+DEST[127:64] := IF (SRC1[127]) Load_64(mem + 8) ELSE 0
+DEST[MAXVL-1:128] := 0
+
+

VMASKMOVPD - 256-bit load + ¶ +

+
DEST[63:0] := IF (SRC1[63]) Load_64(mem) ELSE 0
+DEST[127:64] := IF (SRC1[127]) Load_64(mem + 8) ELSE 0
+DEST[191:128] := IF (SRC1[191]) Load_64(mem + 16) ELSE 0
+DEST[255:192] := IF (SRC1[255]) Load_64(mem + 24) ELSE 0
+
+

VMASKMOVPS - 128-bit store + ¶ +

+
IF (SRC1[31]) DEST[31:0] := SRC2[31:0]
+IF (SRC1[63]) DEST[63:32] := SRC2[63:32]
+IF (SRC1[95]) DEST[95:64] := SRC2[95:64]
+IF (SRC1[127]) DEST[127:96] := SRC2[127:96]
+
+

VMASKMOVPS - 256-bit store + ¶ +

+
IF (SRC1[31]) DEST[31:0] := SRC2[31:0]
+IF (SRC1[63]) DEST[63:32] := SRC2[63:32]
+IF (SRC1[95]) DEST[95:64] := SRC2[95:64]
+IF (SRC1[127]) DEST[127:96] := SRC2[127:96]
+IF (SRC1[159]) DEST[159:128] :=SRC2[159:128]
+IF (SRC1[191]) DEST[191:160] := SRC2[191:160]
+IF (SRC1[223]) DEST[223:192] := SRC2[223:192]
+IF (SRC1[255]) DEST[255:224] := SRC2[255:224]
+
+

VMASKMOVPD - 128-bit store + ¶ +

+
IF (SRC1[63]) DEST[63:0] := SRC2[63:0]
+IF (SRC1[127]) DEST[127:64] := SRC2[127:64]
+
+

VMASKMOVPD - 256-bit store + ¶ +

+
IF (SRC1[63]) DEST[63:0] := SRC2[63:0]
+IF (SRC1[127]) DEST[127:64] := SRC2[127:64]
+IF (SRC1[191]) DEST[191:128] := SRC2[191:128]
+IF (SRC1[255]) DEST[255:192] := SRC2[255:192]
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
__m256 _mm256_maskload_ps(float const *a, __m256i mask)
+
+
void _mm256_maskstore_ps(float *a, __m256i mask, __m256 b)
+
+
__m256d _mm256_maskload_pd(double *a, __m256i mask);
+
+
void _mm256_maskstore_pd(double *a, __m256i mask, __m256d b);
+
+
__m128 _mm_maskload_ps(float const *a, __m128i mask)
+
+
void _mm_maskstore_ps(float *a, __m128i mask, __m128 b)
+
+
__m128d _mm_maskload_pd(double const *a, __m128i mask);
+
+
void _mm_maskstore_pd(double *a, __m128i mask, __m128d b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-23, “Type 6 Class Exception Conditions” (No AC# reported for any mask bit combinations).

+

Additionally:

+ + + +
#UD If VEX.W = 1.
diff --git a/x86/vmaxph.html b/x86/vmaxph.html new file mode 100644 index 0000000..40296a7 --- /dev/null +++ b/x86/vmaxph.html @@ -0,0 +1,128 @@ + +VMAXPH + — Return Maximum of Packed FP16 Values

VMAXPH + — Return Maximum of Packed FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.NP.MAP5.W0 5F /r VMAXPH xmm1{k1}{z}, xmm2, xmm3/m128/m16bcstAV/VAVX512-FP16 AVX512VLReturn the maximum packed FP16 values between xmm2 and xmm3/m128/m16bcst and store the result in xmm1 subject to writemask k1.
EVEX.256.NP.MAP5.W0 5F /r VMAXPH ymm1{k1}{z}, ymm2, ymm3/m256/m16bcstAV/VAVX512-FP16 AVX512VLReturn the maximum packed FP16 values between ymm2 and ymm3/m256/m16bcst and store the result in ymm1 subject to writemask k1.
EVEX.512.NP.MAP5.W0 5F /r VMAXPH zmm1{k1}{z}, zmm2, zmm3/m512/m16bcst {sae}AV/VAVX512-FP16Return the maximum packed FP16 values between zmm2 and zmm3/m512/m16bcst and store the result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction performs a SIMD compare of the packed FP16 values in the first source operand and the second source operand and returns the maximum value for each pair of values to the destination operand.

+

If the values being compared are both 0.0s (of either sign), the value in the second operand (source operand) is returned. If a value in the second operand is an SNaN, then SNaN is forwarded unchanged to the destination (that is, a QNaN version of the SNaN is not returned).

+

If only one value is a NaN (SNaN or QNaN) for this instruction, the second operand (source operand), either a NaN or a valid floating-point value, is written to the result. If, instead of this behavior, it is required that the NaN source operand (from either the first or second operand) be returned, the action of VMAXPH can be emulated using a sequence of instructions, such as a comparison followed by AND, ANDN, and OR.
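
A hedged sketch of that emulation with AVX512-FP16 intrinsics (compiler and CPU support for AVX512-FP16 assumed; the function name is illustrative). It returns whichever operand is a NaN instead of defaulting to the second source:

#include <immintrin.h>

/* max() that propagates a NaN from either input, unlike raw VMAXPH. */
__m512h max_propagate_nan(__m512h a, __m512h b) {
    __mmask32 a_nan = _mm512_cmp_ph_mask(a, a, _CMP_UNORD_Q); /* lanes where a is NaN  */
    __mmask32 b_nan = _mm512_cmp_ph_mask(b, b, _CMP_UNORD_Q); /* lanes where b is NaN  */
    __m512h m = _mm512_max_ph(a, b);                          /* VMAXPH semantics      */
    m = _mm512_mask_blend_ph(b_nan, m, b);                    /* if b is NaN, return b */
    m = _mm512_mask_blend_ph(a_nan, m, a);                    /* if a is NaN, return a */
    return m;
}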

+

EVEX encoded versions: The first source operand (the second operand) is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcast from a 16-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

Operation + ¶ +

+
def MAX(SRC1, SRC2):
+    IF (SRC1 = 0.0) and (SRC2 = 0.0):
+        DEST := SRC2
+    ELSE IF (SRC1 = NaN):
+        DEST := SRC2
+    ELSE IF (SRC2 = NaN):
+        DEST := SRC2
+    ELSE IF (SRC1 > SRC2):
+        DEST := SRC1
+    ELSE:
+        DEST := SRC2
+
+

VMAXPH dest, src1, src2 + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF EVEX.b = 1:
+            tsrc2 := SRC2.fp16[0]
+        ELSE:
+            tsrc2 := SRC2.fp16[j]
+        DEST.fp16[j] := MAX(SRC1.fp16[j], tsrc2)
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMAXPH __m128h _mm_mask_max_ph (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VMAXPH __m128h _mm_maskz_max_ph (__mmask8 k, __m128h a, __m128h b);
+
+
VMAXPH __m128h _mm_max_ph (__m128h a, __m128h b);
+
+
VMAXPH __m256h _mm256_mask_max_ph (__m256h src, __mmask16 k, __m256h a, __m256h b);
+
+
VMAXPH __m256h _mm256_maskz_max_ph (__mmask16 k, __m256h a, __m256h b);
+
+
VMAXPH __m256h _mm256_max_ph (__m256h a, __m256h b);
+
+
VMAXPH __m512h _mm512_mask_max_ph (__m512h src, __mmask32 k, __m512h a, __m512h b);
+
+
VMAXPH __m512h _mm512_maskz_max_ph (__mmask32 k, __m512h a, __m512h b);
+
+
VMAXPH __m512h _mm512_max_ph (__m512h a, __m512h b);
+
+
VMAXPH __m512h _mm512_mask_max_round_ph (__m512h src, __mmask32 k, __m512h a, __m512h b, int sae);
+
+
VMAXPH __m512h _mm512_maskz_max_round_ph (__mmask32 k, __m512h a, __m512h b, int sae);
+
+
VMAXPH __m512h _mm512_max_round_ph (__m512h a, __m512h b, int sae);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vmaxsh.html b/x86/vmaxsh.html new file mode 100644 index 0000000..74973c4 --- /dev/null +++ b/x86/vmaxsh.html @@ -0,0 +1,98 @@ + +VMAXSH + — Return Maximum of Scalar FP16 Values

VMAXSH + — Return Maximum of Scalar FP16 Values

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.F3.MAP5.W0 5F /r VMAXSH xmm1{k1}{z}, xmm2, xmm3/m16 {sae}AV/VAVX512-FP16Return the maximum low FP16 value between xmm3/m16 and xmm2 and store the result in xmm1 subject to writemask k1. Bits 127:16 of xmm2 are copied to xmm1[127:16].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction performs a compare of the low packed FP16 values in the first source operand and the second source operand and returns the maximum value for the pair of values to the destination operand.

+

If the values being compared are both 0.0s (of either sign), the value in the second operand (source operand) is returned. If a value in the second operand is an SNaN, then SNaN is forwarded unchanged to the destination (that is, a QNaN version of the SNaN is not returned).

+

If only one value is a NaN (SNaN or QNaN) for this instruction, the second operand (source operand), either a NaN or a valid floating-point value, is written to the result. If, instead of this behavior, it is required that the NaN source operand (from either the first or second operand) be returned, the action of VMAXSH can be emulated using a sequence of instructions, such as a comparison followed by AND, ANDN, and OR.

+

Bits 127:16 of the destination operand are copied from the corresponding bits of the first source operand. Bits MAXVL-1:128 of the destination operand are zeroed. The low FP16 element of the destination is updated according to the writemask.
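
A short usage sketch with the intrinsics listed below (AVX512-FP16 hardware and _Float16 compiler support assumed):

#include <immintrin.h>

_Float16 max_low(void) {
    __m128h a = _mm_set_sh((_Float16)3.0f);
    __m128h b = _mm_set_sh((_Float16)7.0f);
    __m128h r = _mm_max_sh(a, b);   /* r[0] = 7.0; r[127:16] copied from a */
    return _mm_cvtsh_h(r);
}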

+

Operation + ¶ +

+
def MAX(SRC1, SRC2):
+    IF (SRC1 = 0.0) and (SRC2 = 0.0):
+        DEST := SRC2
+    ELSE IF (SRC1 = NaN):
+        DEST := SRC2
+    ELSE IF (SRC2 = NaN):
+        DEST := SRC2
+    ELSE IF (SRC1 > SRC2):
+        DEST := SRC1
+    ELSE:
+        DEST := SRC2
+
+

VMAXSH dest, src1, src2 + ¶ +

+
IF k1[0] OR *no writemask*:
+    DEST.fp16[0] := MAX(SRC1.fp16[0], SRC2.fp16[0])
+ELSE IF *zeroing*:
+    DEST.fp16[0] := 0
+// else dest.fp16[j] remains unchanged
+DEST[127:16] := SRC1[127:16]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMAXSH __m128h _mm_mask_max_round_sh (__m128h src, __mmask8 k, __m128h a, __m128h b, int sae);
+
+
VMAXSH __m128h _mm_maskz_max_round_sh (__mmask8 k, __m128h a, __m128h b, int sae);
+
+
VMAXSH __m128h _mm_max_round_sh (__m128h a, __m128h b, int sae);
+
+
VMAXSH __m128h _mm_mask_max_sh (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VMAXSH __m128h _mm_maskz_max_sh (__mmask8 k, __m128h a, __m128h b);
+
+
VMAXSH __m128h _mm_max_sh (__m128h a, __m128h b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Denormal

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vmcall.html b/x86/vmcall.html new file mode 100644 index 0000000..6a5a5bc --- /dev/null +++ b/x86/vmcall.html @@ -0,0 +1,121 @@ + +VMCALL + — Call to VM Monitor

VMCALL + — Call to VM Monitor

+ + + + + + + + + +
Opcode/InstructionOp/EnDescription
0F 01 C1 VMCALLZOCall to VM monitor by causing VM exit.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZONANANANA
+

Description + ¶ +

+

This instruction allows guest software to make a call for service into an underlying VM monitor. The details of the programming interface for such calls are VMM-specific; this instruction does nothing more than cause a VM exit, registering the appropriate exit reason.
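
A hedged guest-side sketch in GNU C inline assembly. The register convention used here (call number in RAX, two arguments in RBX and RCX) is purely illustrative; the real ABI is defined entirely by the hypervisor, and executing VMCALL outside VMX operation raises #UD as described below.

/* Hypothetical hypercall wrapper for a guest running under a VMM whose ABI
   passes the call number in RAX and two arguments in RBX/RCX (assumption). */
static inline long hypercall2(long nr, long a0, long a1) {
    long ret;
    asm volatile("vmcall"
                 : "=a"(ret)
                 : "a"(nr), "b"(a0), "c"(a1)
                 : "memory");
    return ret;
}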

+

Use of this instruction in VMX root operation invokes an SMM monitor (see Section 32.15.2). This invocation will activate the dual-monitor treatment of system-management interrupts (SMIs) and system-management mode (SMM) if it is not already active (see Section 32.15.6).

+

Operation + ¶ +

+
IF not in VMX operation
+    THEN #UD;
+ELSIF in VMX non-root operation
+    THEN VM exit;
+ELSIF (RFLAGS.VM = 1) or (IA32_EFER.LMA = 1 and CS.L = 0)
+    THEN #UD;
+ELSIF CPL > 0
+    THEN #GP(0);
+ELSIF in SMM or the logical processor does not support the dual-monitor treatment of SMIs and SMM or the valid bit in the
+IA32_SMM_MONITOR_CTL MSR is clear
+    THEN VMfail (VMCALL executed in VMX root operation);
+ELSIF dual-monitor treatment of SMIs and SMM is active
+    THEN perform an SMM VM exit (see Section 32.15.2);
+ELSIF current-VMCS pointer is not valid
+    THEN VMfailInvalid;
+ELSIF launch state of current VMCS is not clear
+    THEN VMfailValid(VMCALL with non-clear VMCS);
+ELSIF VM-exit control fields are not valid (see Section 32.15.6.1)
+    THEN VMfailValid (VMCALL with invalid VM-exit control fields);
+ELSE
+    enter SMM;
+    read revision identifier in MSEG;
+    IF revision identifier does not match that supported by processor
+        THEN
+            leave SMM;
+            VMfailValid(VMCALL with incorrect MSEG revision identifier);
+        ELSE
+            read SMM-monitor features field in MSEG (see Section 32.15.6.1);
+            IF features field is invalid
+                THEN
+                    leave SMM;
+                    VMfailValid(VMCALL with invalid SMM-monitor features);
+                ELSE activate dual-monitor treatment of SMIs and SMM (see Section 32.15.6);
+            FI;
+        FI;
+FI;
+
+

Flags Affected + ¶ +

+

See the operation section and Section 31.2.

+

Protected Mode Exceptions + ¶ +

+ + + + + + +
#GP(0)If the current privilege level is not 0 and the logical processor is in VMX root operation.
#UDIf executed outside VMX operation.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDIf executed outside VMX operation.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDIf executed outside VMX non-root operation.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UDIf executed outside VMX non-root operation.
+

64-Bit Mode Exceptions + ¶ +

+ + + +
#UDIf executed outside VMX operation.
diff --git a/x86/vmclear.html b/x86/vmclear.html new file mode 100644 index 0000000..81dc159 --- /dev/null +++ b/x86/vmclear.html @@ -0,0 +1,140 @@ + +VMCLEAR + — Clear Virtual-Machine Control Structure

VMCLEAR + — Clear Virtual-Machine Control Structure

+ + + + + + + + + +
Opcode/InstructionOp/EnDescription
66 0F C7 /6 VMCLEAR m64MCopy VMCS data to VMCS region in memory.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r)NANANA
+

Description + ¶ +

+

This instruction applies to the VMCS whose VMCS region resides at the physical address contained in the instruction operand. The instruction ensures that VMCS data for that VMCS (some of these data may be currently maintained on the processor) are copied to the VMCS region in memory. It also initializes parts of the VMCS region (for example, it sets the launch state of that VMCS to clear). See Chapter 25, “Virtual Machine Control Structures.”

+

The operand of this instruction is always 64 bits and is always in memory. If the operand is the current-VMCS pointer, then that pointer is made invalid (set to FFFFFFFF_FFFFFFFFH).

+

Note that the VMCLEAR instruction might not explicitly write any VMCS data to memory; the data may be already resident in memory before the VMCLEAR is executed.
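
A minimal hedged sketch of issuing VMCLEAR from a hypervisor (VMX root operation at CPL 0 assumed; the function name is illustrative). Per the VMX status conventions, CF=1 reports VMfailInvalid and ZF=1 reports VMfailValid.

#include <stdint.h>

/* 'vmcs_pa' is the 4-KByte aligned physical address of the VMCS region. */
static inline int vmclear(uint64_t vmcs_pa) {
    uint8_t cf, zf;
    asm volatile("vmclear %[pa]\n\t"
                 "setc %[cf]\n\t"
                 "setz %[zf]"
                 : [cf] "=q"(cf), [zf] "=q"(zf)
                 : [pa] "m"(vmcs_pa)
                 : "cc", "memory");
    if (cf) return -1;   /* VMfailInvalid */
    if (zf) return -2;   /* VMfailValid   */
    return 0;            /* VMsucceed     */
}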

+

Operation + ¶ +

+
IF (register operand) or (not in VMX operation) or (CR0.PE = 0) or (RFLAGS.VM = 1) or (IA32_EFER.LMA = 1 and CS.L = 0)
+    THEN #UD;
+ELSIF in VMX non-root operation
+    THEN VM exit;
+ELSIF CPL > 0
+    THEN #GP(0);
+    ELSE
+        addr := contents of 64-bit in-memory operand;
+        IF addr is not 4KB-aligned OR
+        addr sets any bits beyond the physical-address width1
+            THEN VMfail(VMCLEAR with invalid physical address);
+        ELSIF addr = VMXON pointer
+            THEN VMfail(VMCLEAR with VMXON pointer);
+            ELSE
+                ensure that data for VMCS referenced by the operand is in memory;
+                initialize implementation-specific data in VMCS region;
+                launch state of VMCS referenced by the operand := “clear”
+                IF operand addr = current-VMCS pointer
+                    THEN current-VMCS pointer := FFFFFFFF_FFFFFFFFH;
+                FI;
+                VMsucceed;
+        FI;
+FI;
+
+
+

1. If IA32_VMX_BASIC[48] is read as 1, VMfail occurs if addr sets any bits in the range 63:32; see Appendix A.1.

+

Flags Affected + ¶ +

+

See the operation section and Section 31.2.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
If the memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains an unusable segment.
If the operand is located in an execute-only code segment.
#PF(fault-code)If a page fault occurs in accessing the memory operand.
#SS(0)If the memory operand effective address is outside the SS segment limit.
If the SS register contains an unusable segment.
#UDIf operand is a register.
If not in VMX operation.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe VMCLEAR instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe VMCLEAR instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UDThe VMCLEAR instruction is not recognized in compatibility mode.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
If the source operand is in the CS, DS, ES, FS, or GS segments and the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs in accessing the memory operand.
#SS(0)If the source operand is in the SS segment and the memory address is in a non-canonical form.
#UDIf operand is a register.
If not in VMX operation.
diff --git a/x86/vmfunc.html b/x86/vmfunc.html new file mode 100644 index 0000000..bea9403 --- /dev/null +++ b/x86/vmfunc.html @@ -0,0 +1,70 @@ + +VMFUNC + — Invoke VM function

VMFUNC + — Invoke VM function

+ + + + + + + + + +
Opcode/InstructionOp/EnDescription
NP 0F 01 D4 VMFUNCZOInvoke VM function specified in EAX.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZONANANANA
+

Description + ¶ +

+

This instruction allows software in VMX non-root operation to invoke a VM function, which is processor functionality enabled and configured by software in VMX root operation. The value of EAX selects the specific VM function being invoked.

+

The behavior of each VM function (including any additional fault checking) is specified in Section 26.5.6, “VM Functions.”
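
A hedged guest-side sketch of invoking VM function 0 (EPTP switching). It assumes the hypervisor has enabled the "enable VM functions" and EPTP-switching controls and populated the EPTP list; ECX selects the list entry, and the function name is illustrative.

/* Switch to EPTP-list entry 'view' via VM function 0 (EPTP switching).
   Only valid in VMX non-root operation with the controls enabled by the VMM. */
static inline void eptp_switch(unsigned int view) {
    asm volatile("vmfunc"
                 :
                 : "a"(0), "c"(view)
                 : "memory");
}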

+

Operation + ¶ +

+
Perform functionality of the VM function specified in EAX;
+
+

Flags Affected + ¶ +

+

Depends on the VM function specified in EAX. See Section 26.5.6, “VM Functions.”

+

Protected Mode Exceptions (not including those defined by specific VM functions) + ¶ +

+

#UD If executed outside VMX non-root operation.

+

If “enable VM functions” VM-execution control is 0.

+

If EAX ≥ 64.

+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/vminph.html b/x86/vminph.html new file mode 100644 index 0000000..ea04e97 --- /dev/null +++ b/x86/vminph.html @@ -0,0 +1,128 @@ + +VMINPH + — Return Minimum of Packed FP16 Values

VMINPH + — Return Minimum of Packed FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUID Feature FlagDescription
EVEX.128.NP.MAP5.W0 5D /r VMINPH xmm1{k1}{z}, xmm2, xmm3/m128/m16bcstAV/VAVX512-FP16 AVX512VLReturn the minimum packed FP16 values between xmm2 and xmm3/m128/m16bcst and store the result in xmm1 subject to writemask k1.
EVEX.256.NP.MAP5.W0 5D /r VMINPH ymm1{k1}{z}, ymm2, ymm3/m256/m16bcstAV/VAVX512-FP16 AVX512VLReturn the minimum packed FP16 values between ymm2 and ymm3/m256/m16bcst and store the result in ymm1 subject to writemask k1.
EVEX.512.NP.MAP5.W0 5D /r VMINPH zmm1{k1}{z}, zmm2, zmm3/m512/m16bcst {sae}AV/VAVX512-FP16Return the minimum packed FP16 values between zmm2 and zmm3/m512/m16bcst and store the result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction performs a SIMD compare of the packed FP16 values in the first source operand and the second source operand and returns the minimum value for each pair of values to the destination operand.

+

If the values being compared are both 0.0s (of either sign), the value in the second operand (source operand) is returned. If a value in the second operand is an SNaN, then SNaN is forwarded unchanged to the destination (that is, a QNaN version of the SNaN is not returned).

+

If only one value is a NaN (SNaN or QNaN) for this instruction, the second operand (source operand), either a NaN or a valid floating-point value, is written to the result. If instead of this behavior, it is required that the NaN source operand (from either the first or second operand) be returned, the action of VMINPH can be emulated using a sequence of instructions, such as, a comparison followed by AND, ANDN and OR.

+
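A minimal sketch of the emulation suggested above, assuming the AVX512-FP16 intrinsics _mm512_min_ph, _mm512_cmp_ph_mask, and _mm512_mask_blend_ph are available via <immintrin.h>: it returns whichever operand is a NaN instead of always favoring the second source.
#include <immintrin.h>

/* Minimum that propagates a NaN from either source. */
static __m512h min_ph_propagate_nan(__m512h a, __m512h b)
{
    __m512h m = _mm512_min_ph(a, b);                        /* VMINPH: yields b whenever either input is NaN */
    __mmask32 a_is_nan = _mm512_cmp_ph_mask(a, a, _CMP_UNORD_Q);
    return _mm512_mask_blend_ph(a_is_nan, m, a);            /* where a is NaN, take a; otherwise the VMINPH result */
}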

EVEX encoded versions: The first source operand (the second operand) is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcast from a 16-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

Operation + ¶ +

+
def MIN(SRC1, SRC2):
+    IF (SRC1 = 0.0) and (SRC2 = 0.0):
+        DEST := SRC2
+    ELSE IF (SRC1 = NaN):
+        DEST := SRC2
+    ELSE IF (SRC2 = NaN):
+        DEST := SRC2
+    ELSE IF (SRC1 < SRC2):
+        DEST := SRC1
+    ELSE:
+        DEST := SRC2
+
+

VMINPH dest, src1, src2 + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF EVEX.b = 1:
+            tsrc2 := SRC2.fp16[0]
+        ELSE:
+            tsrc2 := SRC2.fp16[j]
+        DEST.fp16[j] := MIN(SRC1.fp16[j], tsrc2)
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMINPH __m128h _mm_mask_min_ph (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VMINPH __m128h _mm_maskz_min_ph (__mmask8 k, __m128h a, __m128h b);
+
+
VMINPH __m128h _mm_min_ph (__m128h a, __m128h b);
+
+
VMINPH __m256h _mm256_mask_min_ph (__m256h src, __mmask16 k, __m256h a, __m256h b);
+
+
VMINPH __m256h _mm256_maskz_min_ph (__mmask16 k, __m256h a, __m256h b);
+
+
VMINPH __m256h _mm256_min_ph (__m256h a, __m256h b);
+
+
VMINPH __m512h _mm512_mask_min_ph (__m512h src, __mmask32 k, __m512h a, __m512h b);
+
+
VMINPH __m512h _mm512_maskz_min_ph (__mmask32 k, __m512h a, __m512h b);
+
+
VMINPH __m512h _mm512_min_ph (__m512h a, __m512h b);
+
+
VMINPH __m512h _mm512_mask_min_round_ph (__m512h src, __mmask32 k, __m512h a, __m512h b, int sae);
+
+
VMINPH __m512h _mm512_maskz_min_round_ph (__mmask32 k, __m512h a, __m512h b, int sae);
+
+
VMINPH __m512h _mm512_min_round_ph (__m512h a, __m512h b, int sae);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vminsh.html b/x86/vminsh.html new file mode 100644 index 0000000..c609e82 --- /dev/null +++ b/x86/vminsh.html @@ -0,0 +1,99 @@ + +VMINSH + — Return Minimum Scalar FP16 Value

VMINSH + — Return Minimum Scalar FP16 Value

+ + + + + + + + + + + + + +
InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.F3.MAP5.W0 5D /r VMINSH xmm1{k1}{z}, xmm2, xmm3/m16 {sae}AV/VAVX512-FP16Return the minimum low FP16 value between xmm3/m16 and xmm2. Stores the result in xmm1 subject to writemask k1. Bits 127:16 of xmm2 are copied to xmm1[127:16].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction performs a compare of the low packed FP16 values in the first source operand and the second source operand and returns the minimum value for the pair of values to the destination operand.

+

If the values being compared are both 0.0s (of either sign), the value in the second operand (source operand) is returned. If a value in the second operand is an SNaN, then SNaN is forwarded unchanged to the destination (that is, a QNaN version of the SNaN is not returned).

+

If only one value is a NaN (SNaN or QNaN) for this instruction, the second operand (source operand), either a NaN or a valid floating-point value, is written to the result. If instead of this behavior, it is required that the NaN source operand (from either the first or second operand) be returned, the action of VMINSH can be emulated using a sequence of instructions, such as, a comparison followed by AND, ANDN, and OR.

+

EVEX encoded version: The first source operand (the second operand) is an XMM register. The second source operand can be an XMM register or a 16-bit memory location. The destination operand is an XMM register conditionally updated with writemask k1.

+

Bits 127:16 of the destination operand are copied from the corresponding bits of the first source operand. Bits MAXVL-1:128 of the destination operand are zeroed. The low FP16 element of the destination is updated according to the writemask.

+

Operation + ¶ +

+
def MIN(SRC1, SRC2):
+    IF (SRC1 = 0.0) and (SRC2 = 0.0):
+        DEST := SRC2
+    ELSE IF (SRC1 = NaN):
+        DEST := SRC2
+    ELSE IF (SRC2 = NaN):
+        DEST := SRC2
+    ELSE IF (SRC1 < SRC2):
+        DEST := SRC1
+    ELSE:
+        DEST := SRC2
+
+

VMINSH dest, src1, src2 + ¶ +

+
IF k1[0] OR *no writemask*:
+    DEST.fp16[0] := MIN(SRC1.fp16[0], SRC2.fp16[0])
+ELSE IF *zeroing*:
+    DEST.fp16[0] := 0
+// else dest.fp16[j] remains unchanged
+DEST[127:16] := SRC1[127:16]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMINSH __m128h _mm_mask_min_round_sh (__m128h src, __mmask8 k, __m128h a, __m128h b, int sae);
+
+
VMINSH __m128h _mm_maskz_min_round_sh (__mmask8 k, __m128h a, __m128h b, int sae);
+
+
VMINSH __m128h _mm_min_round_sh (__m128h a, __m128h b, int sae);
+
+
VMINSH __m128h _mm_mask_min_sh (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VMINSH __m128h _mm_maskz_min_sh (__mmask8 k, __m128h a, __m128h b);
+
+
VMINSH __m128h _mm_min_sh (__m128h a, __m128h b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Denormal

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vmlaunch.vmresume.html b/x86/vmlaunch.vmresume.html new file mode 100644 index 0000000..20b04c7 --- /dev/null +++ b/x86/vmlaunch.vmresume.html @@ -0,0 +1,157 @@ + +VMLAUNCH/VMRESUME + — Launch/Resume Virtual Machine

VMLAUNCH/VMRESUME + — Launch/Resume Virtual Machine

+ + + + + + + + + + + + + + +
Opcode/InstructionOp/EnDescription
0F 01 C2 VMLAUNCHZOLaunch virtual machine managed by current VMCS.
0F 01 C3 VMRESUMEZOResume virtual machine managed by current VMCS.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZONANANANA
+

Description + ¶ +

+

Effects a VM entry managed by the current VMCS.

+
  • VMLAUNCH fails if the launch state of the current VMCS is not “clear.” If the instruction is successful, it sets the launch state to “launched.”
  • VMRESUME fails if the launch state of the current VMCS is not “launched.”
+

If VM entry is attempted, the logical processor performs a series of consistency checks as detailed in Chapter 27, “VM Entries.” Failure to pass checks on the VMX controls or on the host-state area passes control to the instruction following the VMLAUNCH or VMRESUME instruction. If these pass but checks on the guest-state area fail, the logical processor loads state from the host-state area of the VMCS, passing control to the instruction referenced by the RIP field in the host-state area.

+

VM entry is not allowed when events are blocked by MOV SS or POP SS. Neither VMLAUNCH nor VMRESUME should be used immediately after either MOV to SS or POP to SS.

+
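For illustration only, a hedged sketch of issuing VMLAUNCH from C with GCC inline assembly and decoding the VMX failure convention: CF=1 signals VMfailInvalid and ZF=1 signals VMfailValid (the VM-instruction error field then identifies the cause). On a successful VM entry the instruction after VMLAUNCH is not reached; the host resumes at the RIP from the host-state area after a VM exit.
#include <stdint.h>

static int vmlaunch_and_report(void)
{
    uint8_t cf, zf;
    __asm__ __volatile__("vmlaunch\n\t"
                         "setc %0\n\t"
                         "setz %1"
                         : "=qm"(cf), "=qm"(zf)
                         :
                         : "cc", "memory");
    if (cf) return -1;   /* VMfailInvalid: current-VMCS pointer not valid */
    if (zf) return -2;   /* VMfailValid: read the VM-instruction error field */
    return 0;            /* not normally reached; a successful VM entry runs the guest */
}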

Operation + ¶ +

+
IF (not in VMX operation) or (CR0.PE = 0) or (RFLAGS.VM = 1) or (IA32_EFER.LMA = 1 and CS.L = 0)
+    THEN #UD;
+ELSIF in VMX non-root operation
+    THEN VMexit;
+ELSIF CPL > 0
+    THEN #GP(0);
+ELSIF current-VMCS pointer is not valid
+    THEN VMfailInvalid;
+ELSIF events are being blocked by MOV SS
+    THEN VMfailValid(VM entry with events blocked by MOV SS);
+ELSIF (VMLAUNCH and launch state of current VMCS is not “clear”)
+    THEN VMfailValid(VMLAUNCH with non-clear VMCS);
+ELSIF (VMRESUME and launch state of current VMCS is not “launched”)
+    THEN VMfailValid(VMRESUME with non-launched VMCS);
+    ELSE
+        Check settings of VMX controls and host-state area;
+        IF invalid settings
+            THEN VMfailValid(VM entry with invalid VMX-control field(s)) or
+                    VMfailValid(VM entry with invalid host-state field(s)) or
+                    VMfailValid(VM entry with invalid executive-VMCS pointer) or
+                    VMfailValid(VM entry with non-launched executive VMCS) or
+                    VMfailValid(VM entry with executive-VMCS pointer not VMXON pointer) or
+                    VMfailValid(VM entry with invalid VM-execution control fields in executive
+                    VMCS)
+                    as appropriate;
+            ELSE
+                Attempt to load guest state and PDPTRs as appropriate;
+                clear address-range monitoring;
+                IF failure in checking guest state or PDPTRs
+                    THEN VM entry fails (see Section 27.8);
+                    ELSE
+                        Attempt to load MSRs from VM-entry MSR-load area;
+                        IF failure
+                            THEN VM entry fails
+                            (see Section 27.8);
+                            ELSE
+                                IF VMLAUNCH
+                                    THEN launch state of VMCS := “launched”;
+                                FI;
+                                IF in SMM and “entry to SMM” VM-entry control is 0
+                                    THEN
+                                        IF “deactivate dual-monitor treatment” VM-entry
+                                        control is 0
+                                            THEN SMM-transfer VMCS pointer :=
+                                            current-VMCS pointer;
+                                        FI;
+                                        IF executive-VMCS pointer is VMXON pointer
+                                            THEN current-VMCS pointer :=
+                                            VMCS-link pointer;
+                                            ELSE current-VMCS pointer :=
+                                            executive-VMCS pointer;
+                                        FI;
+                                        leave SMM;
+                                FI;
+                                VM entry succeeds;
+                        FI;
+                FI;
+        FI;
+FI;
+Further details of the operation of the VM-entry appear in Chapter 27.
+
+

Flags Affected + ¶ +

+

See the operation section and Section 31.2.

+

Protected Mode Exceptions + ¶ +

+ + + + + + +
#GP(0)If the current privilege level is not 0.
#UDIf executed outside VMX operation.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe VMLAUNCH and VMRESUME instructions are not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe VMLAUNCH and VMRESUME instructions are not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UDThe VMLAUNCH and VMRESUME instructions are not recognized in compatibility mode.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + +
#GP(0)If the current privilege level is not 0.
#UDIf executed outside VMX operation.
diff --git a/x86/vmovsh.html b/x86/vmovsh.html new file mode 100644 index 0000000..79dce04 --- /dev/null +++ b/x86/vmovsh.html @@ -0,0 +1,144 @@ + +VMOVSH + — Move Scalar FP16 Value

VMOVSH + — Move Scalar FP16 Value

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.F3.MAP5.W0 10 /r VMOVSH xmm1{k1}{z}, m16AV/VAVX512-FP16Move FP16 value from m16 to xmm1 subject to writemask k1.
EVEX.LLIG.F3.MAP5.W0 11 /r VMOVSH m16{k1}, xmm1BV/VAVX512-FP16Move low FP16 value from xmm1 to m16 subject to writemask k1.
EVEX.LLIG.F3.MAP5.W0 10 /r VMOVSH xmm1{k1}{z}, xmm2, xmm3CV/VAVX512-FP16Move low FP16 values from xmm3 to xmm1 subject to writemask k1. Bits 127:16 of xmm2 are copied to xmm1[127:16].
EVEX.LLIG.F3.MAP5.W0 11 /r VMOVSH xmm1{k1}{z}, xmm2, xmm3DV/VAVX512-FP16Move low FP16 values from xmm3 to xmm1 subject to writemask k1. Bits 127:16 of xmm2 are copied to xmm1[127:16].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)ModRM:r/m (r)N/AN/A
BScalarModRM:r/m (w)ModRM:reg (r)N/AN/A
CN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
DN/AModRM:r/m (w)VEX.vvvv (r)ModRM:reg (r)N/A
+

Description + ¶ +

+

This instruction moves an FP16 value to a register or memory location.

+

The two register-only forms are aliases and differ only in where their operands are encoded; this is a side effect of the encodings selected.

+

Operation + ¶ +

+

VMOVSH dest, src (two operand load) + ¶ +

+
IF k1[0] or no writemask:
+    DEST.fp16[0] := SRC.fp16[0]
+ELSE IF *zeroing*:
+    DEST.fp16[0] := 0
+// ELSE DEST.fp16[0] remains unchanged
+DEST[MAXVL-1:16] := 0
+
+

VMOVSH dest, src (two operand store) + ¶ +

+
IF k1[0] or no writemask:
+    DEST.fp16[0] := SRC.fp16[0]
+// ELSE DEST.fp16[0] remains unchanged
+
+

VMOVSH dest, src1, src2 (three operand copy) + ¶ +

+
IF k1[0] or no writemask:
+    DEST.fp16[0] := SRC2.fp16[0]
+ELSE IF *zeroing*:
+    DEST.fp16[0] := 0
+// ELSE DEST.fp16[0] remains unchanged
+DEST[127:16] := SRC1[127:16]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMOVSH __m128h _mm_load_sh (void const* mem_addr);
+
+
VMOVSH __m128h _mm_mask_load_sh (__m128h src, __mmask8 k, void const* mem_addr);
+
+
VMOVSH __m128h _mm_maskz_load_sh (__mmask8 k, void const* mem_addr);
+
+
VMOVSH __m128h _mm_mask_move_sh (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VMOVSH __m128h _mm_maskz_move_sh (__mmask8 k, __m128h a, __m128h b);
+
+
VMOVSH __m128h _mm_move_sh (__m128h a, __m128h b);
+
+
VMOVSH void _mm_mask_store_sh (void * mem_addr, __mmask8 k, __m128h a);
+
+
VMOVSH void _mm_store_sh (void * mem_addr, __m128h a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Table 2-51, “Type E5 Class Exception Conditions.”

diff --git a/x86/vmovw.html b/x86/vmovw.html new file mode 100644 index 0000000..0f0a531 --- /dev/null +++ b/x86/vmovw.html @@ -0,0 +1,89 @@ + +VMOVW + — Move Word

VMOVW + — Move Word

+ + + + + + + + + + + + + + + + + + + +
InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.MAP5.WIG 6E /r VMOVW xmm1, reg/m16AV/VAVX512-FP16Copy word from reg/m16 to xmm1.
EVEX.128.66.MAP5.WIG 7E /r VMOVW reg/m16, xmm1BV/VAVX512-FP16Copy word from xmm1 to reg/m16.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)ModRM:r/m (r)N/AN/A
BScalarModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

This instruction either (a) copies one word element from an XMM register to a general-purpose register or memory location or (b) copies one word element from a general-purpose register or memory location to an XMM register. When writing a general-purpose register, the lower 16 bits of the register will contain the word value. The upper bits of the general-purpose register are written with zeros.

+
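A brief usage sketch of the intrinsics listed below (AVX512-FP16 support and <immintrin.h> assumed): moving a 16-bit value into the low word of an XMM register and back, with the zero-extension behavior described above.
#include <immintrin.h>
#include <stdint.h>

int16_t roundtrip_word(int16_t w)
{
    __m128i v = _mm_cvtsi16_si128(w);     /* VMOVW xmm, reg: bits 127:16 of the XMM register are zeroed */
    return (int16_t)_mm_cvtsi128_si16(v); /* VMOVW reg, xmm: the word is zero-extended into the GPR */
}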

Operation + ¶ +

+

VMOVW dest, src (two operand load) + ¶ +

+
DEST.word[0] := SRC.word[0]
+DEST[MAXVL-1:16] := 0
+
+

VMOVW dest, src (two operand store) + ¶ +

+
DEST.word[0] := SRC.word[0]
+// upper bits of GPR DEST are zeroed
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMOVW short _mm_cvtsi128_si16 (__m128i a);
+
+
VMOVW __m128i _mm_cvtsi16_si128 (short a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-57, “Type E9NF Class Exception Conditions.”

diff --git a/x86/vmptrld.html b/x86/vmptrld.html new file mode 100644 index 0000000..4638bce --- /dev/null +++ b/x86/vmptrld.html @@ -0,0 +1,141 @@ + +VMPTRLD + — Load Pointer to Virtual-Machine Control Structure

VMPTRLD + — Load Pointer to Virtual-Machine Control Structure

+ + + + + + + + + +
Opcode/InstructionOp/EnDescription
NP 0F C7 /6 VMPTRLD m64MLoads the current VMCS pointer from memory.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r)NANANA
+

Description + ¶ +

+

Marks the current-VMCS pointer valid and loads it with the physical address in the instruction operand. The instruction fails if its operand is not properly aligned, sets unsupported physical-address bits, or is equal to the VMXON pointer. In addition, the instruction fails if the 32 bits in memory referenced by the operand do not match the VMCS revision identifier supported by this processor.1

+

The operand of this instruction is always 64 bits and is always in memory.

+
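An illustrative, non-normative sketch of using VMPTRLD from C with GCC inline assembly: the caller supplies the 4-KByte-aligned physical address of a VMCS region and a writable mapping of it, the first 32 bits are seeded with the VMCS revision identifier read from IA32_VMX_BASIC, and failure is detected through CF/ZF. Helper and parameter names are assumptions.
#include <stdint.h>

static int vmptrld_region(uint64_t vmcs_phys, uint32_t *vmcs_virt, uint32_t revision_id)
{
    uint8_t fail;
    vmcs_virt[0] = revision_id;              /* VMCS revision identifier (IA32_VMX_BASIC[30:0]) */
    __asm__ __volatile__("vmptrld %1\n\t"
                         "setbe %0"          /* CF=1 (VMfailInvalid) or ZF=1 (VMfailValid) */
                         : "=qm"(fail)
                         : "m"(vmcs_phys)
                         : "cc", "memory");
    return fail ? -1 : 0;
}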

Operation + ¶ +

+
IF (register operand) or (not in VMX operation) or (CR0.PE = 0) or (RFLAGS.VM = 1) or (IA32_EFER.LMA = 1 and CS.L = 0)
+    THEN #UD;
+ELSIF in VMX non-root operation
+    THEN VMexit;
+ELSIF CPL > 0
+    THEN #GP(0);
+    ELSE
+        addr := contents of 64-bit in-memory source operand;
+        IF addr is not 4KB-aligned OR
+        addr sets any bits beyond the physical-address width2
+            THEN VMfail(VMPTRLD with invalid physical address);
+        ELSIF addr = VMXON pointer
+            THEN VMfail(VMPTRLD with VMXON pointer);
+            ELSE
+                rev := 32 bits located at physical address addr;
+                IF rev[30:0] ≠ VMCS revision identifier supported by processor OR
+                rev[31] = 1 AND processor does not support 1-setting of “VMCS shadowing”
+                    THEN VMfail(VMPTRLD with incorrect VMCS revision identifier);
+                    ELSE
+                        current-VMCS pointer := addr;
+                        VMsucceed;
+                FI;
+        FI;
+FI;
+
+
+

1. Software should consult the VMX capability MSR VMX_BASIC to discover the VMCS revision identifier supported by this processor (see Appendix A, “VMX Capability Reporting Facility”).

+

2. If IA32_VMX_BASIC[48] is read as 1, VMfail occurs if addr sets any bits in the range 63:32; see Appendix A.1.

+

Flags Affected + ¶ +

+

See the operation section and Section 31.2.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
If the memory source operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains an unusable segment.
If the source operand is located in an execute-only code segment.
#PF(fault-code)If a page fault occurs in accessing the memory source operand.
#SS(0)If the memory source operand effective address is outside the SS segment limit.
If the SS register contains an unusable segment.
#UDIf operand is a register.
If not in VMX operation.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe VMPTRLD instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe VMPTRLD instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UDThe VMPTRLD instruction is not recognized in compatibility mode.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
If the source operand is in the CS, DS, ES, FS, or GS segments and the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs in accessing the memory source operand.
#SS(0)If the source operand is in the SS segment and the memory address is in a non-canonical form.
#UDIf operand is a register.
If not in VMX operation.
diff --git a/x86/vmptrst.html b/x86/vmptrst.html new file mode 100644 index 0000000..d64c094 --- /dev/null +++ b/x86/vmptrst.html @@ -0,0 +1,123 @@ + +VMPTRST + — Store Pointer to Virtual-Machine Control Structure

VMPTRST + — Store Pointer to Virtual-Machine Control Structure

+ + + + + + + + + +
Opcode/InstructionOp/EnDescription
NP 0F C7 /7 VMPTRST m64MStores the current VMCS pointer into memory.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (w)NANANA
+

Description + ¶ +

+

Stores the current-VMCS pointer into a specified memory address. The operand of this instruction is always 64 bits and is always in memory.

+

Operation + ¶ +

+
IF (register operand) or (not in VMX operation) or (CR0.PE = 0) or (RFLAGS.VM = 1) or (IA32_EFER.LMA = 1 and CS.L = 0)
+    THEN #UD;
+ELSIF in VMX non-root operation
+    THEN VMexit;
+ELSIF CPL > 0
+    THEN #GP(0);
+    ELSE
+        64-bit in-memory destination operand := current-VMCS pointer;
+        VMsucceed;
+FI;
+
+

Flags Affected + ¶ +

+

See the operation section and Section 31.2.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
If the memory destination operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains an unusable segment.
If the destination operand is located in a read-only data segment or any code segment.
#PF(fault-code)If a page fault occurs in accessing the memory destination operand.
#SS(0)If the memory destination operand effective address is outside the SS segment limit.
If the SS register contains an unusable segment.
#UDIf operand is a register.
If not in VMX operation.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe VMPTRST instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe VMPTRST instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UDThe VMPTRST instruction is not recognized in compatibility mode.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
If the destination operand is in the CS, DS, ES, FS, or GS segments and the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs in accessing the memory destination operand.
#SS(0)If the destination operand is in the SS segment and the memory address is in a non-canonical form.
#UDIf operand is a register.
If not in VMX operation.
diff --git a/x86/vmread.html b/x86/vmread.html new file mode 100644 index 0000000..08ffd7a --- /dev/null +++ b/x86/vmread.html @@ -0,0 +1,137 @@ + +VMREAD + — Read Field from Virtual-Machine Control Structure

VMREAD + — Read Field from Virtual-Machine Control Structure

+ + + + + + + + + + + + + +
Opcode/InstructionOp/EnDescription
NP 0F 78 VMREAD r/m64, r64MRReads a specified VMCS field (in 64-bit mode).
NP 0F 78 VMREAD r/m32, r32MRReads a specified VMCS field (outside 64-bit mode).
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MRModRM:r/m (w)ModRM:reg (r)NANA
+

Description + ¶ +

+

Reads a specified field from a VMCS and stores it into a specified destination operand (register or memory). In VMX root operation, the instruction reads from the current VMCS. If executed in VMX non-root operation, the instruction reads from the VMCS referenced by the VMCS link pointer field in the current VMCS.

+

The VMCS field is specified by the VMCS-field encoding contained in the register source operand. Outside IA-32e mode, the source operand has 32 bits, regardless of the value of CS.D. In 64-bit mode, the source operand has 64 bits.

+

The effective size of the destination operand, which may be a register or in memory, is always 32 bits outside IA-32e mode (the setting of CS.D is ignored with respect to operand size) and 64 bits in 64-bit mode. If the VMCS field specified by the source operand is shorter than this effective operand size, the high bits of the destination operand are cleared to 0. If the VMCS field is longer, then the high bits of the field are not read.

+

Note that any faults resulting from accessing a memory destination operand can occur only after determining, in the operation section below, that the relevant VMCS pointer is valid and that the specified VMCS field is supported.

+
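A hedged sketch of reading a field in 64-bit mode via GCC inline assembly; the field encoding passed in is whatever the caller chooses, and failure (CF or ZF set) is folded into one flag with SETBE since VMsucceed clears both.
#include <stdint.h>

static int vmread_field(uint64_t encoding, uint64_t *value)
{
    uint64_t v;
    uint8_t fail;
    __asm__ __volatile__("vmread %[enc], %[val]\n\t"
                         "setbe %[f]"
                         : [val] "=r"(v), [f] "=qm"(fail)
                         : [enc] "r"(encoding)
                         : "cc");
    if (fail)
        return -1;      /* VMfailInvalid or VMfailValid */
    *value = v;
    return 0;
}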

Operation + ¶ +

+
IF (not in VMX operation) or (CR0.PE = 0) or (RFLAGS.VM = 1) or (IA32_EFER.LMA = 1 and CS.L = 0)
+    THEN #UD;
+ELSIF in VMX non-root operation AND (“VMCS shadowing” is 0 OR source operand sets bits in range 63:15 OR
+VMREAD bit corresponding to bits 14:0 of source operand is 1)1
+    THEN VMexit;
+ELSIF CPL > 0
+    THEN #GP(0);
+ELSIF (in VMX root operation AND current-VMCS pointer is not valid) OR
+(in VMX non-root operation AND VMCS link pointer is not valid)
+    THEN VMfailInvalid;
+ELSIF source operand does not correspond to any VMCS field
+    THEN VMfailValid(VMREAD/VMWRITE from/to unsupported VMCS component);
+    ELSE
+        IF in VMX root operation
+            THEN destination operand := contents of field indexed by source operand in current VMCS;
+            ELSE destination operand := contents of field indexed by source operand in VMCS referenced by VMCS link pointer;
+        FI;
+        VMsucceed;
+FI;
+
+
+

1. The VMREAD bit for a source operand is defined as follows. Let x be the value of bits 14:0 of the source operand and let addr be the VMREAD-bitmap address. The corresponding VMREAD bit is in bit position x & 7 of the byte at physical address addr | (x >> 3).

+

Flags Affected + ¶ +

+

See the operation section and Section 31.2.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
If a memory destination operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains an unusable segment.
If the destination operand is located in a read-only data segment or any code segment.
#PF(fault-code)If a page fault occurs in accessing a memory destination operand.
#SS(0)If a memory destination operand effective address is outside the SS segment limit.
If the SS register contains an unusable segment.
#UDIf not in VMX operation.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe VMREAD instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe VMREAD instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UDThe VMREAD instruction is not recognized in compatibility mode.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
If the memory destination operand is in the CS, DS, ES, FS, or GS segments and the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs in accessing a memory destination operand.
#SS(0)If the memory destination operand is in the SS segment and the memory address is in a non-canonical form.
#UDIf not in VMX operation.
diff --git a/x86/vmresume.html b/x86/vmresume.html new file mode 100644 index 0000000..e0851fc --- /dev/null +++ b/x86/vmresume.html @@ -0,0 +1,10 @@ + +VMRESUME + — Resume Virtual Machine

VMRESUME + — Resume Virtual Machine

+ +

See VMLAUNCH/VMRESUME—Launch/Resume Virtual Machine.

diff --git a/x86/vmulph.html b/x86/vmulph.html new file mode 100644 index 0000000..81e5425 --- /dev/null +++ b/x86/vmulph.html @@ -0,0 +1,129 @@ + +VMULPH + — Multiply Packed FP16 Values

VMULPH + — Multiply Packed FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.NP.MAP5.W0 59 /r VMULPH xmm1{k1}{z}, xmm2, xmm3/m128/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from xmm3/m128/m16bcst to xmm2 and store the result in xmm1 subject to writemask k1.
EVEX.256.NP.MAP5.W0 59 /r VMULPH ymm1{k1}{z}, ymm2, ymm3/m256/m16bcstAV/VAVX512-FP16 AVX512VLMultiply packed FP16 values from ymm3/m256/m16bcst to ymm2 and store the result in ymm1 subject to writemask k1.
EVEX.512.NP.MAP5.W0 59 /r VMULPH zmm1{k1}{z}, zmm2, zmm3/m512/m16bcst {er}AV/VAVX512-FP16Multiply packed FP16 values in zmm3/m512/m16bcst with zmm2 and store the result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction multiplies packed FP16 values from source operands and stores the packed FP16 result in the destination operand. The destination elements are updated according to the writemask.

+
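A short usage sketch of the intrinsics listed below (AVX512-FP16 and <immintrin.h> assumed): one merge-masked multiply, and one use of the {er} form whose embedded rounding overrides MXCSR.RC.
#include <immintrin.h>

__m512h masked_product(__m512h src, __mmask32 k, __m512h a, __m512h b)
{
    /* Merge-masking: lanes with a 0 bit in k keep their value from src. */
    return _mm512_mask_mul_ph(src, k, a, b);
}

__m512h product_rn(__m512h a, __m512h b)
{
    /* Embedded rounding: round-to-nearest with exceptions suppressed. */
    return _mm512_mul_round_ph(a, b, _MM_FROUND_TO_NEAREST_INT | _MM_FROUND_NO_EXC);
}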

Operation + ¶ +

+

VMULPH (EVEX encoded versions) when src2 operand is a register + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+IF (VL = 512) AND (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        DEST.fp16[j] := SRC1.fp16[j] * SRC2.fp16[j]
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

VMULPH (EVEX encoded versions) when src2 operand is a memory source + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF EVEX.b = 1:
+            DEST.fp16[j] := SRC1.fp16[j] * SRC2.fp16[0]
+        ELSE:
+            DEST.fp16[j] := SRC1.fp16[j] * SRC2.fp16[j]
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMULPH __m128h _mm_mask_mul_ph (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VMULPH __m128h _mm_maskz_mul_ph (__mmask8 k, __m128h a, __m128h b);
+
+
VMULPH __m128h _mm_mul_ph (__m128h a, __m128h b);
+
+
VMULPH __m256h _mm256_mask_mul_ph (__m256h src, __mmask16 k, __m256h a, __m256h b);
+
+
VMULPH __m256h _mm256_maskz_mul_ph (__mmask16 k, __m256h a, __m256h b);
+
+
VMULPH __m256h _mm256_mul_ph (__m256h a, __m256h b);
+
+
VMULPH __m512h _mm512_mask_mul_ph (__m512h src, __mmask32 k, __m512h a, __m512h b);
+
+
VMULPH __m512h _mm512_maskz_mul_ph (__mmask32 k, __m512h a, __m512h b);
+
+
VMULPH __m512h _mm512_mul_ph (__m512h a, __m512h b);
+
+
VMULPH __m512h _mm512_mask_mul_round_ph (__m512h src, __mmask32 k, __m512h a, __m512h b, int rounding);
+
+
VMULPH __m512h _mm512_maskz_mul_round_ph (__mmask32 k, __m512h a, __m512h b, int rounding);
+
+
VMULPH __m512h _mm512_mul_round_ph (__m512h a, __m512h b, int rounding);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Overflow, Precision, Denormal

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vmulsh.html b/x86/vmulsh.html new file mode 100644 index 0000000..4c8b151 --- /dev/null +++ b/x86/vmulsh.html @@ -0,0 +1,87 @@ + +VMULSH + — Multiply Scalar FP16 Values

VMULSH + — Multiply Scalar FP16 Values

+ + + + + + + + + + + + + +
InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.F3.MAP5.W0 59 /r VMULSH xmm1{k1}{z}, xmm2, xmm3/m16 {er}AV/VAVX512-FP16Multiply the low FP16 value in xmm3/m16 by low FP16 value in xmm2, and store the result in xmm1 subject to writemask k1. Bits 127:16 of xmm2 are copied to xmm1[127:16].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction multiplies the low FP16 value from the source operands and stores the FP16 result in the destination operand. Bits 127:16 of the destination operand are copied from the corresponding bits of the first source operand. Bits MAXVL-1:128 of the destination operand are zeroed. The low FP16 element of the destination is updated according to the writemask.

+

Operation + ¶ +

+

VMULSH (EVEX encoded versions) + ¶ +

+
IF EVEX.b = 1 and SRC2 is a register:
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+IF k1[0] OR *no writemask*:
+    DEST.fp16[0] := SRC1.fp16[0] * SRC2.fp16[0]
+ELSE IF *zeroing*:
+    DEST.fp16[0] := 0
+// else dest.fp16[0] remains unchanged
+DEST[127:16] := SRC1[127:16]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VMULSH __m128h _mm_mask_mul_round_sh (__m128h src, __mmask8 k, __m128h a, __m128h b, int rounding);
+
+
VMULSH __m128h _mm_maskz_mul_round_sh (__mmask8 k, __m128h a, __m128h b, int rounding);
+
+
VMULSH __m128h _mm_mul_round_sh (__m128h a, __m128h b, int rounding);
+
+
VMULSH __m128h _mm_mask_mul_sh (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VMULSH __m128h _mm_maskz_mul_sh (__mmask8 k, __m128h a, __m128h b);
+
+
VMULSH __m128h _mm_mul_sh (__m128h a, __m128h b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Overflow, Precision, Denormal

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vmwrite.html b/x86/vmwrite.html new file mode 100644 index 0000000..b16b69d --- /dev/null +++ b/x86/vmwrite.html @@ -0,0 +1,142 @@ + +VMWRITE + — Write Field to Virtual-Machine Control Structure

VMWRITE + — Write Field to Virtual-Machine Control Structure

+ + + + + + + + + + + + + +
Opcode/InstructionOp/EnDescription
NP 0F 79 VMWRITE r64, r/m64RMWrites a specified VMCS field (in 64-bit mode).
NP 0F 79 VMWRITE r32, r/m32RMWrites a specified VMCS field (outside 64-bit mode).
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r)ModRM:r/m (r)NANA
+

Description + ¶ +

+

Writes the contents of a primary source operand (register or memory) to a specified field in a VMCS. In VMX root operation, the instruction writes to the current VMCS. If executed in VMX non-root operation, the instruction writes to the VMCS referenced by the VMCS link pointer field in the current VMCS.

+

The VMCS field is specified by the VMCS-field encoding contained in the register secondary source operand. Outside IA-32e mode, the secondary source operand is always 32 bits, regardless of the value of CS.D. In 64-bit mode, the secondary source operand has 64 bits.

+

The effective size of the primary source operand, which may be a register or in memory, is always 32 bits outside IA-32e mode (the setting of CS.D is ignored with respect to operand size) and 64 bits in 64-bit mode. If the VMCS field specified by the secondary source operand is shorter than this effective operand size, the high bits of the primary source operand are ignored. If the VMCS field is longer, then the high bits of the field are cleared to 0.

+

Note that any faults resulting from accessing a memory source operand occur after determining, in the operation section below, that the relevant VMCS pointer is valid but before determining if the destination VMCS field is supported.

+
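The counterpart sketch to the VMREAD example: writing a VMCS field in 64-bit mode with GCC inline assembly (in AT&T syntax the value register comes first and the field-encoding register second), again folding the CF/ZF failure convention into SETBE.
#include <stdint.h>

static int vmwrite_field(uint64_t encoding, uint64_t value)
{
    uint8_t fail;
    __asm__ __volatile__("vmwrite %[val], %[enc]\n\t"
                         "setbe %[f]"
                         : [f] "=qm"(fail)
                         : [enc] "r"(encoding), [val] "r"(value)
                         : "cc");
    return fail ? -1 : 0;   /* nonzero means VMfailInvalid or VMfailValid */
}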

Operation + ¶ +

+
IF (not in VMX operation) or (CR0.PE = 0) or (RFLAGS.VM = 1) or (IA32_EFER.LMA = 1 and CS.L = 0)
+    THEN #UD;
+ELSIF in VMX non-root operation AND (“VMCS shadowing” is 0 OR secondary source operand sets bits in range 63:15 OR
+VMWRITE bit corresponding to bits 14:0 of secondary source operand is 1)1
+    THEN VMexit;
+ELSIF CPL > 0
+    THEN #GP(0);
+ELSIF (in VMX root operation AND current-VMCS pointer is not valid) OR
+(in VMX non-root operation AND VMCS-link pointer is not valid)
+    THEN VMfailInvalid;
+ELSIF secondary source operand does not correspond to any VMCS field
+    THEN VMfailValid(VMREAD/VMWRITE from/to unsupported VMCS component);
+ELSIF VMCS field indexed by secondary source operand is a VM-exit information field AND
+processor does not support writing to such fields2
+    THEN VMfailValid(VMWRITE to read-only VMCS component);
+    ELSE
+
+
+

1. The VMWRITE bit for a secondary source operand is defined as follows. Let x be the value of bits 14:0 of the secondary source operand and let addr be the VMWRITE-bitmap address. The corresponding VMWRITE bit is in bit position x & 7 of the byte at physical address addr | (x >> 3).

+

2. Software can discover whether these fields can be written by reading the VMX capability MSR IA32_VMX_MISC (see Appendix A.6).

+
        IF in VMX root operation
+            THEN field indexed by secondary source operand in current VMCS := primary source operand;
+            ELSE field indexed by secondary source operand in VMCS referenced by VMCS link pointer := primary source operand;
+    FI;
+    VMsucceed;
+FI;
+
+

Flags Affected + ¶ +

+

See the operation section and Section 31.2.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
If a memory source operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains an unusable segment.
If the source operand is located in an execute-only code segment.
#PF(fault-code)If a page fault occurs in accessing a memory source operand.
#SS(0)If a memory source operand effective address is outside the SS segment limit.
If the SS register contains an unusable segment.
#UDIf not in VMX operation.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe VMWRITE instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe VMWRITE instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UDThe VMWRITE instruction is not recognized in compatibility mode.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
If the memory source operand is in the CS, DS, ES, FS, or GS segments and the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs in accessing a memory source operand.
#SS(0)If the memory source operand is in the SS segment and the memory address is in a non-canonical form.
#UDIf not in VMX operation.
diff --git a/x86/vmxoff.html b/x86/vmxoff.html new file mode 100644 index 0000000..764ed91 --- /dev/null +++ b/x86/vmxoff.html @@ -0,0 +1,111 @@ + +VMXOFF + — Leave VMX Operation

VMXOFF + — Leave VMX Operation

+ + + + + + + + + +
Opcode/InstructionOp/EnDescription
0F 01 C4 VMXOFFZOLeaves VMX operation.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZONANANANA
+

Description + ¶ +

+

Takes the logical processor out of VMX operation, unblocks INIT signals, conditionally re-enables A20M, and clears any address-range monitoring.1

+

Operation + ¶ +

+
IF (not in VMX operation) or (CR0.PE = 0) or (RFLAGS.VM = 1) or (IA32_EFER.LMA = 1 and CS.L = 0)
+    THEN #UD;
+ELSIF in VMX non-root operation
+    THEN VMexit;
+ELSIF CPL > 0
+    THEN #GP(0);
+ELSIF dual-monitor treatment of SMIs and SMM is active
+    THEN VMfail(VMXOFF under dual-monitor treatment of SMIs and SMM);
+    ELSE
+        leave VMX operation;
+        unblock INIT;
+        IF IA32_SMM_MONITOR_CTL[2] = 02
+            THEN unblock SMIs;
+        IF outside SMX operation3
+            THEN unblock and enable A20M;
+        FI;
+        clear address-range monitoring;
+        VMsucceed;
+FI;
+
+

Flags Affected + ¶ +

+

See the operation section and Section 31.2.

+

Protected Mode Exceptions + ¶ +

+ + + +
#GP(0)If executed in VMX root operation with CPL > 0.
+
+

1. See the information on MONITOR/MWAIT in Chapter 9, “Multiple-Processor Management,” of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A.

+

2. Setting IA32_SMM_MONITOR_CTL[bit 2] to 1 prevents VMXOFF from unblocking SMIs regardless of the value of the register’s valid bit (bit 0). Not all processors allow this bit to be set to 1. Software should consult the VMX capability MSR IA32_VMX_MISC (see Appendix A.6) to determine whether this is allowed.

+

3. A logical processor is outside SMX operation if GETSEC[SENTER] has not been executed or if GETSEC[SEXIT] was executed after the last execution of GETSEC[SENTER]. See Chapter 6, “Safer Mode Extensions Reference.”

+ + + +
#UDIf executed outside VMX operation.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe VMXOFF instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe VMXOFF instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UDThe VMXOFF instruction is not recognized in compatibility mode.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + +
#GP(0)If executed in VMX root operation with CPL > 0.
#UDIf executed outside VMX operation.
diff --git a/x86/vmxon.html b/x86/vmxon.html new file mode 100644 index 0000000..4c0ce11 --- /dev/null +++ b/x86/vmxon.html @@ -0,0 +1,167 @@ + +VMXON + — Enter VMX Operation

VMXON + — Enter VMX Operation

+ + + + + + + + + +
Opcode/InstructionOp/EnDescription
F3 0F C7 /6 VMXON m64MEnter VMX root operation.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r)NANANA
+

Description + ¶ +

+

Puts the logical processor in VMX operation with no current VMCS, blocks INIT signals, disables A20M, and clears any address-range monitoring established by the MONITOR instruction.1

+

The operand of this instruction is a 4KB-aligned physical address (the VMXON pointer) that references the VMXON region, which the logical processor may use to support VMX operation. This operand is always 64 bits and is always in memory.

+
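Purely as an illustrative sketch (helper names are not from this page, and the IA32_FEATURE_CONTROL lock and enable bits are assumed to be set already): enabling CR4.VMXE, seeding the VMXON region with the VMCS revision identifier, and executing VMXON on its 4-KByte-aligned physical address, with CF/ZF checked for failure.
#include <stdint.h>

#define CR4_VMXE (1ULL << 13)

static int enter_vmx_root(uint64_t vmxon_phys, uint32_t *vmxon_virt, uint32_t revision_id)
{
    uint64_t cr4;
    uint8_t fail;

    __asm__ __volatile__("mov %%cr4, %0" : "=r"(cr4));
    cr4 |= CR4_VMXE;                                   /* VMXON causes #UD if CR4.VMXE = 0 */
    __asm__ __volatile__("mov %0, %%cr4" : : "r"(cr4));

    vmxon_virt[0] = revision_id;                       /* IA32_VMX_BASIC[30:0] */
    __asm__ __volatile__("vmxon %1\n\t"
                         "setbe %0"
                         : "=qm"(fail)
                         : "m"(vmxon_phys)
                         : "cc", "memory");
    return fail ? -1 : 0;
}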

Operation + ¶ +

+
IF (register operand) or (CR0.PE = 0) or (CR4.VMXE = 0) or (RFLAGS.VM = 1) or (IA32_EFER.LMA = 1 and CS.L = 0)
+    THEN #UD;
+ELSIF not in VMX operation
+    THEN
+        IF (CPL > 0) or (in A20M mode) or
+        (the values of CR0 and CR4 are not supported in VMX operation; see Section 24.8) or
+        (bit 0 (lock bit) of IA32_FEATURE_CONTROL MSR is clear) or
+        (in SMX operation2 and bit 1 of IA32_FEATURE_CONTROL MSR is clear) or
+        (outside SMX operation and bit 2 of IA32_FEATURE_CONTROL MSR is clear)
+            THEN #GP(0);
+            ELSE
+                addr := contents of 64-bit in-memory source operand;
+                IF addr is not 4KB-aligned or
+                addr sets any bits beyond the physical-address width3
+                    THEN VMfailInvalid;
+                    ELSE
+                        rev := 32 bits located at physical address addr;
+                        IF rev[30:0] ≠ VMCS revision identifier supported by processor OR rev[31] = 1
+                            THEN VMfailInvalid;
+                            ELSE
+                                current-VMCS pointer := FFFFFFFF_FFFFFFFFH;
+                                enter VMX operation;
+                                block INIT signals;
+                                block and disable A20M;
+
+
+

1. See the information on MONITOR/MWAIT in Chapter 9, “Multiple-Processor Management,” of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A.

+

2. A logical processor is in SMX operation if GETSEC[SEXIT] has not been executed since the last execution of GETSEC[SENTER]. A logical processor is outside SMX operation if GETSEC[SENTER] has not been executed or if GETSEC[SEXIT] was executed after the last execution of GETSEC[SENTER]. See Chapter 6, “Safer Mode Extensions Reference.”

+

3. If IA32_VMX_BASIC[48] is read as 1, VMfailInvalid occurs if addr sets any bits in the range 63:32; see Appendix A.1.

+
                    clear address-range monitoring;
+                    IF the processor supports Intel PT but does not allow it to be used in VMX operation1
+                        THEN IA32_RTIT_CTL.TraceEn := 0;
+                    FI;
+                    VMsucceed;
+                FI;
+            FI;
+        FI;
+ELSIF in VMX non-root operation
+    THEN VMexit;
+ELSIF CPL > 0
+    THEN #GP(0);
+    ELSE VMfail(“VMXON executed in VMX root operation”);
+FI;
+
+
+

1. Software should read the VMX capability MSR IA32_VMX_MISC to determine whether the processor allows Intel PT to be used in VMX operation (see Appendix A.6).

+

Flags Affected + ¶ +

+

See the operation section and Section 31.2.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If executed outside VMX operation with CPL > 0 or with invalid CR0 or CR4 fixed bits.
If executed in A20M mode.
If the memory source operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains an unusable segment.
If the source operand is located in an execute-only code segment.
If the value of the IA32_FEATURE_CONTROL MSR does not support entry to VMX operation in the current processor mode.
#PF(fault-code)If a page fault occurs in accessing the memory source operand.
#SS(0)If the memory source operand effective address is outside the SS segment limit.
If the SS register contains an unusable segment.
#UDIf operand is a register.
If executed with CR4.VMXE = 0.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe VMXON instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe VMXON instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UDThe VMXON instruction is not recognized in compatibility mode.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + +
#GP(0)If executed outside VMX operation with CPL > 0 or with invalid CR0 or CR4 fixed bits.
If executed in A20M mode.
If the source operand is in the CS, DS, ES, FS, or GS segments and the memory address is in a non-canonical form.
+

If the value of the IA32_FEATURE_CONTROL MSR does not support entry to VMX operation in the current processor mode.

+ + + + + + + + + + + +
#PF(fault-code)If a page fault occurs in accessing the memory source operand.
#SS(0)If the source operand is in the SS segment and the memory address is in a non-canonical form.
#UDIf operand is a register.
If executed with CR4.VMXE = 0.
diff --git a/x86/vp2intersectd.vp2intersectq.html new file mode 100644 index 0000000..a9c41cc --- /dev/null +++ b/x86/vp2intersectd.vp2intersectq.html @@ -0,0 +1,132 @@ + +VP2INTERSECTD/VP2INTERSECTQ + — Compute Intersection Between DWORDS/QUADWORDS to a Pair of Mask Registers

VP2INTERSECTD/VP2INTERSECTQ + — Compute Intersection Between DWORDS/QUADWORDS to a Pair of Mask Registers

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.NDS.128.F2.0F38.W0 68 /r VP2INTERSECTD k1+1, xmm2, xmm3/m128/m32bcstAV/VAVX512VL AVX512_VP2INTERSECTStore, in an even/odd pair of mask registers, the indicators of the locations of value matches between dwords in xmm3/m128/m32bcst and xmm2.
EVEX.NDS.256.F2.0F38.W0 68 /r VP2INTERSECTD k1+1, ymm2, ymm3/m256/m32bcstAV/VAVX512VL AVX512_VP2INTERSECTStore, in an even/odd pair of mask registers, the indicators of the locations of value matches between dwords in ymm3/m256/m32bcst and ymm2.
EVEX.NDS.512.F2.0F38.W0 68 /r VP2INTERSECTD k1+1, zmm2, zmm3/m512/m32bcstAV/VAVX512F AVX512_VP2INTERSECTStore, in an even/odd pair of mask registers, the indicators of the locations of value matches between dwords in zmm3/m512/m32bcst and zmm2.
EVEX.NDS.128.F2.0F38.W1 68 /r VP2INTERSECTQ k1+1, xmm2, xmm3/m128/m64bcstAV/VAVX512VL AVX512_VP2INTERSECTStore, in an even/odd pair of mask registers, the indicators of the locations of value matches between quadwords in xmm3/m128/m64bcst and xmm2.
EVEX.NDS.256.F2.0F38.W1 68 /r VP2INTERSECTQ k1+1, ymm2, ymm3/m256/m64bcstAV/VAVX512VL AVX512_VP2INTERSECTStore, in an even/odd pair of mask registers, the indicators of the locations of value matches between quadwords in ymm3/m256/m64bcst and ymm2.
EVEX.NDS.512.F2.0F38.W1 68 /r VP2INTERSECTQ k1+1, zmm2, zmm3/m512/m64bcstAV/VAVX512F AVX512_VP2INTERSECTStore, in an even/odd pair of mask registers, the indicators of the locations of value matches between quadwords in zmm3/m512/m64bcst and zmm2.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction writes an even/odd pair of mask registers. The mask register destination indicated in the MODRM.REG field is used to form the basis of the register pair. The low bit of that field is masked off (set to zero) to create the first register of the pair.

+

EVEX.aaa and EVEX.z must be zero.

+
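A small usage sketch of the intrinsic listed below (AVX512_VP2INTERSECT assumed): the two output masks mark, respectively, which lanes of the first vector and which lanes of the second vector participate in a match. The popcount builtin is a GCC/Clang assumption.
#include <immintrin.h>

/* Count how many dword lanes of a have a matching value somewhere in b. */
int count_matches_epi32(__m512i a, __m512i b)
{
    __mmask16 in_a, in_b;
    _mm512_2intersect_epi32(a, b, &in_a, &in_b);   /* VP2INTERSECTD writes the even/odd mask pair */
    return __builtin_popcount((unsigned)in_a);
}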

Operation + ¶ +

+

VP2INTERSECTD destmask, src1, src2 + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+// dest_mask_reg_id is the register id specified in the instruction for destmask
+dest_base := dest_mask_reg_id & ~1
+// maskregs[ ] is an array representing the mask registers
+maskregs[dest_base+0][MAX_KL-1:0] := 0
+maskregs[dest_base+1][MAX_KL-1:0] := 0
+FOR i := 0 to KL-1:
+    FOR j := 0 to KL-1:
+        match := (src1.dword[i] == src2.dword[j])
+        maskregs[dest_base+0].bit[i] |= match
+        maskregs[dest_base+1].bit[j] |= match
+
+

VP2INTERSECTQ destmask, src1, src2 + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+// dest_mask_reg_id is the register id specified in the instruction for destmask
+dest_base := dest_mask_reg_id & ~1
+// maskregs[ ] is an array representing the mask registers
+maskregs[dest_base+0][MAX_KL-1:0] := 0
+maskregs[dest_base+1][MAX_KL-1:0] := 0
+FOR i = 0 to KL-1:
+    FOR j = 0 to KL-1:
+        match := (src1.qword[i] == src2.qword[j])
+        maskregs[dest_base+0].bit[i] |= match
+        maskregs[dest_base+1].bit[j] |= match
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VP2INTERSECTD void _mm_2intersect_epi32(__m128i, __m128i, __mmask8 *, __mmask8 *);
+
+
VP2INTERSECTD void _mm256_2intersect_epi32(__m256i, __m256i, __mmask8 *, __mmask8 *);
+
+
VP2INTERSECTD void _mm512_2intersect_epi32(__m512i, __m512i, __mmask16 *, __mmask16 *);
+
+
VP2INTERSECTQ void _mm_2intersect_epi64(__m128i, __m128i, __mmask8 *, __mmask8 *);
+
+
VP2INTERSECTQ void _mm256_2intersect_epi64(__m256i, __m256i, __mmask8 *, __mmask8 *);
+
+
VP2INTERSECTQ void _mm512_2intersect_epi64(__m512i, __m512i, __mmask8 *, __mmask8 *);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/vp4dpwssd.html b/x86/vp4dpwssd.html new file mode 100644 index 0000000..50c2e19 --- /dev/null +++ b/x86/vp4dpwssd.html @@ -0,0 +1,238 @@ + +VP4DPWSSD + — Dot Product of Signed Words With Dword Accumulation (4-Iterations)

VP4DPWSSD + — Dot Product of Signed Words With Dword Accumulation (4-Iterations)

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.512.F2.0F38.W0 52 /r VP4DPWSSD zmm1{k1}{z}, zmm2+3, m128AV/VAVX512_4VNNIWMultiply signed words from source register block indicated by zmm2 by signed words from m128 and accumulate resulting signed dwords in zmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2 Operand 3 Operand 4
ATuple1_4XModRM:reg (r, w)EVEX.vvvv (r) ModRM:r/m (r) N/A
+

Description + ¶ +

+

This instruction computes 4 sequential register source-block dot-products of two signed word operands with doubleword accumulation; see Figure 8-1 below. The memory operand is sequentially selected in each of the four steps.

+

In the above box, the notation of “+3” is used to denote that the instruction accesses 4 source registers based on that operand; sources are consecutive, start at a multiple-of-4 boundary, and contain the encoded register operand.

+

This instruction supports memory fault suppression. The entire memory operand is loaded if any bit of the lowest 16-bits of the mask is set to 1 or if a “no masking” encoding is used.

+

The tuple type Tuple1_4X implies that four 32-bit elements (16 bytes) are referenced by the memory operation portion of this instruction.

+
[Figure 8-1 diagram: 16-bit word pairs a and b are multiplied and accumulated into 32-bit results, e.g., c0 = c0 + a0*b0 + a1*b1 and c1 = c1 + a2*b0 + a3*b1.]
Figure 8-1. Register Source-Block Dot Product of Two Signed Word Operands With Doubleword Accumulation1
+
+

1. For illustration purposes, one source-block dot product instance is shown out of the four.

+
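For reference (this is not the page's own pseudocode), a scalar C rendering of one destination lane of the accumulation above: each of the four register sources contributes one signed-word pair, multiplied against the matching word pair of the 16-byte memory operand. Array names are illustrative.
#include <stdint.h>

/* src1_pairs[m] holds words 2*i and 2*i+1 of register src_base+m;
 * mem_pairs[m] holds the two words of dword m of the memory operand. */
int32_t vp4dpwssd_lane(int32_t acc, const int16_t src1_pairs[4][2], const int16_t mem_pairs[4][2])
{
    for (int m = 0; m < 4; m++)
        acc += (int32_t)src1_pairs[m][0] * mem_pairs[m][0]
             + (int32_t)src1_pairs[m][1] * mem_pairs[m][1];
    return acc;
}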

Operation + ¶ +

+
src_reg_id is the 5 bit index of the vector register specified in the instruction as the src1 register.
+VP4DPWSSD dest, src1, src2
+(KL,VL) = (16,512)
+N := 4
+ORIGDEST := DEST
+src_base := src_reg_id & ~ (N-1) // for src1 operand
+FOR i := 0 to KL-1:
+    IF k1[i] or *no writemask*:
+        FOR m := 0 to N-1:
+            t := SRC2.dword[m]
+            p1dword := reg[src_base+m].word[2*i] * t.word[0]
+            p2dword := reg[src_base+m].word[2*i+1] * t.word[1]
+            DEST.dword[i] := DEST.dword[i] + p1dword + p2dword
+    ELSE IF *zeroing*:
+        DEST.dword[i] := 0
+    ELSE
+        DEST.dword[i] := ORIGDEST.dword[i]
+DEST[MAX_VL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VP4DPWSSD __m512i _mm512_4dpwssd_epi32(__m512i, __m512ix4, __m128i *);
+
+
VP4DPWSSD __m512i _mm512_mask_4dpwssd_epi32(__m512i, __mmask16, __m512ix4, __m128i *);
+
+
VP4DPWSSD __m512i _mm512_maskz_4dpwssd_epi32(__mmask16, __m512i, __m512ix4, __m128i *);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Type E4; additionally:

+ + + + + + +
#UDIf the EVEX broadcast bit is set to 1.
#UDIf the MODRM.mod = 0b11.
diff --git a/x86/vp4dpwssds.html new file mode 100644 index 0000000..735a0a0 --- /dev/null +++ b/x86/vp4dpwssds.html @@ -0,0 +1,96 @@ + +VP4DPWSSDS + — Dot Product of Signed Words With Dword Accumulation and Saturation (4-Iterations)

VP4DPWSSDS + — Dot Product of Signed Words With Dword Accumulation and Saturation (4-Iterations)

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.512.F2.0F38.W0 53 /r VP4DPWSSDS zmm1{k1}{z}, zmm2+3, m128AV/VAVX512_4VNNIWMultiply signed words from source register block indicated by zmm2 by signed words from m128 and accumulate the resulting dword results with signed saturation in zmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/En Tuple Operand 1 Operand 2 Operand 3 Operand 4
A Tuple1_4X ModRM:reg (r, w) EVEX.vvvv (r) ModRM:r/m (r) N/A
+

Description + ¶ +

+

This instruction computes 4 sequential register source-block dot-products of two signed word operands with doubleword accumulation and signed saturation. The memory operand is sequentially selected in each of the four steps.

+

In the above box, the notation of “+3” is used to denote that the instruction accesses 4 source registers based on that operand; sources are consecutive, start in a multiple of 4 boundary, and contain the encoded register operand.

+

This instruction supports memory fault suppression. The entire memory operand is loaded if any bit of the lowest 16-bits of the mask is set to 1 or if a “no masking” encoding is used.

+

The tuple type Tuple1_4X implies that four 32-bit elements (16 bytes) are referenced by the memory operation portion of this instruction.

+

Operation + ¶ +

+
src_reg_id is the 5 bit index of the vector register specified in the instruction as the src1 register.
+
+

VP4DPWSSDS dest, src1, src2 + ¶ +

+
(KL,VL) = (16,512)
+N := 4
+ORIGDEST := DEST
+src_base := src_reg_id & ~ (N-1) // for src1 operand
+FOR i := 0 to KL-1:
+    IF k1[i] or *no writemask*:
+        FOR m := 0 to N-1:
+            t := SRC2.dword[m]
+            p1dword := reg[src_base+m].word[2*i] * t.word[0]
+            p2dword := reg[src_base+m].word[2*i+1] * t.word[1]
+            DEST.dword[i] := SIGNED_DWORD_SATURATE(DEST.dword[i] + p1dword + p2dword)
+    ELSE IF *zeroing*:
+        DEST.dword[i] := 0
+    ELSE
+        DEST.dword[i] := ORIGDEST.dword[i]
+DEST[MAX_VL-1:VL] := 0
+
+
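For clarity, a minimal C sketch of the SIGNED_DWORD_SATURATE step used above (the helper name is illustrative, not a library function): the wide intermediate sum is clamped to the signed 32-bit range before being written back.

#include <stdint.h>

static int32_t signed_dword_saturate(int64_t x) {
    if (x > INT32_MAX) return INT32_MAX;   /* clamp positive overflow */
    if (x < INT32_MIN) return INT32_MIN;   /* clamp negative overflow */
    return (int32_t)x;
}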

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VP4DPWSSDS __m512i _mm512_4dpwssds_epi32(__m512i, __m512ix4, __m128i *);
+
+
VP4DPWSSDS __m512i _mm512_mask_4dpwssds_epi32(__m512i, __mmask16, __m512ix4, __m128i *);
+
+
VP4DPWSSDS __m512i _mm512_maskz_4dpwssds_epi32(__mmask16, __m512i, __m512ix4, __m128i *);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Type E4; additionally:

+ + + + + + +
#UDIf the EVEX broadcast bit is set to 1.
#UDIf the MODRM.mod = 0b11.
diff --git a/x86/vpblendd.html b/x86/vpblendd.html new file mode 100644 index 0000000..ffdecf5 --- /dev/null +++ b/x86/vpblendd.html @@ -0,0 +1,106 @@ + +VPBLENDD + — Blend Packed Dwords

VPBLENDD + — Blend Packed Dwords

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 -bit ModeCPUID Feature FlagDescription
VEX.128.66.0F3A.W0 02 /r ib VPBLENDD xmm1, xmm2, xmm3/m128, imm8RVMIV/VAVX2Select dwords from xmm2 and xmm3/m128 from mask specified in imm8 and store the values into xmm1.
VEX.256.66.0F3A.W0 02 /r ib VPBLENDD ymm1, ymm2, ymm3/m256, imm8RVMIV/VAVX2Select dwords from ymm2 and ymm3/m256 from mask specified in imm8 and store the values into ymm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RVMIModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

Dword elements from the second source operand (third operand) are conditionally written to the destination operand (first operand) depending on bits in the immediate operand (fourth operand). The immediate bits (bits 7:0) form a mask that determines, for each dword position, which source supplies the result. If a bit in the mask is “1”, the corresponding dword is copied from the second source operand; otherwise it is copied from the first source operand (second operand).

+

VEX.128 encoded version: The second source operand can be an XMM register or a 128-bit memory location. The first source and destination operands are XMM registers. Bits (MAXVL-1:128) of the corresponding YMM register are zeroed.

+

VEX.256 encoded version: The first source operand is a YMM register. The second source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register.

+

Operation + ¶ +

+

VPBLENDD (VEX.256 encoded version) + ¶ +

+
IF (imm8[0] == 1) THEN DEST[31:0] := SRC2[31:0]
+ELSE DEST[31:0] := SRC1[31:0]
+IF (imm8[1] == 1) THEN DEST[63:32] := SRC2[63:32]
+ELSE DEST[63:32] := SRC1[63:32]
+IF (imm8[2] == 1) THEN DEST[95:64] := SRC2[95:64]
+ELSE DEST[95:64] := SRC1[95:64]
+IF (imm8[3] == 1) THEN DEST[127:96] := SRC2[127:96]
+ELSE DEST[127:96] := SRC1[127:96]
+IF (imm8[4] == 1) THEN DEST[159:128] := SRC2[159:128]
+ELSE DEST[159:128] := SRC1[159:128]
+IF (imm8[5] == 1) THEN DEST[191:160] := SRC2[191:160]
+ELSE DEST[191:160] := SRC1[191:160]
+IF (imm8[6] == 1) THEN DEST[223:192] := SRC2[223:192]
+ELSE DEST[223:192] := SRC1[223:192]
+IF (imm8[7] == 1) THEN DEST[255:224] := SRC2[255:224]
+ELSE DEST[255:224] := SRC1[255:224]
+
+

VPBLENDD (VEX.128 encoded version) + ¶ +

+
IF (imm8[0] == 1) THEN DEST[31:0] := SRC2[31:0]
+ELSE DEST[31:0] := SRC1[31:0]
+IF (imm8[1] == 1) THEN DEST[63:32] := SRC2[63:32]
+ELSE DEST[63:32] := SRC1[63:32]
+IF (imm8[2] == 1) THEN DEST[95:64] := SRC2[95:64]
+ELSE DEST[95:64] := SRC1[95:64]
+IF (imm8[3] == 1) THEN DEST[127:96] := SRC2[127:96]
+ELSE DEST[127:96] := SRC1[127:96]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPBLENDD: __m128i _mm_blend_epi32 (__m128i v1, __m128i v2, const int mask)
+
+
VPBLENDD: __m256i _mm256_blend_epi32 (__m256i v1, __m256i v2, const int mask)
+
+
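A minimal usage sketch of the AVX2 intrinsic listed above (assumes a compiler and CPU with AVX2 support); the immediate must be a compile-time constant, and each set bit selects the corresponding dword from the second source.

#include <immintrin.h>

/* imm8 = 0xA5 (1010 0101b): dwords 0, 2, 5, and 7 come from b, the rest from a. */
__m256i blend_example(__m256i a, __m256i b) {
    return _mm256_blend_epi32(a, b, 0xA5);
}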

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf VEX.W = 1.
diff --git a/x86/vpblendmb.vpblendmw.html b/x86/vpblendmb.vpblendmw.html new file mode 100644 index 0000000..e735410 --- /dev/null +++ b/x86/vpblendmb.vpblendmw.html @@ -0,0 +1,141 @@ + +VPBLENDMB/VPBLENDMW + — Blend Byte/Word Vectors Using an Opmask Control

VPBLENDMB/VPBLENDMW + — Blend Byte/Word Vectors Using an Opmask Control

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 66 /r VPBLENDMB xmm1 {k1}{z}, xmm2, xmm3/m128AV/VAVX512VL AVX512BWBlend byte integer vector xmm2 and byte vector xmm3/m128 and store the result in xmm1, under control mask.
EVEX.256.66.0F38.W0 66 /r VPBLENDMB ymm1 {k1}{z}, ymm2, ymm3/m256AV/VAVX512VL AVX512BWBlend byte integer vector ymm2 and byte vector ymm3/m256 and store the result in ymm1, under control mask.
EVEX.512.66.0F38.W0 66 /r VPBLENDMB zmm1 {k1}{z}, zmm2, zmm3/m512AV/VAVX512BWBlend byte integer vector zmm2 and byte vector zmm3/m512 and store the result in zmm1, under control mask.
EVEX.128.66.0F38.W1 66 /r VPBLENDMW xmm1 {k1}{z}, xmm2, xmm3/m128AV/VAVX512VL AVX512BWBlend word integer vector xmm2 and word vector xmm3/m128 and store the result in xmm1, under control mask.
EVEX.256.66.0F38.W1 66 /r VPBLENDMW ymm1 {k1}{z}, ymm2, ymm3/m256AV/VAVX512VL AVX512BWBlend word integer vector ymm2 and word vector ymm3/m256 and store the result in ymm1, under control mask.
EVEX.512.66.0F38.W1 66 /r VPBLENDMW zmm1 {k1}{z}, zmm2, zmm3/m512AV/VAVX512BWBlend word integer vector zmm2 and word vector zmm3/m512 and store the result in zmm1, under control mask.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs an element-by-element blending of byte/word elements between the first source operand byte vector register and the second source operand byte vector from memory or register, using the instruction mask as selector. The result is written into the destination byte vector register.

+

The destination and first source operands are ZMM/YMM/XMM registers. The second source operand can be a ZMM/YMM/XMM register or a 512/256/128-bit memory location.

+

The mask is not used as a writemask for this instruction. Instead, the mask is used as an element selector: every element of the destination is conditionally selected between first source or second source using the value of the related mask bit (0 for first source, 1 for second source).

+

Operation + ¶ +

+

VPBLENDMB (EVEX encoded versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := SRC2[i+7:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN DEST[i+7:i] := SRC1[i+7:i]
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+7:i] := 0
+            FI;
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0;
+
+

VPBLENDMW (EVEX encoded versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := SRC2[i+15:i]
+        ELSE
+            IF *merging-masking*
+                THEN DEST[i+15:i] := SRC1[i+15:i]
+                ELSE ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI;
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPBLENDMB __m512i _mm512_mask_blend_epi8(__mmask64 m, __m512i a, __m512i b);
+
+
VPBLENDMB __m256i _mm256_mask_blend_epi8(__mmask32 m, __m256i a, __m256i b);
+
+
VPBLENDMB __m128i _mm_mask_blend_epi8(__mmask16 m, __m128i a, __m128i b);
+
+
VPBLENDMW __m512i _mm512_mask_blend_epi16(__mmask32 m, __m512i a, __m512i b);
+
+
VPBLENDMW __m256i _mm256_mask_blend_epi16(__mmask16 m, __m256i a, __m256i b);
+
+
VPBLENDMW __m128i _mm_mask_blend_epi16(__mmask8 m, __m128i a, __m128i b);
+
+
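A short usage sketch of the 128-bit form listed above (assumes AVX512VL and AVX512BW); note that the mask selects per element rather than acting as a writemask.

#include <immintrin.h>

__m128i blendm_bytes(__mmask16 k, __m128i a, __m128i b) {
    return _mm_mask_blend_epi8(k, a, b);   /* bit j of k: 0 -> byte j of a, 1 -> byte j of b */
}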

SIMD Floating-Point Exceptions + ¶ +

+

None

+

Other Exceptions + ¶ +

+

See Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vpblendmd.vpblendmq.html b/x86/vpblendmd.vpblendmq.html new file mode 100644 index 0000000..aea1ca2 --- /dev/null +++ b/x86/vpblendmd.vpblendmq.html @@ -0,0 +1,152 @@ + +VPBLENDMD/VPBLENDMQ + — Blend Int32/Int64 Vectors Using an OpMask Control

VPBLENDMD/VPBLENDMQ + — Blend Int32/Int64 Vectors Using an OpMask Control

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 64 /r VPBLENDMD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstAV/VAVX512VL AVX512FBlend doubleword integer vector xmm2 and doubleword vector xmm3/m128/m32bcst and store the result in xmm1, under control mask.
EVEX.256.66.0F38.W0 64 /r VPBLENDMD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstAV/VAVX512VL AVX512FBlend doubleword integer vector ymm2 and doubleword vector ymm3/m256/m32bcst and store the result in ymm1, under control mask.
EVEX.512.66.0F38.W0 64 /r VPBLENDMD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstAV/VAVX512FBlend doubleword integer vector zmm2 and doubleword vector zmm3/m512/m32bcst and store the result in zmm1, under control mask.
EVEX.128.66.0F38.W1 64 /r VPBLENDMQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstAV/VAVX512VL AVX512FBlend quadword integer vector xmm2 and quadword vector xmm3/m128/m64bcst and store the result in xmm1, under control mask.
EVEX.256.66.0F38.W1 64 /r VPBLENDMQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstAV/VAVX512VL AVX512FBlend quadword integer vector ymm2 and quadword vector ymm3/m256/m64bcst and store the result in ymm1, under control mask.
EVEX.512.66.0F38.W1 64 /r VPBLENDMQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstAV/VAVX512FBlend quadword integer vector zmm2 and quadword vector zmm3/m512/m64bcst and store the result in zmm1, under control mask.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs an element-by-element blending of dword/qword elements between the first source operand (the second operand) and the elements of the second source operand (the third operand) using an opmask register as select control. The blended result is written into the destination.

+

The destination and first source operands are ZMM registers. The second source operand can be a ZMM register, a 512-bit memory location or a 512-bit vector broadcasted from a 32-bit memory location.

+

The opmask register is not used as a writemask for this instruction. Instead, the mask is used as an element selector: every element of the destination is conditionally selected between first source or second source using the value of the related mask bit (0 for the first source operand, 1 for the second source operand).

+

If EVEX.z is set, the elements with corresponding mask bit value of 0 in the destination operand are zeroed.

+

Operation + ¶ +

+

VPBLENDMD (EVEX encoded versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no controlmask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN
+                    DEST[i+31:i] := SRC2[31:0]
+                ELSE
+                    DEST[i+31:i] := SRC2[i+31:i]
+            FI;
+        ELSE
+            IF *merging-masking*
+                THEN DEST[i+31:i] := SRC1[i+31:i]
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI;
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0;
+
+

VPBLENDMQ (EVEX encoded versions) + ¶ +

+
+(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no controlmask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN
+                    DEST[i+63:i] := SRC2[63:0]
+                ELSE
+                    DEST[i+63:i] := SRC2[i+63:i]
+            FI;
+        ELSE
+            IF *merging-masking*
+                THEN DEST[i+63:i] := SRC1[i+63:i]
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI;
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPBLENDMD __m512i _mm512_mask_blend_epi32(__mmask16 k, __m512i a, __m512i b);
+
+
VPBLENDMD __m256i _mm256_mask_blend_epi32(__mmask8 m, __m256i a, __m256i b);
+
+
VPBLENDMD __m128i _mm_mask_blend_epi32(__mmask8 m, __m128i a, __m128i b);
+
+
VPBLENDMQ __m512i _mm512_mask_blend_epi64(__mmask8 k, __m512i a, __m512i b);
+
+
VPBLENDMQ __m256i _mm256_mask_blend_epi64(__mmask8 m, __m256i a, __m256i b);
+
+
VPBLENDMQ __m128i _mm_mask_blend_epi64(__mmask8 m, __m128i a, __m128i b);
+
+
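As a usage sketch (assumes AVX512F; the helper name is illustrative): a compare mask can drive the blend to pick the smaller dword from each position, which has the same effect as _mm512_min_epi32.

#include <immintrin.h>

__m512i select_smaller(__m512i a, __m512i b) {
    __mmask16 b_is_smaller = _mm512_cmpgt_epi32_mask(a, b);  /* 1 where a > b */
    return _mm512_mask_blend_epi32(b_is_smaller, a, b);      /* take b where a > b, else a */
}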

SIMD Floating-Point Exceptions + ¶ +

+

None

+

Other Exceptions + ¶ +

+

See Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vpbroadcast.html b/x86/vpbroadcast.html new file mode 100644 index 0000000..5c851ab --- /dev/null +++ b/x86/vpbroadcast.html @@ -0,0 +1,924 @@ + +VPBROADCAST + — Load Integer and Broadcast

VPBROADCAST + — Load Integer and Broadcast

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W0 78 /r VPBROADCASTB xmm1, xmm2/m8AV/VAVX2Broadcast a byte integer in the source operand to sixteen locations in xmm1.
VEX.256.66.0F38.W0 78 /r VPBROADCASTB ymm1, xmm2/m8AV/VAVX2Broadcast a byte integer in the source operand to thirty-two locations in ymm1.
EVEX.128.66.0F38.W0 78 /r VPBROADCASTB xmm1{k1}{z}, xmm2/m8BV/VAVX512VL AVX512BWBroadcast a byte integer in the source operand to locations in xmm1 subject to writemask k1.
EVEX.256.66.0F38.W0 78 /r VPBROADCASTB ymm1{k1}{z}, xmm2/m8BV/VAVX512VL AVX512BWBroadcast a byte integer in the source operand to locations in ymm1 subject to writemask k1.
EVEX.512.66.0F38.W0 78 /r VPBROADCASTB zmm1{k1}{z}, xmm2/m8BV/VAVX512BWBroadcast a byte integer in the source operand to 64 locations in zmm1 subject to writemask k1.
VEX.128.66.0F38.W0 79 /r VPBROADCASTW xmm1, xmm2/m16AV/VAVX2Broadcast a word integer in the source operand to eight locations in xmm1.
VEX.256.66.0F38.W0 79 /r VPBROADCASTW ymm1, xmm2/m16AV/VAVX2Broadcast a word integer in the source operand to sixteen locations in ymm1.
EVEX.128.66.0F38.W0 79 /r VPBROADCASTW xmm1{k1}{z}, xmm2/m16BV/VAVX512VL AVX512BWBroadcast a word integer in the source operand to locations in xmm1 subject to writemask k1.
EVEX.256.66.0F38.W0 79 /r VPBROADCASTW ymm1{k1}{z}, xmm2/m16BV/VAVX512VL AVX512BWBroadcast a word integer in the source operand to locations in ymm1 subject to writemask k1.
EVEX.512.66.0F38.W0 79 /r VPBROADCASTW zmm1{k1}{z}, xmm2/m16BV/VAVX512BWBroadcast a word integer in the source operand to 32 locations in zmm1 subject to writemask k1.
VEX.128.66.0F38.W0 58 /r VPBROADCASTD xmm1, xmm2/m32AV/VAVX2Broadcast a dword integer in the source operand to four locations in xmm1.
VEX.256.66.0F38.W0 58 /r VPBROADCASTD ymm1, xmm2/m32AV/VAVX2Broadcast a dword integer in the source operand to eight locations in ymm1.
EVEX.128.66.0F38.W0 58 /r VPBROADCASTD xmm1 {k1}{z}, xmm2/m32BV/VAVX512VL AVX512FBroadcast a dword integer in the source operand to locations in xmm1 subject to writemask k1.
EVEX.256.66.0F38.W0 58 /r VPBROADCASTD ymm1 {k1}{z}, xmm2/m32BV/VAVX512VL AVX512FBroadcast a dword integer in the source operand to locations in ymm1 subject to writemask k1.
EVEX.512.66.0F38.W0 58 /r VPBROADCASTD zmm1 {k1}{z}, xmm2/m32BV/VAVX512FBroadcast a dword integer in the source operand to locations in zmm1 subject to writemask k1.
VEX.128.66.0F38.W0 59 /r VPBROADCASTQ xmm1, xmm2/m64AV/VAVX2Broadcast a qword element in source operand to two locations in xmm1.
VEX.256.66.0F38.W0 59 /r VPBROADCASTQ ymm1, xmm2/m64AV/VAVX2Broadcast a qword element in source operand to four locations in ymm1.
EVEX.128.66.0F38.W1 59 /r VPBROADCASTQ xmm1 {k1}{z}, xmm2/m64BV/VAVX512VL AVX512FBroadcast a qword element in source operand to locations in xmm1 subject to writemask k1.
EVEX.256.66.0F38.W1 59 /r VPBROADCASTQ ymm1 {k1}{z}, xmm2/m64BV/VAVX512VL AVX512FBroadcast a qword element in source operand to locations in ymm1 subject to writemask k1.
EVEX.512.66.0F38.W1 59 /r VPBROADCASTQ zmm1 {k1}{z}, xmm2/m64BV/VAVX512FBroadcast a qword element in source operand to locations in zmm1 subject to writemask k1.
EVEX.128.66.0F38.W0 59 /r VBROADCASTI32x2 xmm1 {k1}{z}, xmm2/m64CV/VAVX512VL AVX512DQBroadcast two dword elements in source operand to locations in xmm1 subject to writemask k1.
EVEX.256.66.0F38.W0 59 /r VBROADCASTI32x2 ymm1 {k1}{z}, xmm2/m64CV/VAVX512VL AVX512DQBroadcast two dword elements in source operand to locations in ymm1 subject to writemask k1.
EVEX.512.66.0F38.W0 59 /r VBROADCASTI32x2 zmm1 {k1}{z}, xmm2/m64CV/VAVX512DQBroadcast two dword elements in source operand to locations in zmm1 subject to writemask k1.
VEX.256.66.0F38.W0 5A /r VBROADCASTI128 ymm1, m128AV/VAVX2Broadcast 128 bits of integer data in mem to low and high 128-bits in ymm1.
EVEX.256.66.0F38.W0 5A /r VBROADCASTI32X4 ymm1 {k1}{z}, m128DV/VAVX512VL AVX512FBroadcast 128 bits of 4 doubleword integer data in mem to locations in ymm1 using writemask k1.
EVEX.512.66.0F38.W0 5A /r VBROADCASTI32X4 zmm1 {k1}{z}, m128DV/VAVX512FBroadcast 128 bits of 4 doubleword integer data in mem to locations in zmm1 using writemask k1.
EVEX.256.66.0F38.W1 5A /r VBROADCASTI64X2 ymm1 {k1}{z}, m128CV/VAVX512VL AVX512DQBroadcast 128 bits of 2 quadword integer data in mem to locations in ymm1 using writemask k1.
EVEX.512.66.0F38.W1 5A /r VBROADCASTI64X2 zmm1 {k1}{z}, m128CV/VAVX512DQBroadcast 128 bits of 2 quadword integer data in mem to locations in zmm1 using writemask k1.
EVEX.512.66.0F38.W0 5B /r VBROADCASTI32X8 zmm1 {k1}{z}, m256EV/VAVX512DQBroadcast 256 bits of 8 doubleword integer data in mem to locations in zmm1 using writemask k1.
EVEX.512.66.0F38.W1 5B /r VBROADCASTI64X4 zmm1 {k1}{z}, m256DV/VAVX512FBroadcast 256 bits of 4 quadword integer data in mem to locations in zmm1 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
BTuple1 ScalarModRM:reg (w)ModRM:r/m (r)N/AN/A
CTuple2ModRM:reg (w)ModRM:r/m (r)N/AN/A
DTuple4ModRM:reg (w)ModRM:r/m (r)N/AN/A
ETuple8ModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Load integer data from the source operand (the second operand) and broadcast to all elements of the destination operand (the first operand).

+

VEX.256-encoded VPBROADCASTB/W/D/Q: The source operand is an 8-bit, 16-bit, 32-bit, or 64-bit memory location, or the low 8-bit, 16-bit, 32-bit, or 64-bit data in an XMM register. The destination operand is a YMM register. VBROADCASTI128 supports only a 128-bit memory location as its source operand; register source encodings for VBROADCASTI128 are reserved and will #UD. Bits (MAXVL-1:256) of the destination register are zeroed.

+

EVEX-encoded VPBROADCASTD/Q: The source operand is a 32-bit, 64-bit memory location or the low 32-bit, 64-bit data in an XMM register. The destination operand is a ZMM/YMM/XMM register and updated according to the writemask k1.

+

VBROADCASTI32X4 and VBROADCASTI64X4: The destination operand is a ZMM register updated according to the writemask k1. The source operand is a 128-bit or 256-bit memory location, respectively. Register source encodings for VBROADCASTI32X4 and VBROADCASTI64X4 are reserved and will #UD.

+

Note: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b otherwise instructions will #UD.

+

An attempt to execute VBROADCASTI128 encoded with VEX.L = 0 will cause an #UD exception.

+
Figure 5-16. VPBROADCASTD Operation (VEX.256 encoded version)
+
Figure 5-17. VPBROADCASTD Operation (128-bit version)
+
Figure 5-18. VPBROADCASTQ Operation (256-bit version)
+
Figure 5-19. VBROADCASTI128 Operation (256-bit version)
+
Figure 5-20. VBROADCASTI256 Operation (512-bit version)
+

Operation + ¶ +

+

VPBROADCASTB (EVEX encoded versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := SRC[7:0]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+7:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPBROADCASTW (EVEX encoded versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := SRC[15:0]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPBROADCASTD (128 bit version) + ¶ +

+
temp := SRC[31:0]
+DEST[31:0] := temp
+DEST[63:32] := temp
+DEST[95:64] := temp
+DEST[127:96] := temp
+DEST[MAXVL-1:128] := 0
+
+

VPBROADCASTD (VEX.256 encoded version) + ¶ +

+
temp := SRC[31:0]
+DEST[31:0] := temp
+DEST[63:32] := temp
+DEST[95:64] := temp
+DEST[127:96] := temp
+DEST[159:128] := temp
+DEST[191:160] := temp
+DEST[223:192] := temp
+DEST[255:224] := temp
+DEST[MAXVL-1:256] := 0
+
+

VPBROADCASTD (EVEX encoded versions) + ¶ +

+
+(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SRC[31:0]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPBROADCASTQ (VEX.256 encoded version) + ¶ +

+
temp := SRC[63:0]
+DEST[63:0] := temp
+DEST[127:64] := temp
+DEST[191:128] := temp
+DEST[255:192] := temp
+DEST[MAXVL-1:256] := 0
+
+

VPBROADCASTQ (EVEX encoded versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := SRC[63:0]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VBROADCASTI32x2 (EVEX encoded versions) + ¶ +

+
+(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    n := (j mod 2) * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SRC[n+31:n]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VBROADCASTI128 (VEX.256 encoded version) + ¶ +

+
temp := SRC[127:0]
+DEST[127:0] := temp
+DEST[255:128] := temp
+DEST[MAXVL-1:256] := 0
+
+

VBROADCASTI32X4 (EVEX encoded versions) + ¶ +

+
(KL, VL) = (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j* 32
+    n := (j modulo 4) * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SRC[n+31:n]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VBROADCASTI64X2 (EVEX encoded versions) + ¶ +

+
+(KL, VL) = (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    n := (j modulo 2) * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := SRC[n+63:n]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] = 0
+            FI
+    FI;
+ENDFOR;
+
+

VBROADCASTI32X8 (EVEX.U1.512 encoded version) + ¶ +

+
FOR j := 0 TO 15
+    i := j * 32
+    n := (j modulo 8) * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SRC[n+31:n]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VBROADCASTI64X4 (EVEX.512 encoded version) + ¶ +

+
FOR j := 0 TO 7
+    i := j * 64
+    n := (j modulo 4) * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := SRC[n+63:n]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPBROADCASTB __m512i _mm512_broadcastb_epi8( __m128i a);
+
+
VPBROADCASTB __m512i _mm512_mask_broadcastb_epi8(__m512i s, __mmask64 k, __m128i a);
+
+
VPBROADCASTB __m512i _mm512_maskz_broadcastb_epi8( __mmask64 k, __m128i a);
+
+
VPBROADCASTB __m256i _mm256_broadcastb_epi8(__m128i a);
+
+
VPBROADCASTB __m256i _mm256_mask_broadcastb_epi8(__m256i s, __mmask32 k, __m128i a);
+
+
VPBROADCASTB __m256i _mm256_maskz_broadcastb_epi8( __mmask32 k, __m128i a);
+
+
VPBROADCASTB __m128i _mm_mask_broadcastb_epi8(__m128i s, __mmask16 k, __m128i a);
+
+
VPBROADCASTB __m128i _mm_maskz_broadcastb_epi8( __mmask16 k, __m128i a);
+
+
VPBROADCASTB __m128i _mm_broadcastb_epi8(__m128i a);
+
+
VPBROADCASTD __m512i _mm512_broadcastd_epi32( __m128i a);
+
+
VPBROADCASTD __m512i _mm512_mask_broadcastd_epi32(__m512i s, __mmask16 k, __m128i a);
+
+
VPBROADCASTD __m512i _mm512_maskz_broadcastd_epi32( __mmask16 k, __m128i a);
+
+
VPBROADCASTD __m256i _mm256_broadcastd_epi32( __m128i a);
+
+
VPBROADCASTD __m256i _mm256_mask_broadcastd_epi32(__m256i s, __mmask8 k, __m128i a);
+
+
VPBROADCASTD __m256i _mm256_maskz_broadcastd_epi32( __mmask8 k, __m128i a);
+
+
VPBROADCASTD __m128i _mm_broadcastd_epi32(__m128i a);
+
+
VPBROADCASTD __m128i _mm_mask_broadcastd_epi32(__m128i s, __mmask8 k, __m128i a);
+
+
VPBROADCASTD __m128i _mm_maskz_broadcastd_epi32( __mmask8 k, __m128i a);
+
+
VPBROADCASTQ __m512i _mm512_broadcastq_epi64( __m128i a);
+
+
VPBROADCASTQ __m512i _mm512_mask_broadcastq_epi64(__m512i s, __mmask8 k, __m128i a);
+
+
VPBROADCASTQ __m512i _mm512_maskz_broadcastq_epi64( __mmask8 k, __m128i a);
+
+
VPBROADCASTQ __m256i _mm256_broadcastq_epi64(__m128i a);
+
+
VPBROADCASTQ __m256i _mm256_mask_broadcastq_epi64(__m256i s, __mmask8 k, __m128i a);
+
+
VPBROADCASTQ __m256i _mm256_maskz_broadcastq_epi64( __mmask8 k, __m128i a);
+
+
VPBROADCASTQ __m128i _mm_broadcastq_epi64(__m128i a);
+
+
VPBROADCASTQ __m128i _mm_mask_broadcastq_epi64(__m128i s, __mmask8 k, __m128i a);
+
+
VPBROADCASTQ __m128i _mm_maskz_broadcastq_epi64( __mmask8 k, __m128i a);
+
+
VPBROADCASTW __m512i _mm512_broadcastw_epi16(__m128i a);
+
+
VPBROADCASTW __m512i _mm512_mask_broadcastw_epi16(__m512i s, __mmask32 k, __m128i a);
+
+
VPBROADCASTW __m512i _mm512_maskz_broadcastw_epi16( __mmask32 k, __m128i a);
+
+
VPBROADCASTW __m256i _mm256_broadcastw_epi16(__m128i a);
+
+
VPBROADCASTW __m256i _mm256_mask_broadcastw_epi16(__m256i s, __mmask16 k, __m128i a);
+
+
VPBROADCASTW __m256i _mm256_maskz_broadcastw_epi16( __mmask16 k, __m128i a);
+
+
VPBROADCASTW __m128i _mm_broadcastw_epi16(__m128i a);
+
+
VPBROADCASTW __m128i _mm_mask_broadcastw_epi16(__m128i s, __mmask8 k, __m128i a);
+
+
VPBROADCASTW __m128i _mm_maskz_broadcastw_epi16( __mmask8 k, __m128i a);
+
+
VBROADCASTI32x2 __m512i _mm512_broadcast_i32x2( __m128i a);
+
+
VBROADCASTI32x2 __m512i _mm512_mask_broadcast_i32x2(__m512i s, __mmask16 k, __m128i a);
+
+
VBROADCASTI32x2 __m512i _mm512_maskz_broadcast_i32x2( __mmask16 k, __m128i a);
+
+
VBROADCASTI32x2 __m256i _mm256_broadcast_i32x2( __m128i a);
+
+
VBROADCASTI32x2 __m256i _mm256_mask_broadcast_i32x2(__m256i s, __mmask8 k, __m128i a);
+
+
VBROADCASTI32x2 __m256i _mm256_maskz_broadcast_i32x2( __mmask8 k, __m128i a);
+
+
VBROADCASTI32x2 __m128i _mm_broadcast_i32x2(__m128i a);
+
+
VBROADCASTI32x2 __m128i _mm_mask_broadcast_i32x2(__m128i s, __mmask8 k, __m128i a);
+
+
VBROADCASTI32x2 __m128i _mm_maskz_broadcast_i32x2( __mmask8 k, __m128i a);
+
+
VBROADCASTI32x4 __m512i _mm512_broadcast_i32x4( __m128i a);
+
+
VBROADCASTI32x4 __m512i _mm512_mask_broadcast_i32x4(__m512i s, __mmask16 k, __m128i a);
+
+
VBROADCASTI32x4 __m512i _mm512_maskz_broadcast_i32x4( __mmask16 k, __m128i a);
+
+
VBROADCASTI32x4 __m256i _mm256_broadcast_i32x4( __m128i a);
+
+
VBROADCASTI32x4 __m256i _mm256_mask_broadcast_i32x4(__m256i s, __mmask8 k, __m128i a);
+
+
VBROADCASTI32x4 __m256i _mm256_maskz_broadcast_i32x4( __mmask8 k, __m128i a);
+
+
VBROADCASTI32x8 __m512i _mm512_broadcast_i32x8( __m256i a);
+
+
VBROADCASTI32x8 __m512i _mm512_mask_broadcast_i32x8(__m512i s, __mmask16 k, __m256i a);
+
+
VBROADCASTI32x8 __m512i _mm512_maskz_broadcast_i32x8( __mmask16 k, __m256i a);
+
+
VBROADCASTI64x2 __m512i _mm512_broadcast_i64x2( __m128i a);
+
+
VBROADCASTI64x2 __m512i _mm512_mask_broadcast_i64x2(__m512i s, __mmask8 k, __m128i a);
+
+
VBROADCASTI64x2 __m512i _mm512_maskz_broadcast_i64x2( __mmask8 k, __m128i a);
+
+
VBROADCASTI64x2 __m256i _mm256_broadcast_i64x2( __m128i a);
+
+
VBROADCASTI64x2 __m256i _mm256_mask_broadcast_i64x2(__m256i s, __mmask8 k, __m128i a);
+
+
VBROADCASTI64x2 __m256i _mm256_maskz_broadcast_i64x2( __mmask8 k, __m128i a);
+
+
VBROADCASTI64x4 __m512i _mm512_broadcast_i64x4( __m256i a);
+
+
VBROADCASTI64x4 __m512i _mm512_mask_broadcast_i64x4(__m512i s, __mmask8 k, __m256i a);
+
+
VBROADCASTI64x4 __m512i _mm512_maskz_broadcast_i64x4( __mmask8 k, __m256i a);
+
+
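Two brief usage sketches of the intrinsics listed above (the 256-bit form assumes AVX2, the 512-bit form assumes AVX512F): the low element of the XMM source is replicated across every element of the destination.

#include <immintrin.h>

__m256i bcast_dword_256(__m128i x) {
    return _mm256_broadcastd_epi32(x);   /* VPBROADCASTD ymm, xmm */
}

__m512i bcast_qword_512(__m128i x) {
    return _mm512_broadcastq_epi64(x);   /* VPBROADCASTQ zmm, xmm */
}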

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-23, “Type 6 Class Exception Conditions.”

+

EVEX-encoded instructions, syntax with reg/mem operand, see Table 2-53, “Type E6 Class Exception Conditions.”

+

Additionally:

+ + + + + + + +
#UDIf VEX.L = 0 for VPBROADCASTQ, VPBROADCASTI128.
If EVEX.L’L = 0 for VBROADCASTI32X4/VBROADCASTI64X2.
If EVEX.L’L < 10b for VBROADCASTI32X8/VBROADCASTI64X4.
diff --git a/x86/vpbroadcastb.vpbroadcastw.vpbroadcastd.vpbroadcastq.html b/x86/vpbroadcastb.vpbroadcastw.vpbroadcastd.vpbroadcastq.html new file mode 100644 index 0000000..6b3ec4b --- /dev/null +++ b/x86/vpbroadcastb.vpbroadcastw.vpbroadcastd.vpbroadcastq.html @@ -0,0 +1,257 @@ + +VPBROADCASTB/VPBROADCASTW/VPBROADCASTD/VPBROADCASTQ + — Load With Broadcast Integer Data From General Purpose Register

VPBROADCASTB/VPBROADCASTW/VPBROADCASTD/VPBROADCASTQ + — Load With Broadcast Integer Data From General Purpose Register

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 7A /r VPBROADCASTB xmm1 {k1}{z}, regAV/VAVX512VL AVX512BWBroadcast an 8-bit value from a GPR to all bytes in the 128-bit destination subject to writemask k1.
EVEX.256.66.0F38.W0 7A /r VPBROADCASTB ymm1 {k1}{z}, regAV/VAVX512VL AVX512BWBroadcast an 8-bit value from a GPR to all bytes in the 256-bit destination subject to writemask k1.
EVEX.512.66.0F38.W0 7A /r VPBROADCASTB zmm1 {k1}{z}, regAV/VAVX512BWBroadcast an 8-bit value from a GPR to all bytes in the 512-bit destination subject to writemask k1.
EVEX.128.66.0F38.W0 7B /r VPBROADCASTW xmm1 {k1}{z}, regAV/VAVX512VL AVX512BWBroadcast a 16-bit value from a GPR to all words in the 128-bit destination subject to writemask k1.
EVEX.256.66.0F38.W0 7B /r VPBROADCASTW ymm1 {k1}{z}, regAV/VAVX512VL AVX512BWBroadcast a 16-bit value from a GPR to all words in the 256-bit destination subject to writemask k1.
EVEX.512.66.0F38.W0 7B /r VPBROADCASTW zmm1 {k1}{z}, regAV/VAVX512BWBroadcast a 16-bit value from a GPR to all words in the 512-bit destination subject to writemask k1.
EVEX.128.66.0F38.W0 7C /r VPBROADCASTD xmm1 {k1}{z}, r32AV/VAVX512VL AVX512FBroadcast a 32-bit value from a GPR to all doublewords in the 128-bit destination subject to writemask k1.
EVEX.256.66.0F38.W0 7C /r VPBROADCASTD ymm1 {k1}{z}, r32AV/VAVX512VL AVX512FBroadcast a 32-bit value from a GPR to all doublewords in the 256-bit destination subject to writemask k1.
EVEX.512.66.0F38.W0 7C /r VPBROADCASTD zmm1 {k1}{z}, r32AV/VAVX512FBroadcast a 32-bit value from a GPR to all doublewords in the 512-bit destination subject to writemask k1.
EVEX.128.66.0F38.W1 7C /r VPBROADCASTQ xmm1 {k1}{z}, r64AV/N.E.1AVX512VL AVX512FBroadcast a 64-bit value from a GPR to all quadwords in the 128-bit destination subject to writemask k1.
EVEX.256.66.0F38.W1 7C /r VPBROADCASTQ ymm1 {k1}{z}, r64AV/N.E.1AVX512VL AVX512FBroadcast a 64-bit value from a GPR to all quadwords in the 256-bit destination subject to writemask k1.
EVEX.512.66.0F38.W1 7C /r VPBROADCASTQ zmm1 {k1}{z}, r64AV/N.E.1AVX512FBroadcast a 64-bit value from a GPR to all quadwords in the 512-bit destination subject to writemask k1.
+
+

1. EVEX.W in non-64 bit is ignored; the instruction behaves as if the W0 version is used.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Broadcasts an 8-bit, 16-bit, 32-bit, or 64-bit value from a general-purpose register (the second operand) to all the locations in the destination vector register (the first operand) using the writemask k1.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+

VPBROADCASTB (EVEX encoded versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := SRC[7:0]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+7:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPBROADCASTW (EVEX encoded versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := SRC[15:0]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPBROADCASTD (EVEX encoded versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SRC[31:0]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPBROADCASTQ (EVEX encoded versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := SRC[63:0]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPBROADCASTB __m512i _mm512_mask_set1_epi8(__m512i s, __mmask64 k, int a);
+
+
VPBROADCASTB __m512i _mm512_maskz_set1_epi8( __mmask64 k, int a);
+
+
VPBROADCASTB __m256i _mm256_mask_set1_epi8(__m256i s, __mmask32 k, int a);
+
+
VPBROADCASTB __m256i _mm256_maskz_set1_epi8( __mmask32 k, int a);
+
+
VPBROADCASTB __m128i _mm_mask_set1_epi8(__m128i s, __mmask16 k, int a);
+
+
VPBROADCASTB __m128i _mm_maskz_set1_epi8( __mmask16 k, int a);
+
+
VPBROADCASTD __m512i _mm512_mask_set1_epi32(__m512i s, __mmask16 k, int a);
+
+
VPBROADCASTD __m512i _mm512_maskz_set1_epi32( __mmask16 k, int a);
+
+
VPBROADCASTD __m256i _mm256_mask_set1_epi32(__m256i s, __mmask8 k, int a);
+
+
VPBROADCASTD __m256i _mm256_maskz_set1_epi32( __mmask8 k, int a);
+
+
VPBROADCASTD __m128i _mm_mask_set1_epi32(__m128i s, __mmask8 k, int a);
+
+
VPBROADCASTD __m128i _mm_maskz_set1_epi32( __mmask8 k, int a);
+
+
VPBROADCASTQ __m512i _mm512_mask_set1_epi64(__m512i s, __mmask8 k, __int64 a);
+
+
VPBROADCASTQ __m512i _mm512_maskz_set1_epi64( __mmask8 k, __int64 a);
+
+
VPBROADCASTQ __m256i _mm256_mask_set1_epi64(__m256i s, __mmask8 k, __int64 a);
+
+
VPBROADCASTQ __m256i _mm256_maskz_set1_epi64( __mmask8 k, __int64 a);
+
+
VPBROADCASTQ __m128i _mm_mask_set1_epi64(__m128i s, __mmask8 k, __int64 a);
+
+
VPBROADCASTQ __m128i _mm_maskz_set1_epi64( __mmask8 k, __int64 a);
+
+
VPBROADCASTW __m512i _mm512_mask_set1_epi16(__m512i s, __mmask32 k, int a);
+
+
VPBROADCASTW __m512i _mm512_maskz_set1_epi16( __mmask32 k, int a);
+
+
VPBROADCASTW __m256i _mm256_mask_set1_epi16(__m256i s, __mmask16 k, int a);
+
+
VPBROADCASTW __m256i _mm256_maskz_set1_epi16( __mmask16 k, int a);
+
+
VPBROADCASTW __m128i _mm_mask_set1_epi16(__m128i s, __mmask8 k, int a);
+
+
VPBROADCASTW __m128i _mm_maskz_set1_epi16( __mmask8 k, int a);
+
+
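A short sketch using one of the masked set1 intrinsics above (assumes AVX512F): elements whose mask bit is 0 keep their value from src (merging), while elements whose mask bit is 1 receive the broadcast general-purpose register value.

#include <immintrin.h>

__m512i bcast_gpr_dword(__m512i src, __mmask16 k, int value) {
    return _mm512_mask_set1_epi32(src, k, value);   /* VPBROADCASTD zmm1{k1}, r32 */
}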

Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-55, “Type E7NM Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vpbroadcastm.html b/x86/vpbroadcastm.html new file mode 100644 index 0000000..1cd324c --- /dev/null +++ b/x86/vpbroadcastm.html @@ -0,0 +1,119 @@ + +VPBROADCASTM + — Broadcast Mask to Vector Register

VPBROADCASTM + — Broadcast Mask to Vector Register

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.F3.0F38.W1 2A /r VPBROADCASTMB2Q xmm1, k1RMV/VAVX512VL AVX512CDBroadcast low byte value in k1 to two locations in xmm1.
EVEX.256.F3.0F38.W1 2A /r VPBROADCASTMB2Q ymm1, k1RMV/VAVX512VL AVX512CDBroadcast low byte value in k1 to four locations in ymm1.
EVEX.512.F3.0F38.W1 2A /r VPBROADCASTMB2Q zmm1, k1RMV/VAVX512CDBroadcast low byte value in k1 to eight locations in zmm1.
EVEX.128.F3.0F38.W0 3A /r VPBROADCASTMW2D xmm1, k1RMV/VAVX512VL AVX512CDBroadcast low word value in k1 to four locations in xmm1.
EVEX.256.F3.0F38.W0 3A /r VPBROADCASTMW2D ymm1, k1RMV/VAVX512VL AVX512CDBroadcast low word value in k1 to eight locations in ymm1.
EVEX.512.F3.0F38.W0 3A /r VPBROADCASTMW2D zmm1, k1RMV/VAVX512CDBroadcast low word value in k1 to sixteen locations in zmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Broadcasts the zero-extended 64/32 bit value of the low byte/word of the source operand (the second operand) to each 64/32 bit element of the destination operand (the first operand). The source operand is an opmask register. The destination operand is a ZMM register (EVEX.512), YMM register (EVEX.256), or XMM register (EVEX.128).

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+

VPBROADCASTMB2Q + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j*64
+    DEST[i+63:i] := ZeroExtend(SRC[7:0])
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPBROADCASTMW2D + ¶ +

+
(KL, VL) = (4, 128), (8, 256),(16, 512)
+FOR j := 0 TO KL-1
+    i := j*32
+    DEST[i+31:i] := ZeroExtend(SRC[15:0])
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPBROADCASTMB2Q __m512i _mm512_broadcastmb_epi64( __mmask8);
+
+
VPBROADCASTMW2D __m512i _mm512_broadcastmw_epi32( __mmask16);
+
+
VPBROADCASTMB2Q __m256i _mm256_broadcastmb_epi64( __mmask8);
+
+
VPBROADCASTMW2D __m256i _mm256_broadcastmw_epi32( __mmask8);
+
+
VPBROADCASTMB2Q __m128i _mm_broadcastmb_epi64( __mmask8);
+
+
VPBROADCASTMW2D __m128i _mm_broadcastmw_epi32( __mmask8);
+
+
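A usage sketch (assumes AVX512CD): the 16-bit mask value is zero-extended and replicated into every dword of the result.

#include <immintrin.h>

__m512i mask_to_dwords(__mmask16 k) {
    return _mm512_broadcastmw_epi32(k);   /* VPBROADCASTMW2D zmm, k */
}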

SIMD Floating-Point Exceptions + ¶ +

+

None

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Table 2-54, “Type E6NF Class Exception Conditions.”

diff --git a/x86/vpcmpb.vpcmpub.html b/x86/vpcmpb.vpcmpub.html new file mode 100644 index 0000000..ea6f47d --- /dev/null +++ b/x86/vpcmpb.vpcmpub.html @@ -0,0 +1,212 @@ + +VPCMPB/VPCMPUB + — Compare Packed Byte Values Into Mask

VPCMPB/VPCMPUB + — Compare Packed Byte Values Into Mask

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F3A.W0 3F /r ib VPCMPB k1 {k2}, xmm2, xmm3/m128, imm8AV/VAVX512VL AVX512BWCompare packed signed byte values in xmm3/m128 and xmm2 using bits 2:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
EVEX.256.66.0F3A.W0 3F /r ib VPCMPB k1 {k2}, ymm2, ymm3/m256, imm8AV/VAVX512VL AVX512BWCompare packed signed byte values in ymm3/m256 and ymm2 using bits 2:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
EVEX.512.66.0F3A.W0 3F /r ib VPCMPB k1 {k2}, zmm2, zmm3/m512, imm8AV/VAVX512BWCompare packed signed byte values in zmm3/m512 and zmm2 using bits 2:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
EVEX.128.66.0F3A.W0 3E /r ib VPCMPUB k1 {k2}, xmm2, xmm3/m128, imm8AV/VAVX512VL AVX512BWCompare packed unsigned byte values in xmm3/m128 and xmm2 using bits 2:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
EVEX.256.66.0F3A.W0 3E /r ib VPCMPUB k1 {k2}, ymm2, ymm3/m256, imm8AV/VAVX512VL AVX512BWCompare packed unsigned byte values in ymm3/m256 and ymm2 using bits 2:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
EVEX.512.66.0F3A.W0 3E /r ib VPCMPUB k1 {k2}, zmm2, zmm3/m512, imm8AV/VAVX512BWCompare packed unsigned byte values in zmm3/m512 and zmm2 using bits 2:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a SIMD compare of the packed byte values in the second source operand and the first source operand and returns the results of the comparison to the mask destination operand. The comparison predicate operand (immediate byte) specifies the type of comparison performed on each pair of packed values in the two source operands. The result of each comparison is a single mask bit result of 1 (comparison true) or 0 (comparison false).

+

VPCMPB performs a comparison between pairs of signed byte values.

+

VPCMPUB performs a comparison between pairs of unsigned byte values.

+

The first source operand (second operand) is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination operand (first operand) is a mask register k1. Up to 64/32/16 comparisons are performed with results written to the destination operand under the writemask k2.

+

The comparison predicate operand is an 8-bit immediate: bits 2:0 define the type of comparison to be performed. Bits 3 through 7 of the immediate are reserved. Compilers can implement the pseudo-op mnemonics listed in Table 5-21.

+
+ + + + + + + + + + + + + + + + + + + + + +
Pseudo-OpPCMPM Implementation
VPCMPEQ* reg1, reg2, reg3VPCMP* reg1, reg2, reg3, 0
VPCMPLT* reg1, reg2, reg3VPCMP* reg1, reg2, reg3, 1
VPCMPLE* reg1, reg2, reg3VPCMP* reg1, reg2, reg3, 2
VPCMPNEQ* reg1, reg2, reg3VPCMP* reg1, reg2, reg3, 4
VPCMPNLT* reg1, reg2, reg3VPCMP* reg1, reg2, reg3, 5
VPCMPNLE* reg1, reg2, reg3VPCMP* reg1, reg2, reg3, 6
+
Table 5-21. Pseudo-Op and VPCMP* Implementation
+

Operation + ¶ +

+
CASE (COMPARISON PREDICATE) OF
+    0: OP := EQ;
+    1: OP := LT;
+    2: OP := LE;
+    3: OP := FALSE;
+    4: OP := NEQ;
+    5: OP := NLT;
+    6: OP := NLE;
+    7: OP := TRUE;
+ESAC;
+
+

VPCMPB (EVEX encoded versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k2[j] OR *no writemask*
+        THEN
+            CMP := SRC1[i+7:i] OP SRC2[i+7:i];
+            IF CMP = TRUE
+                THEN DEST[j] := 1;
+                ELSE DEST[j] := 0; FI;
+        ELSE DEST[j] := 0
+                    ; zeroing-masking only
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

VPCMPUB (EVEX encoded versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k2[j] OR *no writemask*
+        THEN
+            CMP := SRC1[i+7:i] OP SRC2[i+7:i];
+            IF CMP = TRUE
+                THEN DEST[j] := 1;
+                ELSE DEST[j] := 0; FI;
+        ELSE DEST[j] := 0
+                    ; zeroing-masking only
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPCMPB __mmask64 _mm512_cmp_epi8_mask( __m512i a, __m512i b, int cmp);
+
+
VPCMPB __mmask64 _mm512_mask_cmp_epi8_mask( __mmask64 m, __m512i a, __m512i b, int cmp);
+
+
VPCMPB __mmask32 _mm256_cmp_epi8_mask( __m256i a, __m256i b, int cmp);
+
+
VPCMPB __mmask32 _mm256_mask_cmp_epi8_mask( __mmask32 m, __m256i a, __m256i b, int cmp);
+
+
VPCMPB __mmask16 _mm_cmp_epi8_mask( __m128i a, __m128i b, int cmp);
+
+
VPCMPB __mmask16 _mm_mask_cmp_epi8_mask( __mmask16 m, __m128i a, __m128i b, int cmp);
+
+
VPCMPB __mmask64 _mm512_cmp[eq|ge|gt|le|lt|neq]_epi8_mask( __m512i a, __m512i b);
+
+
VPCMPB __mmask64 _mm512_mask_cmp[eq|ge|gt|le|lt|neq]_epi8_mask( __mmask64 m, __m512i a, __m512i b);
+
+
VPCMPB __mmask32 _mm256_cmp[eq|ge|gt|le|lt|neq]_epi8_mask( __m256i a, __m256i b);
+
+
VPCMPB __mmask32 _mm256_mask_cmp[eq|ge|gt|le|lt|neq]_epi8_mask( __mmask32 m, __m256i a, __m256i b);
+
+
VPCMPB __mmask16 _mm_cmp[eq|ge|gt|le|lt|neq]_epi8_mask( __m128i a, __m128i b);
+
+
VPCMPB __mmask16 _mm_mask_cmp[eq|ge|gt|le|lt|neq]_epi8_mask( __mmask16 m, __m128i a, __m128i b);
+
+
VPCMPUB __mmask64 _mm512_cmp_epu8_mask( __m512i a, __m512i b, int cmp);
+
+
VPCMPUB __mmask64 _mm512_mask_cmp_epu8_mask( __mmask64 m, __m512i a, __m512i b, int cmp);
+
+
VPCMPUB __mmask32 _mm256_cmp_epu8_mask( __m256i a, __m256i b, int cmp);
+
+
VPCMPUB __mmask32 _mm256_mask_cmp_epu8_mask( __mmask32 m, __m256i a, __m256i b, int cmp);
+
+
VPCMPUB __mmask16 _mm_cmp_epu8_mask( __m128i a, __m128i b, int cmp);
+
+
VPCMPUB __mmask16 _mm_mask_cmp_epu8_mask( __mmask16 m, __m128i a, __m128i b, int cmp);
+
+
VPCMPUB __mmask64 _mm512_cmp[eq|ge|gt|le|lt|neq]_epu8_mask( __m512i a, __m512i b, int cmp);
+
+
VPCMPUB __mmask64 _mm512_mask_cmp[eq|ge|gt|le|lt|neq]_epu8_mask( __mmask64 m, __m512i a, __m512i b, int cmp);
+
+
VPCMPUB __mmask32 _mm256_cmp[eq|ge|gt|le|lt|neq]_epu8_mask( __m256i a, __m256i b, int cmp);
+
+
VPCMPUB __mmask32 _mm256_mask_cmp[eq|ge|gt|le|lt|neq]_epu8_mask( __mmask32 m, __m256i a, __m256i b, int cmp);
+
+
VPCMPUB __mmask16 _mm_cmp[eq|ge|gt|le|lt|neq]_epu8_mask( __m128i a, __m128i b, int cmp);
+
+
VPCMPUB __mmask16 _mm_mask_cmp[eq|ge|gt|le|lt|neq]_epu8_mask( __mmask16 m, __m128i a, __m128i b, int cmp);
+
+
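A short sketch of the generic compare intrinsic above (assumes AVX512BW); the predicate constant _MM_CMPINT_LT corresponds to imm8 value 1 from the comparison-type encoding.

#include <immintrin.h>

__mmask64 bytes_lt(__m512i a, __m512i b) {
    return _mm512_cmp_epi8_mask(a, b, _MM_CMPINT_LT);   /* bit j = 1 if a.byte[j] < b.byte[j] */
}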

SIMD Floating-Point Exceptions + ¶ +

+

None

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vpcmpd.vpcmpud.html b/x86/vpcmpd.vpcmpud.html new file mode 100644 index 0000000..6adfc3f --- /dev/null +++ b/x86/vpcmpd.vpcmpud.html @@ -0,0 +1,193 @@ + +VPCMPD/VPCMPUD + — Compare Packed Integer Values Into Mask

VPCMPD/VPCMPUD + — Compare Packed Integer Values Into Mask

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F3A.W0 1F /r ib VPCMPD k1 {k2}, xmm2, xmm3/m128/m32bcst, imm8AV/VAVX512VL AVX512FCompare packed signed doubleword integer values in xmm3/m128/m32bcst and xmm2 using bits 2:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
EVEX.256.66.0F3A.W0 1F /r ib VPCMPD k1 {k2}, ymm2, ymm3/m256/m32bcst, imm8AV/VAVX512VL AVX512FCompare packed signed doubleword integer values in ymm3/m256/m32bcst and ymm2 using bits 2:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
EVEX.512.66.0F3A.W0 1F /r ib VPCMPD k1 {k2}, zmm2, zmm3/m512/m32bcst, imm8AV/VAVX512FCompare packed signed doubleword integer values in zmm2 and zmm3/m512/m32bcst using bits 2:0 of imm8 as a comparison predicate. The comparison results are written to the destination k1 under writemask k2.
EVEX.128.66.0F3A.W0 1E /r ib VPCMPUD k1 {k2}, xmm2, xmm3/m128/m32bcst, imm8AV/VAVX512VL AVX512FCompare packed unsigned doubleword integer values in xmm3/m128/m32bcst and xmm2 using bits 2:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
EVEX.256.66.0F3A.W0 1E /r ib VPCMPUD k1 {k2}, ymm2, ymm3/m256/m32bcst, imm8AV/VAVX512VL AVX512FCompare packed unsigned doubleword integer values in ymm3/m256/m32bcst and ymm2 using bits 2:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
EVEX.512.66.0F3A.W0 1E /r ib VPCMPUD k1 {k2}, zmm2, zmm3/m512/m32bcst, imm8AV/VAVX512FCompare packed unsigned doubleword integer values in zmm2 and zmm3/m512/m32bcst using bits 2:0 of imm8 as a comparison predicate. The comparison results are written to the destination k1 under writemask k2.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

Performs a SIMD compare of the packed integer values in the second source operand and the first source operand and returns the results of the comparison to the mask destination operand. The comparison predicate operand (immediate byte) specifies the type of comparison performed on each pair of packed values in the two source operands. The result of each comparison is a single mask bit result of 1 (comparison true) or 0 (comparison false).

+

VPCMPD/VPCMPUD performs a comparison between pairs of signed/unsigned doubleword integer values.

+

The first source operand (second operand) is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register or a 512/256/128-bit memory location or a 512-bit vector broadcasted from a 32-bit memory location. The destination operand (first operand) is a mask register k1. Up to 16/8/4 comparisons are performed with results written to the destination operand under the writemask k2.

+

The comparison predicate operand is an 8-bit immediate: bits 2:0 define the type of comparison to be performed. Bits 3 through 7 of the immediate are reserved. Compilers can implement the pseudo-op mnemonics listed in Table 5-21.

+

Operation + ¶ +

+
CASE (COMPARISON PREDICATE) OF
+    0: OP := EQ;
+    1: OP := LT;
+    2: OP := LE;
+    3: OP := FALSE;
+    4: OP := NEQ;
+    5: OP := NLT;
+    6: OP := NLE;
+    7: OP := TRUE;
+ESAC;
+
+

VPCMPD (EVEX encoded versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k2[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN CMP := SRC1[i+31:i] OP SRC2[31:0];
+                ELSE CMP := SRC1[i+31:i] OP SRC2[i+31:i];
+            FI;
+            IF CMP = TRUE
+                THEN DEST[j] := 1;
+                ELSE DEST[j] := 0; FI;
+        ELSE DEST[j] := 0
+                    ; zeroing-masking only
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

VPCMPUD (EVEX encoded versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k2[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN CMP := SRC1[i+31:i] OP SRC2[31:0];
+                ELSE CMP := SRC1[i+31:i] OP SRC2[i+31:i];
+            FI;
+            IF CMP = TRUE
+                THEN DEST[j] := 1;
+                ELSE DEST[j] := 0; FI;
+        ELSE DEST[j] := 0
+                    ; zeroing-masking only
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPCMPD __mmask16 _mm512_cmp_epi32_mask( __m512i a, __m512i b, int imm);
+
+
VPCMPD __mmask16 _mm512_mask_cmp_epi32_mask(__mmask16 k, __m512i a, __m512i b, int imm);
+
+
VPCMPD __mmask16 _mm512_cmp[eq|ge|gt|le|lt|neq]_epi32_mask( __m512i a, __m512i b);
+
+
VPCMPD __mmask16 _mm512_mask_cmp[eq|ge|gt|le|lt|neq]_epi32_mask(__mmask16 k, __m512i a, __m512i b);
+
+
VPCMPUD __mmask16 _mm512_cmp_epu32_mask( __m512i a, __m512i b, int imm);
+
+
VPCMPUD __mmask16 _mm512_mask_cmp_epu32_mask(__mmask16 k, __m512i a, __m512i b, int imm);
+
+
VPCMPUD __mmask16 _mm512_cmp[eq|ge|gt|le|lt|neq]_epu32_mask( __m512i a, __m512i b);
+
+
VPCMPUD __mmask16 _mm512_mask_cmp[eq|ge|gt|le|lt|neq]_epu32_mask(__mmask16 k, __m512i a, __m512i b);
+
+
VPCMPD __mmask8 _mm256_cmp_epi32_mask( __m256i a, __m256i b, int imm);
+
+
VPCMPD __mmask8 _mm256_mask_cmp_epi32_mask(__mmask8 k, __m256i a, __m256i b, int imm);
+
+
VPCMPD __mmask8 _mm256_cmp[eq|ge|gt|le|lt|neq]_epi32_mask( __m256i a, __m256i b);
+
+
VPCMPD __mmask8 _mm256_mask_cmp[eq|ge|gt|le|lt|neq]_epi32_mask(__mmask8 k, __m256i a, __m256i b);
+
+
VPCMPUD __mmask8 _mm256_cmp_epu32_mask( __m256i a, __m256i b, int imm);
+
+
VPCMPUD __mmask8 _mm256_mask_cmp_epu32_mask(__mmask8 k, __m256i a, __m256i b, int imm);
+
+
VPCMPUD __mmask8 _mm256_cmp[eq|ge|gt|le|lt|neq]_epu32_mask( __m256i a, __m256i b);
+
+
VPCMPUD __mmask8 _mm256_mask_cmp[eq|ge|gt|le|lt|neq]_epu32_mask(__mmask8 k, __m256i a, __m256i b);
+
+
VPCMPD __mmask8 _mm_cmp_epi32_mask( __m128i a, __m128i b, int imm);
+
+
VPCMPD __mmask8 _mm_mask_cmp_epi32_mask(__mmask8 k, __m128i a, __m128i b, int imm);
+
+
VPCMPD __mmask8 _mm_cmp[eq|ge|gt|le|lt|neq]_epi32_mask( __m128i a, __m128i b);
+
+
VPCMPD __mmask8 _mm_mask_cmp[eq|ge|gt|le|lt|neq]_epi32_mask(__mmask8 k, __m128i a, __m128i b);
+
+
VPCMPUD __mmask8 _mm_cmp_epu32_mask( __m128i a, __m128i b, int imm);
+
+
VPCMPUD __mmask8 _mm_mask_cmp_epu32_mask(__mmask8 k, __m128i a, __m128i b, int imm);
+
+
VPCMPUD __mmask8 _mm_cmp[eq|ge|gt|le|lt|neq]_epu32_mask( __m128i a, __m128i b);
+
+
VPCMPUD __mmask8 _mm_mask_cmp[eq|ge|gt|le|lt|neq]_epu32_mask(__mmask8 k, __m128i a, __m128i b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vpcmpq.vpcmpuq.html b/x86/vpcmpq.vpcmpuq.html new file mode 100644 index 0000000..83ae8be --- /dev/null +++ b/x86/vpcmpq.vpcmpuq.html @@ -0,0 +1,193 @@ + +VPCMPQ/VPCMPUQ + — Compare Packed Integer Values Into Mask

VPCMPQ/VPCMPUQ + — Compare Packed Integer Values Into Mask

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F3A.W1 1F /r ib VPCMPQ k1 {k2}, xmm2, xmm3/m128/m64bcst, imm8AV/VAVX512VL AVX512FCompare packed signed quadword integer values in xmm3/m128/m64bcst and xmm2 using bits 2:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
EVEX.256.66.0F3A.W1 1F /r ib VPCMPQ k1 {k2}, ymm2, ymm3/m256/m64bcst, imm8AV/VAVX512VL AVX512FCompare packed signed quadword integer values in ymm3/m256/m64bcst and ymm2 using bits 2:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
EVEX.512.66.0F3A.W1 1F /r ib VPCMPQ k1 {k2}, zmm2, zmm3/m512/m64bcst, imm8AV/VAVX512FCompare packed signed quadword integer values in zmm3/m512/m64bcst and zmm2 using bits 2:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
EVEX.128.66.0F3A.W1 1E /r ib VPCMPUQ k1 {k2}, xmm2, xmm3/m128/m64bcst, imm8AV/VAVX512VL AVX512FCompare packed unsigned quadword integer values in xmm3/m128/m64bcst and xmm2 using bits 2:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
EVEX.256.66.0F3A.W1 1E /r ib VPCMPUQ k1 {k2}, ymm2, ymm3/m256/m64bcst, imm8AV/VAVX512VL AVX512FCompare packed unsigned quadword integer values in ymm3/m256/m64bcst and ymm2 using bits 2:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
EVEX.512.66.0F3A.W1 1E /r ib VPCMPUQ k1 {k2}, zmm2, zmm3/m512/m64bcst, imm8AV/VAVX512FCompare packed unsigned quadword integer values in zmm3/m512/m64bcst and zmm2 using bits 2:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

Performs a SIMD compare of the packed integer values in the second source operand and the first source operand and returns the results of the comparison to the mask destination operand. The comparison predicate operand (immediate byte) specifies the type of comparison performed on each pair of packed values in the two source operands. The result of each comparison is a single mask bit result of 1 (comparison true) or 0 (comparison false).

+

VPCMPQ/VPCMPUQ performs a comparison between pairs of signed/unsigned quadword integer values.

+

The first source operand (second operand) is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register or a 512/256/128-bit memory location or a 512-bit vector broadcasted from a 64-bit memory location. The destination operand (first operand) is a mask register k1. Up to 8/4/2 comparisons are performed with results written to the destination operand under the writemask k2.

+

The comparison predicate operand is an 8-bit immediate: bits 2:0 define the type of comparison to be performed. Bits 3 through 7 of the immediate are reserved. The compiler can implement the pseudo-op mnemonics listed in Table 5-21.
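A small non-normative sketch, assuming AVX512F and POPCNT (the helper name count_equal_qwords is hypothetical): the mask produced by an unsigned quadword compare has one bit per lane, so counting matches is just a compare followed by a population count.

#include <immintrin.h>

/* Count the 64-bit lanes of a that equal the corresponding lanes of b.
   _MM_CMPINT_EQ is the EQ predicate (imm8 = 0); the 8-bit mask holds one
   bit per quadword lane of the ZMM operands.                              */
static int count_equal_qwords(__m512i a, __m512i b)
{
    __mmask8 eq = _mm512_cmp_epu64_mask(a, b, _MM_CMPINT_EQ);
    return _mm_popcnt_u32((unsigned)eq);
}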

+

Operation + ¶ +

+
CASE (COMPARISON PREDICATE) OF
+    0: OP := EQ;
+    1: OP := LT;
+    2: OP := LE;
+    3: OP := FALSE;
+    4: OP := NEQ;
+    5: OP := NLT;
+    6: OP := NLE;
+    7: OP := TRUE;
+ESAC;
+
+

VPCMPQ (EVEX encoded versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k2[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN CMP := SRC1[i+63:i] OP SRC2[63:0];
+                ELSE CMP := SRC1[i+63:i] OP SRC2[i+63:i];
+            FI;
+            IF CMP = TRUE
+                THEN DEST[j] := 1;
+                ELSE DEST[j] := 0; FI;
+        ELSE DEST[j] := 0
+                    ; zeroing-masking only
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

VPCMPUQ (EVEX encoded versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k2[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN CMP := SRC1[i+63:i] OP SRC2[63:0];
+                ELSE CMP := SRC1[i+63:i] OP SRC2[i+63:i];
+            FI;
+            IF CMP = TRUE
+                THEN DEST[j] := 1;
+                ELSE DEST[j] := 0; FI;
+        ELSE DEST[j] := 0
+                    ; zeroing-masking only
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPCMPQ __mmask8 _mm512_cmp_epi64_mask( __m512i a, __m512i b, int imm);
+
+
VPCMPQ __mmask8 _mm512_mask_cmp_epi64_mask(__mmask8 k, __m512i a, __m512i b, int imm);
+
+
VPCMPQ __mmask8 _mm512_cmp[eq|ge|gt|le|lt|neq]_epi64_mask( __m512i a, __m512i b);
+
+
VPCMPQ __mmask8 _mm512_mask_cmp[eq|ge|gt|le|lt|neq]_epi64_mask(__mmask8 k, __m512i a, __m512i b);
+
+
VPCMPUQ __mmask8 _mm512_cmp_epu64_mask( __m512i a, __m512i b, int imm);
+
+
VPCMPUQ __mmask8 _mm512_mask_cmp_epu64_mask(__mmask8 k, __m512i a, __m512i b, int imm);
+
+
VPCMPUQ __mmask8 _mm512_cmp[eq|ge|gt|le|lt|neq]_epu64_mask( __m512i a, __m512i b);
+
+
VPCMPUQ __mmask8 _mm512_mask_cmp[eq|ge|gt|le|lt|neq]_epu64_mask(__mmask8 k, __m512i a, __m512i b);
+
+
VPCMPQ __mmask8 _mm256_cmp_epi64_mask( __m256i a, __m256i b, int imm);
+
+
VPCMPQ __mmask8 _mm256_mask_cmp_epi64_mask(__mmask8 k, __m256i a, __m256i b, int imm);
+
+
VPCMPQ __mmask8 _mm256_cmp[eq|ge|gt|le|lt|neq]_epi64_mask( __m256i a, __m256i b);
+
+
VPCMPQ __mmask8 _mm256_mask_cmp[eq|ge|gt|le|lt|neq]_epi64_mask(__mmask8 k, __m256i a, __m256i b);
+
+
VPCMPUQ __mmask8 _mm256_cmp_epu64_mask( __m256i a, __m256i b, int imm);
+
+
VPCMPUQ __mmask8 _mm256_mask_cmp_epu64_mask(__mmask8 k, __m256i a, __m256i b, int imm);
+
+
VPCMPUQ __mmask8 _mm256_cmp[eq|ge|gt|le|lt|neq]_epu64_mask( __m256i a, __m256i b);
+
+
VPCMPUQ __mmask8 _mm256_mask_cmp[eq|ge|gt|le|lt|neq]_epu64_mask(__mmask8 k, __m256i a, __m256i b);
+
+
VPCMPQ __mmask8 _mm_cmp_epi64_mask( __m128i a, __m128i b, int imm);
+
+
VPCMPQ __mmask8 _mm_mask_cmp_epi64_mask(__mmask8 k, __m128i a, __m128i b, int imm);
+
+
VPCMPQ __mmask8 _mm_cmp[eq|ge|gt|le|lt|neq]_epi64_mask( __m128i a, __m128i b);
+
+
VPCMPQ __mmask8 _mm_mask_cmp[eq|ge|gt|le|lt|neq]_epi64_mask(__mmask8 k, __m128i a, __m128i b);
+
+
VPCMPUQ __mmask8 _mm_cmp_epu64_mask( __m128i a, __m128i b, int imm);
+
+
VPCMPUQ __mmask8 _mm_mask_cmp_epu64_mask(__mmask8 k, __m128i a, __m128i b, int imm);
+
+
VPCMPUQ __mmask8 _mm_cmp[eq|ge|gt|le|lt|neq]_epu64_mask( __m128i a, __m128i b);
+
+
VPCMPUQ __mmask8 _mm_mask_cmp[eq|ge|gt|le|lt|neq]_epu64_mask(__mmask8 k, __m128i a, __m128i b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vpcmpw.vpcmpuw.html b/x86/vpcmpw.vpcmpuw.html new file mode 100644 index 0000000..297e2a0 --- /dev/null +++ b/x86/vpcmpw.vpcmpuw.html @@ -0,0 +1,188 @@ + +VPCMPW/VPCMPUW + — Compare Packed Word Values Into Mask

VPCMPW/VPCMPUW + — Compare Packed Word Values Into Mask

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F3A.W1 3F /r ib VPCMPW k1 {k2}, xmm2, xmm3/m128, imm8AV/VAVX512VL AVX512BWCompare packed signed word integers in xmm3/m128 and xmm2 using bits 2:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
EVEX.256.66.0F3A.W1 3F /r ib VPCMPW k1 {k2}, ymm2, ymm3/m256, imm8AV/VAVX512VL AVX512BWCompare packed signed word integers in ymm3/m256 and ymm2 using bits 2:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
EVEX.512.66.0F3A.W1 3F /r ib VPCMPW k1 {k2}, zmm2, zmm3/m512, imm8AV/VAVX512BWCompare packed signed word integers in zmm3/m512 and zmm2 using bits 2:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
EVEX.128.66.0F3A.W1 3E /r ib VPCMPUW k1 {k2}, xmm2, xmm3/m128, imm8AV/VAVX512VL AVX512BWCompare packed unsigned word integers in xmm3/m128 and xmm2 using bits 2:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
EVEX.256.66.0F3A.W1 3E /r ib VPCMPUW k1 {k2}, ymm2, ymm3/m256, imm8AV/VAVX512VL AVX512BWCompare packed unsigned word integers in ymm3/m256 and ymm2 using bits 2:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
EVEX.512.66.0F3A.W1 3E /r ib VPCMPUW k1 {k2}, zmm2, zmm3/m512, imm8AV/VAVX512BWCompare packed unsigned word integers in zmm3/m512 and zmm2 using bits 2:0 of imm8 as a comparison predicate with writemask k2 and leave the result in mask register k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

Performs a SIMD compare of the packed integer word values in the second source operand and the first source operand and returns the results of the comparison to the mask destination operand. The comparison predicate operand (immediate byte) specifies the type of comparison performed on each pair of packed values in the two source operands. The result of each comparison is a single mask bit result of 1 (comparison true) or 0 (comparison false).

+

VPCMPW performs a comparison between pairs of signed word values.

+

VPCMPUW performs a comparison between pairs of unsigned word values.

+

The first source operand (second operand) is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination operand (first operand) is a mask register k1. Up to 32/16/8 comparisons are performed with results written to the destination operand under the writemask k2.

+

The comparison predicate operand is an 8-bit immediate: bits 2:0 define the type of comparison to be performed. Bits 3 through 7 of the immediate are reserved. The compiler can implement the pseudo-op mnemonics listed in Table 5-21.
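As a rough sketch (assuming AVX512BW; the helper name in_range_u16 is illustrative), two unsigned word compares can be combined through a mask AND to test a closed range:

#include <immintrin.h>

/* Mask of the unsigned 16-bit lanes of v that fall in [lo, hi]: one VPCMPUW
   with the NLT (>=) predicate, one with LE, and a KANDD-style mask AND.     */
static __mmask32 in_range_u16(__m512i v, __m512i lo, __m512i hi)
{
    __mmask32 ge_lo = _mm512_cmp_epu16_mask(v, lo, _MM_CMPINT_NLT);
    __mmask32 le_hi = _mm512_cmp_epu16_mask(v, hi, _MM_CMPINT_LE);
    return _kand_mask32(ge_lo, le_hi);
}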

+

Operation + ¶ +

+
CASE (COMPARISON PREDICATE) OF
+    0: OP := EQ;
+    1: OP := LT;
+    2: OP := LE;
+    3: OP := FALSE;
+    4: OP := NEQ;
+    5: OP := NLT;
+    6: OP := NLE;
+    7: OP := TRUE;
+ESAC;
+
+

VPCMPW (EVEX encoded versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k2[j] OR *no writemask*
+        THEN
+            CMP := SRC1[i+15:i] OP SRC2[i+15:i];
+            IF CMP = TRUE
+                THEN DEST[j] := 1;
+                ELSE DEST[j] := 0; FI;
+        ELSE DEST[j] := 0
+                    ; zeroing-masking only
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

VPCMPUW (EVEX encoded versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k2[j] OR *no writemask*
+        THEN
+            CMP := SRC1[i+15:i] OP SRC2[i+15:i];
+            IF CMP = TRUE
+                THEN DEST[j] := 1;
+                ELSE DEST[j] := 0; FI;
+        ELSE DEST[j] := 0
+                    ; zeroing-masking only
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPCMPW __mmask32 _mm512_cmp_epi16_mask( __m512i a, __m512i b, int cmp);
+
+
VPCMPW __mmask32 _mm512_mask_cmp_epi16_mask( __mmask32 m, __m512i a, __m512i b, int cmp);
+
+
VPCMPW __mmask16 _mm256_cmp_epi16_mask( __m256i a, __m256i b, int cmp);
+
+
VPCMPW __mmask16 _mm256_mask_cmp_epi16_mask( __mmask16 m, __m256i a, __m256i b, int cmp);
+
+
VPCMPW __mmask8 _mm_cmp_epi16_mask( __m128i a, __m128i b, int cmp);
+
+
VPCMPW __mmask8 _mm_mask_cmp_epi16_mask( __mmask8 m, __m128i a, __m128i b, int cmp);
+
+
VPCMPW __mmask32 _mm512_cmp[eq|ge|gt|le|lt|neq]_epi16_mask( __m512i a, __m512i b);
+
+
VPCMPW __mmask32 _mm512_mask_cmp[eq|ge|gt|le|lt|neq]_epi16_mask( __mmask32 m, __m512i a, __m512i b);
+
+
VPCMPW __mmask16 _mm256_cmp[eq|ge|gt|le|lt|neq]_epi16_mask( __m256i a, __m256i b);
+
+
VPCMPW __mmask16 _mm256_mask_cmp[eq|ge|gt|le|lt|neq]_epi16_mask( __mmask16 m, __m256i a, __m256i b);
+
+
VPCMPW __mmask8 _mm_cmp[eq|ge|gt|le|lt|neq]_epi16_mask( __m128i a, __m128i b);
+
+
VPCMPW __mmask8 _mm_mask_cmp[eq|ge|gt|le|lt|neq]_epi16_mask( __mmask8 m, __m128i a, __m128i b);
+
+
VPCMPUW __mmask32 _mm512_cmp_epu16_mask( __m512i a, __m512i b, int cmp);
+
+
VPCMPUW __mmask32 _mm512_mask_cmp_epu16_mask( __mmask32 m, __m512i a, __m512i b, int cmp);
+
+
VPCMPUW __mmask16 _mm256_cmp_epu16_mask( __m256i a, __m256i b, int cmp);
+
+
VPCMPUW __mmask16 _mm256_mask_cmp_epu16_mask( __mmask16 m, __m256i a, __m256i b, int cmp);
+
+
VPCMPUW __mmask8 _mm_cmp_epu16_mask( __m128i a, __m128i b, int cmp);
+
+
VPCMPUW __mmask8 _mm_mask_cmp_epu16_mask( __mmask8 m, __m128i a, __m128i b, int cmp);
+
+
+VPCMPUW __mmask32 _mm512_cmp[eq|ge|gt|le|lt|neq]_epu16_mask( __m512i a, __m512i b);
+
+
+VPCMPUW __mmask32 _mm512_mask_cmp[eq|ge|gt|le|lt|neq]_epu16_mask( __mmask32 m, __m512i a, __m512i b);
+
+
+VPCMPUW __mmask16 _mm256_cmp[eq|ge|gt|le|lt|neq]_epu16_mask( __m256i a, __m256i b);
+
+
+VPCMPUW __mmask16 _mm256_mask_cmp[eq|ge|gt|le|lt|neq]_epu16_mask( __mmask16 m, __m256i a, __m256i b);
+
+
+VPCMPUW __mmask8 _mm_cmp[eq|ge|gt|le|lt|neq]_epu16_mask( __m128i a, __m128i b);
+
+
+VPCMPUW __mmask8 _mm_mask_cmp[eq|ge|gt|le|lt|neq]_epu16_mask( __mmask8 m, __m128i a, __m128i b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vpcompressb.vcompressw.html b/x86/vpcompressb.vcompressw.html new file mode 100644 index 0000000..0a9b331 --- /dev/null +++ b/x86/vpcompressb.vcompressw.html @@ -0,0 +1,221 @@ + +VPCOMPRESSB/VCOMPRESSW + — Store Sparse Packed Byte/Word Integer Values Into DenseMemory/Register

VPCOMPRESSB/VPCOMPRESSW + — Store Sparse Packed Byte/Word Integer Values Into Dense Memory/Register

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 63 /r VPCOMPRESSB m128{k1}, xmm1AV/VAVX512_VBMI2 AVX512VLCompress up to 128 bits of packed byte values from xmm1 to m128 with writemask k1.
EVEX.128.66.0F38.W0 63 /r VPCOMPRESSB xmm1{k1}{z}, xmm2BV/VAVX512_VBMI2 AVX512VLCompress up to 128 bits of packed byte values from xmm2 to xmm1 with writemask k1.
EVEX.256.66.0F38.W0 63 /r VPCOMPRESSB m256{k1}, ymm1AV/VAVX512_VBMI2 AVX512VLCompress up to 256 bits of packed byte values from ymm1 to m256 with writemask k1.
EVEX.256.66.0F38.W0 63 /r VPCOMPRESSB ymm1{k1}{z}, ymm2BV/VAVX512_VBMI2 AVX512VLCompress up to 256 bits of packed byte values from ymm2 to ymm1 with writemask k1.
EVEX.512.66.0F38.W0 63 /r VPCOMPRESSB m512{k1}, zmm1AV/VAVX512_VBMI2Compress up to 512 bits of packed byte values from zmm1 to m512 with writemask k1.
EVEX.512.66.0F38.W0 63 /r VPCOMPRESSB zmm1{k1}{z}, zmm2BV/VAVX512_VBMI2Compress up to 512 bits of packed byte values from zmm2 to zmm1 with writemask k1.
EVEX.128.66.0F38.W1 63 /r VPCOMPRESSW m128{k1}, xmm1AV/VAVX512_VBMI2 AVX512VLCompress up to 128 bits of packed word values from xmm1 to m128 with writemask k1.
EVEX.128.66.0F38.W1 63 /r VPCOMPRESSW xmm1{k1}{z}, xmm2BV/VAVX512_VBMI2 AVX512VLCompress up to 128 bits of packed word values from xmm2 to xmm1 with writemask k1.
EVEX.256.66.0F38.W1 63 /r VPCOMPRESSW m256{k1}, ymm1AV/VAVX512_VBMI2 AVX512VLCompress up to 256 bits of packed word values from ymm1 to m256 with writemask k1.
EVEX.256.66.0F38.W1 63 /r VPCOMPRESSW ymm1{k1}{z}, ymm2BV/VAVX512_VBMI2 AVX512VLCompress up to 256 bits of packed word values from ymm2 to ymm1 with writemask k1.
EVEX.512.66.0F38.W1 63 /r VPCOMPRESSW m512{k1}, zmm1AV/VAVX512_VBMI2Compress up to 512 bits of packed word values from zmm1 to m512 with writemask k1.
EVEX.512.66.0F38.W1 63 /r VPCOMPRESSW zmm1{k1}{z}, zmm2BV/VAVX512_VBMI2Compress up to 512 bits of packed word values from zmm2 to zmm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:r/m (w)ModRM:reg (r)N/AN/A
BN/AModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Compress (stores) up to 64 byte values or 32 word values from the source operand (second operand) to the destination operand (first operand), based on the active elements determined by the writemask operand. Note: EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Moves up to 512 bits of packed byte values from the source operand (second operand) to the destination operand (first operand). This instruction is used to store partial contents of a vector register into a byte vector or a single memory location, using the active elements selected by the writemask operand.

+

Memory destination version: Only the contiguous vector is written to the destination memory location. EVEX.z must be zero.

+

Register destination version: If the vector length of the contiguous vector is less than that of the input vector in the source operand, the upper bits of the destination register are unmodified if EVEX.z is not set, otherwise the upper bits are zeroed.

+

This instruction supports memory fault suppression.

+

Note that the compressed displacement assumes a pre-scaling (N) corresponding to the size of one single element instead of the size of the full vector.
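A minimal usage sketch, assuming AVX512_VBMI2 plus POPCNT (the helper name compress_bytes and the buffer handling are illustrative): the memory-destination form left-packs only the selected bytes, so the caller advances the output pointer by the mask's population count.

#include <immintrin.h>
#include <stdint.h>
#include <stddef.h>

/* Write the bytes of src selected by mask to out, packed contiguously,
   using the VPCOMPRESSB memory-destination form; return how many bytes
   were stored.  out must have room for that many bytes.                  */
static size_t compress_bytes(uint8_t *out, __m512i src, __mmask64 mask)
{
    _mm512_mask_compressstoreu_epi8(out, mask, src);
    return (size_t)_mm_popcnt_u64(mask);
}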

+

Operation + ¶ +

+

VPCOMPRESSB store form + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+k := 0
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        DEST.byte[k] := SRC.byte[j]
+        k := k +1
+
+

VPCOMPRESSB reg-reg form + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+k := 0
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        DEST.byte[k] := SRC.byte[j]
+        k := k + 1
+IF *merging-masking*:
+    *DEST[VL-1:k*8] remains unchanged*
+    ELSE DEST[VL-1:k*8] := 0
+DEST[MAX_VL-1:VL] := 0
+
+

VPCOMPRESSW store form + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+k := 0
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        DEST.word[k] := SRC.word[j]
+        k := k + 1
+
+

VPCOMPRESSW reg-reg form + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+k := 0
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        DEST.word[k] := SRC.word[j]
+        k := k + 1
+IF *merging-masking*:
+    *DEST[VL-1:k*16] remains unchanged*
+    ELSE DEST[VL-1:k*16] := 0
+DEST[MAX_VL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPCOMPRESSB __m128i _mm_mask_compress_epi8(__m128i, __mmask16, __m128i);
+
+
VPCOMPRESSB __m128i _mm_maskz_compress_epi8(__mmask16, __m128i);
+
+
VPCOMPRESSB __m256i _mm256_mask_compress_epi8(__m256i, __mmask32, __m256i);
+
+
VPCOMPRESSB __m256i _mm256_maskz_compress_epi8(__mmask32, __m256i);
+
+
VPCOMPRESSB __m512i _mm512_mask_compress_epi8(__m512i, __mmask64, __m512i);
+
+
VPCOMPRESSB __m512i _mm512_maskz_compress_epi8(__mmask64, __m512i);
+
+
VPCOMPRESSB void _mm_mask_compressstoreu_epi8(void*, __mmask16, __m128i);
+
+
VPCOMPRESSB void _mm256_mask_compressstoreu_epi8(void*, __mmask32, __m256i);
+
+
VPCOMPRESSB void _mm512_mask_compressstoreu_epi8(void*, __mmask64, __m512i);
+
+
VPCOMPRESSW __m128i _mm_mask_compress_epi16(__m128i, __mmask8, __m128i);
+
+
VPCOMPRESSW __m128i _mm_maskz_compress_epi16(__mmask8, __m128i);
+
+
VPCOMPRESSW __m256i _mm256_mask_compress_epi16(__m256i, __mmask16, __m256i);
+
+
VPCOMPRESSW __m256i _mm256_maskz_compress_epi16(__mmask16, __m256i);
+
+
VPCOMPRESSW __m512i _mm512_mask_compress_epi16(__m512i, __mmask32, __m512i);
+
+
VPCOMPRESSW __m512i _mm512_maskz_compress_epi16(__mmask32, __m512i);
+
+
VPCOMPRESSW void _mm_mask_compressstoreu_epi16(void*, __mmask8, __m128i);
+
+
VPCOMPRESSW void _mm256_mask_compressstoreu_epi16(void*, __mmask16, __m256i);
+
+
VPCOMPRESSW void _mm512_mask_compressstoreu_epi16(void*, __mmask32, __m512i);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vpcompressd.html b/x86/vpcompressd.html new file mode 100644 index 0000000..a798df3 --- /dev/null +++ b/x86/vpcompressd.html @@ -0,0 +1,128 @@ + +VPCOMPRESSD + — Store Sparse Packed Doubleword Integer Values Into Dense Memory/Register

VPCOMPRESSD + — Store Sparse Packed Doubleword Integer Values Into Dense Memory/Register

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 8B /r VPCOMPRESSD xmm1/m128 {k1}{z}, xmm2AV/VAVX512VL AVX512FCompress packed doubleword integer values from xmm2 to xmm1/m128 using control mask k1.
EVEX.256.66.0F38.W0 8B /r VPCOMPRESSD ymm1/m256 {k1}{z}, ymm2AV/VAVX512VL AVX512FCompress packed doubleword integer values from ymm2 to ymm1/m256 using control mask k1.
EVEX.512.66.0F38.W0 8B /r VPCOMPRESSD zmm1/m512 {k1}{z}, zmm2AV/VAVX512FCompress packed doubleword integer values from zmm2 to zmm1/m512 using control mask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Compress (store) up to 16/8/4 doubleword integer values from the source operand (second operand) to the destination operand (first operand). The source operand is a ZMM/YMM/XMM register, the destination operand can be a ZMM/YMM/XMM register or a 512/256/128-bit memory location.

+

The opmask register k1 selects the active elements (partial vector or possibly non-contiguous if less than 16 active elements) from the source operand to compress into a contiguous vector. The contiguous vector is written to the destination starting from the low element of the destination operand.

+

Memory destination version: Only the contiguous vector is written to the destination memory location. EVEX.z must be zero.

+

Register destination version: If the vector length of the contiguous vector is less than that of the input vector in the source operand, the upper bits of the destination register are unmodified if EVEX.z is not set, otherwise the upper bits are zeroed.

+

Note: EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Note that the compressed displacement assumes a pre-scaling (N) corresponding to the size of one single element instead of the size of the full vector.
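As an illustrative sketch (assuming AVX512F and POPCNT; pack_positive is a hypothetical helper), a compare to build the writemask followed by the memory-destination form gives the classic left-packing, or stream-compaction, step:

#include <immintrin.h>
#include <stdint.h>
#include <stddef.h>

/* Copy only the positive doublewords of v to dst, packed contiguously;
   returns the number of elements written.                                */
static size_t pack_positive(int32_t *dst, __m512i v)
{
    __mmask16 pos = _mm512_cmpgt_epi32_mask(v, _mm512_setzero_si512());
    _mm512_mask_compressstoreu_epi32(dst, pos, v);
    return (size_t)_mm_popcnt_u32((unsigned)pos);
}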

+

Operation + ¶ +

+

VPCOMPRESSD (EVEX encoded versions) store form + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+SIZE := 32
+k := 0
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no controlmask*
+        THEN
+            DEST[k+SIZE-1:k] := SRC[i+31:i]
+            k := k + SIZE
+    FI;
+ENDFOR;
+
+

VPCOMPRESSD (EVEX encoded versions) reg-reg form + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+SIZE := 32
+k := 0
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no controlmask*
+        THEN
+                DEST[k+SIZE-1:k] := SRC[i+31:i]
+                k := k + SIZE
+    FI;
+ENDFOR
+IF *merging-masking*
+            THEN *DEST[VL-1:k] remains unchanged*
+            ELSE DEST[VL-1:k] := 0
+FI
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPCOMPRESSD __m512i _mm512_mask_compress_epi32(__m512i s, __mmask16 c, __m512i a);
+
+
VPCOMPRESSD __m512i _mm512_maskz_compress_epi32( __mmask16 c, __m512i a);
+
+
VPCOMPRESSD void _mm512_mask_compressstoreu_epi32(void * a, __mmask16 c, __m512i s);
+
+
VPCOMPRESSD __m256i _mm256_mask_compress_epi32(__m256i s, __mmask8 c, __m256i a);
+
+
VPCOMPRESSD __m256i _mm256_maskz_compress_epi32( __mmask8 c, __m256i a);
+
+
VPCOMPRESSD void _mm256_mask_compressstoreu_epi32(void * a, __mmask8 c, __m256i s);
+
+
VPCOMPRESSD __m128i _mm_mask_compress_epi32(__m128i s, __mmask8 c, __m128i a);
+
+
VPCOMPRESSD __m128i _mm_maskz_compress_epi32( __mmask8 c, __m128i a);
+
+
VPCOMPRESSD void _mm_mask_compressstoreu_epi32(void * a, __mmask8 c, __m128i s);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vpcompressq.html b/x86/vpcompressq.html new file mode 100644 index 0000000..d2ef224 --- /dev/null +++ b/x86/vpcompressq.html @@ -0,0 +1,128 @@ + +VPCOMPRESSQ + — Store Sparse Packed Quadword Integer Values Into Dense Memory/Register

VPCOMPRESSQ + — Store Sparse Packed Quadword Integer Values Into Dense Memory/Register

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W1 8B /r VPCOMPRESSQ xmm1/m128 {k1}{z}, xmm2AV/VAVX512VL AVX512FCompress packed quadword integer values from xmm2 to xmm1/m128 using control mask k1.
EVEX.256.66.0F38.W1 8B /r VPCOMPRESSQ ymm1/m256 {k1}{z}, ymm2AV/VAVX512VL AVX512FCompress packed quadword integer values from ymm2 to ymm1/m256 using control mask k1.
EVEX.512.66.0F38.W1 8B /r VPCOMPRESSQ zmm1/m512 {k1}{z}, zmm2AV/VAVX512FCompress packed quadword integer values from zmm2 to zmm1/m512 using control mask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Compress (stores) up to 8/4/2 quadword integer values from the source operand (second operand) to the destination operand (first operand). The source operand is a ZMM/YMM/XMM register, the destination operand can be a ZMM/YMM/XMM register or a 512/256/128-bit memory location.

+

The opmask register k1 selects the active elements (partial vector or possibly non-contiguous if less than 8 active elements) from the source operand to compress into a contiguous vector. The contiguous vector is written to the destination starting from the low element of the destination operand.

+

Memory destination version: Only the contiguous vector is written to the destination memory location. EVEX.z must be zero.

+

Register destination version: If the vector length of the contiguous vector is less than that of the input vector in the source operand, the upper bits of the destination register are unmodified if EVEX.z is not set, otherwise the upper bits are zeroed.

+

Note: EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Note that the compressed displacement assumes a pre-scaling (N) corresponding to the size of one single element instead of the size of the full vector.

+

Operation + ¶ +

+

VPCOMPRESSQ (EVEX encoded versions) store form + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+SIZE := 64
+k := 0
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no controlmask*
+        THEN
+            DEST[k+SIZE-1:k] := SRC[i+63:i]
+            k := k + SIZE
+    FI;
+ENDFOR
+
+

VPCOMPRESSQ (EVEX encoded versions) reg-reg form + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+SIZE := 64
+k := 0
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no controlmask*
+        THEN
+                DEST[k+SIZE-1:k] := SRC[i+63:i]
+                k := k + SIZE
+    FI;
+ENDFOR
+IF *merging-masking*
+            THEN *DEST[VL-1:k] remains unchanged*
+            ELSE DEST[VL-1:k] := 0
+FI
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPCOMPRESSQ __m512i _mm512_mask_compress_epi64(__m512i s, __mmask8 c, __m512i a);
+
+
VPCOMPRESSQ __m512i _mm512_maskz_compress_epi64( __mmask8 c, __m512i a);
+
+
VPCOMPRESSQ void _mm512_mask_compressstoreu_epi64(void * a, __mmask8 c, __m512i s);
+
+
VPCOMPRESSQ __m256i _mm256_mask_compress_epi64(__m256i s, __mmask8 c, __m256i a);
+
+
VPCOMPRESSQ __m256i _mm256_maskz_compress_epi64( __mmask8 c, __m256i a);
+
+
VPCOMPRESSQ void _mm256_mask_compressstoreu_epi64(void * a, __mmask8 c, __m256i s);
+
+
VPCOMPRESSQ __m128i _mm_mask_compress_epi64(__m128i s, __mmask8 c, __m128i a);
+
+
VPCOMPRESSQ __m128i _mm_maskz_compress_epi64( __mmask8 c, __m128i a);
+
+
VPCOMPRESSQ void _mm_mask_compressstoreu_epi64(void * a, __mmask8 c, __m128i s);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vpconflictd.vpconflictq.html b/x86/vpconflictd.vpconflictq.html new file mode 100644 index 0000000..155c608 --- /dev/null +++ b/x86/vpconflictd.vpconflictq.html @@ -0,0 +1,181 @@ + +VPCONFLICTD/VPCONFLICTQ + — Detect Conflicts Within a Vector of Packed Dword/Qword Values Into DenseMemory/ Register

VPCONFLICTD/VPCONFLICTQ + — Detect Conflicts Within a Vector of Packed Dword/Qword Values Into Dense Memory/Register

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 C4 /r VPCONFLICTD xmm1 {k1}{z}, xmm2/m128/m32bcstAV/VAVX512VL AVX512CDDetect duplicate double-word values in xmm2/m128/m32bcst using writemask k1.
EVEX.256.66.0F38.W0 C4 /r VPCONFLICTD ymm1 {k1}{z}, ymm2/m256/m32bcstAV/VAVX512VL AVX512CDDetect duplicate double-word values in ymm2/m256/m32bcst using writemask k1.
EVEX.512.66.0F38.W0 C4 /r VPCONFLICTD zmm1 {k1}{z}, zmm2/m512/m32bcstAV/VAVX512CDDetect duplicate double-word values in zmm2/m512/m32bcst using writemask k1.
EVEX.128.66.0F38.W1 C4 /r VPCONFLICTQ xmm1 {k1}{z}, xmm2/m128/m64bcstAV/VAVX512VL AVX512CDDetect duplicate quad-word values in xmm2/m128/m64bcst using writemask k1.
EVEX.256.66.0F38.W1 C4 /r VPCONFLICTQ ymm1 {k1}{z}, ymm2/m256/m64bcstAV/VAVX512VL AVX512CDDetect duplicate quad-word values in ymm2/m256/m64bcst using writemask k1.
EVEX.512.66.0F38.W1 C4 /r VPCONFLICTQ zmm1 {k1}{z}, zmm2/m512/m64bcstAV/VAVX512CDDetect duplicate quad-word values in zmm2/m512/m64bcst using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Test each dword/qword element of the source operand (the second operand) for equality with all other elements in the source operand closer to the least significant element. Each element’s comparison results form a bit vector, which is then zero extended and written to the destination according to the writemask.

+

EVEX.512 encoded version: The source operand is a ZMM register, a 512-bit memory location, or a 512-bit vector broadcasted from a 32/64-bit memory location. The destination operand is a ZMM register, conditionally updated using writemask k1.

+

EVEX.256 encoded version: The source operand is a YMM register, a 256-bit memory location, or a 256-bit vector broadcasted from a 32/64-bit memory location. The destination operand is a YMM register, conditionally updated using writemask k1.

+

EVEX.128 encoded version: The source operand is a XMM register, a 128-bit memory location, or a 128-bit vector broadcasted from a 32/64-bit memory location. The destination operand is a XMM register, conditionally updated using writemask k1.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.
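A brief non-normative example (assuming AVX512CD and AVX512F; duplicate_lanes is an illustrative name): because each lane of the result holds a bit vector of equalities with the preceding lanes, a lane is a duplicate exactly when its result is nonzero, which a VPTESTMD-style test turns back into a mask. This is the usual first step when resolving index collisions before a scatter.

#include <immintrin.h>

/* Mask of the dword lanes of idx that repeat a value already present in a
   less significant lane.                                                  */
static __mmask16 duplicate_lanes(__m512i idx)
{
    __m512i conflicts = _mm512_conflict_epi32(idx);
    return _mm512_test_epi32_mask(conflicts, conflicts);
}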

+

Operation + ¶ +

+

VPCONFLICTD + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j*32
+    IF MaskBit(j) OR *no writemask* THEN
+        FOR k := 0 TO j-1
+            m := k*32
+            IF ((SRC[i+31:i] = SRC[m+31:m])) THEN
+                DEST[i+k] := 1
+            ELSE
+                DEST[i+k] := 0
+            FI
+        ENDFOR
+        DEST[i+31:i+j] := 0
+    ELSE
+        IF *merging-masking* THEN
+            *DEST[i+31:i] remains unchanged*
+        ELSE
+            DEST[i+31:i] := 0
+        FI
+    FI
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPCONFLICTQ + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j*64
+    IF MaskBit(j) OR *no writemask* THEN
+        FOR k := 0 TO j-1
+            m := k*64
+            IF ((SRC[i+63:i] = SRC[m+63:m])) THEN
+                DEST[i+k] := 1
+            ELSE
+                DEST[i+k] := 0
+            FI
+        ENDFOR
+        DEST[i+63:i+j] := 0
+    ELSE
+        IF *merging-masking* THEN
+            *DEST[i+63:i] remains unchanged*
+        ELSE
+            DEST[i+63:i] := 0
+        FI
+    FI
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPCONFLICTD __m512i _mm512_conflict_epi32( __m512i a);
+
+
VPCONFLICTD __m512i _mm512_mask_conflict_epi32(__m512i s, __mmask16 m, __m512i a);
+
+
VPCONFLICTD __m512i _mm512_maskz_conflict_epi32(__mmask16 m, __m512i a);
+
+
VPCONFLICTQ __m512i _mm512_conflict_epi64( __m512i a);
+
+
VPCONFLICTQ __m512i _mm512_mask_conflict_epi64(__m512i s, __mmask8 m, __m512i a);
+
+
VPCONFLICTQ __m512i _mm512_maskz_conflict_epi64(__mmask8 m, __m512i a);
+
+
VPCONFLICTD __m256i _mm256_conflict_epi32( __m256i a);
+
+
VPCONFLICTD __m256i _mm256_mask_conflict_epi32(__m256i s, __mmask8 m, __m256i a);
+
+
VPCONFLICTD __m256i _mm256_maskz_conflict_epi32(__mmask8 m, __m256i a);
+
+
VPCONFLICTQ __m256i _mm256_conflict_epi64( __m256i a);
+
+
VPCONFLICTQ __m256i _mm256_mask_conflict_epi64(__m256i s, __mmask8 m, __m256i a);
+
+
VPCONFLICTQ __m256i _mm256_maskz_conflict_epi64(__mmask8 m, __m256i a);
+
+
VPCONFLICTD __m128i _mm_conflict_epi32( __m128i a);
+
+
VPCONFLICTD __m128i _mm_mask_conflict_epi32(__m128i s, __mmask8 m, __m128i a);
+
+
VPCONFLICTD __m128i _mm_maskz_conflict_epi32(__mmask8 m, __m128i a);
+
+
VPCONFLICTQ __m128i _mm_conflict_epi64( __m128i a);
+
+
VPCONFLICTQ __m128i _mm_mask_conflict_epi64(__m128i s, __mmask8 m, __m128i a);
+
+
VPCONFLICTQ __m128i _mm_maskz_conflict_epi64(__mmask8 m, __m128i a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/vpdpbusd.html b/x86/vpdpbusd.html new file mode 100644 index 0000000..9391b33 --- /dev/null +++ b/x86/vpdpbusd.html @@ -0,0 +1,154 @@ + +VPDPBUSD + — Multiply and Add Unsigned and Signed Bytes

VPDPBUSD + — Multiply and Add Unsigned and Signed Bytes

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W0 50 /r VPDPBUSD xmm1, xmm2, xmm3/m128AV/VAVX-VNNIMultiply groups of 4 pairs of signed bytes in xmm3/m128 with corresponding unsigned bytes of xmm2, summing those products and adding them to doubleword result in xmm1.
VEX.256.66.0F38.W0 50 /r VPDPBUSD ymm1, ymm2, ymm3/m256AV/VAVX-VNNIMultiply groups of 4 pairs of signed bytes in ymm3/m256 with corresponding unsigned bytes of ymm2, summing those products and adding them to doubleword result in ymm1.
EVEX.128.66.0F38.W0 50 /r VPDPBUSD xmm1{k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512_VNNI AVX512VLMultiply groups of 4 pairs of signed bytes in xmm3/m128/m32bcst with corresponding unsigned bytes of xmm2, summing those products and adding them to doubleword result in xmm1 under writemask k1.
EVEX.256.66.0F38.W0 50 /r VPDPBUSD ymm1{k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512_VNNI AVX512VLMultiply groups of 4 pairs of signed bytes in ymm3/m256/m32bcst with corresponding unsigned bytes of ymm2, summing those products and adding them to doubleword result in ymm1 under writemask k1.
EVEX.512.66.0F38.W0 50 /r VPDPBUSD zmm1{k1}{z}, zmm2, zmm3/m512/m32bcstBV/VAVX512_VNNIMultiply groups of 4 pairs of signed bytes in zmm3/m512/m32bcst with corresponding unsigned bytes of zmm2, summing those products and adding them to doubleword result in zmm1 under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
BFullModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Multiplies the individual unsigned bytes of the first source operand by the corresponding signed bytes of the second source operand, producing intermediate signed word results. The word results are then summed and accumulated in the destination dword element size operand.

+

This instruction supports memory fault suppression.
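As a sketch of the intended use (assuming an AVX512_VNNI target and an input length that is a multiple of 64; dot_u8s8 is a hypothetical helper, and 32-bit accumulator overflow for very long inputs is ignored), each VPDPBUSD step folds 64 byte products into 16 dword accumulators:

#include <immintrin.h>
#include <stdint.h>
#include <stddef.h>

/* Dot product of n unsigned bytes with n signed bytes, n a multiple of 64. */
static int32_t dot_u8s8(const uint8_t *u, const int8_t *s, size_t n)
{
    __m512i acc = _mm512_setzero_si512();
    for (size_t i = 0; i < n; i += 64) {
        __m512i a = _mm512_loadu_si512(u + i);   /* unsigned byte source */
        __m512i b = _mm512_loadu_si512(s + i);   /* signed byte source   */
        acc = _mm512_dpbusd_epi32(acc, a, b);    /* 4 products per dword lane */
    }
    return _mm512_reduce_add_epi32(acc);         /* horizontal sum of 16 lanes */
}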

+

Operation + ¶ +

+

VPDPBUSD dest, src1, src2 (VEX encoded versions) + ¶ +

+
VL=(128, 256)
+KL=VL/32
+ORIGDEST := DEST
+FOR i := 0 TO KL-1:
+    // Extending to 16b
+    // src1extend := ZERO_EXTEND
+    // src2extend := SIGN_EXTEND
+    p1word := src1extend(SRC1.byte[4*i+0]) * src2extend(SRC2.byte[4*i+0])
+    p2word := src1extend(SRC1.byte[4*i+1]) * src2extend(SRC2.byte[4*i+1])
+    p3word := src1extend(SRC1.byte[4*i+2]) * src2extend(SRC2.byte[4*i+2])
+    p4word := src1extend(SRC1.byte[4*i+3]) * src2extend(SRC2.byte[4*i+3])
+    DEST.dword[i] := ORIGDEST.dword[i] + p1word + p2word + p3word + p4word
+DEST[MAX_VL-1:VL] := 0
+
+

VPDPBUSD dest, src1, src2 (EVEX encoded versions) + ¶ +

+
(KL,VL)=(4,128), (8,256), (16,512)
+ORIGDEST := DEST
+FOR i := 0 TO KL-1:
+    IF k1[i] or *no writemask*:
+        // Byte elements of SRC1 are zero-extended to 16b and
+        // byte elements of SRC2 are sign extended to 16b before multiplication.
+        IF SRC2 is memory and EVEX.b == 1:
+            t := SRC2.dword[0]
+        ELSE:
+            t := SRC2.dword[i]
+        p1word := ZERO_EXTEND(SRC1.byte[4*i]) * SIGN_EXTEND(t.byte[0])
+        p2word := ZERO_EXTEND(SRC1.byte[4*i+1]) * SIGN_EXTEND(t.byte[1])
+        p3word := ZERO_EXTEND(SRC1.byte[4*i+2]) * SIGN_EXTEND(t.byte[2])
+        p4word := ZERO_EXTEND(SRC1.byte[4*i+3]) * SIGN_EXTEND(t.byte[3])
+        DEST.dword[i] := ORIGDEST.dword[i] + p1word + p2word + p3word + p4word
+    ELSE IF *zeroing*:
+        DEST.dword[i] := 0
+    ELSE: // Merge masking, dest element unchanged
+        DEST.dword[i] := ORIGDEST.dword[i]
+DEST[MAX_VL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPDPBUSD __m128i _mm_dpbusd_avx_epi32(__m128i, __m128i, __m128i);
+
+
VPDPBUSD __m128i _mm_dpbusd_epi32(__m128i, __m128i, __m128i);
+
+
VPDPBUSD __m128i _mm_mask_dpbusd_epi32(__m128i, __mmask8, __m128i, __m128i);
+
+
VPDPBUSD __m128i _mm_maskz_dpbusd_epi32(__mmask8, __m128i, __m128i, __m128i);
+
+
VPDPBUSD __m256i _mm256_dpbusd_avx_epi32(__m256i, __m256i, __m256i);
+
+
VPDPBUSD __m256i _mm256_dpbusd_epi32(__m256i, __m256i, __m256i);
+
+
VPDPBUSD __m256i _mm256_mask_dpbusd_epi32(__m256i, __mmask8, __m256i, __m256i);
+
+
VPDPBUSD __m256i _mm256_maskz_dpbusd_epi32(__mmask8, __m256i, __m256i, __m256i);
+
+
VPDPBUSD __m512i _mm512_dpbusd_epi32(__m512i, __m512i, __m512i);
+
+
VPDPBUSD __m512i _mm512_mask_dpbusd_epi32(__m512i, __mmask16, __m512i, __m512i);
+
+
VPDPBUSD __m512i _mm512_maskz_dpbusd_epi32(__mmask16, __m512i, __m512i, __m512i);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vpdpbusds.html b/x86/vpdpbusds.html new file mode 100644 index 0000000..8875180 --- /dev/null +++ b/x86/vpdpbusds.html @@ -0,0 +1,154 @@ + +VPDPBUSDS + — Multiply and Add Unsigned and Signed Bytes With Saturation

VPDPBUSDS + — Multiply and Add Unsigned and Signed Bytes With Saturation

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W0 51 /r VPDPBUSDS xmm1, xmm2, xmm3/m128AV/VAVX-VNNIMultiply groups of 4 pairs of signed bytes in xmm3/m128 with corresponding unsigned bytes of xmm2, summing those products and adding them to doubleword result, with signed saturation in xmm1.
VEX.256.66.0F38.W0 51 /r VPDPBUSDS ymm1, ymm2, ymm3/m256AV/VAVX-VNNIMultiply groups of 4 pairs of signed bytes in ymm3/m256 with corresponding unsigned bytes of ymm2, summing those products and adding them to doubleword result, with signed saturation in ymm1.
EVEX.128.66.0F38.W0 51 /r VPDPBUSDS xmm1{k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512_VNNI AVX512VLMultiply groups of 4 pairs of signed bytes in xmm3/m128/m32bcst with corresponding unsigned bytes of xmm2, summing those products and adding them to doubleword result, with signed saturation in xmm1, under writemask k1.
EVEX.256.66.0F38.W0 51 /r VPDPBUSDS ymm1{k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512_VNNI AVX512VLMultiply groups of 4 pairs of signed bytes in ymm3/m256/m32bcst with corresponding unsigned bytes of ymm2, summing those products and adding them to doubleword result, with signed saturation in ymm1, under writemask k1.
EVEX.512.66.0F38.W0 51 /r VPDPBUSDS zmm1{k1}{z}, zmm2, zmm3/m512/m32bcstBV/VAVX512_VNNIMultiply groups of 4 pairs of signed bytes in zmm3/m512/m32bcst with corresponding unsigned bytes of zmm2, summing those products and adding them to doubleword result, with signed saturation in zmm1, under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
BFullModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Multiplies the individual unsigned bytes of the first source operand by the corresponding signed bytes of the second source operand, producing intermediate signed word results. The word results are then summed and accumulated in the destination dword element size operand. If the intermediate sum overflows a 32b signed number, the result is saturated to either 0x7FFF_FFFF for positive numbers or 0x8000_0000 for negative numbers.

+

This instruction supports memory fault suppression.
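A small sketch of the saturating behavior (assuming AVX512_VNNI with AVX512VL; the AVX-VNNI forms _mm_dpbusd_avx_epi32/_mm_dpbusds_avx_epi32 behave the same way): with the accumulator already at INT32_MAX, the per-lane sum 4 * 255 * 127 = 129540 wraps under VPDPBUSD but sticks at 0x7FFF_FFFF under VPDPBUSDS.

#include <immintrin.h>
#include <stdint.h>
#include <limits.h>

/* Lane 0 of each result illustrates wrap-around versus signed saturation. */
static void saturation_demo(int32_t *wrapped, int32_t *saturated)
{
    __m128i acc = _mm_set1_epi32(INT_MAX);
    __m128i u   = _mm_set1_epi8((char)0xFF);   /* 255 in every unsigned byte */
    __m128i s   = _mm_set1_epi8(127);          /* 127 in every signed byte   */
    *wrapped   = _mm_cvtsi128_si32(_mm_dpbusd_epi32(acc, u, s));
    *saturated = _mm_cvtsi128_si32(_mm_dpbusds_epi32(acc, u, s));
}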

+

Operation + ¶ +

+

VPDPBUSDS dest, src1, src2 (VEX encoded versions) + ¶ +

+
VL=(128, 256)
+KL=VL/32
+ORIGDEST := DEST
+FOR i := 0 TO KL-1:
+    // Extending to 16b
+    // src1extend := ZERO_EXTEND
+    // src2extend := SIGN_EXTEND
+    p1word := src1extend(SRC1.byte[4*i+0]) * src2extend(SRC2.byte[4*i+0])
+    p2word := src1extend(SRC1.byte[4*i+1]) * src2extend(SRC2.byte[4*i+1])
+    p3word := src1extend(SRC1.byte[4*i+2]) * src2extend(SRC2.byte[4*i+2])
+    p4word := src1extend(SRC1.byte[4*i+3]) * src2extend(SRC2.byte[4*i+3])
+    DEST.dword[i] := SIGNED_DWORD_SATURATE(ORIGDEST.dword[i] + p1word + p2word + p3word + p4word)
+DEST[MAX_VL-1:VL] := 0
+
+

VPDPBUSDS dest, src1, src2 (EVEX encoded versions) + ¶ +

+
(KL,VL)=(4,128), (8,256), (16,512)
+ORIGDEST := DEST
+FOR i := 0 TO KL-1:
+    IF k1[i] or *no writemask*:
+        // Byte elements of SRC1 are zero-extended to 16b and
+        // byte elements of SRC2 are sign extended to 16b before multiplication.
+        IF SRC2 is memory and EVEX.b == 1:
+            t := SRC2.dword[0]
+        ELSE:
+            t := SRC2.dword[i]
+        p1word := ZERO_EXTEND(SRC1.byte[4*i]) * SIGN_EXTEND(t.byte[0])
+        p2word := ZERO_EXTEND(SRC1.byte[4*i+1]) * SIGN_EXTEND(t.byte[1])
+        p3word := ZERO_EXTEND(SRC1.byte[4*i+2]) * SIGN_EXTEND(t.byte[2])
+        p4word := ZERO_EXTEND(SRC1.byte[4*i+3]) *SIGN_EXTEND(t.byte[3])
+        DEST.dword[i] := SIGNED_DWORD_SATURATE(ORIGDEST.dword[i] + p1word + p2word + p3word + p4word)
+    ELSE IF *zeroing*:
+        DEST.dword[i] := 0
+    ELSE: // Merge masking, dest element unchanged
+        DEST.dword[i] := ORIGDEST.dword[i]
+DEST[MAX_VL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPDPBUSDS __m128i _mm_dpbusds_avx_epi32(__m128i, __m128i, __m128i);
+
+
VPDPBUSDS __m128i _mm_dpbusds_epi32(__m128i, __m128i, __m128i);
+
+
VPDPBUSDS __m128i _mm_mask_dpbusds_epi32(__m128i, __mmask8, __m128i, __m128i);
+
+
VPDPBUSDS __m128i _mm_maskz_dpbusds_epi32(__mmask8, __m128i, __m128i, __m128i);
+
+
VPDPBUSDS __m256i _mm256_dpbusds_avx_epi32(__m256i, __m256i, __m256i);
+
+
VPDPBUSDS __m256i _mm256_dpbusds_epi32(__m256i, __m256i, __m256i);
+
+
VPDPBUSDS __m256i _mm256_mask_dpbusds_epi32(__m256i, __mmask8, __m256i, __m256i);
+
+
VPDPBUSDS __m256i _mm256_maskz_dpbusds_epi32(__mmask8, __m256i, __m256i, __m256i);
+
+
VPDPBUSDS __m512i _mm512_dpbusds_epi32(__m512i, __m512i, __m512i);
+
+
VPDPBUSDS __m512i _mm512_mask_dpbusds_epi32(__m512i, __mmask16, __m512i, __m512i);
+
+
VPDPBUSDS __m512i _mm512_maskz_dpbusds_epi32(__mmask16, __m512i, __m512i, __m512i);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vpdpwssd.html b/x86/vpdpwssd.html new file mode 100644 index 0000000..28c230f --- /dev/null +++ b/x86/vpdpwssd.html @@ -0,0 +1,145 @@ + +VPDPWSSD + — Multiply and Add Signed Word Integers

VPDPWSSD + — Multiply and Add Signed Word Integers

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W0 52 /r VPDPWSSD xmm1, xmm2, xmm3/m128AV/VAVX-VNNIMultiply groups of 2 pairs of signed words in xmm3/m128 with corresponding signed words of xmm2, summing those products and adding them to doubleword result in xmm1.
VEX.256.66.0F38.W0 52 /r VPDPWSSD ymm1, ymm2, ymm3/m256AV/VAVX-VNNIMultiply groups of 2 pairs of signed words in ymm3/m256 with corresponding signed words of ymm2, summing those products and adding them to doubleword result in ymm1.
EVEX.128.66.0F38.W0 52 /r VPDPWSSD xmm1{k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512_VNNI AVX512VLMultiply groups of 2 pairs of signed words in xmm3/m128/m32bcst with corresponding signed words of xmm2, summing those products and adding them to doubleword result in xmm1, under writemask k1.
EVEX.256.66.0F38.W0 52 /r VPDPWSSD ymm1{k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512_VNNI AVX512VLMultiply groups of 2 pairs of signed words in ymm3/m256/m32bcst with corresponding signed words of ymm2, summing those products and adding them to doubleword result in ymm1, under writemask k1.
EVEX.512.66.0F38.W0 52 /r VPDPWSSD zmm1{k1}{z}, zmm2, zmm3/m512/m32bcstBV/VAVX512_VNNIMultiply groups of 2 pairs of signed words in zmm3/m512/m32bcst with corresponding signed words of zmm2, summing those products and adding them to doubleword result in zmm1, under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
BFullModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Multiplies the individual signed words of the first source operand by the corresponding signed words of the second source operand, producing intermediate signed, doubleword results. The adjacent doubleword results are then summed and accumulated in the destination operand.

+

This instruction supports memory fault suppression.
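One way to read this instruction (a non-normative sketch assuming AVX512_VNNI and AVX512BW) is as PMADDWD fused with the accumulation: the two helpers below produce the same 16 doubleword results, since both the fused and the separate add wrap modulo 2^32.

#include <immintrin.h>

/* Fused multiply-add of word pairs into dword accumulators. */
static __m512i fused(__m512i acc, __m512i a, __m512i b)
{
    return _mm512_dpwssd_epi32(acc, a, b);
}

/* The same result built from VPMADDWD followed by VPADDD. */
static __m512i unfused(__m512i acc, __m512i a, __m512i b)
{
    return _mm512_add_epi32(acc, _mm512_madd_epi16(a, b));
}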

+

Operation + ¶ +

+

VPDPWSSD dest, src1, src2 (VEX encoded versions) + ¶ +

+
VL=(128, 256)
+KL=VL/32
+ORIGDEST := DEST
+FOR i := 0 TO KL-1:
+    p1dword := SIGN_EXTEND(SRC1.word[2*i+0]) * SIGN_EXTEND(SRC2.word[2*i+0] )
+    p2dword := SIGN_EXTEND(SRC1.word[2*i+1]) * SIGN_EXTEND(SRC2.word[2*i+1] )
+    DEST.dword[i] := ORIGDEST.dword[i] + p1dword + p2dword
+DEST[MAX_VL-1:VL] := 0
+
+

VPDPWSSD dest, src1, src2 (EVEX encoded versions) + ¶ +

+
(KL,VL)=(4,128), (8,256), (16,512)
+ORIGDEST := DEST
+FOR i := 0 TO KL-1:
+    IF k1[i] or *no writemask*:
+        IF SRC2 is memory and EVEX.b == 1:
+            t := SRC2.dword[0]
+        ELSE:
+            t := SRC2.dword[i]
+        p1dword := SIGN_EXTEND(SRC1.word[2*i]) * SIGN_EXTEND(t.word[0])
+        p2dword := SIGN_EXTEND(SRC1.word[2*i+1]) * SIGN_EXTEND(t.word[1])
+        DEST.dword[i] := ORIGDEST.dword[i] + p1dword + p2dword
+    ELSE IF *zeroing*:
+        DEST.dword[i] := 0
+    ELSE: // Merge masking, dest element unchanged
+        DEST.dword[i] := ORIGDEST.dword[i]
+DEST[MAX_VL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPDPWSSD __m128i _mm_dpwssd_avx_epi32(__m128i, __m128i, __m128i);
+
+
VPDPWSSD __m128i _mm_dpwssd_epi32(__m128i, __m128i, __m128i);
+
+
VPDPWSSD __m128i _mm_mask_dpwssd_epi32(__m128i, __mmask8, __m128i, __m128i);
+
+
VPDPWSSD __m128i _mm_maskz_dpwssd_epi32(__mmask8, __m128i, __m128i, __m128i);
+
+
VPDPWSSD __m256i _mm256_dpwssd_avx_epi32(__m256i, __m256i, __m256i);
+
+
VPDPWSSD __m256i _mm256_dpwssd_epi32(__m256i, __m256i, __m256i);
+
+
VPDPWSSD __m256i _mm256_mask_dpwssd_epi32(__m256i, __mmask8, __m256i, __m256i);
+
+
VPDPWSSD __m256i _mm256_maskz_dpwssd_epi32(__mmask8, __m256i, __m256i, __m256i);
+
+
VPDPWSSD __m512i _mm512_dpwssd_epi32(__m512i, __m512i, __m512i);
+
+
VPDPWSSD __m512i _mm512_mask_dpwssd_epi32(__m512i, __mmask16, __m512i, __m512i);
+
+
VPDPWSSD __m512i _mm512_maskz_dpwssd_epi32(__mmask16, __m512i, __m512i, __m512i);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vpdpwssds.html b/x86/vpdpwssds.html new file mode 100644 index 0000000..724fe4f --- /dev/null +++ b/x86/vpdpwssds.html @@ -0,0 +1,145 @@ + +VPDPWSSDS + — Multiply and Add Signed Word Integers With Saturation

VPDPWSSDS + — Multiply and Add Signed Word Integers With Saturation

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W0 53 /r VPDPWSSDS xmm1, xmm2, xmm3/m128AV/VAVX-VNNIMultiply groups of 2 pairs of signed words in xmm3/m128 with corresponding signed words of xmm2, summing those products and adding them to doubleword result in xmm1, with signed saturation.
VEX.256.66.0F38.W0 53 /r VPDPWSSDS ymm1, ymm2, ymm3/m256AV/VAVX-VNNIMultiply groups of 2 pairs of signed words in ymm3/m256 with corresponding signed words of ymm2, summing those products and adding them to doubleword result in ymm1, with signed saturation.
EVEX.128.66.0F38.W0 53 /r VPDPWSSDS xmm1{k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512_VNNI AVX512VLMultiply groups of 2 pairs of signed words in xmm3/m128/m32bcst with corresponding signed words of xmm2, summing those products and adding them to doubleword result in xmm1, with signed saturation, under writemask k1.
EVEX.256.66.0F38.W0 53 /r VPDPWSSDS ymm1{k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512_VNNI AVX512VLMultiply groups of 2 pairs of signed words in ymm3/m256/m32bcst with corresponding signed words of ymm2, summing those products and adding them to doubleword result in ymm1, with signed saturation, under writemask k1.
EVEX.512.66.0F38.W0 53 /r VPDPWSSDS zmm1{k1}{z}, zmm2, zmm3/m512/m32bcstBV/VAVX512_VNNIMultiply groups of 2 pairs of signed words in zmm3/m512/m32bcst with corresponding signed words of zmm2, summing those products and adding them to doubleword result in zmm1, with signed saturation, under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)VEX.vvvv (r)ModRM:r/m (r)N/A
BFullModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Multiplies the individual signed words of the first source operand by the corresponding signed words of the second source operand, producing intermediate signed, doubleword results. The adjacent doubleword results are then summed and accumulated in the destination operand. If the intermediate sum overflows a 32b signed number, the result is saturated to either 0x7FFF_FFFF for positive numbers or 0x8000_0000 for negative numbers.

+

This instruction supports memory fault suppression.

+

Operation + ¶ +

+

VPDPWSSDS dest, src1, src2 (VEX encoded versions) + ¶ +

+
VL=(128, 256)
+KL=VL/32
+ORIGDEST := DEST
+FOR i := 0 TO KL-1:
+    p1dword := SIGN_EXTEND(SRC1.word[2*i+0]) * SIGN_EXTEND(SRC2.word[2*i+0])
+    p2dword := SIGN_EXTEND(SRC1.word[2*i+1]) * SIGN_EXTEND(SRC2.word[2*i+1])
+    DEST.dword[i] := SIGNED_DWORD_SATURATE(ORIGDEST.dword[i] + p1dword + p2dword)
+DEST[MAX_VL-1:VL] := 0
+
+

VPDPWSSDS dest, src1, src2 (EVEX encoded versions) + ¶ +

+
(KL,VL)=(4,128), (8,256), (16,512)
+ORIGDEST := DEST
+FOR i := 0 TO KL-1:
+    IF k1[i] or *no writemask*:
+        IF SRC2 is memory and EVEX.b == 1:
+            t := SRC2.dword[0]
+        ELSE:
+            t := SRC2.dword[i]
+        p1dword := SIGN_EXTEND(SRC1.word[2*i]) * SIGN_EXTEND(t.word[0])
+        p2dword := SIGN_EXTEND(SRC1.word[2*i+1]) * SIGN_EXTEND(t.word[1])
+        DEST.dword[i] := SIGNED_DWORD_SATURATE(ORIGDEST.dword[i] + p1dword + p2dword)
+    ELSE IF *zeroing*:
+        DEST.dword[i] := 0
+    ELSE: // Merge masking, dest element unchanged
+        DEST.dword[i] := ORIGDEST.dword[i]
+DEST[MAX_VL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPDPWSSDS __m128i _mm_dpwssds_avx_epi32(__m128i, __m128i, __m128i);
+
+
VPDPWSSDS __m128i _mm_dpwssds_epi32(__m128i, __m128i, __m128i);
+
+
+VPDPWSSDS __m128i _mm_mask_dpwssds_epi32(__m128i, __mmask8, __m128i, __m128i);
+
+
+VPDPWSSDS __m128i _mm_maskz_dpwssds_epi32(__mmask8, __m128i, __m128i, __m128i);
+
+
VPDPWSSDS __m256i _mm256_dpwssds_avx_epi32(__m256i, __m256i, __m256i);
+
+
+VPDPWSSDS __m256i _mm256_dpwssds_epi32(__m256i, __m256i, __m256i);
+
+
+VPDPWSSDS __m256i _mm256_mask_dpwssds_epi32(__m256i, __mmask8, __m256i, __m256i);
+
+
+VPDPWSSDS __m256i _mm256_maskz_dpwssds_epi32(__mmask8, __m256i, __m256i, __m256i);
+
+
+VPDPWSSDS __m512i _mm512_dpwssds_epi32(__m512i, __m512i, __m512i);
+
+
+VPDPWSSDS __m512i _mm512_mask_dpwssds_epi32(__m512i, __mmask16, __m512i, __m512i);
+
+
+VPDPWSSDS __m512i _mm512_maskz_dpwssds_epi32(__mmask16, __m512i, __m512i, __m512i);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vperm2f128.html b/x86/vperm2f128.html new file mode 100644 index 0000000..1e3a696 --- /dev/null +++ b/x86/vperm2f128.html @@ -0,0 +1,175 @@ + +VPERM2F128 + — Permute Floating-Point Values

VPERM2F128 + — Permute Floating-Point Values

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.256.66.0F3A.W0 06 /r ib VPERM2F128 ymm1, ymm2, ymm3/m256, imm8RVMIV/VAVXPermute 128-bit floating-point fields in ymm2 and ymm3/mem using controls from imm8 and store result in ymm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RVMIModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

Permute 128-bit floating-point fields from the first source operand (second operand) and second source operand (third operand) using bits in the 8-bit immediate and store results in the destination operand (first operand). The first source operand is a YMM register, the second source operand is a YMM register or a 256-bit memory location, and the destination operand is a YMM register.

+
Figure 5-21. VPERM2F128 Operation
+

Imm8[1:0] select the source for the first destination 128-bit field, imm8[5:4] select the source for the second destination field. If imm8[3] is set, the low 128-bit field is zeroed. If imm8[7] is set, the high 128-bit field is zeroed.

+

VEX.L must be 1, otherwise the instruction will #UD.
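A minimal usage sketch of the imm8 selection rules above, using the _mm256_permute2f128_ps intrinsic listed later on this page; it assumes a compiler with AVX enabled (for example -mavx) and is only illustrative:

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m256 a = _mm256_setr_ps(0, 1, 2, 3, 4, 5, 6, 7);
    __m256 b = _mm256_setr_ps(8, 9, 10, 11, 12, 13, 14, 15);

    /* imm8 = 0x01: low field <- a's high lane, high field <- a's low lane. */
    __m256 swapped = _mm256_permute2f128_ps(a, b, 0x01);

    /* imm8 = 0x20: low field <- a's low lane, high field <- b's low lane. */
    __m256 mixed = _mm256_permute2f128_ps(a, b, 0x20);

    float s[8], t[8];
    _mm256_storeu_ps(s, swapped);   /* 4 5 6 7 0 1 2 3 */
    _mm256_storeu_ps(t, mixed);     /* 0 1 2 3 8 9 10 11 */
    for (int i = 0; i < 8; i++) printf("%g %g\n", s[i], t[i]);
    return 0;
}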

+

Operation + ¶ +

+

VPERM2F128 + ¶ +

+
CASE IMM8[1:0] of
+0: DEST[127:0] := SRC1[127:0]
+1: DEST[127:0] := SRC1[255:128]
+2: DEST[127:0] := SRC2[127:0]
+3: DEST[127:0] := SRC2[255:128]
+ESAC
+CASE IMM8[5:4] of
+0: DEST[255:128] := SRC1[127:0]
+1: DEST[255:128] := SRC1[255:128]
+2: DEST[255:128] := SRC2[127:0]
+3: DEST[255:128] := SRC2[255:128]
+ESAC
+IF (imm8[3])
+DEST[127:0] := 0
+FI
+IF (imm8[7])
+DEST[MAXVL-1:128] := 0
+FI
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPERM2F128: __m256 _mm256_permute2f128_ps (__m256 a, __m256 b, int control)
+
+
VPERM2F128: __m256d _mm256_permute2f128_pd (__m256d a, __m256d b, int control)
+
+
VPERM2F128: __m256i _mm256_permute2f128_si256 (__m256i a, __m256i b, int control)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-23, “Type 6 Class Exception Conditions.”

+

Additionally:

+ + + + + +
#UDIf VEX.L = 0
If VEX.W = 1.
diff --git a/x86/vperm2i128.html b/x86/vperm2i128.html new file mode 100644 index 0000000..6002e8e --- /dev/null +++ b/x86/vperm2i128.html @@ -0,0 +1,171 @@ + +VPERM2I128 + — Permute Integer Values

VPERM2I128 + — Permute Integer Values

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 -bit ModeCPUID Feature FlagDescription
VEX.256.66.0F3A.W0 46 /r ib VPERM2I128 ymm1, ymm2, ymm3/m256, imm8RVMIV/VAVX2Permute 128-bit integer data in ymm2 and ymm3/mem using controls from imm8 and store result in ymm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RVMIModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

Permute 128-bit integer data from the first source operand (second operand) and second source operand (third operand) using bits in the 8-bit immediate and store results in the destination operand (first operand). The first source operand is a YMM register, the second source operand is a YMM register or a 256-bit memory location, and the destination operand is a YMM register.

+
Figure 5-22. VPERM2I128 Operation
+

Imm8[1:0] select the source for the first destination 128-bit field, imm8[5:4] select the source for the second destination field. If imm8[3] is set, the low 128-bit field is zeroed. If imm8[7] is set, the high 128-bit field is zeroed.

+

VEX.L must be 1, otherwise the instruction will #UD.
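A short illustrative sketch using the _mm256_permute2x128_si256 intrinsic listed later on this page (assumes AVX2 is enabled, e.g., -mavx2):

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m256i a = _mm256_setr_epi32(0, 1, 2, 3, 4, 5, 6, 7);
    __m256i b = _mm256_setr_epi32(8, 9, 10, 11, 12, 13, 14, 15);

    /* imm8 = 0x31: low field <- a's high lane (sel 1), high field <- b's high lane (sel 3). */
    __m256i hi_halves = _mm256_permute2x128_si256(a, b, 0x31);

    int out[8];
    _mm256_storeu_si256((__m256i *)out, hi_halves);
    for (int i = 0; i < 8; i++) printf("%d ", out[i]);   /* 4 5 6 7 12 13 14 15 */
    printf("\n");
    return 0;
}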

+

Operation + ¶ +

+

VPERM2I128 + ¶ +

+
CASE IMM8[1:0] of
+0: DEST[127:0] := SRC1[127:0]
+1: DEST[127:0] := SRC1[255:128]
+2: DEST[127:0] := SRC2[127:0]
+3: DEST[127:0] := SRC2[255:128]
+ESAC
+CASE IMM8[5:4] of
+0: DEST[255:128] := SRC1[127:0]
+1: DEST[255:128] := SRC1[255:128]
+2: DEST[255:128] := SRC2[127:0]
+3: DEST[255:128] := SRC2[255:128]
+ESAC
+IF (imm8[3])
+DEST[127:0] := 0
+FI
+IF (imm8[7])
+DEST[255:128] := 0
+FI
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPERM2I128: __m256i _mm256_permute2x128_si256 (__m256i a, __m256i b, int control)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None

+

Other Exceptions + ¶ +

+

See Table 2-23, “Type 6 Class Exception Conditions.”

+

Additionally:

+ + + + + +
#UDIf VEX.L = 0,
If VEX.W = 1.
diff --git a/x86/vpermb.html b/x86/vpermb.html new file mode 100644 index 0000000..0992fe8 --- /dev/null +++ b/x86/vpermb.html @@ -0,0 +1,113 @@ + +VPERMB + — Permute Packed Bytes Elements

VPERMB + — Permute Packed Bytes Elements

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 8D /r VPERMB xmm1 {k1}{z}, xmm2, xmm3/m128AV/VAVX512VL AVX512_VBMIPermute bytes in xmm3/m128 using byte indexes in xmm2 and store the result in xmm1 using writemask k1.
EVEX.256.66.0F38.W0 8D /r VPERMB ymm1 {k1}{z}, ymm2, ymm3/m256AV/VAVX512VL AVX512_VBMIPermute bytes in ymm3/m256 using byte indexes in ymm2 and store the result in ymm1 using writemask k1.
EVEX.512.66.0F38.W0 8D /r VPERMB zmm1 {k1}{z}, zmm2, zmm3/m512AV/VAVX512_VBMIPermute bytes in zmm3/m512 using byte indexes in zmm2 and store the result in zmm1 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Copies bytes from the second source operand (the third operand) to the destination operand (the first operand) according to the byte indices in the first source operand (the second operand). Note that this instruction permits a byte in the source operand to be copied to more than one location in the destination operand.

+

Only the low 6 (EVEX.512), 5 (EVEX.256), or 4 (EVEX.128) bits of each byte index are used to select the location of the source byte from the second source operand.

+

The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination operand is a ZMM/YMM/XMM register updated at byte granularity by the writemask k1.
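A brief sketch of the byte permute using the 128-bit intrinsic form listed later on this page (assumes a CPU and compiler with AVX512VL and AVX512_VBMI enabled, e.g., -mavx512vl -mavx512vbmi):

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    /* Byte indices 15..0 reverse the byte order of the source. */
    __m128i idx = _mm_setr_epi8(15, 14, 13, 12, 11, 10, 9, 8,
                                 7,  6,  5,  4,  3,  2, 1, 0);
    __m128i src = _mm_setr_epi8( 0,  1,  2,  3,  4,  5, 6, 7,
                                 8,  9, 10, 11, 12, 13, 14, 15);

    __m128i rev = _mm_permutexvar_epi8(idx, src);

    unsigned char out[16];
    _mm_storeu_si128((__m128i *)out, rev);
    for (int i = 0; i < 16; i++) printf("%u ", out[i]);   /* 15 14 ... 0 */
    printf("\n");
    return 0;
}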

+

Operation + ¶ +

+

VPERMB (EVEX encoded versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+IF VL = 128:
+    n := 3;
+ELSE IF VL = 256:
+    n := 4;
+ELSE IF VL = 512:
+    n := 5;
+FI;
+FOR j := 0 TO KL-1:
+    id := SRC1[j*8 + n : j*8] ; // location of the source byte
+    IF k1[j] OR *no writemask* THEN
+        DEST[j*8 + 7: j*8] := SRC2[id*8 +7: id*8];
+    ELSE IF zeroing-masking THEN
+        DEST[j*8 + 7: j*8] := 0;
+    *ELSE
+        DEST[j*8 + 7: j*8] remains unchanged*
+    FI
+ENDFOR
+DEST[MAX_VL-1:VL] := 0;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPERMB __m512i _mm512_permutexvar_epi8( __m512i idx, __m512i a);
+
+
VPERMB __m512i _mm512_mask_permutexvar_epi8(__m512i s, __mmask64 k, __m512i idx, __m512i a);
+
+
VPERMB __m512i _mm512_maskz_permutexvar_epi8( __mmask64 k, __m512i idx, __m512i a);
+
+
VPERMB __m256i _mm256_permutexvar_epi8( __m256i idx, __m256i a);
+
+
VPERMB __m256i _mm256_mask_permutexvar_epi8(__m256i s, __mmask32 k, __m256i idx, __m256i a);
+
+
VPERMB __m256i _mm256_maskz_permutexvar_epi8( __mmask32 k, __m256i idx, __m256i a);
+
+
VPERMB __m128i _mm_permutexvar_epi8( __m128i idx, __m128i a);
+
+
VPERMB __m128i _mm_mask_permutexvar_epi8(__m128i s, __mmask16 k, __m128i idx, __m128i a);
+
+
VPERMB __m128i _mm_maskz_permutexvar_epi8( __mmask16 k, __m128i idx, __m128i a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/vpermd.vpermw.html b/x86/vpermd.vpermw.html new file mode 100644 index 0000000..da3c152 --- /dev/null +++ b/x86/vpermd.vpermw.html @@ -0,0 +1,208 @@ + +VPERMD/VPERMW + — Permute Packed Doubleword/Word Elements

VPERMD/VPERMW + — Permute Packed Doubleword/Word Elements

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.256.66.0F38.W0 36 /r VPERMD ymm1, ymm2, ymm3/m256AV/VAVX2Permute doublewords in ymm3/m256 using indices in ymm2 and store the result in ymm1.
EVEX.256.66.0F38.W0 36 /r VPERMD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FPermute doublewords in ymm3/m256/m32bcst using indexes in ymm2 and store the result in ymm1 using writemask k1.
EVEX.512.66.0F38.W0 36 /r VPERMD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstBV/VAVX512FPermute doublewords in zmm3/m512/m32bcst using indices in zmm2 and store the result in zmm1 using writemask k1.
EVEX.128.66.0F38.W1 8D /r VPERMW xmm1 {k1}{z}, xmm2, xmm3/m128CV/VAVX512VL AVX512BWPermute word integers in xmm3/m128 using indexes in xmm2 and store the result in xmm1 using writemask k1.
EVEX.256.66.0F38.W1 8D /r VPERMW ymm1 {k1}{z}, ymm2, ymm3/m256CV/VAVX512VL AVX512BWPermute word integers in ymm3/m256 using indexes in ymm2 and store the result in ymm1 using writemask k1.
EVEX.512.66.0F38.W1 8D /r VPERMW zmm1 {k1}{z}, zmm2, zmm3/m512CV/VAVX512BWPermute word integers in zmm3/m512 using indexes in zmm2 and store the result in zmm1 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
BFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
CFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Copies doublewords (or words) from the second source operand (the third operand) to the destination operand (the first operand) according to the indices in the first source operand (the second operand). Note that this instruction permits a doubleword (word) in the source operand to be copied to more than one location in the destination operand.

+

VEX.256 encoded VPERMD: The first and second operands are YMM registers, the third operand can be a YMM register or memory location. Bits (MAXVL-1:256) of the corresponding destination register are zeroed.

+

EVEX encoded VPERMD: The first and second operands are ZMM/YMM registers, the third operand can be a ZMM/YMM register, a 512/256-bit memory location or a 512/256-bit vector broadcasted from a 32-bit memory location. The elements in the destination are updated using the writemask k1.

+

VPERMW: The first and second operands are ZMM/YMM/XMM registers; the third operand can be a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination is updated using the writemask k1.

+

EVEX.128 encoded versions: Bits (MAXVL-1:128) of the corresponding ZMM register are zeroed.
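An illustrative sketch of the dword permute through the _mm256_permutexvar_epi32 intrinsic listed later on this page (assumes AVX512F and AVX512VL support, e.g., -mavx512f -mavx512vl; the rotation index pattern is just an example):

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    /* Rotate the eight dwords left by one position. */
    __m256i idx = _mm256_setr_epi32(1, 2, 3, 4, 5, 6, 7, 0);
    __m256i src = _mm256_setr_epi32(10, 11, 12, 13, 14, 15, 16, 17);

    __m256i rot = _mm256_permutexvar_epi32(idx, src);

    int out[8];
    _mm256_storeu_si256((__m256i *)out, rot);
    for (int i = 0; i < 8; i++) printf("%d ", out[i]);   /* 11 12 13 14 15 16 17 10 */
    printf("\n");
    return 0;
}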

+

Operation + ¶ +

+

VPERMD (EVEX encoded versions) + ¶ +

+
(KL, VL) = (8, 256), (16, 512)
+IF VL = 256 THEN n := 2; FI;
+IF VL = 512 THEN n := 3; FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    id := 32*SRC1[i+n:i]
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[i+31:i] := SRC2[31:0];
+                ELSE DEST[i+31:i] := SRC2[id+31:id];
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPERMD (VEX.256 encoded version) + ¶ +

+
DEST[31:0] := (SRC2[255:0] >> (SRC1[2:0] * 32))[31:0];
+DEST[63:32] := (SRC2[255:0] >> (SRC1[34:32] * 32))[31:0];
+DEST[95:64] := (SRC2[255:0] >> (SRC1[66:64] * 32))[31:0];
+DEST[127:96] := (SRC2[255:0] >> (SRC1[98:96] * 32))[31:0];
+DEST[159:128] := (SRC2[255:0] >> (SRC1[130:128] * 32))[31:0];
+DEST[191:160] := (SRC2[255:0] >> (SRC1[162:160] * 32))[31:0];
+DEST[223:192] := (SRC2[255:0] >> (SRC1[194:192] * 32))[31:0];
+DEST[255:224] := (SRC2[255:0] >> (SRC1[226:224] * 32))[31:0];
+DEST[MAXVL-1:256] := 0
+
+

VPERMW (EVEX encoded versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+IF VL = 128 THEN n := 2; FI;
+IF VL = 256 THEN n := 3; FI;
+IF VL = 512 THEN n := 4; FI;
+FOR j := 0 TO KL-1
+    i := j * 16
+    id := 16*SRC1[i+n:i]
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := SRC2[id+15:id]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPERMD __m512i _mm512_permutexvar_epi32( __m512i idx, __m512i a);
+
+
VPERMD __m512i _mm512_mask_permutexvar_epi32(__m512i s, __mmask16 k, __m512i idx, __m512i a);
+
+
VPERMD __m512i _mm512_maskz_permutexvar_epi32( __mmask16 k, __m512i idx, __m512i a);
+
+
VPERMD __m256i _mm256_permutexvar_epi32( __m256i idx, __m256i a);
+
+
VPERMD __m256i _mm256_mask_permutexvar_epi32(__m256i s, __mmask8 k, __m256i idx, __m256i a);
+
+
VPERMD __m256i _mm256_maskz_permutexvar_epi32( __mmask8 k, __m256i idx, __m256i a);
+
+
VPERMW __m512i _mm512_permutexvar_epi16( __m512i idx, __m512i a);
+
+
VPERMW __m512i _mm512_mask_permutexvar_epi16(__m512i s, __mmask32 k, __m512i idx, __m512i a);
+
+
VPERMW __m512i _mm512_maskz_permutexvar_epi16( __mmask32 k, __m512i idx, __m512i a);
+
+
VPERMW __m256i _mm256_permutexvar_epi16( __m256i idx, __m256i a);
+
+
VPERMW __m256i _mm256_mask_permutexvar_epi16(__m256i s, __mmask16 k, __m256i idx, __m256i a);
+
+
VPERMW __m256i _mm256_maskz_permutexvar_epi16( __mmask16 k, __m256i idx, __m256i a);
+
+
VPERMW __m128i _mm_permutexvar_epi16( __m128i idx, __m128i a);
+
+
VPERMW __m128i _mm_mask_permutexvar_epi16(__m128i s, __mmask8 k, __m128i idx, __m128i a);
+
+
VPERMW __m128i _mm_maskz_permutexvar_epi16( __mmask8 k, __m128i idx, __m128i a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded VPERMD, see Table 2-50, “Type E4NF Class Exception Conditions.”

+

EVEX-encoded VPERMW, see Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”

+

Additionally:

+ + + + + +
#UDIf VEX.L = 0.
If EVEX.L’L = 0 for VPERMD.
diff --git a/x86/vpermi2b.html b/x86/vpermi2b.html new file mode 100644 index 0000000..5764d50 --- /dev/null +++ b/x86/vpermi2b.html @@ -0,0 +1,115 @@ + +VPERMI2B + — Full Permute of Bytes From Two Tables Overwriting the Index

VPERMI2B + — Full Permute of Bytes From Two Tables Overwriting the Index

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 75 /r VPERMI2B xmm1 {k1}{z}, xmm2, xmm3/m128AV/VAVX512VL AVX512_VBMIPermute bytes in xmm3/m128 and xmm2 using byte indexes in xmm1 and store the byte results in xmm1 using writemask k1.
EVEX.256.66.0F38.W0 75 /r VPERMI2B ymm1 {k1}{z}, ymm2, ymm3/m256AV/VAVX512VL AVX512_VBMIPermute bytes in ymm3/m256 and ymm2 using byte indexes in ymm1 and store the byte results in ymm1 using writemask k1.
EVEX.512.66.0F38.W0 75 /r VPERMI2B zmm1 {k1}{z}, zmm2, zmm3/m512AV/VAVX512_VBMIPermute bytes in zmm3/m512 and zmm2 using byte indexes in zmm1 and store the byte results in zmm1 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFull MemModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Permutes byte values in the second operand (the first source operand) and the third operand (the second source operand) using the byte indices in the first operand (the destination operand) to select byte elements from the second or third source operands. The selected byte elements are written to the destination at byte granularity under the writemask k1.

+

The first and second operands are ZMM/YMM/XMM registers. The first operand contains input indices to select elements from the two input tables in the 2nd and 3rd operands. The first operand is also the destination of the result. The third operand can be a ZMM/YMM/XMM register, or a 512/256/128-bit memory location. In each index byte, the id bit for table selection is bit 6/5/4, and bits [5:0]/[4:0]/[3:0] select the element within each input table.

+

Note that these instructions permit a byte value in the source operands to be copied to more than one location in the destination operand. Also, the same tables can be reused in subsequent iterations, but the index elements are overwritten.

+

Bits (MAX_VL-1:256/128) of the destination are zeroed for VL=256,128.
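A small sketch of the two-table byte permute using the 128-bit intrinsic form listed later on this page (assumes AVX512VL and AVX512_VBMI; the interleaving index pattern is just an example):

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128i a = _mm_setr_epi8('a', 'b', 'c', 'd', 'e', 'f', 'g', 'h',
                              'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p');
    __m128i b = _mm_setr_epi8('A', 'B', 'C', 'D', 'E', 'F', 'G', 'H',
                              'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P');

    /* For the 128-bit form, index bits [3:0] pick the byte and bit 4 picks the table (0 = a, 1 = b). */
    __m128i idx = _mm_setr_epi8(0, 16, 1, 17, 2, 18, 3, 19,
                                4, 20, 5, 21, 6, 22, 7, 23);

    __m128i mixed = _mm_permutex2var_epi8(a, idx, b);

    char out[17] = {0};
    _mm_storeu_si128((__m128i *)out, mixed);
    printf("%s\n", out);   /* aAbBcCdDeEfFgGhH */
    return 0;
}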

+

Operation + ¶ +

+

VPERMI2B (EVEX encoded versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+IF VL = 128:
+    id := 3;
+ELSE IF VL = 256:
+    id := 4;
+ELSE IF VL = 512:
+    id := 5;
+FI;
+TMP_DEST[VL-1:0] := DEST[VL-1:0];
+FOR j := 0 TO KL-1
+    off := 8*TMP_DEST[j*8 + id: j*8] ;
+    IF k1[j] OR *no writemask*:
+        DEST[j*8 + 7: j*8] := TMP_DEST[j*8+id+1]? SRC2[off+7:off] : SRC1[off+7:off];
+    ELSE IF *zeroing-masking*
+        DEST[j*8 + 7: j*8] := 0;
+    *ELSE
+        DEST[j*8 + 7: j*8] remains unchanged*
+    FI;
+ENDFOR
+DEST[MAX_VL-1:VL] := 0;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPERMI2B __m512i _mm512_permutex2var_epi8(__m512i a, __m512i idx, __m512i b);
+
+
VPERMI2B __m512i _mm512_mask2_permutex2var_epi8(__m512i a, __m512i idx, __mmask64 k, __m512i b);
+
+
VPERMI2B __m512i _mm512_maskz_permutex2var_epi8(__mmask64 k, __m512i a, __m512i idx, __m512i b);
+
+
VPERMI2B __m256i _mm256_permutex2var_epi8(__m256i a, __m256i idx, __m256i b);
+
+
VPERMI2B __m256i _mm256_mask2_permutex2var_epi8(__m256i a, __m256i idx, __mmask32 k, __m256i b);
+
+
VPERMI2B __m256i _mm256_maskz_permutex2var_epi8(__mmask32 k, __m256i a, __m256i idx, __m256i b);
+
+
VPERMI2B __m128i _mm_permutex2var_epi8(__m128i a, __m128i idx, __m128i b);
+
+
VPERMI2B __m128i _mm_mask2_permutex2var_epi8(__m128i a, __m128i idx, __mmask16 k, __m128i b);
+
+
VPERMI2B __m128i _mm_maskz_permutex2var_epi8(__mmask16 k, __m128i a, __m128i idx, __m128i b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/vpermi2w.vpermi2d.vpermi2q.vpermi2ps.vpermi2pd.html b/x86/vpermi2w.vpermi2d.vpermi2q.vpermi2ps.vpermi2pd.html new file mode 100644 index 0000000..4da15d1 --- /dev/null +++ b/x86/vpermi2w.vpermi2d.vpermi2q.vpermi2ps.vpermi2pd.html @@ -0,0 +1,386 @@ + +VPERMI2W/VPERMI2D/VPERMI2Q/VPERMI2PS/VPERMI2PD + — Full Permute From Two Tables Overwriting the Index

VPERMI2W/VPERMI2D/VPERMI2Q/VPERMI2PS/VPERMI2PD + — Full Permute From Two Tables Overwriting the Index

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W1 75 /r VPERMI2W xmm1 {k1}{z}, xmm2, xmm3/m128AV/VAVX512VL AVX512BWPermute word integers from two tables in xmm3/m128 and xmm2 using indexes in xmm1 and store the result in xmm1 using writemask k1.
EVEX.256.66.0F38.W1 75 /r VPERMI2W ymm1 {k1}{z}, ymm2, ymm3/m256AV/VAVX512VL AVX512BWPermute word integers from two tables in ymm3/m256 and ymm2 using indexes in ymm1 and store the result in ymm1 using writemask k1.
EVEX.512.66.0F38.W1 75 /r VPERMI2W zmm1 {k1}{z}, zmm2, zmm3/m512AV/VAVX512BWPermute word integers from two tables in zmm3/m512 and zmm2 using indexes in zmm1 and store the result in zmm1 using writemask k1.
EVEX.128.66.0F38.W0 76 /r VPERMI2D xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FPermute double-words from two tables in xmm3/m128/m32bcst and xmm2 using indexes in xmm1 and store the result in xmm1 using writemask k1.
EVEX.256.66.0F38.W0 76 /r VPERMI2D ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FPermute double-words from two tables in ymm3/m256/m32bcst and ymm2 using indexes in ymm1 and store the result in ymm1 using writemask k1.
EVEX.512.66.0F38.W0 76 /r VPERMI2D zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstBV/VAVX512FPermute double-words from two tables in zmm3/m512/m32bcst and zmm2 using indices in zmm1 and store the result in zmm1 using writemask k1.
EVEX.128.66.0F38.W1 76 /r VPERMI2Q xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FPermute quad-words from two tables in xmm3/m128/m64bcst and xmm2 using indexes in xmm1 and store the result in xmm1 using writemask k1.
EVEX.256.66.0F38.W1 76 /r VPERMI2Q ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FPermute quad-words from two tables in ymm3/m256/m64bcst and ymm2 using indexes in ymm1 and store the result in ymm1 using writemask k1.
EVEX.512.66.0F38.W1 76 /r VPERMI2Q zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstBV/VAVX512FPermute quad-words from two tables in zmm3/m512/m64bcst and zmm2 using indices in zmm1 and store the result in zmm1 using writemask k1.
EVEX.128.66.0F38.W0 77 /r VPERMI2PS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FPermute single-precision floating-point values from two tables in xmm3/m128/m32bcst and xmm2 using indexes in xmm1 and store the result in xmm1 using writemask k1.
EVEX.256.66.0F38.W0 77 /r VPERMI2PS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FPermute single-precision floating-point values from two tables in ymm3/m256/m32bcst and ymm2 using indexes in ymm1 and store the result in ymm1 using writemask k1.
EVEX.512.66.0F38.W0 77 /r VPERMI2PS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstBV/VAVX512FPermute single-precision floating-point values from two tables in zmm3/m512/m32bcst and zmm2 using indices in zmm1 and store the result in zmm1 using writemask k1.
EVEX.128.66.0F38.W1 77 /r VPERMI2PD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FPermute double precision floating-point values from two tables in xmm3/m128/m64bcst and xmm2 using indexes in xmm1 and store the result in xmm1 using writemask k1.
EVEX.256.66.0F38.W1 77 /r VPERMI2PD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FPermute double precision floating-point values from two tables in ymm3/m256/m64bcst and ymm2 using indexes in ymm1 and store the result in ymm1 using writemask k1.
EVEX.512.66.0F38.W1 77 /r VPERMI2PD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstBV/VAVX512FPermute double precision floating-point values from two tables in zmm3/m512/m64bcst and zmm2 using indices in zmm1 and store the result in zmm1 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFull MemModRM:reg (r,w)EVEX.vvvv (r)ModRM:r/m (r)N/A
BFullModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Permutes 16-bit/32-bit/64-bit values in the second operand (the first source operand) and the third operand (the second source operand) using indices in the first operand to select elements from the second and third operands. The selected elements are written to the destination operand (the first operand) according to the writemask k1.

+

The first and second operands are ZMM/YMM/XMM registers. The first operand contains input indices to select elements from the two input tables in the 2nd and 3rd operands. The first operand is also the destination of the result.

+

D/Q/PS/PD element versions: The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32/64-bit memory location. Broadcast from the low 32/64-bit memory location is performed if EVEX.b and the id bit for table selection are set (selecting table_2).

+

Dword/PS versions: The id bit for table selection is bit 4/3/2, depending on VL=512, 256, 128. Bits [3:0]/[2:0]/[1:0] of each element in the input index vector select an element within the two source operands. If the id bit is 0, table_1 (the first source) is selected; otherwise the second source operand is selected.

+

Qword/PD versions: The id bit for table selection is bit 3/2/1, and bits [2:0]/[1:0]/bit 0 select the element within each input table.

+

Word element versions: The second source operand can be a ZMM/YMM/XMM register, or a 512/256/128-bit memory location. The id bit for table selection is bit 5/4/3, and bits [4:0]/[3:0]/[2:0] select the element within each input table.

+

Note that these instructions permit a 16-bit/32-bit/64-bit value in the source operands to be copied to more than one location in the destination operand. Note also that in this case, the same table can be reused for example for a second iteration, while the index elements are overwritten.

+

Bits (MAXVL-1:256/128) of the destination are zeroed for VL=256,128.
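A minimal sketch of the two-table selection rule for the dword/PS form, using the _mm_permutex2var_ps intrinsic listed later on this page (assumes AVX512F and AVX512VL; the index values are arbitrary examples):

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128 a = _mm_setr_ps(0.0f, 1.0f, 2.0f, 3.0f);
    __m128 b = _mm_setr_ps(4.0f, 5.0f, 6.0f, 7.0f);

    /* For the 128-bit dword/PS form, index bits [1:0] pick the element and bit 2 picks the table (0 = a, 1 = b). */
    __m128i idx = _mm_setr_epi32(3, 4, 1, 6);

    __m128 r = _mm_permutex2var_ps(a, idx, b);

    float out[4];
    _mm_storeu_ps(out, r);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);   /* 3 4 1 6 */
    return 0;
}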

+

Operation + ¶ +

+

VPERMI2W (EVEX encoded versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+IF VL = 128
+    id := 2
+FI;
+IF VL = 256
+    id := 3
+FI;
+IF VL = 512
+    id := 4
+FI;
+TMP_DEST := DEST
+FOR j := 0 TO KL-1
+    i := j * 16
+    off := 16*TMP_DEST[i+id:i]
+    IF k1[j] OR *no writemask*
+        THEN
+            DEST[i+15:i]=TMP_DEST[i+id+1] ? SRC2[off+15:off]
+                    : SRC1[off+15:off]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE ; zeroing-masking
+                        DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPERMI2D/VPERMI2PS (EVEX encoded versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF VL = 128
+    id := 1
+FI;
+IF VL = 256
+    id := 2
+FI;
+IF VL = 512
+    id := 3
+FI;
+TMP_DEST := DEST
+FOR j := 0 TO KL-1
+    i := j * 32
+    off := 32*TMP_DEST[i+id:i]
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN
+                        DEST[i+31:i] := TMP_DEST[i+id+1] ? SRC2[31:0]
+                    : SRC1[off+31:off]
+            ELSE
+                DEST[i+31:i] := TMP_DEST[i+id+1] ? SRC2[off+31:off]
+                    : SRC1[off+31:off]
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                        DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPERMI2Q/VPERMI2PD (EVEX encoded versions) + ¶ +

+
+(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF VL = 128
+    id := 0
+FI;
+IF VL = 256
+    id := 1
+FI;
+IF VL = 512
+    id := 2
+FI;
+TMP_DEST:= DEST
+FOR j := 0 TO KL-1
+    i := j * 64
+    off := 64*TMP_DEST[i+id:i]
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN
+                        DEST[i+63:i] := TMP_DEST[i+id+1] ? SRC2[63:0]
+                    : SRC1[off+63:off]
+            ELSE
+                DEST[i+63:i] := TMP_DEST[i+id+1] ? SRC2[off+63:off]
+                    : SRC1[off+63:off]
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                        DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPERMI2D __m512i _mm512_permutex2var_epi32(__m512i a, __m512i idx, __m512i b);
+
+
VPERMI2D __m512i _mm512_mask_permutex2var_epi32(__m512i a, __mmask16 k, __m512i idx, __m512i b);
+
+
VPERMI2D __m512i _mm512_mask2_permutex2var_epi32(__m512i a, __m512i idx, __mmask16 k, __m512i b);
+
+
VPERMI2D __m512i _mm512_maskz_permutex2var_epi32(__mmask16 k, __m512i a, __m512i idx, __m512i b);
+
+
VPERMI2D __m256i _mm256_permutex2var_epi32(__m256i a, __m256i idx, __m256i b);
+
+
VPERMI2D __m256i _mm256_mask_permutex2var_epi32(__m256i a, __mmask8 k, __m256i idx, __m256i b);
+
+
VPERMI2D __m256i _mm256_mask2_permutex2var_epi32(__m256i a, __m256i idx, __mmask8 k, __m256i b);
+
+
VPERMI2D __m256i _mm256_maskz_permutex2var_epi32(__mmask8 k, __m256i a, __m256i idx, __m256i b);
+
+
VPERMI2D __m128i _mm_permutex2var_epi32(__m128i a, __m128i idx, __m128i b);
+
+
VPERMI2D __m128i _mm_mask_permutex2var_epi32(__m128i a, __mmask8 k, __m128i idx, __m128i b);
+
+
VPERMI2D __m128i _mm_mask2_permutex2var_epi32(__m128i a, __m128i idx, __mmask8 k, __m128i b);
+
+
VPERMI2D __m128i _mm_maskz_permutex2var_epi32(__mmask8 k, __m128i a, __m128i idx, __m128i b);
+
+
VPERMI2PD __m512d _mm512_permutex2var_pd(__m512d a, __m512i idx, __m512d b);
+
+
VPERMI2PD __m512d _mm512_mask_permutex2var_pd(__m512d a, __mmask8 k, __m512i idx, __m512d b);
+
+
VPERMI2PD __m512d _mm512_mask2_permutex2var_pd(__m512d a, __m512i idx, __mmask8 k, __m512d b);
+
+
VPERMI2PD __m512d _mm512_maskz_permutex2var_pd(__mmask8 k, __m512d a, __m512i idx, __m512d b);
+
+
VPERMI2PD __m256d _mm256_permutex2var_pd(__m256d a, __m256i idx, __m256d b);
+
+
VPERMI2PD __m256d _mm256_mask_permutex2var_pd(__m256d a, __mmask8 k, __m256i idx, __m256d b);
+
+
VPERMI2PD __m256d _mm256_mask2_permutex2var_pd(__m256d a, __m256i idx, __mmask8 k, __m256d b);
+
+
VPERMI2PD __m256d _mm256_maskz_permutex2var_pd(__mmask8 k, __m256d a, __m256i idx, __m256d b);
+
+
VPERMI2PD __m128d _mm_permutex2var_pd(__m128d a, __m128i idx, __m128d b);
+
+
VPERMI2PD __m128d _mm_mask_permutex2var_pd(__m128d a, __mmask8 k, __m128i idx, __m128d b);
+
+
VPERMI2PD __m128d _mm_mask2_permutex2var_pd(__m128d a, __m128i idx, __mmask8 k, __m128d b);
+
+
VPERMI2PD __m128d _mm_maskz_permutex2var_pd(__mmask8 k, __m128d a, __m128i idx, __m128d b);
+
+
VPERMI2PS __m512 _mm512_permutex2var_ps(__m512 a, __m512i idx, __m512 b);
+
+
VPERMI2PS __m512 _mm512_mask_permutex2var_ps(__m512 a, __mmask16 k, __m512i idx, __m512 b);
+
+
VPERMI2PS __m512 _mm512_mask2_permutex2var_ps(__m512 a, __m512i idx, __mmask16 k, __m512 b);
+
+
VPERMI2PS __m512 _mm512_maskz_permutex2var_ps(__mmask16 k, __m512 a, __m512i idx, __m512 b);
+
+
VPERMI2PS __m256 _mm256_permutex2var_ps(__m256 a, __m256i idx, __m256 b);
+
+
VPERMI2PS __m256 _mm256_mask_permutex2var_ps(__m256 a, __mmask8 k, __m256i idx, __m256 b);
+
+
VPERMI2PS __m256 _mm256_mask2_permutex2var_ps(__m256 a, __m256i idx, __mmask8 k, __m256 b);
+
+
VPERMI2PS __m256 _mm256_maskz_permutex2var_ps(__mmask8 k, __m256 a, __m256i idx, __m256 b);
+
+
VPERMI2PS __m128 _mm_permutex2var_ps(__m128 a, __m128i idx, __m128 b);
+
+
VPERMI2PS __m128 _mm_mask_permutex2var_ps(__m128 a, __mmask8 k, __m128i idx, __m128 b);
+
+
VPERMI2PS __m128 _mm_mask2_permutex2var_ps(__m128 a, __m128i idx, __mmask8 k, __m128 b);
+
+
VPERMI2PS __m128 _mm_maskz_permutex2var_ps(__mmask8 k, __m128 a, __m128i idx, __m128 b);
+
+
VPERMI2Q __m512i _mm512_permutex2var_epi64(__m512i a, __m512i idx, __m512i b);
+
+
VPERMI2Q __m512i _mm512_mask_permutex2var_epi64(__m512i a, __mmask8 k, __m512i idx, __m512i b);
+
+
VPERMI2Q __m512i _mm512_mask2_permutex2var_epi64(__m512i a, __m512i idx, __mmask8 k, __m512i b);
+
+
VPERMI2Q __m512i _mm512_maskz_permutex2var_epi64(__mmask8 k, __m512i a, __m512i idx, __m512i b);
+
+
VPERMI2Q __m256i _mm256_permutex2var_epi64(__m256i a, __m256i idx, __m256i b);
+
+
VPERMI2Q __m256i _mm256_mask_permutex2var_epi64(__m256i a, __mmask8 k, __m256i idx, __m256i b);
+
+
VPERMI2Q __m256i _mm256_mask2_permutex2var_epi64(__m256i a, __m256i idx, __mmask8 k, __m256i b);
+
+
VPERMI2Q __m256i _mm256_maskz_permutex2var_epi64(__mmask8 k, __m256i a, __m256i idx, __m256i b);
+
+
VPERMI2Q __m128i _mm_permutex2var_epi64(__m128i a, __m128i idx, __m128i b);
+
+
VPERMI2Q __m128i _mm_mask_permutex2var_epi64(__m128i a, __mmask8 k, __m128i idx, __m128i b);
+
+
VPERMI2Q __m128i _mm_mask2_permutex2var_epi64(__m128i a, __m128i idx, __mmask8 k, __m128i b);
+
+
VPERMI2Q __m128i _mm_maskz_permutex2var_epi64(__mmask8 k, __m128i a, __m128i idx, __m128i b);
+
+
VPERMI2W __m512i _mm512_permutex2var_epi16(__m512i a, __m512i idx, __m512i b);
+
+
VPERMI2W __m512i _mm512_mask_permutex2var_epi16(__m512i a, __mmask32 k, __m512i idx, __m512i b);
+
+
VPERMI2W __m512i _mm512_mask2_permutex2var_epi16(__m512i a, __m512i idx, __mmask32 k, __m512i b);
+
+
VPERMI2W __m512i _mm512_maskz_permutex2var_epi16(__mmask32 k, __m512i a, __m512i idx, __m512i b);
+
+
VPERMI2W __m256i _mm256_permutex2var_epi16(__m256i a, __m256i idx, __m256i b);
+
+
VPERMI2W __m256i _mm256_mask_permutex2var_epi16(__m256i a, __mmask16 k, __m256i idx, __m256i b);
+
+
VPERMI2W __m256i _mm256_mask2_permutex2var_epi16(__m256i a, __m256i idx, __mmask16 k, __m256i b);
+
+
VPERMI2W __m256i _mm256_maskz_permutex2var_epi16(__mmask16 k, __m256i a, __m256i idx, __m256i b);
+
+
VPERMI2W __m128i _mm_permutex2var_epi16(__m128i a, __m128i idx, __m128i b);
+
+
VPERMI2W __m128i _mm_mask_permutex2var_epi16(__m128i a, __mmask8 k, __m128i idx, __m128i b);
+
+
VPERMI2W __m128i _mm_mask2_permutex2var_epi16(__m128i a, __m128i idx, __mmask8 k, __m128i b);
+
+
VPERMI2W __m128i _mm_maskz_permutex2var_epi16(__mmask8 k, __m128i a, __m128i idx, __m128i b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

VPERMI2D/Q/PS/PD: See Table 2-50, “Type E4NF Class Exception Conditions.”

+

VPERMI2W: See Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/vpermilpd.html b/x86/vpermilpd.html new file mode 100644 index 0000000..9a2cf2d --- /dev/null +++ b/x86/vpermilpd.html @@ -0,0 +1,470 @@ + +VPERMILPD + — Permute In-Lane of Pairs of Double Precision Floating-Point Values

VPERMILPD + — Permute In-Lane of Pairs of Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W0 0D /r VPERMILPD xmm1, xmm2, xmm3/m128AV/VAVXPermute double precision floating-point values in xmm2 using controls from xmm3/m128 and store result in xmm1.
VEX.256.66.0F38.W0 0D /r VPERMILPD ymm1, ymm2, ymm3/m256AV/VAVXPermute double precision floating-point values in ymm2 using controls from ymm3/m256 and store result in ymm1.
EVEX.128.66.0F38.W1 0D /r VPERMILPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512FPermute double precision floating-point values in xmm2 using control from xmm3/m128/m64bcst and store the result in xmm1 using writemask k1.
EVEX.256.66.0F38.W1 0D /r VPERMILPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FPermute double precision floating-point values in ymm2 using control from ymm3/m256/m64bcst and store the result in ymm1 using writemask k1.
EVEX.512.66.0F38.W1 0D /r VPERMILPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstCV/VAVX512FPermute double precision floating-point values in zmm2 using control from zmm3/m512/m64bcst and store the result in zmm1 using writemask k1.
VEX.128.66.0F3A.W0 05 /r ib VPERMILPD xmm1, xmm2/m128, imm8BV/VAVXPermute double precision floating-point values in xmm2/m128 using controls from imm8.
VEX.256.66.0F3A.W0 05 /r ib VPERMILPD ymm1, ymm2/m256, imm8BV/VAVXPermute double precision floating-point values in ymm2/m256 using controls from imm8.
EVEX.128.66.0F3A.W1 05 /r ib VPERMILPD xmm1 {k1}{z}, xmm2/m128/m64bcst, imm8DV/VAVX512VL AVX512FPermute double precision floating-point values in xmm2/m128/m64bcst using controls from imm8 and store the result in xmm1 using writemask k1.
EVEX.256.66.0F3A.W1 05 /r ib VPERMILPD ymm1 {k1}{z}, ymm2/m256/m64bcst, imm8DV/VAVX512VL AVX512FPermute double precision floating-point values in ymm2/m256/m64bcst using controls from imm8 and store the result in ymm1 using writemask k1.
EVEX.512.66.0F3A.W1 05 /r ib VPERMILPD zmm1 {k1}{z}, zmm2/m512/m64bcst, imm8DV/VAVX512FPermute double precision floating-point values in zmm2/m512/m64bcst using controls from imm8 and store the result in zmm1 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
BN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
DFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

(variable control version)

+

Permute pairs of double precision floating-point values in the first source operand (second operand), each using a 1-bit control field residing in the corresponding quadword element of the second source operand (third operand). Permuted results are stored in the destination operand (first operand).

+

The control bits are located at bit 1 of each quadword element (see Figure 5-24). Each control determines which of the source elements in an input pair is selected for the destination element. Each pair of source elements must lie in the same 128-bit region as the destination.

+

EVEX version: The second source operand (third operand) is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 64-bit memory location. Permuted results are written to the destination under the writemask.

+
Figure 5-23. VPERMILPD Operation
+

VEX.256 encoded version: Bits (MAXVL-1:256) of the corresponding ZMM register are zeroed.

+
Figure 5-24. VPERMILPD Shuffle Control
+

Immediate control version: Permute pairs of double precision floating-point values in the first source operand (second operand), each pair using a 1-bit control field in the imm8 byte. Each element in the destination operand (first operand) uses a separate control bit of the imm8 byte.

+

VEX version: The source operand is a YMM/XMM register or a 256/128-bit memory location and the destination operand is a YMM/XMM register. The imm8 byte provides the lower 4/2 bits as permute control fields.

+

EVEX version: The source operand (second operand) is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 64-bit memory location. Permuted results are written to the destination under the writemask. The imm8 byte provides the lower 8/4/2 bits as permute control fields.

+

Note: For the imm8 versions, VEX.vvvv and EVEX.vvvv are reserved and must be 1111b; otherwise the instruction will #UD.
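An illustrative sketch of both control forms, using the _mm256_permute_pd and _mm_permutevar_pd intrinsics listed later on this page (assumes AVX is enabled; the values and controls are arbitrary examples):

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    /* imm8 form: imm8 = 0b0101 swaps the pair inside each 128-bit lane. */
    __m256d v = _mm256_setr_pd(1.0, 2.0, 3.0, 4.0);
    __m256d swapped = _mm256_permute_pd(v, 0x5);   /* 2 1 4 3 */

    /* Variable form: the selector is bit 1 of each 64-bit control element. */
    __m128d x = _mm_setr_pd(10.0, 20.0);
    __m128i c = _mm_set_epi64x(0, 2);              /* low element control = 2 (bit 1 set), high element control = 0 */
    __m128d y = _mm_permutevar_pd(x, c);           /* 20 10 */

    double s[4], t[2];
    _mm256_storeu_pd(s, swapped);
    _mm_storeu_pd(t, y);
    printf("%g %g %g %g | %g %g\n", s[0], s[1], s[2], s[3], t[0], t[1]);
    return 0;
}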

+

Operation + ¶ +

+

VPERMILPD (EVEX immediate versions) + ¶ +

+
+(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF (EVEX.b = 1) AND (SRC1 *is memory*)
+        THEN TMP_SRC1[i+63:i] := SRC1[63:0];
+        ELSE TMP_SRC1[i+63:i] := SRC1[i+63:i];
+    FI;
+ENDFOR;
+IF (imm8[0] = 0) THEN TMP_DEST[63:0] := TMP_SRC1[63:0]; FI;
+IF (imm8[0] = 1) THEN TMP_DEST[63:0] := TMP_SRC1[127:64]; FI;
+IF (imm8[1] = 0) THEN TMP_DEST[127:64] := TMP_SRC1[63:0]; FI;
+IF (imm8[1] = 1) THEN TMP_DEST[127:64] := TMP_SRC1[127:64]; FI;
+IF VL >= 256
+    IF (imm8[2] = 0) THEN TMP_DEST[191:128] := TMP_SRC1[191:128]; FI;
+    IF (imm8[2] = 1) THEN TMP_DEST[191:128] := TMP_SRC1[255:192]; FI;
+    IF (imm8[3] = 0) THEN TMP_DEST[255:192] := TMP_SRC1[191:128]; FI;
+    IF (imm8[3] = 1) THEN TMP_DEST[255:192] := TMP_SRC1[255:192]; FI;
+FI;
+IF VL >= 512
+    IF (imm8[4] = 0) THEN TMP_DEST[319:256] := TMP_SRC1[319:256]; FI;
+    IF (imm8[4] = 1) THEN TMP_DEST[319:256] := TMP_SRC1[383:320]; FI;
+    IF (imm8[5] = 0) THEN TMP_DEST[383:320] := TMP_SRC1[319:256]; FI;
+    IF (imm8[5] = 1) THEN TMP_DEST[383:320] := TMP_SRC1[383:320]; FI;
+    IF (imm8[6] = 0) THEN TMP_DEST[447:384] := TMP_SRC1[447:384]; FI;
+    IF (imm8[6] = 1) THEN TMP_DEST[447:384] := TMP_SRC1[511:448]; FI;
+    IF (imm8[7] = 0) THEN TMP_DEST[511:448] := TMP_SRC1[447:384]; FI;
+    IF (imm8[7] = 1) THEN TMP_DEST[511:448] := TMP_SRC1[511:448]; FI;
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPERMILPD (256-bit immediate version) + ¶ +

+
IF (imm8[0] = 0) THEN DEST[63:0] := SRC1[63:0]
+IF (imm8[0] = 1) THEN DEST[63:0] := SRC1[127:64]
+IF (imm8[1] = 0) THEN DEST[127:64] := SRC1[63:0]
+IF (imm8[1] = 1) THEN DEST[127:64] := SRC1[127:64]
+IF (imm8[2] = 0) THEN DEST[191:128] := SRC1[191:128]
+IF (imm8[2] = 1) THEN DEST[191:128] := SRC1[255:192]
+IF (imm8[3] = 0) THEN DEST[255:192] := SRC1[191:128]
+IF (imm8[3] = 1) THEN DEST[255:192] := SRC1[255:192]
+DEST[MAXVL-1:256] := 0
+
+

VPERMILPD (128-bit immediate version) + ¶ +

+
IF (imm8[0] = 0) THEN DEST[63:0] := SRC1[63:0]
+IF (imm8[0] = 1) THEN DEST[63:0] := SRC1[127:64]
+IF (imm8[1] = 0) THEN DEST[127:64] := SRC1[63:0]
+IF (imm8[1] = 1) THEN DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

VPERMILPD (EVEX variable versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF (EVEX.b = 1) AND (SRC2 *is memory*)
+        THEN TMP_SRC2[i+63:i] := SRC2[63:0];
+        ELSE TMP_SRC2[i+63:i] := SRC2[i+63:i];
+    FI;
+ENDFOR;
+IF (TMP_SRC2[1] = 0) THEN TMP_DEST[63:0] := SRC1[63:0]; FI;
+IF (TMP_SRC2[1] = 1) THEN TMP_DEST[63:0] := SRC1[127:64]; FI;
+IF (TMP_SRC2[65] = 0) THEN TMP_DEST[127:64] := SRC1[63:0]; FI;
+IF (TMP_SRC2[65] = 1) THEN TMP_DEST[127:64] := SRC1[127:64]; FI;
+IF VL >= 256
+    IF (TMP_SRC2[129] = 0) THEN TMP_DEST[191:128] := SRC1[191:128]; FI;
+    IF (TMP_SRC2[129] = 1) THEN TMP_DEST[191:128] := SRC1[255:192]; FI;
+    IF (TMP_SRC2[193] = 0) THEN TMP_DEST[255:192] := SRC1[191:128]; FI;
+    IF (TMP_SRC2[193] = 1) THEN TMP_DEST[255:192] := SRC1[255:192]; FI;
+FI;
+IF VL >= 512
+    IF (TMP_SRC2[257] = 0) THEN TMP_DEST[319:256] := SRC1[319:256]; FI;
+    IF (TMP_SRC2[257] = 1) THEN TMP_DEST[319:256] := SRC1[383:320]; FI;
+    IF (TMP_SRC2[321] = 0) THEN TMP_DEST[383:320] := SRC1[319:256]; FI;
+    IF (TMP_SRC2[321] = 1) THEN TMP_DEST[383:320] := SRC1[383:320]; FI;
+    IF (TMP_SRC2[385] = 0) THEN TMP_DEST[447:384] := SRC1[447:384]; FI;
+    IF (TMP_SRC2[385] = 1) THEN TMP_DEST[447:384] := SRC1[511:448]; FI;
+    IF (TMP_SRC2[449] = 0) THEN TMP_DEST[511:448] := SRC1[447:384]; FI;
+    IF (TMP_SRC2[449] = 1) THEN TMP_DEST[511:448] := SRC1[511:448]; FI;
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPERMILPD (256-bit variable version) + ¶ +

+
IF (SRC2[1] = 0) THEN DEST[63:0] := SRC1[63:0]
+IF (SRC2[1] = 1) THEN DEST[63:0] := SRC1[127:64]
+IF (SRC2[65] = 0) THEN DEST[127:64] := SRC1[63:0]
+IF (SRC2[65] = 1) THEN DEST[127:64] := SRC1[127:64]
+IF (SRC2[129] = 0) THEN DEST[191:128] := SRC1[191:128]
+IF (SRC2[129] = 1) THEN DEST[191:128] := SRC1[255:192]
+IF (SRC2[193] = 0) THEN DEST[255:192] := SRC1[191:128]
+IF (SRC2[193] = 1) THEN DEST[255:192] := SRC1[255:192]
+DEST[MAXVL-1:256] := 0
+
+

VPERMILPD (128-bit variable version) + ¶ +

+
IF (SRC2[1] = 0) THEN DEST[63:0] := SRC1[63:0]
+IF (SRC2[1] = 1) THEN DEST[63:0] := SRC1[127:64]
+IF (SRC2[65] = 0) THEN DEST[127:64] := SRC1[63:0]
+IF (SRC2[65] = 1) THEN DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPERMILPD __m512d _mm512_permute_pd( __m512d a, int imm);
+
+
VPERMILPD __m512d _mm512_mask_permute_pd(__m512d s, __mmask8 k, __m512d a, int imm);
+
+
VPERMILPD __m512d _mm512_maskz_permute_pd( __mmask8 k, __m512d a, int imm);
+
+
VPERMILPD __m256d _mm256_mask_permute_pd(__m256d s, __mmask8 k, __m256d a, int imm);
+
+
VPERMILPD __m256d _mm256_maskz_permute_pd( __mmask8 k, __m256d a, int imm);
+
+
VPERMILPD __m128d _mm_mask_permute_pd(__m128d s, __mmask8 k, __m128d a, int imm);
+
+
VPERMILPD __m128d _mm_maskz_permute_pd( __mmask8 k, __m128d a, int imm);
+
+
VPERMILPD __m512d _mm512_permutevar_pd( __m512i i, __m512d a);
+
+
VPERMILPD __m512d _mm512_mask_permutevar_pd(__m512d s, __mmask8 k, __m512i i, __m512d a);
+
+
VPERMILPD __m512d _mm512_maskz_permutevar_pd( __mmask8 k, __m512i i, __m512d a);
+
+
VPERMILPD __m256d _mm256_mask_permutevar_pd(__m256d s, __mmask8 k, __m256d i, __m256d a);
+
+
VPERMILPD __m256d _mm256_maskz_permutevar_pd( __mmask8 k, __m256d i, __m256d a);
+
+
VPERMILPD __m128d _mm_mask_permutevar_pd(__m128d s, __mmask8 k, __m128d i, __m128d a);
+
+
VPERMILPD __m128d _mm_maskz_permutevar_pd( __mmask8 k, __m128d i, __m128d a);
+
+
VPERMILPD __m128d _mm_permute_pd (__m128d a, int control)
+
+
VPERMILPD __m256d _mm256_permute_pd (__m256d a, int control)
+
+
VPERMILPD __m128d _mm_permutevar_pd (__m128d a, __m128i control);
+
+
VPERMILPD __m256d _mm256_permutevar_pd (__m256d a, __m256i control);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf VEX.W = 1.
+

EVEX-encoded instruction, see Table 2-50, “Type E4NF Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf (E)VEX.vvvv != 1111B and imm8 is used.
diff --git a/x86/vpermilps.html b/x86/vpermilps.html new file mode 100644 index 0000000..905d3f7 --- /dev/null +++ b/x86/vpermilps.html @@ -0,0 +1,516 @@ + +VPERMILPS + — Permute In-Lane of Quadruples of Single Precision Floating-Point Values

VPERMILPS + — Permute In-Lane of Quadruples of Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W0 0C /r VPERMILPS xmm1, xmm2, xmm3/m128AV/VAVXPermute single-precision floating-point values in xmm2 using controls from xmm3/m128 and store result in xmm1.
VEX.128.66.0F3A.W0 04 /r ib VPERMILPS xmm1, xmm2/m128, imm8BV/VAVXPermute single-precision floating-point values in xmm2/m128 using controls from imm8 and store result in xmm1.
VEX.256.66.0F38.W0 0C /r VPERMILPS ymm1, ymm2, ymm3/m256AV/VAVXPermute single-precision floating-point values in ymm2 using controls from ymm3/m256 and store result in ymm1.
VEX.256.66.0F3A.W0 04 /r ib VPERMILPS ymm1, ymm2/m256, imm8BV/VAVXPermute single-precision floating-point values in ymm2/m256 using controls from imm8 and store result in ymm1.
EVEX.128.66.0F38.W0 0C /r VPERMILPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512FPermute single-precision floating-point values xmm2 using control from xmm3/m128/m32bcst and store the result in xmm1 using writemask k1.
EVEX.256.66.0F38.W0 0C /r VPERMILPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512FPermute single-precision floating-point values ymm2 using control from ymm3/m256/m32bcst and store the result in ymm1 using writemask k1.
EVEX.512.66.0F38.W0 0C /r VPERMILPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstCV/VAVX512FPermute single-precision floating-point values zmm2 using control from zmm3/m512/m32bcst and store the result in zmm1 using writemask k1.
EVEX.128.66.0F3A.W0 04 /r ib VPERMILPS xmm1 {k1}{z}, xmm2/m128/m32bcst, imm8DV/VAVX512VL AVX512FPermute single-precision floating-point values xmm2/m128/m32bcst using controls from imm8 and store the result in xmm1 using writemask k1.
EVEX.256.66.0F3A.W0 04 /r ib VPERMILPS ymm1 {k1}{z}, ymm2/m256/m32bcst, imm8DV/VAVX512VL AVX512FPermute single-precision floating-point values ymm2/m256/m32bcst using controls from imm8 and store the result in ymm1 using writemask k1.
EVEX.512.66.0F3A.W0 04 /r ibVPERMILPS zmm1 {k1}{z}, zmm2/m512/m32bcst, imm8DV/VAVX512FPermute single-precision floating-point values zmm2/m512/m32bcst using controls from imm8 and store the result in zmm1 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
BN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
DFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Variable control version:

+

Permute quadruples of single-precision floating-point values in the first source operand (second operand), each quadruplet using a 2-bit control field in the corresponding dword element of the second source operand. Permuted results are stored in the destination operand (first operand).

+

The 2-bit control fields are located at the low two bits of each dword element (see Figure 5-26). Each control determines which of the source elements in an input quadruple is selected for the destination element. Each quadruple of source elements must lie in the same 128-bit region as the destination.

+

EVEX version: The second source operand (third operand) is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32-bit memory location. Permuted results are written to the destination under the writemask.

+
Figure 5-25. VPERMILPS Operation
+
Figure 5-26. VPERMILPS Shuffle Control
+

(immediate control version)

+

Permute quadruples of single-precision floating-point values in the first source operand (second operand), each quadruplet using a 2-bit control field in the imm8 byte. Each 128-bit lane in the destination operand (first operand) uses the four control fields of the same imm8 byte.

+

VEX version: The source operand is a YMM/XMM register or a 256/128-bit memory location and the destination operand is a YMM/XMM register.

+

EVEX version: The source operand (second operand) is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32-bit memory location. Permuted results are written to the destination under the writemask.

+

Note: For the imm8 version, VEX.vvvv and EVEX.vvvv are reserved and must be 1111b; otherwise the instruction will #UD.
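A short sketch of the imm8 form using the _mm256_permute_ps intrinsic listed later on this page (assumes AVX is enabled; the control value is just an example):

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m256 v = _mm256_setr_ps(0, 1, 2, 3, 4, 5, 6, 7);

    /* imm8 = 0x1B encodes the fields (3, 2, 1, 0): reverse the dwords within each 128-bit lane. */
    __m256 rev = _mm256_permute_ps(v, 0x1B);

    float out[8];
    _mm256_storeu_ps(out, rev);
    for (int i = 0; i < 8; i++) printf("%g ", out[i]);   /* 3 2 1 0 7 6 5 4 */
    printf("\n");
    return 0;
}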

+

Operation + ¶ +

+
Select4(SRC, control) {
+CASE (control[1:0]) OF
+    0: TMP := SRC[31:0];
+    1: TMP := SRC[63:32];
+    2: TMP := SRC[95:64];
+    3: TMP := SRC[127:96];
+ESAC;
+RETURN TMP
+}
+
+

VPERMILPS (EVEX immediate versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF (EVEX.b = 1) AND (SRC1 *is memory*)
+        THEN TMP_SRC1[i+31:i] := SRC1[31:0];
+        ELSE TMP_SRC1[i+31:i] := SRC1[i+31:i];
+    FI;
+ENDFOR;
+TMP_DEST[31:0] := Select4(TMP_SRC1[127:0], imm8[1:0]);
+TMP_DEST[63:32] := Select4(TMP_SRC1[127:0], imm8[3:2]);
+TMP_DEST[95:64] := Select4(TMP_SRC1[127:0], imm8[5:4]);
+TMP_DEST[127:96] := Select4(TMP_SRC1[127:0], imm8[7:6]);
+IF VL >= 256
+    TMP_DEST[159:128] := Select4(TMP_SRC1[255:128], imm8[1:0]);
+    TMP_DEST[191:160] := Select4(TMP_SRC1[255:128], imm8[3:2]);
+    TMP_DEST[223:192] := Select4(TMP_SRC1[255:128], imm8[5:4]);
+    TMP_DEST[255:224] := Select4(TMP_SRC1[255:128], imm8[7:6]);
+FI;
+IF VL >= 512
+    TMP_DEST[287:256] := Select4(TMP_SRC1[383:256], imm8[1:0]);
+    TMP_DEST[319:288] := Select4(TMP_SRC1[383:256], imm8[3:2]);
+    TMP_DEST[351:320] := Select4(TMP_SRC1[383:256], imm8[5:4]);
+    TMP_DEST[383:352] := Select4(TMP_SRC1[383:256], imm8[7:6]);
+    TMP_DEST[415:384] := Select4(TMP_SRC1[511:384], imm8[1:0]);
+    TMP_DEST[447:416] := Select4(TMP_SRC1[511:384], imm8[3:2]);
+    TMP_DEST[479:448] := Select4(TMP_SRC1[511:384], imm8[5:4]);
+    TMP_DEST[511:480] := Select4(TMP_SRC1[511:384], imm8[7:6]);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE DEST[i+31:i] := 0 ;zeroing-masking
+            FI;
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPERMILPS (256-bit immediate version) + ¶ +

+
DEST[31:0] := Select4(SRC1[127:0], imm8[1:0]);
+DEST[63:32] := Select4(SRC1[127:0], imm8[3:2]);
+DEST[95:64] := Select4(SRC1[127:0], imm8[5:4]);
+DEST[127:96] := Select4(SRC1[127:0], imm8[7:6]);
+DEST[159:128] := Select4(SRC1[255:128], imm8[1:0]);
+DEST[191:160] := Select4(SRC1[255:128], imm8[3:2]);
+DEST[223:192] := Select4(SRC1[255:128], imm8[5:4]);
+DEST[255:224] := Select4(SRC1[255:128], imm8[7:6]);
+
+

VPERMILPS (128-bit immediate version) + ¶ +

+
DEST[31:0] := Select4(SRC1[127:0], imm8[1:0]);
+DEST[63:32] := Select4(SRC1[127:0], imm8[3:2]);
+DEST[95:64] := Select4(SRC1[127:0], imm8[5:4]);
+DEST[127:96] := Select4(SRC1[127:0], imm8[7:6]);
+DEST[MAXVL-1:128] := 0
+
+

VPERMILPS (EVEX variable versions) + ¶ +

+
+(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF (EVEX.b = 1) AND (SRC2 *is memory*)
+        THEN TMP_SRC2[i+31:i] := SRC2[31:0];
+        ELSE TMP_SRC2[i+31:i] := SRC2[i+31:i];
+    FI;
+ENDFOR;
+TMP_DEST[31:0] := Select4(SRC1[127:0], TMP_SRC2[1:0]);
+TMP_DEST[63:32] := Select4(SRC1[127:0], TMP_SRC2[33:32]);
+TMP_DEST[95:64] := Select4(SRC1[127:0], TMP_SRC2[65:64]);
+TMP_DEST[127:96] := Select4(SRC1[127:0], TMP_SRC2[97:96]);
+IF VL >= 256
+    TMP_DEST[159:128] := Select4(SRC1[255:128], TMP_SRC2[129:128]);
+    TMP_DEST[191:160] := Select4(SRC1[255:128], TMP_SRC2[161:160]);
+    TMP_DEST[223:192] := Select4(SRC1[255:128], TMP_SRC2[193:192]);
+    TMP_DEST[255:224] := Select4(SRC1[255:128], TMP_SRC2[225:224]);
+FI;
+IF VL >= 512
+    TMP_DEST[287:256] := Select4(SRC1[383:256], TMP_SRC2[257:256]);
+    TMP_DEST[319:288] := Select4(SRC1[383:256], TMP_SRC2[289:288]);
+    TMP_DEST[351:320] := Select4(SRC1[383:256], TMP_SRC2[321:320]);
+    TMP_DEST[383:352] := Select4(SRC1[383:256], TMP_SRC2[353:352]);
+    TMP_DEST[415:384] := Select4(SRC1[511:384], TMP_SRC2[385:384]);
+    TMP_DEST[447:416] := Select4(SRC1[511:384], TMP_SRC2[417:416]);
+    TMP_DEST[479:448] := Select4(SRC1[511:384], TMP_SRC2[449:448]);
+    TMP_DEST[511:480] := Select4(SRC1[511:384], TMP_SRC2[481:480]);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE DEST[i+31:i] := 0 ;zeroing-masking
+            FI;
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPERMILPS (256-bit variable version) + ¶ +

+
DEST[31:0] := Select4(SRC1[127:0], SRC2[1:0]);
+DEST[63:32] := Select4(SRC1[127:0], SRC2[33:32]);
+DEST[95:64] := Select4(SRC1[127:0], SRC2[65:64]);
+DEST[127:96] := Select4(SRC1[127:0], SRC2[97:96]);
+DEST[159:128] := Select4(SRC1[255:128], SRC2[129:128]);
+DEST[191:160] := Select4(SRC1[255:128], SRC2[161:160]);
+DEST[223:192] := Select4(SRC1[255:128], SRC2[193:192]);
+DEST[255:224] := Select4(SRC1[255:128], SRC2[225:224]);
+DEST[MAXVL-1:256] := 0
+
+

VPERMILPS (128-bit variable version) + ¶ +

+
DEST[31:0] := Select4(SRC1[127:0], SRC2[1:0]);
+DEST[63:32] := Select4(SRC1[127:0], SRC2[33:32]);
+DEST[95:64] := Select4(SRC1[127:0], SRC2[65:64]);
+DEST[127:96] := Select4(SRC1[127:0], SRC2[97:96]);
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPERMILPS __m512 _mm512_permute_ps( __m512 a, int imm);
+
+
VPERMILPS __m512 _mm512_mask_permute_ps(__m512 s, __mmask16 k, __m512 a, int imm);
+
+
VPERMILPS __m512 _mm512_maskz_permute_ps( __mmask16 k, __m512 a, int imm);
+
+
VPERMILPS __m256 _mm256_mask_permute_ps(__m256 s, __mmask8 k, __m256 a, int imm);
+
+
VPERMILPS __m256 _mm256_maskz_permute_ps( __mmask8 k, __m256 a, int imm);
+
+
VPERMILPS __m128 _mm_mask_permute_ps(__m128 s, __mmask8 k, __m128 a, int imm);
+
+
VPERMILPS __m128 _mm_maskz_permute_ps( __mmask8 k, __m128 a, int imm);
+
+
VPERMILPS __m512 _mm512_permutevar_ps( __m512i i, __m512 a);
+
+
VPERMILPS __m512 _mm512_mask_permutevar_ps(__m512 s, __mmask16 k, __m512i i, __m512 a);
+
+
VPERMILPS __m512 _mm512_maskz_permutevar_ps( __mmask16 k, __m512i i, __m512 a);
+
+
VPERMILPS __m256 _mm256_mask_permutevar_ps(__m256 s, __mmask8 k, __m256 i, __m256 a);
+
+
VPERMILPS __m256 _mm256_maskz_permutevar_ps( __mmask8 k, __m256 i, __m256 a);
+
+
VPERMILPS __m128 _mm_mask_permutevar_ps(__m128 s, __mmask8 k, __m128 i, __m128 a);
+
+
VPERMILPS __m128 _mm_maskz_permutevar_ps( __mmask8 k, __m128 i, __m128 a);
+
+
VPERMILPS __m128 _mm_permute_ps (__m128 a, int control);
+
+
VPERMILPS __m256 _mm256_permute_ps (__m256 a, int control);
+
+
VPERMILPS __m128 _mm_permutevar_ps (__m128 a, __m128i control);
+
+
VPERMILPS __m256 _mm256_permutevar_ps (__m256 a, __m256i control);
+
+
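A minimal usage sketch of the two VPERMILPS control forms via the intrinsics listed above, assuming an AVX-capable compiler and immintrin.h; the constants and function names are illustrative only.

#include <immintrin.h>

/* Immediate form: imm8 = 0x1B (0b00011011) selects elements 3,2,1,0,
   reversing the four floats within each 128-bit lane. */
__m256 reverse_within_lanes(__m256 a)
{
    return _mm256_permute_ps(a, 0x1B);
}

/* Variable form: bits [1:0] of each dword in control select the source
   element within the same 128-bit lane. */
__m256 permute_within_lanes(__m256 a, __m256i control)
{
    return _mm256_permutevar_ps(a, control);
}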

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

Additionally:

+ + + +
#UD If VEX.W = 1.
+

EVEX-encoded instruction, see Table 2-50, “Type E4NF Class Exception Conditions.”

+

Additionally:

+ + + +
#UD If (E)VEX.vvvv != 1111B and the imm8 form is used.
diff --git a/x86/vpermpd.html b/x86/vpermpd.html new file mode 100644 index 0000000..4c5af44 --- /dev/null +++ b/x86/vpermpd.html @@ -0,0 +1,228 @@ + +VPERMPD + — Permute Double Precision Floating-Point Elements

VPERMPD + — Permute Double Precision Floating-Point Elements

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.256.66.0F3A.W1 01 /r ib VPERMPD ymm1, ymm2/m256, imm8AV/VAVX2Permute double precision floating-point elements in ymm2/m256 using indices in imm8 and store the result in ymm1.
EVEX.256.66.0F3A.W1 01 /r ib VPERMPD ymm1 {k1}{z}, ymm2/m256/m64bcst, imm8BV/VAVX512VL AVX512FPermute double precision floating-point elements in ymm2/m256/m64bcst using indexes in imm8 and store the result in ymm1 subject to writemask k1.
EVEX.512.66.0F3A.W1 01 /r ib VPERMPD zmm1 {k1}{z}, zmm2/m512/m64bcst, imm8BV/VAVX512FPermute double precision floating-point elements in zmm2/m512/m64bcst using indices in imm8 and store the result in zmm1 subject to writemask k1.
EVEX.256.66.0F38.W1 16 /r VPERMPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FPermute double precision floating-point elements in ymm3/m256/m64bcst using indexes in ymm2 and store the result in ymm1 subject to writemask k1.
EVEX.512.66.0F38.W1 16 /r VPERMPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstCV/VAVX512FPermute double precision floating-point elements in zmm3/m512/m64bcst using indices in zmm2 and store the result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)imm8N/A
BFullModRM:reg (w)ModRM:r/m (r)imm8N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

The imm8 version: Copies quadword elements of double precision floating-point values from the source operand (the second operand) to the destination operand (the first operand) according to the indices specified by the immediate operand (the third operand). Each two-bit value in the immediate byte selects a qword element in the source operand.

+

VEX version: The source operand can be a YMM register or a memory location. Bits (MAXVL-1:256) of the corresponding destination register are zeroed.

+

In the EVEX.512 encoded version, the elements in the destination are updated using the writemask k1, and the imm8 bits are reused as control bits for the upper 256-bit half when the control comes from the immediate. The source operand can be a ZMM register, a 512-bit memory location or a 512-bit vector broadcasted from a 64-bit memory location.

+

The imm8 versions: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b otherwise instructions will #UD.

+

The vector control version: Copies quadword elements of double precision floating-point values from the second source operand (the third operand) to the destination operand (the first operand) according to the indices in the first source operand (the second operand). The low 3 bits of each 64-bit element in the index operand select which quadword in the second source operand to copy. The first and second operands are ZMM registers, the third operand can be a ZMM register, a 512-bit memory location or a 512-bit vector broadcasted from a 64-bit memory location. The elements in the destination are updated using the writemask k1.

+

Note that this instruction permits a qword in the source operand to be copied to multiple locations in the destination operand.

+

An attempt to execute VPERMPD encoded with VEX.L = 0 will cause an #UD exception.

+
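As a brief illustration of the imm8 selection, the following sketch uses _mm512_permutex_pd from the intrinsic list below; it assumes an AVX-512F-capable compiler and immintrin.h, and the function name is illustrative only.

#include <immintrin.h>

/* imm8 = 0x1B (0b00011011) selects qword elements 3,2,1,0, reversing the
   four doubles of each 256-bit half (the imm8 is reused for the upper half). */
__m512d reverse_qwords_per_half(__m512d a)
{
    return _mm512_permutex_pd(a, 0x1B);
}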

Operation + ¶ +

+

VPERMPD (EVEX - imm8 control forms) + ¶ +

+
(KL, VL) = (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF (EVEX.b = 1) AND (SRC *is memory*)
+        THEN TMP_SRC[i+63:i] := SRC[63:0];
+        ELSE TMP_SRC[i+63:i] := SRC[i+63:i];
+    FI;
+ENDFOR;
+TMP_DEST[63:0] := (TMP_SRC[255:0] >> (IMM8[1:0] * 64))[63:0];
+TMP_DEST[127:64] := (TMP_SRC[255:0] >> (IMM8[3:2] * 64))[63:0];
+TMP_DEST[191:128] := (TMP_SRC[255:0] >> (IMM8[5:4] * 64))[63:0];
+TMP_DEST[255:192] := (TMP_SRC[255:0] >> (IMM8[7:6] * 64))[63:0];
+IF VL >= 512
+    TMP_DEST[319:256] := (TMP_SRC[511:256] >> (IMM8[1:0] * 64))[63:0];
+    TMP_DEST[383:320] := (TMP_SRC[511:256] >> (IMM8[3:2] * 64))[63:0];
+    TMP_DEST[447:384] := (TMP_SRC[511:256] >> (IMM8[5:4] * 64))[63:0];
+    TMP_DEST[511:448] := (TMP_SRC[511:256] >> (IMM8[7:6] * 64))[63:0];
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+                            ;zeroing-masking
+            FI;
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPERMPD (EVEX - vector control forms) + ¶ +

+
(KL, VL) = (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF (EVEX.b = 1) AND (SRC2 *is memory*)
+        THEN TMP_SRC2[i+63:i] := SRC2[63:0];
+        ELSE TMP_SRC2[i+63:i] := SRC2[i+63:i];
+    FI;
+ENDFOR;
+IF VL = 256
+    TMP_DEST[63:0] := (TMP_SRC2[255:0] >> (SRC1[1:0] * 64))[63:0];
+    TMP_DEST[127:64] := (TMP_SRC2[255:0] >> (SRC1[65:64] * 64))[63:0];
+    TMP_DEST[191:128] := (TMP_SRC2[255:0] >> (SRC1[129:128] * 64))[63:0];
+    TMP_DEST[255:192] := (TMP_SRC2[255:0] >> (SRC1[193:192] * 64))[63:0];
+FI;
+IF VL = 512
+    TMP_DEST[63:0] := (TMP_SRC2[511:0] >> (SRC1[2:0] * 64))[63:0];
+    TMP_DEST[127:64] := (TMP_SRC2[511:0] >> (SRC1[66:64] * 64))[63:0];
+    TMP_DEST[191:128] := (TMP_SRC2[511:0] >> (SRC1[130:128] * 64))[63:0];
+    TMP_DEST[255:192] := (TMP_SRC2[511:0] >> (SRC1[194:192] * 64))[63:0];
+    TMP_DEST[319:256] := (TMP_SRC2[511:0] >> (SRC1[258:256] * 64))[63:0];
+    TMP_DEST[383:320] := (TMP_SRC2[511:0] >> (SRC1[322:320] * 64))[63:0];
+    TMP_DEST[447:384] := (TMP_SRC2[511:0] >> (SRC1[386:384] * 64))[63:0];
+    TMP_DEST[511:448] := (TMP_SRC2[511:0] >> (SRC1[450:448] * 64))[63:0];
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+                            ;zeroing-masking
+            FI;
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPERMPD (VEX.256 encoded version) + ¶ +

+
DEST[63:0] := (SRC[255:0] >> (IMM8[1:0] * 64))[63:0];
+DEST[127:64] := (SRC[255:0] >> (IMM8[3:2] * 64))[63:0];
+DEST[191:128] := (SRC[255:0] >> (IMM8[5:4] * 64))[63:0];
+DEST[255:192] := (SRC[255:0] >> (IMM8[7:6] * 64))[63:0];
+DEST[MAXVL-1:256] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPERMPD __m512d _mm512_permutex_pd( __m512d a, int imm);
+
+
VPERMPD __m512d _mm512_mask_permutex_pd(__m512d s, __mmask8 k, __m512d a, int imm);
+
+
VPERMPD __m512d _mm512_maskz_permutex_pd( __mmask8 k, __m512d a, int imm);
+
+
VPERMPD __m512d _mm512_permutexvar_pd( __m512i i, __m512d a);
+
+
VPERMPD __m512d _mm512_mask_permutexvar_pd(__m512d s, __mmask8 k, __m512i i, __m512d a);
+
+
VPERMPD __m512d _mm512_maskz_permutexvar_pd( __mmask8 k, __m512i i, __m512d a);
+
+
VPERMPD __m256d _mm256_permutex_pd( __m256d a, int imm);
+
+
VPERMPD __m256d _mm256_mask_permutex_pd(__m256d s, __mmask8 k, __m256d a, int imm);
+
+
VPERMPD __m256d _mm256_maskz_permutex_pd( __mmask8 k, __m256d a, int imm);
+
+
VPERMPD __m256d _mm256_permutexvar_pd( __m256i i, __m256d a);
+
+
VPERMPD __m256d _mm256_mask_permutexvar_pd(__m256d s, __mmask8 k, __m256i i, __m256d a);
+
+
VPERMPD __m256d _mm256_maskz_permutexvar_pd( __mmask8 k, __m256i i, __m256d a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions”; additionally:

+ + + + + +
#UD If VEX.L = 0.
If VEX.vvvv != 1111B.
+

EVEX-encoded instruction, see Table 2-50, “Type E4NF Class Exception Conditions”; additionally:

+ + + + + +
#UD If encoded with EVEX.128.
If EVEX.vvvv != 1111B and with imm8.
diff --git a/x86/vpermps.html b/x86/vpermps.html new file mode 100644 index 0000000..40b50d8 --- /dev/null +++ b/x86/vpermps.html @@ -0,0 +1,166 @@ + +VPERMPS + — Permute Single Precision Floating-Point Elements

VPERMPS + — Permute Single Precision Floating-Point Elements

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.256.66.0F38.W0 16 /r VPERMPS ymm1, ymm2, ymm3/m256AV/VAVX2Permute single-precision floating-point elements in ymm3/m256 using indices in ymm2 and store the result in ymm1.
EVEX.256.66.0F38.W0 16 /r VPERMPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FPermute single-precision floating-point elements in ymm3/m256/m32bcst using indexes in ymm2 and store the result in ymm1 subject to write mask k1.
EVEX.512.66.0F38.W0 16 /r VPERMPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstBV/VAVX512FPermute single-precision floating-point values in zmm3/m512/m32bcst using indices in zmm2 and store the result in zmm1 subject to write mask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
BFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Copies doubleword elements of single-precision floating-point values from the second source operand (the third operand) to the destination operand (the first operand) according to the indices in the first source operand (the second operand). Note that this instruction permits a doubleword in the source operand to be copied to more than one location in the destination operand.

+

VEX.256 versions: The first and second operands are YMM registers, the third operand can be a YMM register or memory location. Bits (MAXVL-1:256) of the corresponding destination register are zeroed.

+

EVEX encoded version: The first and second operands are ZMM registers, the third operand can be a ZMM register, a 512-bit memory location or a 512-bit vector broadcasted from a 32-bit memory location. The elements in the destination are updated using the writemask k1.

+

An attempt to execute VPERMPS encoded with VEX.L = 0 will cause an #UD exception.

+
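A minimal sketch of the variable-index form, using _mm512_permutexvar_ps from the intrinsic list below; it assumes AVX-512F support and immintrin.h, and the function name is illustrative only.

#include <immintrin.h>

/* Every index is 0, so every destination element receives element 0 of a,
   i.e., a full-width broadcast of the lowest single-precision element. */
__m512 broadcast_lowest_element(__m512 a)
{
    __m512i idx = _mm512_setzero_si512();
    return _mm512_permutexvar_ps(idx, a);
}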

Operation + ¶ +

+

VPERMPS (EVEX forms) + ¶ +

+
(KL, VL) = (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF (EVEX.b = 1) AND (SRC2 *is memory*)
+        THEN TMP_SRC2[i+31:i] := SRC2[31:0];
+        ELSE TMP_SRC2[i+31:i] := SRC2[i+31:i];
+    FI;
+ENDFOR;
+IF VL = 256
+    TMP_DEST[31:0] := (TMP_SRC2[255:0] >> (SRC1[2:0] * 32))[31:0];
+    TMP_DEST[63:32] := (TMP_SRC2[255:0] >> (SRC1[34:32] * 32))[31:0];
+    TMP_DEST[95:64] := (TMP_SRC2[255:0] >> (SRC1[66:64] * 32))[31:0];
+    TMP_DEST[127:96] := (TMP_SRC2[255:0] >> (SRC1[98:96] * 32))[31:0];
+    TMP_DEST[159:128] := (TMP_SRC2[255:0] >> (SRC1[130:128] * 32))[31:0];
+    TMP_DEST[191:160] := (TMP_SRC2[255:0] >> (SRC1[162:160] * 32))[31:0];
+    TMP_DEST[223:192] := (TMP_SRC2[255:0] >> (SRC1[194:192] * 32))[31:0];
+    TMP_DEST[255:224] := (TMP_SRC2[255:0] >> (SRC1[226:224] * 32))[31:0];
+FI;
+IF VL = 512
+    TMP_DEST[31:0] := (TMP_SRC2[511:0] >> (SRC1[3:0] * 32))[31:0];
+    TMP_DEST[63:32] := (TMP_SRC2[511:0] >> (SRC1[35:32] * 32))[31:0];
+    TMP_DEST[95:64] := (TMP_SRC2[511:0] >> (SRC1[67:64] * 32))[31:0];
+    TMP_DEST[127:96] := (TMP_SRC2[511:0] >> (SRC1[99:96] * 32))[31:0];
+    TMP_DEST[159:128] := (TMP_SRC2[511:0] >> (SRC1[131:128] * 32))[31:0];
+    TMP_DEST[191:160] := (TMP_SRC2[511:0] >> (SRC1[163:160] * 32))[31:0];
+    TMP_DEST[223:192] := (TMP_SRC2[511:0] >> (SRC1[195:192] * 32))[31:0];
+    TMP_DEST[255:224] := (TMP_SRC2[511:0] >> (SRC1[227:224] * 32))[31:0];
+    TMP_DEST[287:256] := (TMP_SRC2[511:0] >> (SRC1[259:256] * 32))[31:0];
+    TMP_DEST[319:288] := (TMP_SRC2[511:0] >> (SRC1[291:288] * 32))[31:0];
+    TMP_DEST[351:320] := (TMP_SRC2[511:0] >> (SRC1[323:320] * 32))[31:0];
+    TMP_DEST[383:352] := (TMP_SRC2[511:0] >> (SRC1[355:352] * 32))[31:0];
+    TMP_DEST[415:384] := (TMP_SRC2[511:0] >> (SRC1[387:384] * 32))[31:0];
+    TMP_DEST[447:416] := (TMP_SRC2[511:0] >> (SRC1[419:416] * 32))[31:0];
+    TMP_DEST[479:448] := (TMP_SRC2[511:0] >> (SRC1[451:448] * 32))[31:0];
+    TMP_DEST[511:480] := (TMP_SRC2[511:0] >> (SRC1[483:480] * 32))[31:0];
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+                            ;zeroing-masking
+            FI;
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPERMPS (VEX.256 encoded version) + ¶ +

+
DEST[31:0] := (SRC2[255:0] >> (SRC1[2:0] * 32))[31:0];
+DEST[63:32] := (SRC2[255:0] >> (SRC1[34:32] * 32))[31:0];
+DEST[95:64] := (SRC2[255:0] >> (SRC1[66:64] * 32))[31:0];
+DEST[127:96] := (SRC2[255:0] >> (SRC1[98:96] * 32))[31:0];
+DEST[159:128] := (SRC2[255:0] >> (SRC1[130:128] * 32))[31:0];
+DEST[191:160] := (SRC2[255:0] >> (SRC1[162:160] * 32))[31:0];
+DEST[223:192] := (SRC2[255:0] >> (SRC1[194:192] * 32))[31:0];
+DEST[255:224] := (SRC2[255:0] >> (SRC1[226:224] * 32))[31:0];
+DEST[MAXVL-1:256] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPERMPS __m512 _mm512_permutexvar_ps(__m512i i, __m512 a);
+
+
VPERMPS __m512 _mm512_mask_permutexvar_ps(__m512 s, __mmask16 k, __m512i i, __m512 a);
+
+
VPERMPS __m512 _mm512_maskz_permutexvar_ps( __mmask16 k, __m512i i, __m512 a);
+
+
VPERMPS __m256 _mm256_permutexvar_ps(__m256i i, __m256 a);
+
+
VPERMPS __m256 _mm256_mask_permutexvar_ps(__m256 s, __mmask8 k, __m256i i, __m256 a);
+
+
VPERMPS __m256 _mm256_maskz_permutexvar_ps( __mmask8 k, __m256i i, __m256 a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

Additionally:

+ + + +
#UD If VEX.L = 0.
+

EVEX-encoded instruction, see Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/vpermq.html b/x86/vpermq.html new file mode 100644 index 0000000..4170dcc --- /dev/null +++ b/x86/vpermq.html @@ -0,0 +1,230 @@ + +VPERMQ + — Qwords Element Permutation

VPERMQ + — Qwords Element Permutation

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.256.66.0F3A.W1 00 /r ib VPERMQ ymm1, ymm2/m256, imm8AV/VAVX2Permute qwords in ymm2/m256 using indices in imm8 and store the result in ymm1.
EVEX.256.66.0F3A.W1 00 /r ib VPERMQ ymm1 {k1}{z}, ymm2/m256/m64bcst, imm8BV/VAVX512VL AVX512FPermute qwords in ymm2/m256/m64bcst using indexes in imm8 and store the result in ymm1.
EVEX.512.66.0F3A.W1 00 /r ib VPERMQ zmm1 {k1}{z}, zmm2/m512/m64bcst, imm8BV/VAVX512FPermute qwords in zmm2/m512/m64bcst using indices in imm8 and store the result in zmm1.
EVEX.256.66.0F38.W1 36 /r VPERMQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FPermute qwords in ymm3/m256/m64bcst using indexes in ymm2 and store the result in ymm1.
EVEX.512.66.0F38.W1 36 /r VPERMQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstCV/VAVX512FPermute qwords in zmm3/m512/m64bcst using indices in zmm2 and store the result in zmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (w)ModRM:r/m (r)imm8N/A
BFullModRM:reg (w)ModRM:r/m (r)imm8N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

The imm8 version: Copies quadwords from the source operand (the second operand) to the destination operand (the first operand) according to the indices specified by the immediate operand (the third operand). Each two-bit value in the immediate byte selects a qword element in the source operand.

+

VEX version: The source operand can be a YMM register or a memory location. Bits (MAXVL-1:256) of the corresponding destination register are zeroed.

+

In the EVEX.512 encoded version, the elements in the destination are updated using the writemask k1, and the imm8 bits are reused as control bits for the upper 256-bit half when the control comes from the immediate. The source operand can be a ZMM register, a 512-bit memory location or a 512-bit vector broadcasted from a 64-bit memory location.

+

Immediate control versions: VEX.vvvv and EVEX.vvvv are reserved and must be 1111b otherwise instructions will #UD.

+

The vector control version: Copies quadwords from the second source operand (the third operand) to the destination operand (the first operand) according to the indices in the first source operand (the second operand). The low 3 bits of each 64-bit element in the index operand select which quadword in the second source operand to copy. The first and second operands are ZMM registers, the third operand can be a ZMM register, a 512-bit memory location or a 512-bit vector broadcasted from a 64-bit memory location. The elements in the destination are updated using the writemask k1.

+

Note that this instruction permits a qword in the source operand to be copied to multiple locations in the destination operand.

+

An attempt to execute VPERMQ encoded with VEX.L = 0 or EVEX.128 will cause an #UD exception.

+
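A minimal sketch of the vector control form, using _mm512_permutexvar_epi64 from the intrinsic list below; it assumes AVX-512F support and immintrin.h, and the function name is illustrative only.

#include <immintrin.h>

/* Index vector 7,6,...,0: the low 3 bits of each index qword select the
   source qword, so the eight qwords of a are reversed. */
__m512i reverse_qwords(__m512i a)
{
    const __m512i idx = _mm512_setr_epi64(7, 6, 5, 4, 3, 2, 1, 0);
    return _mm512_permutexvar_epi64(idx, a);
}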

Operation + ¶ +

+

VPERMQ (EVEX - imm8 control forms) + ¶ +

+
(KL, VL) = (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF (EVEX.b = 1) AND (SRC *is memory*)
+        THEN TMP_SRC[i+63:i] := SRC[63:0];
+        ELSE TMP_SRC[i+63:i] := SRC[i+63:i];
+    FI;
+ENDFOR;
+    TMP_DEST[63:0] := (TMP_SRC[255:0] >> (IMM8[1:0] * 64))[63:0];
+    TMP_DEST[127:64] := (TMP_SRC[255:0] >> (IMM8[3:2] * 64))[63:0];
+    TMP_DEST[191:128] := (TMP_SRC[255:0] >> (IMM8[5:4] * 64))[63:0];
+    TMP_DEST[255:192] := (TMP_SRC[255:0] >> (IMM8[7:6] * 64))[63:0];
+IF VL >= 512
+    TMP_DEST[319:256] := (TMP_SRC[511:256] >> (IMM8[1:0] * 64))[63:0];
+    TMP_DEST[383:320] := (TMP_SRC[511:256] >> (IMM8[3:2] * 64))[63:0];
+    TMP_DEST[447:384] := (TMP_SRC[511:256] >> (IMM8[5:4] * 64))[63:0];
+    TMP_DEST[511:448] := (TMP_SRC[511:256] >> (IMM8[7:6] * 64))[63:0];
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+                            ;zeroing-masking
+            FI;
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPERMQ (EVEX - vector control forms) + ¶ +

+
(KL, VL) = (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF (EVEX.b = 1) AND (SRC2 *is memory*)
+        THEN TMP_SRC2[i+63:i] := SRC2[63:0];
+        ELSE TMP_SRC2[i+63:i] := SRC2[i+63:i];
+    FI;
+ENDFOR;
+IF VL = 256
+    TMP_DEST[63:0] := (TMP_SRC2[255:0] >> (SRC1[1:0] * 64))[63:0];
+    TMP_DEST[127:64] := (TMP_SRC2[255:0] >> (SRC1[65:64] * 64))[63:0];
+    TMP_DEST[191:128] := (TMP_SRC2[255:0] >> (SRC1[129:128] * 64))[63:0];
+    TMP_DEST[255:192] := (TMP_SRC2[255:0] >> (SRC1[193:192] * 64))[63:0];
+FI;
+IF VL = 512
+    TMP_DEST[63:0] := (TMP_SRC2[511:0] >> (SRC1[2:0] * 64))[63:0];
+    TMP_DEST[127:64] := (TMP_SRC2[511:0] >> (SRC1[66:64] * 64))[63:0];
+    TMP_DEST[191:128] := (TMP_SRC2[511:0] >> (SRC1[130:128] * 64))[63:0];
+    TMP_DEST[255:192] := (TMP_SRC2[511:0] >> (SRC1[194:192] * 64))[63:0];
+    TMP_DEST[319:256] := (TMP_SRC2[511:0] >> (SRC1[258:256] * 64))[63:0];
+    TMP_DEST[383:320] := (TMP_SRC2[511:0] >> (SRC1[322:320] * 64))[63:0];
+    TMP_DEST[447:384] := (TMP_SRC2[511:0] >> (SRC1[386:384] * 64))[63:0];
+    TMP_DEST[511:448] := (TMP_SRC2[511:0] >> (SRC1[450:448] * 64))[63:0];
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+                            ;zeroing-masking
+            FI;
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPERMQ (VEX.256 encoded version) + ¶ +

+
DEST[63:0] := (SRC[255:0] >> (IMM8[1:0] * 64))[63:0];
+DEST[127:64] := (SRC[255:0] >> (IMM8[3:2] * 64))[63:0];
+DEST[191:128] := (SRC[255:0] >> (IMM8[5:4] * 64))[63:0];
+DEST[255:192] := (SRC[255:0] >> (IMM8[7:6] * 64))[63:0];
+DEST[MAXVL-1:256] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPERMQ __m512i _mm512_permutex_epi64( __m512i a, int imm);
+
+
VPERMQ __m512i _mm512_mask_permutex_epi64(__m512i s, __mmask8 k, __m512i a, int imm);
+
+
VPERMQ __m512i _mm512_maskz_permutex_epi64( __mmask8 k, __m512i a, int imm);
+
+
VPERMQ __m512i _mm512_permutexvar_epi64( __m512i a, __m512i b);
+
+
VPERMQ __m512i _mm512_mask_permutexvar_epi64(__m512i s, __mmask8 k, __m512i a, __m512i b);
+
+
VPERMQ __m512i _mm512_maskz_permutexvar_epi64( __mmask8 k, __m512i a, __m512i b);
+
+
VPERMQ __m256i _mm256_permutex_epi64( __m256i a, int imm);
+
+
VPERMQ __m256i _mm256_mask_permutex_epi64(__m256i s, __mmask8 k, __m256i a, int imm);
+
+
VPERMQ __m256i _mm256_maskz_permutex_epi64( __mmask8 k, __m256i a, int imm);
+
+
VPERMQ __m256i _mm256_permutexvar_epi64( __m256i a, __m256i b);
+
+
VPERMQ __m256i _mm256_mask_permutexvar_epi64(__m256i s, __mmask8 k, __m256i a, __m256i b);
+
+
VPERMQ __m256i _mm256_maskz_permutexvar_epi64( __mmask8 k, __m256i a, __m256i b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

Additionally:

+ + + + + +
#UD If VEX.L = 0.
If VEX.vvvv != 1111B.
+

EVEX-encoded instruction, see Table 2-50, “Type E4NF Class Exception Conditions.”

+

Additionally:

+ + + + + +
#UD If encoded with EVEX.128.
If EVEX.vvvv != 1111B and with imm8.
diff --git a/x86/vpermt2b.html b/x86/vpermt2b.html new file mode 100644 index 0000000..2a9628d --- /dev/null +++ b/x86/vpermt2b.html @@ -0,0 +1,115 @@ + +VPERMT2B + — Full Permute of Bytes From Two Tables Overwriting a Table

VPERMT2B + — Full Permute of Bytes From Two Tables Overwriting a Table

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp /En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 7D /r VPERMT2B xmm1 {k1}{z}, xmm2, xmm3/m128AV/VAVX512VL AVX512_VBMIPermute bytes in xmm3/m128 and xmm1 using byte indexes in xmm2 and store the byte results in xmm1 using writemask k1.
EVEX.256.66.0F38.W0 7D /r VPERMT2B ymm1 {k1}{z}, ymm2, ymm3/m256AV/VAVX512VL AVX512_VBMIPermute bytes in ymm3/m256 and ymm1 using byte indexes in ymm2 and store the byte results in ymm1 using writemask k1.
EVEX.512.66.0F38.W0 7D /r VPERMT2B zmm1 {k1}{z}, zmm2, zmm3/m512AV/VAVX512_VBMIPermute bytes in zmm3/m512 and zmm1 using byte indexes in zmm2 and store the byte results in zmm1 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFull MemModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Permutes byte values from two tables, comprising the first operand (also the destination operand) and the third operand (the second source operand). The second operand (the first source operand) provides byte indices to select byte results from the two tables. The selected byte elements are written to the destination at byte granularity under the writemask k1.

+

The first and second operands are ZMM/YMM/XMM registers. The second operand contains input indices to select elements from the two input tables in the 1st and 3rd operands. The first operand is also the destination of the result. The second source operand can be a ZMM/YMM/XMM register, or a 512/256/128-bit memory location. In each index byte, the id bit for table selection is bit 6/5/4, and bits [5:0]/[4:0]/[3:0] select an element within each input table.

+

Note that these instructions permit a byte value in the source operands to be copied to more than one location in the destination operand. Also, the second table and the indices can be reused in subsequent iterations, but the first table is overwritten.

+

Bits (MAX_VL-1:256/128) of the destination are zeroed for VL=256,128.

+
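A minimal sketch of the two-table byte permute, using _mm512_permutex2var_epi8 from the intrinsic list below; it assumes AVX512_VBMI support and immintrin.h, and the function and array names are illustrative only.

#include <immintrin.h>

/* At VL=512, bit 6 of each index byte is the table id: 0 selects table a
   (the first operand / destination table), 1 selects table b. The index
   pattern below interleaves the low 32 bytes of a and b. */
__m512i interleave_low_bytes(__m512i a, __m512i b)
{
    unsigned char sel[64];
    for (int j = 0; j < 64; j++)
        sel[j] = (unsigned char)((j >> 1) | ((j & 1) ? 0x40 : 0x00));
    __m512i idx = _mm512_loadu_si512((const void *)sel);
    return _mm512_permutex2var_epi8(a, idx, b);
}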

Operation + ¶ +

+

VPERMT2B (EVEX encoded versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+IF VL = 128:
+    id := 3;
+ELSE IF VL = 256:
+    id := 4;
+ELSE IF VL = 512:
+    id := 5;
+FI;
+TMP_DEST[VL-1:0] := DEST[VL-1:0];
+FOR j := 0 TO KL-1
+    off := 8*SRC1[j*8 + id: j*8] ;
+    IF k1[j] OR *no writemask*:
+        DEST[j*8 + 7: j*8] := SRC1[j*8+id+1]? SRC2[off+7:off] : TMP_DEST[off+7:off];
+    ELSE IF *zeroing-masking*
+        DEST[j*8 + 7: j*8] := 0;
+    *ELSE
+        DEST[j*8 + 7: j*8] remains unchanged*
+    FI;
+ENDFOR
+DEST[MAX_VL-1:VL] := 0;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPERMT2B __m512i _mm512_permutex2var_epi8(__m512i a, __m512i idx, __m512i b);
+
+
VPERMT2B __m512i _mm512_mask_permutex2var_epi8(__m512i a, __mmask64 k, __m512i idx, __m512i b);
+
+
VPERMT2B __m512i _mm512_maskz_permutex2var_epi8(__mmask64 k, __m512i a, __m512i idx, __m512i b);
+
+
VPERMT2B __m256i _mm256_permutex2var_epi8(__m256i a, __m256i idx, __m256i b);
+
+
VPERMT2B __m256i _mm256_mask_permutex2var_epi8(__m256i a, __mmask32 k, __m256i idx, __m256i b);
+
+
VPERMT2B __m256i _mm256_maskz_permutex2var_epi8(__mmask32 k, __m256i a, __m256i idx, __m256i b);
+
+
VPERMT2B __m128i _mm_permutex2var_epi8(__m128i a, __m128i idx, __m128i b);
+
+
VPERMT2B __m128i _mm_mask_permutex2var_epi8(__m128i a, __mmask16 k, __m128i idx, __m128i b);
+
+
VPERMT2B __m128i _mm_maskz_permutex2var_epi8(__mmask16 k, __m128i a, __m128i idx, __m128i b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/vpermt2w.vpermt2d.vpermt2q.vpermt2ps.vpermt2pd.html b/x86/vpermt2w.vpermt2d.vpermt2q.vpermt2ps.vpermt2pd.html new file mode 100644 index 0000000..eab2b6d --- /dev/null +++ b/x86/vpermt2w.vpermt2d.vpermt2q.vpermt2ps.vpermt2pd.html @@ -0,0 +1,386 @@ + +VPERMT2W/VPERMT2D/VPERMT2Q/VPERMT2PS/VPERMT2PD + — Full Permute From Two Tables Overwriting One Table

VPERMT2W/VPERMT2D/VPERMT2Q/VPERMT2PS/VPERMT2PD + — Full Permute From Two Tables Overwriting One Table

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W1 7D /r VPERMT2W xmm1 {k1}{z}, xmm2, xmm3/m128AV/VAVX512VL AVX512BWPermute word integers from two tables in xmm3/m128 and xmm1 using indexes in xmm2 and store the result in xmm1 using writemask k1.
EVEX.256.66.0F38.W1 7D /r VPERMT2W ymm1 {k1}{z}, ymm2, ymm3/m256AV/VAVX512VL AVX512BWPermute word integers from two tables in ymm3/m256 and ymm1 using indexes in ymm2 and store the result in ymm1 using writemask k1.
EVEX.512.66.0F38.W1 7D /r VPERMT2W zmm1 {k1}{z}, zmm2, zmm3/m512AV/VAVX512BWPermute word integers from two tables in zmm3/m512 and zmm1 using indexes in zmm2 and store the result in zmm1 using writemask k1.
EVEX.128.66.0F38.W0 7E /r VPERMT2D xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FPermute double-words from two tables in xmm3/m128/m32bcst and xmm1 using indexes in xmm2 and store the result in xmm1 using writemask k1.
EVEX.256.66.0F38.W0 7E /r VPERMT2D ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FPermute double-words from two tables in ymm3/m256/m32bcst and ymm1 using indexes in ymm2 and store the result in ymm1 using writemask k1.
EVEX.512.66.0F38.W0 7E /r VPERMT2D zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstBV/VAVX512FPermute double-words from two tables in zmm3/m512/m32bcst and zmm1 using indices in zmm2 and store the result in zmm1 using writemask k1.
EVEX.128.66.0F38.W1 7E /r VPERMT2Q xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FPermute quad-words from two tables in xmm3/m128/m64bcst and xmm1 using indexes in xmm2 and store the result in xmm1 using writemask k1.
EVEX.256.66.0F38.W1 7E /r VPERMT2Q ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FPermute quad-words from two tables in ymm3/m256/m64bcst and ymm1 using indexes in ymm2 and store the result in ymm1 using writemask k1.
EVEX.512.66.0F38.W1 7E /r VPERMT2Q zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstBV/VAVX512FPermute quad-words from two tables in zmm3/m512/m64bcst and zmm1 using indices in zmm2 and store the result in zmm1 using writemask k1.
EVEX.128.66.0F38.W0 7F /r VPERMT2PS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FPermute single-precision floating-point values from two tables in xmm3/m128/m32bcst and xmm1 using indexes in xmm2 and store the result in xmm1 using writemask k1.
EVEX.256.66.0F38.W0 7F /r VPERMT2PS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FPermute single-precision floating-point values from two tables in ymm3/m256/m32bcst and ymm1 using indexes in ymm2 and store the result in ymm1 using writemask k1.
EVEX.512.66.0F38.W0 7F /r VPERMT2PS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstBV/VAVX512FPermute single-precision floating-point values from two tables in zmm3/m512/m32bcst and zmm1 using indices in zmm2 and store the result in zmm1 using writemask k1.
EVEX.128.66.0F38.W1 7F /r VPERMT2PD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FPermute double precision floating-point values from two tables in xmm3/m128/m64bcst and xmm1 using indexes in xmm2 and store the result in xmm1 using writemask k1.
EVEX.256.66.0F38.W1 7F /r VPERMT2PD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FPermute double precision floating-point values from two tables in ymm3/m256/m64bcst and ymm1 using indexes in ymm2 and store the result in ymm1 using writemask k1.
EVEX.512.66.0F38.W1 7F /r VPERMT2PD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstBV/VAVX512FPermute double precision floating-point values from two tables in zmm3/m512/m64bcst and zmm1 using indices in zmm2 and store the result in zmm1 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFull MemModRM:reg (r,w)EVEX.vvvv (r)ModRM:r/m (r)N/A
BFullModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Permutes 16-bit/32-bit/64-bit values in the first operand and the third operand (the second source operand) using indices in the second operand (the first source operand) to select elements from the first and third operands. The selected elements are written to the destination operand (the first operand) according to the writemask k1.

+

The first and second operands are ZMM/YMM/XMM registers. The second operand contains input indices to select elements from the two input tables in the 1st and 3rd operands. The first operand is also the destination of the result.

+

D/Q/PS/PD element versions: The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32/64-bit memory location. Broadcast from the low 32/64-bit memory location is performed if EVEX.b and the id bit for table selection are set (selecting table_2).

+

Dword/PS versions: The id bit for table selection is bit 4/3/2, depending on VL=512, 256, 128. Bits [3:0]/[2:0]/[1:0] of each element in the input index vector select an element within the two source operands. If the id bit is 0, table_1 (the first source) is selected; otherwise the second source operand is selected.

+

Qword/PD versions: The id bit for table selection is bit 3/2/1, and bits [2:0]/[1:0]/bit 0 select an element within each input table.

+

Word element versions: The second source operand can be a ZMM/YMM/XMM register, or a 512/256/128-bit memory location. The id bit for table selection is bit 5/4/3, and bits [4:0]/[3:0]/[2:0] select an element within each input table.

+

Note that these instructions permit a 16-bit/32-bit/64-bit value in the source operands to be copied to more than one location in the destination operand. Note also that the same indices can be reused, for example in a second iteration, while the table elements being permuted are overwritten.

+

Bits (MAXVL-1:256/128) of the destination are zeroed for VL=256,128.

+
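A minimal sketch for the dword form, using _mm512_permutex2var_epi32 from the intrinsic list below; it assumes AVX-512F support and immintrin.h, and the function name is illustrative only.

#include <immintrin.h>

/* At VL=512, index bit 4 is the table id. Indices 1..15 pick a[1]..a[15]
   (table_1) and index 16 picks b[0] (table_2), so the result is the pair
   (a,b) shifted down by one dword element. */
__m512i shift_in_one_dword(__m512i a, __m512i b)
{
    const __m512i idx = _mm512_setr_epi32(1, 2, 3, 4, 5, 6, 7, 8,
                                          9, 10, 11, 12, 13, 14, 15, 16);
    return _mm512_permutex2var_epi32(a, idx, b);
}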

Operation + ¶ +

+

VPERMT2W (EVEX encoded versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+IF VL = 128
+    id := 2
+FI;
+IF VL = 256
+    id := 3
+FI;
+IF VL = 512
+    id := 4
+FI;
+TMP_DEST := DEST
+FOR j := 0 TO KL-1
+    i := j * 16
+    off := 16*SRC1[i+id:i]
+    IF k1[j] OR *no writemask*
+        THEN
+            DEST[i+15:i] := SRC1[i+id+1] ? SRC2[off+15:off]
+                    : TMP_DEST[off+15:off]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE ; zeroing-masking
+                        DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPERMT2D/VPERMT2PS (EVEX encoded versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF VL = 128
+    id := 1
+FI;
+IF VL = 256
+    id := 2
+FI;
+IF VL = 512
+    id := 3
+FI;
+TMP_DEST := DEST
+FOR j := 0 TO KL-1
+    i := j * 32
+    off := 32*SRC1[i+id:i]
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN
+                        DEST[i+31:i] := SRC1[i+id+1] ? SRC2[31:0]
+                    : TMP_DEST[off+31:off]
+            ELSE
+                DEST[i+31:i] := SRC1[i+id+1] ? SRC2[off+31:off]
+                    : TMP_DEST[off+31:off]
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                        DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPERMT2Q/VPERMT2PD (EVEX encoded versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF VL = 128
+    id := 0
+FI;
+IF VL = 256
+    id := 1
+FI;
+IF VL = 512
+    id := 2
+FI;
+TMP_DEST := DEST
+FOR j := 0 TO KL-1
+    i := j * 64
+    off := 64*SRC1[i+id:i]
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN
+                        DEST[i+63:i] := SRC1[i+id+1] ? SRC2[63:0]
+                    : TMP_DEST[off+63:off]
+            ELSE
+                DEST[i+63:i] := SRC1[i+id+1] ? SRC2[off+63:off]
+                    : TMP_DEST[off+63:off]
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                        DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPERMT2D __m512i _mm512_permutex2var_epi32(__m512i a, __m512i idx, __m512i b);
+
+
VPERMT2D __m512i _mm512_mask_permutex2var_epi32(__m512i a, __mmask16 k, __m512i idx, __m512i b);
+
+
VPERMT2D __m512i _mm512_mask2_permutex2var_epi32(__m512i a, __m512i idx, __mmask16 k, __m512i b);
+
+
VPERMT2D __m512i _mm512_maskz_permutex2var_epi32(__mmask16 k, __m512i a, __m512i idx, __m512i b);
+
+
VPERMT2D __m256i _mm256_permutex2var_epi32(__m256i a, __m256i idx, __m256i b);
+
+
VPERMT2D __m256i _mm256_mask_permutex2var_epi32(__m256i a, __mmask8 k, __m256i idx, __m256i b);
+
+
VPERMT2D __m256i _mm256_mask2_permutex2var_epi32(__m256i a, __m256i idx, __mmask8 k, __m256i b);
+
+
VPERMT2D __m256i _mm256_maskz_permutex2var_epi32(__mmask8 k, __m256i a, __m256i idx, __m256i b);
+
+
VPERMT2D __m128i _mm_permutex2var_epi32(__m128i a, __m128i idx, __m128i b);
+
+
VPERMT2D __m128i _mm_mask_permutex2var_epi32(__m128i a, __mmask8 k, __m128i idx, __m128i b);
+
+
VPERMT2D __m128i _mm_mask2_permutex2var_epi32(__m128i a, __m128i idx, __mmask8 k, __m128i b);
+
+
VPERMT2D __m128i _mm_maskz_permutex2var_epi32(__mmask8 k, __m128i a, __m128i idx, __m128i b);
+
+
VPERMT2PD __m512d _mm512_permutex2var_pd(__m512d a, __m512i idx, __m512d b);
+
+
VPERMT2PD __m512d _mm512_mask_permutex2var_pd(__m512d a, __mmask8 k, __m512i idx, __m512d b);
+
+
VPERMT2PD __m512d _mm512_mask2_permutex2var_pd(__m512d a, __m512i idx, __mmask8 k, __m512d b);
+
+
VPERMT2PD __m512d _mm512_maskz_permutex2var_pd(__mmask8 k, __m512d a, __m512i idx, __m512d b);
+
+
VPERMT2PD __m256d _mm256_permutex2var_pd(__m256d a, __m256i idx, __m256d b);
+
+
VPERMT2PD __m256d _mm256_mask_permutex2var_pd(__m256d a, __mmask8 k, __m256i idx, __m256d b);
+
+
VPERMT2PD __m256d _mm256_mask2_permutex2var_pd(__m256d a, __m256i idx, __mmask8 k, __m256d b);
+
+
VPERMT2PD __m256d _mm256_maskz_permutex2var_pd(__mmask8 k, __m256d a, __m256i idx, __m256d b);
+
+
VPERMT2PD __m128d _mm_permutex2var_pd(__m128d a, __m128i idx, __m128d b);
+
+
VPERMT2PD __m128d _mm_mask_permutex2var_pd(__m128d a, __mmask8 k, __m128i idx, __m128d b);
+
+
VPERMT2PD __m128d _mm_mask2_permutex2var_pd(__m128d a, __m128i idx, __mmask8 k, __m128d b);
+
+
VPERMT2PD __m128d _mm_maskz_permutex2var_pd(__mmask8 k, __m128d a, __m128i idx, __m128d b);
+
+
VPERMT2PS __m512 _mm512_permutex2var_ps(__m512 a, __m512i idx, __m512 b);
+
+
VPERMT2PS __m512 _mm512_mask_permutex2var_ps(__m512 a, __mmask16 k, __m512i idx, __m512 b);
+
+
VPERMT2PS __m512 _mm512_mask2_permutex2var_ps(__m512 a, __m512i idx, __mmask16 k, __m512 b);
+
+
VPERMT2PS __m512 _mm512_maskz_permutex2var_ps(__mmask16 k, __m512 a, __m512i idx, __m512 b);
+
+
VPERMT2PS __m256 _mm256_permutex2var_ps(__m256 a, __m256i idx, __m256 b);
+
+
VPERMT2PS __m256 _mm256_mask_permutex2var_ps(__m256 a, __mmask8 k, __m256i idx, __m256 b);
+
+
VPERMT2PS __m256 _mm256_mask2_permutex2var_ps(__m256 a, __m256i idx, __mmask8 k, __m256 b);
+
+
VPERMT2PS __m256 _mm256_maskz_permutex2var_ps(__mmask8 k, __m256 a, __m256i idx, __m256 b);
+
+
VPERMT2PS __m128 _mm_permutex2var_ps(__m128 a, __m128i idx, __m128 b);
+
+
VPERMT2PS __m128 _mm_mask_permutex2var_ps(__m128 a, __mmask8 k, __m128i idx, __m128 b);
+
+
VPERMT2PS __m128 _mm_mask2_permutex2var_ps(__m128 a, __m128i idx, __mmask8 k, __m128 b);
+
+
VPERMT2PS __m128 _mm_maskz_permutex2var_ps(__mmask8 k, __m128 a, __m128i idx, __m128 b);
+
+
VPERMT2Q __m512i _mm512_permutex2var_epi64(__m512i a, __m512i idx, __m512i b);
+
+
VPERMT2Q __m512i _mm512_mask_permutex2var_epi64(__m512i a, __mmask8 k, __m512i idx, __m512i b);
+
+
VPERMT2Q __m512i _mm512_mask2_permutex2var_epi64(__m512i a, __m512i idx, __mmask8 k, __m512i b);
+
+
VPERMT2Q __m512i _mm512_maskz_permutex2var_epi64(__mmask8 k, __m512i a, __m512i idx, __m512i b);
+
+
VPERMT2Q __m256i _mm256_permutex2var_epi64(__m256i a, __m256i idx, __m256i b);
+
+
VPERMT2Q __m256i _mm256_mask_permutex2var_epi64(__m256i a, __mmask8 k, __m256i idx, __m256i b);
+
+
VPERMT2Q __m256i _mm256_mask2_permutex2var_epi64(__m256i a, __m256i idx, __mmask8 k, __m256i b);
+
+
VPERMT2Q __m256i _mm256_maskz_permutex2var_epi64(__mmask8 k, __m256i a, __m256i idx, __m256i b);
+
+
VPERMT2Q __m128i _mm_permutex2var_epi64(__m128i a, __m128i idx, __m128i b);
+
+
VPERMT2Q __m128i _mm_mask_permutex2var_epi64(__m128i a, __mmask8 k, __m128i idx, __m128i b);
+
+
VPERMT2Q __m128i _mm_mask2_permutex2var_epi64(__m128i a, __m128i idx, __mmask8 k, __m128i b);
+
+
VPERMT2Q __m128i _mm_maskz_permutex2var_epi64(__mmask8 k, __m128i a, __m128i idx, __m128i b);
+
+
VPERMT2W __m512i _mm512_permutex2var_epi16(__m512i a, __m512i idx, __m512i b);
+
+
VPERMT2W __m512i _mm512_mask_permutex2var_epi16(__m512i a, __mmask32 k, __m512i idx, __m512i b);
+
+
VPERMT2W __m512i _mm512_mask2_permutex2var_epi16(__m512i a, __m512i idx, __mmask32 k, __m512i b);
+
+
VPERMT2W __m512i _mm512_maskz_permutex2var_epi16(__mmask32 k, __m512i a, __m512i idx, __m512i b);
+
+
VPERMT2W __m256i _mm256_permutex2var_epi16(__m256i a, __m256i idx, __m256i b);
+
+
VPERMT2W __m256i _mm256_mask_permutex2var_epi16(__m256i a, __mmask16 k, __m256i idx, __m256i b);
+
+
VPERMT2W __m256i _mm256_mask2_permutex2var_epi16(__m256i a, __m256i idx, __mmask16 k, __m256i b);
+
+
VPERMT2W __m256i _mm256_maskz_permutex2var_epi16(__mmask16 k, __m256i a, __m256i idx, __m256i b);
+
+
VPERMT2W __m128i _mm_permutex2var_epi16(__m128i a, __m128i idx, __m128i b);
+
+
VPERMT2W __m128i _mm_mask_permutex2var_epi16(__m128i a, __mmask8 k, __m128i idx, __m128i b);
+
+
VPERMT2W __m128i _mm_mask2_permutex2var_epi16(__m128i a, __m128i idx, __mmask8 k, __m128i b);
+
+
VPERMT2W __m128i _mm_maskz_permutex2var_epi16(__mmask8 k, __m128i a, __m128i idx, __m128i b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

VPERMT2D/Q/PS/PD: See Table 2-50, “Type E4NF Class Exception Conditions.”

+

VPERMT2W: See Exceptions Type E4NF.nb in Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/vpexpandb.vpexpandw.html b/x86/vpexpandb.vpexpandw.html new file mode 100644 index 0000000..a9b34cd --- /dev/null +++ b/x86/vpexpandb.vpexpandw.html @@ -0,0 +1,217 @@ + +VPEXPANDB/VPEXPANDW + — Expand Byte/Word Values

VPEXPANDB/VPEXPANDW + — Expand Byte/Word Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 62 /r VPEXPANDB xmm1{k1}{z}, m128AV/VAVX512_VBMI2 AVX512VLExpands up to 128 bits of packed byte values from m128 to xmm1 with writemask k1.
EVEX.128.66.0F38.W0 62 /r VPEXPANDB xmm1{k1}{z}, xmm2BV/VAVX512_VBMI2 AVX512VLExpands up to 128 bits of packed byte values from xmm2 to xmm1 with writemask k1.
EVEX.256.66.0F38.W0 62 /r VPEXPANDB ymm1{k1}{z}, m256AV/VAVX512_VBMI2 AVX512VLExpands up to 256 bits of packed byte values from m256 to ymm1 with writemask k1.
EVEX.256.66.0F38.W0 62 /r VPEXPANDB ymm1{k1}{z}, ymm2BV/VAVX512_VBMI2 AVX512VLExpands up to 256 bits of packed byte values from ymm2 to ymm1 with writemask k1.
EVEX.512.66.0F38.W0 62 /r VPEXPANDB zmm1{k1}{z}, m512AV/VAVX512_VBMI2Expands up to 512 bits of packed byte values from m512 to zmm1 with writemask k1.
EVEX.512.66.0F38.W0 62 /r VPEXPANDB zmm1{k1}{z}, zmm2BV/VAVX512_VBMI2Expands up to 512 bits of packed byte values from zmm2 to zmm1 with writemask k1.
EVEX.128.66.0F38.W1 62 /r VPEXPANDW xmm1{k1}{z}, m128AV/VAVX512_VBMI2 AVX512VLExpands up to 128 bits of packed word values from m128 to xmm1 with writemask k1.
EVEX.128.66.0F38.W1 62 /r VPEXPANDW xmm1{k1}{z}, xmm2BV/VAVX512_VBMI2 AVX512VLExpands up to 128 bits of packed word values from xmm2 to xmm1 with writemask k1.
EVEX.256.66.0F38.W1 62 /r VPEXPANDW ymm1{k1}{z}, m256AV/VAVX512_VBMI2 AVX512VLExpands up to 256 bits of packed word values from m256 to ymm1 with writemask k1.
EVEX.256.66.0F38.W1 62 /r VPEXPANDW ymm1{k1}{z}, ymm2BV/VAVX512_VBMI2 AVX512VLExpands up to 256 bits of packed word values from ymm2 to ymm1 with writemask k1.
EVEX.512.66.0F38.W1 62 /r VPEXPANDW zmm1{k1}{z}, m512AV/VAVX512_VBMI2Expands up to 512 bits of packed word values from m512 to zmm1 with writemask k1.
EVEX.512.66.0F38.W1 62 /r VPEXPANDW zmm1{k1}{z}, zmm2BV/VAVX512_VBMI2Expands up to 512 bits of packed word values from zmm2 to zmm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Expands (loads) up to 64 byte integer values or 32 word integer values from the source operand (memory operand) to the destination operand (register operand), based on the active elements determined by the writemask operand.

+

Note: EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Moves 128, 256 or 512 bits of packed byte integer values from the source operand (memory operand) to the destination operand (register operand). This instruction is used to load from an int8 vector register or memory location while inserting the data into sparse elements of destination vector register using the active elements pointed out by the operand writemask.

+

This instruction supports memory fault suppression.

+

Note that the compressed displacement assumes a pre-scaling (N) corresponding to the size of one single element instead of the size of the full vector.

+
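A minimal sketch of the register-to-register expand, using _mm_maskz_expand_epi8 from the intrinsic list below; it assumes AVX512_VBMI2 and AVX512VL support and immintrin.h, and the function name is illustrative only.

#include <immintrin.h>

/* Mask 0xAAAA selects byte positions 1,3,5,...,15, so the low eight
   contiguous bytes of src are scattered into the odd byte positions of the
   result; the unselected (even) positions are zeroed by the maskz form. */
__m128i expand_into_odd_bytes(__m128i src)
{
    return _mm_maskz_expand_epi8((__mmask16)0xAAAA, src);
}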

Operation + ¶ +

+

VPEXPANDB + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+k := 0
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        DEST.byte[j] := SRC.byte[k];
+        k := k + 1
+        ELSE:
+            IF *merging-masking*:
+                *DEST.byte[j] remains unchanged*
+                ELSE:
+                        ; zeroing-masking
+                    DEST.byte[j] := 0
+DEST[MAX_VL-1:VL] := 0
+
+

VPEXPANDW + ¶ +

+
(KL, VL) = (8,128), (16,256), (32, 512)
+k := 0
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        DEST.word[j] := SRC.word[k];
+        k := k + 1
+        ELSE:
+            IF *merging-masking*:
+                *DEST.word[j] remains unchanged*
+                ELSE: ; zeroing-masking
+                    DEST.word[j] := 0
+DEST[MAX_VL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPEXPAND __m128i _mm_mask_expand_epi8(__m128i, __mmask16, __m128i);
+
+
VPEXPAND __m128i _mm_maskz_expand_epi8(__mmask16, __m128i);
+
+
VPEXPAND __m128i _mm_mask_expandloadu_epi8(__m128i, __mmask16, const void*);
+
+
VPEXPAND __m128i _mm_maskz_expandloadu_epi8(__mmask16, const void*);
+
+
VPEXPAND __m256i _mm256_mask_expand_epi8(__m256i, __mmask32, __m256i);
+
+
VPEXPAND __m256i _mm256_maskz_expand_epi8(__mmask32, __m256i);
+
+
VPEXPAND __m256i _mm256_mask_expandloadu_epi8(__m256i, __mmask32, const void*);
+
+
VPEXPAND __m256i _mm256_maskz_expandloadu_epi8(__mmask32, const void*);
+
+
VPEXPAND __m512i _mm512_mask_expand_epi8(__m512i, __mmask64, __m512i);
+
+
VPEXPAND __m512i _mm512_maskz_expand_epi8(__mmask64, __m512i);
+
+
VPEXPAND __m512i _mm512_mask_expandloadu_epi8(__m512i, __mmask64, const void*);
+
+
VPEXPAND __m512i _mm512_maskz_expandloadu_epi8(__mmask64, const void*);
+
+
VPEXPANDW __m128i _mm_mask_expand_epi16(__m128i, __mmask8, __m128i);
+
+
VPEXPANDW __m128i _mm_maskz_expand_epi16(__mmask8, __m128i);
+
+
VPEXPANDW __m128i _mm_mask_expandloadu_epi16(__m128i, __mmask8, const void*);
+
+
VPEXPANDW __m128i _mm_maskz_expandloadu_epi16(__mmask8, const void *);
+
+
VPEXPANDW __m256i _mm256_mask_expand_epi16(__m256i, __mmask16, __m256i);
+
+
VPEXPANDW __m256i _mm256_maskz_expand_epi16(__mmask16, __m256i);
+
+
VPEXPANDW __m256i _mm256_mask_expandloadu_epi16(__m256i, __mmask16, const void*);
+
+
VPEXPANDW __m256i _mm256_maskz_expandloadu_epi16(__mmask16, const void*);
+
+
VPEXPANDW __m512i _mm512_mask_expand_epi16(__m512i, __mmask32, __m512i);
+
+
VPEXPANDW __m512i _mm512_maskz_expand_epi16(__mmask32, __m512i);
+
+
VPEXPANDW __m512i _mm512_mask_expandloadu_epi16(__m512i, __mmask32, const void*);
+
+
VPEXPANDW __m512i _mm512_maskz_expandloadu_epi16(__mmask32, const void*);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vpexpandd.html b/x86/vpexpandd.html new file mode 100644 index 0000000..2c482a6 --- /dev/null +++ b/x86/vpexpandd.html @@ -0,0 +1,124 @@ + +VPEXPANDD + — Load Sparse Packed Doubleword Integer Values From Dense Memory/Register

VPEXPANDD + — Load Sparse Packed Doubleword Integer Values From Dense Memory/Register

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 89 /r VPEXPANDD xmm1 {k1}{z}, xmm2/m128AV/VAVX512VL AVX512FExpand packed double-word integer values from xmm2/m128 to xmm1 using writemask k1.
EVEX.256.66.0F38.W0 89 /r VPEXPANDD ymm1 {k1}{z}, ymm2/m256AV/VAVX512VL AVX512FExpand packed double-word integer values from ymm2/m256 to ymm1 using writemask k1.
EVEX.512.66.0F38.W0 89 /r VPEXPANDD zmm1 {k1}{z}, zmm2/m512AV/VAVX512FExpand packed double-word integer values from zmm2/m512 to zmm1 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Expand (load) up to 16 contiguous doubleword integer values of the input vector in the source operand (the second operand) to sparse elements in the destination operand (the first operand), selected by the writemask k1. The destination operand is a ZMM register, the source operand can be a ZMM register or memory location.

+

The input vector starts from the lowest element in the source operand. The opmask register k1 selects the destination elements (a partial vector or sparse elements if less than 8 elements) to be replaced by the ascending elements in the input vector. Destination elements not selected by the writemask k1 are either unmodified or zeroed, depending on EVEX.z.

+

Note: EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Note that the compressed displacement assumes a pre-scaling (N) corresponding to the size of one single element instead of the size of the full vector.

+
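A minimal sketch, using _mm512_maskz_expand_epi32 from the intrinsic list below; it assumes AVX-512F support and immintrin.h, and the function name is illustrative only.

#include <immintrin.h>

/* Mask 0x00F0 selects destination elements 4..7, so the four lowest dwords
   of src are placed into elements 4..7; all other elements are zeroed. */
__m512i expand_low_four_to_middle(__m512i src)
{
    return _mm512_maskz_expand_epi32((__mmask16)0x00F0, src);
}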

Operation + ¶ +

+

VPEXPANDD (EVEX encoded versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+k := 0
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            DEST[i+31:i] := SRC[k+31:k];
+            k := k + 32
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPEXPANDD __m512i _mm512_mask_expandloadu_epi32(__m512i s, __mmask16 k, void * a);
+
+
VPEXPANDD __m512i _mm512_maskz_expandloadu_epi32( __mmask16 k, void * a);
+
+
VPEXPANDD __m512i _mm512_mask_expand_epi32(__m512i s, __mmask16 k, __m512i a);
+
+
VPEXPANDD __m512i _mm512_maskz_expand_epi32( __mmask16 k, __m512i a);
+
+
VPEXPANDD __m256i _mm256_mask_expandloadu_epi32(__m256i s, __mmask8 k, void * a);
+
+
VPEXPANDD __m256i _mm256_maskz_expandloadu_epi32( __mmask8 k, void * a);
+
+
VPEXPANDD __m256i _mm256_mask_expand_epi32(__m256i s, __mmask8 k, __m256i a);
+
+
VPEXPANDD __m256i _mm256_maskz_expand_epi32( __mmask8 k, __m256i a);
+
+
VPEXPANDD __m128i _mm_mask_expandloadu_epi32(__m128i s, __mmask8 k, void * a);
+
+
VPEXPANDD __m128i _mm_maskz_expandloadu_epi32( __mmask8 k, void * a);
+
+
VPEXPANDD __m128i _mm_mask_expand_epi32(__m128i s, __mmask8 k, __m128i a);
+
+
VPEXPANDD __m128i _mm_maskz_expand_epi32( __mmask8 k, __m128i a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

+

Additionally:

+ + + +
#UD If EVEX.vvvv != 1111B.
diff --git a/x86/vpexpandq.html b/x86/vpexpandq.html new file mode 100644 index 0000000..5ed17d0 --- /dev/null +++ b/x86/vpexpandq.html @@ -0,0 +1,124 @@ + +VPEXPANDQ + — Load Sparse Packed Quadword Integer Values From Dense Memory/Register

VPEXPANDQ + — Load Sparse Packed Quadword Integer Values From Dense Memory/Register

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W1 89 /r VPEXPANDQ xmm1 {k1}{z}, xmm2/m128AV/VAVX512VL AVX512FExpand packed quad-word integer values from xmm2/m128 to xmm1 using writemask k1.
EVEX.256.66.0F38.W1 89 /r VPEXPANDQ ymm1 {k1}{z}, ymm2/m256AV/VAVX512VL AVX512FExpand packed quad-word integer values from ymm2/m256 to ymm1 using writemask k1.
EVEX.512.66.0F38.W1 89 /r VPEXPANDQ zmm1 {k1}{z}, zmm2/m512AV/VAVX512FExpand packed quad-word integer values from zmm2/m512 to zmm1 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Expand (load) up to 8 quadword integer values from the source operand (the second operand) to sparse elements in the destination operand (the first operand), selected by the writemask k1. The destination operand is a ZMM register, the source operand can be a ZMM register or memory location.

+

The input vector starts from the lowest element in the source operand. The opmask register k1 selects the destination elements (a partial vector or sparse elements if less than 8 elements) to be replaced by the ascending elements in the input vector. Destination elements not selected by the writemask k1 are either unmodified or zeroed, depending on EVEX.z.

+

Note: EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Note that the compressed displacement assumes a pre-scaling (N) corresponding to the size of one single element instead of the size of the full vector.

+

Operation + ¶ +

+

VPEXPANDQ (EVEX encoded versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+k := 0
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            DEST[i+63:i] := SRC[k+63:k];
+            k := k + 64
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPEXPANDQ __m512i _mm512_mask_expandloadu_epi64(__m512i s, __mmask8 k, void * a);
+
+
VPEXPANDQ __m512i _mm512_maskz_expandloadu_epi64( __mmask8 k, void * a);
+
+
VPEXPANDQ __m512i _mm512_mask_expand_epi64(__m512i s, __mmask8 k, __m512i a);
+
+
VPEXPANDQ __m512i _mm512_maskz_expand_epi64( __mmask8 k, __m512i a);
+
+
VPEXPANDQ __m256i _mm256_mask_expandloadu_epi64(__m256i s, __mmask8 k, void * a);
+
+
VPEXPANDQ __m256i _mm256_maskz_expandloadu_epi64( __mmask8 k, void * a);
+
+
VPEXPANDQ __m256i _mm256_mask_expand_epi64(__m256i s, __mmask8 k, __m256i a);
+
+
VPEXPANDQ __m256i _mm256_maskz_expand_epi64( __mmask8 k, __m256i a);
+
+
VPEXPANDQ __m128i _mm_mask_expandloadu_epi64(__m128i s, __mmask8 k, void * a);
+
+
VPEXPANDQ __m128i _mm_maskz_expandloadu_epi64( __mmask8 k, void * a);
+
+
VPEXPANDQ __m128i _mm_mask_expand_epi64(__m128i s, __mmask8 k, __m128i a);
+
+
VPEXPANDQ __m128i _mm_maskz_expand_epi64( __mmask8 k, __m128i a);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

+

Additionally:

+ + + +
#UD If EVEX.vvvv != 1111B.
diff --git a/x86/vpgatherdd.vpgatherdq.html b/x86/vpgatherdd.vpgatherdq.html new file mode 100644 index 0000000..bc3db57 --- /dev/null +++ b/x86/vpgatherdd.vpgatherdq.html @@ -0,0 +1,159 @@ + +VPGATHERDD/VPGATHERDQ + — Gather Packed Dword, Packed Qword With Signed Dword Indices

VPGATHERDD/VPGATHERDQ + — Gather Packed Dword, Packed Qword With Signed Dword Indices

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 90 /vsib VPGATHERDD xmm1 {k1}, vm32xAV/VAVX512VL AVX512FUsing signed dword indices, gather dword values from memory using writemask k1 for merging-masking.
EVEX.256.66.0F38.W0 90 /vsib VPGATHERDD ymm1 {k1}, vm32yAV/VAVX512VL AVX512FUsing signed dword indices, gather dword values from memory using writemask k1 for merging-masking.
EVEX.512.66.0F38.W0 90 /vsib VPGATHERDD zmm1 {k1}, vm32zAV/VAVX512FUsing signed dword indices, gather dword values from memory using writemask k1 for merging-masking.
EVEX.128.66.0F38.W1 90 /vsib VPGATHERDQ xmm1 {k1}, vm32xAV/VAVX512VL AVX512FUsing signed dword indices, gather quadword values from memory using writemask k1 for merging-masking.
EVEX.256.66.0F38.W1 90 /vsib VPGATHERDQ ymm1 {k1}, vm32xAV/VAVX512VL AVX512FUsing signed dword indices, gather quadword values from memory using writemask k1 for merging-masking.
EVEX.512.66.0F38.W1 90 /vsib VPGATHERDQ zmm1 {k1}, vm32yAV/VAVX512FUsing signed dword indices, gather quadword values from memory using writemask k1 for merging-masking.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)BaseReg (R): VSIB:base, VectorReg(R): VSIB:indexN/AN/A
+

Description + ¶ +

+

A set of 16 or 8 doubleword/quadword memory locations pointed to by base address BASE_ADDR and index vector VINDEX with scale SCALE are gathered. The result is written into vector zmm1. The elements are specified via the VSIB (i.e., the index register is a zmm, holding packed indices). Elements will only be loaded if their corresponding mask bit is one. If an element’s mask bit is not set, the corresponding element of the destination register (zmm1) is left unchanged. The entire mask register will be set to zero by this instruction unless it triggers an exception.

+

This instruction can be suspended by an exception if at least one element is already gathered (i.e., if the exception is triggered by an element other than the rightmost one with its mask bit set). When this happens, the destination register and the mask register (k1) are partially updated; those elements that have been gathered are placed into the destination register and have their mask bits set to zero. If any traps or interrupts are pending from already gathered elements, they will be delivered in lieu of the exception; in this case, EFLAG.RF is set to one so an instruction breakpoint is not re-triggered when the instruction is continued.

+

If the data element size is less than the index element size, the higher part of the destination register and the mask register do not correspond to any elements being gathered. This instruction sets those higher parts to zero. It may update these unused elements to one or both of those registers even if the instruction triggers an exception, and even if the instruction triggers the exception before gathering any elements.

+

Note that:

+
    +
  • The values may be read from memory in any order. Memory ordering with other instructions follows the Intel-64 memory-ordering model.
  • +
  • Faults are delivered in a right-to-left manner. That is, if a fault is triggered by an element and delivered, all elements closer to the LSB of the destination zmm will be completed (and non-faulting). Individual elements closer to the MSB may or may not be completed. If a given element triggers multiple faults, they are delivered in the conventional order.
  • +
  • Elements may be gathered in any order, but faults must be delivered in a right-to-left order; thus, elements to the left of a faulting one may be gathered before the fault is delivered. A given implementation of this instruction is repeatable - given the same input values and architectural state, the same set of elements to the left of the faulting one will be gathered.
  • +
  • This instruction does not perform AC checks, and so will never deliver an AC fault.
  • +
  • Not valid with 16-bit effective addresses. Will deliver a #UD fault.
  • +
  • These instructions do not accept zeroing-masking since the 0 values in k1 are used to determine completion.
+

Note that the presence of VSIB byte is enforced in this instruction. Hence, the instruction will #UD fault if ModRM.rm is different than 100b.

+

This instruction has the same disp8*N and alignment rules as for scalar instructions (Tuple 1).

+

The instruction will #UD fault if the destination vector zmm1 is the same as index vector VINDEX. The instruction will #UD fault if the k0 mask register is specified.

+

The scaled index may require more bits to represent than the address bits used by the processor (e.g., in 32-bit mode, if the scale is greater than one). In this case, the most significant bits beyond the number of address bits are ignored.

+

Operation + ¶ +

+
BASE_ADDR stands for the memory operand base address (a GPR); may not exist
+VINDEX stands for the memory operand vector of indices (a ZMM register)
+SCALE stands for the memory operand scalar (1, 2, 4 or 8)
+DISP is the optional 1 or 4 byte displacement
+
+

VPGATHERDD (EVEX encoded version) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j]
+        THEN DEST[i+31:i] := MEM[BASE_ADDR +
+                SignExtend(VINDEX[i+31:i]) * SCALE + DISP]
+            k1[j] := 0
+        ELSE *DEST[i+31:i] := remains unchanged*
+                    ; Only merging masking is allowed
+    FI;
+ENDFOR
+k1[MAX_KL-1:KL] := 0
+DEST[MAXVL-1:VL] := 0
+
+

VPGATHERDQ (EVEX encoded version) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 32
+    IF k1[j]
+        THEN DEST[i+63:i] :=
+            MEM[BASE_ADDR + SignExtend(VINDEX[k+31:k]) * SCALE + DISP]
+            k1[j] := 0
+        ELSE *DEST[i+63:i] := remains unchanged*
+                ; Only merging masking is allowed
+    FI;
+ENDFOR
+k1[MAX_KL-1:KL] := 0
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPGATHERDD __m512i _mm512_i32gather_epi32( __m512i vdx, void * base, int scale);
+
+
VPGATHERDD __m512i _mm512_mask_i32gather_epi32(__m512i s, __mmask16 k, __m512i vdx, void * base, int scale);
+
+
VPGATHERDD __m256i _mm256_mmask_i32gather_epi32(__m256i s, __mmask8 k, __m256i vdx, void * base, int scale);
+
+
VPGATHERDD __m128i _mm_mmask_i32gather_epi32(__m128i s, __mmask8 k, __m128i vdx, void * base, int scale);
+
+
VPGATHERDQ __m512i _mm512_i32logather_epi64( __m256i vdx, void * base, int scale);
+
+
VPGATHERDQ __m512i _mm512_mask_i32logather_epi64(__m512i s, __mmask8 k, __m256i vdx, void * base, int scale);
+
+
VPGATHERDQ __m256i _mm256_mmask_i32logather_epi64(__m256i s, __mmask8 k, __m128i vdx, void * base, int scale);
+
+
VPGATHERDQ __m128i _mm_mmask_i32gather_epi64(__m128i s, __mmask8 k, __m128i vdx, void * base, int scale);
+
+
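As a minimal usage sketch (not part of the manual text), the snippet below gathers sixteen dwords from a lookup table with the unmasked _mm512_i32gather_epi32 form listed above; the scale argument is 4 because the indices address 4-byte elements. It assumes AVX512F support and a flag such as -mavx512f.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    int table[64];
    for (int i = 0; i < 64; i++) table[i] = 100 + i;
    __m512i idx = _mm512_setr_epi32(0, 2, 4, 6, 8, 10, 12, 14,
                                    1, 3, 5, 7, 9, 11, 13, 15);
    /* VPGATHERDD zmm1 {k1}, vm32z with an all-ones mask */
    __m512i g = _mm512_i32gather_epi32(idx, table, 4);
    int out[16];
    _mm512_storeu_si512(out, g);
    for (int i = 0; i < 16; i++) printf("%d ", out[i]);   /* 100 102 ... 114 101 103 ... 115 */
    printf("\n");
    return 0;
}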

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-61, “Type E12 Class Exception Conditions.”

diff --git a/x86/vpgatherdd.vpgatherqd.html b/x86/vpgatherdd.vpgatherqd.html new file mode 100644 index 0000000..b0fd29c --- /dev/null +++ b/x86/vpgatherdd.vpgatherqd.html @@ -0,0 +1,205 @@ + +VPGATHERDD/VPGATHERQD + — Gather Packed Dword Values Using Signed Dword/Qword Indices

VPGATHERDD/VPGATHERQD + — Gather Packed Dword Values Using Signed Dword/Qword Indices

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 -bit ModeCPUID Feature FlagDescription
VEX.128.66.0F38.W0 90 /r VPGATHERDD xmm1, vm32x, xmm2RMVV/VAVX2Using dword indices specified in vm32x, gather dword values from memory conditioned on mask specified by xmm2. Conditionally gathered elements are merged into xmm1.
VEX.128.66.0F38.W0 91 /r VPGATHERQD xmm1, vm64x, xmm2RMVV/VAVX2Using qword indices specified in vm64x, gather dword values from memory conditioned on mask specified by xmm2. Conditionally gathered elements are merged into xmm1.
VEX.256.66.0F38.W0 90 /r VPGATHERDD ymm1, vm32y, ymm2RMVV/VAVX2Using dword indices specified in vm32y, gather dword values from memory conditioned on mask specified by ymm2. Conditionally gathered elements are merged into ymm1.
VEX.256.66.0F38.W0 91 /r VPGATHERQD xmm1, vm64y, xmm2RMVV/VAVX2Using qword indices specified in vm64y, gather dword values from memory conditioned on mask specified by xmm2. Conditionally gathered elements are merged into xmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMVModRM:reg (r,w)BaseReg (R): VSIB:base, VectorReg(R): VSIB:indexVEX.vvvv (r, w)N/A
+

Description + ¶ +

+

The instruction conditionally loads up to 4 or 8 dword values from memory addresses specified by the memory operand (the second operand) and using dword indices. The memory operand uses the VSIB form of the SIB byte to specify a general purpose register operand as the common base, a vector register for an array of indices relative to the base and a constant scale factor.

+

The mask operand (the third operand) specifies the conditional load operation from each memory address and the corresponding update of each data element of the destination operand (the first operand). Conditionality is specified by the most significant bit of each data element of the mask register. If an element’s mask bit is not set, the corresponding element of the destination register is left unchanged. The width of data element in the destination register and mask register are identical. The entire mask register will be set to zero by this instruction unless the instruction causes an exception.

+

Using qword indices, the instruction conditionally loads up to 2 or 4 dword values from the VSIB addressing memory operand, and updates the lower half of the destination register. The upper 128 or 256 bits of the destination register are zeroed with qword indices.

+

This instruction can be suspended by an exception if at least one element is already gathered (i.e., if the exception is triggered by an element other than the rightmost one with its mask bit set). When this happens, the destination register and the mask operand are partially updated; those elements that have been gathered are placed into the destination register and have their mask bits set to zero. If any traps or interrupts are pending from already gathered elements, they will be delivered in lieu of the exception; in this case, EFLAG.RF is set to one so an instruction breakpoint is not re-triggered when the instruction is continued.

+

If the data size and index size are different, part of the destination register and part of the mask register do not correspond to any elements being gathered. This instruction sets those parts to zero. It may do this to one or both of those registers even if the instruction triggers an exception, and even if the instruction triggers the exception before gathering any elements.

+

VEX.128 version: For dword indices, the instruction will gather four dword values. For qword indices, the instruction will gather two values and zero the upper 64 bits of the destination.

+

VEX.256 version: For dword indices, the instruction will gather eight dword values. For qword indices, the instruction will gather four values and zero the upper 128 bits of the destination.

+

Note that:

+
    +
  • If any pair of the index, mask, or destination registers are the same, this instruction results in a #UD fault.
  • +
  • The values may be read from memory in any order. Memory ordering with other instructions follows the Intel-64 memory-ordering model.
  • +
  • Faults are delivered in a right-to-left manner. That is, if a fault is triggered by an element and delivered, all elements closer to the LSB of the destination will be completed (and non-faulting). Individual elements closer to the MSB may or may not be completed. If a given element triggers multiple faults, they are delivered in the conventional order.
  • +
  • Elements may be gathered in any order, but faults must be delivered in a right-to-left order; thus, elements to the left of a faulting one may be gathered before the fault is delivered. A given implementation of this instruction is repeatable - given the same input values and architectural state, the same set of elements to the left of the faulting one will be gathered.
  • +
  • This instruction does not perform AC checks, and so will never deliver an AC fault.
  • +
  • This instruction will cause a #UD if the address size attribute is 16-bit.
  • +
  • This instruction will cause a #UD if the memory operand is encoded without the SIB byte.
  • +
  • This instruction should not be used to access memory mapped I/O as the ordering of the individual loads it does is implementation specific, and some implementations may use loads larger than the data element size or load elements an indeterminate number of times.
  • +
  • The scaled index may require more bits to represent than the address bits used by the processor (e.g., in 32-bit mode, if the scale is greater than one). In this case, the most significant bits beyond the number of address bits are ignored.
+

Operation + ¶ +

+
DEST := SRC1;
+BASE_ADDR: base register encoded in VSIB addressing;
+VINDEX: the vector index register encoded by VSIB addressing;
+SCALE: scale factor encoded by SIB:[7:6];
+DISP: optional 1, 4 byte displacement;
+MASK := SRC3;
+
+

VPGATHERDD (VEX.128 version) + ¶ +

+
MASK[MAXVL-1:128] := 0;
+FOR j := 0 to 3
+    i := j * 32;
+    IF MASK[31+i] THEN
+        MASK[i +31:i] := FFFFFFFFH; // extend from most significant bit
+    ELSE
+        MASK[i +31:i] := 0;
+    FI;
+ENDFOR
+FOR j := 0 to 3
+    i := j * 32;
+    DATA_ADDR := BASE_ADDR + (SignExtend(VINDEX[i+31:i])*SCALE + DISP);
+    IF MASK[31+i] THEN
+        DEST[i +31:i] := FETCH_32BITS(DATA_ADDR); // a fault exits the instruction
+    FI;
+    MASK[i +31:i] := 0;
+ENDFOR
+DEST[MAXVL-1:128] := 0;
+
+

VPGATHERQD (VEX.128 version) + ¶ +

+
MASK[MAXVL-1:64] := 0;
+FOR j := 0 to 3
+    i := j * 32;
+    IF MASK[31+i] THEN
+        MASK[i +31:i] := FFFFFFFFH; // extend from most significant bit
+    ELSE
+        MASK[i +31:i] := 0;
+    FI;
+ENDFOR
+FOR j := 0 to 1
+    k := j * 64;
+    i := j * 32;
+    DATA_ADDR := BASE_ADDR + (SignExtend(VINDEX[k+63:k])*SCALE + DISP);
+    IF MASK[31+i] THEN
+        DEST[i +31:i] := FETCH_32BITS(DATA_ADDR); // a fault exits the instruction
+    FI;
+    MASK[i +31:i] := 0;
+ENDFOR
+DEST[MAXVL-1:64] := 0;
+
+

VPGATHERDD (VEX.256 version) + ¶ +

+
MASK[MAXVL-1:256] := 0;
+FOR j := 0 to 7
+    i := j * 32;
+    IF MASK[31+i] THEN
+        MASK[i +31:i] := FFFFFFFFH; // extend from most significant bit
+    ELSE
+        MASK[i +31:i] := 0;
+    FI;
+ENDFOR
+FOR j := 0 to 7
+    i := j * 32;
+    DATA_ADDR := BASE_ADDR + (SignExtend(VINDEX[i+31:i])*SCALE + DISP);
+    IF MASK[31+i] THEN
+        DEST[i +31:i] := FETCH_32BITS(DATA_ADDR); // a fault exits the instruction
+    FI;
+    MASK[i +31:i] := 0;
+ENDFOR
+DEST[MAXVL-1:256] := 0;
+
+

VPGATHERQD (VEX.256 version) + ¶ +

+
MASK[MAXVL-1:128] := 0;
+FOR j := 0 to 7
+    i := j * 32;
+    IF MASK[31+i] THEN
+        MASK[i +31:i] := FFFFFFFFH; // extend from most significant bit
+    ELSE
+        MASK[i +31:i] := 0;
+    FI;
+ENDFOR
+FOR j := 0 to 3
+    k := j * 64;
+    i := j * 32;
+    DATA_ADDR := BASE_ADDR + (SignExtend(VINDEX[k+63:k])*SCALE + DISP);
+    IF MASK[31+i] THEN
+        DEST[i +31:i] := FETCH_32BITS(DATA_ADDR); // a fault exits the instruction
+    FI;
+    MASK[i +31:i] := 0;
+ENDFOR
+DEST[MAXVL-1:128] := 0;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPGATHERDD: __m128i _mm_i32gather_epi32 (int const * base, __m128i index, const int scale);
+
+
VPGATHERDD: __m128i _mm_mask_i32gather_epi32 (__m128i src, int const * base, __m128i index, __m128i mask, const int scale);
+
+
VPGATHERDD: __m256i _mm256_i32gather_epi32 ( int const * base, __m256i index, const int scale);
+
+
VPGATHERDD: __m256i _mm256_mask_i32gather_epi32 (__m256i src, int const * base, __m256i index, __m256i mask, const int scale);
+
+
VPGATHERQD: __m128i _mm_i64gather_epi32 (int const * base, __m128i index, const int scale);
+
+
VPGATHERQD: __m128i _mm_mask_i64gather_epi32 (__m128i src, int const * base, __m128i index, __m128i mask, const int scale);
+
+
VPGATHERQD: __m128i _mm256_i64gather_epi32 (int const * base, __m256i index, const int scale);
+
+
VPGATHERQD: __m128i _mm256_mask_i64gather_epi32 (__m128i src, int const * base, __m256i index, __m128i mask, const int scale);
+
+
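A minimal usage sketch (not part of the manual text) of the masked AVX2 form: only lanes whose mask element has its most significant bit set are gathered, while the other lanes keep the src value. It assumes AVX2 support and a flag such as -mavx2.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    int table[8] = {10, 11, 12, 13, 14, 15, 16, 17};
    __m256i idx  = _mm256_setr_epi32(7, 6, 5, 4, 3, 2, 1, 0);
    __m256i src  = _mm256_set1_epi32(-1);                          /* kept in masked-off lanes */
    __m256i mask = _mm256_setr_epi32(-1, 0, -1, 0, -1, 0, -1, 0);  /* MSB set in lanes 0,2,4,6 */
    /* VPGATHERDD ymm1, vm32y, ymm2; the mask register is cleared afterwards */
    __m256i g = _mm256_mask_i32gather_epi32(src, table, idx, mask, 4);
    int out[8];
    _mm256_storeu_si256((__m256i *)out, g);
    for (int i = 0; i < 8; i++) printf("%d ", out[i]);             /* 17 -1 15 -1 13 -1 11 -1 */
    printf("\n");
    return 0;
}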

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-27, “Type 12 Class Exception Conditions.”

diff --git a/x86/vpgatherdq.vpgatherqq.html b/x86/vpgatherdq.vpgatherqq.html new file mode 100644 index 0000000..2a6ddca --- /dev/null +++ b/x86/vpgatherdq.vpgatherqq.html @@ -0,0 +1,205 @@ + +VPGATHERDQ/VPGATHERQQ + — Gather Packed Qword Values Using Signed Dword/Qword Indices

VPGATHERDQ/VPGATHERQQ + — Gather Packed Qword Values Using Signed Dword/Qword Indices

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 -bit ModeCPUID Feature FlagDescription
VEX.128.66.0F38.W1 90 /r VPGATHERDQ xmm1, vm32x, xmm2AV/VAVX2Using dword indices specified in vm32x, gather qword values from memory conditioned on mask specified by xmm2. Conditionally gathered elements are merged into xmm1.
VEX.128.66.0F38.W1 91 /r VPGATHERQQ xmm1, vm64x, xmm2AV/VAVX2Using qword indices specified in vm64x, gather qword values from memory conditioned on mask specified by xmm2. Conditionally gathered elements are merged into xmm1.
VEX.256.66.0F38.W1 90 /r VPGATHERDQ ymm1, vm32x, ymm2AV/VAVX2Using dword indices specified in vm32x, gather qword values from memory conditioned on mask specified by ymm2. Conditionally gathered elements are merged into ymm1.
VEX.256.66.0F38.W1 91 /r VPGATHERQQ ymm1, vm64y, ymm2AV/VAVX2Using qword indices specified in vm64y, gather qword values from memory conditioned on mask specified by ymm2. Conditionally gathered elements are merged into ymm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
AModRM:reg (r,w)BaseReg (R): VSIB:base, VectorReg(R): VSIB:indexVEX.vvvv (r, w)N/A
+

Description + ¶ +

+

The instruction conditionally loads up to 2 or 4 qword values from memory addresses specified by the memory operand (the second operand) and using qword indices. The memory operand uses the VSIB form of the SIB byte to specify a general purpose register operand as the common base, a vector register for an array of indices relative to the base and a constant scale factor.

+

The mask operand (the third operand) specifies the conditional load operation from each memory address and the corresponding update of each data element of the destination operand (the first operand). Conditionality is specified by the most significant bit of each data element of the mask register. If an element’s mask bit is not set, the corresponding element of the destination register is left unchanged. The width of data element in the destination register and mask register are identical. The entire mask register will be set to zero by this instruction unless the instruction causes an exception.

+

When using dword indices, only the lower half of the vector index register is used; the instruction conditionally loads up to 2 or 4 qword values from the VSIB addressing memory operand and updates the destination register.

+

This instruction can be suspended by an exception if at least one element is already gathered (i.e., if the exception is triggered by an element other than the rightmost one with its mask bit set). When this happens, the destination register and the mask operand are partially updated; those elements that have been gathered are placed into the destination register and have their mask bits set to zero. If any traps or interrupts are pending from already gathered elements, they will be delivered in lieu of the exception; in this case, EFLAG.RF is set to one so an instruction breakpoint is not re-triggered when the instruction is continued.

+

If the data size and index size are different, part of the destination register and part of the mask register do not correspond to any elements being gathered. This instruction sets those parts to zero. It may do this to one or both of those registers even if the instruction triggers an exception, and even if the instruction triggers the exception before gathering any elements.

+

VEX.128 version: The instruction will gather two qword values. For dword indices, only the lower two indices in the vector index register are used.

+

VEX.256 version: The instruction will gather four qword values. For dword indices, only the lower four indices in the vector index register are used.

+

Note that:

+
    +
  • If any pair of the index, mask, or destination registers are the same, this instruction results in a #UD fault.
  • +
  • The values may be read from memory in any order. Memory ordering with other instructions follows the Intel-64 memory-ordering model.
  • +
  • Faults are delivered in a right-to-left manner. That is, if a fault is triggered by an element and delivered, all elements closer to the LSB of the destination will be completed (and non-faulting). Individual elements closer to the MSB may or may not be completed. If a given element triggers multiple faults, they are delivered in the conventional order.
  • +
  • Elements may be gathered in any order, but faults must be delivered in a right-to-left order; thus, elements to the left of a faulting one may be gathered before the fault is delivered. A given implementation of this instruction is repeatable - given the same input values and architectural state, the same set of elements to the left of the faulting one will be gathered.
  • +
  • This instruction does not perform AC checks, and so will never deliver an AC fault.
  • +
  • This instruction will cause a #UD if the address size attribute is 16-bit.
  • +
  • This instruction will cause a #UD if the memory operand is encoded without the SIB byte.
  • +
  • This instruction should not be used to access memory mapped I/O as the ordering of the individual loads it does is implementation specific, and some implementations may use loads larger than the data element size or load elements an indeterminate number of times.
  • +
  • The scaled index may require more bits to represent than the address bits used by the processor (e.g., in 32-bit mode, if the scale is greater than one). In this case, the most significant bits beyond the number of address bits are ignored.
+

Operation + ¶ +

+
DEST := SRC1;
+BASE_ADDR: base register encoded in VSIB addressing;
+VINDEX: the vector index register encoded by VSIB addressing;
+SCALE: scale factor encoded by SIB:[7:6];
+DISP: optional 1, 4 byte displacement;
+MASK := SRC3;
+
+

VPGATHERDQ (VEX.128 version) + ¶ +

+
MASK[MAXVL-1:128] := 0;
+FOR j := 0 to 1
+    i := j * 64;
+    IF MASK[63+i] THEN
+        MASK[i +63:i] := FFFFFFFF_FFFFFFFFH; // extend from most significant bit
+    ELSE
+        MASK[i +63:i] := 0;
+    FI;
+ENDFOR
+FOR j := 0 to 1
+    k := j * 32;
+    i := j * 64;
+    DATA_ADDR := BASE_ADDR + (SignExtend(VINDEX[k+31:k])*SCALE + DISP);
+    IF MASK[63+i] THEN
+        DEST[i +63:i] := FETCH_64BITS(DATA_ADDR); // a fault exits the instruction
+    FI;
+    MASK[i +63:i] := 0;
+ENDFOR
+DEST[MAXVL-1:128] := 0;
+
+

VPGATHERQQ (VEX.128 version) + ¶ +

+
MASK[MAXVL-1:128] := 0;
+FOR j := 0 to 1
+    i := j * 64;
+    IF MASK[63+i] THEN
+        MASK[i +63:i] := FFFFFFFF_FFFFFFFFH; // extend from most significant bit
+    ELSE
+        MASK[i +63:i] := 0;
+    FI;
+ENDFOR
+FOR j := 0 to 1
+    i := j * 64;
+    DATA_ADDR := BASE_ADDR + (SignExtend(VINDEX[i+63:i])*SCALE + DISP);
+    IF MASK[63+i] THEN
+        DEST[i +63:i] := FETCH_64BITS(DATA_ADDR); // a fault exits the instruction
+    FI;
+    MASK[i +63:i] := 0;
+ENDFOR
+DEST[MAXVL-1:128] := 0;
+
+

VPGATHERQQ (VEX.256 version) + ¶ +

+
MASK[MAXVL-1:256] := 0;
+FOR j := 0 to 3
+    i := j * 64;
+    IF MASK[63+i] THEN
+        MASK[i +63:i] := FFFFFFFF_FFFFFFFFH; // extend from most significant bit
+    ELSE
+        MASK[i +63:i] := 0;
+    FI;
+ENDFOR
+FOR j := 0 to 3
+    i := j * 64;
+    DATA_ADDR := BASE_ADDR + (SignExtend(VINDEX[i+63:i])*SCALE + DISP);
+    IF MASK[63+i] THEN
+        DEST[i +63:i] := FETCH_64BITS(DATA_ADDR); // a fault exits the instruction
+    FI;
+    MASK[i +63:i] := 0;
+ENDFOR
+DEST[MAXVL-1:256] := 0;
+
+

VPGATHERDQ (VEX.256 version) + ¶ +

+
MASK[MAXVL-1:256] := 0;
+FOR j := 0 to 3
+    i := j * 64;
+    IF MASK[63+i] THEN
+        MASK[i +63:i] := FFFFFFFF_FFFFFFFFH; // extend from most significant bit
+    ELSE
+        MASK[i +63:i] := 0;
+    FI;
+ENDFOR
+FOR j := 0 to 3
+    k := j * 32;
+    i := j * 64;
+    DATA_ADDR := BASE_ADDR + (SignExtend(VINDEX[k+31:k])*SCALE + DISP);
+    IF MASK[63+i] THEN
+        DEST[i +63:i] := FETCH_64BITS(DATA_ADDR); // a fault exits the instruction
+    FI;
+    MASK[i +63:i] := 0;
+ENDFOR
+DEST[MAXVL-1:256] := 0;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPGATHERDQ: __m128i _mm_i32gather_epi64 (__int64 const * base, __m128i index, const int scale);
+
+
VPGATHERDQ: __m128i _mm_mask_i32gather_epi64 (__m128i src, __int64 const * base, __m128i index, __m128i mask, const int scale);
+
+
VPGATHERDQ: __m256i _mm256_i32gather_epi64 (__int64 const * base, __m128i index, const int scale);
+
+
VPGATHERDQ: __m256i _mm256_mask_i32gather_epi64 (__m256i src, __int64 const * base, __m128i index, __m256i mask, const int scale);
+
+
VPGATHERQQ: __m128i _mm_i64gather_epi64 (__int64 const * base, __m128i index, const int scale);
+
+
VPGATHERQQ: __m128i _mm_mask_i64gather_epi64 (__m128i src, __int64 const * base, __m128i index, __m128i mask, const int scale);
+
+
VPGATHERQQ: __m256i _mm256_i64gather_epi64 (__int64 const * base, __m256i index, const int scale);
+
+
VPGATHERQQ: __m256i _mm256_mask_i64gather_epi64 (__m256i src, __int64 const * base, __m256i index, __m256i mask, const int scale);
+
+
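A minimal usage sketch (not part of the manual text) of the qword-data form with dword indices: only four 32-bit indices in an xmm register are consumed, and the scale is 8 because the table elements are 8 bytes wide. It assumes AVX2 support and a flag such as -mavx2.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    long long table[8] = {0, 10, 20, 30, 40, 50, 60, 70};
    __m128i idx = _mm_setr_epi32(6, 4, 2, 0);            /* four dword indices */
    /* VPGATHERDQ ymm1, vm32x, ymm2 with an all-ones mask */
    __m256i g = _mm256_i32gather_epi64(table, idx, 8);
    long long out[4];
    _mm256_storeu_si256((__m256i *)out, g);
    for (int i = 0; i < 4; i++) printf("%lld ", out[i]);  /* 60 40 20 0 */
    printf("\n");
    return 0;
}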

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-27, “Type 12 Class Exception Conditions.”

diff --git a/x86/vpgatherqd.vpgatherqq.html b/x86/vpgatherqd.vpgatherqq.html new file mode 100644 index 0000000..9f7006e --- /dev/null +++ b/x86/vpgatherqd.vpgatherqq.html @@ -0,0 +1,158 @@ + +VPGATHERQD/VPGATHERQQ + — Gather Packed Dword, Packed Qword with Signed Qword Indices

VPGATHERQD/VPGATHERQQ + — Gather Packed Dword, Packed Qword with Signed Qword Indices

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 91 /vsib VPGATHERQD xmm1 {k1}, vm64xAV/VAVX512VL AVX512FUsing signed qword indices, gather dword values from memory using writemask k1 for merging-masking.
EVEX.256.66.0F38.W0 91 /vsib VPGATHERQD xmm1 {k1}, vm64yAV/VAVX512VL AVX512FUsing signed qword indices, gather dword values from memory using writemask k1 for merging-masking.
EVEX.512.66.0F38.W0 91 /vsib VPGATHERQD ymm1 {k1}, vm64zAV/VAVX512FUsing signed qword indices, gather dword values from memory using writemask k1 for merging-masking.
EVEX.128.66.0F38.W1 91 /vsib VPGATHERQQ xmm1 {k1}, vm64xAV/VAVX512VL AVX512FUsing signed qword indices, gather quadword values from memory using writemask k1 for merging-masking.
EVEX.256.66.0F38.W1 91 /vsib VPGATHERQQ ymm1 {k1}, vm64yAV/VAVX512VL AVX512FUsing signed qword indices, gather quadword values from memory using writemask k1 for merging-masking.
EVEX.512.66.0F38.W1 91 /vsib VPGATHERQQ zmm1 {k1}, vm64zAV/VAVX512FUsing signed qword indices, gather quadword values from memory using writemask k1 for merging-masking.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)BaseReg (R): VSIB:base, VectorReg(R): VSIB:indexN/AN/A
+

Description + ¶ +

+

A set of 8 doubleword/quadword memory locations pointed to by base address BASE_ADDR and index vector VINDEX with scale SCALE are gathered. The result is written into a vector register. The elements are specified via the VSIB (i.e., the index register is a vector register, holding packed indices). Elements will only be loaded if their corresponding mask bit is one. If an element’s mask bit is not set, the corresponding element of the destination register is left unchanged. The entire mask register will be set to zero by this instruction unless it triggers an exception.

+

This instruction can be suspended by an exception if at least one element is already gathered (i.e., if the exception is triggered by an element other than the rightmost one with its mask bit set). When this happens, the destination register and the mask register (k1) are partially updated; those elements that have been gathered are placed into the destination register and have their mask bits set to zero. If any traps or interrupts are pending from already gathered elements, they will be delivered in lieu of the exception; in this case, EFLAG.RF is set to one so an instruction breakpoint is not re-triggered when the instruction is continued.

+

If the data element size is less than the index element size, the higher part of the destination register and the mask register do not correspond to any elements being gathered. This instruction sets those higher parts to zero. It may update these unused elements to one or both of those registers even if the instruction triggers an exception, and even if the instruction triggers the exception before gathering any elements.

+

Note that:

+
    +
  • The values may be read from memory in any order. Memory ordering with other instructions follows the Intel-64 memory-ordering model.
  • +
  • Faults are delivered in a right-to-left manner. That is, if a fault is triggered by an element and delivered, all elements closer to the LSB of the destination zmm will be completed (and non-faulting). Individual elements closer to the MSB may or may not be completed. If a given element triggers multiple faults, they are delivered in the conventional order.
  • +
  • Elements may be gathered in any order, but faults must be delivered in a right-to-left order; thus, elements to the left of a faulting one may be gathered before the fault is delivered. A given implementation of this instruction is repeatable - given the same input values and architectural state, the same set of elements to the left of the faulting one will be gathered.
  • +
  • This instruction does not perform AC checks, and so will never deliver an AC fault.
  • +
  • Not valid with 16-bit effective addresses. Will deliver a #UD fault.
  • +
  • These instructions do not accept zeroing-masking since the 0 values in k1 are used to determine completion.
+

Note that the presence of VSIB byte is enforced in this instruction. Hence, the instruction will #UD fault if ModRM.rm is different than 100b.

+

This instruction has the same disp8*N and alignment rules as for scalar instructions (Tuple 1).

+

The instruction will #UD fault if the destination vector zmm1 is the same as index vector VINDEX. The instruction will #UD fault if the k0 mask register is specified.

+

The scaled index may require more bits to represent than the address bits used by the processor (e.g., in 32-bit mode, if the scale is greater than one). In this case, the most significant bits beyond the number of address bits are ignored.

+

Operation + ¶ +

+
BASE_ADDR stands for the memory operand base address (a GPR); may not exist
+VINDEX stands for the memory operand vector of indices (a ZMM register)
+SCALE stands for the memory operand scalar (1, 2, 4 or 8)
+DISP is the optional 1 or 4 byte displacement
+
+

VPGATHERQD (EVEX encoded version) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    k := j * 64
+    IF k1[j]
+        THEN DEST[i+31:i] := MEM[BASE_ADDR + (VINDEX[k+63:k]) * SCALE + DISP]
+            k1[j] := 0
+        ELSE *DEST[i+31:i] := remains unchanged*
+                ; Only merging masking is allowed
+    FI;
+ENDFOR
+k1[MAX_KL-1:KL] := 0
+DEST[MAXVL-1:VL/2] := 0
+
+

VPGATHERQQ (EVEX encoded version) + ¶ +

+
+(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j]
+        THEN DEST[i+63:i] :=
+            MEM[BASE_ADDR + (VINDEX[i+63:i]) * SCALE + DISP]
+            k1[j] := 0
+        ELSE *DEST[i+63:i] := remains unchanged*
+                ; Only merging masking is allowed
+    FI;
+ENDFOR
+k1[MAX_KL-1:KL] := 0
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPGATHERQD __m256i _mm512_i64gather_epi32(__m512i vdx, void * base, int scale);
+
+
VPGATHERQD __m256i _mm512_mask_i64gather_epi32lo(__m256i s, __mmask8 k, __m512i vdx, void * base, int scale);
+
+
VPGATHERQD __m128i _mm256_mask_i64gather_epi32lo(__m128i s, __mmask8 k, __m256i vdx, void * base, int scale);
+
+
VPGATHERQD __m128i _mm_mask_i64gather_epi32(__m128i s, __mmask8 k, __m128i vdx, void * base, int scale);
+
+
VPGATHERQQ __m512i _mm512_i64gather_epi64( __m512i vdx, void * base, int scale);
+
+
VPGATHERQQ __m512i _mm512_mask_i64gather_epi64(__m512i s, __mmask8 k, __m512i vdx, void * base, int scale);
+
+
VPGATHERQQ __m256i _mm256_mask_i64gather_epi64(__m256i s, __mmask8 k, __m256i vdx, void * base, int scale);
+
+
VPGATHERQQ __m128i _mm_mask_i64gather_epi64(__m128i s, __mmask8 k, __m128i vdx, void * base, int scale);
+
+
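A minimal usage sketch (not part of the manual text) of the masked qword gather listed above: lanes left unselected by the writemask keep the src value. It assumes AVX512F support and a flag such as -mavx512f.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    long long table[8] = {0, 1, 2, 3, 4, 5, 6, 7};
    __m512i idx = _mm512_setr_epi64(7, 6, 5, 4, 3, 2, 1, 0);
    __m512i src = _mm512_set1_epi64(-1);            /* kept in masked-off lanes */
    __mmask8 k = 0x0F;                              /* gather only the low four lanes */
    /* VPGATHERQQ zmm1 {k1}, vm64z; k1 is cleared as elements complete */
    __m512i g = _mm512_mask_i64gather_epi64(src, k, idx, table, 8);
    long long out[8];
    _mm512_storeu_si512(out, g);
    for (int i = 0; i < 8; i++) printf("%lld ", out[i]);   /* 7 6 5 4 -1 -1 -1 -1 */
    printf("\n");
    return 0;
}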

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-61, “Type E12 Class Exception Conditions.”

diff --git a/x86/vplzcntd.vplzcntq.html b/x86/vplzcntd.vplzcntq.html new file mode 100644 index 0000000..5dc401e --- /dev/null +++ b/x86/vplzcntd.vplzcntq.html @@ -0,0 +1,177 @@ + +VPLZCNTD/VPLZCNTQ + — Count the Number of Leading Zero Bits for Packed Dword, Packed Qword Values

VPLZCNTD/VPLZCNTQ + — Count the Number of Leading Zero Bits for Packed Dword, Packed Qword Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 44 /r VPLZCNTD xmm1 {k1}{z}, xmm2/m128/m32bcstAV/VAVX512VL AVX512CDCount the number of leading zero bits in each dword element of xmm2/m128/m32bcst using writemask k1.
EVEX.256.66.0F38.W0 44 /r VPLZCNTD ymm1 {k1}{z}, ymm2/m256/m32bcstAV/VAVX512VL AVX512CDCount the number of leading zero bits in each dword element of ymm2/m256/m32bcst using writemask k1.
EVEX.512.66.0F38.W0 44 /r VPLZCNTD zmm1 {k1}{z}, zmm2/m512/m32bcstAV/VAVX512CDCount the number of leading zero bits in each dword element of zmm2/m512/m32bcst using writemask k1.
EVEX.128.66.0F38.W1 44 /r VPLZCNTQ xmm1 {k1}{z}, xmm2/m128/m64bcstAV/VAVX512VL AVX512CDCount the number of leading zero bits in each qword element of xmm2/m128/m64bcst using writemask k1.
EVEX.256.66.0F38.W1 44 /r VPLZCNTQ ymm1 {k1}{z}, ymm2/m256/m64bcstAV/VAVX512VL AVX512CDCount the number of leading zero bits in each qword element of ymm2/m256/m64bcst using writemask k1.
EVEX.512.66.0F38.W1 44 /r VPLZCNTQ zmm1 {k1}{z}, zmm2/m512/m64bcstAV/VAVX512CDCount the number of leading zero bits in each qword element of zmm2/m512/m64bcst using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Counts the number of leading most significant zero bits in each dword or qword element of the source operand (the second operand) and stores the results in the destination register (the first operand) according to the writemask. If an element is zero, the result for that element is the operand size of the element.

+

EVEX.512 encoded version: The source operand is a ZMM register, a 512-bit memory location, or a 512-bit vector broadcasted from a 32/64-bit memory location. The destination operand is a ZMM register, conditionally updated using writemask k1.

+

EVEX.256 encoded version: The source operand is a YMM register, a 256-bit memory location, or a 256-bit vector broadcasted from a 32/64-bit memory location. The destination operand is a YMM register, conditionally updated using writemask k1.

+

EVEX.128 encoded version: The source operand is a XMM register, a 128-bit memory location, or a 128-bit vector broadcasted from a 32/64-bit memory location. The destination operand is a XMM register, conditionally updated using writemask k1.

+

EVEX.vvvv is reserved and must be 1111b; otherwise, the instruction will #UD.

+

Operation + ¶ +

+

VPLZCNTD + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j*32
+    IF MaskBit(j) OR *no writemask*
+        THEN
+                temp := 32
+                DEST[i+31:i] := 0
+                WHILE (temp > 0) AND (SRC[i+temp-1] = 0)
+                DO
+                    temp := temp - 1
+                    DEST[i+31:i] := DEST[i+31:i] + 1
+                OD
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE DEST[i+31:i] := 0
+            FI
+    FI
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPLZCNTQ + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j*64
+    IF MaskBit(j) OR *no writemask*
+        THEN
+                temp := 64
+                DEST[i+63:i] := 0
+                WHILE (temp > 0) AND (SRC[i+temp-1] = 0)
+                DO
+                    temp := temp - 1
+                    DEST[i+63:i] := DEST[i+63:i] + 1
+                OD
+        ELSE
+            IF *merging-masking*
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE DEST[i+63:i] := 0
+            FI
+    FI
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPLZCNTD __m512i _mm512_lzcnt_epi32(__m512i a);
+
+
VPLZCNTD __m512i _mm512_mask_lzcnt_epi32(__m512i s, __mmask16 m, __m512i a);
+
+
VPLZCNTD __m512i _mm512_maskz_lzcnt_epi32( __mmask16 m, __m512i a);
+
+
VPLZCNTQ __m512i _mm512_lzcnt_epi64(__m512i a);
+
+
VPLZCNTQ __m512i _mm512_mask_lzcnt_epi64(__m512i s, __mmask8 m, __m512i a);
+
+
VPLZCNTQ __m512i _mm512_maskz_lzcnt_epi64(__mmask8 m, __m512i a);
+
+
VPLZCNTD __m256i _mm256_lzcnt_epi32(__m256i a);
+
+
VPLZCNTD __m256i _mm256_mask_lzcnt_epi32(__m256i s, __mmask8 m, __m256i a);
+
+
VPLZCNTD __m256i _mm256_maskz_lzcnt_epi32( __mmask8 m, __m256i a);
+
+
VPLZCNTQ __m256i _mm256_lzcnt_epi64(__m256i a);
+
+
VPLZCNTQ __m256i _mm256_mask_lzcnt_epi64(__m256i s, __mmask8 m, __m256i a);
+
+
VPLZCNTQ __m256i _mm256_maskz_lzcnt_epi64(__mmask8 m, __m256i a);
+
+
VPLZCNTD __m128i _mm_lzcnt_epi32(__m128i a);
+
+
VPLZCNTD __m128i _mm_mask_lzcnt_epi32(__m128i s, __mmask8 m, __m128i a);
+
+
VPLZCNTD __m128i _mm_maskz_lzcnt_epi32( __mmask8 m, __m128i a);
+
+
VPLZCNTQ __m128i _mm_lzcnt_epi64(__m128i a);
+
+
VPLZCNTQ __m128i _mm_mask_lzcnt_epi64(__m128i s, __mmask8 m, __m128i a);
+
+
VPLZCNTQ __m128i _mm_maskz_lzcnt_epi64(__mmask8 m, __m128i a);
+
+
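A minimal usage sketch (not part of the manual text): per-element leading-zero counts, where a zero element yields the element width (32 here). It assumes AVX512CD support and flags such as -mavx512cd -mavx512f.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m512i v = _mm512_setr_epi32(0, 1, 2, 4, 8, 16, 255, 256,
                                  1 << 20, 1 << 30, 3, 5, 6, 7, 9, 10);
    __m512i lz = _mm512_lzcnt_epi32(v);     /* VPLZCNTD zmm1, zmm2 */
    int out[16];
    _mm512_storeu_si512(out, lz);
    /* 0 -> 32, 1 -> 31, 1<<30 -> 1, etc. */
    for (int i = 0; i < 16; i++) printf("%d ", out[i]);
    printf("\n");
    return 0;
}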

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vpmadd52huq.html b/x86/vpmadd52huq.html new file mode 100644 index 0000000..adbe304 --- /dev/null +++ b/x86/vpmadd52huq.html @@ -0,0 +1,118 @@ + +VPMADD52HUQ + — Packed Multiply of Unsigned 52-Bit Unsigned Integers and Add High 52-BitProducts to 64-Bit Accumulators

VPMADD52HUQ + — Packed Multiply of Unsigned 52-Bit Unsigned Integers and Add High 52-Bit Products to 64-Bit Accumulators

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUIDDescription
EVEX.128.66.0F38.W1 B5 /r VPMADD52HUQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstAV/VAVX512_IFMA AVX512VLMultiply unsigned 52-bit integers in xmm2 and xmm3/m128 and add the high 52 bits of the 104-bit product to the qword unsigned integers in xmm1 using writemask k1.
EVEX.256.66.0F38.W1 B5 /r VPMADD52HUQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstAV/VAVX512_IFMA AVX512VLMultiply unsigned 52-bit integers in ymm2 and ymm3/m256 and add the high 52 bits of the 104-bit product to the qword unsigned integers in ymm1 using writemask k1.
EVEX.512.66.0F38.W1 B5 /r VPMADD52HUQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstAV/VAVX512_IFMAMultiply unsigned 52-bit integers in zmm2 and zmm3/m512 and add the high 52 bits of the 104-bit product to the qword unsigned integers in zmm1 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m(r)N/A
+

Description + ¶ +

+

Multiplies packed unsigned 52-bit integers in each qword element of the first source operand (the second operand) with the packed unsigned 52-bit integers in the corresponding elements of the second source operand (the third operand) to form packed 104-bit intermediate results. The high 52-bit, unsigned integer of each 104-bit product is added to the corresponding qword unsigned integer of the destination operand (the first operand) under the writemask k1.

+

The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1 at 64-bit granularity.

+

Operation + ¶ +

+

VPMADD52HUQ (EVEX encoded) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64;
+    IF k1[j] OR *no writemask* THEN
+        IF src2 is Memory AND EVEX.b=1 THEN
+            tsrc2[63:0] := ZeroExtend64(src2[51:0]);
+        ELSE
+            tsrc2[63:0] := ZeroExtend64(src2[i+51:i]);
+        FI;
+        Temp128[127:0] := ZeroExtend64(src1[i+51:i]) * tsrc2[63:0];
+        Temp2[63:0] := DEST[i+63:i] + ZeroExtend64(Temp128[103:52]);
+        DEST[i+63:i] := Temp2[63:0];
+    ELSE
+        IF *zeroing-masking* THEN
+            DEST[i+63:i] := 0;
+        ELSE *merge-masking*
+            DEST[i+63:i] is unchanged;
+        FI;
+    FI;
+ENDFOR
+DEST[MAX_VL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPMADD52HUQ __m512i _mm512_madd52hi_epu64( __m512i a, __m512i b, __m512i c);
+
+
VPMADD52HUQ __m512i _mm512_mask_madd52hi_epu64(__m512i s, __mmask8 k, __m512i a, __m512i b, __m512i c);
+
+
VPMADD52HUQ __m512i _mm512_maskz_madd52hi_epu64( __mmask8 k, __m512i a, __m512i b, __m512i c);
+
+
VPMADD52HUQ __m256i _mm256_madd52hi_epu64( __m256i a, __m256i b, __m256i c);
+
+
VPMADD52HUQ __m256i _mm256_mask_madd52hi_epu64(__m256i s, __mmask8 k, __m256i a, __m256i b, __m256i c);
+
+
VPMADD52HUQ __m256i _mm256_maskz_madd52hi_epu64( __mmask8 k, __m256i a, __m256i b, __m256i c);
+
+
VPMADD52HUQ __m128i _mm_madd52hi_epu64( __m128i a, __m128i b, __m128i c);
+
+
VPMADD52HUQ __m128i _mm_mask_madd52hi_epu64(__m128i s, __mmask8 k, __m128i a, __m128i b, __m128i c);
+
+
VPMADD52HUQ __m128i _mm_maskz_madd52hi_epu64( __mmask8 k, __m128i a, __m128i b, __m128i c);
+
+
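A minimal usage sketch (not part of the manual text): with b = 2^51 and c = 2 the 104-bit product is 2^52, so its high 52 bits equal 1 and each accumulator is incremented by one. It assumes AVX512_IFMA support and a flag such as -mavx512ifma.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m512i acc = _mm512_set1_epi64(1000);          /* qword accumulators */
    __m512i b   = _mm512_set1_epi64(1LL << 51);     /* 52-bit multiplicand */
    __m512i c   = _mm512_set1_epi64(2);             /* product = 2^52 -> high 52 bits = 1 */
    __m512i r = _mm512_madd52hi_epu64(acc, b, c);   /* VPMADD52HUQ */
    long long out[8];
    _mm512_storeu_si512(out, r);
    printf("%lld\n", out[0]);                        /* 1001 in every lane */
    return 0;
}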

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vpmadd52luq.html b/x86/vpmadd52luq.html new file mode 100644 index 0000000..3d5ee96 --- /dev/null +++ b/x86/vpmadd52luq.html @@ -0,0 +1,118 @@ + +VPMADD52LUQ + — Packed Multiply of Unsigned 52-Bit Integers and Add the Low 52-Bit Productsto Qword Accumulators

VPMADD52LUQ + — Packed Multiply of Unsigned 52-Bit Integers and Add the Low 52-Bit Products to Qword Accumulators

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 Bit Mode SupportCPUIDDescription
EVEX.128.66.0F38.W1 B4 /r VPMADD52LUQ xmm1 {k1}{z}, xmm2,xmm3/m128/m64bcstAV/VAVX512_IFMA AVX512VLMultiply unsigned 52-bit integers in xmm2 and xmm3/m128 and add the low 52 bits of the 104-bit product to the qword unsigned integers in xmm1 using writemask k1.
EVEX.256.66.0F38.W1 B4 /r VPMADD52LUQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstAV/VAVX512_IFMA AVX512VLMultiply unsigned 52-bit integers in ymm2 and ymm3/m256 and add the low 52 bits of the 104-bit product to the qword unsigned integers in ymm1 using writemask k1.
EVEX.512.66.0F38.W1 B4 /r VPMADD52LUQ zmm1 {k1}{z}, zmm2,zmm3/m512/m64bcstAV/VAVX512_IFMAMultiply unsigned 52-bit integers in zmm2 and zmm3/m512 and add the low 52 bits of the 104-bit product to the qword unsigned integers in zmm1 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (r, w)EVEX.vvvv (r)ModRM:r/m(r)N/A
+

Description + ¶ +

+

Multiplies packed unsigned 52-bit integers in each qword element of the first source operand (the second operand) with the packed unsigned 52-bit integers in the corresponding elements of the second source operand (the third operand) to form packed 104-bit intermediate results. The low 52-bit, unsigned integer of each 104-bit product is added to the corresponding qword unsigned integer of the destination operand (the first operand) under the writemask k1.

+

The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1 at 64-bit granularity.

+

Operation + ¶ +

+

VPMADD52LUQ (EVEX encoded) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64;
+    IF k1[j] OR *no writemask* THEN
+        IF src2 is Memory AND EVEX.b=1 THEN
+            tsrc2[63:0] := ZeroExtend64(src2[51:0]);
+        ELSE
+            tsrc2[63:0] := ZeroExtend64(src2[i+51:i]);
+        FI;
+        Temp128[127:0] := ZeroExtend64(src1[i+51:i]) * tsrc2[63:0];
+        Temp2[63:0] := DEST[i+63:i] + ZeroExtend64(Temp128[51:0]);
+        DEST[i+63:i] := Temp2[63:0];
+    ELSE
+        IF *zeroing-masking* THEN
+            DEST[i+63:i] := 0;
+        ELSE *merge-masking*
+            DEST[i+63:i] is unchanged;
+        FI;
+    FI;
+ENDFOR
+DEST[MAX_VL-1:VL] := 0;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPMADD52LUQ __m512i _mm512_madd52lo_epu64( __m512i a, __m512i b, __m512i c);
+
+
VPMADD52LUQ __m512i _mm512_mask_madd52lo_epu64(__m512i s, __mmask8 k, __m512i a, __m512i b, __m512i c);
+
+
VPMADD52LUQ __m512i _mm512_maskz_madd52lo_epu64( __mmask8 k, __m512i a, __m512i b, __m512i c);
+
+
VPMADD52LUQ __m256i _mm256_madd52lo_epu64( __m256i a, __m256i b, __m256i c);
+
+
VPMADD52LUQ __m256i _mm256_mask_madd52lo_epu64(__m256i s, __mmask8 k, __m256i a, __m256i b, __m256i c);
+
+
VPMADD52LUQ __m256i _mm256_maskz_madd52lo_epu64( __mmask8 k, __m256i a, __m256i b, __m256i c);
+
+
VPMADD52LUQ __m128i _mm_madd52lo_epu64( __m128i a, __m128i b, __m128i c);
+
+
VPMADD52LUQ __m128i _mm_mask_madd52lo_epu64(__m128i s, __mmask8 k, __m128i a, __m128i b, __m128i c);
+
+
VPMADD52LUQ __m128i _mm_maskz_madd52lo_epu64( __mmask8 k, __m128i a, __m128i b, __m128i c);
+
+
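A minimal usage sketch (not part of the manual text) of the zero-masking form: with all mask bits set, every lane becomes acc + low52(b*c). This low-half form is the usual building block of 52-bit-limb big-integer multiplication. It assumes AVX512_IFMA support and a flag such as -mavx512ifma.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m512i acc = _mm512_set1_epi64(7);
    __m512i b   = _mm512_set1_epi64(3);
    __m512i c   = _mm512_set1_epi64(5);
    /* VPMADD52LUQ zmm1 {k1}{z}, zmm2, zmm3 with k1 = 0xFF (all lanes enabled) */
    __m512i r = _mm512_maskz_madd52lo_epu64(0xFF, acc, b, c);
    long long out[8];
    _mm512_storeu_si512(out, r);
    printf("%lld\n", out[0]);                        /* 7 + low52(3*5) = 22 in every lane */
    return 0;
}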

Flags Affected + ¶ +

+

None.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vpmaskmov.html b/x86/vpmaskmov.html new file mode 100644 index 0000000..82fed1b --- /dev/null +++ b/x86/vpmaskmov.html @@ -0,0 +1,199 @@ + +VPMASKMOV + — Conditional SIMD Integer Packed Loads and Stores

VPMASKMOV + — Conditional SIMD Integer Packed Loads and Stores

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 -bit ModeCPUID Feature FlagDescription
VEX.128.66.0F38.W0 8C /r VPMASKMOVD xmm1, xmm2, m128RVMV/VAVX2Conditionally load dword values from m128 using mask in xmm2 and store in xmm1.
VEX.256.66.0F38.W0 8C /r VPMASKMOVD ymm1, ymm2, m256RVMV/VAVX2Conditionally load dword values from m256 using mask in ymm2 and store in ymm1.
VEX.128.66.0F38.W1 8C /r VPMASKMOVQ xmm1, xmm2, m128RVMV/VAVX2Conditionally load qword values from m128 using mask in xmm2 and store in xmm1.
VEX.256.66.0F38.W1 8C /r VPMASKMOVQ ymm1, ymm2, m256RVMV/VAVX2Conditionally load qword values from m256 using mask in ymm2 and store in ymm1.
VEX.128.66.0F38.W0 8E /r VPMASKMOVD m128, xmm1, xmm2MVRV/VAVX2Conditionally store dword values from xmm2 using mask in xmm1.
VEX.256.66.0F38.W0 8E /r VPMASKMOVD m256, ymm1, ymm2MVRV/VAVX2Conditionally store dword values from ymm2 using mask in ymm1.
VEX.128.66.0F38.W1 8E /r VPMASKMOVQ m128, xmm1, xmm2MVRV/VAVX2Conditionally store qword values from xmm2 using mask in xmm1.
VEX.256.66.0F38.W1 8E /r VPMASKMOVQ m256, ymm1, ymm2MVRV/VAVX2Conditionally store qword values from ymm2 using mask in ymm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RVMModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
MVRModRM:r/m (w)VEX.vvvv (r)ModRM:reg (r)N/A
+

Description + ¶ +

+

Conditionally moves packed data elements from the second source operand into the corresponding data element of the destination operand, depending on the mask bits associated with each data element. The mask bits are specified in the first source operand.

+

The mask bit for each data element is the most significant bit of that element in the first source operand. If a mask is 1, the corresponding data element is copied from the second source operand to the destination operand. If the mask is 0, the corresponding data element is set to zero in the load form of these instructions, and unmodified in the store form.

+

The second source operand is a memory address for the load form of these instructions. The destination operand is a memory address for the store form of these instructions. The other operands are either XMM registers (for VEX.128 version) or YMM registers (for VEX.256 version).

+

Faults occur only due to mask-bit required memory accesses that caused the faults. Faults will not occur due to referencing any memory location if the corresponding mask bit for that memory location is 0. For example, no faults will be detected if the mask bits are all zero.

+

Unlike previous MASKMOV instructions (MASKMOVQ and MASKMOVDQU), a nontemporal hint is not applied to these instructions.

+

Instruction behavior on alignment check reporting with mask bits of less than all 1s is the same as with mask bits of all 1s.

+

VMASKMOV should not be used to access memory mapped I/O as the ordering of the individual loads or stores it does is implementation specific.

+

In cases where mask bits indicate that data should not be loaded or stored, paging A and D bits will be set in an implementation-dependent way. However, A and D bits are always set for pages where data is actually loaded/stored.

+

Note: for load forms, the first source (the mask) is encoded in VEX.vvvv; the second source is encoded in rm_field, and the destination register is encoded in reg_field.

+

Note: for store forms, the first source (the mask) is encoded in VEX.vvvv; the second source register is encoded in reg_field, and the destination memory location is encoded in rm_field.

+

Operation + ¶ +

+

VPMASKMOVD - 256-bit load + ¶ +

+
DEST[31:0] := IF (SRC1[31]) Load_32(mem) ELSE 0
+DEST[63:32] := IF (SRC1[63]) Load_32(mem + 4) ELSE 0
+DEST[95:64] := IF (SRC1[95]) Load_32(mem + 8) ELSE 0
+DEST[127:96] := IF (SRC1[127]) Load_32(mem + 12) ELSE 0
+DEST[159:128] := IF (SRC1[159]) Load_32(mem + 16) ELSE 0
+DEST[191:160] := IF (SRC1[191]) Load_32(mem + 20) ELSE 0
+DEST[223:192] := IF (SRC1[223]) Load_32(mem + 24) ELSE 0
+DEST[255:224] := IF (SRC1[255]) Load_32(mem + 28) ELSE 0
+
+

VPMASKMOVD -128-bit load + ¶ +

+
DEST[31:0] := IF (SRC1[31]) Load_32(mem) ELSE 0
+DEST[63:32] := IF (SRC1[63]) Load_32(mem + 4) ELSE 0
+DEST[95:64] := IF (SRC1[95]) Load_32(mem + 8) ELSE 0
+DEST[127:96] := IF (SRC1[127]) Load_32(mem + 12) ELSE 0
+DEST[MAXVL-1:128] := 0
+
+

VPMASKMOVQ - 256-bit load + ¶ +

+
DEST[63:0] := IF (SRC1[63]) Load_64(mem) ELSE 0
+DEST[127:64] := IF (SRC1[127]) Load_64(mem + 8) ELSE 0
+DEST[191:128] := IF (SRC1[191]) Load_64(mem + 16) ELSE 0
+DEST[255:192] := IF (SRC1[255]) Load_64(mem + 24) ELSE 0
+
+

VPMASKMOVQ - 128-bit load + ¶ +

+
DEST[63:0] := IF (SRC1[63]) Load_64(mem) ELSE 0
+DEST[127:64] := IF (SRC1[127]) Load_64(mem + 8) ELSE 0
+DEST[MAXVL-1:128] := 0
+
+

VPMASKMOVD - 256-bit store + ¶ +

+
IF (SRC1[31]) DEST[31:0] := SRC2[31:0]
+IF (SRC1[63]) DEST[63:32] := SRC2[63:32]
+IF (SRC1[95]) DEST[95:64] := SRC2[95:64]
+IF (SRC1[127]) DEST[127:96] := SRC2[127:96]
+IF (SRC1[159]) DEST[159:128] :=SRC2[159:128]
+IF (SRC1[191]) DEST[191:160] := SRC2[191:160]
+IF (SRC1[223]) DEST[223:192] := SRC2[223:192]
+IF (SRC1[255]) DEST[255:224] := SRC2[255:224]
+
+

VPMASKMOVD - 128-bit store + ¶ +

+
IF (SRC1[31]) DEST[31:0] := SRC2[31:0]
+IF (SRC1[63]) DEST[63:32] := SRC2[63:32]
+IF (SRC1[95]) DEST[95:64] := SRC2[95:64]
+IF (SRC1[127]) DEST[127:96] := SRC2[127:96]
+
+

VPMASKMOVQ - 256-bit store + ¶ +

+
IF (SRC1[63]) DEST[63:0] := SRC2[63:0]
+IF (SRC1[127]) DEST[127:64] :=SRC2[127:64]
+IF (SRC1[191]) DEST[191:128] := SRC2[191:128]
+IF (SRC1[255]) DEST[255:192] := SRC2[255:192]
+
+

VPMASKMOVQ - 128-bit store + ¶ +

+
IF (SRC1[63]) DEST[63:0] := SRC2[63:0]
+IF (SRC1[127]) DEST[127:64] :=SRC2[127:64]
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPMASKMOVD: __m256i _mm256_maskload_epi32(int const *a, __m256i mask)
+
+
VPMASKMOVD: void _mm256_maskstore_epi32(int *a, __m256i mask, __m256i b)
+
+
VPMASKMOVQ: __m256i _mm256_maskload_epi64(__int64 const *a, __m256i mask);
+
+
VPMASKMOVQ: void _mm256_maskstore_epi64(__int64 *a, __m256i mask, __m256i b);
+
+
VPMASKMOVD: __m128i _mm_maskload_epi32(int const *a, __m128i mask)
+
+
VPMASKMOVD: void _mm_maskstore_epi32(int *a, __m128i mask, __m128i b)
+
+
VPMASKMOVQ: __m128i _mm_maskload_epi64(__int64 const *a, __m128i mask);
+
+
VPMASKMOVQ: void _mm_maskstore_epi64(__int64 *a, __m128i mask, __m128i b);
+
+
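A minimal usage sketch (not part of the manual text) of a common tail-handling pattern: build a per-lane mask whose most significant bit is set for the first n lanes, then masked-load and masked-store only those dwords. It assumes AVX2 support and a flag such as -mavx2.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    int src[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int dst[8] = {0};
    int n = 5;                                        /* only 5 valid elements */
    __m256i lane = _mm256_setr_epi32(0, 1, 2, 3, 4, 5, 6, 7);
    __m256i mask = _mm256_cmpgt_epi32(_mm256_set1_epi32(n), lane);  /* MSB set for lanes < n */
    __m256i v = _mm256_maskload_epi32(src, mask);     /* VPMASKMOVD ymm1, ymm2, m256 */
    _mm256_maskstore_epi32(dst, mask, v);             /* VPMASKMOVD m256, ymm1, ymm2 */
    for (int i = 0; i < 8; i++) printf("%d ", dst[i]);  /* 1 2 3 4 5 0 0 0 */
    printf("\n");
    return 0;
}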

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-23, “Type 6 Class Exception Conditions” (No AC# reported for any mask bit combinations).

diff --git a/x86/vpmovb2m.vpmovw2m.vpmovd2m.vpmovq2m.html b/x86/vpmovb2m.vpmovw2m.vpmovd2m.vpmovq2m.html new file mode 100644 index 0000000..67c7b17 --- /dev/null +++ b/x86/vpmovb2m.vpmovw2m.vpmovd2m.vpmovq2m.html @@ -0,0 +1,208 @@ + +VPMOVB2M/VPMOVW2M/VPMOVD2M/VPMOVQ2M + — Convert a Vector Register to a Mask

VPMOVB2M/VPMOVW2M/VPMOVD2M/VPMOVQ2M + — Convert a Vector Register to a Mask

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.F3.0F38.W0 29 /r VPMOVB2M k1, xmm1RMV/VAVX512VL AVX512BWSets each bit in k1 to 1 or 0 based on the value of the most significant bit of the corresponding byte in XMM1.
EVEX.256.F3.0F38.W0 29 /r VPMOVB2M k1, ymm1RMV/VAVX512VL AVX512BWSets each bit in k1 to 1 or 0 based on the value of the most significant bit of the corresponding byte in YMM1.
EVEX.512.F3.0F38.W0 29 /r VPMOVB2M k1, zmm1RMV/VAVX512BWSets each bit in k1 to 1 or 0 based on the value of the most significant bit of the corresponding byte in ZMM1.
EVEX.128.F3.0F38.W1 29 /r VPMOVW2M k1, xmm1RMV/VAVX512VL AVX512BWSets each bit in k1 to 1 or 0 based on the value of the most significant bit of the corresponding word in XMM1.
EVEX.256.F3.0F38.W1 29 /r VPMOVW2M k1, ymm1RMV/VAVX512VL AVX512BWSets each bit in k1 to 1 or 0 based on the value of the most significant bit of the corresponding word in YMM1.
EVEX.512.F3.0F38.W1 29 /r VPMOVW2M k1, zmm1RMV/VAVX512BWSets each bit in k1 to 1 or 0 based on the value of the most significant bit of the corresponding word in ZMM1.
EVEX.128.F3.0F38.W0 39 /r VPMOVD2M k1, xmm1RMV/VAVX512VL AVX512DQSets each bit in k1 to 1 or 0 based on the value of the most significant bit of the corresponding doubleword in XMM1.
EVEX.256.F3.0F38.W0 39 /r VPMOVD2M k1, ymm1RMV/VAVX512VL AVX512DQSets each bit in k1 to 1 or 0 based on the value of the most significant bit of the corresponding doubleword in YMM1.
EVEX.512.F3.0F38.W0 39 /r VPMOVD2M k1, zmm1RMV/VAVX512DQSets each bit in k1 to 1 or 0 based on the value of the most significant bit of the corresponding doubleword in ZMM1.
EVEX.128.F3.0F38.W1 39 /r VPMOVQ2M k1, xmm1RMV/VAVX512VL AVX512DQSets each bit in k1 to 1 or 0 based on the value of the most significant bit of the corresponding quadword in XMM1.
EVEX.256.F3.0F38.W1 39 /r VPMOVQ2M k1, ymm1RMV/VAVX512VL AVX512DQSets each bit in k1 to 1 or 0 based on the value of the most significant bit of the corresponding quadword in YMM1.
EVEX.512.F3.0F38.W1 39 /r VPMOVQ2M k1, zmm1RMV/VAVX512DQSets each bit in k1 to 1 or 0 based on the value of the most significant bit of the corresponding quadword in ZMM1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts a vector register to a mask register. Each element in the destination register is set to 1 or 0 depending on the value of the most significant bit of the corresponding element in the source register.

+

The source operand is a ZMM/YMM/XMM register. The destination operand is a mask register.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+

VPMOVB2M (EVEX encoded versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF SRC[i+7]
+        THEN DEST[j]:=1
+        ELSE DEST[j] := 0
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

VPMOVW2M (EVEX encoded versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF SRC[i+15]
+        THEN DEST[j]:=1
+        ELSE DEST[j] := 0
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

VPMOVD2M (EVEX encoded versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF SRC[i+31]
+        THEN DEST[j]:=1
+        ELSE DEST[j] := 0
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

VPMOVQ2M (EVEX encoded versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF SRC[i+63]
+        THEN DEST[j]:=1
+        ELSE DEST[j] := 0
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPMOVB2M __mmask64 _mm512_movepi8_mask( __m512i );
+
+
VPMOVD2M __mmask16 _mm512_movepi32_mask( __m512i );
+
+
VPMOVQ2M __mmask8 _mm512_movepi64_mask( __m512i );
+
+
VPMOVW2M __mmask32 _mm512_movepi16_mask( __m512i );
+
+
VPMOVB2M __mmask32 _mm256_movepi8_mask( __m256i );
+
+
VPMOVD2M __mmask8 _mm256_movepi32_mask( __m256i );
+
+
VPMOVQ2M __mmask8 _mm256_movepi64_mask( __m256i );
+
+
VPMOVW2M __mmask16 _mm256_movepi16_mask( __m256i );
+
+
VPMOVB2M __mmask16 _mm_movepi8_mask( __m128i );
+
+
VPMOVD2M __mmask8 _mm_movepi32_mask( __m128i );
+
+
VPMOVQ2M __mmask8 _mm_movepi64_mask( __m128i );
+
+
VPMOVW2M __mmask8 _mm_movepi16_mask( __m128i );
+
+
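A short, hedged sketch of the 128-bit form using _mm_movepi8_mask from the list above; the byte values and expected mask are illustrative, and AVX512BW plus AVX512VL support is assumed (e.g., -mavx512bw -mavx512vl):

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    /* Bytes whose most significant bit is set (negative as signed char) produce a 1 bit in the mask. */
    __m128i v = _mm_setr_epi8(-1, 0, -128, 5, 7, -3, 0, 0,
                              0, 0, 0, 0, 0, 0, 0, -9);
    __mmask16 m = _mm_movepi8_mask(v);        /* bit j = MSB of byte j */
    printf("mask = 0x%04x\n", (unsigned)m);   /* expected: 0x8025 (bits 0, 2, 5, and 15) */
    return 0;
}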

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Table 2-55, “Type E7NM Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vpmovdb.vpmovsdb.vpmovusdb.html b/x86/vpmovdb.vpmovsdb.vpmovusdb.html new file mode 100644 index 0000000..c0cd96b --- /dev/null +++ b/x86/vpmovdb.vpmovsdb.vpmovusdb.html @@ -0,0 +1,284 @@ + +VPMOVDB/VPMOVSDB/VPMOVUSDB + — Down Convert DWord to Byte

VPMOVDB/VPMOVSDB/VPMOVUSDB + — Down Convert DWord to Byte

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.F3.0F38.W0 31 /r VPMOVDB xmm1/m32 {k1}{z}, xmm2AV/VAVX512VL AVX512FConverts 4 packed double-word integers from xmm2 into 4 packed byte integers in xmm1/m32 with truncation under writemask k1.
EVEX.128.F3.0F38.W0 21 /r VPMOVSDB xmm1/m32 {k1}{z}, xmm2AV/VAVX512VL AVX512FConverts 4 packed signed double-word integers from xmm2 into 4 packed signed byte integers in xmm1/m32 using signed saturation under writemask k1.
EVEX.128.F3.0F38.W0 11 /r VPMOVUSDB xmm1/m32 {k1}{z}, xmm2AV/VAVX512VL AVX512FConverts 4 packed unsigned double-word integers from xmm2 into 4 packed unsigned byte integers in xmm1/m32 using unsigned saturation under writemask k1.
EVEX.256.F3.0F38.W0 31 /r VPMOVDB xmm1/m64 {k1}{z}, ymm2AV/VAVX512VL AVX512FConverts 8 packed double-word integers from ymm2 into 8 packed byte integers in xmm1/m64 with truncation under writemask k1.
EVEX.256.F3.0F38.W0 21 /r VPMOVSDB xmm1/m64 {k1}{z}, ymm2AV/VAVX512VL AVX512FConverts 8 packed signed double-word integers from ymm2 into 8 packed signed byte integers in xmm1/m64 using signed saturation under writemask k1.
EVEX.256.F3.0F38.W0 11 /r VPMOVUSDB xmm1/m64 {k1}{z}, ymm2AV/VAVX512VL AVX512FConverts 8 packed unsigned double-word integers from ymm2 into 8 packed unsigned byte integers in xmm1/m64 using unsigned saturation under writemask k1.
EVEX.512.F3.0F38.W0 31 /r VPMOVDB xmm1/m128 {k1}{z}, zmm2AV/VAVX512FConverts 16 packed double-word integers from zmm2 into 16 packed byte integers in xmm1/m128 with truncation under writemask k1.
EVEX.512.F3.0F38.W0 21 /r VPMOVSDB xmm1/m128 {k1}{z}, zmm2AV/VAVX512FConverts 16 packed signed double-word integers from zmm2 into 16 packed signed byte integers in xmm1/m128 using signed saturation under writemask k1.
EVEX.512.F3.0F38.W0 11 /r VPMOVUSDB xmm1/m128 {k1}{z}, zmm2AV/VAVX512FConverts 16 packed unsigned double-word integers from zmm2 into 16 packed unsigned byte integers in xmm1/m128 using unsigned saturation under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AQuarter MemModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

VPMOVDB down converts 32-bit integer elements in the source operand (the second operand) into packed bytes using truncation. VPMOVSDB converts signed 32-bit integers into packed signed bytes using signed saturation. VPMOVUSDB converts unsigned double-word values into unsigned byte values using unsigned saturation.

+

The source operand is a ZMM/YMM/XMM register. The destination operand is a XMM register or a 128/64/32-bit memory location.

+

Down-converted byte elements are written to the destination operand (the first operand) from the least-significant byte. Byte elements of the destination operand are updated according to the writemask. Bits (MAXVL-1:128/64/32) of the register destination are zeroed.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+

VPMOVDB instruction (EVEX encoded versions) when dest is a register + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    m := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := TruncateDoubleWordToByte (SRC[m+31:m])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+7:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/4] := 0;
+
+

VPMOVDB instruction (EVEX encoded versions) when dest is memory + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    m := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := TruncateDoubleWordToByte (SRC[m+31:m])
+        ELSE *DEST[i+7:i] remains unchanged* ; merging-masking
+    FI;
+ENDFOR
+
+

VPMOVSDB instruction (EVEX encoded versions) when dest is a register + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    m := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := SaturateSignedDoubleWordToByte (SRC[m+31:m])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+7:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/4] := 0;
+
+

VPMOVSDB instruction (EVEX encoded versions) when dest is memory + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    m := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := SaturateSignedDoubleWordToByte (SRC[m+31:m])
+        ELSE *DEST[i+7:i] remains unchanged* ; merging-masking
+    FI;
+ENDFOR
+
+

VPMOVUSDB instruction (EVEX encoded versions) when dest is a register + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    m := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := SaturateUnsignedDoubleWordToByte (SRC[m+31:m])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+7:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/4] := 0;
+
+

VPMOVUSDB instruction (EVEX encoded versions) when dest is memory + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    m := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := SaturateUnsignedDoubleWordToByte (SRC[m+31:m])
+        ELSE *DEST[i+7:i] remains unchanged* ; merging-masking
+    FI;
+ENDFOR
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPMOVDB __m128i _mm512_cvtepi32_epi8( __m512i a);
+
+
VPMOVDB __m128i _mm512_mask_cvtepi32_epi8(__m128i s, __mmask16 k, __m512i a);
+
+
VPMOVDB __m128i _mm512_maskz_cvtepi32_epi8( __mmask16 k, __m512i a);
+
+
VPMOVDB void _mm512_mask_cvtepi32_storeu_epi8(void * d, __mmask16 k, __m512i a);
+
+
VPMOVSDB __m128i _mm512_cvtsepi32_epi8( __m512i a);
+
+
VPMOVSDB __m128i _mm512_mask_cvtsepi32_epi8(__m128i s, __mmask16 k, __m512i a);
+
+
VPMOVSDB __m128i _mm512_maskz_cvtsepi32_epi8( __mmask16 k, __m512i a);
+
+
VPMOVSDB void _mm512_mask_cvtsepi32_storeu_epi8(void * d, __mmask16 k, __m512i a);
+
+
VPMOVUSDB __m128i _mm512_cvtusepi32_epi8( __m512i a);
+
+
VPMOVUSDB __m128i _mm512_mask_cvtusepi32_epi8(__m128i s, __mmask16 k, __m512i a);
+
+
VPMOVUSDB __m128i _mm512_maskz_cvtusepi32_epi8( __mmask16 k, __m512i a);
+
+
VPMOVUSDB void _mm512_mask_cvtusepi32_storeu_epi8(void * d, __mmask16 k, __m512i a);
+
+
VPMOVUSDB __m128i _mm256_cvtusepi32_epi8(__m256i a);
+
+
VPMOVUSDB __m128i _mm256_mask_cvtusepi32_epi8(__m128i a, __mmask8 k, __m256i b);
+
+
VPMOVUSDB __m128i _mm256_maskz_cvtusepi32_epi8( __mmask8 k, __m256i b);
+
+
VPMOVUSDB void _mm256_mask_cvtusepi32_storeu_epi8(void * , __mmask8 k, __m256i b);
+
+
VPMOVUSDB __m128i _mm_cvtusepi32_epi8(__m128i a);
+
+
VPMOVUSDB __m128i _mm_mask_cvtusepi32_epi8(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVUSDB __m128i _mm_maskz_cvtusepi32_epi8( __mmask8 k, __m128i b);
+
+
VPMOVUSDB void _mm_mask_cvtusepi32_storeu_epi8(void * , __mmask8 k, __m128i b);
+
+
VPMOVSDB __m128i _mm256_cvtsepi32_epi8(__m256i a);
+
+
VPMOVSDB __m128i _mm256_mask_cvtsepi32_epi8(__m128i a, __mmask8 k, __m256i b);
+
+
VPMOVSDB __m128i _mm256_maskz_cvtsepi32_epi8( __mmask8 k, __m256i b);
+
+
VPMOVSDB void _mm256_mask_cvtsepi32_storeu_epi8(void * , __mmask8 k, __m256i b);
+
+
VPMOVSDB __m128i _mm_cvtsepi32_epi8(__m128i a);
+
+
VPMOVSDB __m128i _mm_mask_cvtsepi32_epi8(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVSDB __m128i _mm_maskz_cvtsepi32_epi8( __mmask8 k, __m128i b);
+
+
VPMOVSDB void _mm_mask_cvtsepi32_storeu_epi8(void * , __mmask8 k, __m128i b);
+
+
VPMOVDB __m128i _mm256_cvtepi32_epi8(__m256i a);
+
+
VPMOVDB __m128i _mm256_mask_cvtepi32_epi8(__m128i a, __mmask8 k, __m256i b);
+
+
VPMOVDB __m128i _mm256_maskz_cvtepi32_epi8( __mmask8 k, __m256i b);
+
+
VPMOVDB void _mm256_mask_cvtepi32_storeu_epi8(void * , __mmask8 k, __m256i b);
+
+
VPMOVDB __m128i _mm_cvtepi32_epi8(__m128i a);
+
+
VPMOVDB __m128i _mm_mask_cvtepi32_epi8(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVDB __m128i _mm_maskz_cvtepi32_epi8( __mmask8 k, __m128i b);
+
+
VPMOVDB void _mm_mask_cvtepi32_storeu_epi8(void * , __mmask8 k, __m128i b);
+
+
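A hedged sketch contrasting truncation, signed saturation, and unsigned saturation with the 128-bit intrinsics listed above; the input values and printed results are illustrative, and AVX512F plus AVX512VL support is assumed:

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128i d = _mm_setr_epi32(300, -300, 100, -1);
    __m128i t = _mm_cvtepi32_epi8(d);    /* VPMOVDB:   keep the low 8 bits of each dword */
    __m128i s = _mm_cvtsepi32_epi8(d);   /* VPMOVSDB:  clamp each dword to [-128, 127]   */
    __m128i u = _mm_cvtusepi32_epi8(d);  /* VPMOVUSDB: clamp to [0, 255]; -300 and -1 are
                                            treated as large unsigned values and clamp to 255 */
    signed char out[16];
    _mm_storeu_si128((__m128i *)out, s);
    printf("signed saturation: %d %d %d %d\n", out[0], out[1], out[2], out[3]);
    /* expected: 127 -128 100 -1 */
    (void)t; (void)u;
    return 0;
}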

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Table 2-53, “Type E6 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vpmovdw.vpmovsdw.vpmovusdw.html b/x86/vpmovdw.vpmovsdw.vpmovusdw.html new file mode 100644 index 0000000..bb8e2cf --- /dev/null +++ b/x86/vpmovdw.vpmovsdw.vpmovusdw.html @@ -0,0 +1,291 @@ + +VPMOVDW/VPMOVSDW/VPMOVUSDW + — Down Convert DWord to Word

VPMOVDW/VPMOVSDW/VPMOVUSDW + — Down Convert DWord to Word

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.F3.0F38.W0 33 /r VPMOVDW xmm1/m64 {k1}{z}, xmm2AV/VAVX512VL AVX512FConverts 4 packed double-word integers from xmm2 into 4 packed word integers in xmm1/m64 with truncation under writemask k1.
EVEX.128.F3.0F38.W0 23 /r VPMOVSDW xmm1/m64 {k1}{z}, xmm2AV/VAVX512VL AVX512FConverts 4 packed signed double-word integers from xmm2 into 4 packed signed word integers in xmm1/m64 using signed saturation under writemask k1.
EVEX.128.F3.0F38.W0 13 /r VPMOVUSDW xmm1/m64 {k1}{z}, xmm2AV/VAVX512VL AVX512FConverts 4 packed unsigned double-word integers from xmm2 into 4 packed unsigned word integers in xmm1/m64 using unsigned saturation under writemask k1.
EVEX.256.F3.0F38.W0 33 /r VPMOVDW xmm1/m128 {k1}{z}, ymm2AV/VAVX512VL AVX512FConverts 8 packed double-word integers from ymm2 into 8 packed word integers in xmm1/m128 with truncation under writemask k1.
EVEX.256.F3.0F38.W0 23 /r VPMOVSDW xmm1/m128 {k1}{z}, ymm2AV/VAVX512VL AVX512FConverts 8 packed signed double-word integers from ymm2 into 8 packed signed word integers in xmm1/m128 using signed saturation under writemask k1.
EVEX.256.F3.0F38.W0 13 /r VPMOVUSDW xmm1/m128 {k1}{z}, ymm2AV/VAVX512VL AVX512FConverts 8 packed unsigned double-word integers from ymm2 into 8 packed unsigned word integers in xmm1/m128 using unsigned saturation under writemask k1.
EVEX.512.F3.0F38.W0 33 /r VPMOVDW ymm1/m256 {k1}{z}, zmm2AV/VAVX512FConverts 16 packed double-word integers from zmm2 into 16 packed word integers in ymm1/m256 with truncation under writemask k1.
EVEX.512.F3.0F38.W0 23 /r VPMOVSDW ymm1/m256 {k1}{z}, zmm2AV/VAVX512FConverts 16 packed signed double-word integers from zmm2 into 16 packed signed word integers in ymm1/m256 using signed saturation under writemask k1.
EVEX.512.F3.0F38.W0 13 /r VPMOVUSDW ymm1/m256 {k1}{z}, zmm2AV/VAVX512FConverts 16 packed unsigned double-word integers from zmm2 into 16 packed unsigned word integers in ymm1/m256 using unsigned saturation under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AHalf MemModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

VPMOVDW down converts 32-bit integer elements in the source operand (the second operand) into packed words using truncation. VPMOVSDW converts signed 32-bit integers into packed signed words using signed saturation. VPMOVUSDW converts unsigned double-word values into unsigned word values using unsigned saturation.

+

The source operand is a ZMM/YMM/XMM register. The destination operand is a YMM/XMM/XMM register or a 256/128/64-bit memory location.

+

Down-converted word elements are written to the destination operand (the first operand) from the least-significant word. Word elements of the destination operand are updated according to the writemask. Bits (MAXVL-1:256/128/64) of the register destination are zeroed.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+

VPMOVDW instruction (EVEX encoded versions) when dest is a register + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    m := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := TruncateDoubleWordToWord (SRC[m+31:m])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/2] := 0;
+
+

VPMOVDW instruction (EVEX encoded versions) when dest is memory + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    m := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := TruncateDoubleWordToWord (SRC[m+31:m])
+        ELSE
+            *DEST[i+15:i] remains unchanged* ; merging-masking
+    FI;
+ENDFOR
+
+

VPMOVSDW instruction (EVEX encoded versions) when dest is a register + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    m := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := SaturateSignedDoubleWordToWord (SRC[m+31:m])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/2] := 0;
+
+

VPMOVSDW instruction (EVEX encoded versions) when dest is memory + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    m := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := SaturateSignedDoubleWordToWord (SRC[m+31:m])
+        ELSE
+            *DEST[i+15:i] remains unchanged* ; merging-masking
+    FI;
+ENDFOR
+
+

VPMOVUSDW instruction (EVEX encoded versions) when dest is a register + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    m := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := SaturateUnsignedDoubleWordToWord (SRC[m+31:m])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/2] := 0;
+
+

VPMOVUSDW instruction (EVEX encoded versions) when dest is memory + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    m := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := SaturateUnsignedDoubleWordToWord (SRC[m+31:m])
+        ELSE
+            *DEST[i+15:i] remains unchanged*
+                ; merging-masking
+    FI;
+ENDFOR
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPMOVDW __m256i _mm512_cvtepi32_epi16( __m512i a);
+
+
VPMOVDW __m256i _mm512_mask_cvtepi32_epi16(__m256i s, __mmask16 k, __m512i a);
+
+
VPMOVDW __m256i _mm512_maskz_cvtepi32_epi16( __mmask16 k, __m512i a);
+
+
VPMOVDW void _mm512_mask_cvtepi32_storeu_epi16(void * d, __mmask16 k, __m512i a);
+
+
VPMOVSDW __m256i _mm512_cvtsepi32_epi16( __m512i a);
+
+
VPMOVSDW __m256i _mm512_mask_cvtsepi32_epi16(__m256i s, __mmask16 k, __m512i a);
+
+
VPMOVSDW __m256i _mm512_maskz_cvtsepi32_epi16( __mmask16 k, __m512i a);
+
+
VPMOVSDW void _mm512_mask_cvtsepi32_storeu_epi16(void * d, __mmask16 k, __m512i a);
+
+
VPMOVUSDW __m256i _mm512_cvtusepi32_epi16( __m512i a);
+
+
VPMOVUSDW __m256i _mm512_mask_cvtusepi32_epi16(__m256i s, __mmask16 k, __m512i a);
+
+
VPMOVUSDW __m256i _mm512_maskz_cvtusepi32_epi16( __mmask16 k, __m512i a);
+
+
VPMOVUSDW void _mm512_mask_cvtusepi32_storeu_epi16(void * d, __mmask16 k, __m512i a);
+
+
VPMOVUSDW __m128i _mm256_cvtusepi32_epi16(__m256i a);
+
+
VPMOVUSDW __m128i _mm256_mask_cvtusepi32_epi16(__m128i a, __mmask8 k, __m256i b);
+
+
VPMOVUSDW __m128i _mm256_maskz_cvtusepi32_epi16( __mmask8 k, __m256i b);
+
+
VPMOVUSDW void _mm256_mask_cvtusepi32_storeu_epi16(void * , __mmask8 k, __m256i b);
+
+
VPMOVUSDW __m128i _mm_cvtusepi32_epi16(__m128i a);
+
+
VPMOVUSDW __m128i _mm_mask_cvtusepi32_epi16(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVUSDW __m128i _mm_maskz_cvtusepi32_epi16( __mmask8 k, __m128i b);
+
+
VPMOVUSDW void _mm_mask_cvtusepi32_storeu_epi16(void * , __mmask8 k, __m128i b);
+
+
VPMOVSDW __m128i _mm256_cvtsepi32_epi16(__m256i a);
+
+
VPMOVSDW __m128i _mm256_mask_cvtsepi32_epi16(__m128i a, __mmask8 k, __m256i b);
+
+
VPMOVSDW __m128i _mm256_maskz_cvtsepi32_epi16( __mmask8 k, __m256i b);
+
+
VPMOVSDW void _mm256_mask_cvtsepi32_storeu_epi16(void * , __mmask8 k, __m256i b);
+
+
VPMOVSDW __m128i _mm_cvtsepi32_epi16(__m128i a);
+
+
VPMOVSDW __m128i _mm_mask_cvtsepi32_epi16(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVSDW __m128i _mm_maskz_cvtsepi32_epi16( __mmask8 k, __m128i b);
+
+
VPMOVSDW void _mm_mask_cvtsepi32_storeu_epi16(void * , __mmask8 k, __m128i b);
+
+
VPMOVDW __m128i _mm256_cvtepi32_epi16(__m256i a);
+
+
VPMOVDW __m128i _mm256_mask_cvtepi32_epi16(__m128i a, __mmask8 k, __m256i b);
+
+
VPMOVDW __m128i _mm256_maskz_cvtepi32_epi16( __mmask8 k, __m256i b);
+
+
VPMOVDW void _mm256_mask_cvtepi32_storeu_epi16(void * , __mmask8 k, __m256i b);
+
+
VPMOVDW __m128i _mm_cvtepi32_epi16(__m128i a);
+
+
VPMOVDW __m128i _mm_mask_cvtepi32_epi16(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVDW __m128i _mm_maskz_cvtepi32_epi16( __mmask8 k, __m128i b);
+
+
VPMOVDW void _mm_mask_cvtepi32_storeu_epi16(void * , __mmask8 k, __m128i b);
+
+
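A hedged sketch of the masked store form _mm256_mask_cvtepi32_storeu_epi16 from the list above, showing that only the word elements selected by the writemask are written to memory; the data, mask, and expected output are illustrative, and AVX512F plus AVX512VL support is assumed:

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m256i d = _mm256_setr_epi32(1, 2, 70000, 4, 5, 6, 7, 8);
    short out[8] = {-1, -1, -1, -1, -1, -1, -1, -1};
    _mm256_mask_cvtepi32_storeu_epi16(out, (__mmask8)0x0F, d);  /* store word elements 0..3 only */
    for (int i = 0; i < 8; i++)
        printf("%d ", out[i]);
    printf("\n");   /* expected: 1 2 4464 4 -1 -1 -1 -1  (70000 mod 65536 = 4464) */
    return 0;
}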

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Table 2-53, “Type E6 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vpmovm2b.vpmovm2w.vpmovm2d.vpmovm2q.html b/x86/vpmovm2b.vpmovm2w.vpmovm2d.vpmovm2q.html new file mode 100644 index 0000000..9bb2449 --- /dev/null +++ b/x86/vpmovm2b.vpmovm2w.vpmovm2d.vpmovm2q.html @@ -0,0 +1,208 @@ + +VPMOVM2B/VPMOVM2W/VPMOVM2D/VPMOVM2Q + — Convert a Mask Register to a VectorRegister

VPMOVM2B/VPMOVM2W/VPMOVM2D/VPMOVM2Q + — Convert a Mask Register to a Vector Register

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.F3.0F38.W0 28 /r VPMOVM2B xmm1, k1RMV/VAVX512VL AVX512BWSets each byte in XMM1 to all 1’s or all 0’s based on the value of the corresponding bit in k1.
EVEX.256.F3.0F38.W0 28 /r VPMOVM2B ymm1, k1RMV/VAVX512VL AVX512BWSets each byte in YMM1 to all 1’s or all 0’s based on the value of the corresponding bit in k1.
EVEX.512.F3.0F38.W0 28 /r VPMOVM2B zmm1, k1RMV/VAVX512BWSets each byte in ZMM1 to all 1’s or all 0’s based on the value of the corresponding bit in k1.
EVEX.128.F3.0F38.W1 28 /r VPMOVM2W xmm1, k1RMV/VAVX512VL AVX512BWSets each word in XMM1 to all 1’s or all 0’s based on the value of the corresponding bit in k1.
EVEX.256.F3.0F38.W1 28 /r VPMOVM2W ymm1, k1RMV/VAVX512VL AVX512BWSets each word in YMM1 to all 1’s or all 0’s based on the value of the corresponding bit in k1.
EVEX.512.F3.0F38.W1 28 /r VPMOVM2W zmm1, k1RMV/VAVX512BWSets each word in ZMM1 to all 1’s or all 0’s based on the value of the corresponding bit in k1.
EVEX.128.F3.0F38.W0 38 /r VPMOVM2D xmm1, k1RMV/VAVX512VL AVX512DQSets each doubleword in XMM1 to all 1’s or all 0’s based on the value of the corresponding bit in k1.
EVEX.256.F3.0F38.W0 38 /r VPMOVM2D ymm1, k1RMV/VAVX512VL AVX512DQSets each doubleword in YMM1 to all 1’s or all 0’s based on the value of the corresponding bit in k1.
EVEX.512.F3.0F38.W0 38 /r VPMOVM2D zmm1, k1RMV/VAVX512DQSets each doubleword in ZMM1 to all 1’s or all 0’s based on the value of the corresponding bit in k1.
EVEX.128.F3.0F38.W1 38 /r VPMOVM2Q xmm1, k1RMV/VAVX512VL AVX512DQSets each quadword in XMM1 to all 1’s or all 0’s based on the value of the corresponding bit in k1.
EVEX.256.F3.0F38.W1 38 /r VPMOVM2Q ymm1, k1RMV/VAVX512VL AVX512DQSets each quadword in YMM1 to all 1’s or all 0’s based on the value of the corresponding bit in k1.
EVEX.512.F3.0F38.W1 38 /r VPMOVM2Q zmm1, k1RMV/VAVX512DQSets each quadword in ZMM1 to all 1’s or all 0’s based on the value of the corresponding bit in k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Converts a mask register to a vector register. Each element in the destination register is set to all 1’s or all 0’s depending on the value of the corresponding bit in the source mask register.

+

The source operand is a mask register. The destination operand is a ZMM/YMM/XMM register.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+

VPMOVM2B (EVEX encoded versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF SRC[j]
+        THEN DEST[i+7:i] := -1
+        ELSE DEST[i+7:i] := 0
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPMOVM2W (EVEX encoded versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF SRC[j]
+        THEN DEST[i+15:i] := -1
+        ELSE DEST[i+15:i] := 0
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPMOVM2D (EVEX encoded versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF SRC[j]
+        THEN DEST[i+31:i] := -1
+        ELSE DEST[i+31:i] := 0
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPMOVM2Q (EVEX encoded versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF SRC[j]
+        THEN DEST[i+63:i] := -1
+        ELSE DEST[i+63:i] := 0
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPMOVM2B __m512i _mm512_movm_epi8(__mmask64 );
+
+
VPMOVM2D __m512i _mm512_movm_epi32(__mmask16 );
+
+
VPMOVM2Q __m512i _mm512_movm_epi64(__mmask8 );
+
+
VPMOVM2W __m512i _mm512_movm_epi16(__mmask32 );
+
+
VPMOVM2B __m256i _mm256_movm_epi8(__mmask32 );
+
+
VPMOVM2D __m256i _mm256_movm_epi32(__mmask8 );
+
+
VPMOVM2Q __m256i _mm256_movm_epi64(__mmask8 );
+
+
VPMOVM2W __m256i _mm256_movm_epi16(__mmask16 );
+
+
VPMOVM2B __m128i _mm_movm_epi8(__mmask16 );
+
+
VPMOVM2D __m128i _mm_movm_epi32(__mmask8 );
+
+
VPMOVM2Q __m128i _mm_movm_epi64(__mmask8 );
+
+
VPMOVM2W __m128i _mm_movm_epi16(__mmask8 );
+
+
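A hedged sketch of the 128-bit form _mm_movm_epi8 from the list above; the mask value and expected bytes are illustrative, and AVX512BW plus AVX512VL support is assumed:

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __mmask16 m = 0x00A5;                 /* bits 0, 2, 5, and 7 set */
    __m128i v = _mm_movm_epi8(m);         /* each set bit becomes an all-1s byte */
    unsigned char b[16];
    _mm_storeu_si128((__m128i *)b, v);
    for (int i = 0; i < 16; i++)
        printf("%02x ", b[i]);
    printf("\n");   /* expected: ff 00 ff 00 00 ff 00 ff followed by eight 00 bytes */
    return 0;
}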

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Table 2-55, “Type E7NM Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vpmovqb.vpmovsqb.vpmovusqb.html b/x86/vpmovqb.vpmovsqb.vpmovusqb.html new file mode 100644 index 0000000..006b080 --- /dev/null +++ b/x86/vpmovqb.vpmovsqb.vpmovusqb.html @@ -0,0 +1,289 @@ + +VPMOVQB/VPMOVSQB/VPMOVUSQB + — Down Convert QWord to Byte

VPMOVQB/VPMOVSQB/VPMOVUSQB + — Down Convert QWord to Byte

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.F3.0F38.W0 32 /r VPMOVQB xmm1/m16 {k1}{z}, xmm2AV/VAVX512VL AVX512FConverts 2 packed quad-word integers from xmm2 into 2 packed byte integers in xmm1/m16 with truncation under writemask k1.
EVEX.128.F3.0F38.W0 22 /r VPMOVSQB xmm1/m16 {k1}{z}, xmm2AV/VAVX512VL AVX512FConverts 2 packed signed quad-word integers from xmm2 into 2 packed signed byte integers in xmm1/m16 using signed saturation under writemask k1.
EVEX.128.F3.0F38.W0 12 /r VPMOVUSQB xmm1/m16 {k1}{z}, xmm2AV/VAVX512VL AVX512FConverts 2 packed unsigned quad-word integers from xmm2 into 2 packed unsigned byte integers in xmm1/m16 using unsigned saturation under writemask k1.
EVEX.256.F3.0F38.W0 32 /r VPMOVQB xmm1/m32 {k1}{z}, ymm2AV/VAVX512VL AVX512FConverts 4 packed quad-word integers from ymm2 into 4 packed byte integers in xmm1/m32 with truncation under writemask k1.
EVEX.256.F3.0F38.W0 22 /r VPMOVSQB xmm1/m32 {k1}{z}, ymm2AV/VAVX512VL AVX512FConverts 4 packed signed quad-word integers from ymm2 into 4 packed signed byte integers in xmm1/m32 using signed saturation under writemask k1.
EVEX.256.F3.0F38.W0 12 /r VPMOVUSQB xmm1/m32 {k1}{z}, ymm2AV/VAVX512VL AVX512FConverts 4 packed unsigned quad-word integers from ymm2 into 4 packed unsigned byte integers in xmm1/m32 using unsigned saturation under writemask k1.
EVEX.512.F3.0F38.W0 32 /r VPMOVQB xmm1/m64 {k1}{z}, zmm2AV/VAVX512FConverts 8 packed quad-word integers from zmm2 into 8 packed byte integers in xmm1/m64 with truncation under writemask k1.
EVEX.512.F3.0F38.W0 22 /r VPMOVSQB xmm1/m64 {k1}{z}, zmm2AV/VAVX512FConverts 8 packed signed quad-word integers from zmm2 into 8 packed signed byte integers in xmm1/m64 using signed saturation under writemask k1.
EVEX.512.F3.0F38.W0 12 /r VPMOVUSQB xmm1/m64 {k1}{z}, zmm2AV/VAVX512FConverts 8 packed unsigned quad-word integers from zmm2 into 8 packed unsigned byte integers in xmm1/m64 using unsigned saturation under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AEighth MemModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

VPMOVQB down converts 64-bit integer elements in the source operand (the second operand) into packed byte elements using truncation. VPMOVSQB converts signed 64-bit integers into packed signed bytes using signed saturation. VPMOVUSQB converts unsigned quad-word values into unsigned byte values using unsigned saturation. The source operand is a vector register. The destination operand is an XMM register or a memory location.

+

Down-converted byte elements are written to the destination operand (the first operand) from the least-significant byte. Byte elements of the destination operand are updated according to the writemask. Bits (MAXVL-1:64/32/16) of the destination are zeroed.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+

VPMOVQB instruction (EVEX encoded versions) when dest is a register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    m := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := TruncateQuadWordToByte (SRC[m+63:m])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+7:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/8] := 0;
+
+

VPMOVQB instruction (EVEX encoded versions) when dest is memory + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    m := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := TruncateQuadWordToByte (SRC[m+63:m])
+        ELSE
+            *DEST[i+7:i] remains unchanged* ; merging-masking
+    FI;
+ENDFOR
+
+

VPMOVSQB instruction (EVEX encoded versions) when dest is a register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    m := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := SaturateSignedQuadWordToByte (SRC[m+63:m])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+7:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/8] := 0;
+
+

VPMOVSQB instruction (EVEX encoded versions) when dest is memory + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    m := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := SaturateSignedQuadWordToByte (SRC[m+63:m])
+        ELSE
+            *DEST[i+7:i] remains unchanged* ; merging-masking
+    FI;
+ENDFOR
+
+

VPMOVUSQB instruction (EVEX encoded versions) when dest is a register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    m := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := SaturateUnsignedQuadWordToByte (SRC[m+63:m])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+7:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/8] := 0;
+
+

VPMOVUSQB instruction (EVEX encoded versions) when dest is memory + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    m := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := SaturateUnsignedQuadWordToByte (SRC[m+63:m])
+        ELSE
+            *DEST[i+7:i] remains unchanged* ; merging-masking
+    FI;
+ENDFOR
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPMOVQB __m128i _mm512_cvtepi64_epi8( __m512i a);
+
+
VPMOVQB __m128i _mm512_mask_cvtepi64_epi8(__m128i s, __mmask8 k, __m512i a);
+
+
VPMOVQB __m128i _mm512_maskz_cvtepi64_epi8( __mmask8 k, __m512i a);
+
+
VPMOVQB void _mm512_mask_cvtepi64_storeu_epi8(void * d, __mmask8 k, __m512i a);
+
+
VPMOVSQB __m128i _mm512_cvtsepi64_epi8( __m512i a);
+
+
VPMOVSQB __m128i _mm512_mask_cvtsepi64_epi8(__m128i s, __mmask8 k, __m512i a);
+
+
VPMOVSQB __m128i _mm512_maskz_cvtsepi64_epi8( __mmask8 k, __m512i a);
+
+
VPMOVSQB void _mm512_mask_cvtsepi64_storeu_epi8(void * d, __mmask8 k, __m512i a);
+
+
VPMOVUSQB __m128i _mm512_cvtusepi64_epi8( __m512i a);
+
+
VPMOVUSQB __m128i _mm512_mask_cvtusepi64_epi8(__m128i s, __mmask8 k, __m512i a);
+
+
VPMOVUSQB __m128i _mm512_maskz_cvtusepi64_epi8( __mmask8 k, __m512i a);
+
+
VPMOVUSQB void _mm512_mask_cvtusepi64_storeu_epi8(void * d, __mmask8 k, __m512i a);
+
+
VPMOVUSQB __m128i _mm256_cvtusepi64_epi8(__m256i a);
+
+
VPMOVUSQB __m128i _mm256_mask_cvtusepi64_epi8(__m128i a, __mmask8 k, __m256i b);
+
+
VPMOVUSQB __m128i _mm256_maskz_cvtusepi64_epi8( __mmask8 k, __m256i b);
+
+
VPMOVUSQB void _mm256_mask_cvtusepi64_storeu_epi8(void * , __mmask8 k, __m256i b);
+
+
VPMOVUSQB __m128i _mm_cvtusepi64_epi8(__m128i a);
+
+
VPMOVUSQB __m128i _mm_mask_cvtusepi64_epi8(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVUSQB __m128i _mm_maskz_cvtusepi64_epi8( __mmask8 k, __m128i b);
+
+
VPMOVUSQB void _mm_mask_cvtusepi64_storeu_epi8(void * , __mmask8 k, __m128i b);
+
+
VPMOVSQB __m128i _mm256_cvtsepi64_epi8(__m256i a);
+
+
VPMOVSQB __m128i _mm256_mask_cvtsepi64_epi8(__m128i a, __mmask8 k, __m256i b);
+
+
VPMOVSQB __m128i _mm256_maskz_cvtsepi64_epi8( __mmask8 k, __m256i b);
+
+
VPMOVSQB void _mm256_mask_cvtsepi64_storeu_epi8(void * , __mmask8 k, __m256i b);
+
+
VPMOVSQB __m128i _mm_cvtsepi64_epi8(__m128i a);
+
+
VPMOVSQB __m128i _mm_mask_cvtsepi64_epi8(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVSQB __m128i _mm_maskz_cvtsepi64_epi8( __mmask8 k, __m128i b);
+
+
VPMOVSQB void _mm_mask_cvtsepi64_storeu_epi8(void * , __mmask8 k, __m128i b);
+
+
VPMOVQB __m128i _mm256_cvtepi64_epi8(__m256i a);
+
+
VPMOVQB __m128i _mm256_mask_cvtepi64_epi8(__m128i a, __mmask8 k, __m256i b);
+
+
VPMOVQB __m128i _mm256_maskz_cvtepi64_epi8( __mmask8 k, __m256i b);
+
+
VPMOVQB void _mm256_mask_cvtepi64_storeu_epi8(void * , __mmask8 k, __m256i b);
+
+
VPMOVQB __m128i _mm_cvtepi64_epi8(__m128i a);
+
+
VPMOVQB __m128i _mm_mask_cvtepi64_epi8(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVQB __m128i _mm_maskz_cvtepi64_epi8( __mmask8 k, __m128i b);
+
+
VPMOVQB void _mm_mask_cvtepi64_storeu_epi8(void * , __mmask8 k, __m128i b);
+
+
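A hedged sketch of the 256-bit signed-saturating form _mm256_cvtsepi64_epi8 from the list above; the quadword inputs and expected bytes are illustrative, and AVX512F plus AVX512VL support is assumed:

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m256i q = _mm256_set_epi64x(-1000, 200, -5, 7);   /* elements 3..0 */
    __m128i b = _mm256_cvtsepi64_epi8(q);               /* signed saturation to bytes 0..3 */
    signed char out[16];
    _mm_storeu_si128((__m128i *)out, b);
    printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);
    /* expected: 7 -5 127 -128 (200 and -1000 saturate); bytes 4..15 of the result are zero */
    return 0;
}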

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Table 2-53, “Type E6 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vpmovqd.vpmovsqd.vpmovusqd.html b/x86/vpmovqd.vpmovsqd.vpmovusqd.html new file mode 100644 index 0000000..7172832 --- /dev/null +++ b/x86/vpmovqd.vpmovsqd.vpmovusqd.html @@ -0,0 +1,284 @@ + +VPMOVQD/VPMOVSQD/VPMOVUSQD + — Down Convert QWord to DWord

VPMOVQD/VPMOVSQD/VPMOVUSQD + — Down Convert QWord to DWord

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.F3.0F38.W0 35 /r VPMOVQD xmm1/m64 {k1}{z}, xmm2AV/VAVX512VL AVX512FConverts 2 packed quad-word integers from xmm2 into 2 packed double-word integers in xmm1/m64 with truncation subject to writemask k1.
EVEX.128.F3.0F38.W0 25 /r VPMOVSQD xmm1/m64 {k1}{z}, xmm2AV/VAVX512VL AVX512FConverts 2 packed signed quad-word integers from xmm2 into 2 packed signed double-word integers in xmm1/m64 using signed saturation subject to writemask k1.
EVEX.128.F3.0F38.W0 15 /r VPMOVUSQD xmm1/m64 {k1}{z}, xmm2AV/VAVX512VL AVX512FConverts 2 packed unsigned quad-word integers from xmm2 into 2 packed unsigned double-word integers in xmm1/m64 using unsigned saturation subject to writemask k1.
EVEX.256.F3.0F38.W0 35 /r VPMOVQD xmm1/m128 {k1}{z}, ymm2AV/VAVX512VL AVX512FConverts 4 packed quad-word integers from ymm2 into 4 packed double-word integers in xmm1/m128 with truncation subject to writemask k1.
EVEX.256.F3.0F38.W0 25 /r VPMOVSQD xmm1/m128 {k1}{z}, ymm2AV/VAVX512VL AVX512FConverts 4 packed signed quad-word integers from ymm2 into 4 packed signed double-word integers in xmm1/m128 using signed saturation subject to writemask k1.
EVEX.256.F3.0F38.W0 15 /r VPMOVUSQD xmm1/m128 {k1}{z}, ymm2AV/VAVX512VL AVX512FConverts 4 packed unsigned quad-word integers from ymm2 into 4 packed unsigned double-word integers in xmm1/m128 using unsigned saturation subject to writemask k1.
EVEX.512.F3.0F38.W0 35 /r VPMOVQD ymm1/m256 {k1}{z}, zmm2AV/VAVX512FConverts 8 packed quad-word integers from zmm2 into 8 packed double-word integers in ymm1/m256 with truncation subject to writemask k1.
EVEX.512.F3.0F38.W0 25 /r VPMOVSQD ymm1/m256 {k1}{z}, zmm2AV/VAVX512FConverts 8 packed signed quad-word integers from zmm2 into 8 packed signed double-word integers in ymm1/m256 using signed saturation subject to writemask k1.
EVEX.512.F3.0F38.W0 15 /r VPMOVUSQD ymm1/m256 {k1}{z}, zmm2AV/VAVX512FConverts 8 packed unsigned quad-word integers from zmm2 into 8 packed unsigned double-word integers in ymm1/m256 using unsigned saturation subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AHalf MemModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

VPMOVQD down converts 64-bit integer elements in the source operand (the second operand) into packed double-words using truncation. VPMOVSQD converts signed 64-bit integers into packed signed doublewords using signed saturation. VPMOVUSQD converts unsigned quad-word values into unsigned double-word values using unsigned saturation.

+

The source operand is a ZMM/YMM/XMM register. The destination operand is a YMM/XMM/XMM register or a 256/128/64-bit memory location.

+

Down-converted doubleword elements are written to the destination operand (the first operand) from the least-significant doubleword. Doubleword elements of the destination operand are updated according to the writemask. Bits (MAXVL-1:256/128/64) of the register destination are zeroed.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+

VPMOVQD instruction (EVEX encoded version) reg-reg form + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    m := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TruncateQuadWordToDWord (SRC[m+63:m])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/2] := 0;
+
+

VPMOVQD instruction (EVEX encoded version) memory form + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    m := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TruncateQuadWordToDWord (SRC[m+63:m])
+        ELSE *DEST[i+31:i] remains unchanged* ; merging-masking
+    FI;
+ENDFOR
+
+

VPMOVSQD instruction (EVEX encoded version) reg-reg form + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    m := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SaturateSignedQuadWordToDWord (SRC[m+63:m])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/2] := 0;
+
+

VPMOVSQD instruction (EVEX encoded version) memory form + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    m := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SaturateSignedQuadWordToDWord (SRC[m+63:m])
+        ELSE *DEST[i+31:i] remains unchanged* ; merging-masking
+    FI;
+ENDFOR
+
+

VPMOVUSQD instruction (EVEX encoded version) reg-reg form + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    m := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SaturateUnsignedQuadWordToDWord (SRC[m+63:m])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/2] := 0;
+
+

VPMOVUSQD instruction (EVEX encoded version) memory form + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    m := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := SaturateUnsignedQuadWordToDWord (SRC[m+63:m])
+        ELSE *DEST[i+31:i] remains unchanged* ; merging-masking
+    FI;
+ENDFOR
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPMOVQD __m256i _mm512_cvtepi64_epi32( __m512i a);
+
+
VPMOVQD __m256i _mm512_mask_cvtepi64_epi32(__m256i s, __mmask8 k, __m512i a);
+
+
VPMOVQD __m256i _mm512_maskz_cvtepi64_epi32( __mmask8 k, __m512i a);
+
+
VPMOVQD void _mm512_mask_cvtepi64_storeu_epi32(void * d, __mmask8 k, __m512i a);
+
+
VPMOVSQD __m256i _mm512_cvtsepi64_epi32( __m512i a);
+
+
VPMOVSQD __m256i _mm512_mask_cvtsepi64_epi32(__m256i s, __mmask8 k, __m512i a);
+
+
VPMOVSQD __m256i _mm512_maskz_cvtsepi64_epi32( __mmask8 k, __m512i a);
+
+
VPMOVSQD void _mm512_mask_cvtsepi64_storeu_epi32(void * d, __mmask8 k, __m512i a);
+
+
VPMOVUSQD __m256i _mm512_cvtusepi64_epi32( __m512i a);
+
+
VPMOVUSQD __m256i _mm512_mask_cvtusepi64_epi32(__m256i s, __mmask8 k, __m512i a);
+
+
VPMOVUSQD __m256i _mm512_maskz_cvtusepi64_epi32( __mmask8 k, __m512i a);
+
+
VPMOVUSQD void _mm512_mask_cvtusepi64_storeu_epi32(void * d, __mmask8 k, __m512i a);
+
+
VPMOVUSQD __m128i _mm256_cvtusepi64_epi32(__m256i a);
+
+
VPMOVUSQD __m128i _mm256_mask_cvtusepi64_epi32(__m128i a, __mmask8 k, __m256i b);
+
+
VPMOVUSQD __m128i _mm256_maskz_cvtusepi64_epi32( __mmask8 k, __m256i b);
+
+
VPMOVUSQD void _mm256_mask_cvtusepi64_storeu_epi32(void * , __mmask8 k, __m256i b);
+
+
VPMOVUSQD __m128i _mm_cvtusepi64_epi32(__m128i a);
+
+
VPMOVUSQD __m128i _mm_mask_cvtusepi64_epi32(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVUSQD __m128i _mm_maskz_cvtusepi64_epi32( __mmask8 k, __m128i b);
+
+
VPMOVUSQD void _mm_mask_cvtusepi64_storeu_epi32(void * , __mmask8 k, __m128i b);
+
+
VPMOVSQD __m128i _mm256_cvtsepi64_epi32(__m256i a);
+
+
VPMOVSQD __m128i _mm256_mask_cvtsepi64_epi32(__m128i a, __mmask8 k, __m256i b);
+
+
VPMOVSQD __m128i _mm256_maskz_cvtsepi64_epi32( __mmask8 k, __m256i b);
+
+
VPMOVSQD void _mm256_mask_cvtsepi64_storeu_epi32(void * , __mmask8 k, __m256i b);
+
+
VPMOVSQD __m128i _mm_cvtsepi64_epi32(__m128i a);
+
+
VPMOVSQD __m128i _mm_mask_cvtsepi64_epi32(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVSQD __m128i _mm_maskz_cvtsepi64_epi32( __mmask8 k, __m128i b);
+
+
VPMOVSQD void _mm_mask_cvtsepi64_storeu_epi32(void * , __mmask8 k, __m128i b);
+
+
VPMOVQD __m128i _mm256_cvtepi64_epi32(__m256i a);
+
+
VPMOVQD __m128i _mm256_mask_cvtepi64_epi32(__m128i a, __mmask8 k, __m256i b);
+
+
VPMOVQD __m128i _mm256_maskz_cvtepi64_epi32( __mmask8 k, __m256i b);
+
+
VPMOVQD void _mm256_mask_cvtepi64_storeu_epi32(void * , __mmask8 k, __m256i b);
+
+
VPMOVQD __m128i _mm_cvtepi64_epi32(__m128i a);
+
+
VPMOVQD __m128i _mm_mask_cvtepi64_epi32(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVQD __m128i _mm_maskz_cvtepi64_epi32( __mmask8 k, __m128i b);
+
+
VPMOVQD void _mm_mask_cvtepi64_storeu_epi32(void * , __mmask8 k, __m128i b);
+
+
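A hedged sketch of the truncating 256-bit form _mm256_cvtepi64_epi32 from the list above; the input values and expected dwords are illustrative, and AVX512F plus AVX512VL support is assumed:

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m256i q = _mm256_set_epi64x(4, 3, 0x100000002LL, 1);   /* elements 3..0 */
    __m128i d = _mm256_cvtepi64_epi32(q);                    /* truncation keeps the low 32 bits */
    int out[4];
    _mm_storeu_si128((__m128i *)out, d);
    printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]); /* expected: 1 2 3 4 */
    return 0;
}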

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Table 2-53, “Type E6 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vpmovqw.vpmovsqw.vpmovusqw.html b/x86/vpmovqw.vpmovsqw.vpmovusqw.html new file mode 100644 index 0000000..043d332 --- /dev/null +++ b/x86/vpmovqw.vpmovsqw.vpmovusqw.html @@ -0,0 +1,290 @@ + +VPMOVQW/VPMOVSQW/VPMOVUSQW + — Down Convert QWord to Word

VPMOVQW/VPMOVSQW/VPMOVUSQW + — Down Convert QWord to Word

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.F3.0F38.W0 34 /r VPMOVQW xmm1/m32 {k1}{z}, xmm2AV/VAVX512VL AVX512FConverts 2 packed quad-word integers from xmm2 into 2 packed word integers in xmm1/m32 with truncation under writemask k1.
EVEX.128.F3.0F38.W0 24 /r VPMOVSQW xmm1/m32 {k1}{z}, xmm2AV/VAVX512VL AVX512FConverts 2 packed signed quad-word integers from xmm2 into 2 packed signed word integers in xmm1/m32 using signed saturation under writemask k1.
EVEX.128.F3.0F38.W0 14 /r VPMOVUSQW xmm1/m32 {k1}{z}, xmm2AV/VAVX512VL AVX512FConverts 2 packed unsigned quad-word integers from xmm2 into 2 packed unsigned word integers in xmm1/m32 using unsigned saturation under writemask k1.
EVEX.256.F3.0F38.W0 34 /r VPMOVQW xmm1/m64 {k1}{z}, ymm2AV/VAVX512VL AVX512FConverts 4 packed quad-word integers from ymm2 into 4 packed word integers in xmm1/m64 with truncation under writemask k1.
EVEX.256.F3.0F38.W0 24 /r VPMOVSQW xmm1/m64 {k1}{z}, ymm2AV/VAVX512VL AVX512FConverts 4 packed signed quad-word integers from ymm2 into 4 packed signed word integers in xmm1/m64 using signed saturation under writemask k1.
EVEX.256.F3.0F38.W0 14 /r VPMOVUSQW xmm1/m64 {k1}{z}, ymm2AV/VAVX512VL AVX512FConverts 4 packed unsigned quad-word integers from ymm2 into 4 packed unsigned word integers in xmm1/m64 using unsigned saturation under writemask k1.
EVEX.512.F3.0F38.W0 34 /r VPMOVQW xmm1/m128 {k1}{z}, zmm2AV/VAVX512FConverts 8 packed quad-word integers from zmm2 into 8 packed word integers in xmm1/m128 with truncation under writemask k1.
EVEX.512.F3.0F38.W0 24 /r VPMOVSQW xmm1/m128 {k1}{z}, zmm2AV/VAVX512FConverts 8 packed signed quad-word integers from zmm2 into 8 packed signed word integers in xmm1/m128 using signed saturation under writemask k1.
EVEX.512.F3.0F38.W0 14 /r VPMOVUSQW xmm1/m128 {k1}{z}, zmm2AV/VAVX512FConverts 8 packed unsigned quad-word integers from zmm2 into 8 packed unsigned word integers in xmm1/m128 using unsigned saturation under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AQuarter MemModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

VPMOVQW down converts 64-bit integer elements in the source operand (the second operand) into packed words using truncation. VPMOVSQW converts signed 64-bit integers into packed signed words using signed saturation. VPMOVUSQW converts unsigned quad-word values into unsigned word values using unsigned saturation.

+

The source operand is a ZMM/YMM/XMM register. The destination operand is a XMM register or a 128/64/32-bit memory location.

+

Down-converted word elements are written to the destination operand (the first operand) from the least-significant word. Word elements of the destination operand are updated according to the writemask. Bits (MAXVL-1:128/64/32) of the register destination are zeroed.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Operation + ¶ +

+

VPMOVQW instruction (EVEX encoded versions) when dest is a register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    m := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := TruncateQuadWordToWord (SRC[m+63:m])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/4] := 0;
+
+

VPMOVQW instruction (EVEX encoded versions) when dest is memory + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    m := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := TruncateQuadWordToWord (SRC[m+63:m])
+        ELSE
+            *DEST[i+15:i] remains unchanged* ; merging-masking
+    FI;
+ENDFOR
+
+

VPMOVSQW instruction (EVEX encoded versions) when dest is a register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    m := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := SaturateSignedQuadWordToWord (SRC[m+63:m])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/4] := 0;
+
+

VPMOVSQW instruction (EVEX encoded versions) when dest is memory + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    m := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := SaturateSignedQuadWordToWord (SRC[m+63:m])
+        ELSE
+            *DEST[i+15:i] remains unchanged* ; merging-masking
+    FI;
+ENDFOR
+
+

VPMOVUSQW instruction (EVEX encoded versions) when dest is a register + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    m := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := SaturateUnsignedQuadWordToWord (SRC[m+63:m])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/4] := 0;
+
+

VPMOVUSQW instruction (EVEX encoded versions) when dest is memory + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    m := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := SaturateUnsignedQuadWordToWord (SRC[m+63:m])
+        ELSE
+            *DEST[i+15:i] remains unchanged* ; merging-masking
+    FI;
+ENDFOR
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPMOVQW __m128i _mm512_cvtepi64_epi16( __m512i a);
+
+
VPMOVQW __m128i _mm512_mask_cvtepi64_epi16(__m128i s, __mmask8 k, __m512i a);
+
+
VPMOVQW __m128i _mm512_maskz_cvtepi64_epi16( __mmask8 k, __m512i a);
+
+
VPMOVQW void _mm512_mask_cvtepi64_storeu_epi16(void * d, __mmask8 k, __m512i a);
+
+
VPMOVSQW __m128i _mm512_cvtsepi64_epi16( __m512i a);
+
+
VPMOVSQW __m128i _mm512_mask_cvtsepi64_epi16(__m128i s, __mmask8 k, __m512i a);
+
+
VPMOVSQW __m128i _mm512_maskz_cvtsepi64_epi16( __mmask8 k, __m512i a);
+
+
VPMOVSQW void _mm512_mask_cvtsepi64_storeu_epi16(void * d, __mmask8 k, __m512i a);
+
+
VPMOVUSQW __m128i _mm512_cvtusepi64_epi16( __m512i a);
+
+
VPMOVUSQW __m128i _mm512_mask_cvtusepi64_epi16(__m128i s, __mmask8 k, __m512i a);
+
+
VPMOVUSQW __m128i _mm512_maskz_cvtusepi64_epi16( __mmask8 k, __m512i a);
+
+
VPMOVUSQW void _mm512_mask_cvtusepi64_storeu_epi16(void * d, __mmask8 k, __m512i a);
+
+
VPMOVUSQW __m128i _mm256_cvtusepi64_epi16(__m256i a);
+
+
VPMOVUSQW __m128i _mm256_mask_cvtusepi64_epi16(__m128i a, __mmask8 k, __m256i b);
+
+
VPMOVUSQW __m128i _mm256_maskz_cvtusepi64_epi16( __mmask8 k, __m256i b);
+
+
VPMOVUSQW void _mm256_mask_cvtusepi64_storeu_epi16(void * , __mmask8 k, __m256i b);
+
+
VPMOVUSQW __m128i _mm_cvtusepi64_epi16(__m128i a);
+
+
VPMOVUSQW __m128i _mm_mask_cvtusepi64_epi16(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVUSQW __m128i _mm_maskz_cvtusepi64_epi16( __mmask8 k, __m128i b);
+
+
VPMOVUSQW void _mm_mask_cvtusepi64_storeu_epi16(void * , __mmask8 k, __m128i b);
+
+
VPMOVSQW __m128i _mm256_cvtsepi64_epi16(__m256i a);
+
+
VPMOVSQW __m128i _mm256_mask_cvtsepi64_epi16(__m128i a, __mmask8 k, __m256i b);
+
+
VPMOVSQW __m128i _mm256_maskz_cvtsepi64_epi16( __mmask8 k, __m256i b);
+
+
VPMOVSQW void _mm256_mask_cvtsepi64_storeu_epi16(void * , __mmask8 k, __m256i b);
+
+
VPMOVSQW __m128i _mm_cvtsepi64_epi16(__m128i a);
+
+
VPMOVSQW __m128i _mm_mask_cvtsepi64_epi16(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVSQW __m128i _mm_maskz_cvtsepi64_epi16( __mmask8 k, __m128i b);
+
+
VPMOVSQW void _mm_mask_cvtsepi64_storeu_epi16(void * , __mmask8 k, __m128i b);
+
+
VPMOVQW __m128i _mm256_cvtepi64_epi16(__m256i a);
+
+
VPMOVQW __m128i _mm256_mask_cvtepi64_epi16(__m128i a, __mmask8 k, __m256i b);
+
+
VPMOVQW __m128i _mm256_maskz_cvtepi64_epi16( __mmask8 k, __m256i b);
+
+
VPMOVQW void _mm256_mask_cvtepi64_storeu_epi16(void * , __mmask8 k, __m256i b);
+
+
VPMOVQW __m128i _mm_cvtepi64_epi16(__m128i a);
+
+
VPMOVQW __m128i _mm_mask_cvtepi64_epi16(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVQW __m128i _mm_maskz_cvtepi64_epi16( __mmask8 k, __m128i b);
+
+
VPMOVQW void _mm_mask_cvtepi64_storeu_epi16(void * , __mmask8 k, __m128i b);
+
+
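A hedged sketch of the 512-bit unsigned-saturating form _mm512_cvtusepi64_epi16 from the list above; the input values and expected words are illustrative, and AVX512F support is assumed:

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m512i q = _mm512_set_epi64(7, 6, 5, 70000, 3, 2, 1, 0);  /* elements 7..0 */
    __m128i w = _mm512_cvtusepi64_epi16(q);                    /* 70000 saturates to 65535 */
    unsigned short out[8];
    _mm_storeu_si128((__m128i *)out, w);
    for (int i = 0; i < 8; i++)
        printf("%u ", out[i]);
    printf("\n");   /* expected: 0 1 2 3 65535 5 6 7 */
    return 0;
}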

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Table 2-53, “Type E6 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vpmovwb.vpmovswb.vpmovuswb.html b/x86/vpmovwb.vpmovswb.vpmovuswb.html new file mode 100644 index 0000000..af954b9 --- /dev/null +++ b/x86/vpmovwb.vpmovswb.vpmovuswb.html @@ -0,0 +1,290 @@ + +VPMOVWB/VPMOVSWB/VPMOVUSWB + — Down Convert Word to Byte

VPMOVWB/VPMOVSWB/VPMOVUSWB + — Down Convert Word to Byte

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.F3.0F38.W0 30 /r VPMOVWB xmm1/m64 {k1}{z}, xmm2AV/VAVX512VL AVX512BWConverts 8 packed word integers from xmm2 into 8 packed bytes in xmm1/m64 with truncation under writemask k1.
EVEX.128.F3.0F38.W0 20 /r VPMOVSWB xmm1/m64 {k1}{z}, xmm2AV/VAVX512VL AVX512BWConverts 8 packed signed word integers from xmm2 into 8 packed signed bytes in xmm1/m64 using signed saturation under writemask k1.
EVEX.128.F3.0F38.W0 10 /r VPMOVUSWB xmm1/m64 {k1}{z}, xmm2AV/VAVX512VL AVX512BWConverts 8 packed unsigned word integers from xmm2 into 8 packed unsigned bytes in xmm1/m64 using unsigned saturation under writemask k1.
EVEX.256.F3.0F38.W0 30 /r VPMOVWB xmm1/m128 {k1}{z}, ymm2AV/VAVX512VL AVX512BWConverts 16 packed word integers from ymm2 into 16 packed bytes in xmm1/m128 with truncation under writemask k1.
EVEX.256.F3.0F38.W0 20 /r VPMOVSWB xmm1/m128 {k1}{z}, ymm2AV/VAVX512VL AVX512BWConverts 16 packed signed word integers from ymm2 into 16 packed signed bytes in xmm1/m128 using signed saturation under writemask k1.
EVEX.256.F3.0F38.W0 10 /r VPMOVUSWB xmm1/m128 {k1}{z}, ymm2AV/VAVX512VL AVX512BWConverts 16 packed unsigned word integers from ymm2 into 16 packed unsigned bytes in xmm1/m128 using unsigned saturation under writemask k1.
EVEX.512.F3.0F38.W0 30 /r VPMOVWB ymm1/m256 {k1}{z}, zmm2AV/VAVX512BWConverts 32 packed word integers from zmm2 into 32 packed bytes in ymm1/m256 with truncation under writemask k1.
EVEX.512.F3.0F38.W0 20 /r VPMOVSWB ymm1/m256 {k1}{z}, zmm2AV/VAVX512BWConverts 32 packed signed word integers from zmm2 into 32 packed signed bytes in ymm1/m256 using signed saturation under writemask k1.
EVEX.512.F3.0F38.W0 10 /r VPMOVUSWB ymm1/m256 {k1}{z}, zmm2AV/VAVX512BWConverts 32 packed unsigned word integers from zmm2 into 32 packed unsigned bytes in ymm1/m256 using unsigned saturation under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AHalf MemModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

VPMOVWB down converts 16-bit integers into packed bytes using truncation. VPMOVSWB converts signed 16-bit integers into packed signed bytes using signed saturation. VPMOVUSWB converts unsigned word values into unsigned byte values using unsigned saturation.

+

The source operand is a ZMM/YMM/XMM register. The destination operand is a YMM/XMM/XMM register or a 256/128/64-bit memory location.

+

Down-converted byte elements are written to the destination operand (the first operand) from the least-significant byte. Byte elements of the destination operand are updated according to the writemask. Bits (MAXVL-1:256/128/64) of the register destination are zeroed.

+

Note: EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.
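
As an illustration, the 128-bit forms below can be reached through the VL intrinsics listed later on this page; the following is a minimal C sketch (main() and the variable names are illustrative), assuming a compiler and CPU supporting AVX512BW and AVX512VL, contrasting truncation with signed and unsigned saturation:

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Element 6 holds 384 = 0x0180, which does not fit in a byte. */
    __m128i w = _mm_setr_epi16(1, -1, 127, 128, -128, -129, 384, 0x7FFF);
    __m128i t = _mm_cvtepi16_epi8(w);   /* VPMOVWB: keep the low 8 bits of each word */
    __m128i s = _mm_cvtsepi16_epi8(w);  /* VPMOVSWB: clamp each word to -128..127    */
    __m128i u = _mm_cvtusepi16_epi8(w); /* VPMOVUSWB: clamp each word to 0..255      */
    uint8_t tb[16], sb[16], ub[16];
    _mm_storeu_si128((__m128i *)tb, t);
    _mm_storeu_si128((__m128i *)sb, s);
    _mm_storeu_si128((__m128i *)ub, u);
    /* Expected for 384: trunc 0x80, signed sat 0x7f, unsigned sat 0xff. */
    printf("384 -> trunc 0x%02x, ssat 0x%02x, usat 0x%02x\n", tb[6], sb[6], ub[6]);
    return 0;
}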

+

Operation + ¶ +

+

VPMOVWB instruction (EVEX encoded versions) when dest is a register + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    m := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := TruncateWordToByte (SRC[m+15:m])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+7:i] = 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/2] := 0;
+
+

VPMOVWB instruction (EVEX encoded versions) when dest is memory + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    m := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := TruncateWordToByte (SRC[m+15:m])
+        ELSE
+            *DEST[i+7:i] remains unchanged* ; merging-masking
+    FI;
+ENDFOR
+
+

VPMOVSWB instruction (EVEX encoded versions) when dest is a register + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    m := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := SaturateSignedWordToByte (SRC[m+15:m])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+7:i] = 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/2] := 0;
+
+

VPMOVSWB instruction (EVEX encoded versions) when dest is memory + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    m := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := SaturateSignedWordToByte (SRC[m+15:m])
+        ELSE
+            *DEST[i+7:i] remains unchanged* ; merging-masking
+    FI;
+ENDFOR
+
+

VPMOVUSWB instruction (EVEX encoded versions) when dest is a register + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    m := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := SaturateUnsignedWordToByte (SRC[m+15:m])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+7:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+7:i] = 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL/2] := 0;
+
+

VPMOVUSWB instruction (EVEX encoded versions) when dest is memory + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    m := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+7:i] := SaturateUnsignedWordToByte (SRC[m+15:m])
+        ELSE
+            *DEST[i+7:i] remains unchanged* ; merging-masking
+    FI;
+ENDFOR
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPMOVUSWB __m256i _mm512_cvtusepi16_epi8(__m512i a);
+
+
VPMOVUSWB __m256i _mm512_mask_cvtusepi16_epi8(__m256i a, __mmask32 k, __m512i b);
+
+
VPMOVUSWB __m256i _mm512_maskz_cvtusepi16_epi8( __mmask32 k, __m512i b);
+
+
VPMOVUSWB void _mm512_mask_cvtusepi16_storeu_epi8(void * , __mmask32 k, __m512i b);
+
+
VPMOVSWB __m256i _mm512_cvtsepi16_epi8(__m512i a);
+
+
VPMOVSWB __m256i _mm512_mask_cvtsepi16_epi8(__m256i a, __mmask32 k, __m512i b);
+
+
VPMOVSWB __m256i _mm512_maskz_cvtsepi16_epi8( __mmask32 k, __m512i b);
+
+
VPMOVSWB void _mm512_mask_cvtsepi16_storeu_epi8(void * , __mmask32 k, __m512i b);
+
+
VPMOVWB __m256i _mm512_cvtepi16_epi8(__m512i a);
+
+
VPMOVWB __m256i _mm512_mask_cvtepi16_epi8(__m256i a, __mmask32 k, __m512i b);
+
+
VPMOVWB __m256i _mm512_maskz_cvtepi16_epi8( __mmask32 k, __m512i b);
+
+
VPMOVWB void _mm512_mask_cvtepi16_storeu_epi8(void * , __mmask32 k, __m512i b);
+
+
VPMOVUSWB __m128i _mm256_cvtusepi16_epi8(__m256i a);
+
+
VPMOVUSWB __m128i _mm256_mask_cvtusepi16_epi8(__m128i a, __mmask16 k, __m256i b);
+
+
VPMOVUSWB __m128i _mm256_maskz_cvtusepi16_epi8( __mmask16 k, __m256i b);
+
+
VPMOVUSWB void _mm256_mask_cvtusepi16_storeu_epi8(void * , __mmask16 k, __m256i b);
+
+
VPMOVUSWB __m128i _mm_cvtusepi16_epi8(__m128i a);
+
+
VPMOVUSWB __m128i _mm_mask_cvtusepi16_epi8(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVUSWB __m128i _mm_maskz_cvtusepi16_epi8( __mmask8 k, __m128i b);
+
+
VPMOVUSWB void _mm_mask_cvtusepi16_storeu_epi8(void * , __mmask8 k, __m128i b);
+
+
VPMOVSWB __m128i _mm256_cvtsepi16_epi8(__m256i a);
+
+
VPMOVSWB __m128i _mm256_mask_cvtsepi16_epi8(__m128i a, __mmask16 k, __m256i b);
+
+
VPMOVSWB __m128i _mm256_maskz_cvtsepi16_epi8( __mmask16 k, __m256i b);
+
+
VPMOVSWB void _mm256_mask_cvtsepi16_storeu_epi8(void * , __mmask16 k, __m256i b);
+
+
VPMOVSWB __m128i _mm_cvtsepi16_epi8(__m128i a);
+
+
VPMOVSWB __m128i _mm_mask_cvtsepi16_epi8(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVSWB __m128i _mm_maskz_cvtsepi16_epi8( __mmask8 k, __m128i b);
+
+
VPMOVSWB void _mm_mask_cvtsepi16_storeu_epi8(void * , __mmask8 k, __m128i b);
+
+
VPMOVWB __m128i _mm256_cvtepi16_epi8(__m256i a);
+
+
VPMOVWB __m128i _mm256_mask_cvtepi16_epi8(__m128i a, __mmask16 k, __m256i b);
+
+
VPMOVWB __m128i _mm256_maskz_cvtepi16_epi8( __mmask16 k, __m256i b);
+
+
VPMOVWB void _mm256_mask_cvtepi16_storeu_epi8(void * , __mmask16 k, __m256i b);
+
+
VPMOVWB __m128i _mm_cvtepi16_epi8(__m128i a);
+
+
VPMOVWB __m128i _mm_mask_cvtepi16_epi8(__m128i a, __mmask8 k, __m128i b);
+
+
VPMOVWB __m128i _mm_maskz_cvtepi16_epi8( __mmask8 k, __m128i b);
+
+
VPMOVWB void _mm_mask_cvtepi16_storeu_epi8(void * , __mmask8 k, __m128i b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Table 2-53, “Type E6 Class Exception Conditions.”

+

Additionally:

+ + + +
#UD If EVEX.vvvv != 1111B.
diff --git a/x86/vpmultishiftqb.html b/x86/vpmultishiftqb.html new file mode 100644 index 0000000..2439759 --- /dev/null +++ b/x86/vpmultishiftqb.html @@ -0,0 +1,113 @@ + +VPMULTISHIFTQB + — Select Packed Unaligned Bytes From Quadword Sources

VPMULTISHIFTQB + — Select Packed Unaligned Bytes From Quadword Sources

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode / InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W1 83 /r VPMULTISHIFTQB xmm1 {k1}{z}, xmm2,xmm3/m128/m64bcstAV/VAVX512_VBMI AVX512VLSelect unaligned bytes from qwords in xmm3/m128/m64bcst using control bytes in xmm2, write byte results to xmm1 under k1.
EVEX.256.66.0F38.W1 83 /r VPMULTISHIFTQB ymm1 {k1}{z}, ymm2,ymm3/m256/m64bcstAV/VAVX512_VBMI AVX512VLSelect unaligned bytes from qwords in ymm3/m256/m64bcst using control bytes in ymm2, write byte results to ymm1 under k1.
EVEX.512.66.0F38.W1 83 /r VPMULTISHIFTQB zmm1 {k1}{z}, zmm2,zmm3/m512/m64bcstAV/VAVX512_VBMISelect unaligned bytes from qwords in zmm3/m512/m64bcst using control bytes in zmm2, write byte results to zmm1 under k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction selects eight unaligned bytes from each input qword element of the second source operand (the third operand) and writes eight assembled bytes for each qword element in the destination operand (the first operand). Each byte result is selected using a byte-granular shift control within the corresponding qword element of the first source operand (the second operand). Each byte result in the destination operand is updated under the writemask k1.

+

Only the low 6 bits of each control byte are used to select an 8-bit slot to extract the output byte from the qword data in the second source operand. The starting bit of the 8-bit slot can be unaligned relative to any byte boundary and is taken from the input qword source at the location specified by the low 6 bits of the control byte. If the 8-bit slot would exceed the qword boundary, the out-of-bound portion of the 8-bit slot is wrapped back to start from bit 0 of the input qword element.

+

The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand is a ZMM/YMM/XMM register.
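
Equivalently, each qword lane performs eight unaligned 8-bit extractions. The following minimal scalar C sketch of a single lane mirrors the pseudocode below; the helper name multishift_qword is illustrative and is not an intrinsic (the vector forms correspond to the _mm512_multishift_epi64_epi8 family listed under the intrinsic equivalents):

#include <stdint.h>

/* One qword lane of VPMULTISHIFTQB: for each of the 8 control bytes, take its
   low 6 bits as a bit offset into the 64-bit data value and extract 8 bits
   starting there, wrapping around past bit 63. */
static uint64_t multishift_qword(uint64_t ctrl, uint64_t data)
{
    uint64_t result = 0;
    for (int j = 0; j < 8; j++) {
        unsigned off = (unsigned)((ctrl >> (8 * j)) & 63);
        /* Extracting bits off..off+7 with wraparound is a rotate-right by off. */
        uint64_t rot = (data >> off) | (data << ((64 - off) & 63));
        result |= (rot & 0xFFu) << (8 * j);
    }
    return result;
}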

+

Operation + ¶ +

+

VPMULTISHIFTQB DEST, SRC1, SRC2 (EVEX encoded version) + ¶ +

+
(KL, VL) = (2, 128),(4, 256), (8, 512)
+FOR i := 0 TO KL-1
+    IF EVEX.b=1 AND src2 is memory THEN
+            tcur := src2.qword[0]; //broadcasting
+    ELSE
+            tcur := src2.qword[i];
+    FI;
+    FOR j := 0 to 7
+        ctrl := src1.qword[i].byte[j] & 63;
+        FOR k := 0 to 7
+            res.bit[k] := tcur.bit[ (ctrl+k) mod 64 ];
+        ENDFOR
+        IF k1[i*8+j] or no writemask THEN
+            DEST.qword[i].byte[j] := res;
+        ELSE IF zeroing-masking THEN
+            DEST.qword[i].byte[j] := 0;
+    ENDFOR
+ENDFOR
+DEST[MAX_VL-1:VL] := 0;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPMULTISHIFTQB __m512i _mm512_multishift_epi64_epi8( __m512i a, __m512i b);
+
+
VPMULTISHIFTQB __m512i _mm512_mask_multishift_epi64_epi8(__m512i s, __mmask64 k, __m512i a, __m512i b);
+
+
VPMULTISHIFTQB __m512i _mm512_maskz_multishift_epi64_epi8( __mmask64 k, __m512i a, __m512i b);
+
+
VPMULTISHIFTQB __m256i _mm256_multishift_epi64_epi8( __m256i a, __m256i b);
+
+
VPMULTISHIFTQB __m256i _mm256_mask_multishift_epi64_epi8(__m256i s, __mmask32 k, __m256i a, __m256i b);
+
+
VPMULTISHIFTQB __m256i _mm256_maskz_multishift_epi64_epi8( __mmask32 k, __m256i a, __m256i b);
+
+
VPMULTISHIFTQB __m128i _mm_multishift_epi64_epi8( __m128i a, __m128i b);
+
+
VPMULTISHIFTQB __m128i _mm_mask_multishift_epi64_epi8(__m128i s, __mmask8 k, __m128i a, __m128i b);
+
+
VPMULTISHIFTQB __m128i _mm_maskz_multishift_epi64_epi8( __mmask8 k, __m128i a, __m128i b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-50, “Type E4NF Class Exception Conditions.”

diff --git a/x86/vpopcnt.html b/x86/vpopcnt.html new file mode 100644 index 0000000..28322a8 --- /dev/null +++ b/x86/vpopcnt.html @@ -0,0 +1,263 @@ + +VPOPCNT + — Return the Count of Number of Bits Set to 1 in BYTE/WORD/DWORD/QWORD

VPOPCNT + — Return the Count of Number of Bits Set to 1 in BYTE/WORD/DWORD/QWORD

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 54 /r VPOPCNTB xmm1{k1}{z}, xmm2/m128AV/VAVX512_BITALG AVX512VLCounts the number of bits set to one in xmm2/m128 and puts the result in xmm1 with writemask k1.
EVEX.256.66.0F38.W0 54 /r VPOPCNTB ymm1{k1}{z}, ymm2/m256AV/VAVX512_BITALG AVX512VLCounts the number of bits set to one in ymm2/m256 and puts the result in ymm1 with writemask k1.
EVEX.512.66.0F38.W0 54 /r VPOPCNTB zmm1{k1}{z}, zmm2/m512AV/VAVX512_BITALGCounts the number of bits set to one in zmm2/m512 and puts the result in zmm1 with writemask k1.
EVEX.128.66.0F38.W1 54 /r VPOPCNTW xmm1{k1}{z}, xmm2/m128AV/VAVX512_BITALG AVX512VLCounts the number of bits set to one in xmm2/m128 and puts the result in xmm1 with writemask k1.
EVEX.256.66.0F38.W1 54 /r VPOPCNTW ymm1{k1}{z}, ymm2/m256AV/VAVX512_BITALG AVX512VLCounts the number of bits set to one in ymm2/m256 and puts the result in ymm1 with writemask k1.
EVEX.512.66.0F38.W1 54 /r VPOPCNTW zmm1{k1}{z}, zmm2/m512AV/VAVX512_BITALGCounts the number of bits set to one in zmm2/m512 and puts the result in zmm1 with writemask k1.
EVEX.128.66.0F38.W0 55 /r VPOPCNTD xmm1{k1}{z}, xmm2/m128/m32bcstBV/VAVX512_VPOPCNTDQ AVX512VLCounts the number of bits set to one in xmm2/m128/m32bcst and puts the result in xmm1 with writemask k1.
EVEX.256.66.0F38.W0 55 /r VPOPCNTD ymm1{k1}{z}, ymm2/m256/m32bcstBV/VAVX512_VPOPCNTDQ AVX512VLCounts the number of bits set to one in ymm2/m256/m32bcst and puts the result in ymm1 with writemask k1.
EVEX.512.66.0F38.W0 55 /r VPOPCNTD zmm1{k1}{z}, zmm2/m512/m32bcstBV/VAVX512_VPOPCNTDQCounts the number of bits set to one in zmm2/m512/m32bcst and puts the result in zmm1 with writemask k1.
EVEX.128.66.0F38.W1 55 /r VPOPCNTQ xmm1{k1}{z}, xmm2/m128/m64bcstBV/VAVX512_VPOPCNTDQ AVX512VLCounts the number of bits set to one in xmm2/m128/m64bcst and puts the result in xmm1 with writemask k1.
EVEX.256.66.0F38.W1 55 /r VPOPCNTQ ymm1{k1}{z}, ymm2/m256/m64bcstBV/VAVX512_VPOPCNTDQ AVX512VLCounts the number of bits set to one in ymm2/m256/m64bcst and puts the result in ymm1 with writemask k1.
EVEX.512.66.0F38.W1 55 /r VPOPCNTQ zmm1{k1}{z}, zmm2/m512/m64bcstBV/VAVX512_VPOPCNTDQCounts the number of bits set to one in zmm2/m512/m64bcst and puts the result in zmm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFull MemModRM:reg (w)ModRM:r/m (r)N/AN/A
BFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction counts the number of bits set to one in each byte, word, dword or qword element of its source (e.g., zmm2 or memory) and places the results in the destination register (zmm1). This instruction supports memory fault suppression.
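
For example, a per-dword population count of a 64-byte block maps directly onto the VPOPCNTD intrinsic listed below; a minimal C sketch (the function name is illustrative, assuming AVX512_VPOPCNTDQ support):

#include <immintrin.h>

/* Counts the set bits in each of the 16 dwords loaded from p. */
__m512i popcount_dwords(const void *p)
{
    __m512i v = _mm512_loadu_si512(p);
    return _mm512_popcnt_epi32(v);  /* VPOPCNTD zmm, zmm */
}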

+

Operation + ¶ +

+

VPOPCNTB + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1:
+    IF MaskBit(j) OR *no writemask*:
+        DEST.byte[j] := POPCNT(SRC.byte[j])
+    ELSE IF *merging-masking*:
+        *DEST.byte[j] remains unchanged*
+    ELSE:
+        DEST.byte[j] := 0
+DEST[MAX_VL-1:VL] := 0
+
+

VPOPCNTW + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1:
+    IF MaskBit(j) OR *no writemask*:
+        DEST.word[j] := POPCNT(SRC.word[j])
+    ELSE IF *merging-masking*:
+        *DEST.word[j] remains unchanged*
+    ELSE:
+        DEST.word[j] := 0
+DEST[MAX_VL-1:VL] := 0
+
+

VPOPCNTD + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1:
+    IF MaskBit(j) OR *no writemask*:
+        IF SRC is broadcast memop:
+            t := SRC.dword[0]
+        ELSE:
+            t := SRC.dword[j]
+        DEST.dword[j] := POPCNT(t)
+    ELSE IF *merging-masking*:
+        *DEST.dword[j] remains unchanged*
+    ELSE:
+        DEST.dword[j] := 0
+DEST[MAX_VL-1:VL] := 0
+
+

VPOPCNTQ + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1:
+    IF MaskBit(j) OR *no writemask*:
+        IF SRC is broadcast memop:
+            t := SRC.qword[0]
+        ELSE:
+            t := SRC.qword[j]
+        DEST.qword[j] := POPCNT(t)
+    ELSE IF *merging-masking*:
+        *DEST.qword[j] remains unchanged*
+    ELSE:
+        DEST.qword[j] := 0
+DEST[MAX_VL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPOPCNTW __m128i _mm_popcnt_epi16(__m128i);
+
+
VPOPCNTW __m128i _mm_mask_popcnt_epi16(__m128i, __mmask8, __m128i);
+
+
VPOPCNTW __m128i _mm_maskz_popcnt_epi16(__mmask8, __m128i);
+
+
VPOPCNTW __m256i _mm256_popcnt_epi16(__m256i);
+
+
VPOPCNTW __m256i _mm256_mask_popcnt_epi16(__m256i, __mmask16, __m256i);
+
+
VPOPCNTW __m256i _mm256_maskz_popcnt_epi16(__mmask16, __m256i);
+
+
VPOPCNTW __m512i _mm512_popcnt_epi16(__m512i);
+
+
VPOPCNTW __m512i _mm512_mask_popcnt_epi16(__m512i, __mmask32, __m512i);
+
+
VPOPCNTW __m512i _mm512_maskz_popcnt_epi16(__mmask32, __m512i);
+
+
VPOPCNTQ __m128i _mm_popcnt_epi64(__m128i);
+
+
VPOPCNTQ __m128i _mm_mask_popcnt_epi64(__m128i, __mmask8, __m128i);
+
+
VPOPCNTQ __m128i _mm_maskz_popcnt_epi64(__mmask8, __m128i);
+
+
VPOPCNTQ __m256i _mm256_popcnt_epi64(__m256i);
+
+
VPOPCNTQ __m256i _mm256_mask_popcnt_epi64(__m256i, __mmask8, __m256i);
+
+
VPOPCNTQ __m256i _mm256_maskz_popcnt_epi64(__mmask8, __m256i);
+
+
VPOPCNTQ __m512i _mm512_popcnt_epi64(__m512i);
+
+
VPOPCNTQ __m512i _mm512_mask_popcnt_epi64(__m512i, __mmask8, __m512i);
+
+
VPOPCNTQ __m512i _mm512_maskz_popcnt_epi64(__mmask8, __m512i);
+
+
VPOPCNTD __m128i _mm_popcnt_epi32(__m128i);
+
+
VPOPCNTD __m128i _mm_mask_popcnt_epi32(__m128i, __mmask8, __m128i);
+
+
VPOPCNTD __m128i _mm_maskz_popcnt_epi32(__mmask8, __m128i);
+
+
VPOPCNTD __m256i _mm256_popcnt_epi32(__m256i);
+
+
VPOPCNTD __m256i _mm256_mask_popcnt_epi32(__m256i, __mmask8, __m256i);
+
+
VPOPCNTD __m256i _mm256_maskz_popcnt_epi32(__mmask8, __m256i);
+
+
VPOPCNTD __m512i _mm512_popcnt_epi32(__m512i);
+
+
VPOPCNTD __m512i _mm512_mask_popcnt_epi32(__m512i, __mmask16, __m512i);
+
+
VPOPCNTD __m512i _mm512_maskz_popcnt_epi32(__mmask16, __m512i);
+
+
VPOPCNTB __m128i _mm_popcnt_epi8(__m128i);
+
+
VPOPCNTB __m128i _mm_mask_popcnt_epi8(__m128i, __mmask16, __m128i);
+
+
VPOPCNTB __m128i _mm_maskz_popcnt_epi8(__mmask16, __m128i);
+
+
VPOPCNTB __m256i _mm256_popcnt_epi8(__m256i);
+
+
VPOPCNTB __m256i _mm256_mask_popcnt_epi8(__m256i, __mmask32, __m256i);
+
+
VPOPCNTB __m256i _mm256_maskz_popcnt_epi8(__mmask32, __m256i);
+
+
VPOPCNTB __m512i _mm512_popcnt_epi8(__m512i);
+
+
VPOPCNTB __m512i _mm512_mask_popcnt_epi8(__m512i, __mmask64, __m512i);
+
+
VPOPCNTB __m512i _mm512_maskz_popcnt_epi8(__mmask64, __m512i);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vprold.vprolvd.vprolq.vprolvq.html b/x86/vprold.vprolvd.vprolq.vprolvq.html new file mode 100644 index 0000000..b33cd51 --- /dev/null +++ b/x86/vprold.vprolvd.vprolq.vprolvq.html @@ -0,0 +1,304 @@ + +VPROLD/VPROLVD/VPROLQ/VPROLVQ + — Bit Rotate Left

VPROLD/VPROLVD/VPROLQ/VPROLVQ + — Bit Rotate Left

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 15 /r VPROLVD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FRotate doublewords in xmm2 left by count in the corresponding element of xmm3/m128/m32bcst. Result written to xmm1 under writemask k1.
EVEX.128.66.0F.W0 72 /1 ib VPROLD xmm1 {k1}{z}, xmm2/m128/m32bcst, imm8AV/VAVX512VL AVX512FRotate doublewords in xmm2/m128/m32bcst left by imm8. Result written to xmm1 using writemask k1.
EVEX.128.66.0F38.W1 15 /r VPROLVQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FRotate quadwords in xmm2 left by count in the corresponding element of xmm3/m128/m64bcst. Result written to xmm1 under writemask k1.
EVEX.128.66.0F.W1 72 /1 ib VPROLQ xmm1 {k1}{z}, xmm2/m128/m64bcst, imm8AV/VAVX512VL AVX512FRotate quadwords in xmm2/m128/m64bcst left by imm8. Result written to xmm1 using writemask k1.
EVEX.256.66.0F38.W0 15 /r VPROLVD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FRotate doublewords in ymm2 left by count in the corresponding element of ymm3/m256/m32bcst. Result written to ymm1 under writemask k1.
EVEX.256.66.0F.W0 72 /1 ib VPROLD ymm1 {k1}{z}, ymm2/m256/m32bcst, imm8AV/VAVX512VL AVX512FRotate doublewords in ymm2/m256/m32bcst left by imm8. Result written to ymm1 using writemask k1.
EVEX.256.66.0F38.W1 15 /r VPROLVQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FRotate quadwords in ymm2 left by count in the corresponding element of ymm3/m256/m64bcst. Result written to ymm1 under writemask k1.
EVEX.256.66.0F.W1 72 /1 ib VPROLQ ymm1 {k1}{z}, ymm2/m256/m64bcst, imm8AV/VAVX512VL AVX512FRotate quadwords in ymm2/m256/m64bcst left by imm8. Result written to ymm1 using writemask k1.
EVEX.512.66.0F38.W0 15 /r VPROLVD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstBV/VAVX512FRotate left of doublewords in zmm2 by count in the corresponding element of zmm3/m512/m32bcst. Result written to zmm1 using writemask k1.
EVEX.512.66.0F.W0 72 /1 ib VPROLD zmm1 {k1}{z}, zmm2/m512/m32bcst, imm8AV/VAVX512FRotate left of doublewords in zmm2/m512/m32bcst by imm8. Result written to zmm1 using writemask k1.
EVEX.512.66.0F38.W1 15 /r VPROLVQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstBV/VAVX512FRotate quadwords in zmm2 left by count in the corresponding element of zmm3/m512/m64bcst. Result written to zmm1 under writemask k1.
EVEX.512.66.0F.W1 72 /1 ib VPROLQ zmm1 {k1}{z}, zmm2/m512/m64bcst, imm8AV/VAVX512FRotate quadwords in zmm2/m512/m64bcst left by imm8. Result written to zmm1 using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullVEX.vvvv (w)ModRM:r/m (R)imm8N/A
BFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Rotates the bits in the individual data elements (doublewords or quadwords) in the first source operand to the left by the number of bits specified in the count operand. If the value specified by the count operand is greater than 31 (for doublewords) or 63 (for quadwords), then the count operand modulo the data size (32 or 64) is used.

+

EVEX.128 encoded version: The destination operand is an XMM register. The source operand is an XMM register or a memory location (for the immediate form). The count operand can come either from an XMM register, a memory location, or an 8-bit immediate. Bits (MAXVL-1:128) of the corresponding ZMM register are zeroed.

+

EVEX.256 encoded version: The destination operand is a YMM register. The source operand is a YMM register or a memory location (for the immediate form). The count operand can come either from a YMM register, a memory location, or an 8-bit immediate. Bits (MAXVL-1:256) of the corresponding ZMM register are zeroed.

+

EVEX.512 encoded version: The destination operand is a ZMM register updated according to the writemask. For the immediate form, the source operand can be a ZMM register, a 512-bit memory location, or a 512-bit vector broadcasted from a 32/64-bit memory location, and the count is an 8-bit immediate. For the variable form, the first source operand (the second operand) is a ZMM register and the count operand (the third operand) is a ZMM register, a 512-bit memory location, or a 512-bit vector broadcasted from a 32/64-bit memory location.
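
A minimal C sketch of the rotate-left behavior (helper names are illustrative; masking the right-shift count avoids the shift by 32 that would be undefined in C when COUNT is 0):

#include <immintrin.h>
#include <stdint.h>

/* Scalar equivalent of LEFT_ROTATE_DWORDS below. */
static inline uint32_t rol32(uint32_t x, unsigned count)
{
    count &= 31;
    return (x << count) | (x >> ((32 - count) & 31));
}

/* Vector immediate form: rotate all 16 dwords left by 7 (VPROLD zmm1, zmm2, 7). */
__m512i rol7_dwords(__m512i a)
{
    return _mm512_rol_epi32(a, 7);
}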

+

Operation + ¶ +

+
LEFT_ROTATE_DWORDS(SRC, COUNT_SRC)
+COUNT := COUNT_SRC modulo 32;
+DEST[31:0] := (SRC << COUNT) | (SRC >> (32 - COUNT));
+LEFT_ROTATE_QWORDS(SRC, COUNT_SRC)
+COUNT := COUNT_SRC modulo 64;
+DEST[63:0] := (SRC << COUNT) | (SRC >> (64 - COUNT));
+
+

VPROLD (EVEX encoded versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC1 *is memory*)
+                THEN DEST[i+31:i] := LEFT_ROTATE_DWORDS(SRC1[31:0], imm8)
+                ELSE DEST[i+31:i] := LEFT_ROTATE_DWORDS(SRC1[i+31:i], imm8)
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPROLVD (EVEX encoded versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[i+31:i] := LEFT_ROTATE_DWORDS(SRC1[i+31:i], SRC2[31:0])
+                ELSE DEST[i+31:i] := LEFT_ROTATE_DWORDS(SRC1[i+31:i], SRC2[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPROLQ (EVEX encoded versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC1 *is memory*)
+                THEN DEST[i+63:i] := LEFT_ROTATE_QWORDS(SRC1[63:0], imm8)
+                ELSE DEST[i+63:i] := LEFT_ROTATE_QWORDS(SRC1[i+63:i], imm8)
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPROLVQ (EVEX encoded versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[i+63:i] := LEFT_ROTATE_QWORDS(SRC1[i+63:i], SRC2[63:0])
+                ELSE DEST[i+63:i] := LEFT_ROTATE_QWORDS(SRC1[i+63:i], SRC2[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPROLD __m512i _mm512_rol_epi32(__m512i a, int imm);
+
+
VPROLD __m512i _mm512_mask_rol_epi32(__m512i a, __mmask16 k, __m512i b, int imm);
+
+
VPROLD __m512i _mm512_maskz_rol_epi32( __mmask16 k, __m512i a, int imm);
+
+
VPROLD __m256i _mm256_rol_epi32(__m256i a, int imm);
+
+
VPROLD __m256i _mm256_mask_rol_epi32(__m256i a, __mmask8 k, __m256i b, int imm);
+
+
VPROLD __m256i _mm256_maskz_rol_epi32( __mmask8 k, __m256i a, int imm);
+
+
VPROLD __m128i _mm_rol_epi32(__m128i a, int imm);
+
+
VPROLD __m128i _mm_mask_rol_epi32(__m128i a, __mmask8 k, __m128i b, int imm);
+
+
VPROLD __m128i _mm_maskz_rol_epi32( __mmask8 k, __m128i a, int imm);
+
+
VPROLQ __m512i _mm512_rol_epi64(__m512i a, int imm);
+
+
VPROLQ __m512i _mm512_mask_rol_epi64(__m512i a, __mmask8 k, __m512i b, int imm);
+
+
VPROLQ __m512i _mm512_maskz_rol_epi64(__mmask8 k, __m512i a, int imm);
+
+
VPROLQ __m256i _mm256_rol_epi64(__m256i a, int imm);
+
+
VPROLQ __m256i _mm256_mask_rol_epi64(__m256i a, __mmask8 k, __m256i b, int imm);
+
+
VPROLQ __m256i _mm256_maskz_rol_epi64( __mmask8 k, __m256i a, int imm);
+
+
VPROLQ __m128i _mm_rol_epi64(__m128i a, int imm);
+
+
VPROLQ __m128i _mm_mask_rol_epi64(__m128i a, __mmask8 k, __m128i b, int imm);
+
+
VPROLQ __m128i _mm_maskz_rol_epi64( __mmask8 k, __m128i a, int imm);
+
+
VPROLVD __m512i _mm512_rolv_epi32(__m512i a, __m512i cnt);
+
+
VPROLVD __m512i _mm512_mask_rolv_epi32(__m512i a, __mmask16 k, __m512i b, __m512i cnt);
+
+
VPROLVD __m512i _mm512_maskz_rolv_epi32(__mmask16 k, __m512i a, __m512i cnt);
+
+
VPROLVD __m256i _mm256_rolv_epi32(__m256i a, __m256i cnt);
+
+
VPROLVD __m256i _mm256_mask_rolv_epi32(__m256i a, __mmask8 k, __m256i b, __m256i cnt);
+
+
VPROLVD __m256i _mm256_maskz_rolv_epi32(__mmask8 k, __m256i a, __m256i cnt);
+
+
VPROLVD __m128i _mm_rolv_epi32(__m128i a, __m128i cnt);
+
+
VPROLVD __m128i _mm_mask_rolv_epi32(__m128i a, __mmask8 k, __m128i b, __m128i cnt);
+
+
VPROLVD __m128i _mm_maskz_rolv_epi32(__mmask8 k, __m128i a, __m128i cnt);
+
+
VPROLVQ __m512i _mm512_rolv_epi64(__m512i a, __m512i cnt);
+
+
VPROLVQ __m512i _mm512_mask_rolv_epi64(__m512i a, __mmask8 k, __m512i b, __m512i cnt);
+
+
VPROLVQ __m512i _mm512_maskz_rolv_epi64( __mmask8 k, __m512i a, __m512i cnt);
+
+
VPROLVQ __m256i _mm256_rolv_epi64(__m256i a, __m256i cnt);
+
+
VPROLVQ __m256i _mm256_mask_rolv_epi64(__m256i a, __mmask8 k, __m256i b, __m256i cnt);
+
+
VPROLVQ __m256i _mm256_maskz_rolv_epi64(__mmask8 k, __m256i a, __m256i cnt);
+
+
VPROLVQ __m128i _mm_rolv_epi64(__m128i a, __m128i cnt);
+
+
VPROLVQ __m128i _mm_mask_rolv_epi64(__m128i a, __mmask8 k, __m128i b, __m128i cnt);
+
+
VPROLVQ __m128i _mm_maskz_rolv_epi64(__mmask8 k, __m128i a, __m128i cnt);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vprord.vprorvd.vprorq.vprorvq.html b/x86/vprord.vprorvd.vprorq.vprorvq.html new file mode 100644 index 0000000..dab36a2 --- /dev/null +++ b/x86/vprord.vprorvd.vprorq.vprorvq.html @@ -0,0 +1,304 @@ + +VPRORD/VPRORVD/VPRORQ/VPRORVQ + — Bit Rotate Right

VPRORD/VPRORVD/VPRORQ/VPRORVQ + — Bit Rotate Right

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 14 /r VPRORVD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FRotate doublewords in xmm2 right by count in the corresponding element of xmm3/m128/m32bcst, store result using writemask k1.
EVEX.128.66.0F.W0 72 /0 ib VPRORD xmm1 {k1}{z}, xmm2/m128/m32bcst, imm8AV/VAVX512VL AVX512FRotate doublewords in xmm2/m128/m32bcst right by imm8, store result using writemask k1.
EVEX.128.66.0F38.W1 14 /r VPRORVQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FRotate quadwords in xmm2 right by count in the corresponding element of xmm3/m128/m64bcst, store result using writemask k1.
EVEX.128.66.0F.W1 72 /0 ib VPRORQ xmm1 {k1}{z}, xmm2/m128/m64bcst, imm8AV/VAVX512VL AVX512FRotate quadwords in xmm2/m128/m64bcst right by imm8, store result using writemask k1.
EVEX.256.66.0F38.W0 14 /r VPRORVD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FRotate doublewords in ymm2 right by count in the corresponding element of ymm3/m256/m32bcst, store result using writemask k1.
EVEX.256.66.0F.W0 72 /0 ib VPRORD ymm1 {k1}{z}, ymm2/m256/m32bcst, imm8AV/VAVX512VL AVX512FRotate doublewords in ymm2/m256/m32bcst right by imm8, store result using writemask k1.
EVEX.256.66.0F38.W1 14 /r VPRORVQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FRotate quadwords in ymm2 right by count in the corresponding element of ymm3/m256/m64bcst, store result using writemask k1.
EVEX.256.66.0F.W1 72 /0 ib VPRORQ ymm1 {k1}{z}, ymm2/m256/m64bcst, imm8AV/VAVX512VL AVX512FRotate quadwords in ymm2/m256/m64bcst right by imm8, store result using writemask k1.
EVEX.512.66.0F38.W0 14 /r VPRORVD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstBV/VAVX512FRotate doublewords in zmm2 right by count in the corresponding element of zmm3/m512/m32bcst, store result using writemask k1.
EVEX.512.66.0F.W0 72 /0 ib VPRORD zmm1 {k1}{z}, zmm2/m512/m32bcst, imm8AV/VAVX512FRotate doublewords in zmm2/m512/m32bcst right by imm8, store result using writemask k1.
EVEX.512.66.0F38.W1 14 /r VPRORVQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstBV/VAVX512FRotate quadwords in zmm2 right by count in the corresponding element of zmm3/m512/m64bcst, store result using writemask k1.
EVEX.512.66.0F.W1 72 /0 ib VPRORQ zmm1 {k1}{z}, zmm2/m512/m64bcst, imm8AV/VAVX512FRotate quadwords in zmm2/m512/m64bcst right by imm8, store result using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullVEX.vvvv (w)ModRM:r/m (R)imm8N/A
BFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Rotates the bits in the individual data elements (doublewords or quadwords) in the first source operand to the right by the number of bits specified in the count operand. If the value specified by the count operand is greater than 31 (for doublewords) or 63 (for quadwords), then the count operand modulo the data size (32 or 64) is used.

+

EVEX.128 encoded version: The destination operand is an XMM register. The source operand is an XMM register or a memory location (for the immediate form). The count operand can come either from an XMM register, a memory location, or an 8-bit immediate. Bits (MAXVL-1:128) of the corresponding ZMM register are zeroed.

+

EVEX.256 encoded version: The destination operand is a YMM register. The source operand is a YMM register or a memory location (for the immediate form). The count operand can come either from a YMM register, a memory location, or an 8-bit immediate. Bits (MAXVL-1:256) of the corresponding ZMM register are zeroed.

+

EVEX.512 encoded version: The destination operand is a ZMM register updated according to the writemask. For the immediate form, the source operand can be a ZMM register, a 512-bit memory location, or a 512-bit vector broadcasted from a 32/64-bit memory location, and the count is an 8-bit immediate. For the variable form, the first source operand (the second operand) is a ZMM register and the count operand (the third operand) is a ZMM register, a 512-bit memory location, or a 512-bit vector broadcasted from a 32/64-bit memory location.
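
As an illustration of the variable-count and zeroing-masked forms through the intrinsics listed below (function names and the mask constant are illustrative, assuming AVX512F support):

#include <immintrin.h>

/* Variable rotate right (VPRORVD): each dword of a is rotated right by the
   count in the corresponding dword of cnt; only the low 5 bits of each count matter. */
__m512i ror_per_lane(__m512i a, __m512i cnt)
{
    return _mm512_rorv_epi32(a, cnt);
}

/* Zeroing-masked immediate form (VPRORD zmm1{k1}{z}, zmm2, 3): lanes whose
   mask bit is 0 are forced to zero instead of being merged. */
__m512i ror3_even_lanes(__m512i a)
{
    return _mm512_maskz_ror_epi32((__mmask16)0x5555, a, 3);
}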

+

Operation + ¶ +

+
RIGHT_ROTATE_DWORDS(SRC, COUNT_SRC)
+COUNT := COUNT_SRC modulo 32;
+DEST[31:0] := (SRC >> COUNT) | (SRC << (32 - COUNT));
+RIGHT_ROTATE_QWORDS(SRC, COUNT_SRC)
+COUNT := COUNT_SRC modulo 64;
+DEST[63:0] := (SRC >> COUNT) | (SRC << (64 - COUNT));
+
+

VPRORD (EVEX encoded versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC1 *is memory*)
+                THEN DEST[i+31:i] := RIGHT_ROTATE_DWORDS( SRC1[31:0], imm8)
+                ELSE DEST[i+31:i] := RIGHT_ROTATE_DWORDS(SRC1[i+31:i], imm8)
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPRORVD (EVEX encoded versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[i+31:i] := RIGHT_ROTATE_DWORDS(SRC1[i+31:i], SRC2[31:0])
+                ELSE DEST[i+31:i] := RIGHT_ROTATE_DWORDS(SRC1[i+31:i], SRC2[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPRORQ (EVEX encoded versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC1 *is memory*)
+                THEN DEST[i+63:i] := RIGHT_ROTATE_QWORDS(SRC1[63:0], imm8)
+                ELSE DEST[i+63:i] := RIGHT_ROTATE_QWORDS(SRC1[i+63:i], imm8)
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VPRORVQ (EVEX encoded versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[i+63:i] := RIGHT_ROTATE_QWORDS(SRC1[i+63:i], SRC2[63:0])
+                ELSE DEST[i+63:i] := RIGHT_ROTATE_QWORDS(SRC1[i+63:i], SRC2[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPRORD __m512i _mm512_ror_epi32(__m512i a, int imm);
+
+
VPRORD __m512i _mm512_mask_ror_epi32(__m512i a, __mmask16 k, __m512i b, int imm);
+
+
VPRORD __m512i _mm512_maskz_ror_epi32( __mmask16 k, __m512i a, int imm);
+
+
VPRORD __m256i _mm256_ror_epi32(__m256i a, int imm);
+
+
VPRORD __m256i _mm256_mask_ror_epi32(__m256i a, __mmask8 k, __m256i b, int imm);
+
+
VPRORD __m256i _mm256_maskz_ror_epi32( __mmask8 k, __m256i a, int imm);
+
+
VPRORD __m128i _mm_ror_epi32(__m128i a, int imm);
+
+
VPRORD __m128i _mm_mask_ror_epi32(__m128i a, __mmask8 k, __m128i b, int imm);
+
+
VPRORD __m128i _mm_maskz_ror_epi32( __mmask8 k, __m128i a, int imm);
+
+
VPRORQ __m512i _mm512_ror_epi64(__m512i a, int imm);
+
+
VPRORQ __m512i _mm512_mask_ror_epi64(__m512i a, __mmask8 k, __m512i b, int imm);
+
+
VPRORQ __m512i _mm512_maskz_ror_epi64(__mmask8 k, __m512i a, int imm);
+
+
VPRORQ __m256i _mm256_ror_epi64(__m256i a, int imm);
+
+
VPRORQ __m256i _mm256_mask_ror_epi64(__m256i a, __mmask8 k, __m256i b, int imm);
+
+
VPRORQ __m256i _mm256_maskz_ror_epi64( __mmask8 k, __m256i a, int imm);
+
+
VPRORQ __m128i _mm_ror_epi64(__m128i a, int imm);
+
+
VPRORQ __m128i _mm_mask_ror_epi64(__m128i a, __mmask8 k, __m128i b, int imm);
+
+
VPRORQ __m128i _mm_maskz_ror_epi64( __mmask8 k, __m128i a, int imm);
+
+
VPRORVD __m512i _mm512_rorv_epi32(__m512i a, __m512i cnt);
+
+
VPRORVD __m512i _mm512_mask_rorv_epi32(__m512i a, __mmask16 k, __m512i b, __m512i cnt);
+
+
VPRORVD __m512i _mm512_maskz_rorv_epi32(__mmask16 k, __m512i a, __m512i cnt);
+
+
VPRORVD __m256i _mm256_rorv_epi32(__m256i a, __m256i cnt);
+
+
VPRORVD __m256i _mm256_mask_rorv_epi32(__m256i a, __mmask8 k, __m256i b, __m256i cnt);
+
+
VPRORVD __m256i _mm256_maskz_rorv_epi32(__mmask8 k, __m256i a, __m256i cnt);
+
+
VPRORVD __m128i _mm_rorv_epi32(__m128i a, __m128i cnt);
+
+
VPRORVD __m128i _mm_mask_rorv_epi32(__m128i a, __mmask8 k, __m128i b, __m128i cnt);
+
+
VPRORVD __m128i _mm_maskz_rorv_epi32(__mmask8 k, __m128i a, __m128i cnt);
+
+
VPRORVQ __m512i _mm512_rorv_epi64(__m512i a, __m512i cnt);
+
+
VPRORVQ __m512i _mm512_mask_rorv_epi64(__m512i a, __mmask8 k, __m512i b, __m512i cnt);
+
+
VPRORVQ __m512i _mm512_maskz_rorv_epi64( __mmask8 k, __m512i a, __m512i cnt);
+
+
VPRORVQ __m256i _mm256_rorv_epi64(__m256i a, __m256i cnt);
+
+
VPRORVQ __m256i _mm256_mask_rorv_epi64(__m256i a, __mmask8 k, __m256i b, __m256i cnt);
+
+
VPRORVQ __m256i _mm256_maskz_rorv_epi64(__mmask8 k, __m256i a, __m256i cnt);
+
+
VPRORVQ __m128i _mm_rorv_epi64(__m128i a, __m128i cnt);
+
+
VPRORVQ __m128i _mm_mask_rorv_epi64(__m128i a, __mmask8 k, __m128i b, __m128i cnt);
+
+
VPRORVQ __m128i _mm_maskz_rorv_epi64(__mmask8 k, __m128i a, __m128i cnt);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vpscatterdd.vpscatterdq.vpscatterqd.vpscatterqq.html b/x86/vpscatterdd.vpscatterdq.vpscatterqd.vpscatterqq.html new file mode 100644 index 0000000..8e37e27 --- /dev/null +++ b/x86/vpscatterdd.vpscatterdq.vpscatterqd.vpscatterqq.html @@ -0,0 +1,248 @@ + +VPSCATTERDD/VPSCATTERDQ/VPSCATTERQD/VPSCATTERQQ + — Scatter Packed Dword, PackedQword with Signed Dword, Signed Qword Indices

VPSCATTERDD/VPSCATTERDQ/VPSCATTERQD/VPSCATTERQQ + — Scatter Packed Dword, PackedQword with Signed Dword, Signed Qword Indices

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 A0 /vsib VPSCATTERDD vm32x {k1}, xmm1AV/VAVX512VL AVX512FUsing signed dword indices, scatter dword values to memory using writemask k1.
EVEX.256.66.0F38.W0 A0 /vsib VPSCATTERDD vm32y {k1}, ymm1AV/VAVX512VL AVX512FUsing signed dword indices, scatter dword values to memory using writemask k1.
EVEX.512.66.0F38.W0 A0 /vsib VPSCATTERDD vm32z {k1}, zmm1AV/VAVX512FUsing signed dword indices, scatter dword values to memory using writemask k1.
EVEX.128.66.0F38.W1 A0 /vsib VPSCATTERDQ vm32x {k1}, xmm1AV/VAVX512VL AVX512FUsing signed dword indices, scatter qword values to memory using writemask k1.
EVEX.256.66.0F38.W1 A0 /vsib VPSCATTERDQ vm32x {k1}, ymm1AV/VAVX512VL AVX512FUsing signed dword indices, scatter qword values to memory using writemask k1.
EVEX.512.66.0F38.W1 A0 /vsib VPSCATTERDQ vm32y {k1}, zmm1AV/VAVX512FUsing signed dword indices, scatter qword values to memory using writemask k1.
EVEX.128.66.0F38.W0 A1 /vsib VPSCATTERQD vm64x {k1}, xmm1AV/VAVX512VL AVX512FUsing signed qword indices, scatter dword values to memory using writemask k1.
EVEX.256.66.0F38.W0 A1 /vsib VPSCATTERQD vm64y {k1}, xmm1AV/VAVX512VL AVX512FUsing signed qword indices, scatter dword values to memory using writemask k1.
EVEX.512.66.0F38.W0 A1 /vsib VPSCATTERQD vm64z {k1}, ymm1AV/VAVX512FUsing signed qword indices, scatter dword values to memory using writemask k1.
EVEX.128.66.0F38.W1 A1 /vsib VPSCATTERQQ vm64x {k1}, xmm1AV/VAVX512VL AVX512FUsing signed qword indices, scatter qword values to memory using writemask k1.
EVEX.256.66.0F38.W1 A1 /vsib VPSCATTERQQ vm64y {k1}, ymm1AV/VAVX512VL AVX512FUsing signed qword indices, scatter qword values to memory using writemask k1.
EVEX.512.66.0F38.W1 A1 /vsib VPSCATTERQQ vm64z {k1}, zmm1AV/VAVX512FUsing signed qword indices, scatter qword values to memory using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarBaseReg (R): VSIB:base, VectorReg(R): VSIB:indexModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Stores up to 16 doubleword elements (8 elements when qword indices are used) or 8 quadword elements to the memory locations pointed to by the base address BASE_ADDR and the index vector VINDEX, with scale SCALE. The elements are specified via the VSIB (i.e., the index register is a vector register, holding packed indices). Elements will only be stored if their corresponding mask bit is one. The entire mask register will be set to zero by this instruction unless it triggers an exception.

+

This instruction can be suspended by an exception if at least one element is already scattered (i.e., if the exception is triggered by an element other than the rightmost one with its mask bit set). When this happens, the memory destination and the mask register are partially updated. If any traps or interrupts are pending from already scattered elements, they will be delivered in lieu of the exception; in this case, EFLAGS.RF is set to one so an instruction breakpoint is not re-triggered when the instruction is continued.

+

Note that:

+
    +
  • Only writes to overlapping vector indices are guaranteed to be ordered with respect to each other (from LSB to MSB of the source registers). Note that this also includes partially overlapping vector indices. Writes that are not overlapped may happen in any order. Memory ordering with other instructions follows the Intel-64 memory ordering model. Note that this does not account for non-overlapping indices that map into the same physical address locations.
  • +
  • If two or more destination indices completely overlap, the “earlier” write(s) may be skipped.
  • +
  • Faults are delivered in a right-to-left manner. That is, if a fault is triggered by an element and delivered, all elements closer to the LSB of the destination ZMM will be completed (and non-faulting). Individual elements closer to the MSB may or may not be completed. If a given element triggers multiple faults, they are delivered in the conventional order.
  • +
  • Elements may be scattered in any order, but faults must be delivered in a right-to-left order; thus, elements to the left of a faulting one may be scattered before the fault is delivered. A given implementation of this instruction is repeatable - given the same input values and architectural state, the same set of elements to the left of the faulting one will be scattered.
  • +
  • This instruction does not perform AC checks, and so will never deliver an AC fault.
  • +
  • Not valid with 16-bit effective addresses. Will deliver a #UD fault.
  • +
  • If this instruction overwrites itself and then takes a fault, only a subset of elements may be completed before the fault is delivered (as described above). If the fault handler completes and attempts to re-execute this instruction, the new instruction will be executed, and the scatter will not complete.
+

Note that the presence of VSIB byte is enforced in this instruction. Hence, the instruction will #UD fault if ModRM.rm is different than 100b.

+

This instruction has special disp8*N and alignment rules. N is considered to be the size of a single vector element.

+

The scaled index may require more bits to represent than the address bits used by the processor (e.g., in 32-bit mode, if the scale is greater than one). In this case, the most significant bits beyond the number of address bits are ignored.

+

The instruction will #UD fault if the k0 mask register is specified.

+

The instruction will #UD fault if EVEX.Z = 1.
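
A minimal usage sketch of the dword-index/dword-data form through its intrinsic (the function name is illustrative; the scale argument is in bytes and must be 1, 2, 4, or 8):

#include <immintrin.h>
#include <stdint.h>

/* VPSCATTERDD: for every lane j whose bit in m is set, stores val[j] to
   table[idx[j]]. When enabled indices collide, writes are ordered from the
   least-significant lane upward, so the highest enabled lane wins. */
void scatter_dwords(int32_t *table, __m512i idx, __m512i val, __mmask16 m)
{
    _mm512_mask_i32scatter_epi32(table, m, idx, val, 4);
}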

+

Operation + ¶ +

+
BASE_ADDR stands for the memory operand base address (a GPR); may not exist
+VINDEX stands for the memory operand vector of indices (a ZMM register)
+SCALE stands for the memory operand scalar (1, 2, 4 or 8)
+DISP is the optional 1 or 4 byte displacement
+
+

VPSCATTERDD (EVEX encoded versions) + ¶ +

+
(KL, VL)= (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN MEM[BASE_ADDR +SignExtend(VINDEX[i+31:i]) * SCALE + DISP] := SRC[i+31:i]
+            k1[j] := 0
+    FI;
+ENDFOR
+k1[MAX_KL-1:KL] := 0
+
+

VPSCATTERDQ (EVEX encoded versions) + ¶ +

+
(KL, VL)= (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 32
+    IF k1[j] OR *no writemask*
+        THEN MEM[BASE_ADDR +SignExtend(VINDEX[k+31:k]) * SCALE + DISP] := SRC[i+63:i]
+            k1[j] := 0
+    FI;
+ENDFOR
+k1[MAX_KL-1:KL] := 0
+
+

VPSCATTERQD (EVEX encoded versions) + ¶ +

+
(KL, VL)= (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    k := j * 64
+    IF k1[j] OR *no writemask*
+        THEN MEM[BASE_ADDR + (VINDEX[k+63:k]) * SCALE + DISP] := SRC[i+31:i]
+            k1[j] := 0
+    FI;
+ENDFOR
+k1[MAX_KL-1:KL] := 0
+
+

VPSCATTERQQ (EVEX encoded versions) + ¶ +

+
(KL, VL)= (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN MEM[BASE_ADDR + (VINDEX[i+63:i]) * SCALE + DISP] := SRC[i+63:i]
+            k1[j] := 0
+    FI;
+ENDFOR
+k1[MAX_KL-1:KL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPSCATTERDD void _mm512_i32scatter_epi32(void * base, __m512i vdx, __m512i a, int scale);
+
+
VPSCATTERDD void _mm256_i32scatter_epi32(void * base, __m256i vdx, __m256i a, int scale);
+
+
VPSCATTERDD void _mm_i32scatter_epi32(void * base, __m128i vdx, __m128i a, int scale);
+
+
VPSCATTERDD void _mm512_mask_i32scatter_epi32(void * base, __mmask16 k, __m512i vdx, __m512i a, int scale);
+
+
VPSCATTERDD void _mm256_mask_i32scatter_epi32(void * base, __mmask8 k, __m256i vdx, __m256i a, int scale);
+
+
VPSCATTERDD void _mm_mask_i32scatter_epi32(void * base, __mmask8 k, __m128i vdx, __m128i a, int scale);
+
+
VPSCATTERDQ void _mm512_i32scatter_epi64(void * base, __m256i vdx, __m512i a, int scale);
+
+
VPSCATTERDQ void _mm256_i32scatter_epi64(void * base, __m128i vdx, __m256i a, int scale);
+
+
VPSCATTERDQ void _mm_i32scatter_epi64(void * base, __m128i vdx, __m128i a, int scale);
+
+
VPSCATTERDQ void _mm512_mask_i32scatter_epi64(void * base, __mmask8 k, __m256i vdx, __m512i a, int scale);
+
+
VPSCATTERDQ void _mm256_mask_i32scatter_epi64(void * base, __mmask8 k, __m128i vdx, __m256i a, int scale);
+
+
VPSCATTERDQ void _mm_mask_i32scatter_epi64(void * base, __mmask8 k, __m128i vdx, __m128i a, int scale);
+
+
VPSCATTERQD void _mm512_i64scatter_epi32(void * base, __m512i vdx, __m256i a, int scale);
+
+
VPSCATTERQD void _mm256_i64scatter_epi32(void * base, __m256i vdx, __m128i a, int scale);
+
+
VPSCATTERQD void _mm_i64scatter_epi32(void * base, __m128i vdx, __m128i a, int scale);
+
+
VPSCATTERQD void _mm512_mask_i64scatter_epi32(void * base, __mmask8 k, __m512i vdx, __m256i a, int scale);
+
+
VPSCATTERQD void _mm256_mask_i64scatter_epi32(void * base, __mmask8 k, __m256i vdx, __m128i a, int scale);
+
+
VPSCATTERQD void _mm_mask_i64scatter_epi32(void * base, __mmask8 k, __m128i vdx, __m128i a, int scale);
+
+
VPSCATTERQQ void _mm512_i64scatter_epi64(void * base, __m512i vdx, __m512i a, int scale);
+
+
VPSCATTERQQ void _mm256_i64scatter_epi64(void * base, __m256i vdx, __m256i a, int scale);
+
+
VPSCATTERQQ void _mm_i64scatter_epi64(void * base, __m128i vdx, __m128i a, int scale);
+
+
VPSCATTERQQ void _mm512_mask_i64scatter_epi64(void * base, __mmask8 k, __m512i vdx, __m512i a, int scale);
+
+
VPSCATTERQQ void _mm256_mask_i64scatter_epi64(void * base, __mmask8 k, __m256i vdx, __m256i a, int scale);
+
+
VPSCATTERQQ void _mm_mask_i64scatter_epi64(void * base, __mmask8 k, __m128i vdx, __m128i a, int scale);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-61, “Type E12 Class Exception Conditions.”

diff --git a/x86/vpshld.html b/x86/vpshld.html new file mode 100644 index 0000000..47eba13 --- /dev/null +++ b/x86/vpshld.html @@ -0,0 +1,215 @@ + +VPSHLD + — Concatenate and Shift Packed Data Left Logical

VPSHLD + — Concatenate and Shift Packed Data Left Logical

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F3A.W1 70 /r /ib VPSHLDW xmm1{k1}{z}, xmm2, xmm3/m128, imm8AV/VAVX512_VBMI2 AVX512VLConcatenate destination and source operands, extract result shifted to the left by constant value in imm8 into xmm1.
EVEX.256.66.0F3A.W1 70 /r /ib VPSHLDW ymm1{k1}{z}, ymm2, ymm3/m256, imm8AV/VAVX512_VBMI2 AVX512VLConcatenate destination and source operands, extract result shifted to the left by constant value in imm8 into ymm1.
EVEX.512.66.0F3A.W1 70 /r /ib VPSHLDW zmm1{k1}{z}, zmm2, zmm3/m512, imm8AV/VAVX512_VBMI2Concatenate destination and source operands, extract result shifted to the left by constant value in imm8 into zmm1.
EVEX.128.66.0F3A.W0 71 /r /ib VPSHLDD xmm1{k1}{z}, xmm2, xmm3/m128/m32bcst, imm8BV/VAVX512_VBMI2 AVX512VLConcatenate destination and source operands, extract result shifted to the left by constant value in imm8 into xmm1.
EVEX.256.66.0F3A.W0 71 /r /ib VPSHLDD ymm1{k1}{z}, ymm2, ymm3/m256/m32bcst, imm8BV/VAVX512_VBMI2 AVX512VLConcatenate destination and source operands, extract result shifted to the left by constant value in imm8 into ymm1.
EVEX.512.66.0F3A.W0 71 /r /ib VPSHLDD zmm1{k1}{z}, zmm2, zmm3/m512/m32bcst, imm8BV/VAVX512_VBMI2Concatenate destination and source operands, extract result shifted to the left by constant value in imm8 into zmm1.
EVEX.128.66.0F3A.W1 71 /r /ib VPSHLDQ xmm1{k1}{z}, xmm2, xmm3/m128/m64bcst, imm8BV/VAVX512_VBMI2 AVX512VLConcatenate destination and source operands, extract result shifted to the left by constant value in imm8 into xmm1.
EVEX.256.66.0F3A.W1 71 /r /ib VPSHLDQ ymm1{k1}{z}, ymm2, ymm3/m256/m64bcst, imm8BV/VAVX512_VBMI2 AVX512VLConcatenate destination and source operands, extract result shifted to the left by constant value in imm8 into ymm1.
EVEX.512.66.0F3A.W1 71 /r /ib VPSHLDQ zmm1{k1}{z}, zmm2, zmm3/m512/m64bcst, imm8BV/VAVX512_VBMI2Concatenate destination and source operands, extract result shifted to the left by constant value in imm8 into zmm1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8 (r)
BFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8 (r)
+

Description + ¶ +

+

Concatenate packed data and extract the result shifted to the left by a constant value: for each element, the second operand supplies the most-significant half and the third operand the least-significant half of a double-width intermediate, which is shifted left by imm8 (modulo the element width); the upper half of the intermediate is written to the destination.

+

This instruction supports memory fault suppression.
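
A minimal C sketch of one doubleword lane together with the corresponding intrinsic call (helper names are illustrative):

#include <immintrin.h>
#include <stdint.h>

/* One VPSHLDD lane: concatenate a (high dword) and b (low dword) into a
   64-bit value, shift left by imm8 & 31, and keep the upper dword. */
static inline uint32_t shld32(uint32_t a, uint32_t b, unsigned imm8)
{
    uint64_t t = (((uint64_t)a << 32) | (uint64_t)b) << (imm8 & 31);
    return (uint32_t)(t >> 32);
}

/* Vector form: per-dword concat-and-shift-left by 5 (VPSHLDD zmm1, zmm2, zmm3, 5). */
__m512i shld5_dwords(__m512i a, __m512i b)
{
    return _mm512_shldi_epi32(a, b, 5);
}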

+

Operation + ¶ +

+

VPSHLDW DEST, SRC2, SRC3, imm8 + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1:
+    IF MaskBit(j) OR *no writemask*:
+        tmp := concat(SRC2.word[j], SRC3.word[j]) << (imm8 & 15)
+        DEST.word[j] := tmp.word[1]
+    ELSE IF *zeroing*:
+        DEST.word[j] := 0
+    *ELSE DEST.word[j] remains unchanged*
+DEST[MAX_VL-1:VL] := 0
+
+

VPSHLDD DEST, SRC2, SRC3, imm8 + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1:
+    IF SRC3 is broadcast memop:
+        tsrc3 := SRC3.dword[0]
+    ELSE:
+        tsrc3 := SRC3.dword[j]
+    IF MaskBit(j) OR *no writemask*:
+        tmp := concat(SRC2.dword[j], tsrc3) << (imm8 & 31)
+        DEST.dword[j] := tmp.dword[1]
+    ELSE IF *zeroing*:
+        DEST.dword[j] := 0
+    *ELSE DEST.dword[j] remains unchanged*
+DEST[MAX_VL-1:VL] := 0
+
+

VPSHLDQ DEST, SRC2, SRC3, imm8 + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1:
+    IF SRC3 is broadcast memop:
+        tsrc3 := SRC3.qword[0]
+    ELSE:
+        tsrc3 := SRC3.qword[j]
+    IF MaskBit(j) OR *no writemask*:
+        tmp := concat(SRC2.qword[j], tsrc3) << (imm8 & 63)
+        DEST.qword[j] := tmp.qword[1]
+    ELSE IF *zeroing*:
+        DEST.qword[j] := 0
+    *ELSE DEST.qword[j] remains unchanged*
+DEST[MAX_VL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPSHLDD __m128i _mm_shldi_epi32(__m128i, __m128i, int);
+
+
VPSHLDD __m128i _mm_mask_shldi_epi32(__m128i, __mmask8, __m128i, __m128i, int);
+
+
VPSHLDD __m128i _mm_maskz_shldi_epi32(__mmask8, __m128i, __m128i, int);
+
+
VPSHLDD __m256i _mm256_shldi_epi32(__m256i, __m256i, int);
+
+
VPSHLDD __m256i _mm256_mask_shldi_epi32(__m256i, __mmask8, __m256i, __m256i, int);
+
+
VPSHLDD __m256i _mm256_maskz_shldi_epi32(__mmask8, __m256i, __m256i, int);
+
+
VPSHLDD __m512i _mm512_shldi_epi32(__m512i, __m512i, int);
+
+
VPSHLDD __m512i _mm512_mask_shldi_epi32(__m512i, __mmask16, __m512i, __m512i, int);
+
+
VPSHLDD __m512i _mm512_maskz_shldi_epi32(__mmask16, __m512i, __m512i, int);
+
+
VPSHLDQ __m128i _mm_shldi_epi64(__m128i, __m128i, int);
+
+
VPSHLDQ __m128i _mm_mask_shldi_epi64(__m128i, __mmask8, __m128i, __m128i, int);
+
+
VPSHLDQ __m128i _mm_maskz_shldi_epi64(__mmask8, __m128i, __m128i, int);
+
+
VPSHLDQ __m256i _mm256_shldi_epi64(__m256i, __m256i, int);
+
+
VPSHLDQ __m256i _mm256_mask_shldi_epi64(__m256i, __mmask8, __m256i, __m256i, int);
+
+
VPSHLDQ __m256i _mm256_maskz_shldi_epi64(__mmask8, __m256i, __m256i, int);
+
+
VPSHLDQ __m512i _mm512_shldi_epi64(__m512i, __m512i, int);
+
+
VPSHLDQ __m512i _mm512_mask_shldi_epi64(__m512i, __mmask8, __m512i, __m512i, int);
+
+
VPSHLDQ __m512i _mm512_maskz_shldi_epi64(__mmask8, __m512i, __m512i, int);
+
+
VPSHLDW __m128i _mm_shldi_epi16(__m128i, __m128i, int);
+
+
VPSHLDW __m128i _mm_mask_shldi_epi16(__m128i, __mmask8, __m128i, __m128i, int);
+
+
VPSHLDW __m128i _mm_maskz_shldi_epi16(__mmask8, __m128i, __m128i, int);
+
+
VPSHLDW __m256i _mm256_shldi_epi16(__m256i, __m256i, int);
+
+
VPSHLDW __m256i _mm256_mask_shldi_epi16(__m256i, __mmask16, __m256i, __m256i, int);
+
+
VPSHLDW __m256i _mm256_maskz_shldi_epi16(__mmask16, __m256i, __m256i, int);
+
+
VPSHLDW __m512i _mm512_shldi_epi16(__m512i, __m512i, int);
+
+
VPSHLDW __m512i _mm512_mask_shldi_epi16(__m512i, __mmask32, __m512i, __m512i, int);
+
+
VPSHLDW __m512i _mm512_maskz_shldi_epi16(__mmask32, __m512i, __m512i, int);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vpshldv.html b/x86/vpshldv.html new file mode 100644 index 0000000..ae85cc0 --- /dev/null +++ b/x86/vpshldv.html @@ -0,0 +1,229 @@ + +VPSHLDV + — Concatenate and Variable Shift Packed Data Left Logical

VPSHLDV + — Concatenate and Variable Shift Packed Data Left Logical

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W1 70 /r VPSHLDVW xmm1{k1}{z}, xmm2, xmm3/m128AV/VAVX512_VBMI2 AVX512VLConcatenate xmm1 and xmm2, extract result shifted to the left by value in xmm3/m128 into xmm1.
EVEX.256.66.0F38.W1 70 /r VPSHLDVW ymm1{k1}{z}, ymm2, ymm3/m256AV/VAVX512_VBMI2 AVX512VLConcatenate ymm1 and ymm2, extract result shifted to the left by value in ymm3/m256 into ymm1.
EVEX.512.66.0F38.W1 70 /r VPSHLDVW zmm1{k1}{z}, zmm2, zmm3/m512AV/VAVX512_VBMI2Concatenate zmm1 and zmm2, extract result shifted to the left by value in zmm3/m512 into zmm1.
EVEX.128.66.0F38.W0 71 /r VPSHLDVD xmm1{k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512_VBMI2 AVX512VLConcatenate xmm1 and xmm2, extract result shifted to the left by value in xmm3/m128 into xmm1.
EVEX.256.66.0F38.W0 71 /r VPSHLDVD ymm1{k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512_VBMI2 AVX512VLConcatenate ymm1 and ymm2, extract result shifted to the left by value in ymm3/m256 into ymm1.
EVEX.512.66.0F38.W0 71 /r VPSHLDVD zmm1{k1}{z}, zmm2, zmm3/m512/m32bcstBV/VAVX512_VBMI2Concatenate zmm1 and zmm2, extract result shifted to the left by value in zmm3/m512 into zmm1.
EVEX.128.66.0F38.W1 71 /r VPSHLDVQ xmm1{k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512_VBMI2 AVX512VLConcatenate xmm1 and xmm2, extract result shifted to the left by value in xmm3/m128 into xmm1.
EVEX.256.66.0F38.W1 71 /r VPSHLDVQ ymm1{k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512_VBMI2 AVX512VLConcatenate ymm1 and ymm2, extract result shifted to the left by value in ymm3/m256 into ymm1.
EVEX.512.66.0F38.W1 71 /r VPSHLDVQ zmm1{k1}{z}, zmm2, zmm3/m512/m64bcstBV/VAVX512_VBMI2Concatenate zmm1 and zmm2, extract result shifted to the left by value in zmm3/m512 into zmm1.
+

Instruction Operand Encoding

Op/En | Tuple | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | Full Mem | ModRM:reg (r, w) | EVEX.vvvv (r) | ModRM:r/m (r) | N/A
B | Full | ModRM:reg (r, w) | EVEX.vvvv (r) | ModRM:r/m (r) | N/A
+

Description

+

Concatenate packed data, extract result shifted to the left by variable value.

+

This instruction supports memory fault suppression.

+

Operation

+
FUNCTION concat(a,b):
+    IF words:
+        d.word[1] := a
+        d.word[0] := b
+        return d
+    ELSE IF dwords:
+        q.dword[1] := a
+        q.dword[0] := b
+        return q
+    ELSE IF qwords:
+        o.qword[1] := a
+        o.qword[0] := b
+        return o
+
+

VPSHLDVW DEST, SRC2, SRC3

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1:
+    IF MaskBit(j) OR *no writemask*:
+        tmp := concat(DEST.word[j], SRC2.word[j]) << (SRC3.word[j] & 15)
+        DEST.word[j] := tmp.word[1]
+    ELSE IF *zeroing*:
+        DEST.word[j] := 0
+    *ELSE DEST.word[j] remains unchanged*
+DEST[MAX_VL-1:VL] := 0
+
+

VPSHLDVD DEST, SRC2, SRC3

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1:
+    IF SRC3 is broadcast memop:
+        tsrc3 := SRC3.dword[0]
+    ELSE:
+        tsrc3 := SRC3.dword[j]
+    IF MaskBit(j) OR *no writemask*:
+        tmp := concat(DEST.dword[j], SRC2.dword[j]) << (tsrc3 & 31)
+        DEST.dword[j] := tmp.dword[1]
+    ELSE IF *zeroing*:
+        DEST.dword[j] := 0
+    *ELSE DEST.dword[j] remains unchanged*
+DEST[MAX_VL-1:VL] := 0
+
+

VPSHLDVQ DEST, SRC2, SRC3

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1:
+    IF SRC3 is broadcast memop:
+        tsrc3 := SRC3.qword[0]
+    ELSE:
+        tsrc3 := SRC3.qword[j]
+    IF MaskBit(j) OR *no writemask*:
+        tmp := concat(DEST.qword[j], SRC2.qword[j]) << (tsrc3 & 63)
+        DEST.qword[j] := tmp.qword[1]
+    ELSE IF *zeroing*:
+        DEST.qword[j] := 0
+    *ELSE DEST.qword[j] remains unchanged*
+DEST[MAX_VL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent

+
VPSHLDVQ __m512i _mm512_shldv_epi64(__m512i, __m512i, __m512i);
+
+
VPSHLDVQ __m512i _mm512_mask_shldv_epi64(__m512i, __mmask8, __m512i, __m512i);
+
+
VPSHLDVQ __m512i _mm512_maskz_shldv_epi64(__mmask8, __m512i, __m512i, __m512i);
+
+
VPSHLDVW __m128i _mm_shldv_epi16(__m128i, __m128i, __m128i);
+
+
VPSHLDVW __m128i _mm_mask_shldv_epi16(__m128i, __mmask8, __m128i, __m128i);
+
+
VPSHLDVW __m128i _mm_maskz_shldv_epi16(__mmask8, __m128i, __m128i, __m128i);
+
+
VPSHLDVW __m256i _mm256_shldv_epi16(__m256i, __m256i, __m256i);
+
+
VPSHLDVW __m256i _mm256_mask_shldv_epi16(__m256i, __mmask16, __m256i, __m256i);
+
+
VPSHLDVW __m256i _mm256_maskz_shldv_epi16(__mmask16, __m256i, __m256i, __m256i);
+
+
VPSHLDVW __m512i _mm512_shldv_epi16(__m512i, __m512i, __m512i);
+
+
VPSHLDVW __m512i _mm512_mask_shldv_epi16(__m512i, __mmask32, __m512i, __m512i);
+
+
VPSHLDVW __m512i _mm512_maskz_shldv_epi16(__mmask32, __m512i, __m512i, __m512i);
+
+
VPSHLDVD __m128i _mm_shldv_epi32(__m128i, __m128i, __m128i);
+
+
VPSHLDVD __m128i _mm_mask_shldv_epi32(__m128i, __mmask8, __m128i, __m128i);
+
+
VPSHLDVD __m128i _mm_maskz_shldv_epi32(__mmask8, __m128i, __m128i, __m128i);
+
+
VPSHLDVD __m256i _mm256_shldv_epi32(__m256i, __m256i, __m256i);
+
+
VPSHLDVD __m256i _mm256_mask_shldv_epi32(__m256i, __mmask8, __m256i, __m256i);
+
+
VPSHLDVD __m256i _mm256_maskz_shldv_epi32(__mmask8, __m256i, __m256i, __m256i);
+
+
VPSHLDVD __m512i _mm512_shldv_epi32(__m512i, __m512i, __m512i);
+
+
VPSHLDVD __m512i _mm512_mask_shldv_epi32(__m512i, __mmask16, __m512i, __m512i);
+
+
VPSHLDVD __m512i _mm512_maskz_shldv_epi32(__mmask16, __m512i, __m512i, __m512i);
+
+
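The variable form differs from the immediate form in that the destination register supplies the first data operand and each lane carries its own count. A minimal scalar sketch of one word lane (illustrative only; the helper name is an assumption):

#include <stdint.h>

/* One word lane of VPSHLDVW dest{k1}, src2, src3 (_mm_shldv_epi16):
   concatenate dest:src2, shift left by the low four bits of the per-lane
   count in src3, and keep the upper word. */
static inline uint16_t shldv16_lane(uint16_t dest, uint16_t src2, uint16_t src3)
{
    uint32_t concat = ((uint32_t)dest << 16) | src2;  /* dest in the high half */
    return (uint16_t)((concat << (src3 & 15)) >> 16); /* upper word of the shift */
}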

SIMD Floating-Point Exceptions

+

None.

+

Other Exceptions

+

See Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vpshrd.html b/x86/vpshrd.html new file mode 100644 index 0000000..6c7e20e --- /dev/null +++ b/x86/vpshrd.html @@ -0,0 +1,212 @@ + +VPSHRD + — Concatenate and Shift Packed Data Right Logical

VPSHRD + — Concatenate and Shift Packed Data Right Logical

Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F3A.W1 72 /r /ib VPSHRDW xmm1{k1}{z}, xmm2, xmm3/m128, imm8AV/VAVX512_VBMI2 AVX512VLConcatenate destination and source operands, extract result shifted to the right by constant value in imm8 into xmm1.
EVEX.256.66.0F3A.W1 72 /r /ib VPSHRDW ymm1{k1}{z}, ymm2, ymm3/m256, imm8AV/VAVX512_VBMI2 AVX512VLConcatenate destination and source operands, extract result shifted to the right by constant value in imm8 into ymm1.
EVEX.512.66.0F3A.W1 72 /r /ib VPSHRDW zmm1{k1}{z}, zmm2, zmm3/m512, imm8AV/VAVX512_VBMI2Concatenate destination and source operands, extract result shifted to the right by constant value in imm8 into zmm1.
EVEX.128.66.0F3A.W0 73 /r /ib VPSHRDD xmm1{k1}{z}, xmm2, xmm3/m128/m32bcst, imm8BV/VAVX512_VBMI2 AVX512VLConcatenate destination and source operands, extract result shifted to the right by constant value in imm8 into xmm1.
EVEX.256.66.0F3A.W0 73 /r /ib VPSHRDD ymm1{k1}{z}, ymm2, ymm3/m256/m32bcst, imm8BV/VAVX512_VBMI2 AVX512VLConcatenate destination and source operands, extract result shifted to the right by constant value in imm8 into ymm1.
EVEX.512.66.0F3A.W0 73 /r /ib VPSHRDD zmm1{k1}{z}, zmm2, zmm3/m512/m32bcst, imm8BV/VAVX512_VBMI2Concatenate destination and source operands, extract result shifted to the right by constant value in imm8 into zmm1.
EVEX.128.66.0F3A.W1 73 /r /ib VPSHRDQ xmm1{k1}{z}, xmm2, xmm3/m128/m64bcst, imm8BV/VAVX512_VBMI2 AVX512VLConcatenate destination and source operands, extract result shifted to the right by constant value in imm8 into xmm1.
EVEX.256.66.0F3A.W1 73 /r /ib VPSHRDQ ymm1{k1}{z}, ymm2, ymm3/m256/m64bcst, imm8BV/VAVX512_VBMI2 AVX512VLConcatenate destination and source operands, extract result shifted to the right by constant value in imm8 into ymm1.
EVEX.512.66.0F3A.W1 73 /r /ib VPSHRDQ zmm1{k1}{z}, zmm2, zmm3/m512/m64bcst, imm8BV/VAVX512_VBMI2Concatenate destination and source operands, extract result shifted to the right by constant value in imm8 into zmm1.
+

Instruction Operand Encoding

Op/En | Tuple | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | Full Mem | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | imm8 (r)
B | Full | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | imm8 (r)
+

Description

+

Concatenate packed data, extract result shifted to the right by constant value.

+

This instruction supports memory fault suppression.

+

Operation

+

VPSHRDW DEST, SRC2, SRC3, imm8

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1:
+    IF MaskBit(j) OR *no writemask*:
+        DEST.word[j] := concat(SRC3.word[j], SRC2.word[j]) >> (imm8 & 15)
+    ELSE IF *zeroing*:
+        DEST.word[j] := 0
+    *ELSE DEST.word[j] remains unchanged*
+DEST[MAX_VL-1:VL] := 0
+
+

VPSHRDD DEST, SRC2, SRC3, imm8

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1:
+    IF SRC3 is broadcast memop:
+        tsrc3 := SRC3.dword[0]
+    ELSE:
+        tsrc3 := SRC3.dword[j]
+    IF MaskBit(j) OR *no writemask*:
+        DEST.dword[j] := concat(tsrc3, SRC2.dword[j]) >> (imm8 & 31)
+    ELSE IF *zeroing*:
+        DEST.dword[j] := 0
+    *ELSE DEST.dword[j] remains unchanged*
+DEST[MAX_VL-1:VL] := 0
+
+

VPSHRDQ DEST, SRC2, SRC3, imm8

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1:
+    IF SRC3 is broadcast memop:
+        tsrc3 := SRC3.qword[0]
+    ELSE:
+        tsrc3 := SRC3.qword[j]
+    IF MaskBit(j) OR *no writemask*:
+        DEST.qword[j] := concat(tsrc3, SRC2.qword[j]) >> (imm8 & 63)
+    ELSE IF *zeroing*:
+        DEST.qword[j] := 0
+    *ELSE DEST.qword[j] remains unchanged*
+DEST[MAX_VL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent

+
VPSHRDQ __m128i _mm_shrdi_epi64(__m128i, __m128i, int);
+
+
VPSHRDQ __m128i _mm_mask_shrdi_epi64(__m128i, __mmask8, __m128i, __m128i, int);
+
+
VPSHRDQ __m128i _mm_maskz_shrdi_epi64(__mmask8, __m128i, __m128i, int);
+
+
VPSHRDQ __m256i _mm256_shrdi_epi64(__m256i, __m256i, int);
+
+
VPSHRDQ __m256i _mm256_mask_shrdi_epi64(__m256i, __mmask8, __m256i, __m256i, int);
+
+
VPSHRDQ __m256i _mm256_maskz_shrdi_epi64(__mmask8, __m256i, __m256i, int);
+
+
VPSHRDQ __m512i _mm512_shrdi_epi64(__m512i, __m512i, int);
+
+
VPSHRDQ __m512i _mm512_mask_shrdi_epi64(__m512i, __mmask8, __m512i, __m512i, int);
+
+
VPSHRDQ __m512i _mm512_maskz_shrdi_epi64(__mmask8, __m512i, __m512i, int);
+
+
VPSHRDD __m128i _mm_shrdi_epi32(__m128i, __m128i, int);
+
+
VPSHRDD __m128i _mm_mask_shrdi_epi32(__m128i, __mmask8, __m128i, __m128i, int);
+
+
VPSHRDD __m128i _mm_maskz_shrdi_epi32(__mmask8, __m128i, __m128i, int);
+
+
VPSHRDD __m256i _mm256_shrdi_epi32(__m256i, __m256i, int);
+
+
VPSHRDD __m256i _mm256_mask_shrdi_epi32(__m256i, __mmask8, __m256i, __m256i, int);
+
+
VPSHRDD __m256i _mm256_maskz_shrdi_epi32(__mmask8, __m256i, __m256i, int);
+
+
VPSHRDD __m512i _mm512_shrdi_epi32(__m512i, __m512i, int);
+
+
VPSHRDD __m512i _mm512_mask_shrdi_epi32(__m512i, __mmask16, __m512i, __m512i, int);
+
+
VPSHRDD __m512i _mm512_maskz_shrdi_epi32(__mmask16, __m512i, __m512i, int);
+
+
VPSHRDW __m128i _mm_shrdi_epi16(__m128i, __m128i, int);
+
+
VPSHRDW __m128i _mm_mask_shrdi_epi16(__m128i, __mmask8, __m128i, __m128i, int);
+
+
VPSHRDW __m128i _mm_maskz_shrdi_epi16(__mmask8, __m128i, __m128i, int);
+
+
VPSHRDW __m256i _mm256_shrdi_epi16(__m256i, __m256i, int);
+
+
VPSHRDW __m256i _mm256_mask_shrdi_epi16(__m256i, __mmask16, __m256i, __m256i, int);
+
+
VPSHRDW __m256i _mm256_maskz_shrdi_epi16(__mmask16, __m256i, __m256i, int);
+
+
VPSHRDW __m512i _mm512_shrdi_epi16(__m512i, __m512i, int);
+
+
VPSHRDW __m512i _mm512_mask_shrdi_epi16(__m512i, __mmask32, __m512i, __m512i, int);
+
+
VPSHRDW __m512i _mm512_maskz_shrdi_epi16(__mmask32, __m512i, __m512i, int);
+
+
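Because the result of the immediate right form is the low element of concat(SRC3, SRC2) shifted right, passing the same register as both sources yields an element-wise rotate. An illustrative usage sketch (assumes an AVX512_VBMI2 target and a compiler flag such as -mavx512vbmi2):

#include <immintrin.h>

/* Per-lane rotate right by 7: with identical sources the concatenated value
   is v:v, so shifting right by 7 and keeping the low dword rotates each lane. */
__m512i rotr32_by7(__m512i v)
{
    return _mm512_shrdi_epi32(v, v, 7);
}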

SIMD Floating-Point Exceptions

+

None.

+

Other Exceptions

+

See Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vpshrdv.html b/x86/vpshrdv.html new file mode 100644 index 0000000..2f47594 --- /dev/null +++ b/x86/vpshrdv.html @@ -0,0 +1,212 @@ + +VPSHRDV + — Concatenate and Variable Shift Packed Data Right Logical

VPSHRDV + — Concatenate and Variable Shift Packed Data Right Logical

Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W1 72 /r VPSHRDVW xmm1{k1}{z}, xmm2, xmm3/m128AV/VAVX512_VBMI2 AVX512VLConcatenate xmm1 and xmm2, extract result shifted to the right by value in xmm3/m128 into xmm1.
EVEX.256.66.0F38.W1 72 /r VPSHRDVW ymm1{k1}{z}, ymm2, ymm3/m256AV/VAVX512_VBMI2 AVX512VLConcatenate ymm1 and ymm2, extract result shifted to the right by value in ymm3/m256 into ymm1.
EVEX.512.66.0F38.W1 72 /r VPSHRDVW zmm1{k1}{z}, zmm2, zmm3/m512AV/VAVX512_VBMI2Concatenate zmm1 and zmm2, extract result shifted to the right by value in zmm3/m512 into zmm1.
EVEX.128.66.0F38.W0 73 /r VPSHRDVD xmm1{k1}{z}, xmm2, xmm3/m128/m32bcstBV/VAVX512_VBMI2 AVX512VLConcatenate xmm1 and xmm2, extract result shifted to the right by value in xmm3/m128 into xmm1.
EVEX.256.66.0F38.W0 73 /r VPSHRDVD ymm1{k1}{z}, ymm2, ymm3/m256/m32bcstBV/VAVX512_VBMI2 AVX512VLConcatenate ymm1 and ymm2, extract result shifted to the right by value in ymm3/m256 into ymm1.
EVEX.512.66.0F38.W0 73 /r VPSHRDVD zmm1{k1}{z}, zmm2, zmm3/m512/m32bcstBV/VAVX512_VBMI2Concatenate zmm1 and zmm2, extract result shifted to the right by value in zmm3/m512 into zmm1.
EVEX.128.66.0F38.W1 73 /r VPSHRDVQ xmm1{k1}{z}, xmm2, xmm3/m128/m64bcstBV/VAVX512_VBMI2 AVX512VLConcatenate xmm1 and xmm2, extract result shifted to the right by value in xmm3/m128 into xmm1.
EVEX.256.66.0F38.W1 73 /r VPSHRDVQ ymm1{k1}{z}, ymm2, ymm3/m256/m64bcstBV/VAVX512_VBMI2 AVX512VLConcatenate ymm1 and ymm2, extract result shifted to the right by value in ymm3/m256 into ymm1.
EVEX.512.66.0F38.W1 73 /r VPSHRDVQ zmm1{k1}{z}, zmm2, zmm3/m512/m64bcstBV/VAVX512_VBMI2Concatenate zmm1 and zmm2, extract result shifted to the right by value in zmm3/m512 into zmm1.
+

Instruction Operand Encoding

Op/En | Tuple | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | Full Mem | ModRM:reg (r, w) | EVEX.vvvv (r) | ModRM:r/m (r) | N/A
B | Full | ModRM:reg (r, w) | EVEX.vvvv (r) | ModRM:r/m (r) | N/A
+

Description

+

Concatenate packed data, extract result shifted to the right by variable value.

+

This instruction supports memory fault suppression.

+

Operation

+

VPSHRDVW DEST, SRC2, SRC3

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1:
+    IF MaskBit(j) OR *no writemask*:
+        DEST.word[j] := concat(SRC2.word[j], DEST.word[j]) >> (SRC3.word[j] & 15)
+    ELSE IF *zeroing*:
+        DEST.word[j] := 0
+    *ELSE DEST.word[j] remains unchanged*
+DEST[MAX_VL-1:VL] := 0
+
+

VPSHRDVD DEST, SRC2, SRC3

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1:
+    IF SRC3 is broadcast memop:
+        tsrc3 := SRC3.dword[0]
+    ELSE:
+        tsrc3 := SRC3.dword[j]
+    IF MaskBit(j) OR *no writemask*:
+        DEST.dword[j] := concat(SRC2.dword[j], DEST.dword[j]) >> (tsrc3 & 31)
+    ELSE IF *zeroing*:
+        DEST.dword[j] := 0
+    *ELSE DEST.dword[j] remains unchanged*
+DEST[MAX_VL-1:VL] := 0
+
+

VPSHRDVQ DEST, SRC2, SRC3

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1:
+    IF SRC3 is broadcast memop:
+        tsrc3 := SRC3.qword[0]
+    ELSE:
+        tsrc3 := SRC3.qword[j]
+    IF MaskBit(j) OR *no writemask*:
+        DEST.qword[j] := concat(SRC2.qword[j], DEST.qword[j]) >> (tsrc3 & 63)
+    ELSE IF *zeroing*:
+        DEST.qword[j] := 0
+    *ELSE DEST.qword[j] remains unchanged*
+DEST[MAX_VL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent

+
VPSHRDVQ __m128i _mm_shrdv_epi64(__m128i, __m128i, __m128i);
+
+
VPSHRDVQ __m128i _mm_mask_shrdv_epi64(__m128i, __mmask8, __m128i, __m128i);
+
+
VPSHRDVQ __m128i _mm_maskz_shrdv_epi64(__mmask8, __m128i, __m128i, __m128i);
+
+
VPSHRDVQ __m256i _mm256_shrdv_epi64(__m256i, __m256i, __m256i);
+
+
VPSHRDVQ __m256i _mm256_mask_shrdv_epi64(__m256i, __mmask8, __m256i, __m256i);
+
+
VPSHRDVQ __m256i _mm256_maskz_shrdv_epi64(__mmask8, __m256i, __m256i, __m256i);
+
+
VPSHRDVQ __m512i _mm512_shrdv_epi64(__m512i, __m512i, __m512i);
+
+
VPSHRDVQ __m512i _mm512_mask_shrdv_epi64(__m512i, __mmask8, __m512i, __m512i);
+
+
VPSHRDVQ __m512i _mm512_maskz_shrdv_epi64(__mmask8, __m512i, __m512i, __m512i);
+
+
VPSHRDVD __m128i _mm_shrdv_epi32(__m128i, __m128i, __m128i);
+
+
VPSHRDVD __m128i _mm_mask_shrdv_epi32(__m128i, __mmask8, __m128i, __m128i);
+
+
VPSHRDVD __m128i _mm_maskz_shrdv_epi32(__mmask8, __m128i, __m128i, __m128i);
+
+
VPSHRDVD __m256i _mm256_shrdv_epi32(__m256i, __m256i, __m256i);
+
+
VPSHRDVD __m256i _mm256_mask_shrdv_epi32(__m256i, __mmask8, __m256i, __m256i);
+
+
VPSHRDVD __m256i _mm256_maskz_shrdv_epi32(__mmask8, __m256i, __m256i, __m256i);
+
+
VPSHRDVD __m512i _mm512_shrdv_epi32(__m512i, __m512i, __m512i);
+
+
VPSHRDVD __m512i _mm512_mask_shrdv_epi32(__m512i, __mmask16, __m512i, __m512i);
+
+
VPSHRDVD __m512i _mm512_maskz_shrdv_epi32(__mmask16, __m512i, __m512i, __m512i);
+
+
VPSHRDVW __m128i _mm_shrdv_epi16(__m128i, __m128i, __m128i);
+
+
VPSHRDVW __m128i _mm_mask_shrdv_epi16(__m128i, __mmask8, __m128i, __m128i);
+
+
VPSHRDVW __m128i _mm_maskz_shrdv_epi16(__mmask8, __m128i, __m128i, __m128i);
+
+
VPSHRDVW __m256i _mm256_shrdv_epi16(__m256i, __m256i, __m256i);
+
+
VPSHRDVW __m256i _mm256_mask_shrdv_epi16(__m256i, __mmask16, __m256i, __m256i);
+
+
VPSHRDVW __m256i _mm256_maskz_shrdv_epi16(__mmask16, __m256i, __m256i, __m256i);
+
+
VPSHRDVW __m512i _mm512_shrdv_epi16(__m512i, __m512i, __m512i);
+
+
VPSHRDVW __m512i _mm512_mask_shrdv_epi16(__m512i, __mmask32, __m512i, __m512i);
+
+
VPSHRDVW __m512i _mm512_maskz_shrdv_epi16(__mmask32, __m512i, __m512i, __m512i);
+
+
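For comparison with the left variant, the right variant keeps the low element and places the second source in the high half of the concatenation. A scalar sketch of one dword lane (illustrative only; the helper name is an assumption):

#include <stdint.h>

/* One dword lane of VPSHRDVD dest{k1}, src2, src3: concatenate src2:dest,
   shift right by the low five bits of the per-lane count, keep the low dword. */
static inline uint32_t shrdv32_lane(uint32_t dest, uint32_t src2, uint32_t src3)
{
    uint64_t concat = ((uint64_t)src2 << 32) | dest;  /* src2 in the high half */
    return (uint32_t)(concat >> (src3 & 31));         /* low dword of the shift */
}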

SIMD Floating-Point Exceptions

+

None.

+

Other Exceptions

+

See Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vpshufbitqmb.html b/x86/vpshufbitqmb.html new file mode 100644 index 0000000..82009c7 --- /dev/null +++ b/x86/vpshufbitqmb.html @@ -0,0 +1,90 @@ + +VPSHUFBITQMB + — Shuffle Bits From Quadword Elements Using Byte Indexes Into Mask

VPSHUFBITQMB + — Shuffle Bits From Quadword Elements Using Byte Indexes Into Mask

Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 8F /r VPSHUFBITQMB k1{k2}, xmm2, xmm3/m128AV/VAVX512_BITALG AVX512VLExtract values in xmm2 using control bits of xmm3/m128 with writemask k2 and leave the result in mask register k1.
EVEX.256.66.0F38.W0 8F /r VPSHUFBITQMB k1{k2}, ymm2, ymm3/m256AV/VAVX512_BITALG AVX512VLExtract values in ymm2 using control bits of ymm3/m256 with writemask k2 and leave the result in mask register k1.
EVEX.512.66.0F38.W0 8F /r VPSHUFBITQMB k1{k2}, zmm2, zmm3/m512AV/VAVX512_BITALGExtract values in zmm2 using control bits of zmm3/m512 with writemask k2 and leave the result in mask register k1.
+

Instruction Operand Encoding

Op/En | Tuple | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | Full Mem | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | N/A
+

Description

+

The VPSHUFBITQMB instruction performs a bit gather select using second source as control and first source as data. Each bit uses 6 control bits (2nd source operand) to select which data bit is going to be gathered (first source operand). A given bit can only access 64 different bits of data (first 64 destination bits can access first 64 data bits, second 64 destination bits can access second 64 data bits, etc.).

+

Control data for each output bit is stored in 8 bit elements of SRC2, but only the 6 least significant bits of each element are used.

+

This instruction uses write masking (zeroing only). This instruction supports memory fault suppression.

+

The first source operand is a ZMM/YMM/XMM register. The second source operand is a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination operand is a mask register.

+

Operation

+

VPSHUFBITQMB DEST, SRC1, SRC2

+
(KL, VL) = (16,128), (32,256), (64, 512)
+FOR i := 0 TO KL/8-1: //Qword
+    FOR j := 0 to 7: // Byte
+        IF k2[i*8+j] or *no writemask*:
+            m := SRC2.qword[i].byte[j] & 0x3F
+            k1[i*8+j] := SRC1.qword[i].bit[m]
+        ELSE:
+            k1[i*8+j] := 0
+k1[MAX_KL-1:KL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent

+
VPSHUFBITQMB __mmask16 _mm_bitshuffle_epi64_mask(__m128i, __m128i);
+
+
VPSHUFBITQMB __mmask16 _mm_mask_bitshuffle_epi64_mask(__mmask16, __m128i, __m128i);
+
+
VPSHUFBITQMB __mmask32 _mm256_bitshuffle_epi64_mask(__m256i, __m256i);
+
+
VPSHUFBITQMB __mmask32 _mm256_mask_bitshuffle_epi64_mask(__mmask32, __m256i, __m256i);
+
+
VPSHUFBITQMB __mmask64 _mm512_bitshuffle_epi64_mask(__m512i, __m512i);
+
+
VPSHUFBITQMB __mmask64 _mm512_mask_bitshuffle_epi64_mask(__mmask64, __m512i, __m512i);
+
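A scalar sketch of the bit gather performed on one 64-bit lane may help when reading the operation above (illustrative only; the helper name is an assumption):

#include <stdint.h>

/* One qword lane of VPSHUFBITQMB: control byte j selects, via its low six
   bits, which bit of the data qword becomes mask bit j. */
static inline uint8_t bitshuffle_qword(uint64_t data, uint64_t ctrl)
{
    uint8_t out = 0;
    for (int j = 0; j < 8; j++) {
        unsigned sel = (unsigned)((ctrl >> (8 * j)) & 0x3F); /* low 6 bits of byte j */
        out |= (uint8_t)(((data >> sel) & 1) << j);          /* gathered bit -> bit j */
    }
    return out;
}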
diff --git a/x86/vpsllvw.vpsllvd.vpsllvq.html b/x86/vpsllvw.vpsllvd.vpsllvq.html new file mode 100644 index 0000000..a0c910e --- /dev/null +++ b/x86/vpsllvw.vpsllvd.vpsllvq.html @@ -0,0 +1,327 @@ + +VPSLLVW/VPSLLVD/VPSLLVQ + — Variable Bit Shift Left Logical

VPSLLVW/VPSLLVD/VPSLLVQ + — Variable Bit Shift Left Logical

Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W0 47 /r VPSLLVD xmm1, xmm2, xmm3/m128AV/VAVX2Shift doublewords in xmm2 left by amount specified in the corresponding element of xmm3/m128 while shifting in 0s.
VEX.128.66.0F38.W1 47 /r VPSLLVQ xmm1, xmm2, xmm3/m128AV/VAVX2Shift quadwords in xmm2 left by amount specified in the corresponding element of xmm3/m128 while shifting in 0s.
VEX.256.66.0F38.W0 47 /r VPSLLVD ymm1, ymm2, ymm3/m256AV/VAVX2Shift doublewords in ymm2 left by amount specified in the corresponding element of ymm3/m256 while shifting in 0s.
VEX.256.66.0F38.W1 47 /r VPSLLVQ ymm1, ymm2, ymm3/m256AV/VAVX2Shift quadwords in ymm2 left by amount specified in the corresponding element of ymm3/m256 while shifting in 0s.
EVEX.128.66.0F38.W1 12 /r VPSLLVW xmm1 {k1}{z}, xmm2, xmm3/m128BV/VAVX512VL AVX512BWShift words in xmm2 left by amount specified in the corresponding element of xmm3/m128 while shifting in 0s using writemask k1.
EVEX.256.66.0F38.W1 12 /r VPSLLVW ymm1 {k1}{z}, ymm2, ymm3/m256BV/VAVX512VL AVX512BWShift words in ymm2 left by amount specified in the corresponding element of ymm3/m256 while shifting in 0s using writemask k1.
EVEX.512.66.0F38.W1 12 /r VPSLLVW zmm1 {k1}{z}, zmm2, zmm3/m512BV/VAVX512BWShift words in zmm2 left by amount specified in the corresponding element of zmm3/m512 while shifting in 0s using writemask k1.
EVEX.128.66.0F38.W0 47 /r VPSLLVD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512FShift doublewords in xmm2 left by amount specified in the corresponding element of xmm3/m128/m32bcst while shifting in 0s using writemask k1.
EVEX.256.66.0F38.W0 47 /r VPSLLVD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512FShift doublewords in ymm2 left by amount specified in the corresponding element of ymm3/m256/m32bcst while shifting in 0s using writemask k1.
EVEX.512.66.0F38.W0 47 /r VPSLLVD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstCV/VAVX512FShift doublewords in zmm2 left by amount specified in the corresponding element of zmm3/m512/m32bcst while shifting in 0s using writemask k1.
EVEX.128.66.0F38.W1 47 /r VPSLLVQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512FShift quadwords in xmm2 left by amount specified in the corresponding element of xmm3/m128/m64bcst while shifting in 0s using writemask k1.
EVEX.256.66.0F38.W1 47 /r VPSLLVQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FShift quadwords in ymm2 left by amount specified in the corresponding element of ymm3/m256/m64bcst while shifting in 0s using writemask k1.
EVEX.512.66.0F38.W1 47 /r VPSLLVQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstCV/VAVX512FShift quadwords in zmm2 left by amount specified in the corresponding element of zmm3/m512/m64bcst while shifting in 0s using writemask k1.
+

Instruction Operand Encoding

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | N/A | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | N/A
B | Full Mem | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | N/A
C | Full | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | N/A
+

Description

+

Shifts the bits in the individual data elements (words, doublewords or quadword) in the first source operand to the left by the count value of respective data elements in the second source operand. As the bits in the data elements are shifted left, the empty low-order bits are cleared (set to 0).

+

The count values are specified individually in each data element of the second source operand. If the unsigned integer value specified in the respective data element of the second source operand is greater than 15 (for a word), 31 (for a doubleword), or 63 (for a quadword), then the destination data element is written with 0.

+

VEX.128 encoded version: The destination and first source operands are XMM registers. The count operand can be either an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

VEX.256 encoded version: The destination and first source operands are YMM registers. The count operand can be either an YMM register or a 256-bit memory. Bits (MAXVL-1:256) of the corresponding ZMM register are zeroed.

+

EVEX encoded VPSLLVD/Q: The destination and first source operands are ZMM/YMM/XMM registers. The count operand can be either a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512-bit vector broadcasted from a 32/64-bit memory location. The destination is conditionally updated with writemask k1.

+

EVEX encoded VPSLLVW: The destination and first source operands are ZMM/YMM/XMM registers. The count operand can be either a ZMM/YMM/XMM register, a 512/256/128-bit memory location. The destination is conditionally updated with writemask k1.

+

Operation

+

VPSLLVW (EVEX encoded version)

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := ZeroExtend(SRC1[i+15:i] << SRC2[i+15:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0;
+
+

VPSLLVD (VEX.128 version)

+
COUNT_0 := SRC2[31 : 0]
+    (* Repeat Each COUNT_i for the 2nd through 4th dwords of SRC2*)
+COUNT_3 := SRC2[127 : 96];
+IF COUNT_0 < 32 THEN
+DEST[31:0] := ZeroExtend(SRC1[31:0] << COUNT_0);
+ELSE
+DEST[31:0] := 0;
+    (* Repeat shift operation for 2nd through 4th dwords *)
+IF COUNT_3 < 32 THEN
+DEST[127:96] := ZeroExtend(SRC1[127:96] << COUNT_3);
+ELSE
+DEST[127:96] := 0;
+DEST[MAXVL-1:128] := 0;
+
+

VPSLLVD (VEX.256 version)

+
COUNT_0 := SRC2[31 : 0];
+    (* Repeat Each COUNT_i for the 2nd through 7th dwords of SRC2*)
+COUNT_7 := SRC2[255 : 224];
+IF COUNT_0 < 32 THEN
+DEST[31:0] := ZeroExtend(SRC1[31:0] << COUNT_0);
+ELSE
+DEST[31:0] := 0;
+    (* Repeat shift operation for 2nd through 7th dwords *)
+IF COUNT_7 < 32 THEN
+DEST[255:224] := ZeroExtend(SRC1[255:224] << COUNT_7);
+ELSE
+DEST[255:224] := 0;
+DEST[MAXVL-1:256] := 0;
+
+

VPSLLVD (EVEX encoded version)

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[i+31:i] := ZeroExtend(SRC1[i+31:i] << SRC2[31:0])
+                ELSE DEST[i+31:i] := ZeroExtend(SRC1[i+31:i] << SRC2[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0;
+
+

VPSLLVQ (VEX.128 version)

+
COUNT_0 := SRC2[63 : 0];
+COUNT_1 := SRC2[127 : 64];
+IF COUNT_0 < 64 THEN
+DEST[63:0] := ZeroExtend(SRC1[63:0] << COUNT_0);
+ELSE
+DEST[63:0] := 0;
+IF COUNT_1 < 64 THEN
+DEST[127:64] := ZeroExtend(SRC1[127:64] << COUNT_1);
+ELSE
+DEST[127:64] := 0;
+DEST[MAXVL-1:128] := 0;
+
+

VPSLLVQ (VEX.256 version)

+
COUNT_0 := SRC2[63 : 0];
+    (* Repeat Each COUNT_i for the 2nd through 4th qwords of SRC2*)
+COUNT_3 := SRC2[255 : 192];
+IF COUNT_0 < 64 THEN
+DEST[63:0] := ZeroExtend(SRC1[63:0] << COUNT_0);
+ELSE
+DEST[63:0] := 0;
+    (* Repeat shift operation for 2nd through 4th qwords *)
+IF COUNT_3 < 64 THEN
+DEST[255:192] := ZeroExtend(SRC1[255:192] << COUNT_3);
+ELSE
+DEST[255:192] := 0;
+DEST[MAXVL-1:256] := 0;
+
+

VPSLLVQ (EVEX encoded version)

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[i+63:i] := ZeroExtend(SRC1[i+63:i] << SRC2[63:0])
+                ELSE DEST[i+63:i] := ZeroExtend(SRC1[i+63:i] << SRC2[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0;
+
+

Intel C/C++ Compiler Intrinsic Equivalent

+
VPSLLVW __m512i _mm512_sllv_epi16(__m512i a, __m512i cnt);
+
+
VPSLLVW __m512i _mm512_mask_sllv_epi16(__m512i s, __mmask32 k, __m512i a, __m512i cnt);
+
+
VPSLLVW __m512i _mm512_maskz_sllv_epi16( __mmask32 k, __m512i a, __m512i cnt);
+
+
VPSLLVW __m256i _mm256_mask_sllv_epi16(__m256i s, __mmask16 k, __m256i a, __m256i cnt);
+
+
VPSLLVW __m256i _mm256_maskz_sllv_epi16( __mmask16 k, __m256i a, __m256i cnt);
+
+
VPSLLVW __m128i _mm_mask_sllv_epi16(__m128i s, __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSLLVW __m128i _mm_maskz_sllv_epi16( __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSLLVD __m512i _mm512_sllv_epi32(__m512i a, __m512i cnt);
+
+
VPSLLVD __m512i _mm512_mask_sllv_epi32(__m512i s, __mmask16 k, __m512i a, __m512i cnt);
+
+
VPSLLVD __m512i _mm512_maskz_sllv_epi32( __mmask16 k, __m512i a, __m512i cnt);
+
+
VPSLLVD __m256i _mm256_mask_sllv_epi32(__m256i s, __mmask8 k, __m256i a, __m256i cnt);
+
+
VPSLLVD __m256i _mm256_maskz_sllv_epi32( __mmask8 k, __m256i a, __m256i cnt);
+
+
VPSLLVD __m128i _mm_mask_sllv_epi32(__m128i s, __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSLLVD __m128i _mm_maskz_sllv_epi32( __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSLLVQ __m512i _mm512_sllv_epi64(__m512i a, __m512i cnt);
+
+
VPSLLVQ __m512i _mm512_mask_sllv_epi64(__m512i s, __mmask8 k, __m512i a, __m512i cnt);
+
+
VPSLLVQ __m512i _mm512_maskz_sllv_epi64( __mmask8 k, __m512i a, __m512i cnt);
+
+
VPSLLVQ __m256i _mm256_mask_sllv_epi64(__m256i s, __mmask8 k, __m256i a, __m256i cnt);
+
+
VPSLLVQ __m256i _mm256_maskz_sllv_epi64( __mmask8 k, __m256i a, __m256i cnt);
+
+
VPSLLVQ __m128i _mm_mask_sllv_epi64(__m128i s, __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSLLVQ __m128i _mm_maskz_sllv_epi64( __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSLLVD __m256i _mm256_sllv_epi32 (__m256i m, __m256i count)
+
+
VPSLLVQ __m256i _mm256_sllv_epi64 (__m256i m, __m256i count)
+
+
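The following usage sketch of the AVX2 doubleword form shows the defined behavior for out-of-range counts (illustrative only; assumes an AVX2 target, e.g., -mavx2):

#include <immintrin.h>

/* Per-lane left shift: lanes whose count exceeds 31 are written with 0
   instead of producing an undefined result as scalar shifts would. */
void sllv_demo(void)
{
    __m256i x = _mm256_set1_epi32(1);
    __m256i n = _mm256_setr_epi32(0, 1, 2, 3, 31, 32, 33, 100);
    __m256i r = _mm256_sllv_epi32(x, n);
    /* r = { 1, 2, 4, 8, 1u << 31, 0, 0, 0 } */
    (void)r;
}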

SIMD Floating-Point Exceptions

+

None.

+

Other Exceptions

+

VEX-encoded instructions, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded VPSLLVD/VPSLLVQ, see Table 2-49, “Type E4 Class Exception Conditions.”

+

EVEX-encoded VPSLLVW, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vpsravw.vpsravd.vpsravq.html b/x86/vpsravw.vpsravd.vpsravq.html new file mode 100644 index 0000000..b01822a --- /dev/null +++ b/x86/vpsravw.vpsravd.vpsravq.html @@ -0,0 +1,320 @@ + +VPSRAVW/VPSRAVD/VPSRAVQ + — Variable Bit Shift Right Arithmetic

VPSRAVW/VPSRAVD/VPSRAVQ + — Variable Bit Shift Right Arithmetic

Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W0 46 /r VPSRAVD xmm1, xmm2, xmm3/m128AV/VAVX2Shift doublewords in xmm2 right by amount specified in the corresponding element of xmm3/m128 while shifting in sign bits.
VEX.256.66.0F38.W0 46 /r VPSRAVD ymm1, ymm2, ymm3/m256AV/VAVX2Shift doublewords in ymm2 right by amount specified in the corresponding element of ymm3/m256 while shifting in sign bits.
EVEX.128.66.0F38.W1 11 /r VPSRAVW xmm1 {k1}{z}, xmm2, xmm3/m128BV/VAVX512VL AVX512BWShift words in xmm2 right by amount specified in the corresponding element of xmm3/m128 while shifting in sign bits using writemask k1.
EVEX.256.66.0F38.W1 11 /r VPSRAVW ymm1 {k1}{z}, ymm2, ymm3/m256BV/VAVX512VL AVX512BWShift words in ymm2 right by amount specified in the corresponding element of ymm3/m256 while shifting in sign bits using writemask k1.
EVEX.512.66.0F38.W1 11 /r VPSRAVW zmm1 {k1}{z}, zmm2, zmm3/m512BV/VAVX512BWShift words in zmm2 right by amount specified in the corresponding element of zmm3/m512 while shifting in sign bits using writemask k1.
EVEX.128.66.0F38.W0 46 /r VPSRAVD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512FShift doublewords in xmm2 right by amount specified in the corresponding element of xmm3/m128/m32bcst while shifting in sign bits using writemask k1.
EVEX.256.66.0F38.W0 46 /r VPSRAVD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512FShift doublewords in ymm2 right by amount specified in the corresponding element of ymm3/m256/m32bcst while shifting in sign bits using writemask k1.
EVEX.512.66.0F38.W0 46 /r VPSRAVD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstCV/VAVX512FShift doublewords in zmm2 right by amount specified in the corresponding element of zmm3/m512/m32bcst while shifting in sign bits using writemask k1.
EVEX.128.66.0F38.W1 46 /r VPSRAVQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512FShift quadwords in xmm2 right by amount specified in the corresponding element of xmm3/m128/m64bcst while shifting in sign bits using writemask k1.
EVEX.256.66.0F38.W1 46 /r VPSRAVQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FShift quadwords in ymm2 right by amount specified in the corresponding element of ymm3/m256/m64bcst while shifting in sign bits using writemask k1.
EVEX.512.66.0F38.W1 46 /r VPSRAVQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstCV/VAVX512FShift quadwords in zmm2 right by amount specified in the corresponding element of zmm3/m512/m64bcst while shifting in sign bits using writemask k1.
+

Instruction Operand Encoding

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | N/A | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | N/A
B | Full Mem | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | N/A
C | Full | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | N/A
+

Description

+

Shifts the bits in the individual data elements (word/doublewords/quadword) in the first source operand (the second operand) to the right by the number of bits specified in the count value of respective data elements in the second source operand (the third operand). As the bits in the data elements are shifted right, the empty high-order bits are set to the MSB (sign extension).

+

The count values are specified individually in each data element of the second source operand. If the unsigned integer value specified in the respective data element of the second source operand is greater than 15 (for words), 31 (for doublewords), or 63 (for a quadword), then the destination data element is filled with the corresponding sign bit of the source element.

+

VEX.128 encoded version: The destination and first source operands are XMM registers. The count operand can be either an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

VEX.256 encoded version: The destination and first source operands are YMM registers. The count operand can be either an YMM register or a 256-bit memory. Bits (MAXVL-1:256) of the corresponding destination register are zeroed.

+

EVEX.512/256/128 encoded VPSRAVD/Q: The destination and first source operands are ZMM/YMM/XMM registers. The count operand can be either a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 32/64-bit memory location. The destination is conditionally updated with writemask k1.

+

EVEX.512/256/128 encoded VPSRAVW: The destination and first source operands are ZMM/YMM/XMM registers. The count operand can be either a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination is conditionally updated with writemask k1.

+

Operation

+

VPSRAVW (EVEX encoded version)

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN
+            COUNT := SRC2[i+3:i]
+            IF COUNT < 16
+                THEN DEST[i+15:i] := SignExtend(SRC1[i+15:i] >> COUNT)
+                ELSE
+                    FOR k := 0 TO 15
+                        DEST[i+k] := SRC1[i+15]
+                    ENDFOR;
+            FI
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0;
+
+

VPSRAVD (VEX.128 version)

+
COUNT_0 := SRC2[31 : 0]
+    (* Repeat Each COUNT_i for the 2nd through 4th dwords of SRC2*)
+COUNT_3 := SRC2[127 : 96];
+DEST[31:0] := SignExtend(SRC1[31:0] >> COUNT_0);
+    (* Repeat shift operation for 2nd through 4th dwords *)
+DEST[127:96] := SignExtend(SRC1[127:96] >> COUNT_3);
+DEST[MAXVL-1:128] := 0;
+
+

VPSRAVD (VEX.256 version)

+
COUNT_0 := SRC2[31 : 0];
+    (* Repeat Each COUNT_i for the 2nd through 8th dwords of SRC2*)
+COUNT_7 := SRC2[255 : 224];
+DEST[31:0] := SignExtend(SRC1[31:0] >> COUNT_0);
+    (* Repeat shift operation for 2nd through 7th dwords *)
+DEST[255:224] := SignExtend(SRC1[255:224] >> COUNT_7);
+DEST[MAXVL-1:256] := 0;
+
+

VPSRAVD (EVEX encoded version)

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN
+                    COUNT := SRC2[4:0]
+                    IF COUNT < 32
+                        THEN DEST[i+31:i] := SignExtend(SRC1[i+31:i] >> COUNT)
+                        ELSE
+                            FOR k := 0 TO 31
+                                DEST[i+k] := SRC1[i+31]
+                            ENDFOR;
+                    FI
+                ELSE
+                    COUNT := SRC2[i+4:i]
+                    IF COUNT < 32
+                        THEN DEST[i+31:i] := SignExtend(SRC1[i+31:i] >> COUNT)
+                        ELSE
+                            FOR k := 0 TO 31
+                                DEST[i+k] := SRC1[i+31]
+                            ENDFOR;
+                    FI
+            FI;
+    ELSE
+        IF *merging-masking*
+                                    ; merging-masking
+            THEN *DEST[i+31:i] remains unchanged*
+            ELSE
+                                    ; zeroing-masking
+                DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0;
+
+

VPSRAVQ (EVEX encoded version)

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN
+                    COUNT := SRC2[5:0]
+                    IF COUNT < 64
+                        THEN DEST[i+63:i] := SignExtend(SRC1[i+63:i] >> COUNT)
+                        ELSE
+                            FOR k := 0 TO 63
+                                DEST[i+k] := SRC1[i+63]
+                            ENDFOR;
+                    FI
+                ELSE
+                    COUNT := SRC2[i+5:i]
+                    IF COUNT < 64
+                        THEN DEST[i+63:i] := SignExtend(SRC1[i+63:i] >> COUNT)
+                        ELSE
+                            FOR k := 0 TO 63
+                                DEST[i+k] := SRC1[i+63]
+                            ENDFOR;
+                    FI
+            FI;
+    ELSE
+        IF *merging-masking*
+            THEN *DEST[i+63:i] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0;
+
+

Intel C/C++ Compiler Intrinsic Equivalent

+
VPSRAVD __m512i _mm512_srav_epi32(__m512i a, __m512i cnt);
+
+
VPSRAVD __m512i _mm512_mask_srav_epi32(__m512i s, __mmask16 m, __m512i a, __m512i cnt);
+
+
VPSRAVD __m512i _mm512_maskz_srav_epi32(__mmask16 m, __m512i a, __m512i cnt);
+
+
VPSRAVD __m256i _mm256_srav_epi32(__m256i a, __m256i cnt);
+
+
VPSRAVD __m256i _mm256_mask_srav_epi32(__m256i s, __mmask8 m, __m256i a, __m256i cnt);
+
+
VPSRAVD __m256i _mm256_maskz_srav_epi32(__mmask8 m, __m256i a, __m256i cnt);
+
+
VPSRAVD __m128i _mm_srav_epi32(__m128i a, __m128i cnt);
+
+
VPSRAVD __m128i _mm_mask_srav_epi32(__m128i s, __mmask8 m, __m128i a, __m128i cnt);
+
+
VPSRAVD __m128i _mm_maskz_srav_epi32(__mmask8 m, __m128i a, __m128i cnt);
+
+
VPSRAVQ __m512i _mm512_srav_epi64(__m512i a, __m512i cnt);
+
+
VPSRAVQ __m512i _mm512_mask_srav_epi64(__m512i s, __mmask8 m, __m512i a, __m512i cnt);
+
+
VPSRAVQ __m512i _mm512_maskz_srav_epi64( __mmask8 m, __m512i a, __m512i cnt);
+
+
VPSRAVQ __m256i _mm256_srav_epi64(__m256i a, __m256i cnt);
+
+
VPSRAVQ __m256i _mm256_mask_srav_epi64(__m256i s, __mmask8 m, __m256i a, __m256i cnt);
+
+
VPSRAVQ __m256i _mm256_maskz_srav_epi64( __mmask8 m, __m256i a, __m256i cnt);
+
+
VPSRAVQ __m128i _mm_srav_epi64(__m128i a, __m128i cnt);
+
+
VPSRAVQ __m128i _mm_mask_srav_epi64(__m128i s, __mmask8 m, __m128i a, __m128i cnt);
+
+
VPSRAVQ __m128i _mm_maskz_srav_epi64( __mmask8 m, __m128i a, __m128i cnt);
+
+
VPSRAVW __m512i _mm512_srav_epi16(__m512i a, __m512i cnt);
+
+
VPSRAVW __m512i _mm512_mask_srav_epi16(__m512i s, __mmask32 m, __m512i a, __m512i cnt);
+
+
VPSRAVW __m512i _mm512_maskz_srav_epi16(__mmask32 m, __m512i a, __m512i cnt);
+
+
VPSRAVW __m256i _mm256_srav_epi16(__m256i a, __m256i cnt);
+
+
VPSRAVW __m256i _mm256_mask_srav_epi16(__m256i s, __mmask16 m, __m256i a, __m256i cnt);
+
+
VPSRAVW __m256i _mm256_maskz_srav_epi16(__mmask16 m, __m256i a, __m256i cnt);
+
+
VPSRAVW __m128i _mm_srav_epi16(__m128i a, __m128i cnt);
+
+
VPSRAVW __m128i _mm_mask_srav_epi16(__m128i s, __mmask8 m, __m128i a, __m128i cnt);
+
+
VPSRAVW __m128i _mm_maskz_srav_epi16(__mmask8 m, __m128i a, __m128i cnt);
+
+
VPSRAVD __m256i _mm256_srav_epi32 (__m256i m, __m256i count)
+
+
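An illustrative usage sketch of the AVX2 doubleword form (assumes an AVX2 target): unlike the logical variable shifts, counts of 32 or more replicate the sign bit, so out-of-range lanes become 0 or -1 rather than always 0.

#include <immintrin.h>

void srav_demo(void)
{
    __m256i x = _mm256_setr_epi32(-64, -64, 64, 64, -1, -1, 7, 7);
    __m256i n = _mm256_setr_epi32(  3,  40,  3, 40,  1, 99, 0, 32);
    __m256i r = _mm256_srav_epi32(x, n);
    /* r = { -8, -1, 8, 0, -1, -1, 7, 0 } */
    (void)r;
}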

SIMD Floating-Point Exceptions

+

None.

+

Other Exceptions

+

Non-EVEX-encoded instruction, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vpsrlvw.vpsrlvd.vpsrlvq.html b/x86/vpsrlvw.vpsrlvd.vpsrlvq.html new file mode 100644 index 0000000..0e793fb --- /dev/null +++ b/x86/vpsrlvw.vpsrlvd.vpsrlvq.html @@ -0,0 +1,330 @@ + +VPSRLVW/VPSRLVD/VPSRLVQ + — Variable Bit Shift Right Logical

VPSRLVW/VPSRLVD/VPSRLVQ + — Variable Bit Shift Right Logical

Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W0 45 /r VPSRLVD xmm1, xmm2, xmm3/m128AV/VAVX2Shift doublewords in xmm2 right by amount specified in the corresponding element of xmm3/m128 while shifting in 0s.
VEX.128.66.0F38.W1 45 /r VPSRLVQ xmm1, xmm2, xmm3/m128AV/VAVX2Shift quadwords in xmm2 right by amount specified in the corresponding element of xmm3/m128 while shifting in 0s.
VEX.256.66.0F38.W0 45 /r VPSRLVD ymm1, ymm2, ymm3/m256AV/VAVX2Shift doublewords in ymm2 right by amount specified in the corresponding element of ymm3/m256 while shifting in 0s.
VEX.256.66.0F38.W1 45 /r VPSRLVQ ymm1, ymm2, ymm3/m256AV/VAVX2Shift quadwords in ymm2 right by amount specified in the corresponding element of ymm3/m256 while shifting in 0s.
EVEX.128.66.0F38.W1 10 /r VPSRLVW xmm1 {k1}{z}, xmm2, xmm3/m128BV/VAVX512VL AVX512BWShift words in xmm2 right by amount specified in the corresponding element of xmm3/m128 while shifting in 0s using writemask k1.
EVEX.256.66.0F38.W1 10 /r VPSRLVW ymm1 {k1}{z}, ymm2, ymm3/m256BV/VAVX512VL AVX512BWShift words in ymm2 right by amount specified in the corresponding element of ymm3/m256 while shifting in 0s using writemask k1.
EVEX.512.66.0F38.W1 10 /r VPSRLVW zmm1 {k1}{z}, zmm2, zmm3/m512BV/VAVX512BWShift words in zmm2 right by amount specified in the corresponding element of zmm3/m512 while shifting in 0s using writemask k1.
EVEX.128.66.0F38.W0 45 /r VPSRLVD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512FShift doublewords in xmm2 right by amount specified in the corresponding element of xmm3/m128/m32bcst while shifting in 0s using writemask k1.
EVEX.256.66.0F38.W0 45 /r VPSRLVD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512FShift doublewords in ymm2 right by amount specified in the corresponding element of ymm3/m256/m32bcst while shifting in 0s using writemask k1.
EVEX.512.66.0F38.W0 45 /r VPSRLVD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstCV/VAVX512FShift doublewords in zmm2 right by amount specified in the corresponding element of zmm3/m512/m32bcst while shifting in 0s using writemask k1.
EVEX.128.66.0F38.W1 45 /r VPSRLVQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512FShift quadwords in xmm2 right by amount specified in the corresponding element of xmm3/m128/m64bcst while shifting in 0s using writemask k1.
EVEX.256.66.0F38.W1 45 /r VPSRLVQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512FShift quadwords in ymm2 right by amount specified in the corresponding element of ymm3/m256/m64bcst while shifting in 0s using writemask k1.
EVEX.512.66.0F38.W1 45 /r VPSRLVQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstCV/VAVX512FShift quadwords in zmm2 right by amount specified in the corresponding element of zmm3/m512/m64bcst while shifting in 0s using writemask k1.
+

Instruction Operand Encoding

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | N/A | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | N/A
B | Full Mem | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | N/A
C | Full | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | N/A
+

Description

+

Shifts the bits in the individual data elements (words, doublewords or quadword) in the first source operand to the right by the count value of respective data elements in the second source operand. As the bits in the data elements are shifted right, the empty high-order bits are cleared (set to 0).

+

The count values are specified individually in each data element of the second source operand. If the unsigned integer value specified in the respective data element of the second source operand is greater than 15 (for a word), 31 (for a doubleword), or 63 (for a quadword), then the destination data element is written with 0.

+

VEX.128 encoded version: The destination and first source operands are XMM registers. The count operand can be either an XMM register or a 128-bit memory location. Bits (MAXVL-1:128) of the corresponding destination register are zeroed.

+

VEX.256 encoded version: The destination and first source operands are YMM registers. The count operand can be either an YMM register or a 256-bit memory. Bits (MAXVL-1:256) of the corresponding ZMM register are zeroed.

+

EVEX encoded VPSRLVD/Q: The destination and first source operands are ZMM/YMM/XMM registers. The count operand can be either a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512-bit vector broadcasted from a 32/64-bit memory location. The destination is conditionally updated with writemask k1.

+

EVEX encoded VPSRLVW: The destination and first source operands are ZMM/YMM/XMM registers. The count operand can be either a ZMM/YMM/XMM register, a 512/256/128-bit memory location. The destination is conditionally updated with writemask k1.

+

Operation

+

VPSRLVW (EVEX encoded version)

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+15:i] := ZeroExtend(SRC1[i+15:i] >> SRC2[i+15:i])
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+15:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+15:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0;
+
+

VPSRLVD (VEX.128 version)

+
COUNT_0 := SRC2[31 : 0]
+    (* Repeat Each COUNT_i for the 2nd through 4th dwords of SRC2*)
+COUNT_3 := SRC2[127 : 96];
+IF COUNT_0 < 32 THEN
+    DEST[31:0] := ZeroExtend(SRC1[31:0] >> COUNT_0);
+ELSE
+    DEST[31:0] := 0;
+    (* Repeat shift operation for 2nd through 4th dwords *)
+IF COUNT_3 < 32 THEN
+    DEST[127:96] := ZeroExtend(SRC1[127:96] >> COUNT_3);
+ELSE
+    DEST[127:96] := 0;
+DEST[MAXVL-1:128] := 0;
+
+

VPSRLVD (VEX.256 version)

+
COUNT_0 := SRC2[31 : 0];
+    (* Repeat Each COUNT_i for the 2nd through 7th dwords of SRC2*)
+COUNT_7 := SRC2[255 : 224];
+IF COUNT_0 < 32 THEN
+DEST[31:0] := ZeroExtend(SRC1[31:0] >> COUNT_0);
+ELSE
+DEST[31:0] := 0;
+    (* Repeat shift operation for 2nd through 7th dwords *)
+IF COUNT_7 < 32 THEN
+    DEST[255:224] := ZeroExtend(SRC1[255:224] >> COUNT_7);
+ELSE
+    DEST[255:224] := 0;
+DEST[MAXVL-1:256] := 0;
+
+

VPSRLVD (EVEX encoded version)

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[i+31:i] := ZeroExtend(SRC1[i+31:i] >> SRC2[31:0])
+                ELSE DEST[i+31:i] := ZeroExtend(SRC1[i+31:i] >> SRC2[i+31:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0;
+
+

VPSRLVQ (VEX.128 version)

+
COUNT_0 := SRC2[63 : 0];
+COUNT_1 := SRC2[127 : 64];
+IF COUNT_0 < 64 THEN
+    DEST[63:0] := ZeroExtend(SRC1[63:0] >> COUNT_0);
+ELSE
+    DEST[63:0] := 0;
+IF COUNT_1 < 64 THEN
+    DEST[127:64] := ZeroExtend(SRC1[127:64] >> COUNT_1);
+ELSE
+    DEST[127:64] := 0;
+DEST[MAXVL-1:128] := 0;
+
+

VPSRLVQ (VEX.256 version)

+
COUNT_0 := SRC2[63 : 0];
+    (* Repeat Each COUNT_i for the 2nd through 4th qwords of SRC2*)
+COUNT_3 := SRC2[255 : 192];
+IF COUNT_0 < 64 THEN
+DEST[63:0] := ZeroExtend(SRC1[63:0] >> COUNT_0);
+ELSE
+DEST[63:0] := 0;
+    (* Repeat shift operation for 2nd through 4th qwords *)
+IF COUNT_3 < 64 THEN
+    DEST[255:192] := ZeroExtend(SRC1[255:192] >> COUNT_3);
+ELSE
+    DEST[255:192] := 0;
+DEST[MAXVL-1:256] := 0;
+
+

VPSRLVQ (EVEX encoded version)

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[i+63:i] := ZeroExtend(SRC1[i+63:i] >> SRC2[63:0])
+                ELSE DEST[i+63:i] := ZeroExtend(SRC1[i+63:i] >> SRC2[i+63:i])
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0;
+
+

Intel C/C++ Compiler Intrinsic Equivalent

+
VPSRLVW __m512i _mm512_srlv_epi16(__m512i a, __m512i cnt);
+
+
VPSRLVW __m512i _mm512_mask_srlv_epi16(__m512i s, __mmask32 k, __m512i a, __m512i cnt);
+
+
VPSRLVW __m512i _mm512_maskz_srlv_epi16( __mmask32 k, __m512i a, __m512i cnt);
+
+
VPSRLVW __m256i _mm256_mask_srlv_epi16(__m256i s, __mmask16 k, __m256i a, __m256i cnt);
+
+
VPSRLVW __m256i _mm256_maskz_srlv_epi16( __mmask16 k, __m256i a, __m256i cnt);
+
+
VPSRLVW __m128i _mm_mask_srlv_epi16(__m128i s, __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSRLVW __m128i _mm_maskz_srlv_epi16( __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSRLVD __m256i _mm256_srlv_epi32 (__m256i m, __m256i count)
+
+
VPSRLVD __m512i _mm512_srlv_epi32(__m512i a, __m512i cnt);
+
+
VPSRLVD __m512i _mm512_mask_srlv_epi32(__m512i s, __mmask16 k, __m512i a, __m512i cnt);
+
+
VPSRLVD __m512i _mm512_maskz_srlv_epi32( __mmask16 k, __m512i a, __m512i cnt);
+
+
VPSRLVD __m256i _mm256_mask_srlv_epi32(__m256i s, __mmask8 k, __m256i a, __m256i cnt);
+
+
VPSRLVD __m256i _mm256_maskz_srlv_epi32( __mmask8 k, __m256i a, __m256i cnt);
+
+
VPSRLVD __m128i _mm_mask_srlv_epi32(__m128i s, __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSRLVD __m128i _mm_maskz_srlv_epi32( __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSRLVQ __m512i _mm512_srlv_epi64(__m512i a, __m512i cnt);
+
+
VPSRLVQ __m512i _mm512_mask_srlv_epi64(__m512i s, __mmask8 k, __m512i a, __m512i cnt);
+
+
VPSRLVQ __m512i _mm512_maskz_srlv_epi64( __mmask8 k, __m512i a, __m512i cnt);
+
+
VPSRLVQ __m256i _mm256_mask_srlv_epi64(__m256i s, __mmask8 k, __m256i a, __m256i cnt);
+
+
VPSRLVQ __m256i _mm256_maskz_srlv_epi64( __mmask8 k, __m256i a, __m256i cnt);
+
+
VPSRLVQ __m128i _mm_mask_srlv_epi64(__m128i s, __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSRLVQ __m128i _mm_maskz_srlv_epi64( __mmask8 k, __m128i a, __m128i cnt);
+
+
VPSRLVQ __m256i _mm256_srlv_epi64 (__m256i m, __m256i count)
+
+
VPSRLVD __m128i _mm_srlv_epi32( __m128i a, __m128i cnt);
+
+
VPSRLVQ __m128i _mm_srlv_epi64( __m128i a, __m128i cnt);
+
+
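One practical consequence of the count saturation described above is a branch-free per-lane rotate built from the two logical variable shifts. An illustrative sketch (assumes an AVX2 target; lane counts must lie in 0..31):

#include <immintrin.h>

/* Rotate each dword right by its lane count n: for n = 0 the left-shift half
   uses a count of 32 and therefore contributes 0, so the OR returns x. */
__m256i rotr32_var(__m256i x, __m256i n)
{
    __m256i lo = _mm256_srlv_epi32(x, n);
    __m256i hi = _mm256_sllv_epi32(x, _mm256_sub_epi32(_mm256_set1_epi32(32), n));
    return _mm256_or_si256(lo, hi);
}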

SIMD Floating-Point Exceptions

+

None.

+

Other Exceptions

+

VEX-encoded instructions, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded VPSRLVD/Q, see Table 2-49, “Type E4 Class Exception Conditions.”

+

EVEX-encoded VPSRLVW, see Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vpternlogd.vpternlogq.html b/x86/vpternlogd.vpternlogq.html new file mode 100644 index 0000000..d9f195c --- /dev/null +++ b/x86/vpternlogd.vpternlogq.html @@ -0,0 +1,262 @@ + +VPTERNLOGD/VPTERNLOGQ + — Bitwise Ternary Logic

VPTERNLOGD/VPTERNLOGQ + — Bitwise Ternary Logic

Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F3A.W0 25 /r ib VPTERNLOGD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst, imm8AV/VAVX512VL AVX512FBitwise ternary logic taking xmm1, xmm2, and xmm3/m128/m32bcst as source operands and writing the result to xmm1 under writemask k1 with dword granularity. The immediate value determines the specific binary function being implemented.
EVEX.256.66.0F3A.W0 25 /r ib VPTERNLOGD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst, imm8AV/VAVX512VL AVX512FBitwise ternary logic taking ymm1, ymm2, and ymm3/m256/m32bcst as source operands and writing the result to ymm1 under writemask k1 with dword granularity. The immediate value determines the specific binary function being implemented.
EVEX.512.66.0F3A.W0 25 /r ib VPTERNLOGD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst, imm8AV/VAVX512FBitwise ternary logic taking zmm1, zmm2, and zmm3/m512/m32bcst as source operands and writing the result to zmm1 under writemask k1 with dword granularity. The immediate value determines the specific binary function being implemented.
EVEX.128.66.0F3A.W1 25 /r ib VPTERNLOGQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst, imm8AV/VAVX512VL AVX512FBitwise ternary logic taking xmm1, xmm2, and xmm3/m128/m64bcst as source operands and writing the result to xmm1 under writemask k1 with qword granularity. The immediate value determines the specific binary function being implemented.
EVEX.256.66.0F3A.W1 25 /r ib VPTERNLOGQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst, imm8AV/VAVX512VL AVX512FBitwise ternary logic taking ymm1, ymm2, and ymm3/m256/m64bcst as source operands and writing the result to ymm1 under writemask k1 with qword granularity. The immediate value determines the specific binary function being implemented.
EVEX.512.66.0F3A.W1 25 /r ib VPTERNLOGQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst, imm8AV/VAVX512FBitwise ternary logic taking zmm1, zmm2, and zmm3/m512/m64bcst as source operands and writing the result to zmm1 under writemask k1 with qword granularity. The immediate value determines the specific binary function being implemented.
+

Instruction Operand Encoding

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4
A | Full | ModRM:reg (r, w) | EVEX.vvvv (r) | ModRM:r/m (r) | imm8
+

Description

+

VPTERNLOGD/Q takes three bit vectors of 512-bit length (in the first, second, and third operand) as input data to form a set of 512 indices, each index is comprised of one bit from each input vector. The imm8 byte specifies a boolean logic table producing a binary value for each 3-bit index value. The final 512-bit boolean result is written to the destination operand (the first operand) using the writemask k1 with the granularity of doubleword element or quadword element into the destination.

+

The destination operand is a ZMM (EVEX.512)/YMM (EVEX.256)/XMM (EVEX.128) register. The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 32/64-bit memory location. The destination operand is conditionally updated with writemask k1.

+

Table 5-22 shows two examples of Boolean functions specified by immediate values 0xE2 and 0xE4, with the lookup result listed in the fourth column following the three columns containing all possible values of the 3-bit index.

+
+
VPTERNLOGD reg1, reg2, src3, 0xE2 / 0xE4
Bit(reg1)  Bit(reg2)  Bit(src3)  | Bit Result with Imm8=0xE2 | Bit Result with Imm8=0xE4
0          0          0          | 0                         | 0
0          0          1          | 1                         | 0
0          1          0          | 0                         | 1
0          1          1          | 0                         | 0
1          0          0          | 0                         | 0
1          0          1          | 1                         | 1
1          1          0          | 1                         | 1
1          1          1          | 1                         | 1
+
Table 5-22. Examples of VPTERNLOGD/Q Imm8 Boolean Function and Input Index Values
+

Specifying different values in imm8 allows any arbitrary three-input Boolean function to be implemented in software using VPTERNLOGD/Q. Table 5-11 and Table 5-12 provide a mapping of all 256 possible imm8 values to various Boolean expressions.
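As a worked illustration of deriving imm8, the following C program is a minimal sketch only (not part of this reference; it assumes an AVX-512F-capable CPU and a compiler flag such as -mavx512f, and uses the three-vector-operand form of _mm512_ternarylogic_epi32 found in current compiler headers, where the first operand also serves as a source). It implements the bitwise select (a AND b) OR (NOT a AND c); evaluating that expression on the index constants a = 0xF0, b = 0xCC, c = 0xAA gives imm8 = 0xCA.

    /* ternlog_select.c - hypothetical file name; compile with e.g. gcc -O2 -mavx512f */
    #include <immintrin.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* imm8 for "bitwise select": result = (a & b) | (~a & c).
           Derived by applying the expression to A=0xF0, B=0xCC, C=0xAA:
           (0xF0 & 0xCC) | (0x0F & 0xAA) = 0xC0 | 0x0A = 0xCA. */
        enum { SELECT_IMM = 0xCA };

        __m512i sel = _mm512_set1_epi32(0x00FF00FF);   /* a: selector mask     */
        __m512i x   = _mm512_set1_epi32(0x11111111);   /* b: taken where a = 1 */
        __m512i y   = _mm512_set1_epi32(0x22222222);   /* c: taken where a = 0 */

        __m512i r = _mm512_ternarylogic_epi32(sel, x, y, SELECT_IMM);

        uint32_t out[16];
        _mm512_storeu_si512(out, r);
        printf("0x%08X\n", (unsigned)out[0]);          /* prints 0x22112211 */
        return 0;
    }

The single 0xCA immediate replaces the AND/ANDN/OR sequence with one VPTERNLOGD, which is the usual motivation for consulting the imm8 tables mentioned above.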

+

Operation + ¶ +

+

VPTERNLOGD (EVEX encoded versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            FOR k := 0 TO 31
+                IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                    THEN DEST[j][k] := imm[(DEST[i+k] << 2) + (SRC1[ i+k ] << 1) + SRC2[ k ]]
+                    ELSE DEST[j][k] := imm[(DEST[i+k] << 2) + (SRC1[ i+k ] << 1) + SRC2[ i+k ]]
+                FI;
+                ; table lookup of immediate below
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31+i:i] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[31+i:i] := 0
+        FI;
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

VPTERNLOGQ (EVEX encoded versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            FOR k := 0 TO 63
+                IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                    THEN DEST[j][k] := imm[(DEST[i+k] << 2) + (SRC1[ i+k ] << 1) + SRC2[ k ]]
+                    ELSE DEST[j][k] := imm[(DEST[i+k] << 2) + (SRC1[ i+k ] << 1) + SRC2[ i+k ]]
+                FI; ; table lookup of immediate below
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[63+i:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[63+i:i] := 0
+            FI;
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPTERNLOGD __m512i _mm512_ternarylogic_epi32(__m512i a, __m512i b, int imm);
+
+
VPTERNLOGD __m512i _mm512_mask_ternarylogic_epi32(__m512i s, __mmask16 m, __m512i a, __m512i b, int imm);
+
+
VPTERNLOGD __m512i _mm512_maskz_ternarylogic_epi32(__mmask16 m, __m512i a, __m512i b, int imm);
+
+
VPTERNLOGD __m256i _mm256_ternarylogic_epi32(__m256i a, __m256i b, int imm);
+
+
VPTERNLOGD __m256i _mm256_mask_ternarylogic_epi32(__m256i s, __mmask8 m, __m256i a, __m256i b, int imm);
+
+
VPTERNLOGD __m256i _mm256_maskz_ternarylogic_epi32( __mmask8 m, __m256i a, __m256i b, int imm);
+
+
VPTERNLOGD __m128i _mm_ternarylogic_epi32(__m128i a, __m128i b, int imm);
+
+
VPTERNLOGD __m128i _mm_mask_ternarylogic_epi32(__m128i s, __mmask8 m, __m128i a, __m128i b, int imm);
+
+
VPTERNLOGD __m128i _mm_maskz_ternarylogic_epi32( __mmask8 m, __m128i a, __m128i b, int imm);
+
+
VPTERNLOGQ __m512i _mm512_ternarylogic_epi64(__m512i a, __m512i b, int imm);
+
+
VPTERNLOGQ __m512i _mm512_mask_ternarylogic_epi64(__m512i s, __mmask8 m, __m512i a, __m512i b, int imm);
+
+
VPTERNLOGQ __m512i _mm512_maskz_ternarylogic_epi64( __mmask8 m, __m512i a, __m512i b, int imm);
+
+
VPTERNLOGQ __m256i _mm256_ternarylogic_epi64(__m256i a, __m256i b, int imm);
+
+
VPTERNLOGQ __m256i _mm256_mask_ternarylogic_epi64(__m256i s, __mmask8 m, __m256i a, __m256i b, int imm);
+
+
VPTERNLOGQ __m256i _mm256_maskz_ternarylogic_epi64( __mmask8 m, __m256i a, __m256i b, int imm);
+
+
VPTERNLOGQ __m128i _mm_ternarylogic_epi64(__m128i a, __m128i b, int imm);
+
+
VPTERNLOGQ __m128i _mm_mask_ternarylogic_epi64(__m128i s, __mmask8 m, __m128i a, __m128i b, int imm);
+
+
VPTERNLOGQ __m128i _mm_maskz_ternarylogic_epi64( __mmask8 m, __m128i a, __m128i b, int imm);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vptestmb.vptestmw.vptestmd.vptestmq.html b/x86/vptestmb.vptestmw.vptestmd.vptestmq.html new file mode 100644 index 0000000..6dc502d --- /dev/null +++ b/x86/vptestmb.vptestmw.vptestmd.vptestmq.html @@ -0,0 +1,217 @@ + +VPTESTMB/VPTESTMW/VPTESTMD/VPTESTMQ + — Logical AND and Set Mask

VPTESTMB/VPTESTMW/VPTESTMD/VPTESTMQ + — Logical AND and Set Mask

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 26 /r VPTESTMB k2 {k1}, xmm2, xmm3/m128AV/VAVX512VL AVX512BWBitwise AND of packed byte integers in xmm2 and xmm3/m128 and set mask k2 to reflect the zero/non-zero status of each element of the result, under writemask k1.
EVEX.256.66.0F38.W0 26 /r VPTESTMB k2 {k1}, ymm2, ymm3/m256AV/VAVX512VL AVX512BWBitwise AND of packed byte integers in ymm2 and ymm3/m256 and set mask k2 to reflect the zero/non-zero status of each element of the result, under writemask k1.
EVEX.512.66.0F38.W0 26 /r VPTESTMB k2 {k1}, zmm2, zmm3/m512AV/VAVX512BWBitwise AND of packed byte integers in zmm2 and zmm3/m512 and set mask k2 to reflect the zero/non-zero status of each element of the result, under writemask k1.
EVEX.128.66.0F38.W1 26 /r VPTESTMW k2 {k1}, xmm2, xmm3/m128AV/VAVX512VL AVX512BWBitwise AND of packed word integers in xmm2 and xmm3/m128 and set mask k2 to reflect the zero/non-zero status of each element of the result, under writemask k1.
EVEX.256.66.0F38.W1 26 /r VPTESTMW k2 {k1}, ymm2, ymm3/m256AV/VAVX512VL AVX512BWBitwise AND of packed word integers in ymm2 and ymm3/m256 and set mask k2 to reflect the zero/non-zero status of each element of the result, under writemask k1.
EVEX.512.66.0F38.W1 26 /r VPTESTMW k2 {k1}, zmm2, zmm3/m512AV/VAVX512BWBitwise AND of packed word integers in zmm2 and zmm3/m512 and set mask k2 to reflect the zero/non-zero status of each element of the result, under writemask k1.
EVEX.128.66.0F38.W0 27 /r VPTESTMD k2 {k1}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FBitwise AND of packed doubleword integers in xmm2 and xmm3/m128/m32bcst and set mask k2 to reflect the zero/non-zero status of each element of the result, under writemask k1.
EVEX.256.66.0F38.W0 27 /r VPTESTMD k2 {k1}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FBitwise AND of packed doubleword integers in ymm2 and ymm3/m256/m32bcst and set mask k2 to reflect the zero/non-zero status of each element of the result, under writemask k1.
EVEX.512.66.0F38.W0 27 /r VPTESTMD k2 {k1}, zmm2, zmm3/m512/m32bcstBV/VAVX512FBitwise AND of packed doubleword integers in zmm2 and zmm3/m512/m32bcst and set mask k2 to reflect the zero/non-zero status of each element of the result, under writemask k1.
EVEX.128.66.0F38.W1 27 /r VPTESTMQ k2 {k1}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FBitwise AND of packed quadword integers in xmm2 and xmm3/m128/m64bcst and set mask k2 to reflect the zero/non-zero status of each element of the result, under writemask k1.
EVEX.256.66.0F38.W1 27 /r VPTESTMQ k2 {k1}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FBitwise AND of packed quadword integers in ymm2 and ymm3/m256/m64bcst and set mask k2 to reflect the zero/non-zero status of each element of the result, under writemask k1.
EVEX.512.66.0F38.W1 27 /r VPTESTMQ k2 {k1}, zmm2, zmm3/m512/m64bcstBV/VAVX512FBitwise AND of packed quadword integers in zmm2 and zmm3/m512/m64bcst and set mask k2 to reflect the zero/non-zero status of each element of the result, under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
BFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a bitwise logical AND operation on the first source operand (the second operand) and second source operand (the third operand) and stores the result in the destination operand (the first operand) under the write-mask. Each bit of the result is set to 1 if the bitwise AND of the corresponding elements of the first and second src operands is non-zero; otherwise it is set to 0.

+

VPTESTMD/VPTESTMQ: The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32/64-bit memory location. The destination operand is a mask register updated under the writemask.

+

VPTESTMB/VPTESTMW: The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination operand is a mask register updated under the writemask.
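As a usage illustration, the _mm512_test_epi32_mask intrinsic listed below can build a mask of the elements whose low bit is set. This is a minimal sketch only (it assumes an AVX-512F-capable CPU and a compiler flag such as -mavx512f; vptestm_demo.c is a hypothetical file name):

    /* vptestm_demo.c - compile with e.g. gcc -O2 -mavx512f */
    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        int data[16];
        for (int i = 0; i < 16; i++) data[i] = i;        /* 0 .. 15 */

        __m512i v   = _mm512_loadu_si512(data);
        __m512i bit = _mm512_set1_epi32(1);              /* test bit 0 of each dword */

        /* Mask bit j is set when (data[j] & 1) != 0, i.e., the element is odd. */
        __mmask16 k = _mm512_test_epi32_mask(v, bit);

        printf("odd-element mask: 0x%04X\n", (unsigned)k);   /* prints 0xAAAA */
        return 0;
    }

The resulting mask can then feed any masked AVX-512 operation, which is the typical use of VPTESTMD in compare-and-act sequences.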

+

Operation + ¶ +

+

VPTESTMB (EVEX encoded versions) + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j * 8
+    IF k1[j] OR *no writemask*
+        THEN DEST[j] := (SRC1[i+7:i] BITWISE AND SRC2[i+7:i] != 0)? 1 : 0;
+        ELSE DEST[j] = 0
+            ; zeroing-masking only
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

VPTESTMW (EVEX encoded versions) + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j * 16
+    IF k1[j] OR *no writemask*
+        THEN DEST[j] := (SRC1[i+15:i] BITWISE AND SRC2[i+15:i] != 0)? 1 : 0;
+        ELSE DEST[j] = 0
+            ; zeroing-masking only
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

VPTESTMD (EVEX encoded versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[j] := (SRC1[i+31:i] BITWISE AND SRC2[31:0] != 0)? 1 : 0;
+                ELSE DEST[j] := (SRC1[i+31:i] BITWISE AND SRC2[i+31:i] != 0)? 1 : 0;
+            FI;
+        ELSE DEST[j] := 0
+                    ; zeroing-masking only
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

VPTESTMQ (EVEX encoded versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[j] := (SRC1[i+63:i] BITWISE AND SRC2[63:0] != 0)? 1 : 0;
+                ELSE DEST[j] := (SRC1[i+63:i] BITWISE AND SRC2[i+63:i] != 0)? 1 : 0;
+            FI;
+        ELSE DEST[j] := 0
+                    ; zeroing-masking only
+    FI;
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalents + ¶ +

+
VPTESTMB __mmask64 _mm512_test_epi8_mask( __m512i a, __m512i b);
+
+
VPTESTMB __mmask64 _mm512_mask_test_epi8_mask(__mmask64, __m512i a, __m512i b);
+
+
VPTESTMW __mmask32 _mm512_test_epi16_mask( __m512i a, __m512i b);
+
+
VPTESTMW __mmask32 _mm512_mask_test_epi16_mask(__mmask32, __m512i a, __m512i b);
+
+
VPTESTMD __mmask16 _mm512_test_epi32_mask( __m512i a, __m512i b);
+
+
VPTESTMD __mmask16 _mm512_mask_test_epi32_mask(__mmask16, __m512i a, __m512i b);
+
+
VPTESTMQ __mmask8 _mm512_test_epi64_mask(__m512i a, __m512i b);
+
+
VPTESTMQ __mmask8 _mm512_mask_test_epi64_mask(__mmask8, __m512i a, __m512i b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

VPTESTMD/Q: See Table 2-49, “Type E4 Class Exception Conditions.”

+

VPTESTMB/W: See Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vptestnmb.vptestnmw.vptestnmd.vptestnmq.html b/x86/vptestnmb.vptestnmw.vptestnmd.vptestnmq.html new file mode 100644 index 0000000..b48f439 --- /dev/null +++ b/x86/vptestnmb.vptestnmw.vptestnmd.vptestnmq.html @@ -0,0 +1,247 @@ + +VPTESTNMB/VPTESTNMW/VPTESTNMD/VPTESTNMQ + — Logical NAND and Set

VPTESTNMB/VPTESTNMW/VPTESTNMD/VPTESTNMQ + — Logical NAND and Set

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUIDDescription
EVEX.128.F3.0F38.W0 26 /r VPTESTNMB k2 {k1}, xmm2, xmm3/m128AV/VAVX512VL AVX512BWBitwise NAND of packed byte integers in xmm2 and xmm3/m128 and set mask k2 to reflect the zero/non-zero status of each element of the result, under writemask k1.
EVEX.256.F3.0F38.W0 26 /r VPTESTNMB k2 {k1}, ymm2, ymm3/m256AV/VAVX512VL AVX512BWBitwise NAND of packed byte integers in ymm2 and ymm3/m256 and set mask k2 to reflect the zero/non-zero status of each element of the result, under writemask k1.
EVEX.512.F3.0F38.W0 26 /r VPTESTNMB k2 {k1}, zmm2, zmm3/m512AV/VAVX512F AVX512BWBitwise NAND of packed byte integers in zmm2 and zmm3/m512 and set mask k2 to reflect the zero/non-zero status of each element of the result, under writemask k1.
EVEX.128.F3.0F38.W1 26 /r VPTESTNMW k2 {k1}, xmm2, xmm3/m128AV/VAVX512VL AVX512BWBitwise NAND of packed word integers in xmm2 and xmm3/m128 and set mask k2 to reflect the zero/non-zero status of each element of the result, under writemask k1.
EVEX.256.F3.0F38.W1 26 /r VPTESTNMW k2 {k1}, ymm2, ymm3/m256AV/VAVX512VL AVX512BWBitwise NAND of packed word integers in ymm2 and ymm3/m256 and set mask k2 to reflect the zero/non-zero status of each element of the result, under writemask k1.
EVEX.512.F3.0F38.W1 26 /r VPTESTNMW k2 {k1}, zmm2, zmm3/m512AV/VAVX512F AVX512BWBitwise NAND of packed word integers in zmm2 and zmm3/m512 and set mask k2 to reflect the zero/non-zero status of each element of the result, under writemask k1.
EVEX.128.F3.0F38.W0 27 /r VPTESTNMD k2 {k1}, xmm2, xmm3/m128/m32bcstBV/VAVX512VL AVX512FBitwise NAND of packed doubleword integers in xmm2 and xmm3/m128/m32bcst and set mask k2 to reflect the zero/non-zero status of each element of the result, under writemask k1.
EVEX.256.F3.0F38.W0 27 /r VPTESTNMD k2 {k1}, ymm2, ymm3/m256/m32bcstBV/VAVX512VL AVX512FBitwise NAND of packed doubleword integers in ymm2 and ymm3/m256/m32bcst and set mask k2 to reflect the zero/non-zero status of each element of the result, under writemask k1.
EVEX.512.F3.0F38.W0 27 /r VPTESTNMD k2 {k1}, zmm2, zmm3/m512/m32bcstBV/VAVX512FBitwise NAND of packed doubleword integers in zmm2 and zmm3/m512/m32bcst and set mask k2 to reflect the zero/non-zero status of each element of the result, under writemask k1.
EVEX.128.F3.0F38.W1 27 /r VPTESTNMQ k2 {k1}, xmm2, xmm3/m128/m64bcstBV/VAVX512VL AVX512FBitwise NAND of packed quadword integers in xmm2 and xmm3/m128/m64bcst and set mask k2 to reflect the zero/non-zero status of each element of the result, under writemask k1.
EVEX.256.F3.0F38.W1 27 /r VPTESTNMQ k2 {k1}, ymm2, ymm3/m256/m64bcstBV/VAVX512VL AVX512FBitwise NAND of packed quadword integers in ymm2 and ymm3/m256/m64bcst and set mask k2 to reflect the zero/non-zero status of each element of the result, under writemask k1.
EVEX.512.F3.0F38.W1 27 /r VPTESTNMQ k2 {k1}, zmm2, zmm3/m512/m64bcstBV/VAVX512FBitwise NAND of packed quadword integers in zmm2 and zmm3/m512/m64bcst and set mask k2 to reflect the zero/non-zero status of each element of the result, under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFull MemModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
BFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a bitwise logical NAND operation on the byte/word/doubleword/quadword element of the first source operand (the second operand) with the corresponding element of the second source operand (the third operand) and stores the logical comparison result into each bit of the destination operand (the first operand) according to the writemask k1. Each bit of the result is set to 1 if the bitwise AND of the corresponding elements of the first and second src operands is zero; otherwise it is set to 0.

+

EVEX encoded VPTESTNMD/Q: The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 32/64-bit memory location. The destination is updated according to the writemask.

+

EVEX encoded VPTESTNMB/W: The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register or a 512/256/128-bit memory location. The destination is updated according to the writemask.
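A common idiom is to test a vector against itself: the AND is zero exactly when the element itself is zero, so the instruction yields a mask of the zero elements. The following is a minimal sketch only (it assumes an AVX-512F-capable CPU and a compiler flag such as -mavx512f; vptestnm_demo.c is a hypothetical file name):

    /* vptestnm_demo.c - compile with e.g. gcc -O2 -mavx512f */
    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        int data[16] = { 0, 7, 0, 3, 0, 0, 9, 1, 0, 2, 0, 0, 0, 5, 0, 4 };
        __m512i v = _mm512_loadu_si512(data);

        /* ANDing an element with itself is zero only when the element is zero,
           so testn(v, v) yields a mask of the zero elements. */
        __mmask16 zeros = _mm512_testn_epi32_mask(v, v);

        printf("zero-element mask: 0x%04X\n", (unsigned)zeros);   /* prints 0x5D35 */
        return 0;
    }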

+

Operation + ¶ +

+

VPTESTNMB + ¶ +

+
(KL, VL) = (16, 128), (32, 256), (64, 512)
+FOR j := 0 TO KL-1
+    i := j*8
+    IF MaskBit(j) OR *no writemask*
+        THEN
+            DEST[j] := (SRC1[i+7:i] BITWISE AND SRC2[i+7:i] == 0)? 1 : 0
+        ELSE DEST[j] := 0; zeroing masking only
+    FI
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

VPTESTNMW + ¶ +

+
(KL, VL) = (8, 128), (16, 256), (32, 512)
+FOR j := 0 TO KL-1
+    i := j*16
+    IF MaskBit(j) OR *no writemask*
+        THEN
+            DEST[j] := (SRC1[i+15:i] BITWISE AND SRC2[i+15:i] == 0)? 1 : 0
+        ELSE DEST[j] := 0; zeroing masking only
+    FI
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

VPTESTNMD + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j*32
+    IF MaskBit(j) OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[j] := (SRC1[i+31:i] BITWISE AND SRC2[31:0] == 0)? 1 : 0
+                ELSE DEST[j] := (SRC1[i+31:i] BITWISE AND SRC2[i+31:i] == 0)? 1 : 0
+            FI
+        ELSE DEST[j] := 0; zeroing masking only
+    FI
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

VPTESTNMQ + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j*64
+    IF MaskBit(j) OR *no writemask*
+        THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[j] := (SRC1[i+63:i] BITWISE AND SRC2[63:0] == 0)? 1 : 0;
+                ELSE DEST[j] := (SRC1[i+63:i] BITWISE AND SRC2[i+63:i] == 0)? 1 : 0;
+            FI;
+        ELSE DEST[j] := 0; zeroing masking only
+    FI
+ENDFOR
+DEST[MAX_KL-1:KL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VPTESTNMB __mmask64 _mm512_testn_epi8_mask( __m512i a, __m512i b);
+
+
VPTESTNMB __mmask64 _mm512_mask_testn_epi8_mask(__mmask64, __m512i a, __m512i b);
+
+
VPTESTNMB __mmask32 _mm256_testn_epi8_mask(__m256i a, __m256i b);
+
+
VPTESTNMB __mmask32 _mm256_mask_testn_epi8_mask(__mmask32, __m256i a, __m256i b);
+
+
VPTESTNMB __mmask16 _mm_testn_epi8_mask(__m128i a, __m128i b);
+
+
VPTESTNMB __mmask16 _mm_mask_testn_epi8_mask(__mmask16, __m128i a, __m128i b);
+
+
VPTESTNMW __mmask32 _mm512_testn_epi16_mask( __m512i a, __m512i b);
+
+
VPTESTNMW __mmask32 _mm512_mask_testn_epi16_mask(__mmask32, __m512i a, __m512i b);
+
+
VPTESTNMW __mmask16 _mm256_testn_epi16_mask(__m256i a, __m256i b);
+
+
VPTESTNMW __mmask16 _mm256_mask_testn_epi16_mask(__mmask16, __m256i a, __m256i b);
+
+
VPTESTNMW __mmask8 _mm_testn_epi16_mask(__m128i a, __m128i b);
+
+
VPTESTNMW __mmask8 _mm_mask_testn_epi16_mask(__mmask8, __m128i a, __m128i b);
+
+
VPTESTNMD __mmask16 _mm512_testn_epi32_mask( __m512i a, __m512i b);
+
+
VPTESTNMD __mmask16 _mm512_mask_testn_epi32_mask(__mmask16, __m512i a, __m512i b);
+
+
VPTESTNMD __mmask8 _mm256_testn_epi32_mask(__m256i a, __m256i b);
+
+
VPTESTNMD __mmask8 _mm256_mask_testn_epi32_mask(__mmask8, __m256i a, __m256i b);
+
+
VPTESTNMD __mmask8 _mm_testn_epi32_mask(__m128i a, __m128i b);
+
+
VPTESTNMD __mmask8 _mm_mask_testn_epi32_mask(__mmask8, __m128i a, __m128i b);
+
+
VPTESTNMQ __mmask8 _mm512_testn_epi64_mask(__m512i a, __m512i b);
+
+
VPTESTNMQ __mmask8 _mm512_mask_testn_epi64_mask(__mmask8, __m512i a, __m512i b);
+
+
VPTESTNMQ __mmask8 _mm256_testn_epi64_mask(__m256i a, __m256i b);
+
+
VPTESTNMQ __mmask8 _mm256_mask_testn_epi64_mask(__mmask8, __m256i a, __m256i b);
+
+
VPTESTNMQ __mmask8 _mm_testn_epi64_mask(__m128i a, __m128i b);
+
+
VPTESTNMQ __mmask8 _mm_mask_testn_epi64_mask(__mmask8, __m128i a, __m128i b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

VPTESTNMD/VPTESTNMQ: See Table 2-49, “Type E4 Class Exception Conditions.”

+

VPTESTNMB/VPTESTNMW: See Exceptions Type E4.nb in Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vrangepd.html b/x86/vrangepd.html new file mode 100644 index 0000000..da62d1a --- /dev/null +++ b/x86/vrangepd.html @@ -0,0 +1,325 @@ + +VRANGEPD + — Range Restriction Calculation for Packed Pairs of Float64 Values

VRANGEPD + — Range Restriction Calculation for Packed Pairs of Float64 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F3A.W1 50 /r ib VRANGEPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst, imm8AV/VAVX512VL AVX512DQCalculate two RANGE operation output values from 2 pairs of double precision floating-point values in xmm2 and xmm3/m128/m64bcst, store the results to xmm1 under the writemask k1. Imm8 specifies the comparison and sign of the range operation.
EVEX.256.66.0F3A.W1 50 /r ib VRANGEPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst, imm8AV/VAVX512VL AVX512DQCalculate four RANGE operation output values from 4 pairs of double precision floating-point values in ymm2 and ymm3/m256/m64bcst, store the results to ymm1 under the writemask k1. Imm8 specifies the comparison and sign of the range operation.
EVEX.512.66.0F3A.W1 50 /r ib VRANGEPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{sae}, imm8AV/VAVX512DQCalculate eight RANGE operation output values from 8 pairs of double precision floating-point values in zmm2 and zmm3/m512/m64bcst, store the results to zmm1 under the writemask k1. Imm8 specifies the comparison and sign of the range operation.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

This instruction calculates 2/4/8 range operation outputs from two sets of packed input double precision floating-point values in the first source operand (the second operand) and the second source operand (the third operand). The range outputs are written to the destination operand (the first operand) under the writemask k1.

+

Bits 7:4 of the imm8 byte must be zero. The range operation output is performed in two parts, each configured by a two-bit control field within imm8[3:0]:

+
    +
  • Imm8[1:0] specifies the initial comparison operation to be one of max, min, max absolute value or min absolute value of the input value pair. Each comparison of two input values produces an intermediate result that combines with the sign selection control (imm8[3:2]) to determine the final range operation output.
  • +
  • Imm8[3:2] specifies the sign of the range operation output to be one of the following: from the first input value, from the comparison result, set or clear.
+

The encodings of imm8[1:0] and imm8[3:2] are shown in Figure 5-27.

+
+ Figure 5-27. Imm8 Controls for VRANGEPD/SD/PS/SS: imm8[7:4] must be zero; imm8[1:0] selects the compare operation (00b: Select Min value, 01b: Select Max value, 10b: Select Min-Abs value, 11b: Select Max-Abs value); imm8[3:2] is the sign control (00b: Select sign(SRC1), 01b: Select sign(Compare_Result), 10b: Set sign to 0, 11b: Set sign to 1).
+

When one or more of the input values is a NaN, the comparison operation may signal the invalid exception (IE). Details for the cases in which one or more input values is a NaN are listed in Table 5-23. If the comparison raises an IE, the sign-select control (imm8[3:2]) has no effect on the range operation output; this is also indicated in Table 5-23.

+

When both input values are zeros of opposite signs, the MIN/MAX comparison in the range operation differs slightly from the conceptually similar floating-point MIN/MAX operation found in the instructions VMAXPD/VMINPD. The details of the MIN/MAX/MIN_ABS/MAX_ABS operation of VRANGEPD/PS/SD/SS for magnitude-0, opposite-signed input cases are listed in Table 5-24.

+

Additionally, non-zero input values of equal magnitude and opposite sign produce the MIN_ABS or MAX_ABS comparison results listed in Table 5-25.

+
+
Src1     Src2     Result          IE Signaling Due to Comparison    Imm8[3:2] Effect to Range Output
sNaN1    sNaN2    Quiet(sNaN1)    Yes                               Ignored
sNaN1    qNaN2    Quiet(sNaN1)    Yes                               Ignored
sNaN1    Norm2    Quiet(sNaN1)    Yes                               Ignored
qNaN1    sNaN2    Quiet(sNaN2)    Yes                               Ignored
qNaN1    qNaN2    qNaN1           No                                Applicable
qNaN1    Norm2    Norm2           No                                Applicable
Norm1    sNaN2    Quiet(sNaN2)    Yes                               Ignored
Norm1    qNaN2    Norm1           No                                Applicable
+
Table 5-23. Signaling of Comparison Operation of One or More NaN Input Values and Effect of Imm8[3:2]
+
+
         MIN and MIN_ABS              MAX and MAX_ABS
Src1     Src2     Result       Src1     Src2     Result
+0       -0       -0           +0       -0       +0
-0       +0       -0           -0       +0       +0
+
Table 5-24. Comparison Result for Opposite-Signed Zero Cases for MIN, MIN_ABS, and MAX, MAX_ABS
+
+
         MIN_ABS (|a| = |b|, a>0, b<0)        MAX_ABS (|a| = |b|, a>0, b<0)
Src1     Src2     Result               Src1     Src2     Result
a        b        b                    a        b        a
b        a        b                    b        a        a
+
Table 5-25. Comparison Result of Equal-Magnitude Input Cases for MIN_ABS and MAX_ABS, (|a| = |b|, a>0, b<0)
+

Operation + ¶ +

+
RangeDP(SRC1[63:0], SRC2[63:0], CmpOpCtl[1:0], SignSelCtl[1:0])
+{
+    // Check if SNAN and report IE, see also Table 5-23
+    IF (SRC1 = SNAN) THEN RETURN (QNAN(SRC1), set IE);
+    IF (SRC2 = SNAN) THEN RETURN (QNAN(SRC2), set IE);
+    Src1.exp := SRC1[62:52];
+    Src1.fraction := SRC1[51:0];
+    IF ((Src1.exp = 0 ) and (Src1.fraction != 0)) THEN// Src1 is a denormal number
+        IF DAZ THEN Src1.fraction := 0;
+        ELSE IF (SRC2 <> QNAN) Set DE; FI;
+    FI;
+    Src2.exp := SRC2[62:52];
+    Src2.fraction := SRC2[51:0];
+    IF ((Src2.exp = 0) and (Src2.fraction !=0 )) THEN// Src2 is a denormal number
+        IF DAZ THEN Src2.fraction := 0;
+        ELSE IF (SRC1 <> QNAN) Set DE; FI;
+    FI;
+    IF (SRC2 = QNAN) THEN{TMP[63:0] := SRC1[63:0]}
+    ELSE IF(SRC1 = QNAN) THEN{TMP[63:0] := SRC2[63:0]}
+    ELSE IF (Both SRC1, SRC2 are magnitude-0 and opposite-signed) TMP[63:0] := from Table 5-24
+    ELSE IF (Both SRC1, SRC2 are magnitude-equal and opposite-signed and CmpOpCtl[1:0] > 01) TMP[63:0] := from Table 5-25
+    ELSE
+        Case(CmpOpCtl[1:0])
+        00: TMP[63:0] := (SRC1[63:0] ≤ SRC2[63:0]) ? SRC1[63:0] : SRC2[63:0];
+        01: TMP[63:0] := (SRC1[63:0] ≤ SRC2[63:0]) ? SRC2[63:0] : SRC1[63:0];
+        10: TMP[63:0] := (ABS(SRC1[63:0]) ≤ ABS(SRC2[63:0])) ? SRC1[63:0] : SRC2[63:0];
+        11: TMP[63:0] := (ABS(SRC1[63:0]) ≤ ABS(SRC2[63:0])) ? SRC2[63:0] : SRC1[63:0];
+        ESAC;
+    FI;
+    Case(SignSelCtl[1:0])
+    00: dest := (SRC1[63] << 63) OR (TMP[62:0]);// Preserve Src1 sign bit
+    01: dest := TMP[63:0];// Preserve sign of compare result
+    10: dest := (0 << 63) OR (TMP[62:0]);// Zero out sign bit
+    11: dest := (1 << 63) OR (TMP[62:0]);// Set the sign bit
+    ESAC;
+    RETURN dest[63:0];
+}
+CmpOpCtl[1:0]= imm8[1:0];
+SignSelCtl[1:0]=imm8[3:2];
+
+

VRANGEPD (EVEX encoded versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+                IF (EVEX.b == 1) AND (SRC2 *is memory*)
+                    THEN DEST[i+63:i] := RangeDP (SRC1[i+63:i], SRC2[63:0], CmpOpCtl[1:0], SignSelCtl[1:0]);
+                    ELSE DEST[i+63:i] := RangeDP (SRC1[i+63:i], SRC2[i+63:i], CmpOpCtl[1:0], SignSelCtl[1:0]);
+                FI;
+    ELSE
+        IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+63:i] = 0
+        FI;
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+The following example describes a common usage of this instruction for checking that the input operand is
+bounded between ±1023.
+VRANGEPD zmm_dst, zmm_src, zmm_1023, 02h;
+Where:
+            zmm_dst is the destination operand.
+            zmm_src is the input operand to compare against ±1023 (this is SRC1).
+            zmm_1023 is the reference operand, contains the value of 1023 (and this is SRC2).
+            IMM=02(imm8[1:0]='10) selects the Min Absolute value operation with selection of SRC1.sign.
+In case |zmm_src| < 1023 (i.e., SRC1 is smaller than 1023 in magnitude), then its value will be written into
+zmm_dst. Otherwise, the value stored in zmm_dst will get the value of 1023 (received on zmm_1023, which is
+SRC2).
+However, the sign control (imm8[3:2]='00) instructs to select the sign of SRC1 received from zmm_src. So, even
+in the case of |zmm_src| ≥ 1023, the selected sign of SRC1 is kept.
+Thus, if zmm_src < -1023, the result of VRANGEPD will be the minimal value of -1023 while if zmm_src > +1023,
+the result of VRANGE will be the maximal value of +1023.
+
+
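The ±1023 clamp walked through above maps directly onto the packed intrinsic. The following is a minimal sketch only (it assumes an AVX-512DQ-capable CPU and a compiler flag such as -mavx512dq; vrangepd_clamp.c is a hypothetical file name):

    /* vrangepd_clamp.c - compile with e.g. gcc -O2 -mavx512dq */
    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        double in[8] = { 3.5, -2000.0, 1500.0, -0.25, 1023.0, -1023.5, 0.0, 4096.0 };
        double out[8];

        __m512d src   = _mm512_loadu_pd(in);
        __m512d bound = _mm512_set1_pd(1023.0);

        /* imm8 = 0x02: imm8[1:0] = 10b selects Min-Abs, imm8[3:2] = 00b keeps the
           sign of the first source, so each element is clamped to [-1023, +1023]. */
        __m512d clamped = _mm512_range_pd(src, bound, 0x02);

        _mm512_storeu_pd(out, clamped);
        for (int i = 0; i < 8; i++) printf("% .2f -> % .2f\n", in[i], out[i]);
        return 0;
    }

Because imm8[3:2] = 00b keeps the sign of the first source, -2000.0 clamps to -1023.0 and 4096.0 clamps to +1023.0, matching the description above.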

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRANGEPD __m512d _mm512_range_pd ( __m512d a, __m512d b, int imm);
+
+
VRANGEPD __m512d _mm512_range_round_pd ( __m512d a, __m512d b, int imm, int sae);
+
+
VRANGEPD __m512d _mm512_mask_range_pd (__m512d s, __mmask8 k, __m512d a, __m512d b, int imm);
+
+
VRANGEPD __m512d _mm512_mask_range_round_pd (__m512d s, __mmask8 k, __m512d a, __m512d b, int imm, int sae);
+
+
VRANGEPD __m512d _mm512_maskz_range_pd ( __mmask8 k, __m512d a, __m512d b, int imm);
+
+
VRANGEPD __m512d _mm512_maskz_range_round_pd ( __mmask8 k, __m512d a, __m512d b, int imm, int sae);
+
+
VRANGEPD __m256d _mm256_range_pd ( __m256d a, __m256d b, int imm);
+
+
VRANGEPD __m256d _mm256_mask_range_pd (__m256d s, __mmask8 k, __m256d a, __m256d b, int imm);
+
+
VRANGEPD __m256d _mm256_maskz_range_pd ( __mmask8 k, __m256d a, __m256d b, int imm);
+
+
VRANGEPD __m128d _mm_range_pd ( __m128d a, __m128d b, int imm);
+
+
VRANGEPD __m128d _mm_mask_range_pd (__m128d s, __mmask8 k, __m128d a, __m128d b, int imm);
+
+
VRANGEPD __m128d _mm_maskz_range_pd ( __mmask8 k, __m128d a, __m128d b, int imm);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Denormal.

+

Other Exceptions + ¶ +

+

See Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vrangeps.html b/x86/vrangeps.html new file mode 100644 index 0000000..405566d --- /dev/null +++ b/x86/vrangeps.html @@ -0,0 +1,177 @@ + +VRANGEPS + — Range Restriction Calculation for Packed Pairs of Float32 Values

VRANGEPS + — Range Restriction Calculation for Packed Pairs of Float32 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F3A.W0 50 /r ib VRANGEPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst, imm8AV/VAVX512VL AVX512DQCalculate four RANGE operation output value from 4 pairs of single-precision floating-point values in xmm2 and xmm3/m128/m32bcst, store the results to xmm1 under the writemask k1. Imm8 specifies the comparison and sign of the range operation.
EVEX.256.66.0F3A.W0 50 /r ib VRANGEPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst, imm8AV/VAVX512VL AVX512DQCalculate eight RANGE operation output value from 8 pairs of single-precision floating-point values in ymm2 and ymm3/m256/m32bcst, store the results to ymm1 under the writemask k1. Imm8 specifies the comparison and sign of the range operation.
EVEX.512.66.0F3A.W0 50 /r ib VRANGEPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{sae}, imm8AV/VAVX512DQCalculate 16 RANGE operation output value from 16 pairs of single-precision floating-point values in zmm2 and zmm3/m512/m32bcst, store the results to zmm1 under the writemask k1. Imm8 specifies the comparison and sign of the range operation.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

This instruction calculates 4/8/16 range operation outputs from two sets of packed input single-precision floating-point values in the first source operand (the second operand) and the second source operand (the third operand). The range outputs are written to the destination operand (the first operand) under the writemask k1.

+

Bits 7:4 of the imm8 byte must be zero. The range operation output is performed in two parts, each configured by a two-bit control field within imm8[3:0]:

+
    +
  • Imm8[1:0] specifies the initial comparison operation to be one of max, min, max absolute value or min absolute value of the input value pair. Each comparison of two input values produces an intermediate result that combines with the sign selection control (imm8[3:2]) to determine the final range operation output.
  • +
  • Imm8[3:2] specifies the sign of the range operation output to be one of the following: from the first input value, from the comparison result, set or clear.
+

The encodings of imm8[1:0] and imm8[3:2] are shown in Figure 5-27.

+

When one or more of the input values is a NaN, the comparison operation may signal the invalid exception (IE). Details for the cases in which one or more input values is a NaN are listed in Table 5-23. If the comparison raises an IE, the sign-select control (imm8[3:2]) has no effect on the range operation output; this is also indicated in Table 5-23.

+

When both input values are zeros of opposite signs, the MIN/MAX comparison in the range operation differs slightly from the conceptually similar floating-point MIN/MAX operation found in the instructions VMAXPD/VMINPD. The details of the MIN/MAX/MIN_ABS/MAX_ABS operation of VRANGEPD/PS/SD/SS for magnitude-0, opposite-signed input cases are listed in Table 5-24.

+

Additionally, non-zero input values of equal magnitude and opposite sign produce the MIN_ABS or MAX_ABS comparison results listed in Table 5-25.

+

Operation + ¶ +

+
RangeSP(SRC1[31:0], SRC2[31:0], CmpOpCtl[1:0], SignSelCtl[1:0])
+{
+    // Check if SNAN and report IE, see also Table 5-23
+    IF (SRC1=SNAN) THEN RETURN (QNAN(SRC1), set IE);
+    IF (SRC2=SNAN) THEN RETURN (QNAN(SRC2), set IE);
+    Src1.exp := SRC1[30:23];
+    Src1.fraction := SRC1[22:0];
+    IF ((Src1.exp = 0 ) and (Src1.fraction != 0 )) THEN// Src1 is a denormal number
+        IF DAZ THEN Src1.fraction := 0;
+        ELSE IF (SRC2 <> QNAN) Set DE; FI;
+    FI;
+    Src2.exp := SRC2[30:23];
+    Src2.fraction := SRC2[22:0];
+    IF ((Src2.exp = 0 ) and (Src2.fraction != 0 )) THEN// Src2 is a denormal number
+        IF DAZ THEN Src2.fraction := 0;
+        ELSE IF (SRC1 <> QNAN) Set DE; FI;
+    FI;
+    IF (SRC2 = QNAN) THEN{TMP[31:0] := SRC1[31:0]}
+    ELSE IF(SRC1 = QNAN) THEN{TMP[31:0] := SRC2[31:0]}
+    ELSE IF (Both SRC1, SRC2 are magnitude-0 and opposite-signed) TMP[31:0] := from Table 5-24
+    ELSE IF (Both SRC1, SRC2 are magnitude-equal and opposite-signed and CmpOpCtl[1:0] > 01) TMP[31:0] := from Table 5-25
+    ELSE
+        Case(CmpOpCtl[1:0])
+        00: TMP[31:0] := (SRC1[31:0] ≤ SRC2[31:0]) ? SRC1[31:0] : SRC2[31:0];
+        01: TMP[31:0] := (SRC1[31:0] ≤ SRC2[31:0]) ? SRC2[31:0] : SRC1[31:0];
+        10: TMP[31:0] := (ABS(SRC1[31:0]) ≤ ABS(SRC2[31:0])) ? SRC1[31:0] : SRC2[31:0];
+        11: TMP[31:0] := (ABS(SRC1[31:0]) ≤ ABS(SRC2[31:0])) ? SRC2[31:0] : SRC1[31:0];
+        ESAC;
+    FI;
+    Case(SignSelCtl[1:0])
+    00: dest := (SRC1[31] << 31) OR (TMP[30:0]);// Preserve Src1 sign bit
+    01: dest := TMP[31:0];// Preserve sign of compare result
+    10: dest := (0 << 31) OR (TMP[30:0]);// Zero out sign bit
+    11: dest := (1 << 31) OR (TMP[30:0]);// Set the sign bit
+    ESAC;
+    RETURN dest[31:0];
+}
+CmpOpCtl[1:0]= imm8[1:0];
+SignSelCtl[1:0]=imm8[3:2];
+
+

VRANGEPS + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b == 1) AND (SRC2 *is memory*)
+                THEN DEST[i+31:i] := RangeSP (SRC1[i+31:i], SRC2[31:0], CmpOpCtl[1:0], SignSelCtl[1:0]);
+                ELSE DEST[i+31:i] := RangeSP (SRC1[i+31:i], SRC2[i+31:i], CmpOpCtl[1:0], SignSelCtl[1:0]);
+            FI;
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[i+31:i] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[i+31:i] = 0
+        FI;
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+The following example describes a common usage of this instruction for checking that the input operand is
+bounded between ±150.
+VRANGEPS zmm_dst, zmm_src, zmm_150, 02h;
+Where:
+zmm_dst is the destination operand.
+zmm_src is the input operand to compare against ±150.
+zmm_150 is the reference operand, contains the value of 150.
+IMM=02(imm8[1:0]=’10) selects the Min Absolute value operation with selection of src1.sign.
+In case |zmm_src| < 150, then its value will be written into zmm_dst. Otherwise, the value stored in zmm_dst
+will get the value of 150 (received on zmm_150).
+However, the sign control (imm8[3:2]=’00) instructs to select the sign of SRC1 received from zmm_src. So, even
+in the case of |zmm_src| ≥ 150, the selected sign of SRC1 is kept.
+Thus, if zmm_src < -150, the result of VRANGEPS will be the minimal value of -150 while if zmm_src > +150,
+the result of VRANGE will be the maximal value of +150.
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRANGEPS __m512 _mm512_range_ps ( __m512 a, __m512 b, int imm);
+
+
VRANGEPS __m512 _mm512_range_round_ps ( __m512 a, __m512 b, int imm, int sae);
+
+
VRANGEPS __m512 _mm512_mask_range_ps (__m512 s, __mmask16 k, __m512 a, __m512 b, int imm);
+
+
VRANGEPS __m512 _mm512_mask_range_round_ps (__m512 s, __mmask16 k, __m512 a, __m512 b, int imm, int sae);
+
+
VRANGEPS __m512 _mm512_maskz_range_ps ( __mmask16 k, __m512 a, __m512 b, int imm);
+
+
VRANGEPS __m512 _mm512_maskz_range_round_ps ( __mmask16 k, __m512 a, __m512 b, int imm, int sae);
+
+
VRANGEPS __m256 _mm256_range_ps ( __m256 a, __m256 b, int imm);
+
+
VRANGEPS __m256 _mm256_mask_range_ps (__m256 s, __mmask8 k, __m256 a, __m256 b, int imm);
+
+
VRANGEPS __m256 _mm256_maskz_range_ps ( __mmask8 k, __m256 a, __m256 b, int imm);
+
+
VRANGEPS __m128 _mm_range_ps ( __m128 a, __m128 b, int imm);
+
+
VRANGEPS __m128 _mm_mask_range_ps (__m128 s, __mmask8 k, __m128 a, __m128 b, int imm);
+
+
VRANGEPS __m128 _mm_maskz_range_ps ( __mmask8 k, __m128 a, __m128 b, int imm);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Denormal

+

Other Exceptions + ¶ +

+

See Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vrangesd.html b/x86/vrangesd.html new file mode 100644 index 0000000..0031bff --- /dev/null +++ b/x86/vrangesd.html @@ -0,0 +1,148 @@ + +VRANGESD + — Range Restriction Calculation From a Pair of Scalar Float64 Values

VRANGESD + — Range Restriction Calculation From a Pair of Scalar Float64 Values

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.0F3A.W1 51 /r VRANGESD xmm1 {k1}{z}, xmm2, xmm3/m64{sae}, imm8AV/VAVX512DQCalculate a RANGE operation output value from 2 double precision floating-point values in xmm2 and xmm3/m64, store the output to xmm1 under writemask. Imm8 specifies the comparison and sign of the range operation.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

This instruction calculates a range operation output from two input double precision floating-point values in the low qword element of the first source operand (the second operand) and second source operand (the third operand). The range output is written to the low qword element of the destination operand (the first operand) under the writemask k1.

+

Bits 7:4 of the imm8 byte must be zero. The range operation output is performed in two parts, each configured by a two-bit control field within imm8[3:0]:

+
    +
  • Imm8[1:0] specifies the initial comparison operation to be one of max, min, max absolute value or min absolute value of the input value pair. Each comparison of two input values produces an intermediate result that combines with the sign selection control (imm8[3:2]) to determine the final range operation output.
  • +
  • Imm8[3:2] specifies the sign of the range operation output to be one of the following: from the first input value, from the comparison result, set or clear.
+

The encodings of imm8[1:0] and imm8[3:2] are shown in Figure 5-27.

+

Bits 127:64 of the destination operand are copied from the respective element of the first source operand.

+

When one or more of the input values is a NaN, the comparison operation may signal the invalid exception (IE). Details for the cases in which one or more input values is a NaN are listed in Table 5-23. If the comparison raises an IE, the sign-select control (imm8[3:2]) has no effect on the range operation output; this is also indicated in Table 5-23.

+

When both input values are zeros of opposite signs, the MIN/MAX comparison in the range operation differs slightly from the conceptually similar floating-point MIN/MAX operation found in the instructions VMAXPD/VMINPD. The details of the MIN/MAX/MIN_ABS/MAX_ABS operation of VRANGEPD/PS/SD/SS for magnitude-0, opposite-signed input cases are listed in Table 5-24.

+

Additionally, non-zero input values of equal magnitude and opposite sign produce the MIN_ABS or MAX_ABS comparison results listed in Table 5-25.

+

Operation + ¶ +

+
RangeDP(SRC1[63:0], SRC2[63:0], CmpOpCtl[1:0], SignSelCtl[1:0])
+{
+    // Check if SNAN and report IE, see also Table 5-23
+    IF (SRC1 = SNAN) THEN RETURN (QNAN(SRC1), set IE);
+    IF (SRC2 = SNAN) THEN RETURN (QNAN(SRC2), set IE);
+    Src1.exp := SRC1[62:52];
+    Src1.fraction := SRC1[51:0];
+    IF ((Src1.exp = 0 ) and (Src1.fraction != 0)) THEN// Src1 is a denormal number
+        IF DAZ THEN Src1.fraction := 0;
+        ELSE IF (SRC2 <> QNAN) Set DE; FI;
+    FI;
+    Src2.exp := SRC2[62:52];
+    Src2.fraction := SRC2[51:0];
+    IF ((Src2.exp = 0) and (Src2.fraction !=0 )) THEN// Src2 is a denormal number
+        IF DAZ THEN Src2.fraction := 0;
+        ELSE IF (SRC1 <> QNAN) Set DE; FI;
+    FI;
+    IF (SRC2 = QNAN) THEN{TMP[63:0] := SRC1[63:0]}
+    ELSE IF(SRC1 = QNAN) THEN{TMP[63:0] := SRC2[63:0]}
+    ELSE IF (Both SRC1, SRC2 are magnitude-0 and opposite-signed) TMP[63:0] := from Table 5-24
+    ELSE IF (Both SRC1, SRC2 are magnitude-equal and opposite-signed and CmpOpCtl[1:0] > 01) TMP[63:0] := from Table 5-25
+    ELSE
+        Case(CmpOpCtl[1:0])
+        00: TMP[63:0] := (SRC1[63:0] ≤ SRC2[63:0]) ? SRC1[63:0] : SRC2[63:0];
+        01: TMP[63:0] := (SRC1[63:0] ≤ SRC2[63:0]) ? SRC2[63:0] : SRC1[63:0];
+        10: TMP[63:0] := (ABS(SRC1[63:0]) ≤ ABS(SRC2[63:0])) ? SRC1[63:0] : SRC2[63:0];
+        11: TMP[63:0] := (ABS(SRC1[63:0]) ≤ ABS(SRC2[63:0])) ? SRC2[63:0] : SRC1[63:0];
+        ESAC;
+    FI;
+    Case(SignSelCtl[1:0])
+    00: dest := (SRC1[63] << 63) OR (TMP[62:0]);// Preserve Src1 sign bit
+    01: dest := TMP[63:0];// Preserve sign of compare result
+    10: dest := (0 << 63) OR (TMP[62:0]);// Zero out sign bit
+    11: dest := (1 << 63) OR (TMP[62:0]);// Set the sign bit
+    ESAC;
+    RETURN dest[63:0];
+}
+CmpOpCtl[1:0]= imm8[1:0];
+SignSelCtl[1:0]=imm8[3:2];
+
+

VRANGESD + ¶ +

+
IF k1[0] OR *no writemask*
+        THEN DEST[63:0] := RangeDP (SRC1[63:0], SRC2[63:0], CmpOpCtl[1:0], SignSelCtl[1:0]);
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[63:0] = 0
+        FI;
+FI;
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+The following example describes a common usage of this instruction for checking that the input operand is
+bounded between ±1023.
+VRANGESD xmm_dst, xmm_src, xmm_1023, 02h;
+Where:
+xmm_dst is the destination operand.
+xmm_src is the input operand to compare against ±1023.
+xmm_1023 is the reference operand, contains the value of 1023.
+IMM=02(imm8[1:0]=’10) selects the Min Absolute value operation with selection of src1.sign.
+In case |xmm_src| < 1023, then its value will be written into xmm_dst. Otherwise, the value stored in xmm_dst
+will get the value of 1023 (received on xmm_1023).
+However, the sign control (imm8[3:2]=’00) instructs to select the sign of SRC1 received from xmm_src. So, even
+in the case of |xmm_src| ≥ 1023, the selected sign of SRC1 is kept.
+Thus, if xmm_src < -1023, the result of VRANGESD will be the minimal value of -1023 while if xmm_src > +1023,
+the result of VRANGE will be the maximal value of +1023.
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRANGESD __m128d _mm_range_sd ( __m128d a, __m128d b, int imm);
+
+
VRANGESD __m128d _mm_range_round_sd ( __m128d a, __m128d b, int imm, int sae);
+
+
VRANGESD __m128d _mm_mask_range_sd (__m128d s, __mmask8 k, __m128d a, __m128d b, int imm);
+
+
VRANGESD __m128d _mm_mask_range_round_sd (__m128d s, __mmask8 k, __m128d a, __m128d b, int imm, int sae);
+
+
VRANGESD __m128d _mm_maskz_range_sd ( __mmask8 k, __m128d a, __m128d b, int imm);
+
+
VRANGESD __m128d _mm_maskz_range_round_sd ( __mmask8 k, __m128d a, __m128d b, int imm, int sae);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Denormal.

+

Other Exceptions + ¶ +

+

See Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vrangess.html b/x86/vrangess.html new file mode 100644 index 0000000..e22b666 --- /dev/null +++ b/x86/vrangess.html @@ -0,0 +1,148 @@ + +VRANGESS + — Range Restriction Calculation From a Pair of Scalar Float32 Values

VRANGESS + — Range Restriction Calculation From a Pair of Scalar Float32 Values

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.0F3A.W0 51 /r VRANGESS xmm1 {k1}{z}, xmm2, xmm3/m32{sae}, imm8AV/VAVX512DQCalculate a RANGE operation output value from 2 single-precision floating-point values in xmm2 and xmm3/m32, store the output to xmm1 under writemask. Imm8 specifies the comparison and sign of the range operation.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

This instruction calculates a range operation output from two input single-precision floating-point values in the low dword element of the first source operand (the second operand) and second source operand (the third operand). The range output is written to the low dword element of the destination operand (the first operand) under the writemask k1.

+

Bits 7:4 of the imm8 byte must be zero. The range operation output is performed in two parts, each configured by a two-bit control field within imm8[3:0]:

+
    +
  • Imm8[1:0] specifies the initial comparison operation to be one of max, min, max absolute value or min absolute value of the input value pair. Each comparison of two input values produces an intermediate result that combines with the sign selection control (imm8[3:2]) to determine the final range operation output.
  • +
  • Imm8[3:2] specifies the sign of the range operation output to be one of the following: from the first input value, from the comparison result, set or clear.
+

The encodings of imm8[1:0] and imm8[3:2] are shown in Figure 5-27.

+

Bits 127:32 of the destination operand are copied from the respective elements of the first source operand.

+

When one or more of the input values is a NaN, the comparison operation may signal the invalid exception (IE). Details for the cases in which one or more input values is a NaN are listed in Table 5-23. If the comparison raises an IE, the sign-select control (imm8[3:2]) has no effect on the range operation output; this is also indicated in Table 5-23.

+

When both input values are zeros of opposite signs, the MIN/MAX comparison in the range operation differs slightly from the conceptually similar floating-point MIN/MAX operation found in the instructions VMAXPD/VMINPD. The details of the MIN/MAX/MIN_ABS/MAX_ABS operation of VRANGEPD/PS/SD/SS for magnitude-0, opposite-signed input cases are listed in Table 5-24.

+

Additionally, non-zero input values of equal magnitude and opposite sign produce the MIN_ABS or MAX_ABS comparison results listed in Table 5-25.

+

Operation + ¶ +

+
RangeSP(SRC1[31:0], SRC2[31:0], CmpOpCtl[1:0], SignSelCtl[1:0])
+{
+    // Check if SNAN and report IE, see also Table 5-23
+    IF (SRC1=SNAN) THEN RETURN (QNAN(SRC1), set IE);
+    IF (SRC2=SNAN) THEN RETURN (QNAN(SRC2), set IE);
+    Src1.exp := SRC1[30:23];
+    Src1.fraction := SRC1[22:0];
+    IF ((Src1.exp = 0 ) and (Src1.fraction != 0 )) THEN// Src1 is a denormal number
+        IF DAZ THEN Src1.fraction := 0;
+        ELSE IF (SRC2 <> QNAN) Set DE; FI;
+    FI;
+    Src2.exp := SRC2[30:23];
+    Src2.fraction := SRC2[22:0];
+    IF ((Src2.exp = 0 ) and (Src2.fraction != 0 )) THEN// Src2 is a denormal number
+        IF DAZ THEN Src2.fraction := 0;
+        ELSE IF (SRC1 <> QNAN) Set DE; FI;
+    FI;
+    IF (SRC2 = QNAN) THEN{TMP[31:0] := SRC1[31:0]}
+    ELSE IF(SRC1 = QNAN) THEN{TMP[31:0] := SRC2[31:0]}
+    ELSE IF (Both SRC1, SRC2 are magnitude-0 and opposite-signed) TMP[31:0] := from Table 5-24
+    ELSE IF (Both SRC1, SRC2 are magnitude-equal and opposite-signed and CmpOpCtl[1:0] > 01) TMP[31:0] := from Table 5-25
+    ELSE
+        Case(CmpOpCtl[1:0])
+        00: TMP[31:0] := (SRC1[31:0] ≤ SRC2[31:0]) ? SRC1[31:0] : SRC2[31:0];
+        01: TMP[31:0] := (SRC1[31:0] ≤ SRC2[31:0]) ? SRC2[31:0] : SRC1[31:0];
+        10: TMP[31:0] := (ABS(SRC1[31:0]) ≤ ABS(SRC2[31:0])) ? SRC1[31:0] : SRC2[31:0];
+        11: TMP[31:0] := (ABS(SRC1[31:0]) ≤ ABS(SRC2[31:0])) ? SRC2[31:0] : SRC1[31:0];
+        ESAC;
+    FI;
+    Case(SignSelCtl[1:0])
+    00: dest := (SRC1[31] << 31) OR (TMP[30:0]);// Preserve Src1 sign bit
+    01: dest := TMP[31:0];// Preserve sign of compare result
+    10: dest := (0 << 31) OR (TMP[30:0]);// Zero out sign bit
+    11: dest := (1 << 31) OR (TMP[30:0]);// Set the sign bit
+    ESAC;
+    RETURN dest[31:0];
+}
+CmpOpCtl[1:0]= imm8[1:0];
+SignSelCtl[1:0]=imm8[3:2];
+
+

VRANGESS + ¶ +

+
IF k1[0] OR *no writemask*
+        THEN DEST[31:0] := RangeSP (SRC1[31:0], SRC2[31:0], CmpOpCtl[1:0], SignSelCtl[1:0]);
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[31:0] = 0
+        FI;
+FI;
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+The following example describes a common usage of this instruction for checking that the input operand is
+bounded between ±150.
+VRANGESS xmm_dst, xmm_src, xmm_150, 02h;
+Where:
+xmm_dst is the destination operand.
+xmm_src is the input operand to compare against ±150.
+xmm_150 is the reference operand, contains the value of 150.
+IMM=02(imm8[1:0]=’10) selects the Min Absolute value operation with selection of src1.sign.
+In case |xmm_src| < 150, then its value will be written into xmm_dst. Otherwise, the value stored in xmm_dst
+will get the value of 150 (received on xmm_150).
+However, the sign control (imm8[3:2]=’00) instructs to select the sign of SRC1 received from xmm_src. So, even
+in the case of |xmm_src| ≥ 150, the selected sign of SRC1 is kept.
+Thus, if xmm_src < -150, the result of VRANGESS will be the minimal value of -150 while if xmm_src > +150,
+the result of VRANGE will be the maximal value of +150.
+
+
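The same ±150 clamp can be wrapped in a small scalar helper with the _mm_range_ss intrinsic listed below. This is an illustrative sketch only (it assumes AVX-512DQ support and a compiler flag such as -mavx512dq; clamp150 and vrangess_clamp.c are hypothetical names):

    /* vrangess_clamp.c - compile with e.g. gcc -O2 -mavx512dq */
    #include <immintrin.h>
    #include <stdio.h>

    /* Clamp one float to [-150, +150] using the scalar range operation:
       Min-Abs (imm8[1:0] = 10b) with the sign taken from the first source
       (imm8[3:2] = 00b), mirroring the ±150 example above. */
    static float clamp150(float x) {
        __m128 vx = _mm_set_ss(x);
        __m128 vb = _mm_set_ss(150.0f);
        return _mm_cvtss_f32(_mm_range_ss(vx, vb, 0x02));
    }

    int main(void) {
        float tests[6] = { 12.5f, 149.9f, 150.0f, 151.0f, -700.0f, -0.0f };
        for (int i = 0; i < 6; i++)
            printf("% .1f -> % .1f\n", tests[i], clamp150(tests[i]));
        return 0;
    }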

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRANGESS __m128 _mm_range_ss ( __m128 a, __m128 b, int imm);
+
+
VRANGESS __m128 _mm_range_round_ss ( __m128 a, __m128 b, int imm, int sae);
+
+
VRANGESS __m128 _mm_mask_range_ss (__m128 s, __mmask8 k, __m128 a, __m128 b, int imm);
+
+
VRANGESS __m128 _mm_mask_range_round_ss (__m128 s, __mmask8 k, __m128 a, __m128 b, int imm, int sae);
+
+
VRANGESS __m128 _mm_maskz_range_ss ( __mmask8 k, __m128 a, __m128 b, int imm);
+
+
VRANGESS __m128 _mm_maskz_range_round_ss ( __mmask8 k, __m128 a, __m128 b, int imm, int sae);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Denormal.

+

Other Exceptions + ¶ +

+

See Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vrcp14pd.html b/x86/vrcp14pd.html new file mode 100644 index 0000000..bdf0d67 --- /dev/null +++ b/x86/vrcp14pd.html @@ -0,0 +1,139 @@ + +VRCP14PD + — Compute Approximate Reciprocals of Packed Float64 Values

VRCP14PD + — Compute Approximate Reciprocals of Packed Float64 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W1 4C /r VRCP14PD xmm1 {k1}{z}, xmm2/m128/m64bcstAV/VAVX512VL AVX512FComputes the approximate reciprocals of the packed double precision floating-point values in xmm2/m128/m64bcst and stores the results in xmm1. Under writemask.
EVEX.256.66.0F38.W1 4C /r VRCP14PD ymm1 {k1}{z}, ymm2/m256/m64bcstAV/VAVX512VL AVX512FComputes the approximate reciprocals of the packed double precision floating-point values in ymm2/m256/m64bcst and stores the results in ymm1. Under writemask.
EVEX.512.66.0F38.W1 4C /r VRCP14PD zmm1 {k1}{z}, zmm2/m512/m64bcstAV/VAVX512FComputes the approximate reciprocals of the packed double precision floating-point values in zmm2/m512/m64bcst and stores the results in zmm1. Under writemask.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction performs a SIMD computation of the approximate reciprocals of eight/four/two packed double precision floating-point values in the source operand (the second operand) and stores the packed double precision floating-point results in the destination operand. The maximum relative error for this approximation is less than 2^-14.

+

The source operand can be a ZMM register, a 512-bit memory location, or a 512-bit vector broadcasted from a 64-bit memory location. The destination operand is a ZMM register conditionally updated according to the writemask.

+

The VRCP14PD instruction is not affected by the rounding control bits in the MXCSR register. When a source value is a 0.0, an ∞ with the sign of the source value is returned. A denormal source value will be treated as zero only in case of DAZ bit set in MXCSR. Otherwise it is treated correctly (i.e., not as a 0.0). Underflow results are flushed to zero only in case of FTZ bit set in MXCSR. Otherwise it will be treated correctly (i.e., correct underflow result is written) with the sign of the operand. When a source value is a SNaN or QNaN, the SNaN is converted to a QNaN or the source QNaN is returned.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

MXCSR exception flags are not affected by this instruction and floating-point exceptions are not reported.

+
+
Input value               Result value    Comments
0 ≤ X ≤ 2^-1024           INF             Very small denormal
-2^-1024 ≤ X ≤ -0         -INF            Very small denormal
X > 2^1022                Underflow       Up to 18 bits of fractions are returned*
X < -2^1022               -Underflow      Up to 18 bits of fractions are returned*
X = 2^-n                  2^n
X = -2^-n                 -2^n
+
Table 5-26. VRCP14PD/VRCP14SD Special Cases
+

* in this case the mantissa is shifted right by one or two bits

+

A numerically exact implementation of VRCP14xx can be found at https://software.intel.com/en-us/articles/reference-implementations-for-IA-approximation-instructions-vrcp14-vrsqrt14-vrcp28-vrsqrt28-vexp2.

+
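Because the estimate carries roughly 14 correct bits, a single Newton-Raphson step is the usual way to refine it when more accuracy is needed. The following is a minimal sketch only (it assumes an AVX-512F-capable CPU and a compiler flag such as -mavx512f; vrcp14_refine.c is a hypothetical file name):

    /* vrcp14_refine.c - compile with e.g. gcc -O2 -mavx512f */
    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        double in[8] = { 1.0, 2.0, 3.0, 7.0, 10.0, 0.5, 123.456, 1e6 };
        double approx[8], refined[8];

        __m512d a  = _mm512_loadu_pd(in);
        __m512d x0 = _mm512_rcp14_pd(a);                 /* ~14-bit estimate of 1/a */

        /* One Newton-Raphson step: x1 = x0 + x0*(1 - a*x0). */
        __m512d e  = _mm512_fnmadd_pd(a, x0, _mm512_set1_pd(1.0));  /* 1 - a*x0 */
        __m512d x1 = _mm512_fmadd_pd(x0, e, x0);

        _mm512_storeu_pd(approx, x0);
        _mm512_storeu_pd(refined, x1);
        for (int i = 0; i < 8; i++)
            printf("1/%g: rcp14=%.17g refined=%.17g exact=%.17g\n",
                   in[i], approx[i], refined[i], 1.0 / in[i]);
        return 0;
    }

One refinement step roughly doubles the number of correct bits; a second step is needed to approach full double precision.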

Operation + ¶ +

+

VRCP14PD (EVEX encoded versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC *is memory*)
+                THEN DEST[i+63:i] := APPROXIMATE(1.0/SRC[63:0]);
+                ELSE DEST[i+63:i] := APPROXIMATE(1.0/SRC[i+63:i]);
+            FI;
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[i+63:i] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[i+63:i] := 0
+        FI;
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRCP14PD __m512d _mm512_rcp14_pd( __m512d a);
+
+
VRCP14PD __m512d _mm512_mask_rcp14_pd(__m512d s, __mmask8 k, __m512d a);
+
+
VRCP14PD __m512d _mm512_maskz_rcp14_pd( __mmask8 k, __m512d a);
+
+
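The following is an illustrative sketch (not part of the instruction definition): because the approximation error is below 2^-14, one Newton-Raphson step, x1 = x0*(2 - a*x0), roughly squares the accuracy. It assumes a compiler exposing the AVX-512F intrinsics in <immintrin.h>.

#include <immintrin.h>

// Approximate 1.0/a for eight doubles, then refine with one Newton-Raphson step.
static inline __m512d rcp_refined_pd(__m512d a)
{
    __m512d x0 = _mm512_rcp14_pd(a);                            // |rel. error| < 2^-14
    __m512d e  = _mm512_fnmadd_pd(a, x0, _mm512_set1_pd(2.0));  // e = 2 - a*x0
    return _mm512_mul_pd(x0, e);                                // x1 = x0*(2 - a*x0)
}

Each refinement step approximately doubles the number of correct bits, so one step brings the relative error to roughly 2^-28; special inputs (zeros, infinities, NaNs) still behave as in Table 5-26.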

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vrcp14ps.html b/x86/vrcp14ps.html new file mode 100644 index 0000000..e3cf8cc --- /dev/null +++ b/x86/vrcp14ps.html @@ -0,0 +1,153 @@ + +VRCP14PS + — Compute Approximate Reciprocals of Packed Float32 Values

VRCP14PS + — Compute Approximate Reciprocals of Packed Float32 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 4C /r VRCP14PS xmm1 {k1}{z}, xmm2/m128/m32bcstAV/VAVX512VL AVX512FComputes the approximate reciprocals of the packed single-precision floating-point values in xmm2/m128/m32bcst and stores the results in xmm1. Under writemask.
EVEX.256.66.0F38.W0 4C /r VRCP14PS ymm1 {k1}{z}, ymm2/m256/m32bcstAV/VAVX512VL AVX512FComputes the approximate reciprocals of the packed single-precision floating-point values in ymm2/m256/m32bcst and stores the results in ymm1. Under writemask.
EVEX.512.66.0F38.W0 4C /r VRCP14PS zmm1 {k1}{z}, zmm2/m512/m32bcstAV/VAVX512FComputes the approximate reciprocals of the packed single-precision floating-point values in zmm2/m512/m32bcst and stores the results in zmm1. Under writemask.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction performs a SIMD computation of the approximate reciprocals of the packed single-precision floating-point values in the source operand (the second operand) and stores the packed single-precision floating-point results in the destination operand (the first operand). The maximum relative error for this approximation is less than 2^-14.

+

The source operand can be a ZMM register, a 512-bit memory location or a 512-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM register conditionally updated according to the writemask.

+

The VRCP14PS instruction is not affected by the rounding control bits in the MXCSR register. When a source value is a 0.0, an ∞ with the sign of the source value is returned. A denormal source value will be treated as zero only in case of DAZ bit set in MXCSR. Otherwise it is treated correctly (i.e., not as a 0.0). Underflow results are flushed to zero only in case of FTZ bit set in MXCSR. Otherwise it will be treated correctly (i.e., correct underflow result is written) with the sign of the operand. When a source value is a SNaN or QNaN, the SNaN is converted to a QNaN or the source QNaN is returned.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

MXCSR exception flags are not affected by this instruction and floating-point exceptions are not reported.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Input value | Result value | Comments
0 ≤ X ≤ 2^-128 | INF | Very small denormal
-2^-128 ≤ X ≤ -0 | -INF | Very small denormal
X > 2^126 | Underflow | Up to 18 bits of fractions are returned (1)
X < -2^126 | -Underflow | Up to 18 bits of fractions are returned (1)
X = 2^-n | 2^n
X = -2^-n | -2^n
+
Table 5-27. VRCP14PS/VRCP14SS Special Cases
+
+

1. In this case, the mantissa is shifted right by one or two bits.

+

A numerically exact implementation of VRCP14xx can be found at:

+

https://software.intel.com/en-us/articles/reference-implementations-for-IA-approximation-instructions-vrcp14-vrsqrt14-vrcp28-vrsqrt28-vexp2.

+

Operation + ¶ +

+

VRCP14PS (EVEX encoded versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC *is memory*)
+                THEN DEST[i+31:i] := APPROXIMATE(1.0/SRC[31:0]);
+                ELSE DEST[i+31:i] := APPROXIMATE(1.0/SRC[i+31:i]);
+            FI;
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[i+31:i] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[i+31:i] := 0
+        FI;
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRCP14PS __m512 _mm512_rcp14_ps( __m512 a);
+
+
VRCP14PS __m512 _mm512_mask_rcp14_ps(__m512 s, __mmask16 k, __m512 a);
+
+
VRCP14PS __m512 _mm512_maskz_rcp14_ps( __mmask16 k, __m512 a);
+
+
VRCP14PS __m256 _mm256_rcp14_ps( __m256 a);
+
+
VRCP14PS __m256 _mm256_mask_rcp14_ps(__m256 s, __mmask8 k, __m256 a);
+
+
VRCP14PS __m256 _mm256_maskz_rcp14_ps( __mmask8 k, __m256 a);
+
+
VRCP14PS __m128 _mm_rcp14_ps( __m128 a);
+
+
VRCP14PS __m128 _mm_mask_rcp14_ps(__m128 s, __mmask8 k, __m128 a);
+
+
VRCP14PS __m128 _mm_maskz_rcp14_ps( __mmask8 k, __m128 a);
+
+
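A masked-use sketch (illustrative; assumes the AVX-512F intrinsics from <immintrin.h>): approximate reciprocals are computed only for nonzero lanes, while the remaining lanes are merged from a fallback vector, mirroring the merging-masking behavior described above.

#include <immintrin.h>

// Reciprocal-approximate only the lanes of 'a' that are nonzero; lanes where
// a == 0.0f (or a is NaN, since the compare is ordered) keep the corresponding
// value from 'fallback' (merging-masking).
static inline __m512 safe_rcp14_ps(__m512 a, __m512 fallback)
{
    __mmask16 nonzero = _mm512_cmp_ps_mask(a, _mm512_setzero_ps(), _CMP_NEQ_OQ);
    return _mm512_mask_rcp14_ps(fallback, nonzero, a);
}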

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vrcp14sd.html b/x86/vrcp14sd.html new file mode 100644 index 0000000..a579341 --- /dev/null +++ b/x86/vrcp14sd.html @@ -0,0 +1,89 @@ + +VRCP14SD + — Compute Approximate Reciprocal of Scalar Float64 Value

VRCP14SD + — Compute Approximate Reciprocal of Scalar Float64 Value

+ + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.0F38.W1 4D /r VRCP14SD xmm1 {k1}{z}, xmm2, xmm3/m64AV/VAVX512FComputes the approximate reciprocal of the scalar double precision floating-point value in xmm3/m64 and stores the result in xmm1 using writemask k1. Also, upper double precision floating-point value (bits[127:64]) from xmm2 is copied to xmm1[127:64].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction performs a SIMD computation of the approximate reciprocal of the low double precision floating-point value in the second source operand (the third operand) and stores the result in the low quadword element of the destination operand (the first operand) according to the writemask k1. Bits (127:64) of the XMM register destination are copied from corresponding bits in the first source operand (the second operand). The maximum relative error for this approximation is less than 2^-14. The source operand can be an XMM register or a 64-bit memory location. The destination operand is an XMM register.

+

The VRCP14SD instruction is not affected by the rounding control bits in the MXCSR register. When a source value is a 0.0, an ∞ with the sign of the source value is returned. A denormal source value will be treated as zero only in case of DAZ bit set in MXCSR. Otherwise it is treated correctly (i.e., not as a 0.0). Underflow results are flushed to zero only in case of FTZ bit set in MXCSR. Otherwise it will be treated correctly (i.e., correct underflow result is written) with the sign of the operand. When a source value is a SNaN or QNaN, the SNaN is converted to a QNaN or the source QNaN is returned. See Table 5-26 for special-case input values.

+

MXCSR exception flags are not affected by this instruction and floating-point exceptions are not reported.

+

A numerically exact implementation of VRCP14xx can be found at:

+

https://software.intel.com/en-us/articles/reference-implementations-for-IA-approximation-instructions-vrcp14-vrsqrt14-vrcp28-vrsqrt28-vexp2.

+

Operation + ¶ +

+

VRCP14SD (EVEX version) + ¶ +

+
IF k1[0] OR *no writemask*
+        THEN DEST[63:0] := APPROXIMATE(1.0/SRC2[63:0]);
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE
+                    ; zeroing-masking
+                DEST[63:0] := 0
+        FI;
+FI;
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRCP14SD __m128d _mm_rcp14_sd( __m128d a, __m128d b);
+
+
VRCP14SD __m128d _mm_mask_rcp14_sd(__m128d s, __mmask8 k, __m128d a, __m128d b);
+
+
VRCP14SD __m128d _mm_maskz_rcp14_sd( __mmask8 k, __m128d a, __m128d b);
+
+
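A small usage sketch of the scalar form (illustrative, assuming the AVX-512F intrinsics): only the low element of the second source is inverted; the upper element is passed through from the first source.

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128d a = _mm_set_pd(7.0, 99.0);  // a = {99.0, 7.0}: element 1 (7.0) is passed through
    __m128d b = _mm_set_pd(0.0, 4.0);   // only b[0] = 4.0 is used
    __m128d r = _mm_rcp14_sd(a, b);     // r[0] ~ 0.25 (within 2^-14), r[1] = a[1] = 7.0
    double out[2];
    _mm_storeu_pd(out, r);
    printf("%f %f\n", out[0], out[1]);  // prints approximately 0.250000 7.000000
    return 0;
}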

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-51, “Type E5 Class Exception Conditions.”

diff --git a/x86/vrcp14ss.html b/x86/vrcp14ss.html new file mode 100644 index 0000000..f438193 --- /dev/null +++ b/x86/vrcp14ss.html @@ -0,0 +1,88 @@ + +VRCP14SS + — Compute Approximate Reciprocal of Scalar Float32 Value

VRCP14SS + — Compute Approximate Reciprocal of Scalar Float32 Value

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.0F38.W0 4D /r VRCP14SS xmm1 {k1}{z}, xmm2, xmm3/m32AV/VAVX512FComputes the approximate reciprocal of the scalar single-precision floating-point value in xmm3/m32 and stores the result in xmm1 using writemask k1. Also, the upper single-precision floating-point values (bits[127:32]) from xmm2 are copied to xmm1[127:32].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction performs a SIMD computation of the approximate reciprocal of the low single-precision floating-point value in the second source operand (the third operand) and stores the result in the low doubleword element of the destination operand (the first operand) according to the writemask k1. Bits (127:32) of the XMM register destination are copied from corresponding bits in the first source operand (the second operand). The maximum relative error for this approximation is less than 2^-14. The source operand can be an XMM register or a 32-bit memory location. The destination operand is an XMM register.

+

The VRCP14SS instruction is not affected by the rounding control bits in the MXCSR register. When a source value is a 0.0, an ∞ with the sign of the source value is returned. A denormal source value will be treated as zero only in case of DAZ bit set in MXCSR. Otherwise it is treated correctly (i.e., not as a 0.0). Underflow results are flushed to zero only in case of FTZ bit set in MXCSR. Otherwise it will be treated correctly (i.e., correct underflow result is written) with the sign of the operand. When a source value is a SNaN or QNaN, the SNaN is converted to a QNaN or the source QNaN is returned. See Table 5-27 for special-case input values.

+

MXCSR exception flags are not affected by this instruction and floating-point exceptions are not reported.

+

A numerically exact implementation of VRCP14xx can be found at https://software.intel.com/en-us/articles/reference-implementations-for-IA-approximation-instructions-vrcp14-vrsqrt14-vrcp28-vrsqrt28-vexp2.

+

Operation + ¶ +

+

VRCP14SS (EVEX version) + ¶ +

+
IF k1[0] OR *no writemask*
+        THEN DEST[31:0] := APPROXIMATE(1.0/SRC2[31:0]);
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE
+                    ; zeroing-masking
+                DEST[31:0] := 0
+        FI;
+FI;
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRCP14SS __m128 _mm_rcp14_ss( __m128 a, __m128 b);
+
+
VRCP14SS __m128 _mm_mask_rcp14_ss(__m128 s, __mmask8 k, __m128 a, __m128 b);
+
+
VRCP14SS __m128 _mm_maskz_rcp14_ss( __mmask8 k, __m128 a, __m128 b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-51, “Type E5 Class Exception Conditions.”

diff --git a/x86/vrcp28pd.html b/x86/vrcp28pd.html new file mode 100644 index 0000000..84d4907 --- /dev/null +++ b/x86/vrcp28pd.html @@ -0,0 +1,137 @@ + +VRCP28PD + — Approximation to the Reciprocal of Packed Double Precision Floating-Point ValuesWith Less Than 2^-28 Relative Error

VRCP28PD + — Approximation to the Reciprocal of Packed Double Precision Floating-Point Values With Less Than 2^-28 Relative Error

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.512.66.0F38.W1 CA /r VRCP28PD zmm1 {k1}{z}, zmm2/m512/m64bcst {sae}AV/VAVX512ERComputes the approximate reciprocals ( < 2^-28 relative error) of the packed double precision floating-point values in zmm2/m512/m64bcst and stores the results in zmm1. Under writemask.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/En Tuple Type Operand 1 Operand 2 Operand 3 Operand 4
A Full ModRM:reg (w) ModRM:r/m (r) N/A N/A
+

Description + ¶ +

+

Computes the reciprocal approximation of the float64 values in the source operand (the second operand) and stores the results in the destination operand (the first operand). The approximate reciprocal is evaluated with a maximum relative error of less than 2^-28.

+

Denormal input values are treated as zeros and do not signal #DE, irrespective of MXCSR.DAZ. Denormal results are flushed to zeros and do not signal #UE, irrespective of MXCSR.FTZ.

+

If any source element is NaN, the quietized NaN source value is returned for that element. If any source element is ±∞, ±0.0 is returned for that element. Also, if any source element is ±0.0, ±∞ is returned for that element.

+

The source operand is a ZMM register, a 512-bit memory location or a 512-bit vector broadcasted from a 64-bit memory location. The destination operand is a ZMM register, conditionally updated using writemask k1.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

A numerically exact implementation of VRCP28xx can be found at https://software.intel.com/en-us/articles/reference-implementations-for-IA-approximation-instructions-vrcp14-vrsqrt14-vrcp28-vrsqrt28-vexp2.

+

Operation + ¶ +

+

VRCP28PD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC *is memory*)
+                THEN DEST[i+63:i] := RCP_28_DP(1.0/SRC[63:0]);
+                ELSE DEST[i+63:i] := RCP_28_DP(1.0/SRC[i+63:i]);
+            FI;
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[i+63:i] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[i+63:i] := 0
+        FI;
+    FI;
+ENDFOR;
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Input Value | Result Value | Comments
NaN | QNaN(input) | If (SRC = SNaN) then #I
0 ≤ X < 2^-1022 | INF | Positive input denormal or zero; #Z
-2^-1022 < X ≤ -0 | -INF | Negative input denormal or zero; #Z
X > 2^1022 | +0.0f
X < -2^1022 | -0.0f
X = +∞ | +0.0f
X = -∞ | -0.0f
X = 2^-n | 2^n | Exact result (unless input/output is a denormal)
X = -2^-n | -2^n | Exact result (unless input/output is a denormal)
+
Table 6-46. VRCP28PD Special Cases
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRCP28PD __m512d _mm512_rcp28_round_pd ( __m512d a, int sae);
+
+
VRCP28PD __m512d _mm512_mask_rcp28_round_pd(__m512d a, __mmask8 m, __m512d b, int sae);
+
+
VRCP28PD __m512d _mm512_maskz_rcp28_round_pd( __mmask8 m, __m512d b, int sae);
+
+
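An illustrative sketch (assumes an AVX512ER-capable target and the intrinsics listed above): passing _MM_FROUND_NO_EXC selects the {sae} form, so the #I/#Z conditions from Table 6-46 are not reported.

#include <immintrin.h>

// Approximate reciprocals with < 2^-28 relative error, suppressing exceptions (SAE).
static inline __m512d rcp28_sae_pd(__m512d a)
{
    return _mm512_rcp28_round_pd(a, _MM_FROUND_NO_EXC);
}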

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (if SNaN input), Divide-by-zero.

+

Other Exceptions + ¶ +

+

See Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vrcp28ps.html b/x86/vrcp28ps.html new file mode 100644 index 0000000..31bf3a9 --- /dev/null +++ b/x86/vrcp28ps.html @@ -0,0 +1,137 @@ + +VRCP28PS + — Approximation to the Reciprocal of Packed Single Precision Floating-Point ValuesWith Less Than 2^-28 Relative Error

VRCP28PS + — Approximation to the Reciprocal of Packed Single Precision Floating-Point Values With Less Than 2^-28 Relative Error

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.512.66.0F38.W0 CA /r VRCP28PS zmm1 {k1}{z}, zmm2/m512/m32bcst {sae}AV/VAVX512ERComputes the approximate reciprocals ( < 2^-28 relative error) of the packed single-precision floating-point values in zmm2/m512/m32bcst and stores the results in zmm1. Under writemask.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/En Tuple Type Operand 1 Operand 2 Operand 3 Operand 4
A Full ModRM:reg (w) ModRM:r/m (r) N/A N/A
+

Description + ¶ +

+

Computes the reciprocal approximation of the float32 values in the source operand (the second operand) and stores the results in the destination operand (the first operand) using the writemask k1. The approximate reciprocal is evaluated with a maximum relative error of less than 2^-28 prior to final rounding. The final results are rounded to < 2^-23 relative error before being written to the destination.

+

Denormal input values are treated as zeros and do not signal #DE, irrespective of MXCSR.DAZ. Denormal results are flushed to zeros and do not signal #UE, irrespective of MXCSR.FTZ.

+

If any source element is NaN, the quietized NaN source value is returned for that element. If any source element is ±∞, ±0.0 is returned for that element. Also, if any source element is ±0.0, ±∞ is returned for that element.

+

The source operand is a ZMM register, a 512-bit memory location, or a 512-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM register, conditionally updated using writemask k1.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

A numerically exact implementation of VRCP28xx can be found at https://software.intel.com/en-us/articles/reference-implementations-for-IA-approximation-instructions-vrcp14-vrsqrt14-vrcp28-vrsqrt28-vexp2.

+

Operation + ¶ +

+

VRCP28PS (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC *is memory*)
+                THEN DEST[i+31:i] := RCP_28_SP(1.0/SRC[31:0]);
+                ELSE DEST[i+31:i] := RCP_28_SP(1.0/SRC[i+31:i]);
+            FI;
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[i+31:i] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[i+31:i] := 0
+        FI;
+    FI;
+ENDFOR;
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Input Value | Result Value | Comments
NaN | QNaN(input) | If (SRC = SNaN) then #I
0 ≤ X < 2^-126 | INF | Positive input denormal or zero; #Z
-2^-126 < X ≤ -0 | -INF | Negative input denormal or zero; #Z
X > 2^126 | +0.0f
X < -2^126 | -0.0f
X = +∞ | +0.0f
X = -∞ | -0.0f
X = 2^-n | 2^n | Exact result (unless input/output is a denormal)
X = -2^-n | -2^n | Exact result (unless input/output is a denormal)
+
Table 6-48. VRCP28PS Special Cases
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRCP28PS __m512 _mm512_rcp28_round_ps ( __m512 a, int sae);
+
+
VRCP28PS __m512 _mm512_mask_rcp28_round_ps(__m512 s, __mmask16 m, __m512 a, int sae);
+
+
VRCP28PS __m512 _mm512_maskz_rcp28_round_ps( __mmask16 m, __m512 a, int sae);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (if SNaN input), Divide-by-zero.

+

Other Exceptions + ¶ +

+

See Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vrcp28sd.html b/x86/vrcp28sd.html new file mode 100644 index 0000000..8523191 --- /dev/null +++ b/x86/vrcp28sd.html @@ -0,0 +1,132 @@ + +VRCP28SD + — Approximation to the Reciprocal of Scalar Double Precision Floating-Point ValueWith Less Than 2^-28 Relative Error

VRCP28SD + — Approximation to the Reciprocal of Scalar Double Precision Floating-Point Value With Less Than 2^-28 Relative Error

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.0F38.W1 CB /r VRCP28SD xmm1 {k1}{z}, xmm2, xmm3/m64 {sae}AV/VAVX512ERComputes the approximate reciprocal ( < 2^-28 relative error) of the scalar double precision floating-point value in xmm3/m64 and stores the results in xmm1. Under writemask. Also, upper double precision floating-point value (bits[127:64]) from xmm2 is copied to xmm1[127:64].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/En Tuple Type Operand 1 Operand 2 Operand 3 Operand 4
A Tuple1 Scalar ModRM:reg (w) EVEX.vvvv (r) ModRM:r/m (r) N/A
+

Description + ¶ +

+

Computes the reciprocal approximation of the low float64 value in the second source operand (the third operand) and stores the result in the destination operand (the first operand). The approximate reciprocal is evaluated with a maximum relative error of less than 2^-28. The result is written into the low float64 element of the destination operand according to the writemask k1. Bits 127:64 of the destination are copied from the corresponding bits of the first source operand (the second operand).

+

A denormal input value is treated as zero and does not signal #DE, irrespective of MXCSR.DAZ. A denormal result is flushed to zero and does not signal #UE, irrespective of MXCSR.FTZ.

+

If any source element is NaN, the quietized NaN source value is returned for that element. If any source element is ±∞, ±0.0 is returned for that element. Also, if any source element is ±0.0, ±∞ is returned for that element.

+

The first source operand is an XMM register. The second source operand is an XMM register or a 64-bit memory location. The destination operand is a XMM register, conditionally updated using writemask k1.

+

A numerically exact implementation of VRCP28xx can be found at https://software.intel.com/en-us/articles/reference-implementations-for-IA-approximation-instructions-vrcp14-vrsqrt14-vrcp28-vrsqrt28-vexp2.

+

Operation + ¶ +

+

VRCP28SD (EVEX Encoded Versions) + ¶ +

+
IF k1[0] OR *no writemask* THEN
+        DEST[63: 0] := RCP_28_DP(1.0/SRC2[63: 0]);
+ELSE
+    IF *merging-masking* ; merging-masking
+        THEN *DEST[63: 0] remains unchanged*
+        ELSE ; zeroing-masking
+            DEST[63: 0] := 0
+    FI;
+FI;
+DEST[127:64] := SRC1[127: 64]
+DEST[MAXVL-1:128] := 0
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Input Value | Result Value | Comments
NaN | QNaN(input) | If (SRC = SNaN) then #I
0 ≤ X < 2^-1022 | INF | Positive input denormal or zero; #Z
-2^-1022 < X ≤ -0 | -INF | Negative input denormal or zero; #Z
X > 2^1022 | +0.0f
X < -2^1022 | -0.0f
X = +∞ | +0.0f
X = -∞ | -0.0f
X = 2^-n | 2^n | Exact result (unless input/output is a denormal)
X = -2^-n | -2^n | Exact result (unless input/output is a denormal)
+
Table 6-47. VRCP28SD Special Cases
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRCP28SD __m128d _mm_rcp28_round_sd ( __m128d a, __m128d b, int sae);
+
+
VRCP28SD __m128d _mm_mask_rcp28_round_sd(__m128d s, __mmask8 m, __m128d a, __m128d b, int sae);
+
+
VRCP28SD __m128d _mm_maskz_rcp28_round_sd(__mmask8 m, __m128d a, __m128d b, int sae);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (if SNaN input), Divide-by-zero.

+

Other Exceptions + ¶ +

+

See Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vrcp28ss.html b/x86/vrcp28ss.html new file mode 100644 index 0000000..cd7598e --- /dev/null +++ b/x86/vrcp28ss.html @@ -0,0 +1,132 @@ + +VRCP28SS + — Approximation to the Reciprocal of Scalar Single Precision Floating-Point ValueWith Less Than 2^-28 Relative Error

VRCP28SS + — Approximation to the Reciprocal of Scalar Single Precision Floating-Point Value With Less Than 2^-28 Relative Error

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.0F38.W0 CB /r VRCP28SS xmm1 {k1}{z}, xmm2, xmm3/m32 {sae}AV/VAVX512ERComputes the approximate reciprocal ( < 2^-28 relative error) of the scalar single-precision floating-point value in xmm3/m32 and stores the result in xmm1. Under writemask. Also, the upper 3 single-precision floating-point values (bits[127:32]) from xmm2 are copied to xmm1[127:32].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/En Tuple Type Operand 1 Operand 2 Operand 3 Operand 4
A Tuple1 Scalar ModRM:reg (w) EVEX.vvvv (r) ModRM:r/m (r) N/A
+

Description + ¶ +

+

Computes the reciprocal approximation of the low float32 value in the second source operand (the third operand) and stores the result in the destination operand (the first operand). The approximate reciprocal is evaluated with a maximum relative error of less than 2^-28 prior to final rounding. The final result is rounded to < 2^-23 relative error before being written into the low float32 element of the destination according to writemask k1. Bits 127:32 of the destination are copied from the corresponding bits of the first source operand (the second operand).

+

A denormal input value is treated as zero and does not signal #DE, irrespective of MXCSR.DAZ. A denormal result is flushed to zero and does not signal #UE, irrespective of MXCSR.FTZ.

+

If any source element is NaN, the quietized NaN source value is returned for that element. If any source element is ±∞, ±0.0 is returned for that element. Also, if any source element is ±0.0, ±∞ is returned for that element.

+

The first source operand is an XMM register. The second source operand is an XMM register or a 32-bit memory location. The destination operand is a XMM register, conditionally updated using writemask k1.

+

A numerically exact implementation of VRCP28xx can be found at https://software.intel.com/en-us/articles/reference-implementations-for-IA-approximation-instructions-vrcp14-vrsqrt14-vrcp28-vrsqrt28-vexp2.

+

Operation + ¶ +

+

VRCP28SS (EVEX Encoded Versions) + ¶ +

+
IF k1[0] OR *no writemask* THEN
+        DEST[31: 0] := RCP_28_SP(1.0/SRC2[31: 0]);
+ELSE
+    IF *merging-masking* ; merging-masking
+        THEN *DEST[31: 0] remains unchanged*
+        ELSE ; zeroing-masking
+            DEST[31: 0] := 0
+    FI;
+FI;
+DEST[127:32] := SRC1[127: 32]
+DEST[MAXVL-1:128] := 0
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Input Value | Result Value | Comments
NaN | QNaN(input) | If (SRC = SNaN) then #I
0 ≤ X < 2^-126 | INF | Positive input denormal or zero; #Z
-2^-126 < X ≤ -0 | -INF | Negative input denormal or zero; #Z
X > 2^126 | +0.0f
X < -2^126 | -0.0f
X = +∞ | +0.0f
X = -∞ | -0.0f
X = 2^-n | 2^n | Exact result (unless input/output is a denormal)
X = -2^-n | -2^n | Exact result (unless input/output is a denormal)
+
Table 6-49. VRCP28SS Special Cases
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRCP28SS __m128 _mm_rcp28_round_ss ( __m128 a, __m128 b, int sae);
+
+
VRCP28SS __m128 _mm_mask_rcp28_round_ss(__m128 s, __mmask8 m, __m128 a, __m128 b, int sae);
+
+
VRCP28SS __m128 _mm_maskz_rcp28_round_ss(__mmask8 m, __m128 a, __m128 b, int sae);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (if SNaN input), Divide-by-zero.

+

Other Exceptions + ¶ +

+

See Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vrcpph.html b/x86/vrcpph.html new file mode 100644 index 0000000..0d15a0e --- /dev/null +++ b/x86/vrcpph.html @@ -0,0 +1,139 @@ + +VRCPPH + — Compute Reciprocals of Packed FP16 Values

VRCPPH + — Compute Reciprocals of Packed FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
EVEX.128.66.MAP6.W0 4C /r VRCPPH xmm1{k1}{z}, xmm2/m128/m16bcstAV/VAVX512-FP16 AVX512VLCompute the approximate reciprocals of packed FP16 values in xmm2/m128/m16bcst and store the result in xmm1 subject to writemask k1.
EVEX.256.66.MAP6.W0 4C /r VRCPPH ymm1{k1}{z}, ymm2/m256/m16bcstAV/VAVX512-FP16 AVX512VLCompute the approximate reciprocals of packed FP16 values in ymm2/m256/m16bcst and store the result in ymm1 subject to writemask k1.
EVEX.512.66.MAP6.W0 4C /r VRCPPH zmm1{k1}{z}, zmm2/m512/m16bcstAV/VAVX512-FP16Compute the approximate reciprocals of packed FP16 values in zmm2/m512/m16bcst and store the result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction performs a SIMD computation of the approximate reciprocals of 8/16/32 packed FP16 values in the source operand (the second operand) and stores the packed FP16 results in the destination operand. The maximum relative error for this approximation is less than 2^-11 + 2^-14.

+

For special cases, see Table 5-28.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Input Value | Result Value | Comments
0 ≤ X ≤ 2^-16 | INF | Very small denormal
-2^-16 ≤ X ≤ -0 | -INF | Very small denormal
X > +∞ | +0
X < -∞ | -0
X = 2^-n | 2^n
X = -2^-n | -2^n
+
Table 5-28. VRCPPH/VRCPSH Special Cases
+

Operation + ¶ +

+

VRCPPH dest{k1}, src + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR i := 0 to KL-1:
+    IF k1[i] or *no writemask*:
+        IF SRC is memory and (EVEX.b = 1):
+            tsrc := src.fp16[0]
+        ELSE:
+            tsrc := src.fp16[i]
+        DEST.fp16[i] := APPROXIMATE(1.0 / tsrc)
+    ELSE IF *zeroing*:
+        DEST.fp16[i] := 0
+    //else DEST.fp16[i] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRCPPH __m128h _mm_mask_rcp_ph (__m128h src, __mmask8 k, __m128h a);
+
+
VRCPPH __m128h _mm_maskz_rcp_ph (__mmask8 k, __m128h a);
+
+
VRCPPH __m128h _mm_rcp_ph (__m128h a);
+
+
VRCPPH __m256h _mm256_mask_rcp_ph (__m256h src, __mmask16 k, __m256h a);
+
+
VRCPPH __m256h _mm256_maskz_rcp_ph (__mmask16 k, __m256h a);
+
+
VRCPPH __m256h _mm256_rcp_ph (__m256h a);
+
+
VRCPPH __m512h _mm512_mask_rcp_ph (__m512h src, __mmask32 k, __m512h a);
+
+
VRCPPH __m512h _mm512_maskz_rcp_ph (__mmask32 k, __m512h a);
+
+
VRCPPH __m512h _mm512_rcp_ph (__m512h a);
+
+
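A brief usage sketch (assumes a compiler with AVX512-FP16 support, where the __m512h type and these intrinsics are available):

#include <immintrin.h>

// Approximate 1/x for 32 packed FP16 values; relative error < 2^-11 + 2^-14.
static inline __m512h rcp_ph(__m512h x)
{
    return _mm512_rcp_ph(x);
}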

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vrcpsh.html b/x86/vrcpsh.html new file mode 100644 index 0000000..075f274 --- /dev/null +++ b/x86/vrcpsh.html @@ -0,0 +1,79 @@ + +VRCPSH + — Compute Reciprocal of Scalar FP16 Value

VRCPSH + — Compute Reciprocal of Scalar FP16 Value

+ + + + + + + + + + + + + +
Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
EVEX.LLIG.66.MAP6.W0 4D /r VRCPSH xmm1{k1}{z}, xmm2, xmm3/m16AV/VAVX512-FP16Compute the approximate reciprocal of the low FP16 value in xmm3/m16 and store the result in xmm1 subject to writemask k1. Bits 127:16 from xmm2 are copied to xmm1[127:16].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction performs a SIMD computation of the approximate reciprocal of the low FP16 value in the second source operand (the third operand) and stores the result in the low word element of the destination operand (the first operand) according to the writemask k1. Bits 127:16 of the XMM register destination are copied from corresponding bits in the first source operand (the second operand). The maximum relative error for this approximation is less than 2^-11 + 2^-14.

+

Bits 127:16 of the destination operand are copied from the corresponding bits of the first source operand. Bits MAXVL-1:128 of the destination operand are zeroed. The low FP16 element of the destination is updated according to the writemask.

+

For special cases, see Table 5-28.

+

Operation + ¶ +

+

VRCPSH dest{k1}, src1, src2 + ¶ +

+
IF k1[0] or *no writemask*:
+    DEST.fp16[0] := APPROXIMATE(1.0 / src2.fp16[0])
+ELSE IF *zeroing*:
+    DEST.fp16[0] := 0
+//else DEST.fp16[0] remains unchanged
+DEST[127:16] := src1[127:16]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRCPSH __m128h _mm_mask_rcp_sh (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VRCPSH __m128h _mm_maskz_rcp_sh (__mmask8 k, __m128h a, __m128h b);
+
+
VRCPSH __m128h _mm_rcp_sh (__m128h a, __m128h b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Table 2-58, “Type E10 Class Exception Conditions.”

diff --git a/x86/vreducepd.html b/x86/vreducepd.html new file mode 100644 index 0000000..20c7221 --- /dev/null +++ b/x86/vreducepd.html @@ -0,0 +1,226 @@ + +VREDUCEPD + — Perform Reduction Transformation on Packed Float64 Values

VREDUCEPD + — Perform Reduction Transformation on Packed Float64 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F3A.W1 56 /r ib VREDUCEPD xmm1 {k1}{z}, xmm2/m128/m64bcst, imm8AV/VAVX512VL AVX512DQPerform reduction transformation on packed double precision floating-point values in xmm2/m128/m64bcst by subtracting a number of fraction bits specified by the imm8 field. Stores the result in xmm1 register under writemask k1.
EVEX.256.66.0F3A.W1 56 /r ib VREDUCEPD ymm1 {k1}{z}, ymm2/m256/m64bcst, imm8AV/VAVX512VL AVX512DQPerform reduction transformation on packed double precision floating-point values in ymm2/m256/m64bcst by subtracting a number of fraction bits specified by the imm8 field. Stores the result in ymm1 register under writemask k1.
EVEX.512.66.0F3A.W1 56 /r ib VREDUCEPD zmm1 {k1}{z}, zmm2/m512/m64bcst{sae}, imm8AV/VAVX512DQPerform reduction transformation on double precision floating-point values in zmm2/m512/m64bcst by subtracting a number of fraction bits specified by the imm8 field. Stores the result in zmm1 register under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)imm8N/A
+

Description + ¶ +

+

Perform reduction transformation of the packed binary encoded double precision floating-point values in the source operand (the second operand) and store the reduced results in binary floating-point format to the destination operand (the first operand) under the writemask k1.

+

The reduction transformation subtracts the integer part and the leading M fractional bits from the binary floating-point source value, where M is an unsigned integer specified by imm8[7:4], see Figure 5-28. Specifically, the reduction transformation can be expressed as:

+

dest = src - (ROUND(2^M * src)) * 2^-M;

+

where "Round()" treats "src", "2^M", and their product as binary floating-point numbers with normalized significand and biased exponents.

+

The magnitude of the reduced result can be expressed by considering src = 2^p * man2, where 'man2' is the normalized significand and 'p' is the unbiased exponent.

+

Then if RC = RNE: 0 ≤ |Reduced Result| ≤ 2^(p-M-1)

+

Then if RC ≠ RNE: 0 ≤ |Reduced Result| < 2^(p-M)

+
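For example (an illustration of the formula above, with RC = RNE): for src = 3.75 and M = 1, 2^1*src = 7.5 rounds to 8, so dest = 3.75 - 8*2^-1 = -0.25; for M = 4, 2^4*src = 60 is already an integer, so dest = 3.75 - 60*2^-4 = 0.0.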

This instruction might end up with a precision exception set. However, in case of SPE set (i.e., Suppress Precision Exception, which is imm8[3]=1), no precision exception is reported.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+
Imm8[7:4]: Fixed point length, the number of fraction bits to subtract.
Imm8[3]: Suppress Precision Exception (SPE). 0b: use MXCSR exception mask; 1b: suppress.
Imm8[2]: Round Select (RS). 0b: use Imm8[1:0]; 1b: use MXCSR.
Imm8[1:0]: Round Control Override. 00b: round nearest even; 01b: round down; 10b: round up; 11b: truncate.
Figure 5-28. Imm8 Controls for VREDUCEPD/SD/PS/SS
+
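As an illustration of this encoding (a hypothetical helper macro derived from the field layout above, not part of any header):

// Build the imm8 for VREDUCE*: M = fraction bits to subtract (imm8[7:4]),
// spe = suppress precision exception (imm8[3]), rs = round select (imm8[2]),
// rc = round control override (imm8[1:0]).
#define VREDUCE_IMM(M, spe, rs, rc) \
    ((((M) & 0xF) << 4) | (((spe) & 1) << 3) | (((rs) & 1) << 2) | ((rc) & 3))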

Handling of special case of input values are listed in Table 5-29.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Input Value | Round Mode | Returned Value
|Src1| < 2^(-M-1) | RNE | Src1
|Src1| < 2^-M | RPI, Src1 > 0 | Round(Src1 - 2^-M) *
|Src1| < 2^-M | RPI, Src1 ≤ 0 | Src1
|Src1| < 2^-M | RNI, Src1 ≥ 0 | Src1
|Src1| < 2^-M | RNI, Src1 < 0 | Round(Src1 + 2^-M) *
Src1 = ±0, or Dest = ±0 (Src1 ≠ INF) | NOT RNI | +0.0
Src1 = ±0, or Dest = ±0 (Src1 ≠ INF) | RNI | -0.0
Src1 = ±INF | any | +0.0
Src1 = ±NaN | n/a | QNaN(Src1)
+
Table 5-29. VREDUCEPD/SD/PS/SS Special Cases
+

* Round control = (imm8[2]) ? MXCSR.RC : imm8[1:0]

+

Operation + ¶ +

+
ReduceArgumentDP(SRC[63:0], imm8[7:0])
+{
+    // Check for NaN
+    IF (SRC [63:0] = NAN) THEN
+        RETURN (Convert SRC[63:0] to QNaN); FI;
+    M := imm8[7:4]; // Number of fraction bits of the normalized significand to be subtracted
+    RC := imm8[1:0];// Round Control for ROUND() operation
+    RC_source := imm8[2]; // Round Control source select
+    SPE := imm8[3]; // Suppress Precision Exception
+    TMP[63:0] := 2^-M * ROUND(2^M * SRC[63:0], SPE, RC_source, RC); // ROUND() treats SRC and 2^M as standard binary FP values
+    TMP[63:0] := SRC[63:0] - TMP[63:0]; // subtraction under the same RC, SPE controls
+    RETURN TMP[63:0]; // binary encoded FP with biased exponent and normalized significand
+}
+
+
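For intuition, a scalar C sketch of the same transformation (an approximation of the pseudocode above: it follows the current rounding mode and ignores the SPE and round-select controls):

#include <math.h>

/* Reduction transform: subtract the integer part and the leading M fraction
 * bits from x, i.e., x - round(x * 2^M) * 2^-M. nearbyint() rounds in the
 * current rounding mode without raising the inexact exception. */
static double reduce_arg(double x, int M)
{
    if (isnan(x))
        return x + x;               /* return a quieted NaN, as the instruction does */
    double scaled  = ldexp(x, M);   /* x * 2^M */
    double rounded = nearbyint(scaled);
    return x - ldexp(rounded, -M);  /* x - rounded * 2^-M */
}

With M = 0 and round-to-nearest, this yields the signed distance from x to the nearest integer; increasing M subtracts additional leading fraction bits as well.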

VREDUCEPD + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b == 1) AND (SRC *is memory*)
+                THEN DEST[i+63:i] := ReduceArgumentDP(SRC[63:0], imm8[7:0]);
+                ELSE DEST[i+63:i] := ReduceArgumentDP(SRC[i+63:i], imm8[7:0]);
+            FI;
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[i+63:i] remains unchanged*
+            ELSE
+                    ; zeroing-masking
+                DEST[i+63:i] = 0
+        FI;
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VREDUCEPD __m512d _mm512_mask_reduce_pd( __m512d a, int imm, int sae)
+
+
VREDUCEPD __m512d _mm512_mask_reduce_pd(__m512d s, __mmask8 k, __m512d a, int imm, int sae)
+
+
VREDUCEPD __m512d _mm512_maskz_reduce_pd(__mmask8 k, __m512d a, int imm, int sae)
+
+
VREDUCEPD __m256d _mm256_mask_reduce_pd( __m256d a, int imm)
+
+
VREDUCEPD __m256d _mm256_mask_reduce_pd(__m256d s, __mmask8 k, __m256d a, int imm)
+
+
VREDUCEPD __m256d _mm256_maskz_reduce_pd(__mmask8 k, __m256d a, int imm)
+
+
VREDUCEPD __m128d _mm_mask_reduce_pd( __m128d a, int imm)
+
+
VREDUCEPD __m128d _mm_mask_reduce_pd(__m128d s, __mmask8 k, __m128d a, int imm)
+
+
VREDUCEPD __m128d _mm_maskz_reduce_pd(__mmask8 k, __m128d a, int imm)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

If SPE is enabled, precision exception is not reported (regardless of MXCSR exception mask).

+

Other Exceptions + ¶ +

+

See Table 2-46, “Type E2 Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vreduceph.html b/x86/vreduceph.html new file mode 100644 index 0000000..6b2f61f --- /dev/null +++ b/x86/vreduceph.html @@ -0,0 +1,175 @@ + +VREDUCEPH + — Perform Reduction Transformation on Packed FP16 Values

VREDUCEPH + — Perform Reduction Transformation on Packed FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
EVEX.128.NP.0F3A.W0 56 /r /ib VREDUCEPH xmm1{k1}{z}, xmm2/m128/m16bcst, imm8AV/VAVX512-FP16 AVX512VLPerform reduction transformation on packed FP16 values in xmm2/m128/m16bcst by subtracting a number of fraction bits specified by the imm8 field. Store the result in xmm1 subject to writemask k1.
EVEX.256.NP.0F3A.W0 56 /r /ib VREDUCEPH ymm1{k1}{z}, ymm2/m256/m16bcst, imm8AV/VAVX512-FP16 AVX512VLPerform reduction transformation on packed FP16 values in ymm2/m256/m16bcst by subtracting a number of fraction bits specified by the imm8 field. Store the result in ymm1 subject to writemask k1.
EVEX.512.NP.0F3A.W0 56 /r /ib VREDUCEPH zmm1{k1}{z}, zmm2/m512/m16bcst {sae}, imm8AV/VAVX512-FP16Perform reduction transformation on packed FP16 values in zmm2/m512/m16bcst by subtracting a number of fraction bits specified by the imm8 field. Store the result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)imm8 (r)N/A
+

Description + ¶ +

+

This instruction performs a reduction transformation of the packed binary encoded FP16 values in the source operand (the second operand) and store the reduced results in binary FP format to the destination operand (the first operand) under the writemask k1.

+

The reduction transformation subtracts the integer part and the leading M fractional bits from the binary FP source value, where M is an unsigned integer specified by imm8[7:4]. Specifically, the reduction transformation can be expressed as:

+

dest = src - (ROUND(2^M * src)) * 2^-M

+

where ROUND() treats src, 2^M, and their product as binary FP numbers with normalized significand and biased exponents.

+

The magnitude of the reduced result can be expressed by considering src = 2^p * man2, where 'man2' is the normalized significand and 'p' is the unbiased exponent.

+

Then if RC = RNE: 0 ≤ |ReducedResult| ≤ 2^(-M-1).

+

Then if RC ≠ RNE: 0 ≤ |ReducedResult| < 2^-M.

+

This instruction might end up with a precision exception set. However, in case of SPE set (i.e., Suppress Precision Exception, which is imm8[3]=1), no precision exception is reported.

+

This instruction may generate tiny non-zero result. If it does so, it does not report underflow exception, even if underflow exceptions are unmasked (UM flag in MXCSR register is 0).

+

For special cases, see Table 5-30.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Input Value | Round Mode | Returned Value
|Src1| < 2^(-M-1) | RNE | Src1
|Src1| < 2^-M | RU, Src1 > 0 | Round(Src1 - 2^-M) (1)
|Src1| < 2^-M | RU, Src1 ≤ 0 | Src1
|Src1| < 2^-M | RD, Src1 ≥ 0 | Src1
|Src1| < 2^-M | RD, Src1 < 0 | Round(Src1 + 2^-M)
Src1 = ±0 or Dest = ±0 (Src1 ≠ ∞) | NOT RD | +0.0
Src1 = ±0 or Dest = ±0 (Src1 ≠ ∞) | RD | -0.0
Src1 = ±∞ | Any | +0.0
Src1 = ±NaN | Any | QNaN(Src1)
+
Table 5-30. VREDUCEPH/VREDUCESH Special Cases
+
+

1. The Round(.) function uses rounding controls specified by (imm8[2]? MXCSR.RC: imm8[1:0]).

+

Operation + ¶ +

+
def reduce_fp16(src, imm8):
+    nan := (src.exp = 0x1F) and (src.fraction != 0)
+    if nan:
+        return QNAN(src)
+    m := imm8[7:4]
+    rc := imm8[1:0]
+    rc_source := imm8[2]
+    spe := imm8[3] // suppress precision exception
+    tmp := 2^(-m) * ROUND(2^m * src, spe, rc_source, rc)
+    tmp := src - tmp // using same RC, SPE controls
+    return tmp
+
+

VREDUCEPH dest{k1}, src, imm8 + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR i := 0 to KL-1:
+    IF k1[i] or *no writemask*:
+        IF SRC is memory and (EVEX.b = 1):
+            tsrc := src.fp16[0]
+        ELSE:
+            tsrc := src.fp16[i]
+        DEST.fp16[i] := reduce_fp16(tsrc, imm8)
+    ELSE IF *zeroing*:
+        DEST.fp16[i] := 0
+    //else DEST.fp16[i] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VREDUCEPH __m128h _mm_mask_reduce_ph (__m128h src, __mmask8 k, __m128h a, int imm8);
+
+
VREDUCEPH __m128h _mm_maskz_reduce_ph (__mmask8 k, __m128h a, int imm8);
+
+
VREDUCEPH __m128h _mm_reduce_ph (__m128h a, int imm8);
+
+
VREDUCEPH __m256h _mm256_mask_reduce_ph (__m256h src, __mmask16 k, __m256h a, int imm8);
+
+
VREDUCEPH __m256h _mm256_maskz_reduce_ph (__mmask16 k, __m256h a, int imm8);
+
+
VREDUCEPH __m256h _mm256_reduce_ph (__m256h a, int imm8);
+
+
VREDUCEPH __m512h _mm512_mask_reduce_ph (__m512h src, __mmask32 k, __m512h a, int imm8);
+
+
VREDUCEPH __m512h _mm512_maskz_reduce_ph (__mmask32 k, __m512h a, int imm8);
+
+
VREDUCEPH __m512h _mm512_reduce_ph (__m512h a, int imm8);
+
+
VREDUCEPH __m512h _mm512_mask_reduce_round_ph (__m512h src, __mmask32 k, __m512h a, int imm8, const int sae);
+
+
VREDUCEPH __m512h _mm512_maskz_reduce_round_ph (__mmask32 k, __m512h a, int imm8, const int sae);
+
+
VREDUCEPH __m512h _mm512_reduce_round_ph (__m512h a, int imm8, const int sae);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vreduceps.html b/x86/vreduceps.html new file mode 100644 index 0000000..c94d925 --- /dev/null +++ b/x86/vreduceps.html @@ -0,0 +1,138 @@ + +VREDUCEPS + — Perform Reduction Transformation on Packed Float32 Values

VREDUCEPS + — Perform Reduction Transformation on Packed Float32 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F3A.W0 56 /r ib VREDUCEPS xmm1 {k1}{z}, xmm2/m128/m32bcst, imm8AV/VAVX512VL AVX512DQPerform reduction transformation on packed single-precision floating-point values in xmm2/m128/m32bcst by subtracting a number of fraction bits specified by the imm8 field. Stores the result in xmm1 register under writemask k1.
EVEX.256.66.0F3A.W0 56 /r ib VREDUCEPS ymm1 {k1}{z}, ymm2/m256/m32bcst, imm8AV/VAVX512VL AVX512DQPerform reduction transformation on packed single-precision floating-point values in ymm2/m256/m32bcst by subtracting a number of fraction bits specified by the imm8 field. Stores the result in ymm1 register under writemask k1.
EVEX.512.66.0F3A.W0 56 /r ib VREDUCEPS zmm1 {k1}{z}, zmm2/m512/m32bcst{sae}, imm8AV/VAVX512DQPerform reduction transformation on packed single-precision floating-point values in zmm2/m512/m32bcst by subtracting a number of fraction bits specified by the imm8 field. Stores the result in zmm1 register under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)imm8N/A
+

Description + ¶ +

+

Perform reduction transformation of the packed binary encoded single-precision floating-point values in the source operand (the second operand) and store the reduced results in binary floating-point format to the destination operand (the first operand) under the writemask k1.

+

The reduction transformation subtracts the integer part and the leading M fractional bits from the binary floating-point source value, where M is an unsigned integer specified by imm8[7:4], see Figure 5-28. Specifically, the reduction transformation can be expressed as:

+

dest = src - (ROUND(2^M * src)) * 2^-M;

+

where "Round()" treats "src", "2^M", and their product as binary floating-point numbers with normalized significand and biased exponents.

+

The magnitude of the reduced result can be expressed by considering src = 2^p * man2, where 'man2' is the normalized significand and 'p' is the unbiased exponent.

+

Then if RC = RNE: 0 ≤ |Reduced Result| ≤ 2^(p-M-1)

+

Then if RC ≠ RNE: 0 ≤ |Reduced Result| < 2^(p-M)

+

This instruction might end up with a precision exception set. However, in case of SPE set (i.e., Suppress Precision Exception, which is imm8[3]=1), no precision exception is reported.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

Handling of special case of input values are listed in Table 5-29.

+

Operation + ¶ +

+
ReduceArgumentSP(SRC[31:0], imm8[7:0])
+{
+    // Check for NaN
+    IF (SRC [31:0] = NAN) THEN
+        RETURN (Convert SRC[31:0] to QNaN); FI
+    M := imm8[7:4]; // Number of fraction bits of the normalized significand to be subtracted
+    RC := imm8[1:0];// Round Control for ROUND() operation
+    RC_source := imm8[2]; // Round Control source select
+    SPE := imm8[3]; // Suppress Precision Exception
+    TMP[31:0] := 2^-M * ROUND(2^M * SRC[31:0], SPE, RC_source, RC); // ROUND() treats SRC and 2^M as standard binary FP values
+    TMP[31:0] := SRC[31:0] - TMP[31:0]; // subtraction under the same RC, SPE controls
+RETURN TMP[31:0]; // binary encoded FP with biased exponent and normalized significand
+}
+
+

VREDUCEPS + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b == 1) AND (SRC *is memory*)
+                THEN DEST[i+31:i] := ReduceArgumentSP(SRC[31:0], imm8[7:0]);
+                ELSE DEST[i+31:i] := ReduceArgumentSP(SRC[i+31:i], imm8[7:0]);
+            FI;
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[i+31:i] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[i+31:i] = 0
+        FI;
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VREDUCEPS __m512 _mm512_mask_reduce_ps( __m512 a, int imm, int sae)
+
+
VREDUCEPS __m512 _mm512_mask_reduce_ps(__m512 s, __mmask16 k, __m512 a, int imm, int sae)
+
+
VREDUCEPS __m512 _mm512_maskz_reduce_ps(__mmask16 k, __m512 a, int imm, int sae)
+
+
VREDUCEPS __m256 _mm256_mask_reduce_ps( __m256 a, int imm)
+
+
VREDUCEPS __m256 _mm256_mask_reduce_ps(__m256 s, __mmask8 k, __m256 a, int imm)
+
+
VREDUCEPS __m256 _mm256_maskz_reduce_ps(__mmask8 k, __m256 a, int imm)
+
+
VREDUCEPS __m128 _mm_mask_reduce_ps( __m128 a, int imm)
+
+
VREDUCEPS __m128 _mm_mask_reduce_ps(__m128 s, __mmask8 k, __m128 a, int imm)
+
+
VREDUCEPS __m128 _mm_maskz_reduce_ps(__mmask8 k, __m128 a, int imm)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

If SPE is enabled, precision exception is not reported (regardless of MXCSR exception mask).

+

Other Exceptions + ¶ +

+

See Table 2-46, “Type E2 Class Exception Conditions”; additionally:

+ + + +
#UDIf EVEX.vvvv != 1111B.
diff --git a/x86/vreducesd.html b/x86/vreducesd.html new file mode 100644 index 0000000..d891795 --- /dev/null +++ b/x86/vreducesd.html @@ -0,0 +1,104 @@ + +VREDUCESD + — Perform a Reduction Transformation on a Scalar Float64 Value

VREDUCESD + — Perform a Reduction Transformation on a Scalar Float64 Value

+ + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.0F3A.W1 57 /r ib VREDUCESD xmm1 {k1}{z}, xmm2, xmm3/m64{sae}, imm8AV/VAVX512DQPerform a reduction transformation on a scalar double precision floating-point value in xmm3/m64 by subtracting a number of fraction bits specified by the imm8 field. Also, the upper double precision floating-point value (bits[127:64]) from xmm2 is copied to xmm1[127:64]. Stores the result in xmm1 register.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Perform a reduction transformation of the binary encoded double precision floating-point value in the low qword element of the second source operand (the third operand) and store the reduced result in binary floating-point format to the low qword element of the destination operand (the first operand) under the writemask k1. Bits 127:64 of the destination operand are copied from respective qword elements of the first source operand (the second operand).

+

The reduction transformation subtracts the integer part and the leading M fractional bits from the binary floating-point source value, where M is an unsigned integer specified by imm8[7:4], see Figure 5-28. Specifically, the reduction transformation can be expressed as:

+

dest = src - (ROUND(2^M * src)) * 2^-M;

+

where "Round()" treats "src", "2^M", and their product as binary floating-point numbers with normalized significand and biased exponents.

+

The magnitude of the reduced result can be expressed by considering src = 2^p * man2, where 'man2' is the normalized significand and 'p' is the unbiased exponent.

+

Then if RC = RNE: 0 ≤ |Reduced Result| ≤ 2^(p-M-1)

+

Then if RC ≠ RNE: 0 ≤ |Reduced Result| < 2^(p-M)

+

This instruction might end up with a precision exception set. However, in case of SPE set (i.e., Suppress Precision Exception, which is imm8[3]=1), no precision exception is reported.

+

The operation is write masked.

+

Handling of special case of input values are listed in Table 5-29.

+

Operation + ¶ +

+
ReduceArgumentDP(SRC[63:0], imm8[7:0])
+{
+    // Check for NaN
+    IF (SRC [63:0] = NAN) THEN
+        RETURN (Convert SRC[63:0] to QNaN); FI;
+    M := imm8[7:4]; // Number of fraction bits of the normalized significand to be subtracted
+    RC := imm8[1:0];// Round Control for ROUND() operation
+    RC_source := imm8[2]; // Round Control source select
+    SPE := imm8[3]; // Suppress Precision Exception
+    TMP[63:0] := 2^-M * ROUND(2^M * SRC[63:0], SPE, RC_source, RC); // ROUND() treats SRC and 2^M as standard binary FP values
+    TMP[63:0] := SRC[63:0] - TMP[63:0]; // subtraction under the same RC, SPE controls
+    RETURN TMP[63:0]; // binary encoded FP with biased exponent and normalized significand
+}
+
+

VREDUCESD + ¶ +

+
IF k1[0] or *no writemask*
+    THEN DEST[63:0] := ReduceArgumentDP(SRC2[63:0], imm8[7:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[63:0] := 0
+        FI;
+FI;
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VREDUCESD __m128d _mm_mask_reduce_sd( __m128d a, __m128d b, int imm, int sae)
+
+
VREDUCESD __m128d _mm_mask_reduce_sd(__m128d s, __mmask16 k, __m128d a, __m128d b, int imm, int sae)
+
+
VREDUCESD __m128d _mm_maskz_reduce_sd(__mmask16 k, __m128d a, __m128d b, int imm, int sae)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

If SPE is enabled, precision exception is not reported (regardless of MXCSR exception mask).

+

Other Exceptions + ¶ +

+

See Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vreducesh.html b/x86/vreducesh.html new file mode 100644 index 0000000..fa777a4 --- /dev/null +++ b/x86/vreducesh.html @@ -0,0 +1,88 @@ + +VREDUCESH + — Perform Reduction Transformation on Scalar FP16 Value

VREDUCESH + — Perform Reduction Transformation on Scalar FP16 Value

+ + + + + + + + + + + + + +
Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
EVEX.LLIG.NP.0F3A.W0 57 /r /ib VREDUCESH xmm1{k1}{z}, xmm2, xmm3/m16 {sae}, imm8AV/VAVX512-FP16Perform a reduction transformation on the low binary encoded FP16 value in xmm3/m16 by subtracting a number of fraction bits specified by the imm8 field. Store the result in xmm1 subject to writemask k1. Bits 127:16 from xmm2 are copied to xmm1[127:16].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8 (r)
+

Description + ¶ +

+

This instruction performs a reduction transformation of the low binary encoded FP16 value in the source operand (the second operand) and store the reduced result in binary FP format to the low element of the destination operand (the first operand) under the writemask k1. For further details see the description of VREDUCEPH.

+

Bits 127:16 of the destination operand are copied from the corresponding bits of the first source operand. Bits MAXVL-1:128 of the destination operand are zeroed. The low FP16 element of the destination is updated according to the writemask.

+

This instruction might end up with a precision exception set. However, in case of SPE set (i.e., Suppress Precision Exception, which is imm8[3]=1), no precision exception is reported.

+

This instruction may generate tiny non-zero result. If it does so, it does not report underflow exception, even if underflow exceptions are unmasked (UM flag in MXCSR register is 0).

+

For special cases, see Table 5-30.

+

Operation + ¶ +

+

VREDUCESH dest{k1}, src, imm8 + ¶ +

+
IF k1[0] or *no writemask*:
+    dest.fp16[0] := reduce_fp16(src2.fp16[0], imm8)
+        // see VREDUCEPH
+ELSE IF *zeroing*:
+    dest.fp16[0] := 0
+//else dest.fp16[0] remains unchanged
+DEST[127:16] := src1[127:16]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VREDUCESH __m128h _mm_mask_reduce_round_sh (__m128h src, __mmask8 k, __m128h a, __m128h b, int imm8, const int sae);
+
+
VREDUCESH __m128h _mm_maskz_reduce_round_sh (__mmask8 k, __m128h a, __m128h b, int imm8, const int sae);
+
+
VREDUCESH __m128h _mm_reduce_round_sh (__m128h a, __m128h b, int imm8, const int sae);
+
+
VREDUCESH __m128h _mm_mask_reduce_sh (__m128h src, __mmask8 k, __m128h a, __m128h b, int imm8);
+
+
VREDUCESH __m128h _mm_maskz_reduce_sh (__mmask8 k, __m128h a, __m128h b, int imm8);
+
+
VREDUCESH __m128h _mm_reduce_sh (__m128h a, __m128h b, int imm8);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vreducess.html b/x86/vreducess.html new file mode 100644 index 0000000..1bf0919 --- /dev/null +++ b/x86/vreducess.html @@ -0,0 +1,103 @@ + +VREDUCESS + — Perform a Reduction Transformation on a Scalar Float32 Value

VREDUCESS + — Perform a Reduction Transformation on a Scalar Float32 Value

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.0F3A.W0 57 /r /ib VREDUCESS xmm1 {k1}{z}, xmm2, xmm3/m32{sae}, imm8AV/VAVX512DQPerform a reduction transformation on a scalar single-precision floating-point value in xmm3/m32 by subtracting a number of fraction bits specified by the imm8 field. Also, upper single-precision floating-point values (bits[127:32]) from xmm2 are copied to xmm1[127:32]. Stores the result in xmm1 register.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Perform a reduction transformation of the binary encoded single-precision floating-point value in the low dword element of the second source operand (the third operand) and store the reduced result in binary floating-point format to the low dword element of the destination operand (the first operand) under the writemask k1. Bits 127:32 of the destination operand are copied from respective dword elements of the first source operand (the second operand).

+

The reduction transformation subtracts the integer part and the leading M fractional bits from the binary floating-point source value, where M is an unsigned integer specified by imm8[7:4], see Figure 5-28. Specifically, the reduction transformation can be expressed as:

+

dest = src – (ROUND(2^M * src)) * 2^-M;

+

where “Round()” treats “src”, “2^M”, and their product as binary floating-point numbers with normalized significand and biased exponents.

+

The magnitude of the reduced result can be expressed by considering src = 2^p * man2,

+

where ‘man2’ is the normalized significand and ‘p’ is the unbiased exponent

+

Then if RC = RNE: 0 ≤ |Reduced Result| ≤ 2^(p-M-1)

+

Then if RC ≠ RNE: 0 ≤ |Reduced Result| < 2^(p-M)

+

This instruction might set the precision exception flag. However, if SPE (Suppress Precision Exception, imm8[3]=1) is set, no precision exception is reported.

+

Handling of special cases of input values is listed in Table 5-29.

+
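To make the formula concrete, the following is a minimal C sketch of the reduction transformation for a scalar single-precision value, assuming round-to-nearest-even and omitting the NaN and SPE handling spelled out in the Operation section below (the helper name and the use of the C math library are illustrative, not part of the instruction definition):

#include <math.h>

/* dest = src - ROUND(2^M * src) * 2^-M, with RC = round-to-nearest-even (RNE) */
static float reduce_sp_rne(float src, int M)
{
    float scaled  = ldexpf(src, M);      /* src * 2^M; the instruction treats the exponent range as unlimited */
    float rounded = nearbyintf(scaled);  /* Round_to_INT under the current (default RNE) rounding mode */
    return src - ldexpf(rounded, -M);    /* subtract the integer part and the leading M fraction bits */
}

For example, reduce_sp_rne(3.796875f, 2) returns 0.046875f (3.796875 - 3.75), i.e., only the fraction bits below 2^-2 remain.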

Operation + ¶ +

+
ReduceArgumentSP(SRC[31:0], imm8[7:0])
+{
+    // Check for NaN
+    IF (SRC [31:0] = NAN) THEN
+        RETURN (Convert SRC[31:0] to QNaN); FI
+    M := imm8[7:4]; // Number of fraction bits of the normalized significand to be subtracted
+    RC := imm8[1:0];// Round Control for ROUND() operation
+    RC_source := imm8[2];
+    SPE := imm8[3]; // Suppress Precision Exception
+    TMP[31:0] := 2^-M * {ROUND(2^M * SRC[31:0], SPE, RC_source, RC)}; // ROUND() treats SRC and 2^M as standard binary FP values
+    TMP[31:0] := SRC[31:0] – TMP[31:0]; // subtraction under the same RC,SPE controls
+RETURN TMP[31:0]; // binary encoded FP with biased exponent and normalized significand
+}
+
+

VREDUCESS + ¶ +

+
IF k1[0] or *no writemask*
+    THEN DEST[31:0] := ReduceArgumentSP(SRC2[31:0], imm8[7:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[31:0] := 0
+        FI;
+FI;
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VREDUCESS __m128 _mm_reduce_ss( __m128 a, __m128 b, int imm, int sae)
+
+
VREDUCESS __m128 _mm_mask_reduce_ss(__m128 s, __mmask16 k, __m128 a, __m128 b, int imm, int sae)
+
+
VREDUCESS __m128 _mm_maskz_reduce_ss(__mmask16 k, __m128 a, __m128 b, int imm, int sae)
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

If SPE is enabled, precision exception is not reported (regardless of MXCSR exception mask).

+

Other Exceptions + ¶ +

+

See Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vrndscalepd.html b/x86/vrndscalepd.html new file mode 100644 index 0000000..0f10748 --- /dev/null +++ b/x86/vrndscalepd.html @@ -0,0 +1,221 @@ + +VRNDSCALEPD + — Round Packed Float64 Values to Include a Given Number of Fraction Bits

VRNDSCALEPD + — Round Packed Float64 Values to Include a Given Number of Fraction Bits

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F3A.W1 09 /r ib VRNDSCALEPD xmm1 {k1}{z}, xmm2/m128/m64bcst, imm8AV/VAVX512VL AVX512FRounds packed double precision floating-point values in xmm2/m128/m64bcst to a number of fraction bits specified by the imm8 field. Stores the result in xmm1 register. Under writemask.
EVEX.256.66.0F3A.W1 09 /r ib VRNDSCALEPD ymm1 {k1}{z}, ymm2/m256/m64bcst, imm8AV/VAVX512VL AVX512FRounds packed double precision floating-point values in ymm2/m256/m64bcst to a number of fraction bits specified by the imm8 field. Stores the result in ymm1 register. Under writemask.
EVEX.512.66.0F3A.W1 09 /r ib VRNDSCALEPD zmm1 {k1}{z}, zmm2/m512/m64bcst{sae}, imm8AV/VAVX512FRounds packed double precision floating-point values in zmm2/m512/m64bcst to a number of fraction bits specified by the imm8 field. Stores the result in zmm1 register using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)imm8N/A
+

Description + ¶ +

+

Round the double precision floating-point values in the source operand by the rounding mode specified in the immediate operand (see Figure 5-29) and places the result in the destination operand.

+

The destination operand (the first operand) is a ZMM/YMM/XMM register conditionally updated according to the writemask. The source operand (the second operand) can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location, or a 512/256/128-bit vector broadcasted from a 64-bit memory location.

+

The rounding process rounds the input to an integral value, plus the number of fraction bits specified by imm8[7:4] (to be included in the result), and returns the result as a double precision floating-point value.

+

It should be noticed that no overflow is induced while executing this instruction (although the source is scaled by the imm8[7:4] value).

+

The immediate operand also specifies control fields for the rounding operation, three bit fields are defined and shown in the “Immediate Control Description” figure below. Bit 3 of the immediate byte controls the processor behavior for a precision exception, bit 2 selects the source of rounding mode control. Bits 1:0 specify a non-sticky rounding-mode value (immediate control table below lists the encoded values for rounding-mode field).

+

The Precision Floating-Point Exception is signaled according to the immediate operand. If any source operand is an SNaN then it will be converted to a QNaN. If DAZ is set to ‘1 then denormals will be converted to zero before rounding.

+

The sign of the result of this instruction is preserved, including the sign of zero.

+

The formula of the operation on each data element for VRNDSCALEPD is

+

ROUND(x) = 2^-M * Round_to_INT(x * 2^M, round_ctrl),

+

round_ctrl = imm[3:0];

+

M=imm[7:4];

+

The operation of x * 2^M is computed as if the exponent range is unlimited (i.e., no overflow ever occurs).

+

VRNDSCALEPD is a more general form of the VEX-encoded VROUNDPD instruction. In VROUNDPD, the formula of the operation on each element is

+

ROUND(x) = Round_to_INT(x, round_ctrl),

+

round_ctrl = imm[3:0];

+

Note: EVEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+
+[Figure: imm8 control layout. imm8[7:4]: number of fixed points (fraction bits) to preserve. imm8[3] (SPE, Suppress Precision Exception): 0 = use MXCSR exception mask, 1 = suppress. imm8[2] (RS, Round Select): 0 = use imm8[1:0], 1 = use MXCSR. imm8[1:0] (Round Control Override): 00b = round nearest even, 01b = round down, 10b = round up, 11b = truncate.]
Figure 5-29. Imm8 Controls for VRNDSCALEPD/SD/PS/SS
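As a small C sketch of how the immediate byte shown above can be assembled (the macro name is illustrative, not part of any Intel header):

/* imm8 layout: [7:4] = M (fraction bits to keep), [3] = SPE, [2] = RS, [1:0] = RC override */
#define RNDSCALE_IMM(M, SPE, RS, RC) \
    ((((M) & 0xF) << 4) | (((SPE) & 1) << 3) | (((RS) & 1) << 2) | ((RC) & 3))
/* e.g., RNDSCALE_IMM(0, 1, 0, 3) == 0x0B: round to an integer, truncate, suppress the precision exception */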
+

Handling of special cases of input values is listed in Table 5-31.

+
+ + + + + + + + + + + + +
Input value | Returned value
Src1 = ±inf | Src1
Src1 = ±NaN | Src1 converted to QNaN
Src1 = ±0 | Src1
+
Table 5-31. VRNDSCALEPD/SD/PS/SS Special Cases
+

Operation + ¶ +

+
RoundToIntegerDP(SRC[63:0], imm8[7:0]) {
+    if (imm8[2] = 1)
+        rounding_direction := MXCSR:RC
+                    ; get round control from MXCSR
+    else
+        rounding_direction := imm8[1:0]
+                    ; get round control from imm8[1:0]
+    FI
+    M := imm8[7:4] ; get the scaling factor
+    case (rounding_direction)
+    00: TMP[63:0] := round_to_nearest_even_integer(2^M*SRC[63:0])
+    01: TMP[63:0] := round_to_equal_or_smaller_integer(2^M*SRC[63:0])
+    10: TMP[63:0] := round_to_equal_or_larger_integer(2^M*SRC[63:0])
+    11: TMP[63:0] := round_to_nearest_smallest_magnitude_integer(2^M*SRC[63:0])
+    ESAC
+    Dest[63:0] := 2^-M * TMP[63:0]
+                ; scale down back by 2^-M
+    if (imm8[3] = 0) Then ; check SPE
+        if (SRC[63:0] != Dest[63:0]) Then
+                    ; check precision lost
+            set_precision()
+                ; set #PE
+        FI;
+    FI;
+    return(Dest[63:0])
+}
+
+

VRNDSCALEPD (EVEX encoded versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF *src is a memory operand*
+    THEN TMP_SRC := BROADCAST64(SRC, VL, k1)
+    ELSE TMP_SRC := SRC
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := RoundToIntegerDP(TMP_SRC[i+63:i], imm8[7:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[i+63:i] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[i+63:i] := 0
+        FI;
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRNDSCALEPD __m512d _mm512_roundscale_pd( __m512d a, int imm);
+
+
VRNDSCALEPD __m512d _mm512_roundscale_round_pd( __m512d a, int imm, int sae);
+
+
VRNDSCALEPD __m512d _mm512_mask_roundscale_pd(__m512d s, __mmask8 k, __m512d a, int imm);
+
+
VRNDSCALEPD __m512d _mm512_mask_roundscale_round_pd(__m512d s, __mmask8 k, __m512d a, int imm, int sae);
+
+
VRNDSCALEPD __m512d _mm512_maskz_roundscale_pd( __mmask8 k, __m512d a, int imm);
+
+
VRNDSCALEPD __m512d _mm512_maskz_roundscale_round_pd( __mmask8 k, __m512d a, int imm, int sae);
+
+
VRNDSCALEPD __m256d _mm256_roundscale_pd( __m256d a, int imm);
+
+
VRNDSCALEPD __m256d _mm256_mask_roundscale_pd(__m256d s, __mmask8 k, __m256d a, int imm);
+
+
VRNDSCALEPD __m256d _mm256_maskz_roundscale_pd( __mmask8 k, __m256d a, int imm);
+
+
VRNDSCALEPD __m128d _mm_roundscale_pd( __m128d a, int imm);
+
+
VRNDSCALEPD __m128d _mm_mask_roundscale_pd(__m128d s, __mmask8 k, __m128d a, int imm);
+
+
VRNDSCALEPD __m128d _mm_maskz_roundscale_pd( __mmask8 k, __m128d a, int imm);
+
+
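As a usage sketch with the intrinsics above (assuming an AVX-512F capable compiler and <immintrin.h>; the function name is illustrative), rounding each element to two fraction bits with round-to-nearest-even could look like:

#include <immintrin.h>

static __m512d round_to_quarters(__m512d v)
{
    /* imm8 = 0x20: M = 2 (keep 2 fraction bits), SPE = 0, RS = 0, RC = 00b (round nearest even) */
    return _mm512_roundscale_pd(v, 0x20);
}
/* e.g., 1.3 -> 1.25 and 2.9 -> 3.0, i.e., the nearest multiples of 2^-2 */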

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

If SPE is enabled, precision exception is not reported (regardless of MXCSR exception mask).

+

Other Exceptions + ¶ +

+

See Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vrndscaleph.html b/x86/vrndscaleph.html new file mode 100644 index 0000000..d2cfdac --- /dev/null +++ b/x86/vrndscaleph.html @@ -0,0 +1,179 @@ + +VRNDSCALEPH + — Round Packed FP16 Values to Include a Given Number of Fraction Bits

VRNDSCALEPH + — Round Packed FP16 Values to Include a Given Number of Fraction Bits

+ + + + + + + + + + + + + + + + + + + + + + + + + +
InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.NP.0F3A.W0 08 /r /ib VRNDSCALEPH xmm1{k1}{z}, xmm2/m128/m16bcst, imm8AV/VAVX512-FP16 AVX512VLRound packed FP16 values in xmm2/m128/m16bcst to a number of fraction bits specified by the imm8 field. Store the result in xmm1 subject to writemask k1.
EVEX.256.NP.0F3A.W0 08 /r /ib VRNDSCALEPH ymm1{k1}{z}, ymm2/m256/m16bcst, imm8AV/VAVX512-FP16 AVX512VLRound packed FP16 values in ymm2/m256/m16bcst to a number of fraction bits specified by the imm8 field. Store the result in ymm1 subject to writemask k1.
EVEX.512.NP.0F3A.W0 08 /r /ib VRNDSCALEPH zmm1{k1}{z}, zmm2/m512/m16bcst {sae}, imm8AV/VAVX512-FP16Round packed FP16 values in zmm2/m512/m16bcst to a number of fraction bits specified by the imm8 field. Store the result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)imm8 (r)N/A
+

Description + ¶ +

+

This instruction rounds the FP16 values in the source operand by the rounding mode specified in the immediate operand (see Table 5-32) and places the result in the destination operand. The destination operand is conditionally updated according to the writemask.

+

The rounding process rounds the input to an integral value, plus the number of fraction bits specified by imm8[7:4] (to be included in the result), and returns the result as an FP16 value.

+

Note that no overflow is induced while executing this instruction (although the source is scaled by the imm8[7:4] value).

+

The immediate operand also specifies control fields for the rounding operation. Three bit fields are defined and shown in Table 5-32, “Imm8 Controls for VRNDSCALEPH/VRNDSCALESH.” Bit 3 of the immediate byte controls the processor behavior for a precision exception, bit 2 selects the source of rounding mode control, and bits 1:0 specify a non-sticky rounding-mode value.

+

The Precision Floating-Point Exception is signaled according to the immediate operand. If any source operand is an SNaN then it will be converted to a QNaN.

+

The sign of the result of this instruction is preserved, including the sign of zero. Special cases are described in Table 5-33.

+

The formula of the operation on each data element for VRNDSCALEPH is

+

ROUND(x) = 2^-M * Round_to_INT(x * 2^M, round_ctrl),

+

round_ctrl = imm[3:0];

+

M=imm[7:4];

+

The operation of x * 2^M is computed as if the exponent range is unlimited (i.e., no overflow ever occurs).

+

If this instruction encoding’s SPE bit (bit 3) in the immediate operand is 1, VRNDSCALEPH can set MXCSR.UE without MXCSR.PE.

+

EVEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+
+ + + + + + + + + + + + + + + +
Imm8 Bits | Description
imm8[7:4] | Number of fixed points to preserve.
imm8[3] | Suppress Precision Exception (SPE). 0: Implies use of MXCSR exception mask. 1: Implies suppress.
imm8[2] | Round Select (RS). 0: Implies use of imm8[1:0]. 1: Implies use of MXCSR.
imm8[1:0] | Round Control Override. 0b00: Round nearest even. 0b01: Round down. 0b10: Round up. 0b11: Truncate.
+
Table 5-32. Imm8 Controls for VRNDSCALEPH/VRNDSCALESH
+
+ + + + + + + + + + + + +
Input Value | Returned Value
Src1 = ±∞ | Src1
Src1 = ±NaN | Src1 converted to QNaN
Src1 = ±0 | Src1
+
Table 5-33. VRNDSCALEPH/VRNDSCALESH Special Cases
+

Operation + ¶ +

+
def round_fp16_to_integer(src, imm8):
+    if imm8[2] = 1:
+        rounding_direction := MXCSR.RC
+    else:
+        rounding_direction := imm8[1:0]
+    m := imm8[7:4] // scaling factor
+    tsrc1 := 2^m * src
+    if rounding_direction = 0b00:
+        tmp := round_to_nearest_even_integer(tsrc1)
+    else if rounding_direction = 0b01:
+        tmp := round_to_equal_or_smaller_integer(tsrc1)
+    else if rounding_direction = 0b10:
+        tmp := round_to_equal_or_larger_integer(tsrc1)
+    else if rounding_direction = 0b11:
+        tmp := round_to_smallest_magnitude_integer(tsrc1)
+    dst := 2^(-m) * tmp
+    if imm8[3]==0: // check SPE
+        if src != dst:
+            MXCSR.PE := 1
+    return dst
+
+

VRNDSCALEPH dest{k1}, src, imm8 + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR i := 0 to KL-1:
+    IF k1[i] or *no writemask*:
+        IF SRC is memory and (EVEX.b = 1):
+            tsrc := src.fp16[0]
+        ELSE:
+            tsrc := src.fp16[i]
+        DEST.fp16[i] := round_fp16_to_integer(tsrc, imm8)
+    ELSE IF *zeroing*:
+        DEST.fp16[i] := 0
+    //else DEST.fp16[i] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRNDSCALEPH __m128h _mm_mask_roundscale_ph (__m128h src, __mmask8 k, __m128h a, int imm8);
+
+
VRNDSCALEPH __m128h _mm_maskz_roundscale_ph (__mmask8 k, __m128h a, int imm8);
+
+
VRNDSCALEPH __m128h _mm_roundscale_ph (__m128h a, int imm8);
+
+
VRNDSCALEPH __m256h _mm256_mask_roundscale_ph (__m256h src, __mmask16 k, __m256h a, int imm8);
+
+
VRNDSCALEPH __m256h _mm256_maskz_roundscale_ph (__mmask16 k, __m256h a, int imm8);
+
+
VRNDSCALEPH __m256h _mm256_roundscale_ph (__m256h a, int imm8);
+
+
VRNDSCALEPH __m512h _mm512_mask_roundscale_ph (__m512h src, __mmask32 k, __m512h a, int imm8);
+
+
VRNDSCALEPH __m512h _mm512_maskz_roundscale_ph (__mmask32 k, __m512h a, int imm8);
+
+
VRNDSCALEPH __m512h _mm512_roundscale_ph (__m512h a, int imm8);
+
+
VRNDSCALEPH __m512h _mm512_mask_roundscale_round_ph (__m512h src, __mmask32 k, __m512h a, int imm8, const int sae);
+
+
VRNDSCALEPH __m512h _mm512_maskz_roundscale_round_ph (__mmask32 k, __m512h a, int imm8, const int sae);
+
+
VRNDSCALEPH __m512h _mm512_roundscale_round_ph (__m512h a, int imm8, const int sae);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vrndscaleps.html b/x86/vrndscaleps.html new file mode 100644 index 0000000..f31d29b --- /dev/null +++ b/x86/vrndscaleps.html @@ -0,0 +1,156 @@ + +VRNDSCALEPS + — Round Packed Float32 Values to Include a Given Number of Fraction Bits

VRNDSCALEPS + — Round Packed Float32 Values to Include a Given Number of Fraction Bits

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F3A.W0 08 /r ib VRNDSCALEPS xmm1 {k1}{z}, xmm2/m128/m32bcst, imm8AV/VAVX512VL AVX512FRounds packed single-precision floating-point values in xmm2/m128/m32bcst to a number of fraction bits specified by the imm8 field. Stores the result in xmm1 register. Under writemask.
EVEX.256.66.0F3A.W0 08 /r ib VRNDSCALEPS ymm1 {k1}{z}, ymm2/m256/m32bcst, imm8AV/VAVX512VL AVX512FRounds packed single-precision floating-point values in ymm2/m256/m32bcst to a number of fraction bits specified by the imm8 field. Stores the result in ymm1 register. Under writemask.
EVEX.512.66.0F3A.W0 08 /r ib VRNDSCALEPS zmm1 {k1}{z}, zmm2/m512/m32bcst{sae}, imm8AV/VAVX512FRounds packed single-precision floating-point values in zmm2/m512/m32bcst to a number of fraction bits specified by the imm8 field. Stores the result in zmm1 register using writemask.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)imm8N/A
+

Description + ¶ +

+

Round the single-precision floating-point values in the source operand by the rounding mode specified in the immediate operand (see Figure 5-29) and places the result in the destination operand.

+

The destination operand (the first operand) is a ZMM register conditionally updated according to the writemask. The source operand (the second operand) can be a ZMM register, a 512-bit memory location, or a 512-bit vector broadcasted from a 32-bit memory location.

+

The rounding process rounds the input to an integral value, plus the number of fraction bits specified by imm8[7:4] (to be included in the result), and returns the result as a single-precision floating-point value.

+

It should be noticed that no overflow is induced while executing this instruction (although the source is scaled by the imm8[7:4] value).

+

The immediate operand also specifies control fields for the rounding operation, three bit fields are defined and shown in the “Immediate Control Description” figure below. Bit 3 of the immediate byte controls the processor behavior for a precision exception, bit 2 selects the source of rounding mode control. Bits 1:0 specify a non-sticky rounding-mode value (immediate control table below lists the encoded values for rounding-mode field).

+

The Precision Floating-Point Exception is signaled according to the immediate operand. If any source operand is an SNaN then it will be converted to a QNaN. If DAZ is set to ‘1 then denormals will be converted to zero before rounding.

+

The sign of the result of this instruction is preserved, including the sign of zero.

+

The formula of the operation on each data element for VRNDSCALEPS is

+

ROUND(x) = 2^-M * Round_to_INT(x * 2^M, round_ctrl),

+

round_ctrl = imm[3:0];

+

M=imm[7:4];

+

The operation of x * 2^M is computed as if the exponent range is unlimited (i.e., no overflow ever occurs).

+

VRNDSCALEPS is a more general form of the VEX-encoded VROUNDPS instruction. In VROUNDPS, the formula of the operation on each element is

+

ROUND(x) = Round_to_INT(x, round_ctrl),

+

round_ctrl = imm[3:0];

+

Note: EVEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Handling of special cases of input values is listed in Table 5-31.

+

Operation + ¶ +

+
RoundToIntegerSP(SRC[31:0], imm8[7:0]) {
+    if (imm8[2] = 1)
+        rounding_direction := MXCSR:RC
+                    ; get round control from MXCSR
+    else
+        rounding_direction := imm8[1:0]
+                    ; get round control from imm8[1:0]
+    FI
+    M := imm8[7:4] ; get the scaling factor
+    case (rounding_direction)
+    00: TMP[31:0] := round_to_nearest_even_integer(2^M*SRC[31:0])
+    01: TMP[31:0] := round_to_equal_or_smaller_integer(2^M*SRC[31:0])
+    10: TMP[31:0] := round_to_equal_or_larger_integer(2^M*SRC[31:0])
+    11: TMP[31:0] := round_to_nearest_smallest_magnitude_integer(2^M*SRC[31:0])
+    ESAC;
+    Dest[31:0] := 2^-M * TMP[31:0] ; scale down back by 2^-M
+    if (imm8[3] = 0) Then ; check SPE
+        if (SRC[31:0] != Dest[31:0]) Then
+                    ; check precision lost
+            set_precision() ; set #PE
+        FI;
+    FI;
+    return(Dest[31:0])
+}
+VRNDSCALEPS (EVEX encoded versions)
+(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF *src is a memory operand*
+    THEN TMP_SRC := BROADCAST32(SRC, VL, k1)
+    ELSE TMP_SRC := SRC
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := RoundToIntegerSP(TMP_SRC[i+31:i], imm8[7:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[i+31:i] remains unchanged*
+            ELSE
+                    ; zeroing-masking
+                DEST[i+31:i] := 0
+        FI;
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRNDSCALEPS __m512 _mm512_roundscale_ps( __m512 a, int imm);
+
+
VRNDSCALEPS __m512 _mm512_roundscale_round_ps( __m512 a, int imm, int sae);
+
+
VRNDSCALEPS __m512 _mm512_mask_roundscale_ps(__m512 s, __mmask16 k, __m512 a, int imm);
+
+
VRNDSCALEPS __m512 _mm512_mask_roundscale_round_ps(__m512 s, __mmask16 k, __m512 a, int imm, int sae);
+
+
VRNDSCALEPS __m512 _mm512_maskz_roundscale_ps( __mmask16 k, __m512 a, int imm);
+
+
VRNDSCALEPS __m512 _mm512_maskz_roundscale_round_ps( __mmask16 k, __m512 a, int imm, int sae);
+
+
VRNDSCALEPS __m256 _mm256_roundscale_ps( __m256 a, int imm);
+
+
VRNDSCALEPS __m256 _mm256_mask_roundscale_ps(__m256 s, __mmask8 k, __m256 a, int imm);
+
+
VRNDSCALEPS __m256 _mm256_maskz_roundscale_ps( __mmask8 k, __m256 a, int imm);
+
+
VRNDSCALEPS __m128 _mm_roundscale_ps( __m128 a, int imm);
+
+
VRNDSCALEPS __m128 _mm_mask_roundscale_ps(__m128 s, __mmask8 k, __m128 a, int imm);
+
+
VRNDSCALEPS __m128 _mm_maskz_roundscale_ps( __mmask8 k, __m128 a, int imm);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

If SPE is enabled, precision exception is not reported (regardless of MXCSR exception mask).

+

Other Exceptions + ¶ +

+

See Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vrndscalesd.html b/x86/vrndscalesd.html new file mode 100644 index 0000000..1c15d6b --- /dev/null +++ b/x86/vrndscalesd.html @@ -0,0 +1,126 @@ + +VRNDSCALESD + — Round Scalar Float64 Value to Include a Given Number of Fraction Bits

VRNDSCALESD + — Round Scalar Float64 Value to Include a Given Number of Fraction Bits

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.0F3A.W1 0B /r ib VRNDSCALESD xmm1 {k1}{z}, xmm2, xmm3/m64{sae}, imm8AV/VAVX512FRounds scalar double precision floating-point value in xmm3/m64 to a number of fraction bits specified by the imm8 field. Stores the result in xmm1 register.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)imm8
+

Description + ¶ +

+

Rounds a double precision floating-point value in the low quadword (see Figure 5-29) element of the second source operand (the third operand) by the rounding mode specified in the immediate operand and places the result in the corresponding element of the destination operand (the first operand) according to the writemask. The quadword element at bits 127:64 of the destination is copied from the first source operand (the second operand).

+

The destination and first source operands are XMM registers, the 2nd source operand can be an XMM register or memory location. Bits MAXVL-1:128 of the destination register are cleared.

+

The rounding process rounds the input to an integral value, plus the number of fraction bits specified by imm8[7:4] (to be included in the result), and returns the result as a double precision floating-point value.

+

It should be noticed that no overflow is induced while executing this instruction (although the source is scaled by the imm8[7:4] value).

+

The immediate operand also specifies control fields for the rounding operation, three bit fields are defined and shown in the “Immediate Control Description” figure below. Bit 3 of the immediate byte controls the processor behavior for a precision exception, bit 2 selects the source of rounding mode control. Bits 1:0 specify a non-sticky rounding-mode value (immediate control table below lists the encoded values for rounding-mode field).

+

The Precision Floating-Point Exception is signaled according to the immediate operand. If any source operand is an SNaN then it will be converted to a QNaN. If DAZ is set to ‘1 then denormals will be converted to zero before rounding.

+

The sign of the result of this instruction is preserved, including the sign of zero.

+

The formula of the operation for VRNDSCALESD is

+

ROUND(x) = 2^-M * Round_to_INT(x * 2^M, round_ctrl),

+

round_ctrl = imm[3:0];

+

M=imm[7:4];

+

The operation of x * 2^M is computed as if the exponent range is unlimited (i.e., no overflow ever occurs).

+

VRNDSCALESD is a more general form of the VEX-encoded VROUNDSD instruction. In VROUNDSD, the formula of the operation is

+

ROUND(x) = Round_to_INT(x, round_ctrl),

+

round_ctrl = imm[3:0];

+

EVEX encoded version: The source operand is a XMM register or a 64-bit memory location. The destination operand is a XMM register.

+

Handling of special cases of input values is listed in Table 5-31.

+

Operation + ¶ +

+
RoundToIntegerDP(SRC[63:0], imm8[7:0]) {
+    if (imm8[2] = 1)
+        rounding_direction := MXCSR:RC
+                        ; get round control from MXCSR
+    else
+        rounding_direction := imm8[1:0]
+                        ; get round control from imm8[1:0]
+    FI
+    M := imm8[7:4] ; get the scaling factor
+    case (rounding_direction)
+    00: TMP[63:0] := round_to_nearest_even_integer(2^M*SRC[63:0])
+    01: TMP[63:0] := round_to_equal_or_smaller_integer(2^M*SRC[63:0])
+    10: TMP[63:0] := round_to_equal_or_larger_integer(2^M*SRC[63:0])
+    11: TMP[63:0] := round_to_nearest_smallest_magnitude_integer(2^M*SRC[63:0])
+    ESAC
+    Dest[63:0] := 2^-M * TMP[63:0]
+                    ; scale down back by 2^-M
+    if (imm8[3] = 0) Then ; check SPE
+        if (SRC[63:0] != Dest[63:0]) Then
+                        ; check precision lost
+            set_precision()
+                    ; set #PE
+        FI;
+    FI;
+    return(Dest[63:0])
+}
+VRNDSCALESD (EVEX encoded version)
+IF k1[0] or *no writemask*
+    THEN DEST[63:0] := RoundToIntegerDP(SRC2[63:0], Zero_upper_imm[7:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[63:0] := 0
+        FI;
+FI;
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRNDSCALESD __m128d _mm_roundscale_sd ( __m128d a, __m128d b, int imm);
+
+
VRNDSCALESD __m128d _mm_roundscale_round_sd ( __m128d a, __m128d b, int imm, int sae);
+
+
VRNDSCALESD __m128d _mm_mask_roundscale_sd (__m128d s, __mmask8 k, __m128d a, __m128d b, int imm);
+
+
VRNDSCALESD __m128d _mm_mask_roundscale_round_sd (__m128d s, __mmask8 k, __m128d a, __m128d b, int imm, int sae);
+
+
VRNDSCALESD __m128d _mm_maskz_roundscale_sd ( __mmask8 k, __m128d a, __m128d b, int imm);
+
+
VRNDSCALESD __m128d _mm_maskz_roundscale_round_sd ( __mmask8 k, __m128d a, __m128d b, int imm, int sae);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

If SPE is enabled, precision exception is not reported (regardless of MXCSR exception mask).

+

Other Exceptions + ¶ +

+

See Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vrndscalesh.html b/x86/vrndscalesh.html new file mode 100644 index 0000000..944452e --- /dev/null +++ b/x86/vrndscalesh.html @@ -0,0 +1,95 @@ + +VRNDSCALESH + — Round Scalar FP16 Value to Include a Given Number of Fraction Bits

VRNDSCALESH + — Round Scalar FP16 Value to Include a Given Number of Fraction Bits

+ + + + + + + + + + + + + +
InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.NP.0F3A.W0 0A /r /ib VRNDSCALESH xmm1{k1}{z}, xmm2, xmm3/m16 {sae}, imm8AV/VAVX512-FP16Round the low FP16 value in xmm3/m16 to a number of fraction bits specified by the imm8 field. Store the result in xmm1 subject to writemask k1. Bits 127:16 from xmm2 are copied to xmm1[127:16].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)imm8 (r)
+

Description + ¶ +

+

This instruction rounds the low FP16 value in the second source operand by the rounding mode specified in the immediate operand (see Table 5-32) and places the result in the destination operand.

+

Bits 127:16 of the destination operand are copied from the corresponding bits of the first source operand. Bits MAXVL-1:128 of the destination operand are zeroed. The low FP16 element of the destination is updated according to the writemask.

+

The rounding process rounds the input to an integral value, plus the number of fraction bits specified by imm8[7:4] (to be included in the result), and returns the result as an FP16 value.

+

Note that no overflow is induced while executing this instruction (although the source is scaled by the imm8[7:4] value).

+

The immediate operand also specifies control fields for the rounding operation. Three bit fields are defined and shown in Table 5-32, “Imm8 Controls for VRNDSCALEPH/VRNDSCALESH.” Bit 3 of the immediate byte controls the processor behavior for a precision exception, bit 2 selects the source of rounding mode control, and bits 1:0 specify a non-sticky rounding-mode value.

+

The Precision Floating-Point Exception is signaled according to the immediate operand. If any source operand is an SNaN then it will be converted to a QNaN.

+

The sign of the result of this instruction is preserved, including the sign of zero. Special cases are described in Table 5-33.

+

If this instruction encoding’s SPE bit (bit 3) in the immediate operand is 1, VRNDSCALESH can set MXCSR.UE without MXCSR.PE.

+

The formula of the operation on each data element for VRNDSCALESH is:

+

ROUND(x) = 2^-M * Round_to_INT(x * 2^M, round_ctrl),

+

round_ctrl = imm[3:0];

+

M=imm[7:4];

+

The operation of x * 2^M is computed as if the exponent range is unlimited (i.e., no overflow ever occurs).

+

Operation + ¶ +

+

VRNDSCALESH dest{k1}, src1, src2, imm8 + ¶ +

+
IF k1[0] or *no writemask*:
+    DEST.fp16[0] := round_fp16_to_integer(src2.fp16[0], imm8) // see VRNDSCALEPH
+ELSE IF *zeroing*:
+    DEST.fp16[0] := 0
+//else DEST.fp16[0] remains unchanged
+DEST[127:16] := src1[127:16]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRNDSCALESH __m128h _mm_mask_roundscale_round_sh (__m128h src, __mmask8 k, __m128h a, __m128h b, int imm8, const int sae);
+
+
VRNDSCALESH __m128h _mm_maskz_roundscale_round_sh (__mmask8 k, __m128h a, __m128h b, int imm8, const int sae);
+
+
VRNDSCALESH __m128h _mm_roundscale_round_sh (__m128h a, __m128h b, int imm8, const int sae);
+
+
VRNDSCALESH __m128h _mm_mask_roundscale_sh (__m128h src, __mmask8 k, __m128h a, __m128h b, int imm8);
+
+
VRNDSCALESH __m128h _mm_maskz_roundscale_sh (__mmask8 k, __m128h a, __m128h b, int imm8);
+
+
VRNDSCALESH __m128h _mm_roundscale_sh (__m128h a, __m128h b, int imm8);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Precision.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vrndscaless.html b/x86/vrndscaless.html new file mode 100644 index 0000000..c4e1bc0 --- /dev/null +++ b/x86/vrndscaless.html @@ -0,0 +1,124 @@ + +VRNDSCALESS + — Round Scalar Float32 Value to Include a Given Number of Fraction Bits

VRNDSCALESS + — Round Scalar Float32 Value to Include a Given Number of Fraction Bits

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.0F3A.W0 0A /r ib VRNDSCALESS xmm1 {k1}{z}, xmm2, xmm3/m32{sae}, imm8AV/VAVX512FRounds scalar single-precision floating-point value in xmm3/m32 to a number of fraction bits specified by the imm8 field. Stores the result in xmm1 register under writemask.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Rounds the single-precision floating-point value in the low doubleword element of the second source operand (the third operand) by the rounding mode specified in the immediate operand (see Figure 5-29) and places the result in the corresponding element of the destination operand (the first operand) according to the writemask. The double-word elements at bits 127:32 of the destination are copied from the first source operand (the second operand).

+

The destination and first source operands are XMM registers, the 2nd source operand can be an XMM register or memory location. Bits MAXVL-1:128 of the destination register are cleared.

+

The rounding process rounds the input to an integral value, plus the number of fraction bits specified by imm8[7:4] (to be included in the result), and returns the result as a single-precision floating-point value.

+

It should be noticed that no overflow is induced while executing this instruction (although the source is scaled by the imm8[7:4] value).

+

The immediate operand also specifies control fields for the rounding operation, three bit fields are defined and shown in the “Immediate Control Description” figure below. Bit 3 of the immediate byte controls the processor behavior for a precision exception, bit 2 selects the source of rounding mode control. Bits 1:0 specify a non-sticky rounding-mode value (immediate control tables below lists the encoded values for rounding-mode field).

+

The Precision Floating-Point Exception is signaled according to the immediate operand. If any source operand is an SNaN then it will be converted to a QNaN. If DAZ is set to ‘1 then denormals will be converted to zero before rounding.

+

The sign of the result of this instruction is preserved, including the sign of zero.

+

The formula of the operation for VRNDSCALESS is

+

ROUND(x) = 2^-M * Round_to_INT(x * 2^M, round_ctrl),

+

round_ctrl = imm[3:0];

+

M=imm[7:4];

+

The operation of x * 2^M is computed as if the exponent range is unlimited (i.e., no overflow ever occurs).

+

VRNDSCALESS is a more general form of the VEX-encoded VROUNDSS instruction. In VROUNDSS, the formula of the operation on each element is

+

ROUND(x) = Round_to_INT(x, round_ctrl),

+

round_ctrl = imm[3:0];

+

EVEX encoded version: The source operand is a XMM register or a 32-bit memory location. The destination operand is a XMM register.

+

Handling of special cases of input values is listed in Table 5-31.

+

Operation + ¶ +

+
RoundToIntegerSP(SRC[31:0], imm8[7:0]) {
+    if (imm8[2] = 1)
+        rounding_direction := MXCSR:RC
+                    ; get round control from MXCSR
+    else
+        rounding_direction := imm8[1:0]
+                    ; get round control from imm8[1:0]
+    FI
+    M := imm8[7:4] ; get the scaling factor
+    case (rounding_direction)
+    00: TMP[31:0] := round_to_nearest_even_integer(2^M*SRC[31:0])
+    01: TMP[31:0] := round_to_equal_or_smaller_integer(2^M*SRC[31:0])
+    10: TMP[31:0] := round_to_equal_or_larger_integer(2^M*SRC[31:0])
+    11: TMP[31:0] := round_to_nearest_smallest_magnitude_integer(2^M*SRC[31:0])
+    ESAC;
+    Dest[31:0] := 2^-M * TMP[31:0] ; scale down back by 2^-M
+    if (imm8[3] = 0) Then ; check SPE
+        if (SRC[31:0] != Dest[31:0]) Then
+                    ; check precision lost
+            set_precision() ; set #PE
+        FI;
+    FI;
+    return(Dest[31:0])
+}
+VRNDSCALESS (EVEX encoded version)
+IF k1[0] or *no writemask*
+    THEN DEST[31:0] := RoundToIntegerSP(SRC2[31:0], Zero_upper_imm[7:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                THEN DEST[31:0] := 0
+        FI;
+FI;
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRNDSCALESS __m128 _mm_roundscale_ss ( __m128 a, __m128 b, int imm);
+
+
VRNDSCALESS __m128 _mm_roundscale_round_ss ( __m128 a, __m128 b, int imm, int sae);
+
+
VRNDSCALESS __m128 _mm_mask_roundscale_ss (__m128 s, __mmask8 k, __m128 a, __m128 b, int imm);
+
+
VRNDSCALESS __m128 _mm_mask_roundscale_round_ss (__m128 s, __mmask8 k, __m128 a, __m128 b, int imm, int sae);
+
+
VRNDSCALESS __m128 _mm_maskz_roundscale_ss ( __mmask8 k, __m128 a, __m128 b, int imm);
+
+
VRNDSCALESS __m128 _mm_maskz_roundscale_round_ss ( __mmask8 k, __m128 a, __m128 b, int imm, int sae);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision.

+

If SPE is enabled, precision exception is not reported (regardless of MXCSR exception mask).

+

Other Exceptions + ¶ +

+

See Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vrsqrt14pd.html b/x86/vrsqrt14pd.html new file mode 100644 index 0000000..710100f --- /dev/null +++ b/x86/vrsqrt14pd.html @@ -0,0 +1,153 @@ + +VRSQRT14PD + — Compute Approximate Reciprocals of Square Roots of Packed Float64 Values

VRSQRT14PD + — Compute Approximate Reciprocals of Square Roots of Packed Float64 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W1 4E /r VRSQRT14PD xmm1 {k1}{z}, xmm2/m128/m64bcstAV/VAVX512VL AVX512FComputes the approximate reciprocal square roots of the packed double precision floating-point values in xmm2/m128/m64bcst and stores the results in xmm1. Under writemask.
EVEX.256.66.0F38.W1 4E /r VRSQRT14PD ymm1 {k1}{z}, ymm2/m256/m64bcstAV/VAVX512VL AVX512FComputes the approximate reciprocal square roots of the packed double precision floating-point values in ymm2/m256/m64bcst and stores the results in ymm1. Under writemask.
EVEX.512.66.0F38.W1 4E /r VRSQRT14PD zmm1 {k1}{z}, zmm2/m512/m64bcstAV/VAVX512FComputes the approximate reciprocal square roots of the packed double precision floating-point values in zmm2/m512/m64bcst and stores the results in zmm1 under writemask.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction performs a SIMD computation of the approximate reciprocals of the square roots of the eight packed double precision floating-point values in the source operand (the second operand) and stores the packed double precision floating-point results in the destination operand (the first operand) according to the writemask. The maximum relative error for this approximation is less than 2^-14.

+

EVEX.512 encoded version: The source operand can be a ZMM register, a 512-bit memory location, or a 512-bit vector broadcasted from a 64-bit memory location. The destination operand is a ZMM register, conditionally updated using writemask k1.

+

EVEX.256 encoded version: The source operand is a YMM register, a 256-bit memory location, or a 256-bit vector broadcasted from a 64-bit memory location. The destination operand is a YMM register, conditionally updated using writemask k1.

+

EVEX.128 encoded version: The source operand is a XMM register, a 128-bit memory location, or a 128-bit vector broadcasted from a 64-bit memory location. The destination operand is a XMM register, conditionally updated using writemask k1.

+

The VRSQRT14PD instruction is not affected by the rounding control bits in the MXCSR register. When a source value is a 0.0, an ∞ with the sign of the source value is returned. When the source operand is an +∞ then +ZERO value is returned. A denormal source value is treated as zero only if DAZ bit is set in MXCSR. Otherwise it is treated correctly and performs the approximation with the specified masked response. When a source value is a negative value (other than 0.0) a floating-point QNaN_indefinite is returned. When a source value is an SNaN or QNaN, the SNaN is converted to a QNaN or the source QNaN is returned.

+

MXCSR exception flags are not affected by this instruction and floating-point exceptions are not reported.

+

Note: EVEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

A numerically exact implementation of VRSQRT14xx can be found at https://software.intel.com/en-us/articles/reference-implementations-for-IA-approximation-instructions-vrcp14-vrsqrt14-vrcp28-vrsqrt28-vexp2.

+

Operation + ¶ +

+

VRSQRT14PD (EVEX encoded versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC *is memory*)
+                THEN DEST[i+63:i] := APPROXIMATE(1.0/ SQRT(SRC[63:0]));
+                ELSE DEST[i+63:i] := APPROXIMATE(1.0/ SQRT(SRC[i+63:i]));
+            FI;
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[i+63:i] remains unchanged*
+            ELSE
+                    ; zeroing-masking
+                DEST[i+63:i] := 0
+        FI;
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Input value | Result value | Comments
Any denormal | Normal | Cannot generate overflow
X = 2^-2n | 2^n
X < 0 | QNaN_Indefinite | Including -INF
X = -0 | -INF
X = +0 | +INF
X = +INF | +0
+
Table 5-34. VRSQRT14PD Special Cases
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRSQRT14PD __m512d _mm512_rsqrt14_pd( __m512d a);
+
+
VRSQRT14PD __m512d _mm512_mask_rsqrt14_pd(__m512d s, __mmask8 k, __m512d a);
+
+
VRSQRT14PD __m512d _mm512_maskz_rsqrt14_pd( __mmask8 k, __m512d a);
+
+
VRSQRT14PD __m256d _mm256_rsqrt14_pd( __m256d a);
+
+
VRSQRT14PD __m256d _mm256_mask_rsqrt14_pd(__m256d s, __mmask8 k, __m256d a);
+
+
VRSQRT14PD __m256d _mm256_maskz_rsqrt14_pd( __mmask8 k, __m256d a);
+
+
VRSQRT14PD __m128d _mm_rsqrt14_pd( __m128d a);
+
+
VRSQRT14PD __m128d _mm_mask_rsqrt14_pd(__m128d s, __mmask8 k, __m128d a);
+
+
VRSQRT14PD __m128d _mm_maskz_rsqrt14_pd( __mmask8 k, __m128d a);
+
+
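Because the approximation carries up to 2^-14 relative error, a common follow-up (a sketch of typical usage, not something the instruction itself performs) is one Newton-Raphson step, which roughly doubles the number of accurate bits:

#include <immintrin.h>

/* y ~= 1/sqrt(a) from VRSQRT14PD, refined as y' = y * (1.5 - 0.5 * a * y * y) */
static __m512d rsqrt_nr_pd(__m512d a)
{
    __m512d y   = _mm512_rsqrt14_pd(a);
    __m512d ayy = _mm512_mul_pd(_mm512_mul_pd(a, y), y);
    /* fnmadd(0.5, ayy, 1.5) computes 1.5 - 0.5 * a * y * y */
    return _mm512_mul_pd(y, _mm512_fnmadd_pd(_mm512_set1_pd(0.5), ayy, _mm512_set1_pd(1.5)));
}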

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vrsqrt14ps.html b/x86/vrsqrt14ps.html new file mode 100644 index 0000000..5b7b66d --- /dev/null +++ b/x86/vrsqrt14ps.html @@ -0,0 +1,153 @@ + +VRSQRT14PS + — Compute Approximate Reciprocals of Square Roots of Packed Float32 Values

VRSQRT14PS + — Compute Approximate Reciprocals of Square Roots of Packed Float32 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 4E /r VRSQRT14PS xmm1 {k1}{z}, xmm2/m128/m32bcstAV/VAVX512VL AVX512FComputes the approximate reciprocal square roots of the packed single-precision floating-point values in xmm2/m128/m32bcst and stores the results in xmm1. Under writemask.
EVEX.256.66.0F38.W0 4E /r VRSQRT14PS ymm1 {k1}{z}, ymm2/m256/m32bcstAV/VAVX512VL AVX512FComputes the approximate reciprocal square roots of the packed single-precision floating-point values in ymm2/m256/m32bcst and stores the results in ymm1. Under writemask.
EVEX.512.66.0F38.W0 4E /r VRSQRT14PS zmm1 {k1}{z}, zmm2/m512/m32bcstAV/VAVX512FComputes the approximate reciprocal square roots of the packed single-precision floating-point values in zmm2/m512/m32bcst and stores the results in zmm1. Under writemask.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction performs a SIMD computation of the approximate reciprocals of the square roots of 16 packed single-precision floating-point values in the source operand (the second operand) and stores the packed single-precision floating-point results in the destination operand (the first operand) according to the writemask. The maximum relative error for this approximation is less than 2^-14.

+

EVEX.512 encoded version: The source operand can be a ZMM register, a 512-bit memory location or a 512-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM register, conditionally updated using writemask k1.

+

EVEX.256 encoded version: The source operand is a YMM register, a 256-bit memory location, or a 256-bit vector broadcasted from a 32-bit memory location. The destination operand is a YMM register, conditionally updated using writemask k1.

+

EVEX.128 encoded version: The source operand is a XMM register, a 128-bit memory location, or a 128-bit vector broadcasted from a 32-bit memory location. The destination operand is a XMM register, conditionally updated using writemask k1.

+

The VRSQRT14PS instruction is not affected by the rounding control bits in the MXCSR register. When a source value is a 0.0, an ∞ with the sign of the source value is returned. When the source operand is an +∞ then +ZERO value is returned. A denormal source value is treated as zero only if DAZ bit is set in MXCSR. Otherwise it is treated correctly and performs the approximation with the specified masked response. When a source value is a negative value (other than 0.0) a floating-point QNaN_indefinite is returned. When a source value is an SNaN or QNaN, the SNaN is converted to a QNaN or the source QNaN is returned.

+

MXCSR exception flags are not affected by this instruction and floating-point exceptions are not reported.

+

Note: EVEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

A numerically exact implementation of VRSQRT14xx can be found at https://software.intel.com/en-us/articles/reference-implementations-for-IA-approximation-instructions-vrcp14-vrsqrt14-vrcp28-vrsqrt28-vexp2.

+

Operation + ¶ +

+

VRSQRT14PS (EVEX encoded versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC *is memory*)
+                THEN DEST[i+31:i] := APPROXIMATE(1.0/ SQRT(SRC[31:0]));
+                ELSE DEST[i+31:i] := APPROXIMATE(1.0/ SQRT(SRC[i+31:i]));
+            FI;
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[i+31:i] remains unchanged*
+            ELSE
+                    ; zeroing-masking
+                DEST[i+31:i] := 0
+        FI;
+    FI;
+ENDFOR;
+DEST[MAXVL-1:VL] := 0
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Input value | Result value | Comments
Any denormal | Normal | Cannot generate overflow
X = 2^-2n | 2^n
X < 0 | QNaN_Indefinite | Including -INF
X = -0 | -INF
X = +0 | +INF
X = +INF | +0
+
Table 5-36. VRSQRT14PS Special Cases
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRSQRT14PS __m512 _mm512_rsqrt14_ps( __m512 a);
+
+
VRSQRT14PS __m512 _mm512_mask_rsqrt14_ps(__m512 s, __mmask16 k, __m512 a);
+
+
VRSQRT14PS __m512 _mm512_maskz_rsqrt14_ps( __mmask16 k, __m512 a);
+
+
VRSQRT14PS __m256 _mm256_rsqrt14_ps( __m256 a);
+
+
VRSQRT14PS __m256 _mm256_mask_rsqrt14_ps(__m256 s, __mmask8 k, __m256 a);
+
+
VRSQRT14PS __m256 _mm256_maskz_rsqrt14_ps( __mmask8 k, __m256 a);
+
+
VRSQRT14PS __m128 _mm_rsqrt14_ps( __m128 a);
+
+
VRSQRT14PS __m128 _mm_mask_rsqrt14_ps(__m128 s, __mmask8 k, __m128 a);
+
+
VRSQRT14PS __m128 _mm_maskz_rsqrt14_ps( __mmask8 k, __m128 a);
+
+
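A usage sketch (assuming an AVX-512F capable compiler, <immintrin.h>, and 16-element arrays in structure-of-arrays layout; all names are illustrative): the low-precision estimate is often sufficient on its own where roughly 14 accurate bits are acceptable, for example when normalizing vectors:

#include <immintrin.h>

static void normalize16(float *x, float *y, float *z)
{
    __m512 vx = _mm512_loadu_ps(x), vy = _mm512_loadu_ps(y), vz = _mm512_loadu_ps(z);
    /* squared length per lane: x*x + y*y + z*z */
    __m512 len2 = _mm512_fmadd_ps(vx, vx, _mm512_fmadd_ps(vy, vy, _mm512_mul_ps(vz, vz)));
    __m512 inv  = _mm512_rsqrt14_ps(len2);   /* approximate 1/sqrt(len2), relative error < 2^-14 */
    _mm512_storeu_ps(x, _mm512_mul_ps(vx, inv));
    _mm512_storeu_ps(y, _mm512_mul_ps(vy, inv));
    _mm512_storeu_ps(z, _mm512_mul_ps(vz, inv));
}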

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions.”

diff --git a/x86/vrsqrt14sd.html b/x86/vrsqrt14sd.html new file mode 100644 index 0000000..a1d075a --- /dev/null +++ b/x86/vrsqrt14sd.html @@ -0,0 +1,120 @@ + +VRSQRT14SD + — Compute Approximate Reciprocal of Square Root of Scalar Float64 Value

VRSQRT14SD + — Compute Approximate Reciprocal of Square Root of Scalar Float64 Value

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.0F38.W1 4F /r VRSQRT14SD xmm1 {k1}{z}, xmm2, xmm3/m64AV/VAVX512FComputes the approximate reciprocal square root of the scalar double precision floating-point value in xmm3/m64 and stores the result in the low quadword element of xmm1 using writemask k1. Bits[127:64] of xmm2 is copied to xmm1[127:64].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Computes the approximate reciprocal of the square root of the scalar double precision floating-point value in the low quadword element of the source operand (the second operand) and stores the result in the low quadword element of the destination operand (the first operand) according to the writemask. The maximum relative error for this approximation is less than 2^-14. The source operand can be an XMM register or a 64-bit memory location. The destination operand is an XMM register.

+

Bits (127:64) of the XMM register destination are copied from corresponding bits in the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.

+

The VRSQRT14SD instruction is not affected by the rounding control bits in the MXCSR register. When a source value is a 0.0, an ∞ with the sign of the source value is returned. When the source operand is an +∞ then +ZERO value is returned. A denormal source value is treated as zero only if DAZ bit is set in MXCSR. Otherwise it is treated correctly and performs the approximation with the specified masked response. When a source value is a negative value (other than 0.0) a floating-point QNaN_indefinite is returned. When a source value is an SNaN or QNaN, the SNaN is converted to a QNaN or the source QNaN is returned.

+

MXCSR exception flags are not affected by this instruction and floating-point exceptions are not reported.

+

A numerically exact implementation of VRSQRT14xx can be found at https://software.intel.com/en-us/articles/reference-implementations-for-IA-approximation-instructions-vrcp14-vrsqrt14-vrcp28-vrsqrt28-vexp2.

+

Operation + ¶ +

+

VRSQRT14SD (EVEX version) + ¶ +

+
IF k1[0] or *no writemask*
+    THEN DEST[63:0] := APPROXIMATE(1.0/ SQRT(SRC2[63:0]))
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE
+                    ; zeroing-masking
+                THEN DEST[63:0] := 0
+        FI;
+FI;
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Input value | Result value | Comments
Any denormal | Normal | Cannot generate overflow
X = 2^-2n | 2^n
X < 0 | QNaN_Indefinite | Including -INF
X = -0 | -INF
X = +0 | +INF
X = +INF | +0
+
Table 5-35. VRSQRT14SD Special Cases
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRSQRT14SD __m128d _mm_rsqrt14_sd( __m128d a, __m128d b);
+
+
VRSQRT14SD __m128d _mm_mask_rsqrt14_sd(__m128d s, __mmask8 k, __m128d a, __m128d b);
+
+
VRSQRT14SD __m128d _mm_maskz_rsqrt14_sd( __mmask8 k, __m128d a, __m128d b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-51, “Type E5 Class Exception Conditions.”

diff --git a/x86/vrsqrt14ss.html b/x86/vrsqrt14ss.html new file mode 100644 index 0000000..6ca8865 --- /dev/null +++ b/x86/vrsqrt14ss.html @@ -0,0 +1,120 @@ + +VRSQRT14SS + — Compute Approximate Reciprocal of Square Root of Scalar Float32 Value

VRSQRT14SS + — Compute Approximate Reciprocal of Square Root of Scalar Float32 Value

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.0F38.W0 4F /r VRSQRT14SS xmm1 {k1}{z}, xmm2, xmm3/m32AV/VAVX512FComputes the approximate reciprocal square root of the scalar single-precision floating-point value in xmm3/m32 and stores the result in the low doubleword element of xmm1 using writemask k1. Bits[127:32] of xmm2 is copied to xmm1[127:32].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Computes the approximate reciprocal of the square root of the scalar single-precision floating-point value in the low doubleword element of the source operand (the second operand) and stores the result in the low doubleword element of the destination operand (the first operand) according to the writemask. The maximum relative error for this approximation is less than 2^-14. The source operand can be an XMM register or a 32-bit memory location. The destination operand is an XMM register.

+

Bits (127:32) of the XMM register destination are copied from corresponding bits in the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.

+

The VRSQRT14SS instruction is not affected by the rounding control bits in the MXCSR register. When a source value is a 0.0, an ∞ with the sign of the source value is returned. When the source operand is an ∞, zero with the sign of the source value is returned. A denormal source value is treated as zero only if DAZ bit is set in MXCSR. Otherwise it is treated correctly and performs the approximation with the specified masked response. When a source value is a negative value (other than 0.0) a floating-point indefinite is returned. When a source value is an SNaN or QNaN, the SNaN is converted to a QNaN or the source QNaN is returned.

+

MXCSR exception flags are not affected by this instruction and floating-point exceptions are not reported.

+

A numerically exact implementation of VRSQRT14xx can be found at https://software.intel.com/en-us/articles/reference-implementations-for-IA-approximation-instructions-vrcp14-vrsqrt14-vrcp28-vrsqrt28-vexp2.

+

Operation + ¶ +

+

VRSQRT14SS (EVEX version) + ¶ +

+
IF k1[0] or *no writemask*
+    THEN DEST[31:0] := APPROXIMATE(1.0/ SQRT(SRC2[31:0]))
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[31:0] := 0
+        FI;
+FI;
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+
Input value | Result value | Comments
Any denormal | Normal | Cannot generate overflow
X = 2^-2n | 2^n |
X < 0 | QNaN_Indefinite | Including -INF
X = -0 | -INF |
X = +0 | +INF |
X = +INF | +0 |
+
Table 5-37. VRSQRT14SS Special Cases
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRSQRT14SS __m128 _mm_rsqrt14_ss( __m128 a, __m128 b);
+
+
VRSQRT14SS __m128 _mm_mask_rsqrt14_ss(__m128 s, __mmask8 k, __m128 a, __m128 b);
+
+
VRSQRT14SS __m128 _mm_maskz_rsqrt14_ss( __mmask8 k, __m128 a, __m128 b);
+
+
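A minimal usage sketch of the unmasked intrinsic form above, comparing the estimate against 1.0f/sqrtf() (assumes an AVX-512F target; the relative error should stay below the 2^-14 bound stated in the Description):

#include <immintrin.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    __m128 x = _mm_set_ss(3.0f);
    __m128 y = _mm_rsqrt14_ss(x, x);          /* low lane: ~1/sqrt(3) */
    float approx = _mm_cvtss_f32(y);
    float exact  = 1.0f / sqrtf(3.0f);
    printf("approx=%.8f exact=%.8f rel.err=%.2e\n",
           approx, exact, fabsf(approx - exact) / exact);
    return 0;
}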

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-51, “Type E5 Class Exception Conditions.”

diff --git a/x86/vrsqrt28pd.html b/x86/vrsqrt28pd.html new file mode 100644 index 0000000..fe1419f --- /dev/null +++ b/x86/vrsqrt28pd.html @@ -0,0 +1,125 @@ + +VRSQRT28PD + — Approximation to the Reciprocal Square Root of Packed Double PrecisionFloating-Point Values With Less Than 2^-28 Relative Error

VRSQRT28PD + — Approximation to the Reciprocal Square Root of Packed Double Precision Floating-Point Values With Less Than 2^-28 Relative Error

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.512.66.0F38.W1 CC /r VRSQRT28PD zmm1 {k1}{z}, zmm2/m512/m64bcst {sae}AV/VAVX512ERComputes approximations to the Reciprocal square root (<2^-28 relative error) of the packed double precision floating-point values from zmm2/m512/m64bcst and stores result in zmm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/En Tuple Type Operand 1 Operand 2 Operand 3 Operand 4
AFull ModRM:reg (w) ModRM:r/m (r) N/A N/A
+

Description + ¶ +

+

Computes the reciprocal square root of the float64 values in the source operand (the second operand) and stores the results in the destination operand (the first operand). The approximate reciprocal square root is evaluated with a maximum relative error of less than 2^-28.

+

If any source element is NaN, the quietized NaN source value is returned for that element. Negative (non-zero) source numbers, as well as -∞, return the canonical NaN and set the Invalid Flag (#I).

+

A value of -0 must return -∞ and set the DivByZero flag (#Z). Negative numbers should return NaN and set the Invalid flag (#I). Note, however, that the instruction flushes input denormals to zero of the same sign, so negative denormals return -∞ and set the DivByZero flag.

+
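The special-case rules above can be summarized by a scalar reference model; this is only an illustrative sketch (the helper name is made up, and it does not reproduce the 2^-28 approximation itself, only the special-case results and the flags that would be raised):

#include <math.h>

/* Illustrative scalar model of the VRSQRT28PD special cases. */
static double rsqrt28_special_cases(double x)
{
    if (isnan(x))
        return x + x;                       /* NaN in -> quieted NaN out */
    if (x == 0.0 || fpclassify(x) == FP_SUBNORMAL)
        return copysign(INFINITY, x);       /* +/-0 and denormals -> signed INF, #Z */
    if (x < 0.0)
        return NAN;                         /* negative (incl. -INF) -> QNaN, #I */
    if (isinf(x))
        return +0.0;                        /* +INF -> +0 */
    return 1.0 / sqrt(x);                   /* normal case (exact here, not 2^-28) */
}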

The source operand is a ZMM register, a 512-bit memory location or a 512-bit vector broadcasted from a 64-bit memory location. The destination operand is a ZMM register, conditionally updated using writemask k1.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

A numerically exact implementation of VRSQRT28xx can be found at https://software.intel.com/en-us/articles/reference-implementations-for-IA-approximation-instructions-vrcp14-vrsqrt14-vrcp28-vrsqrt28-vexp2.

+

Operation + ¶ +

+

VRSQRT28PD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC *is memory*)
+                THEN DEST[i+63:i] := (1.0/ SQRT(SRC[63:0]));
+                ELSE DEST[i+63:i] := (1.0/ SQRT(SRC[i+63:i]));
+            FI;
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[i+63:i] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[i+63:i] := 0
+        FI;
+    FI;
+ENDFOR;
+
+
Input Value | Result Value | Comments
NaN | QNaN(input) | If (SRC = SNaN) then #I
X = 2^-2n | 2^n |
X < 0 | QNaN_Indefinite | Including -INF
X = -0 or negative denormal | -INF | #Z
X = +0 or positive denormal | +INF | #Z
X = +INF | +0 |
+
Table 6-50. VRSQRT28PD Special Cases
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRSQRT28PD __m512d _mm512_rsqrt28_round_pd(__m512d a, int sae);
+
+
VRSQRT28PD __m512d _mm512_mask_rsqrt28_round_pd(__m512d s, __mmask8 m,__m512d a, int sae);
+
+
VRSQRT28PD __m512d _mm512_maskz_rsqrt28_round_pd(__mmask8 m,__m512d a, int sae);
+
+
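A short usage sketch of the masked intrinsic form listed above (assumes an AVX512ER target, which in practice means the Xeon Phi x200 family; _MM_FROUND_CUR_DIRECTION keeps the current MXCSR state rather than suppressing exceptions):

#include <immintrin.h>

/* Approximate 1/sqrt(x) for 8 doubles; lanes whose mask bit is 0 keep
   their value from src (merging-masking), matching the Operation above. */
static __m512d rsqrt28_masked(__m512d src, __mmask8 k, __m512d x)
{
    return _mm512_mask_rsqrt28_round_pd(src, k, x, _MM_FROUND_CUR_DIRECTION);
}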

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (if SNaN input), Divide-by-zero.

+

Other Exceptions + ¶ +

+

See Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vrsqrt28ps.html b/x86/vrsqrt28ps.html new file mode 100644 index 0000000..adc2331 --- /dev/null +++ b/x86/vrsqrt28ps.html @@ -0,0 +1,125 @@ + +VRSQRT28PS + — Approximation to the Reciprocal Square Root of Packed Single PrecisionFloating-Point Values With Less Than 2^-28 Relative Error

VRSQRT28PS + — Approximation to the Reciprocal Square Root of Packed Single Precision Floating-Point Values With Less Than 2^-28 Relative Error

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.512.66.0F38.W0 CC /r VRSQRT28PS zmm1 {k1}{z}, zmm2/m512/m32bcst {sae}AV/VAVX512ERComputes approximations to the Reciprocal square root (<2^-28 relative error) of the packed single-precision floating-point values from zmm2/m512/m32bcst and stores result in zmm1 with writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/En Tuple Type Operand 1 Operand 2 Operand 3 Operand 4
AFull ModRM:reg (w) ModRM:r/m (r) N/A N/A
+

Description + ¶ +

+

Computes the reciprocal square root of the float32 values in the source operand (the second operand) and stores the results in the destination operand (the first operand). The approximate reciprocal square root is evaluated with a maximum relative error of less than 2^-28 prior to final rounding. The final result is rounded to less than 2^-23 relative error before being written to the destination.

+

If any source element is NaN, the quietized NaN source value is returned for that element. Negative (non-zero) source numbers, as well as -∞, return the canonical NaN and set the Invalid Flag (#I).

+

A value of -0 must return -∞ and set the DivByZero flag (#Z). Negative numbers should return NaN and set the Invalid flag (#I). Note, however, that the instruction flushes input denormals to zero of the same sign, so negative denormals return -∞ and set the DivByZero flag.

+

The source operand is a ZMM register, a 512-bit memory location, or a 512-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM register, conditionally updated using writemask k1.

+

EVEX.vvvv is reserved and must be 1111b otherwise instructions will #UD.

+

A numerically exact implementation of VRSQRT28xx can be found at https://software.intel.com/en-us/articles/reference-implementations-for-IA-approximation-instructions-vrcp14-vrsqrt14-vrcp28-vrsqrt28-vexp2.

+

Operation + ¶ +

+

VRSQRT28PS (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC *is memory*)
+                THEN DEST[i+31:i] := (1.0/ SQRT(SRC[31:0]));
+                ELSE DEST[i+31:i] := (1.0/ SQRT(SRC[i+31:i]));
+            FI;
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[i+31:i] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[i+31:i] := 0
+        FI;
+    FI;
+ENDFOR;
+
+
Input Value | Result Value | Comments
NaN | QNaN(input) | If (SRC = SNaN) then #I
X = 2^-2n | 2^n |
X < 0 | QNaN_Indefinite | Including -INF
X = -0 or negative denormal | -INF | #Z
X = +0 or positive denormal | +INF | #Z
X = +INF | +0 |
+
Table 6-52. VRSQRT28PS Special Cases
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRSQRT28PS __m512 _mm512_rsqrt28_round_ps(__m512 a, int sae);
+
+
VRSQRT28PS __m512 _mm512_mask_rsqrt28_round_ps(__m512 s, __mmask16 m,__m512 a, int sae);
+
+
VRSQRT28PS __m512 _mm512_maskz_rsqrt28_round_ps(__mmask16 m,__m512 a, int sae);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (if SNaN input), Divide-by-zero.

+

Other Exceptions + ¶ +

+

See Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vrsqrt28sd.html b/x86/vrsqrt28sd.html new file mode 100644 index 0000000..2f41ad7 --- /dev/null +++ b/x86/vrsqrt28sd.html @@ -0,0 +1,120 @@ + +VRSQRT28SD + — Approximation to the Reciprocal Square Root of Scalar Double PrecisionFloating-Point Value With Less Than 2^-28 Relative Error

VRSQRT28SD + — Approximation to the Reciprocal Square Root of Scalar Double Precision Floating-Point Value With Less Than 2^-28 Relative Error

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.0F38.W1 CD /r VRSQRT28SD xmm1 {k1}{z}, xmm2, xmm3/m64 {sae}AV/VAVX512ERComputes approximate reciprocal square root (<2^-28 relative error) of the scalar double precision floating-point value from xmm3/m64 and stores result in xmm1 with writemask k1. Also, the upper double precision floating-point value (bits[127:64]) from xmm2 is copied to xmm1[127:64].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/En Tuple Type Operand 1 Operand 2 Operand 3 Operand 4
A Tuple1 Scalar ModRM:reg (w) EVEX.vvvv (r) ModRM:r/m (r) N/A
+

Description + ¶ +

+

Computes the reciprocal square root of the low float64 value in the second source operand (the third operand) and stores the result in the destination operand (the first operand). The approximate reciprocal square root is evaluated with a maximum relative error of less than 2^-28. The result is written into the low float64 element of xmm1 according to the writemask k1. Bits 127:64 of the destination are copied from the corresponding bits of the first source operand (the second operand).

+

If any source element is NaN, the quietized NaN source value is returned for that element. Negative (non-zero) source numbers, as well as -∞, return the canonical NaN and set the Invalid Flag (#I).

+

A value of -0 must return -∞ and set the DivByZero flag (#Z). Negative numbers should return NaN and set the Invalid flag (#I). Note, however, that the instruction flushes input denormals to zero of the same sign, so negative denormals return -∞ and set the DivByZero flag.

+

The first source operand is an XMM register. The second source operand is an XMM register or a 64-bit memory location. The destination operand is a XMM register.

+

A numerically exact implementation of VRSQRT28xx can be found at https://software.intel.com/en-us/articles/reference-implementations-for-IA-approximation-instructions-vrcp14-vrsqrt14-vrcp28-vrsqrt28-vexp2.

+

Operation + ¶ +

+

VRSQRT28SD (EVEX Encoded Versions) + ¶ +

+
    IF k1[0] OR *no writemask* THEN
+                DEST[63: 0] := (1.0/ SQRT(SRC[63: 0]));
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63: 0] remains unchanged*
+            ELSE ; zeroing-masking
+                    DEST[63: 0] := 0
+        FI;
+    FI;
+DEST[127:64] := SRC1[127: 64]
+DEST[MAXVL-1:128] := 0
+
+
Input Value | Result Value | Comments
NaN | QNaN(input) | If (SRC = SNaN) then #I
X = 2^-2n | 2^n |
X < 0 | QNaN_Indefinite | Including -INF
X = -0 or negative denormal | -INF | #Z
X = +0 or positive denormal | +INF | #Z
X = +INF | +0 |
+
Table 6-51. VRSQRT28SD Special Cases
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRSQRT28SD __m128d _mm_rsqrt28_round_sd(__m128d a, __m128d b, int rounding);
+
+
VRSQRT28SD __m128d _mm_mask_rsqrt28_round_sd(__m128d s, __mmask8 m,__m128d a, __m128d b, int rounding);
+
+
VRSQRT28SD __m128d _mm_maskz_rsqrt28_round_sd( __mmask8 m,__m128d a, __m128d b, int rounding);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (if SNaN input), Divide-by-zero.

+

Other Exceptions + ¶ +

+

See Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vrsqrt28ss.html b/x86/vrsqrt28ss.html new file mode 100644 index 0000000..21affeb --- /dev/null +++ b/x86/vrsqrt28ss.html @@ -0,0 +1,120 @@ + +VRSQRT28SS + — Approximation to the Reciprocal Square Root of Scalar Single Precision Floating-Point Value With Less Than 2^-28 Relative Error

VRSQRT28SS + — Approximation to the Reciprocal Square Root of Scalar Single Precision Floating-Point Value With Less Than 2^-28 Relative Error

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.0F38.W0 CD /r VRSQRT28SS xmm1 {k1}{z}, xmm2, xmm3/m32 {sae}AV/VAVX512ERComputes approximate reciprocal square root (<2^-28 relative error) of the scalar single-precision floating-point value from xmm3/m32 and stores result in xmm1 with writemask k1. Also, the upper three single-precision floating-point values (bits[127:32]) from xmm2 are copied to xmm1[127:32].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/En Tuple Type Operand 1 Operand 2 Operand 3 Operand 4
A Tuple1 Scalar ModRM:reg (w) EVEX.vvvv (r) ModRM:r/m (r) N/A
+

Description + ¶ +

+

Computes the reciprocal square root of the low float32 value in the second source operand (the third operand) and stores the result in the destination operand (the first operand). The approximate reciprocal square root is evaluated with a maximum relative error of less than 2^-28 prior to final rounding. The final result is rounded to less than 2^-23 relative error before being written to the low float32 element of the destination according to the writemask k1. Bits 127:32 of the destination are copied from the corresponding bits of the first source operand (the second operand).

+

If any source element is NaN, the quietized NaN source value is returned for that element. Negative (non-zero) source numbers, as well as -∞, return the canonical NaN and set the Invalid Flag (#I).

+

A value of -0 must return -∞ and set the DivByZero flag (#Z). Negative numbers should return NaN and set the Invalid flag (#I). Note, however, that the instruction flushes input denormals to zero of the same sign, so negative denormals return -∞ and set the DivByZero flag.

+

The first source operand is an XMM register. The second source operand is an XMM register or a 32-bit memory location. The destination operand is a XMM register.

+

A numerically exact implementation of VRSQRT28xx can be found at https://software.intel.com/en-us/articles/reference-implementations-for-IA-approximation-instructions-vrcp14-vrsqrt14-vrcp28-vrsqrt28-vexp2.

+

Operation + ¶ +

+

VRSQRT28SS (EVEX Encoded Versions) + ¶ +

+
    IF k1[0] OR *no writemask* THEN
+                DEST[31: 0] := (1.0/ SQRT(SRC[31: 0]));
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31: 0] remains unchanged*
+            ELSE ; zeroing-masking
+                    DEST[31: 0] := 0
+        FI;
+    FI;
+DEST[127:32] := SRC1[127: 32]
+DEST[MAXVL-1:128] := 0
+
+
Input Value | Result Value | Comments
NaN | QNaN(input) | If (SRC = SNaN) then #I
X = 2^-2n | 2^n |
X < 0 | QNaN_Indefinite | Including -INF
X = -0 or negative denormal | -INF | #Z
X = +0 or positive denormal | +INF | #Z
X = +INF | +0 |
+
Table 6-53. VRSQRT28SS Special Cases
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRSQRT28SS __m128 _mm_rsqrt28_round_ss(__m128 a, __m128 b, int rounding);
+
+
VRSQRT28SS __m128 _mm_mask_rsqrt28_round_ss(__m128 s, __mmask8 m,__m128 a,__m128 b, int rounding);
+
+
VRSQRT28SS __m128 _mm_maskz_rsqrt28_round_ss(__mmask8 m,__m128 a,__m128 b, int rounding);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid (if SNaN input), Divide-by-zero.

+

Other Exceptions + ¶ +

+

See Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vrsqrtph.html b/x86/vrsqrtph.html new file mode 100644 index 0000000..11ccb77 --- /dev/null +++ b/x86/vrsqrtph.html @@ -0,0 +1,140 @@ + +VRSQRTPH + — Compute Reciprocals of Square Roots of Packed FP16 Values

VRSQRTPH + — Compute Reciprocals of Square Roots of Packed FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.MAP6.W0 4E /r VRSQRTPH xmm1{k1}{z}, xmm2/m128/m16bcstAV/VAVX512-FP16 AVX512VLCompute the approximate reciprocals of the square roots of packed FP16 values in xmm2/m128/m16bcst and store the result in xmm1 subject to writemask k1.
EVEX.256.66.MAP6.W0 4E /r VRSQRTPH ymm1{k1}{z}, ymm2/m256/m16bcstAV/VAVX512-FP16 AVX512VLCompute the approximate reciprocals of the square roots of packed FP16 values in ymm2/m256/m16bcst and store the result in ymm1 subject to writemask k1.
EVEX.512.66.MAP6.W0 4E /r VRSQRTPH zmm1{k1}{z}, zmm2/m512/m16bcstAV/VAVX512-FP16Compute the approximate reciprocals of the square roots of packed FP16 values in zmm2/m512/m16bcst and store the result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction performs a SIMD computation of the approximate reciprocal square roots of the 8/16/32 packed FP16 floating-point values in the source operand (the second operand) and stores the packed FP16 floating-point results in the destination operand.

+

The maximum relative error for this approximation is less than 2^-11 + 2^-14. For special cases, see Table 5-38.

+

The destination elements are updated according to the writemask.

+
Input value | Result value | Comments
Any denormal | Normal | Cannot generate overflow
X = 2^-2n | 2^n |
X < 0 | QNaN_Indefinite | Including −∞
X = −0 | −∞ |
X = +0 | +∞ |
X = +∞ | +0 |
+
Table 5-38. VRSQRTPH/VRSQRTSH Special Cases
+

Operation + ¶ +

+

VRSQRTPH dest{k1}, src + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR i := 0 to KL-1:
+    IF k1[i] or *no writemask*:
+        IF SRC is memory and (EVEX.b = 1):
+            tsrc := src.fp16[0]
+        ELSE:
+            tsrc := src.fp16[i]
+        DEST.fp16[i] := APPROXIMATE(1.0 / SQRT(tsrc) )
+    ELSE IF *zeroing*:
+        DEST.fp16[i] := 0
+    //else DEST.fp16[i] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRSQRTPH __m128h _mm_mask_rsqrt_ph (__m128h src, __mmask8 k, __m128h a);
+
+
VRSQRTPH __m128h _mm_maskz_rsqrt_ph (__mmask8 k, __m128h a);
+
+
VRSQRTPH __m128h _mm_rsqrt_ph (__m128h a);
+
+
VRSQRTPH __m256h _mm256_mask_rsqrt_ph (__m256h src, __mmask16 k, __m256h a);
+
+
VRSQRTPH __m256h _mm256_maskz_rsqrt_ph (__mmask16 k, __m256h a);
+
+
VRSQRTPH __m256h _mm256_rsqrt_ph (__m256h a);
+
+
VRSQRTPH __m512h _mm512_mask_rsqrt_ph (__m512h src, __mmask32 k, __m512h a);
+
+
VRSQRTPH __m512h _mm512_maskz_rsqrt_ph (__mmask32 k, __m512h a);
+
+
VRSQRTPH __m512h _mm512_rsqrt_ph (__m512h a);
+
+
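A short usage sketch of the masked 512-bit form listed above (assumes an AVX512-FP16 target and a compiler with __m512h support):

#include <immintrin.h>

/* Approximate 1/sqrt(x) for 32 FP16 lanes; lanes whose mask bit is 0
   keep their value from src (merging form of the writemask). */
static __m512h rsqrt_ph_masked(__m512h src, __mmask32 k, __m512h x)
{
    return _mm512_mask_rsqrt_ph(src, k, x);
}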

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/vrsqrtsh.html b/x86/vrsqrtsh.html new file mode 100644 index 0000000..8de55ab --- /dev/null +++ b/x86/vrsqrtsh.html @@ -0,0 +1,82 @@ + +VRSQRTSH + — Compute Approximate Reciprocal of Square Root of Scalar FP16 Value

VRSQRTSH + — Compute Approximate Reciprocal of Square Root of Scalar FP16 Value

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.MAP6.W0 4F /r VRSQRTSH xmm1{k1}{z}, xmm2, xmm3/m16AV/VAVX512-FP16Compute the approximate reciprocal square root of the FP16 value in xmm3/m16 and store the result in the low word element of xmm1 subject to writemask k1. Bits 127:16 of xmm2 are copied to xmm1[127:16].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction performs the computation of the approximate reciprocal square-root of the low FP16 value in the second source operand (the third operand) and stores the result in the low word element of the destination operand (the first operand) according to the writemask k1.

+

The maximum relative error for this approximation is less than 2^-11 + 2^-14.

+

Bits 127:16 of the destination operand are copied from the corresponding bits of the first source operand. Bits MAXVL−1:128 of the destination operand are zeroed.

+

For special cases, see Table 5-38.

+

Operation + ¶ +

+

VRSQRTSH dest{k1}, src1, src2 + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+IF k1[0] or *no writemask*:
+    DEST.fp16[0] := APPROXIMATE(1.0 / SQRT(src2.fp16[0]))
+ELSE IF *zeroing*:
+    DEST.fp16[0] := 0
+//else DEST.fp16[0] remains unchanged
+DEST[127:16] := src1[127:16]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VRSQRTSH __m128h _mm_mask_rsqrt_sh (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VRSQRTSH __m128h _mm_maskz_rsqrt_sh (__mmask8 k, __m128h a, __m128h b);
+
+
VRSQRTSH __m128h _mm_rsqrt_sh (__m128h a, __m128h b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Table 2-58, “Type E10 Class Exception Conditions.”

diff --git a/x86/vscalefpd.html b/x86/vscalefpd.html new file mode 100644 index 0000000..0b5c22b --- /dev/null +++ b/x86/vscalefpd.html @@ -0,0 +1,210 @@ + +VSCALEFPD + — Scale Packed Float64 Values With Float64 Values

VSCALEFPD + — Scale Packed Float64 Values With Float64 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W1 2C /r VSCALEFPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstAV/VAVX512VL AVX512FScale the packed double precision floating-point values in xmm2 using values from xmm3/m128/m64bcst. Under writemask k1.
EVEX.256.66.0F38.W1 2C /r VSCALEFPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstAV/VAVX512VL AVX512FScale the packed double precision floating-point values in ymm2 using values from ymm3/m256/m64bcst. Under writemask k1.
EVEX.512.66.0F38.W1 2C /r VSCALEFPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er}AV/VAVX512FScale the packed double precision floating-point values in zmm2 using values from zmm3/m512/m64bcst. Under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a floating-point scale of the packed double precision floating-point values in the first source operand by multiplying them by 2 to the power of the double precision floating-point values in second source operand.

+

The equation of this operation is given by:

+

zmm1 := zmm2 * 2^floor(zmm3).

+

Floor(zmm3) means maximum integer value ≤ zmm3.

+
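Ignoring masking and the special-case tables below, the per-element computation can be modelled in scalar C with floor() and ldexp(); a minimal sketch (assumes floor(src2) fits in an int):

#include <math.h>

/* Scalar model of one VSCALEFPD element: dst = src1 * 2^floor(src2).
   NaN/Inf/zero inputs follow Table 5-39 in hardware and are not modelled here. */
static double scalef_model(double src1, double src2)
{
    return ldexp(src1, (int)floor(src2));
}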

If the result cannot be represented in double precision, then the proper overflow response (for positive scaling operand), or the proper underflow response (for negative scaling operand) is issued. The overflow and underflow responses are dependent on the rounding mode (for IEEE-compliant rounding), as well as on other settings in MXCSR (exception mask bits, FTZ bit), and on the SAE bit.

+

The first source operand is a ZMM/YMM/XMM register. The second source operand is a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.

+

Handling of special-case input values are listed in Table 5-39 and Table 5-40.

+
Src1 \ Src2 | ±NaN | +Inf | -Inf | 0/Denorm/Norm | Set IE
±QNaN | QNaN(Src1) | +INF | +0 | QNaN(Src1) | IF either source is SNaN
±SNaN | QNaN(Src1) | QNaN(Src1) | QNaN(Src1) | QNaN(Src1) | YES
±Inf | QNaN(Src2) | Src1 | QNaN_Indefinite | Src1 | IF Src2 is SNaN or -INF
±0 | QNaN(Src2) | QNaN_Indefinite | Src1 | Src1 | IF Src2 is SNaN or +INF
Denorm/Norm | QNaN(Src2) | ±INF (Src1 sign) | ±0 (Src1 sign) | Compute Result | IF Src2 is SNaN
+
Table 5-39. VSCALEFPD/SD/PS/SS Special Cases
+
Special Case | Returned value | Faults
|result| < 2^-1074 | ±0 or ±Min-Denormal (Src1 sign) | Underflow
|result| ≥ 2^1024 | ±INF (Src1 sign) or ±Max-normal (Src1 sign) | Overflow
+
Table 5-40. Additional VSCALEFPD/SD Special Cases
+

Operation + ¶ +

+
SCALE(SRC1, SRC2)
+{
+TMP_SRC2 := SRC2
+TMP_SRC1 := SRC1
+IF (SRC2 is denormal AND MXCSR.DAZ) THEN TMP_SRC2=0
+IF (SRC1 is denormal AND MXCSR.DAZ) THEN TMP_SRC1=0
+/* SRC2 is a 64 bits floating-point value */
+DEST[63:0] := TMP_SRC1[63:0] * POW(2, Floor(TMP_SRC2[63:0]))
+}
+
+

VSCALEFPD (EVEX encoded versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+IF (VL = 512) AND (EVEX.b = 1) AND (SRC2 *is register*)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[i+63:i] := SCALE(SRC1[i+63:i], SRC2[63:0]);
+                ELSE DEST[i+63:i] := SCALE(SRC1[i+63:i], SRC2[i+63:i]);
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE
+                        ; zeroing-masking
+                    DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VSCALEFPD __m512d _mm512_scalef_round_pd(__m512d a, __m512d b, int rounding);
+
+
VSCALEFPD __m512d _mm512_mask_scalef_round_pd(__m512d s, __mmask8 k, __m512d a, __m512d b, int rounding);
+
+
VSCALEFPD __m512d _mm512_maskz_scalef_round_pd(__mmask8 k, __m512d a, __m512d b, int rounding);
+
+
VSCALEFPD __m512d _mm512_scalef_pd(__m512d a, __m512d b);
+
+
VSCALEFPD __m512d _mm512_mask_scalef_pd(__m512d s, __mmask8 k, __m512d a, __m512d b);
+
+
VSCALEFPD __m512d _mm512_maskz_scalef_pd(__mmask8 k, __m512d a, __m512d b);
+
+
VSCALEFPD __m256d _mm256_scalef_pd(__m256d a, __m256d b);
+
+
VSCALEFPD __m256d _mm256_mask_scalef_pd(__m256d s, __mmask8 k, __m256d a, __m256d b);
+
+
VSCALEFPD __m256d _mm256_maskz_scalef_pd(__mmask8 k, __m256d a, __m256d b);
+
+
VSCALEFPD __m128d _mm_scalef_pd(__m128d a, __m128d b);
+
+
VSCALEFPD __m128d _mm_mask_scalef_pd(__m128d s, __mmask8 k, __m128d a, __m128d b);
+
+
VSCALEFPD __m128d _mm_maskz_scalef_pd(__mmask8 k, __m128d a, __m128d b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal (for Src1).

+

Denormal is not reported for Src2.

+

Other Exceptions + ¶ +

+

See Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vscalefph.html b/x86/vscalefph.html new file mode 100644 index 0000000..e92d774 --- /dev/null +++ b/x86/vscalefph.html @@ -0,0 +1,190 @@ + +VSCALEFPH + — Scale Packed FP16 Values with FP16 Values

VSCALEFPH + — Scale Packed FP16 Values with FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.MAP6.W0 2C /r VSCALEFPH xmm1{k1}{z}, xmm2, xmm3/m128/m16bcstAV/VAVX512-FP16 AVX512VLScale the packed FP16 values in xmm2 using values from xmm3/m128/m16bcst, and store the result in xmm1 subject to writemask k1.
EVEX.256.66.MAP6.W0 2C /r VSCALEFPH ymm1{k1}{z}, ymm2, ymm3/m256/m16bcstAV/VAVX512-FP16 AVX512VLScale the packed FP16 values in ymm2 using values from ymm3/m256/m16bcst, and store the result in ymm1 subject to writemask k1.
EVEX.512.66.MAP6.W0 2C /r VSCALEFPH zmm1{k1}{z}, zmm2, zmm3/m512/m16bcst {er}AV/VAVX512-FP16Scale the packed FP16 values in zmm2 using values from zmm3/m512/m16bcst, and store the result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction performs a floating-point scale of the packed FP16 values in the first source operand by multiplying it by 2 to the power of the FP16 values in second source operand. The destination elements are updated according to the writemask.

+

The equation of this operation is given by:

+

zmm1 := zmm2 * 2^floor(zmm3).

+

Floor(zmm3) means maximum integer value ≤ zmm3.

+

If the result cannot be represented in FP16, then the proper overflow response (for positive scaling operand), or the proper underflow response (for negative scaling operand), is issued. The overflow and underflow responses are dependent on the rounding mode (for IEEE-compliant rounding), as well as on other settings in MXCSR (exception mask bits), and on the SAE bit.

+

Handling of special-case input values are listed in Table 5-41 and Table 5-42.

+
Src1 \ Src2 | ±NaN | +INF | −INF | 0/Denorm/Norm | Set IE
±QNaN | QNaN(Src1) | +INF | +0 | QNaN(Src1) | IF either source is SNaN
±SNaN | QNaN(Src1) | QNaN(Src1) | QNaN(Src1) | QNaN(Src1) | YES
±INF | QNaN(Src2) | Src1 | QNaN_Indefinite | Src1 | IF Src2 is SNaN or −INF
±0 | QNaN(Src2) | QNaN_Indefinite | Src1 | Src1 | IF Src2 is SNaN or +INF
Denorm/Norm | QNaN(Src2) | ±INF (Src1 sign) | ±0 (Src1 sign) | Compute Result | IF Src2 is SNaN
+
Table 5-41. VSCALEFPH/VSCALEFSH Special Cases
+
Special Case | Returned Value | Faults
|result| < 2^-24 | ±0 or ±Min-Denormal (Src1 sign) | Underflow
|result| ≥ 2^16 | ±INF (Src1 sign) or ±Max-normal (Src1 sign) | Overflow
+
Table 5-42. Additional VSCALEFPH/VSCALEFSH Special Cases
+

Operation + ¶ +

+
def scale_fp16(src1,src2):
+    tmp1 := src1
+    tmp2 := src2
+    return tmp1 * POW(2, FLOOR(tmp2))
+
+

VSCALEFPH dest{k1}, src1, src2 + ¶ +

+
VL = 128, 256, or 512
+KL := VL / 16
+IF (VL = 512) AND (EVEX.b = 1) and no memory operand:
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+FOR i := 0 to KL-1:
+    IF k1[i] or *no writemask*:
+        IF SRC2 is memory and (EVEX.b = 1):
+            tsrc := src2.fp16[0]
+        ELSE:
+            tsrc := src2.fp16[i]
+        dest.fp16[i] := scale_fp16(src1.fp16[i],tsrc)
+    ELSE IF *zeroing*:
+        dest.fp16[i] := 0
+    //else dest.fp16[i] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VSCALEFPH __m128h _mm_mask_scalef_ph (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VSCALEFPH __m128h _mm_maskz_scalef_ph (__mmask8 k, __m128h a, __m128h b);
+
+
VSCALEFPH __m128h _mm_scalef_ph (__m128h a, __m128h b);
+
+
VSCALEFPH __m256h _mm256_mask_scalef_ph (__m256h src, __mmask16 k, __m256h a, __m256h b);
+
+
VSCALEFPH __m256h _mm256_maskz_scalef_ph (__mmask16 k, __m256h a, __m256h b);
+
+
VSCALEFPH __m256h _mm256_scalef_ph (__m256h a, __m256h b);
+
+
VSCALEFPH __m512h _mm512_mask_scalef_ph (__m512h src, __mmask32 k, __m512h a, __m512h b);
+
+
VSCALEFPH __m512h _mm512_maskz_scalef_ph (__mmask32 k, __m512h a, __m512h b);
+
+
VSCALEFPH __m512h _mm512_scalef_ph (__m512h a, __m512h b);
+
+
VSCALEFPH __m512h _mm512_mask_scalef_round_ph (__m512h src, __mmask32 k, __m512h a, __m512h b, const int rounding);
+
+
VSCALEFPH __m512h _mm512_maskz_scalef_round_ph (__mmask32 k, __m512h a, __m512h b, const int rounding);
+
+
VSCALEFPH __m512h _mm512_scalef_round_ph (__m512h a, __m512h b, const int rounding);
+
+
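A short usage sketch of the unmasked packed form listed above (assumes an AVX512-FP16 target and a compiler with __m512h support; passing a vector of 1.0 in every lane of x turns this into a per-lane 2^floor(n)):

#include <immintrin.h>

/* Per lane: x * 2^floor(n), i.e., the VSCALEFPH computation described above. */
static __m512h scalef_ph(__m512h x, __m512h n)
{
    return _mm512_scalef_ph(x, n);
}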

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Overflow, Precision, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Table 2-46, “Type E2 Class Exception Conditions”.

+

Denormal-operand exception (#D) is checked and signaled for src1 operand, but not for src2 operand. The denormal-operand exception is checked for src1 operand only if the src2 operand is not NaN. If the src2 operand is NaN, the processor generates NaN and does not signal denormal-operand exception, even if src1 operand is denormal.

diff --git a/x86/vscalefps.html b/x86/vscalefps.html new file mode 100644 index 0000000..43d0489 --- /dev/null +++ b/x86/vscalefps.html @@ -0,0 +1,155 @@ + +VSCALEFPS + — Scale Packed Float32 Values With Float32 Values

VSCALEFPS + — Scale Packed Float32 Values With Float32 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 2C /r VSCALEFPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstAV/VAVX512VL AVX512FScale the packed single-precision floating-point values in xmm2 using values from xmm3/m128/m32bcst. Under writemask k1.
EVEX.256.66.0F38.W0 2C /r VSCALEFPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstAV/VAVX512VL AVX512FScale the packed single-precision values in ymm2 using floating-point values from ymm3/m256/m32bcst. Under writemask k1.
EVEX.512.66.0F38.W0 2C /r VSCALEFPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{er}AV/VAVX512FScale the packed single-precision floating-point values in zmm2 using floating-point values from zmm3/m512/m32bcst. Under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a floating-point scale of the packed single-precision floating-point values in the first source operand by multiplying them by 2 to the power of the float32 values in second source operand.

+

The equation of this operation is given by:

+

zmm1 := zmm2 * 2^floor(zmm3).

+

Floor(zmm3) means maximum integer value ≤ zmm3.

+

If the result cannot be represented in single-precision, then the proper overflow response (for positive scaling operand), or the proper underflow response (for negative scaling operand) is issued. The overflow and underflow responses are dependent on the rounding mode (for IEEE-compliant rounding), as well as on other settings in MXCSR (exception mask bits, FTZ bit), and on the SAE bit.

+

EVEX.512 encoded version: The first source operand is a ZMM register. The second source operand is a ZMM register, a 512-bit memory location or a 512-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM register conditionally updated with writemask k1.

+

EVEX.256 encoded version: The first source operand is a YMM register. The second source operand is a YMM register, a 256-bit memory location, or a 256-bit vector broadcasted from a 32-bit memory location. The destination operand is a YMM register, conditionally updated using writemask k1.

+

EVEX.128 encoded version: The first source operand is an XMM register. The second source operand is a XMM register, a 128-bit memory location, or a 128-bit vector broadcasted from a 32-bit memory location. The destination operand is a XMM register, conditionally updated using writemask k1.

+

Handling of special-case input values are listed in Table 5-39 and Table 5-43.

+
Special Case | Returned value | Faults
|result| < 2^-149 | ±0 or ±Min-Denormal (Src1 sign) | Underflow
|result| ≥ 2^128 | ±INF (Src1 sign) or ±Max-normal (Src1 sign) | Overflow
+
Table 5-43. Additional VSCALEFPS/SS Special Cases
+

Operation + ¶ +

+
SCALE(SRC1, SRC2)
+{ ; Check for denormal operands
+TMP_SRC2 := SRC2
+TMP_SRC1 := SRC1
+IF (SRC2 is denormal AND MXCSR.DAZ) THEN TMP_SRC2=0
+IF (SRC1 is denormal AND MXCSR.DAZ) THEN TMP_SRC1=0
+/* SRC2 is a 32 bits floating-point value */
+DEST[31:0] := TMP_SRC1[31:0] * POW(2, Floor(TMP_SRC2[31:0]))
+}
+
+

VSCALEFPS (EVEX encoded versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+IF (VL = 512) AND (EVEX.b = 1) AND (SRC2 *is register*)
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b = 1) AND (SRC2 *is memory*)
+                THEN DEST[i+31:i] := SCALE(SRC1[i+31:i], SRC2[31:0]);
+                ELSE DEST[i+31:i] := SCALE(SRC1[i+31:i], SRC2[i+31:i]);
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE ; zeroing-masking
+                    DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VSCALEFPS __m512 _mm512_scalef_round_ps(__m512 a, __m512 b, int rounding);
+
+
VSCALEFPS __m512 _mm512_mask_scalef_round_ps(__m512 s, __mmask16 k, __m512 a, __m512 b, int rounding);
+
+
VSCALEFPS __m512 _mm512_maskz_scalef_round_ps(__mmask16 k, __m512 a, __m512 b, int rounding);
+
+
VSCALEFPS __m512 _mm512_scalef_ps(__m512 a, __m512 b);
+
+
VSCALEFPS __m512 _mm512_mask_scalef_ps(__m512 s, __mmask16 k, __m512 a, __m512 b);
+
+
VSCALEFPS __m512 _mm512_maskz_scalef_ps(__mmask16 k, __m512 a, __m512 b);
+
+
VSCALEFPS __m256 _mm256_scalef_ps(__m256 a, __m256 b);
+
+
VSCALEFPS __m256 _mm256_mask_scalef_ps(__m256 s, __mmask8 k, __m256 a, __m256 b);
+
+
VSCALEFPS __m256 _mm256_maskz_scalef_ps(__mmask8 k, __m256 a, __m256 b);
+
+
VSCALEFPS __m128 _mm_scalef_ps(__m128 a, __m128 b);
+
+
VSCALEFPS __m128 _mm_mask_scalef_ps(__m128 s, __mmask8 k, __m128 a, __m128 b);
+
+
VSCALEFPS __m128 _mm_maskz_scalef_ps(__mmask8 k, __m128 a, __m128 b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal (for Src1).

+

Denormal is not reported for Src2.

+

Other Exceptions + ¶ +

+

See Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vscalefsd.html b/x86/vscalefsd.html new file mode 100644 index 0000000..2cd19e1 --- /dev/null +++ b/x86/vscalefsd.html @@ -0,0 +1,103 @@ + +VSCALEFSD + — Scale Scalar Float64 Values With Float64 Values

VSCALEFSD + — Scale Scalar Float64 Values With Float64 Values

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.0F38.W1 2D /r VSCALEFSD xmm1 {k1}{z}, xmm2, xmm3/m64{er}AV/VAVX512FScale the scalar double precision floating-point values in xmm2 using the value from xmm3/m64. Under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a floating-point scale of the scalar double precision floating-point value in the first source operand by multiplying it by 2 to the power of the double precision floating-point value in second source operand.

+

The equation of this operation is given by:

+

xmm1 := xmm2 * 2^floor(xmm3).

+

Floor(xmm3) means maximum integer value ≤ xmm3.

+

If the result cannot be represented in double precision, then the proper overflow response (for positive scaling operand), or the proper underflow response (for negative scaling operand) is issued. The overflow and underflow responses are dependent on the rounding mode (for IEEE-compliant rounding), as well as on other settings in MXCSR (exception mask bits, FTZ bit), and on the SAE bit.

+

EVEX encoded version: The first source operand is an XMM register. The second source operand is an XMM register or a memory location. The destination operand is an XMM register conditionally updated with writemask k1.

+

Handling of special-case input values are listed in Table 5-39 and Table 5-40.

+

Operation + ¶ +

+
SCALE(SRC1, SRC2)
+{
+    ; Check for denormal operands
+TMP_SRC2 := SRC2
+TMP_SRC1 := SRC1
+IF (SRC2 is denormal AND MXCSR.DAZ) THEN TMP_SRC2=0
+IF (SRC1 is denormal AND MXCSR.DAZ) THEN TMP_SRC1=0
+/* SRC2 is a 64 bits floating-point value */
+DEST[63:0] := TMP_SRC1[63:0] * POW(2, Floor(TMP_SRC2[63:0]))
+}
+
+

VSCALEFSD (EVEX encoded version) + ¶ +

+
IF (EVEX.b= 1) and SRC2 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] OR *no writemask*
+    THEN DEST[63:0] := SCALE(SRC1[63:0], SRC2[63:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[63:0] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[63:0] := 0
+        FI
+FI;
+DEST[127:64] := SRC1[127:64]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VSCALEFSD __m128d _mm_scalef_round_sd(__m128d a, __m128d b, int);
+
+
VSCALEFSD __m128d _mm_mask_scalef_round_sd(__m128d s, __mmask8 k, __m128d a, __m128d b, int);
+
+
VSCALEFSD __m128d _mm_maskz_scalef_round_sd(__mmask8 k, __m128d a, __m128d b, int);
+
+
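A usage sketch of the rounding-form intrinsic above (assumes an AVX-512F target; _MM_FROUND_CUR_DIRECTION selects the rounding mode currently set in MXCSR):

#include <immintrin.h>

/* Low lane: x * 2^floor(n); the upper lane is copied from x, as described above. */
static __m128d scalef_sd(__m128d x, __m128d n)
{
    return _mm_scalef_round_sd(x, n, _MM_FROUND_CUR_DIRECTION);
}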

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal (for Src1).

+

Denormal is not reported for Src2.

+

Other Exceptions + ¶ +

+

See Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vscalefsh.html b/x86/vscalefsh.html new file mode 100644 index 0000000..466035c --- /dev/null +++ b/x86/vscalefsh.html @@ -0,0 +1,94 @@ + +VSCALEFSH + — Scale Scalar FP16 Values with FP16 Values

VSCALEFSH + — Scale Scalar FP16 Values with FP16 Values

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.MAP6.W0 2D /r VSCALEFSH xmm1{k1}{z}, xmm2, xmm3/m16 {er}AV/VAVX512-FP16Scale the FP16 values in xmm2 using the value from xmm3/m16 and store the result in xmm1 subject to writemask k1. Bits 127:16 from xmm2 are copied to xmm1[127:16].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction performs a floating-point scale of the low FP16 element in the first source operand by multiplying it by 2 to the power of the low FP16 element in second source operand, storing the result in the low element of the destination operand.

+

Bits 127:16 of the destination operand are copied from the corresponding bits of the first source operand. Bits MAXVL-1:128 of the destination operand are zeroed. The low FP16 element of the destination is updated according to the writemask.

+

The equation of this operation is given by:

+

xmm1 := xmm2 * 2^floor(xmm3).

+

Floor(xmm3) means maximum integer value ≤ xmm3.

+

If the result cannot be represented in FP16, then the proper overflow response (for positive scaling operand), or the proper underflow response (for negative scaling operand), is issued. The overflow and underflow responses are dependent on the rounding mode (for IEEE-compliant rounding), as well as on other settings in MXCSR (exception mask bits, FTZ bit), and on the SAE bit.

+

Handling of special-case input values are listed in Table 5-41 and Table 5-42.

+

Operation + ¶ +

+

VSCALEFSH dest{k1}, src1, src2 + ¶ +

+
IF (EVEX.b = 1) and no memory operand:
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+IF k1[0] or *no writemask*:
+    dest.fp16[0] := scale_fp16(src1.fp16[0], src2.fp16[0]) // see VSCALEFPH
+ELSE IF *zeroing*:
+    dest.fp16[0] := 0
+//else DEST.fp16[0] remains unchanged
+DEST[127:16] := src1[127:16]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VSCALEFSH __m128h _mm_mask_scalef_round_sh (__m128h src, __mmask8 k, __m128h a, __m128h b, const int rounding);
+
+
VSCALEFSH __m128h _mm_maskz_scalef_round_sh (__mmask8 k, __m128h a, __m128h b, const int rounding);
+
+
VSCALEFSH __m128h _mm_scalef_round_sh (__m128h a, __m128h b, const int rounding);
+
+
VSCALEFSH __m128h _mm_mask_scalef_sh (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VSCALEFSH __m128h _mm_maskz_scalef_sh (__mmask8 k, __m128h a, __m128h b);
+
+
VSCALEFSH __m128h _mm_scalef_sh (__m128h a, __m128h b);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Overflow, Precision, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

+

Denormal-operand exception (#D) is checked and signaled for src1 operand, but not for src2 operand. The denormal-operand exception is checked for src1 operand only if the src2 operand is not NaN. If the src2 operand is NaN, the processor generates NaN and does not signal denormal-operand exception, even if src1 operand is denormal.

diff --git a/x86/vscalefss.html b/x86/vscalefss.html new file mode 100644 index 0000000..44d4091 --- /dev/null +++ b/x86/vscalefss.html @@ -0,0 +1,103 @@ + +VSCALEFSS + — Scale Scalar Float32 Value With Float32 Value

VSCALEFSS + — Scale Scalar Float32 Value With Float32 Value

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.66.0F38.W0 2D /r VSCALEFSS xmm1 {k1}{z}, xmm2, xmm3/m32{er}AV/VAVX512FScale the scalar single-precision floating-point value in xmm2 using floating-point value from xmm3/m32. Under writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a floating-point scale of the scalar single-precision floating-point value in the first source operand by multiplying it by 2 to the power of the float32 value in second source operand.

+

The equation of this operation is given by:

+

xmm1 := xmm2 * 2^floor(xmm3).

+

Floor(xmm3) means maximum integer value ≤ xmm3.

+

If the result cannot be represented in single-precision, then the proper overflow response (for positive scaling operand), or the proper underflow response (for negative scaling operand) is issued. The overflow and underflow responses are dependent on the rounding mode (for IEEE-compliant rounding), as well as on other settings in MXCSR (exception mask bits, FTZ bit), and on the SAE bit.

+

EVEX encoded version: The first source operand is an XMM register. The second source operand is an XMM register or a memory location. The destination operand is an XMM register conditionally updated with writemask k1.

+

Handling of special-case input values are listed in Table 5-39 and Table 5-43.

+

Operation + ¶ +

+
SCALE(SRC1, SRC2)
+{
+    ; Check for denormal operands
+TMP_SRC2 := SRC2
+TMP_SRC1 := SRC1
+IF (SRC2 is denormal AND MXCSR.DAZ) THEN TMP_SRC2=0
+IF (SRC1 is denormal AND MXCSR.DAZ) THEN TMP_SRC1=0
+/* SRC2 is a 32 bits floating-point value */
+DEST[31:0] := TMP_SRC1[31:0] * POW(2, Floor(TMP_SRC2[31:0]))
+}
+
+

VSCALEFSS (EVEX encoded version) + ¶ +

+
IF (EVEX.b= 1) and SRC2 *is a register*
+    THEN
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
+    ELSE
+        SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
+FI;
+IF k1[0] OR *no writemask*
+    THEN DEST[31:0] := SCALE(SRC1[31:0], SRC2[31:0])
+    ELSE
+        IF *merging-masking* ; merging-masking
+            THEN *DEST[31:0] remains unchanged*
+            ELSE ; zeroing-masking
+                DEST[31:0] := 0
+        FI
+FI;
+DEST[127:32] := SRC1[127:32]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VSCALEFSS __m128 _mm_scalef_round_ss(__m128 a, __m128 b, int);
+
+
VSCALEFSS __m128 _mm_mask_scalef_round_ss(__m128 s, __mmask8 k, __m128 a, __m128 b, int);
+
+
VSCALEFSS __m128 _mm_maskz_scalef_round_ss(__mmask8 k, __m128 a, __m128 b, int);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

Overflow, Underflow, Invalid, Precision, Denormal (for Src1).

+

Denormal is not reported for Src2.

+

Other Exceptions + ¶ +

+

See Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vscatterdps.vscatterdpd.vscatterqps.vscatterqpd.html b/x86/vscatterdps.vscatterdpd.vscatterqps.vscatterqpd.html new file mode 100644 index 0000000..0612613 --- /dev/null +++ b/x86/vscatterdps.vscatterdpd.vscatterqps.vscatterqpd.html @@ -0,0 +1,252 @@ + +VSCATTERDPS/VSCATTERDPD/VSCATTERQPS/VSCATTERQPD + — Scatter Packed Single, PackedDouble with Signed Dword and Qword Indices

VSCATTERDPS/VSCATTERDPD/VSCATTERQPS/VSCATTERQPD + — Scatter Packed Single, Packed Double with Signed Dword and Qword Indices

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.66.0F38.W0 A2 /vsib VSCATTERDPS vm32x {k1}, xmm1AV/VAVX512VL AVX512FUsing signed dword indices, scatter single-precision floating-point values to memory using writemask k1.
EVEX.256.66.0F38.W0 A2 /vsib VSCATTERDPS vm32y {k1}, ymm1AV/VAVX512VL AVX512FUsing signed dword indices, scatter single-precision floating-point values to memory using writemask k1.
EVEX.512.66.0F38.W0 A2 /vsib VSCATTERDPS vm32z {k1}, zmm1AV/VAVX512FUsing signed dword indices, scatter single-precision floating-point values to memory using writemask k1.
EVEX.128.66.0F38.W1 A2 /vsib VSCATTERDPD vm32x {k1}, xmm1AV/VAVX512VL AVX512FUsing signed dword indices, scatter double precision floating-point values to memory using writemask k1.
EVEX.256.66.0F38.W1 A2 /vsib VSCATTERDPD vm32x {k1}, ymm1AV/VAVX512VL AVX512FUsing signed dword indices, scatter double precision floating-point values to memory using writemask k1.
EVEX.512.66.0F38.W1 A2 /vsib VSCATTERDPD vm32y {k1}, zmm1AV/VAVX512FUsing signed dword indices, scatter double precision floating-point values to memory using writemask k1.
EVEX.128.66.0F38.W0 A3 /vsib VSCATTERQPS vm64x {k1}, xmm1AV/VAVX512VL AVX512FUsing signed qword indices, scatter single-precision floating-point values to memory using writemask k1.
EVEX.256.66.0F38.W0 A3 /vsib VSCATTERQPS vm64y {k1}, xmm1AV/VAVX512VL AVX512FUsing signed qword indices, scatter single-precision floating-point values to memory using writemask k1.
EVEX.512.66.0F38.W0 A3 /vsib VSCATTERQPS vm64z {k1}, ymm1AV/VAVX512FUsing signed qword indices, scatter single-precision floating-point values to memory using writemask k1.
EVEX.128.66.0F38.W1 A3 /vsib VSCATTERQPD vm64x {k1}, xmm1AV/VAVX512VL AVX512FUsing signed qword indices, scatter double precision floating-point values to memory using writemask k1.
EVEX.256.66.0F38.W1 A3 /vsib VSCATTERQPD vm64y {k1}, ymm1AV/VAVX512VL AVX512FUsing signed qword indices, scatter double precision floating-point values to memory using writemask k1.
EVEX.512.66.0F38.W1 A3 /vsib VSCATTERQPD vm64z {k1}, zmm1AV/VAVX512FUsing signed qword indices, scatter double precision floating-point values to memory using writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarBaseReg (R): VSIB:base, VectorReg(R): VSIB:indexModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Stores up to 16 elements (or 8 elements) in doubleword/quadword vector zmm1 to the memory locations pointed by base address BASE_ADDR and index vector VINDEX, with scale SCALE. The elements are specified via the VSIB (i.e., the index register is a vector register, holding packed indices). Elements will only be stored if their corresponding mask bit is one. The entire mask register will be set to zero by this instruction unless it triggers an exception.

+

This instruction can be suspended by an exception if at least one element is already scattered (i.e., if the exception is triggered by an element other than the rightmost one with its mask bit set). When this happens, the destination register and the mask register (k1) are partially updated. If any traps or interrupts are pending from already scattered elements, they will be delivered in lieu of the exception; in this case, EFLAG.RF is set to one so an instruction breakpoint is not re-triggered when the instruction is continued.

+

Note that:

+
  • Only writes to overlapping vector indices are guaranteed to be ordered with respect to each other (from LSB to MSB of the source registers). Note that this also includes partially overlapping vector indices. Writes that are not overlapped may happen in any order. Memory ordering with other instructions follows the Intel-64 memory ordering model. Note that this does not account for non-overlapping indices that map into the same physical address locations.
  • If two or more destination indices completely overlap, the “earlier” write(s) may be skipped.
  • Faults are delivered in a right-to-left manner. That is, if a fault is triggered by an element and delivered, all elements closer to the LSB of the destination zmm will be completed (and non-faulting). Individual elements closer to the MSB may or may not be completed. If a given element triggers multiple faults, they are delivered in the conventional order.
  • Elements may be scattered in any order, but faults must be delivered in a right-to-left order; thus, elements to the left of a faulting one may be scattered before the fault is delivered. A given implementation of this instruction is repeatable: given the same input values and architectural state, the same set of elements to the left of the faulting one will be scattered.
  • This instruction does not perform AC checks, and so will never deliver an AC fault.
  • Not valid with 16-bit effective addresses. Will deliver a #UD fault.
  • If this instruction overwrites itself and then takes a fault, only a subset of elements may be completed before the fault is delivered (as described above). If the fault handler completes and attempts to re-execute this instruction, the new instruction will be executed, and the scatter will not complete.
+

Note that the presence of VSIB byte is enforced in this instruction. Hence, the instruction will #UD fault if ModRM.rm is different than 100b.

+

This instruction has special disp8*N and alignment rules. N is considered to be the size of a single vector element.

+

The scaled index may require more bits to represent than the address bits used by the processor (e.g., in 32-bit mode, if the scale is greater than one). In this case, the most significant bits beyond the number of address bits are ignored.

+

The instruction will #UD fault if the k0 mask register is specified.

+

Operation + ¶ +

+
BASE_ADDR stands for the memory operand base address (a GPR); may not exist
+VINDEX stands for the memory operand vector of indices (a ZMM register)
+SCALE stands for the memory operand scalar (1, 2, 4 or 8)
+DISP is the optional 1 or 4 byte displacement
+
+

VSCATTERDPS (EVEX encoded versions) + ¶ +

+
(KL, VL)= (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN MEM[BASE_ADDR +SignExtend(VINDEX[i+31:i]) * SCALE + DISP] :=
+            SRC[i+31:i]
+            k1[j] := 0
+    FI;
+ENDFOR
+k1[MAX_KL-1:KL] := 0
+
+

VSCATTERDPD (EVEX encoded versions) + ¶ +

+
(KL, VL)= (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 32
+    IF k1[j] OR *no writemask*
+        THEN MEM[BASE_ADDR +SignExtend(VINDEX[k+31:k]) * SCALE + DISP] :=
+            SRC[i+63:i]
+            k1[j] := 0
+    FI;
+ENDFOR
+k1[MAX_KL-1:KL] := 0
+
+

VSCATTERQPS (EVEX encoded versions) + ¶ +

+
(KL, VL)= (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    k := j * 64
+    IF k1[j] OR *no writemask*
+        THEN MEM[BASE_ADDR + (VINDEX[k+63:k]) * SCALE + DISP] :=
+            SRC[i+31:i]
+            k1[j] := 0
+    FI;
+ENDFOR
+k1[MAX_KL-1:KL] := 0
+
+

VSCATTERQPD (EVEX encoded versions) + ¶ +

+
(KL, VL)= (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN MEM[BASE_ADDR + (VINDEX[i+63:i]) * SCALE + DISP] :=
+            SRC[i+63:i]
+            k1[j] := 0
+    FI;
+ENDFOR
+k1[MAX_KL-1:KL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VSCATTERDPD void _mm512_i32scatter_pd(void * base, __m256i vdx, __m512d a, int scale);
+
+
VSCATTERDPD void _mm512_mask_i32scatter_pd(void * base, __mmask8 k, __m256i vdx, __m512d a, int scale);
+
+
VSCATTERDPS void _mm512_i32scatter_ps(void * base, __m512i vdx, __m512 a, int scale);
+
+
VSCATTERDPS void _mm512_mask_i32scatter_ps(void * base, __mmask16 k, __m512i vdx, __m512 a, int scale);
+
+
VSCATTERQPD void _mm512_i64scatter_pd(void * base, __m512i vdx, __m512d a, int scale);
+
+
VSCATTERQPD void _mm512_mask_i64scatter_pd(void * base, __mmask8 k, __m512i vdx, __m512d a, int scale);
+
+
VSCATTERQPS void _mm512_i64scatter_ps(void * base, __m512i vdx, __m256 a, int scale);
+
+
VSCATTERQPS void _mm512_mask_i64scatter_ps(void * base, __mmask8 k, __m512i vdx, __m256 a, int scale);
+
+
VSCATTERDPD void _mm256_i32scatter_pd(void * base, __m128i vdx, __m256d a, int scale);
+
+
VSCATTERDPD void _mm256_mask_i32scatter_pd(void * base, __mmask8 k, __m128i vdx, __m256d a, int scale);
+
+
VSCATTERDPS void _mm256_i32scatter_ps(void * base, __m256i vdx, __m256 a, int scale);
+
+
VSCATTERDPS void _mm256_mask_i32scatter_ps(void * base, __mmask8 k, __m256i vdx, __m256 a, int scale);
+
+
VSCATTERQPD void _mm256_i64scatter_pd(void * base, __m256i vdx, __m256d a, int scale);
+
+
VSCATTERQPD void _mm256_mask_i64scatter_pd(void * base, __mmask8 k, __m256i vdx, __m256d a, int scale);
+
+
VSCATTERQPS void _mm256_i64scatter_ps(void * base, __m256i vdx, __m128 a, int scale);
+
+
VSCATTERQPS void _mm256_mask_i64scatter_ps(void * base, __mmask8 k, __m256i vdx, __m128 a, int scale);
+
+
VSCATTERDPD void _mm_i32scatter_pd(void * base, __m128i vdx, __m128d a, int scale);
+
+
VSCATTERDPD void _mm_mask_i32scatter_pd(void * base, __mmask8 k, __m128i vdx, __m128d a, int scale);
+
+
VSCATTERDPS void _mm_i32scatter_ps(void * base, __m128i vdx, __m128 a, int scale);
+
+
VSCATTERDPS void _mm_mask_i32scatter_ps(void * base, __mmask8 k, __m128i vdx, __m128 a, int scale);
+
+
VSCATTERQPD void _mm_i64scatter_pd(void * base, __m128i vdx, __m128d a, int scale);
+
+
VSCATTERQPD void _mm_mask_i64scatter_pd(void * base, __mmask8 k, __m128i vdx, __m128d a, int scale);
+
+
VSCATTERQPS void _mm_i64scatter_ps(void * base, __m128i vdx, __m128 a, int scale);
+
+
VSCATTERQPS void _mm_mask_i64scatter_ps(void * base, __mmask8 k, __m128i vdx, __m128 a, int scale);
+
+
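As a usage sketch (not part of the SDM text), the function below scatters sixteen single-precision values with the unmasked intrinsic; the function and array names are illustrative, and AVX-512F support (e.g., compiling with -mavx512f) is assumed. Overlapping destination indices are resolved LSB-to-MSB, as described above.

#include <immintrin.h>
#include <stdint.h>

/* Scatter 16 floats to dst[idx[0..15]]; scale = 4 because the indices are
   element indices into a float array. Hypothetical example, AVX-512F only. */
void scatter16(float *dst, const int32_t *idx, const float *src)
{
    __m512i vindex = _mm512_loadu_si512((const void *)idx); /* 16 dword indices */
    __m512  values = _mm512_loadu_ps(src);                   /* 16 SP values    */
    _mm512_i32scatter_ps(dst, vindex, values, 4);
}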

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Overflow, Underflow, Precision, Denormal.

+

Other Exceptions + ¶ +

+

See Table 2-61, “Type E12 Class Exception Conditions.”

diff --git a/x86/vscatterpf0dps.vscatterpf0qps.vscatterpf0dpd.vscatterpf0qpd.html b/x86/vscatterpf0dps.vscatterpf0qps.vscatterpf0dpd.vscatterpf0qpd.html new file mode 100644 index 0000000..99b7e6a --- /dev/null +++ b/x86/vscatterpf0dps.vscatterpf0qps.vscatterpf0dpd.vscatterpf0qpd.html @@ -0,0 +1,161 @@ + +VSCATTERPF0DPS/VSCATTERPF0QPS/VSCATTERPF0DPD/VSCATTERPF0QPD + — Sparse PrefetchPacked SP/DP Data Values with Signed Dword, Signed Qword Indices Using T0 Hint With Intentto Write

VSCATTERPF0DPS/VSCATTERPF0QPS/VSCATTERPF0DPD/VSCATTERPF0QPD + — Sparse Prefetch Packed SP/DP Data Values with Signed Dword, Signed Qword Indices Using T0 Hint With Intent to Write

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.512.66.0F38.W0 C6 /5 /vsib VSCATTERPF0DPS vm32z {k1}AV/VAVX512PFUsing signed dword indices, prefetch sparse byte memory locations containing single-precision data using writemask k1 and T0 hint with intent to write.
EVEX.512.66.0F38.W0 C7 /5 /vsib VSCATTERPF0QPS vm64z {k1}AV/VAVX512PFUsing signed qword indices, prefetch sparse byte memory locations containing single-precision data using writemask k1 and T0 hint with intent to write.
EVEX.512.66.0F38.W1 C6 /5 /vsib VSCATTERPF0DPD vm32y {k1}AV/VAVX512PFUsing signed dword indices, prefetch sparse byte memory locations containing double precision data using writemask k1 and T0 hint with intent to write.
EVEX.512.66.0F38.W1 C7 /5 /vsib VSCATTERPF0QPD vm64z {k1}AV/VAVX512PFUsing signed qword indices, prefetch sparse byte memory locations containing double precision data using writemask k1 and T0 hint with intent to write.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarBaseReg (R): VSIB:base, VectorReg(R): VSIB:indexN/AN/AN/A
+

Description + ¶ +

+

The instruction conditionally prefetches up to sixteen 32-bit or eight 64-bit integer byte data elements. The elements are specified via the VSIB (i.e., the index register is a zmm, holding packed indices). Elements will only be prefetched if their corresponding mask bit is one.

+

Cache lines will be brought into exclusive state (RFO), as specified by a locality hint (T0):

+
    +
  • T0 (temporal data)—prefetch data into the first level cache.
+

[PS data] For dword indices, the instruction will prefetch sixteen memory locations. For qword indices, the instruction will prefetch eight values.

+

[PD data] For dword and qword indices, the instruction will prefetch eight memory locations.

+

Note that:

+

(1) The prefetches may happen in any order (or not at all). The instruction is a hint.

+

(2) The mask is left unchanged.

+

(3) Not valid with 16-bit effective addresses. Will deliver a #UD fault.

+

(4) No FP nor memory faults may be produced by this instruction.

+

(5) Prefetches do not handle cache line splits.

+

(6) A #UD is signaled if the memory operand is encoded without the SIB byte.

+

Operation + ¶ +

+
BASE_ADDR stands for the memory operand base address (a GPR); may not exist.
+VINDEX stands for the memory operand vector of indices (a vector register).
+SCALE stands for the memory operand scalar (1, 2, 4 or 8).
+DISP is the optional 1, 2 or 4 byte displacement.
+PREFETCH(mem, Level, State) Prefetches a byte memory location pointed to by ‘mem’ into the cache level specified by ‘Level’; a request
+for exclusive ownership (RFO) is made if ‘State’ is 1. Note that the prefetch ignores cache line splits. This operation is considered a
+hint for the processor and may be skipped depending on implementation.
+
+

VSCATTERPF0DPS (EVEX Encoded Version) + ¶ +

+
(KL, VL) = (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j]
+        Prefetch( [BASE_ADDR + SignExtend(VINDEX[i+31:i]) * SCALE + DISP], Level=0, RFO = 1)
+    FI;
+ENDFOR
+
+

VSCATTERPF0DPD (EVEX Encoded Version) + ¶ +

+
(KL, VL) = (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 32
+    IF k1[j]
+        Prefetch( [BASE_ADDR + SignExtend(VINDEX[k+31:k]) * SCALE + DISP], Level=0, RFO = 1)
+    FI;
+ENDFOR
+
+

VSCATTERPF0QPS (EVEX Encoded Version) + ¶ +

+
(KL, VL) = (8, 256)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j]
+        Prefetch( [BASE_ADDR + SignExtend(VINDEX[i+63:i]) * SCALE + DISP], Level=0, RFO = 1)
+    FI;
+ENDFOR
+
+

VSCATTERPF0QPD (EVEX Encoded Version) + ¶ +

+
(KL, VL) = (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 64
+    IF k1[j]
+        Prefetch( [BASE_ADDR + SignExtend(VINDEX[k+63:k]) * SCALE + DISP], Level=0, RFO = 1)
+    FI;
+ENDFOR
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VSCATTERPF0DPD void _mm512_prefetch_i32scatter_pd(void *base, __m256i vdx, int scale, int hint);
+
+
VSCATTERPF0DPD void _mm512_mask_prefetch_i32scatter_pd(void *base, __mmask8 m, __m256i vdx, int scale, int hint);
+
+
VSCATTERPF0DPS void _mm512_prefetch_i32scatter_ps(void *base, __m512i vdx, int scale, int hint);
+
+
VSCATTERPF0DPS void _mm512_mask_prefetch_i32scatter_ps(void *base, __mmask16 m, __m512i vdx, int scale, int hint);
+
+
VSCATTERPF0QPD void _mm512_prefetch_i64scatter_pd(void * base, __m512i vdx, int scale, int hint);
+
+
VSCATTERPF0QPD void _mm512_mask_prefetch_i64scatter_pd(void * base, __mmask8 m, __m512i vdx, int scale, int hint);
+
+
VSCATTERPF0QPS void _mm512_prefetch_i64scatter_ps(void * base, __m512i vdx, int scale, int hint);
+
+
VSCATTERPF0QPS void _mm512_mask_prefetch_i64scatter_ps(void * base, __mmask8 m, __m512i vdx, int scale, int hint);
+
+
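A hedged usage sketch follows: it prefetches the next iteration's scatter targets with the T0 hint (intent to write) before storing the current one. The function name, array names, and the one-vector look-ahead distance are illustrative; AVX512PF hardware (e.g., Knights Landing, -mavx512pf) and AVX-512F are assumed, and the prefetch may legitimately be dropped by the processor.

#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>

/* Prefetch-for-write the scatter targets one vector ahead of the stores. */
void scatter_with_prefetch(float *dst, const int32_t *idx, const float *src, size_t n)
{
    for (size_t j = 0; j + 32 <= n; j += 16) {
        __m512i next = _mm512_loadu_si512((const void *)(idx + j + 16));
        _mm512_prefetch_i32scatter_ps(dst, next, 4, _MM_HINT_T0);   /* T0 hint, RFO */

        __m512i cur = _mm512_loadu_si512((const void *)(idx + j));
        _mm512_i32scatter_ps(dst, cur, _mm512_loadu_ps(src + j), 4);
    }
}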

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-62, “Type E12NP Class Exception Conditions.”

diff --git a/x86/vscatterpf1dps.vscatterpf1qps.vscatterpf1dpd.vscatterpf1qpd.html b/x86/vscatterpf1dps.vscatterpf1qps.vscatterpf1dpd.vscatterpf1qpd.html new file mode 100644 index 0000000..a84993e --- /dev/null +++ b/x86/vscatterpf1dps.vscatterpf1qps.vscatterpf1dpd.vscatterpf1qpd.html @@ -0,0 +1,161 @@ + +VSCATTERPF1DPS/VSCATTERPF1QPS/VSCATTERPF1DPD/VSCATTERPF1QPD + — Sparse PrefetchPacked SP/DP Data Values With Signed Dword, Signed Qword Indices Using T1 Hint With Intentto Write

VSCATTERPF1DPS/VSCATTERPF1QPS/VSCATTERPF1DPD/VSCATTERPF1QPD + — Sparse Prefetch Packed SP/DP Data Values With Signed Dword, Signed Qword Indices Using T1 Hint With Intent to Write

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.512.66.0F38.W0 C6 /6 /vsib VSCATTERPF1DPS vm32z {k1}AV/VAVX512PFUsing signed dword indices, prefetch sparse byte memory locations containing single-precision data using writemask k1 and T1 hint with intent to write.
EVEX.512.66.0F38.W0 C7 /6 /vsib VSCATTERPF1QPS vm64z {k1}AV/VAVX512PFUsing signed qword indices, prefetch sparse byte memory locations containing single-precision data using writemask k1 and T1 hint with intent to write.
EVEX.512.66.0F38.W1 C6 /6 /vsib VSCATTERPF1DPD vm32y {k1}AV/VAVX512PFUsing signed dword indices, prefetch sparse byte memory locations containing double precision data using writemask k1 and T1 hint with intent to write.
EVEX.512.66.0F38.W1 C7 /6 /vsib VSCATTERPF1QPD vm64z {k1}AV/VAVX512PFUsing signed qword indices, prefetch sparse byte memory locations containing double precision data using writemask k1 and T1 hint with intent to write.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
ATuple1 ScalarBaseReg (R): VSIB:base, VectorReg(R): VSIB:indexN/AN/AN/A
+

Description + ¶ +

+

The instruction conditionally prefetches up to sixteen 32-bit or eight 64-bit integer byte data elements. The elements are specified via the VSIB (i.e., the index register is a zmm, holding packed indices). Elements will only be prefetched if their corresponding mask bit is one.

+

Cache lines will be brought into exclusive state (RFO), as specified by a locality hint (T1):

+
    +
  • T1 (temporal data)—prefetch data into the second level cache.
+

[PS data] For dword indices, the instruction will prefetch sixteen memory locations. For qword indices, the instruction will prefetch eight values.

+

[PD data] For dword and qword indices, the instruction will prefetch eight memory locations.

+

Note that:

+

(1) The prefetches may happen in any order (or not at all). The instruction is a hint.

+

(2) The mask is left unchanged.

+

(3) Not valid with 16-bit effective addresses. Will deliver a #UD fault.

+

(4) No FP nor memory faults may be produced by this instruction.

+

(5) Prefetches do not handle cache line splits.

+

(6) A #UD is signaled if the memory operand is encoded without the SIB byte.

+

Operation + ¶ +

+
BASE_ADDR stands for the memory operand base address (a GPR); may not exist.
+VINDEX stands for the memory operand vector of indices (a vector register).
+SCALE stands for the memory operand scalar (1, 2, 4 or 8).
+DISP is the optional 1, 2 or 4 byte displacement.
+PREFETCH(mem, Level, State) Prefetches a byte memory location pointed to by ‘mem’ into the cache level specified by ‘Level’; a request
+for exclusive ownership (RFO) is made if ‘State’ is 1. Note that the prefetch ignores cache line splits. This operation is considered a
+hint for the processor and may be skipped depending on implementation.
+
+

VSCATTERPF1DPS (EVEX Encoded Version) + ¶ +

+
(KL, VL) = (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j]
+        Prefetch( [BASE_ADDR + SignExtend(VINDEX[i+31:i]) * SCALE + DISP], Level=1, RFO = 1)
+    FI;
+ENDFOR
+
+

VSCATTERPF1DPD (EVEX Encoded Version) + ¶ +

+
(KL, VL) = (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 32
+    IF k1[j]
+        Prefetch( [BASE_ADDR + SignExtend(VINDEX[k+31:k]) * SCALE + DISP], Level=1, RFO = 1)
+    FI;
+ENDFOR
+
+

VSCATTERPF1QPS (EVEX Encoded Version) + ¶ +

+
(KL, VL) = (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j]
+        Prefetch( [BASE_ADDR + SignExtend(VINDEX[i+63:i]) * SCALE + DISP], Level=1, RFO = 1)
+    FI;
+ENDFOR
+
+

VSCATTERPF1QPD (EVEX Encoded Version) + ¶ +

+
(KL, VL) = (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    k := j * 64
+    IF k1[j]
+        Prefetch( [BASE_ADDR + SignExtend(VINDEX[k+63:k]) * SCALE + DISP], Level=1, RFO = 1)
+    FI;
+ENDFOR
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VSCATTERPF1DPD void _mm512_prefetch_i32scatter_pd(void *base, __m256i vdx, int scale, int hint);
+
+
VSCATTERPF1DPD void _mm512_mask_prefetch_i32scatter_pd(void *base, __mmask8 m, __m256i vdx, int scale, int hint);
+
+
VSCATTERPF1DPS void _mm512_prefetch_i32scatter_ps(void *base, __m512i vdx, int scale, int hint);
+
+
VSCATTERPF1DPS void _mm512_mask_prefetch_i32scatter_ps(void *base, __mmask16 m, __m512i vdx, int scale, int hint);
+
+
VSCATTERPF1QPD void _mm512_prefetch_i64scatter_pd(void * base, __m512i vdx, int scale, int hint);
+
+
VSCATTERPF1QPD void _mm512_mask_prefetch_i64scatter_pd(void * base, __mmask8 m, __m512i vdx, int scale, int hint);
+
+
VSCATTERPF1QPS void _mm512_prefetch_i64scatter_ps(void *base, __m512i vdx, int scale, int hint);
+
+
VSCATTERPF1QPS void _mm512_mask_prefetch_i64scatter_ps(void *base, __mmask8 m, __m512i vdx, int scale, int hint);
+
+
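A minimal sketch of the masked form, assuming AVX512PF support: only lanes whose mask bit is set are prefetched into the second-level cache with intent to write. The function name, array names, and mask value are illustrative.

#include <immintrin.h>
#include <stdint.h>

/* Prefetch into L2 (T1 hint, RFO) only the targets selected by `lanes`. */
void prefetch_selected(float *dst, const int32_t *idx, __mmask16 lanes)
{
    __m512i vindex = _mm512_loadu_si512((const void *)idx);
    _mm512_mask_prefetch_i32scatter_ps(dst, lanes, vindex, 4, _MM_HINT_T1);
}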

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-62, “Type E12NP Class Exception Conditions.”

diff --git a/x86/vshuff32x4.vshuff64x2.vshufi32x4.vshufi64x2.html b/x86/vshuff32x4.vshuff64x2.vshufi32x4.vshufi64x2.html new file mode 100644 index 0000000..7cacd5a --- /dev/null +++ b/x86/vshuff32x4.vshuff64x2.vshufi32x4.vshufi64x2.html @@ -0,0 +1,305 @@ + +VSHUFF32x4/VSHUFF64x2/VSHUFI32x4/VSHUFI64x2 + — Shuffle Packed Values at 128-BitGranularity

VSHUFF32x4/VSHUFF64x2/VSHUFI32x4/VSHUFI64x2 + — Shuffle Packed Values at 128-Bit Granularity

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.256.66.0F3A.W0 23 /r ib VSHUFF32X4 ymm1{k1}{z}, ymm2, ymm3/m256/m32bcst, imm8AV/VAVX512VL AVX512FShuffle 128-bit packed single-precision floating-point values selected by imm8 from ymm2 and ymm3/m256/m32bcst and place results in ymm1 subject to writemask k1.
EVEX.512.66.0F3A.W0 23 /r ib VSHUFF32x4 zmm1{k1}{z}, zmm2, zmm3/m512/m32bcst, imm8AV/VAVX512FShuffle 128-bit packed single-precision floating-point values selected by imm8 from zmm2 and zmm3/m512/m32bcst and place results in zmm1 subject to writemask k1.
EVEX.256.66.0F3A.W1 23 /r ib VSHUFF64X2 ymm1{k1}{z}, ymm2, ymm3/m256/m64bcst, imm8AV/VAVX512VL AVX512FShuffle 128-bit packed double precision floating-point values selected by imm8 from ymm2 and ymm3/m256/m64bcst and place results in ymm1 subject to writemask k1.
EVEX.512.66.0F3A.W1 23 /r ib VSHUFF64x2 zmm1{k1}{z}, zmm2, zmm3/m512/m64bcst, imm8AV/VAVX512FShuffle 128-bit packed double precision floating-point values selected by imm8 from zmm2 and zmm3/m512/m64bcst and place results in zmm1 subject to writemask k1.
EVEX.256.66.0F3A.W0 43 /r ib VSHUFI32X4 ymm1{k1}{z}, ymm2, ymm3/m256/m32bcst, imm8AV/VAVX512VL AVX512FShuffle 128-bit packed double-word values selected by imm8 from ymm2 and ymm3/m256/m32bcst and place results in ymm1 subject to writemask k1.
EVEX.512.66.0F3A.W0 43 /r ib VSHUFI32x4 zmm1{k1}{z}, zmm2, zmm3/m512/m32bcst, imm8AV/VAVX512FShuffle 128-bit packed double-word values selected by imm8 from zmm2 and zmm3/m512/m32bcst and place results in zmm1 subject to writemask k1.
EVEX.256.66.0F3A.W1 43 /r ib VSHUFI64X2 ymm1{k1}{z}, ymm2, ymm3/m256/m64bcst, imm8AV/VAVX512VL AVX512FShuffle 128-bit packed quad-word values selected by imm8 from ymm2 and ymm3/m256/m64bcst and place results in ymm1 subject to writemask k1.
EVEX.512.66.0F3A.W1 43 /r ib VSHUFI64x2 zmm1{k1}{z}, zmm2, zmm3/m512/m64bcst, imm8AV/VAVX512FShuffle 128-bit packed quad-word values selected by imm8 from zmm2 and zmm3/m512/m64bcst and place results in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

256-bit Version: Moves one of the two 128-bit packed single-precision floating-point values from the first source operand (second operand) into the low 128-bit of the destination operand (first operand); moves one of the two packed 128-bit floating-point values from the second source operand (third operand) into the high 128-bit of the destination operand. The selector operand (imm8) determines which values are moved to the destination operand.

+

512-bit Version: Moves two of the four 128-bit packed single-precision floating-point values from the first source operand (second operand) into the low 256-bit of the destination operand (first operand); moves two of the four packed 128-bit floating-point values from the second source operand (third operand) into the high 256-bit of the destination operand. The selector operand (imm8) determines which values are moved to the destination operand.

+

The first source operand is a vector register. The second source operand can be a ZMM register, a 512-bit memory location or a 512-bit vector broadcasted from a 32/64-bit memory location. The destination operand is a vector register.

+

The writemask updates the destination operand with the granularity of 32/64-bit data elements.

+

Operation + ¶ +

+
Select2(SRC, control) {
+CASE (control[0]) OF
+    0: TMP := SRC[127:0];
+    1: TMP := SRC[255:128];
+ESAC;
+RETURN TMP
+}
+Select4(SRC, control) {
+CASE (control[1:0]) OF
+    0: TMP := SRC[127:0];
+    1: TMP := SRC[255:128];
+    2: TMP := SRC[383:256];
+    3: TMP := SRC[511:384];
+ESAC;
+RETURN TMP
+}
+
+

VSHUFF32x4 (EVEX versions) + ¶ +

+
(KL, VL) = (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF (EVEX.b = 1) AND (SRC2 *is memory*)
+        THEN TMP_SRC2[i+31:i] := SRC2[31:0]
+        ELSE TMP_SRC2[i+31:i] := SRC2[i+31:i]
+    FI;
+ENDFOR;
+IF VL = 256
+    TMP_DEST[127:0] := Select2(SRC1[255:0], imm8[0]);
+    TMP_DEST[255:128] := Select2(SRC2[255:0], imm8[1]);
+FI;
+IF VL = 512
+    TMP_DEST[127:0] := Select4(SRC1[511:0], imm8[1:0]);
+    TMP_DEST[255:128] := Select4(SRC1[511:0], imm8[3:2]);
+    TMP_DEST[383:256] := Select4(TMP_SRC2[511:0], imm8[5:4]);
+    TMP_DEST[511:384] := Select4(TMP_SRC2[511:0], imm8[7:6]);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    THEN DEST[i+31:i] := 0
+            FI;
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VSHUFF64x2 (EVEX encoded versions) + ¶ +

+
(KL, VL) = (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF (EVEX.b = 1) AND (SRC2 *is memory*)
+        THEN TMP_SRC2[i+63:i] := SRC2[63:0]
+        ELSE TMP_SRC2[i+63:i] := SRC2[i+63:i]
+    FI;
+ENDFOR;
+IF VL = 256
+    TMP_DEST[127:0] := Select2(SRC1[255:0], imm8[0]);
+    TMP_DEST[255:128] := Select2(SRC2[255:0], imm8[1]);
+FI;
+IF VL = 512
+    TMP_DEST[127:0] := Select4(SRC1[511:0], imm8[1:0]);
+    TMP_DEST[255:128] := Select4(SRC1[511:0], imm8[3:2]);
+    TMP_DEST[383:256] := Select4(TMP_SRC2[511:0], imm8[5:4]);
+    TMP_DEST[511:384] := Select4(TMP_SRC2[511:0], imm8[7:6]);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    THEN DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VSHUFI32x4 (EVEX encoded versions) + ¶ +

+
(KL, VL) = (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF (EVEX.b = 1) AND (SRC2 *is memory*)
+        THEN TMP_SRC2[i+31:i] := SRC2[31:0]
+        ELSE TMP_SRC2[i+31:i] := SRC2[i+31:i]
+    FI;
+ENDFOR;
+IF VL = 256
+    TMP_DEST[127:0] := Select2(SRC1[255:0], imm8[0]);
+    TMP_DEST[255:128] := Select2(SRC2[255:0], imm8[1]);
+FI;
+IF VL = 512
+    TMP_DEST[127:0] := Select4(SRC1[511:0], imm8[1:0]);
+    TMP_DEST[255:128] := Select4(SRC1[511:0], imm8[3:2]);
+    TMP_DEST[383:256] := Select4(TMP_SRC2[511:0], imm8[5:4]);
+    TMP_DEST[511:384] := Select4(TMP_SRC2[511:0], imm8[7:6]);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+31:i] := TMP_DEST[i+31:i]
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    THEN DEST[i+31:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VSHUFI64x2 (EVEX encoded versions) + ¶ +

+
(KL, VL) = (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF (EVEX.b = 1) AND (SRC2 *is memory*)
+        THEN TMP_SRC2[i+63:i] := SRC2[63:0]
+        ELSE TMP_SRC2[i+63:i] := SRC2[i+63:i]
+    FI;
+ENDFOR;
+IF VL = 256
+    TMP_DEST[127:0] := Select2(SRC1[255:0], imm8[0]);
+    TMP_DEST[255:128] := Select2(SRC2[255:0], imm8[1]);
+FI;
+IF VL = 512
+    TMP_DEST[127:0] := Select4(SRC1[511:0], imm8[1:0]);
+    TMP_DEST[255:128] := Select4(SRC1[511:0], imm8[3:2]);
+    TMP_DEST[383:256] := Select4(TMP_SRC2[511:0], imm8[5:4]);
+    TMP_DEST[511:384] := Select4(TMP_SRC2[511:0], imm8[7:6]);
+FI;
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask*
+        THEN DEST[i+63:i] := TMP_DEST[i+63:i]
+        ELSE
+            IF *merging-masking*
+                        ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                            ; zeroing-masking
+                    THEN DEST[i+63:i] := 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VSHUFI32x4 __m512i _mm512_shuffle_i32x4(__m512i a, __m512i b, int imm);
+
+
VSHUFI32x4 __m512i _mm512_mask_shuffle_i32x4(__m512i s, __mmask16 k, __m512i a, __m512i b, int imm);
+
+
VSHUFI32x4 __m512i _mm512_maskz_shuffle_i32x4( __mmask16 k, __m512i a, __m512i b, int imm);
+
+
VSHUFI32x4 __m256i _mm256_shuffle_i32x4(__m256i a, __m256i b, int imm);
+
+
VSHUFI32x4 __m256i _mm256_mask_shuffle_i32x4(__m256i s, __mmask8 k, __m256i a, __m256i b, int imm);
+
+
VSHUFI32x4 __m256i _mm256_maskz_shuffle_i32x4( __mmask8 k, __m256i a, __m256i b, int imm);
+
+
VSHUFF32x4 __m512 _mm512_shuffle_f32x4(__m512 a, __m512 b, int imm);
+
+
VSHUFF32x4 __m512 _mm512_mask_shuffle_f32x4(__m512 s, __mmask16 k, __m512 a, __m512 b, int imm);
+
+
VSHUFF32x4 __m512 _mm512_maskz_shuffle_f32x4( __mmask16 k, __m512 a, __m512 b, int imm);
+
+
VSHUFI64x2 __m512i _mm512_shuffle_i64x2(__m512i a, __m512i b, int imm);
+
+
VSHUFI64x2 __m512i _mm512_mask_shuffle_i64x2(__m512i s, __mmask8 k, __m512i a, __m512i b, int imm);
+
+
VSHUFI64x2 __m512i _mm512_maskz_shuffle_i64x2( __mmask8 k, __m512i a, __m512i b, int imm);
+
+
VSHUFF64x2 __m512d _mm512_shuffle_f64x2(__m512d a, __m512d b, int imm);
+
+
VSHUFF64x2 __m512d _mm512_mask_shuffle_f64x2(__m512d s, __mmask8 k, __m512d a, __m512d b, int imm);
+
+
VSHUFF64x2 __m512d _mm512_maskz_shuffle_f64x2( __mmask8 k, __m512d a, __m512d b, int imm);
+
+
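As an illustration of how imm8 selects 128-bit lanes (a sketch assuming AVX-512F; the function name is hypothetical), imm8 = 0x4E encodes lane selectors 2 and 3 from the first source and 0 and 1 from the second source:

#include <immintrin.h>

/* Build {a.lane2, a.lane3, b.lane0, b.lane1} (128-bit lanes, low to high).
   imm8 = 0x4E: bits [1:0]=2 and [3:2]=3 select from a,
   bits [5:4]=0 and [7:6]=1 select from b. */
__m512 cross_lane_combine(__m512 a, __m512 b)
{
    return _mm512_shuffle_f32x4(a, b, 0x4E);
}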

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-50, “Type E4NF Class Exception Conditions.”

+

Additionally:

+ + + +
#UDIf EVEX.L’L = 0 for VSHUFF32x4/VSHUFF64x2.
diff --git a/x86/vsqrtph.html b/x86/vsqrtph.html new file mode 100644 index 0000000..3cdfd54 --- /dev/null +++ b/x86/vsqrtph.html @@ -0,0 +1,113 @@ + +VSQRTPH + — Compute Square Root of Packed FP16 Values

VSQRTPH + — Compute Square Root of Packed FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.NP.MAP5.W0 51 /r VSQRTPH xmm1{k1}{z}, xmm2/m128/m16bcstAV/VAVX512-FP16 AVX512VLCompute square roots of the packed FP16 values in xmm2/m128/m16bcst, and store the result in xmm1 subject to writemask k1.
EVEX.256.NP.MAP5.W0 51 /r VSQRTPH ymm1{k1}{z}, ymm2/m256/m16bcstAV/VAVX512-FP16 AVX512VLCompute square roots of the packed FP16 values in ymm2/m256/m16bcst, and store the result in ymm1 subject to writemask k1.
EVEX.512.NP.MAP5.W0 51 /r VSQRTPH zmm1{k1}{z}, zmm2/m512/m16bcst {er}AV/VAVX512-FP16Compute square roots of the packed FP16 values in zmm2/m512/m16bcst, and store the result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction performs a packed FP16 square-root computation on the values from the source operand and stores the packed FP16 result in the destination operand. The destination elements are updated according to the writemask.

+

Operation + ¶ +

+

VSQRTPH dest{k1}, src + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR i := 0 to KL-1:
+    IF k1[i] or *no writemask*:
+        IF SRC is memory and (EVEX.b = 1):
+            tsrc := src.fp16[0]
+        ELSE:
+            tsrc := src.fp16[i]
+        DEST.fp16[i] := SQRT(tsrc)
+    ELSE IF *zeroing*:
+        DEST.fp16[i] := 0
+    //else DEST.fp16[i] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VSQRTPH __m128h _mm_mask_sqrt_ph (__m128h src, __mmask8 k, __m128h a);
+
+
VSQRTPH __m128h _mm_maskz_sqrt_ph (__mmask8 k, __m128h a);
+
+
VSQRTPH __m128h _mm_sqrt_ph (__m128h a);
+
+
VSQRTPH __m256h _mm256_mask_sqrt_ph (__m256h src, __mmask16 k, __m256h a);
+
+
VSQRTPH __m256h _mm256_maskz_sqrt_ph (__mmask16 k, __m256h a);
+
+
VSQRTPH __m256h _mm256_sqrt_ph (__m256h a);
+
+
VSQRTPH __m512h _mm512_mask_sqrt_ph (__m512h src, __mmask32 k, __m512h a);
+
+
VSQRTPH __m512h _mm512_maskz_sqrt_ph (__mmask32 k, __m512h a);
+
+
VSQRTPH __m512h _mm512_sqrt_ph (__m512h a);
+
+
VSQRTPH __m512h _mm512_mask_sqrt_round_ph (__m512h src, __mmask32 k, __m512h a, const int rounding);
+
+
VSQRTPH __m512h _mm512_maskz_sqrt_round_ph (__mmask32 k, __m512h a, const int rounding);
+
+
VSQRTPH __m512h _mm512_sqrt_round_ph (__m512h a, const int rounding);
+
+
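A brief usage sketch, assuming an AVX512-FP16 target (e.g., -mavx512fp16) and a compiler that provides the _Float16 type; the pointer and function names are illustrative.

#include <immintrin.h>

/* Square roots of 32 packed FP16 values in one instruction. */
void sqrt_fp16x32(_Float16 *dst, const _Float16 *src)
{
    __m512h v = _mm512_loadu_ph(src);
    _mm512_storeu_ph(dst, _mm512_sqrt_ph(v));
}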

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vsqrtsh.html b/x86/vsqrtsh.html new file mode 100644 index 0000000..7679e2e --- /dev/null +++ b/x86/vsqrtsh.html @@ -0,0 +1,83 @@ + +VSQRTSH + — Compute Square Root of Scalar FP16 Value

VSQRTSH + — Compute Square Root of Scalar FP16 Value

+ + + + + + + + + + + + + +
InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.F3.MAP5.W0 51 /r VSQRTSH xmm1{k1}{z}, xmm2, xmm3/m16 {er}AV/VAVX512-FP16Compute square root of the low FP16 value in xmm3/m16 and store the result in xmm1 subject to writemask k1. Bits 127:16 from xmm2 are copied to xmm1[127:16].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction performs a scalar FP16 square-root computation on the source operand and stores the FP16 result in the destination operand. Bits 127:16 of the destination operand are copied from the corresponding bits of the first source operand. Bits MAXVL-1:128 of the destination operand are zeroed. The low FP16 element of the destination is updated according to the writemask.

+

Operation + ¶ +

+

VSQRTSH dest{k1}, src1, src2 + ¶ +

+
IF k1[0] or *no writemask*:
+    DEST.fp16[0] := SQRT(src2.fp16[0])
+ELSE IF *zeroing*:
+    DEST.fp16[0] := 0
+//else DEST.fp16[0] remains unchanged
+DEST[127:16] := src1[127:16]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VSQRTSH __m128h _mm_mask_sqrt_round_sh (__m128h src, __mmask8 k, __m128h a, __m128h b, const int rounding);
+
+
VSQRTSH __m128h _mm_maskz_sqrt_round_sh (__mmask8 k, __m128h a, __m128h b, const int rounding);
+
+
VSQRTSH __m128h _mm_sqrt_round_sh (__m128h a, __m128h b, const int rounding);
+
+
VSQRTSH __m128h _mm_mask_sqrt_sh (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VSQRTSH __m128h _mm_maskz_sqrt_sh (__mmask8 k, __m128h a, __m128h b);
+
+
VSQRTSH __m128h _mm_sqrt_sh (__m128h a, __m128h b);
+
+
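A minimal sketch of the embedded-rounding ({er}) form, assuming AVX512-FP16 support; the rounding override applies only to this operation and leaves MXCSR untouched. The function name is illustrative.

#include <immintrin.h>

/* sqrt(b[0]) rounded toward zero; elements 7:1 of the result come from a. */
__m128h sqrt_low_rz(__m128h a, __m128h b)
{
    return _mm_sqrt_round_sh(a, b, _MM_FROUND_TO_ZERO | _MM_FROUND_NO_EXC);
}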

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Precision, Denormal

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vsubph.html b/x86/vsubph.html new file mode 100644 index 0000000..9f4a349 --- /dev/null +++ b/x86/vsubph.html @@ -0,0 +1,129 @@ + +VSUBPH + — Subtract Packed FP16 Values

VSUBPH + — Subtract Packed FP16 Values

+ + + + + + + + + + + + + + + + + + + + + + + + + +
InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.128.NP.MAP5.W0 5C /r VSUBPH xmm1{k1}{z}, xmm2, xmm3/m128/m16bcstAV/VAVX512-FP16 AVX512VLSubtract packed FP16 values from xmm3/m128/m16bcst to xmm2, and store the result in xmm1 subject to writemask k1.
EVEX.256.NP.MAP5.W0 5C /r VSUBPH ymm1{k1}{z}, ymm2, ymm3/m256/m16bcstAV/VAVX512-FP16 AVX512VLSubtract packed FP16 values from ymm3/m256/m16bcst to ymm2, and store the result in ymm1 subject to writemask k1.
EVEX.512.NP.MAP5.W0 5C /r VSUBPH zmm1{k1}{z}, zmm2, zmm3/m512/m16bcst {er}AV/VAVX512-FP16Subtract packed FP16 values from zmm3/m512/m16bcst to zmm2, and store the result in zmm1 subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AFullModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction subtracts the packed FP16 values of the second source operand from the corresponding elements in the first source operand, storing the packed FP16 result in the destination operand. The destination elements are updated according to the writemask.

+

Operation + ¶ +

+

VSUBPH (EVEX encoded versions) when src2 operand is a register + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+IF (VL = 512) AND (EVEX.b = 1):
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        DEST.fp16[j] := SRC1.fp16[j] - SRC2.fp16[j]
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

VSUBPH (EVEX encoded versions) when src2 operand is a memory source + ¶ +

+
VL = 128, 256 or 512
+KL := VL/16
+FOR j := 0 TO KL-1:
+    IF k1[j] OR *no writemask*:
+        IF EVEX.b = 1:
+            DEST.fp16[j] := SRC1.fp16[j] - SRC2.fp16[0]
+        ELSE:
+            DEST.fp16[j] := SRC1.fp16[j] - SRC2.fp16[j]
+    ELSE IF *zeroing*:
+        DEST.fp16[j] := 0
+    // else dest.fp16[j] remains unchanged
+DEST[MAXVL-1:VL] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VSUBPH __m128h _mm_mask_sub_ph (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VSUBPH __m128h _mm_maskz_sub_ph (__mmask8 k, __m128h a, __m128h b);
+
+
VSUBPH __m128h _mm_sub_ph (__m128h a, __m128h b);
+
+
VSUBPH __m256h _mm256_mask_sub_ph (__m256h src, __mmask16 k, __m256h a, __m256h b);
+
+
VSUBPH __m256h _mm256_maskz_sub_ph (__mmask16 k, __m256h a, __m256h b);
+
+
VSUBPH __m256h _mm256_sub_ph (__m256h a, __m256h b);
+
+
VSUBPH __m512h _mm512_mask_sub_ph (__m512h src, __mmask32 k, __m512h a, __m512h b);
+
+
VSUBPH __m512h _mm512_maskz_sub_ph (__mmask32 k, __m512h a, __m512h b);
+
+
VSUBPH __m512h _mm512_sub_ph (__m512h a, __m512h b);
+
+
VSUBPH __m512h _mm512_mask_sub_round_ph (__m512h src, __mmask32 k, __m512h a, __m512h b, int rounding);
+
+
VSUBPH __m512h _mm512_maskz_sub_round_ph (__mmask32 k, __m512h a, __m512h b, int rounding);
+
+
VSUBPH __m512h _mm512_sub_round_ph (__m512h a, __m512h b, int rounding);
+
+
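A short sketch of merging-masking with the subtract, assuming AVX512-FP16: lanes whose mask bit is clear keep the corresponding element of src rather than being zeroed. Names are illustrative.

#include <immintrin.h>

/* dst[j] = a[j] - b[j] where k[j] = 1; otherwise dst[j] = src[j]. */
__m512h sub_selected(__m512h src, __mmask32 k, __m512h a, __m512h b)
{
    return _mm512_mask_sub_ph(src, k, a, b);
}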

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Overflow, Precision, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instruction, see Table 2-46, “Type E2 Class Exception Conditions.”

diff --git a/x86/vsubsh.html b/x86/vsubsh.html new file mode 100644 index 0000000..eb47b34 --- /dev/null +++ b/x86/vsubsh.html @@ -0,0 +1,87 @@ + +VSUBSH + — Subtract Scalar FP16 Value

VSUBSH + — Subtract Scalar FP16 Value

+ + + + + + + + + + + + + +
InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.F3.MAP5.W0 5C /r VSUBSH xmm1{k1}{z}, xmm2, xmm3/m16 {er}AV/VAVX512-FP16Subtract the low FP16 value in xmm3/m16 from xmm2 and store the result in xmm1 subject to writemask k1. Bits 127:16 from xmm2 are copied to xmm1[127:16].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

This instruction subtracts the low FP16 value of the second source operand from the corresponding value in the first source operand, storing the FP16 result in the destination operand. Bits 127:16 of the destination operand are copied from the corresponding bits of the first source operand. Bits MAXVL-1:128 of the destination operand are zeroed. The low FP16 element of the destination is updated according to the writemask.

+

Operation + ¶ +

+

VSUBSH (EVEX encoded versions) + ¶ +

+
IF EVEX.b = 1 and SRC2 is a register:
+    SET_RM(EVEX.RC)
+ELSE
+    SET_RM(MXCSR.RC)
+IF k1[0] OR *no writemask*:
+    DEST.fp16[0] := SRC1.fp16[0] - SRC2.fp16[0]
+ELSE IF *zeroing*:
+    DEST.fp16[0] := 0
+// else dest.fp16[0] remains unchanged
+DEST[127:16] := SRC1[127:16]
+DEST[MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VSUBSH __m128h _mm_mask_sub_round_sh (__m128h src, __mmask8 k, __m128h a, __m128h b, int rounding);
+
+
VSUBSH __m128h _mm_maskz_sub_round_sh (__mmask8 k, __m128h a, __m128h b, int rounding);
+
+
VSUBSH __m128h _mm_sub_round_sh (__m128h a, __m128h b, int rounding);
+
+
VSUBSH __m128h _mm_mask_sub_sh (__m128h src, __mmask8 k, __m128h a, __m128h b);
+
+
VSUBSH __m128h _mm_maskz_sub_sh (__mmask8 k, __m128h a, __m128h b);
+
+
VSUBSH __m128h _mm_sub_sh (__m128h a, __m128h b);
+
+
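A minimal zero-masking sketch, assuming AVX512-FP16: if k[0] is clear the low result element becomes 0, and bits 127:16 are still copied from the first source. The function name is illustrative.

#include <immintrin.h>

/* result[0] = a[0] - b[0] if k[0] is set, else 0; result elements 7:1 = a[7:1]. */
__m128h sub_low_or_zero(__mmask8 k, __m128h a, __m128h b)
{
    return _mm_maskz_sub_sh(k, a, b);
}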

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Underflow, Overflow, Precision, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”

diff --git a/x86/vtestpd.vtestps.html b/x86/vtestpd.vtestps.html new file mode 100644 index 0000000..bf86ea6 --- /dev/null +++ b/x86/vtestpd.vtestps.html @@ -0,0 +1,171 @@ + +VTESTPD/VTESTPS + — Packed Bit Test

VTESTPD/VTESTPS + — Packed Bit Test

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp /En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.128.66.0F38.W0 0E /r VTESTPS xmm1, xmm2/m128RMV/VAVXSet ZF and CF depending on sign bit AND and ANDN of packed single-precision floating-point sources.
VEX.256.66.0F38.W0 0E /r VTESTPS ymm1, ymm2/m256RMV/VAVXSet ZF and CF depending on sign bit AND and ANDN of packed single-precision floating-point sources.
VEX.128.66.0F38.W0 0F /r VTESTPD xmm1, xmm2/m128RMV/VAVXSet ZF and CF depending on sign bit AND and ANDN of packed double precision floating-point sources.
VEX.256.66.0F38.W0 0F /r VTESTPD ymm1, ymm2/m256RMV/VAVXSet ZF and CF depending on sign bit AND and ANDN of packed double precision floating-point sources.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
RMModRM:reg (r)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

VTESTPS performs a bitwise comparison of all the sign bits of the packed single-precision elements in the first source operand and the corresponding sign bits in the second source operand. If the AND of the source sign bits with the dest sign bits produces all zeros, the ZF is set; else the ZF is clear. If the AND of the source sign bits with the inverted dest sign bits produces all zeros, the CF is set; else the CF is clear. An attempt to execute VTESTPS with VEX.W=1 will cause #UD.

+

VTESTPD performs a bitwise comparison of all the sign bits of the double precision elements in the first source operand and the corresponding sign bits in the second source operand. If the AND of the source sign bits with the dest sign bits produces all zeros, the ZF is set; else the ZF is clear. If the AND of the source sign bits with the inverted dest sign bits produces all zeros, the CF is set; else the CF is clear. An attempt to execute VTESTPD with VEX.W=1 will cause #UD.

+

The first source register is specified by the ModR/M reg field.

+

128-bit version: The first source register is an XMM register. The second source register can be an XMM register or a 128-bit memory location. The destination register is not modified.

+

VEX.256 encoded version: The first source register is a YMM register. The second source register can be a YMM register or a 256-bit memory location. The destination register is not modified.

+

Note: In VEX-encoded versions, VEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Operation + ¶ +

+

VTESTPS (128-bit version) + ¶ +

+
TEMP[127:0] := SRC[127:0] AND DEST[127:0]
+IF (TEMP[31] = TEMP[63] = TEMP[95] = TEMP[127] = 0)
+    THEN ZF := 1;
+    ELSE ZF := 0;
+TEMP[127:0] := SRC[127:0] AND NOT DEST[127:0]
+IF (TEMP[31] = TEMP[63] = TEMP[95] = TEMP[127] = 0)
+    THEN CF := 1;
+    ELSE CF := 0;
+DEST (unmodified)
+AF := OF := PF := SF := 0;
+
+

VTESTPS (VEX.256 encoded version) + ¶ +

+
TEMP[255:0] := SRC[255:0] AND DEST[255:0]
+IF (TEMP[31] = TEMP[63] = TEMP[95] = TEMP[127] = TEMP[159] = TEMP[191] = TEMP[223] = TEMP[255] = 0)
+    THEN ZF := 1;
+    ELSE ZF := 0;
+TEMP[255:0] := SRC[255:0] AND NOT DEST[255:0]
+IF (TEMP[31] = TEMP[63] = TEMP[95] = TEMP[127] = TEMP[159] = TEMP[191] = TEMP[223] = TEMP[255] = 0)
+    THEN CF := 1;
+    ELSE CF := 0;
+DEST (unmodified)
+AF := OF := PF := SF := 0;
+
+

VTESTPD (128-bit version) + ¶ +

+
TEMP[127:0] := SRC[127:0] AND DEST[127:0]
+IF ( TEMP[63] = TEMP[127] = 0)
+    THEN ZF := 1;
+    ELSE ZF := 0;
+TEMP[127:0] := SRC[127:0] AND NOT DEST[127:0]
+IF ( TEMP[63] = TEMP[127] = 0)
+    THEN CF := 1;
+    ELSE CF := 0;
+DEST (unmodified)
+AF := OF := PF := SF := 0;
+
+

VTESTPD (VEX.256 encoded version) + ¶ +

+
TEMP[255:0] := SRC[255:0] AND DEST[255:0]
+IF (TEMP[63] = TEMP[127] = TEMP[191] = TEMP[255] = 0)
+    THEN ZF := 1;
+    ELSE ZF := 0;
+TEMP[255:0] := SRC[255:0] AND NOT DEST[255:0]
+IF (TEMP[63] = TEMP[127] = TEMP[191] = TEMP[255] = 0)
+    THEN CF := 1;
+    ELSE CF := 0;
+DEST (unmodified)
+AF := OF := PF := SF := 0;
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VTESTPS int _mm256_testz_ps (__m256 s1, __m256 s2);
+
+
int _mm256_testc_ps (__m256 s1, __m256 s2);
+
+
int _mm256_testnzc_ps (__m256 s1, __m256 s2);
+
+
int _mm_testz_ps (__m128 s1, __m128 s2);
+
+
int _mm_testc_ps (__m128 s1, __m128 s2);
+
+
int _mm_testnzc_ps (__m128 s1, __m128 s2);
+
+
VTESTPD int _mm256_testz_pd (__m256d s1, __m256d s2);
+
+
int _mm256_testc_pd (__m256d s1, __m256d s2);
+
+
int _mm256_testnzc_pd (__m256d s1, __m256d s2);
+
+
int _mm_testz_pd (__m128d s1, __m128d s2);
+
+
int _mm_testc_pd (__m128d s1, __m128d s2);
+
+
int _mm_testnzc_pd (__m128d s1, __m128d s2);
+
+
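Because testing a vector against itself ANDs each sign bit with itself, the ZF-based intrinsic gives a cheap "all elements non-negative" check; a sketch assuming AVX support, with an illustrative function name.

#include <immintrin.h>

/* Returns nonzero when no element of v has its sign bit set. */
int all_nonnegative(__m256 v)
{
    return _mm256_testz_ps(v, v);   /* ZF = 1 iff sign(v) AND sign(v) == 0 */
}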

Flags Affected + ¶ +

+

The OF, AF, PF, SF flags are cleared and the ZF, CF flags are set according to the operation.

+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-21, “Type 4 Class Exception Conditions.”

+

Additionally:

+ + + + + +
#UDIf VEX.vvvv ≠ 1111B.
If VEX.W = 1 for VTESTPS or VTESTPD.
diff --git a/x86/vucomish.html b/x86/vucomish.html new file mode 100644 index 0000000..7dd2ed7 --- /dev/null +++ b/x86/vucomish.html @@ -0,0 +1,95 @@ + +VUCOMISH + — Unordered Compare Scalar FP16 Values and Set EFLAGS

VUCOMISH + — Unordered Compare Scalar FP16 Values and Set EFLAGS

+ + + + + + + + + + + + + +
InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
EVEX.LLIG.NP.MAP5.W0 2E /r VUCOMISH xmm1, xmm2/m16 {sae}AV/VAVX512-FP16Compare low FP16 values in xmm1 and xmm2/m16 and set the EFLAGS flags accordingly.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
AScalarModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

This instruction compares the FP16 values in the low word of operand 1 (first operand) and operand 2 (second operand), and sets the ZF, PF, and CF flags in the EFLAGS register according to the result (unordered, greater than, less than, or equal). The OF, SF and AF flags in the EFLAGS register are set to 0. The unordered result is returned if either source operand is a NaN (QNaN or SNaN).

+

Operand 1 is an XMM register; operand 2 can be an XMM register or a 16-bit memory location.

+

The VUCOMISH instruction differs from the VCOMISH instruction in that it signals a SIMD floating-point invalid operation exception (#I) only if a source operand is an SNaN. The VCOMISH instruction signals an invalid numeric exception when a source operand is either a QNaN or SNaN.

+

The EFLAGS register is not updated if an unmasked SIMD floating-point exception is generated. EVEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD.

+

Operation + ¶ +

+

VUCOMISH + ¶ +

+
RESULT := UnorderedCompare(SRC1.fp16[0],SRC2.fp16[0])
+if RESULT is UNORDERED:
+    ZF, PF, CF := 1, 1, 1
+else if RESULT is GREATER_THAN:
+    ZF, PF, CF := 0, 0, 0
+else if RESULT is LESS_THAN:
+    ZF, PF, CF := 0, 0, 1
+else: // RESULT is EQUALS
+    ZF, PF, CF := 1, 0, 0
+OF, AF, SF := 0, 0, 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VUCOMISH int _mm_ucomieq_sh (__m128h a, __m128h b);
+
+
VUCOMISH int _mm_ucomige_sh (__m128h a, __m128h b);
+
+
VUCOMISH int _mm_ucomigt_sh (__m128h a, __m128h b);
+
+
VUCOMISH int _mm_ucomile_sh (__m128h a, __m128h b);
+
+
VUCOMISH int _mm_ucomilt_sh (__m128h a, __m128h b);
+
+
VUCOMISH int _mm_ucomineq_sh (__m128h a, __m128h b);
+
+
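A usage sketch assuming AVX512-FP16: the unordered compare returns 0 for NaN inputs (only an SNaN raises #I), so the helper below is a NaN-tolerant less-than on the low FP16 elements. The function name is illustrative.

#include <immintrin.h>

/* 1 when a[0] < b[0]; 0 otherwise, including the unordered (NaN) case. */
int fp16_less(__m128h a, __m128h b)
{
    return _mm_ucomilt_sh(a, b);
}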

SIMD Floating-Point Exceptions + ¶ +

+

Invalid, Denormal.

+

Other Exceptions + ¶ +

+

EVEX-encoded instructions, see Table 2-48, “Type E3NF Class Exception Conditions.”

diff --git a/x86/vzeroall.html b/x86/vzeroall.html new file mode 100644 index 0000000..efa5998 --- /dev/null +++ b/x86/vzeroall.html @@ -0,0 +1,74 @@ + +VZEROALL + — Zero XMM, YMM, and ZMM Registers

VZEROALL + — Zero XMM, YMM, and ZMM Registers

+ + + + + + + + + + + + + +
Opcode/InstructionOp /En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.256.0F.WIG 77 VZEROALLZOV/VAVXZero some of the XMM, YMM, and ZMM registers.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

In 64-bit mode, the instruction zeroes XMM0-XMM15, YMM0-YMM15, and ZMM0-ZMM15. Outside 64-bit mode, it zeroes only XMM0-XMM7, YMM0-YMM7, and ZMM0-ZMM7. VZEROALL does not modify ZMM16-ZMM31.

+

Note: VEX.vvvv is reserved and must be 1111b, otherwise instructions will #UD. In Compatibility and legacy 32-bit mode only the lower 8 registers are modified.

+

Operation + ¶ +

+
simd_reg_file[][] is a two dimensional array representing the SIMD register file containing all the overlapping xmm, ymm, and zmm
+registers present in that implementation. The major dimension is the register number: 0 for xmm0, ymm0, and zmm0; 1 for xmm1,
+ymm1, and zmm1; etc. The minor dimension size is the width of the implemented SIMD state measured in bits. On a machine
+supporting Intel AVX-512, the width is 512.
+
+

VZEROALL (VEX.256 encoded version) + ¶ +

+
IF (64-bit mode)
+    limit :=15
+ELSE
+    limit := 7
+FOR i in 0 .. limit:
+    simd_reg_file[i][MAXVL-1:0] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VZEROALL: _mm256_zeroall()
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-25, “Type 8 Class Exception Conditions.”

diff --git a/x86/vzeroupper.html b/x86/vzeroupper.html new file mode 100644 index 0000000..2c36d99 --- /dev/null +++ b/x86/vzeroupper.html @@ -0,0 +1,74 @@ + +VZEROUPPER + — Zero Upper Bits of YMM and ZMM Registers

VZEROUPPER + — Zero Upper Bits of YMM and ZMM Registers

+ + + + + + + + + + + + + +
Opcode/InstructionOp /En64/32 bit Mode SupportCPUID Feature FlagDescription
VEX.128.0F.WIG 77 VZEROUPPERZOV/VAVXZero bits in positions 128 and higher of some YMM and ZMM registers.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

In 64-bit mode, the instruction zeroes the bits in positions 128 and higher in YMM0-YMM15 and ZMM0-ZMM15. Outside 64-bit mode, it zeroes those bits only in YMM0-YMM7 and ZMM0-ZMM7. VZEROUPPER does not modify the lower 128 bits of these registers and it does not modify ZMM16-ZMM31.

+

This instruction is recommended when transitioning between AVX and legacy SSE code; it will eliminate performance penalties caused by false dependencies.

+

Note: VEX.vvvv is reserved and must be 1111b otherwise instructions will #UD. In Compatibility and legacy 32-bit mode only the lower 8 registers are modified.

+

Operation + ¶ +

+
simd_reg_file[][] is a two dimensional array representing the SIMD register file containing all the overlapping xmm, ymm, and zmm
+registers present in that implementation. The major dimension is the register number: 0 for xmm0, ymm0, and zmm0; 1 for xmm1,
+ymm1, and zmm1; etc. The minor dimension size is the width of the implemented SIMD state measured in bits.
+
+

VZEROUPPER + ¶ +

+
IF (64-bit mode)
+    limit :=15
+ELSE
+    limit := 7
+FOR i in 0 .. limit:
+    simd_reg_file[i][MAXVL-1:128] := 0
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VZEROUPPER: _mm256_zeroupper()
+
+
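A sketch of the recommended transition, assuming an AVX-capable target; legacy_sse_routine is a hypothetical SSE-only callee. Compilers following the common calling conventions usually emit VZEROUPPER automatically, so the explicit call mainly matters in hand-written kernels.

#include <immintrin.h>

void legacy_sse_routine(void);   /* hypothetical SSE-only callee */

/* Clear the upper YMM/ZMM state before entering legacy SSE code to avoid
   false-dependency transition penalties. */
void call_legacy(void)
{
    _mm256_zeroupper();
    legacy_sse_routine();
}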

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

See Table 2-25, “Type 8 Class Exception Conditions.”

diff --git a/x86/wait.fwait.html b/x86/wait.fwait.html new file mode 100644 index 0000000..a734692 --- /dev/null +++ b/x86/wait.fwait.html @@ -0,0 +1,93 @@ + +WAIT/FWAIT + — Wait

WAIT/FWAIT + — Wait

+ + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
9BWAITZOValidValidCheck pending unmasked floating-point exceptions.
9BFWAITZOValidValidCheck pending unmasked floating-point exceptions.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Causes the processor to check for and handle pending, unmasked, floating-point exceptions before proceeding. (FWAIT is an alternate mnemonic for WAIT.)

+

This instruction is useful for synchronizing exceptions in critical sections of code. Coding a WAIT instruction after a floating-point instruction ensures that any unmasked floating-point exceptions the instruction may raise are handled before the processor can modify the instruction’s results. See the section titled “Floating-Point Exception Synchronization” in Chapter 8 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, for more information on using the WAIT/FWAIT instruction.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
CheckForPendingUnmaskedFloatingPointExceptions;
+
+

FPU Flags Affected + ¶ +

+

The C0, C1, C2, and C3 flags are undefined.

+

Floating-Point Exceptions + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + +
#NMIf CR0.MP[bit 1] = 1 and CR0.TS[bit 3] = 1.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/wakeup.html b/x86/wakeup.html new file mode 100644 index 0000000..b73d18c --- /dev/null +++ b/x86/wakeup.html @@ -0,0 +1,180 @@ + +GETSEC[WAKEUP] + — Wake Up Sleeping Processors in Measured Environment

GETSEC[WAKEUP] + — Wake Up Sleeping Processors in Measured Environment

+ + + + + + + + + +
OpcodeInstructionDescription
NP 0F 37 (EAX=8)GETSEC[WAKEUP]Wake up the responding logical processors from the SENTER sleep state.
+

Description + ¶ +

+

The GETSEC[WAKEUP] leaf function broadcasts a wake-up message to all logical processors currently in the SENTER sleep state. This GETSEC leaf must be executed only by the ILP, in order to wake up the RLPs. Responding logical processors (RLPs) enter the SENTER sleep state after completion of the SENTER rendezvous sequence.

+

The GETSEC[WAKEUP] instruction may only be executed:

+
    +
  • In a measured environment as initiated by execution of GETSEC[SENTER].
  • +
  • Outside of authenticated code execution mode.
  • +
  • Execution is not allowed unless the processor is in protected mode with CPL = 0 and EFLAGS.VM = 0.
  • +
  • In addition, the logical processor must be designated as the boot-strap processor as configured by setting IA32_APIC_BASE.BSP = 1.
+

If these conditions are not met, attempts to execute GETSEC[WAKEUP] result in a general protection violation.

+

An RLP exits the SENTER sleep state and starts execution in response to a WAKEUP signal initiated by the ILP’s execution of GETSEC[WAKEUP]. The RLP retrieves a pointer to a data structure that contains information to enable execution from a defined entry point. This data structure is located using a physical address held in the Intel® TXT-capable chipset configuration register LT.MLE.JOIN. The register is publicly writable in the chipset by all processors and is not restricted by the Intel® TXT-capable chipset configuration register lock status. The format of this data structure is defined in Table 7-12.

+
+ + + + + + + + + + + + + + + +
OffsetField
0GDT limit
4GDT base pointer
8Segment selector initializer
12EIP
+
Table 7-12. RLP MVMM JOIN Data Structure
+

The MLE JOIN data structure contains the information necessary to initialize RLP processor state and permit the processor to join the measured environment. The GDTR, LIP, and CS, DS, SS, and ES selector values are initialized using this data structure. The CS selector index is derived directly from the segment selector initializer field; DS, SS, and ES selectors are initialized to CS+8. The segment descriptor fields are initialized implicitly with BASE = 0, LIMIT = FFFFFH, G = 1, D = 1, P = 1, S = 1; read/write/access for DS, SS, and ES; and execute/read/access for CS. It is the responsibility of external software to establish a GDT pointed to by the MLE JOIN data structure that contains descriptor entries consistent with the implicit settings initialized by the processor (see Table 7-6). Certain states from the content of Table 7-12 are checked for consistency by the processor prior to execution. A failure of any consistency check results in the RLP aborting entry into the protected environment and signaling an Intel® TXT shutdown condition. The specific checks performed are documented later in this section. After successful completion of processor consistency checks and subsequent initialization, RLP execution in the measured environment begins from the entry point at offset 12 (as indicated in Table 7-12).

+
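One possible C rendering of the Table 7-12 layout follows; the struct and field names are invented for illustration, and only the offsets and 32-bit field widths come from the table.

#include <stdint.h>

struct mle_join {
    uint32_t gdt_limit;      /* offset 0:  GDT limit                    */
    uint32_t gdt_base;       /* offset 4:  GDT base pointer             */
    uint32_t seg_sel_init;   /* offset 8:  segment selector initializer */
    uint32_t entry_eip;      /* offset 12: RLP entry point (EIP)        */
};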

Operation + ¶ +

+
(* The state of the internal flag ACMODEFLAG and SENTERFLAG persist across instruction boundary *)
+IF (CR4.SMXE=0)
+    THEN #UD;
+ELSE IF (in VMX non-root operation)
+    THEN VM Exit (reason=”GETSEC instruction”);
+ELSE IF (GETSEC leaf unsupported)
+    THEN #UD;
+ELSE IF ((CR0.PE=0) or (CPL>0) or (EFLAGS.VM=1) or (SENTERFLAG=0) or (ACMODEFLAG=1) or (IN_SMM=1) or (in VMX operation) or
+(IA32_APIC_BASE.BSP=0) or (TXT chipset not present))
+    THEN #GP(0);
+ELSE
+    SignalTXTMsg(WAKEUP);
+END;
+
+

RLP_SIPI_WAKEUP_FROM_SENTER_ROUTINE: (RLP Only) + ¶ +

+
WHILE (no SignalWAKEUP event);
+IF (IA32_SMM_MONITOR_CTL[0] ≠ ILP.IA32_SMM_MONITOR_CTL[0])
+    THEN TXT-SHUTDOWN(#IllegalEvent)
+IF (IA32_SMM_MONITOR_CTL[0] = 0)
+    THEN Unmask SMI pin event;
+ELSE
+    Mask SMI pin event;
+Mask A20M, and NMI external pin events (unmask INIT);
+Mask SignalWAKEUP event;
+Invalidate processor TLB(s);
+Drain outgoing transactions;
+TempGDTRLIMIT := LOAD(LT.MLE.JOIN);
+TempGDTRBASE := LOAD(LT.MLE.JOIN+4);
+TempSegSel := LOAD(LT.MLE.JOIN+8);
+TempEIP := LOAD(LT.MLE.JOIN+12);
+IF (TempGDTRLIMIT & FFFF0000h)
+    THEN TXT-SHUTDOWN(#BadJOINFormat);
+IF ((TempSegSel > TempGDTRLIMIT-15) or (TempSegSel < 8))
+    THEN TXT-SHUTDOWN(#BadJOINFormat);
+IF ((TempSegSel.TI=1) or (TempSegSel.RPL≠0))
+    THEN TXT-SHUTDOWN(#BadJOINFormat);
+CR0.[PG,CD,NW,AM,WP] := 0;
+CR0.[NE,PE] := 1;
+CR4 := 00004000h;
+EFLAGS := 00000002h;
+IA32_EFER := 0;
+GDTR.BASE := TempGDTRBASE;
+GDTR.LIMIT := TempGDTRLIMIT;
+CS.SEL := TempSegSel;
+CS.BASE := 0;
+CS.LIMIT := FFFFFh;
+CS.G := 1;
+CS.D := 1;
+CS.AR := 9Bh;
+DS.SEL := TempSegSel+8;
+DS.BASE := 0;
+DS.LIMIT := FFFFFh;
+DS.G := 1;
+DS.D := 1;
+DS.AR := 93h;
+SS := DS;
+ES := DS;
+DR7 := 00000400h;
+IA32_DEBUGCTL := 0;
+EIP := TempEIP;
+END;
+
+

Flags Affected + ¶ +

+

None.

+

Use of Prefixes + ¶ +

+

LOCK Causes #UD.

+

REP* Cause #UD (includes REPNE/REPNZ and REP/REPE/REPZ).

+

Operand size Causes #UD.

+

NP 66/F2/F3 prefixes are not allowed.

+

Segment overrides Ignored.

+

Address size Ignored.

+

REX Ignored.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + +
#UDIf CR4.SMXE = 0.
If GETSEC[WAKEUP] is not reported as supported by GETSEC[CAPABILITIES].
#GP(0)If CR0.PE = 0 or CPL > 0 or EFLAGS.VM = 1.
If in VMX operation.
If a protected partition is not already active or the processor is currently in authenticated code mode.
If the processor is in SMM.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + +
#UDIf CR4.SMXE = 0.
If GETSEC[WAKEUP] is not reported as supported by GETSEC[CAPABILITIES].
#GP(0)GETSEC[WAKEUP] is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + +
#UDIf CR4.SMXE = 0.
If GETSEC[WAKEUP] is not reported as supported by GETSEC[CAPABILITIES].
#GP(0)GETSEC[WAKEUP] is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

All protected mode exceptions apply.

+

64-Bit Mode Exceptions + ¶ +

+

All protected mode exceptions apply.

+

VM-exit Condition + ¶ +

+

Reason (GETSEC) If in VMX non-root operation.

diff --git a/x86/wbinvd.html b/x86/wbinvd.html new file mode 100644 index 0000000..4d9666b --- /dev/null +++ b/x86/wbinvd.html @@ -0,0 +1,102 @@ + +WBINVD + — Write Back and Invalidate Cache

WBINVD + — Write Back and Invalidate Cache

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 09WBINVDZOValidValidWrite back and flush Internal caches; initiate writing-back and flushing of external caches.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Writes back all modified cache lines in the processor’s internal cache to main memory and invalidates (flushes) the internal caches. The instruction then issues a special-function bus cycle that directs external caches to also write back modified data and another bus cycle to indicate that the external caches should be invalidated.

+

After executing this instruction, the processor does not wait for the external caches to complete their write-back and flushing operations before proceeding with instruction execution. It is the responsibility of hardware to respond to the cache write-back and flush signals. The amount of time or cycles for WBINVD to complete will vary due to size and other factors of different cache hierarchies. As a consequence, the use of the WBINVD instruction can have an impact on logical processor interrupt/event response time. Additional information on WBINVD behavior in a cache hierarchy with hierarchical sharing topology can be found in Chapter 2 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A.

+

The WBINVD instruction is a privileged instruction. When the processor is running in protected mode, the CPL of a program or procedure must be 0 to execute this instruction. This instruction is also a serializing instruction (see “Serializing Instructions” in Chapter 9 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A).

+

In situations where cache coherency with main memory is not a concern, software can use the INVD instruction.

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

IA-32 Architecture Compatibility + ¶ +

+

The WBINVD instruction is implementation dependent, and its function may be implemented differently on future Intel 64 and IA-32 processors. The instruction is not supported on IA-32 processors earlier than the Intel486 processor.

+

Operation + ¶ +

+
WriteBack(InternalCaches);
+Flush(InternalCaches);
+SignalWriteBack(ExternalCaches);
+SignalFlush(ExternalCaches);
+Continue; (* Continue execution *)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
WBINVD void _wbinvd(void);
+
+
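For compilers that do not expose the _wbinvd intrinsic, a minimal sketch using GCC/Clang inline assembly is shown below. It assumes execution at CPL 0 (for example, inside a kernel module), since executing WBINVD at a lower privilege level raises #GP(0).

/* Minimal sketch: issue WBINVD from ring 0 using inline assembly.
 * GCC/Clang syntax; faults with #GP(0) if executed at CPL > 0. */
static inline void flush_all_caches(void)
{
    __asm__ __volatile__("wbinvd" ::: "memory");
}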

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + +
#GP(0)If the current privilege level is not 0.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#GP(0)WBINVD cannot be executed in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/wbnoinvd.html b/x86/wbnoinvd.html new file mode 100644 index 0000000..4b5d0de --- /dev/null +++ b/x86/wbnoinvd.html @@ -0,0 +1,94 @@ + +WBNOINVD + — Write Back and Do Not Invalidate Cache

WBNOINVD + — Write Back and Do Not Invalidate Cache

+ + + + + + + + + + + + + +
Opcode / InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
F3 0F 09 WBNOINVDZOV/VWBNOINVDWrite back and do not flush internal caches; initiate writing-back without flushing of external caches.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/AN/A
+

Description + ¶ +

+

The WBNOINVD instruction writes back all modified cache lines in the processor’s internal cache to main memory but does not invalidate (flush) the internal caches.

+

After executing this instruction, the processor does not wait for the external caches to complete their write-back operation before proceeding with instruction execution. It is the responsibility of hardware to respond to the cache write-back signal. The amount of time or cycles for WBNOINVD to complete will vary due to size and other factors of different cache hierarchies. As a consequence, the use of the WBNOINVD instruction can have an impact on logical processor interrupt/event response time.

+

The WBNOINVD instruction is a privileged instruction. When the processor is running in protected mode, the CPL of a program or procedure must be 0 to execute this instruction. This instruction is also a serializing instruction (see “Serializing Instructions” in Chapter 9 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A).

+

This instruction’s operation is the same in non-64-bit modes and 64-bit mode.

+

Operation + ¶ +

+
WriteBack(InternalCaches);
+Continue; (* Continue execution *)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
WBNOINVD void _wbnoinvd(void);
+
+
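A minimal sketch follows, assuming GCC/Clang inline assembly and an assembler recent enough to know the mnemonic; on processors without WBNOINVD support the F3 prefix is ignored and the same bytes execute as WBINVD, so the write-back still occurs (the caches are additionally invalidated).

/* Sketch: write back all modified lines without invalidating (ring 0 only).
 * On CPUs lacking WBNOINVD, the F3 prefix is ignored and this acts as WBINVD. */
static inline void writeback_all_caches(void)
{
    __asm__ __volatile__("wbnoinvd" ::: "memory");
}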

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + +
#GP(0)If the current privilege level is not 0.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#GP(0)WBNOINVD cannot be executed in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/wrfsbase.wrgsbase.html b/x86/wrfsbase.wrgsbase.html new file mode 100644 index 0000000..9fe4fac --- /dev/null +++ b/x86/wrfsbase.wrgsbase.html @@ -0,0 +1,125 @@ + +WRFSBASE/WRGSBASE + — Write FS/GS Segment Base

WRFSBASE/WRGSBASE + — Write FS/GS Segment Base

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32-bit ModeCPUID Feature FlagDescription
F3 0F AE /2 WRFSBASE r32MV/IFSGSBASELoad the FS base address with the 32-bit value in the source register.
F3 REX.W 0F AE /2 WRFSBASE r64MV/IFSGSBASELoad the FS base address with the 64-bit value in the source register.
F3 0F AE /3 WRGSBASE r32MV/IFSGSBASELoad the GS base address with the 32-bit value in the source register.
F3 REX.W 0F AE /3 WRGSBASE r64MV/IFSGSBASELoad the GS base address with the 64-bit value in the source register.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r)N/AN/AN/A
+

Description + ¶ +

+

Loads the FS or GS segment base address with the general-purpose register indicated by the modR/M:r/m field.

+

The source operand may be either a 32-bit or a 64-bit general-purpose register. The REX.W prefix indicates the operand size is 64 bits. If no REX.W prefix is used, the operand size is 32 bits; the upper 32 bits of the source register are ignored and upper 32 bits of the base address (for FS or GS) are cleared.

+

This instruction is supported only in 64-bit mode.

+

Operation + ¶ +

+
FS/GS segment base address := SRC;
+
+

Flags Affected + ¶ +

+

None.

+

C/C++ Compiler Intrinsic Equivalent + ¶ +

+
WRFSBASE void _writefsbase_u32( unsigned int );
+
+
WRFSBASE _writefsbase_u64( unsigned __int64 );
+
+
WRGSBASE void _writegsbase_u32( unsigned int );
+
+
WRGSBASE _writegsbase_u64( unsigned __int64 );
+
+
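A minimal user-space sketch using the intrinsics above; it assumes compilation with FSGSBASE support enabled (for example, -mfsgsbase on GCC/Clang) and an operating system that has set CR4.FSGSBASE, since otherwise the instruction raises #UD. The per-thread block is an illustrative placeholder.

/* Sketch: point the GS base at a per-thread block from user space.
 * Requires CPUID FSGSBASE support and CR4.FSGSBASE = 1 (OS-enabled). */
#include <immintrin.h>
#include <stdint.h>

struct tls_block { uint64_t scratch; };

static void install_gs_base(struct tls_block *blk)
{
    _writegsbase_u64((uint64_t)blk);   /* emits WRGSBASE r64 */
}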

Protected Mode Exceptions + ¶ +

+ + + +
#UDThe WRFSBASE and WRGSBASE instructions are not recognized in protected mode.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe WRFSBASE and WRGSBASE instructions are not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe WRFSBASE and WRGSBASE instructions are not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+ + + +
#UDThe WRFSBASE and WRGSBASE instructions are not recognized in compatibility mode.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + +
#UDIf the LOCK prefix is used.
If CR4.FSGSBASE[bit 16] = 0.
If CPUID.07H.0H:EBX.FSGSBASE[bit 0] = 0
#GP(0)If the source register contains a non-canonical address.
diff --git a/x86/wrmsr.html b/x86/wrmsr.html new file mode 100644 index 0000000..d262131 --- /dev/null +++ b/x86/wrmsr.html @@ -0,0 +1,107 @@ + +WRMSR + — Write to Model Specific Register

WRMSR + — Write to Model Specific Register

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F 30WRMSRZOValidValidWrite the value in EDX:EAX to MSR specified by ECX.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Writes the contents of registers EDX:EAX into the 64-bit model specific register (MSR) specified in the ECX register. (On processors that support the Intel 64 architecture, the high-order 32 bits of RCX are ignored.) The contents of the EDX register are copied to high-order 32 bits of the selected MSR and the contents of the EAX register are copied to low-order 32 bits of the MSR. (On processors that support the Intel 64 architecture, the high-order 32 bits of each of RAX and RDX are ignored.) Undefined or reserved bits in an MSR should be set to values previously read.

+

This instruction must be executed at privilege level 0 or in real-address mode; otherwise, a general protection exception #GP(0) is generated. Specifying a reserved or unimplemented MSR address in ECX will also cause a general protection exception. The processor will also generate a general protection exception if software attempts to write to bits in a reserved MSR.

+

When the WRMSR instruction is used to write to an MTRR, the TLBs are invalidated. This includes global entries (see “Translation Lookaside Buffers (TLBs)” in Chapter 3 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A).

+

MSRs control functions for testability, execution tracing, performance-monitoring and machine check errors. Chapter 2, “Model-Specific Registers (MSRs),” of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 4, lists all MSRs that can be written with this instruction and their addresses. Note that each processor family has its own set of MSRs.

+

The WRMSR instruction is a serializing instruction (see “Serializing Instructions” in Chapter 9 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A). Note that WRMSR to the IA32_TSC_DEADLINE MSR (MSR index 6E0H) and the X2APIC MSRs (MSR indices 802H to 83FH) are not serializing.

+

The CPUID instruction should be used to determine whether MSRs are supported (CPUID.01H:EDX[5] = 1) before using this instruction.

+
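The EDX:EAX split described above maps naturally onto a small ring-0 helper. The sketch below uses GCC/Clang inline assembly, the usual pattern in kernel code; it is not a user-space API, since executing WRMSR at CPL > 0 raises #GP(0).

/* Sketch: write a 64-bit value to the MSR selected by msr (ring 0 only).
 * ECX selects the MSR; EDX:EAX supply the high and low 32-bit halves. */
#include <stdint.h>

static inline void wrmsr64(uint32_t msr, uint64_t value)
{
    uint32_t lo = (uint32_t)value;
    uint32_t hi = (uint32_t)(value >> 32);
    __asm__ __volatile__("wrmsr" : : "c"(msr), "a"(lo), "d"(hi) : "memory");
}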

IA-32 Architecture Compatibility + ¶ +

+

The MSRs and the ability to write them with the WRMSR instruction were introduced into the IA-32 architecture with the Pentium processor. Execution of this instruction by an IA-32 processor earlier than the Pentium processor results in an invalid opcode exception #UD.

+

Operation + ¶ +

+
MSR[ECX] := EDX:EAX;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If the current privilege level is not 0.
If the value in ECX specifies a reserved or unimplemented MSR address.
If the value in EDX:EAX sets bits that are reserved in the MSR specified by ECX.
If the source register contains a non-canonical address and ECX specifies one of the following MSRs: IA32_DS_AREA, IA32_FS_BASE, IA32_GS_BASE, IA32_KERNEL_GS_BASE, IA32_LSTAR, IA32_SYSENTER_EIP, IA32_SYSENTER_ESP.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + +
#GPIf the value in ECX specifies a reserved or unimplemented MSR address.
If the value in EDX:EAX sets bits that are reserved in the MSR specified by ECX.
If the source register contains a non-canonical address and ECX specifies one of the following MSRs: IA32_DS_AREA, IA32_FS_BASE, IA32_GS_BASE, IA32_KERNEL_GS_BASE, IA32_LSTAR, IA32_SYSENTER_EIP, IA32_SYSENTER_ESP.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#GP(0)The WRMSR instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/wrpkru.html b/x86/wrpkru.html new file mode 100644 index 0000000..2480968 --- /dev/null +++ b/x86/wrpkru.html @@ -0,0 +1,92 @@ + +WRPKRU + — Write Data to User Page Key Register

WRPKRU + — Write Data to User Page Key Register

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32bit Mode SupportCPUID Feature FlagDescription
NP 0F 01 EF WRPKRUZOV/VOSPKEWrites EAX into PKRU.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Writes the value of EAX into PKRU. ECX and EDX must be 0 when WRPKRU is executed; otherwise, a general-protection exception (#GP) occurs.

+

WRPKRU can be executed only if CR4.PKE = 1; otherwise, an invalid-opcode exception (#UD) occurs. Software can discover the value of CR4.PKE by examining CPUID.(EAX=07H,ECX=0H):ECX.OSPKE [bit 4].

+

On processors that support the Intel 64 Architecture, the high-order 32-bits of RCX, RDX, and RAX are ignored.

+

WRPKRU will never execute speculatively. Memory accesses affected by the PKRU register will not execute (even speculatively) until all prior executions of WRPKRU have completed execution and updated the PKRU register.

+

Operation + ¶ +

+
IF (ECX = 0 AND EDX = 0)
+    THEN PKRU := EAX;
+    ELSE #GP(0);
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

C/C++ Compiler Intrinsic Equivalent + ¶ +

+
WRPKRU void _wrpkru(uint32_t);
+
+
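A user-space sketch of the intrinsic above: it first confirms that the operating system has enabled protection keys (CPUID.(EAX=07H,ECX=0H):ECX.OSPKE, as noted in the Description), then writes a PKRU value that denies data access for protection key 1. Compile with PKU support enabled (for example, -mpku); the key number and policy are illustrative choices, not part of the instruction definition.

/* Sketch: disable data access for protection key 1 via WRPKRU.
 * PKRU holds two bits per key: AD at bit 2*key, WD at bit 2*key + 1.
 * Hardware requires ECX = EDX = 0 when WRPKRU executes. */
#include <immintrin.h>
#include <cpuid.h>

static int ospke_enabled(void)
{
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        return 0;
    return (ecx >> 4) & 1;          /* OSPKE: the OS has set CR4.PKE */
}

static void deny_access_key1(void)
{
    if (ospke_enabled())
        _wrpkru(1u << 2);           /* AD bit for protection key 1 */
}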

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + +
#GP(0)If ECX ≠ 0.
If EDX ≠ 0.
#UDIf the LOCK prefix is used.
If CR4.PKE = 0.
+

Real-Address Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/wrssd.wrssq.html b/x86/wrssd.wrssq.html new file mode 100644 index 0000000..1b14d56 --- /dev/null +++ b/x86/wrssd.wrssq.html @@ -0,0 +1,185 @@ + +WRSSD/WRSSQ + — Write to Shadow Stack

WRSSD/WRSSQ + — Write to Shadow Stack

+ + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
0F 38 F6 !(11):rrr:bbb WRSSD m32, r32MRV/VCET_SSWrite 4 bytes to shadow stack.
REX.W 0F 38 F6 !(11):rrr:bbb WRSSQ m64, r64MRV/N.E.CET_SSWrite 8 bytes to shadow stack.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MRModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Writes bytes in register source to the shadow stack.

+

Operation + ¶ +

+
IF CPL = 3
+    IF (CR4.CET & IA32_U_CET.SH_STK_EN) = 0
+        THEN #UD; FI;
+    IF (IA32_U_CET.WR_SHSTK_EN) = 0
+        THEN #UD; FI;
+ELSE
+    IF (CR4.CET & IA32_S_CET.SH_STK_EN) = 0
+        THEN #UD; FI;
+    IF (IA32_S_CET.WR_SHSTK_EN) = 0
+        THEN #UD; FI;
+FI;
+DEST_LA = Linear_Address(mem operand)
+IF (operand size is 64 bit)
+    THEN
+        (* Destination not 8B aligned *)
+        IF DEST_LA[2:0]
+            THEN GP(0); FI;
+        Shadow_stack_store 8 bytes of SRC to DEST_LA;
+    ELSE
+        (* Destination not 4B aligned *)
+        IF DEST_LA[1:0]
+            THEN GP(0); FI;
+        Shadow_stack_store 4 bytes of SRC[31:0] to DEST_LA;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

C/C++ Compiler Intrinsic Equivalent + ¶ +

+
WRSSD void _wrssd(__int32, void *);
+
+
WRSSQ void _wrssq(__int64, void *);
+
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If CR4.CET = 0.
If CPL = 3 and IA32_U_CET.SH_STK_EN = 0.
If CPL < 3 and IA32_S_CET.SH_STK_EN = 0.
If CPL = 3 and IA32_U_CET.WR_SHSTK_EN = 0.
If CPL < 3 and IA32_S_CET.WR_SHSTK_EN = 0.
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If destination is located in a non-writeable segment.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
If linear address of destination is not 4 byte aligned.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs, including when the destination is not a user shadow stack while CPL = 3 or not a supervisor shadow stack while CPL < 3.
Other terminal and non-terminal faults.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe WRSS instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe WRSS instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If CR4.CET = 0.
If CPL = 3 and IA32_U_CET.SH_STK_EN = 0.
If CPL < 3 and IA32_S_CET.SH_STK_EN = 0.
If CPL = 3 and IA32_U_CET.WR_SHSTK_EN = 0.
If CPL < 3 and IA32_S_CET.WR_SHSTK_EN = 0.
#PF(fault-code)If a page fault occurs, including when the destination is not a user shadow stack while CPL = 3 or not a supervisor shadow stack while CPL < 3.
Other terminal and non-terminal faults.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If CR4.CET = 0.
If CPL = 3 and IA32_U_CET.SH_STK_EN = 0.
If CPL < 3 and IA32_S_CET.SH_STK_EN = 0.
If CPL = 3 and IA32_U_CET.WR_SHSTK_EN = 0.
If CPL < 3 and IA32_S_CET.WR_SHSTK_EN = 0.
#GP(0)If a memory address is in a non-canonical form.
If linear address of destination is not 4 byte aligned.
#PF(fault-code)If a page fault occurs, including when the destination is not a user shadow stack while CPL = 3 or not a supervisor shadow stack while CPL < 3.
Other terminal and non-terminal faults.
diff --git a/x86/wrussd.wrussq.html b/x86/wrussd.wrussq.html new file mode 100644 index 0000000..f447392 --- /dev/null +++ b/x86/wrussd.wrussq.html @@ -0,0 +1,168 @@ + +WRUSSD/WRUSSQ + — Write to User Shadow Stack

WRUSSD/WRUSSQ + — Write to User Shadow Stack

+ + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 38 F5 !(11):rrr:bbb WRUSSD m32, r32MRV/VCET_SSWrite 4 bytes to shadow stack.
66 REX.W 0F 38 F5 !(11):rrr:bbb WRUSSQ m64, r64MRV/N.E.CET_SSWrite 8 bytes to shadow stack.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MRModRM:r/m (w)ModRM:reg (r)N/AN/A
+

Description + ¶ +

+

Writes bytes in register source to a user shadow stack page. The WRUSS instruction can be executed only if CPL = 0; however, the processor treats its shadow-stack accesses as user accesses.

+

Operation + ¶ +

+
IF CR4.CET = 0
+    THEN #UD; FI;
+IF CPL > 0
+    THEN #GP(0); FI;
+DEST_LA = Linear_Address(mem operand)
+IF (operand size is 64 bit)
+    THEN
+        (* Destination not 8B aligned *)
+        IF DEST_LA[2:0]
+            THEN GP(0); FI;
+        Shadow_stack_store 8 bytes of SRC to DEST_LA as user-mode access;
+    ELSE
+        (* Destination not 4B aligned *)
+        IF DEST_LA[1:0]
+            THEN GP(0); FI;
+        Shadow_stack_store 4 bytes of SRC[31:0] to DEST_LA as user-mode access;
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

C/C++ Compiler Intrinsic Equivalent + ¶ +

+
WRUSSD void _wrussd(__int32, void *);
+
+
WRUSSQ void _wrussq(__int64, void *);
+
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If CR4.CET = 0.
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If destination is located in a non-writeable segment.
If the DS, ES, FS, or GS register is used to access memory and it contains a NULL segment selector.
If linear address of destination is not 4 byte aligned.
If CPL is not 0.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If destination is not a user shadow stack.
Other terminal and non-terminal faults.
+

Real-Address Mode Exceptions + ¶ +

+ + + +
#UDThe WRUSS instruction is not recognized in real-address mode.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#UDThe WRUSS instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If CR4.CET = 0.
#GP(0)If a memory address is in a non-canonical form.
If linear address of destination is not 4 byte aligned.
If CPL is not 0.
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#PF(fault-code)If destination is not a user shadow stack.
Other terminal and non-terminal faults.
+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + +
#UDIf the LOCK prefix is used.
If CR4.CET = 0.
#GP(0)If a memory address is in a non-canonical form.
If linear address of destination is not 4 byte aligned.
If CPL is not 0.
#PF(fault-code)If destination is not a user shadow stack.
Other terminal and non-terminal faults.
diff --git a/x86/xabort.html b/x86/xabort.html new file mode 100644 index 0000000..ba7b035 --- /dev/null +++ b/x86/xabort.html @@ -0,0 +1,92 @@ + +XABORT + — Transactional Abort

XABORT + — Transactional Abort

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32bit Mode SupportCPUID Feature FlagDescription
C6 F8 ib XABORT imm8AV/VRTMCauses an RTM abort if in RTM execution.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand2Operand3Operand4
Aimm8N/AN/AN/A
+

Description + ¶ +

+

XABORT forces an RTM abort. Following an RTM abort, the logical processor resumes execution at the fallback address computed through the outermost XBEGIN instruction. The EAX register is updated to reflect that an XABORT instruction caused the abort, and the imm8 argument is provided in bits 31:24 of EAX.

+

Operation + ¶ +

+

XABORT + ¶ +

+
IF RTM_ACTIVE = 0
+    THEN
+        Treat as NOP;
+    ELSE
+        GOTO RTM_ABORT_PROCESSING;
+FI;
+(* For any RTM abort condition encountered during RTM execution *)
+RTM_ABORT_PROCESSING:
+    Restore architectural register state;
+    Discard memory updates performed in transaction;
+    Update EAX with status and XABORT argument;
+    RTM_NEST_COUNT:= 0;
+    RTM_ACTIVE:= 0;
+    SUSLDTRK_ACTIVE := 0;
+    IF 64-bit Mode
+        THEN
+            RIP:= fallbackRIP;
+        ELSE
+            EIP := fallbackEIP;
+    FI;
+END
+
+

Flags Affected + ¶ +

+

None.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
XABORT void _xabort( unsigned int);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+ + + + + +
#UDCPUID.(EAX=7, ECX=0):EBX.RTM[bit 11] = 0.
If LOCK prefix is used.
diff --git a/x86/xacquire.xrelease.html b/x86/xacquire.xrelease.html new file mode 100644 index 0000000..170aa68 --- /dev/null +++ b/x86/xacquire.xrelease.html @@ -0,0 +1,153 @@ + +XACQUIRE/XRELEASE + — Hardware Lock Elision Prefix Hints

XACQUIRE/XRELEASE + — Hardware Lock Elision Prefix Hints

+ + + + + + + + + + + + + + + + + +
Opcode/Instruction64/32bit Mode SupportCPUID Feature FlagDescription
F2 XACQUIREV/VHLE1A hint used with an “XACQUIRE-enabled” instruction to start lock elision on the instruction memory operand address.
F3 XRELEASEV/VHLEA hint used with an “XRELEASE-enabled” instruction to end lock elision on the instruction memory operand address.
+
+

1. Software is not required to check the HLE feature flag to use XACQUIRE or XRELEASE, as they are treated as regular prefixes if the HLE feature flag reports 0.

+

Description + ¶ +

+

The XACQUIRE prefix is a hint to start lock elision on the memory address specified by the instruction and the XRELEASE prefix is a hint to end lock elision on the memory address specified by the instruction.

+

The XACQUIRE prefix hint can only be used with the following instructions (these instructions are also referred to as XACQUIRE-enabled when used with the XACQUIRE prefix):

+
    +
  • Instructions with an explicit LOCK prefix (F0H) prepended to forms of the instruction where the destination operand is a memory operand: ADD, ADC, AND, BTC, BTR, BTS, CMPXCHG, CMPXCHG8B, DEC, INC, NEG, NOT, OR, SBB, SUB, XOR, XADD, and XCHG.
  • +
  • The XCHG instruction either with or without the presence of the LOCK prefix.
+

The XRELEASE prefix hint can only be used with the following instructions (also referred to as XRELEASE-enabled when used with the XRELEASE prefix):

+
    +
  • Instructions with an explicit LOCK prefix (F0H) prepended to forms of the instruction where the destination operand is a memory operand: ADD, ADC, AND, BTC, BTR, BTS, CMPXCHG, CMPXCHG8B, DEC, INC, NEG, NOT, OR, SBB, SUB, XOR, XADD, and XCHG.
  • +
  • The XCHG instruction either with or without the presence of the LOCK prefix.
  • +
  • The “MOV mem, reg” (Opcode 88H/89H) and “MOV mem, imm” (Opcode C6H/C7H) instructions. In these cases, the XRELEASE is recognized without the presence of the LOCK prefix.
+

The lock variables must satisfy the guidelines described in Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1, Section 16.3.3, for elision to be successful, otherwise an HLE abort may be signaled.

+

If an encoded byte sequence that meets XACQUIRE/XRELEASE requirements includes both prefixes, then the HLE semantic is determined by the prefix byte that is placed closest to the instruction opcode. For example, an F3F2C6 will not be treated as a XRELEASE-enabled instruction since the F2H (XACQUIRE) is closest to the instruction opcode C6. Similarly, an F2F3F0 prefixed instruction will be treated as a XRELEASE-enabled instruction since F3H (XRELEASE) is closest to the instruction opcode.

+
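As an illustration of how these hints are typically emitted from C, the sketch below implements a simple elided spinlock using GCC's HLE memory-model flags (__ATOMIC_HLE_ACQUIRE and __ATOMIC_HLE_RELEASE, available when compiling with -mhle). These flag names are a GCC convention, not part of the prefix definition above; the compiler turns them into an XACQUIRE XCHG for the acquire and an XRELEASE MOV for the release.

/* Sketch: HLE-elided spinlock (GCC, compile with -mhle).
 * Acquire uses XACQUIRE XCHG; release uses XRELEASE MOV. */
#include <immintrin.h>

static volatile int lock;

static void hle_lock(void)
{
    while (__atomic_exchange_n(&lock, 1,
                               __ATOMIC_ACQUIRE | __ATOMIC_HLE_ACQUIRE))
        _mm_pause();                 /* spin politely while waiting */
}

static void hle_unlock(void)
{
    __atomic_store_n(&lock, 0, __ATOMIC_RELEASE | __ATOMIC_HLE_RELEASE);
}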

Intel 64 and IA-32 Compatibility

+

The effect of the XACQUIRE/XRELEASE prefix hint is the same in non-64-bit modes and in 64-bit mode.

+

For instructions that do not support the XACQUIRE hint, the presence of the F2H prefix behaves the same way as prior hardware, according to

+
    +
  • REPNE/REPNZ semantics for string instructions,
  • +
  • Serve as SIMD prefix for legacy SIMD instructions operating on XMM register
  • +
  • Cause #UD if prepending the VEX prefix.
  • +
  • Undefined for non-string instructions or other situations.
+

For instructions that do not support the XRELEASE hint, the presence of the F3H prefix behaves the same way as in prior hardware, according to

+
    +
  • REP/REPE/REPZ semantics for string instructions,
  • +
  • Serve as SIMD prefix for legacy SIMD instructions operating on XMM register
  • +
  • Cause #UD if prepending the VEX prefix.
  • +
  • Undefined for non-string instructions or other situations.
+

Operation + ¶ +

+

XACQUIRE + ¶ +

+
IF XACQUIRE-enabled instruction
+    THEN
+        IF (HLE_NEST_COUNT < MAX_HLE_NEST_COUNT) THEN
+            HLE_NEST_COUNT++
+            IF (HLE_NEST_COUNT = 1) THEN
+                HLE_ACTIVE := 1
+                IF 64-bit mode
+                    THEN
+                        restartRIP := instruction pointer of the XACQUIRE-enabled instruction
+                    ELSE
+                        restartEIP := instruction pointer of the XACQUIRE-enabled instruction
+                FI;
+                Enter HLE Execution (* record register state, start tracking memory state *)
+            FI; (* HLE_NEST_COUNT = 1*)
+            IF ElisionBufferAvailable
+                THEN
+                    Allocate elision buffer
+                    Record address and data for forwarding and commit checking
+                    Perform elision
+                ELSE
+                    Perform lock acquire operation transactionally but without elision
+            FI;
+        ELSE (* HLE_NEST_COUNT = MAX_HLE_NEST_COUNT*)
+                GOTO HLE_ABORT_PROCESSING
+        FI;
+    ELSE
+        Treat instruction as non-XACQUIRE F2H prefixed legacy instruction
+FI;
+
+

XRELEASE + ¶ +

+
IF XRELEASE-enabled instruction
+    THEN
+        IF (HLE_NEST_COUNT > 0)
+            THEN
+                HLE_NEST_COUNT--
+                IF lock address matches in elision buffer THEN
+                    IF lock satisfies address and value requirements THEN
+                        Deallocate elision buffer
+                    ELSE
+                        GOTO HLE_ABORT_PROCESSING
+                    FI;
+                FI;
+                IF (HLE_NEST_COUNT = 0)
+                    THEN
+                        IF NoAllocatedElisionBuffer
+                            THEN
+                                Try to commit transactional execution
+                                IF fail to commit transactional execution
+                                    THEN
+                                        GOTO HLE_ABORT_PROCESSING;
+                                    ELSE (* commit success *)
+                                        HLE_ACTIVE := 0
+                                FI;
+                            ELSE
+                                GOTO HLE_ABORT_PROCESSING
+                        FI;
+                FI;
+        FI; (* HLE_NEST_COUNT > 0 *)
+    ELSE
+        Treat instruction as non-XRELEASE F3H prefixed legacy instruction
+FI;
+(* For any HLE abort condition encountered during HLE execution *)
+HLE_ABORT_PROCESSING:
+    HLE_ACTIVE := 0
+    HLE_NEST_COUNT := 0
+    Restore architectural register state
+    Discard memory updates performed in transaction
+    Free any allocated lock elision buffers
+    IF 64-bit mode
+        THEN
+            RIP := restartRIP
+        ELSE
+            EIP := restartEIP
+    FI;
+    Execute and retire instruction at RIP (or EIP) and ignore any HLE hint
+END
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+ + + +
#GP(0)If the use of prefix causes instruction length to exceed 15 bytes.
diff --git a/x86/xadd.html b/x86/xadd.html new file mode 100644 index 0000000..cd9806a --- /dev/null +++ b/x86/xadd.html @@ -0,0 +1,169 @@ + +XADD + — Exchange and Add

XADD + — Exchange and Add

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
0F C0 /rXADD r/m8, r8MRValidValidExchange r8 and r/m8; load sum into r/m8.
REX + 0F C0 /rXADD r/m8*, r8*MRValidN.E.Exchange r8 and r/m8; load sum into r/m8.
0F C1 /rXADD r/m16, r16MRValidValidExchange r16 and r/m16; load sum into r/m16.
0F C1 /rXADD r/m32, r32MRValidValidExchange r32 and r/m32; load sum into r/m32.
REX.W + 0F C1 /rXADD r/m64, r64MRValidN.E.Exchange r64 and r/m64; load sum into r/m64.
+
+

* In 64-bit mode, r/m8 cannot be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MRModRM:r/m (r, w)ModRM:reg (r, w)N/AN/A
+

Description + ¶ +

+

Exchanges the first operand (destination operand) with the second operand (source operand), then loads the sum of the two values into the destination operand. The destination operand can be a register or a memory location; the source operand is a register.

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). Using a REX prefix in the form of REX.W promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+

This instruction can be used with a LOCK prefix to allow the instruction to be executed atomically.

+

IA-32 Architecture Compatibility + ¶ +

+

IA-32 processors earlier than the Intel486 processor do not recognize this instruction. If this instruction is used, you should provide an equivalent code sequence that runs on earlier processors.

+

Operation + ¶ +

+
TEMP := SRC + DEST;
+SRC := DEST;
+DEST := TEMP;
+
+

Flags Affected + ¶ +

+

The CF, PF, AF, SF, ZF, and OF flags are set according to the result of the addition, which is stored in the destination operand.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination is located in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
diff --git a/x86/xbegin.html b/x86/xbegin.html new file mode 100644 index 0000000..6ac4448 --- /dev/null +++ b/x86/xbegin.html @@ -0,0 +1,165 @@ + +XBEGIN + — Transactional Begin

XBEGIN + — Transactional Begin

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32bit Mode SupportCPUID Feature FlagDescription
C7 F8 XBEGIN rel16AV/VRTMSpecifies the start of an RTM region. Provides a 16-bit relative offset to compute the address of the fallback instruction address at which execution resumes following an RTM abort.
C7 F8 XBEGIN rel32AV/VRTMSpecifies the start of an RTM region. Provides a 32-bit relative offset to compute the address of the fallback instruction address at which execution resumes following an RTM abort.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand2Operand3Operand4
AOffsetN/AN/AN/A
+

Description + ¶ +

+

The XBEGIN instruction specifies the start of an RTM code region. If the logical processor was not already in transactional execution, then the XBEGIN instruction causes the logical processor to transition into transactional execution. The XBEGIN instruction that transitions the logical processor into transactional execution is referred to as the outermost XBEGIN instruction. The instruction also specifies a relative offset to compute the address of the fallback code path following a transactional abort. (Use of the 16-bit operand size does not cause this address to be truncated to 16 bits, unlike a near jump to a relative offset.)

+

On an RTM abort, the logical processor discards all architectural register and memory updates performed during the RTM execution and restores architectural state to that corresponding to the outermost XBEGIN instruction. The fallback address following an abort is computed from the outermost XBEGIN instruction.

+

Execution of XBEGIN while in a suspend read address tracking region causes a transactional abort.

+

Operation + ¶ +

+

XBEGIN + ¶ +

+
IF RTM_NEST_COUNT < MAX_RTM_NEST_COUNT AND SUSLDTRK_ACTIVE = 0
+    THEN
+        RTM_NEST_COUNT++
+        IF RTM_NEST_COUNT = 1 THEN
+            IF 64-bit Mode
+                THEN
+                    IF OperandSize = 16
+                        THEN fallbackRIP := RIP + SignExtend64(rel16);
+                        ELSE fallbackRIP := RIP + SignExtend64(rel32);
+                    FI;
+                    IF fallbackRIP is not canonical
+                        THEN #GP(0);
+                    FI;
+                ELSE
+                    IF OperandSize = 16
+                        THEN fallbackEIP := EIP + SignExtend32(rel16);
+                        ELSE fallbackEIP := EIP + rel32;
+                    FI;
+                    IF fallbackEIP outside code segment limit
+                        THEN #GP(0);
+                    FI;
+            FI;
+            RTM_ACTIVE := 1
+            Enter RTM Execution (* record register state, start tracking memory state*)
+        FI; (* RTM_NEST_COUNT = 1 *)
+    ELSE (* RTM_NEST_COUNT = MAX_RTM_NEST_COUNT OR SUSLDTRK_ACTIVE = 1 *)
+        GOTO RTM_ABORT_PROCESSING
+FI;
+(* For any RTM abort condition encountered during RTM execution *)
+RTM_ABORT_PROCESSING:
+    Restore architectural register state
+    Discard memory updates performed in transaction
+    Update EAX with status
+    RTM_NEST_COUNT := 0
+    RTM_ACTIVE := 0
+    SUSLDTRK_ACTIVE := 0
+    IF 64-bit mode
+        THEN
+            RIP := fallbackRIP
+        ELSE
+            EIP := fallbackEIP
+    FI;
+END
+
+

Flags Affected + ¶ +

+

None.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
XBEGIN unsigned int _xbegin( void );
+
+
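A hedged sketch of the usual RTM pattern built from the intrinsic above together with _xend and _xabort: attempt the transaction a few times and fall back to a conventional lock when _xbegin does not return _XBEGIN_STARTED. Compile with RTM support enabled (for example, -mrtm); the retry policy and the fallback-lock helpers are illustrative assumptions, not part of the instruction definition.

/* Sketch: RTM transaction with a retry loop and a lock-based fallback.
 * _xbegin returns _XBEGIN_STARTED inside the transaction, or an abort
 * status (with any XABORT imm8 in bits 31:24) on the fallback path. */
#include <immintrin.h>

extern void take_fallback_lock(void);    /* assumed to exist elsewhere */
extern void drop_fallback_lock(void);
extern int  fallback_lock_is_free(void);

static void update_shared(volatile long *p)
{
    for (int tries = 0; tries < 3; tries++) {
        unsigned status = _xbegin();
        if (status == _XBEGIN_STARTED) {
            if (!fallback_lock_is_free())
                _xabort(0xff);           /* abort: a lock holder is active */
            *p += 1;                      /* transactional update */
            _xend();
            return;
        }
        /* aborted: status holds the cause; simply retry here */
    }
    take_fallback_lock();                 /* non-transactional fallback */
    *p += 1;
    drop_fallback_lock();
}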

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + +
#UDCPUID.(EAX=7, ECX=0):EBX.RTM[bit 11]=0.
If LOCK prefix is used.
#GP(0)If the fallback address is outside the CS segment.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + +
#GP(0)If the fallback address is outside the address space 0000H and FFFFH.
#UDCPUID.(EAX=7, ECX=0):EBX.RTM[bit 11]=0.
If LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + +
#GP(0)If the fallback address is outside the address space 0000H and FFFFH.
#UDCPUID.(EAX=7, ECX=0):EBX.RTM[bit 11]=0.
If LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-bit Mode Exceptions + ¶ +

+ + + + + + + + +
#UDCPUID.(EAX=7, ECX=0):EBX.RTM[bit 11] = 0.
If LOCK prefix is used.
#GP(0)If the fallback address is non-canonical.
diff --git a/x86/xchg.html b/x86/xchg.html new file mode 100644 index 0000000..aa4cf00 --- /dev/null +++ b/x86/xchg.html @@ -0,0 +1,263 @@ + +XCHG + — Exchange Register/Memory With Register

XCHG + — Exchange Register/Memory With Register

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
90+rwXCHG AX, r16OValidValidExchange r16 with AX.
90+rwXCHG r16, AXOValidValidExchange AX with r16.
90+rdXCHG EAX, r32OValidValidExchange r32 with EAX.
REX.W + 90+rdXCHG RAX, r64OValidN.E.Exchange r64 with RAX.
90+rdXCHG r32, EAXOValidValidExchange EAX with r32.
REX.W + 90+rdXCHG r64, RAXOValidN.E.Exchange RAX with r64.
86 /rXCHG r/m8, r8MRValidValidExchange r8 (byte register) with byte from r/m8.
REX + 86 /rXCHG r/m8*, r8*MRValidN.E.Exchange r8 (byte register) with byte from r/m8.
86 /rXCHG r8, r/m8RMValidValidExchange byte from r/m8 with r8 (byte register).
REX + 86 /rXCHG r8*, r/m8*RMValidN.E.Exchange byte from r/m8 with r8 (byte register).
87 /rXCHG r/m16, r16MRValidValidExchange r16 with word from r/m16.
87 /rXCHG r16, r/m16RMValidValidExchange word from r/m16 with r16.
87 /rXCHG r/m32, r32MRValidValidExchange r32 with doubleword from r/m32.
REX.W + 87 /rXCHG r/m64, r64MRValidN.E.Exchange r64 with quadword from r/m64.
87 /rXCHG r32, r/m32RMValidValidExchange doubleword from r/m32 with r32.
REX.W + 87 /rXCHG r64, r/m64RMValidN.E.Exchange quadword from r/m64 with r64.
+
+

* In 64-bit mode, r/m8 cannot be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
OAX/EAX/RAX (r, w)opcode + rd (r, w)N/AN/A
Oopcode + rd (r, w)AX/EAX/RAX (r, w)N/AN/A
MRModRM:r/m (r, w)ModRM:reg (r)N/AN/A
RMModRM:reg (w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Exchanges the contents of the destination (first) and source (second) operands. The operands can be two general-purpose registers or a register and a memory location. If a memory operand is referenced, the processor’s locking protocol is automatically implemented for the duration of the exchange operation, regardless of the presence or absence of the LOCK prefix or of the value of the IOPL. (See the LOCK prefix description in this chapter for more information on the locking protocol.)

+

This instruction is useful for implementing semaphores or similar data structures for process synchronization. (See “Bus Locking” in Chapter 9 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A, for more information on bus locking.)

+

The XCHG instruction can also be used instead of the BSWAP instruction for 16-bit operands.

+

In 64-bit mode, the instruction’s default operation size is 32 bits. Using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). Using a REX prefix in the form of REX.W promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+
+

XCHG (E)AX, (E)AX (encoded instruction byte is 90H) is an alias for NOP regardless of data size prefixes, including REX.W.

+
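Because the memory form of XCHG is locked implicitly, it is a convenient primitive for the semaphore/spinlock use mentioned above. A minimal test-and-set sketch in GCC/Clang inline assembly follows; the portable equivalent is __atomic_exchange_n.

/* Sketch: test-and-set lock built on the implicit bus lock of XCHG mem, reg. */
static inline int try_lock(volatile int *lock)
{
    int value = 1;
    __asm__ __volatile__("xchgl %0, %1"   /* locked even without a LOCK prefix */
                         : "+r"(value), "+m"(*lock)
                         :
                         : "memory");
    return value == 0;    /* old value 0 means the lock was acquired */
}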

Operation + ¶ +

+
TEMP := DEST;
+DEST := SRC;
+SRC := TEMP;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If either operand is in a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
diff --git a/x86/xend.html b/x86/xend.html new file mode 100644 index 0000000..e21da9b --- /dev/null +++ b/x86/xend.html @@ -0,0 +1,107 @@ + +XEND + — Transactional End

XEND + — Transactional End

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32bit Mode SupportCPUID Feature FlagDescription
NP 0F 01 D5 XENDAV/VRTMSpecifies the end of an RTM code region.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand2Operand3Operand4
AN/AN/AN/AN/A
+

Description + ¶ +

+

The instruction marks the end of an RTM code region. If this corresponds to the outermost scope (that is, including this XEND instruction, the number of XBEGIN instructions is the same as number of XEND instructions), the logical processor will attempt to commit the logical processor state atomically. If the commit fails, the logical processor will rollback all architectural register and memory updates performed during the RTM execution. The logical processor will resume execution at the fallback address computed from the outermost XBEGIN instruction. The EAX register is updated to reflect RTM abort information.

+

Execution of XEND outside a transactional region causes a general-protection exception (#GP). Execution of XEND while in a suspend read address tracking region causes a transactional abort.

+

Operation + ¶ +

+

XEND + ¶ +

+
IF (RTM_ACTIVE = 0) THEN
+    SIGNAL #GP
+ELSE
+    IF SUSLDTRK_ACTIVE = 1
+        THEN GOTO RTM_ABORT_PROCESSING;
+    FI;
+    RTM_NEST_COUNT--
+    IF (RTM_NEST_COUNT = 0) THEN
+        Try to commit transaction
+        IF fail to commit transactional execution
+            THEN
+                GOTO RTM_ABORT_PROCESSING;
+            ELSE (* commit success *)
+                RTM_ACTIVE := 0
+        FI;
+    FI;
+FI;
+(* For any RTM abort condition encountered during RTM execution *)
+RTM_ABORT_PROCESSING:
+    Restore architectural register state
+    Discard memory updates performed in transaction
+    Update EAX with status
+    RTM_NEST_COUNT := 0
+    RTM_ACTIVE := 0
+    SUSLDTRK_ACTIVE := 0
+    IF 64-bit Mode
+        THEN
+            RIP := fallbackRIP
+        ELSE
+            EIP := fallbackEIP
+    FI;
+END
+
+

Flags Affected + ¶ +

+

None.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
XEND void _xend( void );
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+ + + + + + + + +
#UDCPUID.(EAX=7, ECX=0):EBX.RTM[bit 11] = 0.
If LOCK prefix is used.
#GP(0)If RTM_ACTIVE = 0.
diff --git a/x86/xgetbv.html b/x86/xgetbv.html new file mode 100644 index 0000000..56568cb --- /dev/null +++ b/x86/xgetbv.html @@ -0,0 +1,100 @@ + +XGETBV + — Get Value of Extended Control Register

XGETBV + — Get Value of Extended Control Register

+ + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
NP 0F 01 D0XGETBVZOValidValidReads an XCR specified by ECX into EDX:EAX.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Reads the contents of the extended control register (XCR) specified in the ECX register into registers EDX:EAX. (On processors that support the Intel 64 architecture, the high-order 32 bits of RCX are ignored.) The EDX register is loaded with the high-order 32 bits of the XCR and the EAX register is loaded with the low-order 32 bits. (On processors that support the Intel 64 architecture, the high-order 32 bits of each of RAX and RDX are cleared.) If fewer than 64 bits are implemented in the XCR being read, the values returned to EDX:EAX in unimplemented bit locations are undefined.

+

XCR0 is supported on any processor that supports the XGETBV instruction. If CPUID.(EAX=0DH,ECX=1):EAX.XG1[bit 2] = 1, executing XGETBV with ECX = 1 returns in EDX:EAX the logical AND of XCR0 and the current value of the XINUSE state-component bitmap. This allows software to discover the state of the init optimization used by XSAVEOPT and XSAVES. See Chapter 13, “Managing State Using the XSAVE Feature Set‚” in Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1.

+

Use of any other value for ECX results in a general-protection (#GP) exception.

+

Operation + ¶ +

+
EDX:EAX := XCR[ECX];
+
+

Flags Affected + ¶ +

+

None.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
XGETBV unsigned __int64 _xgetbv( unsigned int);
+
+
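A common use of the intrinsic above is to check, after confirming CPUID.01H:ECX.OSXSAVE, whether the operating system has enabled the XMM and YMM state components in XCR0 before using AVX. A hedged sketch follows (bits 1 and 2 of XCR0 are the standard SSE and AVX state-component positions; older GCC may require -mxsave for _xgetbv).

/* Sketch: verify the OS enabled SSE (XCR0 bit 1) and AVX (XCR0 bit 2) state. */
#include <immintrin.h>
#include <cpuid.h>

static int os_supports_avx_state(void)
{
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 0;
    if (!((ecx >> 27) & 1))                /* OSXSAVE: XGETBV is usable */
        return 0;
    unsigned long long xcr0 = _xgetbv(0);  /* 0 selects XCR0 */
    return (xcr0 & 0x6) == 0x6;            /* bits 1 and 2: XMM and YMM state */
}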

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + +
#GP(0)If an invalid XCR is specified in ECX (includes ECX = 1 if CPUID.(EAX=0DH,ECX=1):EAX.XG1[bit 2] = 0).
#UDIf CPUID.01H:ECX.XSAVE[bit 26] = 0.
If CR4.OSXSAVE[bit 18] = 0.
If the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + +
#GP(0)If an invalid XCR is specified in ECX (includes ECX = 1 if CPUID.(EAX=0DH,ECX=1):EAX.XG1[bit 2] = 0).
#UDIf CPUID.01H:ECX.XSAVE[bit 26] = 0.
If CR4.OSXSAVE[bit 18] = 0.
If the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/xlat.xlatb.html b/x86/xlat.xlatb.html new file mode 100644 index 0000000..32c7f62 --- /dev/null +++ b/x86/xlat.xlatb.html @@ -0,0 +1,145 @@ + +XLAT/XLATB + — Table Look-up Translation

XLAT/XLATB + — Table Look-up Translation

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
D7XLAT m8ZOValidValidSet AL to memory byte DS:[(E)BX + unsigned AL].
D7XLATBZOValidValidSet AL to memory byte DS:[(E)BX + unsigned AL].
REX.W + D7XLATBZOValidN.E.Set AL to memory byte [RBX + unsigned AL].
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/A
+

Description + ¶ +

+

Locates a byte entry in a table in memory, using the contents of the AL register as a table index, then copies the contents of the table entry back into the AL register. The index in the AL register is treated as an unsigned integer. The XLAT and XLATB instructions get the base address of the table in memory from either the DS:EBX or the DS:BX registers (depending on the address-size attribute of the instruction, 32 or 16, respectively). (The DS segment may be overridden with a segment override prefix.)

+

At the assembly-code level, two forms of this instruction are allowed: the “explicit-operand” form and the “no-operand” form. The explicit-operand form (specified with the XLAT mnemonic) allows the base address of the table to be specified explicitly with a symbol. This explicit-operands form is provided to allow documentation; however, note that the documentation provided by this form can be misleading. That is, the symbol does not have to specify the correct base address. The base address is always specified by the DS:(E)BX registers, which must be loaded correctly before the XLAT instruction is executed.

+

The no-operands form (XLATB) provides a “short form” of the XLAT instructions. Here also the processor assumes that the DS:(E)BX registers contain the base address of the table.

+

In 64-bit mode, operation is similar to that in legacy or compatibility mode. AL is used to specify the table index (the operand size is fixed at 8 bits). RBX, however, is used to specify the table’s base address. See the summary chart at the beginning of this section for encoding data and limits.

+
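The operation amounts to AL := table[AL], with DS:(E)BX or RBX supplying the table base. A hedged inline-assembly sketch (GCC/Clang, 64-bit mode) that translates one byte through a 256-entry table:

/* Sketch: translate one byte through a 256-entry table using XLATB.
 * Equivalent C: return table[index]; shown only to make the implicit
 * RBX (table base) and AL (index in, result out) operands visible. */
#include <stdint.h>

static inline uint8_t xlat_byte(const uint8_t table[256], uint8_t index)
{
    uint8_t result = index;
    __asm__ __volatile__("xlatb"
                         : "+a"(result)    /* AL: index in, translated byte out */
                         : "b"(table)      /* RBX: table base address */
                         : "memory");
    return result;
}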

Operation + ¶ +

+
IF AddressSize = 16
+    THEN
+        AL := (DS:BX + ZeroExtend(AL));
+    ELSE IF (AddressSize = 32)
+        AL := (DS:EBX + ZeroExtend(AL)); FI;
+    ELSE (AddressSize = 64)
+        AL := (RBX + ZeroExtend(AL));
+FI;
+
+

Flags Affected + ¶ +

+

None.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#UDIf the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#UDIf the LOCK prefix is used.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#UDIf the LOCK prefix is used.
diff --git a/x86/xor.html b/x86/xor.html new file mode 100644 index 0000000..9d45e91 --- /dev/null +++ b/x86/xor.html @@ -0,0 +1,300 @@ + +XOR + — Logical Exclusive OR

XOR + — Logical Exclusive OR

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpcodeInstructionOp/En64-Bit ModeCompat/Leg ModeDescription
34 ibXOR AL, imm8IValidValidAL XOR imm8.
35 iwXOR AX, imm16IValidValidAX XOR imm16.
35 idXOR EAX, imm32IValidValidEAX XOR imm32.
REX.W + 35 idXOR RAX, imm32IValidN.E.RAX XOR imm32 (sign-extended).
80 /6 ibXOR r/m8, imm8MIValidValidr/m8 XOR imm8.
REX + 80 /6 ibXOR r/m8*, imm8MIValidN.E.r/m8 XOR imm8.
81 /6 iwXOR r/m16, imm16MIValidValidr/m16 XOR imm16.
81 /6 idXOR r/m32, imm32MIValidValidr/m32 XOR imm32.
REX.W + 81 /6 idXOR r/m64, imm32MIValidN.E.r/m64 XOR imm32 (sign-extended).
83 /6 ibXOR r/m16, imm8MIValidValidr/m16 XOR imm8 (sign-extended).
83 /6 ibXOR r/m32, imm8MIValidValidr/m32 XOR imm8 (sign-extended).
REX.W + 83 /6 ibXOR r/m64, imm8MIValidN.E.r/m64 XOR imm8 (sign-extended).
30 /rXOR r/m8, r8MRValidValidr/m8 XOR r8.
REX + 30 /rXOR r/m8*, r8*MRValidN.E.r/m8 XOR r8.
31 /rXOR r/m16, r16MRValidValidr/m16 XOR r16.
31 /rXOR r/m32, r32MRValidValidr/m32 XOR r32.
REX.W + 31 /rXOR r/m64, r64MRValidN.E.r/m64 XOR r64.
32 /rXOR r8, r/m8RMValidValidr8 XOR r/m8.
REX + 32 /rXOR r8*, r/m8*RMValidN.E.r8 XOR r/m8.
33 /rXOR r16, r/m16RMValidValidr16 XOR r/m16.
33 /rXOR r32, r/m32RMValidValidr32 XOR r/m32.
REX.W + 33 /rXOR r64, r/m64RMValidN.E.r64 XOR r/m64.
+
+

* In 64-bit mode, r/m8 cannot be encoded to access the following byte registers if a REX prefix is used: AH, BH, CH, DH.

+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
IAL/AX/EAX/RAXimm8/16/32N/AN/A
MIModRM:r/m (r, w)imm8/16/32N/AN/A
MRModRM:r/m (r, w)ModRM:reg (r)N/AN/A
RMModRM:reg (r, w)ModRM:r/m (r)N/AN/A
+

Description + ¶ +

+

Performs a bitwise exclusive OR (XOR) operation on the destination (first) and source (second) operands and stores the result in the destination operand location. The source operand can be an immediate, a register, or a memory location; the destination operand can be a register or a memory location. (However, two memory operands cannot be used in one instruction.) Each bit of the result is 1 if the corresponding bits of the operands are different; each bit is 0 if the corresponding bits are the same.

+

This instruction can be used with a LOCK prefix to allow the instruction to be executed atomically.

+

In 64-bit mode, using a REX prefix in the form of REX.R permits access to additional registers (R8-R15). Using a REX prefix in the form of REX.W promotes operation to 64 bits. See the summary chart at the beginning of this section for encoding data and limits.

+
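Beyond general bit manipulation, the register form is the idiomatic way to zero a register (XOR reg, reg), and XOR-ing a value with the same key twice restores the original, which is the basis of simple masking schemes. A small illustrative sketch:

/* Sketch: XOR round-trips — applying the same mask twice restores the value,
 * and x ^ x is always 0 (the classic register-zeroing idiom). */
#include <assert.h>
#include <stdint.h>

int main(void)
{
    uint32_t value = 0xDEADBEEF, mask = 0x5A5A5A5A;
    uint32_t masked = value ^ mask;
    assert((masked ^ mask) == value);   /* second XOR undoes the first */
    assert((value ^ value) == 0);       /* x XOR x = 0 */
    return 0;
}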

Operation + ¶ +

+
DEST := DEST XOR SRC;
+
+

Flags Affected + ¶ +

+

The OF and CF flags are cleared; the SF, ZF, and PF flags are set according to the result. The state of the AF flag is undefined.

+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + +
#GP(0)If the destination operand points to a non-writable segment.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If the DS, ES, FS, or GS register contains a NULL segment selector.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + +
#GPIf a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SSIf a memory operand effective address is outside the SS segment limit.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)If the memory address is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#AC(0)If alignment checking is enabled and an unaligned memory reference is made while the current privilege level is 3.
#UDIf the LOCK prefix is used but the destination is not a memory operand.
diff --git a/x86/xorpd.html b/x86/xorpd.html new file mode 100644 index 0000000..111ab04 --- /dev/null +++ b/x86/xorpd.html @@ -0,0 +1,170 @@ + +XORPD + — Bitwise Logical XOR of Packed Double Precision Floating-Point Values

XORPD + — Bitwise Logical XOR of Packed Double Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
66 0F 57/r XORPD xmm1, xmm2/m128AV/VSSE2Return the bitwise logical XOR of packed double precision floating-point values in xmm1 and xmm2/mem.
VEX.128.66.0F.WIG 57 /r VXORPD xmm1,xmm2, xmm3/m128BV/VAVXReturn the bitwise logical XOR of packed double precision floating-point values in xmm2 and xmm3/mem.
VEX.256.66.0F.WIG 57 /r VXORPD ymm1, ymm2, ymm3/m256BV/VAVXReturn the bitwise logical XOR of packed double precision floating-point values in ymm2 and ymm3/mem.
EVEX.128.66.0F.W1 57 /r VXORPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcstCV/VAVX512VL AVX512DQReturn the bitwise logical XOR of packed double precision floating-point values in xmm2 and xmm3/m128/m64bcst subject to writemask k1.
EVEX.256.66.0F.W1 57 /r VXORPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcstCV/VAVX512VL AVX512DQReturn the bitwise logical XOR of packed double precision floating-point values in ymm2 and ymm3/m256/m64bcst subject to writemask k1.
EVEX.512.66.0F.W1 57 /r VXORPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcstCV/VAVX512DQReturn the bitwise logical XOR of packed double precision floating-point values in zmm2 and zmm3/m512/m64bcst subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a bitwise logical XOR of the two, four or eight packed double precision floating-point values from the first source operand and the second source operand, and stores the result in the destination operand.

+

EVEX.512 encoded version: The first source operand is a ZMM register. The second source operand can be a ZMM register or a vector memory location. The destination operand is a ZMM register conditionally updated with write-mask k1.

+

VEX.256 and EVEX.256 encoded versions: The first source operand is a YMM register. The second source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register (conditionally updated with writemask k1 in case of EVEX). The upper bits (MAXVL-1:256) of the corresponding ZMM register destination are zeroed.

+

VEX.128 and EVEX.128 encoded versions: The first source operand is an XMM register. The second source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register (conditionally updated with writemask k1 in case of EVEX). The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The second source can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding register destination are unmodified.

+

Operation + ¶ +

+

VXORPD (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (2, 128), (4, 256), (8, 512)
+FOR j := 0 TO KL-1
+    i := j * 64
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b == 1) AND (SRC2 *is memory*)
+                THEN DEST[i+63:i] := SRC1[i+63:i] BITWISE XOR SRC2[63:0];
+                ELSE DEST[i+63:i] := SRC1[i+63:i] BITWISE XOR SRC2[i+63:i];
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+63:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+63:i] = 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VXORPD (VEX.256 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0] BITWISE XOR SRC2[63:0]
+DEST[127:64] := SRC1[127:64] BITWISE XOR SRC2[127:64]
+DEST[191:128] := SRC1[191:128] BITWISE XOR SRC2[191:128]
+DEST[255:192] := SRC1[255:192] BITWISE XOR SRC2[255:192]
+DEST[MAXVL-1:256] := 0
+
+

VXORPD (VEX.128 Encoded Version) + ¶ +

+
DEST[63:0] := SRC1[63:0] BITWISE XOR SRC2[63:0]
+DEST[127:64] := SRC1[127:64] BITWISE XOR SRC2[127:64]
+DEST[MAXVL-1:128] := 0
+
+

XORPD (128-bit Legacy SSE Version) + ¶ +

+
DEST[63:0] := DEST[63:0] BITWISE XOR SRC[63:0]
+DEST[127:64] := DEST[127:64] BITWISE XOR SRC[127:64]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VXORPD __m512d _mm512_xor_pd (__m512d a, __m512d b);
+
+
VXORPD __m512d _mm512_mask_xor_pd (__m512d a, __mmask8 m, __m512d b);
+
+
VXORPD __m512d _mm512_maskz_xor_pd (__mmask8 m, __m512d a);
+
+
VXORPD __m256d _mm256_xor_pd (__m256d a, __m256d b);
+
+
VXORPD __m256d _mm256_mask_xor_pd (__m256d a, __mmask8 m, __m256d b);
+
+
VXORPD __m256d _mm256_maskz_xor_pd (__mmask8 m, __m256d a);
+
+
XORPD __m128d _mm_xor_pd (__m128d a, __m128d b);
+
+
VXORPD __m128d _mm_mask_xor_pd (__m128d a, __mmask8 m, __m128d b);
+
+
VXORPD __m128d _mm_maskz_xor_pd (__mmask8 m, __m128d a);
+
+
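As a hedged usage sketch (not part of the instruction definition), the SSE2 intrinsic _mm_xor_pd listed above can flip the sign bits of packed doubles, a common application of XORPD. It assumes a compiler providing <emmintrin.h>; the values are placeholders.

#include <emmintrin.h>   /* SSE2: _mm_xor_pd, _mm_set1_pd, _mm_set_pd, _mm_storeu_pd */
#include <stdio.h>

int main(void) {
    const __m128d sign_mask = _mm_set1_pd(-0.0);      /* 8000000000000000H in each 64-bit lane */
    __m128d v = _mm_set_pd(2.5, -1.5);                /* lanes: {-1.5, 2.5} */
    __m128d negated = _mm_xor_pd(v, sign_mask);       /* XORPD flips only the sign bits */

    double out[2];
    _mm_storeu_pd(out, negated);
    printf("%f %f\n", out[0], out[1]);                /* 1.500000 -2.500000 */
    return 0;
}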

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instructions, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/xorps.html b/x86/xorps.html new file mode 100644 index 0000000..575647d --- /dev/null +++ b/x86/xorps.html @@ -0,0 +1,178 @@ + +XORPS + — Bitwise Logical XOR of Packed Single Precision Floating-Point Values

XORPS + — Bitwise Logical XOR of Packed Single Precision Floating-Point Values

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp / En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F 57 /r XORPS xmm1, xmm2/m128AV/VSSEReturn the bitwise logical XOR of packed single-precision floating-point values in xmm1 and xmm2/mem.
VEX.128.0F.WIG 57 /r VXORPS xmm1,xmm2, xmm3/m128BV/VAVXReturn the bitwise logical XOR of packed single-precision floating-point values in xmm2 and xmm3/mem.
VEX.256.0F.WIG 57 /r VXORPS ymm1, ymm2, ymm3/m256BV/VAVXReturn the bitwise logical XOR of packed single-precision floating-point values in ymm2 and ymm3/mem.
EVEX.128.0F.W0 57 /r VXORPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcstCV/VAVX512VL AVX512DQReturn the bitwise logical XOR of packed single-precision floating-point values in xmm2 and xmm3/m128/m32bcst subject to writemask k1.
EVEX.256.0F.W0 57 /r VXORPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcstCV/VAVX512VL AVX512DQReturn the bitwise logical XOR of packed single-precision floating-point values in ymm2 and ymm3/m256/m32bcst subject to writemask k1.
EVEX.512.0F.W0 57 /r VXORPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcstCV/VAVX512DQReturn the bitwise logical XOR of packed single-precision floating-point values in zmm2 and zmm3/m512/m32bcst subject to writemask k1.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Op/EnTuple TypeOperand 1Operand 2Operand 3Operand 4
AN/AModRM:reg (r, w)ModRM:r/m (r)N/AN/A
BN/AModRM:reg (w)VEX.vvvv (r)ModRM:r/m (r)N/A
CFullModRM:reg (w)EVEX.vvvv (r)ModRM:r/m (r)N/A
+

Description + ¶ +

+

Performs a bitwise logical XOR of the four, eight or sixteen packed single-precision floating-point values from the first source operand and the second source operand, and stores the result in the destination operand.

+

EVEX.512 encoded version: The first source operand is a ZMM register. The second source operand can be a ZMM register or a vector memory location. The destination operand is a ZMM register conditionally updated with write-mask k1.

+

VEX.256 and EVEX.256 encoded versions: The first source operand is a YMM register. The second source operand is a YMM register or a 256-bit memory location. The destination operand is a YMM register (conditionally updated with writemask k1 in case of EVEX). The upper bits (MAXVL-1:256) of the corresponding ZMM register destination are zeroed.

+

VEX.128 and EVEX.128 encoded versions: The first source operand is an XMM register. The second source operand is an XMM register or 128-bit memory location. The destination operand is an XMM register (conditionally updated with writemask k1 in case of EVEX). The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed.

+

128-bit Legacy SSE version: The second source can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding register destination are unmodified.

+

Operation + ¶ +

+

VXORPS (EVEX Encoded Versions) + ¶ +

+
(KL, VL) = (4, 128), (8, 256), (16, 512)
+FOR j := 0 TO KL-1
+    i := j * 32
+    IF k1[j] OR *no writemask* THEN
+            IF (EVEX.b == 1) AND (SRC2 *is memory*)
+                THEN DEST[i+31:i] := SRC1[i+31:i] BITWISE XOR SRC2[31:0];
+                ELSE DEST[i+31:i] := SRC1[i+31:i] BITWISE XOR SRC2[i+31:i];
+            FI;
+        ELSE
+            IF *merging-masking* ; merging-masking
+                THEN *DEST[i+31:i] remains unchanged*
+                ELSE *zeroing-masking*
+                        ; zeroing-masking
+                    DEST[i+31:i] = 0
+            FI
+    FI;
+ENDFOR
+DEST[MAXVL-1:VL] := 0
+
+

VXORPS (VEX.256 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] BITWISE XOR SRC2[31:0]
+DEST[63:32] := SRC1[63:32] BITWISE XOR SRC2[63:32]
+DEST[95:64] := SRC1[95:64] BITWISE XOR SRC2[95:64]
+DEST[127:96] := SRC1[127:96] BITWISE XOR SRC2[127:96]
+DEST[159:128] := SRC1[159:128] BITWISE XOR SRC2[159:128]
+DEST[191:160] := SRC1[191:160] BITWISE XOR SRC2[191:160]
+DEST[223:192] := SRC1[223:192] BITWISE XOR SRC2[223:192]
+DEST[255:224] := SRC1[255:224] BITWISE XOR SRC2[255:224].
+DEST[MAXVL-1:256] := 0
+
+

VXORPS (VEX.128 Encoded Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] BITWISE XOR SRC2[31:0]
+DEST[63:32] := SRC1[63:32] BITWISE XOR SRC2[63:32]
+DEST[95:64] := SRC1[95:64] BITWISE XOR SRC2[95:64]
+DEST[127:96] := SRC1[127:96] BITWISE XOR SRC2[127:96]
+DEST[MAXVL-1:128] := 0
+
+

XORPS (128-bit Legacy SSE Version) + ¶ +

+
DEST[31:0] := SRC1[31:0] BITWISE XOR SRC2[31:0]
+DEST[63:32] := SRC1[63:32] BITWISE XOR SRC2[63:32]
+DEST[95:64] := SRC1[95:64] BITWISE XOR SRC2[95:64]
+DEST[127:96] := SRC1[127:96] BITWISE XOR SRC2[127:96]
+DEST[MAXVL-1:128] (Unmodified)
+
+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
VXORPS __m512 _mm512_xor_ps (__m512 a, __m512 b);
+
+
VXORPS __m512 _mm512_mask_xor_ps (__m512 a, __mmask16 m, __m512 b);
+
+
VXORPS __m512 _mm512_maskz_xor_ps (__mmask16 m, __m512 a);
+
+
VXORPS __m256 _mm256_xor_ps (__m256 a, __m256 b);
+
+
VXORPS __m256 _mm256_mask_xor_ps (__m256 a, __mmask8 m, __m256 b);
+
+
VXORPS __m256 _mm256_maskz_xor_ps (__mmask8 m, __m256 a);
+
+
XORPS __m128 _mm_xor_ps (__m128 a, __m128 b);
+
+
VXORPS __m128 _mm_mask_xor_ps (__m128 a, __mmask8 m, __m128 b);
+
+
VXORPS __m128 _mm_maskz_xor_ps (__mmask8 m, __m128 a);
+
+
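As a hedged usage sketch, the SSE intrinsic _mm_xor_ps listed above can flip the sign bits of four packed single-precision values; it assumes a compiler providing <xmmintrin.h>. (Compilers also commonly materialize an all-zero register by XORing it with itself, which is why XORPS frequently appears in generated code.)

#include <xmmintrin.h>   /* SSE: _mm_xor_ps, _mm_set1_ps, _mm_set_ps, _mm_storeu_ps */
#include <stdio.h>

int main(void) {
    const __m128 sign_mask = _mm_set1_ps(-0.0f);      /* 80000000H in each 32-bit lane */
    __m128 v = _mm_set_ps(4.0f, -3.0f, 2.0f, -1.0f);
    __m128 negated = _mm_xor_ps(v, sign_mask);        /* XORPS flips the four sign bits */

    float out[4];
    _mm_storeu_ps(out, negated);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);   /* 1 -2 3 -4 */
    return 0;
}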

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+

Non-EVEX-encoded instructions, see Table 2-21, “Type 4 Class Exception Conditions.”

+

EVEX-encoded instructions, see Table 2-49, “Type E4 Class Exception Conditions.”

diff --git a/x86/xresldtrk.html b/x86/xresldtrk.html new file mode 100644 index 0000000..bc1ea94 --- /dev/null +++ b/x86/xresldtrk.html @@ -0,0 +1,82 @@ + +XRESLDTRK + — Resume Tracking Load Addresses

XRESLDTRK + — Resume Tracking Load Addresses

+ + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
F2 0F 01 E9 XRESLDTRKZOV/VTSXLDTRKSpecifies the end of an Intel TSX suspend read address tracking region.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/EnTupleOperand 1Operand 2Operand 3Operand 4
ZON/AN/AN/AN/AN/A
+

Description + ¶ +

+

The instruction marks the end of an Intel TSX (RTM) suspend load address tracking region. If the instruction is used inside a suspend load address tracking region, it ends the suspend region and all subsequent load addresses are added to the transaction read set. If this instruction is used inside an active transaction but not in a suspend region, it causes a transaction abort.

+

If the instruction is used outside of a transactional region it behaves like a NOP.

+

Chapter 16, “Programming with Intel® Transactional Synchronization Extensions‚” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1 provides additional information on Intel® TSX Suspend Load Address Tracking.

+

Operation + ¶ +

+

XRESLDTRK + ¶ +

+
IF RTM_ACTIVE = 1:
+    IF SUSLDTRK_ACTIVE = 1:
+        SUSLDTRK_ACTIVE := 0
+    ELSE:
+        RTM_ABORT
+ELSE:
+    NOP
+
+

Flags Affected + ¶ +

+

None.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
XRESLDTRK void _xresldtrk(void);
+
+
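A hedged usage sketch follows: it pairs _xresldtrk() with the _xsusldtrk() intrinsic of the companion XSUSLDTRK instruction inside an RTM transaction. It assumes a GCC/Clang-style toolchain (flags such as -mrtm and -mtsxldtrk) and hardware supporting both RTM and TSXLDTRK; the variable names are placeholders.

#include <immintrin.h>   /* _xbegin/_xend (RTM), _xsusldtrk/_xresldtrk (TSXLDTRK) */
#include <stdio.h>

static int counter;

int main(void) {
    unsigned status = _xbegin();
    if (status == _XBEGIN_STARTED) {
        counter++;                        /* inside the transaction: tracked normally */

        _xsusldtrk();                     /* XSUSLDTRK: stop adding load addresses to the read set */
        volatile int probe = counter;     /* loads here are not added to the read set */
        (void)probe;
        _xresldtrk();                     /* XRESLDTRK: resume read-set tracking */

        _xend();
    } else {
        counter++;                        /* fallback path when the transaction aborts */
    }
    printf("%d\n", counter);
    return 0;
}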

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+ + + + + +
#UDIf CPUID.(EAX=7, ECX=0):EDX.TSXLDTRK[bit 16] = 0.
If the LOCK prefix is used.
diff --git a/x86/xrstor.html b/x86/xrstor.html new file mode 100644 index 0000000..139a5ae --- /dev/null +++ b/x86/xrstor.html @@ -0,0 +1,299 @@ + +XRSTOR + — Restore Processor Extended States

XRSTOR + — Restore Processor Extended States

+ + + + + + + + + + + + + + + + + + + +
Opcode / InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F AE /5 XRSTOR memMV/VXSAVERestore state components specified by EDX:EAX from mem.
NP REX.W + 0F AE /5 XRSTOR64 memMV/N.E.XSAVERestore state components specified by EDX:EAX from mem.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r)N/AN/AN/A
+

Description + ¶ +

+

Performs a full or partial restore of processor state components from the XSAVE area located at the memory address specified by the source operand. The implicit EDX:EAX register pair specifies a 64-bit instruction mask. The specific state components restored correspond to the bits set in the requested-feature bitmap (RFBM), which is the logical-AND of EDX:EAX and XCR0.

+

The format of the XSAVE area is detailed in Section 13.4, “XSAVE Area,” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1. Like FXRSTOR and FXSAVE, the memory format used for x87 state depends on a REX.W prefix; see Section 13.5.1, “x87 State” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1.

+

Section 13.8, “Operation of XRSTOR,” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1 provides a detailed description of the operation of the XRSTOR instruction. The following items provide a high-level outline:

+
    +
  • Execution of XRSTOR may take one of two forms: standard and compacted. Bit 63 of the XCOMP_BV field in the XSAVE header determines which form is used: value 0 specifies the standard form, while value 1 specifies the compacted form.
  • +
  • If RFBM[i] = 0, XRSTOR does not update state component i.1
  • +
  • If RFBM[i] = 1 and bit i is clear in the XSTATE_BV field in the XSAVE header, XRSTOR initializes state component i.
  • +
  • If RFBM[i] = 1 and XSTATE_BV[i] = 1, XRSTOR loads state component i from the XSAVE area.
  • +
  • The standard form of XRSTOR treats MXCSR (which is part of state component 1 — SSE) differently from the XMM registers. If either form attempts to load MXCSR with an illegal value, a general-protection exception (#GP) occurs.
  • +
  • XRSTOR loads the internal value XRSTOR_INFO, which may be used to optimize a subsequent execution of XSAVEOPT or XSAVES.
  • +
  • Immediately following an execution of XRSTOR, the processor tracks as in-use (not in initial configuration) any state component i for which RFBM[i] = 1 and XSTATE_BV[i] = 1; it tracks as modified any state component i for which RFBM[i] = 0.
+

Use of a source operand not aligned to a 64-byte boundary (in either 64-bit or 32-bit modes) results in a general-protection (#GP) exception. In 64-bit mode, the upper 32 bits of RDX and RAX are ignored.

+

See Section 13.6, “Processor Tracking of XSAVE-Managed State,” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1 for discussion of the bitmaps XINUSE and XMODIFIED and of the quantity XRSTOR_INFO.

+
+

1. There is an exception if RFBM[1] = 0 and RFBM[2] = 1. In this case, the standard form of XRSTOR will load MXCSR from memory, even though MXCSR is part of state component 1 — SSE. The compacted form of XRSTOR does not make this exception.
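A hedged user-space sketch of a save/restore round trip with the _xrstor intrinsic is shown below. It assumes a GCC/Clang-style compiler (flags such as -mxsave and an available _xgetbv intrinsic), and it uses a fixed 4096-byte area only for brevity; production code should size the area from CPUID leaf 0DH instead.

#include <immintrin.h>   /* _xsave, _xrstor, _xgetbv */
#include <stdlib.h>
#include <string.h>

int main(void) {
    void *area = aligned_alloc(64, 4096);          /* XSAVE area must be 64-byte aligned */
    if (!area) return 1;
    memset(area, 0, 4096);                         /* XSAVE header (bytes 575:512) must start out zero */

    unsigned long long mask = _xgetbv(0) & 0x7;    /* RFBM = XCR0 AND EDX:EAX (here x87, SSE, AVX) */
    _xsave(area, mask);                            /* standard-format save */
    /* ... code that may clobber x87/SSE/AVX state ... */
    _xrstor(area, mask);                           /* restore the same components */

    free(area);
    return 0;
}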

+

Operation + ¶ +

+
RFBM := XCR0 AND EDX:EAX; /* bitwise logical AND */
+COMPMASK := XCOMP_BV field from XSAVE header;
+RSTORMASK := XSTATE_BV field from XSAVE header;
+IF COMPMASK[63] = 0
+    THEN
+        /* Standard form of XRSTOR */
+        TO_BE_RESTORED := RFBM AND RSTORMASK;
+        TO_BE_INITIALIZED := RFBM AND NOT RSTORMASK;
+        IF TO_BE_RESTORED[0] = 1
+            THEN
+                XINUSE[0] := 1;
+                load x87 state from legacy region of XSAVE area;
+        ELSIF TO_BE_INITIALIZED[0] = 1
+            THEN
+                XINUSE[0] := 0;
+                initialize x87 state;
+        FI;
+        IF RFBM[1] = 1 OR RFBM[2] = 1
+            THEN load MXCSR from legacy region of XSAVE area;
+        FI;
+        IF TO_BE_RESTORED[1] = 1
+            THEN
+                XINUSE[1] := 1;
+                load XMM registers from legacy region of XSAVE area; // this step does not load MXCSR
+        ELSIF TO_BE_INITIALIZED[1] = 1
+            THEN
+                XINUSE[1] := 0;
+                set all XMM registers to 0; // this step does not initialize MXCSR
+        FI;
+        FOR i := 2 TO 62
+            IF TO_BE_RESTORED[i] = 1
+                THEN
+                    XINUSE[i] := 1;
+                    load XSAVE state component i at offset n from base of XSAVE area;
+                        // n enumerated by CPUID(EAX=0DH,ECX=i):EBX)
+            ELSIF TO_BE_INITIALIZED[i] = 1
+                THEN
+                    XINUSE[i] := 0;
+                    initialize XSAVE state component i;
+            FI;
+        ENDFOR;
+    ELSE
+        /* Compacted form of XRSTOR */
+        IF CPUID.(EAX=0DH,ECX=1):EAX.XSAVEC[bit 1] = 0
+            THEN /* compacted form not supported */
+                #GP(0);
+        FI;
+        FORMAT = COMPMASK AND 7FFFFFFF_FFFFFFFFH;
+        RESTORE_FEATURES = FORMAT AND RFBM;
+        TO_BE_RESTORED := RESTORE_FEATURES AND RSTORMASK;
+        FORCE_INIT := RFBM AND NOT FORMAT;
+        TO_BE_INITIALIZED = (RFBM AND NOT RSTORMASK) OR FORCE_INIT;
+        IF TO_BE_RESTORED[0] = 1
+            THEN
+                XINUSE[0] := 1;
+                load x87 state from legacy region of XSAVE area;
+        ELSIF TO_BE_INITIALIZED[0] = 1
+            THEN
+                XINUSE[0] := 0;
+                initialize x87 state;
+        FI;
+        IF TO_BE_RESTORED[1] = 1
+            THEN
+                XINUSE[1] := 1;
+                load SSE state from legacy region of XSAVE area; // this step loads the XMM registers and MXCSR
+        ELSIF TO_BE_INITIALIZED[1] = 1
+            THEN
+                set all XMM registers to 0;
+                XINUSE[1] := 0;
+                MXCSR := 1F80H;
+        FI;
+        NEXT_FEATURE_OFFSET = 576;
+                                // Legacy area and XSAVE header consume 576 bytes
+        FOR i := 2 TO 62
+            IF FORMAT[i] = 1
+                THEN
+                    IF TO_BE_RESTORED[i] = 1
+                        THEN
+                            XINUSE[i] := 1;
+                            load XSAVE state component i at offset NEXT_FEATURE_OFFSET from base of XSAVE area;
+                    FI;
+                    NEXT_FEATURE_OFFSET = NEXT_FEATURE_OFFSET + n (n enumerated by CPUID(EAX=0DH,ECX=i):EAX);
+            FI;
+            IF TO_BE_INITIALIZED[i] = 1
+                THEN
+                    XINUSE[i] := 0;
+                    initialize XSAVE state component i;
+            FI;
+        ENDFOR;
+FI;
+XMODIFIED := NOT RFBM;
+IF in VMX non-root operation
+    THEN VMXNR := 1;
+    ELSE VMXNR := 0;
+FI;
+LAXA := linear address of XSAVE area;
+XRSTOR_INFO := CPL,VMXNR,LAXA,COMPMASK;
+
+

Flags Affected + ¶ +

+

None.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
XRSTOR void _xrstor( void * , unsigned __int64);
+
+
XRSTOR void _xrstor64( void * , unsigned __int64);
+
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If a memory operand is not aligned on a 64-byte boundary, regardless of segment.
If bit 63 of the XCOMP_BV field of the XSAVE header is 1 and CPUID.(EAX=0DH,ECX=1):EAX.XSAVEC[bit 1] = 0.
If the standard form is executed and a bit in XCR0 is 0 and the corresponding bit in the XSTATE_BV field of the XSAVE header is 1.
If the standard form is executed and bytes 23:8 of the XSAVE header are not all zero.
If the compacted form is executed and a bit in XCR0 is 0 and the corresponding bit in the XCOMP_BV field of the XSAVE header is 1.
If the compacted form is executed and a bit in the XCOMP_BV field in the XSAVE header is 0 and the corresponding bit in the XSTATE_BV field is 1.
If the compacted form is executed and bytes 63:16 of the XSAVE header are not all zero.
If attempting to write any reserved bits of the MXCSR register with 1.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#NMIf CR0.TS[bit 3] = 1.
#UDIf CPUID.01H:ECX.XSAVE[bit 26] = 0.
If CR4.OSXSAVE[bit 18] = 0.
If the LOCK prefix is used.
#ACIf this exception is disabled a general protection exception (#GP) is signaled if the memory operand is not aligned on a 64-byte boundary, as described above. If the alignment check exception (#AC) is enabled (and the CPL is 3), signaling of #AC is not guaranteed and may vary with implementation, as follows. In all implementations where #AC is not signaled, a general protection exception is signaled in its place. In addition, the width of the alignment check may also vary with implementation. For instance, for a given implementation, an alignment check exception might be signaled for a 2-byte misalignment, whereas a general protection exception might be signaled for all other misalignments (4-, 8-, or 16-byte misalignments).
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GPIf a memory operand is not aligned on a 64-byte boundary, regardless of segment.
If any part of the operand lies outside the effective address space from 0 to FFFFH.
If bit 63 of the XCOMP_BV field of the XSAVE header is 1 and CPUID.(EAX=0DH,ECX=1):EAX.XSAVEC[bit 1] = 0.
If the standard form is executed and a bit in XCR0 is 0 and the corresponding bit in the XSTATE_BV field of the XSAVE header is 1.
If the standard form is executed and bytes 23:8 of the XSAVE header are not all zero.
If the compacted form is executed and a bit in XCR0 is 0 and the corresponding bit in the XCOMP_BV field of the XSAVE header is 1.
If the compacted form is executed and a bit in the XCOMP_BV field in the XSAVE header is 0 and the corresponding bit in the XSTATE_BV field is 1.
If the compacted form is executed and bytes 63:16 of the XSAVE header are not all zero.
If attempting to write any reserved bits of the MXCSR register with 1.
#NMIf CR0.TS[bit 3] = 1.
#UDIf CPUID.01H:ECX.XSAVE[bit 26] = 0.
If CR4.OSXSAVE[bit 18] = 0.
If the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory address is in a non-canonical form.
If a memory operand is not aligned on a 64-byte boundary, regardless of segment.
If bit 63 of the XCOMP_BV field of the XSAVE header is 1 and CPUID.(EAX=0DH,ECX=1):EAX.XSAVEC[bit 1] = 0.
If the standard form is executed and a bit in XCR0 is 0 and the corresponding bit in the XSTATE_BV field of the XSAVE header is 1.
If the standard form is executed and bytes 23:8 of the XSAVE header are not all zero.
If the compacted form is executed and a bit in XCR0 is 0 and the corresponding bit in the XCOMP_BV field of the XSAVE header is 1.
If the compacted form is executed and a bit in the XCOMP_BV field in the XSAVE header is 0 and the corresponding bit in the XSTATE_BV field is 1.
If the compacted form is executed and bytes 63:16 of the XSAVE header are not all zero.
If attempting to write any reserved bits of the MXCSR register with 1.
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#NMIf CR0.TS[bit 3] = 1.
#UDIf CPUID.01H:ECX.XSAVE[bit 26] = 0.
If CR4.OSXSAVE[bit 18] = 0.
If the LOCK prefix is used.
#ACIf this exception is disabled a general protection exception (#GP) is signaled if the memory operand is not aligned on a 64-byte boundary, as described above. If the alignment check exception (#AC) is enabled (and the CPL is 3), signaling of #AC is not guaranteed and may vary with implementation, as follows. In all implementations where #AC is not signaled, a general protection exception is signaled in its place. In addition, the width of the alignment check may also vary with implementation. For instance, for a given implementation, an alignment check exception might be signaled for a 2-byte misalignment, whereas a general protection exception might be signaled for all other misalignments (4-, 8-, or 16-byte misalignments).
diff --git a/x86/xrstors.html b/x86/xrstors.html new file mode 100644 index 0000000..2578c95 --- /dev/null +++ b/x86/xrstors.html @@ -0,0 +1,240 @@ + +XRSTORS + — Restore Processor Extended States Supervisor

XRSTORS + — Restore Processor Extended States Supervisor

+ + + + + + + + + + + + + + + + + + + +
Opcode / InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F C7 /3 XRSTORS memMV/VXSSRestore state components specified by EDX:EAX from mem.
NP REX.W + 0F C7 /3 XRSTORS64 memMV/N.E.XSSRestore state components specified by EDX:EAX from mem.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r)N/AN/AN/A
+

Description + ¶ +

+

Performs a full or partial restore of processor state components from the XSAVE area located at the memory address specified by the source operand. The implicit EDX:EAX register pair specifies a 64-bit instruction mask. The specific state components restored correspond to the bits set in the requested-feature bitmap (RFBM), which is the logical-AND of EDX:EAX and the logical-OR of XCR0 with the IA32_XSS MSR. XRSTORS may be executed only if CPL = 0.

+

The format of the XSAVE area is detailed in Section 13.4, “XSAVE Area,” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1. Like FXRSTOR and FXSAVE, the memory format used for x87 state depends on a REX.W prefix; see Section 13.5.1, “x87 State” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1.

+

Section 13.12, “Operation of XRSTORS,” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1 provides a detailed description of the operation of the XRSTORS instruction. The following items provide a high-level outline:

+
    +
  • Execution of XRSTORS is similar to that of the compacted form of XRSTOR; XRSTORS cannot restore from an XSAVE area in which the extended region is in the standard format (see Section 13.4.3, “Extended Region of an XSAVE Area” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1).
  • +
  • XRSTORS differs from XRSTOR in that it can restore state components corresponding to bits set in the IA32_XSS MSR.
  • +
  • If RFBM[i] = 0, XRSTORS does not update state component i.
  • +
  • If RFBM[i] = 1 and bit i is clear in the XSTATE_BV field in the XSAVE header, XRSTORS initializes state component i.
  • +
  • If RFBM[i] = 1 and XSTATE_BV[i] = 1, XRSTORS loads state component i from the XSAVE area.
  • +
  • If XRSTORS attempts to load MXCSR with an illegal value, a general-protection exception (#GP) occurs.
  • +
  • XRSTORS loads the internal value XRSTOR_INFO, which may be used to optimize a subsequent execution of XSAVEOPT or XSAVES.
  • +
  • Immediately following an execution of XRSTORS, the processor tracks as in-use (not in initial configuration) any state component i for which RFBM[i] = 1 and XSTATE_BV[i] = 1; it tracks as modified any state component i for which RFBM[i] = 0.
+

Use of a source operand not aligned to a 64-byte boundary (in either 64-bit or 32-bit modes) results in a general-protection (#GP) exception. In 64-bit mode, the upper 32 bits of RDX and RAX are ignored.

+

See Section 13.6, “Processor Tracking of XSAVE-Managed State,” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1 for discussion of the bitmaps XINUSE and XMODIFIED and of the quantity XRSTOR_INFO.
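Because XRSTORS requires CPL = 0, there is no user-space intrinsic; supervisor code typically issues the instruction with inline assembly. The following is only a hedged sketch: the GNU-style mnemonic, the constraints, and the helper name are illustrative and depend on the assembler and kernel environment in use.

#include <stdint.h>

static inline void xrstors64_area(void *xsave_area, uint64_t rfbm)
{
    uint32_t lo = (uint32_t)rfbm;            /* EAX := RFBM[31:0]  */
    uint32_t hi = (uint32_t)(rfbm >> 32);    /* EDX := RFBM[63:32] */
    __asm__ volatile("xrstors64 %0"
                     : /* no outputs */
                     : "m" (*(const char *)xsave_area), "a" (lo), "d" (hi)
                     : "memory");
}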

+

Operation + ¶ +

+
RFBM := (XCR0 OR IA32_XSS) AND EDX:EAX;
+                            /* bitwise logical OR and AND */
+COMPMASK := XCOMP_BV field from XSAVE header;
+RSTORMASK := XSTATE_BV field from XSAVE header;
+FORMAT = COMPMASK AND 7FFFFFFF_FFFFFFFFH;
+RESTORE_FEATURES = FORMAT AND RFBM;
+TO_BE_RESTORED := RESTORE_FEATURES AND RSTORMASK;
+FORCE_INIT := RFBM AND NOT FORMAT;
+TO_BE_INITIALIZED = (RFBM AND NOT RSTORMASK) OR FORCE_INIT;
+IF TO_BE_RESTORED[0] = 1
+    THEN
+        XINUSE[0] := 1;
+        load x87 state from legacy region of XSAVE area;
+ELSIF TO_BE_INITIALIZED[0] = 1
+    THEN
+        XINUSE[0] := 0;
+        initialize x87 state;
+FI;
+IF TO_BE_RESTORED[1] = 1
+    THEN
+        XINUSE[1] := 1;
+        load SSE state from legacy region of XSAVE area; // this step loads the XMM registers and MXCSR
+ELSIF TO_BE_INITIALIZED[1] = 1
+    THEN
+        set all XMM registers to 0;
+        XINUSE[1] := 0;
+        MXCSR := 1F80H;
+FI;
+NEXT_FEATURE_OFFSET = 576;
+                        // Legacy area and XSAVE header consume 576 bytes
+FOR i := 2 TO 62
+    IF FORMAT[i] = 1
+        THEN
+            IF TO_BE_RESTORED[i] = 1
+                THEN
+                    XINUSE[i] := 1;
+                    load XSAVE state component i at offset NEXT_FEATURE_OFFSET from base of XSAVE area;
+            FI;
+            NEXT_FEATURE_OFFSET = NEXT_FEATURE_OFFSET + n (n enumerated by CPUID(EAX=0DH,ECX=i):EAX);
+    FI;
+    IF TO_BE_INITIALIZED[i] = 1
+        THEN
+            XINUSE[i] := 0;
+            initialize XSAVE state component i;
+    FI;
+ENDFOR;
+XMODIFIED := NOT RFBM;
+IF in VMX non-root operation
+    THEN VMXNR := 1;
+    ELSE VMXNR := 0;
+FI;
+LAXA := linear address of XSAVE area;
+XRSTOR_INFO := CPL,VMXNR,LAXA,COMPMASK;
+
+

Flags Affected + ¶ +

+

None.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
XRSTORS void _xrstors( void * , unsigned __int64);
+
+
XRSTORS64 void _xrstors64( void * , unsigned __int64);
+
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If CPL > 0.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If a memory operand is not aligned on a 64-byte boundary, regardless of segment.
If bit 63 of the XCOMP_BV field of the XSAVE header is 0.
If a bit in XCR0|IA32_XSS is 0 and the corresponding bit in the XCOMP_BV field of the XSAVE header is 1.
If a bit in the XCOMP_BV field in the XSAVE header is 0 and the corresponding bit in the XSTATE_BV field is 1.
If bytes 63:16 of the XSAVE header are not all zero.
If attempting to write any reserved bits of the MXCSR register with 1.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#NMIf CR0.TS[bit 3] = 1.
#UDIf CPUID.01H:ECX.XSAVE[bit 26] = 0 or CPUID.(EAX=0DH,ECX=1):EAX.XSS[bit 3] = 0.
If CR4.OSXSAVE[bit 18] = 0.
If the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + +
#GPIf a memory operand is not aligned on a 64-byte boundary, regardless of segment.
If any part of the operand lies outside the effective address space from 0 to FFFFH.
If bit 63 of the XCOMP_BV field of the XSAVE header is 0.
If a bit in XCR0|IA32_XSS is 0 and the corresponding bit in the XCOMP_BV field of the XSAVE header is 1.
If a bit in the XCOMP_BV field in the XSAVE header is 0 and the corresponding bit in the XSTATE_BV field is 1.
If bytes 63:16 of the XSAVE header are not all zero.
If attempting to write any reserved bits of the MXCSR register with 1.
#NMIf CR0.TS[bit 3] = 1.
#UDIf CPUID.01H:ECX.XSAVE[bit 26] = 0 or CPUID.(EAX=0DH,ECX=1):EAX.XSS[bit 3] = 0.
If CR4.OSXSAVE[bit 18] = 0.
If the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If CPL > 0.
If a memory address is in a non-canonical form.
If a memory operand is not aligned on a 64-byte boundary, regardless of segment.
If bit 63 of the XCOMP_BV field of the XSAVE header is 0.
If a bit in XCR0|IA32_XSS is 0 and the corresponding bit in the XCOMP_BV field of the XSAVE header is 1.
If a bit in the XCOMP_BV field in the XSAVE header is 0 and the corresponding bit in the XSTATE_BV field is 1.
If bytes 63:16 of the XSAVE header are not all zero.
If attempting to write any reserved bits of the MXCSR register with 1.
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#NMIf CR0.TS[bit 3] = 1.
#UDIf CPUID.01H:ECX.XSAVE[bit 26] = 0 or CPUID.(EAX=0DH,ECX=1):EAX.XSS[bit 3] = 0.
If CR4.OSXSAVE[bit 18] = 0.
If the LOCK prefix is used.
diff --git a/x86/xsave.html b/x86/xsave.html new file mode 100644 index 0000000..f624b63 --- /dev/null +++ b/x86/xsave.html @@ -0,0 +1,173 @@ + +XSAVE + — Save Processor Extended States

XSAVE + — Save Processor Extended States

+ + + + + + + + + + + + + + + + + + + +
Opcode / InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F AE /4 XSAVE memMV/VXSAVESave state components specified by EDX:EAX to mem.
NP REX.W + 0F AE /4 XSAVE64 memMV/N.E.XSAVESave state components specified by EDX:EAX to mem.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r, w)N/AN/AN/A
+

Description + ¶ +

+

Performs a full or partial save of processor state components to the XSAVE area located at the memory address specified by the destination operand. The implicit EDX:EAX register pair specifies a 64-bit instruction mask. The specific state components saved correspond to the bits set in the requested-feature bitmap (RFBM), which is the logical-AND of EDX:EAX and XCR0.

+

The format of the XSAVE area is detailed in Section 13.4, “XSAVE Area,” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1. Like FXRSTOR and FXSAVE, the memory format used for x87 state depends on a REX.W prefix; see Section 13.5.1, “x87 State” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1.

+

Section 13.7, “Operation of XSAVE,” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1 provides a detailed description of the operation of the XSAVE instruction. The following items provide a high-level outline:

+
    +
  • XSAVE saves state component i if and only if RFBM[i] = 1.1
  • +
  • XSAVE does not modify bytes 511:464 of the legacy region of the XSAVE area (see Section 13.4.1, “Legacy Region of an XSAVE Area” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1).
  • +
  • XSAVE reads the XSTATE_BV field of the XSAVE header (see Section 13.4.2, “XSAVE Header” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1) and writes a modified value back to memory as follows. If RFBM[i] = 1, XSAVE writes XSTATE_BV[i] with the value of XINUSE[i]. (XINUSE is a bitmap by which the processor tracks the status of various state components. See Section 13.6, “Processor Tracking of XSAVE-Managed State” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1.) If RFBM[i] = 0, XSAVE writes XSTATE_BV[i] with the value that it read from memory (it does not modify the bit). XSAVE does not write to any part of the XSAVE header other than the XSTATE_BV field.
  • +
  • XSAVE always uses the standard format of the extended region of the XSAVE area (see Section 13.4.3, “Extended Region of an XSAVE Area” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1).
+
+

1. An exception is made for MXCSR and MXCSR_MASK, which belong to state component 1 — SSE. XSAVE saves these values to memory if either RFBM[1] or RFBM[2] is 1.

+

Use of a destination operand not aligned to a 64-byte boundary (in either 64-bit or 32-bit modes) results in a general-protection (#GP) exception. In 64-bit mode, the upper 32 bits of RDX and RAX are ignored.
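A hedged sketch of sizing and performing a user-state save follows. It assumes a GCC/Clang-style toolchain (<cpuid.h>, flags such as -mxsave, and an available _xgetbv intrinsic); error handling is minimal.

#include <cpuid.h>       /* __get_cpuid_count */
#include <immintrin.h>   /* _xsave, _xgetbv */
#include <stdlib.h>
#include <string.h>

int main(void) {
    unsigned eax, ebx, ecx, edx;
    if (!__get_cpuid_count(0x0D, 0, &eax, &ebx, &ecx, &edx)) return 1;
    size_t size = ecx;                       /* ECX: size needed if every XCR0-supported component were enabled */

    void *area = aligned_alloc(64, (size + 63) & ~(size_t)63);
    if (!area) return 1;
    memset(area, 0, size);                   /* the XSAVE header must start out zero */

    unsigned long long rfbm = _xgetbv(0);    /* EDX:EAX := XCR0, so RFBM = XCR0 */
    _xsave(area, rfbm);                      /* standard-format save of all enabled user state */

    free(area);
    return 0;
}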

+

Operation + ¶ +

+
RFBM := XCR0 AND EDX:EAX; /* bitwise logical AND */
+OLD_BV := XSTATE_BV field from XSAVE header;
+IF RFBM[0] = 1
+    THEN store x87 state into legacy region of XSAVE area;
+FI;
+IF RFBM[1] = 1
+    THEN store XMM registers into legacy region of XSAVE area; // this step does not save MXCSR or MXCSR_MASK
+FI;
+IF RFBM[1] = 1 OR RFBM[2] = 1
+    THEN store MXCSR and MXCSR_MASK into legacy region of XSAVE area;
+FI;
+FOR i := 2 TO 62
+    IF RFBM[i] = 1
+        THEN save XSAVE state component i at offset n from base of XSAVE area (n enumerated by CPUID(EAX=0DH,ECX=i):EBX);
+    FI;
+ENDFOR;
+XSTATE_BV field in XSAVE header := (OLD_BV AND NOT RFBM) OR (XINUSE AND RFBM);
+
+

Flags Affected + ¶ +

+

None.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
XSAVE void _xsave( void * , unsigned __int64);
+
+
XSAVE void _xsave64( void * , unsigned __int64);
+
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If a memory operand is not aligned on a 64-byte boundary, regardless of segment.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#NMIf CR0.TS[bit 3] = 1.
#UDIf CPUID.01H:ECX.XSAVE[bit 26] = 0.
If CR4.OSXSAVE[bit 18] = 0.
If the LOCK prefix is used.
#ACIf this exception is disabled a general protection exception (#GP) is signaled if the memory operand is not aligned on a 64-byte boundary, as described above. If the alignment check exception (#AC) is enabled (and the CPL is 3), signaling of #AC is not guaranteed and may vary with implementation, as follows. In all implementations where #AC is not signaled, a general protection exception is signaled in its place. In addition, the width of the alignment check may also vary with implementation. For instance, for a given implementation, an alignment check exception might be signaled for a 2-byte misalignment, whereas a general protection exception might be signaled for all other misalignments (4-, 8-, or 16-byte misalignments).
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GPIf a memory operand is not aligned on a 64-byte boundary, regardless of segment.
If any part of the operand lies outside the effective address space from 0 to FFFFH.
#NMIf CR0.TS[bit 3] = 1.
#UDIf CPUID.01H:ECX.XSAVE[bit 26] = 0.
If CR4.OSXSAVE[bit 18] = 0.
If the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the memory address is in a non-canonical form.
If a memory operand is not aligned on a 64-byte boundary, regardless of segment.
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#NMIf CR0.TS[bit 3] = 1.
#UDIf CPUID.01H:ECX.XSAVE[bit 26] = 0.
If CR4.OSXSAVE[bit 18] = 0.
If the LOCK prefix is used.
#ACIf this exception is disabled a general protection exception (#GP) is signaled if the memory operand is not aligned on a 64-byte boundary, as described above. If the alignment check exception (#AC) is enabled (and the CPL is 3), signaling of #AC is not guaranteed and may vary with implementation, as follows. In all implementations where #AC is not signaled, a general protection exception is signaled in its place. In addition, the width of the alignment check may also vary with implementation. For instance, for a given implementation, an alignment check exception might be signaled for a 2-byte misalignment, whereas a general protection exception might be signaled for all other misalignments (4-, 8-, or 16-byte misalignments).
diff --git a/x86/xsavec.html b/x86/xsavec.html new file mode 100644 index 0000000..459d31f --- /dev/null +++ b/x86/xsavec.html @@ -0,0 +1,185 @@ + +XSAVEC + — Save Processor Extended States With Compaction

XSAVEC + — Save Processor Extended States With Compaction

+ + + + + + + + + + + + + + + + + + + +
Opcode / InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F C7 /4 XSAVEC memMV/VXSAVECSave state components specified by EDX:EAX to mem with compaction.
NP REX.W + 0F C7 /4 XSAVEC64 memMV/N.E.XSAVECSave state components specified by EDX:EAX to mem with compaction.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (w)N/AN/AN/A
+

Description + ¶ +

+

Performs a full or partial save of processor state components to the XSAVE area located at the memory address specified by the destination operand. The implicit EDX:EAX register pair specifies a 64-bit instruction mask. The specific state components saved correspond to the bits set in the requested-feature bitmap (RFBM), which is the logical-AND of EDX:EAX and XCR0.

+

The format of the XSAVE area is detailed in Section 13.4, “XSAVE Area,” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1. Like FXRSTOR and FXSAVE, the memory format used for x87 state depends on a REX.W prefix; see Section 13.5.1, “x87 State” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1.

+

Section 13.10, “Operation of XSAVEC,” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1 provides a detailed description of the operation of the XSAVEC instruction. The following items provide a high-level outline:

+
    +
  • Execution of XSAVEC is similar to that of XSAVE. XSAVEC differs from XSAVE in that it uses compaction and that it may use the init optimization.
  • +
  • XSAVEC saves state component i if and only if RFBM[i] = 1 and XINUSE[i] = 1.1 (XINUSE is a bitmap by which the processor tracks the status of various state components. See Section 13.6, “Processor Tracking of XSAVE-Managed State” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1.)
  • +
  • XSAVEC does not modify bytes 511:464 of the legacy region of the XSAVE area (see Section 13.4.1, “Legacy Region of an XSAVE Area” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1).
  • +
  • XSAVEC writes the logical AND of RFBM and XINUSE to the XSTATE_BV field of the XSAVE header.2,3 (See Section 13.4.2, “XSAVE Header” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1.) XSAVEC sets bit 63 of the XCOMP_BV field and sets bits 62:0 of that field to RFBM[62:0]. XSAVEC does not write to any parts of the XSAVE header other than the XSTATE_BV and XCOMP_BV fields.
  • +
  • XSAVEC always uses the compacted format of the extended region of the XSAVE area (see Section 13.4.3, “Extended Region of an XSAVE Area” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1).
+
+

1. There is an exception for state component 1 (SSE). MXCSR is part of SSE state, but XINUSE[1] may be 0 even if MXCSR does not have its initial value of 1F80H. In this case, XSAVEC saves SSE state as long as RFBM[1] = 1.

+

2. Unlike XSAVE and XSAVEOPT, XSAVEC clears bits in the XSTATE_BV field that correspond to bits that are clear in RFBM.

+

3. There is an exception for state component 1 (SSE). MXCSR is part of SSE state, but XINUSE[1] may be 0 even if MXCSR does not have its initial value of 1F80H. In this case, XSAVEC sets XSTATE_BV[1] to 1 as long as RFBM[1] = 1.

+

Use of a destination operand not aligned to a 64-byte boundary (in either 64-bit or 32-bit modes) results in a general-protection (#GP) exception. In 64-bit mode, the upper 32 bits of RDX and RAX are ignored.
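A hedged usage sketch follows; it checks the XSAVEC feature bit, sizes the area from CPUID sub-leaf 1 (an upper bound, since that value also covers IA32_XSS states), and performs a compacted save. It assumes a GCC/Clang-style toolchain (<cpuid.h>, flags such as -mxsavec, and an available _xgetbv intrinsic).

#include <cpuid.h>       /* __get_cpuid_count */
#include <immintrin.h>   /* _xsavec, _xgetbv */
#include <stdlib.h>
#include <string.h>

int main(void) {
    unsigned eax, ebx, ecx, edx;
    if (!__get_cpuid_count(0x0D, 1, &eax, &ebx, &ecx, &edx)) return 1;
    if (!(eax & (1u << 1))) return 1;        /* CPUID.(EAX=0DH,ECX=1):EAX.XSAVEC[bit 1] */

    size_t size = (ebx + 63) & ~(size_t)63;  /* EBX: compacted size for XCR0|IA32_XSS state */
    void *area = aligned_alloc(64, size);
    if (!area) return 1;
    memset(area, 0, size);                   /* keep header bytes zero so the area can later feed XRSTOR */

    _xsavec(area, _xgetbv(0));               /* compacted save; XSAVEC writes XSTATE_BV and XCOMP_BV itself */
    free(area);
    return 0;
}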

+

Operation + ¶ +

+
RFBM := XCR0 AND EDX:EAX;
+                    /* bitwise logical AND */
+TO_BE_SAVED := RFBM AND XINUSE;
+                    /* bitwise logical AND */
+IF MXCSR ≠ 1F80H AND RFBM[1]
+    TO_BE_SAVED[1] = 1;
+FI;
+IF TO_BE_SAVED[0] = 1
+    THEN store x87 state into legacy region of XSAVE area;
+FI;
+IF TO_BE_SAVED[1] = 1
+    THEN store SSE state into legacy region of XSAVE area; // this step saves the XMM registers, MXCSR, and MXCSR_MASK
+FI;
+NEXT_FEATURE_OFFSET = 576;
+                    // Legacy area and XSAVE header consume 576 bytes
+FOR i := 2 TO 62
+    IF RFBM[i] = 1
+        THEN
+            IF TO_BE_SAVED[i]
+                THEN save XSAVE state component i at offset NEXT_FEATURE_OFFSET from base of XSAVE area;
+            FI;
+            NEXT_FEATURE_OFFSET = NEXT_FEATURE_OFFSET + n (n enumerated by CPUID(EAX=0DH,ECX=i):EAX);
+    FI;
+ENDFOR;
+XSTATE_BV field in XSAVE header := TO_BE_SAVED;
+XCOMP_BV field in XSAVE header := RFBM OR 80000000_00000000H;
+
+

Flags Affected + ¶ +

+

None.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
XSAVEC void _xsavec( void * , unsigned __int64);
+
+
XSAVEC64 void _xsavec64( void * , unsigned __int64);
+
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If a memory operand is not aligned on a 64-byte boundary, regardless of segment.
#SS(0)If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)If a page fault occurs.
#NMIf CR0.TS[bit 3] = 1.
#UDIf CPUID.01H:ECX.XSAVE[bit 26] = 0 or CPUID.(EAX=0DH,ECX=1):EAX.XSAVEC[bit 1] = 0.
If CR4.OSXSAVE[bit 18] = 0.
If the LOCK prefix is used.
#ACIf this exception is disabled a general protection exception (#GP) is signaled if the memory operand is not aligned on a 64-byte boundary, as described above. If the alignment check exception (#AC) is enabled (and the CPL is 3), signaling of #AC is not guaranteed and may vary with implementation, as follows. In all implementations where #AC is not signaled, a general protection exception is signaled in its place. In addition, the width of the alignment check may also vary with implementation. For instance, for a given implementation, an alignment check exception might be signaled for a 2-byte misalignment, whereas a general protection exception might be signaled for all other misalignments (4-, 8-, or 16-byte misalignments).
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GPIf a memory operand is not aligned on a 64-byte boundary, regardless of segment.
If any part of the operand lies outside the effective address space from 0 to FFFFH.
#NMIf CR0.TS[bit 3] = 1.
#UDIf CPUID.01H:ECX.XSAVE[bit 26] = 0 or CPUID.(EAX=0DH,ECX=1):EAX.XSAVEC[bit 1] = 0.
If CR4.OSXSAVE[bit 18] = 0.
If the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)If the memory address is in a non-canonical form.
If a memory operand is not aligned on a 64-byte boundary, regardless of segment.
#SS(0)If a memory address referencing the SS segment is in a non-canonical form.
#PF(fault-code)If a page fault occurs.
#NMIf CR0.TS[bit 3] = 1.
#UDIf CPUID.01H:ECX.XSAVE[bit 26] = 0 or CPUID.(EAX=0DH,ECX=1):EAX.XSAVEC[bit 1] = 0.
If CR4.OSXSAVE[bit 18] = 0.
If the LOCK prefix is used.
#ACIf this exception is disabled a general protection exception (#GP) is signaled if the memory operand is not aligned on a 64-byte boundary, as described above. If the alignment check exception (#AC) is enabled (and the CPL is 3), signaling of #AC is not guaranteed and may vary with implementation, as follows. In all implementations where #AC is not signaled, a general protection exception is signaled in its place. In addition, the width of the alignment check may also vary with implementation. For instance, for a given implementation, an alignment check exception might be signaled for a 2-byte misalignment, whereas a general protection exception might be signaled for all other misalignments (4-, 8-, or 16-byte misalignments).
diff --git a/x86/xsaveopt.html b/x86/xsaveopt.html new file mode 100644 index 0000000..df70807 --- /dev/null +++ b/x86/xsaveopt.html @@ -0,0 +1,184 @@ + +XSAVEOPT + — Save Processor Extended States Optimized

XSAVEOPT + — Save Processor Extended States Optimized

+ + + + + + + + + + + + + + + + + + + +
Opcode/InstructionOp/En64/32 bit Mode SupportCPUID Feature FlagDescription
NP 0F AE /6 XSAVEOPT memMV/VXSAVEOPTSave state components specified by EDX:EAX to mem, optimizing if possible.
NP REX.W + 0F AE /6 XSAVEOPT64 memMV/VXSAVEOPTSave state components specified by EDX:EAX to mem, optimizing if possible.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/EnOperand 1Operand 2Operand 3Operand 4
MModRM:r/m (r, w)N/AN/AN/A
+

Description + ¶ +

+

Performs a full or partial save of processor state components to the XSAVE area located at the memory address specified by the destination operand. The implicit EDX:EAX register pair specifies a 64-bit instruction mask. The specific state components saved correspond to the bits set in the requested-feature bitmap (RFBM), which is the logical-AND of EDX:EAX and XCR0.

+

The format of the XSAVE area is detailed in Section 13.4, “XSAVE Area,” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1. Like FXRSTOR and FXSAVE, the memory format used for x87 state depends on a REX.W prefix; see Section 13.5.1, “x87 State” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1.

+

Section 13.9, “Operation of XSAVEOPT,” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1 provides a detailed description of the operation of the XSAVEOPT instruction. The following items provide a high-level outline:

+
    +
  • Execution of XSAVEOPT is similar to that of XSAVE. XSAVEOPT differs from XSAVE in that it may use the init and modified optimizations. The performance of XSAVEOPT will be equal to or better than that of XSAVE.
  • +
  • XSAVEOPT saves state component i only if RFBM[i] = 1 and XINUSE[i] = 1.1 (XINUSE is a bitmap by which the processor tracks the status of various state components. See Section 13.6, “Processor Tracking of XSAVE-Managed State,” of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1.) Even if both bits are 1, XSAVEOPT may optimize and not save state component i if (1) state component i has not been modified since the last execution of XRSTOR or XRSTORS; and (2) this execution of XSAVEOPT corresponds to that last execution of XRSTOR or XRSTORS as determined by the internal value XRSTOR_INFO (see the Operation section below).
  • +
  • XSAVEOPT does not modify bytes 511:464 of the legacy region of the XSAVE area (see Section 13.4.1, “Legacy Region of an XSAVE Area” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1).
  • +
  • XSAVEOPT reads the XSTATE_BV field of the XSAVE header (see Section 13.4.2, “XSAVE Header,” of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1) and writes a modified value back to memory as follows. If RFBM[i] = 1, XSAVEOPT writes XSTATE_BV[i] with the value of XINUSE[i]. If RFBM[i] = 0, XSAVEOPT writes XSTATE_BV[i] with the value that it read from memory (it does not modify the bit). XSAVEOPT does not write to any part of the XSAVE header other than the XSTATE_BV field.
  • +
  • XSAVEOPT always uses the standard format of the extended region of the XSAVE area (see Section 13.4.3, “Extended Region of an XSAVE Area” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1).
+
+

1. There is an exception made for MXCSR and MXCSR_MASK, which belong to state component 1 — SSE. XSAVEOPT always saves these to memory if RFBM[1] = 1 or RFBM[2] = 1, regardless of the value of XINUSE.

+

Use of a destination operand not aligned to a 64-byte boundary (in either 64-bit or 32-bit modes) results in a general-protection (#GP) exception. In 64-bit mode, the upper 32 bits of RDX and RAX are ignored.

+

See Section 13.6, “Processor Tracking of XSAVE-Managed State,” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1 for discussion of the bitmap XMODIFIED and of the quantity XRSTOR_INFO.
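A hedged sketch of the intended save/restore pattern follows: after an XRSTOR from a given area, a subsequent XSAVEOPT to that same area with the same mask may skip components that were not modified in between. It assumes a GCC/Clang-style toolchain (flags such as -mxsave and -mxsaveopt, plus an available _xgetbv intrinsic) and uses a fixed-size area only for brevity.

#include <immintrin.h>   /* _xsave, _xrstor, _xsaveopt, _xgetbv */
#include <stdlib.h>
#include <string.h>

int main(void) {
    enum { AREA_SIZE = 4096 };                    /* assumed sufficient; size via CPUID leaf 0DH in real code */
    void *area = aligned_alloc(64, AREA_SIZE);
    if (!area) return 1;
    memset(area, 0, AREA_SIZE);

    unsigned long long mask = _xgetbv(0) & 0x3;   /* x87 and SSE state */
    _xsave(area, mask);                           /* establish initial contents */

    for (int i = 0; i < 4; i++) {
        _xrstor(area, mask);                      /* records XRSTOR_INFO for this area */
        /* ... work that may or may not touch x87/SSE state ... */
        _xsaveopt(area, mask);                    /* may use the init and modified optimizations */
    }
    free(area);
    return 0;
}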

+

Operation + ¶ +

+
RFBM := XCR0 AND EDX:EAX; /* bitwise logical AND */
+OLD_BV := XSTATE_BV field from XSAVE header;
+TO_BE_SAVED := RFBM AND XINUSE;
+IF in VMX non-root operation
+    THEN VMXNR := 1;
+    ELSE VMXNR := 0;
+FI;
+LAXA := linear address of XSAVE area;
+IF XRSTOR_INFO = CPL,VMXNR,LAXA,00000000_00000000H
+    THEN TO_BE_SAVED := TO_BE_SAVED AND XMODIFIED;
+FI;
+IF TO_BE_SAVED[0] = 1
+    THEN store x87 state into legacy region of XSAVE area;
+FI;
+IF TO_BE_SAVED[1]
+    THEN store XMM registers into legacy region of XSAVE area; // this step does not save MXCSR or MXCSR_MASK
+FI;
+IF RFBM[1] = 1 or RFBM[2] = 1
+    THEN store MXCSR and MXCSR_MASK into legacy region of XSAVE area;
+FI;
+FOR i := 2 TO 62
+    IF TO_BE_SAVED[i] = 1
+        THEN save XSAVE state component i at offset n from base of XSAVE area (n enumerated by CPUID(EAX=0DH,ECX=i):EBX);
+    FI;
+ENDFOR;
+XSTATE_BV field in XSAVE header := (OLD_BV AND NOT RFBM) OR (XINUSE AND RFBM);
+
+

Flags Affected + ¶ +

+

None.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
XSAVEOPT void _xsaveopt( void * , unsigned __int64);
+
+
XSAVEOPT void _xsaveopt64( void * , unsigned __int64);
+
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)  If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If a memory operand is not aligned on a 64-byte boundary, regardless of segment.
#SS(0)  If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)  If a page fault occurs.
#NM  If CR0.TS[bit 3] = 1.
#UD  If CPUID.01H:ECX.XSAVE[bit 26] = 0 or CPUID.(EAX=0DH,ECX=1):EAX.XSAVEOPT[bit 0] = 0.
If CR4.OSXSAVE[bit 18] = 0.
If the LOCK prefix is used.
#AC  If this exception is disabled, a general-protection exception (#GP) is signaled if the memory operand is not aligned on a 64-byte boundary, as described above. If the alignment check exception (#AC) is enabled (and the CPL is 3), signaling of #AC is not guaranteed and may vary with implementation, as follows. In all implementations where #AC is not signaled, a general-protection exception is signaled in its place. In addition, the width of the alignment check may also vary with implementation. For instance, for a given implementation, an alignment check exception might be signaled for a 2-byte misalignment, whereas a general-protection exception might be signaled for all other misalignments (4-, 8-, or 16-byte misalignments).
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP  If a memory operand is not aligned on a 64-byte boundary, regardless of segment.
If any part of the operand lies outside the effective address space from 0 to FFFFH.
#NM  If CR0.TS[bit 3] = 1.
#UD  If CPUID.01H:ECX.XSAVE[bit 26] = 0 or CPUID.(EAX=0DH,ECX=1):EAX.XSAVEOPT[bit 0] = 0.
If CR4.OSXSAVE[bit 18] = 0.
If the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + + +
#SS(0)  If a memory address referencing the SS segment is in a non-canonical form.
#GP(0)  If the memory address is in a non-canonical form.
If a memory operand is not aligned on a 64-byte boundary, regardless of segment.
#PF(fault-code)  If a page fault occurs.
#NM  If CR0.TS[bit 3] = 1.
#UD  If CPUID.01H:ECX.XSAVE[bit 26] = 0 or CPUID.(EAX=0DH,ECX=1):EAX.XSAVEOPT[bit 0] = 0.
If CR4.OSXSAVE[bit 18] = 0.
If the LOCK prefix is used.
#AC  If this exception is disabled, a general-protection exception (#GP) is signaled if the memory operand is not aligned on a 64-byte boundary, as described above. If the alignment check exception (#AC) is enabled (and the CPL is 3), signaling of #AC is not guaranteed and may vary with implementation, as follows. In all implementations where #AC is not signaled, a general-protection exception is signaled in its place. In addition, the width of the alignment check may also vary with implementation. For instance, for a given implementation, an alignment check exception might be signaled for a 2-byte misalignment, whereas a general-protection exception might be signaled for all other misalignments (4-, 8-, or 16-byte misalignments).
diff --git a/x86/xsaves.html b/x86/xsaves.html new file mode 100644 index 0000000..84e9e2a --- /dev/null +++ b/x86/xsaves.html @@ -0,0 +1,199 @@ + +XSAVES + — Save Processor Extended States Supervisor

XSAVES + — Save Processor Extended States Supervisor

+ + + + + + + + + + + + + + + + + + + +
Opcode / Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F C7 /5 XSAVES mem | M | V/V | XSS | Save state components specified by EDX:EAX to mem with compaction, optimizing if possible.
NP REX.W + 0F C7 /5 XSAVES64 mem | M | V/N.E. | XSS | Save state components specified by EDX:EAX to mem with compaction, optimizing if possible.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/En | Operand 1 | Operand 2 | Operand 3 | Operand 4
M | ModRM:r/m (w) | N/A | N/A | N/A
+

Description + ¶ +

+

Performs a full or partial save of processor state components to the XSAVE area located at the memory address specified by the destination operand. The implicit EDX:EAX register pair specifies a 64-bit instruction mask. The specific state components saved correspond to the bits set in the requested-feature bitmap (RFBM), the logical-AND of EDX:EAX and the logical-OR of XCR0 with the IA32_XSS MSR. XSAVES may be executed only if CPL = 0.

+

The format of the XSAVE area is detailed in Section 13.4, “XSAVE Area,” of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1. Like FXRSTOR and FXSAVE, the memory format used for x87 state depends on a REX.W prefix; see Section 13.5.1, “x87 State,” of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1.

+

Section 13.11, “Operation of XSAVES,” of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1 provides a detailed description of the operation of the XSAVES instruction. The following items provide a high-level outline:

+
    +
  • Execution of XSAVES is similar to that of XSAVEC. XSAVES differs from XSAVEC in that it can save state components corresponding to bits set in the IA32_XSS MSR and that it may use the modified optimization.
  • +
  • XSAVES saves state component i only if RFBM[i] = 1 and XINUSE[i] = 1 (see note 1 below). (XINUSE is a bitmap by which the processor tracks the status of various state components. See Section 13.6, “Processor Tracking of XSAVE-Managed State,” of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1.) Even if both bits are 1, XSAVES may optimize and not save state component i if (1) state component i has not been modified since the last execution of XRSTOR or XRSTORS; and (2) this execution of XSAVES corresponds to that last execution of XRSTOR or XRSTORS, as determined by XRSTOR_INFO (see the Operation section below).
  • +
  • XSAVES does not modify bytes 511:464 of the legacy region of the XSAVE area (see Section 13.4.1, “Legacy Region of an XSAVE Area,” of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1).
  • +
  • XSAVES writes the logical AND of RFBM and XINUSE to the XSTATE_BV field of the XSAVE header (see note 2 below). (See Section 13.4.2, “XSAVE Header,” of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1.) XSAVES sets bit 63 of the XCOMP_BV field and sets bits 62:0 of that field to RFBM[62:0]. XSAVES does not write to any parts of the XSAVE header other than the XSTATE_BV and XCOMP_BV fields.
  • +
  • XSAVES always uses the compacted format of the extended region of the XSAVE area (see Section 13.4.3, “Extended Region of an XSAVE Area,” of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1).
+
+

1. There is an exception for state component 1 (SSE). MXCSR is part of SSE state, but XINUSE[1] may be 0 even if MXCSR does not have its initial value of 1F80H. In this case, the init optimization does not apply and XSAVES will save SSE state as long as RFBM[1] = 1 and the modified optimization is not being applied.

+

2. There is an exception for state component 1 (SSE). MXCSR is part of SSE state, but XINUSE[1] may be 0 even if MXCSR does not have its initial value of 1F80H. In this case, XSAVES sets XSTATE_BV[1] to 1 as long as RFBM[1] = 1.

+

Use of a destination operand not aligned to a 64-byte boundary (in either 64-bit or 32-bit modes) results in a general-protection (#GP) exception. In 64-bit mode, the upper 32 bits of RDX and RAX are ignored.

+

See Section 13.6, “Processor Tracking of XSAVE-Managed State,” of Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1 for discussion of the bitmap XMODIFIED and of the quantity XRSTOR_INFO.

+

Operation + ¶ +

+
RFBM := (XCR0 OR IA32_XSS) AND EDX:EAX;
+                                /* bitwise logical OR and AND */
+IF in VMX non-root operation
+    THEN VMXNR := 1;
+    ELSE VMXNR := 0;
+FI;
+LAXA := linear address of XSAVE area;
+COMPMASK := RFBM OR 80000000_00000000H;
+TO_BE_SAVED := RFBM AND XINUSE;
+IF XRSTOR_INFO = CPL,VMXNR,LAXA,COMPMASK
+    THEN TO_BE_SAVED := TO_BE_SAVED AND XMODIFIED;
+FI;
+IF MXCSR ≠ 1F80H AND RFBM[1]
+    THEN TO_BE_SAVED[1] = 1;
+FI;
+IF TO_BE_SAVED[0] = 1
+    THEN store x87 state into legacy region of XSAVE area;
+FI;
+IF TO_BE_SAVED[1] = 1
+    THEN store SSE state into legacy region of XSAVE area; // this step saves the XMM registers, MXCSR, and MXCSR_MASK
+FI;
+NEXT_FEATURE_OFFSET = 576;
+                            // Legacy area and XSAVE header consume 576 bytes
+FOR i := 2 TO 62
+    IF RFBM[i] = 1
+        THEN
+            IF TO_BE_SAVED[i]
+                THEN
+                    save XSAVE state component i at offset NEXT_FEATURE_OFFSET from base of XSAVE area;
+                    IF i = 8 // state component 8 is for PT state
+                        THEN IA32_RTIT_CTL.TraceEn[bit 0] := 0;
+                    FI;
+            FI;
+            NEXT_FEATURE_OFFSET = NEXT_FEATURE_OFFSET + n (n enumerated by CPUID(EAX=0DH,ECX=i):EAX);
+    FI;
+ENDFOR;
+NEW_HEADER := RFBM AND XINUSE;
+IF MXCSR ≠ 1F80H AND RFBM[1]
+    THEN NEW_HEADER[1] = 1;
+FI;
+XSTATE_BV field in XSAVE header := NEW_HEADER;
+XCOMP_BV field in XSAVE header := COMPMASK;
+
+
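To make the NEXT_FEATURE_OFFSET bookkeeping above concrete, here is a rough C sketch of how software might locate state component i inside a compacted-format area written by XSAVES. It assumes GCC/Clang's <cpuid.h>; compacted_offset is a hypothetical helper, and it additionally honors the 64-byte alignment bit (CPUID.(EAX=0DH,ECX=i):ECX[1]) that the simplified pseudocode above omits.

#include <stdint.h>
#include <cpuid.h>   /* __get_cpuid_count (GCC/Clang) */

/* Hypothetical helper: byte offset of state component `comp` (comp >= 2) inside a
   compacted-format XSAVE area whose XCOMP_BV[62:0] equals `xcomp_bv`.
   Returns 0 if the component is not present in the area. */
static uint32_t compacted_offset(unsigned comp, uint64_t xcomp_bv)
{
    if (comp < 2 || comp > 62 || !((xcomp_bv >> comp) & 1))
        return 0;

    uint32_t offset = 576;   /* legacy region (512 bytes) + XSAVE header (64 bytes) */

    for (unsigned i = 2; i <= comp; i++) {
        if (!((xcomp_bv >> i) & 1))
            continue;        /* components absent from XCOMP_BV consume no space */

        unsigned int eax, ebx, ecx, edx;
        if (!__get_cpuid_count(0x0D, i, &eax, &ebx, &ecx, &edx))
            return 0;

        /* ECX[1]: this component starts on a 64-byte boundary in compacted format
           (a detail the simplified pseudocode above omits). */
        if (ecx & 2)
            offset = (offset + 63) & ~63u;

        if (i == comp)
            return offset;

        offset += eax;       /* EAX = size in bytes of component i */
    }
    return 0;
}

A component occupies space in the compacted area only if its bit is set in XCOMP_BV; bits that are clear contribute nothing to the offsets of later components.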

Flags Affected + ¶ +

+

None.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
XSAVES void _xsaves( void * , unsigned __int64);
+
+
XSAVES64 void _xsaves64( void * , unsigned __int64);
+
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)  If CPL > 0.
If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
If a memory operand is not aligned on a 64-byte boundary, regardless of segment.
#SS(0)  If a memory operand effective address is outside the SS segment limit.
#PF(fault-code)  If a page fault occurs.
#NM  If CR0.TS[bit 3] = 1.
#UD  If CPUID.01H:ECX.XSAVE[bit 26] = 0 or CPUID.(EAX=0DH,ECX=1):EAX.XSS[bit 3] = 0.
If CR4.OSXSAVE[bit 18] = 0.
If the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + +
#GP  If a memory operand is not aligned on a 64-byte boundary, regardless of segment.
If any part of the operand lies outside the effective address space from 0 to FFFFH.
#NM  If CR0.TS[bit 3] = 1.
#UD  If CPUID.01H:ECX.XSAVE[bit 26] = 0 or CPUID.(EAX=0DH,ECX=1):EAX.XSS[bit 3] = 0.
If CR4.OSXSAVE[bit 18] = 0.
If the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + + + + + + +
#GP(0)  If CPL > 0.
If the memory address is in a non-canonical form.
If a memory operand is not aligned on a 64-byte boundary, regardless of segment.
#SS(0)  If a memory address referencing the SS segment is in a non-canonical form.
#PF(fault-code)  If a page fault occurs.
#NM  If CR0.TS[bit 3] = 1.
#UD  If CPUID.01H:ECX.XSAVE[bit 26] = 0 or CPUID.(EAX=0DH,ECX=1):EAX.XSS[bit 3] = 0.
If CR4.OSXSAVE[bit 18] = 0.
If the LOCK prefix is used.
diff --git a/x86/xsetbv.html b/x86/xsetbv.html new file mode 100644 index 0000000..b6ba5e4 --- /dev/null +++ b/x86/xsetbv.html @@ -0,0 +1,117 @@ + +XSETBV + — Set Extended Control Register

XSETBV + — Set Extended Control Register

+ + + + + + + + + + + + + + + +
Opcode | Instruction | Op/En | 64-Bit Mode | Compat/Leg Mode | Description
NP 0F 01 D1 | XSETBV | ZO | Valid | Valid | Write the value in EDX:EAX to the XCR specified by ECX.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/En | Operand 1 | Operand 2 | Operand 3 | Operand 4
ZO | N/A | N/A | N/A | N/A
+

Description + ¶ +

+

Writes the contents of registers EDX:EAX into the 64-bit extended control register (XCR) specified in the ECX register. (On processors that support the Intel 64 architecture, the high-order 32 bits of RCX are ignored.) The contents of the EDX register are copied to the high-order 32 bits of the selected XCR, and the contents of the EAX register are copied to the low-order 32 bits of the XCR. (On processors that support the Intel 64 architecture, the high-order 32 bits of each of RAX and RDX are ignored.) Undefined or reserved bits in an XCR should be set to values previously read.

+

This instruction must be executed at privilege level 0 or in real-address mode; otherwise, a general protection exception #GP(0) is generated. Specifying a reserved or unimplemented XCR in ECX will also cause a general protection exception. The processor will also generate a general protection exception if software attempts to write to reserved bits in an XCR.

+

Currently, only XCR0 is supported. Thus, all other values of ECX are reserved and will cause a #GP(0). Note that bit 0 of XCR0 (corresponding to x87 state) must be set to 1; the instruction will cause a #GP(0) if an attempt is made to clear this bit. In addition, the instruction causes a #GP(0) if an attempt is made to set XCR0[2] (AVX state) while clearing XCR0[1] (SSE state); it is necessary to set both bits to use AVX instructions. See Section 13.3, “Enabling the XSAVE Feature Set and XSAVE-Enabled Features,” of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1.

+
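To illustrate these rules, the following ring-0 C sketch enables SSE and AVX state without violating them: it keeps XCR0[0] set and sets XCR0[1] and XCR0[2] together, so XCR0[2:1] never becomes 10b. It assumes the _xgetbv/_xsetbv intrinsics from immintrin.h, that CPUID has already confirmed XSAVE and AVX support, and that CR4.OSXSAVE is set; the function name is illustrative, and the code must run at CPL 0.

#include <immintrin.h>   /* _xgetbv, _xsetbv */
#include <stdint.h>

#define XCR0_X87  (1ull << 0)   /* must remain set */
#define XCR0_SSE  (1ull << 1)
#define XCR0_AVX  (1ull << 2)   /* may only be set together with XCR0_SSE */

static void enable_avx_state(void)
{
    uint64_t xcr0 = _xgetbv(0);              /* read current XCR0 (ECX = 0) */
    xcr0 |= XCR0_X87 | XCR0_SSE | XCR0_AVX;  /* never clears bits already enabled */
    _xsetbv(0, xcr0);                        /* #GP(0) if CPL > 0 or reserved bits set */
}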

Operation + ¶ +

+
XCR[ECX] := EDX:EAX;
+
+

Flags Affected + ¶ +

+

None.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
XSETBV void _xsetbv( unsigned int, unsigned __int64);
+
+

Protected Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + + + +
#GP(0)  If the current privilege level is not 0.
If an invalid XCR is specified in ECX.
If the value in EDX:EAX sets bits that are reserved in the XCR specified by ECX.
If an attempt is made to clear bit 0 of XCR0.
If an attempt is made to set XCR0[2:1] to 10b.
#UD  If CPUID.01H:ECX.XSAVE[bit 26] = 0.
If CR4.OSXSAVE[bit 18] = 0.
If the LOCK prefix is used.
+

Real-Address Mode Exceptions + ¶ +

+ + + + + + + + + + + + + + + + +
#GP  If an invalid XCR is specified in ECX.
If the value in EDX:EAX sets bits that are reserved in the XCR specified by ECX.
If an attempt is made to clear bit 0 of XCR0.
If an attempt is made to set XCR0[2:1] to 10b.
#UD  If CPUID.01H:ECX.XSAVE[bit 26] = 0.
If CR4.OSXSAVE[bit 18] = 0.
If the LOCK prefix is used.
+

Virtual-8086 Mode Exceptions + ¶ +

+ + + +
#GP(0)  The XSETBV instruction is not recognized in virtual-8086 mode.
+

Compatibility Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

+

64-Bit Mode Exceptions + ¶ +

+

Same exceptions as in protected mode.

diff --git a/x86/xsusldtrk.html b/x86/xsusldtrk.html new file mode 100644 index 0000000..516951f --- /dev/null +++ b/x86/xsusldtrk.html @@ -0,0 +1,82 @@ + +XSUSLDTRK + — Suspend Tracking Load Addresses

XSUSLDTRK + — Suspend Tracking Load Addresses

+ + + + + + + + + + + + + +
Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
F2 0F 01 E8 XSUSLDTRK | ZO | V/V | TSXLDTRK | Specifies the start of an Intel TSX suspend read address tracking region.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + + + +
Op/En | Tuple | Operand 1 | Operand 2 | Operand 3 | Operand 4
ZO | N/A | N/A | N/A | N/A | N/A
+

Description + ¶ +

+

The instruction marks the start of an Intel TSX (RTM) suspend load address tracking region. If the instruction is used inside a transactional region, subsequent loads are not added to the read set of the transaction. If the instruction is used inside a suspend load address tracking region, it will cause a transaction abort.

+

If the instruction is used outside of a transactional region it behaves like a NOP.

+

Chapter 16, “Programming with Intel® Transactional Synchronization Extensions‚” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1 provides additional information on Intel® TSX Suspend Load Address Tracking.

+
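The following C sketch shows the intended usage pattern with the RTM intrinsics: loads issued between _xsusldtrk and _xresldtrk are not added to the transaction's read set, so unrelated writes to that data by other logical processors do not force an abort. It assumes hardware and compiler support for RTM and TSXLDTRK (for example, -mrtm -mtsxldtrk) and immintrin.h; the data structures and the fallback path are placeholders.

#include <immintrin.h>

extern long shared_counter;       /* protected by the transaction */
extern const long lookup_table[]; /* read-mostly data we do not want tracked */

static int add_from_table(int idx)
{
    if (_xbegin() == _XBEGIN_STARTED) {
        _xsusldtrk();                      /* suspend read-set tracking */
        long v = lookup_table[idx];        /* this load is not tracked */
        _xresldtrk();                      /* resume tracking (XRESLDTRK) */

        shared_counter += v;               /* tracked as usual */
        _xend();
        return 1;
    }
    return 0;                              /* caller falls back to a lock */
}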

Operation + ¶ +

+

XSUSLDTRK + ¶ +

+
IF RTM_ACTIVE = 1:
+    IF SUSLDTRK_ACTIVE = 0:
+        SUSLDTRK_ACTIVE := 1
+    ELSE:
+        RTM_ABORT
+ELSE:
+    NOP
+
+

Flags Affected + ¶ +

+

None.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
XSUSLDTRK void _xsusldtrk(void);
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+ + + + + +
#UD  If CPUID.(EAX=7, ECX=0):EDX.TSXLDTRK[bit 16] = 0.
If the LOCK prefix is used.
diff --git a/x86/xtest.html b/x86/xtest.html new file mode 100644 index 0000000..3f2def4 --- /dev/null +++ b/x86/xtest.html @@ -0,0 +1,77 @@ + +XTEST + — Test if in Transactional Execution

XTEST + — Test if in Transactional Execution

+ + + + + + + + + + + + + +
Opcode/Instruction | Op/En | 64/32bit Mode Support | CPUID Feature Flag | Description
NP 0F 01 D6 XTEST | ZO | V/V | HLE or RTM | Test if executing in a transactional region.
+

Instruction Operand Encoding + ¶ +

+ + + + + + + + + + + + +
Op/En | Operand 1 | Operand 2 | Operand 3 | Operand 4
ZO | N/A | N/A | N/A | N/A
+

Description + ¶ +

+

The XTEST instruction queries the transactional execution status. If the instruction executes inside a transactionally executing RTM region or a transactionally executing HLE region, then the ZF flag is cleared, else it is set.

+
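A typical use, sketched below in C, is a helper shared between transactional and non-transactional callers that consults _xtest to decide whether a fallback lock is needed. It assumes RTM/HLE-capable hardware and immintrin.h; the lock routines are placeholders.

#include <immintrin.h>

extern void fallback_lock(void);    /* placeholder lock helpers */
extern void fallback_unlock(void);

static void update_shared(void (*apply)(void))
{
    if (_xtest()) {        /* ZF = 0: already executing transactionally */
        apply();           /* rely on the enclosing transaction for atomicity */
    } else {               /* ZF = 1: not in a transaction */
        fallback_lock();
        apply();
        fallback_unlock();
    }
}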

Operation + ¶ +

+

XTEST + ¶ +

+
IF (RTM_ACTIVE = 1 OR HLE_ACTIVE = 1)
+    THEN
+        ZF := 0
+    ELSE
+        ZF := 1
+FI;
+
+

Flags Affected + ¶ +

+

The ZF flag is cleared if the instruction is executed transactionally; otherwise it is set to 1. The CF, OF, SF, PF, and AF flags are cleared.

+

Intel C/C++ Compiler Intrinsic Equivalent + ¶ +

+
XTEST int _xtest( void );
+
+

SIMD Floating-Point Exceptions + ¶ +

+

None.

+

Other Exceptions + ¶ +

+ + + + + +
#UD  If CPUID.(EAX=7, ECX=0):EBX.HLE[bit 4] = 0 and CPUID.(EAX=7, ECX=0):EBX.RTM[bit 11] = 0.
If the LOCK prefix is used.