Packed decimal to ASCII in assembly

I am trying to convert packed decimal numbers to ascii strings. Here is my understanding:

Following:

bcd BYTE 34h 

This should display as the decimal string "34" after "unpacking". Now I'm not sure how to do this. If I convert this hexadecimal number to binary, it looks like this:

 0011 0100 

Now, if my procedure prints the value of each nibble, 4 bits at a time, it should produce an ASCII character string, right? So if I loop over the byte, take the first 4-bit group and print its value, it prints 3; then I do the same with the second group and it prints 4.

So here is my general idea:

Take the 8-bit value 34h and copy it into both AH and AL. In AH, clear the low four bits with the following instruction:

 and ah, 11110000b 

and in AL, clear the high four bits with the following:

 and al, 00001111b 

So, AH = 0011b (3) and AL = 0100b (4), and I print them accordingly.

Is this a good approach? Or am I completely off track, or overthinking it?

EDIT: Here is my final solution, using the original value 12345678h. Thanks to everyone who helped!

 ;-----------------------------------------------------------------------------
 PackedToAsc PROC USES eax ebx edx ecx esi
 ; This procedure displays a packed decimal value in its ASCII form,
 ; i.e. 12345678h is displayed as the decimal string 12345678.
 ;
 ; Requires: ECX = SIZEOF packed decimal (in bytes)
 ;           ESI pointing to the packed decimal
 ;-----------------------------------------------------------------------------
     mov edx, [esi]          ; temp store of our value
     mov eax, 0              ; clear eax
     mov ebx, 0              ; clear ebx
 L1:
     rol edx, 8              ; rotate left 8 bits to compensate for little endian
     mov [esi], edx          ; move our temp back to the actual value
     mov al, BYTE PTR [esi]  ; al = 12h  0001 0010
     mov bl, BYTE PTR [esi]  ; bl = 12h  0001 0010
     shr al, 4               ; al = 0000 0001
     and bl, 00001111b       ; bl = 0000 0010
     add al, 48              ; convert to ASCII
     call WriteChar          ; display al
     mov al, bl
     add al, 48              ; convert to ASCII
     call WriteChar          ; display bl
     loop L1
     call Crlf
     ret
 PackedToAsc ENDP

1 answer

BCD only uses the digits 0 to 9.
An unpacked BCD digit occupies the low nibble of a whole byte; to convert it to ASCII you just add 48.
34h is 52 decimal, which in unpacked BCD is represented by two bytes, 00000101 and 00000010.
(Edited to avoid confusion with the built-in BCD instructions.) Packed, it is a single byte: 01010010b == packed BCD 52.

To unpack it, you can do as you did, but you also need to shift AH right by 4 so the digit ends up in the low nibble. To convert to ASCII, just add 48.

[edit]

MASM targets 80x86 processors (as do all x86 assemblers, including those on Linux), and the 80x86 uses a little-endian scheme. Processors like the Motorola 68000 (classic Apple Macs) and many RISC chips use big-endian.

When you store a number as BCD on a little-endian machine, the least significant byte goes at the lowest address and the most significant at the highest, for example:

 my_unpacked_bcd DB 4,3,2,1  ; this is the decimal number 1234, least significant digit first
 my_packed_bcd   DW 1234h    ; the same number packed; stored in memory as bytes 34h, 12h

How you store a packed BCD depends on what you want to do with it. If you want to ADD, MUL, DIV, or SUB, you must supply the values in the form those instructions expect. Also remember to allow zero bytes at the front of your digits to leave room for carries.
