How to represent a number in base 2^32?

If I have a number in base 10 or base 16, how can I change it to base 2^32?

The reason I'm trying to do this is to implement BigInt, as suggested by other members here: Why use a higher base to implement BigInt?

Will it be the same as converting a whole number (base 10) to base 2^32? And what happens after that?

+7
5 answers

Are you trying to find something of the form

a0 + a1 * (2^32) + a2 * (2^32)^2 + a3 * (2^32)^3 + ... 

which is the definition of a base-2^32 system? If so, ignore all the people who told you your question doesn't make sense!

In any case, what you are describing is called base conversion. There are fast ways and there are easy ways to solve this problem. The fast methods are very complex (there are whole chapters of books devoted to the subject), and I'm not going to cover them here (not least because I've never tried to use them).

One easy way is to first implement multiplication and addition in your number system (i.e. implement BigInt add(BigInt a, BigInt b) and BigInt mul(BigInt a, BigInt b)). Once you have solved that, you will notice that a base-10 number can be expressed as:

 b0 + b1 * 10 + b2 * 10^2 + b3 * 10^3 + ... 

which can also be written as:

 b0 + 10 * (b1 + 10 * (b2 + 10 * (b3 + ... 

So if you move from left to right over your input string, you can peel off one base-10 digit at a time, and use your add and mul functions to accumulate the result into your BigInt:

    BigInt a = 0;
    for each digit b {
        a = add(mul(a, 10), b);
    }

Disclaimer: this method is not computationally efficient, but it will at least get you started.
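If it helps, here is a minimal C sketch of that loop. The fixed-size limb array, the least-significant-first digit order, and the helper names (mul_add_small, parse_decimal) are my own assumptions for illustration, and the mul and add steps are folded into a single helper for brevity:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    #define MAX_LIMBS 8

    typedef struct {
        uint32_t limb[MAX_LIMBS];   /* base-2^32 digits, limb[0] least significant */
        int      used;              /* number of limbs currently in use */
    } BigInt;

    /* a = a * m + c, where m and c fit in 32 bits */
    static void mul_add_small(BigInt *a, uint32_t m, uint32_t c)
    {
        uint64_t carry = c;
        for (int i = 0; i < a->used; i++) {
            uint64_t t = (uint64_t)a->limb[i] * m + carry;
            a->limb[i] = (uint32_t)t;   /* low 32 bits stay in this limb */
            carry = t >> 32;            /* high bits carry into the next limb */
        }
        if (carry && a->used < MAX_LIMBS)
            a->limb[a->used++] = (uint32_t)carry;
    }

    /* peel off one decimal digit at a time: a = add(mul(a, 10), b) */
    static void parse_decimal(BigInt *a, const char *s)
    {
        memset(a, 0, sizeof *a);
        a->used = 1;
        for (; *s; s++)
            mul_add_small(a, 10, (uint32_t)(*s - '0'));
    }

    int main(void)
    {
        BigInt a;
        parse_decimal(&a, "18446744073709551617");   /* 2^64 + 1 */
        for (int i = a.used - 1; i >= 0; i--)
            printf("%08x ", (unsigned)a.limb[i]);    /* 00000001 00000000 00000001 */
        printf("\n");
        return 0;
    }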

Note: converting from base-16 is much easier, because 2^32 is an exact power of 16. So the conversion basically amounts to concatenating bits.
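As a concrete (hypothetical) illustration of that, the following C sketch reads 8 hex characters per base-2^32 digit, starting from the least significant end of the string; the function name and the least-significant-first limb layout are assumptions, not part of the answer:

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Convert a hex string into base-2^32 digits ("limbs"), least significant first. */
    static size_t hex_to_limbs(const char *hex, uint32_t *limbs, size_t max_limbs)
    {
        size_t len = strlen(hex);
        size_t count = 0;

        while (len > 0 && count < max_limbs) {
            size_t chunk = len >= 8 ? 8 : len;   /* up to 8 hex digits = 32 bits */
            char buf[9];
            memcpy(buf, hex + len - chunk, chunk);
            buf[chunk] = '\0';
            limbs[count++] = (uint32_t)strtoul(buf, NULL, 16);
            len -= chunk;
        }
        return count;   /* number of base-2^32 digits produced */
    }

    int main(void)
    {
        uint32_t limbs[4];
        size_t n = hex_to_limbs("1fffffffffffffff", limbs, 4);
        for (size_t i = 0; i < n; i++)
            printf("limb %zu: %08x\n", i, (unsigned)limbs[i]);   /* ffffffff, then 1fffffff */
        return 0;
    }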

+12

Suppose we are talking about base-10:

 a[0]*10^0 + a[1]*10^1 + a[2]*10^2 + a[3]*10^3 + ... + a[N]*10^N 

where each a[i] represents a digit in the range from 0 to 9 inclusive.

I assume you can parse the string that is your input value and obtain the array a[]. Once you've done that, and assuming you have already implemented your BigInt class with the + and * operators, you're home free: you can simply evaluate the expression above with an instance of your BigInt class.

You can evaluate this expression relatively efficiently using Horner's method.
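For example, with the digits of 1234, Horner's method evaluates

    ((1 * 10 + 2) * 10 + 3) * 10 + 4

so only one multiplication by 10 and one addition are needed per digit.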

I just wrote this off the top of my head, and I'd bet there are much more efficient base-conversion schemes.

+5

If I have a number in base 10 or base 16, how can I change it to base 2^32?

Just like you convert it to any other base. You want to write the number n as

 n = a_0 + a_1 * 2^32 + a_2 * 2^64 + a_3 * 2^96 + ... + a_k * 2^(32 * k). 

So find the largest power of 2^32 that goes into n, subtract the appropriate multiple of that power from n, and repeat with the difference.
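For example, take n = 10000000000 (10^10) with 2^32 = 4294967296. The largest power of 2^32 not exceeding n is (2^32)^1, and

    10000000000 = 2 * 2^32 + 1410065408

so a_1 = 2, a_0 = 1410065408, and the base-2^32 digits are (2, 1410065408).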

However, are you sure you asked the right question?

I suspect you want to ask a different question. I suspect you want to ask: how do I parse a base-10 number into an instance of my BigInteger? That's easy. Code up your implementation and make sure you've implemented + and *. How you actually represent the integers is up to you, but if you want to use base 2^32, go ahead. Then:

    BigInteger Parse(string s)
    {
        BigInteger b = new BigInteger(0);
        foreach (char c in s)
        {
            b = b * 10 + (int)c - (int)'0';
        }
        return b;
    }

I will leave it to you to translate this to C.

+4

Base 16 is easy, since 2^32 is exactly 16^8. So, starting from the least significant digit, read 8 base-16 digits at a time, convert those digits into a 32-bit value, and that is the next base-2^32 "digit".

Base 10 is harder. As you say, if the value is less than 2^32, then you can just take it as a single base-2^32 "digit". Otherwise, the simplest method I can think of is to use the long division algorithm to repeatedly divide the base-10 value by 2^32; at each stage, the remainder is the next base-2^32 "digit". Perhaps someone who knows more number theory than me can provide a better solution.
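For what it's worth, here is a rough C sketch of that long-division idea; the helper name divmod_2_32 and the decimal-digit-array representation are placeholders of my own, not the answerer's code:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Divide the decimal digits in d[0..*len) by 2^32 in place.
       Returns the remainder; *len shrinks as leading digits become zero. */
    static uint32_t divmod_2_32(uint8_t *d, size_t *len)
    {
        uint64_t rem = 0;
        size_t out = 0;
        for (size_t i = 0; i < *len; i++) {
            rem = rem * 10 + d[i];
            uint8_t q = (uint8_t)(rem >> 32);   /* next quotient digit, 0..9 */
            rem &= 0xFFFFFFFFu;                 /* running remainder stays below 2^32 */
            if (out > 0 || q != 0)
                d[out++] = q;                   /* drop leading zeros of the quotient */
        }
        *len = out;
        return (uint32_t)rem;
    }

    int main(void)
    {
        const char *s = "18446744073709551617";   /* 2^64 + 1 */
        uint8_t d[64];
        size_t len = strlen(s);
        for (size_t i = 0; i < len; i++)
            d[i] = (uint8_t)(s[i] - '0');

        /* prints the base-2^32 digits least significant first: 1, 0, 1 */
        while (len > 0)
            printf("%u\n", (unsigned)divmod_2_32(d, &len));
        return 0;
    }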

+1

I think this is a perfectly reasonable thing to do.

What you are doing is storing a very large number (an encryption key, for example) in an array of 32-bit integers.

A base-16 representation is base 2^4, i.e. a series of 4 bits at a time. If you are getting a stream of base-16 "digits", fill in the bottom 4 bits of the first integer in your array, then the next-lowest 4 bits, and so on until you have read 8 "digits". Then move on to the next element of the array.

    #include <stdio.h>

    long getBase16()
    {
        char cCurr;

        switch (cCurr = getchar())
        {
        case 'A': case 'a': return 10;
        case 'B': case 'b': return 11;
        /* ... and so on for 'C' through 'F' ... */
        default:  return cCurr - '0';
        }
    }

    void read_input(long *plBuffer)   /* assumes a 32-bit long */
    {
        long *plDst = plBuffer;
        int iPos = 32;
        long lDigit;

        *plDst = 0x00;
        while ((lDigit = getBase16()))
        {
            if (!iPos)
            {
                /* this word is full; move to the next array element */
                *(++plDst) = 0x00;
                iPos = 32;
            }
            /* shift the word down 4 bits, then drop the new digit into the top 4 bits */
            *plDst >>= 4;
            iPos -= 4;
            *plDst |= (lDigit & 0x0F) << 28;
        }
    }

There is some fixing up to do, like finishing by shifting *plDst down by the remaining iPos bits, and keeping a count of how many integers are in your array.

There is also some work to do to convert from base 10.

But that’s enough to get you started.

0
