Is there some kind of mathematical “optimal” base that will speed up factorial calculation?

Background: Just for fun, I am implementing my own bignum library. (-: Is this my first mistake? :-) I am experimenting with various bases for the internal representation, and regression-testing by writing out the exact decimal value of n factorial (n!).

The way my bignum library represents integers and does multiplication, the run time is proportional to the total number of "1" bits in the internal representation of n!. Using base 2, 4, 8, 16, 2^8, 2^30, etc. for the internal representation all give me exactly the same total number of "1" bits for any particular number.

Unless I have made a mistake, any factorial (n!) represented in base 18 has fewer "1" bits than the same value represented in base 10, base 16, or base 19. And so (in principle), using base 18 would make my bignum library run faster than using base 10, any binary base 2^w, or base 19. I think this is because n! is either shorter, or has more "trailing zeros", or both, when printed in base 18 than in base 10, base 16, or base 19. Is there some other base that works even better than base 18? In other words, is there a base that represents n! with even fewer "1" bits than base 18?

This is not a duplicate of "What is a convenient base for a bignum library and primality testing algorithm?" because I suspect the "optimal base for working with integers that are known to be large factorials, with many factors of 2 and 3," differs from the "optimal base for working with integers that have no small factors and may be prime." (-: Is focusing on speeding up factorial calculations, perhaps at the expense of other kinds of calculations, my second mistake? :-)

edit: For example:

(decimal)     16! == 20,922,789,888,000  // uses 14 "1" bits
(dozenal)     16! == 2,41A,B88,000,000   // uses 10 "1" bits
(hexadecimal) 16! == 130,777,758,000     // uses 18 "1" bits
(octodecimal) 16! == 5F,8B5,024,000      // uses 14 "1" bits

(I store the digits more or less as shown, least significant on the right, without the commas, plus some metadata overhead.) (Although you might think "as the base grows, you use fewer "1" bits to represent a given number" or "as the base grows, you use fewer non-zero digits to represent a given number," the example above shows that this is not always so.)

I store each digit as a small integer ("int" or "long int" or "byte"). Is there some other clever way to store digits? I am sure my computer stores these integers in binary: each digit "1", "2", "4", "8", and "G" uses a single "1" bit; each digit "3", "5", "6", "9", and "A" uses two "1" bits; each digit "7" and "B" uses three "1" bits; each digit "F" uses four "1" bits; and so on.

Both the decimal and the octodecimal representations of this value (16!) require 14 "1" bits. So I made a mistake in my earlier calculation: representing n! in octodecimal does not always use fewer "1" bits than representing the same value in decimal. But the question still stands: is there some other "optimal" base that requires the fewest "1" bits for storing large factorials?
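These counts are easy to check mechanically. Here is a minimal Python sketch (the function name `ones_in_digits` is my own, not from the question) that tallies the "1" bits across the digits of n! written in a given base:

```python
# Count the total number of "1" bits across the base-`base` digits of n.
# `ones_in_digits` is a name made up for this sketch.
from math import factorial

def ones_in_digits(n, base):
    total = 0
    while n:
        n, digit = divmod(n, base)
        total += bin(digit).count("1")  # 1-bits in this digit's binary form
    return total

n = factorial(16)  # 20,922,789,888,000
for base in (10, 12, 16, 18):
    print(base, ones_in_digits(n, base))  # → 14, 10, 18, 14 respectively
```

Running this over many values of n and many bases is a quick way to test whether any single base wins consistently.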

Someone asked: "How do you store these numbers?" Well, that is exactly my question: what is the best way to store numbers of the form n!? I could internally use digits in base 10, some power-of-two base, base 18, or some other base. Which one is best? I could store these integers internally as a 1D array of digits, long enough to hold all the digits of the value. Is there any reasonable way to print 100! in decimal without such an array?

+6
language-agnostic integer biginteger bignum factorial
4 answers

If you are simply trying to optimize the run time for computing the factorial, and changing the base is the only parameter you are varying, then the optimal base will likely contain small factors. 60 could be a smart choice. If you want to experiment, I would try various bases of the form (2^a)(3^b)(5^c).
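One rough way to test this suggestion is to enumerate bases of that form and compare the "1"-bit counts directly. A sketch under my own naming (the cap of 100 is an arbitrary demo limit, not part of the answer):

```python
from math import factorial

def ones_in_digits(n, base):
    # total "1" bits over all base-`base` digits of n
    total = 0
    while n:
        n, d = divmod(n, base)
        total += bin(d).count("1")
    return total

n = factorial(100)
# candidate bases of the form (2^a)(3^b)(5^c), capped at 100 for the demo
bases = sorted({2**a * 3**b * 5**c
                for a in range(7) for b in range(5) for c in range(3)
                if 2 <= 2**a * 3**b * 5**c <= 100})
best = min(bases, key=lambda b: ones_in_digits(n, b))
print(best, ones_in_digits(n, best))
```

Since 10 = 2 * 5 and 60 = 2^2 * 3 * 5 are both in the candidate set, the winner is guaranteed to do at least as well as decimal on this metric.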

Improving the speed of multiplication is probably the best route to better performance. Which algorithm do you use for multiplication? (schoolbook, Karatsuba, Toom-Cook, FFT, ...)

There are other factors to consider as well. If you frequently convert numbers to decimal, then a base that is a power of 10 will make that conversion as fast as possible.
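For instance, with limbs stored in base 10^9 (a power of 10 whose digits fit a 32-bit word), printing in decimal needs no bignum division at all, only zero-padded formatting of each limb. A minimal sketch, with `to_decimal` being my own name for the routine:

```python
# Limbs are little-endian base-10**9 "digits"; converting to a decimal
# string is pure formatting, with no bignum division required.
def to_decimal(limbs):
    parts = [str(limbs[-1])]                             # top limb, no padding
    parts += [f"{d:09d}" for d in reversed(limbs[:-1])]  # the rest, zero-padded
    return "".join(parts)

# 16! == 20922 * 10**9 + 789888000
print(to_decimal([789888000, 20922]))  # → 20922789888000
```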

Many (*) years ago, I wrote a base-6 floating-point library specifically to solve a problem that repeatedly multiplied and divided by 2 and/or 3. But unless you are trying to solve a specific problem, I think you will be better served by optimizing your algorithms in general rather than optimizing for factorials alone.

casevh

(*) I originally said "A few years ago," until I remembered that the program ran for many days on a 12-MHz 80286.

+5

While, from a purely mathematical point of view, the optimal base is e (which rounds to the nearest integer, 3), from a practical point of view, for a bignum library on a computer, pick the machine word size as the base of your number system (2^32 or 2^64). Yes, that is huge, but the higher abstraction levels of your bignum system are the slow part, while basic calculations on machine words are the fast part; so delegate as much of the calculation as possible to low-level processor instructions while keeping the bignum layer's own work minimal.
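A minimal sketch of that layout, using Python ints to stand in for machine words (`mul` and `BASE` are my names; a real library would do this in C so each inner step maps to a hardware multiply-with-carry):

```python
BASE = 2**32  # one limb per 32-bit machine word

def mul(a, b):
    """Schoolbook multiply of little-endian limb lists a and b."""
    out = [0] * (len(a) + len(b))
    for i, ai in enumerate(a):
        carry = 0
        for j, bj in enumerate(b):
            cur = out[i + j] + ai * bj + carry
            out[i + j] = cur % BASE   # low word stays in place
            carry = cur // BASE       # high word propagates
        out[i + len(b)] += carry
    while len(out) > 1 and out[-1] == 0:  # trim leading zero limbs
        out.pop()
    return out
```

Delegating each limb product to one hardware instruction is the point of choosing a word-sized base.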

And no, this is not a mistake. It is a very good learning exercise.

+2

I will not pretend to know any math, so do not take my answer as the sacred "optimal" you are probably looking for. If I had to compute factorials as quickly as possible, I would try some approximation (something like Stirling's approximation) or reduce the number of multiplications, because multiplication is an expensive operation. If you represent a number in base k, you can simulate multiplying by k with a shift. If you choose base 2, half of all multiplications become just shifts, and the rest become shifts plus a few bit operations. If you are aiming to minimize the number of "1"s in the representation, it depends on which numbers you are representing. As the base grows, you use fewer "1"s per given number, but you need more bits for each digit, which means more potential "1"s. Hope this helps at least a little; if not, just ask and I will try to answer.
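To illustrate the shift idea: in binary, multiplying by small constants decomposes into shifts and adds. A small sketch (the function names are mine):

```python
def times10(n):
    return (n << 3) + (n << 1)  # 10n = 8n + 2n: two shifts and an add

def times7(n):
    return (n << 3) - n         # 7n = 8n - n: one shift and a subtract
```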

+1

If by "1" bits you mean digits, I suggest base 256 or 65536. In other words, make each byte or word a "digit" for the purposes of your math. The computer handles such numbers routinely and is optimized for them. Your factorials will be fast, and so will your other operations.
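In Python this base-256 view is directly visible: an integer's bytes are its base-256 digits. A small sketch, assuming little-endian limb order:

```python
from math import factorial

n = factorial(16)
# base-256 "digits" of n, least significant first
limbs = n.to_bytes((n.bit_length() + 7) // 8, "little")
print(list(limbs))  # → [0, 128, 117, 119, 7, 19]
```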

Not to mention that the computer easily handles converting between bases like these and its own.

+1
