Converting a string to a number and vice versa

What is the complexity of converting a string to its equivalent number, or vice versa? Does it depend on the programming language?

At first glance it seems you need to traverse the entire string to convert it to a number, so is it O(n), or is some kind of type casting used instead?

This question arose when I wrote a procedure to check whether a given number is a palindrome. One approach is to keep dividing the number by the base (10 here), accumulate the remainders, and reassemble them at the end. Example: 309 / 10 → remainder 9, 30 / 10 → remainder 0, 3 / 10 → remainder 3; reassembled, we get 903.
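
For illustration, here is a minimal C++ sketch of that arithmetic approach (the function names are my own, and overflow of the reversed value is not handled):

    #include <iostream>

    // Reverse the decimal digits of n by repeated division by 10.
    // Assumes n >= 0 and that the reversed value fits in long long.
    long long reverseDigits(long long n) {
        long long reversed = 0;
        while (n > 0) {
            reversed = reversed * 10 + n % 10;  // append the last remainder
            n /= 10;                            // drop the last digit
        }
        return reversed;
    }

    bool isPalindrome(long long n) {
        return n == reverseDigits(n);
    }

    int main() {
        std::cout << reverseDigits(309) << '\n';   // prints 903
        std::cout << isPalindrome(12321) << '\n';  // prints 1
    }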

Another approach I took was to convert the number to a string; since strings have many member functions for slicing, reversing, and so on, the code was much shorter and cleaner. But is it actually better?
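
The string-based version might look something like this (a sketch; shorter, but not necessarily faster):

    #include <algorithm>
    #include <string>

    bool isPalindromeViaString(long long n) {
        std::string s = std::to_string(n);
        std::string r(s.rbegin(), s.rend());  // reversed copy
        return s == r;
    }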

+7
5 answers

A numeric string is a number written in positional notation, so to convert it to the machine's binary representation you have to take into account the value of each digit multiplied by the corresponding power of the base.

So yes, this is an O(N) operation, because the runtime grows linearly as more digits are added. However, in practice N may be capped by whatever numeric data types the language supports (for example, int32_t or int64_t). But if arbitrary-precision numbers are used (as some languages, such as Python, do by default), then there is no limit on the number of digits except available memory.
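
As a sketch of that definition in C++, the digits can be expanded explicitly as digit × base^position (assuming the input contains only decimal digits, with no sign or validation):

    #include <cstdint>
    #include <string>

    // Expand "309" as 3*10^2 + 0*10^1 + 9*10^0 — O(N) in the number of digits.
    uint64_t parsePositional(const std::string& s) {
        uint64_t value = 0;
        uint64_t power = 1;
        for (auto it = s.rbegin(); it != s.rend(); ++it) {  // least significant first
            value += static_cast<uint64_t>(*it - '0') * power;
            power *= 10;
        }
        return value;
    }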

+10

To convert a string to a number, you always have to read all of its digits, so it is at least O(n).

Now consider doing something like this (pseudo-code):

    a = 0
    foreach digit in string do
        a = 10 * a + digit
    end

This loop is O(n), so the overall complexity is O(n).
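
In runnable C++ that pseudo-code might look like the following sketch (no validation or overflow handling, for brevity):

    #include <string>

    int parseInt(const std::string& s) {
        int a = 0;
        for (char c : s) {
            a = 10 * a + (c - '0');  // shift accumulated digits left, add the new one
        }
        return a;
    }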

+3

C# and C/C++ strings carry no special metadata marking them as (possibly) numeric values. Therefore, when converting, they must parse the string digit by digit.

However, the number of digits is bounded, so we really only have O(1): the conversion time is bounded by a constant (usually the time to convert the largest representable number). For a 32-bit int, the conversion has to consider at most 10 decimal digits (plus, possibly, a sign).

Conversion from a string is likewise O(1), since parsing only has to consider a bounded number of characters (10 + 1 in the case of a 32-bit int).

Strictly speaking, O-notation does not really apply to the int-to-string case, since the maximum value of an int is bounded. Either way, the time required for the conversion (in both directions) is bounded by a constant.
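
One way to see that bound in C++: the maximum number of characters an int can occupy is a compile-time constant, available via std::numeric_limits (a small sketch):

    #include <iostream>
    #include <limits>

    int main() {
        // digits10 is 9 for a 32-bit int; any int value has at most
        // digits10 + 1 = 10 decimal digits, plus possibly a sign.
        int maxDigits = std::numeric_limits<int>::digits10 + 1;
        std::cout << maxDigits + 1 << " characters suffice (digits + sign)\n";
    }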

As @Charles points out, other languages (such as Python) may use arbitrary-precision numbers. For those, parsing takes O(number of digits), which is O(string length) and O(log(number)) for the two directions, respectively. With arbitrary-precision numbers you cannot do better, because in either direction every digit has to be taken into account. For conversions to/from limited-precision numbers, the same O(1) argument applies. However, I have not examined Python's parsing myself, so a less efficient algorithm may be used there.


EDIT: following @Steve's suggestion, I checked that the parsing routines in C/C++ and C# skip leading whitespace, so string -> int conversion time is actually O(input length). If the string is known to be trimmed, the conversion is again O(1).
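
For example, C's strtol is specified to skip leading whitespace before parsing digits, so a long run of spaces makes the scan O(input length):

    #include <cstdlib>
    #include <iostream>

    int main() {
        const char* s = "    42";           // leading spaces are skipped
        char* end = nullptr;
        long v = std::strtol(s, &end, 10);  // scans the spaces, then the digits
        std::cout << v << '\n';             // prints 42
    }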

0

I am sure that working with pure numeric operators (in C++ and C# that would be the modulus operator "%") is more efficient if coded correctly, because at some level you have to perform similar checks anyway (e.g., have we reached the end?), and converting between a string and a number can only add to the cost of the operation when you can achieve the same result without that conversion.

That said, I would not worry about the performance impact of number/string conversion, because it is probably negligible compared to the performance impact of most other areas of a program. Numeric types are limited to 64 bits, which puts a fairly low cap on the number of digits you need to plan for anyway, unless you implement or use big-number types backed by custom code.

You do not need to worry about O(n) complexity where n is the value of the number. It is more like O(n) where n is the number of digits (which has the low cap mentioned above), or, as another answer notes, O(log(n)) where n is the value of the number. Either way, a relatively negligible performance impact.

Now, if, as you suggest, there were no restriction on N (not really possible, since with 2 GB of RAM you can only store numbers up to about 2 billion digits), then we might have to think harder about the performance of the mathematical operators. Consider the cost of the "%" and "/" operators on such a large number. But then we realize that converting the number to a string uses basically those same operators, so once again you cannot beat treating it as a number directly, if you do it right.

0

If you convert a number N to a string, it takes O(log(N)) with base 10 (if you repeatedly divide by 10 and save the remainder). If you convert a string of length N, it takes O(N) (if you use an algorithm that keeps adding digit(i) * 10^i to your number).
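
A sketch of the divide-by-10 direction in C++ (digits come out least significant first, so the string is reversed at the end):

    #include <algorithm>
    #include <string>

    // O(log10(n)) iterations: one per decimal digit.
    std::string toDecimalString(unsigned long long n) {
        if (n == 0) return "0";
        std::string s;
        while (n > 0) {
            s.push_back(static_cast<char>('0' + n % 10));  // save the remainder
            n /= 10;
        }
        std::reverse(s.begin(), s.end());  // digits were produced in reverse order
        return s;
    }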

If you use library functions you did not write yourself (say, string member functions), you can only expect them to be slower.

0
