C# and C/C++ strings carry no special information about the (possible) numeric value they represent, so a conversion has to parse the string digit by digit.
However, the number of digits is bounded, so the whole thing is O(1): the conversion time is bounded by a constant (reached, roughly, when converting the largest value). For a 32-bit int, the conversion has to deal with at most 10 decimal digits (plus, possibly, a sign character).
The conversion from a string is likewise O(1), since only a bounded number of characters needs to be examined (10 + 1 in the case of a 32-bit int).
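To make the bounded-work argument concrete, here is a minimal sketch of digit-by-digit string-to-int parsing. ParseInt32 is a hypothetical helper of mine, not the library implementation; overflow and error handling are omitted.

```csharp
using System;

// Minimal sketch of digit-by-digit parsing for a 32-bit int.
// ParseInt32 is a hypothetical helper, not the BCL implementation;
// overflow and error handling are omitted for brevity.
static int ParseInt32(string s)
{
    int i = 0;
    bool negative = false;

    // At most one sign character.
    if (i < s.Length && (s[i] == '+' || s[i] == '-'))
    {
        negative = s[i] == '-';
        i++;
    }

    int result = 0;
    // A value that fits in 32 bits has at most 10 decimal digits,
    // so this loop runs a bounded number of times: O(1).
    while (i < s.Length && char.IsDigit(s[i]))
    {
        result = result * 10 + (s[i] - '0');
        i++;
    }

    return negative ? -result : result;
}

Console.WriteLine(ParseInt32("-12345")); // -12345
```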
Strictly speaking, O-notation does not really apply to the int-to-string case, since the maximum value of int is bounded. In any case, the time required for the conversion (in both directions) is bounded by a constant.
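For the opposite direction, a repeated-division sketch makes the constant bound visible; this is purely illustrative and not how any particular runtime implements ToString.

```csharp
using System;
using System.Text;

// Sketch of int -> string by repeated division; purely illustrative.
static string Int32ToString(int value)
{
    if (value == 0) return "0";

    bool negative = value < 0;
    long v = Math.Abs((long)value); // long avoids overflow on int.MinValue

    var digits = new StringBuilder();
    // A 32-bit int has at most 10 decimal digits,
    // so this loop runs at most 10 times: bounded by a constant.
    while (v > 0)
    {
        digits.Insert(0, (char)('0' + (int)(v % 10)));
        v /= 10;
    }

    return negative ? "-" + digits : digits.ToString();
}

Console.WriteLine(Int32ToString(int.MaxValue)); // 2147483647
```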
As @Charles points out, other languages (e.g. Python) actually use arbitrary-precision numbers. Parsing such a number takes O(number of digits), which is O(string length) for the string-to-number direction and O(log(number)) for the number-to-string direction. With arbitrary-precision numbers you cannot do better, because either conversion has to look at every digit. For conversions to/from fixed-precision numbers, the same O(1) argument applies. However, I have not looked at Python's parsing itself, so perhaps a less efficient algorithm is used there.
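The same "every digit must be touched" argument can be sketched with .NET's System.Numerics.BigInteger as a stand-in for Python's arbitrary-precision int; the function below is my own naive version, not a library routine.

```csharp
using System;
using System.Numerics;

// Naive arbitrary-precision parse, standing in for Python's int(str).
// Each of the n input digits is visited exactly once, so no conversion
// can be faster than O(n); the BigInteger arithmetic inside the loop
// makes this naive sketch even slower than a tuned library routine.
static BigInteger ParseBig(string s)
{
    BigInteger result = BigInteger.Zero;
    foreach (char c in s)
        result = result * 10 + (c - '0');
    return result;
}

Console.WriteLine(ParseBig("123456789012345678901234567890"));
```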
EDIT: following @Steve's suggestion, I checked that parsing in C/C++ and C# skips leading whitespace, so the string -> int conversion is really O(input length). If the string is known to be trimmed, the conversion is again O(1).
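A short illustration of why the whitespace matters, assuming the default int.Parse behavior (NumberStyles.Integer permits leading and trailing whitespace): the parser has to scan past the padding before it reaches the digits.

```csharp
using System;

// int.Parse accepts leading/trailing whitespace by default, so the
// parser must first scan past the padding before reading the digits.
string padded = new string(' ', 1_000_000) + "42";

int a = int.Parse(padded);        // scans ~1,000,000 spaces first: O(input length)
int b = int.Parse(padded.Trim()); // trimmed string: only the bounded digit work, O(1)

Console.WriteLine(a == b); // True
```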
Vlad