This is a common question in introductory computer science classes at university. The main areas of focus are: a) understanding how (integer) numbers are stored as binary digits, and b) the basics of data structures: if the programming language does not provide the data structure you need, you can build it yourself from aggregate structures such as struct in C, class in C++, or record in Pascal.
So, how is an integer stored on a computer? In C, you have the char, short, int, and long data types, which can be used to store integers of different sizes. (I will ignore long long for this discussion.) Let's say, for the sake of argument, that on this 32-bit platform the sizes are 8-bit, 16-bit, 32-bit, and 64-bit, respectively. Now consider the range of values each can represent (to keep things simple, treat them as unsigned).
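A quick way to see the sizes and unsigned ranges on your own machine is to print them with the constants from limits.h (a minimal sketch assuming a C99 compiler; the exact numbers vary by platform):

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    /* Sizes and unsigned maxima on this platform; values are implementation-defined. */
    printf("unsigned char : %zu bytes, max %u\n",  sizeof(unsigned char),  (unsigned)UCHAR_MAX);
    printf("unsigned short: %zu bytes, max %u\n",  sizeof(unsigned short), (unsigned)USHRT_MAX);
    printf("unsigned int  : %zu bytes, max %u\n",  sizeof(unsigned int),   UINT_MAX);
    printf("unsigned long : %zu bytes, max %lu\n", sizeof(unsigned long),  ULONG_MAX);
    return 0;
}
```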
Now, how could you store a larger integer, one that cannot fit even in an unsigned 64-bit value? Create your own large-integer data type composed of several smaller (but standard) integers, so that together they can represent larger values.
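For example, here is a minimal sketch of a 128-bit unsigned type built from two 64-bit halves, with addition that propagates the carry by hand (the names uint128 and uint128_add are my own for illustration, not a standard API):

```c
#include <stdint.h>
#include <stdio.h>

/* A 128-bit unsigned integer built from two 64-bit halves. */
typedef struct {
    uint64_t lo;  /* least significant 64 bits */
    uint64_t hi;  /* most significant 64 bits  */
} uint128;

/* Add two uint128 values, carrying from the low half into the high half. */
static uint128 uint128_add(uint128 a, uint128 b) {
    uint128 r;
    r.lo = a.lo + b.lo;
    /* Unsigned addition wraps around, so r.lo < a.lo means a carry occurred. */
    r.hi = a.hi + b.hi + (r.lo < a.lo);
    return r;
}

int main(void) {
    uint128 a = { UINT64_MAX, 0 };  /* largest value a single uint64_t can hold */
    uint128 b = { 1, 0 };
    uint128 c = uint128_add(a, b);  /* overflows lo, carries into hi */
    printf("hi=%llu lo=%llu\n", (unsigned long long)c.hi, (unsigned long long)c.lo);
    return 0;
}
```

The same idea scales to an array of smaller integers ("limbs"), which is how arbitrary-precision libraries represent big numbers.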
I think this should point you in the right direction and help you write your own answer to your homework or exam question.
mctylr