©Conrad Weisert, September 1, 2013
In both general library code and custom applications, students are introduced to 32-bit integers as a common default. In the C family of languages that's an int, consuming 4 bytes.
Early micro-computers did arithmetic a byte at a time, so programmers used the smallest
integers that they believed wouldn't overflow in normal use, often 16 bits, sometimes even 8.
Larger computers and later small ones did parallel arithmetic on 32-bit and eventually 64-bit numbers,
so the performance penalty disappeared or became insignificant. The 32-bit
int became a default specification;
if you didn't specify otherwise, that's what you got in many contexts.
So textbooks and program libraries were filled with general-purpose integer routines, such as greatest common divisor and integer-to-string conversion, that were limited to 32-bit arguments and 32-bit results.
For many purposes the range of a 32-bit integer is more than adequate: 2³² is more than 4 billion, plenty for almost any array subscripting or inventory counting. But when we develop a general-purpose function that works with integers, there's rarely a reason to restrict its range.
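Here's a minimal sketch of that principle, assuming Java (where a long is 64 bits); the IntegerMath class name and the demonstration main method are illustrative, not taken from this site's library:

    public class IntegerMath {

        // Greatest common divisor by Euclid's algorithm, for non-negative arguments.
        // Declaring the parameters and result as long removes the arbitrary 32-bit
        // limit an int version would impose; the algorithm itself is unchanged.
        public static long gcd(long a, long b) {
            while (b != 0) {
                long r = a % b;
                a = b;
                b = r;
            }
            return a;
        }

        public static void main(String[] args) {
            // Arguments well beyond the 32-bit range work without any special handling.
            System.out.println(gcd(6_000_000_000L, 9_000_000_000L));   // prints 3000000000
        }
    }

The int version and the long version would be the same algorithm line for line; only the declared widths differ.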
Thirty-two bits is also adequate for amounts of money if we're a grocery store, but probably not if we're a bank or a government agency. That's a different issue, however; we're not talking here about quantities that have a unit of measure. Object-oriented classes give us full control over the internal representation of such data.
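As a rough illustration of that aside, consider a hypothetical Money class (not part of this site's library). The internal width is a private detail that client code never sees and that can be widened later without disturbing anything else:

    public final class Money {
        private final long cents;    // internal representation; free to widen or change

        private Money(long cents) { this.cents = cents; }

        public static Money ofCents(long cents) { return new Money(cents); }

        public Money plus(Money other) { return new Money(this.cents + other.cents); }

        @Override
        public String toString() { return cents + " cents"; }   // representation still hidden from callers
    }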
Library functions on this web site therefore declare their arguments and results as long whenever such numbers make sense. If there's a performance penalty, it's minuscule on today's computers.
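For instance, an integer-to-string routine in that style might look like the following sketch (again illustrative Java, not the site's actual code); because the argument is declared long, the same routine serves every integer the program can hold:

    // Converts any long value, including Long.MIN_VALUE, to its decimal string.
    public static String toDecimalString(long value) {
        if (value == 0) return "0";
        boolean negative = value < 0;
        long v = negative ? value : -value;   // work with negative magnitudes to avoid overflow
        StringBuilder digits = new StringBuilder();
        while (v != 0) {
            digits.append((char) ('0' - v % 10));
            v /= 10;
        }
        if (negative) digits.append('-');
        return digits.reverse().toString();
    }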