©Conrad Weisert, March 17, 2012
Our Information Technology profession is filled with controversial issues and with concepts that are subject to subtle shades of interpretation. The meaning of the int primitive data type in Java is not one of them. A Java int is simply a 32-bit signed binary integer. It is never anything else.
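Java itself confirms the point. A minimal sketch (the class name here is my own):

```java
public class IntFacts {
    public static void main(String[] args) {
        // An int occupies exactly 32 bits, on every platform.
        System.out.println(Integer.SIZE);          // 32
        // The signed range is -2^31 .. 2^31 - 1.
        System.out.println(Integer.MIN_VALUE);     // -2147483648
        System.out.println(Integer.MAX_VALUE);     //  2147483647
        // Arithmetic wraps around in two's-complement fashion.
        System.out.println(Integer.MAX_VALUE + 1); // -2147483648
    }
}
```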
Nevertheless, a recent undergraduate course textbook manages to confuse readers with this bit of function documentation:
     * @param n the non-negative integer in decimal notation
     . .
    public static String fctnName (int n)
It's hard to explain such a gaffe. Perhaps the example was adapted and carelessly edited from a similar routine coded in another programming language.
This example set off a spirited discussion among students in an intermediate-level programming course. Some of them speculated on how one might stretch the meaning of the comment to explain the author's intent.
None of those speculations could be justified. The quoted documentation is simply erroneous and certainly not worth a debate.
In particular, the following considerations, which a few students raised, have no relevance to the issue:

The history of the argument value, which may have been represented in decimal notation before being converted to int. An end user might have entered a decimal integer into a screen form. Some other function might have returned it as a result.
But that was then and the specification of the function is now. A function specification is what it is, not its history. There is simply no way for the function parameter to have any relationship whatever to "decimal notation".
On the other hand, there is nothing wrong with requiring that n must be "non-negative". That's a perfectly normal way of specifying a legitimate constraint on an input or function parameter.
If we delete the last three words from the comment line, the example will make sense. It may have other flaws, but they're not apparent from this small fragment.
Binary integers of various sizes are a fundamental built-in data type of the C family of programming languages. They are the only legitimate operands for the shift and bitwise operators: <<, >>, &, |, ^, and ~.
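A small illustration of those operators at work on a Java int (the class name and values are my own):

```java
public class BitOps {
    public static void main(String[] args) {
        int flags = 0b1010;                 // binary literal: 10 decimal
        System.out.println(flags << 1);     // 20: left shift doubles the value
        System.out.println(flags & 0b0010); // 2:  mask off a single bit
        System.out.println(flags | 0b0101); // 15: set additional bits
        System.out.println(flags ^ 0b1111); // 5:  toggle bits
        System.out.println(~flags);         // -11: two's-complement bitwise NOT
    }
}
```

None of these operations would make sense unless the operand really were a binary integer.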
Furthermore, there is no built-in decimal data type in C, C++, or Java. A programmer using those languages must confront and master binary integers and exploit them in everyday programming.
Floating-point data (float and double) also uses binary representation. That's why 0.1, for example, isn't exact.
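The inexactness of 0.1 is easy to demonstrate (the class name here is my own):

```java
public class BinaryFractions {
    public static void main(String[] args) {
        // 0.1 has no exact binary representation, so each addition
        // carries a tiny rounding error that accumulates.
        double sum = 0.0;
        for (int i = 0; i < 10; i++) {
            sum += 0.1;
        }
        System.out.println(sum);        // 0.9999999999999999, not 1.0
        System.out.println(sum == 1.0); // false
    }
}
```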