Thank you, Bing Copilot (ChatGPT), for giving me another “thing I just learned” to blog about.
In the early days of “K&R C”, things were quite a bit different. C was not nearly as portable as it is today. The ANSI-C standard helped quite a bit, but even after it became a standard there were still issues when moving C code between machines with different architectures. For example:
int x;
What is x? According to the C standard, an “int” is “at least 16 bits.” On my Radio Shack Color Computer, an int was 16-bits (0-65535 if unsigned). I expect the int on my friend’s Commodore Amiga was 32-bits, though I really don’t know. And even when you “know”, assuming that to be the case is a “bad thing.”
I used a K&R C compiler on my CoCo, and later on my 68000-based MM/1 computer. That is when I became aware that an “int” was different. Code that worked on my CoCo would port fine to the MM/1, since it was written assuming an int was 16-bits. But trying to port anything from the MM/1 to the CoCo was problematic if the code had assumed an int was 32-bits.
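Here is a made-up example of the kind of assumption that made porting in that direction painful. This is just a sketch for this post, not real MM/1 or CoCo code:

#include <stdio.h>

int main(void)
{
    /* Sketch only: this quietly assumes an unsigned int can hold
       values larger than 65535. */
    unsigned int count = 70000;

    /* Where int is 32 bits, this prints 70000. Where int is 16 bits,
       the value wraps around to 4464 (70000 - 65536). */
    printf("%u\n", count);

    return 0;
}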
When I got a job at Microware in 1995, I saw my first ANSI-C compiler: Ultra C. To deal with “what size is an int” issues, Microware created their own header file, types.h, which included their definitions for variables of specific sizes:
u_int32 x;
int32 y;
All the OS library calls were prototyped to use these special types, though if you knew an “unsigned long” was the same as a “u_int32” or a “short” was the same as an “int16”, you could still use those.
But probably shouldn’t.
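I no longer have the Ultra C headers handy, but the idea was something along these lines (my reconstruction for this post, not the actual Microware file):

/* Reconstruction of the idea, not the real Microware header. Each
   compiler/port picked whatever built-in type happened to be the
   right size, so these names always meant the same thing. */
typedef signed short    int16;
typedef unsigned short  u_int16;
typedef signed int      int32;    /* "int" was 32 bits on that compiler */
typedef unsigned int    u_int32;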
In those years, I saw other compilers do similar things, such as “U32 x;” and “I16 y;”. I expect there were many variations of folks trying to solve this problem.
Some years later, I used the GCC compiler for the first time and learned that the C standard (as of C99) now had its own version of types.h, called stdint.h. That gave us things like:
uint32_t x;
int32_t y;
It was easy to adopt these new standard definitions, and I have tried to use them ever since.
I was also introduced to the defines that specified the largest value that would fit in an “int” or “long” on a system, found in limits.h:
...
#define CHAR_MAX 255 /* char maximum */
#define CHAR_MIN 0 /* char minimum */
/* signed int properties */
#define INT_MAX 32767 /* signed integer maximum */
#define INT_MIN (-32767-_C2) /* signed integer minimum */
/* signed long properties */
#define LONG_MAX 2147483647 /* signed long maximum*/
#define LONG_MIN (-2147483647-_C2) /* signed long minimum*/
...
The values would vary based on whether your system was 16-bit, 32-bit, or 64-bit. It allowed you to do this:
int x = INT_MAX;
unsigned int y = UINT_MAX;
…and have code that would compile on a 16-bit or 64-bit system. If you had tried something like this:
unsigned int y = 4294967295; // Max 32-bit value.
…that code would NOT work as expected when compiled on a 16-bit system (like my old CoCo, an Arduino UNO, or the PIC24 processors I use at work).
I learned to use limits.h.
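These days, when code really must have a 32-bit unsigned int, one trick I like is to make the compile fail instead of letting the math silently go wrong. This is a sketch, not from any particular project of mine:

#include <limits.h>

/* A sketch: refuse to build on any system where "unsigned int" is too
   small to hold a full 32-bit value, instead of wrapping silently. */
#if UINT_MAX < 4294967295UL
#error "This code assumes unsigned int is at least 32 bits"
#endif

unsigned int y = 4294967295UL; /* safe here because of the check above */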
But this week, I was working on code that needed to find the highest and lowest values in a 32-bit number range. I had code like this:
uint32_t EarliestSequenceNumber = 4294967295;
uint32_t LatestSequenceNumber = 0;
And that works fine, and should work on any system that provides uint32_t, no matter how big an int is there. (Though I used hex, since I know 0xffffffff is the max value, and I always have to look up or use a calculator to find the decimal version.)
Had I been using signed integers, I would be doing this:
int32_t LargestSignedInt = 2147483647;
Or I’d use 0x7fffffff.
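For context, here is a made-up, self-contained version of the kind of scan I was writing, with the limits hard-coded the way I had them at this point (the names and data are invented for this post, not my actual work code):

#include <stdint.h>
#include <stdio.h>

/* Start "earliest" at the largest possible 32-bit value and "latest"
   at the smallest, then let the data pull them inward. */
int main(void)
{
    uint32_t sequenceNumbers[] = { 17, 42, 3, 2000000000u, 8 };
    size_t   count = sizeof(sequenceNumbers) / sizeof(sequenceNumbers[0]);
    size_t   i;

    uint32_t EarliestSequenceNumber = 0xFFFFFFFF; /* max 32-bit value */
    uint32_t LatestSequenceNumber   = 0;

    for (i = 0; i < count; i++)
    {
        if (sequenceNumbers[i] < EarliestSequenceNumber)
        {
            EarliestSequenceNumber = sequenceNumbers[i];
        }

        if (sequenceNumbers[i] > LatestSequenceNumber)
        {
            LatestSequenceNumber = sequenceNumbers[i];
        }
    }

    printf("Earliest: %lu, Latest: %lu\n",
           (unsigned long)EarliestSequenceNumber,
           (unsigned long)LatestSequenceNumber);

    return 0;
}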
As I looked at my code, I wondered if C provided similar defines for the stdint.h types.
stdint.h also has limits!
And it does! Since all of this changed/happened after I already “learned” C, I never got the memo about new features being added. Inside stdint.h are also defines like this:
#define INT8_MAX (127)
#define INT8_MIN (-128)
#define UINT8_MAX (255)
#define INT16_MAX (32767)
#define INT16_MIN (-32767 - 1)
#define UINT16_MAX (65535)
#define INT32_MAX (2147483647)
#define INT32_MIN (-2147483647 - 1)
#define UINT32_MAX (4294967295U)
#define INT64_MAX (9223372036854775807LL)
#define INT64_MIN (-9223372036854775807LL - 1)
#define UINT64_MAX (18446744073709551615ULL)
…very similar to what limits.h offers for standard ints, etc. (The signed MIN values are written as “max minus one” because the positive literal would not fit in the signed type itself.) Neat!
Now my code can do:
uint32_t EarliestSequenceNumber = UINT32_MAX;
uint32_t LatestSequenceNumber = 0;
…and that’s the new C thing I learned today.
And it may have even been there when I first learned about stdint.h and I just did not know.
And knowing is half the battle.