Category Archives: C Programming

My first C program for CoCo DISK BASIC.

On this day in history … I built the CMOC compiler and compiled my first C program for a non-OS-9 CoCo.

I created this source file:

int main()
{
	char *ptr = (char *)1024;                // start of the CoCo's 32x16 text screen
	while (ptr < (char *)1536) *ptr++ = 128; // fill all 512 bytes with graphics block 128
	return 0;
}

I compiled it using “cmoc hello.c”, which produced “hello.bin”.

I created a new blank disk image using “decb dskini C.DSK”.

I copied the binary to that disk image using “decb copy hello.bin C.DSK,HELLO.BIN -2”

I booted up the XRoar emulator and mounted that disk image as the first drive.

I did LOADM”HELLO” and then EXEC.

And so it begins…

Reversing bits in C

In my day job, we have a device that needs data sent to it with the bits reversed. For example, if we were sending an 8-bit value of 128, that bit pattern is 10000000. The device expects the high bit first, so we’d send it as 00000001.

In one system, we do an 8-bit bit reversal using a lookup table. I suppose that one needed it to be really fast.

In another (using a faster PIC24 chip with more RAM, flash and CPU speed), we do it with a simple C routine that was easy to understand.
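Neither of our production routines is mine to share, but here is a minimal sketch of both styles for 8-bit values (the names are my own):

#include <stdint.h>

// The "easy to understand" style: shift bits out of one end
// and into the other, one bit per pass.
uint8_t ReverseBitsLoop (uint8_t value)
{
    uint8_t result = 0;

    for (uint8_t bit = 0; bit < 8; bit++)
    {
        result = (result << 1) | (value & 1);
        value >>= 1;
    }

    return result;
}

// The lookup-table style: reverse each 4-bit nibble with a
// 16-entry table, then swap the nibbles. Faster, costs 16 bytes.
static const uint8_t reversedNibble[16] =
{
    0x0, 0x8, 0x4, 0xC, 0x2, 0xA, 0x6, 0xE,
    0x1, 0x9, 0x5, 0xD, 0x3, 0xB, 0x7, 0xF
};

uint8_t ReverseBitsTable (uint8_t value)
{
    return (uint8_t)((reversedNibble[value & 0x0F] << 4) | reversedNibble[value >> 4]);
}

Calling either with 128 (10000000) returns 1 (00000001), matching the example above.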

I suppose this breaks down to four main approaches to take:

  • Smallest Code Size – for when ROM/flash is at a premium, even if the code is a confusing mess.
  • Smallest Memory Usage – for when RAM is at a premium, even if the code is a confusing mess.
  • Fastest – for when speed is the most important thing, even if the code is a confusing mess.
  • Clean Code – easiest to understand and maintain, for when you don’t want code to be a confusing mess.

In our system, which is made up of multiple independent boards with their own CPUs and firmware, we do indeed have some places where code size is most important (because we are out of room), and other places where speed is most important.

When I noticed we did it two different ways, I wondered if there might be even more approaches we could consider.

I did a quick search on “fastest way to reverse bits in C” and found a variety of resources, and wanted to point out this fun one:

https://graphics.stanford.edu/~seander/bithacks.html#BitReverseObvious

At that section of this lengthy article are a number of methods to reverse bits. Two of them make use of systems that support 64-bit math and do it with just one line of C code (though I honestly have no understanding of how they work).
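For the curious, here is the shortest of them as I recall it from that page; it reverses one byte with a 64-bit multiply and a modulus:

unsigned char b = 128; // the byte to reverse

b = (b * 0x0202020202ULL & 0x010884422010ULL) % 1023; // b is now 1

The multiply makes five shifted copies of the byte, the AND keeps each original bit exactly once at a carefully chosen position, and the mod by 1023 folds those positions down (2^10 is 1 mod 1023) so the bits land in reversed order.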

Just in case you ever need to do this, I hope this pointer is useful to you.

Happy reading!

Checksums and zeros and XMODEM and randomness.

A year or two ago, I ran across some C code at my day job that finally got me to do an experiment…

When I was first using a modem to dial in to BBSes, it was strictly a text-only interface. No pictures. No downloads. Just messages. (Heck, a physical bulletin board at least would let you put pictures on it! Maybe whoever came up with the term BBS was just forward thinking?)

The first program I ever had that sent a program over the modem was DFT (direct file transfer). It was magic.

Later, I got one that used a protocol known as XMODEM. It seemed like warp speed compared to DFT!

XMODEM would send a series of bytes, followed by a checksum of those bytes, then the other end would calculate a checksum over the received bytes and compare. If they matched, it went on to the next series of bytes… If it did not, it would resend those bytes.
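For reference, a classic XMODEM block is laid out like this (a sketch from the protocol’s common description, not from my old downloads):

#include <stdint.h>

// One classic XMODEM block: 3 header bytes, 128 data bytes,
// and a 1-byte additive checksum.
typedef struct
{
    uint8_t soh;        // 0x01, Start Of Header
    uint8_t blockNum;   // 1, 2, 3... wrapping after 255
    uint8_t blockInv;   // 255 minus blockNum, a header sanity check
    uint8_t data[128];  // the payload
    uint8_t checksum;   // sum of the 128 data bytes, modulo 256
} XmodemBlock;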

Very simple. And, believe it or not, checksums are still being used by modern programmers today, even though newer methods have been created (such as CRC).

Checking the sum…

A checksum is simply the value you get when you add up all the bytes of some data. Checksum values are normally not floating point, so they will be limited to a fixed range. For example, an 8-bit checksum (using one byte) can hold a value of 0 to 255. A 16-bit checksum (2 bytes) can hold a value of 0-65535. Since the sum of the data bytes can easily exceed that range, especially for an 8-bit checksum, the value just rolls over.

For example, if the current checksum calculated value is 250 for an 8-bit checksum, and the next byte being counted is a 10, the checksum would be 250+10, but that exceeds what a byte can hold. The value just rolls over, like this:

250 + 10: 251, 252, 253, 254, 255, 0, 1, 2, 3, 4

Thus, the checksum after adding that 10 is now 4.

Here is a simple 8-bit checksum routine for strings in Color BASIC:

0 REM CHKSUM8.BAS
10 INPUT "STRING";A$
20 GOSUB 100
30 PRINT "CHECKSUM IS";CK
40 GOTO 10

100 REM 8-BIT CHECKSUM ON A$
110 CK=0
120 FOR A=1 TO LEN(A$)
130 CK=CK+ASC(MID$(A$,A,1))
140 IF CK>255 THEN CK=CK-256
150 NEXT
160 RETURN

Line 140 is what handles the rollover. If we had a checksum of 250 and the next byte was a 10, the sum would be 260. That line detects it and subtracts 256, making it 4. (The value wraps through 0, just like a real 8-bit register would.)
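As an aside, in C the rollover comes for free: math done in a uint8_t wraps modulo 256 on its own, so no IF is needed. A minimal sketch (my own naming):

#include <stdint.h>
#include <stddef.h>

// 8-bit additive checksum. The uint8_t sum wraps at 256
// automatically, so 250 + 10 comes out as 4.
uint8_t Checksum8 (const uint8_t *data, size_t length)
{
    uint8_t sum = 0;

    for (size_t i = 0; i < length; i++)
    {
        sum += data[i];
    }

    return sum;
}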

The goal of a checksum is to verify data and make sure it hasn’t been corrupted. You send the data and checksum. The receiver passes the data through a checksum routine, then compares what it calculated with the checksum that was sent with the message. If they do not match, the data has something wrong with it. If they do match, the data is less likely to have something wrong with it.

Double checking the sum.

One of the problems with just adding (summing) up the data bytes is that two swapped bytes would still create the same checksum. For example “HELLO” would have the same checksum as “HLLEO”. Same bytes. Same values added. Same checksum.

A good 8-bit checksum.

However, if one byte got changed, the checksum would catch that.

A bad 8-bit checksum.

It would be quite a coincidence if two data bytes got swapped during transfer, but I still wouldn’t use a checksum on anything where lives were at stake if it processed a bad message because the checksum didn’t catch it ;-)

Another problem is that if the value rolls over, that means a long message or a short message could cause the same checksum. In the case of an 8-bit checksum, and data bytes that range from 0-255, you could have a 255 byte followed by a 1 byte and that would roll over to 0. A checksum of no data would also be 0. Not good.

Checking the sum: Extreme edition

A 16-bit or 32-bit checksum would just be a larger number, reducing how often it could roll over.

For a 16-bit value, ranging from 0-65535, you could hold up to 257 bytes of value 255 before it would roll over:

255 * 257 = 65535

But if the data were 258 bytes of value 255, it would roll over:

255 * 258 = 65790 -> rollover to 254.

Thus, a 258-byte message of all 255s would have the same checksum as a 1-byte message of a single 254.

To update the Color BASIC program for 16-bit checksum, change line 140 to be:

140 IF CK>65535 THEN CK=CK-65536

Conclusion

Obviously, an 8-bit checksum is rather useless, but if a checksum is all you can do, at least use a 16-bit checksum. If you were using the checksum for data packets larger than 257 bytes, maybe a 48-bit checksum would be better.

Or just use a CRC. They are much better and catch things like bytes being out of order.

But I have no idea how I’d write one in BASIC.
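In C, though, the bit-by-bit version is short. Here is a sketch of CRC-16/XMODEM (polynomial 0x1021, initial value 0), the CRC used by the XMODEM-CRC protocol variant:

#include <stdint.h>
#include <stddef.h>

// Bit-by-bit CRC-16/XMODEM. Unlike an additive checksum,
// swapping two bytes changes the result.
uint16_t Crc16Xmodem (const uint8_t *data, size_t length)
{
    uint16_t crc = 0;

    for (size_t i = 0; i < length; i++)
    {
        crc ^= (uint16_t)data[i] << 8; // bring the next byte into the top

        for (int bit = 0; bit < 8; bit++)
        {
            if (crc & 0x8000)
            {
                crc = (crc << 1) ^ 0x1021; // shifting out a 1: apply the polynomial
            }
            else
            {
                crc = crc << 1;            // shifting out a 0: just shift
            }
        }
    }

    return crc;
}

With this, “HELLO” and “HLLEO” no longer produce the same value.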

One more thing…

I almost forgot what prompted me to write this. I found some code that would flag an error if the checksum value was 0. When I first saw that, I thought “but 0 can be a valid checksum!”

For example, if there were enough data bytes to roll the value over from 65535 back to 0, that zero would be a valid checksum. To avoid legitimate large data that sums to 0 being flagged as bad, I added a small check to the 16-bit checksum validation code:

if ((checksum == 0) && (datasize < 258)) // Don't bother doing this.
{
    // checksum appears invalid.
}
else if (checksum != dataChecksum)
{
    // checksum did not match.
}
else
{
    // guess it must be okay, then! Maybe...
}

But, what about a buffer full of 00s? The checksum would also be zero, which would be valid.

Conclusion: Don’t error check for a 0 checksum.

Better yet, use something better than a checksum…

Until next time…

When there’s not enough room for sprintf…

Updates:

  • 2022-08-30 – Corrected a major bug in the Get8BitHexStringPtr() routine.

“Here we go again…”

Last week I ran out of ROM space in a work project. For each code addition, I have to do some size optimization elsewhere in the program. Some things I tried actually made the program larger. For example, we have some status bits that get set in two different structures. The code will do it like this:

shortStatus.faults |= FAULT_BIT;
longStatus.faults |= FAULT_BIT;

We have code like that in dozens of places. One of the things I had done earlier was to change that into a function. This was primarily so I could have common code set fault bits (since each of the four different boards I work with had a different name for its status structures). It was also to reduce the number of lines in the code and make their intent clearer (“clean code”).

void setFault (uint8_t faultBit)
{
    shortStatus.faults |= faultBit;
    longStatus.faults |= faultBit;
}

During a round of optimizing last week, I noticed that the overhead of calling that function was larger than just doing it manually. I could switch back and save a few bytes every time it was used, but since I still wanted to maintain “clean code”, I decided to make a macro instead of the function. Now I can still do:

setFault (FAULT_BIT);

…but under the hood it’s really doing a macro instead:

#define setFault(faultBit) { shortStatus.faults |= faultBit; longStatus.faults |= faultBit; }

Now I get what I wanted (a “function”) but retain the code size savings of in-lining each instance.
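One refinement worth mentioning (a general C idiom, not something my original macro did): wrapping the body in do { } while (0) makes the macro act like a single statement, so it stays safe even after a brace-less if or else:

#define setFault(faultBit)                \
    do                                    \
    {                                     \
        shortStatus.faults |= (faultBit); \
        longStatus.faults |= (faultBit);  \
    } while (0) // the caller's semicolon completes the do/while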

I also thought that doing something like this might be smaller:

shortStatus.faults |= FAULT_BIT;
longStatus.faults = shortStatus.faults;

…but from looking at the PIC24 assembly code, that’s much larger. I did end up using it in large blocks of code that conditionally decided which fault bit to set, and then I sync the long status at the end. As long as the overhead of “this = that” was less than the overhead of the multiple inlined instructions it replaced, it was worth doing.

And keep in mind, this is because I am 100% out of ROM. Saving 4 bytes here, and 20 bytes there means the difference between being able to build or not.

Formatting Output

One of the reasons for the “code bloat” was adding support for an LCD display. The panel, an LCD2004, hooks up to I2C via a PCF8574 I2C I/O chip. I wrote just the routines needed for the minimal functionality required: Initialize, Clear Screen, Position Cursor, and Write String.

The full Arduino libraries (there are many) are so large by comparison that it often makes more sense to spend the time to “roll your own” than to port what someone else has already done. (This also means I do not have to worry about any licensing restrictions for using open source code.)

I created a simple function like:

LCDWriteDataString (0, 0, "This is my message.");

The two numbers are the X and Y (or Column and Row) of where to display the text on the 20×4 LCD screen.

But, I was quickly reminded that the PIC architecture doesn’t support passing constant string data due to “reasons”. (Harvard architecture, for those who know.)

To make it work, you had to do something like:

const char *msg = "This is my message";
LCDWriteDataString (0, 0, msg);

…or…

char buffer[19];
memcpy (buffer, "This is my message", 19); // 18 characters plus the NIL terminator
LCDWriteDataString (0, 0, buffer);

…or, using the CCS compiler tools, add this to make the compiler take care of it for you:

#device PASS_STRINGS=IN_RAM

Initially I did that so I could get on with the task at hand, but as I ran out of ROM space, I revisited this to see which approach was smaller.

From looking at the assembly generated by the CCS compiler, I could tell that “PASS_STRINGS=IN_RAM” generated quite a bit of extra code. Passing in a constant string pointer was much smaller.

So that’s what I did. And development continued…

Then I ran out of ROM yet again. Since I had some strings that needed formatted output, I was using sprintf(). I knew that sprintf() was large, so I thought I could create my own that only did what I needed:

char buffer[21];
sprintf (buffer, "CF:%02x C:%02x T:%02x V:%02x", faults, current, temp, volts);
LCDWriteDataString (0, 0, buffer);

char buffer[21];
sprintf (buffer, "Fwd: %u", watts);
LCDWriteDataString (0, 1, buffer);

In my particular example, all I was doing was printing out an 8-bit value as HEX, and printing out a 16-bit value as a decimal number. I did not need any of the other baggage sprintf() was bringing along.

I came up with these quick and dirty routines:

char GetHexDigit(uint8_t nibble)
{
  char hexChar;

  nibble = (nibble & 0x0f); // only the low four bits matter

  if (nibble <= 9)
  {
    hexChar = '0';          // 0-9 become '0'-'9'
  }
  else
  {
    hexChar = 'A'-10;       // 10-15 become 'A'-'F'
  }

  return (hexChar + nibble);
}

char *Get8BitHexStringPtr (uint8_t value)
{
    static char hexString[3];

    hexString[0] = GetHexDigit(value >> 4);
    hexString[1] = GetHexDigit(value & 0x0f);
    hexString[2] = '\0'; // NIL terminate

    return hexString;
}

The above routine maintains a static character buffer of 3 bytes. Two for the HEX digits, and the third for a NIL terminator (0). I chose to do it this way rather than having the user pass in a buffer pointer since the more parameters you pass, the larger the function call gets. The downside is those 3 bytes of variable storage are reserved forever, so if I was also out of RAM, I might rethink this approach.

I can now use it like this:

const char *msgC = " C:"; // used by strcat()
const char *msgT = " T:"; // used by strcat()
const char *msgV = " V:"; // used by strcat()

char buffer[21]; // room for 20 characters of output plus the NIL terminator

strcpy (buffer, "CF:"); // allows constants
strcat (buffer, Get8BitHexStringPtr(faults));
strcat (buffer, msgC);
strcat (buffer, Get8BitHexStringPtr(current));
strcat (buffer, msgT);
strcat (buffer, Get8BitHexStringPtr(temp));
strcat (buffer, msgV);
strcat (buffer, Get8BitHexStringPtr(volts));

LCDWriteDataString (0, 1, buffer);

If you are wondering why I do a strcpy() with a constant string, then use const pointers for strcat(), that is due to a limitation of the compiler I am using. Their implementation of strcpy() specifically supports string constants. Their implementation of strcat() does NOT, requiring me to jump through more hoops to make this work.

Even with all that extra code, it still ends up being smaller than linking in sprintf().

And, for printing out a 16-bit value in decimal, I am sure there is a clever way to do that, but this is what I did:

char *Get16BitDecStringPtr (uint16_t value)
{
    static char decString[6];
    uint16_t temp = 10000;
    int pos = 0;

    memset (decString, '0', sizeof(decString));

    while (value > 0)
    {
        while (value >= temp)
        {
            decString[pos]++;
            value = value - temp;
        }

        pos++;
        temp = temp / 10;
    }

    decString[5] = '\0'; // NIL terminate

    return decString;
}

Since I know the value is limited to what 16-bits can hold, I know the max value possible is 65535.

I initialize my five-digit string with “00000”. I start with a temporary value of 10000. If the user’s value is larger than that, I decrement it by that amount and increase the first digit in the string (so “0” goes to “1”). I repeat until the user’s value has been decremented to be less than 10000.

Then I divide that temporary value by 10, so 10000 becomes 1000. I move my position to the next character in the output string and the process repeats.

Eventually I’ve subtracted all the 10000s, 1000s, 100s, 10s and 1s that I can, leaving me with a string of five digits (“00000” to “65535”).

I am sure there is a better way, and I am open to it if it generates SMALLER code. :)

And that’s my tale of today… I needed some extra ROM space, so I got rid of sprintf() and rolled my own routines for the two specific types of output I needed.

But this is barely scratching the surface of the things I’ve been doing this week to save a few bytes here or there. I’d like to revisit this subject in the future.

Until next time…

C and size_t and 64-bits

Do what I say and nobody gets hurt!

At my day job, I work on a Windows application that supervises high power solid state microwave generators. (The actual controlling part is done by multiple embedded PIC24-based boards, which is a good thing considering the issues Windows has given us over the years.)

At some point, we switched from building a 32-bit version of this application to a 64-bit version. The compiler started complaining about various things dealing with “ints” which were not 64-bits, so the engineer put in #ifdef code like this:

#ifdef _NI_mswin64_
    unsigned __int64 length = 0;
#else
    unsigned int length = 0;
#endif

That took care of the warnings since it would now use either a native “int” or a 64-bit int, depending on the target.

Today I ran across this and wondered why C wasn’t just taking care of things. Assigning what a C library routine returns to an “int” should just work, whether that int is 16-bits (like on Arduino), 32-bits, or 64-bits on the system. I decided to look into this, and saw the culprits were things like this:

length = strlen (pathName);

Surely if strlen() returned an int, it should not need to be changed to an “unsigned __int64” to work.

And indeed, C already does take care of this, if you do what it tells you to do. strlen does NOT return an int:

size_t strlen ( const char * str );

size_t is a special C data type that is “whatever unsigned integer type is big enough to hold the size of any object” on the target. And by simply changing all the #ifdef’d code to actually use the data type the C library call specifies, all the warnings go away and the #ifdefs can be removed.

size_t length = 0;

A better, stricter compiler might have complained about using an “int” to catch something coming back as “size_t.”

Oh wait. It did. We just chose to solve it a different way.
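Here is the whole pattern in one compilable sketch (the path string is just a stand-in; %zu is how printf spells size_t in C99 and later):

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *pathName = "/some/path/name"; // stand-in value

    size_t length = strlen (pathName); // matches strlen's declared return type

    printf ("length = %zu\n", length); // %zu: the printf conversion for size_t

    return 0;
}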

Until next time…

Redundant C variable initialization for redundant reasons.

The top article on this site for the past 5 or so years has been a simple C tidbit about splitting 16-bit values into 8-bit values. Because of this, I continue to drop small C things here in case they might help when someone stumbles upon them.

Today, I’ll mention some redundant, useless code I always try to add…

Older specifications for the C programming language did not guarantee variables would be initialized to 0, and neither does the current one: only variables with static storage duration are zero-initialized, while local (automatic) variables start out holding garbage. One of the compilers I use at work even has a proprietary override to zero everything, which tells you this is not behavior to rely on.

You might find that this code prints non-zero on certain systems:

int i; // local variable, never initialized

printf ("i = %d\n", i); // may print garbage (undefined behavior)

Likewise, trying to print a buffer that has not been initialized might produce non-empty data:

char message[32]; // contents are whatever happened to be in RAM

printf ("Message: '%s'\n", message); // may print garbage, or worse

Because of this, it’s a good habit to always initialize variables with at least something:

int i=0;

char message[42];
...
memset (message, 0x0, sizeof(message));

Likewise, when setting variables in code, it is also a good idea to always set an expected result and NOT rely on any previous initialization. For example:

int result = -1;

if (something == 1)
{
    result = 10;
}
else if (something == 2)
{
    result = 42;
}
else
{
    result = -1;
}

Above, you can clearly see that in the case where none of the something values match, it defaults to setting “result” to the same value it was just initialized to.

This is just redundant, wasteful code.

And you should always do it, unless you absolutely positively need those extra bytes of code space.

It is quite possible that at some point this code could be copy/pasted elsewhere, without the initialization. On first compile, the coder sees the undeclared “result” and just adds “int result;” at the top of the function. If the final else with “result = -1;” wasn’t there, the results could be unexpected.

The reverse of this is also true. If you know you are coding so you ALWAYS set a value and never rely on initialized defaults, it would be safe to just do “int result;” at the top of this code. But many modern compilers will warn you of “possibly uninitialized variables.”
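Here is that hazard in compilable form (the function is my own illustration). With warnings enabled, GCC (for example, -Wall plus optimization, which enables -Wmaybe-uninitialized) can flag it:

int LookupResult (int something)
{
    int result; // deliberately not initialized

    if (something == 1)
    {
        result = 10;
    }
    else if (something == 2)
    {
        result = 42;
    }
    // No final else: for any other value of something,
    // the function returns whatever garbage was in result.

    return result;
}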

Because of this, I always try to initialize any variable (sometimes to a value I know it won’t ever use, to aid in debugging — “why did I suddenly get 42 back from this function? Oh, my code must not be running…”).

And I always try to have a redundant default “else” or whatever to set it, instead of relying on “always try.”

Maybe two “always tries” make a “did”?

Until next time…

while and if and braces … oh my.

Another day, another issue with a C compiler of questionable quality…

Consider this bit of C code, which lives in an infinite main loop and is designed to do something every now and then (based on a flag variable being set by a timer interrupt):

while (timeToDoSomething == true)
{
    timeToDoSomething = false;

    // Do something.
}

The program in question was trying to Do Something every 25 ms. A timer interrupt was setting a boolean to true. The main loop would check that flag, and if it were true, it would set it back to false and then handle whatever it was supposed to handle.

While this would have worked with “while”, it would really be better as an “if” — especially if the code to handle things took longer than 25 ms, causing the loop to get stuck.

Thus, it was changed to an “if”, but a typo left the old while still in the code:

//
while (timeToDoSomething == true)
if (timeToDoSomething == true)
{
    timeToDoSomething = false;

    // Do something.
}

Since things were taking longer than 25ms, the new code was still getting stuck in that loop — and that’s when the while (which was supposed to be commented out) was noticed.

The while without braces or a semicolon after it generated no compiler warning. That seemed wrong, but even GCC with full error reporting won’t show a warning.

Because C is … C.

Curly braces! Foiled again.

In C, it is common to see code formatted using whitespace like this:

if (a == 1)
    printf("One!\n");

That is fine, since it is really just doing this:

if (a == 1) printf("One!\n");

…but is considered poor coding style these days because many programmers are used to languages where indentation actually means something — as opposed to C, where whitespace is just whitespace. Thus, you frequently find bugs where someone has added more code like this:

if (a == 1)
    printf("One!\n");
    DoSomething ();
    printf("Done.\n");

Above, it feels like it should execute three things any time a is 1, but to C, it really looks like this:

if (a == 1) printf("One!\n");
DoSomething ();
printf("Done.\n");

Thus, modern coding standards often say to always use curly braces even if there is just one thing after the if:

if (a == 1)
{
    printf("One!\n");
}

With the braces in place, adding more statements within the braces would work as expected:

if (a == 1)
{
    printf("One!\n");
    doSomething ();
    printf("Done.\n");
}

This is something that was drilled into my brain at a position I had many years ago, and it makes great sense. And the same thing should be said about using while. But while has its own quirks. Consider these two examples:

// This way:
while (1);

// That way:
while (1)
{
}

They do the same thing. One uses a semicolon to mark the end of the stuff to do, and the other uses curly braces around the stuff to do. That’s the key to the code at the start of this post:

while (timeToDoSomething == true)
if (timeToDoSomething == true)
{
    timeToDoSomething = false;

    // Do something.
}

Just like you could do…

while (timeToDoSomething == true) printf("I am doing something");

…you could also write it as…

while (timeToDoSomething == true)
{
    printf("I am doing something");
}

So when the “if” got added after the “while”, it was legit code, as if the user was trying to do this:

while (timeToDoSomething == true)
{
    if (timeToDoSomething == true)
    {
        timeToDoSomething = false;

        // Do something.
    }
}

Since while can be followed by either a braced block or a single statement, the entire if (braces and all) counts as that single statement here.

The compiler can’t easily warn about a missing brace, since braces are not required. But if braces were required, that would have caught the issues mentioned here with if and while blocks.

Code that looks like it should at least generate a warning is completely valid and legal C code, and that same code can be formatted in a way that makes it clear(er):

while (timeToDoSomething == true)
    if (timeToDoSomething == true)
    {
        timeToDoSomething = false;

        // Do something.
    }

Whitespace makes things look pretty, but lack of it can also make things look wrong. Or correct when they aren’t.

I suppose the soapbox message of today is just to use braces. That wouldn’t have caught this particular typo (forgetting to comment something out), but it’s probably still good practice…

Until next time…

Short circuiting of C comparisons

In a language like C, you often have multiple ways to accomplish the same thing. In general, the method you use shouldn’t matter if the end result is the same.

For example:

// This way:

if (a == 1)
{
   function1();
}
else if (a == 2)
{
   function2();
}
else
{
   unknown();
}

// Versus that way:

switch (a)
{
   case 1:
      function1();
      break;

   case 2:
      function2();
      break;

   default:
      unknown();
      break;
}

Both of those do the same thing, though the code they generate to do the same thing may be different.

We might not care which one we use unless we are needing to optimize for code space, memory usage or execution speed.

Optimizing for these things can be done by trial-and-error testing, but there is no guarantee that the method that worked best on an Arduino (with its 16-bit ints) and the GCC compiler will be the same for a 64-bit ARM processor and the Clang compiler.

If you ever do make a choice like this, just be sure to leave a comment explaining why you did it in case your code ever gets ported to a different architecture or compiler.

Short circuiting

Many things in C are compiler-specific, and are not part of the C standard. Some compilers are very smart and do amazing optimizations, while others may be very dumb and do everything very literally. Here is an example of something I encountered during my day job that may or may not help others.

I had code that was intended to adjust power levels, unless any of the four power generators hit a maximum level. It looked like this:

// Version 1: Do nothing if power limit exceeded.
if ((Power1 > Power1Max) ||
    (Power2 > Power2Max) ||
    (Power3 > Power3Max) ||
    (Power4 > Power4Max))
{
   // Max power hit. Do nothing.
}
else
{
   increasePower ();
}

Having conditionals lead to nothing seems odd. Wouldn’t it make more sense to check to see if we can do the thing we want to do?

// Version 2: Do something if power limit not exceeded.
if ((Power1 < Power1Max) &&
    (Power2 < Power2Max) &&
    (Power3 < Power3Max) &&
    (Power4 < Power4Max))
{
   increasePower ();
}

That looks much nicer. Are there any advantages to one versus the other?

For Version 1, the use of “OR” lets the evaluation stop the moment any of those conditions is true. (C guarantees this left-to-right, short-circuit evaluation for || and &&.) If Power1 is NOT above the limit, it then checks to see if Power2 is above the limit. If it is, we are done. We already know that one of these items is above, so no need to check the others. This works great for simple logic like this.

For Version 2, the use of “AND” requires all conditions to be met. If we check Power1 and it is below the limit, we then check Power2. If that one is NOT below, we are done. We know there is no need to check any of the others.

Those sure look the same to me, and Version 2 seems easier to read. (Strictly speaking, they are not identical: the complement of > is <=, so Version 2 would need <= to behave the same when a power exactly equals its limit.)

The first example is basically saying “here is why we won’t do something” while the second example is “here is why we WILL do something.”
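You can watch the short circuit happen with a small test (my own illustration):

#include <stdio.h>
#include <stdbool.h>

static bool Check (int which, bool result)
{
    printf ("Checking %d...\n", which);
    return result;
}

int main(void)
{
    // || stops at the first true condition...
    if (Check (1, true) || Check (2, true))
    {
        // Only "Checking 1..." prints above.
    }

    // ...and && stops at the first false one.
    if (Check (1, false) && Check (2, true))
    {
        // Only "Checking 1..." prints here, too.
    }

    return 0;
}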

Does it matter?

To be continued…

16-bits don’t always add up.

Consider this simple program:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(int argc, char **argv)
{
    uint16_t    val1;
    uint16_t    val2;
    uint32_t    result;

    val1 = 40000;
    val2 = 50000;

    result = val1 + val2;

    printf ("%u + %u = %u\n", val1, val2, result);

    return EXIT_SUCCESS;
}

What will it print?

On my Windows PC, I see the following:

40000 + 50000 = 90000

…but if I convert the printf() and run the same code on an Arduino:

void setup() {
  // put your setup code here, to run once:
  Serial.begin(9600);

    uint16_t    val1;
    uint16_t    val2;
    uint32_t    result;

    val1 = 40000;
    val2 = 50000;

    result = val1 + val2;

    //printf ("%u + %u = %u\n", val1, val2, result);
    Serial.print(val1);
    Serial.print(" + ");
    Serial.print(val2);
    Serial.print(" = ");
    Serial.println(result);
}

void loop() {
  // put your main code here, to run repeatedly:

}

This gives me:

40000 + 50000 = 24464

…and this was the source of a bug I introduced and fixed at my day job recently.

Tha’s wrong, int’it?

I tend to write a lot of code using the GCC compiler, since I can work out and test the logic much quicker than repeatedly building and uploading to our target hardware. Because of that, I had “fully working” code that was incorrect for our 16-bit PIC24 processor.

In this case, the addition of “val1 + val2” is being done using native integer types. On the PC, those are 32-bit values. On the PIC24 (and Arduino, shown above), they are 16-bit values.

A 16-bit value can represent 65536 values in the range of 0-65535. If you were to have a value of 65535 and add 1 to it, a 16-bit variable would roll over and the result would be 0. In my example, 40000 + 50000 was rolling past 65535 and producing 24464 (which is 90000 – 65536).

You can see this happen using the Windows calculator. By default, it uses DWORD (double word – 32-bit) values, and the addition works just fine: 40,000 + 50,000 results in 90,000, which is 0x15F90 in hex. That 0x1xxxx at the start is the rollover. If you switch the calculator into WORD (16-bit) mode, the 0x1xxxx at the start gets truncated away, leaving the 16-bit result of 0x5F90, which is 24,464.

Can we fix it?

The solution is very simple. In C, any time there is addition which might result in a value larger than the native int type (if you know it), you simply cast the two values being added to a larger data type, such as a 32-bit uint32_t:

void setup() {
  // put your setup code here, to run once:
  Serial.begin(9600);

    uint16_t    val1;
    uint16_t    val2;
    uint32_t    result;

    val1 = 40000;
    val2 = 50000;

    // Without casting (native int types):
    result = val1 + val2;

    //printf ("%u + %u = %u\n", val1, val2, result);
    Serial.print(val1);
    Serial.print(" + ");
    Serial.print(val2);
    Serial.print(" = ");
    Serial.println(result);

    // With casting:
    result = (uint32_t)val1 + (uint32_t)val2;

    Serial.print(val1);
    Serial.print(" + ");
    Serial.print(val2);
    Serial.print(" = ");
    Serial.println(result);
}

void loop() {
  // put your main code here, to run repeatedly:

}

Above, I added a second block of code that does the same add, but casting each of the val1 and val2 variables to 32-bit values. This ensures the sum will not roll over, since even the max of 65535 + 65535 fits easily in a 32-bit variable. (Casting just one operand would actually be enough, since the usual arithmetic conversions promote the other one to match, but casting both makes the intent obvious.)

The result:

40000 + 50000 = 24464
40000 + 50000 = 90000

Since adding two 16-bit values can produce a result larger than a 16-bit value can hold (“1 + 1” is fine, as is “65000 + 535”, but anything summing past 65535 rolls over), it is good practice to just always cast upwards. That way, the code works as intended whether the native int of the compiler is 16-bits or 32-bits.

As my introduction of this bug “yet again” shows, it is a hard habit to get into.

Until next time…

I hate floating point.

See also: the 902.1 incident of 2020.

Once again, oddness from floating point values took me down a rabbit hole trying to understand why something was not working as I expected.

Earlier, I had stumbled upon one of the magic values that a 32-bit floating point value cannot represent in C. Instead of 902.1, a float will give you 902.099976… Close, but it caused me issues due to how we were doing some math conversions.

float value = 902.1;
printf ("value = %f\n", value);

To work around this, I switched these values to double precision floating point values and now 902.1 shows up as 902.1:

double value = 902.1;
printf ("value = %f\n", value);

That example will indeed show 902.100000.

This extra precision ended up causing a different issue. Consider this simple code, which took a value in kilowatts and converted it to watts, then converted that to a signed integer.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(int argc, char **argv)
{
    double kw = 64.60;
    double watts = kw * 1000;

    printf ("kw   : %f\n", kw);

    printf ("watts: %f\n", watts);

    printf ("int32: %d\n", (int32_t)watts);

    return EXIT_SUCCESS;
}

That looks simple enough, but the output shows it is not:

kw   : 64.600000
watts: 64600.000000
int32: 64599

Er… what? 64.6 multiplied by 1000 displayed as 64600.000000, so that all looks good, but when converted to a signed 32-bit integer, it turned into 64599. “Oh no, not again…”

I was amused that, by converting these values to float instead of double, it worked as I expected:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(int argc, char **argv)
{
    float kw = 64.60;
    float watts = kw * 1000;

    printf ("kw   : %f\n", kw);

    printf ("watts: %f\n", watts);

    printf ("int32: %d\n", (int32_t)watts);

    return EXIT_SUCCESS;
}

The output:

kw   : 64.599998
watts: 64600.000000
int32: 64600

Apparently, the extra precision I was gaining from using double was enough to throw off the conversion to integer. The likely cause: the closest double to 64.6 is slightly below 64.6, so kw * 1000 comes out as 64599.999…, and the cast to int32_t truncates (rather than rounds) down to 64599. With float, the coarser precision happens to round the product to exactly 64600.0, so truncation does no harm.

But at least I have a workaround.
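Since the real culprit is that the cast truncates, a sturdier workaround (assuming your C library provides the C99 math routines) is to round instead of truncate, using lround() from <math.h>:

#include <stdio.h>
#include <stdint.h>
#include <math.h>

int main(void)
{
    double kw = 64.60;
    double watts = kw * 1000;

    // lround() rounds to the nearest integer instead of
    // truncating toward zero, so 64599.999... becomes 64600.
    int32_t w = (int32_t)lround (watts);

    printf ("int32: %d\n", w); // prints 64600

    return 0;
}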

Until next (floating point problem) time…