Category Archives: C Programming

Build a (marginally) better malloc and free

This is a dumb one, but maybe someone else will find it useful.

I have been working on some C code that uses dynamically allocated linked lists. There are index structures and record structures and individual elements of different kinds in the records, all malloc()’d and then (hopefully) free()’d at the end.

Since my background is low-resource embedded systems (one system has a mere 8K of RAM), I have never really done much with malloc(). In fact, some of the environments I have worked in did not even provide a malloc()/free(). And this is probably a good thing. Without an OS watching over you, any memory allocated that does not get properly freed (a “memory leak”) can be big trouble for an embedded system meant to “run forever without a reboot.”

What I am writing right now is for a PC, and my understanding is if you allocate a bunch of memory then exit, Windows will clean it up for you:

#include <stdlib.h> // for malloc()

int main()
{
    char *ptr = malloc (65535); // Allocate (just under) 64K

    // And now just exit without freeing it.
    return 0;
}

But no one should be writing code like that intentionally. This is the same mentality that has people who throw their trash on the ground because “someone else will clean it up for me.” Just because you have a garbage collector doesn’t mean you should rely on it to clean up after your mistakes.

But I digress.

In my program, I was wondering if I was getting it right. Debug “printf” messages can only go so far in showing what is going on. Did I free every last record? Did all the elements inside the records get freed as well?

I have no idea.

MAUI to the rescue!

Then a memory popped into my head. When I worked for Microware Systems Corp., we had a New Media division that worked on digital TV set-top boxes. (Yes, Virginia, I saw a demo of streaming video-on-demand with pause, fast forward and rewind back in the summer of 1995. But that’s a story for another day…)

The D.A.V.I.D. product (Digital Audio Video Interactive Decoder) used various APIs to handle things like networking, MPEG video decoding, sound, and graphics. The graphics API was called M.A.U.I. (Multimedia Application User Interface).

MAUI had various APIs such as GFX (graphics device), DRW (drawing API), ANM (animation), CDB (configuration database?), BLT (a blitter) and more. There was even a MEM (memory) API that was a special way to allocate and free memory.

I did not understand much of this at the time, beyond the entry level stuff I sometimes taught in a training course.

But the memory API had some interesting features. The manual introduced it as follows:

“Efficient memory management is a requirement for any graphical environment. Graphical environments tend to be very dynamic when it comes to allocating and freeing memory segments. Therefore, unless an application takes specific steps to avoid it, memory fragmentation can become a serious problem.

The Shaded Memory API provides the facilities applications (and other APIs) required to manage multiple pools of memory.”

– Microware MAUI manual, 2000

One interesting feature of this API was the ability to detect memory overflows and underflows. For example, if you allocate 50 bytes and get a pointer to that memory, then write 60 bytes to it, you have overflowed that memory by 10 bytes (a “buffer overrun”). Likewise, if the pointer started at 0x1000 in memory, and you wrote to memory before that address, that would be a buffer underflow.

The manual describes it as follows:

To print the list of overflows/underflows call mem_list_overflows(). When a shade is created with the check overflows option true, safe areas are created at the beginning and the end of the segment. If these safe areas are overwritten, the overflow/underflow situation is reported by mem_list_overflows().

– Microware MAUI manual, 2000

This gave me an idea on how to verify I was allocating and freeing everything properly. I could make my own malloc() and free() wrappers that tracked how much memory was allocated and freed, and have a function that returned the current amount. Check it at startup and it should be zero. Check it after all the allocations and it should have some number. Free all the memory and it should be back at zero. (malloc() already tracks all of this internally, but C does not give us a portable way to get to that information.)

Simple!

Sounds simple!

At first, you might think it could be as simple as something like this:

static int S_memAllocated = 0;

void *MyMalloc (size_t size)
{
    S_memAllocated += size;

    return malloc (size);
}

Simple! But, when it comes time to free(), there is no way to tell how big that memory block is. All free() gets is a pointer.

Sounds almost simple!

To solve this problem, we can simply store the size of the memory allocated in the block of allocated memory. When it comes time to free, the size of that block will be contained in it.

To do this, if the user wanted to malloc(100) to get 100 bytes, you would allocate 100 + the size of an integer. You would then copy an integer containing the size of this allocated segment into the first bytes of the block (and increment the memory counter by that amount). After that, the pointer returned to the user should be after that copied integer. Like this:

malloc (100 + sizeof(int));
+---+----------------------+
|int| the user's 100 bytes |
+---+----------------------+
^
|_ return this location

When this memory is free()’d, the passed-in pointer would be adjusted back past the integer. Those bytes could be copied into an int (so you know how much to subtract from the counter) and then the block free()’d.

Sounds sorta simple?

Here is what I quickly came up with…

// MyMalloc.h
#ifndef MYMALLOC_H_INCLUDED
#define MYMALLOC_H_INCLUDED
size_t GetSizeAllocated (void);
void *MyMalloc (size_t size);
void MyFree (void *ptr);
#endif // MYMALLOC_H_INCLUDED

// MyMalloc.c
#include <stdlib.h> // for malloc()/free();
#include <string.h> // for memcpy()

#include "MyMalloc.h"

static size_t S_bytesAllocated = 0;

size_t GetSizeAllocated (void)
{
    return S_bytesAllocated;
}

void *MyMalloc (size_t size)
{
    // Allocate room for a "size_t" plus user's requested bytes.
    void *ptr = malloc (sizeof(size) + size);
    
    if (NULL != ptr)
    {
        // Add this amount.
        S_bytesAllocated = S_bytesAllocated + size;
        
        // Copy size into start of memory.
        memcpy (ptr, &size, sizeof (size));

        // Move pointer past the size.
        ptr = ((char*)ptr + sizeof (size));
    }

    return ptr;
}

void MyFree (void *ptr)
{
    if (NULL != ptr)
    {
        size_t size = 0;

        // Move pointer back to the size.
        ptr = ((char*)ptr - sizeof (size));
        
        // Copy out size.
        memcpy (&size, ptr, sizeof(size));

        // Subtract this amount.
        S_bytesAllocated = S_bytesAllocated - size;
        
        // Release the memory.
        free (ptr);
    }
}

Then, as a test, I wrote this program that randomly allocates ‘x’ blocks of memory of random sizes… then frees all those blocks.

#include <stdio.h>
#include <stdlib.h>

#include "MyMalloc.h"

#define NUM_ALLOCATIONS     100
#define LARGEST_ALLOCATION  1024

int main()
{
    char *ptr[NUM_ALLOCATIONS];
    
    printf ("Memory Allocated: %zu\n", GetSizeAllocated());

    // Allocate    
    for (int idx=0; idx<NUM_ALLOCATIONS; idx++)
    {
        ptr[idx] = MyMalloc (rand() % LARGEST_ALLOCATION + 1);
    }

    printf ("Memory Allocated: %zu\n", GetSizeAllocated());

    // Free    
    for (int idx=0; idx<NUM_ALLOCATIONS; idx++)
    {
        MyFree (ptr[idx]);
    }

    printf ("Memory Allocated: %zu\n", GetSizeAllocated());

    return EXIT_SUCCESS;
}

When I run this, I see the memory count before the allocation, after the allocation, then after the free.

Memory Allocated: 0
Memory Allocated: 45464
Memory Allocated: 0

Since it is randomly choosing sizes, the number in the middle may* be different when you run it.

I then plugged this code into my program (I did a search/replace of malloc->MyMalloc and free->MyFree) and added the same memory prints at the start, after allocation, and after freeing.

And it worked. Whew! I guess I did not need to spend time writing MyMalloc() or this post after all.

But I had fun doing it.

Additional thoughts…

Thinking back to the MAUI memory API, extra code could be added to put a pattern at the start and end of the block. A function could be written to verify that the block still had those patterns intact, else it could report a buffer overflow or underflow.
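Here is a rough sketch of what that could look like, layered on the same store-a-header trick used above. The names (GuardedMalloc(), GuardedCheck(), GuardedFree()), the guard pattern, and the header layout are all mine, not MAUI’s:

```c
#include <stdlib.h>
#include <string.h>

#define GUARD_PATTERN 0xDEADBEEFu

// Header stored in front of the user's bytes: the size plus a
// front guard. A second guard is placed just past the user's bytes.
typedef struct
{
    size_t size;
    unsigned int frontGuard;
} BlockHeader;

void *GuardedMalloc (size_t size)
{
    unsigned int rearGuard = GUARD_PATTERN;
    BlockHeader *hdr = malloc (sizeof(BlockHeader) + size + sizeof(rearGuard));

    if (NULL == hdr)
    {
        return NULL;
    }

    hdr->size = size;
    hdr->frontGuard = GUARD_PATTERN;

    // memcpy() avoids alignment issues at the unaligned rear position.
    memcpy ((char*)(hdr + 1) + size, &rearGuard, sizeof(rearGuard));

    return hdr + 1;
}

// Returns 0 if both guards are intact, -1 on underflow, 1 on overflow.
int GuardedCheck (void *ptr)
{
    BlockHeader *hdr = ((BlockHeader*)ptr) - 1;
    unsigned int rearGuard = 0;

    memcpy (&rearGuard, (char*)ptr + hdr->size, sizeof(rearGuard));

    if (GUARD_PATTERN != hdr->frontGuard)
    {
        return -1;
    }

    if (GUARD_PATTERN != rearGuard)
    {
        return 1;
    }

    return 0;
}

void GuardedFree (void *ptr)
{
    if (NULL != ptr)
    {
        free (((BlockHeader*)ptr) - 1);
    }
}
```

Calling GuardedCheck() just before each free would catch a write that strayed past either end of the block, much like mem_list_overflows() reporting a trampled safe area.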

Also, I chose “size_t” for this example just to match the parameter that malloc() takes. But, if you knew you would never be allocating more than 255 bytes at a time, you could change the value you store in the buffer to a uint8_t. Or if you knew 65535 bytes was your upper limit, use a uint16_t. This would prevent wasting 8 bytes (on a 64-bit compiler) at the start of each malloc’d buffer.

But why would you want to do that? If you were on a PC, you wouldn’t need to worry about a few extra bytes each allocation. And if you were on a memory constrained embedded system, you probably shouldn’t be doing dynamic memory allocations anyway! (But if you did, maybe uint8_t would be more than enough.)

I suppose there are plenty of enhanced memory allocation routines in existence that do really useful and fancy things. Feel free to share any suggestions in the comments.

Until next time…

Bonus Tip

If you want to integrate this code in your program without having to change all the “malloc” and “free” instances, try this:

// Other headers
#include "MyMalloc.h"
#define malloc MyMalloc
#define free MyFree

That will cause the C preprocessor to replace instances of “malloc” and “free” in your code with “MyMalloc” and “MyFree”, and it will then compile referencing those functions instead. (Just make sure these defines appear after any #include of stdlib.h, or they will rewrite the prototypes in that header as well.)


* Or you may see the same number over and over again each time you run it. But that’s a story for another time…
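If that footnote is too cryptic: rand() is a deterministic sequence, and without a call to srand() it starts from the same default seed on every run. A minimal sketch (function names are mine) of both the behavior and the usual fix:

```c
#include <stdlib.h>
#include <time.h>

// rand() walks a deterministic sequence; srand() just picks the
// starting point. The same seed always produces the same values.
int FirstRandomValue (unsigned int seed)
{
    srand (seed);

    return rand ();
}

// Call once near the top of main() so each run gets a different
// sequence.
void SeedFromClock (void)
{
    srand ((unsigned int)time (NULL));
}
```

Adding a SeedFromClock()-style call at the top of the test program would make that middle number change from run to run.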

Old C dog, new C tricks part 1: NULL != ptr

See Also: part 1, part 2, part 3, part 4 and part 5.

Updates:

  • 2025-02-19 – “new information has come to light!”

As someone who learned C back in the late 1980s, I am constantly surprised by all the “new” things I learn about this language. Back then, it was a K&R-era compiler, so there were no prototypes, and functions looked like this:

main(argc,argv)
int argc;          /* argc = # of arguments on command line */
char *argv[];      /* argv[1-?] = argurments */
{
    ...stuf...
} 

…and this…

MallocError(wpath)
int wpath;
{
   ShutDown(wpath);
   fputs("\nFATAL ERROR:  Towel was unable to allocate necessary memory to process\n",stderr);
   fputs(  "              this directory.\n",stderr);
   sleep(0);
   exit(0);
}

Today’s article is not about how old I am, but about something I just started doing, and wish I had done long ago.

Yoda would be happy…

When I learned to program BASIC, I learned how to compare a variable:

IF A=42 THEN PRINT "DON'T PANIC!"

When I learned C, the thing I had to get used to was double equals “==” for compare and single equal “=” for assignment:

int a = 42;

if (a == 42)
{
    printf ("Don't Panic!\n");
}

This, of course, leads to a common mistake that I have stumbled on many, many times over the past decades: Sometimes a programmer misses one of those equals:

if (a = 42)
{
    printf ("Don't Panic!\n");
}

This will cause the code to always enter that section and run it, regardless of what you think “a” is set to. Why? Because it is basically saying “if a can be set to 42, then…”


Or does it?

Normally, I wait for a follow up to discuss corrections and additional details I learn from the comments, but this one deserves an immediate revision. Aidan Hall left this tidbit:

It’s even worse than what you suggest! Assignment expressions evaluate to the value that was assigned (on the RHS), so this if block wouldn’t run:

if (a = 0) {
puts(“zeroed”);
}

– Aidan Hall

I had mistakenly thought it was testing the result of “can a be assigned,” and assumed this would always be true. I did not realize it was the value of the assignment that was used. Wowza. Thanks, Aidan! And now back to the original content…


By leaving out that second equal, it now becomes an assignment. It might as well be saying:

if (1)
{
    a = 42;
    printf ("Don't Panic!\n");
}

I have caught this type of thing in code I have worked on at several jobs. And, I’ve caught it in code I wrote as well. Even recently…

But Yoda would be proud. Smarter programmers already figured out that you can write those comparisons backwards, like this:

if (42 == a)
{
    printf ("Don't Panic!\n");
}

The first time I ever saw that was at a former job, and it was code from a team over in India. I thought this was very odd, and wondered if it was some odd convention in that country, similar to how in America we would write “$5” for five dollars, but in Europe it might be “5 €” for five Euros.

Honestly, as backwards as that looks to me, phonetically it makes more sense when you read it ;-)

And don’t get me started on America’s Month/Day/Year and how confusing OS-9’s “backwards” time of Year/Month/Day was… but I quickly adopted that, since you can sort dates that way, but not in the “normal” way.

But I digress…

By reversing these comparisons, you now eliminate the possibility of forgetting an equal. This won’t give an error (but a good compiler might give a warning):

if (a = 42)

…but this cannot be compiled:

if (42 = a)

When I started working on some new code this past weekend, I just decided to start doing things that way. It quickly becomes second nature:

if (NULL != ptr)
{
}

if (false == status)
{
}

But it still looks weird.

Now to fire up that old 1980s compiler and see if that was even possible back then…

Until next time…

Early 1980s BBSes and spinning cursors.

There is a whole generation that has no idea how much cool stuff folks did with text and backspace.

One of my favorites was the “spinning cursor.” Thanks to slow speeds of 300 baud modems, you could get some interesting effects by printing a letter, then printing a character like a slash (“/”), then a backspace, then a dash (“-”), then a backspace, then a backslash (“\”), then a backspace, then a vertical bar (“|”) or exclamation mark (“!”) if your system did not have the vertical bar. Then a backspace and the next letter of the message.

Apparently I got nostalgic about this effect some time ago. I just found this “Spinning Cursor” C project I wrote on the Online GDB compiler:

https://onlinegdb.com/56zozL_gRp

Go there and you can RUN the project and see it in all its glory…
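In case that link ever disappears, here is a minimal sketch of the effect in plain C. On a 300 baud modem the transmission speed provided the pacing for free; here it is simulated with a crude busy-wait delay:

```c
#include <stdio.h>
#include <time.h>

// Busy-wait roughly 'ms' milliseconds using the standard clock().
static void Delay (int ms)
{
    clock_t end = clock () + (clock_t)ms * CLOCKS_PER_SEC / 1000;

    while (clock () < end)
    {
    }
}

// Print 'message' one letter at a time, spinning the cursor with
// the classic / - \ | sequence between letters. Returns the number
// of message characters printed.
int SpinPrint (const char *message)
{
    const char spinner[] = "/-\\|";
    int count = 0;

    for (const char *p = message; *p != '\0'; p++)
    {
        for (int i = 0; i < 4; i++)
        {
            putchar (spinner[i]);
            fflush (stdout);
            Delay (20);
            putchar ('\b'); // back up over the spinner character
        }

        putchar (*p); // leave the real letter behind
        fflush (stdout);
        count++;
    }

    putchar ('\n');

    return count;
}
```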

C program build date and time.

This one is just for fun, though I suppose I might not be the only one on the planet that ever needed to do this…

At my day job, I have a board that has a realtime clock, but no battery backup to retain the time. During startup, the system sends the board the current PC date and time (actually, it sends it in GMT, I believe, so looking at logs captured in different parts of the world will be easier — GMT is GMT anywhere on the planet ;-)

On startup, the board wants to log some things, but does not yet know the time. It had been using a hard-coded default time (like 1/1/2000). I wondered if the C compiler build date and time could be used to at least set the time based on when the firmware-in-use was compiled.

A quick chat with Bing’s AI (ChatGPT) and some experiments to make what it gave me far less bulky provided me with this:

#include <stdio.h>  // for printf()
#include <stdlib.h> // for atoi()
#include <string.h> // for strncpy()/strstr()

int main()
{
    // Initialize time to when this firmware was built.
    const char *c_months = "JanFebMarAprMayJunJulAugSepOctNovDec";
    char monthStr[4];
    int year = 0;
    int month = 0;
    int day = 0;
    int hour = 0;
    int minute = 0;
    int second = 0;

    // “Mmm dd yyyy”
    strncpy(monthStr, __DATE__, 3);
    monthStr[3] = '\0';
    month = (strstr(c_months, monthStr) - c_months) / 3 + 1;

    day = atoi (&__DATE__[4]);
    year = atoi (&__DATE__[7]);

    printf ("%04d-%02d-%02d\n", year, month, day);

    // “hh:mm:ss”
    hour = atoi (&__TIME__[0]);
    minute = atoi (&__TIME__[3]);
    second = atoi (&__TIME__[6]);

    printf ("%02d:%02d:%02d\n", hour, minute, second);

    return 0;
}

This works by taking the compiler-generated macros of “__DATE__” and “__TIME__” and parsing out the values we want so they can be passed to a realtime clock routine or whatever.

In my case, this is not the code I am using since our embedded compiler handled __DATE__ in a different format. (It uses “dd-Mmm-yy” for some reason, while the standard C formatting appears to be “Mmm dd yyyy”.) But, the concept is similar.
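For what it is worth, parsing that “dd-Mmm-yy” variant works the same way, reusing the month-lookup trick from the code above. This is a hypothetical sketch; the function name and the 2000-2099 century assumption are mine:

```c
#include <stdlib.h> // for atoi()
#include <string.h> // for memcpy()/strstr()

// Parse a build date in the "dd-Mmm-yy" form (e.g. "07-Feb-25").
// Returns 0 on success, -1 if the month cannot be matched.
int ParseBuildDate (const char *date, int *year, int *month, int *day)
{
    const char *c_months = "JanFebMarAprMayJunJulAugSepOctNovDec";
    char monthStr[4];
    const char *found = NULL;

    *day = atoi (&date[0]);

    memcpy (monthStr, &date[3], 3);
    monthStr[3] = '\0';

    found = strstr (c_months, monthStr);

    if (NULL == found)
    {
        return -1;
    }

    *month = (int)(found - c_months) / 3 + 1;

    // Two-digit year; assume 2000-2099.
    *year = 2000 + atoi (&date[7]);

    return 0;
}
```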

Of course, as soon as I tested this, I found another issue. My board would power up and set to the build date (which is central standard time) and then when the system is connected, a new date/time is sent in GMT, which is currently 5 (or is it 6?) hours different, setting the clock back in time.

This makes log entries a bit odd ;-) but that’s a problem for another day.

Until then…

When a+b+c is not the same as b+a+c plus the Barr coding standard

DISCLAIMER: All compilers are not created equal. Different compilers may achieve the same result, but may take different steps to achieve that result. Optimizers and code generators can do wonderful things. Thus, if you want to leave a comment and say “compiler XYZ does not do that,” that is fine, but that is not the point of this. This is for those “other” compilers you don’t use, that do not behave that way…

During my embedded C programming career, there are some interesting optimizations I have been taught. Most of these are things I would never consider on a modern C compiler running on a system that has ample memory and CPU resources. But when you are on a microcontroller with 4K of RAM or 16K of program storage, sometimes you have to do things oddly to make it fit, or, if the CPU is slow, to make it run fast enough.

True, False, or Not True or Not False?

Consider this:

bool flag = false;

if (flag)
{
// Do something
}

An “if” like this will be looking for a true result. Now, one compiler I work with has its own “TRUE” and “FALSE”, in uppercase, which all their code uses. Why? Maybe because they originated before the stdbool.h header file was added to the C standard and defined an official “true” and “false” in lowercase. Fortunately, they currently provide a stdbool.h which will undefine the uppercase ones (if the compiler is set to non-case-sensitive — yep, by default “foo” and “FOO”, or “else” and “Else”, are processed the same) and define lowercase ones:

#if !getenv("CASE")
// remove TRUE and FALSE added by CCS's device .h file, only if
// compiler has case sensitivty off.

#if defined(TRUE)
#undef TRUE
#endif

#if defined(FALSE)
#undef FALSE
#endif
#endif

typedef int1 bool;
#define true 1
#define false 0
#define __bool_true_false_are_defined

With 0 representing false, and 1 representing true, the “if” works — anything that is not 0 will be processed. In a normal compiler:

if (0)
{
printf ("This will not print.\n");
}

if (1)
{
printf ("This will print\n");
}

if (42)
{
printf ("This will print\n");
}

On my Radio Shack Color Computer’s 6809 microprocessor, I expect such an “if” test compiles into assembly code that represents something like “branch if not zero.” I would expect every CPU has a similar instruction.

So checking for true (not 0) should be as fast as checking for false (0), assuming there is a similar instruction for “branch if zero.”

However, what if the CPU uses a different number of instruction cycles for a “branch if zero” versus “branch if not zero”? If that were the case, these might have different execution speeds:

if (flag == true)
{
// Do something...
}

if (flag == false)
{
// Do something...
}

But that seems unlikely, and is not the point of this post. (If you are aware of any CPU where this would be the case, please leave a comment.)

Some company coding standards I have used said to never use just “if (x)” but instead write out what it actually means. While you and I are experts and clearly know what the “if (x)” does, as should any programmer who knows programming, what if they don’t? In that case “if (x == true)” and “if (x == false)” are impossible to misunderstand, and should generate the same code as “if (x)” and “if (!x)”.

Right?

But suppose you used a crappy “C-like” compiler, and it had a “test for zero” which is used for “if (flag == false)” but used something dumb like “compare against a number” when you did “if (flag == true)” or “if (flag)”… Like, the compiler saw a check for 0 and knew it could efficiently do that… but if it was not zero, it did a compare against a number, resulting in something like…

load #1 in to some accumulator
compare register holding "flag" against accumulator
branch if equal (or if not equal)

That can generate some extra code each and every time you check for “true”, so checking for “not false” might save a few bytes every time.

Because of that, I often just default to doing this:

if (flag != false)
{
// Do something...
}

And this looks stupid. But might save enough bytes to make something compile that otherwise would not fit.

Hopefully you have never had to work in such a constrained environment with such a crappy C-like compiler.

The good news is, by changing to doing this, it works the same on “real” compilers but “might” make smaller or faster code on bad compilers.

But I digress…

Adding it all up…

I really wanted to write this about something I had never considered:

#define HEADER_LENGTH 5
#define CRC_LENGTH 2

unsigned int messageSize = HEADER_LENGTH + payloadLength + CRC_LENGTH;

If the message protocol uses a format like “[HEADER][PAYLOAD][CRC]”, writing out the C code like that makes it easy to visualize what the message bytes look like.

The compiler would be seeing that code as:

unsigned int messageSize = 5 + payloadLength + 2;

A compiler might be doing…

  • Set messageSize to 5
  • Add payloadLength to messageSize
  • Add 2 to messageSize

But if you grouped the #define values together:

unsigned int messageSize = HEADER_LENGTH + CRC_LENGTH + payloadLength;

A good compiler might be changing that to:

unsigned int messageSize = 5 + 2 + payloadLength;
...
unsigned int messageSize = 7 + payloadLength;

…which results in:

  • Set messageSize to 7
  • Add payloadLength to messageSize

And if you deal with hundreds of messages where this might be calculated, that savings can really add up.

I would hope a real/smart compiler might be able to detect this and optimize the constants together … but I know this is not guaranteed to be the case.

The best thing about standards…

And as a bonus, earlier I posted asking about C coding standards trying to find one my employer could adopt, instead of rolling our own. Bing CoPilot led me to a few, including this one specifically for embedded C:

Embedded C Coding Standard | Barr Group

This “Barr C” standard has many things I have already forced myself to start doing, and it does look promising. You can buy the standard as a paperback book for $6 on Amazon, or download it free as a PDF. I plan to go through it and see what all it discusses.

One thing I like about the approach is that it gives a reason for each of the coding standard rules it presents. For example, braces:

Rules:

a. Braces shall always surround the blocks of code (a.k.a., compound
statements), following if, else, switch, while, do, and for statements; single statements and empty statements following these keywords shall also always be surrounded by braces.

b. Each left brace ({) shall appear by itself on the line below the start of the block it opens. The corresponding right brace (}) shall appear by itself in the same position the appropriate number of lines later in the file.

Reasoning:

There is considerable risk associated with the presence of empty
statements and single statements that are not surrounded by braces. Code constructs like this are often associated with bugs when nearby code is changed or commented out. This risk is entirely eliminated by the consistent use of braces. The placement of the left brace on the following line allows for easy visual checking for the corresponding right brace.

barr_c_coding_standard_2018.pdf

When I started learning C back in the late 1980s, it was the pre-ANSI K&R C. Thus, I learned C the way the books I had showed it:

if (something) {
// Do something
} else {
// Do something else
}

The placement of the “{” on the first line seems to be referred to as “line saver” in some of the code editors I use. It was at a job where their standard says “line them up so you can see what goes to what” that I had to change my style:

if (something)
{
// Do something
}
else
{
// Do something else
}

Now the start of each code block has the start brace and end brace on the same column, making it much easier to spot rather than having to look at the ends of lines or some characters in to a line.

I hated that at first, but now I am used to it.

I also used to do things like this:

if (something)
DoSomething();
else
DoSomethingElse();

Somewhere on this site, I have written about this at least once or twice. This breaks when someone adds something without thinking about the braces:

if (something)
DoSomething();
WriteToLog(); // added this
else
DoSomethingElse();

Without the braces, trying to compile this would at least give an error:

main.c: In function ‘main’:
main.c:31:5: error: ‘else’ without a previous ‘if’
31 | else
| ^~~~

BUT, if you did not have the else…

if (something)
DoSomething();
WriteToLog();

That code might “look” good, but running it would do something if the case was true, but would then ALWAYS write to the log… Because C is seeing it like this:

if (something)
{
DoSomething();
}

WriteToLog();

And I have now seen a modern programmer, brought up on scripting languages that made use of tabs rather than braces, make this mistake working on C code they were not really familiar with.

But I digress. Again.

More to come when my book arrives and I start reading through it. Unless someone presents me a better alternative, I think this one may suffice. The book is cheap, it can be downloaded free (so it is searchable) and the items I have spot checked seemed reasonable.

If you have ever worked with the Barr-C coding standard, I’d love to hear your thoughts in the comments.

Until then…

C has its limits. If you know where to look.

Thank you, Bing Copilot (ChatGPT), for giving me another “thing I just learned” to blog about.

In the early days of “K&R C”, things were quite a bit different. C was not nearly as portable as it is today. While the ANSI-C standard helped quite a bit, once it became a standard, there were still issues when moving C code from machines of different architectures — for example:

int x;

What is x? According to the C standard, an “int” is “at least 16 bits.” On my Radio Shack Color Computer, an int was 16 bits (0-65535, if unsigned). I expect on my friend’s Commodore Amiga, the int was 32-bits, though I really don’t know. And even when you “know”, assuming that to be the case is a “bad thing.”

I used a K&R C compiler on my CoCo, and later on my 68000-based MM/1 computer. That is when I became aware that an “int” was different. Code that worked on my CoCo would port fine to the MM/1, since it was written assuming an int was 16-bits. But trying to port anything from the MM/1 to the CoCo was problematic if the code had assumed an int was 32-bits.
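These days, code that truly depends on an int being a certain size can say so explicitly, so a port to a smaller machine fails at compile time instead of misbehaving at runtime. A small sketch, assuming a C11-capable compiler (the function is a made-up example):

```c
#include <limits.h>

// Fail the build, rather than misbehave at runtime, if this
// platform's int is narrower than 32 bits.
_Static_assert (INT_MAX >= 2147483647, "this code assumes a 32-bit (or wider) int");

// Safe only because of the assertion above; on a 16-bit int,
// shifting a 1 left by 16 would overflow.
int SafeShift (int value)
{
    return value << 16;
}
```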

When I got a job at Microware in 1995, I saw my first ANSI-C compiler: Ultra C. To deal with “what size is an int” issues, Microware created their own header file, types.h, which included their definitions for variables of specific sizes:

u_int32 x;
int32 y;

All the OS library calls were prototyped to use these special types, though if you knew an “unsigned long” was the same as a “u_int32”, or a “short” the same as an “int16”, you could still use those.

But probably shouldn’t.

In those years, I saw other compilers do similar things, such as “U32 x;” and “I16 y”. I expect there were many variations of folks trying to solve this problem.

Some years later, I used the GCC compiler for the first time and learned that the C standard (as of C99) now had its own types.h — called stdint.h. That gave us things like:

uint32_t x;
int32_t y;

It was easy to adopt these new standard definitions, and I have tried to use them ever since.

I was also introduced in to the defines that specified the largest value that would fit in an “int” or “long” on a system – limits.h:

...
#define CHAR_MAX 255 /*unsigned integer maximum*/
#define CHAR_MIN 0 /*unsigned integer minimum*/

/* signed int properties */
#define INT_MAX 32767 /* signed integer maximum*/
#define INT_MIN (-32767-_C2) /*signed integer minimum*/

/* signed long properties */
#define LONG_MAX 2147483647 /* signed long maximum*/
#define LONG_MIN (-2147483647-_C2) /* signed long minimum*/
...

The values would vary based on whether your system was 16-bit, 32-bit, or 64-bit. It allowed you to do this:

int x = INT_MAX;
unsigned int y = UINT_MAX;

…and have code that would compile on a 16-bit or 64-bit system. If you had tried something like this:

unsigned int y = 4294967295; // Max 32-bit value.

…that code would NOT work as expected when compiled on a 16-bit system (like my old CoCo, or an Arduino UNO or the PIC24 processors I use at work).

I learned to use limits.h.

But this week, I was working on code that needed to find the highest and lowest values in a 32-bit number range. I had code like this:

uint32_t EarliestSequenceNumber = 4294967295;
uint32_t LatestSequenceNumber = 0;

And that works fine, and should work fine on any system where an int can hold a 32-bit value. (Though I used hex, since I know 0xffffffff is the max value, and always have to look up or use a calculator to find out the decimal version.)

Had I been using signed integers, I would be doing this:

int32_t LargestSignedInt = 2147483647;

Or I’d use 0x7fffffff.

As I looked at my code, I wondered if C provided similar defines for the stdint.h types.

stdint.h also has stdsizes!

And it does! Since all of this changed/happened after I already “learned” C, I never got the memo about new features being added. Inside stdint.h are also defines like this:

#define INT8_MAX  (127)
#define INT8_MIN (-128)
#define UINT8_MAX (255)

#define INT16_MAX (32767)
#define INT16_MIN (-32768)
#define UINT16_MAX (65535)

#define INT32_MAX (2147483647)
#define INT32_MIN (-2147483648)
#define UINT32_MAX (4294967295)

#define INT64_MAX (9223372036854775807)
#define INT64_MIN (-9223372036854775808)
#define UINT64_MAX (18446744073709551615)

…very similar to what limits.h offers for standard ints, etc. Neat!

Now modern code can do:

uint32_t EarliestSequenceNumber = UINT32_MAX;
uint32_t LatestSequenceNumber = 0;

…and that’s the new C thing I learned today.

And it may have even been there when I first learned about stdint.h and I just did not know.

And knowing is half the battle.

Once you go read-only…

…you can never go back.

After being shown that you can declare a global variable, as one does…

int g_globalVariable;

…and then make it be treated as a read-only variable to other files by extern-ing it as a const:

extern int const g_globalVariable;

…of course I wondered what the compiler did if you went the other way:

// main.c
#include <stdio.h>

void function (void);

int const c_Value = 0;

int main()
{
    printf("Hello World\n");
    
    printf ("c_Value: %d\n", c_Value);
    
    function ();
    
    printf ("c_Value: %d\n", c_Value);

    return 0;
}

// function.c
#include <stdio.h>

// Extern as a non-const.
extern int c_Value;

void function ()
{
    c_Value++;
}

Above, main.c contains a global const variable, but function.c tries to extern it as non-const.

But when I run the code…

Hello World
c_Value: 0


...Program finished with exit code 139
Press ENTER to exit console.

…the compiler does not complain, but we get a crash. Looking at this in a debugger shows more detail:

Program received signal SIGSEGV, Segmentation fault.
0x00005555555551ef in function () at Function.c:11
11 c_Value++;

I am unfamiliar with the inner workings of whatever compiler that online C editor uses, but I suspect I’d see similar results on any system with memory protection. Go back to the early days (like OS-9 on a 6809 computer, or even a 68000 without an MMU) and maybe it just allows the write and silently modifies something it shouldn’t.

We can file this away in the “don’t do this” category.

Until next time…

Modifying read-only const variables in C

This is a cool trick I just learned from commenter Sean Patrick Conner in a previous post.

If you want to have variables globally available, but want to have some control over how they are set, you can limit the variables to be static to a file containing “get” and “set” functions:

static int S_Number = 0;

void SetNumber (int number)
{
    S_Number = number;
}

int GetNumber (void)
{
    return S_Number;
}

This allows you to add range checking or other things that might make sense:

void SetPowerLevel (int powerLevel)
{
    if ((powerLevel >= 0) && (powerLevel <= 100))
    {
        S_PowerLevel = powerLevel;
    }
}

Using functions to get and set variables adds extra code, and also slows down access to those variables, since each access has to jump into a function.

The benefit of adding range checking may be worth the extra code/speed for writes, but simply reading a variable has no reason to need that overhead.

Thus, Sean’s tip…

Variables declared globally in a file cannot be accessed anywhere else unless you use “extern” to declare them in any file that wants to use them. You might declare some globals in globals.c like this:

// Globals.c
int g_number;

…but trying to access “g_number” anywhere else will not work. You either need to add:

extern int g_number;

…in any file that wants access to it, or, better, make something like globals.h that contains all your extern references:

// Globals.h
extern int g_number;

Now any file that needs access to the globals can just include “globals.h” and use them:

#include "globals.h"

void function (void)
{
    printf ("Number: %d\n", g_number);
}

That was not Sean’s tip.

Sean mentioned something that makes sense, but I do not think I’d ever tried: The extern can contain the “const” keyword, even if the declaration of the variable does not!

This means you could have a global variable like above, but in globals.h do this:

// Globals.h
extern int const g_number;

Now any file that includes “globals.h” has access to g_number as a read-only variable. The compiler will refuse to build any code that tries to modify it, except in globals.c, where it was actually declared non-const.

Thus, you could access this variable as fast as any global, but not modify it. For that, you’d need a set routine:

// Globals.c
int c_number; // c_ to indicate it is const, which it really isn't.

// Set functions
void SetNumber (int number)
{
    c_number = number;
}

Now other code can include “globals.h” and have read-only access to the variable directly, but can only set it by going through the set function, which could enforce data validation or other rules — something just setting it directly could not.

#include <stdio.h>

#include "Globals.h"

int main(int argc, char **argv)
{
    printf ("Number: %d\n", c_number);

    SetNumber (42);

    printf ("Number: %d\n", c_number);

    return 0;
}

That seems quite obvious now that I have been shown it, but I had never tried it. I have made plenty of Get/Set routines over the years (often to make variable access thread-safe), but it never dawned on me that, when thread safety is not a concern, I could have direct read-only access to a variable and still modify it through a function.

Global or static?

One interesting benefit is that any other code that needed direct access to this variable (for speed reasons or whatever) could just add its own extern rather than using the include “Globals.h”:

// Do this myself so I can modify it
extern int c_number;

void MyCode (void)
{
    // It's my variable and I can do what I want with it!
    c_number = 100;
}

Using a global (rather than a static) leaves that loophole open.

And since functions are used to set them, they could also exist to initialize them.

// Globals.c

// Declared as non-const, but named with "c_" to indicate the rest of the
// code cannot modify it.
int c_number;

// Init functions
void InitGlobals (void)
{
    c_number = 42;
}

// Set functions.
void SetNumber (int number)
{
    c_number = number;
}

// Globals.h

// Extern as a const so it is read-only.
extern int const c_number;

// Prototypes
void InitGlobals (void);

void SetNumber (int number);

// main.c
#include <stdio.h>

#include "Globals.h"

int main()
{
    InitGlobals ();

    printf ("c_number = %d\n", c_number);

    // This won't work.
    //c_number = 100;

    SetNumber (100);

    printf ("c_number = %d\n", c_number);

    return 0;
}

Spiffy.

I had thought about using static to prevent the “extern” trick from working, but realized that if you did, there would be no read-only access outside of that file and a get function would be needed. And we already knew how to do that.

I love learning new techniques like this. The code I maintain in my day job has TONS of globals for various reasons, and often has duplicate code to do range checking and such. I could see using something like this to clean all of that up and still retain speed when accessing the variables.

Got any C tricks? Comment away…

const-ant confusion in C, revisited.

I do not know why this has confused me so much over the years. BING CoPilot (aka ChatGPT) explains it so clearly I do not know how I ever misunderstood it.

But I am getting ahead of myself.

Back in 2017, I wrote a bit about const in C. A comment made by Sean Patrick Conner on a recent post made me revisit the topic of const in 2024.

If you use const, you make a variable that the compiler will not allow to be changed. It becomes read-only.

int normalVariable = 42;
const int constVariable = 42;

normalVariable = 100; // This will work.

constVariable = 100; // This will not work.

When you try to compile, you will get this error:

error: assignment of read-only variable ‘constVariable’

That is super simple.

But let me make one more point-er…

For pointers, it is a bit different. You can declare a pointer and change it, like this:

char *ptr = 0x0;

ptr = (char*)0x100;

And if you did not want the pointer to change, you might try adding const like this:

const char *ptr = 0x0;

ptr = (char*)0x100;

…but you would find that it compiles just fine, and you can still modify the pointer.

In the case of pointers, the “const” at the start applies to what the pointer points to, not the pointer itself. Consider this:

uint8_t buffer[10];

// Normal pointer.
uint8_t *normalPtr = &buffer[0];

// Modify what it points to.
normalPtr[0] = 0xff;

// Modify the pointer itself.
normalPtr++;

Above, without using const, you can change the data that normalPtr points to (inside the buffer) as well as the pointer itself.

But when you add const…

// Pointer to constant data.
const uint8_t *constPtr1 = &buffer[0];
// Or it can be written like this:
// uint8_t const *constPtr1 = &buffer[0];

// You can NOT modify the data the pointer points to:
constPtr1[1] = 1; // error: assignment of read-only location ‘*(constPtr1 + 1)’

// But you can modify the pointer itself:
constPtr1++;

Some of my longstanding confusion came from where you put “const” on the line. In this case, “const uint8_t *ptr” is the same as “uint8_t const *ptr”. Because reasons?

Since using const before or after the pointer data type means “you can’t modify what this points to”, you have to use const in a different place if you want the pointer itself to not be changeable:

// Constant pointer to data.
// We can modify the data the pointer points to, but
// not the pointer itself.
uint8_t * const constPtr3 = &buffer[0];

constPtr3[3] = 3;

// But this will not work:
constPtr3++; // error: increment of read-only variable ‘constPtr3’

And if you want to make it so you cannot modify the pointer AND the data it points to, you use two consts:

// Constant pointer to constant data.

// We can NOT modify the data the pointer points to, or
// the pointer itself.
const uint8_t * const constPtr4 = &buffer[0];

// Neither of these will work:
constPtr4[4] = 4; // error: assignment of read-only location ‘*(constPtr4 + 4)’

constPtr4++; // error: increment of read-only variable ‘constPtr4’

Totally not confusing.

The rule is that “const” applies to whatever is immediately to its left; if there is nothing to its left, it applies to what follows. That is why you can write an integer variable both ways, as well:

const int constVariable = 42;

int const constVariable = 42;

Because reasons.

The cdecl: C gibberish ↔ English webpage will explain this and show them both to be the same:

const int constVariable
declare constVariable as const int

int const constVariable
declare constVariable as const int

Since both of those are the same, “const char *” and “char const *” should be the same, too.

const char *ptr
declare ptr as pointer to const char

char const *ptr
declare ptr as pointer to const char

However, when you place the const between the “*” and the variable name, it no longer applies to what is pointed to; it applies to the pointer variable itself:

char * const ptr
declare ptr as const pointer to char

Above, the pointer is constant, but not what it points to. Adding the second const:

const char * const ptr
declare ptr as const pointer to const char

char const * const ptr
declare ptr as const pointer to const char

…makes both the pointer and what it points to read-only.

Why do I care?

You probably don’t. However, any time you pass a buffer into a function that is NOT supposed to modify it, you should make that buffer read-only. (That was more or less the point of my 2017 post.)

#include <stdio.h>
#include <string.h>

void function (char *bufferPtr, size_t bufferSize)
{
    // I can modify this!
    bufferPtr[0] = 42;
}

int main()
{
    char buffer[80];
    
    strncpy (buffer, "Hello, world!", sizeof(buffer));
    
    printf ("%s\n", buffer);
    
    function (buffer, sizeof(buffer));
    
    printf ("%s\n", buffer);


    return 0;
}

When I run that, it will print “Hello, world!” and then print “*ello, world!”

If we do not want the function to be able to modify/corrupt the buffer (easily), adding const solves that:

#include <stdio.h>
#include <string.h>

void function (const char *bufferPtr, size_t bufferSize)
{
    // I can NOT modify this! This line no longer compiles:
    //bufferPtr[0] = 42; // error: assignment of read-only location ‘*bufferPtr’
}

int main()
{
    char buffer[80];
    
    strncpy (buffer, "Hello, world!", sizeof(buffer));
    
    printf ("%s\n", buffer);
    
    function (buffer, sizeof(buffer));
    
    printf ("%s\n", buffer);

    return 0;
}

But, because the pointer itself was not protected with const, inside the routine it could modify the pointer:

#include <stdio.h>
#include <string.h>

void function (const char *bufferPtr, size_t bufferSize)
{
    // I can NOT modify this!
    //bufferPtr[0] = 42;
    
    while (*bufferPtr != '\0')
    {
        printf ("%02x ", *bufferPtr);
        
        bufferPtr++; // Increment the pointer
    }
    
    printf ("\n");
}

int main()
{
    char buffer[80];
    
    strncpy (buffer, "Hello, world!", sizeof(buffer));
    
    printf ("%s\n", buffer);
    
    function (buffer, sizeof(buffer));
    
    printf ("%s\n", buffer);

    return 0;
}

In that example, the pointer is passed in and can be changed. But since it was passed by value, what gets changed is the function's temporary copy, just as when you pass in a plain variable: the function can modify its copy without affecting the caller's original.

void numberTest (int number)
{
    printf ("%d -> ", number);
    
    number++;

    printf ("%d\n", number);
}

int main()
{
    int number = 42;
    
    printf ("Before function: %d\n", number);
    
    numberTest (number);
    
    printf ("After function: %d\n", number);

    return 0;
}

Because of that temporary nature, I don’t see any reason to restrict the pointer to be read-only. Any changes made to it within the function will be to a copy of the pointer.

In fact, even if the caller's pointer is declared as a const, the temporary copy inside the function can still be modified:

void function (const char *bufferPtr, size_t bufferSize)
{
    // I can NOT modify this!
    //bufferPtr[0] = 42;

    while (*bufferPtr != '\0')
    {
        printf ("%02x ", *bufferPtr);

        bufferPtr++; // Increment the pointer
    }

    printf ("\n");
}

Offhand, I cannot think of any reason you would want to pass a pointer into a function and then forbid the function from changing its own copy (which you could do with a “char * const” parameter). Maybe there are some? Leave a comment…

The moral of the story is…

The important takeaway is to always use const when you are passing in a buffer you do not want to be modified by the function. And leave it out when you DO want the buffer modified:

#include <stdio.h>
#include <string.h>
#include <ctype.h>

// Uppercase string in buffer.
void function (char *bufferPtr, size_t bufferSize)
{
    while ((*bufferPtr != '\0') && (bufferSize > 0))
    {
        *bufferPtr = toupper(*bufferPtr);
        
        bufferPtr++; // Increment the pointer
        bufferSize--; // Decrement how many bytes left
    }
}

int main()
{
    char buffer[80];
    
    strncpy (buffer, "Hello, world!", sizeof(buffer));
    
    printf ("%s\n", buffer);
    
    function (buffer, sizeof(buffer));
    
    printf ("%s\n", buffer);

    return 0;
}

And if you pass it a non-modifiable string (like a real read-only constant string stored in program space or ROM or whatever), you might have a different issue to deal with. In the case of the PIC24 compiler I use, it flat out won’t let you pass in a constant string like this:

function ("CCS PIC compiler will not allow this", 80);

They have a special compiler setting which will generate code to copy any string literals into RAM before calling the function (at the tradeoff of extra code space, CPU time, and memory):

#device PASS_STRINGS=IN_RAM

But I digress. This was just about const.

Oddly, when I do the same thing in the GDB online Debugger, it happily allows it. I don’t know why; surely it’s not modifying program space? Perhaps it is copying the string into RAM behind the scenes, much like the CCS compiler can do. Or perhaps it is blindly writing to program space and there is no exception/memory protection stopping it.

Well, it crashes if I run the same code on a Windows machine using the Code::Blocks IDE (GCC compiler).

One more thing…

You could, of course, try to cheat. Inside a function that is passed a const pointer, you can declare a non-const pointer and just assign it:

// Uppercase string in buffer.
void function (const char *bufferPtr, size_t bufferSize)
{
    char *ptr = bufferPtr;

    while ((*ptr != '\0') && (bufferSize > 0))
    {
        *ptr = toupper(*ptr);

        putchar (*ptr);

        ptr++; // Increment the pointer
        bufferSize--; // Decrement how many bytes left
    }

    putchar ('\n');
}

This will compile even though it defeats the const. GCC will build it, but emits a warning:

main.c: In function ‘function’:
main.c:16:17: warning: initialization discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
16 | char *ptr = bufferPtr;

For programmers who ignore compiler warnings, you now have code that can corrupt/modify memory that was designed not to be touched. So keep those warnings cranked up and pay attention to them if your code is important.

Comment away. I learn so much from all of you.

C coding standard recommendations?

Only one of the programming jobs I have had used a coding standard. Their standard, created in-house, is more or less the standard I follow today. It includes things like:

  • Prefix global variables with g_
  • Prefix static variables with s_ (for local statics) or S_ (for global statics)

It also required the use of braces, which I have blogged about before, even in single-line instances such as:

if (fault == true)
{
    BlinkScaryRedLight();
}

Many of these took me a bit to get used to because they were different from how I did things. Long after that job, I have adopted much of that standard in my own personal style, having accepted the logic behind it.

I thought I’d ask here: Are there any good “widely accepted” C coding standards out there you would recommend? Adopting something widely used might make code easier for a new hire to adapt to, versus “now I have to learn yet another way to format my braces and name my variables.”

Comments appreciated.