Monthly Archives: December 2024

C program build date and time.

This one is just for fun, though I suppose I might not be the only one on the planet that ever needed to do this…

At my day job, I have a board that has a realtime clock, but no battery backup to retain the time. During startup, the system sends the board the current PC date and time (actually, I believe it sends it in GMT, so looking at logs captured in different parts of the world is easier; GMT is GMT anywhere on the planet ;-)

On startup, the board wants to log some things, but does not yet know the time. It had been using a hard-coded default time (like 1/1/2000). I wondered if the C compiler build date and time could be used to at least set the time based on when the firmware-in-use was compiled.

A quick chat with Bing’s AI (ChatGPT) and some experiments to make what it gave me far less bulky provided me with this:

#include <stdio.h>   // printf()
#include <stdlib.h>  // atoi()
#include <string.h>  // strncpy(), strstr()

int main()
{
    // Initialize time to when this firmware was built.
    const char *c_months = "JanFebMarAprMayJunJulAugSepOctNovDec";
    char monthStr[4];
    int year = 0;
    int month = 0;
    int day = 0;
    int hour = 0;
    int minute = 0;
    int second = 0;

    // "Mmm dd yyyy"
    strncpy(monthStr, __DATE__, 3);
    monthStr[3] = '\0';
    month = (strstr(c_months, monthStr) - c_months) / 3 + 1;

    day = atoi (&__DATE__[4]);
    year = atoi (&__DATE__[7]);

    printf ("%04d-%02d-%02d\n", year, month, day);

    // "hh:mm:ss"
    hour = atoi (&__TIME__[0]);
    minute = atoi (&__TIME__[3]);
    second = atoi (&__TIME__[6]);

    printf ("%02d:%02d:%02d\n", hour, minute, second);

    return 0;
}

This works by taking the compiler-generated macros of “__DATE__” and “__TIME__” and parsing out the values we want so they can be passed to a realtime clock routine or whatever.
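On my board, the goal is to hand those values to the clock hardware. If your system has a standard C library available, one way to package them up might look like this minimal sketch (RTC_SetDateTime() is a made-up placeholder for whatever your RTC driver actually provides):

#include <time.h>

// Package the values parsed above into a struct tm so they can be
// handed to an RTC routine. Note: struct tm counts years from 1900
// and months from 0.
struct tm buildTime = {0};

buildTime.tm_year = year - 1900;
buildTime.tm_mon  = month - 1;
buildTime.tm_mday = day;
buildTime.tm_hour = hour;
buildTime.tm_min  = minute;
buildTime.tm_sec  = second;

// RTC_SetDateTime(&buildTime);  // hypothetical call into your RTC driver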

In my case, this is not the exact code I am using, since our embedded compiler handles __DATE__ in a different format. (It uses “dd-Mmm-yy” for some reason, while the standard C format appears to be “Mmm dd yyyy”.) But the concept is similar.
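For what it is worth, if your compiler also uses the “dd-Mmm-yy” form, only the offsets change. A sketch, reusing c_months and monthStr from the code above, and assuming a date that looks exactly like “05-Dec-24”:

// "dd-Mmm-yy" variant (not standard C -- check what your compiler emits).
day = atoi (&__DATE__[0]);

strncpy(monthStr, &__DATE__[3], 3);
monthStr[3] = '\0';
month = (strstr(c_months, monthStr) - c_months) / 3 + 1;

year = 2000 + atoi (&__DATE__[7]); // two-digit year; assumes 20xx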

Of course, as soon as I tested this, I found another issue. My board would power up and set its clock to the build date (which is Central Standard Time), and then when the system connected, a new date/time would be sent in GMT, which is currently 5 (or is it 6?) hours different, setting the clock back in time.

This makes log entries a bit odd ;-) but that’s a problem for another day.

Until then…

When a+b+c is not the same as b+a+c plus the Barr coding standard

DISCLAIMER: All compilers are not created equal. Different compilers may achieve the same result, but may take different steps to achieve that result. Optimizers and code generators can do wonderful things. Thus, if you want to leave a comment and say “compiler XYZ does not do that,” that is fine, but that is not the point of this. This is for those “other” compilers you don’t use, that do not behave that way…

During my embedded C programming career, I have picked up some interesting optimizations. Most of these are things I would never consider on a modern C compiler running on a system that has ample memory and CPU resources. But when you are on a microcontroller with 4K of RAM or 16K of program storage, sometimes you have to do things oddly to make it fit, or, if the CPU is slow, to make it run fast enough.

True, False, or Not True or Not False?

Consider this:

bool flag = false;

if (flag)
{
// Do something
}

An “if” like this will be looking for a true result. Now, one compiler I work with has its own “TRUE” and “FALSE”, in uppercase, which all their code uses. Why? Maybe because they originated before the stdbool.h header file was added to the C standard and defined an official “true” and “false” in lowercase. Fortunately, they currently provide a stdbool.h which will undefine the uppercase ones (if the compiler is set to non-case-sensitive; yep, by default “foo” and “FOO”, and “else” and “Else”, are processed the same) and define lowercase ones:

#if !getenv("CASE")
// remove TRUE and FALSE added by CCS's device .h file, only if
// compiler has case sensitivity off.

#if defined(TRUE)
#undef TRUE
#endif

#if defined(FALSE)
#undef FALSE
#endif
#endif

typedef int1 bool;
#define true 1
#define false 0
#define __bool_true_false_are_defined

With 0 representing false and 1 representing true, the “if” works; anything that is not 0 is treated as true. In a normal compiler:

if (0)
{
printf ("This will not print.\n");
}

if (1)
{
printf ("This will print\n");
}

if (42)
{
printf ("This will print\n");
}

On my Radio Shack Color Computer’s 6809 microprocessor, I expect such an “if” test compiles into assembly code that represents something like “branch if not zero”. I would expect every CPU has a similar instruction.

So checking for true (not 0) should be as fast as checking for false (0), assuming there is a similar instruction for “branch if zero.”

However, what if the CPU uses a different number of instruction cycles for a “branch if zero” versus “branch if not zero”? If that were the case, these might have different execution speeds:

if (flag == true)
{
// Do something...
}

if (flag == false)
{
// Do something...
}

But that seems unlikely, and is not the point of this post. (If you are aware of any CPU where this would be the case, please leave a comment.)

Some company coding standards I have used said to never use just “if (x)” but instead write out what it actually means. While you and I are experts and clearly know what the “if (x)” does, as should any programmer who knows programming, what if they don’t? In that case “if (x == true)” and “if (x == false)” are impossible to misunderstand, and should generate the same code as “if (x)” and “if (!x)”.

Right?
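(A quick aside, building on the “if (42)” example above: that equivalence assumes the flag really is a 0-or-1 boolean. With a plain int, comparing against “true” can surprise you; it is another point in favor of the “!= false” form that comes up below. A minimal sketch, assuming true is 1 and false is 0 as in stdbool.h:)

int flag = 42;     // non-zero, but not exactly 1

if (flag)          // taken: any non-zero value is "true"
{
printf ("This will print\n");
}

if (flag == true)  // NOT taken: 42 == 1 is false
{
printf ("This will not print\n");
}

if (flag != false) // taken: behaves the same as "if (flag)"
{
printf ("This will print\n");
}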

But suppose you used a crappy “C-like” compiler, and it had a “test for zero” which is used for “if (flag == false)” but used something dumb like “compare against a number” when you did “if (flag == true)” or “if (flag)”… Like, the compiler saw a check for 0 and knew it could efficiently do that… but if it was not zero, it did a compare against a number, resulting in something like…

load #1 into some accumulator
compare register holding "flag" against accumulator
branch if equal (or if not equal)

That can generate some extra code each and every time you check for “true”, so checking for “not false” might save a few bytes every time.

Because of that, I often just default to doing this:

if (flag != false)
{
// Do something...
}

And this looks stupid. But might save enough bytes to make something compile that otherwise would not fit.

Hopefully you have never had to work in such a constrained environment with such a crappy C-like compiler.

The good news is, by changing to doing this, it works the same on “real” compilers but “might” make smaller or faster code on bad compilers.

But I digress…

Adding it all up…

I really wanted to write this about something I had never considered:

#define HEADER_LENGTH 5
#define CRC_LENGTH 2

unsigned int messageSize = HEADER_LENGTH + payloadLength + CRC_LENGTH;

If the message protocol uses a format like “[HEADER][PAYLOAD][CRC]”, writing out the C code like that makes it easy to visualize what the message bytes look like.

The compiler would be seeing that code as:

unsigned int messageSize = 5 + payloadLength + 2;

A compiler might be doing…

  • Set messageSize to 5
  • Add payloadLength to messageSize
  • Add 2 to messageSize

But if you grouped the #define values together:

unsigned int messageSize = HEADER_LENGTH + CRC_LENGTH + payloadLength;

A good compiler might be changing that to:

unsigned int messageSize = 5 + 2 + payloadLength;
...
unsigned int messageSize = 7 + payloadLength;

…which results in:

  • Set messageSize to 7
  • Add payloadLength to messageSize

And if you deal with hundreds of messages where this might be calculated, that savings can really add up.

I would hope a real/smart compiler might be able to detect this and optimize the constants together … but I know this is not guaranteed to be the case.
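Another option, if you do not want to count on the compiler (or on remembering to group the constants yourself): do the grouping in the preprocessor. A small sketch; OVERHEAD_LENGTH is just a name I made up:

#define HEADER_LENGTH   5
#define CRC_LENGTH      2
#define OVERHEAD_LENGTH (HEADER_LENGTH + CRC_LENGTH) // groups the constants together

unsigned int messageSize = OVERHEAD_LENGTH + payloadLength;

The preprocessor still just substitutes text, so the compiler sees (5 + 2) + payloadLength, but now the constants sit in one parenthesized subexpression, which even a simple constant folder should collapse to 7. You lose a little of the [HEADER][PAYLOAD][CRC] readability, though.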

The best thing about standards…

And as a bonus: earlier, I posted asking about C coding standards, trying to find one my employer could adopt instead of rolling our own. Bing CoPilot led me to a few, including this one specifically for embedded C:

Embedded C Coding Standard | Barr Group

This “Barr C” standard has many things I have already forced myself to start doing, and it does look promising. You can buy a paperback book of the standard for $6 on Amazon, or download the book free as a PDF. I plan to go through it and see what all it discusses.

One thing I like about the approach is that it gives a reason for each of the coding standard items it presents. For example, braces:

Rules:

a. Braces shall always surround the blocks of code (a.k.a., compound statements), following if, else, switch, while, do, and for statements; single statements and empty statements following these keywords shall also always be surrounded by braces.

b. Each left brace ({) shall appear by itself on the line below the start of the block it opens. The corresponding right brace (}) shall appear by itself in the same position the appropriate number of lines later in the file.

Reasoning:

There is considerable risk associated with the presence of empty statements and single statements that are not surrounded by braces. Code constructs like this are often associated with bugs when nearby code is changed or commented out. This risk is entirely eliminated by the consistent use of braces. The placement of the left brace on the following line allows for easy visual checking for the corresponding right brace.

barr_c_coding_standard_2018.pdf

When I started learning C back in the late 1980s, it was the pre-ANSI K&R C. Thus, I learned C the way the books I had showed it:

if (something) {
// Do something
} else {
// Do something else
}

The placement of the “{” on the first line seems to be referred to as “line saver” in some of the code editors I use. It was at a job where the standard said “line them up so you can see what goes to what” that I had to change my style:

if (something)
{
// Do something
}
else
{
// Do something else
}

Now the start of each code block has the start brace and end brace on the same column, making it much easier to spot rather than having to look at the ends of lines or some characters in to a line.

I hated that at first, but now I am used to it.

I also used to do things like this:

if (something)
DoSomething();
else
DoSomethingElse();

Somewhere on this site, I have written about this at least once or twice. This breaks when someone adds something without thinking about the braces:

if (something)
DoSomething();
WriteToLog(); // added this
else
DoSomethingElse();

Without the braces, trying to compile this would at least give an error:

main.c: In function ‘main’:
main.c:31:5: error: ‘else’ without a previous ‘if’
31 | else
| ^~~~

BUT, if you did not have the else…

if (something)
DoSomething();
WriteToLog();

That code might “look” good, but running it would do something if the condition was true, and then would ALWAYS write to the log… because C sees it like this:

if (something)
{
DoSomething();
}

WriteToLog();

And I have now seen a modern programmer, brought up on scripting languages that use indentation rather than braces, make this mistake while working on C code they were not really familiar with.

But I digress. Again.

More to come when my book arrives and I start reading through it. Unless someone presents me with a better alternative, I think this one may suffice. The book is cheap, it can be downloaded free (so it is searchable), and the items I have spot-checked seem reasonable.

If you have ever worked with the Barr-C coding standard, I’d love to hear your thoughts in the comments.

Until then…

C has its limits. If you know where to look.

Thank you, Bing Copilot (ChatGPT), for giving me another “thing I just learned” to blog about.

In the early days of “K&R C”, things were quite a bit different. C was not nearly as portable as it is today. The ANSI-C standard helped quite a bit, but even after it became a standard, there were still issues when moving C code between machines of different architectures. For example:

int x;

What is x? According to the C standard, an “int” is “at least 16 bits.” On my Radio Shack Color Computer, an int was 16-bits (-32768 to 32767). I expect on my friend’s Commodore Amiga, the int was 32-bits, though I really don’t know. And even when you “know”, assuming that to be the case is a “bad thing.”
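If you really need to know on a given system, it is easy to ask the compiler rather than assume. Something like:

#include <stdio.h>

int main()
{
    // Print the size of an int on whatever machine this compiles on.
    printf ("An int is %u bytes here.\n", (unsigned int)sizeof(int));

    return 0;
}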

I used a K&R C compiler on my CoCo, and later on my 68000-based MM/1 computer. That is when I became aware that an “int” was different. Code that worked on my CoCo would port fine to the MM/1, since it was written assuming an int was 16-bits. But trying to port anything from the MM/1 to the CoCo was problematic if the code had assumed an int was 32-bits.

When I got a job at Microware in 1995, I saw my first ANSI-C compiler: Ultra C. To deal with “what size is an int” issues, Microware created their own header file, types.h, which included their definitions for variables of specific sizes:

u_int32 x;
int32 y;

All the OS library calls were prototyped to use these special types, though if you knew an “unsigned long” was the same as a “u_int32”, or a “short” was the same as an “int16”, you could still use those.

But probably shouldn’t.
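I no longer have Microware’s types.h in front of me, but conceptually it boiled down to something like this (an illustrative sketch of one possible mapping, not their actual header):

/* Fixed-size names layered over whatever this particular compiler's
   native types happen to be. */
typedef unsigned char  u_int8;
typedef signed char    int8;
typedef unsigned short u_int16;
typedef short          int16;
typedef unsigned long  u_int32;
typedef long           int32;

The whole point being: the typedefs change per compiler, while the code that uses them does not.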

In those years, I saw other compilers do similar things, such as “U32 x;” and “I16 y;”. I expect there were many variations of folks trying to solve this problem.

Some years later, I used the GCC compiler for the first time and learned that the C standard (as of C99) has its own version of types.h, called stdint.h. That gave us things like:

uint32_t x;
int32_t y;

It was easy to adopt these new standard definitions, and I have tried to use them ever since.

I was also introduced to the defines that specify the largest values that will fit in an “int” or “long” on a system, found in limits.h:

...
#define CHAR_MAX 255 /* char maximum (char is unsigned here) */
#define CHAR_MIN 0 /* char minimum */

/* signed int properties */
#define INT_MAX 32767 /* signed integer maximum */
#define INT_MIN (-32767-_C2) /* signed integer minimum */

/* signed long properties */
#define LONG_MAX 2147483647 /* signed long maximum */
#define LONG_MIN (-2147483647-_C2) /* signed long minimum */
...

The values would vary based on whether your system was 16-bit, 32-bit, or 64-bit. That allowed you to do this:

int x = INT_MAX;
unsigned int y = UINT_MAX;

…and have code that would compile on a 16-bit or 64-bit system. If you had tried something like this:

unsigned int y = 4294967295; // Max 32-bit value.

…that code would NOT work as expected when compiled on a 16-bit system (like my old CoCo, an Arduino UNO, or the PIC24 processors I use at work), since 4294967295 will not fit in a 16-bit unsigned int.

I learned to use limits.h.

But this week, I was working on code that needed to find the highest and lowest values in a 32-bit number range. I had code like this:

uint32_t EarliestSequenceNumber = 4294967295;
uint32_t LatestSequenceNumber = 0;

And that works fine, and should work fine on any system that provides a 32-bit uint32_t. (Though in my actual code I used hex, since I know 0xffffffff is the max value, and I always have to look up or use a calculator to find the decimal version.)
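For context, here is the kind of min/max tracking those initial values are for; a quick sketch with made-up sequence numbers:

#include <stdio.h>
#include <stdint.h>

int main()
{
    // Made-up sequence numbers for illustration.
    uint32_t sequenceNumbers[] = { 1000, 42, 99999, 360 };
    int count = (int)(sizeof(sequenceNumbers) / sizeof(sequenceNumbers[0]));

    // Start "earliest" as high as possible and "latest" as low as
    // possible, so the first real value replaces both.
    uint32_t EarliestSequenceNumber = 0xffffffff;
    uint32_t LatestSequenceNumber = 0;

    for (int i = 0; i < count; i++)
    {
        if (sequenceNumbers[i] < EarliestSequenceNumber)
        {
            EarliestSequenceNumber = sequenceNumbers[i];
        }

        if (sequenceNumbers[i] > LatestSequenceNumber)
        {
            LatestSequenceNumber = sequenceNumbers[i];
        }
    }

    printf ("Earliest: %lu, Latest: %lu\n",
            (unsigned long)EarliestSequenceNumber,
            (unsigned long)LatestSequenceNumber);

    return 0;
}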

Had I been using signed integers, I would be doing this:

int32_t LargestSignedInt = 2147483647;

Or I’d use 0x7fffffff.

As I looked at my code, I wondered if C provided similar defines for the stdint.h types.

stdint.h also has stdsizes!

And it does! Since all of this changed/happened after I already “learned” C, I never got the memo about new features being added. Inside stdint.h are also defines like this:

#define INT8_MAX  (127)
#define INT8_MIN (-128)
#define UINT8_MAX (255)

#define INT16_MAX (32767)
#define INT16_MIN (-32768)
#define UINT16_MAX (65535)

#define INT32_MAX (2147483647)
#define INT32_MIN (-2147483647 - 1)
#define UINT32_MAX (4294967295U)

#define INT64_MAX (9223372036854775807LL)
#define INT64_MIN (-9223372036854775807LL - 1)
#define UINT64_MAX (18446744073709551615ULL)

…very similar to what limits.h offers for standard ints, etc. Neat!

Now modern code can do:

uint32_t EarliestSequenceNumber = UINT32_MAX;
uint32_t LatestSequenceNumber = 0;

…and that’s the new C thing I learned today.

And it may have even been there when I first learned about stdint.h and I just did not know.

And knowing is half the battle.

DJI Mic Mini and iPhone

Last month, DJI released the DJI Mic Mini. This tiny bluetooth microphone is about the size of a quarter, and as thick as maybe five quarters. It joins two big brothers – the DJI Mic ($249 for 2 TX + 1 RX + charging case, or $159 for 1 TX + 1 RX) and the DJI Mic 2 ($349 for 2 TX + 1 RX + charging case, $219 for 1 TX + 1 RX, or $99 for just the microphone). The Mini is priced at only $169 for 2 microphones, a receiver, and a charging case, making it $180 less than the comparable Mic 2 kit. You can also buy just a microphone and receiver for $89, or just the microphone for $59. There are a few other options that include phone adapters for USB-C or Apple Lightning ports.

The Mic Mini claims up to 48 hours of battery life, giving it substantially longer use than the two older and larger models. But it also has far fewer features. There is no built-in memory, so you cannot record and download audio files later; it is merely a bluetooth transmitting microphone.

For the DJI Mic 2, I only have the microphone. After I received it, I quickly learned you cannot change any settings without owning the receiver as well! Whatever mode the DJI Mic 2 ships in is how you will forever use it. At least you can update the firmware by plugging the Mic 2 into a computer via USB and copying a downloaded firmware update file over to it like a flash drive.

Mic Mini and Firmware Updates

With no internal storage and no USB port, I wondered how firmware updates would be done on the Mini, if at all. It turns out there is an app for that: DJI Mimo. It is available for both Android and iOS phones.

The app, currently at version V2.1.8, appears to have existed mostly for connecting to DJI pocket cameras like the Osmo. Although the app lists the DJI Mic and DJI Mic 2 as supported, it does not appear to actually connect with either of them directly. Instead, those microphones connect to the Osmo (or other DJI camera), and that camera connects to the phone and app.

But the Mic Mini is different. It is natively supported by DJI Mimo even without a DJI camera. Connecting the microphone to the phone via bluetooth allows the mic to show up inside the DJI Mimo “Device Management” section. From there, you can download firmware updates for the Mini.

When I first connected my Mini to the app, I was greeted with a firmware update. This update was downloaded by the app and then installed on the Mic Mini over the bluetooth connection. Very nice.

There are also a few configuration options:

  • Auto Off – “When enabled, transmitter will be automatically powered off in 15 min if not being connected to save power”
  • Power Button for Noise Cancellation – “When enabled, press power button on transmitter to reduce noise”
  • Mic LED – on and off.

You can also access “About Devices” to see the Device Name (“DJI Mic Mini TX”, apparently not changeable) as well as its Device Serial Number and Firmware Version (currently 01.01.00.39).

Unfortunately, there does not seem to be much more you can do with the app. There is a microphone button on the screen, but that just brings up the Device Settings. I had expected to find some kind of recording capability, like a camera app paired with an audio recorder. Perhaps in the future? It does seem this may be the first time they have had microphone support directly in the app.

Moving the ball forward…

Since there would have been no other way to do firmware updates on the Mic Mini, having this capability added to an app makes sense. Being able to customize a few settings is a nice bonus.

Hopefully, DJI is able to do something similar in the app for Mic and Mic 2 users that do not have the receiver and are unable to change any settings. (And uploading firmware via the app would be a much easier process than requiring access to a computer to download the update and transfer it to the Mic/Mic 2 over a USB cable.)

When I have time to work with the Mic Mini, I will do a proper “review.” Until then, this is what I know…