Thank you, Bing Copilot (ChatGPT), for giving me another “thing I just learned” to blog about.
In the early days of “K&R C”, things were quite a bit different. C was not nearly as portable as it is today. While the ANSI-C standard helped quite a bit, once it became a standard, there were still issues when moving C code from machines of different architectures — for example:
int x;
What is x? According to the C standard, an "int" is "at least 16 bits." On my Radio Shack Color Computer, an int was 16-bits (0-65535 for an unsigned int). I expect on my friend's Commodore Amiga, the int was 32-bits, though I really don't know. And even when you "know", assuming that to be the case is a "bad thing."
I used a K&R C compiler on my CoCo, and later on my 68000-based MM/1 computer. That is when I became aware that an “int” was different. Code that worked on my CoCo would port fine to the MM/1, since it was written assuming an int was 16-bits. But trying to port anything from the MM/1 to the CoCo was problematic if the code had assumed an int was 32-bits.
When I got a job at Microware in 1995, I saw my first ANSI-C compiler: Ultra C. To deal with “what size is an int” issues, Microware created their own header file, types.h, which included their definitions for variables of specific sizes:
u_int32 x; int32 y;
All the OS library calls were prototyped to use these special types, though if you knew an "unsigned long" was the same as a "u_int32" or a "short" was the same as an "int16", you could still use those.
But probably shouldn’t.
In those years, I saw other compilers do similar things, such as "U32 x;" and "I16 y;". I expect there were many variations of folks trying to solve this problem.
Some years later, I used the GCC compiler for the first time and learned that the C standard (as of C99) now had its own version of types.h — called stdint.h. That gave us things like:
uint32_t x; int32_t y;
It was easy to adopt these new standard definitions, and I have tried to use them ever since.
I was also introduced to the defines that specified the largest value that would fit in an "int" or "long" on a system – limits.h:
And that works fine, and should work fine on any system where an int can hold a 32-bit value. (Though I used hex, since I know 0xffffffff is the max value, and always have to look up or use a calculator to find out the decimal version.)
Had I been using signed integers, I would be doing this:
int32_t LargestSignedInt = 2147483647;
Or I’d use 0x7fffffff.
As I looked at my code, I wondered if C provided similar defines for the stdint.h types.
stdint.h also has standard sizes!
And it does! Since all of this changed/happened after I already “learned” C, I never got the memo about new features being added. Inside stdint.h are also defines like this:
Last month, DJI released the DJI Mic Mini. This tiny Bluetooth microphone is about the size of a quarter, and as thick as maybe five quarters. It joins two big brothers – the DJI Mic ($249 for 2 TX + 1 RX + charging case, or $159 for 1 TX + 1 RX) and DJI Mic 2 ($349 for 2 TX + 1 RX + charging case, $219 for 1 TX + 1 RX, or $99 for just the microphone). The Mini is priced at only $169 for 2 microphones, a receiver, and charging case, making it $180 less than the Mic 2. You can also buy just a microphone and receiver for $89, or just the microphone for $59. There are a few other options that include phone adapters for USB-C or Apple Lightning ports.
The Mic Mini claims up to 48 hours of battery life, giving it substantially longer use than the two older and larger models. But it also has far fewer features. There is no built-in memory, so you cannot record and download audio files later — it is merely a Bluetooth transmitting microphone.
With the DJI Mic 2 I have, I only have the microphone. After I received it, I quickly learned you couldn't change any settings without owning the receiver as well! Whatever mode the DJI Mic 2 ships in is how you will forever use it. At least you can update the firmware by plugging the Mic 2 into a computer via USB and copying a downloaded firmware update file over to it like a flash drive.
Mic Mini and Firmware Updates
With no internal storage and no USB port, I wondered how firmware updates would be done on the Mini — if at all. It turns out, there is an app for that: DJI Mimo. It is available for both Android and iOS phones.
The app, currently at version V2.1.8, appears to have been mostly for connecting to DJI pocket cameras like the Osmo. Although the app lists DJI Mic and DJI Mic 2 as supported, it does not appear to actually connect with either of them. Instead, those microphones would connect with the Osmo (or other) DJI camera and then that connects to the phone and app.
But the Mic Mini is different. It is natively supported by DJI Mimo even without a DJI camera. Connecting the microphone via Bluetooth to the phone will allow the mic to show up inside the DJI Mimo "Device Management" section. From here, you can download firmware updates for the Mini.
When I first connected my Mini to the app I was greeted with a firmware update. This update was downloaded by the app and then installed on the Mic Mini via the Bluetooth connection. Very nice.
There are also a few configuration options:
Auto Off – “When enabled, transmitter will be automatically powered off in 15 min if not being connected to save power”
Power Button for Noise Cancellation – "When enabled, press power button on transmitter to reduce noise"
Mic LED – on and off.
You can also access “About Devices” to see the Device Name (“DJI Mic Mini TX”, apparently not changeable) as well as its Device Serial Number and Firmware Version (currently 01.01.00.39).
Unfortunately, there does not seem to be much more you can do with the app. There is a microphone button on the screen, but that just brings up the Device Settings. I had expected to find some kind of recording capability, like a camera app but for audio. Perhaps in the future? It does seem this may be the first time they have had microphone support directly in the app.
Moving the ball forward…
Since there would have been no other way to do firmware updates on the Mic Mini, having this capability added to an app makes sense. Being able to customize a few settings is a nice bonus.
Hopefully, DJI is able to do something similar in the app for Mic and Mic 2 users who do not have the receiver and are unable to change any settings. (And updating firmware via the app would be a much easier process than requiring access to a computer to download the update and transfer it to the Mic/Mic 2 over a USB cable.)
When I have time to work with the Mic Mini I will do a proper "review." Until then, this is what I know…
NOTE: This article was originally written two years ago, and meant to be part of a series. I never got around to writing Part 2, so I am just publishing this initial part by itself. If there is interest, I will continue the series. My Github actually shows the rest of the work I did for my “full” and “small” version of the drive code for this LCD.
Recently, my day job presented me an opportunity to play with a small 20×4 LCD display that hooked up via I2C. The module was an LCD2004. The 20 is the number of columns and the 04 is the number of rows. The LCD1602 would be a 16×2 display.
While I have found many “tutorials” about these displays, virtually all of them just teach you how to download a premade library and use library functions. Since I was going to be implementing code for an in-house project, and did not have room for a full library of functions I would not be using, I really needed to know how the device worked. Hopefully this article may help others who need (or just want) to do what I did.
LCD2004 / LCD1602 / etc.
These LCD modules use a parallel interface and require eleven I/O pins. The pinout on the LCD looks like this:
A few of the pins are listed by different names based on whoever created the data sheet or hardware. On my LCD2004 module, pins 15 and 16 are listed as A and K, but I now know they are just power lines for the backlight.
If you have something like an Arduino with enough available I/O pins, you can wire the display up directly to pins. You should be able to hook up power (5V to VDD, Ground to VSS, and probably some power to the backlight and maybe something to control contrast), and then connect the eight data lines (D0-D7) to eight available digital I/O pins on the Arduino.
The LCD module has a simple set of instruction bytes. You set the I/O pins (HIGH and LOW, each to represent a bit in a byte), along with the RS (register select) and RW (read/write) pins, then you toggle the E (Enable) pin HIGH to tell the LCD it can read the I/O pins. After a moment, you toggle E back to LOW.
The data sheets give timing requirements for various instructions. If I read it correctly, it looks like the E pin needs to be active for a minimum of 150 nanoseconds for the LCD to read the pins.
Here is a very cool YouTube video by Ian Ward that shows how the LCD works without using a CPU. He uses just buttons and dip switches. I found it quite helpful in understanding how to read and write to the LCD.
If you don’t have 11 I/O pins, you need a different solution.
A few pins short of a strike…
If you do not have eleven I/O pins available, the LCD can operate in a 4-bit mode, needing only four pins for data. You send the upper four bits of a byte using the E toggle, followed by the lower 4-bits of the byte. This is obviously twice as slow, but allows the part to be used when I/O pins are limited.
If you don’t have 7 I/O pins, you need a different solution.
PCF8574: I2C to I/O
If you do not have seven I/O pins available, you can use the PCF8574 chip. This chip acts as an I2C to I/O pin interface. You write a byte to the chip and it will toggle the eight I/O pins based on the bits in the byte. Send a zero, and all pins are set LOW. Send a 255 (0xff) and all pins are set HIGH.
Using a chip like this, you can now use the 2-wire I2C interface to communicate with the LCD module–provided it is wired up and configured to operate in 4-bit mode (four pins for data, three pins for RS, RW and E, and the spare pin can be used to toggle the backlight on and off).
Low-cost LCD controller boards are made that contain this chip and have pins for hooking up to I2C, and other pins for plugging directly to the LCD module. For just a few dollars you can buy an LCD module already soldered on to the PCF8574 board and just hook it up to 5V, Ground, I2C Data and I2C Clock and start talking to it.
If you know how.
I did not know how, so I thought I’d document what I have learned so far.
What I have learned so far.
The PCF8574 modules I have all seem to be wired the same. There is a row of 16 pins that aligns with the 16 pins of the LCD module.
One LCD I have just had the board soldered directly on to the LCD.
Another kit came with separate boards and modules, requiring me to do the soldering since the LCD did not have a header attached.
If you are going to experiment with these, just get one that’s already soldered together or make sure the LCD has a header that the board can plug in to. At least if you are like me. My soldering skills are … not optimal.
The eight I/O pins of the PCF modules I have are connected to the LCD pins as follows:
1 - to RS
2 - to RW
3 - to E
4 - to Backlight On/Off
5 - D4
6 - D5
7 - D6
8 - D7
If I were to send an I2C byte to this module with a value of 8 (that would be bit 3 set, with bits numbered 0 to 7), that would toggle the LCD backlight on. Sending a 0 would turn it off.
That was the first thing I was able to do. Here is an Arduino sketch that will toggle that pin on and off, making the backlight blink:
// PCF8574 connected to LCD2004/LCD1602/etc.
#include <Wire.h>

void setup() {
  // put your setup code here, to run once:
  Wire.begin ();
}

void loop() {
  // put your main code here, to run repeatedly:
  Wire.beginTransmission (39); // I2C address
  Wire.write (8); // Backlight on
  Wire.endTransmission ();
  delay (500);

  Wire.beginTransmission (39); // I2C address
  Wire.write (0); // Backlight off
  Wire.endTransmission ();
  delay (500);
}
Once I understood which bit went to which LCD pin, I could then start figuring out how to talk to the LCD.
One of the first things I did was create some #defines representing each bit:
We’ll use this later when building our own bytes to send out.
Here is a datasheet for the LCD2004 module. Communicating with an LCD1602 is identical except for how many lines you have and where they exist in screen memory:
I actually started with an LCD1602 datasheet and had it all working before I understood that "1602" meant a different sized display than what I had ;-)
Sending a byte
As you can see from the above sample code, to send an I2C byte on the Arduino, you have to include the Wire library (for I2C) and initialize it in Setup:
#include <Wire.h>

void setup() {
  // put your setup code here, to run once:
  Wire.begin ();
}
Then you use a few lines of code to write the byte out to the I2C address of the PCF8574 module. The address is 39 by default, but there are solder pads on these boards that let you change it to a few other addresses.
Communicating with the LCD module requires a few more steps. First, you have to figure out which pins you want set on the LCD, then you write out a byte that represents them. The “E” pin must be set (1) to tell the LCD to look at the data pins.
After a tiny pause, you write out the value again but with the E pin bit unset (0).
That’s all there is to it! The rest is just understanding what pins you need to set for what command.
Instructions versus Data
The LCD module uses a Register Select pin (RS) to tell it if the 8-bits of I/O represents an Instruction, or Data.
Instruction – If you set the 8 I/O pins and have RS off (0) then toggle the Enable pin on and off, the LCD receives those 8 I/O pins as an Instruction.
Data – If you set the 8 I/O pins and have RS on (1) then toggle the Enable pin on and off, the LCD received those 8 I/O pins as a Data byte.
Reading and Writing
In addition to sending Instructions or Data to the LCD, you can also read Data back. This tutorial will not cover that, but it's basically the same process, except you set the Read/Write pin to 1, pulse the E pin high/low, and then read the pins, which will be set by the LCD.
Initialize the LCD to 4-bit mode
Since only 4 of the PCF8574 I/O pins are used for data, the first thing that must be done is to initialize the LCD module to 4-bit mode. This is done by using the Function Set instruction.
Function set is described as the following:
RS RW DB7 DB6 DB5 DB4 DB3 DB2 DB1 DB0
--- --- --- --- --- --- --- --- --- ---
 0  0   0   0   1   DL  N   F   x   x
Above, RS is the Register Select pin, RW is the Read/Write pin, and DB7-DB0 are the eight I/O pins. For Function Set, pins DB7-DB5 are "001", representing the Function Set instruction. After that, the remaining pins are used for the settings of Function Set:
DB4 is Data Length select bit. (DL)
DB3 is Number of Lines select bit
DB2 is Font select bit
When we are using the PCF8574 module, it ONLY gives us access to DB7-DB4, so it is very smart that they chose to make the DL setting one of those four bits. We have no way to access the pins for N or F until we toggle the LCD in to 4-bit data length mode.
If we were using all 8 I/O pins, we’d set them like this to go in to 4-bit mode:
That sequence will initialize the LCD so we can send it commands. After that, we can use Function Set to change it to 4-bit mode (DB4 as 0 for 4-bit mode):
If we used all 8 I/O pins directly, we could also set Font and Number of lines at the same time after the three initializing writes. BUT, since we are using the PCF8574 and only have access to the top four bits (DB7-DB4), we must put the LCD into 4-bit mode first. More details on how we use that in a moment.
If I wanted to initialize the LCD, I would just need to translate the I/O pins into the bits of a PCF8574 byte. For the first three initialization writes, it would look like this:
Above, you can see we only need to pass in the bit pattern for DB7 DB6 DB5 DB4. This routine will set the Backlight bit (it doesn't have to, but I didn't want the screen to blank out when sending these instructions), and then write the byte out with the E pin set, pause, and then write it out again with E off.
Thus, my initialization can now look like this:
// Initialize all pins off and give it time to settle.
Wire.beginTransmission(PCF8574_ADDRESS);
Wire.write(0x0);
Wire.endTransmission();
delayMicroseconds(50000);
// [7 6 5 4 3 2 1 0 ]
// [D7 D6 D5 D4 BL -E RW RS]
LCDWriteInstructionNibble(0b0011);
delay(5); // min 4.1 ms
LCDWriteInstructionNibble(0b0011);
delayMicroseconds(110); // min 100 us
LCDWriteInstructionNibble(0b0011);
delayMicroseconds(110); // min 100 us
// Set interface to 4-bit mode.
LCDWriteInstructionNibble(0b0010);
That looks much more obvious, and reduces the number of lines we need to look at, since the function does the two writes (E on, E off) for us.
Sending 8-bits in a 4-bit world
Now that the LCD is in 4-bit mode, it will expect those four I/O pins set twice — the first time for the upper 4-bits of a byte, and then the second time for the lower 4-bits. We could, of course, do this manually as well, by figuring all this out and building the raw bytes ourselves.
But that makes my head hurt and is too much work.
Instead, I created a second function that will send an 8-bit value 4-bits at a time:
You’ll notice I pass in the Register Select bit, which can either be 0 (for an Instruction) or 1 (for data). That’s jumping ahead a bit, but it makes sense later.
I can then pass in a full instruction, like sending Function Set with the bits I couldn't set during initialization, when the LCD was in 8-bit mode and I didn't have access to DB3-DB0. My LCDInit() routine sets the LCD to 4-bit mode, and then uses this to send out the rest of the initialization:
// Function Set
// [0 0 1 DL N F 0 0 ]
// DL: 1=8-Bit, 0=4-Bit
// N: 1=2 Line, 0=1 Line
// F: 1=5x10, 0=5x8
// [--001DNF00]
LCDWriteByte(0, 0b00101000); // RS=0, Function Set
// Display On
// [0 0 0 0 1 D C B ]
// D: Display
// C: Cursor
// B: Blink
// [--00001DCB]
LCDWriteByte(0, 0b00001100); // RS=0, Display On
// Display Clear
// [0 0 0 0 0 0 0 1 ]
LCDWriteByte(0, 0b00000001);
delay(3); // Display Clear takes 1.18ms - 2.16ms
// Entry Mode Set
// [0 0 0 0 0 1 ID S ]
// ID: 1=Increment, 0=Decrement
// S: 1=Shift based on ID (1=Left, 0=Right)
// [--000001IS]
LCDWriteByte(0, 0b00000110);
To make things even more clear, I then created a wrapper function for writing an Instruction that has RS at 0, and another for writing Data that has RS at 1:
// Entry Mode Set
// [0 0 0 0 0 1 ID S ]
// ID: 1=Increment, 0=Decrement
// S: 1=Shift based on ID (1=Left, 0=Right)
// [--000001IS]
LCDWriteInstructionByte(0b00000110);
The Display Clear instruction is 00000001. There are no other bits that need to be set, so I can clear the screen by doing "LCDWriteInstructionByte(0b00000001);" or simply "LCDWriteInstructionByte(1);"
Ultimately, I’d probably create #defines for the different instructions, and the settable bits inside of them, allowing me to build a byte like this:
FUNCTION_SET would represent the bit pattern 0b00100000, and the DL_BIT would be BIT(4), N_BIT would be BIT(3) and F_BIT would be BIT(2). Fleshing out all of those defines and then making wrapper functions would be trivial.
But in my case, I only needed a few, so if you wanted to make something that did that, you could:
This type of thing can allow your code to spiral out of control as you create functions to set bits in things like “Display On/Off Control” and then write wrapper functions like “LCDDisplayON()”, “LCDBlinkOn()” and so on.
But we won’t be going there. I’m just showing you the basic framework.
Now what?
With the basic steps to Initialize to 4-Bit Mode, then send out commands, the rest is pretty simple. If you want to write out bytes to be displayed on the screen, you just write out a byte with the Register Select bit set (for Data, instead of Instruction). The byte appears at whatever location the LCD has for the cursor position. Simple!
At the very least, you need a Clear Screen function:
The last thing I implemented was a function that sets the X/Y position of where text will go. This is tricky, because the layout of the display does not match the layout of the memory inside the LCD. Internally, my LCD2004 just has a buffer of screen memory that maps to the LCD somehow.
The LCD data is not organized as multiple lines of 20 characters (or 16). Instead, it is just a buffer of screen memory that is mapped to the display. In the case of the LCD2004, the screen is basically 128 bytes of memory, with the FIRST line being bytes 0-19, the SECOND line being bytes 64-83, the THIRD line being bytes 20-39, and the FOURTH line being bytes 84-103.
If you were to start at memory offset 0 (top left of the display) and write 80 bytes of data (thinking you'd get 20, 20, 20 and 20 bytes on the display), that wouldn't happen ;-) You'd see that some of your data did not show up, since it was written to memory that is not mapped to the display. (You can also use that memory for data storage, but I did not implement any READ routines in this code — yet.)
If you actually did start at offset 0 (the first byte of screen memory) and wrote a series of characters from 32 (space) to 127 (whatever that is), it would look like this:
Above, you can see the first line continues on line #3, and then after the end of line 3 ("…EFG") we don't see any characters until we get to the apostrophe, which displays on line 2. Behind the scenes, memory looks like this:
All you need to know is that the visible screen doesn’t match LCD memory, so when creating a “set cursor position” that translates X and Y to an offset of memory, it has to have a lookup table, like this one:
You will see I created a function that sends the “Set Offset” instruction (memory location 0 to 127, I think) and then a “Set X/Y” function that translates columns and rows to an offset.
With all that said, here are the routines I came up with. Check my GitHub for the latest versions:
The LCDTest.ino program also demonstrates how you can easily send an Instruction to load character data, and then send that data using the LCDWriteData functions.
I plan to revisit this with more details on how all that works, but wanted to share what I had so far.
After being shown that you can declare a global variable, as one does…
int g_globalVariable;
…and then make it be treated as a read-only variable to other files by extern-ing it as a const:
extern int const g_globalVariable;
…of course I wondered what the compiler did if you went the other way:
// main.c
#include <stdio.h>
void function (void);
int const c_Value = 0;
int main()
{
    printf ("Hello World\n");
    printf ("c_Value: %d\n", c_Value);

    function ();

    printf ("c_Value: %d\n", c_Value);

    return 0;
}
// function.c
#include <stdio.h>
// Extern as a non-const.
extern int c_Value;
void function ()
{
    c_Value++;
}
Above, main.c contains a global const variable, but function.c tries to extern it as non-const.
But when I run the code…
Hello World
c_Value: 0

...Program finished with exit code 139
Press ENTER to exit console.
…the compiler does not complain, but we get a crash. Looking at this in a debugger shows more detail:
Program received signal SIGSEGV, Segmentation fault.
0x00005555555551ef in function () at Function.c:11
11        c_Value++;
I am unfamiliar with the inner workings of whatever compiler this Online C Compiler site is using, but I suspect I'd see similar results on any system with memory protection. Go back to the early days (like OS-9 on a 6809 computer, or even on a 68000 without an MMU) and … maybe it just allows it and modifies something it shouldn't?
We can file this away in the “don’t do this” category.
This is a cool trick I just learned from commenter Sean Patrick Conner in a previous post.
If you want to have variables globally available, but want to have some control over how they are set, you can limit the variables to be static to a file containing “get” and “set” functions:
Using functions to get and set variables adds extra code, and also slows down access to those variables, since the program has to jump into a function each time you want to change one.
The benefit of adding range checking may be worth the extra code/speed, but just reading a variable has no reason to need that overhead.
Thus, Sean’s tip…
Variables declared globally in a file cannot be accessed anywhere else unless you use “extern” to declare them in any file that wants to use them. You might declare some globals in globals.c like this:
// Globals.c
int g_number;
…but trying to access “g_number” anywhere else will not work. You either need to add:
extern int g_number;
…in any file that wants access to it, or, better, make something like globals.h that contains all your extern references:
// Globals.h
extern int g_number;
Now any file that needs access to the globals can just include “globals.h” and use them:
#include "globals.h"
void function (void)
{
    printf ("Number: %d\n", g_number);
}
That was not Sean’s tip.
Sean mentioned something that makes sense, but I do not think I’d ever tried: The extern can contain the “const” keyword, even if the declaration of the variable does not!
This means you could have a global variable like above, but in globals.h do this:
// Globals.h
extern int const g_number;
Now any file that includes “globals.h” has access to g_number as a read-only variable. The compiler will not let code build if there is a line trying to modify it other than globals.c where it was actually declared non-const.
Thus, you could access this variable as fast as any global, but not modify it. For that, you’d need a set routine:
// Globals.c
int c_number; // c_ to indicate it is const, which it really isn't.
Now other code can include “globals.h” and have read-only access to the variable directly, but can only set it by going through the set function, which could enforce data validation or other rules — something just setting it directly could not.
#include "Globals.h"
int main(int argc, char **argv)
{
    printf ("Number: %d\n", c_number);

    SetNumber (42);

    printf ("Number: %d\n", c_number);

    return 0;
}
That seems quite obvious now that I have been shown it. But I've never tried it. I have made plenty of Get/Set routines over the years (often to deal with making variable access thread-safe), but I guess it never dawned on me that, when not dealing with thread-safe variables, I could have direct read-only access to a variable, but still modify it through a function.
Global or static?
One interesting benefit is that any other code that needed direct access to this variable (for speed reasons or whatever) could just add its own extern rather than using the include “Globals.h”:
// Do this myself so I can modify it
extern int c_number;
void MyCode (void)
{
    // It's my variable and I can do what I want with it!
    c_number = 100;
}
Using a plain global (rather than a static) leaves that possibility open.
And since functions are used to set them, they could also exist to initialize them.
// Globals.c
// Declared as non-const, but named with "c_" to indicate the rest of the
// code cannot modify it.
int c_number;
// Globals.h
// Extern as a const so it is read-only.
extern int const c_number;
// Prototypes
void InitGlobals (void);
void SetNumber (int number);
#include <stdio.h>
#include "Globals.h"
int main()
{
    InitGlobals ();

    printf ("c_number = %d\n", c_number);

    // This won't work.
    //c_number = 100;

    SetNumber (100);

    printf ("c_number = %d\n", c_number);

    return 0;
}
Spiffy.
I had thought about using static to prevent the "extern" trick from working, but realized that if you did that, there would be no read-only access outside of that file, and a get function would be needed. And we already knew how to do that.
I love learning new techniques like this. The code I maintain in my day job has TONS of globals for various reasons, and often has duplicate code to do range checking and such. I could see using something like this to clean all of that up and still retain speed when accessing the variables.
I do not know why this has confused me so much over the years. BING CoPilot (aka ChatGPT) explains it so clearly I do not know how I ever misunderstood it.
But I am getting ahead of myself.
Back in 2017, I wrote a bit about const in C. A comment made by Sean Patrick Conner on a recent post made me revisit the topic of const in 2024.
If you use const, you make a variable that the compiler will not allow to be changed. It becomes read-only.
int normalVariable = 42;
const int constVariable = 42;
normalVariable = 100; // This will work.
constVariable = 100; // This will not work.
When you try to compile, you will get this error:
error: assignment of read-only variable ‘constVariable’
That is super simple.
But let me make one more point-er…
But for pointers, it is a bit different. You can declare a pointer and change it, like this:
char *ptr = 0x0;
ptr = (char*)0x100;
And if you did not want the pointer to change, you might try adding const like this:
const char *ptr = 0x0;
ptr = (char*)0x100;
…but you would find that it compiles just fine, and you can still modify the pointer.
In the case of pointers, the "const" at the start refers to what the pointer points to, not the pointer itself. Consider this:
uint8_t buffer[10];
// Normal pointer.
uint8_t *normalPtr = &buffer[0];

// Modify what it points to.
normalPtr[0] = 0xff;

// Modify the pointer itself.
normalPtr++;
Above, without using const, you can change the data that normalPtr points to (inside the buffer) as well as the pointer itself.
But when you add const…
// Pointer to constant data.
const uint8_t *constPtr1 = &buffer[0];
// Or it can be written like this:
// uint8_t const *constPtr1 = &buffer[0];

// You can NOT modify the data the pointer points to:
constPtr1[1] = 1; // error: assignment of read-only location ‘*(constPtr1 + 1)’

// But you can modify the pointer itself:
constPtr1++;
Some of my longstanding confusion came from where you put “const” on the line. In this case, “const uint8_t *ptr” is the same as “uint8_t const *ptr”. Because reasons?
Since using const before or after the pointer data type means “you can’t modify what this points to”, you have to use const in a different place if you want the pointer itself to not be changeable:
// Constant pointer to data.
// We can modify the data the pointer points to, but
// not the pointer itself.
uint8_t * const constPtr3 = &buffer[0];

constPtr3[3] = 3;

// But this will not work:
constPtr3++; // error: increment of read-only variable ‘constPtr3’
And if you want to make it so you cannot modify the pointer AND the data it points to, you use two consts:
// Constant pointer to constant data.
// We can NOT modify the data the pointer points to, or
// the pointer itself.
const uint8_t * const constPtr4 = &buffer[0];

// Neither of these will work:
constPtr4[4] = 4; // error: assignment of read-only location ‘*(constPtr4 + 4)’
constPtr4++;      // error: increment of read-only variable ‘constPtr4’
Totally not confusing.
The pattern is that “const” makes whatever follows it read-only. You can do an integer variable both ways, as well:
const int constVariable declare constVariable as const int
int const constVariable declare constVariable as const int
Since both of those are the same, “const char *” and “char const *” should be the same, too.
const char *ptr declare ptr as pointer to const char
char const *ptr declare ptr as pointer to const char
However, when you place the const in front of the variable name, you are no longer referring to the pointer (*) but that variable:
char * const ptr declare ptr as const pointer to char
Above, the pointer is constant, but not what it points to. Adding the second const:
const char * const ptr declare ptr as const pointer to const char
char const * const ptr declare ptr as const pointer to const char
…makes both the pointer and what it points to read-only.
Why do I care?
You probably don’t. However, any time you pass a buffer in to a function that is NOT supposed to modify it, you should make sure that buffer is read-only. (That was more or less the point of my 2017 post.)
#include <stdio.h>
#include <string.h>
void function (char *bufferPtr, size_t bufferSize)
{
// I can modify this!
bufferPtr[0] = 42;
}
int main()
{
char buffer[80];
strncpy (buffer, "Hello, world!", sizeof(buffer));
printf ("%s\n", buffer);
function (buffer, sizeof(buffer));
printf ("%s\n", buffer);
return 0;
}
When I run that, it prints “Hello, world!” and then “*ello, world!” (42 is the ASCII code for ‘*’).
If we do not want the function to be able to modify/corrupt the buffer (easily), adding const solves that:
#include <stdio.h>
#include <string.h>
void function (const char *bufferPtr, size_t bufferSize)
{
// I can NOT modify this:
bufferPtr[0] = 42; // error: assignment of read-only location ‘*bufferPtr’
}
int main()
{
char buffer[80];
strncpy (buffer, "Hello, world!", sizeof(buffer));
printf ("%s\n", buffer);
function (buffer, sizeof(buffer));
printf ("%s\n", buffer);
return 0;
}
But because only the data was protected with const, not the pointer itself, the routine can still modify the pointer:
#include <stdio.h>
#include <string.h>
void function (const char *bufferPtr, size_t bufferSize)
{
// I can NOT modify this!
//bufferPtr[0] = 42;
while (*bufferPtr != '\0')
{
printf ("%02x ", *bufferPtr);
bufferPtr++; // Increment the pointer
}
printf ("\n");
}
int main()
{
char buffer[80];
strncpy (buffer, "Hello, world!", sizeof(buffer));
printf ("%s\n", buffer);
function (buffer, sizeof(buffer));
printf ("%s\n", buffer);
return 0;
}
In that example, the pointer is passed in and can be changed. But since parameters are passed by value, what gets changed is the function’s local copy of the pointer, just as modifying a plain variable inside a function does not affect the variable that was passed in.
Because of that temporary nature, I don’t see any reason to restrict the pointer to be read-only. Any changes made to it within the function will be to a copy of the pointer.
In fact, even adding an extra const, as in “const char const *bufferPtr”, does not protect the pointer: that just repeats const on the data (GCC warns about the duplicate ‘const’), so the function’s copy of the pointer can still be modified. To make even the local copy read-only you would have to write “const char * const bufferPtr”, and then the increment would not compile. Here is the same function with the redundant const removed:
void function (const char *bufferPtr, size_t bufferSize)
{
// I can NOT modify this!
//bufferPtr[0] = 42;
while (*bufferPtr != '\0')
{
printf ("%02x ", *bufferPtr);
bufferPtr++; // Increment the pointer
}
printf ("\n");
}
Offhand, I cannot think of any reason you would want to pass a pointer in to a function and then forbid the function from changing its own copy of it. Maybe there are some? Leave a comment…
The moral of the story is…
The important takeaway is to always use const when you are passing in a buffer you do not want to be modified by the function. And leave it out when you DO want the buffer modified:
#include <stdio.h>
#include <string.h>
#include <ctype.h>
// Uppercase string in buffer.
void function (char *bufferPtr, size_t bufferSize)
{
while ((*bufferPtr != '\0') && (bufferSize > 0))
{
*bufferPtr = toupper(*bufferPtr);
bufferPtr++; // Increment the pointer
bufferSize--; // Decrement how many bytes left
}
}
int main()
{
char buffer[80];
strncpy (buffer, "Hello, world!", sizeof(buffer));
printf ("%s\n", buffer);
function (buffer, sizeof(buffer));
printf ("%s\n", buffer);
return 0;
}
And if you pass that a non-modifiable string (like a real read-only constant string stored in program space or ROM or whatever), you might have a different issue to deal with. In the case of the PIC24 compiler I use, it flat out won’t let you pass in a constant string like this:
function ("CCS PIC compiler will not allow this", 80);
They have a special compiler setting that will generate code to copy any string literals into RAM before calling the function (at the tradeoff of extra code space, CPU time, and memory):
#device PASS_STRINGS=IN_RAM
But I digress. This was just about const.
Oddly, when I do the same thing in the GDB online Debugger, it happily runs. I don’t know why. Surely it’s not modifying program space? Perhaps it is copying the string in to RAM behind the scenes, much like the CCS compiler can do. Or perhaps it is blindly writing to the string and there is no exception/memory protection stopping it. Since modifying a string literal is undefined behavior in C, any of these outcomes is fair game.
Well, it crashes if I run the same code on a Windows machine using the Code::Blocks IDE (GCC compiler).
One more thing…
You could, of course, try to cheat. Inside the function that is passed a const pointer, you can declare a non-const pointer and just assign it:
// Uppercase string in buffer.
void function (const char *bufferPtr, size_t bufferSize)
{
char *ptr = bufferPtr; // cheating: const pointer assigned to non-const
while ((*ptr != '\0') && (bufferSize > 0))
{
*ptr = toupper(*ptr);
ptr++; // Increment the pointer
bufferSize--; // Decrement how many bytes left
}
}
This will compile and run, though the compiler should warn you about it. GCC compiles mine, but emits a warning:
main.c: In function ‘function’:
main.c:16:17: warning: initialization discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
16 | char *ptr = bufferPtr;
For programmers who ignore compiler warnings, you now have code that can corrupt/modify memory that was designed not to be touched. So keep those warnings cranked up and pay attention to them if your code is important.
Only one of the programming jobs I have had used a coding standard. Their standard, created in-house, is more or less the standard I follow today. It includes things like:
Prefix global variables with g_
Prefix static variables with s_ (for local statics) or S_ (for global statics)
It also required the use of braces, which I have blogged about before, even in single-line instances such as:
if (fault == true)
{
BlinkScaryRedLight();
}
Many of these took me a bit to get used to because they were different from how I did things. Long after leaving that job, I have adopted much of that standard in my own personal style, having accepted the logic behind it.
I thought I’d ask here: Are there any good “widely accepted” C coding standards out there you would recommend? Adopting something widely used might make code easier for a new hire to adapt to, versus “now I have to learn yet another way to format my braces and name my variables.”
Even though I first started learning C using a K&R compiler on a 1980s 8-bit home computer, I still feel like I barely know the language. I learn something new regularly, usually by seeing code someone else wrote.
There are a few common “rules” in C programming that most C programmers I know agree with:
Do not use goto, even though it is an intentional, supported part of the language.
Do not use globals.
I have seen cases made for both. For goto, I have seen code that would otherwise be a convoluted mess of nested braces and comparisons solved simply by jumping out of the block of code with a goto. I still can’t bring myself to use goto in C, even though as I type this I feel like I actually did at some point. (Do I get a pass on that one, since it was a silly experiment where I was porting BASIC — which uses GOTO — to C, as literally as possible?)
But I digress…
A case for globals – laziness
Often, globals are used out of sheer laziness. Suppose you have a function that needs a new piece of information, and you don’t want to update every use of it to pass in a parameter. I am guilty of this: when I needed to make a function more flexible, I did not have time to go update every instance and use of the function to pass in a variable.
In that case, there would be some global containing a baud rate (I put a “g_” prefix on the variable name so it would be easy to spot as a global later), and any place that called that function could use it. Changing the global would make subsequent calls to the function use the new baud rate.
Bad example, but it is what it is.
A case for globals – speed
I have also resorted to using globals to speed things up. One project I worked on had dozens of windows (“panels”), and the original programmer had created a lookup function to return a panel’s handle based on a passed-in #define value:
int GetHandle (int panelID)
{
int panelHandle = -1;
switch (panelID)
{
case MAIN_PANEL:
panelHandle = xxxx;
break;
case MAIN_OPTIONS:
panelHandle = xxxx;
break;
...etc...
Every function that used a panel would get the handle first by calling that routine:
handle = GetHandle (MAIN_PANEL);
SetPanelColor (handle, COLOR_BLUE); // or whatever
As the program grew, more and more panels were added, and it took more and more time to look up panels near the bottom of the list. As an optimization, I decided to make all the panel handles global, so any use could just be:
SetPanelColor (g_MainPanel, COLOR_BLUE); // or whatever
This quick-and-dirty change gave about a 10% reduction in CPU usage (this thing does a ton of panel accesses), and it was pretty quick and simple to do.
Desperate times.
An alternative to globals
The main replacement I see for globals is a structure, declared during startup, then passed around by pointer. I’ve seen these called “context” or “runtime” structures. For example, some code I work on creates a big structure of “things,” and any place that needs one of those things accesses it:
InitI2C (runTime.baudRate);
But as you might guess, “runTime” is a global structure so any part of the code could access it (or manipulate it, or mess it up). The main benefit I see of making things a global structure is you have to know what you are doing. If you had globals like this:
// Globals
int index = 0;
int baudRate = 0;
…you might be surprised if you tried to use a local variable “index” or “baudRate” and got it confused with the global. (I actually ran into a bug where there was a global named simply “index”, and some code that had meant to declare a local variable called “index” forgot to, so it was always screwing with the global index used elsewhere in the code. This simple accident caused a lot of weird problems before it was identified and fixed.)
Prepending something like “g_index” at least makes it clear you are using a global, so you could have a local “index” and not risk messing up the global “g_index”.
To me, using that global runtime structure is just a slower way to do the same thing: on the embedded compilers I have tested, accessing a member of a global structure like “foo.x” is slower than accessing a plain global “x”. I have also seen it take more code space, and on one tightly constrained product I had to remove all such references to save just enough bytes to add some needed new code.
Yes, I have run into many situations where a tiny bit of extra memory space or a tiny bit of extra code space made the difference between getting something done, or not.
A cleaner approach?
Ideally, code could pass around a “context” structure, and then nothing could ever access it without specifically being handed it. Consider this:
int main ()
{
int status = SUCCESS;
// Allocate our context:
RunTimeStruct runTime;
...
status = BeginProgram (&runTime);
return status;
}
int BeginProgram (RunTimeStruct *runTime)
{
int status = SUCCESS;
InitializeCommunications (runTime->baudRate);
status = DoSomething (runTime);
return status;
}
The idea seems to be that once you had the runTime structure, you could pass in specific elements to a function (such as the baud rate), or pass along the entire context for routines that needed full access.
This feels like a nice approach to me since passing one pointer in is fast, and it still offers protection when you decide to pass in just one (or a few) specific items to a function. No code can legally touch those variables if it doesn’t have the context structure.
But what about globals that aren’t globals?
And now the point of this article. Something I learned from this project was an interesting use of “globals” that were not globals. There were functions that declared static structures, and would return the address of the structure:
This seems like a hybrid approach. You can never accidentally use these variables, like you might with a plain global “int index”, but you can still get to them anywhere without needing a context passed in. It seems like a good compromise between safety and laziness.
It also means those functions could easily be adapted to return blocks of thread-safe variables, with a “Release” function at the end. (This is actually how the thread-safe variables work in the LabWindows/CVI environment I use at my day job.)
Since I like learning, I thought I’d write this up and ask you what YOU do. Show me your superior method, and why it is superior. I’ve seen so many different approaches to passing data around, so share yours in a comment.