C and VLAs (Variable Length Arrays)

When you are old (or “experienced” if you prefer), you begin to realize how much of what you learned is wrong. Even if it was “right” when you learned it. I think of all the peers who went through computer courses at colleges back in the late 1980s or 1990s, learning now-obsolete languages and being taught methods and approaches that are today considered wrong.

When I learned C, it was on a pre-ANSI K&R C compiler. I learned it on my Radio Shack Color Computer 3 under the OS-9 operating system, with assistance from a friend of mine who had learned C on his Commodore Amiga.

I had a lot of new things to learn in 1995 when I took a job with Microware Systems Corporation (creator of OS-9 and the K&R compiler I had learned on). Their Ultra-C compiler was an ANSI compiler, and it did things quite differently.

In that era of the C89/C90 standard, arrays were just arrays and we liked it that way:

int array[42];

If you wanted things to be more flexible, you had to malloc() memory yourself:

int *array = malloc (sizeof(int)*42);

…and remember to stay within your boundaries and clean up/free that memory when you were done with it.
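In its minimal form, that pattern looked something like this:

int *array = malloc (sizeof(int) * 42);

if (array != NULL)
{
    array[41] = 100;  // highest valid index; array[42] would be out of bounds
    free (array);     // give the memory back when done
    array = NULL;     // avoid using a dangling pointer later
}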

But C99 changed this, somewhat, with the introduction of VLAs (Variable Length Arrays). Now you could declare an array using a variable like this:

int x=42;

int array[x];

Neat. I do not think I have ever used this. One downside is you cannot do this with static variables, since those are created/reserved at compile time. But it is still neat.

But today I learned you could not rely on VLAs if you were using C11. Apparently, they became optional in that revision of the standard. A compiler would define a special define if it did not support them:

__STDC_NO_VLA__

But at least for twelve years of the standard, you could rely on them, before not being able to rely on them.
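If I ever do need a VLA in code that might be built as C11, I assume the portable pattern would look something like this (a quick sketch; I have not tried it on my day-job compilers):

#include <stdio.h>

int main (void)
{
    int x = 42;

#ifdef __STDC_NO_VLA__
    // This compiler opted out of VLAs (allowed in C11): fall back to a fixed size.
    int array[42];
    (void)x; // x is unused in this branch
#else
    // VLA: the array size comes from a runtime variable.
    int array[x];
#endif

    // sizeof works on a VLA, too; it is just computed at runtime.
    printf ("%zu elements\n", sizeof(array) / sizeof(array[0]));

    return 0;
}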

And then C23 happened, which I just learned made VLAs (mostly) mandatory again. As I understand it, the variably-modified types are now required, but actual VLA objects on the stack can still be optional.

So, uh, I guess if you have the latest and greatest, you can use them. For now. Until some future change makes them optional again. Or removes them. Or whatever.

Still neat.

But I doubt any of the embedded C compilers I use for my day job support them.

Ciaran “Xroar” Anscomb’s PCLEAR 0 without assembly!

On the CoCo mailing list, Ciaran (author of the Xroar emulator), said this:

FWIW this is the bodge I have saved out for doing PCLEAR0 without such
ROM patching:

POKE183,PEEK(183)-6:POKE188,PEEK(188)-6:PCLEAR1:POKE183,PEEK(183)+6:POKE188,PEEK(188)+6

Annoyingly verbose, but should account for DOS, and works on the
Dragon too.

..ciaran

https://pairlist5.pair.net/mailman/listinfo/coco

This topic came up because of Juan Castro’s experiments with updating HDB-DOS to add new functionality on a CoCo 1 and 2 (but that is a discussion for a dedicated blog post sometime). Juan had recently “fixed” Extended Color BASIC to allow using “PCLEAR 0” to remove all graphics memory and give more RAM to BASIC. I have discussed PCLEAR 0 in the past.

This mysterious line performs a PCLEAR 0 without needing to load and run a program of assembly code!

POKE183,PEEK(183)-6:POKE188,PEEK(188)-6:PCLEAR1:POKE183,PEEK(183)+6:POKE188,PEEK(188)+6

And it works!

But … how does it work!?!

Ciaran, you’ve got some ‘splainin’ to do…

Until then…

A safer memcpy with very limited use cases

Here is a quick one… At my day job, I found lines of code like this:

memcpy(systemDataPtr->serialNumber, resp.serialNumber, 16);

A quick peek at systemDataPtr->serialNumber shows it defined as this:

unsigned char serialNumber[MAX_SERIAL_NUMBER_LENGTH];

…with that constant defined as:

#define MAX_SERIAL_NUMBER_LENGTH        16

So while 16 is correct, the use of hard-coded “magic numbers” (hat tip to a previous manager, Pete S., who introduced me to that term) is probably best avoided. Change that #define, and things could go horribly wrong with a memory overrun or massive nuclear explosion or something.

One simple fix is to use the #define in the memcpy:

memcpy(systemDataPtr->serialNumber, resp.serialNumber, MAX_SERIAL_NUMBER_LENGTH);

This, of course, assumes that resp.serialNumber is also 16 bytes. Let’s see:

char serialNumber[16];

Ah, magic number! In this case, it comes from a DLL header file that does not share that #define, and the header file for the DLL was made by someone who had never made a Windows DLL before (me) and did not make #defines for these various lengths.

What if the DLL value ever got out of sync? If it became larger, the worst case is that not all of the data would be copied (only 16 bytes). That seems fine. But if the DLL value became smaller, like 10, the memcpy would still copy 16 bytes: the 10 from the DLL buffer plus 6 bytes of whatever is in memory after it. A buffer over-read.

In this case, since the destination buffer can hold 16 bytes, and we only copy up to 16 bytes, the worst case is we could get some unintended data in that buffer.

sizeof() exists for a reason.

One thing I tend to do is use sizeof() instead of hard-coded numbers or the #define, since it will continue to work even if the destination buffer is ever changed away from using that #define:

memcpy(systemDataPtr->serialNumber, resp.serialNumber, sizeof(systemDataPtr->serialNumber));

But this still has issues if the size of resp.serialNumber ever changed: an over-read if the source became smaller than the destination, or a truncated copy if it became larger.

A safer, and more ridiculous, memcpy

Naturally, I came up with a ridiculous “solution”: A safer memcpy() that is much more of a pain to use because you have to know the size of each buffer and tell it the size of each buffer so it can make sure not to copy something larger than will fit into the destination buffer.

Here is the prototype of memcpy():

void * memcpy ( void * destination, const void * source, size_t num );

It will blindly copy up to “num” bytes from “source” to “destination”. But a ridiculous safer memcpy might look like this:

void * memcpy_safer ( void * destination, size_t sizeOfDestination,
                      const void * source, size_t sizeOfSource,
                      size_t num );

Just think of the extra overhead to add two more parameters for every use! Plus, it is a longer function name so you get to type even more! Wonderful.

Here is a quick implementation:

void * memcpy_safer ( void * destination, size_t sizeOfDestination,
                      const void * source, size_t sizeOfSource,
                      size_t num )
{
    // Use whichever size is the smallest.
    if ((num > sizeOfDestination) || (num > sizeOfSource))
    {
        if (sizeOfDestination < sizeOfSource)
        {
            num = sizeOfDestination;
        }
        else
        {
            num = sizeOfSource;
        }
    }
    
    return memcpy ( destination, source, num);
}

Now that the function knows the size of source and destination, it can do a simple check to make sure to use whichever is smallest for the max number of bytes to copy — regardless (or irregardless, depending on which you prefer) of how many bytes you specified.

If both buffers are at least as large as the “num” passed in, num is used as-is. But if “num” is larger than either buffer, it is reduced to the smaller of the two buffer sizes.

Note that this does not do any NULL checks, so it relies on the behavior of memcpy(), which would likely be a crash if NULL is passed in. To improve it, NULL checks could be added.

I decided to pass this function into Bing CoPilot and see what it “thought” about this:

Limitations & Risks

  • Silent truncation: If num is too large, the function silently reduces it. This can lead to subtle bugs if the caller expects all bytes to be copied.
  • No error signaling: There’s no return code or logging to indicate that truncation occurred.
  • Still not “safe” in all cases: If destination or source are NULL, or if the buffers overlap, memcpy may still cause undefined behavior.
  • No runtime buffer validation: It assumes the caller provides correct buffer sizes, which may not be guaranteed.

– Bing CoPilot

And, of course, the A.I. offered up these changes:

#include <string.h>
#include <stddef.h>
#include <stdio.h>

void * memcpy_safer ( void * destination, size_t sizeOfDestination,
                      const void * source, size_t sizeOfSource,
                      size_t num )
{
    if (!destination || !source) {
        fprintf(stderr, "Error: NULL pointer passed to memcpy_safer.\n");
        return NULL;
    }

    if (num > sizeOfDestination || num > sizeOfSource) {
        size_t adjusted = (sizeOfDestination < sizeOfSource) ? sizeOfDestination : sizeOfSource;
        fprintf(stderr, "Warning: Truncating copy from %zu to %zu bytes.\n", num, adjusted);
        num = adjusted;
    }

    return memcpy(destination, source, num);
}

That version adds NULL checks, returns NULL if either pointer passed in was NULL, and prints to standard error if a NULL happens or if the value was truncated.

Not bad, predictive language model.

My ridiculous test program

Here is my test program, which I wrote using the Online GDB C compiler:

#include <stdint.h> // for uint8_t
#include <stdio.h>  // for printf()
#include <stdlib.h> // for EXIT_SUCCESS
#include <string.h> // for memcpy()

/*---------------------------------------------------------------------------*/
// PROTOTYPES
/*---------------------------------------------------------------------------*/

void * memcpy_safer ( void * destination, size_t sizeOfDestination,
                      const void * source, size_t sizeOfSource,
                      size_t num );

void * memcpy_safer2 ( void * destination, size_t sizeOfDestination,
                       const void * source, size_t sizeOfSource,
                       size_t num );

void initializeBuffer (void *dataPtr, size_t dataSize, uint8_t value);

void dumpBuffer (const char* prefix, void *dataPtr, size_t dataSize);

/*---------------------------------------------------------------------------*/
// MAIN
/*---------------------------------------------------------------------------*/

int main()
{
    uint8_t smallerBuffer[10];
    uint8_t largerBuffer[15];
    
    // Test 1: copy longer buffer into smaller buffer.
    
    printf ("\nInitialized buffers:\n\n");    
    
    // Initialize buffers with something we can identify later.
    initializeBuffer (smallerBuffer, sizeof(smallerBuffer), 0x1);
    dumpBuffer ("smallerBuffer", smallerBuffer, sizeof(smallerBuffer));

    initializeBuffer (largerBuffer, sizeof(largerBuffer), 0x2);
    dumpBuffer ("largerBuffer ", largerBuffer, sizeof(largerBuffer));

    printf ("\nTest 1: Copying largerBuffer into smallerBuffer...\n\n");

    memcpy_safer (smallerBuffer, sizeof(smallerBuffer), largerBuffer, sizeof(largerBuffer), 42);

    dumpBuffer ("smallerBuffer", smallerBuffer, sizeof(smallerBuffer));

    // Test 2: copy smaller buffer into larger buffer.

    printf ("\nInitialized buffers:\n\n");

    // Initialize buffers with something we can identify later.
    initializeBuffer (smallerBuffer, sizeof(smallerBuffer), 0x1);
    dumpBuffer ("smallerBuffer", smallerBuffer, sizeof(smallerBuffer));

    initializeBuffer (largerBuffer, sizeof(largerBuffer), 0x2);
    dumpBuffer ("largerBuffer ", largerBuffer, sizeof(largerBuffer));

    printf ("\nTest 2: Copying smallerBuffer into largerBuffer...\n\n");

    memcpy_safer (largerBuffer, sizeof(largerBuffer), smallerBuffer, sizeof(smallerBuffer), 42);

    dumpBuffer ("largerBuffer ", largerBuffer, sizeof(largerBuffer));

    return EXIT_SUCCESS;
}


/*---------------------------------------------------------------------------*/
// FUNCTIONS
/*---------------------------------------------------------------------------*/

/*---------------------------------------------------------------------------*/
// My ridiculous "safer" memcpy.
/*---------------------------------------------------------------------------*/
void * memcpy_safer ( void * destination, size_t sizeOfDestination,
                      const void * source, size_t sizeOfSource,
                      size_t num )
{
    // Use whichever size is the smallest.
    if ((num > sizeOfDestination) || (num > sizeOfSource))
    {
        if (sizeOfDestination < sizeOfSource)
        {
            num = sizeOfDestination;
        }
        else
        {
            num = sizeOfSource;
        }
    }
    
    return memcpy ( destination, source, num);
}


/*---------------------------------------------------------------------------*/
// Bing CoPilot changes.
/*---------------------------------------------------------------------------*/
void * memcpy_safer2 ( void * destination, size_t sizeOfDestination,
                       const void * source, size_t sizeOfSource,
                       size_t num )
{
    if (!destination || !source) {
        fprintf(stderr, "Error: NULL pointer passed to memcpy_safer.\n");
        return NULL;
    }

    if (num > sizeOfDestination || num > sizeOfSource) {
        size_t adjusted = (sizeOfDestination < sizeOfSource) ? sizeOfDestination : sizeOfSource;
        fprintf(stderr, "Warning: Truncating copy from %zu to %zu bytes.\n", num, adjusted);
        num = adjusted;
    }

    return memcpy(destination, source, num);
}


/*---------------------------------------------------------------------------*/
// Utility function to initialize a buffer to a set value.
/*---------------------------------------------------------------------------*/
void initializeBuffer (void *dataPtr, size_t dataSize, uint8_t value)
{
    if (NULL != dataPtr)
    {
        memset (dataPtr, value, dataSize);
    }
}


/*---------------------------------------------------------------------------*/
// Utility function to dump bytes in a buffer, with an optional prefix.
/*---------------------------------------------------------------------------*/
void dumpBuffer (const char* prefix, void *dataPtr, size_t dataSize)
{
    if (NULL != dataPtr)
    {
        if (NULL != prefix)
        {
            printf ("%s: ", prefix);
        }

        for (size_t idx=0; idx<dataSize; idx++)
        {
            printf ("%02x ", ((uint8_t*)dataPtr)[idx]);
        }
        printf ("\n");
    }
}

// End of memcpy_safer.c

If you want to run it there, you can use this link:

https://onlinegdb.com/Eu7FToIcQ

But of course, I am not using this code. It is ridiculous and requires extra typing.

Besides, I know exactly what I am doing in C and never make any mistakes… Really.

Until next time…

Google Street View scripts and A.I. emojis

When capturing video for Google Street View, Google recommends using 1 frame per second video for walking, and 5 frames per second for biking and lower driving speeds. A full 30 or even 60 fps video is unnecessarily huge and will take much longer to upload and process … and most of the frames will be discarded by Google anyway.

I had one of the A.I.s (probably CoPilot) automate using the ffmpeg open source command line tool so I could batch convert files in a directory. A very rough work-in-progress version is on my GitHub now:

allenhuffman/GoogleStreetViewScripts: Scripts for converting videos before uploading to Google Street View
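At its heart, the conversion is a one-line ffmpeg command along these lines (the file names here are placeholders; check the repository for the exact options the scripts use):

ffmpeg -i input.mp4 -r 1 output.mp4

That re-encodes the video at 1 frame per second; changing “-r 1” to “-r 5” gives the 5 fps version.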

I have noticed the A.I.s are starting to put emojis in things — including code and scripts they generate!

I don’t even know how to type emojis in uMacs or VI ;-) but apparently they are supported these days.

Have you noticed the increase in emojis in A.I. responses lately?

I’d end this post with an emoji, but I do not know how to type one in WordPress . . .

DJI OSMO 360 launched… but you cannot buy it (in the USA)

UPDATE: There is now a third party reseller offering the DJI Osmo 360 on Amazon, but shipping doesn’t happen until mid-September. Since this is not Amazon shipping or an official DJI storefront, buyer beware. Check reseller ratings before ordering anything.

UPDATE #2: More “confirmation” as I check out Facebook posts. I keep seeing posts from folks who have had access to pre-release review units saying “not available in the US for the foreseeable future.” That is unfortunate. It looks like the Osmo 360 might be better for low-light 360 and I’d get one just for that.

UPDATE #3: According to this Reddit post, which includes a chat response from DJI, there is no official sale of the DJI Osmo 360 in the USA at this time. BUYER BEWARE of any third party sellers offering it in the USA.


This morning at 7am central time, DJI officially launched their new DJI OSMO 360 camera. But, like almost everything else on the DJI website, it is unavailable in the U.S.A.

This situation was predicted by many DJI followers, based on how almost nothing they sell is currently available to purchase in the U.S.A. Basically no drones, cameras, or microphones are being sold in the U.S.A. right now.

The reasons suggested by video bloggers are varied, and all of them are speculation. The top two I have seen include:

  • No Slave Labor – The situation with the Uyghurs (pronounced “weegers”), who you may have heard of, being used as forced/slave labor. Apparently something passed by the previous administration would prevent those items from being brought into the U.S.A.
  • Tariffs – The situation with tariffs and our current administration. This one does not really ring true, since tariffs would seemingly only make things more expensive. And, plenty of other China-based entities have products available here in the U.S.A. Insta360 products are readily available, and you can still order from Alibaba, Aliexpress and Temu, for example.

Thus, my goal of getting one of these on launch day will be delayed while we figure out if we can get them at all. Instead of a pre-order or a long delivery window, they just say “out of stock.” But so do most other things I have spot-checked on the website.

The FOMO is real!

If you get one here in the U.S.A., please leave a comment and tell us where and how you got it. This isn’t even listed on Amazon… yet.

More to come…

“GPS data jumps around a lot” and Google Street View

Updates:

  • 2025-09-03 – Added details on how I got the python tool running on macOS.

Recently I posted a list of Google Street View upload errors I have received. I found a way to deal with one of them.

GPS data jumps around a lot

After some “research” (which you can assume means “googling” and “asking A.I.”), I learned that there can be gaps in GPS data that make the movement appear to spike. I found a Mac App Store program called GPX Editor, which I purchased for the wonderful price of $4.99, so I could look at the data more closely:

GPX Editor showing spikes in the GPS data.

This program allowed me to look at my data in a much finer (i.e. zoomed in) way than Google Maps does. I could see spikes in the data that looked as if I suddenly popped to another location, as if I went from walking to zooming at 17 mph. This may have been caused by a GPS glitch as I walked under a bridge or some other obstacle that affected the GPS reading.

While you are looking at that screen shot, notice on the right side you can see how much time there is between GPS entries. Google Street View wants 5 seconds or less, I believe, so when gaps appear at 8 seconds, that will also cause an error and the GPX file will be rejected.

What to do, what to do…

Google wants real GPS data, but it seems fixing an obvious glitch like this should be acceptable. Basically, just moving a point back to where it was. I am sure there are tools like GPX Editor that would allow moving one point and fixing it.

But I am lazy, and wondered if there was an easier way.

I ended up looking for a tool to interpolate the GPS data. Here is the one I found:

https://github.com/remisalmon/gpx-interpolate

It will read a GPX file and then smooth out the points. Here is the example screen shot from that GitHub repository:

The black dots are the actual GPS entries, and the red shows where it can create new entries. For slow data, such as walking, this should work really well. And, I notice it has the ability to limit how often the data ends up in the file:

usage: gpx_interpolate.py [-h] [-r RES] [-n NUM] [-s] FILE [FILE ...]

interpolate GPX files using piecewise cubic Hermite splines

positional arguments:
  FILE               GPX file

optional arguments:
  -h, --help         show this help message and exit
  -r RES, --res RES  interpolation resolution in meters (default: 1)
  -n NUM, --num NUM  force point count in output (default: disabled)
  -s, --speed        save interpolated speed
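To show the basic idea in C (gpx-interpolate itself does this in Python with cubic Hermite splines, so treat this as a simplified, linear sketch with made-up coordinates):

#include <stdio.h>

// A timestamped GPS fix, a simplified stand-in for a GPX trackpoint.
typedef struct {
    double time;      // seconds since start of track
    double latitude;  // degrees
    double longitude; // degrees
} GpsPoint;

// Create a new fix at time t by interpolating between two known fixes.
GpsPoint interpolateGps (GpsPoint a, GpsPoint b, double t)
{
    double fraction = (t - a.time) / (b.time - a.time);

    GpsPoint p;
    p.time      = t;
    p.latitude  = a.latitude  + fraction * (b.latitude  - a.latitude);
    p.longitude = a.longitude + fraction * (b.longitude - a.longitude);

    return p;
}

int main (void)
{
    // Two fixes 8 seconds apart: too large a gap for Street View.
    GpsPoint a = { 0.0, 41.586800, -93.625000 };
    GpsPoint b = { 8.0, 41.587000, -93.624600 };

    // Fill the gap with a new point every 2 seconds.
    for (double t = 2.0; t < 8.0; t += 2.0)
    {
        GpsPoint p = interpolateGps (a, b, t);
        printf ("t=%.0f lat=%.6f lon=%.6f\n", p.time, p.latitude, p.longitude);
    }

    return 0;
}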

Running gpx_interpolate on macOS

Side note: To get this running, I had to install a few other things. In order to install them, I had to use a “virtual environment” for Python so the stuff I installed was isolated from the Python that came with macOS (or so I think). Here are the commands I ran:

cd Movies ***OR WHEREVER YOU WANT TO INSTALL THIS***
python3 -m venv venv
source venv/bin/activate
pip install gpxpy
pip3 install numpy
pip3 install scipy

After that, I could “python3 gpx_interpolate.py -r 1 my_gpx_file.gpx” and get the interpolated file.

The instructions I used (from Google’s A.I.) said I would need to “source venv/bin/activate” each time I started a Terminal and wanted to use this. I will update this as I learn more and understand what this is all about.

The results…

I used the “-r” option so it drops a marker every 1 meter or so. As long as I was walking at a regular pace, that should make new data that is much smoother, without the jumps and gaps.

GPX Editor showing fewer spikes in the GPS data after applying interpolation.

This file now seems to have much more consistent entries, and still follows the walking path closely.

I was able to upload the video file and this new interpolated GPX file and it is now live on Google.

But I am sure we can do better. To be continued…

I only understand this a bit…

NOTE: Credit for the content of this article, other than my comments about it, goes to whatever A.I. engine Microsoft uses for CoPilot…

While trying out a feature of Microsoft’s CoPilot A.I., I decided to ask it a simple question:

What is a good way to count the set bits in a 32-bit integer in the C programming language?

At my day job, we collect a byte from boards that multiplex eight sub-boards. There can be up to four of these multiplexer boards for a total of 32 sub-boards being multiplexed. Bits in those bytes will be set to indicate if a sub-board was found. The original designer has code that retrieves a byte from each multiplexer board then combines them into a 32-bit integer. There is code that counts the set bits to make sure we found as many boards as the system is configured to expect.

To my surprise, CoPilot gave me five different routines. The first was easy to understand, and a bit more clever than what I would have created. It called this the Iterative Bitwise Method with a description of “simple and portable, but not the fastest.”

int countSetBits(int n) {
    int count = 0;
    while (n) {
        count += n & 1;
        n >>= 1;
    }
    return count;
}

Next was Brian Kernighan’s Algorithm, which it described as “efficient for sparse bit patterns.”

int countSetBits(int n) {
    int count = 0;
    while (n) {
        n &= (n - 1);
        count++;
    }
    return count;
}
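The trick, in case it is not obvious (it was not to me at first), is that n & (n - 1) always clears the lowest set bit of n, so the loop runs once per set bit instead of once per bit position.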

Third was a Lookup Table Method, described as “fast for repeated calls but requires extra memory.”

unsigned char lookupTable[256];

void initializeLookupTable() {
    for (int i = 0; i < 256; i++) {
        lookupTable[i] = countSetBits(i); // Use any method to count bits
    }
}

int countSetBitsUsingLookup(int n) {
    return lookupTable[n & 0xFF] +
           lookupTable[(n >> 8) & 0xFF] +
           lookupTable[(n >> 16) & 0xFF] +
           lookupTable[(n >> 24) & 0xFF];
}
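Note that initializeLookupTable() has to be called once at startup, before the first call to countSetBitsUsingLookup(), or the table will be full of zeros.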

Fourth was something I had never heard of: Compiler Intrinsics. It said “many modern compilers provide built-in functions to count set bits efficiently” and described the approach as “extremely fast but platform-dependent.”

#include <intrin.h>

int countSetBits(int n) {
    return __popcnt(n);
}
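For what it is worth, __popcnt() and <intrin.h> are Microsoft Visual C++ specific; GCC and Clang provide the equivalent __builtin_popcount() without needing a header.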

In my world of “barely C” embedded compilers that don’t even support the modern printf formats for printing pointers, I do not expect to have access to this.

And fifth was Parallel Bitwise Operations, described as “fast and portable, but harder to understand.”

And that is the version that made me post this:

int countSetBits(int n) {
    n = n - ((n >> 1) & 0x55555555);
    n = (n & 0x33333333) + ((n >> 2) & 0x33333333);
    n = (n + (n >> 4)) & 0x0F0F0F0F;
    n = n + (n >> 8);
    n = n + (n >> 16);
    return n & 0x3F;
}

Harder to understand is an understatement. My brane hurts just looking at that trying to figure out how it works.

And I thought I’d share that hurt with you.

Have a good C day…

Google Street View upload errors

Updates:

  • 2025-08-03 – added new error “GPS timestamps do not overlap with Video timestamps.”
  • 2025-09-03 – added “GPS data contains gaps greater than 5 seconds in between GPS points.”

This is another research article which I am posting so others may find it in the search engines.

Uploading to Google Street View involves uploading a video file with embedded GPS, or a non-GPS enabled video file and a separate .gpx file containing the GPS data.

Here are some of the errors I have been getting:

GPS data contains gaps greater than 5 seconds in between GPS points. 32 sec gap found after 0 seconds

This was from two separate recordings where the Insta360 GPS Remote did not embed GPS data. Instead, I used a .KML file generated by an iPhone app (Geotag Photos 2) that I was using for my digital camera photos. I had hoped to use that file as a backup, but apparently there are data problems in it that Google will not accept.

Video does not contain more than 10 GPS points. Only 0 GPS points found

This one is problematic because the GPX file contains hundreds of GPS points. I had converted my video file down to 1 frame per second, per Google recommendations. One theory was that the points in the GPX file did not line up enough times with a specific frame in the video, and thus the file gets rejected. Google Gemini says this should not matter, but a different file I uploaded at 1 fps did not have any issue, and resubmitting the same erroring file converted to 5 fps did not receive this error.

GPS time range (9/12/24, 8:35 AM to 9/12/24, 8:39 AM) does not overlap with Video time range (7/20/25, 6:49 PM to 7/20/25, 6:53 PM)

This was caused when I converted a video file to 1 fps using Apple Compressor. The resulting file had a different creation date, and thus did not match the GPX file. It looks like Apple Compressor cannot be used by itself to get the requested 1 fps, 5 fps, or other frame rates for Street View.

GPS data jumps around a lot

This one puzzles me the most. If I convert the GPX file to a KML file for uploading to Google Maps…

https://gpx2kml.com

…that file displays on the map and follows the trail quite accurately.

GPS timestamps do not overlap with Video timestamps. GPS time range: (6:00:00 PM UTC-6 to 6:00:00 PM UTC-6). Video time range: (6:00:00 PM UTC-6 to 6:00:00 PM UTC-6).

This one happened after I used Apple Compressor to reduce the video file down to 5 fps. There are some settings I was told about on Reddit that should allow the metadata to pass through to the converted file. That helped get past an earlier error, but now this one shows up.

Can you help?

Please leave a comment with any solutions to these errors, and I will update this page.

Until then…

When the original Atari closed in 1996…

From a post on CompuServe, as shared by former coworker Eric C. to me and some other coworkers, including one who formerly worked for Atari during the Jaguar days.

From 75300.1267@CompuServe.COM Sat Oct  5 02:38 CDT 1996
Date: 05 Oct 96 03:32:43 EDT
From: "Donald A. Thomas, Jr." <75300.1267@CompuServe.COM>
To: BlindCopyReceiver:;
Subject: Did you say "Goodbye"?
Content-Type: text
Content-Length: 14739

Did you hear anyone say "Goodbye"?
==================================
by Donald A. Thomas, Jr. (10/4/96)

It's odd to imagine an institution, which was as big and as powerful as Atari
once was, to have been shut down in recent days. The real amazement for me is
that it was all accomplished without a measurable flinch from within or outside
the gaming industry. I can understand that gamers wanted to push Pong out the
door early in the timeline. I can appreciate that the classics such as Missile
Command and Asteroids do not push 32-bit and 64-bit systems to any technological
limits. I know all these things intellectually, but the heart cannot face the
truth that the world and the corporate machine known as Atari could not find an
amicable way to coexist.

On Tuesday, July 30, 1996, Atari Corporation took each and every share of it's
company (ATC), wrapped them all in a tight bundle and presented them to JTS
Corporation; a maker and distributor of hard disk drives. On Wednesday, the
shares were traded under the symbol of JTS. Within a few weeks, the remaining
staff of Atari that were not dismissed or did not resign, moved to JTS'
headquarters in San Jose, California. The three people were assigned to
different areas of the building and all that really remains of the Atari
namesake is a Santa Clara warehouse full of unsold Jaguar and Lynx products.

It was only as long ago as mid '95 that Atari executives and staff believed
things were finally taking a better turn. Wal*Mart had agreed to place Jaguar
game systems in 400 of their Superstores across the country. Largely based on
this promise of new hope and the opportunities that open when such deals are
made, Atari invested heavily in the product and mechanisms required to serve the
Wal*Mart chain. But the philosophical beliefs of the Atari decision makers that
great products never need advertising or promotions, put the Wal*Mart deal
straight into a tailspin. With money tied up in the product on shelves as well
as the costs to distribute them to get there, not much was left to saturate any
marketplace with advertising. While parents rushed into stores to get their kids
Saturns or PlayStations, the few that picked up the Jaguar were chastised by
disappointed children on Christmas day.

In an effort to salvage the pending Wal*Mart situation, desperate attempts to
run infomercials across the country were activated. The programs were
professionally produced by experts in the infomercial industry and designed to
permit Atari to run slightly different offers in different markets. In spite of
the relatively low cost of running infomercials, the cost to produce them and
support them is very high. The results were disappointing. Of the few thousand
people who actually placed orders, many of them returned their purchases after
the Holidays. The kids wanted what they saw on TV during the day! They wanted
what their friends had! They wanted what the magazines were raving about!

In early 1996, Wal*Mart began returning all remaining inventory of Jaguar
products. After reversing an "advertising allowance" Atari was obligated to
accept, the net benefit Atari realized was an overflowing warehouse of inventory
in semi-crushed boxes and with firmly affixed price and security tags. Unable to
find a retailer willing to help distribute the numbers required to stay afloat,
Atari virtually discontinued operations and traded any remaining cash to JTS in
exchange for a graceful way to exit the industry's back door.

Now that JTS has "absorbed" Atari, it really doesn't know what to do with the
bulk of machines Atari hoped to sell. It's difficult to liquidate them. Even at
liquidation prices, consumers expect a minimal level of support which JTS has no
means to offer. The hundreds of calls they receive from consumers that track
them down each week are answered to the best ability of one person. Inquiries
with regard to licensing Atari classic favorites for other applications such as
handheld games are handled by Mr. John Skruch who was with Atari for over 13
years.

In spite of Nintendo's claim that their newest game system is the first 64-bit
game system on the market, Atari Corporation actually introduced the first
64-bit system just before Christmas in 1993. Since Atari couldn't afford to
launch the system nationwide, the system was introduced in the New York and San
Francisco markets first. Beating the 32-bit systems to the punch
(Saturn/PlayStation), Atari enjoyed moderate success with the Jaguar system and
managed to lure shallow promises from third-party companies to support the
system. Unfortunately, programmers grossly underestimated the time required to
develop 64-bit games. The jump from 8-bit and 16-bit was wider than anticipated.
In addition, Atari was already spread thin monetarily, but were required to
finance almost every title that was in development.

After the initial launch, it took Atari almost a year before an assortment of
games began to hit store shelves. Even then, having missed the '94 Holiday
Season, many of the planned titles were de-accelerated to minimize problems
caused by rushing things too fast. Consumers were not happy and retailers were
equally dismayed. The few ads that Atari was able to place in magazines were
often stating incorrect release dates because that information changed almost
every day although magazines deadline their issues up to 120 days in advance.

It was in 1983 that Warner Communications handed Jack Tramiel the reins of
Atari. By this time, Atari was often categorized as a household name, but few
households wanted to spend much money on new software and the systems were
lasting forever. No one needed to buy new ones. That, combined with Warner's
obscene spending, amounted to a *daily loss* of over $2 million. Atari was
physically spread all over the Silicon Valley with personnel and equipment in
literally 80 separate buildings; not considering international offices and
manufacturing facilities. Mr. Tramiel took only the home consumer branch of
Atari and forced Warner to deal with the arcade division separately. Within a
few years, Jack took the company public, introduced an innovative new line of
affordable 16-bit computers and released the 7800 video game system.

To accomplish these miracles for Atari, Jack implemented his "business is war"
policies. While people who publicly quoted his statement often felt that policy
meant being extremely aggressive in the marketplace, the meaning actually had
closer ties to Tramiel's experience as a concentration camp survivor. Of the 80
buildings in Sunnyvale, Santa Clara and Milpitas, almost every one of them were
amputated from Atari's body of liabilities. The people, the work, the heritage,
the history were fired or liquidated. Those who survived were unsympathetically
required to fill in the gaps and while most tried, few actually found a way to
be successfully do what a dozen people before them did. Atop the mountain, Jack
pressed with an iron thumb. All Fed/Ex mailings were required to be pre-approved
by one of a handful of people. "Unsigned" purchase orders went unpaid regardless
of the urgencies that inspired their creation. Employees found themselves
spending valuable time trying to find ways around the system to accomplish their
jobs. Many of them lost their jobs for bending the rules or never finding a way
to make things work. As horrible as it all sounds, it actually was the only way
to protect Atari as a company and give it a chance to survive as it did and did
very well.

Jack's introduction of the 16-bit computer was initially hearty in the United
States but it went extremely well in Europe. Europeans were not accustomed to
"affordable" technology and although the Atari computers were not IBM
compatible, it didn't matter because people could afford them. Jacks' private
laugh was that the computers were sold at prices much higher in Europe than
Americans were willing to pay. As a result, most of the machines made were being
shipped to European destinations to capture the higher margin. This enraged the
people in the United States that had been Atari loyalists. While waiting months
for stores to take delivery domestically, international magazines were touting
ample supplies. Those in the know within the U.S. became dismayed. The remainder
never knew Atari was slowly abandoning the value of Atari's name recognition as
it became easier and easier to forget some assuming Atari had long filed for
bankruptcy.

On a technical level, Atari 16-bit computers were designed beyond their time.
For less than $1,000, consumers could enjoy "multimedia" before the phrase was
ever really widely used. The icon-based working environment proceeded Windows
popularity although the essential attributes of the two environments were very
similar. MIDI was built-in and became an instant hit in the high-end music
industry. Tasks were activated and manipulated with a mouse and the system
accepted industry standard peripherals such as printers, modems and diskettes.

With all the genius that went into the technology of the machines, very little
of equivalent genius went into the promoting and marketing the machines. Mr.
Tramiel was the founder of Commodore Business Machines. When he introduced the
PET computer in 1977, Jack discovered he didn't have to call a single
publication. Instead they all flocked to his door demanding an opportunity to
see the product. News magazines. Science Journals. Business newsletters.
Newspaper reporters. They were all there with microphone, camera and pen in
hand. And they kept coming back. Adding a switch, announcing a new 4K
application or signing a new retailer were all big stories the press wanted to
handle.

Today, a new video game announcement may generate a request from any of the
dozens of gaming magazines for a press release, but a lot of costly work has to
be done to assure fair or better coverage. Editorial people are literally
swamped with technical news. Samples are mailed regularly to their attention.
Faxes fly in through the phone lines and e-mail jams up their hard drives. It
takes a lot to grab their attention.

While Atari retained hopes to be successful with the Jaguar, Atari's marketing
people were fighting established standards in the industry with severe
handicaps. Since cartridges (the Jaguar was/is primarily a cartridge-based
system) were so expensive, editorial people were required to return them before
new ones would be sent. Editorial people like to assign review projects. So
finding cartridges they sent out was not always easy to do. Additionally,
reviewers often love their work because they get to keep what they write about.
Regardless, the few magazines willing to cover Atari products were more often
turned away because of a lack of programmable cartridges or any number of other
indecisive barriers. In-store signs and posters were sometimes created, but many
retail chains charge premiums to manufacturers that want to display them. Some
direct mail campaigns were implemented, but Atari often could not afford to keep
those things being advertised on schedule. Therefore, the advertisements were
published and distributed, but the product was not available.

Clearly, Jack's experience with the world beating a path to the door of a
company making a better mousetrap no longer applied. The world had revolved a
few times beneath him and he never noticed. The tactics used to successfully
sell Commodore computers were simply antiquated notions from the past.
Meanwhile, Sony launches the PlayStation with over $500 million in marketing
funds. Today, the PlayStation is considered the most successful next-generation
gaming machine throughout the world. Sony bought the market. Tramiel's Atari
never learned how to do that. Actually, they never could afford it anyway.

After the 1990's got underway, Europe as well as the rest of the world,
discovered that IBM-compatible computers were becoming more powerful and more
affordable. The world always did want computers at home just like in the office
and companies like Dell and Gateway exemplified the industry's trend toward
home-based office computers. As a result, companies like Commodore, Atari and
Next couldn't compete any longer. While the dedicated user base of each of them
felt abandoned by these companies having to leave the computer market, the
inevitable prevailed. Commodore jumped ship, Next changed business goals
completely and Atari invested what they had left in the Jaguar game system. Even
today, Apple is kicking and screaming. As good as Apple was at creating a huge
niche for themselves, they focused more heavily on education. When kids grow up
and get jobs, they want business machines. IBM was always the business standard.

When one examines the history of Atari, an appreciation can grow for how many
businesses and people were a part of the game over the years. Chuck E. Cheese
Pizza was started by Atari's founder, Mr. Nolan Bushnell. Apple Computer was
born in a garage by ex-Atari employees. Activision was founded by Ace Atari
programmers. The list goes on and on.

But for some pathetic reason Atari's final days came and went with no tribute,
no fanfare and no dignified farewells. Why? Where did all the talent go? Where
are all the archives? Where are the vaults? Where are the unpublished games and
where are the originals of those that were? Why has no company stepped forward
to adopt the remaining attributes Atari has to offer? Where are the creditors?
What has happened to all the properties and sites? Where are the databases,
warranty cards, promotional items, notes on meetings, unanswered mail? Who owns
P.O. Box 61657? Who goes to work in Atari's old offices? Where do consumers have
their systems fixed? Who is publishing new games? Who still sells Atari
products? Why are there still a lot of people talking about Atari on-line?

I'm an ex-Atari employee and proud to have been. I'm still an Atari devotee and
proud to be. To me, these are questions which all deserve an answer, but who
will ask them?

The best people to ask these questions are those who have exposure to the
public. If you believe Atari left us without saying goodbye, contact Dateline at
dateline@nbc.com. If you REALLY believe, then send this article to 10 of your
friends in e-mail. AND if YOU REALLY, REALLY believe, mail a few to newspapers
or other news programs. A letter in your own words would be great!

I'd spend money for a thorough retrospect on Atari. Wouldn't you?

Wouldn't it at least be nice to say "Goodbye"?

--Don Thomas
75300.1267@compuserve.com
209/239-3898

Permission is granted to freely reprint this article in it's entirety provided
the author is duly credited.