programming at the end of the world
A small group of engineers is thinking about software in a world where code seems like the least important thing to consider.
Well, it’s here. The apocalypse. Doomsday. It’s all gone to shit and the world now lies in ruins. Forget scrolling through TikToks as you laze on the couch. You better get out there and learn how to farm because carrots, toilet paper, and hand sanitizer are now the most valuable currencies on what’s left of our planet. The last thing you’re gonna be doing is writing software. Or is it?
According to a profile in Wired, there’s a small group of people who seriously asked whether there would be a need for computers and software if the world as we know it ends in a cataclysm and humanity needs to rebuild. Their answer is yes, which may seem like a pretty biased and absurd conclusion, but their reasoning is solid.
Think about it this way. If we actually want to rebuild a modern society, we can’t be on the farm all day, worrying about who’s going to water the plants and if the solar panels or wind turbines are aligned correctly. We’re going to need to automate a lot of things so we have the bandwidth to plan, explore, form and organize communities, and work on longer term projects. As weird as it sounds at first blush, automation could be our best friend in a post-apocalyptic world.
Regulating power, making sure plants get the right amount of water, and detecting leaks in a hydroponic system’s lines or out in the field: all of these sound like luxuries. But in a world where the power comes from small generators or renewable micro-grids, you have no choice but to try and conserve and optimize every watt, seed, and drop of water as precisely as possible. Stretching your rations and supplies another day is a victory because that’s one more day you can survive and get more resources.
If a Raspberry Pi sipping two watts of energy can make that happen by running fifty lines of code, it’s now a requirement instead of an indulgence of the before times. So, with an eye on that future, a handful of engineers created new operating systems like Collapse OS and Dusk OS, running low level languages like Forth. No, you wouldn’t be able to use them to create the next Google, or Facebook, or ChatGPT, but you can make reliable, simple, efficient automations for microcontrollers.
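For a sense of scale, here’s roughly what those fifty lines might look like, written in garden variety C++ rather than Forth or anything from Collapse OS. This is only a sketch: the sensor and valve helpers are made-up stand-ins for whatever library your salvaged board actually ships with, and the thresholds are numbers you’d tune in the field.

```cpp
// A rough sketch of a "keep the plants watered" loop for a low power board.
// The hardware hooks below are placeholders; swap in the real sensor and
// GPIO calls for whatever hardware you scavenged.
#include <chrono>
#include <cstdio>
#include <thread>

double readMoistureLevel() { return 0.4; }  // placeholder: 0.0 dry, 1.0 soaked
double readFlowRate()      { return 0.0; }  // placeholder: liters per minute
void   setValve(bool open) { std::printf("valve %s\n", open ? "open" : "shut"); }

constexpr double kDryThreshold = 0.30;  // start watering below this
constexpr double kWetThreshold = 0.55;  // stop watering above this
constexpr double kMaxIdleFlow  = 0.05;  // flow with the valve shut means a leak

int main() {
    bool valveOpen = false;
    while (true) {
        const double moisture = readMoistureLevel();
        const double flow     = readFlowRate();

        // Water moving while the valve is closed points to a cracked line.
        if (!valveOpen && flow > kMaxIdleFlow) {
            std::printf("possible leak: %.2f L/min with valve closed\n", flow);
        }

        // A little hysteresis so the valve isn't constantly chattering.
        if (!valveOpen && moisture < kDryThreshold) {
            valveOpen = true;
            setValve(true);
        } else if (valveOpen && moisture > kWetThreshold) {
            valveOpen = false;
            setValve(false);
        }

        // Check once a minute and sleep in between to keep the watts down.
        std::this_thread::sleep_for(std::chrono::minutes(1));
    }
}
```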
But in reading the documents and rationalizations for the designs of these languages and systems, a question burned in the back of my mind. Is all this really necessary in the post-apocalypse? What’s so unique about Forth and a doomsday OS?
when opinions become binding code
In computer science there are tools referred to as “opinionated” and they’re usually a source of much controversy. Opinionated software and architectures are exactly what they sound like: tools that put their foot down and say that this is the way you need to be doing things, and we will make this approach easy and reliable, but if you’re going to try and do something differently, you’re either on your own, or we will make it an enormous pain to bypass our prescribed methodology.
A perfect example of this is how Go, the language developed and used by Google and a number of startups, handles errors versus how they’re handled in languages like Kotlin, which is used for a lot of Android apps, and C#, which is typically used to build large enterprise services.
Don’t panic, we don’t need to look at or analyze code for this, or debate which way is better. This is a safe, layperson friendly space and I’m not going to subject you to an impromptu coding lesson or go into type theory. Actually, now that I think of it, trying to spring type theory on unsuspecting readers may be a low grade war crime.
But back to opinionated tooling. In languages like Go, an error is something you as a coder manage and handle with every call. In languages like C#, errors are things that happen, and you catch and handle them at the right spot in your code. Neither language really lets you do it the other’s way. You could emulate Go’s approach in C# but not for all errors, and you can’t catch errors in Go the way C# prefers you do.
The reason is that the creators of Go believe that if you throw errors, you’ll lose track of their root causes for really complicated reasons we don’t need to go into. Suffice it to say that this is a valid concern. Meanwhile, the team behind C# didn’t want you to have to write explicit error handling for every problem because for 99% of errors, you were just going to log them and return a “whoops, sorry about that!” message to the user, so just catching and logging was seen as a better way to go.
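I did promise not to turn this into a coding lesson, so feel free to skim past this, but if you’re curious what the contrast roughly looks like, here’s a tiny, skippable sketch. To keep this piece to one language, it’s written in C++ rather than Go or C#, and the parsing functions are made up for illustration: the first style hands errors back as values you check on every call, the second throws them and catches them in one place.

```cpp
// A loose sketch of the two error handling philosophies, in neither Go nor C#.
#include <cstdio>
#include <optional>
#include <stdexcept>
#include <string>

// Style 1: the Go-ish way. Every call hands back a possible failure
// and you deal with it right there, at the call site.
std::optional<int> parseReading(const std::string& raw) {
    if (raw.empty()) return std::nullopt;   // the "error" travels as a value
    return std::stoi(raw);
}

void goStyle(const std::string& raw) {
    auto value = parseReading(raw);
    if (!value) {                            // explicit check on every call
        std::printf("couldn't parse reading\n");
        return;
    }
    std::printf("reading: %d\n", *value);
}

// Style 2: the C#-ish way. Problems are thrown, and somewhere higher up
// a single catch block logs them and apologizes to the user.
int parseReadingOrThrow(const std::string& raw) {
    if (raw.empty()) throw std::runtime_error("empty reading");
    return std::stoi(raw);
}

void csharpStyle(const std::string& raw) {
    try {
        std::printf("reading: %d\n", parseReadingOrThrow(raw));
    } catch (const std::exception& e) {
        std::printf("whoops, sorry about that! (%s)\n", e.what());
    }
}

int main() {
    goStyle("42");
    goStyle("");
    csharpStyle("42");
    csharpStyle("");
}
```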
Forth and Collapse OS are similarly very opinionated tools, and other comp sci people who analyzed them and their coding philosophies aren’t sure how valid those opinions really are. We may already have the tools to accomplish all the things they want, and to do it faster and more easily.
In the world imagined by the creators of these doomsday coding tools, we’re going to be worried about mass producing the simplest CPUs we can, and we would need an operating system that can use itself to adapt to whatever other CPU types we find or develop later. But the question is whether their views of which CPUs would be a better bet for a post-apocalyptic future are correct.
back to the programming stone age
The creator of Collapse OS, Virgil Dupras, believes that the world will end in five years in a slow, rolling collapse of modern supply lines and the financial system, and the logic in his manifesto for the operating system would make Eeyore say “dude, you should see someone about this doomsday spiral.”
He’s also convinced that we would no longer be able to build even very simple machines like the aforementioned Raspberry Pi (yes, there’s a reason it’s the computing device I used in my earlier example), and therefore we’ll need to start from near scratch because nothing else will be viable by the early 2030s. Hence an operating system that lives the life of a digital hermit and a programming language which micromanages every single address in memory and position on the stack, like assembly code.
And yet, this may be more of a liability than an asset. Yes, a self-hosting OS that uses itself in a way that can be ported to any other chipset is very neat. Yes, a language for manipulating every byte in every memory address on the stack would make your code very efficient if you know what you’re doing. But there’s a reason we no longer do that unless it’s absolutely necessary.
We can already port Linux to other chipsets. Languages like Rust and Go can create small, fast binaries that sip memory but come with much higher level logic that would be much easier for a casual user to pick up. Your typical Raspberry Pi is far from the most complex computing device ever created and is, in fact, designed for those trying to run tiny, hyper-efficient rigs. And we already have C++ for embedded programming and manuals on exactly how to do it.
Managing every byte in every memory address of a stack is very difficult and makes it extremely easy to crash and burn instead of gracefully failing and recovering when your code encounters a problem. It’s also very involved and means the software will take a long time to write and test. Again, this is not a new idea. Assembly languages were a thing and still are, and we now have technology that manages memory better than we can. These are not lessons we need to re-learn as the world collapses.
If you’re trying to desperately stretch your supplies with automation, you probably do not want to learn new, extremely low level tooling instead of just using an existing low power device that can run a stripped down Linux instance.
Then you can just write your code in a popular language you probably know, or get a microcontroller you can configure with C++ or very C-like code already being used for this exact purpose, something like the Arduino, boards which are also cheap, plentiful, and so easy to set up and configure that they’re frequent learning tools for robotics enthusiasts.
“there are now fifteen competing standards”
Why go out of your way to spend precious weeks and months learning arcane tooling that’s ostensibly designed for the rigors of the post-apocalypse when there are tools already there, already known and popular, and so abundant that the only way they would no longer exist or be impossible to manufacture is if a global network of agents made a concerted effort to systematically destroy every one of these devices?
These doomsday tools have fallen victim to one of the classic blunders. No, they are not currently involved in a land war in Asia. At least not yet. What they did instead was to reinvent the wheel. Much like in that classic XKCD cartoon about competing standards, the toolset assumes it knows how to solve problems in a way others don’t, then doubles down on that decision until its creators have built an entire ecosystem.
This is why I don’t see post-apocalyptic programmers learning a new language for an operating system that’s not exactly popular when they’re just trying to quickly rig up a way to monitor basic power fluctuations and irrigation.
They’ll make sure to swing by a hollowed out electronics store to pick up a starter kit for a tiny micro-server, or a microcontroller, find a guide in a library, and get to work. Instead of figuring out what’s in memory at position 0x400FA0, they’re going to use one of the C++ methods in the tutorial to display a single red, yellow, or green light if a sensor detects a certain threshold.
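To make that concrete, here’s roughly what such a tutorial sketch tends to look like, written against the standard Arduino functions. The pin numbers and threshold values are placeholders you’d adjust to whatever sensor and LEDs you actually wired up.

```cpp
// An Arduino-flavored sketch of the "one light per threshold" idea.
// Pin numbers and thresholds are arbitrary; match them to your wiring.
const int SENSOR_PIN = A0;   // analog input from the salvaged sensor
const int GREEN_PIN  = 2;
const int YELLOW_PIN = 3;
const int RED_PIN    = 4;

const int WARN_LEVEL  = 400; // below this, all is well
const int ALARM_LEVEL = 700; // above this, go deal with it now

void setup() {
  pinMode(GREEN_PIN, OUTPUT);
  pinMode(YELLOW_PIN, OUTPUT);
  pinMode(RED_PIN, OUTPUT);
}

void loop() {
  int reading = analogRead(SENSOR_PIN);   // 0-1023 on a stock Arduino

  digitalWrite(GREEN_PIN,  reading < WARN_LEVEL ? HIGH : LOW);
  digitalWrite(YELLOW_PIN, reading >= WARN_LEVEL && reading < ALARM_LEVEL ? HIGH : LOW);
  digitalWrite(RED_PIN,    reading >= ALARM_LEVEL ? HIGH : LOW);

  delay(1000);  // once a second is plenty; no point burning cycles
}
```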
On top of that, microcontrollers consume milliwatts of energy, and the low power CPUs we discussed idle at around two watts and spike at around five. The differences amount to a rounding error, and there’s nothing inherently special in Forth or Collapse OS which is going to turn a chip consuming 1.7 W into one which only needs half the power. They may be able to get the code to run faster on that chip, but only after cycles of trial and error until you figure out how to set up the memory management. Cycles you, by the way, may not have the time or energy to run.
So, yes, wondering how to run code in extreme environments, whether in outer space or after a complete technological collapse brought on by nuclear war or civilizational implosion, is very popular. But the reality is that we’ll most likely try to scavenge and optimize existing, well known, and thoroughly documented tools instead of learning new paradigms and cracking open an emergency post-apocalypse programming kit.