May 17, 2023•887 words
(Part 1 here)
Imagine yourself as a single-celled organism living in a puddle. You have exactly one top-level priority: propagate, so that your descendants are more numerous than you. In service of this goal, you have exactly two primary objectives: 1) don't die, and 2) acquire resources. In service of these objectives, you have a solid handful of secondary objectives: for example, moving toward potential food, moving away from potential predators, responding to nonliving hazards like contaminants in the water, or responding to stimuli like ambient light levels that might indicate meaningful changes in the environment. But each of these secondary objectives contains its own important implementation details. How do you identify potential food? What response should you attempt to a particular contaminant? Which goal should take short-term priority in any particular moment?
If you think about it for a bit, even a humble puddle is a chaotic environment. For a tiny organism living in such a place to survive and thrive, it must be capable of solving poorly defined problems with unquantifiable parameters, based on ambiguous or incomplete information, interpreted through sensory organs that multicellular creatures like humans would consider impossibly rudimentary. And yet each puddle is full of a vast array of microorganisms, linked together in an elaborate food web of production and predation. Some of them have even been confirmed by scientific study to learn from experience, despite having nothing that could be called a "nervous system"[1].
Now, imagine writing[2] an algorithm that could handle all these tasks, given only the information available to a single cell's sensory organelles. Could it be done? Perhaps even more importantly, could it be done while executing on hardware a few hundred micrometers across (at most), which lives in a puddle and consumes even smaller puddle-dwelling creatures for energy?
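To make the thought experiment concrete, here is a deliberately naive sketch of what a rule-based "cell controller" might look like. Every sensor name and threshold below is invented for illustration; the hard problems from above (how do you actually identify food? how do you weigh competing goals?) are exactly the parts this kind of fixed rule list cannot handle.

```python
# A hypothetical, hand-tuned priority list for our imaginary puddle dweller.
# All sensor keys and threshold values are made up for this sketch -- real
# sensory input is far more ambiguous than these clean numbers suggest.

def choose_action(senses):
    """Pick one action from a fixed priority list, given sensor readings
    (a dict of sensor name -> signal strength in [0, 1])."""
    # Primary objective 1: don't die. Hazards override everything else.
    if senses.get("toxin", 0.0) > 0.5:
        return "move_away_from_toxin"
    if senses.get("predator", 0.0) > 0.3:
        return "flee"
    # Primary objective 2: acquire resources.
    if senses.get("food_gradient", 0.0) > 0.1:
        return "move_toward_food"
    # No strong signal: drift and keep sensing.
    return "wander"

print(choose_action({"toxin": 0.9, "food_gradient": 0.8}))  # move_away_from_toxin
print(choose_action({"food_gradient": 0.4}))                # move_toward_food
```

Note how brittle this is: the thresholds are arbitrary, the priority ordering is frozen at design time, and any situation the designer didn't anticipate falls through to "wander".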
Of course, a puddle is just a smaller-scale version of any natural environment. While human attention tends to go to more glamorous habitats like savannahs or jungles or open oceans, any natural ecosystem is a fully chaotic environment, full of parameters that cannot be measured or accounted for in advance, and hidden factors that may (or may not) be significant to any specific actor within the system. And yet not only does life survive within all these environments, it thrives in a staggering variety of permutations, from the humblest photosynthetic cell to the proudest apex predator. This is because every one of these life forms evolved, through ruthless selection by the survival demands of its environment, to interface with that environment in all its ambiguity and resistance to quantification.
Fortunately, every existing life form is able to handle the demands of its environment with native functionality. A brain, or a nervous system, or even whatever unknown mechanism a single trumpet-shaped cell uses to learn and make decisions, is itself a chaotic system. The parameters which define its behavior include DNA, RNA, the effects of myriad chemicals[3], the specific physical configuration of the decision-making structures, temperature, electrical interference, and many others at which modern research only hints. The point I'm trying to make here, without being too detailed or too vague, is that the parameters which determine the state of any biological decision system are, much like the natural environment in general, ill-defined and unquantifiable, which is exactly why living creatures can behave in such startling ways, and handle all manner of external chaos.
Contrast this with any digital computer system; the whole purpose of such a thing is to act in a deterministic and predictable way. Though digital computers can be (and today's certainly are!) both complicated and complex, they are designed and built to reduce chaos, which is treated as a bug rather than a feature. This is a good thing! When someone executes an algorithm, the same inputs give the same result, unless something is deeply and catastrophically wrong. Computers' existence as deterministic state machines allows them to accomplish incredible things that have brought great benefit to humanity[4], but deterministic state machines, by their very definition, are not chaotic systems.
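A toy example makes the contrast vivid. The state machine below (states and events invented for illustration) has a fixed, discrete set of states and a fixed transition table; fed the same sequence of inputs, it always lands in the same state. That repeatability is the design goal, not an accident.

```python
# A minimal finite state machine: discrete states, a fixed transition table,
# and no hidden parameters. Unknown (state, event) pairs leave the state
# unchanged. States and events here are arbitrary illustrations.

TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
}

def run(events, state="idle"):
    """Feed a sequence of events through the machine; return the final state."""
    for event in events:
        state = TRANSITIONS.get((state, event), state)
    return state

# Identical inputs always give an identical result.
a = run(["start", "pause", "start", "stop"])
b = run(["start", "pause", "start", "stop"])
print(a == b, a)  # True idle
```

Run it a thousand times on a thousand different machines and the answer never varies; that is exactly the property a biological decision system lacks.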
From the other angle, it should be apparent that biological nervous systems[5] are not deterministic state machines, and they do not run predefined algorithms. One can quibble at the philosophical level about whether a brain is deterministic (given exactly the same starting molecular and environmental configurations, is it guaranteed to arrive at the same decisions?), but without the supernatural ability to replicate starting conditions down to the molecular level, there's no way to settle that question empirically. But brains certainly aren't state machines, in the sense of having a finite and discrete number of possible states of being. Neither do they run algorithms, except by a far-overstretched metaphor.
With these facts established, Part 3 will focus on the ability of algorithms to handle chaotic environments (spoiler alert: not well!) and what sorts of tasks we can and can't expect algorithms to accomplish in the future.
[1] Or, for that matter, a "second cell".
[2] Or, if you're a machine learning person, "training".
[3] Some subtle, and some (as anyone who has ever taken a few dozen micrograms of LSD can attest) the exact opposite.
[4] And kept me profitably employed, which is a nice side benefit.
[5] Or "biological things that make decisions while not being networks of specialized nerve cells".