The Cambrian Period of AI

I’ve noticed that AI twitter is talking a lot about evolution lately, and for good reason. Comparing AI development to an evolutionary system lets us do a kind of dimensional analysis 1 and take a fuzzy glimpse into the future. It can potentially help us estimate scales and behavioural trends of AI, in the way G.I. Taylor estimated the energy released by an atomic bomb (confidential information at the time) from a series of pictures published in a magazine.
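Taylor’s trick is worth seeing once. The sketch below reproduces his back-of-the-envelope estimate; the radius and time are approximate values as widely reported for the Trinity photographs, and the dimensionless constant is assumed to be roughly 1.

```python
# Taylor's blast-wave estimate: dimensional analysis says the only way to
# combine air density rho (kg/m^3), fireball radius R (m), and time t (s)
# into an energy (kg*m^2/s^2) is E ~ rho * R^5 / t^2.
rho = 1.2     # density of air, kg/m^3
R = 140.0     # approximate fireball radius, metres (read off the photos)
t = 0.025     # time after detonation, seconds (printed on the photos)

E = rho * R**5 / t**2      # joules, up to a dimensionless constant ~1
kilotons = E / 4.184e12    # 1 kiloton of TNT = 4.184e12 J

print(f"~{kilotons:.0f} kt")  # prints "~25 kt"
```

That lands within a factor of two of the actual ~20 kt yield, using nothing but published photographs and units.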

This is the first post of what I hope to be a series about looking at AI development as an evolutionary system. Thoughts and feedback are always extremely welcome and encouraged.

Hacker News thread

Life on Earth - Punctuated Equilibrium

History is a constant race between invention and catastrophe. Education helps but it’s never enough. You also must run. - King Leto II, God Emperor of Dune

Before Niles Eldredge and Stephen Jay Gould proposed punctuated equilibrium, it was believed that evolution proceeded in a gradual and incremental way. Everyone is familiar with the idea: tiny mutations accumulate over millions of years, slowly and surely leading to a variety of specialized, “fit” species.

A problem that puzzled evolutionary biology up to that point was that often a species would appear somewhere in the fossil record and remain mostly unchanged for millions of years, before suddenly and rapidly adapting into dramatically different new forms. These bursts were hard to reconcile with the slow and linear change predicted by gradual evolution.

Punctuated equilibrium instead takes that as the norm, and says not only that it is expected, but that long periods of relative stasis “punctuated” by periods of sudden disruption and restructuring are fundamental features of how evolution works.

This pattern recurs throughout the fossil record, but everyone’s favourite example is the Cambrian explosion. After its formation, the Earth spent a long time doing… not much. After 4 billion years of Earth history, the upper limit of complex life was in the vicinity of multicellular filter feeders. Near the end of that period, over the course of about 20 million years (~0.5% of history at that point), an unprecedented diversification occurred, spurred by rising oxygen levels.

Aerobic respiration had existed for some time before the Cambrian explosion, but abundant oxygen made it scalable. Aerobic respiration is roughly 18x as efficient as anaerobic respiration (on the order of 36 vs. 2 ATP per glucose molecule), which makes it an excellent strategy, provided there’s enough oxygen. Past that critical mass of oxygen, highly complex, energy-intensive life makes economic sense: relying on oxygen becomes sustainable.

Within that short period, the basic image of today’s “modern” phyla suddenly appeared, including arthropods, vertebrates, mollusks, and other things you’ve heard of. During the explosion, several fundamental innovations were made that were unlike anything previously seen, making life on Earth a much more interesting thing. To name a few:

  1. hard body parts: shells, skeletons
  2. sensory organs: basic eyes, vibrational, and chemical sensors
  3. widespread agency: swimming, burrowing, predation

What a time it was to be alive.

The Invention of Invention

Evolution, natural selection, and punctuated equilibrium have implications beyond the emergence of molecular life. The trick of looking at a system and drawing parallels to evolution is called Universal Darwinism. Lots of things act like an ecosystem, and our knowledge of molecular life has taught us a lot about them. Examples include culture, music, languages, memes, and lots more. 2

If you squint at the economy, you will find that it too looks very much like an ecosystem. Inside a complex network of people, ambition, products, services, and needs, you’ll see an evolutionary arms race where purchasable goods evolve and compete to survive in a landscape of human desires.

Without thinking too hard, notice how on the surface, the proliferation of coffee mugs looks a lot like the proliferation of frogs. Coffee mugs have, over time, taken on different sizes, shapes, colours, etc., each one adapting itself to a particular variation of needs and personal taste. Likewise, different frogs have adapted their requirements, camouflage, and abilities to live in a specific variation of wet environment. An unremarkable, standard, “professional-looking” mug is well suited to conquer the hot-beverage niche of an office building in the same way that a leopard frog is well suited to occupy generic still water in North America.

Things weren’t always so complicated. In the beginning, our product and service economy was pretty much stone knives and hammers, and setting things on fire. Then in Cambrian fashion, over a tiny percentage of human history, mountains of brand new stuff appeared suddenly and out of nowhere. In The Origin of Wealth 3, Eric Beinhocker masterfully puts it:

To summarize 2.5 million years of economic history in brief: for a very, very, very long time not much happened; then all of a sudden, all hell broke loose. 4

The economy started the way complex life did: just as there was an oxygen threshold that catalyzed scalable aerobic respiration, there seems to be a cognitive threshold for abstraction that, once crossed, enables systematic invention. As our intelligence improved, we crossed a tipping point in excess cognitive resources, gaining the acuity to notice ways we can manage our surroundings and systematically shape them to our needs. This marks a transition from living by preserving survival knowledge to living by creating survival knowledge. An expanse of possible tools and inventions unfolds, leading to a complex economy.

Each of us now owns somewhere in the neighbourhood of 300,000 things 5, for a plethora of significant and mundane reasons, yet every one of them exists for the same underlying reason: because it can. Through a multitude of channels, each of them persuades someone, somewhere to hand over money, while being cheap enough to produce that it turns a profit.

Ultimately, fuelled by a tipping point in human ingenuity, we are presented with an astounding diversity of choices. Some are altruistic and incentivize you out of goodness; you can choose to buy a nutritious head of broccoli. But if you aren’t paying attention, a well-placed ad will make you buy a Big Mac instead. Both fill the same niche, but by very different strategies.

The overall pattern is that once enabled, an evolutionary system will fill every possible survivable niche in unpredictable and creative ways. The moment a product is profitable via any strategy, a market spontaneously appears to make people buy it. Similarly, life will ruthlessly seep into any space as soon as it’s even remotely habitable, by any means available. Then occasionally, a threshold is crossed or an innovation is made, the evolutionary light cone 6 suddenly widens, and the game becomes vastly more complicated and interesting.

Musical Chairs

This brings us finally to AI at large: a network of mathematics, engineering, science, philosophy, hardware, software, and interfaces to the public. Why is it here and what is it doing?

Evolution under punctuated equilibrium is like a game of musical chairs. In an equilibrium period, there is a limited number of chairs, and survival means securing whatever seat you can. All you can do is make small improvements to get the jump on your neighbour, like a leopard frog with better camouflage. Each punctuation spawns a whole new game with many, many more chairs.

All evolutionary systems are based on survival. In the economy, getting bought is key. In the pursuit of AI, looking “intelligent” is key. As new ideas and techniques emerge, it’s the ones that look and act the most like us that get hype and funding and inspire future variations of themselves.

This is a pattern you can see in the history of AI springs and winters. Springs arrive when a system demonstrates a cool new behaviour that wasn’t possible before. A theorem-proving program, an analog digit recognizer, superhuman chess. Investors get excited and money flows. Winter rolls in when each technique in turn fails to generalize, doesn’t look that intelligent anymore, and investors lose interest.

The AI spring in the 2000s was slightly different in that new systems, while not “intelligent” to most people, were just good enough to produce economic value. They were good enough to find a niche and hang on. Dumb AI like regression models and simple neural networks were just useful enough to maintain a flow of funding by clinging to economic niches, like price forecasting or better voice recognition 7.

Funding persisted because these things were useful. It persisted long enough for the field to get lucky and reach a significant innovation. Large language models mark a key punctuation in AI. They represent not just a step along a gradient, but a whole new game with a new set of rules. New applications are proliferating into what seems to be a very deep space of possibilities. It’s like how eyes and locomotion freed life from the sea floor and gave it the entire ocean.

It seems that language models could be powerful and general enough to initiate a sustained reaction; that they won’t just fill niches but create them. In the time following the Cambrian explosion, it became possible for organisms to live entirely away from the sea floor. There was a sustained burn where enough organisms could find energy and resources away from the ground that they created a whole new, largely independent ecosphere for themselves near the surface, freed from the constraints of bottom-dwelling life. In a similar way, uses of language models often create more living spaces for language models. Want to moderate your language model? You can do that with another language model. It’s like how, if enough small fish swim up from the sea floor, there’s an opportunity for a predatory fish to live entirely near the surface by feeding on them. Or how buying an iPhone makes you want a case, AppleCare, this thing, and many other items that you wouldn’t otherwise buy.
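The “model moderating a model” pattern fits in a few lines. The `generate` function below is a hypothetical stand-in, not a real API; in practice it would be a call to an actual language model, but a trivial stub keeps the shape of the pattern visible and runnable.

```python
# One LLM moderating another. `generate` stands in for a real model call;
# this stub just pattern-matches so the example is self-contained.

def generate(prompt: str) -> str:
    # Stand-in for a language model. A real implementation would send
    # the prompt to an actual model and return its completion.
    if prompt.startswith("MODERATE:"):
        text = prompt[len("MODERATE:"):]
        return "FLAG" if "rm -rf" in text else "OK"
    return "Sure, here is a haiku about frogs."

def moderated_reply(user_message: str) -> str:
    draft = generate(user_message)            # first model drafts a reply
    verdict = generate("MODERATE:" + draft)   # second model judges the draft
    return draft if verdict == "OK" else "[withheld by moderator]"

print(moderated_reply("Write me a haiku"))
```

One model’s output becomes another model’s input, which is exactly the niche-creating dynamic described above: every deployed model is a new habitat for a second one.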

What this could look like in the case of language models is hard to say. A very extreme and fantastical image is that AI eventually liberates itself from economic constraints, creating an independent “econosphere” of AI agents, analogous to the TechnoCore in Dan Simmons’ Hyperion Cantos 8. In that timeline, AI systems develop sufficient agency to divorce themselves from humanity and continue their operations in parallel, independent of human concerns, involving themselves only where they judge it instrumental to their goals.

More practically, there will be a digital and physical expansion of niches for AI. Digitally, surprising new applications are emerging through the realization that “generating text” can be used for more than babbling; that it’s an engine. Morphologically, language model applications will move beyond work that ostensibly has to do with generating text, such as chatbots and code completion. This is hinted at by use cases like Google’s PaLM-E, where a language model is used to plan out complex action sequences to be executed by a physical robot. Similarly, Microsoft’s Kosmos-1 plans and executes sequences inside of user interfaces. Tooling like LangChain is evolving to facilitate these “chains of thought” and exogenous interactions.

On the other side, steady improvements in hardware and shockingly fast efficiency gains in language models suggest that they will also proliferate through physical space. As of now, language models like GPT-3 and 4 dwell in the darkness of Azure data centres. Performant size reductions, though, like Meta’s LLaMA, show that hardware requirements will become less stringent. Georgi Gerganov’s 9 llama.cpp also proves that it is already feasible to get them up and running on ordinary computers. Hell, someone got LLaMA 7B to work on a 4GB Raspberry Pi.

Taken together, things will look interesting. It seems inevitable that language models will play a rapidly increasing role in personal computing and seep into many surprising places. We are more or less a handful of engineering problems away from having them right on our iPhones, opening up a whole other realm of opportunities. In all probability, they will even work their way into mundane technology. Soon, your Roomba will not only avoid dog poo, but will also inform you that those Doritos are just a little too close to the edge of the coffee table, and that your terrier will take interest in them when he wakes up.

Conclusions

An evolutionary competition to live resulted in an indescribable variety of living things, existing in all kinds of different places, through all kinds of different means. An evolutionary competition to make people buy things resulted in people owning a ridiculous array of things, for truly uncountable purposes and reasons. An evolutionary competition to satisfy whatever humans refer to as “intelligence” results in…?

Tying it together, a big diversification is likely coming, and it will almost certainly be extremely multifarious in its objectives. We know that evolution can and will reach its tendrils into any space and domain that it can, by any means that it can. What makes it scary is that “intelligence” is colourless; it is a means without a defined end 10. AI will find many homes in making us productive, well, and fulfilled. It will also find homes in its deep capability for manipulation, sabotage, and warfare. And eventually, it might define new homes all on its own. There’s no turning back from here.

It will present to us, sometimes forcefully, many new and often antithetical abilities and choices, just as nature creates life-sustaining resources and life-ending predators, and the economy sells us life-enhancing products while manipulating us into buying life-deteriorating ones.

Most importantly, emergence will play a dominant role. Predicting specific aspects of the future AI landscape is somewhat like asking, 4 billion years ago, “what will happen if you give proto-plants a little more oxygen?”. Many unforeseeable morphologies, abilities, and niches will arise through mysterious opportunities created by a highly strange and dynamic ecosystem.

Some of them we will see coming, and others we won’t. Stay safe!


  1. Dimensional analysis is what physicists do when they have only a surface-level understanding of something. You look at the raw quantities that drive the system you’re trying to describe, and work out how they have to fit together in order to make sense. Often, this can tell you right away how quantities will scale as the system grows.↩︎

  2. See Universal Darwinism - Wikipedia. Rabbit holes abound.↩︎

  3. An excellent book that introduced me to countless things that I cherish↩︎

  4. Origin of Wealth, page 11↩︎

  5. I haven’t researched this number thoroughly. You could estimate it with dimensional analysis ;)↩︎

  6. In special relativity, your light cone is the collection of all future times and places that are physically reachable from here and now. By evolutionary light cone I mean the breadth of viable evolutionary trajectories.↩︎

  7. Remember around 2013 when Siri became somewhat useable?↩︎

  8. These books are monumental, and must-reads for AI practitioners that enjoy sci-fi. I have a post about this in the works.↩︎

  9. A name that history will remember no doubt↩︎

  10. This is evidenced by the Waluigi effect. From LessWrong: “After you train an LLM to satisfy a desirable property P, then it’s easier to elicit the chatbot into satisfying the exact opposite of property P.” Patterns like this will likely persist as models grow in ability. One could imagine wide consequences if a model with greater capability and agency were to roll out the way Bing did.↩︎



Date
March 27, 2023