Engineering Consciousness

Like everybody else on the internet, I have had a marvelous time playing with ChatGPT over the past week. I’ve found it quite useful as a springboard for all sorts of creative tasks, and it’s the first time in a long time that I think we’re on the cusp of a transformative technological change.

In that spirit, I’ve started a new project called Helix. It’s like a modular synthesizer for AI. The goal of the project is to explore an approach to using new AI tools to build self-aware, knowledge-generating AI networks, but it may be practical for other purposes as well.

Self Oscillation

The general hypothesis of the project is: Consciousness, or something resembling consciousness, emerges not from the capability of a single model performing a single task, like GPT or Stable Diffusion, but from the oscillations between the inputs and outputs of different instances of different models performing different tasks.

Our brains, for instance, don’t just do one thing everywhere - we have lots of highly specialized meat, refined by evolution for different specific tasks - tracking fast motion, recognizing objects, recognizing faces, comprehending words, producing words with our mouths, remembering smells, etc. - which all layer on top of each other into a conscious little meatball.

Rather than a brain, the analogy we will be using for the project will be a modular synthesizer. If ChatGPT is a single module capable of making a single wave, Helix is a rack full of modules, all connected to the inputs and outputs of each other, finding new sounds that come from self oscillations and feedback loops of different modules playing into each other.

Proposing and Implementing an Architecture

Here’s a simple idea for a “Brain in a Jar” type of “consciousness” which receives stimuli, generates ideas about those stimuli, criticizes those ideas, feeds that criticism back into the idea generator, and decides whether or not an idea is worth expressing.

┌─────────┐     ┌──────┐
│ Stimuli ├─┐ ┌─┤Input │◄──────┐
└─────────┘ │ │ └──────┘       │
            ▼ ▼                │
     ┌────────────┐            │
┌────┤ Conjecture │◄────┐      │
│    └────────────┘     │      │
│                       │      │
│     ┌───────────┐     │      │
└────►│ Criticism ├─────┘      │
      └─────┬─────┘            │
            │                  │
            ▼                  │
   ┌─────────────────┐         │
   │ Action Decision │         │
   └────────┬────────┘         │
            │                  │
            ▼                  │
       ┌────────┐              │
       │ Output ├──────────────┘
       └────────┘

and here’s that architecture implemented as a Helix graph:

and here’s that architecture running in Helix:

This is a toy example, and I don’t think the system is generating any new knowledge, but I hope it gives an idea of what we’re tilting towards.

You can watch it generate new ideas which evolve over time, and you can change the stimulus and watch those ideas co-mingle. It’s quite fun!

All of the nodes are running as separate processes, and their inputs and outputs are automatically managed by Helix. In this example, all of the “abstract” nodes are using the OpenAI completion models, but there’s no limit to what modules we will be able to use. There are already quite a few modules available, including OpenAI and HuggingFace modules; many more are coming soon, and it’s very easy to create your own. You can also use the web interface to interact with multiple targets in the same session.
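
To make the loop concrete, here’s a rough single-process sketch of the conjecture/criticism cycle in plain Elixir. This is not Helix’s API (in Helix each node is its own process and the wiring between them is handled for you), and complete/1 is just a hypothetical stand-in for a call to a completion model:

```elixir
defmodule BrainInAJarSketch do
  # Hypothetical stand-in for a completion-model call; in Helix this would be
  # routed to an OpenAI (or other) module instead.
  defp complete(prompt), do: "(model output for: #{String.slice(prompt, 0, 40)}...)"

  defp conjecture(stimulus, nil),
    do: complete("Propose an idea about: #{stimulus}")

  defp conjecture(stimulus, criticism),
    do: complete("Propose an idea about #{stimulus}, taking this criticism into account: #{criticism}")

  defp criticize(idea), do: complete("Criticize this idea: #{idea}")

  defp worth_expressing?(idea, criticism),
    do: complete("Idea: #{idea}\nCriticism: #{criticism}\nExpress it? Answer yes or no.") =~ "yes"

  # A few turns of the conjecture -> criticism -> decision -> output loop,
  # feeding each turn's criticism back into the next conjecture.
  def run(stimulus, turns \\ 5) do
    Enum.reduce(1..turns, nil, fn _turn, prev_criticism ->
      idea = conjecture(stimulus, prev_criticism)
      criticism = criticize(idea)
      if worth_expressing?(idea, criticism), do: IO.puts(idea)
      criticism
    end)
  end
end

# With the stub above nothing passes the decision step; wire complete/1 to a
# real model to see ideas actually get expressed.
BrainInAJarSketch.run("a rainy morning in a quiet city")
```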

Graphs are built using Liquid templates of DOT graphs, and there is a simple syntax for templating prompts, all of which is described in more detail in the project README.
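
As a rough illustration, here’s the topology of the “Brain in a Jar” graph above written as plain DOT. The Helix-specific node attributes and the Liquid prompt templates are left out, and the node names are just my own shorthand - the real syntax is in the README:

```dot
// Rough DOT sketch of the "Brain in a Jar" topology above.
// Helix-specific node attributes and Liquid prompt templates are omitted;
// see the project README for the real syntax.
digraph brain_in_a_jar {
  stimuli -> conjecture;
  input -> conjecture;
  conjecture -> criticism;
  criticism -> conjecture;        // criticism feeds back into the idea generator
  criticism -> action_decision;
  action_decision -> output;
  output -> input;                // output loops back around as new input
}
```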

A “Turing Machine” of Consciousness

Alan Turing showed that there is a minimum set of requirements for a machine to be able to implement any computable function, and that this minimum set is actually remarkably small - a tape, a head to read and write to the tape, a state register, and a table of instructions. That’s it. Of course, modern computers are vastly more complex than that, but at the end of the day, that’s all you need for classical computation.
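
For reference, that minimal set fits in a few lines of Elixir. This little simulator isn’t part of Helix; it’s just a sketch of how little machinery classical computation requires - a table of instructions keyed by state and symbol, a tape, a head position, and a state register:

```elixir
# A toy Turing machine, not part of Helix: just the tape, the head, the state
# register, and the table of instructions.
defmodule TinyTuring do
  # rules maps {state, symbol_under_head} to {symbol_to_write, head_move, next_state}
  def run(rules, state \\ :start, tape \\ %{}, pos \\ 0) do
    symbol = Map.get(tape, pos, 0)

    case rules[{state, symbol}] do
      # No matching instruction: halt and return the final tape.
      nil ->
        tape

      # Otherwise write, move the head, switch state, and keep going.
      {write, move, next_state} ->
        run(rules, next_state, Map.put(tape, pos, write), pos + move)
    end
  end
end

# A three-instruction machine that writes three 1s onto a blank tape and halts.
rules = %{
  {:start, 0} => {1, 1, :b},
  {:b, 0} => {1, 1, :c},
  {:c, 0} => {1, 1, :halt}
}

TinyTuring.run(rules) |> IO.inspect()
# => %{0 => 1, 1 => 1, 2 => 1}
```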

So, the question is - is there a similar minimum set of requirements for consciousness/knowledge-generation? Perhaps it requires the ability to recursively identify problems and to propose and discover solutions. Perhaps it requires the ability to observe the world, to remember past worlds, and to compare them. Perhaps it requires the ability to observe the world, explain the world, interact with the world, and re-explain the world after the interaction.

I don’t know yet, but I think that Helix could be a good framework to explore that question.

High Level / Low Level AI

There are different layers of abstraction with which these problems can be approached. One could imagine exploring this question with a massive, multi-modal embedding, like CLIP but for hundreds of modalities at once. Though I’m sure something like this will come along soon, it will likely require massive amounts of data and hundreds of millions of dollars to train.

Instead, we can approach the problem at a higher level - ignore the internal mechanisms of a module and instead concern ourselves with its capabilities and connections. After all, a Turing machine doesn’t care what kind of tape it uses - it could be magnetic, silicon, aquatic, or cells in Conway’s Game of Life running inside of Minecraft running on a cellular phone in the Pope’s pocket.

Perhaps this is also true for consciousness.

Implementation Details and Project Status

Helix is implemented in Elixir and Phoenix LiveView. It works, mostly, but still relies heavily on “prompt engineering”, and it can’t do anything particularly magical yet. It eats through OpenAI credits at a mean clip, it’s Free Software, and it named itself.

I think the next steps are to run small and large experiments with different graph architectures, to build more module types for interacting with the web and computer systems, to build a robust memory storage and retrieval system, to add modules for more capabilities in different modalities (particularly Free and Open Source ones), and to let graphs fine-tune their own models and graphs on the fly.

I’ve got quite a few ideas for systems I’d like to build this way, but I’m really interested in what other people think and what experiments they’d like to try.

If you’re interested in how it goes, please come back to this website where I’ll post updates; if you’re interested in playing with it yourself, please check out the code on GitHub; and if you’re interested in contributing, please use GitHub issues to share your ideas, experiments, and code.

Cheers!