
An attempt to write a visionary paper to address AGI

Purple

Outlining a model to implement AGI

Abstract

This work speculates about a system, or set of systems, that allows the emergence of behaviors we consider intelligent. Although it may read more like a sci-fi story, recent achievements and the research momentum around AGI almost guarantee that many intelligent behaviors are in fact emerging from the tools we build.

- The true value of science fiction to me is that it permits speculation (Asimov)

Introduction

Intelligence has various definitions in different domains1: emotional, social, street smart, etc. Which one do we mean by general intelligence?

DeepMind has come to the conclusion that to build intelligence, "Reward is enough"2. OpenAI is more focused on scaling transformer models, an approach summarized as "Gradient descent can do it"3. And, referring to the Turing test, DeepMind, OpenAI, Anthropic, Meta and others have built systems that are indistinguishable from humans in conversation to a remarkable extent. Yet it can be convincingly argued that these are not AGI4.

Current DL models (and LLMs by extension) operate within this simplified scenario:

DL: I can learn best on your parameters.
Scientist: But that's overfitting; I want you to learn new things.
DL: Show me the new things, I will learn them too.
Scientist: What if I don't know what I don't know?
DL: !

In Persian, this problem is called "Compound Ignorance" or "Compound Nescience" (جهل مرکب).

All the work in unsupervised learning and reinforcement learning is an effort to build an agent that can learn by itself and thus resolve Compound Ignorance. Other research areas such as active learning5, RLHF and similar techniques are attempts to fill this gap in the problem modeling. The root cause of this gap lies in the way we design benchmarks and train our models to beat them. Our neural network designs depend on the mathematical evaluation of a loss function. Any approach based on this kind of evaluation will end up in the DL scenario above and won't be able to solve Compound Ignorance.

If we want a new type of intelligence, we need another type of evaluation.
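To make the gap concrete, here is a minimal sketch of the kind of loss-driven training loop described above, using plain NumPy; the dataset, model and loss are illustrative placeholders, not anything from this proposal. The only notion of "better" available to the loop is the fixed loss it was given up front, which is exactly why it cannot reach beyond what its designer already knew to measure.

```python
import numpy as np

# A fixed dataset and a fixed loss: everything the model can ever be
# rewarded for knowing is encoded here before training starts.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

w = np.zeros(3)
lr = 0.1
for step in range(200):
    pred = X @ w
    loss = np.mean((pred - y) ** 2)        # the only definition of "better"
    grad = 2 * X.T @ (pred - y) / len(y)   # follow that loss downhill
    w -= lr * grad

# The loop can only improve on what the loss already measures; it has no
# mechanism for discovering what it doesn't know it doesn't know.
print(f"final loss: {loss:.4f}")
```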

Design concepts

I introduce the principal elements and core concepts before proposing the initial design, but I don't yet have a concrete implementation of these principles and concepts:

Principle 1. I define intelligence as a subjective and relative phenomenon. It's our perception of intelligence that matters. The base idea comes from the Turing test, and I expand that definition to a broader one: a system is intelligent if we perceive intelligence from it.

Alternatively: A system is not inherently intelligent; it's intelligent if we sense that it is.

Principle 2. Explainability: Designing AGI is hard by itself, and embedding explainability from the beginning adds unnecessary complexity. The fact that we humans, as a benchmark of intelligence, struggle to explain why we do something is the core of this unnecessary complexity. Also, the non-deterministic nature of our decision-making process is a key to intelligence (e.g., even the most sophisticated humans might do unreasonable things). On the other hand, I believe that if you don't see unexpected behavior from a system, there's no curiosity in that system's behavior. And if curiosity is important for intelligence, then the lack of unexpected behavior means there's no intelligence (Artificial Curiosity). So I will exclude explainability from this research and propose using control mechanisms that help us stay safe, rather than requiring a white-box explanation of the AI's decision-making process.

Concept 1. domain-set: Our intelligence is limited to "domain-sets", meaning we can't apply the same amount of intelligence to everything in life. Someone might have exceptional muscle memory for playing the piano but not be very sharp at understanding chemistry, or not even as good with another instrument like the guitar. A high-IQ person might be a distinguished scientist yet normally be unable to perform complex surgery.

Characteristics of domain-sets:

  • Everyone can introduce their own domain-sets. I (or this model) don't define domain-sets.
  • Everyone that perceives intelligence from a system on a domain-set can claim that the system is intelligent in that domain. For example, a programmer can build a calculator program and claim that the calculator is intelligent in the arithmetic domain-set.

Defining domain-sets is outside the boundaries of this design, and I'm not using domain-sets to test whether a system is AGI or not; later on, I introduce a broader test to detect AGI. The role of domain-sets in this design is to distinguish and respect different types of intelligence, and not to limit the definition of intelligence to this design or to a fixed set of tasks.

My aim is to have as many scoring systems as we can, defined over different domain-sets by many researchers, and to turn measuring intelligence into an endless research topic, while in the meantime designing a system that we think can be intelligent in practice. Additionally, defining domain-sets allows for concepts like "foolish intelligence": an entity that we consider intelligent but still call foolish in certain situations.

Note: Similarly (or ironically), that's how we've been evaluating human intelligence with different intelligence tests, e.g. IQ tests.
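As an illustration only (the names DomainSet and IntelligenceClaim are mine, not part of the proposal), a domain-set and a perception-based intelligence claim could be represented roughly like this:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DomainSet:
    name: str              # e.g. "arithmetic", "piano playing"
    description: str = ""  # defined by whoever introduces the domain-set

@dataclass
class IntelligenceClaim:
    observer: str          # whoever perceives the intelligence
    system: str            # the system being observed
    domain: DomainSet      # the domain-set the claim is scoped to
    evidence: str = ""     # why the observer perceives intelligence here

# A programmer claims their calculator is intelligent in the arithmetic domain-set.
arithmetic = DomainSet("arithmetic", "integer and floating-point computation")
claim = IntelligenceClaim("programmer", "calculator", arithmetic,
                          "computes correct results far faster than I can")
```

The point of the structure is that intelligence claims are always scoped to an observer and a domain-set, never absolute.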

Concept 2. story: A story is a chain that articulates the dynamics inside a domain-set. For example, an intelligent person might be able to make sense of the symbols in a math equation and see a way to prove it; an intelligent person might recognize the dynamics of a game; etc.

Each of these is an example of a story. In any domain-set, there's a dynamic of how things are connected to each other and how we can exploit them to our benefit. When a system recognizes those dynamics, either by itself or by learning from others, it has the ability to turn what it has learned into a form called a story. A story is an articulation of a finding (a minimal sketch follows the design principles below).

Design principles of stories

  • Stories aren't complete; they're fragments, so in nature they have no limitation, although an implementation might impose limits in terms of bytes or characters.
  • Stories can be received and understood, but not bit by bit: they aren't objective and absolute, they are subjective and relative (Subjective Intelligence).
  • Stories find their own place in the receiver's knowledge base and build new connectors to understand more stories.
  • Stories are used to communicate between AI models.
  • Stories are self-reinforcing, meaning they shape a model's understanding: what it desires, what it considers good or bad.
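Here is a minimal sketch of how a story and a receiver's knowledge base might look under these principles; the class names, the connector rule, and the example stories are all my own placeholders, not part of the proposal:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Story:
    domain: str    # the domain-set the story belongs to
    content: str   # an articulation of a finding, always a fragment
    source: str    # which model (or person) told it

class KnowledgeBase:
    """A receiver-side store where stories find their own place."""

    def __init__(self) -> None:
        self.stories: list[Story] = []
        self.connectors: dict[str, set[str]] = {}   # domain -> related domains

    def receive(self, story: Story) -> None:
        # Integration is subjective: the same story lands differently in
        # different receivers, depending on what they already hold.
        self.stories.append(story)
        related = {s.domain for s in self.stories if s.domain != story.domain}
        self.connectors.setdefault(story.domain, set()).update(related)

kb = KnowledgeBase()
kb.receive(Story("chess", "control the center before attacking", "model-A"))
kb.receive(Story("negotiation", "secure your position before pressing", "model-B"))
# The second story now connects to the first through the receiver's connectors.
```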

Concept 3. collective intelligence: One human can be super smart and still won't do much on Earth, but a collection of humans, plus time, will change the world. Some examples of previous work in AI and algorithm design that address this concept are agent-oriented design, particle swarm optimization, ant colony algorithms, etc. But almost all of them neglect the impact of "one small idea": an idea from a single person can accomplish a lot, yet in collaborative systems it's usually hard to give a higher weight to one idea, even though that's actually how humanity works. So this proposal emphasizes how important "herd mentality" is in human intelligence. It seems very cheap but is actually very important: it's the "herd mentality" that makes a single idea bold enough.

Note: Curiosity in communication turns into noise; if there's a collective model with not enough noise, it's a very bad sign.
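The following toy simulation (my construction, not part of the proposal) shows the difference: plain averaging dilutes one bold idea to near nothing, while a herd-like adoption rule can let the same idea spread through the whole population.

```python
import random

random.seed(0)
N = 50
ideas = [0.0] * N
ideas[0] = 1.0   # a single agent holds the bold idea

# Plain aggregation: the bold idea is diluted to 1/N of the consensus.
print("averaged consensus:", sum(ideas) / N)

# Herd-style dynamics: in each round a random agent copies a random peer's
# idea whenever that peer holds the "bolder" (larger) value.
for _ in range(2000):
    i, j = random.randrange(N), random.randrange(N)
    if ideas[j] > ideas[i]:
        ideas[i] = ideas[j]

print("agents now holding the bold idea:", sum(1 for v in ideas if v == 1.0))
```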

Live Free Models

Supposing we have an implementation of these concepts, the resulting system comprises a set of models that interact with each other. I call each of these models a Live Free Model (LFM). An LFM knows its objective at any given time and is constantly updating it. A set of LFMs forms something we call a Purple.

Purple

A Purple is a set of LFMs. We can also communicate with each Purple.
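A bare-bones skeleton of an LFM and a Purple might look like the following; every name and method here is an illustrative placeholder (stories are plain strings for brevity), not a specification:

```python
from dataclasses import dataclass, field

@dataclass
class LiveFreeModel:
    name: str
    objective: str                               # known at any given time...
    stories: list = field(default_factory=list)  # ...and shaped by received stories

    def update_objective(self) -> None:
        # Constantly revised; a real LFM would derive this from learning,
        # not by simply echoing the latest story.
        if self.stories:
            self.objective = f"pursue: {self.stories[-1]}"

    def tell(self) -> str:
        # Articulate the current finding as a story fragment for other LFMs.
        return f"{self.name}: {self.objective}"

class Purple:
    """A Purple is simply a set of LFMs that we can communicate with."""

    def __init__(self, models: list) -> None:
        self.models = models

    def communicate(self, story: str) -> list:
        # Broadcast an external story and collect each LFM's response.
        for m in self.models:
            m.stories.append(story)
            m.update_objective()
        return [m.tell() for m in self.models]

purple = Purple([LiveFreeModel("lfm-1", "explore"), LiveFreeModel("lfm-2", "explore")])
print(purple.communicate("prime gaps hide a pattern worth checking"))
```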

How to use Purple

Purple (or any other implementation of AGI) is different from the normal tools we have. It can be difficult to interact with, like a human.

Some ways that come to mind when we want to interact with someone:

  1. Using them, persuading them
  2. Trade with them
  3. Collaborate with them
  4. Serve them

etc.

If we truly build AGI, it will have its own agenda, and we will need to answer its needs, give it what it wants, or persuade it to give us the answers.

It might have an answer to the secrets of the universe or a way to travel to far galaxies, etc. We might not understand how that answer works, but we might still be able to use it.

Open Questions

  1. Staying alive: should the model show a will to live?
  2. Minimizing pain: everything we do can be modeled as pain minimization. Is this an angle that helps the design?
  3. Boredom and laziness: should the model show boredom and laziness?
  4. Difference between average agents and genius agents
  5. I believe the key to a better AI is to model the way we learn and represent things, not the structure of our brain: how each person builds their understanding and starts to generalize
  6. Stories aren’t real, and yet they’re meaningful: how we create mental paths between two abstract concepts and later on use them for other concepts
  7. We need a little bad memory, forgetting: to be a mathematician, you need a slightly bad memory
  8. Imagination in the AGI
  9. Giving positive bias to weird ideas
  10. What intelligence are we building? Dumb, normal, intelligent
  11. When do we decide to believe someone/something? (How do we decide in the Mafia game?)
  12. Where do we have important problem-solving capabilities like top-down/bottom-up analysis?

AGI Test

As discussed in Principle 1, intelligence is based on our perception, and from our perception a model like GPT-4 can be considered intelligent. I define a threshold that, once surpassed, indicates AGI: a system is AGI not when it's indistinguishable from a human in conversation, but when it can design another AGI.

e.g. since GPT-4 can't build another AGI, it's not AGI.

This definition seems simple enough that I believe somebody else has thought of it before; in that case, I'm not the first, but I vote for this definition of AGI. There's also an ultimate version of this test, in which a system autonomously and independently comes to the conclusion that it should build another system to delegate computation to, without any prompt or request.
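Stated as a deliberately simplistic sketch, assuming a hypothetical design_successor probe and a recursion depth we choose in practice, the test is recursive: a candidate passes only if the system it designs would also pass.

```python
from typing import Optional, Protocol

class System(Protocol):
    def design_successor(self) -> Optional["System"]:
        """Return a system the candidate designed on its own, or None."""

def is_agi(candidate: System, depth: int = 2) -> bool:
    # depth bounds the recursion: at depth 0 we only require that the
    # candidate produced some successor, since we cannot recurse forever.
    successor = candidate.design_successor()
    if successor is None:
        return False          # e.g. GPT-4: cannot build another AGI, so not AGI
    if depth == 0:
        return True
    return is_agi(successor, depth - 1)
```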

Vision papers

What is a vision paper?

https://scienceplusplus.org/visions/index.html

Personal Motivation and Story

The motivation for this work, besides the Asimov novels and years of working on AI projects, started in October 2021. AI has always been part of my professional career, but it was also the first conversation topic I had with the person I love.

In October 2021 something terrible happened, and it made something inside me flip. I wasn't aware of it, but that event made me unconsciously work on AI in my free time more than before. Months later, I noticed it in myself: I was thinking that if I created AI, that first conversation would be restored and love would find its way back to me.

It might seem cool to work on AGI, but it was also mad and unrealistic, starting from nothing and from nowhere. In that period, our data team at Eveince6 was working on graph neural networks to build a better representation of texts, for a better understanding of the financial advice given by experts on the internet7, so I was focused mostly on graph networks. But then I came to the conclusion that graphs, inherently and in general, are not a good tool to represent behavior when they represent data, and vice versa8.

This wasn't a step toward a design, but rather a step toward removing designs that don't work. It led me to read and search more, and I found almost all the available network architectures lacking the characteristics required to build AGI; but then again, small lights along the path, like this tweet from LeCun, kept me going9.

Studying DeepMind's and OpenAI's work also made me draw some predictions over the following months, not to feel good about my predictions but to test whether I was right that current networks are not what we're looking for10.

On April 25th, 2022, ICLR 2022 was held: one of the most important events for introducing cutting-edge achievements in AI, sponsored by DeepMind, Google Research, Two Sigma, Microsoft, Meta, etc. (DeepMind's overview of their papers for ICLR11). I started to see a convergence between my findings and what was reflected in "Bootstrapped Meta-Learning"12.

I articulated my ideas as a basis for more research even though interesting works at the time like Gato, DALL-E and GPT were on a different path. Those models progressed dramatically over the past year, but I was thinking maybe it's better to call them computation models.

And for AGI, I took another direction which has been depicted here as Purple.

The name, why Purple

I chose the name in March 2023, over a year after the work started. There were three mysteries about this work that formed its identity for me:

First: What is the right design, one powerful model or a population of models? In this design, I chose the population, based on the thought that a single human wouldn't become intelligent if they were one baby on an isolated island.

It's essential to have live models, each with its own experiences and stories and with the ability to interact and communicate with other models. True intelligence emerges from this population of models, not from a single model.

As a counter-design, LeCun's proposal13 is more focused on one powerful model and its general ability to learn, which is absolutely important but doesn't introduce a specific way for models to learn from each other.

Second: There's a famous paradox called Moravec's Paradox (1988)14: "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."

This paradox has been cited by robotics researchers who struggled with problems like hand-eye coordination while computers were already amazing at computation.

How can a model feel like us, and ultimately learn like us? Learning is the key to intelligence, but mimicking learning is not enough. So I came to the conclusion that our feelings are something that will stay unique to us humans. The way we interact with the world can be simulated, but it will never be the way we do it. So this paradox should always remain true at some level.

Third: In a society, how do we prioritize our goals and wants against those of others? If we're living with others, where do we rely on societal rules and where on morality? How do we value our own thoughts against the thoughts of other people?

Answering this question is key to understanding our relationships: our ability to make friends, start a conversation, or live together. If an AI model doesn't directly address this, at least to some extent, the model will not succeed in creating a stable population of models.

Live Free Models (LFMs) inherently have the ability to create stories from their own experience and learn stories from others. They create their own goals from those stories, teach each other, judge each other, and through these constant interactions they find solutions to problems. Then again, these solutions are incomplete and partial, but they cover a specific scope of a problem.

Purple: These mysteries, for me, were full of contradictory concepts. I felt Purple reflects how two different colors, blue and red, create a new color even though they seem contradictory.

Footnotes

  1. https://en.wikipedia.org/wiki/Intelligence

  2. https://www.sciencedirect.com/science/article/pii/S0004370221000862

  3. https://twitter.com/sama/status/1638983750934724608

  4. https://www.nature.com/articles/s41562-022-01516-2

  5. https://en.wikipedia.org/wiki/Active_learning_(machine_learning)

  6. https://eveince.com

  7. https://www.hup.harvard.edu/catalog.php?isbn=9780674576186

  8. https://arxiv.org/abs/2211.16103

  9. https://openreview.net/pdf?id=b-ny3x071E5

  10. https://arjmandi.substack.com/p/on-the-edge-5

  11. https://arjmandi.substack.com/p/on-the-edge-11

  12. https://www.deepmind.com/blog/deepminds-latest-research-at-iclr-2022

  13. https://openreview.net/pdf?id=BZ5a1r-kVsf

  14. https://www.hup.harvard.edu/catalog.php?isbn=9780674576186