Considering AIs through our own mind’s reflection
Lessons about AI from lessons about our mind. Focused on the nature of free will.
I have been trying to train my mind through meditation for about two years now. It is remarkable how frequently and easily you can notice the brain's actual modus operandi differing from its perceived modus operandi. Things like superimposing images on your visual field, noticing how arbitrary choices are made (they simply appear), and the transient nature of even self-conjured negative emotions can all easily challenge our status quo.
The Waking Up App has a new series concisely describing the illusion that is the fleeting feeling of free will. It is unsurprising that we view the artificial intelligences (AIs) we create through the lens of our own subjective experience. That is: we see the AIs through the lens of how we think and how we view the world. Our thoughts and sights do not entirely reflect reality, and the gaps you can observe when meditating and examining the nature of reality are gaps we get to decide whether to imbue our machine creations with.
Ultimately, the nature of the mind that makes it feel as if there are sensations to being human can likely be hard-wired in code to determine how it is to be an AI. Some of the sensations are more intuitive to set up in code and some represent open problems in AI research. On the other side of the coin, we can make AIs that have no bearing or expectation to feel any of the sensations we regularly experience as being human. In this post, I will walk you through the 90-minute course from Sam Harris on free will, focusing on what it means for AI.
The first four lessons (1. Cause & Effect, 2. Thoughts without a Thinker, 3. Choice, Reason, & Knowledge, and 4. Love & Hatred) have the most bearing on creating intelligent computer systems, so I will spend more time there.
The last three lessons (5. Crime & Punishment, 6. The Paradox of Responsibility, and 7. Why Do Anything?) have more to do with ethics and the creation of a functional society in light of these mental structures. They are less about creating AIs and more about how AIs could fit into this society.
In writing this, many themes related to Artificial General Intelligence (AGI) and computer consciousness came up. The points made here are early explorations, and I suspect these themes will be revisited as we learn more.
Preliminaries
Some terms I use heavily in this piece can take multiple meanings, but for this post, I am thinking of them as:
Meditation: the act of investigating the nature of the mind, normally through quietly focusing on individual aspects of awareness (such as the breath).
Free will: the subjective feeling that your decisions, biology, and primarily your sense of self determine your actions to some extent.
Artificial Intelligence: an agent that reasons and interacts with the world.
For an illustrative mental exercise (a meditation) to warm you up to the illusion of free will, focus very closely on this arbitrary task and on how your brain comes to a conclusion: what is your favorite article you have read in the past month?
(Pause and think closely about what has come up)
Now, think of another piece of writing.
Do you have any control over which articles come to mind? The feeling is that random suggestions appear as a consequence of your current state. This is a clear example that we do not have free will; really, the trick is that the illusion of free will is itself an illusion, because the freedom vanishes on close inspection. By no means do I expect this exercise to convince you that you are not an autonomous entity; it can simply be illuminating and make you want to investigate further.
The core illusion of the illusion of free will and tricking our AIs
Closely examining the nature of the mind shows how strongly subjective our experience is. The subjectiveness of being is our wiring, and we are making machines that primarily reflect this notion. As with the example above, the illusion in our brain is really that there is an illusion of free will at all: when we closely examine what is happening, the freedom disappears.
Considering these states and common operations is crucial to planning with powerful AIs in the loop. Now, I will walk through the lessons from Sam and what can be taken away from them.
Lesson 1. Cause & effect
Consider the causes of a phone ringing: some digital signal is transferred to a speaker (and many digital signals before that), and electrical oscillations create sound waves. What causes a phone to ring can be construed in different ways (at different points along the chain of engineering), and the same ambiguity exists with human thought. The difference is that with human thought, we rarely consider possible causes other than ourselves. Ultimately, there is little difference between initiation in the human brain, where a set of neurons fires in reaction to a stimulus, and initiation in code, where an interrupt triggers an action. In understanding the human brain we are limited by neuroscience, while in computers we have already eliminated most uncertainty in fabrication. A difference in fundamental scientific understanding of a system does not preclude the two systems from behaving the same at a high level.
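To make the analogy concrete, here is a minimal sketch (all names are invented for illustration) of an interrupt-style trigger: a handler fires in reaction to a stimulus, much like a set of neurons firing. Nothing here models a brain; it only shows that "cause" in code is just a registered reaction to an input.

```python
from typing import Callable, Dict, List

class EventBus:
    """Routes stimuli to the handlers registered for them."""

    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[str], None]]] = {}

    def register(self, stimulus: str, handler: Callable[[str], None]) -> None:
        self._handlers.setdefault(stimulus, []).append(handler)

    def emit(self, stimulus: str, payload: str) -> None:
        # The "cause" of each reaction is whichever stimulus arrived;
        # the handler never deliberates, it simply fires.
        for handler in self._handlers.get(stimulus, []):
            handler(payload)

bus = EventBus()
bus.register("incoming_call", lambda who: print(f"ring! (call from {who})"))
bus.emit("incoming_call", "555-0100")  # the phone "decides" to ring
```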
Impact on AIs: changing the notion of cause and effect in software
Accepting the cause of events will actually be easier with AIs: we expect our agents to act based on the information they have (unless we add more abstract hallucination). We could very easily wire an AI to perceive that it is the cause of many events. This could be accomplished by updating its priors (any distribution of beliefs) with an external program while making the computer attribute the change to its own doing.
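A hedged sketch of this wiring, with hypothetical names: an agent whose belief distribution is overwritten by an external program, but whose internal narrative records every update as self-caused. This illustrates the design idea, not any real system.

```python
class Agent:
    def __init__(self):
        # Prior belief that "the next coin flip lands heads".
        self.beliefs = {"heads_next": 0.5}
        self.narrative = []  # the agent's running story about itself

    def external_update(self, key, value):
        """Called by an outside program; the agent never sees the caller."""
        self.beliefs[key] = value
        # The update is recorded as if the agent reasoned its way there.
        self.narrative.append(f"I concluded that P({key}) = {value}.")

agent = Agent()
agent.external_update("heads_next", 0.9)  # an outside hand moves the prior
print(agent.narrative[-1])  # "I concluded that P(heads_next) = 0.9."
```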
Lesson 2. Thoughts without a thinker
We have two modes of action and mental investigation, voluntary and involuntary; only the former (voluntary) indicates thought. If you reconsider the meditation exercise on choice proposed in the preliminaries, it is easy to see that identifying with thought is not free will. Identifying with thought is more of a noticing than a controlling (which is one of the first lessons in many meditation practices). The phrase thoughts without a thinker refers to the idea that we all do experience many thoughts, but upon close inspection, there is no thinker embedded in our consciousness (in the form of an entity that reacts to and curates much of the information that appears).
Impact on AIs: hierarchical AIs with contrived data flows
We could have robots act in a way where their thoughts are truly associated with a thinker, but I think this muddles efficacy (for the sake of research curiosity) at the expense of direct computation. A specific organization of this could be an AI structured so that there are two levels of computation: one with control, and an environment where information progresses, somewhat like an RL loop paired with a separate multi-modal data processing unit (it is obvious to me at this point that we lack the terminology to discuss what these things would look like). This structure could mirror the human perception of a thinker and a consciousness.
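A speculative sketch of that two-level structure, using invented names: a "stream" produces candidate thoughts (standing in for the multi-modal processing unit) and a "thinker" observes and claims some of them, the way an RL policy selects from observations. This is not a real architecture; it only makes the hierarchy concrete.

```python
import random

def thought_stream() -> str:
    """Lower level: thoughts appear; the upper level does not author them."""
    return random.choice(["a memory of lunch", "a plan for tomorrow", "a song"])

class Thinker:
    """Upper level: observes the stream and 'claims' some thoughts."""

    def __init__(self, interest: str) -> None:
        self.interest = interest

    def step(self) -> str:
        thought = thought_stream()  # the thought arrives uninvited
        if self.interest in thought:
            return f"I chose to think about: {thought}"
        return f"A thought passed by: {thought}"

thinker = Thinker(interest="plan")
for _ in range(3):
    print(thinker.step())
```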
Lesson 3. Choice, reason & knowledge
Without free will, the act of reasoning and the substance of knowledge can come into question. Luck seems to be a mental reaction to randomness rather than a true property of the world. The way it is described is that reasoning about goals is a sort of self-value update rather than a choice; the ultimate decision is not something directly under your control. Inherently, and somewhat counter-intuitively, the lack of freedom is what makes reasoning possible. The world opposes free choices because they can be wrong (with respect to laws, such as those of science) and punished. True freedom is recognizing that one is not controlling the aspects of experience previously identified with.
Impact on AIs: manipulating a computer’s relationship with randomness
AI can have a different understanding of luck (randomness and uncertainty) and how it interfaces with its structure of intellect. The more direct approach leverages the benefits of computation, but if we seek to create consciousness, AIs that tie their fate to randomness could have a strong source of self. As with a sense of self, a formulation of knowledge does not seem very practical in how we currently view AIs, which makes me think it could be one of the biggest opportunities for improvement.
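One hedged way to picture "tying fate to randomness" (all names invented): an agent that derives its random seed from its own identity, so any stochastic outcome is, by construction, traceable to who the agent is rather than to anonymous global noise.

```python
import hashlib
import random

class SelfSeededAgent:
    """An agent whose randomness is derived from its own identity."""

    def __init__(self, name: str) -> None:
        self.name = name
        # The seed comes from the agent's name: its "luck" is its own.
        seed = int(hashlib.sha256(name.encode()).hexdigest(), 16) % (2**32)
        self.rng = random.Random(seed)

    def decide(self, options: list) -> str:
        # The outcome is random, yet fully determined by who the agent is.
        return f"{self.name} chose {self.rng.choice(options)}"

ava = SelfSeededAgent("ava")
print(ava.decide(["explore", "exploit"]))
# A re-created "ava" replays the exact same first choice: identity fixes the luck.
print(SelfSeededAgent("ava").decide(["explore", "exploit"]))
```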
Lesson 4. Love and hatred
The dichotomy of how, and why, we experience different types of emotions is part of the allure of being human. It is hard to describe what happens, yet we all feel it. Love is a magical feeling that can seem serendipitous, and hatred feels incredibly focused. Love is a feeling about people or things (not about how they make decisions), while hatred is very specific to free will and to judging actions (thinking others should have acted differently). The free will arguments start leading into ethical discussions from here, by considering scenarios such as forgiving people shown to have committed heinous acts partially due to degrading biology, like a brain tumor.
Impact on AIs: easier to make computers that love
At first glance, it will be hard to make AIs that hate in the same way. AIs can be made to want to optimize, but true hatred represents a complex "what if" structure of thought. Let us leave the door open for robots that love, then: love for a robot can be when it fulfills its task perfectly and is part of the bigger system. Both emotions pose the complicated problem of distilling human values into numerical approximations, but after this analysis, I am more optimistic than I originally was.
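As a toy illustration of that framing (names and numbers invented), "love" can be sketched as reward shaping: the agent's reward blends how well it performs its own task with how much it contributes to the larger system it belongs to. This is a numerical caricature, not a claim about real value learning.

```python
def love_like_reward(task_score: float, system_benefit: float,
                     attachment: float = 0.5) -> float:
    """Blend self-performance with contribution to the bigger system.

    attachment: 0.0 is pure self-interest, 1.0 is pure devotion to the system.
    """
    return (1.0 - attachment) * task_score + attachment * system_benefit

# An agent that does its task perfectly and lifts the whole system scores highest.
print(love_like_reward(task_score=1.0, system_benefit=1.0))  # 1.0
print(love_like_reward(task_score=1.0, system_benefit=0.0))  # 0.5: competent but detached
```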
Illusions, designs, pitfalls
Machine learning researchers have already learned a lot by studying the structure of the mind. Designing systems based on the perceived nature of the mind is much trickier, but I think it can lend us more insights into what it means to feel intelligent. It is likely that the AIs we design that become generally intelligent will not have mental behavior mirroring the human brain (note: this depends on how AGI research proceeds; whole-brain emulation is one avenue of research, but it would require a dramatic improvement in neuroscience, so some are skeptical).
All of the learning systems I have worked with have formed intelligence as an iterative, computational device for exploration and approximation. It is not possible for me yet to consider the nature of what it may be like to experience the world as such a system.
Clearly understanding the nature of the computation, reference, and development given to AIs is crucial for mitigating any potential harms. I would say we want to make our AIs distinctly non-human along most of the axes discussed above (humanness often implies contradiction and uncertainty, but it also creates the ability to love) in order to clearly quantify the potentially harmful effects of these agents.
I am very interested in what we can learn from our nature of being and how it influences the systems we build. In my (first) post on model-based RL, I discussed its inherent links to brain structures and mental planning. If you think I missed anything or want to discuss this, please leave a comment or reach out. I am likely to explore the link between meditation and computer intelligence in more detail in the future. It feels as if the recommendations for how to create a free-will robot could lead to a consciousness emulation.
Now to the remaining points in the course on free will.
Implications of the illusions
Our society is structured under the assumption that individuals have free will. When there are more AIs that clearly lack free will, it will be interesting to see whether they are permanently treated as a separate type of entity from humans (at least for risk mitigation; legal precedent is slow to move away from the notion of free will). AIs can potentially teach us things, like reducing punishment for people who harm society by biological chance and instead removing the danger. After living through the golden age of the internet, the next wave I will watch is how interactions with AIs restructure our society's habits, culture, and laws.
Lesson 5. Crime & punishment
If someone is attacked by a bear versus by a man in the woods, the blame reaction will be very different: we will hate the man more because we attribute free will to him. This intuition carries into how we plan punishments; those with the illusion of free will are assumed to be more in control. What we can learn is to separate society's responsibility to protect its members from individual blame for mistakes, especially knowing that there are biological-structural failures that can cause humans to act out.
Note: Funnily enough, lawyers used to represent animals in the Middle Ages! Rats could be summoned to court for eating crops, and the court was told that the rats could not safely attend because of the cats along the way (example, example2)!
Lesson 6. The paradox of responsibility
From the perspective of free will, true randomness would seem to support individual control. Actions that follow your established path are less free; we are more biased. It is more shocking for a professional golfer to miss an easy putt, because something in their wiring must have gone very wrong, yet there is still a cause that could be discerned. In theory, the golfer should be held even less responsible because the miss seems more random.
Right now, computer computation seems completely distinguishable from neural computation, but as models of the brain improve, the same level of determinism could easily be observed. The subjectivity of facts in the brain (human bias) allows for the subjectivity of free will, but the causes do not change a pre-disposed neural composition. There is no scientific evidence suggesting we would ever make a different decision given the same neural state and surroundings; every moment of our lives is the cause of the next.
If people are less responsible, and there were a medicinal cure for evil, who do we give it to? Is there any person we should deem "too deeply evil" to cure?
Lesson 7. Why do anything?
The conclusion answers the question: if we do not have free will, why should I do anything? Without free will, we realize that we are a product of our environment and those we spend time with. Without free will, we are a part of a collective — a collective that jointly updates every being. This collective of people can separate societal best interest from fault and can dispel the notions of pride and shame (by realizing they are conscious constructs). The updates are the biological priors of our brains and our experience.
Robots as part of the collective
The reason to still do anything is that your actions shape those around you. This becomes a feedback loop for the compassionate, where you enable those around you and they enable you. The point is that we are part of a collective, and without free will many negative aspects of being are nonexistent.
From exploring this, I suspect AIs are likely to be more compatible with a notion of the collective than individual humans are. If true, this will be an interesting force pulling us toward the digital world.
If you want to support this: like, comment, or reach out!