What does it mean to have agency? To be an agent?
In Reinforcement Learning, an agent is formulated mathematically — some set of actions it can take, some goal, some way of understanding how actions interact with the environment to achieve the goal.
Those three conditions together seem both necessary and sufficient. Of course, there is more you could add — sophisticated reward models and world models, memory replay, parallel branching, etc.
But nothing can be removed.
An agent without the ability to take at least two actions at any point in time is not an agent. It is just a statistical model between states and some score function.
An agent without a goal is merely a tuple of state and action spaces. Without an objective, it’s impossible to have any information about interactions between the state and action spaces. The simplest goal is always “don’t die” — continue taking actions such that you continue being in a space and having opportunities to take further actions. Taking an action that guarantees ending up in a state where the set of available actions is empty is akin to suicide.
An agent without the capacity to learn patterns between actions, world states, and goals is a memoryless mechanism. An animate automaton. It cannot update. It cannot learn. At each point in time it could be substituted for a different one and be at no disadvantage.
What does this tell us about agency? Agency requires at least three things: the ability to take actions in the world, some goal or objective or preference over states of being in the world, and some way of learning to take “better” actions over time — ending up in preferred situations more often.
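The three requirements can be sketched as a minimal agent in code. This is a toy illustration, not any established library's API: the action names, rewards, and class are invented. The action set gives condition one, the goal enters only through a reward signal for condition two, and an incremental value update supplies condition three.

```python
class Agent:
    """A minimal agent: actions it can take, a goal it pursues
    (seen only through rewards), and a way to learn which actions
    serve that goal."""

    def __init__(self, actions):
        self.actions = list(actions)              # (1) at least two actions
        self.value = {a: 0.0 for a in actions}    # (3) learned action-goal model
        self.counts = {a: 0 for a in actions}

    def act(self):
        # Take the action currently believed to best serve the goal.
        return max(self.actions, key=self.value.get)

    def learn(self, action, reward):
        # (2) the goal appears only as a reward signal; an incremental
        # mean keeps a running estimate of each action's worth.
        self.counts[action] += 1
        self.value[action] += (reward - self.value[action]) / self.counts[action]


# Hypothetical environment: "explore" pays off, "stay" does not.
rewards = {"stay": 0.0, "explore": 1.0}
agent = Agent(["stay", "explore"])
for a in agent.actions:          # sample each action once
    agent.learn(a, rewards[a])
```

After one pass over its options, `act()` prefers `"explore"`. Strip any one of the three pieces and the object degenerates exactly as described above: no actions, no goal signal, or no update rule.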
There isn’t necessarily a need for consciousness. But it’s hard to claim that you don’t need self-awareness to have preferences. And it’s not hard to see how consciousness might fall out of self-awareness in many cases. Again, the simplest preference is simply to continue existing. Consciousness seems like a plausible mechanism for implementing that.
People get stuck at many points in life. Many people feel powerless — propelled by the currents of the world, but unable to steer their course. In those cases, it’s likely that one of the three components of agency is missing.
Ask most people what they want and, beyond their next meal preference, they usually can’t articulate anything. Often, they can’t even identify the next meal preference. If you don’t know in excruciating detail what it is that you want, you’re no more an agent than two sets — actions you can’t decide between and world states you have no preference over. Sometimes preferences trade off with each other. Reconciling multiple goals down to a multi-factor preference is one of the great challenges of life. But it’s easier if you know exactly what preferences are competing.
Sometimes, however, people know exactly what they want, but they’re heavily restricted in their actions — imprisoned, indebted, enslaved, trapped, employed, controlled. They understand the dynamics of their world, their preferences, and what actions would lead to those preferences. But the best available action is so far down the list of preferred actions as to be insulting. Agents trapped in a restrictive state space are the great tragedy of sentience — knowing what could have been, but being barred from it. Often through cruelty.
Finally, some people have an idea of what they want and what they need to do to get it. But they’re paralysed by fear and uncertainty. Their model of the action-goal dynamics is too imprecise. Pits of calamity and peaks of success pepper the state space, mostly snug neighbours. Hundreds of available actions trade off between predictability and impact. A wrong bold move could spell disaster, so it’s safest to take only extremely certain moves. Stay home and watch Netflix. Continue in the same dead-end job. Don’t risk making that thing. But it’s unsettling. Listless guilt abounds.
The only way to gain more certainty is to gain more information by taking uncertain actions. In RL agents, we’d artificially break such a deadlock by taking a random action every now and then. Often this is a great strategy for humans too.
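That deadlock-breaking trick is usually called epsilon-greedy exploration. A sketch, with invented action names and payoffs: a purely greedy agent locks onto the first action that ever paid anything, while an agent that occasionally acts at random discovers the better option.

```python
import random

def run(epsilon, steps=2000, seed=0):
    """Two actions: 'safe' pays 0.1 reliably, 'bold' pays 1.0.
    Returns the action the agent ends up believing is best."""
    rng = random.Random(seed)
    reward = {"safe": 0.1, "bold": 1.0}
    value = {"safe": 0.0, "bold": 0.0}
    count = {"safe": 0, "bold": 0}
    for _ in range(steps):
        if rng.random() < epsilon:
            a = rng.choice(["safe", "bold"])           # occasional random action
        else:
            a = max(value, key=value.get)              # greedy: best known action
        count[a] += 1
        value[a] += (reward[a] - value[a]) / count[a]  # incremental mean update
    return max(value, key=value.get)
```

With `epsilon = 0` the agent tries `"safe"` first, it pays a little, and `"bold"` is never sampled: the agent stays home and watches Netflix. With a small `epsilon`, random actions eventually reveal that `"bold"` pays ten times more, and the greedy policy switches to it.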