Adding meaning to ‘human in the loop’
At a recent AI conference, one buzz phrase surfaced in nearly every talk: keep a ‘human in the loop’ as part of good AI design. And who can argue with that? It seems obvious that humans should be an integral part of any workflow, even one that incorporates AI.
The problem, however, is that ‘human in the loop’ is too vague to be helpful. The phrase elicits highly variable assumptions from different people and implies that any human ability to intervene or steer is adequate. Worse, much of what people imagine when they picture a human in the loop is not even possible or pragmatic.
In some regards, it is impossible to keep humans out of the loop. Even if a task is fully run by an AI agent, humans will be impacted by its decisions. If an AI agent goes awry, some human will be on the blame line (maybe an executive, maybe the agent’s creator). They may not know it, but they are officially in the loop. And of course, the humans impacted by the agent are 100% in the loop, whether they want to be or not. While this perspective is clearly not the intended meaning of the buzz phrase, it reveals just how imprecise the phrase is.
To achieve the value the phrase implies, we need to define what we really mean by putting a ‘human in the loop’. We need to distinguish between the problem this vague solution gestures at and the facets that define a meaningful, effective response to that problem. Doing so requires asking three high-level questions: Why? Where? How?
We start with ‘Why’: why do we think having a ‘human in the loop’ adds critical value? Asking why is not a challenge to the idea, not a question of whether there is value. ‘Why’ serves as a mission statement. We need to define the goal that inclusion of the human is critical to achieving. Without answering this question, the AI team has no clear aim, and it is impossible to know whether the implementation has been successful.
The second question is ‘Where’ should we inject a human into the process? Teams can get into trouble here by presuming an answer a priori (deciding up front on in the loop, on the loop, above the loop, etc.). Do they want the human making every decision within the domain, supported by the AI as they do so? Is the AI making critical decisions that a human needs to oversee and possibly correct? Are they working with one agent or multiple agents?
Teams often want to answer these questions right away, but doing so requires a solid understanding of the problem domain and the decisions that need to be made, not just the underlying AI technology being developed. Each AI or agent must be built to fit within that larger problem space, regardless of which entity is making decisions. It is misguided to think that an AI technology will simply replace a human (see the Substitution Myth: Bradshaw et al., 2013). In reality, the new technology changes the work, making coordination between human and AI paramount. The answer to ‘Where’ cannot be presumed before you’ve done the work to gain a solid understanding of the problem domain.
Once we understand where the human needs to be in the loop, we can move on to the final question: ‘How?’ This is the last and possibly most difficult question to answer, and perhaps for that reason the most frequently under-considered. How are we going to make sure the human and AI can work together for overall system success? You can’t just thrust a human into operations with an agent (or multiple agents), provide no additional support, and expect them to manage the process successfully. You need to deliberately design a system that creates a powerful human-AI team. Otherwise, you are encoding dysfunction and raising the risk of failure. The system design needs to provide observability into the AI and directability to adjust the AI when it operates in ways not aligned with the goals of the system. This requires understanding how people think, perceive, and process information. It requires knowledge of how to build good teams and how information can and should be shared between entities. It requires understanding how the Joint Cognitive System (human and AI working together) should make decisions in complex problem spaces.
Having a human in the loop may seem simple to think about, but it is very difficult to implement in practice. Paying lip service to the concept may notionally put the human in the loop, but it practically leaves the human user struggling to know what’s going on and how to get the implemented technology to work for them. Thoughtfully creating a human-AI team, one of our capabilities at RCS, is vitally important to having a human in the loop that works.
Bradshaw, J. M., Hoffman, R. R., Johnson, M., & Woods, D. D. (2013). The Seven Deadly Myths of “Autonomous Systems.” IEEE Intelligent Systems, 28(3), 54–61. https://doi.org/10.1109/MIS.2013.70
