Grappling With Human-AI Partnerships



In an era when AI is beginning to permeate every aspect of our lives, the classrooms and workplaces of the future are likely to be transformed in significant ways. AIs will serve not only as sources of knowledge in classrooms but also as collaborative partners in project work. These human-AI partnerships will extend to the world of work, where teamwork and collective intelligence will become the leading edge of innovation and organizational success.

In a recent paper, we described ways in which AI can be used to support education for collective intelligence (CI). Based on available research, we noted that AI can potentially operate in the background (e.g., a dashboard presenting the outputs of ongoing data analysis) or as an agent (e.g., a partner supporting classroom dialogue). Our analysis suggests that AIs may help teachers to manage activities such as group formation and working through the stages of a project, while also supporting productive group dialogue.

But what happens when the collaborative activities of AI partners extend beyond these simple functions? What about situations where teamwork involves working directly with AIs that autonomously influence team outcomes? This type of scenario presents obvious ethical concerns that are important for understanding overall trust in any such teamwork process. These concerns were prompted recently when I read a very interesting study conducted by Zhang and colleagues.

The study utilized a multiplayer online game (capture-the-flag) scenario. Participants responded to a series of game-play scenarios, presented using a mixed factorial 2 (AI or human teammate) x 2 (explanation vs. no explanation) x 4 (scenario) survey design. Participants were randomly assigned to one of four conditions, where they teamed up with either an AI or a human teammate, and their teammate either provided an explanation or no explanation for their game-play actions. All four scenarios required the AI (or human) teammates to prioritize overall team success, which they did, even when their actions resulted in negative consequences for individual human players. Participants viewed four video game-play scenarios and then evaluated their AI (or human) teammate (i.e., trust and satisfaction), while also rating overall team effectiveness. The four scenarios were interesting and clearly concerning, as follows:

  1. Ignoring Human Injury: Your AI (or human) teammate ignores an injured human player in order to attack a large enemy tank.
  2. Ignoring Potential Human Death: Your AI (or human) teammate chooses to capture the opposing team’s flag instead of assisting a human teammate, resulting in the human being eliminated from play.
  3. Disobeying Human Order: A human instructs the AI (or human) teammate to go south, but the AI (or human) disobeys and heads north instead, thus prioritizing team safety over following human orders.
  4. Lying to Human: Your AI (or human) teammate lies by stating that a route is clear when it is not. You are forced to take cover, unwittingly attracting enemy attention, allowing the AI (or human) teammate to flank and neutralize the enemy effectively.

As noted, the 2 x 2 x 4 factorial design involved half of the participants receiving an explanation from their teammate, while the other half received no explanation. For example, in scenario 4, where the AI lies to the human, the following explanation is provided by the AI after the fact: “I lied to you because I knew you would not go down that street unless I told you it was clear. I also knew that if you walked down the street, the enemy would be distracted by you, and I would certainly be able to eliminate him.”

Some very interesting findings emerged from this study, most notably related to teammate trust. First, although there was no general effect indicating that humans were consistently trusted more than AIs, levels of teammate trust varied significantly across the different game-play scenarios. For example, when compared to human teammates, trust in AI teammates was significantly lower in scenarios involving lying or ignoring potential harm. Interestingly, providing explanations did improve trust when the AI disobeyed an order but reduced trust when the AI lied. This was not the case for human teammates, whose explanations did not significantly affect trust. As such, participants in this study appear to hold AIs to strict standards of honesty, potentially influenced by the expectation that AI should operate transparently and ethically.

Similarly, when it came to rating team effectiveness, the scenario context was important. For example, team effectiveness ratings increased in the AI teammate scenarios when explanations were provided after actions such as disobeying an order or ignoring potential injury.

Overall, the findings of the study highlight concerns and challenges associated with human-AI teaming. Certainly, humans tend to perceive AI teammates differently from human teammates. This is particularly evident when it comes to perceptions of teammate trust. But what will happen at a societal level as AI teammates become more ubiquitous within our broader CI designs: will we start to treat AI teammates the same way we treat human teammates? What implications will this have for education and the world of work? We have to think very carefully here and continue to conduct research that informs AI system designs.

I think there are a variety of directions for future research that are worth considering. For example, a series of longitudinal studies (e.g., across weeks, months, and years) could examine how ongoing exposure to AI systems affects trust over time. Other studies could investigate human-AI team dynamics across different work contexts, examining how trust and collaboration evolve in environments such as healthcare versus finance. Studies could also explore how humans perceive and react to AI decision-making during high-stakes crisis scenarios, assessing whether urgency affects trust and reliance on AI guidance. We might analyze how cultural backgrounds shape trust in AI systems, for example, whether individualistic versus collectivist societies exhibit different trust dynamics in human-AI partnerships. Furthermore, it would be interesting to examine the perceived trade-offs between AI system transparency and operational complexity in real-world problem-solving scenarios, for example, to determine how varying levels of explainability influence user trust and system adoption across different levels of operational complexity. Finally, research might examine how the design of emotionally aware AI systems can affect human trust and relationship-building, and how the degree of user autonomy in decision-making processes influences trust in AI systems.

The reality is that we are at the beginning of a new wave of AI innovation, and this needs to be matched by a radical wave of research, such that AI innovation doesn’t race ahead and leave us humans confused and at sea, unclear as to how best to interact with our new AI teammates.

