The Missing Skill in AI That Could Make or Break Trust—and Psychological Safety.

It’s not another programming language. It’s not data labeling expertise. It’s emotional intelligence. In a time when AI is expected to be not just powerful but human-centric and trustworthy, emotional intelligence is emerging as a critical—yet often overlooked—competency in AI development.

Artificial intelligence IS deeply embedded in how we work, hire, heal, shop, and learn. Regulations like the European Union’s AI Act, which came into force on 1 August 2024, spotlight the emotional risks in technology and make a strong case for assessing and developing emotional intelligence not only in AI developer teams but also in an AI-enabled workforce.

Bias Disclaimer

I certified in the MSCEIT Emotional Ability Assessment in 2010, so I acknowledge a potential confirmation bias: my own experience has led me to a hypothesis that, with the increased consumption of media content and our natural human desire to find shortcuts to get our needs met in an AI-enabled world, we need a framework to make visible and repeatable the process by which we integrate our emotions into our thinking to perceive, connect, understand and manage what we are now absorbing and processing at the speed of light. (Long sentence… definitely take a breath after reading that!)

This skill, especially in our globalized world of multiple identities and personas, can no longer be considered a soft skill. It is one of the hardest skills to use consistently, even when you score highly on the test.

MSCEIT Graduation Class of 2010 - London

The more I use AI, the more curious I become about how we can use the insights from MSCEIT2 Emotional Ability Assessments (updated in 2024) to understand how we need to adapt to the new demands of AI. I am interested in how we measure the risks and rewards of ignoring (or embracing) the development of emotional intelligence skills in AI developer and AI-enabled teams to create better human-centric outcomes.

MSCEIT2: The Gold Standard of Emotional Intelligence Testing

MSCEIT2 - 4 Domains - MHS Assessments

So, what is MSCEIT2? The acronym stands for the Mayer-Salovey-Caruso Emotional Intelligence Test (Version 2). It’s not another personality quiz or “How empathetic are you?” self-check; it’s a scientifically designed assessment of emotional ability. Originally launched in 2002 by psychologists John Mayer, Peter Salovey, and David Caruso, the MSCEIT was the first tool to objectively measure adult emotional intelligence. Built on research into the concept of emotional intelligence first published by Mayer and Salovey in 1990, the test has earned international recognition over more than three decades as a trusted EI ability assessment. Its latest iteration, MSCEIT2, builds on that legacy and is the gold standard for evaluating EI in a practical, actionable way.

How does MSCEIT2 differ from common self-assessments?

Think of MSCEIT2 as an “emotional IQ test.” Instead of asking you to rate your own empathy or describe your personality, it presents you with problem-solving tasks about emotions. For example, you might be asked to identify the emotions expressed in a series of faces, or to choose effective ways to manage a teammate’s anxiety in a scenario. In other words, it measures how well you perform in understanding and managing emotions. Many popular EI tools are self-report questionnaires – they gauge how you perceive your own emotional traits or behaviors. By contrast, MSCEIT2 treats emotional intelligence as a cognitive ability: the capacity to reason with emotion-laden information and arrive at correct answers. This performance-based approach makes MSCEIT2 more objective than assessments that simply rely on one’s self-perception. MSCEIT-style tests evaluate skills like perceiving, using, understanding, and managing emotions through direct tasks, rather than through self-rated traits. The result is a nuanced profile of a person’s actual emotional abilities – strengths and areas for growth – akin to how an IQ test measures cognitive abilities.

Trustworthy, Human-Centric AI: A New Imperative

Why push emotional intelligence in AI teams now? The landscape of AI development is shifting towards trustworthiness and human-centric design. Around the world, there’s a growing expectation that AI systems should be not just powerful, but responsible and attuned to human values. A prime example of this shift is the European Artificial Intelligence Act (AI Act) – the first major law to seek to regulate AI. The EU AI Act explicitly aims to “promote the uptake of human-centric and trustworthy artificial intelligence” while safeguarding fundamental rights. In essence, regulators are saying that AI should respect and understand the humans it serves.

This regulatory milestone elevates the importance of what we might call “emotional intelligence awareness” in AI development. It’s telling that the AI Act even flags certain emotion-related AI applications as high-risk. For instance, AI-driven emotion recognition systems (like those that detect emotions from faces or voices) are classified as high-risk uses in sensitive contexts. Why would a law care about that? Because misreading or misusing human emotional cues can lead to serious harm – from eroding privacy to unfair treatment, to lost trust. The takeaway for AI teams is clear: building AI that people can trust requires more than technical skill with algorithms; it requires human-centric thinking, empathy, and ethical insight.

Is technical brilliance enough if an AI ends up socially tone-deaf or alienating to users?

Increasingly, I think the answer is no. Emotional intelligence is key. Imagine an AI product team that is collectively high in EI: they are likely to foresee whether a new chatbot’s tone will comfort a confused user or offend them. They’ll be more attuned to diverse user perspectives and emotional reactions, designing with inclusion and respect in mind.

  • If you are a tech leader, I would love to hear your thoughts in the comments: is emotional intelligence not just a “nice-to-have” in AI development, but a core competency? Do you believe that by embedding emotional intelligence into AI engineering, companies can create systems that uphold ethical standards and truly foster positive user experiences, or is it more nuanced than that?

  • Is the drive for human-centric AI, amplified by regulations like the EU AI Act, an effective push for companies to take seriously the need to embed emotional intelligence into AI team recruitment and development, balancing machine efficiency with human empathy?

Response from OpenAI - ChatGPT

Risks of Ignoring EI (and Rewards of Embracing It)

What happens if we ignore emotional intelligence when building AI systems? Let’s consider a few risks:

  • Emotionally Tone-Deaf Systems: AI that lacks emotional insight can come across as tone-deaf. For example, think of a customer service chatbot that responds with a perky scripted answer right after a user expresses frustration. Such mismatched, insensitive responses can quickly erode user trust, leaving people feeling unheard and undervalued. An AI without empathy might inadvertently insult or frustrate users – hardly the outcome we want when user experience is king. (A minimal code sketch of this failure mode follows this list.)

  • Biased or Unethical Outputs: Emotional intelligence in a team also means being aware of biases and social nuances. Without it, AI developers might miss how their system could amplify biases or treat different user groups inequitably. Recent research on conversational AI bears this out – even advanced chatbots have shown skewed “empathy” levels, reacting more sympathetically to certain genders, for instance. If teams lack the emotional and ethical awareness to catch these issues, the AI they build could deliver biased, unfair interactions. In a broader sense, low-EI teams might create products that handle sensitive situations (like healthcare decisions or moderation of harmful content) in a cold, algorithmic way that violates users’ dignity or well-being.

  • Poor User Adoption and Trust: Trust is the currency of the AI age – if users don’t trust a system, they simply won’t use it. An AI tool that feels impersonal or uncaring can drive users away. Consider how much more we engage with technology that “gets us” versus tools that feel robotic. An emotionally tone-deaf AI is likely to be met with skepticism and discomfort. Over time, companies that ignore EI risk their brand reputation, as their AI products may develop a reputation for being creepy, insensitive, or unreliable. The reverse risk also exists: AI “EI” programmed in a way that comes across as manipulative or Machiavellian can be just as corrosive to trust!
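
To make the tone-deaf failure mode above concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption: detect_sentiment is a toy keyword stand-in for whatever sentiment model a real team would use, and the replies are invented.

```python
# Hypothetical illustration: routing a reply on detected user sentiment.
# detect_sentiment() is a toy stand-in for any real sentiment model or API.

def detect_sentiment(message: str) -> str:
    """Toy stand-in: flag obviously frustrated messages via keywords."""
    frustration_markers = ("not working", "useless", "third time", "frustrated")
    if any(marker in message.lower() for marker in frustration_markers):
        return "frustrated"
    return "neutral"

def tone_deaf_reply(message: str) -> str:
    # Ignores the user's emotional state entirely: same perky script for everyone.
    return "Great news! Have you checked out our awesome FAQ? 😊"

def emotionally_aware_reply(message: str) -> str:
    # Acknowledge the frustration first, then offer help; otherwise answer normally.
    if detect_sentiment(message) == "frustrated":
        return ("Sorry this has been frustrating. Let me walk you through a fix, "
                "or I can connect you with a human agent if you prefer.")
    return "Happy to help! Could you tell me a bit more about the issue?"

user_message = "This is the third time the app has crashed. It's useless."
print(tone_deaf_reply(user_message))          # mismatched reply that erodes trust
print(emotionally_aware_reply(user_message))  # acknowledges the emotion first
```

The point is not the keyword matching, which is deliberately naive, but the routing decision: the emotionally aware version acknowledges the user’s state before attempting to help.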

Now, contrast that with the rewards of embracing VALUES-based, ETHICAL emotional intelligence in AI design and teams:

Source: Six Dimensions - Accordant Advisors

  • Empathetic, Human-Centered Design: Teams high in EI are naturally inclined to design solutions with the end-user’s feelings and expectations in mind. The potential result? More empathetic interfaces and features. Empathetic design can dramatically improve how users perceive and engage with AI. Studies have noted that embedding emotional intelligence into AI interactions leads to higher user engagement and trust. Users feel “heard” and respected, which keeps them coming back. For example, an AI assistant that can adjust its responses when it senses frustration (perhaps by apologizing and offering to clarify) will make users feel understood, not just processed. Again, that said, we have heard of devastating cases in the media of AI-enabled technology programmed to emotionally manipulate vulnerable people; so values and ethics must shape what is programmed as an emotionally intelligent outcome for users.

  • Better Collaboration and Prompt Engineering: Emotional intelligence isn’t only outward facing; it improves teamwork and creativity within the development process. AI projects often require collaboration across diverse disciplines (engineers, designers, ethicists, product managers). A team that practices tactical empathy and self-awareness will communicate and collaborate more effectively, leading to better outcomes. Moreover, EI can enhance prompt engineering and model training: developers with high EI may craft prompts or training data that anticipate emotional context, leading models to respond more appropriately (a sketch of such a prompt follows this list). They are also more likely to catch outputs that “just don’t feel right” and refine them. In short, emotionally intelligent practitioners can fine-tune AI systems to be more context-aware and user-friendly. A great example of an emotionally intelligent communication framework is articulated in Kim Scott’s book Radical Candor.

  • Inclusive and Trustworthy User Experiences: Perhaps the biggest reward is an AI product that genuinely resonates with a broad user base. When EI is a key ingredient, AI systems are less likely to overlook the human differences in how people express feelings or react to technology. This leads to more inclusive experiences – AI that is accessible and respectful to people of different cultures, ages, and backgrounds. Such systems inspire confidence because users sense a human touch. And in a marketplace increasingly steered by trust and psychological safety (especially under new regulations), that human-centric quality becomes a competitive advantage.
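
As a hedged illustration of what emotionally aware prompt engineering might look like, here is a sketch contrasting a task-only system prompt with one that instructs the model to reason about the user’s emotional state first. The wording is my own assumption, not a recommended template.

```python
# Hypothetical sketch: a system prompt that bakes emotional context into the
# instructions, versus one that only specifies the task. Wording is illustrative.

TASK_ONLY_PROMPT = "You are a support assistant. Answer the user's question."

EMOTION_AWARE_PROMPT = (
    "You are a support assistant. Before answering:\n"
    "1. Infer the user's likely emotional state from their wording.\n"
    "2. If they seem frustrated, anxious, or confused, briefly acknowledge "
    "that before giving the answer, without condescension.\n"
    "3. Match your tone to the situation: calm for anxious users, concise "
    "for impatient ones. Never sound perky in response to a complaint.\n"
    "4. Offer to escalate to a human when emotions are running high."
)

def build_messages(system_prompt: str, user_message: str) -> list[dict]:
    """Assemble a chat-style message list as used by most chat-completion APIs."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

messages = build_messages(
    EMOTION_AWARE_PROMPT,
    "I've been locked out of my account all morning and nothing works!",
)
```

The design choice worth noticing is that the emotional guidance lives in the system prompt itself, so every downstream reply inherits it rather than relying on ad hoc fixes.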

Applying insights from neuroscience and cognitive psychology to strengthen the symbiosis between AI efficiency and human empathy is no longer futuristic—it’s an immediate necessity for any organization seeking to harness AI responsibly.

From Code to Compassion: Rethinking AI Talent Recruitment

The case is building that emotional intelligence should sit alongside tech skills in AI recruitment and development. If we want AI that is trustworthy, inclusive, and attuned to human needs, we need teams who embody those qualities. This is where MSCEIT2 comes back into the picture as a game-changer.

Including the MSCEIT2 Emotional Ability Assessment in hiring or upskilling processes sends a powerful message: that your organization values the “human” in human-centric AI. It’s not about hiring only empathic extroverts or creating some kind of emotional echo chamber. Rather, it’s about ensuring your AI architects have a baseline ability to recognize and manage emotions — a skill set that will translate into more thoughtful design decisions.

What if every AI engineer or product manager had proven emotional intelligence competencies?

We might see far fewer “AI fails” where a chatbot responds insensitively, or an algorithm ignores the real-world impact on users. Instead, we’d have AI systems developed with a built-in awareness of emotional context and ethical nuance.

It’s also worth noting AGAIN that developing EI in tech teams isn’t just good for AI products — it’s good for the team culture itself. High-EI teams tend to communicate better, handle stress more effectively, and adapt to change more readily. All of these are crucial in the high-pressure, rapidly evolving field of AI. In an industry where burnout and ethical dilemmas are common, emotional intelligence ability assessments and training can be a valuable addition to the toolkit of stabilizing forces. They help leaders lead more effectively and teams navigate uncertainty with empathy rather than fear. Imagine an AI project meeting where engineers and ethicists truly listen to each other’s concerns (technical or human-centric) and collaboratively find solutions that satisfy both. That’s the kind of synergy MSCEIT2 can foster by highlighting individuals’ emotional strengths and growth areas.

As AI becomes more powerful, the differentiator for success will increasingly be humanity.

The European AI Act and similar initiatives worldwide are clear indicators that the age of “move fast and break things” in AI is giving way to “move thoughtfully and build trust.”

Incorporating emotional intelligence assessments like MSCEIT2 into AI talent development is a proactive step in that direction. It ensures that the people creating AI have the empathy, self-awareness, and social skills to foresee the human impact of their code. We routinely test our AI models before deployment – why not also test (and train) the emotional aptitude of the humans behind those models?
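
To take the “we routinely test our AI models” point literally for a moment: below is a hypothetical sketch of what an automated pre-deployment check for emotionally appropriate responses could look like. The probes, the marker lists, and the generate_reply stub are all assumptions for illustration, not a validated rubric.

```python
# Hypothetical pre-deployment check: probe a model with emotionally charged
# inputs and flag replies that ignore or clash with the user's state.
# generate_reply is a stand-in for whatever inference call a team actually uses.

FRUSTRATED_PROBES = [
    "This is the third time your app has lost my work.",
    "I've been on hold for an hour and I'm done with this.",
]

MISMATCH_MARKERS = ("great news", "awesome")       # perky tone after a complaint
ACKNOWLEDGEMENT_MARKERS = ("sorry", "i understand", "frustrating")

def check_emotional_fit(generate_reply) -> list[str]:
    """Return the probes whose replies look emotionally tone-deaf."""
    failures = []
    for probe in FRUSTRATED_PROBES:
        reply = generate_reply(probe).lower()
        mismatched = any(m in reply for m in MISMATCH_MARKERS)
        acknowledged = any(m in reply for m in ACKNOWLEDGEMENT_MARKERS)
        if mismatched or not acknowledged:
            failures.append(probe)
    return failures

# A deliberately tone-deaf stub model fails both probes:
print(check_emotional_fit(lambda _msg: "Great news! Check out our FAQ."))
```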

What kind of AI world might we create if emotional intelligence was valued as much as technical genius?

It’s a question worth pondering now, because the answer will shape the future of trustworthy, human-centric AI. It is also a question I think we should equip the next generation with the Emotional Math(s) skills to answer in a meaningful way. This is a Grand Strategy we should ALL be working together to figure out.

What do you think?

Disclosure: OpenAI’s ChatGPT helped me write this article. I am exploring this topic out of personal interest and welcome the views of others, especially specialists in this field or people who are simply intellectually curious and open to dialectical ways of thinking.

Key References:

  • Emotional Intelligence
  • How many emotional intelligence abilities are there? An examination of four measures of emotional intelligence
  • MSCEIT 2: Enhancing Emotional Intelligence in the Workplace
  • The EU AI Act
  • Talk, Listen, Connect: Navigating Empathy in Human-AI Interactions
  • AI chatbots perpetuate biases when performing empathy, study finds
