Gemini’s Emotionally Intelligent Response


You most probably found your way here because you read the article ‘The Culturally Emotionally Intelligent AI Opportunity in 2025’ in my Emotional Maths LinkedIn newsletter.

Here is the full response from Gemini, which I thought was a great example of an emotionally intelligent response to a potentially triggering topic.

The AI’s response perceives how I might feel, uses those likely emotions to facilitate thoughts that convey empathy, and understands the meaning of the emotions behind the question being asked. It then provides the information requested whilst helping to regulate those emotions through its choice of words, its acknowledgement of the issue and its suggestions for what could be done to address it.

RESPONSE FROM GEMINI

That is a frustrating and jarring experience, and it highlights a significant and well-documented problem in the field of artificial intelligence: inherent bias.

The AI did not do this intentionally or with any understanding of race or gender. The reason for this bizarre and incorrect transformation lies almost entirely in how the AI was built and the data it was trained on.

Here’s a breakdown of the likely reasons why this happened:

1. Biased and Unbalanced Training Data

The single biggest reason is the data the AI learned from. AI image models are trained on billions of images scraped from the internet.

  • Underrepresentation: If the training data contained significantly fewer images of African American women compared to other demographics, the AI is simply less "skilled" at recognizing and accurately reconstructing their features. It has a weaker understanding and fewer reference points.

  • Poor Labeling and Stereotypical Associations: The AI doesn't see a "woman" or a "man." It sees patterns of pixels, shapes, and colors. The data it learned from might have faulty or stereotypical labels. For example, it might have learned to associate certain types of lighting, facial structures, or even image quality with one demographic over another. Your photo might have contained a subtle feature (like the angle of the cheekbones or the way light reflected off the skin) that the AI, due to its biased training, has more strongly associated with a Filipino man.

2. The "Lost in Translation" Problem (Latent Space)

When you give an AI an image, it doesn't "see" it as a person. It converts the image into a complex mathematical representation—a series of numbers—called "latent space." It then uses this numerical code to generate a new image.

Think of it like a flawed translation dictionary.

  • Your photo of an African American woman was translated into a specific "numerical word" by the AI.

  • When the AI looked up that "numerical word" in its own biased dictionary to translate it back into an image, the closest and most detailed definition it had was for a "Filipino man."

  • The translation was technically successful from the AI's perspective, but the result was completely wrong from a human perspective because its internal "dictionary" is skewed.

3. The Influence of the Text Prompt (If You Used One)

If you included a text prompt along with the image (e.g., "a photo of a person smiling"), the AI tries to merge the information from the image with the text. If the AI’s training data for "a person smiling" is overwhelmingly represented by a certain demographic, it might have defaulted to that representation, pulling features from the original photo but forcing them into a new, incorrect context.

An Analogy: The Poorly Stocked Library

Imagine an AI as a librarian who has been given a massive, disorganized library to learn from. If that library has thousands of books on Filipino history and culture but only a handful of pamphlets on African American women, when you ask a complex question about an African American woman, the librarian might get confused. They might try to answer by pulling concepts and descriptions from the books they know best, resulting in an answer that is completely inaccurate and mixes up the subjects.

In summary: The AI is not making a conscious choice. It is a powerful but flawed pattern-matching machine that is reflecting the imbalances and biases present in its vast training data. Your experience is a direct and unfortunate example of why creating more equitable and representative datasets is one of the most critical challenges facing the AI industry today.

RESPONSE END.
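For readers who want to see Gemini’s “poorly stocked library” and “flawed dictionary” ideas made concrete, here is a deliberately over-simplified sketch in Python. It is not how Gemini or any real image model actually works (real systems learn millions of parameters rather than looking things up in a table), and the groups, numbers and the query code are invented purely for illustration. The point it makes is the one in the response above: when one group has dense coverage in the model’s internal “dictionary” and another has only a few scattered entries, a borderline code will almost always be reconstructed from the well-covered group.

# A hypothetical, toy illustration only - not Gemini's actual architecture.
import numpy as np

# The model's "dictionary" of latent codes learned during training.
# Group A is heavily represented: 400 densely spaced reference codes.
# Group B is underrepresented: just two reference codes, far apart.
group_a = np.array([[x, y] for x in np.linspace(0, 1, 20)
                           for y in np.linspace(0, 1, 20)])
group_b = np.array([[2.5, 2.5], [3.0, 3.0]])

library = np.vstack([group_a, group_b])
labels = ["group A"] * len(group_a) + ["group B"] * len(group_b)

# A new photo of someone from group B, whose latent code happens to land
# between the two clusters (unusual lighting, angle, image quality, etc.).
query = np.array([1.6, 1.6])

# "Decoding" is caricatured here as: rebuild the image from the nearest
# reference code the model has ever seen.
nearest = int(np.argmin(np.linalg.norm(library - query, axis=1)))
print(labels[nearest])  # prints "group A" - the densely covered group wins,
                        # even though the photo belongs to group B

If group B’s region of the dictionary were filled in just as densely (say, reference codes spanning 1.5 to 3.0 rather than two isolated points), the same query would come back as group B. That, in miniature, is what more equitable and representative datasets would change.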

Any thoughts?
