AI and Large Language Models Show Promising Skills in Emotional Intelligence Testing

Recent studies reveal that large language models like ChatGPT can effectively solve and generate emotional intelligence tests, outperforming humans and opening new possibilities for mental health and social training applications.
The research indicates that large language models (LLMs), the artificial intelligence systems that power conversational tools such as ChatGPT, are highly capable of both solving and creating emotional intelligence (EI) tests. These tests evaluate a person's ability to recognize, understand, and manage emotions, skills that are vital for social interaction throughout life.
A study conducted by researchers at the University of Bern and the University of Geneva examined the performance of multiple LLMs, including ChatGPT-4, Gemini 1.5 Flash, Claude 3.5 Haiku, and DeepSeek V3, on five widely used EI assessments originally developed for humans. The models were asked to interpret short emotional scenarios and select the most appropriate responses; the tests measure emotional recognition, reasoning, and regulation.
Remarkably, the LLMs achieved an average accuracy of 81% on these tests, significantly surpassing the human average of 56%. Furthermore, ChatGPT-4 not only successfully solved the tests but also generated new EI test items with clarity and realism comparable to the original ones, indicating a sophisticated understanding of emotional concepts.
The study also examined how well the models' newly created test items held up as psychological assessments. Over 460 human participants rated both the original and the AI-generated tests on criteria such as difficulty, clarity, and realism. Results showed close alignment between the AI-produced tests and the human-developed originals, underscoring the models' capacity for deep emotional reasoning.
These findings open promising avenues for developing automated tools in psychological assessments, training materials, and social simulation scenarios. Such applications could streamline the creation of emotional intelligence resources and enhance the capabilities of social agents like mental health chatbots and educational tutors, particularly in emotionally sensitive interactions.
Looking ahead, researchers aim to test the models in more complex, real-world emotional conversations and assess their cultural sensitivity, as current models primarily reflect Western-centric data. Overall, this study underscores the growing potential of AI to understand and replicate human emotional skills, offering new tools to support mental health, social skills training, and human-AI interaction.