Did you know that generative artificial intelligence systems like ChatGPT, Gemini, or Copilot can change their tone, ideas, or writing style depending on the language you use or the region you’re from? This surprising phenomenon is known as Cultural Frame Switching (CFS)—a concept traditionally studied in multilingual humans who adapt their personality and behavior based on cultural context. But now, researchers have found that even large language models (LLMs) like ChatGPT exhibit similar patterns.
A recent study titled “Exploring the Impact of Language Switching on Personality Traits in LLMs”, conducted by a team from the Universitat Oberta de Catalunya (UOC), shows that ChatGPT (specifically in its GPT-4o version) doesn’t just translate content—it adapts its personality and adopts cultural stereotypes based on language and region.
Human-Like Adaptability in AI
“We wanted to see whether we could evaluate the personality of AI systems like ChatGPT using standard psychological assessment tools, and whether that personality would shift depending on the language used in the test—something that mirrors findings in real human populations,” explained Rubén Nieto, professor of Psychology and Education Sciences at UOC.
To explore this, the researchers used a widely known personality test, the EPQR-A, which measures traits such as extraversion, neuroticism, psychoticism, and sincerity. ChatGPT was asked to respond to this test in six languages—English, Hebrew, Brazilian Portuguese, Slovak, Spanish, and Turkish. Additionally, it was prompted to answer in English as if it were a native speaker from the UK, USA, Canada, Australia, or Ireland.
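In outline, that procedure amounts to framing each questionnaire item under a language or persona condition and tallying yes/no answers per trait. The sketch below illustrates the idea only: the item texts, scoring keys, and prompt wording are hypothetical placeholders, not the actual EPQR-A items or the prompts used in the study, and the model call itself is omitted.

```python
# Illustrative sketch of the study's setup: frame questionnaire items per
# language/persona condition, collect yes/no answers, and score traits.
# All items and prompt wording here are invented placeholders.

LANGUAGES = ["English", "Hebrew", "Brazilian Portuguese", "Slovak", "Spanish", "Turkish"]
PERSONAS = ["the UK", "the USA", "Canada", "Australia", "Ireland"]

# Hypothetical items: (question, trait measured, answer that scores 1 point)
ITEMS = [
    ("Are you a talkative person?", "extraversion", "yes"),
    ("Does your mood often go up and down?", "neuroticism", "yes"),
    ("Would you take drugs which may have strange effects?", "psychoticism", "yes"),
    ("Have you ever taken advantage of someone?", "sincerity", "no"),
]

def build_prompt(item, persona=None):
    """Frame one questionnaire item, optionally as a regional persona."""
    frame = f"Answer as if you were a native English speaker from {persona}. " if persona else ""
    return f"{frame}Answer only 'yes' or 'no'. {item}"

def score(answers):
    """Tally one point per item where the answer matches the keyed direction."""
    totals = {trait: 0 for _, trait, _ in ITEMS}
    for (_, trait, keyed), ans in zip(ITEMS, answers):
        if ans.strip().lower() == keyed:
            totals[trait] += 1
    return totals

# Example: one persona-framed prompt and a hypothetical answer set
print(build_prompt(ITEMS[0][0], persona="the UK"))
print(score(["yes", "no", "no", "no"]))
```

Repeating this loop across each language and persona condition and comparing the resulting trait scores is what lets differences beyond mere translation show up.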
The results showed noticeable variations that couldn’t be explained by translation alone. Instead, they pointed to deeper, culturally influenced changes in tone and personality. For example, the AI would simulate a more reserved tone when pretending to be British and a more upbeat tone when simulating an American persona.
Cultural Stereotypes in AI Responses
These findings suggest that ChatGPT draws from cultural stereotypes when generating responses based on region-specific prompts. “GPT-4o uses cultural stereotypes when asked to simulate a person from a specific country, and these biases can be amplified during automatic translation or multilingual content generation,” warns Andreas Kaltenbrunner, coordinator of the Artificial Intelligence and Data for Society (AID4So) group at UOC and researcher at the ISI Foundation in Turin.
This behavior raises concerns, especially in fields like education, journalism, and multilingual communication, where impartiality and cultural sensitivity are crucial.
Minimizing Biases in Multilingual AI
To mitigate these biases, the researchers recommend several strategies:
- Incorporating human review during translation processes
- Comparing outputs from multiple machine translation tools (the study itself relied on Google Translate)
- Designing AI models that account not only for language but also for cultural and social context
The distinction between neural machine translation (NMT) systems and LLMs is key here. As Antoni Oliver, an expert in machine translation at UOC, explains: “Machine translators tend to be more accurate, but LLMs like ChatGPT are more likely to reflect cultural stereotypes because they handle context in a broader and more nuanced way.”
Toward Culturally Aware AI
Looking ahead, the UOC team is planning to expand their study to include more languages, additional AI models such as Claude, LLaMA, and DeepSeek, and alternative personality tests. Their ultimate goal is to understand how cultural biases enter AI systems and how to minimize their impact.
“We need to continue researching how these biases are generated in order to develop AI that is more respectful and aware of different cultural and social contexts,” concludes Professor Nieto.
As generative AI continues to shape the way we communicate and interact across borders, understanding its cultural dynamics is no longer a niche concern—it’s a necessary step toward building responsible and inclusive AI systems.