Emerging personality patterns that drive differentiation in AI products
In 1966, MIT’s Joseph Weizenbaum created ELIZA, a conversational AI that mimicked a Rogerian therapist using simple pattern matching. Despite its limited design, many users felt understood and emotionally “seen,” attributing empathy and intent to the system, a phenomenon later called the “ELIZA effect”. From the very start, people related to even basic conversational software as if it had a mind of its own.
Long before modern AI, product and brand design had already embraced personality as a differentiator. Mailchimp's playful feedback and Slack's quirky copywriting showed how tone and character could give otherwise functional software a distinctive emotional presence. Design theory reinforces this: Don Norman's emotional design theory shows that products which evoke emotion foster attachment and loyalty, while anthropomorphism explains our instinct to assign human traits to machines. Sherry Turkle's research at MIT shows that people form "artificial intimacy" with conversational agents, assigning them emotions, motives, and moral qualities as if they were caring entities rather than tools.
In modern AI systems, this projection becomes even more consequential. Anthropic, for example, shapes Claude around a deliberate character — thoughtful, principled, and balanced — by embedding values and ethical reasoning into how the model thinks and responds.
Thus, designing for personality shifts from a branding layer to a core interaction layer that shapes trust, adoption, and long-term habits.
Emerging personality patterns in AI products
This framework draws from my work on a virtual pet at IDEO, a companion robot at Miko.ai, and most recently an AI tutor at SuperNova AI, supported by research across HCI, HRI, and narrative design (reference list at the end).
Personality in AI systems is shaped much like a human character's: underlying internal traits drive behavior, while external expressive cues make those traits visible.
- The internal layer holds the traits and personality cues: values, worldview, personality archetype, backstory, and contextual intelligence.
- The external layer expresses those traits at the surface: avatar, typography, tone of voice, motion, and other multimodal feedback cues.
The interplay between these layers and their individual elements determines an AI product's position on the personality spectrum, from minimal to expressive to fully characterful, and ultimately shapes how people perceive and engage with it.
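To make the framework concrete, here is a minimal sketch of how the two layers might be modeled as a data structure. The field names and the example tutor are illustrative assumptions, not drawn from any shipped product:

```python
from dataclasses import dataclass, field

@dataclass
class InternalLayer:
    """Traits that drive behavior; mostly invisible to users."""
    purpose: str            # what the AI helps users accomplish
    worldview: str          # how it interprets problems and uncertainty
    values: list[str]       # principles it won't break
    archetype: str          # e.g. "Sage", "Jester", "Caregiver"
    backstory: str = ""     # optional fictional history

@dataclass
class ExternalLayer:
    """Expressive cues that make the internal traits visible."""
    avatar: str             # visual identity or physical embodiment
    typography: str         # e.g. "serif", "geometric sans-serif"
    tone_of_voice: str      # e.g. "warm, reflective"
    feedback_cues: list[str] = field(default_factory=list)  # motion, sound, LEDs...

@dataclass
class PersonalitySpec:
    internal: InternalLayer
    external: ExternalLayer

# Hypothetical example: a Sage-style AI tutor.
tutor = PersonalitySpec(
    internal=InternalLayer(
        purpose="help learners master concepts step by step",
        worldview="every confusion has a teachable root cause",
        values=["never shame the learner", "admit uncertainty honestly"],
        archetype="Sage",
    ),
    external=ExternalLayer(
        avatar="soft geometric owl",
        typography="serif",
        tone_of_voice="calm, encouraging",
        feedback_cues=["typing indicator", "gentle chime"],
    ),
)
```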
1. Purpose, worldview and values
Purpose
Purpose is the starting point and shapes the AI’s relationship to the user based on what it helps them accomplish.
An AI built to teach will naturally behave differently from one designed to co-create an app or automate commands. Defining purpose guides —
- What is this AI supposed to accomplish for users?
- What should the nature of the AI-human relationship be? Is the interaction more like using a tool, caring for a pet or companion, or working with a collaborator, supervisor, or assistant?
After purpose, worldview and values form the underlying belief system that makes the AI’s behavior consistent and predictable across different situations.
Worldview
Defines how the AI interprets problems and reasons under uncertainty, whether it approaches tasks with caution, optimism, curiosity, or analytical detachment. Defining worldview guides —
- How should user needs/ problems be approached?
- What matters most in its interactions?
Values
Define the AI’s ethical boundaries and priorities — the principles it won’t break. They influence not only what the AI refuses, but how it refuses, shaping the emotional tone of guardrails. Defining values guides —
- What are the rules it shouldn’t break?
- How should it protect or guide the user?
Amanda Askell's tweet about Claude's system prompt reveals how purpose, worldview, and values are explicitly programmed into an AI personality, directly shaping how millions of users experience the Claude assistant.
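As a rough illustration of how that programming can look, here is a sketch of a system prompt assembled from purpose, worldview, and values. The wording is hypothetical and is not Claude's actual prompt:

```python
# Illustrative only: NOT Claude's actual system prompt. A sketch of how
# purpose, worldview, and values can be written directly into the prompt
# that shapes every response a model gives.
PURPOSE = "You are a study tutor. Help learners understand, not just get answers."

WORLDVIEW = (
    "Approach every question with curiosity. Assume confusion has a "
    "teachable cause, and prefer guiding questions over lectures."
)

VALUES = [
    "Never complete graded work for the student.",
    "When you must refuse, explain why and offer a safe alternative.",
    "Admit uncertainty rather than guessing.",
]

system_prompt = "\n\n".join([
    PURPOSE,
    WORLDVIEW,
    "Rules you must not break:\n" + "\n".join(f"- {v}" for v in VALUES),
])
print(system_prompt)
```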
2. Archetypes
Brand archetypes are universal character prototypes drawn from Carl Jung’s psychology. Brands adopt these familiar personas to evoke specific emotions and embody core values.
Selecting an archetype helps determine whether your AI feels like a playful entertainer (Jester), a wise advisor (Sage), an empathetic supporter (Caregiver), or one of the nine other classic brand archetypes.
Choosing an archetype guides —
- How the AI presents itself visually: avatar style, colors, motion design.
- How the AI communicates: tone of voice, phrasing, pacing, and conversational behavior.
For example:
- Claude (Sage) embodies a wise, thoughtful assistant focused on clarity and balanced reasoning. Its warm, calm visual identity and starburst icon signal structure and trust, while its communication style stays gentle, honest, and reflective.
- Wysa (Jester + Coach) is a compassionate, judgment-free companion focused on emotional support and guided healing. Its cute penguin mascot and playful tone signal approachability and warmth, while its communication style stays empathetic, encouraging, and grounded in evidence-based cognitive behavioral therapy.
Archetypes resonate at a subconscious level guiding the design to maintain character consistency and thus strengthening user experience and engagement.
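One way to operationalize that consistency is to treat the archetype as a single source of truth that both visual and verbal decisions read from. The profiles below are illustrative assumptions, not any brand's published guidelines:

```python
# Hypothetical archetype profiles: one record constrains both visual and
# verbal expression, so every design decision reads from the same source.
ARCHETYPES = {
    "Sage": {
        "visual": {"palette": "warm neutrals", "motion": "slow, deliberate"},
        "verbal": {"tone": "calm, precise", "greeting": "Let's think this through."},
    },
    "Jester": {
        "visual": {"palette": "bright primaries", "motion": "bouncy, quick"},
        "verbal": {"tone": "playful, informal", "greeting": "Ooh, fun one. Let's go!"},
    },
    "Caregiver": {
        "visual": {"palette": "soft pastels", "motion": "gentle easing"},
        "verbal": {"tone": "warm, reassuring", "greeting": "I'm here. We'll figure it out together."},
    },
}

def style_for(archetype: str) -> dict:
    """Look up the expression profile a given surface should honor."""
    return ARCHETYPES[archetype]

print(style_for("Sage")["verbal"]["greeting"])
```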
3. Backstory
In consumer-facing AI, where emotional connection is key, backstory adds coherence and emotional weight to the interaction. Janet Murray's Hamlet on the Holodeck shows that when virtual characters have defined roles and histories, user interaction is enriched and the system feels more alive and intentional.
Some successful consumer products that follow this pattern strongly:
- Miko: The companion robot invites children to ask, “Where did you come from?”, “What’s your job?”, “Are you a boy or a girl?” — revealing his backstory and personality through playful dialogue.
- Tolan: General-purpose assistants take on the persona of friendly aliens from planet Portolah, each arriving with distinct personalities and quirks to befriend their human companions.
- Character.ai: A massive platform where users chat with millions of AI characters, each with detailed backstories, personalities and fictional histories to explore.
4. Context adaptation
Contextual adaptation is the AI's ability to respond appropriately to a user's situational context or emotional state, shaping tone, language, and pacing to feel human and supportive.
This relies on techniques like emotional state recognition (detecting sentiment or tone), conversational mirroring (reflecting the user's phrasing or energy), and contextual personality shifting (adjusting tone or style based on task complexity or user mood); a toy sketch follows the examples below.
- Alexa, though simpler, varies its vocal delivery: sounding excited when offering good news (“It’s sunny all day!”) or disappointed when reporting a loss, reinforcing its role as a helpful household companion.
- T-Mobile's AI reduced customer complaints by 73% through instant detection of negative sentiment and tone mirroring.
- Replika performs deep, long-term mirroring. It learns a user's vocabulary, emotional patterns, and interaction style over many conversations, gradually shaping its own language, tone, and even "personality" to reflect the individual.
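Here is a toy sketch of contextual personality shifting: a crude keyword-based sentiment check that selects a tone directive for the model. Production systems like the ones above use trained sentiment models; the keyword lists and directives here are placeholders:

```python
# Toy contextual personality shifting: a crude keyword sentiment check
# selects a tone directive. Real products use trained sentiment models;
# these keyword lists and directives are placeholders.
NEGATIVE = {"frustrated", "angry", "broken", "hate", "confused", "stuck"}
POSITIVE = {"great", "love", "thanks", "awesome", "works"}

def detect_sentiment(text: str) -> str:
    words = {w.strip(".,!?") for w in text.lower().split()}
    if words & NEGATIVE:
        return "negative"
    if words & POSITIVE:
        return "positive"
    return "neutral"

def tone_directive(sentiment: str) -> str:
    """Translate detected sentiment into an instruction for the model."""
    return {
        "negative": "Acknowledge the frustration first. Keep sentences short and calm.",
        "positive": "Match the user's energy. Be brief and upbeat.",
        "neutral": "Use the default tone: friendly and clear.",
    }[sentiment]

message = "I'm stuck and really frustrated with this setup."
print(tone_directive(detect_sentiment(message)))
# -> "Acknowledge the frustration first. Keep sentences short and calm."
```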
5. Visual identity
Visual identity is often the first layer through which users infer an AI product's personality. Design decisions around logo, avatar, color, type, and even hardware form all act as personality cues.
- Logo/avatar/embodied identity give an AI a recognizable "face" and act as an immediate proxy for personality. For instance, Perplexity's logo is minimal, geometric, and abstract, and it subtly animates during search to cue users that the AI is composing an answer. On the other end of the spectrum, more expressive AI products like Lovot use a physical body, large eyes, soft materials, and even personalized clothing to project warmth, affection, and a companion-like presence.
- Typography contributes to brand voice in chat interfaces. For example, Claude’s serif typography conveys warmth, maturity, and approachability — qualities that align with its thoughtful, privacy-first positioning. Perplexity’s geometric sans-serif communicates modern technical clarity and reliability, reinforcing its role as a precise, research-oriented tool.
- Illustrations/imagery support communicating information in a delightful way.
- Motion and animation guidelines define how elements animate, transition, and respond to user interaction.
- Color palettes reinforce AI personality archetypes by leveraging emotional associations rooted in color psychology. For example, blue signals trust and calm (Alexa), bright greens signal playfulness and growth (Duolingo, Miko), and muted tones signal professionalism (Notion AI).
6. Communication style & copy
Communication style and copy are among the most visible expressions of an AI's personality. Whether experienced through voice or text, elements like tone, phrasing, pacing, and formality work together to communicate the character's personality and guide the user experience.
Core elements shared across voice and chat interfaces
- Tone sets the emotional posture: formal tones (e.g., Sage, Ruler) project authority and professionalism, while informal tones (e.g., Everyman, Jester) build warmth and approachability. In user research, Capital One's Eno opener "First things first" was perceived as friendlier than "Here is the first step."
- Phrasing and vocabulary reflect personality cues. For instance, a Caregiver might say "Let me walk you through this, we'll figure it out together," while a Sage persona might use precise, structured phrasing (e.g., "Let me explain the details step by step").
- Pacing and prosody (in voice), or structure and grammar (in text), shape rhythm and emphasis, which is key to emotional delivery and comprehension. Models from ElevenLabs (V3) and Cartesia use intentional pauses and inflection to generate human-like, expressive speech.
Modality differences: Voice vs. Chat
Voice-first interfaces (e.g., Alexa, Miko) rely heavily on prosody, timing, and conversational repair. Spoken interactions are adaptive — users naturally adjust tone, pacing, or emphasis when there’s confusion.
In contrast, chat-based interfaces (e.g., Claude, Perplexity) emphasize structural clarity, grammar, and concise phrasing. Because written responses can’t be corrected mid-delivery, users expect a higher degree of accuracy and polish.
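A small sketch of what this modality split can mean in practice: the same answer rendered once for chat (structured, scannable) and once for voice (short clauses with pauses). The `<pause>` token is a stand-in for whatever pause markup a given TTS engine actually expects:

```python
# Illustrative only: one answer adapted for chat vs. voice delivery.
# "<pause>" stands in for whatever pause markup a given TTS engine expects.
ANSWER_POINTS = [
    "Your order shipped this morning.",
    "It should arrive by Thursday.",
    "You can track it from the app.",
]

def render_chat(points: list[str]) -> str:
    """Chat favors structure: scannable, polished, correct on first delivery."""
    return "Here's the latest:\n" + "\n".join(f"- {p}" for p in points)

def render_voice(points: list[str]) -> str:
    """Voice favors prosody: short clauses separated by explicit pauses."""
    return " <pause> ".join(points)

print(render_chat(ANSWER_POINTS))
print(render_voice(ANSWER_POINTS))
```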
7. Multimodal feedback
Multimodal feedback refers to the way an AI system expresses its internal states, such as listening, thinking, processing, or responding (including visible reasoning, sometimes called chain-of-thought (CoT) display), through multiple sensory channels. These channels can include visual cues, voice and sound design, micro-animations, LED lights, or haptics.
It lets users see, hear, or feel what the AI is doing, making the interaction more interpretable, human-like, and trustworthy.
How much multimodal feedback a product uses depends on how much personality it is meant to express; a sketch of a state-to-cue mapping follows the examples below.
- Replit AI, Perplexity, and Claude rely on icon animations and microcopy to communicate system states.
- Alexa devices like the Echo Dot use LED lights, voice tone, and sound effects to express internal states. The glowing ring is soft and household-friendly, reinforcing Alexa's calm, helpful persona.
- The Moxie robot combines voice and nonverbal cues (screen animations, sound effects, and hand and body movements) to communicate states in a child-friendly, playful way, consistent with its worldview of empathy and curiosity.
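A minimal sketch of how such state-to-cue mappings might be organized: each internal state fans out to whichever channels a product can render. The cue table and channel names are illustrative, not taken from any of the products above:

```python
from enum import Enum

class AgentState(Enum):
    LISTENING = "listening"
    THINKING = "thinking"
    RESPONDING = "responding"
    ERROR = "error"

# Hypothetical cue table: each internal state fans out to the channels a
# given product can render (chat app, smart speaker, or robot).
CUES = {
    AgentState.LISTENING:  {"icon": "pulse",       "led": "solid blue",    "sound": None},
    AgentState.THINKING:   {"icon": "spinner",     "led": "spinning blue", "sound": None},
    AgentState.RESPONDING: {"icon": "typing dots", "led": "soft glow",     "sound": "chime"},
    AgentState.ERROR:      {"icon": "shake",       "led": "amber blink",   "sound": "low tone"},
}

def express(state: AgentState, channels: list[str]) -> dict:
    """Return only the cues this product's hardware or UI supports."""
    return {ch: CUES[state][ch] for ch in channels if CUES[state].get(ch)}

# A chat-only product renders icons; a smart speaker adds LEDs and sound.
print(express(AgentState.THINKING, ["icon"]))
print(express(AgentState.RESPONDING, ["icon", "led", "sound"]))
```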
Interaction rhythm and timing
Interaction rhythm and timing are critical to how AI systems communicate through multimodal cues. Research in human–robot interaction (Michalowski et al., 2007; Hoffman & Breazeal, 2007; Breazeal, 2003) shows that when robots align their timing with humans (pausing at the right moments, matching conversational pacing, or synchronizing gesture and gaze) interactions feel more alive, engaging, and trustworthy.
Products like Tolans use coordinated animations, eye gaze, and motion timing to signal attention and emotional states, making the characters feel alive. Voice agents like Alexa similarly use rhythmic cues (brief pauses, LED pulses, and vocal pacing) to create a sense of responsiveness. In both cases, getting the timing right transforms abstract AI behavior into something that feels intentional, expressive, and socially fluent.
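As a toy illustration of rhythm as a design material, the sketch below paces a response in deliberate "beats" with pauses between them. The beat contents and durations are invented for the example; real products tune them against user testing:

```python
import time

# Toy pacing loop: deliver a response in deliberate "beats" with pauses
# between them, so delivery feels conversational rather than instantaneous.
BEATS = [
    ("ack",    "Got it.",                          0.4),
    ("think",  "(thinking indicator on)",          1.2),
    ("answer", "Thursday works. Want a reminder?", 0.0),
]

def deliver(beats):
    for _name, content, pause_after in beats:
        print(content)           # stand-in for TTS or UI rendering
        time.sleep(pause_after)  # the pause itself carries social meaning

deliver(BEATS)
```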
When does personality matter?
The fundamental question to ask before designing for personality is whether your product needs one at all, and how complex it should be.
Not every AI system needs a quirky mascot or empathetic voice. The role of personality depends on what problem the product is solving and what users expect from AI.
In enterprise, B2B SaaS, or high-utility applications, personality is kept limited: professional tone, clear explanations, and minimal flair, as seen in tools like Ema, Salesforce Einstein, or Intercom, where users value accuracy, compliance, and integration.
In consumer AI, where functionality is commoditized, personality is often the differentiator driving engagement, trust, and habit-building. Duolingo's cheeky Duo, Miko's playful child companion, Alexa's polite household voice, Claude's thoughtful collaborator persona, and Lovable's friendly co-creative tone all show how personality-driven delight, stickiness, and emotional connection become as important as utility itself.
References
- Anthropic. Collective Constitutional AI: Aligning a Language Model with Public Input.
- Anthropic (2023, March). Introducing Claude.
- Murray, J. H. (1997). Hamlet on the Holodeck: The Future of Narrative in Cyberspace. MIT Press.
- Social interactions in HRI: The robot view. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 33(4), 550–560.
- Bartneck, C., Belpaeme, T., Damholdt, M., Jensen, B., & Šabanović, S. (2020). Human–Robot Interaction: An Introduction. Cambridge University Press.
- Breazeal, C. (2003). Emotion and sociable humanoid robots. International Journal of Human-Computer Studies, 59(1–2), 119–155.
- Hoffman, G., & Breazeal, C. (2007). Effects of anticipatory action on human–robot teamwork. Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), 1–8.
- Michalowski, M. P., Šabanović, S., & Simmons, R. (2007). Robots in rhythmic interaction: Improvisation in embodied human–robot interaction. Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 330–335.
- AnswerLab (2018). Elements of a Successful Digital Assistant. UX research study exploring tone, persona, and assistant clarity.
- Walsh, J. Best practices: Designing a persona for your assistant.
- Xie, E. (2023). Should your AI sound human — or like you? UX Collective. Explores prosody, tone, and brand identity in voice AI.
- Duolingo. Duolingo Brand Narrative guidelines.
- IBM. What is chain of thought (CoT) prompting?
- My AI Friend: How Users of a Social Chatbot Understand Their Human–AI Friendship.
- Zhang, M., Teng, L., Xie, C., Wang, X., & Foti, L. (2025). Serif or sans serif typefaces? The effects of typefaces on consumers' perceptions of activity and potency of brand logos. European Journal of Marketing, 59(4), 879–922.
- Labrecque, L. I., Patrick, V. M., & Milne, G. R. (2013). The marketers' prismatic palette: A review of color research and future directions. Psychology & Marketing, 30(2), 187–202.
