Creating Empathy in an Artificial World

Written by Sherine Kazim
February 23, 2017

Ever since I wrote that piece, Emotive UI, about designing intention and reaction for the full spectrum of 32 emotions, one thing continues to plague me: empathy. There’s no doubt that the best experience designers are highly empathetic. They have an incredible ability to interpret and relate to users which, in turn, helps them create more engaging interactions. Paramount to these experiences is personalization — always giving the impression that each interaction is unique and specifically catered to that particular user. These days, designing with personal data is table stakes, but what about personality data? Is it possible to design for personality in order to create higher levels of empathy?

MIT Professor Rosalind W. Picard wrote about Affective Computing in 1995, describing it as the ability to simulate empathy. Its premise relies on a machine’s ability to adapt and respond appropriately to human emotions. These emotions are derived from human behavior. By behavior, I mean the ways in which a person communicates aspects of their personality, either through implicit or explicit actions.

Typically, behavior and interaction among humans are mostly implicit — passive emotions and expressions. Subtle cues are manifested through voice, gesture, meaning, and language, all of which form a person’s unique personality. If we downplay the implicit piece, and don’t simultaneously take into account the five senses that help us process communication, we could easily misinterpret someone’s behavior, misidentify their emotion, and ultimately miss a connection. Further, without fully understanding how those data points relate to each other and to the message, we’re left guessing at user intention.

Relationship data stems from our sensory streams working together so we can analyze, understand and emotionally respond to any given situation. For example, if someone uses non-threatening language, while speaking softly and avoiding eye contact, we may infer from those three sensory streams that this person is shy. In turn, we may consider a measured response with non-confrontational verbal and emotional language. If, for whatever reason, we lack confidence in our potential responses, we can seek out more relationship data — content and context — for further analysis and validation.
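The shy-person example above can be sketched in code. This is a toy illustration of fusing sensory streams into an inference, not a real affective-computing API; the stream names, values, and the all-streams-must-agree rule are all assumptions.

```python
# Toy "relationship data" fusion: combine three hypothetical sensory
# streams and only commit to an inference when they corroborate.

def infer_trait(streams: dict) -> str:
    """Infer a coarse trait from three hypothetical sensory streams."""
    agreeing = sum([
        streams.get("language") == "non-threatening",
        streams.get("voice_volume") == "soft",
        streams.get("eye_contact") == "avoidant",
    ])
    # When the streams disagree, we "seek more relationship data"
    # rather than force a guess.
    return "shy" if agreeing == 3 else "undecided"

observation = {
    "language": "non-threatening",
    "voice_volume": "soft",
    "eye_contact": "avoidant",
}
print(infer_trait(observation))  # all three streams agree
```

The point of the sketch is the fallback branch: a system that can say "undecided" and go gather more content and context is closer to how humans validate an emotional read than one that always emits a label.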

For empathetic experience designers, data sets are our new palettes. In particular, relationship data, which helps us develop our human intuition, will be at the forefront of machine prediction. With Apple purchasing emotion-focused startup Emotient, and facial recognition startup Realface, it appears that our design future will emphasize personality-driven data. This is important because having geographic, contextual, demographic, psychographic, and analytics data — the hallmarks of personalization — won’t be enough anymore. Instead, we’ll have to contend with an increased hunger for human data. We’ll continue to see AI materialize on various physical and digital platforms allowing us to determine the user’s emotional state far better than any empathetic designer can do with just user interviews and audits.

To successfully define personality as it relates to communication, designers will now have to combine four different types of behavioral data:

  • Gestural Data. The way we would identify conversational tone via face and hand motions.
  • Physiological Data. The way we would measure heart rate, blood pressure and skin temperature.
  • Facial Recognition. The way we would verify a person, and interpret their emotions and expressions.
  • Deep Learning. The way we would understand speech, and how language is used.

It’s that potent mix of personal and personality data that will give way to hyper-customized experiences. It’s a mix that could ultimately help us determine the user’s intention.
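One way to picture combining the four behavioral data types above is as a single snapshot that gets fused into an estimated state. The field names, thresholds, and the corroboration rule below are assumptions for illustration; a real system would weight and cross-validate each stream.

```python
# Minimal sketch: one record per user-moment, combining gestural,
# physiological, facial, and language-derived signals.
from dataclasses import dataclass

@dataclass
class BehavioralSnapshot:
    gestural: str        # e.g. conversational tone read from face/hand motion
    physiological: dict  # e.g. heart rate, blood pressure, skin temperature
    facial: str          # e.g. dominant expression from facial recognition
    linguistic: str      # e.g. sentiment from deep-learning speech models

def estimate_state(s: BehavioralSnapshot) -> str:
    """Very rough fusion: flag agitation only when streams corroborate."""
    elevated = s.physiological.get("heart_rate", 0) > 100  # invented threshold
    tense = s.gestural == "tense" or s.facial == "angry"
    return "agitated" if elevated and tense else "calm"

snapshot = BehavioralSnapshot(
    gestural="tense",
    physiological={"heart_rate": 115, "blood_pressure": 140},
    facial="angry",
    linguistic="negative",
)
print(estimate_state(snapshot))
```

Requiring agreement between a physiological signal and a gestural or facial one is one defense against the misinterpretation problem described earlier: no single stream is trusted on its own.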

Let’s pretend that we’re monitoring physiological data and we see that a user’s blood pressure spikes a split second before the opening line of a conversation with a customer service rep. We might assume that the customer is upset, but we would still be uncertain as to why, or what his intention is. Is he angry, nervous or pressed for time? Will he yell, punch or intimidate? No idea. For us to understand his intention, we’d have to access a greater portion of his everyday life — everything he interacts with online and offline so we can determine patterns of behavior. All of those data streams would need to be tracked and analyzed so we could get a sense of his big picture. Only then could we organize appropriate communication and responsibly adjust it to fit his personality. Essentially, personality data is making the case for creating a master algorithm.
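Even the first step of that scenario, noticing the spike just before the call, is a small algorithm. Here is a hedged sketch; the threshold, window, and readings are invented for illustration and are not clinical values.

```python
# Flag a systolic blood-pressure jump that lands just before a
# conversation starts. Readings are (timestamp_seconds, systolic) pairs.

def spike_before_call(bp_readings, call_start, window=1.0, jump=20):
    """True if systolic BP rises by at least `jump` mmHg within `window`
    seconds before `call_start`, relative to the readings just prior."""
    recent = [v for t, v in bp_readings if call_start - window <= t <= call_start]
    baseline = [v for t, v in bp_readings if t < call_start - window]
    if not recent or not baseline:
        return False  # not enough data to say anything
    return max(recent) - min(baseline[-3:]) >= jump

readings = [(0.0, 120), (1.0, 121), (2.0, 119), (2.9, 145)]
print(spike_before_call(readings, call_start=3.0))
```

Note what the function cannot do: it detects arousal, not intention. That gap between the measurable spike and the "why" is exactly the argument the paragraph above makes for broader behavioral context.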

"When faced with a machine, humans talk like a machine."

Besides creating the master algorithm, in order for companies to better understand their users, they will need to create emotion databases. This will be time consuming because it relies on someone (yes, a human) to determine facial expressions. It’s highly subjective — literally someone is tagging someone else who is posing and acting out those emotions. That info is then validated by an expert (yes, another human). The issue is that the interpretation is only as good as the actor. It’s difficult to capture spontaneous emotions, their dissipation, and faint transitions between them. It’s challenging to understand why, measure how, and guess when they’re about to happen. And, it’s overwhelming for the people tasked with tagging the emotions of thousands, millions, eventually billions, of users.

Second, let’s be honest, the hardware and software for facial recognition just isn’t quite there yet. Ask anyone in law enforcement and they’d be hard-pressed to disagree. While it’s passable for identifying broad characteristics, it will have to get better for us to pick up on the subtleties of expression. China’s Face++ is promising, and if we continue to improve the platforms while combining them with AI, this should prove to be one of the most powerful breakthroughs in technology and essential to determining personality.

Finally, we’re still mastering natural language when it comes to interacting with devices. For some reason, when faced with a machine, humans talk like a machine. When we talk to Amazon’s Echo, we usually say: “Alexa, [wait for response indicator] what’s the weather today?” But, when we talk to an actual human near us, we tend to say things like “Hey, what’s it like outside?” No name. No pause. No time. All context is assumed. Interacting with machines is unavoidable, so we need to design them to act and react in a more human-like way — give them unique personalities, ones which complement our own personality and can adapt to our emotions. When the Mini Cooper car was reintroduced in 2002, one of the most delightful brand experiences was the voice interface. Drivers were able to pick a gender and an accent for how the car’s navigation system would communicate with them. Although the voices were all programmed to give the exact same responses, there was something magical about picking one, about identifying the personality of a passenger that we wanted to join us on our journey. It was a great start, and it’s good to know that empathetic experience designers are still the ones in the driver’s seat.



An Introduction to Emotive UI

Written by Sherine Kazim
October 10, 2016

Historically, emotion has been thought of as a byproduct of design — not something that drives the user experience. But emotion is actually a critical new dimension in UI, and one for which designers are ultimately responsible.

There are certain things designers want you to feel when you use their products, and you can hear it in the way they talk. Designers often say: “We want to surprise and delight our users.” But really, that’s just scratching the surface. Humans are a mess of emotions — and designers are going to have to learn to engage with all of them.

Robert Plutchik, an academic psychologist who (literally) wrote the textbook on emotions, introduced the concept of eight basic emotions in 1980 — joy, trust, fear, surprise, sadness, disgust, anger, and anticipation. His “wheel of emotions” shows the interplay of those emotions, and their varying levels of intensity.
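The wheel is compact enough to write down as data. The eight basic emotions and their more/less intense variants below are from Plutchik’s model; the dictionary layout is just one convenient encoding.

```python
# Plutchik's wheel: each basic emotion with its more intense and
# less intense (dissipated) variants.
PLUTCHIK_WHEEL = {
    #  basic        (more intense,  less intense)
    "joy":          ("ecstasy",     "serenity"),
    "trust":        ("admiration",  "acceptance"),
    "fear":         ("terror",      "apprehension"),
    "surprise":     ("amazement",   "distraction"),
    "sadness":      ("grief",       "pensiveness"),
    "disgust":      ("loathing",    "boredom"),
    "anger":        ("rage",        "annoyance"),
    "anticipation": ("vigilance",   "interest"),
}

def intensify(emotion: str) -> str:
    """Step inward on the wheel, toward the stronger variant."""
    return PLUTCHIK_WHEEL[emotion][0]

def dissipate(emotion: str) -> str:
    """Step outward on the wheel, toward the fainter variant."""
    return PLUTCHIK_WHEEL[emotion][1]

print(intensify("joy"), dissipate("joy"))  # ecstasy serenity
```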

The form of UI that most accurately reflects our emotional spectrum today is emojis. Here’s an emoji-based version of the wheel I created for a recent presentation on Emotive UI:

You can see in the chart that there is a dissipation of intensity as you move further out on the wheel, and nobody in design is talking about that yet. We currently design things that make people feel basic emotions — maybe joy or sadness — but we don’t talk about how we should adjust the UI according to how the user’s emotions may be intensifying or dissipating.
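One hedged way to model that dissipation is to treat emotional intensity as a value that decays over time and maps onto the wheel’s tiers. The half-life and tier thresholds below are invented parameters; real dissipation curves would have to be measured, not assumed.

```python
# Emotional intensity as exponential decay after a peak, mapped onto
# Plutchik-style tiers along the joy axis.

def intensity_at(t_seconds: float, peak: float = 1.0, half_life: float = 2.0) -> float:
    """Intensity `t_seconds` after the peak, halving every `half_life` seconds."""
    return peak * 0.5 ** (t_seconds / half_life)

def tier(intensity: float) -> str:
    """Map an intensity in [0, 1] onto joy-axis tiers (thresholds assumed)."""
    if intensity > 0.66:
        return "ecstasy"
    if intensity > 0.33:
        return "joy"
    return "serenity"

# Sample a short interaction once per second and watch the tier drop.
print([tier(intensity_at(t)) for t in range(4)])
```

A UI that knew even this crude curve could respond differently at the ecstasy moment than two seconds later, which is the adjustment the paragraph above says nobody designs for yet.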

We tend to create products that aim for the center-wheel emotions, because it’s the easiest thing to convey. We rarely think about the full spectrum, and we don’t think about the dissipation. Currently, most designers think about the design intention and the user reaction: “I want to make you happy,” and the user is happy.

But here’s what a user interaction might look like in the real world:

When someone designed this particular object (a mailbox flag), they weren’t thinking about emotion. They were thinking about an object. Then along comes a user — Ralph Wiggum — and he clearly thinks, “Hey, this looks like a really fun thing to do.” Once the flag is down and the fun is over, you watch that emotion dissipate. In Ralph’s case, he even transitions to another emotion: sadness. That whole spectrum happens in a three-second GIF: Ralph is feeling joy — but from a design point of view, we never got around to addressing the more intense ecstasy, or less intense serenity, even though Ralph went through all those emotions, and more. We need to start intentionally designing for every single emotion that Ralph is going through.

What that forces us to do is fine-tune communication between people and products — to design for a primary emotion as well as its dissipation — and the relationship we have there. Ultimately, we’re trying to make deeper, more meaningful connections with the user and the best way to do that is to fully understand and manage their emotional spectrum.


Designing for emotion can be subjective, but here’s one example of something that gave me joy:

Things bounce around, there’s bright color and movement and music that brings back amazing memories from when I was a kid. I even get to make something that’s a direct reflection of myself. The design intention is pure ecstasy, and the user reaction, pure ecstasy.

Or what about designing for fear — what does that look like? It’s difficult to come up with examples, but I found one instance when I was in a Seoul train station — and it’s an example of designing for fear, gone wrong.

I remember sitting there looking at this piece and thinking, “OK, I don’t speak Korean but clearly I am supposed to run away from a bomb, run away from an explosion. Not sure why I’m running toward the trees.” I’m not entirely sure what I should do, and it seems the intention was to deliver the information in a neutral way. But my reaction to it was “Holy shit, I’m not getting on that train, because I don’t want to be anywhere near where these things can happen.” Absolute terror. You start to feel the responsibility of the designer in considering the emotions of the user — in this case, the person viewing the sign.

Or, think about designing for anticipation. Look at the Japan Quake Map. It’s relatively old, but it’s still very powerful. The UI of the quake map shows the quake depth. You see the clock running, but beyond that, there’s nothing else going on — until you’ve waited for a couple of minutes:

That first day is the day Fukushima hit, five years ago. Without even realizing it, the data itself is managing to not just create anticipation, but also elicit an emotion — grief. All that with very few UI cues. That wasn’t intentional, but it illustrates why we need to take responsibility when we create something to put in front of a user.

Surprise is a tricky one because I rarely agree with surprising a user in the interface, with one exception: games. When it comes to games, surprise can be paramount to their success.

Who would think to put an apple in front of a sleepwalker to stop him from hurting himself? It’s gorgeous, interesting, and wonderfully surprising.

What’s interesting is I could hit on only 50% of the basic emotions — not to mention all the others on the rest of the wheel. There’s no clean way to capture that stuff in UI currently — at least, not visually. That’s because, as designers, we’ve done such an excellent job simplifying visual UI. But soon, products will become more multidimensional, with their own personalities, just like users have.

"We have a finite set of emotions, and there will never be another sadness 2.0."

As AI becomes more prevalent in our lives, it’s only going to get more complex, because we’ll need to focus on the personality of the user and how it interacts with the personality of the AI. We can no longer rely solely on the visual representation of the UI.

This is good because it will ultimately allow us to form a more reciprocal relationship between the user and the brain behind the interface as we move away from screens.

It’s challenging, but know this: Products and platforms will change, but our spectrum of emotion will not change. This is the key takeaway — that we have a finite set of emotions, and there will never be a sadness 2.0. It’s actually great that we have that constraint to work within because it’s going to help us design better product relationships for the future. It’s not just about designing for intention and reaction — it’s about understanding the full spectrum of emotion and the dissipation of each one, so we can start to develop true product personalities.

The minute we design for all the variations of basic emotions, we know we’re heading toward solid territory for future design.



Restartup: The New Love Affair Between Big Business and Tech

Written by Sherine Kazim
April 18, 2014


The NASDAQ approaches highs not seen since the peak of the first dotcom bubble in 2000. HBO’s new comedy series, “Silicon Valley,” portrays a hilarious, hype-fueled world of out-of-touch “brogrammers” (one of whom built an app that alerts users to, oh, never mind). Are we in bubble redux? Probably not. Underneath seemingly frothy stock prices are fundamental indicators much stronger than the ones from 14 years ago. And while there is plenty to caricature in the tech startup world — one of the best recent parodies took the genre to new levels — the most-hyped ventures these days seem to have actual business plans, not just ping pong tables and champagne-drenched IPO parties. Part of the reason is a surprising shift in the relationship between tech startups of all sizes and bigger, established companies, which is invigorating both.

If you lived somewhere on the northern California peninsula during the late ’90s and early ’00s, chances are you worked for a startup, knew someone who did, or crashed one of their epic parties. These tiny-but-tough companies created solutions to things we desperately needed — like email — and stuff we had no idea we needed — like social networks. Ideas and energy were abundant, and it was riveting to watch eBay catapult to incredible success while Webvan spun into oblivion.

Regardless of the product they were building, most of these companies focused on creating solutions to everyday problems. Some saw it as a service to their community, but others saw it as a vehicle to reach a singular goal: the vaunted Initial Public Offering (IPO).

Filing for an IPO gave founders and employees an innate sense of ownership, without becoming beholden to the whims of a larger corporate parent. They truly believed they could maintain their startup culture and innovative spirit. More importantly, they wanted to determine their own roadmap for future growth. It was a brave new world.


Few startups admitted the truth about their chance of succeeding through an IPO. As the bubble wore thin, roughly three out of every four startups began to fail by filing for bankruptcy, liquidating assets or simply disappearing. This happened for several reasons:

  • Inability to scale. In 1999, Webvan filed for an IPO with a company valuation of $1.2B and plans to expand to 26 cities. Rapid growth with minimal margins is always a risky endeavor. Indeed, Nasdaq halted trading of its stock just two short years later, forcing the online grocer to close up shop. It seemed to focus on “rapid” rather than “expansion.”
  • Believing the hype. Pets.com took many people by surprise when it ponied up millions for a Super Bowl ad in 2000, featuring an incredibly likable sock puppet. While its mascot and highly visible marketing campaign built rapid brand awareness, the hype surrounding Pets.com was more focused on the sock puppet than on solving for a sustainable business model.
  • Ignoring user behavior. When Kozmo hit the on-demand delivery market with their orange scooters, people were excited. The problem was that Kozmo didn’t implement delivery fees or a minimum order size until it was too late. Customers were ordering a single box of Milk Duds for $1 to be delivered within an hour and getting it. Demand for same day delivery was strong, but Kozmo ignored the economics of indulging consumer whims and didn’t understand the basic premise of how consumers shopped.
  • Overinflated product ego. Friendster seemingly had all of the right ingredients for success: investors, talent and connections. Yet, all of these things couldn’t fix the one thing they needed to be successful — a site that worked. The site’s rapid growth boded well for the company’s future but may have also blinded executives to the potential for failure. When they passed on a $30M deal from Google in 2003, hubris ran high. Friendster executives held out for a larger payoff but executive level infighting prevented them from moving forward. They thought their users would never leave, but competitors moved in quickly. Google took its cash and invested elsewhere.


Today the pace of innovation is quickening. The stakes are higher, while operational costs are lower. More data is freely available and increasingly inexpensive to store. More tools are available to run analysis and allow stakeholders to pivot accordingly.

Some of those early entrepreneurs went on to join the ranks of more established companies. Others invested in a new breed of startups, learning from their past and helping to shift mindsets away from making something cool to making something scalable, sustainable and useful.

While the avarice and ambition of earlier startups hasn’t disappeared entirely from Silicon Valley, startups have taken a more pragmatic approach to getting products into the market. Some of the models that have taken root include:

  • Incubator. Within an incubator model, early stage startups are taught business skills and provided access to professional networks. This model is gaining more attention from unlikely players, such as Pepsi Digital Labs, which leverages co-working spaces like WeWork Labs and invests directly in startups. In turn, those startups collaborate with Pepsi to innovate on their products and beverages. One of the advantages to this model is startups gain valuable mentorship, and Pepsi Digital Labs gets access to innovative ideas which can be pushed to market faster. The downside to this model is that a startup’s vision can get lost in the shuffle and business mentorship can be time-consuming for the larger corporation.
  • Accelerator. Under this umbrella, mid-stage startups are mentored and given strategies for rapid growth. Programs like Mondelez’s Mobile Futures offer a 90-day accelerator for mobile-tech startups where brand managers from the food and beverage conglomerate embed themselves in startups. It’s a relationship where startups can get more funding and pre-approval for future VC investment in exchange for bigger businesses getting direct access to proven products and technologies.
  • Partnerships. Partnerships bring late-stage startups together with big businesses to develop ideas together. In 2013, GE and crowdsourced invention startup Quirky joined forces to tackle the Internet of Things. GE invested $30 million into this partnership with a goal to launch 30 connected products. With the Wink app firmly under its belt to control all of your internet connected in-home devices, Quirky and GE have recently introduced Aros, a smart air conditioner, into their product mix. Clearly, money helps but isn’t the sole prize in partnerships. The real opportunity for startups is the chance to design for scale, and for big businesses, it’s a chance to experiment outside of the confines of what can be a siloed organization.
  • Acquisition. An outright acquisition can happen at any stage: the startup is bought and either folded into the larger business or simply discontinued by the parent company. Yahoo CEO Marissa Mayer has spent over $1B purchasing various stage companies across multiple product verticals, including Tumblr and Summly. But there have been some surprising acquisitions, too. In a twist, Harry’s, a 10-month old shaving startup, spent $100M to acquire an old razor factory. The positives are fairly similar for both sides: press coverage, access to innovative technologies, products and people. If you consider the alternative of building a team or product from scratch, acquisitions can be an appealing option.

These deeper, symbiotic relationships show that many startups have moved past the old us-against-them mentality to embrace new initiatives and models, which may help explain why, this time around, their businesses are stronger than they were in the heady days of Web 1.0. It’s encouraging that startups have moved beyond their role as muses, inspiring big businesses to innovate and embrace disruption. And, it’s clear that big businesses have become beacons, lighting the way for startup success through sustainable growth. This mutual mentorship is transforming how we make products and redefining how we do business, with everyone able to explore models that fit their needs.