Why so emotional? The ethics of using emotion data to improve UX
We live in a world driven by emotion, and emotion sits at the center of user experience design. As emotion data becomes more accessible, tech companies need to understand how to leverage it to enhance user experience (UX) in ways that better serve users, not the other way around, with users better serving companies. We must ask the right questions and put guard rails in place to keep companies from exploiting users.
“Our north star metric is user happiness.” “We’re building an emotional experience for our users — we’re all about making people smile when they use our product.” “We seek to incite excitement in every user.” #OverheardAt(AnyTechCompany)
At the start of any product development process, UX designers draw up user personas to map the emotional profiles of their users and dream up user flows that take users on an emotional journey. This is very intentional: mapping users’ discomfort, energy, and excitement is part of a common UX design process. And after any build, products are continuously optimized for user happiness, measured by proxy through engagement and retention metrics, or through methods like NPS scores.
With Emotion AI, we’ll enter a world where digital products, infused with emotional intelligence, have a much more accurate feedback loop that helps optimize user experience for emotional response. Imagine if technology and devices could interact with people the same way people interact with each other. Kantian ethics leads us towards a focus on the “human self-determining capacity for rule-making and rule adherence”. As machines develop more human attributes and capabilities, especially around emotional intelligence, they edge closer to aligning with the “Kantian notion of ‘rational beings’ that can act with more responsibility for their ultimate conduct and action” (Ulgen, 2017). Making machines more like rational beings, including responding in more human ways to emotional inputs, could then allow machines to make ethical judgments without the impairment of human bias. Assuming an unbiased code base, that would be a more equitable world.
But affective computing raises concerns too. What are the implications of having our emotions be machine-readable? How can we know that emotion data will be used in a way that will benefit society? Would we be comfortable with tech companies owning our emotional data and building up identifiable emotional fingerprints?
Using emotional data to improve user happiness is good, right?
In the context of products and services, happiness is a pleasurable or satisfying user experience. Acclaimed user researcher Tomer Sharon lists contentment, joy, and delight as synonyms for happiness. User happiness has usually been a self-reported measurement: users rate their own happiness rather than companies tracking their behavior.
Is user happiness maximized when a product is well optimized for a user’s emotional state? From a utilitarian perspective, if tech companies could better gauge their users’ emotional responses, they could better optimize the product experience and thereby maximize happiness. Imagine an educational product that detects levels of confusion and adapts the learning experience, speeding up or slowing down instruction or fleshing out specific points, in order to minimize ‘confusion’ and maximize ‘contentment’. Even in media, adaptive video content that weaves emotional data inputs into its storylines deepens the viewing experience and opens up endless opportunities for media creators and consumers.
Well, it comes down to sousveillance vs. surveillance
In my opinion, a lot of it comes down to intent. A core concern around the use of emotion data by profit-making corporations is misaligned incentives: companies are not truly incentivized to maximize user happiness. We should use emotional data to better serve users, not to make users better serve companies. This is where the idea of sousveillance (watching from below) versus surveillance (watching from over, above) comes in.
Take the example of an ads-backed social media platform. For the profit-making company, ad revenue is an important business driver, and ad revenue depends on the number of eyeballs and time spent. By understanding users’ emotional responses to social posts, the platform can work to maximize these metrics and, in turn, ad effectiveness. This is unlikely to be the user happiness-maximizing approach, and it would break the argument framed in my last piece about personal data and targeted advertising: platforms would be using emotional data to better serve themselves at the expense of users’ personal freedom. To take this further, imagine if the platform let advertisers build emotional profiles of users and target ads on that basis, say an online game targeted specifically at depressed users. This is worrying: it is morally objectionable, and should be legally prohibited, for emotional data to be used in ways that are net negative for users.
On the flip side, the social platform could use emotional data to detect stress in its users and surface content that helps alleviate it. Or better yet, could the platform use affective computing to build functionality that helps treat mental health issues? This would be sousveillance: platforms using emotional data to better serve users.
The question is that with access to these technologies, we unlock new financial opportunities. Are those opportunities too lucrative for people to stay ethically minded? History tells us that people follow the money. Regulation needs to be put in place to ensure that companies leveraging affective computing continue to meet users’ needs responsibly.
So it’s all about the implementation
Emotional data is very powerful. There are a plethora of opportunities to use it to build technologies that are more human-like, and hence more responsible for their ultimate conduct and action. But it’s exactly that: it’s all about *how* we use affective computing. At present, there is no regulation around the use of emotional data, and in the context of surveillance capitalism, we must keep our eyes open for applications that serve the interests of corporations instead of truly maximizing user happiness.

What products have you come across that leverage emotional data in effective ways? What policy recommendations would you put forward to ensure appropriate use of emotional data? Let me know.
Jad is co-president of the entrepreneurship club @ Harvard Business School. He is now hyper-focused on the Harvard/MIT consumer internet startup Koodos (text 👋🎧 to 566-367 for more info).