The good news, for me at least, is that the computer thinks I have a nice personality. According to an app called MorphCast, I was, in a recent meeting with my boss, generally “amused,” “determined,” and “interested,” though—sue me—occasionally “impatient.” MorphCast, you see, purports to glean insights into the depths and vagaries of human emotion using AI. It found that my affect was “positive” and “active,” as opposed to negative and/or passive. My attention was reasonably high. Also, the AI informed me that I wear glasses—revelatory!
The bad news is that software now purports to glean insights into the depths and vagaries of human emotion using AI, and it is coming to watch you. If it isn’t already: MorphCast, for example, has licensed its technology to a mental-health app, a program that monitors schoolchildren’s attention, and McDonald’s, which launched a promotional campaign in Portugal that scanned app users’ faces and offered them personalized coupons based on their (supposed) mood. It is one of many such companies doing similar work; the industry term is emotion AI, or sometimes affective computing.
Some products analyze video of meetings or job interviews or focus groups; others listen to audio for pitch, tone, and word choice; still others can scan chat transcripts or emails and spit out a report about worker sentiment. Sometimes, the emotion AI is baked in as a feature in multiuse software, or sold as part of an expensive analytics package marketed to businesses. But it’s also available as a stand-alone product, and the barrier to entry is shin-high: I used MorphCast at no cost, taking advantage of a free trial, and with no special software. At no point was I compelled to ask my interlocutors if they consented to being analyzed in this way (though I did ask, because of my good personality).



I can’t think of many beneficial uses for such a product.
The only ones that come to mind are research and clinical settings. It could be genuinely useful for social or psychological research, or for monitoring a patient’s status. That’s not how it will be used for the most part, but it’s one place it could do some good. Like any tool, it can be used for good things and bad (how revelatory…).
The problem with the AI industry (and modern, unregulated capitalism in general) is that as soon as someone has a potentially useful tool, they chase every possible use for it with no regard for societal consequences. Thinking about the ramifications of a tool doesn’t increase shareholder value. In fact, trying to ensure your tool is used only in positive ways actively harms shareholder value. Greed perverts all that is good.