When we think of work, it’s natural to mostly think of the practical tasks it involves. But there’s another side of work, an emotional side: we must regulate our feelings while we’re at work and do as much as we can to maintain a pleasant exterior, even when it’s only a thin facade concealing a writhing mass of stress, anger or anxiety. This psychological work is called “emotional labour”, a term coined by the sociologist Arlie Russell Hochschild in 1983.
Where there’s labour, there’s potential for labour-saving technology, although when it involves human feelings it comes with an unmistakably dystopian twang.
Service industry workers such as waitstaff and call centre workers are under the most pressure to put in the emotional elbow grease, to smile and be cheerful even when dealing with obnoxious or irate customers.
Although this kind of toil is hard to automate – it’ll be some time before we believe a robot voice cares about us – some call centres already use emotion-detection software that analyses wave patterns in the voice to detect stress and anger on the part of either caller or worker. As sociologists Danielle van Jaarsveld and Winifred Poster have written, this can help the workers in the centre, for instance by alerting a supervisor if a call becomes particularly heated so the worker can be relieved.
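The details of such systems are proprietary, but the general idea – slicing the audio into short frames and watching how the signal behaves across them – can be sketched in a few lines. The sketch below uses a deliberately crude proxy for vocal stress (the variability of frame-to-frame loudness); real products use far richer features such as pitch, jitter and spectral shape, and the threshold here is entirely made up for illustration.

```python
import numpy as np

def stress_score(samples: np.ndarray, frame_size: int = 1024) -> float:
    """Toy proxy for vocal stress: how much frame-level loudness varies.

    Assumption: real emotion-detection software uses many acoustic
    features; this only illustrates frame-wise analysis of a signal.
    """
    n_frames = len(samples) // frame_size
    frames = samples[: n_frames * frame_size].reshape(n_frames, frame_size)
    rms = np.sqrt((frames ** 2).mean(axis=1))       # loudness of each frame
    return float(rms.std() / (rms.mean() + 1e-9))   # relative variability

def flag_call(samples: np.ndarray, threshold: float = 0.15) -> bool:
    """Alert a supervisor if the score crosses a (hypothetical) threshold."""
    return stress_score(samples) > threshold

# Synthetic examples: a steady tone vs. one with an erratic, surging volume.
t = np.linspace(0, 1, 16000)
calm = 0.5 * np.sin(2 * np.pi * 220 * t)
erratic = np.sin(2 * np.pi * 220 * t) * (0.2 + np.abs(np.sin(2 * np.pi * 3 * t)))
```

On the synthetic signals above, the steady tone scores near zero while the erratic one scores well above the threshold – which is all the “alert the supervisor” logic amounts to in this sketch.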
These emotion-detection systems are now bleeding into everyday use, most notoriously in Facebook’s “emotional contagion” experiment, details of which emerged in the summer of 2014. Facebook restricted the amount of “positive emotional content” some users saw and found that those users made fewer positive posts of their own – emotions were contagious.
This was an interesting finding, but reached unethically, and many users were outraged that they had been manipulated in this way without their consent. Another storm surrounded Samaritans Radar, a free web application launched by Samaritans, the emotional support charity. Radar let Twitter users detect which of their friends might be struggling with the black dog, with unquestionable humanitarian intentions. But a ferocious backlash based on equally understandable fears about privacy and consent forced the charity to withdraw the app.
Nevertheless, the technology seems destined to eventually break into our daily online lives. As part of an initiative by Fast Company magazine, Boston-based design studio Altitude created a conceptual app called Moodit, which would in theory let users avoid conflict by detecting difficult emotions exhibited in their behaviour on social networks.
“Moodit tells you to calm down, cheer up, or celebrate accordingly, and to watch out for friends in bad moods,” the magazine explained, emphasising that it’s “meant to be fun” – not quite the laudable aims of Samaritans Radar. In a way, this resembles an emotional extension of the “sharing economy”. If you have surplus good cheer to offer, something like Moodit could direct you to brighten a mopey friend’s day: Uber for feelz.
The really troubling thought prompted by interfering algorithmic mood rings like these is what happens when they start feeding back into our online behaviour. According to van Jaarsveld and Poster, although emotion-detection software might help call centre workers escape some difficult calls, the flipside is that knowing their tone is being constantly monitored can put them under added stress.
This was another criticism of Samaritans Radar – it could force unhappy people into trying to conceal their misery. Far from making the internet a more sensitive place, emotion detection could make it all the more trite and insincere, resembling It’s a Good Life, Jerome Bixby’s nightmarish 1953 tale of mindreading and painted smiles concealing inner torture.
What’s really needed is electronic emotional analgesia – a way to say what you really feel without anyone getting hurt. That has its own terrifying implications – what do our feelings matter if they have no effect on the outside world? But there are pioneers trying to find ways of easing troublesome human interactions. Dr Hirotaka Osawa at the University of Tsukuba in Japan has developed a pair of glasses that display digital “eyes” which move to acknowledge passing colleagues, allowing him to stay focused on work and saving the cognitive effort needed to look friendly and alert.
Far from being a tool to avoid social contact, AgencyGlass is presented as a way to ensure Dr Osawa remains approachable, even when deeply engrossed in a task. Such simple aids to thoughtful living are more and more widespread. Witness Cortana, the personal assistant on Nokia’s Lumia phone, advertised by showing how it can help your relationship with your spouse. Cortana reminds a husband to wish his wife happy anniversary when she calls, and to pick up flowers when he’s near the florist. “Thanks, Cortana, you’re making me look good,” the forgetful hubby says. But does the thought still count if it’s an algorithm doing the thinking for you?