Understanding the meaning of facial expressions is an essential human survival tool, but one that generates scientific controversy. Some contend that we recognize anger most easily [1], since it may represent an existential threat; others that we recognize happiness most easily and anger most slowly [2], since the march of civilization has altered our priorities over time.

The lexicon of facial expression is rich in ambiguities, as facial cues may be insincere, misinterpreted, or in some way at a tangent to their intent. Experts in the understanding of facial expression remain split even on whether the Mona Lisa's enigmatic smile is sincere [3] or forced [4] (apparently a matter of context).

The ability to decipher the true intent and emotional response of a person from their facial expressions, notwithstanding their attempts to mask or disguise what they feel, is an evolutionary advantage of great interest to a range of sectors, from physicians through to marketers and political analysts. Unsurprisingly, there's a lot of money in it.

Though cloud-based services are usually unsuitable for time-critical deployments (such as in-car safety systems), they are useful for training a more responsive, slimmed-down algorithm, or for evaluating data in research projects where latency is not a factor.

The commercial SkyBiometry API [30], which provides a range of facial detection and analysis features, can also individuate anger, disgust, neutral mood, fear, happiness, surprise and sadness. Microsoft Azure's Emotion API [31] can likewise return emotion recognition estimates alongside its usual array of facial analysis features.

However, the two high-volume commodity providers to emerge from this circumspect market are Google Cloud Vision [32] and Amazon Rekognition [33], both of which provide facial sentiment detection as a component of their more broadly successful facial recognition APIs. Though the two offerings are not identical in their properties or their methods of quantifying success, one 2019 comparison [34] of the services found that Google was less likely to commit to identifying an emotion at all, whereas Rekognition will commit to an emotion at confidence levels as low as 5% in order to return a result of some kind.
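To make that contrast concrete, here is a minimal sketch (not a definitive integration) that sends the same image to both services and prints what each returns. It assumes the boto3 and google-cloud-vision client libraries are installed, that AWS and GCP credentials are already configured in the environment, and a hypothetical local image file face.jpg:

```python
import boto3
from google.cloud import vision

# Hypothetical test image; any local JPEG/PNG of a face will do.
with open("face.jpg", "rb") as f:
    image_bytes = f.read()

# Amazon Rekognition: returns a ranked list of emotions, each with a
# confidence percentage -- sometimes committing at single-digit levels.
rekognition = boto3.client("rekognition")
rek_response = rekognition.detect_faces(
    Image={"Bytes": image_bytes},
    Attributes=["ALL"],  # emotion estimates are only returned for 'ALL'
)
for face in rek_response["FaceDetails"]:
    for emotion in face["Emotions"]:
        print(f"Rekognition: {emotion['Type']} ({emotion['Confidence']:.1f}%)")

# Google Cloud Vision: no percentages, only coarse likelihood buckets
# (VERY_UNLIKELY .. VERY_LIKELY) for a handful of emotions.
gcv = vision.ImageAnnotatorClient()
gcv_response = gcv.face_detection(image=vision.Image(content=image_bytes))
for face in gcv_response.face_annotations:
    print(
        "Cloud Vision:",
        f"joy={face.joy_likelihood.name}",
        f"anger={face.anger_likelihood.name}",
        f"sorrow={face.sorrow_likelihood.name}",
        f"surprise={face.surprise_likelihood.name}",
    )
```

The asymmetry the 2019 comparison describes is visible in the response shapes themselves: Rekognition reports explicit per-emotion confidences, while Cloud Vision's bucketed likelihoods let it remain non-committal.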
A great deal of current thought and convention around FER has its roots in the 'facial action coding system' [43] developed by psychologist Paul Ekman in the 1970s. Ekman's theory assigned weight to 'microfacial' expressions deemed to be indicative of hidden emotions. Though Ekman's methodology was widely adopted (and even became the direct inspiration [44] for the investigative drama Lie To Me), it has been the subject of growing scientific criticism [45] in recent decades, and its use as a terrorist-detection tool has been gauged at the same level as 'flipping a coin' [46]. In spite of this, Chinese authorities have deployed emotion recognition systems based on the same principles in the Xinjiang region in western China [47].

Even evaluating some of the supposedly less ambiguous expressions becomes problematic once we leave our own locality: a smile in a country beset by corruption can be interpreted as disingenuous, or even as a sign of low intelligence [48]; facial expressions representing pain or pleasure are quite different among diverse cultures [49]; the effects of ageing on the human face make expression recognition more difficult, both for us and for the machine systems we are informing [50]; and the labeling of expression data depends more on the interpretation of scientists than on the direct feedback of participants [51] (who are the only ones who knew what they were feeling when the image was taken).

A historical glance at headlines around automated FER over the last decade or so reveals a number of bold announcements by variously-sized tech companies for FER products which either fail to materialize, quietly disappear, or are later downplayed by the originating company.

Emotient