We show our emotions in many ways, such as body language and vocal tone, but the most compelling way is through our facial expressions.
There is a constant stream of involuntary non-verbal communication flowing between us on our faces. The expressions of emotion we see on one another's faces in typical interactions last between half a second and 4 seconds. They are classified as macro-expressions, and we know people can control them. For example, we often smile when we're supposed to, not because we actually feel like smiling.
Face2Face uses state-of-the-art machine learning to build an emotional fingerprint of a subject. Our technology provides a real-time analysis of a subject's emotional state across 7 key emotions. It then compiles these readings into a correlation matrix that allows a clinician to assess underlying conditions that may not be outwardly apparent.
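To make the idea of a correlation matrix concrete, here is a minimal sketch of how per-frame scores for 7 emotions could be correlated over a session. The emotion labels, the frame-by-frame scoring, and the use of Pearson correlation are illustrative assumptions, not a description of Face2Face's actual pipeline.

```python
# Illustrative sketch only: builds a 7x7 correlation matrix from per-frame
# emotion scores. The label set and score source are assumptions.
import numpy as np
import pandas as pd

EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "neutral"]  # assumed 7-emotion label set

def emotion_correlation_matrix(frame_scores: np.ndarray) -> pd.DataFrame:
    """frame_scores: array of shape (n_frames, 7), one score per emotion
    per video frame. Returns the pairwise Pearson correlation of the
    seven emotion channels across the session."""
    df = pd.DataFrame(frame_scores, columns=EMOTIONS)
    return df.corr(method="pearson")

if __name__ == "__main__":
    # Random stand-in data for a 30-second session at 25 frames per second.
    rng = np.random.default_rng(0)
    scores = rng.random((750, len(EMOTIONS)))
    print(emotion_correlation_matrix(scores).round(2))
```

A matrix like this lets a clinician see, at a glance, which emotions tend to rise and fall together for a given subject during the session.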
Face2Face maps the ever-changing human emotional landscape on the basis of hard data. Face2Face has been proven accurate 999 times out of 1,000. Its assessments of an individual's resilience, reactivity, and coping style are driven by comparing their responses to different topics against their own baseline (neutral) reactions, and by comparing their reactions to stimulus pictures against national norms. Face2Face's integrated neural-network machine learning is designed to continuously improve its accuracy in those areas.
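As a rough illustration of those two comparisons, the sketch below scores a subject's reactions against their own neutral baseline and against a population norm. The field names, norm values, and use of simple z-scores are assumptions made for the example; they are not Face2Face's published method.

```python
# Illustrative sketch only: compares a subject's reactions to (a) their own
# neutral baseline and (b) an assumed national norm, using z-scores.
from dataclasses import dataclass
import statistics

@dataclass
class Norm:
    mean: float   # assumed population mean reaction intensity for a stimulus
    stdev: float  # assumed population standard deviation

def deviation_from_baseline(reactions: list[float], baseline: list[float]) -> float:
    """How far the subject's mean reaction to a topic sits above or below
    their own neutral baseline, in units of the baseline's standard deviation."""
    base_mean = statistics.mean(baseline)
    base_sd = statistics.stdev(baseline)
    return (statistics.mean(reactions) - base_mean) / base_sd

def deviation_from_norm(reaction: float, norm: Norm) -> float:
    """Standard z-score of a single stimulus reaction against a national norm."""
    return (reaction - norm.mean) / norm.stdev

if __name__ == "__main__":
    # Made-up numbers purely for demonstration.
    baseline = [0.10, 0.12, 0.09, 0.11, 0.10]   # neutral-topic intensities
    topic_reactions = [0.35, 0.40, 0.38]        # reactions to a probed topic
    picture_norm = Norm(mean=0.30, stdev=0.08)  # assumed national norm
    print(round(deviation_from_baseline(topic_reactions, baseline), 2))
    print(round(deviation_from_norm(0.44, picture_norm), 2))
```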
Moreover, its machine learning is not limited to self-correction. It is designed to compile an ever-expanding database of the emotional interactions and behaviors it can detect or predict. As its store of data grows, it will be able to create algorithms for almost any purpose.