Michal Kosinski, a Stanford-affiliated researcher, claims to have built an algorithm that can expose people's political views from their profile photographs. Using a dataset of over one million Facebook and dating-site profiles, he trained a model that, according to him, correctly classifies political orientation in 72% of liberal-conservative face pairs.
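To make the 72% figure concrete: in this kind of pairwise evaluation, every liberal face is paired with every conservative face, and a pair counts as correctly classified when the conservative face receives the higher "conservative" score from the model. A minimal sketch in Python, using made-up placeholder scores rather than real model outputs:

import itertools

# Hypothetical per-image "conservative" scores; placeholders, not real data.
liberal_scores = [0.21, 0.35, 0.48]
conservative_scores = [0.30, 0.62, 0.71]

# A pair is correct when the conservative face scores higher than the liberal one.
pairs = list(itertools.product(liberal_scores, conservative_scores))
correct = sum(1 for lib, con in pairs if con > lib)
print(f"pairwise accuracy: {correct / len(pairs):.0%}")  # ~78% on these toy numbers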
His work revolves around the idea that a person's character can be judged from their appearance. According to Kosinski, cues visible in a face, such as head orientation and emotional expression, together with attributes like age, gender, and ethnicity, reveal political affiliation.
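The paper describes fitting a cross-validated logistic regression classifier to numeric face descriptors produced by an open-source face-recognition network. A toy analogue of that pipeline is sketched below; the random "descriptors" are stand-ins for real face embeddings, so accuracy lands near the 50% chance level, and any real signal would have to come from genuine descriptors:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 128))   # stand-in for per-face descriptor vectors
y = rng.integers(0, 2, size=1000)  # 0 = liberal, 1 = conservative labels

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)        # 5-fold cross-validated accuracy
print(f"mean CV accuracy: {scores.mean():.2f}")  # ~0.50 on random features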
Kosinski has released the project's source code and dataset, but not the underlying images, citing privacy implications. Several studies have shown that facial recognition algorithms are susceptible to bias, and bias pervades machine learning systems well beyond facial recognition. An investigation by ProPublica found that software used to predict the likelihood of reoffending was biased against Black defendants, and other studies found that women are shown fewer online ads for high-paying jobs.
Kosinski's earlier work analyzed the connection between personality traits and Facebook activity, and it remains a matter of controversy whether it inspired the creation of the political consultancy Cambridge Analytica. In a paper published in 2017, Kosinski and Stanford computer scientist Yilun Wang reported that an off-the-shelf AI system could distinguish, with a high degree of accuracy, between photos of gay and straight people.

Alexander Todorov, a professor at Princeton, is another critic of Kosinski's work. Todorov argues that the methods employed in the facial recognition paper are technically flawed: the patterns an algorithm picks up when comparing millions of photos may have very little to do with facial characteristics themselves. Self-posted pictures on dating websites, for example, carry plenty of non-facial cues.
In Todorov's view, Kosinski's research is "incredibly ethically questionable," because it could lend credibility to governments and companies that might want to deploy such technologies. He and academics such as the cognitive science researcher Abeba Birhane argue that those who build Artificial Intelligence models must consider the political, social, and historical contexts in which they operate. In her paper "Algorithmic Injustices: Towards a Relational Ethics," which won the Best Paper Award at the Black in AI workshop at NeurIPS 2019, Birhane wrote that "concerns surrounding algorithmic decision making and algorithmic injustice require a fundamental rethinking above and beyond the technical solutions."
Paper: https://www.nature.com/articles/s41598-020-79310-1
Dataset & Code: https://osf.io/c58d3/