Propaganda of the Day (Corruption, Satire, Commentary, Madness, Rumors, Announcements)
Modern techno-fascism would not be complete without a revival of physiognomy:

Facial recognition technology can expose political orientation from naturalistic facial images

[Image: acn8lkux.jpg]


Ubiquitous facial recognition technology can expose individuals’ political orientation, as faces of liberals and conservatives consistently differ. A facial recognition algorithm was applied to naturalistic images of 1,085,795 individuals to predict their political orientation by comparing their similarity to faces of liberal and conservative others. Political orientation was correctly classified in 72% of liberal–conservative face pairs, remarkably better than chance (50%), human accuracy (55%), or one afforded by a 100-item personality questionnaire (66%). Accuracy was similar across countries (the U.S., Canada, and the UK), environments (Facebook and dating websites), and when comparing faces across samples. Accuracy remained high (69%) even when controlling for age, gender, and ethnicity. Given the widespread use of facial recognition, our findings have critical implications for the protection of privacy and civil liberties.
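The abstract describes the method only at a high level: each face is classified by comparing its similarity to the faces of liberal and conservative others. A minimal sketch of that idea, assuming face embeddings are available as vectors and using hypothetical class-mean embeddings (the actual study's pipeline and features are not specified here):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify_pair(face_a, face_b, liberal_mean, conservative_mean):
    """Given two face embeddings (one liberal, one conservative),
    guess which is the liberal one by comparing each face's relative
    similarity to the two class-mean embeddings.
    Returns True if face_a is judged to be the liberal face."""
    score_a = cosine(face_a, liberal_mean) - cosine(face_a, conservative_mean)
    score_b = cosine(face_b, liberal_mean) - cosine(face_b, conservative_mean)
    return score_a > score_b

# Toy 2-D embeddings purely for illustration
lib_mean = np.array([1.0, 0.0])
con_mean = np.array([0.0, 1.0])
face_lib = np.array([0.9, 0.1])
face_con = np.array([0.2, 0.8])
print(classify_pair(face_lib, face_con, lib_mean, con_mean))  # True
```

The pairwise framing matters: the reported 72% is accuracy over liberal–conservative face *pairs*, not over single faces, which is why 50% is the chance baseline.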


There is a growing concern that the widespread use of facial recognition will lead to the dramatic decline of privacy and civil liberties1. Ubiquitous CCTV cameras and giant databases of facial images, ranging from public social network profiles to national ID card registers, make it alarmingly easy to identify individuals, as well as track their location and social interactions. Moreover, unlike many other biometric systems, facial recognition can be used without subjects’ consent or knowledge.

Pervasive surveillance is not the only risk brought about by facial recognition. Apart from identifying individuals, the algorithms can identify individuals’ personal attributes, as some of them are linked with facial appearance. Like humans, facial recognition algorithms can accurately infer gender, age, ethnicity, or emotional state2,3. Unfortunately, the list of personal attributes that can be inferred from the face extends well beyond those few obvious examples.

A growing number of studies claim to demonstrate that people can make face-based judgments of honesty4, personality5, intelligence6, sexual orientation7, political orientation8,9,10,11,12, and violent tendencies13. There is an ongoing discussion about the extent to which face-based judgments are enabled by stable facial features (e.g., morphology); transient facial features (e.g., facial expression, makeup, facial hair, or head orientation); or targets’ demographic traits that can be easily inferred from their face (e.g., age, gender, and ethnicity)14. Moreover, the accuracy of the human judgment is relatively low. For example, when asked to distinguish between two faces—one conservative and one liberal—people are correct about 55% of the time (derived from Cohen’s d reported in Tskhay and Rule15), only slightly above chance (50%). Yet, as humans may be missing or misinterpreting some of the cues, their low accuracy does not necessarily represent the limit of what algorithms could achieve. Algorithms excel at recognizing patterns in huge datasets that no human could ever process16, and are increasingly outperforming us in visual tasks ranging from diagnosing skin cancer17 to facial recognition18 to face-based judgments of intimate attributes, such as sexual orientation (76% vs. 56%)7,19, personality (64% vs. 57%; derived from Pearson’s rs)20,21,22, and—as shown here—political orientation. (For ease of interpretation and comparisons across studies, across this text, accuracy is expressed as the area under the receiver operating characteristic curve (AUC), an equivalent of the Wilcoxon signed-rank test coefficient and the common language effect size.)
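The parenthetical note equates AUC with the common-language effect size: the probability that a randomly drawn member of one class scores higher than a randomly drawn member of the other. A small self-contained sketch of that pairwise definition (illustrative scores only, not data from the study):

```python
def pairwise_auc(scores_pos, scores_neg):
    """AUC as the probability that a randomly drawn positive
    (e.g., liberal) receives a higher score than a randomly drawn
    negative (conservative); ties count as half a win. This is the
    common-language effect size the text refers to."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# 7 of the 9 possible pairs are ordered correctly
print(pairwise_auc([0.9, 0.7, 0.6], [0.8, 0.3, 0.2]))  # ≈ 0.778
```

Under this definition, an AUC of 0.72 means exactly what the abstract states: 72% of liberal–conservative face pairs are classified correctly.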



An algorithm’s ability to predict our personal attributes from facial images could improve human–technology interactions by enabling machines to identify our age or emotional state and adjust their behavior accordingly. Yet, the same algorithms can accurately predict much more sensitive attributes, such as sexual orientation7, personality20 or, as we show here, political orientation. Moreover, while many other digital footprints are revealing of political orientation and other intimate traits29,30,31,32,33,34, one’s face is particularly difficult to hide in both interpersonal interactions and digital records. Facial images can be easily (and covertly) taken by a law enforcement official or obtained from digital or traditional archives, including social networks, dating platforms, photo-sharing websites, and government databases. They are often easily accessible; Facebook and LinkedIn profile pictures, for instance, are public by default and can be accessed by anyone without a person’s consent or knowledge. Thus, the privacy threats posed by facial recognition technology are, in many ways, unprecedented.

Predictability of political orientation from facial images does not necessarily imply that liberals and conservatives have innately different faces. While facial expression or head pose, facial hair, and eyewear were not particularly strongly linked with political orientation in this study, it is possible that a broader range of higher-quality estimates of those and other transient features could fully account for the predictability of political orientation. Yet, from the privacy protection standpoint, the distinction between innate and transient facial features matters relatively little. Consistently changing one’s facial expressions or head orientation would be challenging, even if one knew exactly which of their transient facial features reveal their political orientation. Moreover, the algorithms would likely quickly learn how to extract relevant information from other features—an arms race that humans are unlikely to win.

Some may doubt whether the accuracies reported here are high enough to cause concern. Yet, our estimates are unlikely to constitute an upper limit of what is possible. Higher accuracy would likely be enabled by using multiple images per person; using images of a higher resolution; training custom neural networks aimed specifically at political orientation; or including non-facial cues such as hairstyle, clothing, headwear, or image background. Moreover, progress in computer vision and artificial intelligence is unlikely to slow down anytime soon. Finally, even modestly accurate predictions can have a tremendous impact when applied to large populations in high-stakes contexts, such as elections. For example, even a crude estimate of an audience’s psychological traits can drastically boost the efficiency of mass persuasion35. We hope that scholars, policymakers, engineers, and citizens will take notice.


(Not perfect yet, but they're working on it.)

Study on nuclear explosions

How do you survive an atomic bomb?

Researchers have investigated how to protect yourself from a nuclear explosion. Their simulations show how dangerous a blast wave can be and where you have to take shelter to avoid it.

[Image: 44j3d258.jpg]

Addendum to the article linked above.

Anyone who feels the urge, and has the time and nerves, to calculate their chances of survival for the totally improbable case that our beloved fatherland should ever suddenly and unexpectedly suffer a nuclear strike, this way please:

(I myself am sitting barely a kilometer as the crow flies from a primary target. Angel )


