SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Van Der Zee S, Poppe R, Havrileck A, Baillon A. Psychol. Sci. 2021; ePub(ePub): ePub.

Copyright

(Copyright © 2021, Association for Psychological Science, Publisher: SAGE Publications)

DOI

10.1177/09567976211015941

PMID

34932410

Abstract

Language use differs between truthful and deceptive statements, but not all differences are consistent across people and contexts, complicating the identification of deceit in individuals. By relying on fact-checked tweets, we showed in three studies (Study 1: 469 tweets; Study 2: 484 tweets; Study 3: 24 models) how well personalized linguistic deception detection performs by developing the first deception model tailored to an individual: the 45th U.S. president. First, we found substantial linguistic differences between factually correct and factually incorrect tweets. We developed a quantitative model and achieved 73% overall accuracy. Second, we tested out-of-sample prediction and achieved 74% overall accuracy. Third, we compared our personalized model with linguistic models previously reported in the literature. Our model outperformed existing models by 5 percentage points, demonstrating the added value of personalized linguistic analysis in real-world settings. Our results indicate that factually incorrect tweets by the U.S. president are not random mistakes of the sender.
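The personalized model described above can be loosely illustrated in code. The sketch below is not the authors' LIWC-based method; it is a toy stand-in that captures the core idea of a quantitative model tailored to one sender: learn which words are relatively more frequent in that individual's factually incorrect statements, then score new statements against those weights. All tweets, tokenization choices, and the frequency-difference weighting are illustrative assumptions.

```python
# Toy sketch of a personalized linguistic deception classifier.
# NOTE: illustrative only -- the published study used LIWC features and a
# fitted statistical model, not this word-frequency heuristic.
from collections import Counter

def word_freqs(tweets):
    """Relative frequency of each word across a list of tweets."""
    counts = Counter(w for t in tweets for w in t.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def train(correct_tweets, incorrect_tweets):
    """Weight each word by its frequency difference between the sender's
    factually incorrect and factually correct training tweets."""
    f_ok = word_freqs(correct_tweets)
    f_bad = word_freqs(incorrect_tweets)
    vocab = set(f_ok) | set(f_bad)
    return {w: f_bad.get(w, 0.0) - f_ok.get(w, 0.0) for w in vocab}

def score(model, tweet):
    """Positive score -> predicted factually incorrect for this sender."""
    return sum(model.get(w, 0.0) for w in tweet.lower().split())

# Hypothetical toy data (not from the study's fact-checked corpus):
correct = ["we met the delegation today", "the report was released today"]
incorrect = ["tremendous results nobody has ever seen",
             "best numbers ever recorded"]
model = train(correct, incorrect)
print(score(model, "tremendous numbers nobody expected") > 0)  # True
```

Because the weights come only from one sender's own labeled statements, the model is personalized by construction, which is the property the abstract credits for the accuracy gain over generic linguistic models.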


Language: en

Keywords

Twitter; deception detection; linguistic analysis; LIWC; open data; open materials; tailored model
