Fake news detector algorithm works better than a human
System sniffs out fakes up to 76 percent of the time.
An algorithm-based system that identifies telltale linguistic cues in fake news stories could provide news aggregator and social media sites like Google News with a new weapon in the fight against misinformation.
The University of Michigan researchers who developed the system have demonstrated that it’s comparable to and sometimes better than humans at correctly identifying fake news stories. In a recent study, it successfully found fakes up to 76 percent of the time, compared to a human success rate of 70 percent. In addition, their linguistic analysis approach could be used to identify fake news articles that are too new to be debunked by cross-referencing their facts with other stories.
Rada Mihalcea, the U-M computer science and engineering professor behind the project, said an automated solution could be an important tool for sites that are struggling to deal with an onslaught of fake news stories, often created to generate clicks or to manipulate public opinion. A paper detailing the system will be presented on Aug. 24 at the 27th International Conference on Computational Linguistics in Santa Fe, New Mexico.
Catching fake stories before they have real consequences can be difficult, as news sites today rely heavily on human editors who often can’t keep up with the constant influx of news. In addition, current debunking techniques often depend on external verification of facts, which can be difficult with the newest stories. Often, by the time a story is proven fake, the damage has already been done.
Linguistic analysis takes a different approach, analyzing quantifiable attributes like grammatical structure, word choice, punctuation and complexity. It works faster than humans and it can be used with a variety of different news types.
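To make the idea of "quantifiable attributes" concrete, here is a minimal sketch of how a few such cues might be computed. These simple measures are illustrative stand-ins only; the published system uses a much richer feature set than this.

```python
import re
import string

def linguistic_features(text):
    """Extract a few simple, quantifiable linguistic cues from a story.

    Illustrative proxies for the kinds of attributes described above
    (word choice, punctuation, complexity) -- not the paper's features.
    """
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = len(words) or 1
    n_sents = len(sentences) or 1
    return {
        # Average sentence length: a crude proxy for syntactic complexity.
        "avg_sentence_len": n_words / n_sents,
        # Average word length: a crude proxy for vocabulary sophistication.
        "avg_word_len": sum(len(w) for w in words) / n_words,
        # Punctuation density: sensational writing often over-punctuates.
        "punct_ratio": sum(c in string.punctuation for c in text) / max(len(text), 1),
        # Exclamation marks per sentence.
        "exclaim_per_sent": text.count("!") / n_sents,
    }

feats = linguistic_features("Shocking!! You won't believe this. Click now!")
```

A classifier trained on many labeled examples can then learn which combinations of such cues tend to co-occur with fabricated stories.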
Mihalcea, who worked with computer science and engineering assistant research scientist Veronica Perez-Rosas, psychology researcher Bennett Kleinberg at the University of Amsterdam and U-M undergraduate student Alexandra Lefevre, envisions a future in which automated systems help humans spot fake news more quickly in several ways.
“You can imagine any number of applications for this on the front or back end of a news or social media site,” Mihalcea said. “It could provide users with an estimate of the trustworthiness of individual stories or a whole news site. Or it could be a first line of defense on the back end of a news site, flagging suspicious stories for further review. A 76 percent success rate leaves a fairly large margin of error, but it can still provide valuable insight when it’s used alongside humans.”
Mihalcea explains that linguistic algorithms that analyze written speech are fairly common today. The challenge to building a fake news detector lies not in building the algorithm itself, but in finding the right data with which to train that algorithm.
Fake news appears and disappears quickly, which makes it difficult to collect. It also comes in many different genres, further complicating the collection process. Satirical news, for example, is easy to collect, but its use of irony and absurdity makes it less useful for training an algorithm to detect fake news meant to mislead.
Ultimately, Mihalcea’s team created its own data, crowdsourcing an online team that reverse-engineered verified genuine news stories into fakes. This mirrors how much real-world fake news is created, Mihalcea explained: by individuals who write it quickly in return for a monetary reward. Study participants, recruited with the help of Amazon Mechanical Turk, were paid to turn short, actual news stories into similar but fake news items, mimicking the journalistic style of the articles. At the end of the process, the research team had a dataset of 500 real and fake news stories.
They then fed these labeled pairs of stories to an algorithm that performed a linguistic analysis, teaching itself to distinguish between real and fake news. Finally, the team applied the algorithm to a dataset of real and fake news pulled directly from the web, where it achieved the 76 percent success rate.
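The supervised setup described above, learning from labeled real/fake examples and then predicting on unseen stories, can be sketched with a toy Naive Bayes text classifier. The training snippets below are invented for illustration, and plain word counts stand in for the paper's richer linguistic features.

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def train_nb(labeled_stories):
    """Fit word counts and class priors from (text, label) pairs."""
    counts = {"real": Counter(), "fake": Counter()}
    docs = Counter()
    for text, label in labeled_stories:
        counts[label].update(tokenize(text))
        docs[label] += 1
    vocab = set(counts["real"]) | set(counts["fake"])
    return counts, docs, vocab

def predict(model, text):
    """Pick the label with the highest smoothed log-probability."""
    counts, docs, vocab = model
    total_docs = sum(docs.values())
    best, best_lp = None, -math.inf
    for label in counts:
        lp = math.log(docs[label] / total_docs)  # class prior
        total = sum(counts[label].values())
        for w in tokenize(text):
            # Laplace-smoothed word likelihood.
            lp += math.log((counts[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical labeled pairs, standing in for the team's 500-story dataset.
training_pairs = [
    ("officials confirmed the budget figures on tuesday", "real"),
    ("the committee released its annual report", "real"),
    ("shocking miracle cure doctors hate revealed", "fake"),
    ("you won't believe this one weird shocking trick", "fake"),
]
model = train_nb(training_pairs)
```

Testing on stories drawn from a different source than the training data, as the team did with web-scraped news, is what makes the reported accuracy meaningful.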
The details of the new system and the dataset that the team used to build it are freely available, and Mihalcea says they could be used by news sites or other entities to build their own fake news detection systems. She says that future systems could be further honed by incorporating metadata such as the links and comments associated with a given online news item.
The paper is titled “Automatic Detection of Fake News,” and it will be presented at the International Conference on Computational Linguistics on August 24. The research was supported by U-M’s Michigan Institute for Data Science and by the National Science Foundation (grant number 1344257).
In the Media
Freakonomics Radio Live: “We Thought of a Way to Manipulate Your Perception of Time.”
EECS-CSE professor Rada Mihalcea is a guest on the Freakonomics podcast where she discusses her fake news detecting algorithm.
New system can detect fake news better than humans
EECS-CSE professor Rada Mihalcea and her fake news detector research are highlighted in New Indian Express.
Algorithm beats humans for sniffing out fake news
Futurity shares the fake news detector research story done in collaboration with Electrical Engineering and Computer Science professor Rada Mihalcea.
This fake news detection algorithm outperforms humans
The Next Web delves into the fake news detector research done in collaboration with Electrical Engineering and Computer Science professor Rada Mihalcea.
Researchers claim new algorithm beats humans at spotting fake news
Research led by EECS-CSE Rada Mihalcea highlights a new method for detecting fake news.
Fake news detector algorithm works better than a human
Tech Xplore highlights the fake news detecting algorithm developed by Michigan Engineering researchers.