Detecting deception in online information and news

Victoria Rubin

Image credit: Ross Howey


When Associate Professor Victoria Rubin joined FIMS in 2006, she received funding from the faculty to study the credibility of blogs, assessing how online writers express biases and opinions.

While the emphasis may have shifted from one platform to another over the past decade – for example, from blogs to Facebook – verification issues still persist, says Rubin.

“With the new ways of generating, sharing and obtaining content online, most users want to make their decisions based on credible sources who share their expertise with the best intentions in mind (meaning without lying).”

Identifying deliberately deceptive information in institutional mainstream or non-institutional text-based online news is the subject of Rubin’s current research endeavour, Digital Deception Detection. The work is being conducted in Rubin’s Language and Information Technology Research Lab, founded in 2008. The ultimate goal of the project, Rubin says, “is to inform the design of systems that either alert users to fact-check information of dubious quality, or flag and filter out misleading statements from the stream of data that we’re interacting with on a daily basis (e.g. rumours, hoaxes, or unverified news).”

Rubin and the students in her lab analyze satirical news, which can be misleading when delivered through Facebook. “Most news is formatted alike, and there is no clear visual distinction between a news piece from The New York Times and The Onion, for example. If the source attribution is unclear or its credibility is unknown, news readers might mistake a news parody for legitimate news,” Rubin says.

“My lab’s initial task is to come up with a satirical news detection system that flags satirical news parody as one type of fake or deceptive news, based on how the news is written regardless of the presence or absence of clear attribution.”

In April 2015, Rubin was awarded a three-year SSHRC Insight Grant to fund the Digital Deception Detection project. “The grant was a logical continuation of my previous work on rhetorical features of texts that could be predictors of deceptive behaviours,” says Rubin. “The context of news is new to the Deception Detection community, since the majority of the work is in interpersonal psychology, law enforcement, and airport/border security. Applying some of the methods to everyday news behaviours is tricky, since news can be biased, subjective, erroneous, but not necessarily deceptive. It’s a tangled knot that we are just starting to unravel.”

The funding will support the work of two to three students for three years and the subsequent dissemination of research and development results through conference presentations and knowledge mobilization in the form of digital literacy guidelines for news readers. Rubin says she and the students in her lab are aiming for software prototypes to support the work of newsrooms, news aggregators, and end-users, namely news readers.

At an Association for Information Science and Technology annual meeting in November 2015, Rubin, along with LIS PhD students Yimin Chen and Niall Conroy, presented three short papers related to the SSHRC-funded project: “Deception Detection for News: Three Types of Fakes,” “News in the Online World: The Need for an ‘Automatic Crap Detector’,” and “Automatic Deception Detection: Methods for Finding Fake News.”

In this series of papers, Rubin and her co-authors discuss the role of Library and Information Science in deception detection with respect to news, noting that LIS researchers have long been interested in issues of online credibility, and that news verification is an important issue within certain streams of LIS.

Rubin says news verification methods and tools are timely and beneficial to both lay and professional text-based news consumers. The research significance in LIS is four-fold: automatic analytical research methods complement and enhance the notoriously poor human ability to discern information from misinformation; credibility assessment of digital news sources is improved; the mere awareness of potential digital deception constitutes part of new media literacy and can prevent undesirable consequences; and the proposed veracity/deception criterion is also seen as a metric for information quality assessment.

Rubin and her lab team are connected virtually on a daily basis. They meet weekly for two hours and everyone has a role to fill, whether it’s running a data analysis script, collecting data or managing a dataset, researching specific concepts, or writing a paper. “The work is intense and demanding, especially around paper submission deadlines, but we are having lots of laughs given that the data we work with is satirical news pieces,” Rubin says.

When considering graduate students who are interested in working with her in her lab, Rubin places emphasis on good writing, self-motivation, and enthusiasm.

“I also have to say that when people are curious or perhaps even fearless, or at least confident, in their ability to master something that appears hard at first, they immediately catch my attention. It is enormously gratifying to work with smart self-driven individuals, regardless of their titles, age, and gender. I learn to listen to what they have to say, what worries them, what matters to them, and how their life might be different from when I was a student.”

Rubin adds, “I have collaborated with amazingly experienced and enthusiastic colleagues at FIMS who are generous with their time and expertise. I hope to further cross disciplinary bridges, and I may be knocking on their doors as my newly funded project develops and matures.”