Wikipedia gets a hard time for not always being accurate: professors at universities and high schools around the world are famously skeptical of students citing the site, even though its editors work hard to keep it reliable. It's a great resource, but you should still double-check anything you find there before using it in a paper for school or an article. According to researchers at MIT, one way the online encyclopedia could become more accurate is with the help of A.I.
A new paper from MIT researchers, being presented at the AAAI Conference on Artificial Intelligence, describes an A.I. system that could pinpoint and rewrite outdated parts of sentences across the millions of Wikipedia articles that need updating, lessening the burden on the website's volunteer editors. The system preserves the tone of the article, so it wouldn't be apparent that a bot made the update.
"The idea is that humans would type into an interface an unstructured sentence with updated information, without needing to worry about style or grammar. The system would then search Wikipedia, locate the appropriate page and outdated sentence, and rewrite it in a [human-like] fashion," MIT wrote in a blog post.
The researchers say the A.I. system could continuously scan reliable sources across the internet for new information on given topics and update the corresponding Wikipedia articles as that information surfaces. This would save volunteers a great deal of time, since they wouldn't have to hunt down every small fact that has fallen out of date.
Here's how the system works: it takes an existing sentence from a Wikipedia article and a sentence from elsewhere that contains the new information. The A.I. then decides which words in the Wikipedia sentence need to be replaced to make it consistent with that new information. This keeps the article current without rewriting it in a way that sounds like it wasn't written by a human.
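As a rough illustration of that replace-only-the-stale-words idea, here is a toy sketch (ours, not the paper's neural model): it keeps the tokens of the old sentence that still hold and swaps in only the tokens that conflict with the new-information sentence. Because it relies on a plain diff, this toy assumes the two sentences are phrased similarly; the actual system learns such alignments.

```python
# Toy sketch of targeted sentence updating (not MIT's actual model).
import difflib

def update_sentence(old: str, new_info: str) -> str:
    """Splice updated facts into `old`, preserving its untouched wording."""
    old_tokens = old.split()
    new_tokens = new_info.split()
    matcher = difflib.SequenceMatcher(a=old_tokens, b=new_tokens)
    out = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            out.extend(old_tokens[i1:i2])   # original wording survives
        else:
            out.extend(new_tokens[j1:j2])   # outdated facts get replaced
    return " ".join(out)

old = "The company employs 500 people as of 2018."
new = "The company employs 750 people as of 2020."
print(update_sentence(old, new))
# → The company employs 750 people as of 2020.
```

Here only "500" and "2018." are touched; everything else in the original sentence passes through unchanged, which is the property that keeps the edit sounding human-written.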
The MIT researchers claim this type of A.I. could be used for more than updating Wikipedia articles. It could also help reduce bias in fake news detectors, which have popped up to combat the rise of malicious fake news online. These detectors typically decide whether the claims in an article are true or false based on the data available to them, but they often inherit bias from the people who build them and who decide what makes something look like fake news. By directly comparing a claim against its supporting evidence, this A.I. system could help remove that bias.
Darsh Shah, a PhD student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and one of the lead authors of the paper, explained the problem.
"During training, models use some language of the human written claims as 'give-away' phrases to mark them as false, without relying much on the corresponding evidence sentence," Shah said. "This reduces the model’s accuracy when evaluating real-world examples, as it does not perform fact-checking.”
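Shah's point can be seen in a toy comparison (our illustration, with made-up "give-away" phrases, not the paper's model): a claim-only detector trips on surface wording and never checks the facts, while an evidence-based check, sketched here as a crude word-overlap stand-in for real entailment, compares the claim against an evidence sentence.

```python
# Toy contrast between a claim-only detector and an evidence-based check.
# GIVE_AWAYS is a hypothetical list of phrases a biased model might learn.
GIVE_AWAYS = {"reportedly", "allegedly", "supposedly"}

def claim_only_verdict(claim: str) -> bool:
    """Mark a claim false if it contains a give-away phrase --
    no fact-checking against evidence at all."""
    return not any(w in claim.lower().split() for w in GIVE_AWAYS)

def evidence_based_verdict(claim: str, evidence: str) -> bool:
    """Crude stand-in for entailment: the claim is supported if its
    content words all appear in the evidence sentence."""
    content = {w.strip(".,").lower() for w in claim.split()} - GIVE_AWAYS
    evid = {w.strip(".,").lower() for w in evidence.split()}
    return content <= evid

claim = "The company reportedly employs 750 people."
evidence = "The company employs 750 people."
print(claim_only_verdict(claim))                # False: tripped by "reportedly"
print(evidence_based_verdict(claim, evidence))  # True: facts match the evidence
```

The claim is factually correct, yet the claim-only detector rejects it purely because of the word "reportedly" -- exactly the kind of shortcut Shah describes models learning during training.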
Wikipedia is an important tool that doesn't get enough credit because of the occasional error, and fake news is an epidemic we're still figuring out how to handle, so perhaps this A.I. system could help us solve two significant problems at once.