Machine Learning isn’t effective at identifying fake news

Two recent studies show that while it’s relatively easy for computers to generate fake news with little human input, machine learning is poor at identifying it, Fast Company reports. The research was led by MIT doctoral student Tal Schuster. While Schuster found that computers are adept at identifying ML-generated text, they have a hard time telling what’s false from what’s true.

Schuster says the problem lies with the database used to train computers to spot fake news: Fact Extraction and Verification (FEVER). He found that ML-trained computers struggled to interpret negative statements about a subject even when they could easily interpret positive ones. As Axios reports, the problem, say the researchers, is that the database is filled with human bias. The people who created FEVER tended to write their false entries as negative statements and their true entries as positive statements, so the computers learned to rate sentences containing negations as false.

Does this mean machine learning will never be able to identify fake news? No, but we shouldn’t pin our hopes on it doing so anytime soon. Biased training data is not a problem unique to fake-news detection; human bias is an ongoing concern across artificial intelligence. AI may be capable of being smarter and more accurate than humans, but only if we can weed out our own prejudices while training it.