REIsearch NEWS


Why You Should (Or Should Not) Trust AI

According to Forbes, when speaking with corporate and community leaders you always hear the same thing: artificial intelligence makes them nervous. It’s one thing when AI is deciding whether to offer you a coupon for diapers or for detergent. It’s another when AI is producing near-instant decisions on everything from mortgage applications to prison sentences.

“Can we trust AI to do all that?” they ask. “What if something goes wrong?”

One reason for mistrust is that AI (like the people who create its algorithms) can sometimes show bias. A recent study from UC Berkeley, for example, found that AI systems charged minority homeowners higher interest rates. That’s an outrage. It’s also a risk. If your AI system is responsible for discrimination, consumers won’t be suing AI — they’ll be suing you. And if your AI is unfairly assessing consumers, it’s misjudging the marketplace and potentially costing you market share.

AI can run through millions of data points, using algorithms you need a PhD to understand, to reach an answer in a millisecond. That’s impressive. But risk professionals, the C-suite and boards of directors often ask, “Since AI is so fast and complicated, how am I supposed to control it?” Thirty-three percent of respondents in a PwC survey cited AI becoming too complex to explain or control as a top threat.

If all this sounds scary, it shouldn’t. It’s possible to reduce AI’s bias, improve its reliability and its resistance to cyber and privacy threats, and make it benefit not just your bottom line, but also your employees, customers and community.
