Is that AI product or service the real deal? Here’s how to tell


Like you, perhaps, I have trouble trusting what artificial intelligence says and does. But even if AI hallucinations are “solvable” and the tech doesn’t sway elections, shred privacy, or destroy humankind, there’s yet another trust worry.

People and companies lie about AI.

The Securities and Exchange Commission is sounding the alarm. In a recent speech, SEC chair Gary Gensler warned that the current hype around AI might encourage businesses to engage in “AI-washing”—making misleading or false claims about their use of the technology.

Meanwhile, the Federal Trade Commission has told companies that advertise AI products and services to keep their claims in check. And the SEC and the Financial Industry Regulatory Authority (FINRA) have warned of investment scams involving the purported use of AI.

So, what exactly is AI-washing?

“It’s very similar to greenwashing,” says Maya Dillon, VP and head of AI at Cambridge Consultants. It could mean exaggerating the capabilities of an AI offering, and/or downplaying its limitations, Dillon explains. Or calling something AI when it isn’t. Or using AI buzzwords for marketing purposes, without any real substance to back them up.

How concerned should we be?

“If the SEC and FTC and others are weighing in, there’s clearly a bad problem there,” says Steve Mills, chief AI ethics officer with Boston Consulting Group.

Regulators vowing to hold companies accountable for AI-washing is one thing. But as Dillon points out, existing legislation only covers the data used to produce AI models, not the models themselves.

For now, that leaves providers and the industry to self-regulate via AI assurance, which Cambridge is helping develop. Assurance consists of processes, frameworks, and tools that ensure AI systems are human-centric, reliable, safe, technically robust, secure, and ethical, Dillon says. The aim: transparency and accountability so AI can be trusted. “That process in itself forces you to prove that your AI solution actually works and does what it says on the tin.”

At the same time, AI providers should let an objective third party assess those efforts.

Those shopping for AI products and services must do their homework too.

Expect transparency, Mills advises. “If you’re looking at four solutions, and three of them are being super transparent and one isn’t, it leads you to start asking a lot of questions.”

On that note, Dillon suggests going straight for the technology. You’re paying for it, so don’t be shy. “Where did the data come from that created this algorithm?” is one key question, Dillon says. “How was this algorithm developed? How is it being deployed? And what are the results that come back from it, and how is that information used? Sometimes it’s those simple questions that will easily enable you to understand whether or not what you’re looking at is AI.”

Don’t feel confident grilling the vendor? Get help. Some venture capital firms are going that route, Dillon notes, by having independent organizations do AI technical due diligence for them.

Mills also recommends seeing if the company has legit AI experts on staff. “Are they making all these claims, but then they don’t seem to have the technical expertise to back up those claims?”

Bottom line: If that AI solution sounds too good to be true, it probably is. The sales pitch shouldn’t leave you feeling like you need a wash.

Nick Rockel
nick.rockel@consultant.fortune.com

