Salesforce’s chief ethical and humane use officer says AI needs guardrails to reach its full potential

Good morning, Broadsheet readers! The FTC tries to block Tapestry's $8.5 billion acquisition of Capri, 23andMe cofounder and CEO Anne Wojcicki intends to take her company private, and a Salesforce exec turns to humans to ensure AI reaches its full business potential. Have a thoughtful Thursday.

- Trust factor. Companies have little doubt that AI will help them advance their business goals. It’s already happening, from tech startups to fashion brands. But Paula Goldman, Salesforce’s chief ethical and humane use officer, argues that guardrails and a human counterbalance to AI are essential to ensuring the technology’s potential fully translates into results.

“I think that’s what’s going to continue to unlock AI productivity and AI gains for companies,” Goldman said at Fortune’s Brainstorm AI conference in London earlier this week, in conversation with Fortune executive news editor Nick Lichtenberg. “There’s no doubt right now about the capabilities of AI,” she said. Yet, she added, “it’s possible that the next AI winter is caused by trust issues with AI or people adoption issues with AI.”

Simply put, failing to address people’s concerns about AI—from a doomsday takeover to racial and gender bias—could prevent AI’s productivity advancements from reaching their full potential.

Paula Goldman, chief ethical and humane use officer at Salesforce, speaks with moderator Nick Lichtenberg, executive news editor at Fortune, during the session “AI + HI: Building the AI We Want” at Fortune Brainstorm AI in London on April 15, 2024. Photography by Joe Maher/Fortune Brainstorm AI

Goldman said that companies have so far implemented checks like a human signing off before a consequential decision is made by AI. But “that’s no longer enough,” she said. Instead, organizations need “next-level controls,” with a human copilot participating throughout the process, not only at the end point.

Goldman has been in her unusual role at Salesforce for five years—well before the launch of ChatGPT. Now that the public has more awareness of AI, building trust is even more important than when the technology was discussed mostly among those already in the know. A top question Salesforce clients ask about AI is “Can we trust it?,” she revealed. Her job is to help them answer that question in the right way for their own businesses.

As AI becomes par for the course across global business, Goldman hopes that the focus on trust doesn’t wane. “I hope that the attention that's being paid to these issues of trust continues and is not a momentary thing,” she said.

Emma Hinchliffe
emma.hinchliffe@fortune.com

The Broadsheet is Fortune's newsletter for and about the world's most powerful women. Today's edition was curated by Joseph Abrams. Subscribe here.

This story was originally featured on Fortune.com