Why We Should Stop Worrying About A.I. (And Start Worrying About Data)

The one-two punch of data and artificial intelligence is in the midst of transforming the world as we know it. But how do we make sure that the new world that emerges in their wake is one we’ll want to live in?

A big part of the equation is ensuring that consumers’ data is handled properly. Speaking on a panel at Fortune‘s Most Powerful Women Summit in Laguna Niguel, Calif. on Monday, Clara Shih, CEO and co-founder of Hearsay Systems, offered a straightforward, four-point system for doing just that:

1. Be transparent. Let people know what information will be used and how.

2. Provide choice. Be clear about when people can opt in or out of having their personal data collected.

3. Explain the value. That might be convenience (Shih cited Amazon as an example) or rewards, as with a credit card.

4. Instantly notify people when there’s a data breach. That’s “when,” not “if,” she said.

It’s important to set a code of conduct around data and to follow it strictly, added fellow panelist and Ancestry.com CEO Margo Georgiadis. For example, during the height of the 2018 immigration and family separation crisis, Ancestry was asked to donate genetic testing services to try to help reunite asylum-seeking parents and children who were separated at the border. That was a vitally important cause, but Georgiadis said she ultimately felt that the request violated Ancestry’s policy of allowing consumers to control their data.

“There was an emotional connection,” she said. “But there could be unintended consequences.” Would parents provide consent? Would the asylum seekers be able to delete the data later? How else might their data be used? “We’re happy to help,” Georgiadis said, “but we need clarity.”

Technologists must stop thinking of data as “just another input,” said Navrina Singh, principal product lead for Microsoft A.I. Instead, they have to acknowledge that the data they use will make the product what it is and may have larger implications for society.

“When you’re thinking about A.I., for me the data sets are one of the biggest concerns,” said Brenda Darden Wilkerson, president and CEO of AnitaB.org, an advocacy group for women in technology. If those data sets are faulty or biased, they color what the A.I. learns. Humans must be responsible for the “care and feeding” of technology, she said, so we can be sure artificial intelligence can actually help make the world better—for everyone. Technology can’t just serve the Silicon Valley elite, Darden Wilkerson said: “I’m very concerned with the people who are negatively impacted.”