Emerging breakthroughs in AI and the Internet of Things are reshaping the future. At Dreamforce 2018, a panel drawn from government, the church, and the tech community convened for a deep dive on best practices. “The genie is out [of] the bottle,” said Richard Socher, chief scientist at Salesforce. “Now everyone can use [AI], and we want to help them think of the ethical implications.”
Ethics by Design
Father Eric Salobir, president of Optic Technology, a Catholic research and innovation network, emphasized how ethical design is not about choosing between being profitable and having a positive impact. “We want to put those together and see how to have added value using core values,” he said. This needs to start at a company level. “As a platform we have a responsibility to do this, and help other companies work through this,” said Socher.
Safety and Transparency
“Safety is critical to AI,” said Terah Lyons, executive director of The Partnership on AI, a nonprofit founded in 2016 to explore best practices for AI. Founding members include IBM, Apple, and Google, which have committed to establishing standards and to public education. Lyons explained that “safety” in this context means defining the language used when discussing AI. “The first order is getting a common language. When we name a term, it's establishing a common framework for talking and thinking about issues across sectors and disciplines,” she said. “It looks like standards setting but takes place a little prior to that.”
Regulating Fairness and Equal Opportunity
“We definitely need rules of the road, and regulation means we have an environment for a safe space for everyone,” said Commissioner Sharon Bowen, formerly of the U.S. Commodity Futures Trading Commission and currently a spokesperson for global leadership platform Seneca Women. “But we don't want to stifle development, so you're always going to have a balance.” Her chief concern is that individuals get equal access to the benefits brought about by AI, from financial markets to individual organizations. “Technology lets us have more liquidity, more competition and cheaper cost to the end users,” she said, but warned that it can be abused. High-frequency trading, for example, can undermine a level playing field. She sees the potential for AI to be a watchdog in this space. “AI is a tool for regulators to use to filter out bad behavior, and for the marketplace, it's a tool to mitigate risk.”
Embedding Inclusion
“Ethics is a mindset, not a checklist,” said Socher, explaining that algorithms are only as good as the training data they get. When bias occurs, it's not due to “an evil programmer against giving loans to women,” but to an algorithm trained on a dataset that reflects historical biases in which women and minorities were given fewer opportunities. The AI then makes predictions from the data at hand, which in turn results in fewer opportunities.
“We think that educating companies that use us to think about this and try and improve the data [is key],” he said. “Constantly question yourself about how is this algorithm affecting people's lives.” Lyons believes this type of change works best when companies prioritize inclusiveness at the management level. “It's not about hacking your way through something, it's about situating work done in a product development context in context with how it's scaled,” she said.
The Danger of Datasets
Collectively, the panel warned that data should be the start, not the end, of your interaction with AI. Even a verified dataset may have flaws. For example, Socher pointed to FDA-approved medical algorithms that sweep brain scans for cancer. They must be at least as accurate as a doctor, but doctors aren't always accurate, he said. “We need regulation in terms of data,” he said.
Moving forward, this will become increasingly necessary in light of privacy, self-driving cars, and automation. “It's a fine balance between regulation and consumer protections,” he said.
Bowen agreed. “Data analytics are extremely important,” she said. “Data has become powerful in helping us make policy decisions, and our job is to protect the integrity of it.”
Adding to this, Father Salobir voiced concerns about the use of AI data in sensitive situations, such as criminal justice. “Policymakers don't take that enough into consideration. Without the right tool we don't get the right outcome,” he said. “There's no Vatican tech. [But] religions bring a new set of questions and a new way to think about ethics.”