Why Big Tech Struggles with Ethics

In April, Google made news yet again with the controversy surrounding the formation of an ethics board focused on artificial intelligence (AI). The board, tasked with the “responsible development of AI,” was to have eight members and meet four times over the course of 2019 to evaluate the ethical implications of AI development and to make recommendations to executives.

But a week after the board was formed, it was officially cancelled. The Advanced Technology External Advisory Council (ATEAC), as it was called, ran into considerable controversy over the inclusion of Kay Cole James, the African American female president of the conservative think tank The Heritage Foundation, and of drone company CEO Dyan Gibbens. Employees protested James’s inclusion because of her views on sexuality and climate change. Gibbens’s inclusion revived an older controversy Google faced: the outcry from its employees last year over an AI contract with the U.S. Department of Defense. Project Maven was designed to strengthen drone targeting systems by identifying objects in video data, but thousands of Google employees protested the company’s involvement, saying: “Google should not be in the business of war.”

The race to develop ethical AI is in vogue: companies like Google and the German software firm SAP, as well as governmental bodies like the European Union, are drafting ethical guidelines for AI. These statements are often developed in response to growing public concern about the way AI is reshaping society, from how we deal with bias in AI systems to the future of work in an AI-driven economy. The giants of Silicon Valley are sensitive to this growing criticism.

These corporate and government principles can ring hollow, however, since they’re often based on the prevailing moral preferences of the day, which shift depending on which tribe or interest is at the table. Google says that AI development should be socially beneficial and not cause harm, yet it rules out military applications that might actually save lives through more precise weapon targeting. Often these statements rest more on popular opinion and what may increase profits than on any transcendent principles of justice and human dignity. Absent a shared moral consensus, it will be hard for tech companies and civic authorities to create principles that are universally embraced.

Read the full article at The Gospel Coalition