Machines and Morality: How and Why We Should Make AI More Human

We are entering what some call the “fourth industrial revolution.” Whereas the first industrial revolution harnessed the power of the steam engine, the fourth will flourish on the backs of big data, artificial intelligence and robotics.

China has recognized the importance of these three keystone technologies and has committed to building a $150 billion AI industry in the hope of becoming the global leader by 2030. However, China’s vision for AI may be at odds with that of Western democracies: “AI in China appears to be an incredibly powerful enabler of authoritarian rule,” with face scans, citizen databases and censorship taking centre stage, while Western AI focuses on commercial potential, mapping consumer behavior, making cities more efficient and improving consumer technology.

So with the world accelerating towards an AI-dependent future, we are left to ask: what role does morality play in AI? Can AI itself be programmed with the concept of morality and hold itself accountable?

Interestingly, yes: AI can be as moral as the humans who code it. By crowdsourcing ethical decisions from subjects around the world, scientists have been able to create a moral framework to help self-driving cars make decisions in difficult moral dilemmas. If a malfunctioning self-driving vehicle is barreling towards an elderly couple or a young child, how should it react? The results may surprise you, and they depend on your cultural background. For example, in China, Japan and South Korea, where the elderly are held in high esteem, subjects were less likely to save the child. According to an MIT review of the study, “countries with more individualistic cultures are more likely to spare the young.” As dark as the prospect of AI out-competing human productivity and creativity may seem, the future may not look so bleak if AI can adhere to a human-based moral code.
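To make the idea concrete, here is a minimal sketch of how crowdsourced verdicts might be aggregated into a regional preference weight and then applied to a dilemma. The function names, the example data, and the simple majority-vote weighting are illustrative assumptions, not the actual methodology of the MIT study.

```python
from collections import Counter

def preference_weight(votes):
    """Aggregate crowdsourced verdicts ('spare_young' / 'spare_elderly')
    into a 0-to-1 weight expressing the region's preference for
    sparing the younger party."""
    counts = Counter(votes)
    total = sum(counts.values())
    # Default to indifference (0.5) if no votes were collected.
    return counts["spare_young"] / total if total else 0.5

def party_to_spare(weight, parties=("young child", "elderly couple")):
    """Choose which party the vehicle should prioritize, given the
    region's aggregated preference for sparing the young."""
    return parties[0] if weight >= 0.5 else parties[1]

# Hypothetical vote counts, loosely echoing the reported cultural split.
votes_by_region = {
    "individualistic_region": ["spare_young"] * 70 + ["spare_elderly"] * 30,
    "collectivist_region": ["spare_young"] * 45 + ["spare_elderly"] * 55,
}

for region, votes in votes_by_region.items():
    w = preference_weight(votes)
    print(f"{region}: weight={w:.2f} -> spare the {party_to_spare(w)}")
```

Even this toy version shows the key design choice: the car’s “morality” is not hand-written by an engineer but derived from aggregated human judgments, so it inherits whatever cultural variation those judgments contain.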


Author

Viewpoint Research Team