Why, Robot? The social, moral & cultural implications of AI

wazoku News

Google’s recent Pixel 2 event may have focused primarily on new products, on smartphones and laptops, but the real story was the pervasiveness of Artificial Intelligence and Machine Learning.

Artificial Intelligence is evolving at a rapid pace. As corporations and governments rush to implement it, automating tasks at speed and linking technical actions to sensory experience, there is a real risk that the social, cultural and political implications of AI will be neglected.

It’s important to recognise that AI creates a cultural shift as much as a technical one. Technology is never value-free; it is never built without an agenda or values attached. So how do we turn this risk into an opportunity? How do we ensure that AI is held to the highest ethical standards? Is it time for a robo-Magna-Carta?

One of the first challenges is defining the problem, not least clearly defining what AI is and is not. This isn’t only about retraining and re-employing the hundreds of thousands of people who will be displaced by AI; an ethical foundation for AI needs to put morals and social impact at the core of the design process.

Greg Brockman, one of MIT Technology Review’s 35 Innovators Under 35, is among the few spearheading the charge, having co-founded OpenAI with Elon Musk. They are working backwards from a shared realisation: that AI must learn crucial, implicitly human emotions as a way of preventing misbehaviour, bias and misrepresentation. The only way to do this, they believe, is to design with one particularly anthropomorphic emotion in mind: shame, the emotion best placed to ensure AI recognises that actions have consequences.