Digital Future: Helping Algorithms Decide Who Lives Or Dies

January 23, 2017  |  Blog  |  2 Comments

Imagine you were asked to decide the behaviour of an autonomous vehicle as it spins out of control:

“if an accident is really unavoidable and where the only choice is a collision with a small car or a large truck, driving into a ditch or into a wall, or risk sideswiping a mother with a stroller, or an 80-year-old grandmother?”

Who would you choose?

This was the question recently posed by Daimler's CEO, Dieter Zetsche (which I wrote about previously).

Whilst on the surface it sounds like an isolated conundrum, the truth is that more of our life-and-death decisions are being pre-determined by strokes on a keyboard.

The frequency and granularity of these decisions represent a spectrum of hazards that sit at the heart of our technical, legal and industrial worlds. Most are unprecedented, and all have large implications for the future effectiveness of our organisations and governance structures.

Realistically, no organisation, private or public, has the resources, abilities, or rights to decide on all outcomes across all jurisdictions; it's simply too big, too varied, too complex and too fast-moving.

So how do you cater for a variety of ethical and legal challenges in a world of exponential change?

This important question has inspired a group of tech companies to take collective action:

“Amazon, DeepMind/Google, Facebook, IBM, and Microsoft today announced that they will create a non-profit organization that will work to advance public understanding of artificial intelligence technologies (AI) and formulate best practices on the challenges and opportunities within the field.

The objective of the Partnership on AI is to address opportunities and challenges with AI technologies to benefit people and society. Together, the organization’s members will conduct research, recommend best practices, and publish research under an open license in areas such as ethics, fairness, and inclusivity; transparency, privacy, and interoperability; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology.”

In an interview with the Financial Times, Eric Horvitz, an AI researcher at Microsoft, said:

“The technology was at an “inflection point”, and that challenges were becoming clear. These include the “hidden biases” in algorithms that make implicit assumptions after being “trained” with specific sets of data; questions about the safety and trustworthiness of systems that often take decisions for reasons even their own programmers cannot understand; and ethical judgments that are embedded in systems to influence how they interact with humans in given situations.

The Microsoft researcher said the group had looked back through the history of science, to the introduction of sweeping new technologies such as electricity and human flight, and not found any precedents for the industry-wide effort.

He also denied that there was an “explicit attempt in the notion of self-regulation to repel government intrusion” in the field. Instead, he said the industry research should be welcomed by governments, which often find it hard to understand the impact of such technologies.”
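Horvitz's point about "hidden biases" can be illustrated with a minimal sketch. The data and scenario below are entirely hypothetical, and the "model" is deliberately the simplest one possible: it just memorises the majority outcome per group. Even so, when the historical data is skewed, the trained model faithfully reproduces that skew — no one wrote a biased rule, yet the bias is now baked in.

```python
from collections import Counter

# Hypothetical historical decisions: group "A" was approved far more
# often than group "B", for reasons unrelated to merit.
training_data = (
    [("A", "approve")] * 90 + [("A", "deny")] * 10
    + [("B", "approve")] * 30 + [("B", "deny")] * 70
)

def train(rows):
    """'Train' by memorising the majority outcome for each group --
    the simplest possible model, yet it already encodes the bias."""
    outcomes = {}
    for group, decision in rows:
        outcomes.setdefault(group, Counter())[decision] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

model = train(training_data)
print(model)  # {'A': 'approve', 'B': 'deny'}
```

The model makes an "implicit assumption" exactly as Horvitz describes: it was never told to treat the groups differently, it simply learned the pattern its training data contained.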

Whilst the formation of such a group is no guarantee that the decisions made by our technology will always be good, it is a commendable example of collective accountability and an important first step in ensuring that this important area gets the visibility, and thinking, it deserves.

Our lives literally depend on it!


Industry Leaders Establish Partnership on AI Best Practices


Scott (31 Posts)

CEO Digital Infusions

2 Comments for this entry

    August 22nd, 2017 on 6:43 AM

    And most importantly for those who don’t create algorithms for a living – how do we educate ourselves about the way they work, where they are in operation, what assumptions and biases are inherent in them, and how to keep them transparent?
