On 12 November 2019, we held a joint event with Instadeep exploring the intersection of AI and ethics. We looked at some of the more philosophical considerations that arise as AI starts to replace human decision-making across society.
For those who were unable to join us, here is a quick recap; we hope to see you at future events.
The event was moderated by 6point6 with three lightning talks from: Sherif Elsayed-Ali, Director of Partnerships, AI for Good, Element AI; Ross McKenzie, Partner – Data Protection, Addleshaw Goddard; and Andrew Morgan, Head of Data Engineering, 6point6.
These talks were followed by an audience Q&A with the speakers.
Below is a short summary of the lightning talks.
How should we approach ethics in artificial intelligence? What kind of ethical framework should surround AI decision making? Where and when is it okay, and not okay, to use AI? What about on the battlefield, or in the criminal justice system?
These questions now surround the use of artificial intelligence as it takes over more important aspects of our lives. One approach is to look at AI through the lens of Human Rights, using that internationally agreed framework to guide the ethical use of the technology. Human Rights have legal standing, and the private sector is already monitored to some extent by the UN Guiding Principles on Business and Human Rights.
Yet, while Human Rights provide good guidelines, there are important questions we must still consider. For example, can existing public institutions and laws address AI’s human rights risks? How can industrial policy and AI investors encourage a market that favours the design, development and deployment of responsible AI? It’s clear that there are no simple answers to these questions.
GDPR provides an important legal framework for navigating the risks associated with AI governance and the processing of data. Ross outlined his three steps for assessing AI implementations: Engagement, Ethics and Evaluation.
Ethics needs to be incorporated at the data science level when developing AI systems that involve processing individuals’ data. This will enable us to get true value from AI.
While the harvesting and use of personal data are ubiquitous, it is very difficult to fully anonymise the individuals in a dataset. Credit card numbers can be hashed, but they can still be matched to location and journey data to reveal a unique fingerprint that is much harder to anonymise. TfL encountered this difficulty with an FOI request for the pilot data collected from the WiFi infrastructure within its stations. It eventually declined to hand over the data, citing concerns that it could be re-identified and used to publicly expose individuals and their travel patterns.
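To make that point concrete, here is a minimal sketch in Python (toy data and hypothetical station names, not TfL's actual scheme) of why hashing pseudonymises a card number without anonymising the journeys behind it: the hash hides the raw number, but a handful of known taps still acts as a unique fingerprint.

```python
import hashlib

def pseudonymise(card_number: str) -> str:
    # A one-way hash hides the raw number, but it is stable per card,
    # so every tap made with the same card still shares one identifier.
    return hashlib.sha256(card_number.encode()).hexdigest()

# Toy tap records: (pseudonym, station, hour of day)
taps = [
    (pseudonymise("4111111111111111"), "Oxford Circus", 8),
    (pseudonymise("4111111111111111"), "Bank", 18),
    (pseudonymise("5500000000000004"), "Oxford Circus", 8),
    (pseudonymise("5500000000000004"), "Victoria", 17),
]

# An observer who knows a target's commute (Oxford Circus at 8, Bank at 18)
# can single out the matching pseudonym and, with it, the full travel history.
known_pattern = {("Oxford Circus", 8), ("Bank", 18)}
candidates = [
    pid
    for pid in {p for p, _, _ in taps}
    if known_pattern <= {(s, h) for p, s, h in taps if p == pid}
]
print(len(candidates))  # 1 -- the travel pattern acts as a unique fingerprint
```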
Andrew explained that many standard approaches to anonymising data, such as “differential privacy”, can actually reduce the value of the data for AI systems built downstream. To address this, researchers have been developing new methods that retain the value of the data while making the anonymisation more robust. As an example, he walked through a new algorithm for anonymising user trajectory data, called “SwapMob”. It fragments user routes into segments and rebuilds randomised journeys from them, retaining the data’s statistical structure, and thus its value, while making it much harder to re-identify individuals.
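As a rough illustration of the swapping idea described above (a simplified sketch under our own assumptions, not the published SwapMob implementation), the snippet below exchanges pseudonyms between users whenever they are co-located at the same time. No single identifier retains a complete real journey, yet the per-location, per-time counts are unchanged, which is what preserves the data's aggregate value.

```python
import random

def swap_trajectories(points, seed=0):
    """points: list of (time, location, user_id) observations.

    Whenever two or more pseudonyms meet at the same place and time,
    their identifiers are shuffled, so the rest of each journey is
    handed to a different pseudonym (a simplified segment swap).
    """
    rng = random.Random(seed)
    # Current mapping from real user to output pseudonym; starts as identity.
    alias = {uid: uid for _, _, uid in points}
    output = []

    # Group observations by (time, location) to find meetings.
    by_slot = {}
    for t, loc, uid in sorted(points):
        by_slot.setdefault((t, loc), []).append(uid)

    # Process meetings in chronological order.
    for (t, loc), users in sorted(by_slot.items()):
        if len(users) > 1:
            # Co-located users: shuffle their current pseudonyms, which
            # swaps the remainder of their journeys between them.
            current = [alias[u] for u in users]
            rng.shuffle(current)
            for u, a in zip(users, current):
                alias[u] = a
        for u in users:
            output.append((t, loc, alias[u]))
    return output
```

Because only the identifiers are permuted, the number of observations at each location and time is identical before and after swapping, so aggregate analyses still hold while any individual pseudonym's trail is stitched together from several real people's journeys.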