
Artificial Intelligence and Ethics Recap

November 19, 2019

On 12th November we held a joint event with InstaDeep exploring the intersection of AI and ethics. We looked at some of the more philosophical considerations as AI starts to replace human decision-making across society.

For those who were unable to join us, here is a quick recap; we hope to see you at future events.

The event was moderated by Gary Richardson, MD of Data and Emerging Technology at 6point6, and began with three lightning talks from: Sherif Elsayed-Ali, Director of Partnerships, AI for Good, Element AI; Ross McKenzie, Partner – Data Protection, Addleshaw Goddard; and Andrew Morgan, Head of Data Engineering, 6point6.

These talks were followed by an audience Q&A with the speakers.

Below is a short summary of the lightning talks.

Sherif Elsayed-Ali: A human rights approach to AI governance

How should we approach ethics in artificial intelligence? What kind of ethical framework should surround AI decision making? Where and when is it okay and not okay to use AI? What about on the battlefield, or in the criminal justice system?

These types of questions now surround the use of artificial intelligence as it begins to take over more important aspects of our lives. One approach is to look at AI through the lens of human rights, using this internationally agreed framework to guide the ethical use of the technology. Human rights have legal standing, and the private sector is already somewhat monitored through the UN Guiding Principles on Business and Human Rights.

Yet, while human rights provide good guidelines, there are still important questions to consider. For example, can existing public institutions and laws address AI's human rights risks? How can industrial policy and AI investors encourage a market that favours the design, development and deployment of responsible AI? It's clear that there are no simple answers.

Ross McKenzie: Engagement, Evaluation and Ethics: a lawyer’s perspective

GDPR provides an important legal framework for navigating the risks associated with AI governance and the processing of data. Ross outlined the three steps he takes when looking at AI implementations: Engagement, Ethics and Evaluation.

  1.  Engagement: establish why you need the technology and what the business benefits are, both now and in the future. There needs to be a lawful basis for using the technology and processing the data, all of which should be grounded in GDPR.
  2.  Ethics: the use of AI should follow GDPR principles: be transparent in its use of data (to avoid being creepy), minimise the data it harvests, ensure that data is accurate, respect the rights of individuals, and incorporate privacy by design.
  3.  Evaluation: this must be an essential part of every technology project. Projects should be reviewed regularly to avoid scope creep, unnecessary data should be cleansed, and users should be notified of any changes. In addition, suppliers should be audited on a regular basis, and the board re-engaged where relevant.

Andrew Morgan: Privacy-Aware Machine Learning: creating value through AI while safeguarding the individual

Ethics needs to be incorporated at the data science level when developing AI systems that involve processing individuals’ data. This will enable us to get true value from AI.

While the harvesting and use of personal data is ubiquitous, it's very difficult to fully anonymise individuals in a dataset. Credit card numbers can be hashed, but the hashed values can still be matched to location and journey data, revealing a unique fingerprint that is much harder to anonymise. TfL experienced this difficulty first-hand during an FOI request into the data it collected from a WiFi pilot within its stations: it eventually declined to hand over the data, citing concerns that it might be re-identified and used to publicly reveal individuals and their travel patterns.
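To see why hashing alone falls short, consider this minimal sketch (the card numbers, locations and field names are all made up for illustration). A hash is deterministic, so it acts as a stable pseudonym that still links the same person's records across datasets:

```python
import hashlib

# Hashing a card number produces a stable pseudonym, not anonymity:
# the same card always hashes to the same value, so records stay linkable.
def pseudonymise(card_number: str) -> str:
    return hashlib.sha256(card_number.encode()).hexdigest()

# Hypothetical transaction logs from two days (illustrative data only).
monday = [("4929-1111-2222-3333", "Baker Street, 08:02"),
          ("4929-4444-5555-6666", "Waterloo, 08:15")]
tuesday = [("4929-1111-2222-3333", "Baker Street, 08:05")]

hashed_monday = {pseudonymise(card): place for card, place in monday}
hashed_tuesday = {pseudonymise(card): place for card, place in tuesday}

# Joining on the hash re-links one person's journeys across days, building
# exactly the kind of unique movement fingerprint described above.
for h in hashed_monday.keys() & hashed_tuesday.keys():
    print(h[:8], hashed_monday[h], "->", hashed_tuesday[h])
```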

Andrew explained that many standard approaches to anonymising data, such as differential privacy, can actually reduce the value of the data itself when building AI systems downstream. To address this, researchers have been developing new methods that seek to retain the value of the data while improving the robustness of the anonymisation. As an example, he walked through a new algorithm for anonymising user trajectory data called "SwapMob". It fragments user routes into segments and swaps them between users whose paths cross, rebuilding randomised journeys from the pieces. This retains the data's statistical structure, and thus its value, while making it much harder to re-identify individuals.
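A minimal sketch of the segment-swapping idea follows. This is not the published SwapMob implementation: the crossing detection is omitted, and the function names and journeys are illustrative assumptions.

```python
# Each trajectory is a list of (timestamp, location) points for one user.
# In SwapMob-style anonymisation, when two users' paths cross (are close
# in space and time), the remainders of their trajectories are swapped,
# so no single pseudonym follows one real person end to end.

def swap_on_crossing(traj_a, traj_b, crossing_index):
    """Swap the tails of two trajectories at a shared crossing point."""
    head_a, tail_a = traj_a[:crossing_index], traj_a[crossing_index:]
    head_b, tail_b = traj_b[:crossing_index], traj_b[crossing_index:]
    return head_a + tail_b, head_b + tail_a

# Illustrative journeys: both users pass through Oxford Circus at 08:10.
user_1 = [("08:00", "Baker Street"), ("08:10", "Oxford Circus"), ("08:20", "Bank")]
user_2 = [("08:02", "Victoria"), ("08:10", "Oxford Circus"), ("08:25", "Camden")]

anon_1, anon_2 = swap_on_crossing(user_1, user_2, crossing_index=2)
print(anon_1)  # Baker Street -> Oxford Circus -> Camden
print(anon_2)  # Victoria -> Oxford Circus -> Bank

# Aggregate statistics (points visited, flows through each crossing) are
# preserved, but each rebuilt journey mixes segments from different people.
```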

If you have any questions regarding the event or any of the talks, please do not hesitate to get in touch with us: [email protected] or [email protected]


Gary Richardson
MD, Data and Emerging Technology