Artificial intelligence (AI) and machine learning are no longer mere buzzwords; organisations are keen to align their initiatives with industry guidelines and recommendations. As we push the boundaries of what AI can achieve, governing bodies must create a framework in which organisations consider the ethical implications not merely as policy, but as a moral obligation to those the technology affects.
"Charting the course of AI advancement, we must ensure ethics remain our guiding star"
AI is revolutionising businesses, streamlining operations and reducing costs. However, alongside abundant opportunity, there is risk. The key to responsible innovation lies in carefully balancing ethics with advancement.
The rapid advances in machine learning (ML) and artificial intelligence (AI) have given rise to a host of ethical concerns. For example, research into gender bias in text-based generative AI tools by Isobel Daley, Data Scientist at 6point6, has shown how AI can reinforce traditional gender roles. Facial recognition technology has exhibited bias, as reported by studies such as the Gender Shades project [1], particularly in its accuracy across different demographic groups, with wider consequences for society. AI-driven hiring tools have faced scrutiny for discriminatory outcomes in hiring practices [2]. Predictive policing systems have raised concerns about racial bias: models built on inadequate or partial data can embed a feedback loop of bias into existing police practices. Additionally, the ethical dilemmas posed by autonomous vehicles [3] have been widely discussed.
The proliferation of deepfake technology and its use in crime, widespread AI surveillance, and data privacy breaches have all been the subject of extensive reporting and research over the past few years. What do these examples tell us? They demonstrate that to integrate AI successfully, organisations must not only comply with existing legislation, government guidance and cybersecurity principles, but also ensure that responsible and ethical AI is at the heart of their innovations.
As businesses increasingly turn to AI, it is essential to highlight the data protection risks inherent in developing machine learning models. These models often require access to substantial datasets, frequently containing sensitive or personal information. Such data is subject to the stringent safeguards of the GDPR and other data protection laws, yet there is often a limited understanding of the risk management and legal compliance required when using personal data to develop AI systems.
Beyond protecting data, businesses must also consider the potential impact of employing AI on people. The European Commission succinctly encapsulates these concerns as AI’s “opacity, complexity, bias, a certain degree of unpredictability and partially autonomous behaviour”. As a result, businesses must explore ways in which explainability can be incorporated into models, address the risks of bias and discrimination, and implement safeguards to mitigate the consequences of unexpected outcomes.
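By way of illustration, one lightweight route to explainability is permutation importance, which reports how much a model's validation performance depends on each input feature. The sketch below uses scikit-learn and a public demonstration dataset; the model and data are placeholders chosen for brevity, not a recommendation of a particular approach.

```python
# Minimal sketch: surfacing which features a trained model relies on, via
# permutation importance. The dataset and model are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in validation score;
# large drops indicate features the model genuinely depends on.
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda kv: -kv[1])[:5]
for name, importance in top:
    print(f"{name}: {importance:.3f}")
```

Reports like this do not make a model fully interpretable, but they give stakeholders a concrete, auditable account of what drives its predictions.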
In light of these risks and ethical concerns, businesses should conscientiously explore the AI governance principles and recommendations outlined below. These approaches seek to strike a balance between the imperative to foster innovation and the ethical and legal responsibility to effectively manage risk.
AI and data protection law [4] are intertwined because of the ethical concerns around the use of personal data and the potential for bias in AI systems. To address these challenges, organisations using AI systems should build in features such as transparency and consent, and account for the potential impact of automated decision-making on individuals, in line with data protection legislation such as the GDPR. Techniques such as federated learning enable AI models to be trained on decentralised data without exposing sensitive information, thereby safeguarding data privacy.
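As a toy illustration of that idea (not a production implementation, where a framework such as TensorFlow Federated or Flower would typically be used), the NumPy sketch below performs federated averaging: each simulated client takes a training step on its own private data shard, and only the resulting model weights, never the raw data, are shared and averaged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch of federated averaging: each "client" holds a private data
# shard that never leaves its site; only model weights are exchanged.
def local_step(weights, X, y, lr=0.1):
    """One gradient-descent step of linear regression on a client's own data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# Three clients with private shards drawn from a common underlying relationship.
true_w = np.array([2.0, -1.0])
shards = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    shards.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

weights = np.zeros(2)
for _ in range(100):
    # Each client updates the shared model locally on its own data ...
    local_weights = [local_step(weights, X, y) for X, y in shards]
    # ... and only the averaged weights return to the coordinator.
    weights = np.mean(local_weights, axis=0)

print(weights)  # approaches [2.0, -1.0] without ever pooling the raw data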
AI systems must be designed to facilitate end-to-end accountability, safety, transparency [5] and efficiency. To accomplish this, regular human reviews should be conducted, and the internal workings of an AI system should be transparent to its stakeholders. Organisations should be able to justify the ethical permissibility and public trustworthiness both of a system's outcomes and of the processes behind its design and use. To ensure integrity and balance, human-in-the-loop features must be built into the system design.
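One common human-in-the-loop pattern is to let the model act autonomously only when its confidence exceeds a threshold, escalating borderline cases to a human reviewer. The sketch below illustrates this; the threshold value and the escalate hook are illustrative assumptions, not a prescribed design.

```python
# Sketch of a human-in-the-loop gate: automated decisions are issued only
# when the model is confident; borderline cases are escalated to a person.
# The threshold and reviewer hook here are illustrative placeholders.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # tuned per use case and risk appetite


@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human" -- retained for the audit trail


def decide(probabilities: dict, escalate) -> Decision:
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(label, confidence, decided_by="model")
    # Below threshold: defer to a human reviewer and record that fact.
    return Decision(escalate(probabilities), confidence, decided_by="human")


# Example: a borderline score is routed to a (stubbed) human reviewer.
print(decide({"approve": 0.55, "refer": 0.45}, escalate=lambda p: "refer"))
```

Recording who made each decision, as the `decided_by` field does here, is what turns the pattern into evidence for the end-to-end accountability described above.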
AI systems within an organisation should be evaluated in the context of current national and international laws. In addition, any new laws should be reviewed for applicability.
AI governance policy implementation is in its infancy in the UK and will continue to evolve as the technology takes an increasingly operational role in both the public and private sectors. The government has set out the general principles in the National AI Strategy [6], whilst the Guide to using AI in the public sector [7], developed in cooperation with the Alan Turing Institute [8], puts forward ways to ensure ethical and safe operational implementation of AI.
Additionally, several international guidelines are available and worth consulting.
It is advisable to review organisational AI strategy against the available national guidance, and also to refer to the international equivalents so as to capture as many perspectives as possible. These guidelines are likely to form the basis of future laws.
Innovation, sustainability, and collaboration [15] are all related in their efforts to manage multiple dimensions of organisational policies and practices. In particular, sustainable innovation is rooted in collaborative effort and stakeholder integration, and incorporating external stakeholders’ preferences is important when shaping innovation practices. The impact of this collaboration on generating sustainable innovation can be analysed using the Sustainable Innovation Matrix (SIM) [16] leadership model.
In navigating the path ahead, businesses must embrace responsibility by recognising exclusion, embedding fairness in algorithms and ensuring AI models know when to seek human input. To integrate AI successfully and safely, organisations need a clear strategy that is aligned with existing and anticipated legislation, high ethical standards and robust security principles. This strategy will of course need to evolve in response to the ever-changing AI governance landscape. Those who embrace this sustainable approach will find themselves well-prepared to harness the benefits of AI.
How can 6point6 help?
At 6point6 we have an established approach to supporting our clients in incorporating AI into their business.
In today’s world, AI can solve a range of issues and improve bottom lines across numerous industries. It can also improve business efficiency by reducing the time and effort required to complete tasks, freeing employees to focus on more complex and innovative aspects of the business.
If you are thinking of incorporating AI into your business, contact us to find out more about the proven 6point6 approach to adopting AI in a responsible manner.