
What is AI Ethics and Governance?

Dr Anton Ravindran
CEng (UK), FBCS, FSCS
Author of the book “Will AI Dictate the Future?”
“We must address, individually and collectively, moral and ethical issues raised by cutting-edge research in artificial intelligence and biotechnology, which will enable significant life extension, designer babies, and memory extraction.”
— Klaus Schwab
At its essence, ethics is about the moral principles that govern our conduct and behaviour: what is right and what is wrong. But concepts such as “privacy”, “trustworthiness”, “transparency”, “fairness”, “bias” or even “safety” are not easy to define universally, as they may mean different things to different societies depending on sociopolitical, cultural and economic realities. As with AI itself, there are many definitions of AI ethics and governance, and vigorous discussion continues because of this ambiguity. In its simplest form, AI ethics is about what is good for individuals and society, and what is not. Principles are generally abstract, which adds to the challenge of addressing AI ethics and governance. The tangible aspect of AI ethics and governance is the set of tools that can be deployed to govern and uphold ethical conduct by AI. AI ethics is about setting the guidelines that stipulate the design, deployment and outcomes of AI to ensure responsible use, since it affects society at large.
 
AI’s “behaviour” is driven by data and algorithms, and bias in AI can take different forms. But there are three widely accepted sources of AI bias: the data used to train the model, the algorithm itself, and the people who design and build the system.
 
AI developers may design, develop and deploy applications without even being aware of the risk. Data bias is probably the biggest of the three, and often the root source. The data used to train the AI “brain” may not be representative of the whole population. We need to examine the training data to ensure it is representative and large enough to avoid statistical sampling errors, known as sample bias. If the data does not adequately represent the population, this is known as representation bias. For example, a model trained on a dataset of Asian faces may not be accurate when predicting Caucasian faces. This can be managed by performing sub-population analysis, which requires computing model metrics for each sub-population, as sketched below.
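To make this concrete, here is a minimal sketch of sub-population analysis in Python, assuming a fitted binary classifier `model` and a labelled test set held in a pandas DataFrame with a `group` column; all names here are hypothetical.

```python
# Minimal sketch of sub-population (slice) analysis: compute the same
# metric separately for each demographic group so representation or
# sample bias shows up as a gap between per-group scores.
# Assumes a fitted binary classifier `model` and a pandas test frame
# with feature columns, a true label column `y`, and a `group` column.
import pandas as pd
from sklearn.metrics import accuracy_score

def per_group_accuracy(model, test_df, feature_cols, label_col="y", group_col="group"):
    rows = []
    for group, slice_df in test_df.groupby(group_col):
        preds = model.predict(slice_df[feature_cols])
        rows.append({
            "group": group,
            "n": len(slice_df),   # a very small n is itself a warning of sample bias
            "accuracy": accuracy_score(slice_df[label_col], preds),
        })
    return pd.DataFrame(rows)

# Illustrative usage: a large accuracy gap between groups, or a tiny n
# for one group, signals that the training data under-represents it.
# report = per_group_accuracy(clf, test_df, ["age", "income"])
# print(report)
```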
 
On 20 August 2019, Apple launched its Apple Card and ran into problems because of AI bias. Users noticed that it was biased against women, offering them lower credit limits. Tech entrepreneur David Heinemeier Hansson (creator of the popular Ruby on Rails web development framework) tweeted that it gave him 20 times the credit limit his wife received, even though they filed joint tax returns and she in fact had a better credit score. A few days later, Apple’s co-founder Steve Wozniak confirmed the claim, tweeting that he received ten times more credit on the card than his wife: “We have no separate bank or credit card accounts or any separate assets.” This was evidently due to data bias: discrimination against female applicants that was inherent in the system’s datasets.
 
Another well-publicised example of data bias is the use of AI in the US for policing and the justice system, which has drawn much attention and discussion. These AI models are driven by algorithms trained on historical crime data, using statistical methods to find connections and patterns. The patterns are derived based on correlation, not causation. For example, if an algorithm found that lower-income neighbourhoods were “correlated” with a higher tendency for a convicted criminal to re-offend (recidivism), the model would then predict that any defendant from a low-income background would have a higher likelihood to re-offend. These very same populations may be unfairly targeted by law enforcement and are at higher risk of being arrested, based on historical information.
 
Moreover, even if living conditions in the neighbourhood improve and it is no longer poor, the model still relies on historical data to make its decisions. This amplifies and perpetuates bias by generating even more biased data to feed the algorithms, creating a feedback loop. Such a change in the external environment leads to what is known as “model drift”: the AI model degrades over time because the data and algorithm no longer sufficiently reflect the real world, and the data used to train the model may have become irrelevant. Similarly, to avoid outcome bias, the AI model must be monitored for bias over time, since its outputs may change as the model learns or the training dataset changes; a simple monitoring sketch follows.
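The following is one possible sketch of such monitoring, assuming a deployed classifier, a freshly labelled batch of data and a recorded validation-time baseline; the 0.05 tolerance and all names are illustrative.

```python
# Minimal sketch of drift monitoring: periodically score the model on
# fresh, labelled data and compare against the accuracy it achieved at
# validation time; a sustained drop suggests the world (or the data)
# has shifted and the model should be reviewed or retrained.
from sklearn.metrics import accuracy_score

def check_for_drift(model, recent_X, recent_y, baseline_accuracy, tolerance=0.05):
    current = accuracy_score(recent_y, model.predict(recent_X))
    drifted = current < baseline_accuracy - tolerance
    return {"baseline": baseline_accuracy, "current": current, "drift_suspected": drifted}

# Illustrative usage: run monthly on the latest labelled batch.
# status = check_for_drift(clf, X_march, y_march, baseline_accuracy=0.91)
# if status["drift_suspected"]:
#     trigger_retraining()   # hypothetical hook into an MLOps pipeline
```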
 
According to Wikipedia, algorithmic bias describes “systematic and repeatable errors in a computer system that creates unfair outcomes, such as privileging one arbitrary group of users over others”. In 2015, Amazon discovered that the AI model it used for hiring was biased against women. The algorithm had been trained on the resumes submitted over the previous ten years, most of which came from male applicants; it therefore learned to favour male candidates over female candidates. Amazon disbanded the development team and stopped using the AI recruitment tool. Again, data is the crucial source of these biases.

AI Governance

Dr. Baeza-Yates (2022) observes, “Ethics always runs behind technology too. It happened with chemical weapons in World War I and nuclear bombs in World War II, to mention just two examples.” AI governance is the process of evaluating and monitoring algorithms for effectiveness, accuracy, bias and risk. It also defines the policies and guidelines for establishing accountability in creating and deploying AI systems within an organisation. The fundamental principles of governance are transparency, accuracy, fairness and accountability. Being able to explain why an AI model behaves in a specific manner, and how it reaches its decisions, can boost trust in the accuracy and fairness of the output it generates. We also need to ensure accountability for the performance of the AI model. What constitutes a reasonable explanation will vary depending on the audience as well as the complexity of the AI system. For example, a radiologist reporting the chance of a tumour could point to a 90% model probability grounded in the historical data of 100 reference patients.

Explainable AI (XAI): Making the Blackbox Transparent

Explainable AI (XAI), sometimes known as Interpretable AI, refers to methods and techniques that enable humans to understand the results generated by an AI algorithm, thereby strengthening the governance and ethical dimensions of AI. It is a fast-emerging area that provides transparency and creates trust in AI. XAI explains, in a manner comprehensible to non-technical end-users, how the AI model works and why a particular result was generated. What data did the model use? Why did the AI model make a specific prediction or decision? Are there any biases? When do AI models give enough confidence in their decisions to form the basis for trust? How can the AI algorithm correct errors that arise?

AI algorithms that have traceability and transparency in their decision-making, such as Naive Bayes, linear regression, logistic regression, Decision Trees and K-Nearest Neighbours (KNN), can provide explainability without sacrificing too much performance or accuracy. More complicated models based on more powerful algorithms, such as Random Forests (RF) and deep neural networks including Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), may not offer the same level of transparency and explainability, but deliver more power, better performance and higher accuracy. This is because of their inherent complexity and, in the case of deep networks, their many layers of interconnected neurons. A small illustration of this difference in transparency follows.
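As a small, self-contained illustration (using synthetic data and invented feature names), a logistic regression exposes one coefficient per feature that can be read off directly, which is precisely the kind of transparency a deep network lacks.

```python
# Minimal sketch of the transparency of a simple model: a logistic
# regression has one coefficient per feature, so each prediction can be
# traced back to how much each input pushed the score up or down.
# The dataset and feature names below are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                       # e.g. income, debt, years_employed
y = (X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
for name, coef in zip(["income", "debt", "years_employed"], clf.coef_[0]):
    print(f"{name:>15}: {coef:+.2f}")               # sign and size of each feature's pull

# A deep network has no comparable per-feature weight to read off,
# which is why it typically needs post-hoc explanation methods.
```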
 
Explainability is an intuitively appealing concept but hard to fully realise because of the complexity of advanced AI algorithms. Dr. Lance B. Eliot (2021), a renowned expert on AI and ML, emphatically says, “Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphise AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and unarguable fact that no such AI exists as yet.” In sum, AI is not sentient yet, and until we reach the stage of artificial superintelligence (ASI), XAI will have limitations for deep neural network-based models.

However, unlike Blackbox AI, XAI gives data scientists better control over the model’s behaviour, enabling them to avoid discrimination and bias. XAI generally operates at two levels: “explainability for data” and “explainability for algorithm”. It is key to developing AI models that are understandable, transparent and interpretable, which enhances confidence and ensures the effective and safe use of AI for its intended purpose.

According to the NIST, the four key principles of XAI are:
Explanation: Systems should provide evidence or reason(s) for all output.
Meaningful: Systems should offer explanations that are understandable to individual users.
Accuracy: The answer should accurately describe the system’s process for generating the output.
Knowledge Limits: The system should only operate under limits or conditions for which it was designed.
 
With XAI, biases and erroneous outcomes can be avoided because the output must be justified. For example, when extending loans and performing credit ratings, banks can leverage XAI: the model can justify its recommendations and give clients a detailed explanation if their loan application is declined. Banks simply cannot take the risk of using Blackbox AI; they need to deploy XAI to achieve the degree of transparency and accuracy required to meet compliance requirements. A sketch of such “reason codes” follows.
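One way this can look in practice is sketched below: with a linear or logistic scoring model, each feature’s contribution to an application’s score is simply its coefficient times its value, so the largest negative contributions can be returned to the client as “reason codes”. The coefficients, feature names and applicant values here are purely illustrative, not those of any real credit model.

```python
# Minimal sketch of "reason codes" for a declined loan application:
# each feature's contribution is coefficient * (standardised) value, and
# the most negative contributions are reported back to the client.
import numpy as np

feature_names = ["income", "credit_history_len", "existing_debt", "late_payments"]
coefficients  = np.array([0.8, 0.5, -1.2, -1.5])   # from a fitted linear/logistic model
applicant     = np.array([0.2, 0.1, 1.4, 2.0])     # one applicant's standardised features

contributions = coefficients * applicant
order = np.argsort(contributions)                   # most negative contributions first
reasons = [(feature_names[i], contributions[i]) for i in order if contributions[i] < 0]

print("Application declined. Main factors:")
for name, c in reasons[:2]:
    print(f"  - {name} (contribution {c:+.2f})")
```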
 
However, explainability does not provide all the answers with respect to “fairness”. Lily Hu (2021), a PhD candidate in applied mathematics at Harvard University who studies algorithmic fairness, states that “the use of algorithms in social spaces, particularly in the prison system, is an inherently political problem, not a technological one”. To develop models that avoid bias and ensure “fairness”, we must first agree precisely on what it means to be fair, and that definition may vary with societal, cultural and political norms.

Not only do AI models require the right volume and quality of data; we also need to ensure that data scientists do not pass on their own human biases and assumptions when developing and training the models. People from different backgrounds have different societal and cultural norms, perceptions, beliefs and practices, and cognitive biases are generally unconscious errors in judgement. AI can only be as good as the people who develop it and the data used to “train” it.

XAI can leverage tools and methodologies such as Google’s What-If Tool, IBM’s Watson OpenScale and AI Fairness 360, Microsoft’s Fairlearn and Accenture’s Teach and Test methodology. These tools can test for and mitigate biases in models in real time. They can also analyse the importance of different data features, visualise model behaviour and validate different AI fairness metrics. Many of them are open source, including the tools from Google, IBM and Microsoft; a brief example using Fairlearn follows.
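As a brief example of what such tooling offers, the sketch below uses Fairlearn’s `MetricFrame` to compare accuracy and selection rate across a sensitive attribute; the tiny arrays stand in for real model outputs and the grouping by gender is illustrative.

```python
# Minimal sketch using Fairlearn (one of the open-source tools mentioned
# above) to compare a model's behaviour across a sensitive attribute.
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]                   # illustrative ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]                   # illustrative model predictions
gender = ["F", "F", "F", "F", "M", "M", "M", "M"]   # illustrative sensitive feature

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=gender,
)
print(mf.by_group)       # per-group accuracy and selection rate
print(mf.difference())   # largest gap between groups, per metric
```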

Conclusion

The rapid digitisation and penetration of AI has led to the rise of customer-centricity. Businesses collect vast amounts of data about customers’ needs, preferences and wants; data is the new oil, as the saying goes. AI helps businesses identify which customers are more receptive to marketing campaigns and messages than others. Responsible AI has rewarded businesses with opportunities to deliver personalised products and services while upholding customer values and doing good for society. Implementing measures to avoid human bias is necessary for developing solutions that are accurate, fair and transparent, and it is not only a moral and ethical issue but also good for business. Simply put, it can set a business apart from the competition by improving customer confidence, trust and loyalty, an advantage that will become far more significant as AI becomes more pervasive.

The question is not necessarily whether AI will become more intelligent than us, though some believe it will, but what we can do to make sure AI does good and that we do not abuse the technology. AI is driven by data fed to it by humans and by algorithms developed by humans. AI models must be designed, developed and used in a manner that respects laws, human rights and ethical values. For now, it is for humans to decide on the degree of autonomy we should extend to AI. The existing forms of AI do not have empathy and emotions; in other words, they are devoid of sentience. Ultimately, the onus is on humans to ensure the responsible and ethical behaviour of AI!

References

Andrews, E. L. (2020, October 13). Using AI to Detect Seemingly Perfect Deep-Fake Videos. HAI, Stanford University. Available at: https://hai.stanford.edu/news/using-ai-detect-seemingly-perfect-deep-fake-videos.

Bathaee, Y. (2018, May 5). The Artificial Intelligence Black Box and the Failure of Intent and Causation. Harvard Journal of Law & Technology, 31(2). Available at: https://jolt.law.harvard.edu/assets/articlePDFs/v31/The-Artificial-Intelligence-Black-Box-and-the-Failure-of-Intent-and-Causation-Yavar-Bathaee.pdf.

Burrell, J. (2016, January 6). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 1-12. DOI:10.1177/2053951715622512. Available at: https://journals.sagepub.com/doi/pdf/10.1177/2053951715622512.

Eliot, L. (2021, April 24). Explaining Why Explainable AI (XAI) Is Needed For Autonomous Vehicles And Especially Self-Driving Cars. Forbes. Available at: https://www.forbes.com/sites/lanceeliot/2021/04/24/explaining-why-explainable-ai-xai-is-needed-for-autonomous-vehicles-and-especially-self-driving-cars/?sh=3c2a2c921c5a.

Singh, A. & Mutreja, S. (2022, February 15). Autonomous Vehicle Market Statistics 2030. Allied Market Research. Available at: https://www.alliedmarketresearch.com/autonomous-vehicle-market.

Zicari, R. V. (2022, February 7). On Responsible AI. Interview with Ricardo Baeza-Yates. ODBMS Industry Watch. Available at: http://www.odbms.org/blog/2022/02/on-responsible-ai-interview-with-ricardo-baeza-yates/.
