
AI Ethics Education for Future African Leaders

Gadosey Pius Kwao, Deborah Dormah Kanubala & Belona Sonna

 


1 Ethics in the African Context

Derived from the Greek word “ethos”, meaning custom, habit or character, the word ethics has been defined in many different ways by theorists of ethics and morality. Some define ethics as a branch of moral philosophy concerned with asking questions about what is right or wrong. Others describe it as a set of guiding principles for an individual or group. One should therefore be wary of hastily adopting any single individual’s or group’s view of what counts as ethical or unethical: the way ethics is understood is heavily influenced by geographical and cultural differences.

Across the world, and especially in Africa, a person’s ethical decisions cannot be separated from their beliefs and societal expectations. In defining ethics in the African context, however, caution must be taken not to adopt a “one size fits all” approach that assumes situations are the same everywhere. All the same, it is safe to assume that the fundamentals of “African ethics” stem from the importance of the interactions between individuals and their communities and what they perceive to be morally “good” or “bad” and “right” or “wrong”.

It is also nearly impossible to consider ethics in the African context without considering religion as a relevant contributing factor to how it is defined. Religion has always played a big role in society’s determination of what is considered morally wrong or right, and the fundamental beliefs involved, regardless of the type of religion, are almost the same. As the limits of AI capabilities are pushed, many of the questions religious communities ask will concern how far AI should be allowed to go. If a technology could make autonomous decisions just like a person, must it also be considered a person? Are we then challenging the belief that humans are the only beings on earth with a God-given purpose? The theological term “Imago Dei”, Latin for “Image of God”, refers to the relationship between humans and their creator. Is creating in our own image, and trusting that creation rather than God the creator, not a practice of idolatry? (Herzfeld 2002). Although religions differ, some believing in Jesus Christ and others in Mohammed, they hold in common that there is one supreme being to whom no creation can be compared, and that only this supreme being has the power to create intelligence that can think and act like a human. The bigger question here is: is it ethical to encourage the building of “human-like” machines?

One may first need to ask: how is defining ethics relevant to teaching AI in Africa? AI, like any other digital technology, is logically malleable and exhibits a high level of flexibility. Its intended uses are therefore very open to interpretation. This means that AI can be used for countless purposes, which may or may not be aligned with the objectives of its developers (Stahl 2021).

One factor that influences use and objectives is culture, which of course varies widely by geographical location. Culture is dynamic, and one major contributor to change in cultural beliefs and practices today has been the influx of information technology over the past few decades. Just like any other phenomenon, embracing AI in any culture means that people may have to stop doing and seeing things in the ways they were accustomed to, and people are naturally hesitant towards immediate change. Successful implementation of this change (the use of AI) therefore starts happening when people see the need for it and believe that it will improve their lives without violating their cultural values.

In short, “African ethics” in AI may be defined as the set of guiding principles and methodologies applied in the building and use of AI in Africa that are widely accepted by communities, grounded largely in their beliefs and in what stakeholders hold to be morally right, and that improve lives without infringing on fundamental human rights.

 

1.1 Ethical Principles

AI ethics is a set of features and techniques used during the lifecycle of AI projects to ensure that the final solutions protect end users from potential harms such as bias and discrimination, denial of individual autonomy, unfair outcomes, and invasion of privacy. AI ethics is thus guided by principles that bring ethical values to AI systems. Principles here can be defined as a set of concepts and rules for the use and development of AI. Values are not mere desires, but goals and ideals that people endorse thoughtfully and defend as appropriate or right (Leslie 2019). In this section, the most common principles of AI ethics recognised by many communities are discussed, along with the values related to them. Then, some tensions that may exist between values are highlighted.

Overall, AI ethics principles are meant to ensure trustworthy AI. Many concepts have been proposed by different communities and institutions as the key requirements for AI ethics. In April 2021, the European Commission proposed an AI regulation based on seven core pillars: human agency and oversight; technical robustness and safety; privacy and data protection; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability. The government of Australia published another set of concepts for ethical AI made up of accountability, transparency, reliability, privacy protection, fairness, human-centred values, and social and environmental wellbeing (Department of Industry 2019). Lo Piano (2020) reviewed many other propositions from other institutions and identified a set of principles common to them, including transparency, justice and fairness, non-maleficence, and privacy. Taken together, the most important principles of ethics can be listed as follows: respect for human rights, respect for society and the environment, robustness and safety, transparency, contestability, responsibility, justice and fairness, and privacy (AIHLEG 2019). In this section, we review each of these principles.

Respect for human rights refers to all the values that preserve human autonomy. The two key values of this principle are human dignity and equality. In the context of AI, a system should not compel people into decisions they would not freely endorse, since human rights are inalienable. In addition, the system should treat people equally, without discrimination based on nationality, gender or place of living, since human rights are universal. Finally, the system should not disregard any human right, as human rights are indivisible, interdependent and interrelated.

Respect for society and the environment is all about making AI systems serve people by respecting the rules of society and the environment where the system will be used. The values aligned with this principle are sustainability, environmental friendliness, and social impact.

Robustness and safety relates to all aspects of the system that make it reliable in accordance with its intended purpose. The values assigned to this principle are accuracy, security, resilience to attack, and reproducibility.

The transparency principle focuses on breaking open the traditional black-box process. There should be clarity and understanding about the output of the system as well as traceability of the process used. This characteristic brings values such as explainability, interaction, and communication, which are important for improving end users’ trust in the system.

Responsibility, or accountability, aims to point out who is responsible for the outcomes of the system. One of the reasons for the limited adoption of AI-based solutions is that there is often no organization or individual that can answer for the system in case of harm. For example, should the designer of the AI solution, the owner who deploys it, or the society that uses it be the one to answer for a faulty model? For now, this is not clear, although research is ongoing to find a solution that suits everyone. The most significant value linked to this principle is auditability, together with the values related to the transparency principle.

Justice and fairness ensure that AI systems are equitable, inclusive and fair with respect to all the potential users of the solution. The values related to this principle are non-discrimination, accessibility, universal design and stakeholder participation.

The privacy principle demands respect for privacy rights and data protection in AI systems. Throughout the lifecycle of a system, privacy should be preserved using appropriate techniques. Techniques such as data anonymisation and differential privacy could be used to protect datasets from attacks. This principle brings along values such as security, quality and integrity of data, and access to data (Ethical and Societal Implications of Data and AI 2019).
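
To make this concrete, here is a minimal sketch of one such technique, the Laplace mechanism from differential privacy, applied to a simple count query. The records, the predicate and the epsilon value are hypothetical placeholders, not from the chapter.

```python
import numpy as np

def laplace_count(data, predicate, epsilon=1.0):
    """Differentially private count query via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so noise drawn from
    Laplace(scale=1/epsilon) yields epsilon-differential privacy.
    """
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical patient records: (age, diagnosis).
records = [(34, "malaria"), (52, "diabetes"), (29, "malaria"), (61, "diabetes")]

# Noisy answer to "how many patients have malaria?": individual records
# stay protected while the aggregate statistic remains useful.
print(laplace_count(records, lambda r: r[1] == "malaria", epsilon=0.5))
```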

In general, values are the tangible means used to assess whether an AI ethics principle is respected or not. Hence, when designing a particular system for a specific purpose, it is necessary to bring together people from different backgrounds to propose a set of values that can be used to assess the AI ethics principles without any restriction. The values listed above are not exhaustive and can be contextualised in real-world applications. For instance, in healthcare there are values, such as quality of service or the aggressiveness of a treatment, that can be used to assess the quality of an AI system.

However, for the same system, two values can be contradictory. For example, in healthcare, quality of service as a value can be in conflict with privacy preservation: delivering a personalised service that suits patients’ expectations requires data classified as sensitive, such as gender and age. If the team in charge of the project decides to prioritise quality of service, there will be some violation of privacy, which AI ethics forbids. Another conflict can be observed between equality under the human rights principle and equity under the justice and fairness principle. This phenomenon is called a tension between values (Lo Piano 2020): it happens when two values exhibit points of friction. There are many ways to resolve such tensions, as listed in Whittlestone et al. (2019). Overall, the process consists of measuring the importance of the values with respect to the society concerned and then finding a trade-off between the two values, as the sketch below illustrates.
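
A minimal sketch of that trade-off process, assuming hypothetical candidate designs, value scores and stakeholder-elicited importance weights, might look as follows.

```python
# Resolving a tension between two values by weighting their societal
# importance and scoring candidate system designs.
# All design names, scores and weights are hypothetical.

candidate_designs = {
    # design: (quality-of-service score, privacy-preservation score), each in [0, 1]
    "full_patient_profile": (0.95, 0.40),
    "coarsened_age_only":   (0.80, 0.75),
    "fully_anonymised":     (0.60, 0.95),
}

# Weights elicited from stakeholders, reflecting societal importance.
weights = {"quality_of_service": 0.4, "privacy": 0.6}

def trade_off_score(scores):
    """Weighted sum of the two value scores (scalarisation)."""
    quality, privacy = scores
    return weights["quality_of_service"] * quality + weights["privacy"] * privacy

best = max(candidate_designs, key=lambda d: trade_off_score(candidate_designs[d]))
print(best)  # -> "fully_anonymised" under these weights
```

The scalarisation shown here is only one of the resolution strategies surveyed in Whittlestone et al. (2019); the point is that the weights, not the code, carry the ethical judgement.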

 

1.2 Ethical Challenges in Existing Domains

AI systems have shown great impact in different domains, from health to education. Due to its enormous potential, AI has generated wide interest in the research community and in industry as a whole. However, as these systems are adopted by institutions to make autonomous decisions on issues such as loan approvals and pretrial risk assessment, how do we ensure that the systems developed are not biased? What moral or ethical implications and challenges would arise from these systems? In this subsection, we discuss the ethical challenges likely to arise from the use of autonomous systems in domains particularly relevant to African societies, using agriculture and health as case studies.

Agriculture is one of the major sectors serving as a means of livelihood for the majority of Africans, with over 80% of total urban food sales supplied by Africans (Africa Agriculture Status Report 2020). The same report projects African urbanization to be among the highest in the world. With the world’s population expected to exceed 9 billion by 2050, the agricultural sector would need to increase its production levels by about 70% (Kamilaris et al. 2017; Schönfeld et al. 2018) to be able to feed the world. The sector therefore needs to begin creating measures and possible solutions for increasing food production, and AI has been identified as one of the major solutions to this problem (Kumari et al. 2016; O’Grady and O’Hare 2017). AI systems can learn patterns from historical data and use them to make predictions, an ability that makes them attractive to farmers for increasing crop yield, identifying crop disease, and determining soil fertility and water levels.

AI systems, however, learn well when presented with large amounts of data. The ethical challenge in working with data is how to measure effectively whether the data alone captures all the information needed to correctly model real-life experience, or whether the data is biased. Does high accuracy from these models automatically translate into an efficient model that would also be accurate in practice? If the curated data is inaccurate, the inaccuracies carry over into the autonomous systems developed from it. Deficiencies in these systems could in turn lead to low yields, poor plant nutrition, ill livestock, and so on. Aside from the possibility of inaccurate data, there is the possibility of errors in data retrieval due to environmental circumstances: most agricultural data is gathered through sensors, and farm animals can interfere with sensor equipment, leading to false readings (O’Grady and O’Hare 2017).

Furthermore, the high cost of developing autonomous systems forces developers to sell them at high prices to cover production costs. Unfortunately, smallholder farmers, who stand to benefit greatly from these systems, cannot afford them. How, then, should AI systems developed with data from large-scale production farmers be used by a smallholder farmer whose data was not represented in the curated training data? Will such AI systems take these farmers’ characteristics into account, or will they end up making biased decisions in favor of large-scale farmers?

Health is another domain that presents many ethical challenges when it comes to the deployment of AI. Many researchers have already pointed out how AI will revolutionize healthcare systems around the world, from the early detection of disease to drug development and clinical trials. Despite the potential benefits AI presents in this sector, we are also faced with issues of data privacy and confidentiality. How will AI researchers ensure that individual data rights are protected? AI systems should protect data in the manner indicated by the National Institutes of Health (NIH) Data Sharing Policy and Implementation Guidance, which states that data should be widely and freely available while protecting the privacy and confidentiality of the individuals involved.

Moreover, a serious ethical challenge will be how to model fairness so as to avoid any form of bias. AI engineers and developers should be able to explain how an AI system reached a decision and why that particular decision was made; this makes the developed systems easier to interpret and explain. In light of the recent COVID-19 pandemic, progress has been made in using AI to reduce the pandemic’s negative impact on many nations. AI was used, for instance, to raise early warnings about the COVID-19 outbreak days before it was reported by international organizations. The haste with which emerging technologies are implemented and deployed, however, presents difficult ethical concerns and risks. Privacy issues around data collection, processing and analysis are becoming more pervasive and demand closer attention (AI, Robots, and Ethics in the Age of COVID-19 n.d.). To help curb the exponential spread of the virus, many companies put in place phone-based applications that monitor people, allow self-diagnosis of COVID-19, and trace the contacts of those who may have encountered an infected person. However, who gets access to the vast amount of data generated? How long will the data be kept? What happens when members of the public request that their data be returned?

The use of AI will present ethical challenges not only in agriculture and health; education is another domain that could face serious ethical challenges. AI is currently being used to grade students and to suggest topics on which students need to spend considerable time to improve their grades.

 

1.3 Data Bias

One of the purposes of considering ethical principles in the design of artificial intelligence is to reduce the biases that may be consciously or unconsciously introduced at any stage of the design process. It is therefore important that learners know that these biases exist and how to mitigate their negative impact. Bias in AI can be defined as a phenomenon that occurs when a system’s output is systematically prejudiced due to assumptions made during the system development process (Mehrabi et al. 2019). There are basically two types of bias: societal bias (cognitive biases that are affective feelings towards a person or a group based on their perceived group membership) and data bias, which arises from incomplete data about the case under study (Mehrabi et al. 2019; Ntoutsi et al. 2020). This section focuses on data bias, as it is the more objective of the two (in contrast to societal bias). In addition, data is to AI what blood is to human beings: without good data there is no hope of good results. As mentioned earlier, data bias is due to the use of data that is not representative of the actual situation.

According to Aysolmaz et al. (2020), there are six types of bias: sample or selection bias, exclusion bias, measurement bias or systematic value distortion, observer or confirmation bias, racial bias, and association or stereotype bias. The next paragraph discusses each type, giving its definition and the point in the development pipeline at which it is introduced, consciously or not.

Sample or selection bias occurs when the dataset does not reflect the realities of the environment; it arises mostly during data collection. Exclusion bias occurs when some features are categorised as unimportant and removed; it is most likely to happen during the feature engineering phase. Measurement bias, or systematic value distortion, occurs when the data used for training differs from real-world data, or when faulty measurements distort the data; it too originates during data collection. Observer or confirmation bias is the tendency to see in the data what is expected (assumptions) or wanted, instead of truly paying attention to the output of the model; it happens when the team or developer enters a project with subjective preconceptions and merely looks for confirmation of them, and it is introduced during data labeling as well as the training phase. Racial bias occurs when data skews in favor of particular demographics (gender, location, race, age) and is likely to be introduced during data collection. Association or stereotype bias happens when the data reinforces cultural bias. The consequences of data bias in artificial intelligence are severe: it can lead to unfair systems, discriminatory outcomes, low-accuracy models, and analytical errors. To eliminate data bias as far as possible, the FAIR Guiding Principles for scientific data management and stewardship were published in Scientific Data in 2016 (FAIR Principles n.d.); these are detailed below, after a brief sketch of how one such bias can be detected in practice.
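
As a concrete illustration, the following minimal sketch flags possible sample (selection) bias by comparing each group’s share in a dataset against its share in the target population; the groups, population shares and tolerance threshold are hypothetical.

```python
# Detecting possible sample/selection bias by comparing group
# proportions in a dataset against the target population.
# The population shares and tolerance threshold are hypothetical.
from collections import Counter

def flag_sample_bias(samples, population_shares, tolerance=0.10):
    """Return groups whose share in the data deviates from the
    population share by more than `tolerance`."""
    counts = Counter(samples)
    total = len(samples)
    flagged = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            flagged[group] = (observed, expected)
    return flagged

# Region of origin for each record in a hypothetical farm dataset.
data = ["north"] * 70 + ["south"] * 20 + ["east"] * 10
population = {"north": 0.40, "south": 0.35, "east": 0.25}

print(flag_sample_bias(data, population))
# {'north': (0.7, 0.4), 'south': (0.2, 0.35), 'east': (0.1, 0.25)}
```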

FAIR stands for Findable, Accessible, Interoperable, Reusable. It is a set of principles that aims to solve most of the data management issues that reinforce data bias. The FAIR principles set out four characteristics that datasets for AI-based solutions should have: findability, accessibility, interoperability and reusability. Findability means assigning each dataset a globally unique identifier and enriching it with rich metadata that can be indexed in a searchable resource for further use. This characteristic is crucial for mitigating data bias, as it contributes to having datasets that contain extensive information about a specific subject matter, which benefits AI systems as discussed in the previous section; this characteristic alone can reduce selection, exclusion, racial and stereotype biases. Accessibility means giving people the right to know who is using their data, as well as why and how, through authentication and authorization systems. This feature is necessary to guarantee security and privacy and to avoid data collection without people’s consent. The interoperability and reusability characteristics ensure that a dataset can be linked with other applications for analysis and remains sustainable.
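
To make findability tangible, the sketch below builds a hypothetical FAIR-style dataset record: a globally unique identifier plus rich metadata that a searchable catalogue could index. The field names are illustrative, not a mandated schema.

```python
# A FAIR-style dataset record: a globally unique identifier
# (findability) plus rich metadata that a searchable catalogue can
# index. All field names and values are illustrative only.
import json
import uuid

dataset_record = {
    "identifier": f"urn:uuid:{uuid.uuid4()}",    # globally unique ID
    "title": "Smallholder maize yields, Ashanti Region, 2019-2021",
    "description": "Plot-level yield, soil and rainfall measurements.",
    "creators": ["Hypothetical Agricultural Data Consortium"],
    "license": "CC-BY-4.0",                      # supports reusability
    "format": "text/csv",                        # supports interoperability
    "access_protocol": "HTTPS with token auth",  # supports accessibility
    "keywords": ["agriculture", "maize", "Ghana", "smallholder"],
}

print(json.dumps(dataset_record, indent=2))
```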

 

1.4 Current State of Teaching AI Ethics

AI has been identified as having great potential to further enhance development throughout the African continent. A number of countries are championing these efforts by supporting and setting up hubs for further research into, and promotion of, AI in many economic sectors. Accenture projects that AI could double the rate of growth of national GDP by 2035 (Accenture AI Economic Growth Infographic n.d.), and several African governments believe that AI can serve as a solution to many countries’ prevalent problems, from poverty reduction to easier delivery of healthcare and better education (Microsoft Corporation, The Future Computed, 2018).

Thus, there have been initiatives such as the African AI Accelerator, as well as global giants like Google setting up its first AI research and development center in Africa in Accra, Ghana (Google AI in Ghana 2018). The University of Lagos launched the first AI hub in Nigeria in 2018, with a focus on developing interest in AI among the youth (Data Science Nigeria Opens 1st Artificial Intelligence Hub in Unilag, The Guardian Nigeria n.d.). Academic City University in Accra, Ghana, introduced the first undergraduate degree in artificial intelligence in Ghana. Other examples include Data Science Nigeria and IndabaX (IndabaX—Deep Learning Indaba 2021 n.d.), a program with the objective of encouraging local conversations in machine learning and AI; IndabaX started in 2018 and currently boasts membership from 27 countries across the continent. In April 2021, Nvidia’s annual GTC conference (GTC 2021 n.d.) on breakthroughs in AI featured several African startups presenting innovative solutions to issues in agriculture, education, healthcare and fintech in their respective countries. For example, Dr CADx, a startup from Zimbabwe, has developed an AI-powered computer-aided diagnosis system to help doctors in the absence of radiologists; their system can currently detect 15 pathologies in x-rays, including COVID-19. Another startup presented a system for increasing access to clean energy in Africa through AI.

Despite these efforts and gains in promoting the development of AI in Africa, the gap between AI development in Africa and the rest of the world is still very wide, with Mauritius, Egypt, South Africa, Kenya, Ghana, Namibia, Senegal and Morocco being the only African countries in the top 100 of the 2020 Global Government AI Readiness Index. This report draws on 33 indicators across 10 dimensions, including data availability, infrastructure, governance and ethics, vision, data representativeness, adaptability, digital capacity, human capital, size and innovation capacity (Table 1), to determine how ready a given government is to implement AI in the delivery of public services to its citizens.

Table 1 Sample of scores from the AI Readiness Index comparing the USA (ranked 1st) to the African countries listed in the top 100

The major indicators under the governance and ethics dimension include data protection and privacy legislation, cybersecurity, the existence of a national ethics framework, and the legal framework’s adaptability to digital business models. Under this dimension, the scores across all the African countries are still quite poor: Mauritius, which ranks top on the continent, has a score of only 58.34, compared with 92.66 for the top-ranked country, the USA.

The 2020 AI Readiness Index also introduced an assessment known as the Responsible AI Sub-index, which measures how responsibly governments make use of AI across nine indicators in four dimensions: inclusivity, accountability, transparency and privacy. It is interesting to note that Senegal and Mauritius ranked 9th and 13th respectively, while the USA (number 1 on the Readiness Index) ranked lower, at 24th. Similarly, Estonia, 17th on the Global AI Readiness Index, was 1st on the Responsible AI Sub-index. This indicates that the countries ranked highest for AI readiness are not necessarily ahead in their practice of responsible AI. The 2020 Responsible AI Sub-index covered only 34 countries, of which 4 were from the African continent. This presents an opportunity for other countries to develop strategies to ensure that, while they take advantage of the positive impacts of AI for development, policies are also put in place to promote ethical uses of AI, especially at the government level.

Education is one of the most important mechanisms for raising awareness of AI ethics. The past decade has seen many top universities and research institutions, such as the Universities of Nairobi, Ghana and Cairo, introduce AI courses in their computing and engineering departments. Most of these institutions, however, do not currently have AI ethics as part of their syllabi. A few, such as the University of Botswana, have a separate course known as Social Informatics, which focuses on ethical, social and legal issues in computer science. In another example, the Centre for AI Research (CAIR) Ethics of AI research group at the University of Pretoria focuses on teaching and research in machine ethics, the ethics of social robotics, neuro-ethics and data ethics.

But such examples are still too few and far between. With AI steadily gaining visibility in academia and research institutions across Africa, it is necessary to further encourage learners, and future developers in particular, to understand the ethical issues involved in the design, development and use of AI applications through the discussion of these issues in higher education.

 

1.5 Best Approaches to Teaching AI Ethics

Pratt (2002) proposed five approaches to teaching commonly used in secondary and higher education. In this section, we discuss each of them and explain how it can be relevant to teaching AI ethics. The transmission perspective refers to the transfer of a specific body of knowledge from teacher to learners through structured lectures, including seminar formats and conferences. This perspective can be useful for teaching the ethical design of algorithms or decision-making methodologies.

The developmental approach aims to grow the learner’s mindset. It may change the learner’s mindset if the content delivered is not in line with their current understanding, or strengthen it if the content is in line with theirs. It works through questioning and examples that make the learner think outside the box. Furey and Martin (2019) used this approach to raise ethical thinking about autonomous vehicles. This form of teaching is also suitable for research students: it helps cultivate critical thinking and ethical reasoning skills, which are highly relevant in AI development and ethics (Borenstein and Howard 2021), and helps learners set or understand ethical principles, values and codes of conduct (Wilk n.d.). It can be delivered through both seminar and discussion formats, for instance by organising a series of talks on a specific topic in AI ethics and giving many teachers the opportunity to share opinions informed by their backgrounds and to interact with the learners.

The apprenticeship approach aims to challenge learners with a real environment, such as an internship. In that context, learning occurs when students start to adopt the language, values and practices of the specific activity. This form can be reserved for students in the specialisation phase: AI ethics in education, AI ethics in agriculture, AI ethics in computer vision, and so on. This pipeline is suitable for increasing students’ familiarity with professional codes of ethics as well as for balancing the theory and practice of AI ethics. It can be implemented using real-world datasets that require students to address ethical issues in AI.

Under the nurturing approach, the goal of teaching is to give learners enough support (care) to build their confidence and, through it, their competence. It is suitable for elementary education (primary schools). In essence, there is no single prescribed method: learners should be allowed to think and learn on their own and grow as individuals, while teachers support students rather than imposing on learners what they, as teachers, deem right.

Finally, the social reform perspective aims to bring about social change, not simply individual learning. Learners are called to take social action for better conditions in their environments. Activities involved in this type of teaching include bringing learners into diverse communities, encouraging learners to take a critical stance, and watching and discussing documentaries. For instance, the documentary Coded Bias is a great resource for pointing out algorithmic bias in AI systems (‘Coded Bias’ n.d.).

Overall, the best approach to teaching AI ethics is a mix of all the approaches listed above, chosen according to the type of learners and the expectations of the lesson. For the specific case of higher education, the combination of transmission, developmental and apprenticeship approaches may work best, given their goals and the activities they involve. In addition, some practices are essential for reinforcing AI ethics skills in future African AI leaders, including creating diverse AI working groups composed of both technical and non-technical people; diversity is essential for ethics in general.

 

1.6 Goal (Where Do We Need to Be with Teaching AI Ethics)

The need for teaching AI ethics cannot be overstated, and all indications show its necessity. For instance, Google has set out, as its AI Principles, the kinds of AI applications it will and will not pursue, and is putting these principles into practice (Google AI Our Principles n.d.). The Montréal Declaration for a Responsible Development of Artificial Intelligence was launched with the main aim of fostering responsible development of AI (Responsible AI Declaration 2018). Conferences such as ACM FAccT also seek to bring together a diverse group of researchers and practitioners interested in fairness and accountability to discuss and tackle emerging issues in this area. All of these initiatives address the teaching of AI ethics to future developers and practitioners. But where exactly do we as Africans need to be as a continent with regard to teaching AI ethics?

First and foremost, teaching AI ethics and ensuring that all students take courses in it should not be a matter for debate: as a matter of urgency, AI ethics should be a compulsory course for all students. As more industries incorporate AI technologies into their operations, everyone will at some point come into contact with AI-based applications, whether they are AI software developers or not. It is therefore important that, as we train the next generation, they learn to pause and think about the ethical consequences of the technologies they encounter or develop. AI developers often focus on increasing the performance of their models while paying little attention to the complex ethical considerations at play when designing AI systems. In view of this, future AI developers should be taught how to design AI systems for healthy outcomes, devoid of any form of bias.

Second, we should reach a point where students have internalised an AI ethics mindset. While being taught AI ethics, student AI developers should come to understand that the AI technologies they are developing are linked to ethical concerns and that they have a paramount role in finding ways to deal with these issues. Too often, developers tend to regard AI ethics as someone else’s problem. However, in training AI systems, the developer chooses which features the model should use. Take, for instance, an AI system that classifies transactions as fraudulent or non-fraudulent: the developer must decide how to select or eliminate sensitive features such as ethnicity or sex when training the model, to ensure the system carries no form of bias, as the sketch below illustrates. We therefore need to reach a state where students have an AI ethics mindset and do not see the study of ethics as the sole responsibility of a subgroup of people interested in ethics.
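
As an illustration of that choice, the following minimal sketch drops hypothetical sensitive columns before training and then audits the model’s decisions with a simple group-fairness measure (the demographic parity gap); all column names, records and predictions are made up.

```python
# Exclude sensitive features before training a fraud classifier, then
# audit its decisions with a simple group-fairness measure (demographic
# parity gap). All names and data here are hypothetical.

SENSITIVE = {"ethnicity", "sex"}

def strip_sensitive(record):
    """Remove sensitive attributes from the model's input features."""
    return {k: v for k, v in record.items() if k not in SENSITIVE}

def demographic_parity_gap(records, predictions, group_key="sex"):
    """Absolute difference in positive-prediction rates between groups."""
    rates = {}
    for rec, pred in zip(records, predictions):
        group = rec[group_key]
        n_pos, n = rates.get(group, (0, 0))
        rates[group] = (n_pos + pred, n + 1)
    shares = [n_pos / n for n_pos, n in rates.values()]
    return max(shares) - min(shares)

records = [
    {"amount": 900, "hour": 2,  "sex": "F", "ethnicity": "A"},
    {"amount": 40,  "hour": 14, "sex": "M", "ethnicity": "B"},
    {"amount": 700, "hour": 3,  "sex": "M", "ethnicity": "A"},
    {"amount": 55,  "hour": 11, "sex": "F", "ethnicity": "B"},
]
features = [strip_sensitive(r) for r in records]  # model never sees sex/ethnicity
flagged_as_fraud = [1, 0, 1, 0]                   # stand-in model outputs

print(demographic_parity_gap(records, flagged_as_fraud))  # 0.0 here
```

Note that dropping sensitive columns alone does not guarantee fairness, since other features can act as proxies for them, which is why the audit step remains necessary.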

Third, the understanding of ethical principles differs from person to person and from place to place. In teaching AI ethics, it is therefore important to have a diverse team of instructors (from lawyers to philosophers and beyond). This will give students different perspectives for understanding ethics when developing or using AI technologies. We need to reach a state where the teaching of AI ethics is a priority for everyone, notwithstanding the background of the individual or organization involved.

Lastly, higher institutions should by now have instituted multiple ways of teaching AI ethics to future generations. As there are no strictly right or wrong answers to ethical questions, which tend to differ from place to place, teaching modes should incorporate discussions of ethical problems and of how to make ethical decisions. Seminars and paper presentations could also be fused into the mode of instruction, giving students the liberty to think through and write down ethical challenges of their own and suggest ways of handling them. In particular, AI ethics classes should provide a complete blend of theory and practice: case studies could be presented to students, allowing them to ponder and deliberate over the associated ethical issues and suggest ways of dealing with them.

As AI becomes a massive technology impacting our lives, we need to direct its use in a more socially responsible manner, and this needs to start early in the training and education process. Students should, ideally, already be developing an AI ethics mindset and pausing to think about ethical concerns while developing AI technologies or before using them. But how can they do this if it is not thoroughly incorporated into their learning? Higher education could start by making AI ethics a compulsory course for all students, alongside multiple additional ways of imparting AI ethics training to younger learners using the approaches described above.

 

2 Conclusion

In recent cases of ethical issues that caused harm to end users of AI applications, most of the problems detected were due to the failure to consider certain ethical principles during the development lifecycle. While Africa as a continent is not currently ranked at the top in the development and use of AI, it is just a matter of time before the tables start turning. Africa has been identified as the continent for the next industrial revolution after Asia, and AI is gradually becoming an integral part of industrial and economic advancement on the continent. Africa therefore needs to take advantage of AI in its developmental agenda. African countries should, however, learn from others’ failures in order to produce better solutions that are ethically sound. The need to incorporate ethical values into AI systems is thus paramount. The first step is to educate the future leaders of AI development in Africa about ethical principles, so as to make them fully understand the impact of these tools on society. African institutions can start by putting together educational policies in which AI is a relevant field of study, accompanied by the teaching of ethical issues related to AI development and usage.

Teaching ethics is easier when a general set of guidelines is written down for a group of people to follow rather than contextualised to what a specific individual or group perceives as right or wrong and good or bad. This chapter nevertheless recommends that, in teaching AI ethics on the African continent, certain key principles be stressed: respect for human rights, respect for society and the environment, robustness and safety, transparency, contestability, responsibility or accountability, justice and fairness, and privacy. These principles will go a long way towards addressing challenges faced in societal domains like agriculture and healthcare, and towards finding solutions to issues of data bias.

 

References

Ethical and Societal Implications of Data and AI. 2019. Nuffield Foundation. https://www.nuffieldfoundation.org/sites/default/files/files/Ethical-and-Societal-Implications-of-Data-and-AI-report-Nuffield-Foundat.pdf. Accessed 1 June 2021.

Accenture-AI-Economic-Growth-Infographic.pdf. n.d. https://www.accenture.com/_acnmedia/PDF-57/Accenture-AI-Economic-Growth-Infographic.pdf. Accessed 1 June 2021.

Africa Agriculture Status Report 2020—AGRA. n.d. https://agra.org/africa-agriculture-status-report-2020/. Accessed 1 June 2021.

AI+Readiness+Report.pdf. n.d. https://static1.squarespace.com/static/58b2e92c1e5b6c828058484e/t/5f7747f29ca3c20ecb598f7c/1601653137399/AI+Readiness+Report.pdf. Accessed 1 June 2021.

AI, Robots, and Ethics in the Age of COVID-19. n.d. https://sloanreview.mit.edu/article/ai-robots-and-ethics-in-the-age-of-covid-19/. Accessed 1 June 2021.

AIHLEG. 2019. Ethics Guidelines for Trustworthy AI. https://ai.bsa.org/wp-content/uploads/2019/09/AIHLEG_EthicsGuidelinesforTrustworthyAI-ENpdf.pdf. Accessed 1 June 2021.

Aysolmaz, B., N. Dau, and D. Iren. 2020. Preventing Algorithmic Bias in the Development of Algorithmic Decision-Making Systems: A Delphi Study. In Proceedings of the Annual Hawaii International Conference on System Sciences.

Borenstein, J., and A. Howard. 2021. Emerging Challenges in AI and the Need for AI Ethics Education. AI and Ethics 1 (1): 61–65. https://doi.org/10.1007/s43681-020-00002-7.

‘Coded Bias’ Is the Most Important Film About AI You Can Watch Today. n.d. https://www.vice.com/en/article/n7v8mx/coded-bias-netflix-documentary-ai-ethics-surveil. Accessed 4 June 2021.

Montréal Institute for Learning Algorithms. 2018. Montréal Declaration for a Responsible Development of Artificial Intelligence, 1–21. https://www.montrealdeclaration-responsibleai.com/.

Department of Industry, Science, Energy and Resources. 2019. AI Ethics Principles. https://www.industry.gov.au/data-and-publications/building-australias-artificial-intelligence-capability/ai-ethics-framework/ai-ethics-principles.

FAIR Principles. n.d. GO FAIR. https://www.go-fair.org/fair-principles/. Accessed 1 June 2021.

Furey, H., and F. Martin. 2019. AI Education Matters: A Modular Approach to AI Ethics Education. AI Matters 4(4): 13–15. https://doi.org/10.1145/3299758.3299764.

Google AI in Ghana. 2018. Google. https://blog.google/around-the-globe/google-africa/google-ai-ghana/

GTC 2021: #1 AI Conference. n.d. NVIDIA. https://www.nvidia.com/en-us/gtc/. Accessed 1 June 2021.

Herzfeld, N. 2002. Creating in Our Own Image: Artificial Intelligence and the Image of God. Zygon 37 (2): 303–316. https://doi.org/10.1111/0591-2385.00430.

IndabaX—Deep Learning Indaba 2021. n.d. https://deeplearningindaba.com/2021/indabax/. Accessed 1 June 2021.

Kamilaris, A., A. Kartakoullis, and F.X. Prenafeta-Boldú. 2017. A Review on the Practice of Big Data Analysis in Agriculture. Computers and Electronics in Agriculture 143: 23–37. https://doi.org/10.1016/j.compag.2017.09.037.

Kumari, S.V., P. Bargavi, and U. Subhashini. 2016. Role of Big Data Analytics in Agriculture. International Journal of Computing Science and Mathematics Engineering 3: 110–113.

Leslie, D. 2019. Understanding Artificial Intelligence Ethics and Safety: A Guide for the Responsible Design and Implementation of AI Systems in the Public Sector. Zenodo. https://doi.org/10.5281/ZENODO.3240529.

Lo Piano, S. 2020. Ethical Principles in Machine Learning and Artificial Intelligence: Cases from the Field and Possible Ways Forward. Humanities and Social Sciences Communications 7 (1): 9. https://doi.org/10.1057/s41599-020-0501-9.

Artificial Intelligence in Health: Ethical Considerations for Research and Practice. 2019. HIMSS. https://www.himss.org/resources/artificial-intelligence-health-ethical-considerations-research-and-practice.

Mehrabi, N., F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan. 2019. A Survey on Bias and Fairness in Machine Learning. http://arxiv.org/abs/1908.09635.

Ntoutsi, E., P. Fafalios, U. Gadiraju, V. Iosifidis, W. Nejdl, M.-E. Vidal, S. Ruggieri, F. Turini, S. Papadopoulos, E. Krasanakis, I. Kompatsiaris, K. Kinder-Kurlanda, C. Wagner, F. Karimi, M. Fernandez, H. Alani, B. Berendt, T. Kruegel, C. Heinze, et al. 2020. Bias in Data-Driven Artificial Intelligence Systems—An Introductory Survey. Wires Data Mining and Knowledge Discovery 10 (3): e1356. https://doi.org/10.1002/widm.1356.

O’Grady, M.J., and G.M.P. O’Hare. 2017. Modelling the smart farm. Information Processing in Agriculture 4 (3): 179–187. https://doi.org/10.1016/j.inpa.2017.05.001.

Pratt, D.D. 2002. Good Teaching: One Size Fits All? New Directions for Adult and Continuing Education 2002 (93): 5–16. https://doi.org/10.1002/ace.45.

Schönfeld, M.V., R. Heil, and L. Bittner. 2018. Big Data on a Farm—Smart Farming. In Big Data in Context, eds. T. Hoeren and B. Kolany-Raiser, 109–120. Springer International Publishing. https://doi.org/10.1007/978-3-319-62461-7_12.

Stahl, B.C. 2021. Artificial Intelligence for a Better Future: An Ecosystem Perspective on the Ethics of AI and Emerging Digital Technologies. Springer International Publishing. https://doi.org/10.1007/978-3-030-69978-9.

Uni_ethical_ai.pdf. n.d. http://www.thefutureworldofwork.org/media/35420/uni_ethical_ai.pdf. Accessed 1 June 2021.

Whittlestone, J., R. Nyrup, A. Alexandrova, and S. Cave. 2019. The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 195–200. https://doi.org/10.1145/3306618.3314289.

 

About this chapter

Kwao, G.P., Kanubala, D.D., Sonna, B. (2023). AI Ethics Education for Future African Leaders. In: Corrigan, C.C., Asakipaam, S.A., Kponyo, J.J., Luetge, C. (eds) AI Ethics in Higher Education: Insights from Africa and Beyond. SpringerBriefs in Ethics. Springer, Cham. https://doi.org/10.1007/978-3-031-23035-6_7

http://creativecommons.org/licenses/by/4.0/

https://link.springer.com/chapter/10.1007/978-3-031-23035-6_7

 
