Lesson 1: Introduction to AI Ethics (Artificial Intelligence, Third Secondary Grade)

6. AI and Society

In this unit, you will analyze how AI ethics influence and guide the development of sophisticated AI systems. You will evaluate how large-scale AI systems impact societies and the environment, and how they are regulated for ethical and sustainable use. Then, you will use the Webots simulator to program a drone for autonomous movement and for patrolling an area with image analysis.

Learning Objectives
In this unit, you will learn to:
> Identify what AI ethics is.
> Interpret how bias and fairness impact the ethical use of AI systems.
> Evaluate how the transparency and explainability problem in AI can be solved.
> Analyze how large-scale AI systems influence society and how they are regulated.
> Program a drone for autonomous movement.
> Develop an image analysis system for the drone to patrol an area.

Tools
> Webots
> OpenCV library

Ministry of Education, 2024-1446


Lesson 1: Introduction to AI Ethics
Link to digital lesson: www.ien.edu.sa

Overview of AI Ethics
As AI continues to advance, it has become increasingly important to consider the ethical implications of this technology. As a citizen of the modern world, it is important to understand the significance of AI ethics in developing and using responsible AI systems.

AI Ethics
AI ethics refers to the principles, values, and moral standards that guide AI systems' development, deployment, and use.

One of the main reasons AI ethics is important is that AI systems can potentially affect people's lives significantly. For example, AI algorithms can be used to make hiring and medical treatment decisions. If these algorithms are biased or discriminatory, they can lead to unjust outcomes that harm individuals and communities.

Real-World Examples of Ethical Concerns in AI

Discriminatory algorithms
There have been cases where AI systems were found to perpetuate biases and discriminate against certain groups of people. For example, a study by the National Institute of Standards and Technology found that facial recognition technology has higher error rates for people with darker skin tones, which can lead to false identifications and wrongful arrests. Another example is the use of AI algorithms in the criminal justice system, where studies have shown that these algorithms can be biased against minorities and lead to harsher sentences.

Invasion of privacy
AI systems that collect and analyze data can threaten personal privacy. For example, in 2018, a political consulting firm was found to have harvested data from millions of Facebook users without their consent and used it to influence political campaigns. This incident raised concerns about using AI and data analytics to manipulate public opinion and infringe on individuals' privacy rights.
Autonomous weapons
The development of autonomous weapons, which can operate without human intervention, has raised ethical concerns about using AI in warfare. Critics argue that these weapons can make life-or-death decisions without human oversight and can be programmed to target specific groups of people, which could violate international humanitarian law and lead to civilian casualties.

Job displacement
The increasing use of AI and automation in various industries has raised concerns about job displacement and the impact on workers' livelihoods. While AI can improve efficiency and productivity, it can also lead to job losses and exacerbate income inequality, which can have negative social and economic consequences.


Bias and Fairness in AI

AI Bias
In the context of AI, bias refers to the tendency of machine learning algorithms to produce outcomes that systematically favor or disfavor certain alternatives or groups, leading to inaccurate predictions and potential discrimination against certain products or populations.

Bias can occur in AI systems when the data used to train the algorithm is unrepresentative or contains underlying prejudices. Bias can affect anything the system's outputs represent, such as products, opinions, communities, and trends, among others.

An example of a biased algorithm is an automated hiring system that uses AI to screen job candidates. If the algorithm is trained on biased data, such as historical hiring patterns that favor certain demographic groups, it may perpetuate those biases and unfairly screen out qualified candidates from groups that are not well represented in the dataset. For example, if the algorithm favors candidates who attended elite universities or worked at prestigious companies, it may disadvantage candidates who did not have access to those opportunities or who come from less privileged backgrounds. This can lead to a lack of diversity in the workplace and perpetuate systemic inequalities. Therefore, it is important to develop and use AI hiring algorithms that are based on fair and transparent criteria and do not perpetuate biases.

Fairness in AI refers to the degree to which AI systems produce unbiased outcomes and treat all individuals and groups equitably. Achieving AI fairness requires identifying and addressing biases in the data, algorithms, and decision-making processes. For example, one approach to achieving fairness in AI is a process called "debiasing," where biased data is identified and removed or modified to ensure that the algorithm produces more accurate and unbiased outcomes.
Table 6.1: Factors that contribute to biased AI

Biased training data
AI algorithms learn from the data they are trained on, so if the data is biased or unrepresentative, the algorithm may produce biased outcomes. For example, if an image recognition algorithm is trained on a dataset that predominantly features lighter-skinned individuals, it may have difficulty recognizing individuals with darker skin tones accurately.

Lack of diversity in the development teams
If the development team is not diverse and does not represent a range of cultural and technical backgrounds, it may not recognize the biases in the data or the algorithm. A team consisting only of individuals from a particular geographic region or culture may fail to consider other regions or cultures represented in the data used to train the AI model.

Lack of oversight and accountability
The lack of oversight and accountability in the development and deployment of AI systems can lead to the perpetuation of biases. Without adequate oversight and accountability mechanisms from companies and governments, testing for bias in AI systems may not be carried out, and there may be no recourse for individuals or communities harmed by biased outcomes.

Lack of experience and knowledge in the development team
Development teams lacking experience may not identify or address bias indicators in the training data. A lack of knowledge in designing and testing AI models for fairness may perpetuate existing biases.
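The effect of biased training data, the first factor in Table 6.1, can be seen even in a trivial model. The following is a minimal sketch with made-up numbers: group B is under-represented in the historical hiring data and was rarely hired, so a naive per-group majority predictor trained on that history rejects every group-B applicant.

```python
from collections import Counter

# Hypothetical historical hiring data (made-up numbers): each record
# is (group, hired). Group B is under-represented and was rarely hired.
data = ([("A", 1)] * 80 + [("A", 0)] * 20 +   # group A: hired 80% of the time
        [("B", 1)] * 2 + [("B", 0)] * 8)      # group B: hired 20% of the time

def majority_predictor(records):
    """A naive 'model' that predicts the most common historical
    outcome for each group, reproducing the bias in the data."""
    per_group = {}
    for group, hired in records:
        per_group.setdefault(group, Counter())[hired] += 1
    return {g: c.most_common(1)[0][0] for g, c in per_group.items()}

model = majority_predictor(data)
# model["B"] == 0: every group-B applicant is rejected, purely
# because the training data under-represents and disfavors group B.
```

Real hiring models are far more complex, but the failure mode is the same: a model that optimizes for agreement with biased historical outcomes reproduces those outcomes.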


Reducing Bias and Promoting Fairness in AI Systems

Diverse and representative data
This involves using data that reflects the diversity of the group it represents. Additionally, it is important to regularly review and update the data used to train AI systems to ensure that it remains relevant and unbiased.

Debiasing techniques
Debiasing techniques involve identifying and removing biased data from AI systems to improve accuracy and fairness. This can include techniques such as oversampling, undersampling, and data augmentation to ensure the AI system is exposed to a variety of data points.

Explainability and transparency
Making AI systems more transparent and explainable can help to reduce bias by allowing users to understand how the system makes decisions. This involves clarifying the decision-making process and allowing users to explore and test the system's outputs.

Human-in-the-loop design
Incorporating human-in-the-loop design into AI systems can help to reduce bias by allowing humans to intervene and correct the system's outputs when necessary. This involves designing AI systems with a feedback loop that enables humans to review and approve the system's decisions.

Ethical principles
Incorporating ethical principles, such as fairness, transparency, and accountability, into the design and implementation of AI systems helps ensure that they are developed and used ethically and responsibly. This involves establishing clear ethical guidelines for using AI systems and regularly reviewing and updating these guidelines as necessary.

Regular monitoring and evaluation
Regularly monitoring and evaluating AI systems is essential for identifying and correcting bias. This involves testing the system's outputs and conducting regular audits to ensure the system operates fairly and accurately.

Evaluating user feedback
User feedback can help identify bias in the system, as users are often more aware of their own experience and can provide better insights into potential bias than AI algorithms can.
For example, users can provide feedback on how they perceive the AI system's performance or provide helpful suggestions for ways to improve the system and make it less biased.

Oversampling
Oversampling in machine learning is the process of increasing a class's samples in a dataset to improve the model's accuracy. It is done by randomly duplicating existing points from the class or generating new points from the same class.

Undersampling
Undersampling is the process of reducing the size of the dataset by deleting a subset of the larger class to focus on the more important data points. This is particularly useful if the dataset contains an imbalance of classes or different data groups.

Data Augmentation
Data augmentation is the process of generating new training data from existing data to enhance the performance of machine learning models. Examples include image flipping, rotation, cropping, color changing, affine transformation, and noise addition.
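The oversampling and undersampling definitions above can be sketched in a few lines of plain Python. The dataset, sizes, and labels below are made up for illustration; real projects typically use dedicated libraries such as imbalanced-learn rather than hand-rolled resampling.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical imbalanced dataset: 10 majority-class samples (label 0)
# and only 2 minority-class samples (label 1).
majority = [("maj%d" % i, 0) for i in range(10)]
minority = [("min%d" % i, 1) for i in range(2)]

# Oversampling: randomly duplicate minority samples until both
# classes have the same number of points (10 and 10).
oversampled = majority + random.choices(minority, k=len(majority))

# Undersampling: randomly drop majority samples down to the size
# of the minority class (2 and 2).
undersampled = random.sample(majority, k=len(minority)) + minority
```

Note the trade-off: oversampling keeps all the data but repeats minority points, which can cause overfitting, while undersampling balances the classes by discarding majority-class information.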


The Problem of Moral Responsibility in AI
The problem of moral responsibility when using advanced AI systems is a complex and multifaceted issue that has attracted significant attention in recent years. One of the key challenges with advanced AI systems is that they can make decisions and take actions that have significant positive or negative consequences for individuals and society. However, it is not always clear who should be held morally responsible for these outcomes.

One perspective is that the developers and designers of AI systems should bear responsibility for any negative outcomes that result from their use. This view emphasizes the importance of ensuring that AI systems are designed with ethical considerations in mind and that developers are held accountable for any harm their creations may cause.

Others argue that the responsibility for AI outcomes should be shared among broader stakeholders, including policymakers, regulators, and technology users. This view highlights the importance of ensuring that AI systems are used in ways that align with ethical principles and that the risks associated with their use are carefully evaluated and managed.

Another view is that AI systems are moral agents responsible for their own actions. This theory holds that advanced AI systems can have agency and autonomy, making them more than tools and requiring them to be accountable for their own acts. However, there are various problems with this theory. AI systems can make judgments and act, but they are not moral agents, for multiple reasons.
First, AI systems lack consciousness and subjective experiences, which are essential for moral agency. Moral agency usually involves reflecting on one's ideals and actions. Second, people train AI systems to follow specified rules and goals, which limits their moral judgment. AI systems can replicate moral decision-making but lack free will and personal autonomy. Finally, the creators and deployers of AI systems are responsible for their systems' acts. Thus, AI systems can aid ethical decision-making, but they are not moral agents.


Transparency and Explainability in AI and the Black-Box Problem

Black-Box System
A black-box system is one that does not reveal its internal working processes to humans.

The black-box problem in AI is the challenge of understanding how an AI-based system makes decisions or produces outputs. This can make it difficult to trust, explain, or improve the system. A lack of openness and explainability can undermine people's trust in the model, which is especially problematic in high-stakes settings such as medical diagnosis and autonomous vehicle decisions.

Biases in machine learning models are another black-box concern. The biases in the data these models are trained on can lead to unfair or discriminating results. Additionally, accountability for decisions made by a black-box model can be difficult to determine. It can be challenging to hold anyone responsible for those decisions, particularly where human oversight is essential, as in the case of autonomous weapons systems.

The lack of transparency in AI decision-making also makes it challenging to identify and fix problems with the model. Without knowing how the model makes its decisions, it can be difficult to make improvements and ensure it functions correctly.

There are several strategies for addressing the black-box problem in AI. One strategy is to use explainable AI techniques to make machine learning models more transparent and interpretable. This can involve techniques such as natural language explanations or visualizations to aid in understanding the decision-making process. Another approach is to use more interpretable machine learning models, such as decision trees or linear regression. These models may be less complex and easier to understand, but they may not be as powerful or accurate as more complex models. Addressing the black-box problem in AI is crucial for building trust in machine learning models and ensuring they are used ethically and fairly.
An input is fed in and an output is produced without knowing how the system works, as depicted in Figure 6.1.

Figure 6.1: Black-Box System (Input → Black-Box → Output)

Methods for Enhancing the Transparency and Explainability of AI Models

LIME
LIME (Local Interpretable Model-Agnostic Explanations), which you have used previously for NLP tasks, is a technique that generates local explanations for individual predictions made by a model. LIME creates a simpler, interpretable model that approximates the complex black-box model's behavior around a specific prediction. This simpler model is then used to explain how the model arrived at its decision for that particular prediction. The advantage of LIME is that it provides human-readable explanations that non-technical stakeholders can easily understand, even for complex models like deep neural networks.

SHAP
SHAP (SHapley Additive exPlanations) is another method for explaining the output of machine learning models. SHAP is based on the concept of Shapley values from game theory and assigns a value (or weight) to each feature's contribution to the prediction. SHAP can be used with any model, and it provides explanations in the form of feature importance scores, which can help to identify which features are the most influential in the model's output.
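For a small model, the Shapley values that SHAP builds on can be computed exactly by brute force over feature coalitions. The sketch below is not the shap library itself; it is a from-scratch illustration using a made-up linear scoring function, where features outside a coalition are set to their baseline values.

```python
from itertools import combinations
from math import factorial

# Toy model (an assumption for this sketch): a linear scoring
# function over three hypothetical features.
weights = {"income": 2.0, "age": 0.5, "debt": -1.5}

def model(x):
    return sum(weights[f] * x[f] for f in weights)

def shapley_values(x, baseline):
    """Exact Shapley values by enumerating all coalitions S:
    features in S keep x's value, the rest are set to baseline."""
    feats = list(weights)
    n = len(feats)
    phi = {}
    for i in feats:
        others = [f for f in feats if f != i]
        total = 0.0
        for r in range(n):                      # coalition sizes 0..n-1
            for S in combinations(others, r):
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                with_i = {f: x[f] if (f in S or f == i) else baseline[f]
                          for f in feats}
                without = {f: x[f] if f in S else baseline[f]
                           for f in feats}
                total += w * (model(with_i) - model(without))
        phi[i] = total
    return phi

x = {"income": 3.0, "age": 40.0, "debt": 2.0}
base = {"income": 1.0, "age": 30.0, "debt": 1.0}
phi = shapley_values(x, base)
# For a linear model, phi[i] reduces to weights[i] * (x[i] - base[i]),
# and the values sum to model(x) - model(base).
```

This brute force costs O(2^n) model evaluations per feature, which is why the real shap library relies on approximations and model-specific shortcuts for anything beyond a handful of features.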


Another technique for improving AI explainability is the use of interpretable models such as decision trees and decision rules, which can be easily visualized. Decision trees partition the feature space based on the most informative feature and provide explicit rules for making decisions. Decision trees are particularly useful when the data is tabular and there are a limited number of features. However, these models are also limited, as the interpretability of a decision tree decreases as the tree grows. For example, it is difficult to understand trees consisting of thousands of nodes and hundreds of levels.

Finally, another approach uses techniques such as sensitivity analysis to help understand how changes in inputs or assumptions impact the model's output. This approach can be particularly useful for identifying the sources of uncertainty in the model and for understanding the model's limitations.

Value-Based Reasoning in AI Systems

Value-Based Reasoning
Value-based reasoning in AI systems refers to the process by which artificial intelligence agents make decisions or derive conclusions based on a predefined set of values, principles, or ethical considerations.

The goal is to create AI systems that are more aligned with human values and ethics, ensuring that they act in beneficial, fair, and responsible ways. The first step in value-based reasoning involves understanding and representing ethical values within AI systems. These systems must be capable of interpreting and internalizing values or ethical guidelines provided by their human creators or stakeholders. This process may involve learning from examples, human feedback, or explicit rules. With a clear understanding of these values, AI systems can better align their actions with the desired ethical principles.

Figure 6.2: Representation of Value-Based Reasoning (Input + Predefined values → AI Model → Output)

The second aspect of value-based reasoning focuses on evaluating decisions or actions based on the internalized values.
AI systems must assess the potential outcomes of different decisions or actions by considering each option's consequences, risks, and benefits. This evaluation process should take into account the underlying values the AI system has been designed to uphold, ensuring that it makes informed and value-aligned choices.

Lastly, value-based reasoning requires AI systems to make decisions that align with the established values. After evaluating various options and their potential outcomes, the AI system should select the decision or action that best reflects the ethical principles and goals it was designed to follow. By making value-aligned decisions, AI agents can act in ways consistent with the ethical guidelines set by their creators, promoting responsible and beneficial behavior.

For example, AI systems are being used in healthcare to assist with diagnosis and treatment decisions. These systems must be able to reason about the ethical implications of different treatments, such as the potential side effects or the impact on quality of life, and make decisions that prioritize patient well-being. Another example is AI systems used in finance to assist with investment decisions.
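The three steps above (represent values, evaluate each option against them, choose the best-aligned action) can be illustrated with a toy scoring scheme. All names, weights, and scores here are hypothetical numbers invented for the sketch; real value alignment is far more involved than a weighted sum.

```python
# Step 1: represent predefined values as weights. Positive weights
# are values to promote, negative weights are outcomes to avoid.
# (Hypothetical numbers for illustration only.)
values = {"patient_wellbeing": 0.6, "side_effect_risk": -0.3, "cost": -0.1}

# Candidate treatment options, each scored per value on a 0-1 scale
# (made-up data for the sketch).
options = {
    "treatment_a": {"patient_wellbeing": 0.9, "side_effect_risk": 0.7, "cost": 0.8},
    "treatment_b": {"patient_wellbeing": 0.8, "side_effect_risk": 0.2, "cost": 0.4},
}

# Step 2: evaluate each option as a weighted sum of its scores
# against the predefined values.
def alignment(option):
    return sum(values[v] * option[v] for v in values)

# Step 3: choose the option that best aligns with the values.
# Treatment A promises more well-being but carries far higher
# side-effect risk, so B scores higher overall.
best = max(options, key=lambda name: alignment(options[name]))
```

In practice, as the lesson notes, such a system would present these trade-offs to a human expert rather than act on the top-scoring option autonomously.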


These systems must be able to reason about the ethical implications of different investments, such as the impact on the environment or social welfare, and make decisions that align with the investor's values. It is important to note that responsibility does not rest solely with the AI system; rather, it is shared in a collaboration between the AI and human experts. The AI system assists in decision-making by summarizing the case and presenting the trade-offs to the expert user, who ultimately makes the final decision. This ensures that the human expert retains control and is accountable for the final outcome, while also benefiting from the insights and analysis provided by the AI system.

AI and Environmental Impact
The impact of AI on the environment and on our relationship with the environment is complex and multifaceted.

Potential benefits
On the one hand, AI has the potential to help us better understand and address environmental challenges, such as climate change, pollution, and biodiversity loss. AI can help us analyze vast amounts of data and predict the impact of different human activities on the environment (Figure 6.4: AI analyzing large amounts of data). It can also help in designing more efficient and sustainable systems, such as energy grids, agriculture, transportation systems, and buildings.

Potential risks or harm
However, there are also concerns about the environmental impact of AI itself. The development and use of AI systems require significant energy and resources, which can contribute to greenhouse gas emissions and other environmental impacts. For example, training a single AI model can require as much energy as several cars use in their lifetimes. Additionally, producing the electronic components in AI systems can contribute to environmental pollution, such as through the use of toxic chemicals and the generation of electronic waste. Moreover, AI can potentially change our relationship with the environment in ways that are not always positive.
For example, using AI in agriculture may lead to more intensive and industrialized farming practices, negatively impacting soil health and biodiversity. Similarly, the use of AI in transportation may lead to more reliance on cars and other modes of transportation, which can contribute to air pollution and habitat destruction.

Figure 6.3: AI systems require significant energy and resources

Conclusion
Overall, the impact of AI on the environment and on our relationship with the environment depends on how we develop and use AI systems. It is important to consider AI's potential environmental impacts and to develop and use AI systems in ways that prioritize sustainability, efficiency, and the planet's health.


Regulatory Frameworks and Industry Standards
Regulatory frameworks and industry standards are critical in promoting ethical AI applications. Regulations and standards can help ensure that organizations developing and using AI systems are accountable for their actions. By setting clear expectations and consequences for non-compliance, regulations and standards can incentivize organizations to prioritize ethical considerations when developing and using AI systems.

Transparency
Regulations and standards can promote transparency in AI systems by requiring organizations to disclose how their systems work and what data they use. This can help build trust with stakeholders and reduce concerns about potential biases or discrimination in AI systems.

Risk assessment
The risk of unintended consequences or negative outcomes from using AI can also be reduced with appropriate regulations and standards. By requiring organizations to conduct risk assessments, which means identifying potential risks and hazards and implementing appropriate safeguards, regulations and standards can help minimize potential harm to individuals and society.

Clear frameworks for developing and deploying AI
Regulations and standards can also encourage innovation by providing a clear framework for developing and using AI systems. Using regulations and standards to establish a level playing field and provide guidance on ethical considerations can help organizations develop and deploy AI systems in ways that are consistent with ethical and social values.

In summary, regulatory frameworks and industry standards are important in promoting ethical AI applications. By providing clear guidance and incentives for organizations to prioritize ethical considerations, regulations and standards help ensure that AI systems are developed and used in ways that are aligned with social and ethical values.
Sustainable AI Development in the Kingdom of Saudi Arabia
AI technologies and systems are expected to become a major disruptor in the financial sectors of many countries and may significantly affect the job market. It is predicted that in the coming years, about 70% of the routine work currently performed by workers will be fully automated. The AI industry is expected to create 97 million new jobs and add 16 trillion US dollars to global GDP.

The Saudi Data and Artificial Intelligence Authority (SDAIA) has developed strategic goals for the Kingdom to use sustainable AI technologies for its development. One goal is for KSA to become a worldwide hub for Data & AI. SDAIA also hosted the first Global AI Summit in KSA, where global leaders and innovators can discuss and shape AI's future for society's benefit.

Another aim is to transform the Kingdom's workforce by developing a local Data & AI talent supply. As AI is transforming labor markets globally, most sectors need to adapt and integrate Data & AI into education, professional training, and public knowledge. By doing so, KSA can gain a competitive advantage in terms of employment, productivity, and innovation.

The final goal is to attract companies and investors through flexible and stable regulatory frameworks and incentives. Regulations will focus on developing policies and standards for AI, including its ethical use. The framework will promote and support the ethical development of AI research and solutions while providing guidelines for data protection and privacy standards. This will provide stability and direction for stakeholders operating in the Kingdom.


Example
The Kingdom of Saudi Arabia plans to use AI systems and technologies as the base of its NEOM and THE LINE megacity projects. The NEOM project is a futuristic city that will be powered by clean energy, have advanced transportation systems, and provide high-tech services. It will be a platform for cutting-edge technologies, including AI, and will use smart city solutions to optimize energy consumption, traffic management, and other urban services. AI systems will be used to enhance the quality of life for residents and to improve sustainability.

Similarly, THE LINE will be a linear, zero-carbon city built with AI technologies. THE LINE will use AI systems to automate its infrastructure and transportation systems, creating a seamless, efficient experience for residents. The city will be powered by clean energy and will prioritize sustainable living. AI-powered systems will be used to monitor and optimize energy usage, traffic flow, and other urban services. Overall, AI systems and technologies will play a crucial role in developing these megacity projects, enabling them to become sustainable, efficient, and innovative cities of the future.

International AI Ethics Guidelines
As illustrated in Table 6.2, UNESCO has developed a guideline document detailing the values and principles with which new AI systems and technologies should be developed and maintained.
Table 6.2: Values and principles of AI ethics

Values
• Respect, protection and promotion of human dignity, human rights and fundamental freedoms
• Environment and ecosystem flourishing
• Ensuring diversity and inclusiveness
• Living in harmony and peace

Principles
• Proportionality and doing no harm
• Safety and security
• Fairness and non-discrimination
• Sustainability
• Privacy
• Human oversight and determination
• Transparency and explainability
• Responsibility and accountability
• Awareness and literacy
• Multi-stakeholder and adaptive governance and collaboration


Exercises

1 Read the sentences and tick True or False.
1. AI ethics is only concerned with the development of AI systems.
2. AI and automation have the potential to lead to job displacement.
3. A lack of diversity in AI development teams can lead to biases being overlooked or unaddressed.
4. Incorporating ethical principles into AI systems can help ensure their responsible development and use.
5. Human-in-the-loop design requires that AI systems work without any human intervention.
6. The black-box problem in AI refers to the difficulty in understanding how AI algorithms arrive at their decisions or predictions.
7. AI models can be designed to adapt their decisions or outcomes according to established ethical values.
8. The widespread use of AI only has positive implications for the environment.

2 Describe how AI and automation might lead to job displacement.


3 Outline how biased training data can contribute to biased AI outcomes.

4 Define the black-box problem in AI systems.

5 Compare how AI systems can have both positive and negative impacts on the environment.
