Accountability: According to the first edition of the EU-U.S. terminology and taxonomy for artificial intelligence, the term “Accountability” means: Accountability relates to an allocated responsibility. The responsibility can be based on regulation or agreement or through assignment as part of delegation. In a systems context, accountability refers to systems and/or actions that can be traced uniquely to a given entity….
AI (or Algorithmic) Bias: According to the first edition of the EU-U.S. terminology and taxonomy for artificial intelligence, the term “AI (or Algorithmic) Bias” means: Harmful AI bias describes systematic and repeatable errors in AI systems that create unfair outcomes, such as placing privileged groups at systematic advantage and unprivileged groups at systematic disadvantage. Different types of bias can emerge…
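One symptom of the "systematic advantage/disadvantage" the definition describes can be measured by comparing positive-outcome rates between groups. A minimal sketch, with illustrative data and group labels (nothing below comes from the taxonomy itself):

```python
# Minimal sketch of measuring one symptom of harmful AI bias: unequal
# positive-outcome rates between two groups (a demographic-parity gap).
# The data, group names, and decisions are illustrative.

outcomes = [  # (group, model_decision) pairs, 1 = favorable outcome
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def selection_rate(group):
    # Fraction of favorable decisions for the given group.
    decisions = [d for g, d in outcomes if g == group]
    return sum(decisions) / len(decisions)

gap = selection_rate("A") - selection_rate("B")
print(f"group A: {selection_rate('A'):.2f}, "
      f"group B: {selection_rate('B'):.2f}, gap: {gap:.2f}")
# A large gap (here 0.75 - 0.25 = 0.50) flags a potentially unfair outcome.
```

A gap near zero does not prove fairness, but a large gap is a common first indicator that one group is systematically advantaged.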
Attack: According to the first edition of the EU-U.S. terminology and taxonomy for artificial intelligence, the term “Attack” means: Action targeting a learning system to cause malfunction.
Chatbot (Conversational Bot): According to the first edition of the EU-U.S. terminology and taxonomy for artificial intelligence, the term “Chatbot (Conversational Bot)” means: A computer program designed to simulate conversation with a human user, usually over the internet; especially one used to provide information or assistance to the user as part of an automated service.
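In the definition's sense of "a computer program designed to simulate conversation… to provide information or assistance as part of an automated service," even a keyword-matching script qualifies. A minimal sketch with illustrative rules and replies:

```python
# Minimal rule-based chatbot sketch: simulates conversation to provide
# information as part of an automated service. Rules and wording are
# illustrative, not taken from any real service.

RULES = {
    "hours": "We are open 9:00-17:00, Monday to Friday.",
    "refund": "Refunds are processed within 5 business days.",
}

def reply(message):
    # Return the first canned answer whose keyword appears in the message.
    for keyword, answer in RULES.items():
        if keyword in message.lower():
            return answer
    return "Sorry, I did not understand. Try asking about 'hours' or 'refund'."

print(reply("What are your opening hours?"))
```

Modern chatbots replace the keyword table with a learned language model, but the interaction loop, user message in, generated reply out, is the same.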
Discrimination: According to the first edition of the EU-U.S. terminology and taxonomy for artificial intelligence, the term “Discrimination” means: Unequal treatment of a person based on belonging to a category rather than on individual merit. Discrimination can be a result of societal, institutional, and implicitly held individual biases or attitudes that get captured in processes across the AI lifecycle, including…
Evasion: According to the first edition of the EU-U.S. terminology and taxonomy for artificial intelligence, the term “Evasion” means: In Evasion Attacks, the adversary solves a constrained optimization problem to find a small input perturbation that causes a large change in the loss function and results in output misclassification.
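The "small input perturbation that causes a large change in the loss function" can be illustrated with a fast-gradient-sign step against a toy logistic model. A minimal sketch, assuming a single already-trained weight vector; the weights and input values are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, x, y):
    # Cross-entropy loss for one example with label y in {0, 1}.
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm_perturb(w, x, y, eps=0.3):
    # Gradient of the loss w.r.t. the INPUT (not the weights):
    # d(loss)/dx = (p - y) * w for logistic regression.
    p = sigmoid(w @ x)
    grad_x = (p - y) * w
    # Step in the direction that increases the loss, bounded by eps
    # in each coordinate (the "small perturbation" constraint).
    return x + eps * np.sign(grad_x)

w = np.array([1.5, -2.0, 0.5])   # toy "trained" weights
x = np.array([1.0, -1.0, 1.0])   # input confidently classified as y = 1
y = 1

x_adv = fgsm_perturb(w, x, y)
print(logistic_loss(w, x, y), logistic_loss(w, x_adv, y))
# The perturbed input yields a strictly higher loss than the original.
```

Each coordinate moves by at most `eps`, so the perturbation stays small while the loss increases, which is the constrained-optimization trade-off the definition refers to.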
Fault Tolerance: According to the first edition of the EU-U.S. terminology and taxonomy for artificial intelligence, the term “Fault Tolerance” means: The ability of a system or component to continue normal operation despite the presence of hardware or software faults.
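"Continuing normal operation despite faults" often reduces, in code, to catching a component failure and degrading to a fallback. A minimal sketch with illustrative component names:

```python
# Minimal fault-tolerance sketch: the primary component fails, and the
# caller degrades gracefully to a fallback instead of crashing.
# All names and the simulated fault are illustrative.

def primary_inference(query):
    raise RuntimeError("accelerator offline")  # simulated hardware fault

def fallback_inference(query):
    return "cached-answer"  # e.g. a cached or rule-based result

def tolerant_inference(query):
    try:
        return primary_inference(query)
    except RuntimeError:
        return fallback_inference(query)

print(tolerant_inference("query"))  # service keeps answering despite the fault
```

Real systems add redundancy, health checks, and retries, but the core idea is the same: a fault in one component must not become a failure of the whole system.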
Feedback Loop: According to the first edition of the EU-U.S. terminology and taxonomy for artificial intelligence, the term “Feedback Loop” means: Feedback loop describes the process of leveraging the output of an AI system and corresponding end-user actions in order to retrain and improve models over time. The AI-generated output (predictions or recommendations) is compared against the final decision (for…
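The comparison step the definition describes, AI output against the final decision, can be sketched as collecting disagreements for the next training run. Case IDs and decision labels below are illustrative:

```python
# Minimal feedback-loop sketch: compare model predictions against the
# final (e.g. human) decision and keep disagreements as corrected
# training examples for the next retraining cycle. Data is illustrative.

predictions = [("loan-1", "approve"), ("loan-2", "deny"), ("loan-3", "approve")]
final_decisions = {"loan-1": "approve", "loan-2": "approve", "loan-3": "approve"}

retraining_set = [
    (case_id, final_decisions[case_id])   # keep the corrected label
    for case_id, predicted in predictions
    if predicted != final_decisions[case_id]
]

print(retraining_set)  # the one disagreement feeds the next training run
```

Over many cycles this loop improves the model, but, as the bias entries above note, it can also entrench past decisions if those decisions were themselves biased.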
Harmful Bias: According to the first edition of the EU-U.S. terminology and taxonomy for artificial intelligence, the term “Harmful Bias” means: Harmful bias can be either conscious or unconscious. Unconscious, also known as implicit bias, involves associations outside conscious awareness that lead to a negative evaluation of a person on the basis of characteristics such as race, gender, sexual orientation,…
Human Rights Impact Assessment: According to the first edition of the EU-U.S. terminology and taxonomy for artificial intelligence, the term “Human Rights Impact Assessment” means: A human rights impact assessment (HRIA) of AI identifies, understands, and assesses the impact of the AI system on human rights, such as but not limited to, the right to privacy or non-discrimination. AI systems…
Opacity: According to the first edition of the EU-U.S. terminology and taxonomy for artificial intelligence, the term “Opacity” means: When AI system processes, functions, output, or behavior are unavailable or incomprehensible to all stakeholders – usually an antonym for transparency.
Red-Team: According to the first edition of the EU-U.S. terminology and taxonomy for artificial intelligence, the term “Red-Team” means: A group of people authorized and organized to emulate a potential adversary’s attack or exploitation capabilities against an enterprise’s security posture. It is often used to help identify and address potential security vulnerabilities.
Reliability: According to the first edition of the EU-U.S. terminology and taxonomy for artificial intelligence, the term “Reliability” means: An AI system is said to be reliable if it behaves as expected, even for novel inputs on which it has not been trained or tested earlier.
Resilience: According to the first edition of the EU-U.S. terminology and taxonomy for artificial intelligence, the term “Resilience” means: The ability of an AI system to prepare for and adapt to changing conditions and withstand and recover rapidly from disruptions. Resilience includes the ability to withstand and recover from deliberate attacks, accidents, or naturally occurring threats or incidents. The ability…
Robustness (Robust AI): According to the first edition of the EU-U.S. terminology and taxonomy for artificial intelligence, the term “Robustness (Robust AI)” means: Robustness of an AI system encompasses both its technical robustness (ability of a system to maintain its level of performance under a variety of circumstances) as well as its robustness from a social perspective (ensuring that the…
Safety: According to the first edition of the EU-U.S. terminology and taxonomy for artificial intelligence, the term “Safety” means that AI systems should not, under defined conditions, lead to a state in which human life, health, property, or the environment is endangered.
Security: According to the first edition of the EU-U.S. terminology and taxonomy for artificial intelligence, the term “Security” means: The design, maintenance, and protection mechanisms of AI systems and their infrastructure that maintain confidentiality, integrity, and availability.
Systemic Bias: According to the first edition of the EU-U.S. terminology and taxonomy for artificial intelligence, the term “Systemic Bias” means: Systemic bias is a socially consistent structure of harmful bias that is systemically reinforced in institutions, cultural perceptions, and socio-technical infrastructures. AI systems can reinforce systemic biases by reproducing the discriminatory effects of systemic biases when deployed in socially…
Traceability: According to the first edition of the EU-U.S. terminology and taxonomy for artificial intelligence, the term “Traceability” means: Ability to track the journey of a data input through all stages of sampling, labeling, processing, and decision-making.
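Tracking a data input "through all stages of sampling, labeling, processing, and decision-making" can be implemented as a provenance log attached to each record. A minimal sketch; the stage names follow the definition, while the record fields and details are illustrative:

```python
# Minimal traceability sketch: attach a provenance log to a record so
# each lifecycle stage is recorded as the data moves through the system.
# Field values and stage details are illustrative.

record = {"value": 42.0, "provenance": []}

def log_stage(rec, stage, detail):
    # Append an auditable entry for this lifecycle stage.
    rec["provenance"].append({"stage": stage, "detail": detail})
    return rec

log_stage(record, "sampling", "drawn from batch 2024-07")
log_stage(record, "labeling", "annotated by annotator #3")
log_stage(record, "processing", "normalized to [0, 1]")
log_stage(record, "decision", "classified as 'low risk'")

print([p["stage"] for p in record["provenance"]])
# The log reconstructs the input's full journey, stage by stage.
```

With such a log, an auditor can trace any decision back to the exact sampling, labeling, and processing steps that produced it, which is also what makes accountability (above) enforceable in practice.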
Trustworthy AI: According to the first edition of the EU-U.S. terminology and taxonomy for artificial intelligence, the term “Trustworthy AI” means: Trustworthy AI has three components: (1) it should be lawful, ensuring compliance with all applicable laws and regulations; (2) it should be ethical, demonstrating respect for, and ensuring adherence to, ethical principles and values; and (3) it should be…