{"id":415561,"date":"2024-10-20T06:05:56","date_gmt":"2024-10-20T06:05:56","guid":{"rendered":"https:\/\/pdfstandards.shop\/product\/uncategorized\/bsi-pd-iso-iec-tr-240282020-2022\/"},"modified":"2024-10-26T11:20:19","modified_gmt":"2024-10-26T11:20:19","slug":"bsi-pd-iso-iec-tr-240282020-2022","status":"publish","type":"product","link":"https:\/\/pdfstandards.shop\/product\/publishers\/bsi\/bsi-pd-iso-iec-tr-240282020-2022\/","title":{"rendered":"BSI PD ISO\/IEC TR 24028:2020 2022"},"content":{"rendered":"
This document surveys topics related to trustworthiness in AI systems, including the following:

a) approaches to establish trust in AI systems through transparency, explainability, controllability, etc.;

b) engineering pitfalls and typical associated threats and risks to AI systems, along with possible mitigation techniques and methods; and

c) approaches to assess and achieve availability, resiliency, reliability, accuracy, safety, security and privacy of AI systems.

The specification of levels of trustworthiness for AI systems is out of the scope of this document.
| PDF Page | Contents |
|---|---|
| 2 | National foreword |
| 7 | Foreword |
| 8 | Introduction |
| 9 | 1 Scope; 2 Normative references; 3 Terms and definitions |
| 15 | 4 Overview; 5 Existing frameworks applicable to trustworthiness; 5.1 Background |
| 16 | 5.2 Recognition of layers of trust; 5.3 Application of software and data quality standards |
| 18 | 5.4 Application of risk management; 5.5 Hardware-assisted approaches |
| 19 | 6 Stakeholders; 6.1 General concepts |
| 20 | 6.2 Types; 6.3 Assets |
| 21 | 6.4 Values; 7 Recognition of high-level concerns; 7.1 Responsibility, accountability and governance |
| 22 | 7.2 Safety; 8 Vulnerabilities, threats and challenges; 8.1 General |
| 23 | 8.2 AI specific security threats; 8.2.1 General; 8.2.2 Data poisoning; 8.2.3 Adversarial attacks |
| 24 | 8.2.4 Model stealing; 8.2.5 Hardware-focused threats to confidentiality and integrity; 8.3 AI specific privacy threats; 8.3.1 General; 8.3.2 Data acquisition |
| 25 | 8.3.3 Data pre-processing and modelling; 8.3.4 Model query; 8.4 Bias; 8.5 Unpredictability |
| 26 | 8.6 Opaqueness; 8.7 Challenges related to the specification of AI systems |
| 27 | 8.8 Challenges related to the implementation of AI systems; 8.8.1 Data acquisition and preparation; 8.8.2 Modelling |
| 29 | 8.8.3 Model updates; 8.8.4 Software defects; 8.9 Challenges related to the use of AI systems; 8.9.1 Human-computer interaction (HCI) factors |
| 30 | 8.9.2 Misapplication of AI systems that demonstrate realistic human behaviour; 8.10 System hardware faults |
| 31 | 9 Mitigation measures; 9.1 General; 9.2 Transparency |
| 32 | 9.3 Explainability; 9.3.1 General; 9.3.2 Aims of explanation; 9.3.3 Ex-ante vs ex-post explanation |
| 33 | 9.3.4 Approaches to explainability; 9.3.5 Modes of ex-post explanation |
| 34 | 9.3.6 Levels of explainability |
| 35 | 9.3.7 Evaluation of the explanations; 9.4 Controllability; 9.4.1 General |
| 36 | 9.4.2 Human-in-the-loop control points; 9.5 Strategies for reducing bias; 9.6 Privacy; 9.7 Reliability, resilience and robustness |
| 37 | 9.8 Mitigating system hardware faults; 9.9 Functional safety |
| 38 | 9.10 Testing and evaluation; 9.10.1 General; 9.10.2 Software validation and verification methods |
| 40 | 9.10.3 Robustness considerations |
| 41 | 9.10.4 Privacy-related considerations; 9.10.5 System predictability considerations |
| 42 | 9.11 Use and applicability; 9.11.1 Compliance; 9.11.2 Managing expectations; 9.11.3 Product labelling; 9.11.4 Cognitive science research; 10 Conclusions |
| 44 | Annex A (informative) Related work on societal issues |
| 45 | Bibliography |