Certified AI “made in Germany”
Cooperation between Fraunhofer IAIS and the German Federal Office for Information Security (BSI) to develop technical test procedures for the certification of artificial intelligence systems
Press release
Press release on the occasion of the newly established working group between Fraunhofer IAIS and BSI (08.07.2021)
Making AI secure and trustworthy
Artificial intelligence is a key technology of our time. Intelligent systems can be used in almost all areas of life and are already capable of performing many tasks faster and more reliably than humans. For companies to gain decisive competitive advantages from AI, the systems must be trustworthy and function reliably. This requires verifiable technical standards and norms that enable a neutral evaluation of the systems and that inform users and consumers about the assured properties of AI technologies.
To advance the development of AI certification “made in Germany”, Fraunhofer IAIS and the German Federal Office for Information Security (BSI) have signed a cooperation agreement. Its aim is the joint development of test procedures that can serve as a basis for technical standards and norms.
Implementation of the cooperation in the KI.NRW flagship project “Certified AI”
The development of the test procedures is carried out in the KI.NRW flagship project “Certified AI”, which started as the initial project of the cooperation in early 2021. The state-funded project relies on a broad participation process to ensure that the test procedures developed are suitable for practical application and marketable. In industry- and technology-specific user groups, the participants define concrete needs, establish criteria and benchmarks for testing in practice, and conduct pilot tests. This broad participation process pools the know-how of the stakeholders and ensures that the procedures develop into generally accepted standards for AI systems and their verification. Renowned partners from research and industry are working together in the project, including the University of Bonn, the University of Cologne, RWTH Aachen University and the German Institute for Standardization (DIN), as well as numerous DAX 30 and other companies from sectors such as telecommunications, banking, insurance, chemicals, and retail.
Flagships powered by KI.NRW
With the umbrella brand “Flagships powered by KI.NRW”, the Artificial Intelligence Competence Platform North Rhine-Westphalia supports state-funded projects as AI lighthouse projects. The aim is to promote efficient technology transfer and close cooperation between medium-sized companies, start-ups, universities, colleges and research institutes in NRW.
Essential contents of the cooperation agreement between Fraunhofer IAIS and BSI will be developed and implemented within the framework of the KI.NRW flagship project “Certified AI”, which is funded by the federal state of North Rhine-Westphalia. Under the strategic patronage of KI.NRW, the competence platform provides communications support for the funded projects and positions NRW as an AI location by marketing the results at the European level. The focus is on the sustainable transfer and further utilization of the project results.
Certified AI
The experts at Fraunhofer IAIS have already laid important foundations for the development of an AI certification: in an interdisciplinary research project, scientists from the fields of computer science, law and philosophy identified the central fields of action and formulated initial guidelines for the development of a test catalog for the certification of AI systems. The results were published in 2019 in the whitepaper “Trustworthy Use of Artificial Intelligence”.
Interdisciplinary fields of action for the development of an AI certification
As a basis for the certification of AI systems, seven fields of action were defined in interdisciplinary cooperation (listed below). The development of the technical test procedures starts here; a simplified illustration of what such a check could look like follows the list.
Ethics and Law: Does the AI application respect social values and laws?
Autonomy and Control: Is a self-determined, effective use of the AI possible?
Fairness: Does the AI treat all persons concerned fairly?
Transparency: Are the functions of the AI and the decisions it makes comprehensible?
Reliability: Does the AI work reliably, and is it robust?
Security: Is the AI protected against attacks, accidents and errors?
Data Protection: Does the AI protect privacy and other sensitive information?
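To give a rough idea of what a technical test along one of these dimensions might look like, the following Python sketch checks a toy fairness indicator: the difference in approval rates between two groups of persons affected by an AI decision. The metric, the threshold and the synthetic data are purely illustrative assumptions and are not taken from the test procedures developed in the project.

# Illustrative sketch only - not the test procedures developed in "Certified AI".
# The fairness metric (approval-rate gap between groups) and the threshold
# are assumptions chosen for demonstration.
from dataclasses import dataclass

@dataclass
class Decision:
    group: str       # protected attribute, e.g. "A" or "B"
    approved: bool   # outcome produced by the AI system under test

def approval_rate_gap(decisions):
    """Largest difference in approval rates between any two groups."""
    rates = []
    for group in {d.group for d in decisions}:
        subset = [d for d in decisions if d.group == group]
        rates.append(sum(d.approved for d in subset) / len(subset))
    return max(rates) - min(rates)

if __name__ == "__main__":
    # Synthetic decisions standing in for real system output.
    sample = ([Decision("A", True)] * 80 + [Decision("A", False)] * 20 +
              [Decision("B", True)] * 65 + [Decision("B", False)] * 35)
    gap = approval_rate_gap(sample)
    threshold = 0.10  # hypothetical acceptance threshold
    print(f"approval-rate gap: {gap:.2f} -> {'pass' if gap <= threshold else 'fail'}")

A real test procedure naturally involves far more than a single number; the sketch merely illustrates the kind of quantifiable check on which technical certification criteria can build.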
Building on the development of the technical test procedures, the interdisciplinary discourse on the design of ethical and legal frameworks is also to be continued. The goal is to strengthen the trust and acceptance of AI-based applications among companies, users and societal actors. The development of the test catalog is based on the recommendations of the German Federal Government’s Data Ethics Commission and the European Union’s High-Level Expert Group on AI, and is to take into account technical quality criteria such as reliability and security as well as criteria of transparency and fairness.