Making Artificial Intelligence secure and trustworthy: the next big step towards an AI certification »Made in Germany«
Fraunhofer IAIS and the Federal Office for Information Security (BSI) are starting a strategic cooperation to develop test methods for the certification of artificial intelligence systems.
In the presence of NRW Digital Minister Prof. Dr. Andreas Pinkwart, Arne Schönbohm, President of the Federal Office for Information Security (BSI), and Prof. Dr. Stefan Wrobel, Director of the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS, today signed a cooperation agreement to advance the development of an AI certification »made in Germany«. The goal of the collaboration is to develop test methods that can serve as a basis for technical standards and norms. To this end, the cooperation partners are working with numerous partners from Germany and Europe. As the first major project within the framework of the cooperation, the flagship project »Certified AI« of the Artificial Intelligence Competence Platform North Rhine-Westphalia KI.NRW will start at the beginning of 2021. In the project, companies, among others, define the concrete needs for testing procedures and conduct pilot tests.
Artificial intelligence (AI) is a key technology of the present. Intelligent systems can be used in almost all areas of life and are already capable of performing many tasks faster and more reliably than humans. For companies to achieve decisive competitive advantages through the use of AI, AI systems must be trustworthy and function reliably. This requires testable technical standards and norms that enable a neutral evaluation of the systems and that also inform users and consumers about the assured properties of AI technologies.
The BSI and Fraunhofer IAIS have now signed a cooperation agreement for the joint development of test methods. The cooperation enables experts to work closely together to establish technical product and process testing of AI systems in industry.
»Artificial intelligence is a central technology of digitalization. In the course of digitalization, AI methods are increasingly being used on a large scale for security-critical tasks. As the architect of secure digitalization in Germany, the BSI is therefore also intensively engaged with the topic and supports the federal government in driving forward the technologically and economically successful as well as socially accepted use of AI solutions in Germany. Confidence among users is important for the acceptance of new technologies. This is created, among other things, by transparent testing, evaluation and certification of AI systems. The basis for uniform standards and norms is the development of testing procedures, which we are now tackling with our long-standing partner Fraunhofer IAIS. At the same time, we have a reliable partner in the NRW Ministry of Economic Affairs, which creates and promotes good framework conditions for innovation,« says Arne Schönbohm, President of the Federal Office for Information Security BSI, Bonn.
»Fraunhofer IAIS attaches great importance to the development of trustworthy AI solutions and has continuously expanded its research focus on AI assurance over the past years. By developing our testing procedures with a view to AI certification, we are creating reliable standards for the development and evaluation of AI systems. We will be conducting our first audits with companies as early as next year. I am pleased that in the BSI we have a strong partner at our side with many years of experience in establishing IT standards. Now we need to set the course for certification together and bring the key players together,« says Prof. Dr. Stefan Wrobel, Director of the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS, Sankt Augustin.
As the test procedures are developed further, their practicality and marketability are to be improved in close coordination with industry. To this end, a major flagship project will start in early 2021, funded by the state of North Rhine-Westphalia as part of the KI.NRW competence platform. Renowned partners from research and application are working together in the project, including the University of Bonn, the University of Cologne, RWTH Aachen University, the German Institute for Standardization DIN, as well as numerous DAX 30 and other companies from various industries such as telecommunications, banking, insurance, chemicals, and trade. In industry- and technology-related user groups, the participants define concrete needs, establish criteria and benchmarks for testing in practice, and conduct pilot tests. This broad participation process ensures that the procedures develop into generally accepted standards for AI systems and their testing, while being flanked by legal, ethical and philosophical considerations.
»With our outstanding competencies and the strong KI.NRW network, North Rhine-Westphalia can play a leading role in the further development of the economy and society. For this to succeed, we need to make the use of artificial intelligence trustworthy and secure. Independent certification of AI systems helps us to do this: It strengthens trust in modern IT technology and is also recognized internationally as an important competitive advantage. With the development of marketable test procedures, we are approaching this goal with great strides. This important project with strong partners from North Rhine-Westphalia demonstrates the great innovative power that is helping to establish the ‘AI made in Germany’ brand,« says Prof. Dr. Andreas Pinkwart, Minister for Economic Affairs, Innovation, Digitalization and Energy of the state of North Rhine-Westphalia.

Fraunhofer IAIS experts already laid important foundations for the development of an AI certification last year as part of an interdisciplinary research project with scientists from the fields of computer science, law and philosophy, who identified the central fields of action and formulated initial guidelines for the development of a test catalog for the certification of AI systems. The results were published in 2019 in the white paper »Trustworthy Use of Artificial Intelligence«.