The AI Act is intended to regulate the use of AI

The regulation of AI technologies is currently being negotiated in Europe. Politicians find themselves caught between innovative companies, which fear competitive disadvantages, and consumer advocates, who demand clear limits.


Artificial intelligence scares many people. With clear rules, the EU wants to promote trust in such technologies and protect the fundamental rights of its citizens. Last week, the ministers responsible for telecommunications in the EU member states agreed on the main features of the so-called AI Act. Further details must now be negotiated with the EU Parliament and the Commission. The main features of the forthcoming regulation, however, are already stoking fears in industry of rules that could create great uncertainty and significantly affect the development of the technology.


Contents of the AI Act

The current draft, including annexes, is around 125 pages long. It takes a risk-based approach: the rules depend on the level of risk attributed to a specific application (minimal, limited, high, or unacceptable). Regulation is to focus on high-risk applications, which the Commission estimates account for up to 15 percent of all AI systems. These include the operation of critical infrastructure, algorithm-assisted surgery, risk models for life insurance, credit scoring in the banking sector, and systems that pre-sort job applications or forecast the behavior of criminals.

For such applications, companies must introduce risk management for AI, fulfill transparency obligations towards users, submit technical documentation with precise information on the data used, and register their program in an EU database.


Criticism of the draft

Publicly, most companies are reluctant to voice criticism. Instead, they exert their influence through associations that regularly make representations in Berlin and Brussels. Industry criticism is aimed first and foremost at the definitions: in addition to “concepts of machine learning”, the draft law also designates statistical approaches as well as search and optimization methods as artificial intelligence. Almost any modern software could fall under this, as the KI-Bundesverband criticized.

In the view of industry, which applications entail a high risk should also be defined much more precisely. Bitkom, for example, called for specific applications to be classified rather than general use cases.

A DAX-listed group also criticized the fact that the draft leaves open who bears the bureaucratic obligations for complex products (e.g. machine controls or cars). A “huge overhead” is to be expected here from the coordination between manufacturers and their numerous suppliers.

The technology industry sees further need for discussion when it comes to handling data. According to the draft, training data should be “representative, error-free and complete” in order to prevent, for example, discrimination against under-represented population groups. Developers counter that high-quality data sets are available only to a very limited extent, which makes the requirement difficult to meet.

Beyond these specific points of criticism, the digital association Bitkom generally warns against “focusing too much on risks”.

Source: Handelsblatt