February 1, 2019

 

 

North Star Trustworthy AI - a Questionable Marketing Strategy Made in Europe

 

A Statement in the context of the "Stakeholders' Consultation on Draft AI Ethics Guidelines"

 

 

My name is Claudia Otto; I am a German lawyer based in Frankfurt am Main. I have been working on the compatibility of law, "new" technologies and ethical principles for many years: for more than two years in my own specialized law firm, and since 2017 also as editor, editor-in-chief and author of "Recht innovativ (Ri)". I am also a member of the AI Alliance and wish to comment on the Draft Ethics Guidelines for Trustworthy AI1 (the "Draft") of the High-Level Expert Group on Artificial Intelligence (the "HLE-Group") dated 18 December 2018. However, the framework conditions have not allowed me to deal with the Draft as comprehensively as I would have liked by today's deadline. I will therefore confine myself to these framework conditions and to the core shortcomings of the Draft that I have identified. The aim is to encourage the European Commission and the HLE-Group to improve both. In my opinion, this goal cannot be achieved by copying the following explanations into the comment field "General Comments" on the feedback page of the Futurium platform:

 

 

I appreciate the European Commission's initiative to give high priority to fundamental ethical issues relating to technologies which, in the absence of a clear definition, are commonly referred to as "artificial intelligence (AI)". I also appreciate the creation of a body that brings together different age groups, professions and careers, sound professional views, and professional and personal experience and knowledge to shape the future of European citizens2 and society on a valid and firm basis.

 

Nevertheless, there is an urgent need for improvement, which I would like to explain below:

 

I. The European Commission's tight timetable costs quality and safety

 

As the Draft itself shows, in particular on pages 11, 12, 22, 24 and 28, the time allotted is far too short for an "expert group" of 52 members. Ethical guidelines for Europe should not be produced under time pressure, but on the basis of scientific research and comprehensive deliberation.

 

Nor can there be a fruitful consultation of stakeholders within a period of less than a month. In the year-end holiday season, there is hardly enough time for a well-founded examination of complex questions concerning "artificial intelligence" that honours their importance for the future of Europe's citizens. Moreover, the consultation cannot deliver robust results if the Draft is made known and accessible to only a few stakeholders. There are very few reflective articles in the press, and those that exist hide behind paywalls. The search engine results are sparse and mostly meaningless. And this for an issue that is deemed to determine the future of Europe and its citizens.

 

The European Commission should therefore reconsider its timetable. The HLE-Group should improve its communication.

 

II. Lack (of enforcement) of ethical requirements for the HLE-Group itself

 

On page 2 of the Draft there is a link to a list of the 52 members of the HLE-Group, including linked short CVs. An alphabetical list of names can be found on the last page of the document. There is a Register of Commission Expert Groups with linked Declarations of Interest (DOI).2 There is also a Transparency Register, from which lobbying activities can be seen.3 So far, so good.

 

The proportion of "high-level" experts who openly represent the provider side, and thus the primary addressees of ethical (read: restrictive) principles in matters of "AI", is already very high: 13 members represent companies or groups, i.e. 25 percent. One glance at the Transparency Register, for example regarding Google, reveals that it pursues the interest "(...) to organize the world's information and make it universally accessible and useful". How this serves the European citizen is not clear.

 

The flowery short CVs of "former" and self-proclaimed "experts" give no less reason for scepticism. Random samples show that these CVs are more up-to-date than the DOIs in the register, a fact that contradicts the final declaration of the DOI.

 

 

The DOI and short CVs are even grossly inconsistent. The Chairman, for example, states in his DOI that in the last five years he has not been a member of a management or supervisory body (2) or another representative of interests or views (6) that could constitute a potential conflict with the task of the HLE-Group. According to his own information,4 however, he holds such positions in four cases.

 

 

SAP, for example, is openly listed in the Transparency Register as a lobbyist and is also openly represented by Mr. Noga in the HLE-Group. It is incomprehensible why this gap is not closed by the Chairman and the European Commission.

 

The disclaimer of the European Commission on page 2 of the Draft

 

"The contents of this working document are the sole responsibility of the High-Level Expert Group on Artificial Intelligence (AI HLEG). Although staff of the Commission services facilitated the preparation of the Guidelines, the views expressed in this document reflect the opinion of the AI HLEG, and may not in any circumstances be regarded as stating an official position of the European Commission."

 

comes as a surprise when a member of the HLE-Group is revealed only at second or third glance to be working for the European Commission.5

 

 

Ms Bouarfa's DOI, available through the Register of Commission Expert Groups, confirms an "expert", but not a secretarial, role for the European Commission until 2020. It also confirms a significant economic interest due to an investment. This is transparent, but it is not consistent on the part of the Commission. According to Annex I (Classification Form) to the "CALL FOR APPLICATIONS FOR THE SELECTION OF MEMBERS OF THE HIGH-LEVEL EXPERT GROUP ON ARTIFICIAL INTELLIGENCE", independence and action in the public interest are required.

 

 

Quite obviously, independence and the absence of competing interests are neither examined further nor enforced.

 

Critical connections and money flows, whether already established or merely planned, such as those between Facebook and the Technical University of Munich,6 which is also represented in the HLE-Group, should likewise be disclosed (even retrospectively). The mere impression of potential influence is sufficient to cast doubt on the independence of the HLE-Group. Each individual HLE-Group member should therefore be measured less by the inflationary, and therefore meaningless, "expert" designation than by his or her connections to companies whose business models are or could be affected by the "Ethical Guidelines" and by future EU documents based on them.

 

The above-mentioned circumstances give rise to doubts as to whether the HLE-Group will take an independent lead oriented towards the well-being of European citizens. Those who decide on ethics and the future of human well-being should subject themselves to ethical principles, and the European Commission must effectively assess and enforce these requirements. Keeping transparency registers alone is not sufficient if the important information is out of date, contradictory, and can only be compiled with considerable effort.

 

III. Lack of objective distance; promotional language and establishment of a misleading concept of AI

 

A paper on ethical principles which assesses the future of people under the premise "Trustworthy AI will be our north star" lacks objective distance. Emotionalisation through romantic imagery, storytelling in the style of novels and children's books,7 and the constantly repeated pairing of "AI" and "trustworthy", beginning in the summary and repeated 93 times (!) throughout the document, are extremely questionable. This is particularly evident on pages 11 to 13 of the Draft, where the HLE-Group devotes a mere 14 lines to autonomous weapon systems and reveals that the potential negative long-term consequences of the use of "artificial intelligence" are neither known nor investigated. There is relevant scientific literature that could have been evaluated here. Buzzwords and marketing terms such as "Trustworthy AI" cannot replace necessary research and long-term impact assessment, all the more so when the stakeholder feedback expressly called for is limited to a time window of only one month.

 

Whoever establishes "Trustworthy AI" as a term through numerous repetitions (again: 93, on 33 pages) creates an illusion of truth. A legend. Through constant repetition, a supposed truth emerges that no one doubts anymore,8 even if, between the lines and the bold "Trustworthy AI" repetitions, the document says that only a few voluntary requirements have to be met. The common reader no longer consults the original document once "Trustworthy AI" has become established in his or her mind, just as nobody reads and understands the extensive and arbitrarily drafted terms of use and privacy statements of providers such as Google9 or Facebook10 to learn how they interpret "trustworthiness".
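
Anyone can check the repetition figure for themselves. The following is a minimal Python sketch, assuming a local copy of the Draft (footnote 1) saved as "draft.pdf" (a hypothetical filename) and the pdfminer.six package; it counts the occurrences of the term in the text extracted from the published PDF:

    # Minimal sketch: count occurrences of "Trustworthy AI" in the Draft.
    # Assumptions: the PDF from footnote 1 is saved as "draft.pdf"
    # (hypothetical filename) and pdfminer.six is installed.
    from pdfminer.high_level import extract_text

    text = extract_text("draft.pdf")
    # Case-insensitive count; occurrences broken across line breaks in the
    # extracted text would be missed, so the result is a lower bound.
    count = text.lower().count("trustworthy ai")
    print(f'"Trustworthy AI" appears {count} times in the extracted text.')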

 

Ethical guidelines must be based on scientific and ethical principles, not on personal wishes, feelings and more or less transparent motives of the HLE-Group. They must not be a marketing concept to seduce or even mislead the citizens of Europe.

 

IV. "Trustworthy AI" implies machine accountability

 

In the body text, the HLE-Group speaks of human-centered AI and human accountability. At the same time, people are expected to place their trust in machines. "Trustworthy AI" is repeated 93 times so that no one even demands to see the foundation of this trust.

 

What relationship of trust between man and machine do the experts envision here? Trust and knowledge are mutually exclusive. People, in particular consumers, who come into contact with technical products do not have to trust; under European law, they must be fully informed with regard to:

 

- who the manufacturer is,

- what the product can do,

- whether it has the necessary safety,

- how to use it,

- which dangers may result from its use or misuse,

- for how long, and with which accompanying obligations, it can or should be used, and

- which rights result from the contract that necessarily precedes use.

 

To demand trust in connection with "AI" is to neglect necessary information. And to neglect applicable law.

 

But this is not the only respect in which the Draft is paradoxical. The HLE-Group does not evaluate the shift of liability to an "electronic person" as ethical or unethical, although the term "Trustworthy AI" necessarily implies machine accountability as a consequence of the "relationship of trust". Here, too, the impression arises of the pursuit of extraneous goals rather than a focus on the European citizen.

 

The creation of a third kind of person,11 as a subject of liability along the lines envisaged by the European Parliament,12 leaves no room for the necessary traceability13 of damaging decisions of "artificial intelligence". There would be no incentive to make potentially defective decisions of "artificial intelligence" traceable to an accountable person, and thus no incentive for that person to prevent them in his or her own interest.14 Any incentive to improve and to protect other stakeholders would be removed. The result would be a development of "artificial intelligence" that threatens humans, which is precisely what the HLE-Group is supposed to counteract.

 

Animals are mentioned only once in the Draft. The environment as a protected good hardly appears either. Neither develops trust in machines. Nevertheless, both must be protected on the basis of ethical principles. Both are harmed by humans, not by the machines humans use.

 

The HLE-Group must therefore take a clear position, for example to the effect that the idea of the "electronic person" would do a disservice to Europe's goals.

 

V. The underlying term "artificial intelligence" is too narrow and futuristic

 

The HLE-Group bases its ethical principles on an understanding of "artificial intelligence", which raises considerable problems:

 

"Artificial intelligence (AI) refers to systems designed by humans that, given a complex goal, act in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data and deciding the best action(s) to take (according to pre-defined parameters) to achieve the given goal. AI systems can also be designed to learn to adapt their behaviour by analysing how the environment is affected by their previous actions.

As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems)."15

 

Both descriptions (read together) are far too narrow and thus do not cover essential areas of what humans perceive as "artificial intelligence". So-called weak AI16 is largely not covered by the first paragraph, yet its marketing as "artificial intelligence" raises ethical questions that the HLE-Group leaves unaddressed and unsolved. When it comes to ethical principles, "AI" should be defined broadly in order to avoid inconsistencies, harm (e.g. deception through the creation of a misconception) and negative effects on the economy (e.g. unfair competition through misleading claims).

 

Basing the definition of "artificial intelligence" on "reasoning", an expression of the ability to think, shows that the HLE-Group has a mistaken understanding of "artificial intelligence" and that it ignores current developments and scientific findings. The HLE-Group treats so-called strong AI,17 which includes machine reasoning, as the standard to be regulated.

 

The HLE-Group thus models "artificial intelligence" on the concept of human intelligence. To illustrate that the HLE-Group makes unrealistic demands on "artificial intelligence", the first paragraph of the glossary passage quoted above is slightly modified by substitutions, recognizable by square brackets:

 

"[Analyst (A)] refers to [a human] designed by humans that, given a complex goal, act[s] in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data and deciding the best action(s) to take (according to pre-defined parameters) to achieve the given goal.

[Children] can also be [taught] to learn to adapt their behaviour by analysing how the environment is affected by their previous actions."

 

This means that "artificial intelligence" would first have to reach the level of human intelligence in order to be subject to ethical principles and guidelines. That, however, will not be the case in the near future. The definition thus contradicts both the Draft's intention and ethics itself.

 

VI. Why should the ethical requirements for "artificial intelligence" lag behind those of the pharmaceutical industry?

 

"It strives to facilitate and enable 'Trustworthy AI made in Europe' which will enhance the well-being of European citizens."18

 

We are talking about the welfare, health and lives of people (and animals) in Europe, and about technical solutions and applications that can and will also be used in the healthcare sector, even if they do not yet represent "artificial intelligence" in the sense of the HLE-Group's Draft. In the pharmaceutical industry, in addition to comprehensive transparency through information, meticulous care is taken to ensure that contributions to research and development for the benefit of human well-being on the one hand and sales interests on the other are not mingled (the so-called separation principle).

 

These ethical principles must also apply to the development of "artificial intelligence", whatever the industry. Sales interests must certainly not dictate ethical principles.

 

VII. Conclusion: A good design takes time

 

In summary, the ethical principles and guidelines laid down in the Draft threaten to go astray because of their problematic framework conditions. The Draft rests on the unrealistic expectation that machines exhibit human intelligence. On this basis, the HLE-Group establishes and consolidates an unrealistic idea of a "Trustworthy AI" that does not (yet) exist. Against this background, the extremely tight timetable is factually incomprehensible.

 

The members of the HLE-Group should commit themselves to ethical principles and act accordingly. The European Commission must monitor and enforce compliance with them. The overall impression is currently an unpleasant one: scientific findings are denied the limelight. Instead, priority is given to establishing a "Trustworthy AI made in Europe" brand for the benefit of personal or represented financial interests. This brand, built on artificially created trust, benefits above all those manufacturers and suppliers of "artificially intelligent" products who do not want to provide either the information required for a human decision to purchase or use them, or the explainability and/or traceability of "artificially intelligent" decisions.

 

I expressly suggest that the laudable project of "ethical framework conditions and requirements for artificial intelligence in Europe" be given the time it needs, in keeping with its significance for people, animals and the environment in Europe. Hurried decisions do not lead to healthy competition, especially if the term "Trustworthy AI" is misleading in itself. Stakeholders must also be given more time to provide high-quality feedback. A good result requires interaction. And the time for it.

 

 

 

1  https://ec.europa.eu/futurium/en/system/files/ged/ai_hleg_draft_ethics_guidelines_18_december.pdf (last accessed on February 1, 2019).

2  http://ec.europa.eu/transparency/regexpert/index.cfm?do=groupDetail.groupDetail&groupID=3591 (last accessed on February 1, 2019).

3  http://ec.europa.eu/transparencyregister/public/consultation/displaylobbyist.do?id=03181945560-59&locale=en (last accessed on February 1, 2019).

4  https://ec.europa.eu/futurium/en/european-ai-alliance/european-ai-alliance-steering-group (last accessed on February 1, 2019).

5  https://ec.europa.eu/futurium/en/european-ai-alliance/european-ai-alliance-steering-group (last accessed on February 1, 2019).

6  Thiel, "Geschlossener Wettbewerb", FAZ of January 30, 2019, https://www.faz.net/aktuell/feuilleton/hoch-schule/geschlossener-wettbewerb-die-tu-muenchen-erlaeutert-ihren-facebook-deal-16013071.html (last accessed on February 1, 2019).

7  Lastella, „Nordsternfunkeln“; Bickel, „Was ist mit Nordstern los?“, „Reitlehrer Lars Hansen – Alles für Nordstern“; Reynolds, „The North Star“; Root, „One North Star – A Counting book“ and many more.

8  Le Bon, "Psychology of the Masses" from 1895 (!), excerpts:

"The pure, simple assertion, without justification and without any proof, is a sure means to instill an idea into the mass soul. The more certain an assertion, and the freer it is of evidence, the more awe it awakens" (p. 170); "but the assertion only has a real impact if it is constantly repeated, if possible with the same expressions. Napoleon said there was only one serious figure of speech: repetition. What is repeated becomes so firmly fixed in the minds that it is finally accepted as a proven truth." (p. 171); eBook original edition © 05/2013 by eClassica.

9  See CNIL decision of 21 January 2019, https://www.cnil.fr/en/cnils-restricted-committee-imposes-financial-penalty-50-million-euros-against-google-llc (English summary), last accessed on February 1, 2019.

10  See Fn 6.

11  Besides the legal and natural person.

12  European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)),

http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//TEXT+TA+P8-TA-2017-0051+0+DOC+XML+V0//DE (last accessed on February 1, 2019).

13  Otto, Ri 2018, "Das dritte Ich - Ist die "Schizophrenie" künstlich intelligenter Systeme behandelbar?", p. 68 ff; Otto, Ri 2018, "Die größte Verwundbarkeit ist die Unwissenheit", p. 136 ff.

14  Otto, Ri 2018, „Die größte Verwundbarkeit ist die Unwissenheit“, pp. 136, 139.

15  Draft, page iv.

16  About the term: Otto, Ri 2018, "Das dritte Ich - Ist die "Schizophrenie" künstlich intelligenter Systeme behandelbar?", pp. 68, 72.

17  About the term: Otto, Ri 2018, "Das dritte Ich - Ist die "Schizophrenie" künstlich intelligenter Systeme behandelbar?", pp. 68, 72.

18  Draft, page iii.

 

 
