Bernhard Jakl
Some assume that the rise of risky AI systems marks the end of the Enlightenment and of its related basis for trust: the autonomy-oriented justification and reasoning of norms. A first look at the utopias and dystopias surrounding such innovations seems to confirm this.
However, a second look at the legal framework suggests the opposite. The legal system offers established, autonomy-oriented normative standards that have already proven their trustworthiness in many societal fields. These standards are too often set aside in the regulation of AI systems, even though they would contribute significantly to an autonomy-oriented, legal-systematic classification of such systems.
On this basis, the relationship between risky AI systems and trustworthy normative justifications is discussed using examples from current regulatory proposals. I argue for avoiding the “mechanistic trap” in the ongoing juridification of AI systems in favor of a more autonomy-oriented approach.