
A Vision for the "Law of the Future": from a de jure condendo perspective, De Pauli outlines three non-negotiable pillars for AI integration into society
“The role of law is to ensure that innovation does not come at the expense of fundamental rights.” — Luca De Pauli

UDINE, ITALY, March 15, 2026 /EINPresswire.com/ -- As Artificial Intelligence continues to redefine the boundaries of legal frameworks, the gap between technological progress and regulatory oversight is widening. Luca De Pauli, partner at the prestigious law firm Ponti DePauli Partners in Udine, has published a critical analysis of the use of algorithmic systems in high-risk decision-making processes.
As AI integration shifts from theoretical "algo-ethics" to an urgent legal priority, De Pauli warns that the pursuit of superhuman speed and precision is creating an "accountability vacuum" that threatens the foundations of modern jurisprudence.
The Fragmentation of Responsibility
International law is built upon the certainty of individual or corporate liability. However, when a decision is mediated or generated by an algorithm, this chain of responsibility is often broken.
"Artificial Intelligence is not a legal entity; it cannot be held responsible," states Luca De Pauli. "We are facing a concrete risk where it becomes impossible to distinguish whether the fault lies with the programmer, the entity implementing the system, or the human supervisor who validated the result. This grey area represents a direct threat to the Rule of Law."
The Limits of Current Regulation
While the EU AI Act (Regulation (EU) 2024/1689) represented a historic step forward, De Pauli highlights a major shortcoming: the regulation focuses predominantly on market safety and technical compliance, often overlooking the fundamental issue of delegating decisions concerning human rights.
A Vision for the "Law of the Future"
From a de jure condendo perspective (proposing what the law should be), De Pauli outlines three non-negotiable pillars for AI integration into society:
Human-Centered Jurisdiction: Decisions affecting life, health, and safety must remain inherently traceable to a human being.
Explainability: The logical process leading to an algorithmic conclusion must be transparent and reconstructible.
Rigorous Liability Frameworks: Eliminating "shadow zones" for developers and users of high-risk AI to ensure victims of errors have clear legal recourse.
"The role of law is to ensure that innovation does not come at the expense of fundamental rights," De Pauli concludes. "Even in the age of algorithms, every decision that alters a person's life must be anchored to a clearly identifiable human responsibility."
https://ponti-partners.it/
Francesca Schenetti
Ti Lancio
+39 339 809 3543
Legal Disclaimer:
EIN Presswire provides this news content "as is" without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.
