INDUSTRY 4.0 2026, SUMMER SESSION • Presented at the XI International Scientific Conference
“High Technologies. Business. Society”, Borovets, 23–26 March 2026

Management and Regulation of Artificial Intelligence Models in Public Administration: Cryptographic Transparency and Digitalization of Legal Norms

Radoslavov, R. Y.1 & Genov, M.2

1 Independent Researcher, Attorney at Law
Ruse, Bulgaria
E-mail: radoslav@radoslavov.bg

2 Strategic Enquiries, Advanced Consulting-London RR
London, United Kingdom
E-mail: zkap@advanced-consulting.london

Abstract: This paper proposes a conceptual methodological framework based on a Dual-Domain Architecture mediated by a Zero-Knowledge Audit Proxy (ZKAP) to reconcile AI Act accountability with GDPR data minimization. Legal norms are polynomialized into rank-1 constraint system (R1CS) representations, transforming compliance into a formally verifiable computational property. For cognitively opaque exascale models, these invariants may be hardware-anchored through a Provable Arithmetic Logic Unit (pALU), ensuring determinism and resistance to algorithmic drift. For lower-risk or on-premise systems, ZKAP operates in a software-only configuration, enabling periodic asymmetric regulatory proofs without silicon-level integration. A calibrated threshold distinguishes admissible technical variance from structural divergence, triggering mandatory safeguards. The framework provides a proportional, scalable, and cryptographically verifiable oversight model applicable both to future non-explainable AI systems and to lighter local infrastructures.
Keywords: ZKAP, pALU, Polynomialization of Law, R1CS, Safety-by-Design, Dual-Domain Architecture, Cryptographically Anchored Verification.

1. Introduction

The contemporary regulatory landscape confronts a fundamental structural tension between the imperative for accountability over artificial intelligence systems, as codified in the AI Act (Regulation (EU) 2024/1689) [3], and the obligation to protect personal data, intellectual property, and trade secrets under the General Data Protection Regulation (GDPR) [1, 2]. This tension generates a paradox of material significance: how can the State ensure substantive compliance of algorithmic processes without accessing the very data and model parameters whose protection is guaranteed by law?

The deployment of high-complexity AI architectures in public administration — where individual freedoms, property rights, and procedural guarantees are directly at stake — amplifies this problem exponentially. Current international standards, including ISO/IEC 42001:2023 [5], establish management frameworks for AI governance but do not provide a technically viable mechanism for continuous verification of compliance at the computational level.

The methodological challenge lies in the incompatibility of competing paradigms. The principles of transparency and accountability, as articulated in post-AI Act ethical guidelines [4], presuppose access to or interpretability of algorithmic decision processes. Yet for non-explainable (non-XAI) systems operating at exascale, such access is both technically impractical and legally impermissible. The information asymmetry between the AI operator and the regulator creates conditions under which oversight becomes procedurally formal but substantively empty [8].

Existing approaches to this problem address individual dimensions but fail to provide an integrated solution. Explainable AI (XAI) methods such as SHAP and LIME [8] offer post-hoc approximations of model behaviour but do not constitute proof of compliance. Regulatory sandboxes provide controlled testing environments but do not address continuous deployment verification. Human-in-the-loop architectures become cognitively infeasible beyond certain thresholds of parametric complexity [10].

In broader institutional terms, the implications extend beyond technical regulation to questions of sovereignty and democratic control over algorithmic governance [11]. These questions acquire particular urgency as Member States approach the August 2026 enforcement deadline for high-risk AI system obligations [3].

The present paper proposes a conceptual framework for resolving this structural tension through a methodological approach that displaces the regulatory paradigm from disclosure-based auditing to proof-based verification. The framework addresses the full spectrum of AI deployment scenarios — from exascale non-explainable systems requiring hardware-level guarantees to lighter on-premise infrastructures suitable for software-only verification. The objective is the construction of a proportional, scalable, and cryptographically verifiable oversight model that preserves both regulatory authority and fundamental rights protection.

2. Preconditions and Methods for Problem Resolution

The analytical framework addresses two co-existing yet fundamentally conflicting regulatory instruments: the AI Act (Regulation (EU) 2024/1689) [3] and the General Data Protection Regulation (GDPR) [1, 2]. At their intersection lies the central question of reconciling substantive algorithmic accountability with the protection of privacy and trade secrets.

The methodological approach integrates several complementary analytical perspectives. The legal-technical disconnect — the gap between normative obligations and their technical realizability — serves as the primary object of inquiry. Established frameworks, including ISO/IEC 42001:2023 [5], provide a management-level foundation but do not resolve the operational challenge of continuous compliance verification at the computational level.

The proportionality of regulatory intervention constitutes a central design constraint. The proposed approach differentiates between high-risk exascale systems requiring hardware-anchored verification and lower-risk local deployments where software-based periodic proofs suffice. This graduated architecture reflects the risk-based approach inherent in the AI Act while avoiding disproportionate regulatory burden on lighter infrastructures [7].

The comparative analysis draws upon developments in verifiable computation, zero-knowledge proof systems, and formal verification methodologies, situating the proposed framework within the broader context of cryptographic approaches to regulatory compliance [4]. The framework is designed to be technology-neutral in its regulatory function while technically specific in its verification mechanisms.

3. Solution to the Problem Under Investigation

3.1. Dual-Domain Architecture

The central architectural contribution is the formal separation of the computational environment into two functionally distinct domains: the Restricted Domain (R-Domain) — containing primary data and model parameters — and the Compliance Domain (C-Domain) — where regulatory verification occurs. The boundary between these domains is mediated by a Zero-Knowledge Audit Proxy (ZKAP), which enables the transfer of compliance evidence from R-Domain to C-Domain without transmitting any protected information across the boundary [7].
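
To fix intuitions, the boundary can be expressed as an interface contract: the only object permitted to cross from the R-Domain into the C-Domain is an opaque proof artefact. The following minimal sketch is illustrative only; the class and method names are our assumptions, not part of any specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ComplianceProof:
    """Opaque evidence object: carries a proof and public inputs, never raw data."""
    statement_id: str      # identifier of the polynomialized norm being proven
    public_inputs: bytes   # public parameters of the statement (e.g., thresholds)
    proof: bytes           # zero-knowledge proof produced inside the R-Domain

class ZeroKnowledgeAuditProxy:
    """Mediates the R-Domain / C-Domain boundary: only ComplianceProof crosses it."""

    def __init__(self, prover, verifier):
        self._prover = prover      # runs inside the R-Domain, sees model and data
        self._verifier = verifier  # runs inside the C-Domain, sees only proofs

    def attest(self, statement_id: str, public_inputs: bytes) -> ComplianceProof:
        # Proof generation happens entirely within the R-Domain.
        proof = self._prover.prove(statement_id, public_inputs)
        return ComplianceProof(statement_id, public_inputs, proof)

    def audit(self, evidence: ComplianceProof) -> bool:
        # The regulator verifies in the C-Domain without access to protected inputs.
        return self._verifier.verify(
            evidence.statement_id, evidence.public_inputs, evidence.proof
        )
```

The essential design property is that audit() receives only the proof artefact: no call path exists through which the C-Domain verifier can reach R-Domain state.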

3.2. Polynomialization of Legal Norms

The first operational stage involves the formal translation of regulatory requirements into verifiable computational predicates. Specific legal obligations — including non-discrimination criteria, proportionality constraints, temporal requirements, procedural safeguards, and accuracy thresholds — are encoded as polynomial equations over finite fields. These polynomial representations, expressible as rank-1 constraint systems (R1CS), enable the construction of arithmetic circuits amenable to zero-knowledge proof generation [9].
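
To make the encoding concrete, consider a toy rank-1 constraint system. Each R1CS constraint asserts (A_i · w) × (B_i · w) = (C_i · w) over a finite field for a witness vector w. The sketch below is our illustration; the witness layout, the constraint choice, and the field size are assumptions. It encodes a simplified non-discrimination rule: if a strict-parity flag applies, the measured disparity between protected groups must vanish.

```python
# Toy R1CS satisfiability check over a prime field.
# Witness layout (hypothetical): w = [1, flag, gap]
#   flag : 1 if the strict non-discrimination rule applies, else 0
#   gap  : measured disparity between protected groups, as a field element
# Constraint 1: flag * flag = flag   (flag is boolean)
# Constraint 2: flag * gap  = 0      (if the rule applies, disparity must be zero)

P = 2**61 - 1  # a prime modulus standing in for a production-grade field

A = [[0, 1, 0],   # selects flag
     [0, 1, 0]]   # selects flag
B = [[0, 1, 0],   # selects flag
     [0, 0, 1]]   # selects gap
C = [[0, 1, 0],   # selects flag
     [0, 0, 0]]   # selects the constant 0

def dot(row, w):
    return sum(r * x for r, x in zip(row, w)) % P

def r1cs_satisfied(w):
    """True iff (A_i . w) * (B_i . w) == (C_i . w) mod P for every constraint i."""
    return all(dot(a, w) * dot(b, w) % P == dot(c, w)
               for a, b, c in zip(A, B, C))

print(r1cs_satisfied([1, 1, 0]))  # True:  rule applies, zero disparity
print(r1cs_satisfied([1, 1, 5]))  # False: rule applies, nonzero disparity
print(r1cs_satisfied([1, 0, 5]))  # True:  rule does not apply
```

In a deployed system such constraints would be compiled into an arithmetic circuit and proven with a zk-SNARK such as Groth16 [9]; the point here is only that a legal predicate reduces to checkable equations over a finite field.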

In this context, the paper introduces the concept of a Provable Arithmetic Logic Unit (pALU) as a hardware-anchored verification mechanism for exascale systems. The pALU operates on two levels: at the first level it verifies computational integrity, and at the second it generates cryptographic commitments certifying that the computation was performed in accordance with the polynomialized constraints [10].
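
A software analogue of this two-level behaviour can be sketched as follows. The interface is hypothetical; a real pALU would implement the integrity check as an independent redundant path in silicon, and the commitment scheme would be fixed by the hardware specification.

```python
import hashlib
import secrets

def palu_step(opcode: str, x: int, y: int):
    """Software stand-in for the two pALU levels (hypothetical interface).

    Level 1: perform the arithmetic operation and re-check the result.
    Level 2: emit a hiding, binding commitment to the operation, operands,
             and result, suitable for export to the C-Domain.
    """
    ops = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
    result = ops[opcode](x, y)

    # Level 1: integrity re-check. Trivially circular in software; the
    # hardware version compares two physically distinct computation paths.
    assert ops[opcode](x, y) == result, "computational integrity violation"

    # Level 2: hash commitment with a random blinding factor, so the
    # exported value reveals nothing about operands or result.
    blinding = secrets.token_bytes(32)
    commitment = hashlib.sha256(
        blinding + f"{opcode}|{x}|{y}|{result}".encode()
    ).hexdigest()

    return result, commitment  # result stays in R-Domain; commitment may leave

result, commitment = palu_step("mul", 7, 6)
print(commitment)  # auditable trace entry; opaque without the blinding factor
```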

For lower-risk or on-premise deployments, hardware-level integration is not required. The framework operates in a software-only configuration, generating periodic asymmetric proofs sufficient for regulatory oversight without silicon-level guarantees. This proportional design ensures that the verification burden scales appropriately with the risk classification of the AI system.
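
A minimal sketch of such a periodic attestation channel follows, using Ed25519 signatures from the third-party cryptography package; the report fields are our assumptions. It illustrates only the asymmetric authentication layer: in the full framework, the signed payload would itself be, or would accompany, a zero-knowledge proof rather than a bare boolean.

```python
import json
import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Operator side (R-Domain): keypair held by the AI operator.
operator_key = Ed25519PrivateKey.generate()
regulator_copy_of_pubkey = operator_key.public_key()

def sign_periodic_report(period: str, constraints_satisfied: bool) -> tuple[bytes, bytes]:
    """Serialize a compliance report for the period and sign it."""
    report = json.dumps({
        "period": period,
        "constraints_satisfied": constraints_satisfied,
        "issued_at": int(time.time()),
    }, sort_keys=True).encode()
    return report, operator_key.sign(report)

# Regulator side (C-Domain): verifies authenticity without R-Domain access.
report, signature = sign_periodic_report("2026-Q3", True)
try:
    regulator_copy_of_pubkey.verify(signature, report)
    print("attestation accepted:", report.decode())
except InvalidSignature:
    print("attestation rejected")
```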

3.3. Threshold Calibration and Mandatory Safeguards

A critical design element is the establishment of a calibrated threshold that distinguishes admissible technical variance — inherent to any computational system — from structural divergence indicative of non-compliance or algorithmic drift. When observed deviation exceeds this threshold, mandatory safeguards are triggered, including suspension of output generation pending human review.

This mechanism transforms the regulatory model from reactive (post-violation enforcement) to preventive (pre-output verification), operationalizing the human oversight requirement of Article 14 of the AI Act [3] through a technically enforceable architecture rather than a procedural formality.
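
The trigger logic admits a compact operational sketch. The divergence statistic and the threshold value below are placeholders pending the empirical calibration flagged as future work in the Conclusion.

```python
from statistics import fmean

CALIBRATED_THRESHOLD = 0.05  # hypothetical admissible-variance bound

class SafeguardTriggered(Exception):
    """Raised when divergence exceeds the calibrated threshold."""

def check_divergence(reference_scores, observed_scores):
    """Compare observed behaviour against a reference baseline.

    Uses mean absolute deviation as a stand-in divergence statistic;
    a production system would use a calibrated, norm-specific metric.
    """
    divergence = fmean(abs(r - o) for r, o in zip(reference_scores, observed_scores))
    if divergence > CALIBRATED_THRESHOLD:
        # Mandatory safeguard: suspend output generation pending human review.
        raise SafeguardTriggered(f"divergence {divergence:.4f} exceeds threshold")
    return divergence

check_divergence([0.80, 0.75], [0.81, 0.74])    # admissible technical variance
# check_divergence([0.80, 0.75], [0.60, 0.95])  # would raise SafeguardTriggered
```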

3.4. Formal Verification Properties

The cryptographic verification framework satisfies standard formal properties: completeness (a compliant system always generates a valid proof), soundness (a non-compliant system cannot generate a valid proof except with negligible probability), and zero-knowledge (the proof reveals no information about the protected model parameters or data) [7, 10].
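
Stated over a relation R of statements x (the public compliance parameters) and witnesses w (the protected model internals), with security parameter λ, these properties take their standard form (notation ours):

```latex
\begin{align*}
\textbf{Completeness:} \quad & \forall (x,w)\in R:\ \Pr\big[\mathsf{Verify}(x,\mathsf{Prove}(x,w))=1\big]=1 \\
\textbf{Soundness:} \quad & \forall x\notin L_R,\ \forall\,\mathsf{P}^{*}:\ \Pr\big[\mathsf{Verify}(x,\mathsf{P}^{*}(x))=1\big]\le \mathrm{negl}(\lambda) \\
\textbf{Zero-knowledge:} \quad & \exists\,\mathsf{Sim}\ \text{such that}\ \mathsf{Sim}(x)\approx_{c}\mathrm{View}_{V}(x,w)
\end{align*}
```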

4. Results and Discussion

The proposed framework addresses the structural enforcement deficit identified in the current AI regulatory architecture. By displacing the verification paradigm from disclosure to proof, it resolves the three-way conflict between AI Act transparency requirements, GDPR data minimization, and trade secret protection.

The Dual-Domain Architecture provides a formally clean separation of concerns: the operator retains full control over proprietary information within the R-Domain, while the regulator obtains cryptographically verifiable evidence of compliance within the C-Domain. No protected information crosses the boundary.

The graduated design — from hardware-anchored pALU verification for exascale systems to software-only periodic proofs for lighter deployments — ensures proportionality and practical applicability across the spectrum of AI systems subject to the AI Act.

The threshold calibration mechanism provides an operational definition of the boundary between technical noise and substantive non-compliance, replacing subjective regulatory judgment with a mathematically defined criterion. This enhances legal certainty for both operators and regulators.

5. Conclusion

Effective implementation of Regulation (EU) 2024/1689 requires more than normative harmonization — it demands the construction of a technical infrastructure capable of operationalizing regulatory requirements at the speed and scale of modern AI systems. The present framework contributes a conceptual architecture for this infrastructure, grounded in established cryptographic principles and designed for proportional application across the risk spectrum.

The polynomialization of legal norms into verifiable computational predicates, mediated by the ZKAP architecture, provides a candidate methodology for the conformity assessment provisions of the AI Act (Articles 40–49). The approach is compatible with the AI Act's technology-neutral standardization mandate and may be incorporated into harmonized standards developed by CEN-CENELEC.

The framework does not claim to replace human judgment. Legal concepts such as proportionality, reasonableness, and fairness resist complete formalization. The translation of law into verifiable predicates is itself an interpretive act requiring human expertise. The proposed system serves as an instrument of verification, not a substitute for judicial or administrative authority.

Future work should address the empirical validation of threshold calibration parameters, the development of domain-specific constraint libraries for priority sectors (financial services, healthcare, public administration), and the institutional design of regulatory workflows incorporating proof-based verification.

6. References

  1. European Union (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council (General Data Protection Regulation — GDPR). OJ L 119.
  2. European Union (2022). Regulation (EU) 2022/868 (Data Governance Act). OJ L 152.
  3. European Union (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 19 June 2024 laying down harmonised rules on artificial intelligence (AI Act). OJ L 2024/1689.
  4. European Commission (2024). Ethics Guidelines for Trustworthy AI — Updated Edition Post-AI Act. Brussels.
  5. ISO/IEC (2023). ISO/IEC 42001:2023. Artificial Intelligence Management System — Requirements. Geneva: ISO.
  6. Hardt, M., Price, E., & Srebro, N. (2016). Equality of Opportunity in Supervised Learning. Advances in Neural Information Processing Systems (NIPS).
  7. Ben-Sasson, E., Chiesa, A., Tromer, E., & Virza, M. (2014). Succinct Non-Interactive Zero Knowledge for a von Neumann Architecture. USENIX Security Symposium.
  8. Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). Why Should I Trust You? Explaining the Predictions of Any Classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
  9. Groth, J. (2016). On the Size of Pairing-Based Non-Interactive Arguments. Advances in Cryptology — EUROCRYPT 2016, Lecture Notes in Computer Science, vol. 9666.
  10. Goldwasser, S., Micali, S., & Rackoff, C. (1989). The Knowledge Complexity of Interactive Proof Systems. SIAM Journal on Computing, 18(1), 186–208.
  11. Floridi, L. (2023). The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities. AI & Society, 38(2).
  12. United Nations Educational, Scientific and Cultural Organization (UNESCO) (2021). Recommendation on the Ethics of Artificial Intelligence.