
The EU AI Act: A Pathway to AI Governance with Fiddler

Fiddler AI Blog · by Amit Paka and Karen He · August 9, 2024 · 8 min read

The EU AI Act is here. Learn how Fiddler helps enterprises bring trust and transparency into AI while driving AI governance, risk management, and compliance.

The initial ChatGPT release caught everyone by surprise with its uncanny humanlike conversational abilities, quickly becoming the fastest-adopted product in history. At the same time, its potential for misuse and possible unknown dangers alarmed many experts. Machine Learning (ML) has always harbored hidden risks: model degradation, lack of decision transparency, and bias. Large Language Models (LLMs) now add safety, hallucination, and privacy risks on top of these.

With this backdrop, the EU introduced the EU AI Act as the world’s first comprehensive policy to regulate the use and development of AI to ensure the trust of EU citizens. The EU AI Act went into effect on August 1, 2024, with enforcement beginning on August 2, 2026. While European enterprises must comply with the EU AI Act by this enforcement date, other nations are expected to introduce more comprehensive AI regulations in the future. Enterprises outside of the EU are encouraged to thoroughly review the EU AI Act to anticipate upcoming regulations and prepare for responsible AI development and usage.

This post explores the specifics of the EU AI Act and its implications for enterprises, with a particular focus on AI observability requirements, and shows how Fiddler can create a pathway to AI governance, risk management, and compliance (AI GRC).

Adapt and Comply to a Risk-Based AI Approach

The EU AI Act recommends a risk-based approach to classify AI applications:

  • Unacceptable risk applications, meaning AI that threatens fundamental rights, such as dangerous toys, social scoring, certain predictive policing, and some biometric systems like workplace emotion recognition, are banned
  • High risk applications will require rigorous AI compliance measures like pre-market assessments, regulatory audits, and EU database registration. Examples that will face new oversight include biometrics, facial recognition, credit scoring, autonomous driving, and AI in education, law enforcement, and healthcare
  • Specific transparency risk applications like chatbots and deepfakes must adhere to additional AI transparency rules, ensuring synthetic content is clearly marked and identifiable as artificially generated
  • Minimal risk applications like recommender systems and spam filters, which form the majority of AI applications, face no specific obligations but are encouraged to uphold trustworthiness and reliability

The AI Act: An AI Governance Framework Focused on High Risk Applications

The cornerstone of the EU AI Act is its guidelines for high-risk applications, which are classified as those that could endanger people (particularly EU citizens) and their opportunities, property, society, essential services, or fundamental rights. This classification includes applications such as recruitment, creditworthiness assessments, self-driving cars, and remote surgery. The Act also specifies that the extent of AI involvement in decision-making determines whether an application is classified as high-risk.

For these AI systems, the EU AI Act mandates high-quality data sets, comprehensive system documentation, continuous monitoring, transparency, and human oversight. It also requires operational visibility into robustness, accuracy, and security. The EU also encourages these same standards for lower risk AI systems.

Consequences of Non-Compliance with the EU AI Act

AI compliance is required by the EU AI Act. Noncompliant companies face the following consequences:

  • Financial Penalties: Fines of up to 7% of global annual turnover for serious violations, such as using banned AI applications
  • Operational Disruptions: Non-compliant AI systems may be removed from the market, leading to significant revenue losses
  • Reputational Damage: Failure to adhere to the Act's standards can erode consumer trust and tarnish a brand's image
  • Increased Scrutiny: Risky AI deployments can lead to heightened legal and regulatory scrutiny, resulting in costly legal battles and diminished customer trust
  • Market Access Limitations: Failure to meet AI compliance standards could restrict a company's ability to access the lucrative EU market, stifling growth opportunities and competitive edge

De-risk AI Applications with Fiddler AI Observability

1. AI Transparency and Explainability

Article 13: Transparency and Provision of Information to Deployers

  1. High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system’s output and use it appropriately.

“…where applicable, information to enable deployers to interpret the output of the high-risk AI system…”

With the new AI Act, the EU is looking to address AI’s transparency problem. Most predictive models are an “opaque box” due to two traits of ML:

  • Unlike other algorithmic and statistical models that are designed by humans, predictive models are trained on data automatically by algorithms.
  • As a result of this automated generation, predictive models can absorb complex nonlinear interactions from the data that humans cannot otherwise discern.

This complexity obscures how a model converts inputs to outputs, creating a trust and transparency problem. The problem is worse for modern deep learning models, which are even more difficult to explain and reason about.

Fiddler can help meet new AI compliance requirements with Explainable AI for ML, which shines a light on the inner workings of AI models to ensure AI-driven decisions are transparent, accountable, and trustworthy. Explainable AI powers a deep understanding of model behavior, allowing AI teams to debug and provide transparency around a wide range of models. LLMs are trained on such large datasets that their explainability remains an open research problem; however, newer approaches such as Chain of Thought prompting and hallucination scores for LLM monitoring provide reasoning, faithfulness, and groundedness context for deployed LLMs.
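To make the "opaque box" problem concrete, here is a minimal sketch of one widely used post-hoc explanation technique, permutation importance: shuffle one feature at a time and measure how much model accuracy drops. This illustrates the general idea only, not Fiddler's implementation; the toy model and data are hypothetical.

```python
import numpy as np

def permutation_importance(predict_fn, X, y, n_repeats=10, seed=0):
    """Score each feature by how much accuracy drops when that
    feature's column is shuffled, breaking its link to the target."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict_fn(X) == y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # destroy feature j only
            drops.append(baseline - np.mean(predict_fn(X_perm) == y))
        importances[j] = np.mean(drops)
    return importances

# Toy "opaque" model: secretly depends on feature 0 alone.
X = np.random.default_rng(1).normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)
model = lambda M: (M[:, 0] > 0).astype(int)

imp = permutation_importance(model, X, y)
# imp[0] is large (feature 0 drives every decision); imp[1] is ~0.
```

Even without access to the model internals, the ranking exposes which inputs actually drive the decision, which is the kind of interpretability context Article 13 asks deployers to have.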

2. Monitoring with Human in the Loop

Article 14: Human Oversight

  1. The high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate:

(a) to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance;

(c) to correctly interpret the high-risk AI system’s output, taking into account, for example, the interpretation tools and methods available;

Article 15: Accuracy, Robustness and Cybersecurity

  1. High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle.

  2. High-risk AI systems shall be resilient against attempts by unauthorized third parties to alter their use, outputs or performance by exploiting system vulnerabilities.

Predictive models are unique software entities compared to traditional code in that they are probabilistic in nature. They are trained for high performance on repeatable tasks using historical examples, so their performance can fluctuate and degrade over time as model inputs change after deployment. Depending on the impact of a high-risk AI application, a shift in its predictive power can have significant consequences for the use case. For example, an ML model for recruiting that was trained on a high percentage of employed candidates will degrade if the real-world data starts to contain a high percentage of unemployed candidates, say in the aftermath of a recession. Such shifts can also lead to the model making biased decisions.

Monitoring these systems in a single pane of glass enables continuous operational visibility to ensure their behavior does not drift from the intention of the model developers and cause unintended consequences.

Generative AI models and applications can also be impacted by data drift, causing responses to vary over time for the same prompt. There are additional risks around safety, correctness, and privacy, as a result of hallucinations, jailbreak attempts or PII leakage.
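As a generic illustration of how drift can be quantified (a sketch of a standard metric, not Fiddler's method), the Population Stability Index (PSI) compares the distribution of a feature or model output at training time against live traffic; the thresholds in the docstring are a widely used rule of thumb.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference (training-time)
    sample and a live sample of one feature or model output.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major drift."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 10_000)   # feature at training time
live_same = rng.normal(0.0, 1.0, 10_000)   # production traffic, no shift
live_shift = rng.normal(0.8, 1.0, 10_000)  # production traffic after a shift

psi_same = psi(reference, live_same)    # near 0: no alert
psi_shift = psi(reference, live_shift)  # well above 0.25: raise an alert
```

Computing such a score continuously per feature, and alerting when it crosses a threshold, is the basic mechanism behind the "continuous operational visibility" described above.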

Fiddler’s LLM Observability and ML Observability help address these AI risks, including visibility, degradation, and other operational challenges for deployed models, ensuring AI compliance under the EU AI Act. The Fiddler AI Observability platform enables deployment teams to monitor model behavior, detect drift, and mitigate unintended consequences, with alerting options to promptly address immediate operational issues.

3. Record Keeping

Article 12: Record-Keeping

  1. High-risk AI systems shall technically allow for the automatic recording of events (logs) over the lifetime of the system.

  2. In order to ensure a level of traceability of the functioning of a high-risk AI system that is appropriate to the intended purpose of the system, logging capabilities shall enable the recording of events relevant for:

...

(b) facilitating the post-market monitoring

Since generative and predictive models, along with the data behind AI systems, are constantly evolving, continuous recording of model behavior is now essential for any operational use case. This means logging all model inferences to enable future replay, inspection, root cause analysis, and explanation, which in turn facilitates auditing and remediation. The Fiddler AI Observability platform inherently provides an audit trail of all model logs to support effective monitoring and meet AI governance, risk, and compliance (AI GRC) requirements.
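A minimal sketch of what such automatic event recording might look like: an append-only JSON-lines audit log where each inference record carries enough context to be replayed and explained later. This is a generic illustration, not Fiddler's actual logging format; the model name and field values are hypothetical.

```python
import json
import time
import uuid

class InferenceLogger:
    """Append-only JSON-lines audit trail: one record per model
    inference, with enough context to replay and explain it later."""

    def __init__(self, path):
        self.path = path

    def log(self, model_version, features, prediction, metadata=None):
        record = {
            "event_id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "model_version": model_version,
            "features": features,
            "prediction": prediction,
            "metadata": metadata or {},
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return record["event_id"]

logger = InferenceLogger("inferences.jsonl")
event_id = logger.log(
    model_version="credit-model-v3",          # hypothetical model name
    features={"income": 52000, "tenure": 4},  # illustrative inputs
    prediction={"approved": True, "score": 0.91},
)

# An auditor can later reload every event for replay and root cause analysis.
with open("inferences.jsonl") as f:
    records = [json.loads(line) for line in f]
```

Because every record is timestamped and tied to a model version, the log satisfies the core Article 12 requirement of traceability over the system's lifetime and supports the post-market monitoring it references.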

How to Stay Prepared for Upcoming AI Regulations

As new AI compliance regulations continue to emerge, enterprises must proactively prepare. The EU AI Act provides oversight for high-risk applications and encourages similar guidelines for lower-risk applications. This consistency can streamline the AI development process, allowing teams to follow uniform AI governance frameworks for all models.

Teams deploying AI models should bolster AI development by updating their AI infrastructure, processes, and tools to build trust and transparency into their predictive and generative models. By adding transparency into model performance and behavior, enterprises can instill customer trust and be well-prepared for upcoming regulations.

Contact our Fiddler AI experts to get started with AI Observability today and stay ahead of future AI compliance and regulatory changes.
