Published: Jul 5, 2025
AI Systems

EU AI Act: General-purpose AI enters a new regulatory era starting August 2025 - Part 2

The next critical deadline arrives on August 2, 2025, bringing into force a comprehensive set of obligations for General-Purpose AI (GPAI) models. These are the versatile foundational AI models, like large language models (LLMs) or sophisticated image generators, that underpin a vast array of downstream AI applications and services.

August 2, 2025 marks the beginning of the most significant shift in AI governance since the internet's commercialization. While February's prohibitions established clear boundaries, August introduces comprehensive obligations for General-Purpose AI models that will fundamentally reshape how organizations develop, deploy, and integrate foundation models across the EU market.

For product leaders managing LLMs, image generators, or any foundational AI capability, August 2025 represents the inflection point where compliance becomes a competitive advantage. Companies that master these GPAI obligations will capture market opportunities while competitors struggle with basic vendor due diligence and documentation gaps.

General-purpose AI demands transparency and accountability

Whether an organization provides these foundational models or integrates them into its enterprise solutions, it is directly impacted.

For GPAI Providers:

* Obligations include meticulous technical documentation (e.g., model architecture, training data, performance), providing clear documentation for downstream users, implementing robust copyright compliance policies (including respecting machine-readable opt-outs), and publishing a public summary of training data.

For Systemic Risk GPAI Models (e.g., those trained with over 10^25 FLOPs, or otherwise designated by the EU AI Office):

* Additional, more stringent requirements apply, such as advanced safety testing ("red teaming"), continuous risk assessments, incident reporting, and enhanced cybersecurity. The European AI Office gains direct oversight over these powerful models.

For Deployers/Integrators of GPAI Models:

* Organizations must ensure their GPAI providers meet these requirements and supply them with the necessary documentation to understand the model's capabilities and limitations. This allows them to fulfill their own responsibilities when integrating these models into their products.

AI regulations and cost explained

Floating-point operations (FLOPs) are essentially a way to measure how much raw computing power was used to "train" an AI model. Think of FLOPs as the "size" of the AI's training: a count of all the individual math calculations (additions, multiplications, and so on, often involving decimals) that the computers performed to teach the AI what it knows. The higher the FLOPs count, the more massive and complex the AI's "learning" process was. Governments are setting rules for very powerful AI models to ensure safety:

1. The "power" threshold:

* AI models that use a huge amount of computing power during their training, specifically more than 10^25 FLOPs, are considered to have "systemic risk."

* This means they're powerful enough to potentially cause widespread societal issues.

2. The obligations for powerful AI models:

If an AI model crosses this 10^25 FLOPs threshold, the company that developed it faces mandatory safety requirements. These include:

* Red teaming: Actively testing the AI for vulnerabilities and potential misuse.

* Incident reporting: Reporting any serious problems or unexpected behaviors to authorities.

* Other safety assessments and risk mitigation efforts.

3. The cost equivalent:

* Experts estimate what it costs to reach that level of computing power (10^25 FLOPs).

* They calculate that training a model to this scale typically costs between $7 million and $10 million.

In simple terms: If an AI model is so powerful that it costs $7-10 million or more just to train it, it's considered a significant enough technology to require mandatory extra safety checks and reporting by regulators, like those outlined in the EU AI Act. This financial cost acts as a practical indicator of the AI's potential scale and impact.
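
To make the threshold concrete, here is a minimal Python sketch that estimates training compute using the common "6 FLOPs per parameter per training token" rule of thumb from the scaling-law literature and checks it against the 10^25 FLOPs threshold. The heuristic and the example figures are illustrative assumptions, not values prescribed by the AI Act.

```python
# Rough sketch: does a model cross the EU AI Act's systemic-risk compute
# threshold? The "6 FLOPs per parameter per training token" rule of thumb
# is an approximation from the scaling-law literature; the example figures
# below are illustrative, not real model disclosures.

SYSTEMIC_RISK_FLOPS = 1e25  # cumulative training compute threshold in the AI Act


def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute as ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens


def crosses_threshold(n_params: float, n_tokens: float) -> bool:
    """True if the estimated training compute meets the systemic-risk threshold."""
    return estimate_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_FLOPS


# Example: a hypothetical 70B-parameter model trained on 15 trillion tokens
flops = estimate_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")  # ~6.30e+24
print(f"Systemic-risk designation: {crosses_threshold(70e9, 15e12)}")  # False, just under
```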

The strategic imperative

This August 2nd deadline necessitates a dual-track approach. If an organization develops foundational AI, its internal development and documentation pipelines must be mature. If it consumes GPAI services, its vendor management and due diligence processes must explicitly address these new compliance data flows. While a grace period for penalties on GPAI obligations extends until August 2026, the legal obligations themselves are active from this August, meaning auditors can inquire and demand corrective actions.

Programmatic readiness starts with transparency and trust in AI systems

The implications for enterprise software are substantial:

For professional services (leveraging LLMs for content & analysis):

* If a professional services solution incorporates an external LLM for intelligent proposal drafting or meeting summarization, the organization, as the 'deployer,' is now obligated to ensure its LLM provider adheres to the August 2nd requirements. Programmatically, this means establishing automated data ingestion routines to receive and store the LLM's technical documentation, training data summaries, and copyright policies (see the sketch after this list).

* Development teams must embed features that allow end-users to understand the LLM's provenance and limitations, ensuring, for instance, that AI-generated content is clearly flagged for review before client delivery. This transforms vendor management from a purely commercial activity into a crucial compliance responsibility.
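
As an illustration of what such an ingestion routine might look like, here is a minimal Python sketch of a vendor compliance artifact registry. The schema fields, file paths, and vendor name are hypothetical assumptions for illustration; the AI Act does not prescribe a storage format.

```python
# A minimal sketch of a vendor compliance artifact registry. Field names,
# paths, and the vendor are hypothetical; the AI Act does not prescribe
# a storage format.
import json
from dataclasses import asdict, dataclass
from datetime import date
from pathlib import Path


@dataclass
class GpaiVendorArtifact:
    vendor: str
    model_name: str
    artifact_type: str  # e.g. "technical_documentation", "training_data_summary", "copyright_policy"
    version: str
    received_on: str    # ISO date the document was ingested
    location: str       # where the document itself is stored


def record_artifact(registry_path: Path, artifact: GpaiVendorArtifact) -> None:
    """Append a received compliance document to an audit-friendly JSON Lines log."""
    with registry_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(artifact)) + "\n")


record_artifact(
    Path("gpai_artifacts.jsonl"),
    GpaiVendorArtifact(
        vendor="ExampleLLM Inc.",  # hypothetical provider
        model_name="example-llm-v3",
        artifact_type="training_data_summary",
        version="2025-08",
        received_on=date.today().isoformat(),
        location="s3://compliance-docs/examplellm/training-summary-2025-08.pdf",
    ),
)
```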

For financial services (proprietary GPAI for risk modeling):

* Consider a financial institution that develops its own large-scale GPAI model for advanced fraud pattern analysis. If this model reaches the systemic risk threshold, internal product and engineering teams must immediately initiate "red teaming" exercises to proactively identify and mitigate vulnerabilities and biases.

* From an operational perspective, this demands dedicated MLOps (Machine Learning Operations) pipelines capable of continuous monitoring, performance validation, and automated flagging of any systemic risks, ensuring that internal models are not just powerful, but demonstrably safe and compliant. A minimal flagging sketch follows below.
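
One way such automated flagging could look in practice: a minimal Python sketch that compares monitored metrics against configured risk thresholds and surfaces breaches for escalation. The metric names and threshold values are illustrative assumptions, not regulatory requirements.

```python
# A minimal sketch of automated risk flagging in a monitoring loop.
# Metric names and threshold values are illustrative assumptions,
# not regulatory requirements.
from dataclasses import dataclass


@dataclass
class ModelHealthSnapshot:
    false_positive_rate: float       # share of legitimate transactions flagged as fraud
    demographic_parity_gap: float    # bias proxy: outcome-rate gap across groups
    adversarial_failure_rate: float  # share of red-team probes that succeeded


THRESHOLDS = {
    "false_positive_rate": 0.05,
    "demographic_parity_gap": 0.02,
    "adversarial_failure_rate": 0.01,
}


def flag_systemic_risks(snapshot: ModelHealthSnapshot) -> list[str]:
    """Return the metrics that breach their configured risk thresholds."""
    return [
        f"{metric} breached limit {limit}"
        for metric, limit in THRESHOLDS.items()
        if getattr(snapshot, metric) > limit
    ]


for alert in flag_systemic_risks(ModelHealthSnapshot(0.07, 0.015, 0.02)):
    print("ESCALATE TO INCIDENT REPORTING:", alert)
```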

For non-profits (enhancing donor engagement with conversational AI):

* If a non-profit organization leverages an internal or third-party GPAI for tasks like optimizing volunteer matching or drafting grant applications, the August 2nd requirements are pertinent.

* For an internally developed GPAI (making the organization a 'provider'), programmatically, this means implementing transparent data governance for its training data, ensuring the publication of a summary outlining the datasets used (e.g., "trained on publicly available grant databases and organizational reports"). If a third-party GPAI is used, the focus shifts to robust vendor contracts that explicitly demand the required documentation and assurances of copyright compliance from the GPAI provider.

Product leader's GPAI compliance playbook

For product leaders managing foundation model integrations or development, August 2025 compliance demands immediate operational changes across your development lifecycle:


GPAI vendor contract renegotiation

* Embed AI Act compliance requirements directly into vendor agreements before August 2025. Standard commercial terms must now include technical documentation delivery, copyright policy adherence, and training data summary publication obligations

* Request Annex XI technical documentation from all GPAI providers, covering model architecture, training methodologies, evaluation results, and performance benchmarks that enable regulatory oversight

* Establish automated compliance monitoring systems that track vendor documentation updates, model version changes, and regulatory classification status modifications

Product development integration requirements

* Modify product requirement documents (PRDs) to include GPAI compliance verification for any foundation model integration, requiring legal and technical review before implementation

* Implement AI provenance tracking within your product interfaces that clearly identifies GPAI-generated content and provides users with model capability and limitation information (a minimal tagging sketch follows this list)

* Create GPAI model switching capabilities that allow rapid vendor transitions if compliance gaps emerge, preventing product disruption from regulatory enforcement actions
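
As a sketch of the provenance-tracking idea, the following Python snippet bundles model output with disclosure metadata for display in a product UI. The field names and disclosure text are hypothetical assumptions; the AI Act requires that users can identify AI-generated content, but it does not mandate this particular structure.

```python
# A minimal sketch of provenance tagging for GPAI-generated content.
# The disclosure text and metadata fields are hypothetical; the AI Act
# does not mandate this particular structure.
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class ProvenanceTag:
    model_id: str
    provider: str
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    disclosure: str = "This content was generated by an AI system and should be reviewed before use."


def wrap_generated_content(text: str, tag: ProvenanceTag) -> dict:
    """Bundle model output with provenance metadata for display in the UI."""
    return {"content": text, "provenance": asdict(tag)}


draft = wrap_generated_content(
    "Draft proposal summary ...",
    ProvenanceTag(model_id="example-llm-v3", provider="ExampleLLM Inc."),  # hypothetical
)
print(draft["provenance"]["disclosure"])
```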

Internal GPAI development governance

* Establish compute cost tracking systems that monitor training expenses approaching the $7-10 million systemic risk threshold, enabling proactive regulatory notification and enhanced compliance preparation

* Integrate red teaming protocols into development sprints rather than treating adversarial testing as isolated security activities, ensuring continuous vulnerability identification and bias mitigation

* Create cross-functional GPAI governance teams including product management, MLOps engineering, legal compliance, and risk management functions to ensure comprehensive August 2025 readiness.

The competitive advantage of early GPAI mastery

August 2025's GPAI obligations represent the foundation model industry's regulatory maturity moment. Organizations that achieve comprehensive compliance will differentiate themselves through demonstrated transparency, systematic risk management, and regulatory partnership that builds customer trust and market confidence.

The companies succeeding in this environment will establish the governance frameworks that enable rapid innovation within clear regulatory boundaries, while competitors struggle with basic documentation requirements and vendor compliance gaps.

Part 3 will examine August 2026's high-risk AI system requirements and the convergence of GPAI obligations with comprehensive AI governance frameworks that determine long-term competitive positioning.
