
AI Compliance in 2025: What the EU’s Code of Practice Means for General-Purpose AI Providers
In the fast-moving AI landscape, 2025 marks a pivotal year for general-purpose AI (GPAI) providers. The finalization of the third draft of the EU’s General-Purpose AI Code of Practice on May 2 marked the beginning of a new compliance era in the European Union. Developed as a complement to the EU AI Act, this Code outlines both baseline requirements and additional obligations for providers whose models fall under systemic risk regulation. For Ocean Enterprise Collective (OEC), these guidelines provide a clear pathway toward building compliant, transparent, and future-ready AI systems.
Third Draft of the EU’s General-Purpose AI Code of Practice
The third draft introduces two levels of commitments: 1) baseline obligations for all GPAI providers and 2) additional commitments for those operating large-scale, advanced models considered systemic-risk under Article 51 of the AI Act. The Code has been refined to emphasize clarity and usability. It no longer uses key performance indicators and instead offers a more streamlined, practical approach to compliance.
The specific objectives of this Code are:
1. To serve as a guiding document for demonstrating compliance with the obligations provided for in Articles 53 and 55 of the AI Act, while recognising that adherence to the Code does not constitute conclusive evidence of compliance with the obligations under the AI Act.
2. To ensure providers of general-purpose AI models comply with their obligations under the AI Act and to enable the AI Office to assess compliance of providers of general-purpose AI models who choose to rely on the Code to demonstrate compliance with their obligations under the AI Act.
A standardized Model Documentation Form has been introduced to help providers structure and share key information about their AI models. The draft is also designed to evolve alongside technological advances, making it a living framework rather than a static rulebook.
Transparency & Copyright Obligations
Under the transparency commitment, GPAI providers must now ensure high-quality, publicly accessible model documentation. This includes clear descriptions of the model architecture, the types and sources of training data, limitations and biases identified during testing, and any intended or restricted use cases. This documentation must be made available to downstream deployers and to the EU AI Office upon request.
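The documentation fields described above could be captured in a structured record. The sketch below is purely illustrative: the field names and the `ModelDocumentation` class are assumptions for this example, not a format prescribed by the Code (the Code’s actual instrument is the Model Documentation Form).

```python
from dataclasses import dataclass, asdict

@dataclass
class ModelDocumentation:
    """Hypothetical record mirroring the transparency fields named above."""
    model_name: str
    architecture: str               # e.g. "decoder-only transformer"
    training_data_sources: list     # types and sources of training data
    known_limitations: list         # limitations and biases found in testing
    intended_uses: list
    restricted_uses: list

    def to_disclosure(self) -> dict:
        """Serialize for downstream deployers or an AI Office request."""
        return asdict(self)

doc = ModelDocumentation(
    model_name="example-gpai-7b",
    architecture="decoder-only transformer",
    training_data_sources=["licensed corpora", "public web text (opt-outs honored)"],
    known_limitations=["underperforms on low-resource languages"],
    intended_uses=["text summarization"],
    restricted_uses=["automated legal advice"],
)
print(doc.to_disclosure()["model_name"])
```

Keeping the record machine-readable makes it straightforward to publish the same information to deployers and to regulators on request.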
The copyright obligation demands that providers establish a formal copyright policy. This policy must outline how data is sourced, what mechanisms are used to avoid using infringing materials, and how the provider respects online content protections such as paywalls and opt-out requests. The Code calls for “best efforts” to prevent the use of unlawful data and expects technical and human oversight mechanisms to be in place.
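One concrete form such an opt-out mechanism can take is honoring robots.txt-style reservations before ingesting web content into a training corpus. The sketch below uses Python’s standard-library `urllib.robotparser`; the crawler name and rules are hypothetical, and the Code refers to opt-out mechanisms generally rather than mandating this specific one.

```python
from urllib.robotparser import RobotFileParser

def may_ingest(robots_lines, crawler_name, url) -> bool:
    """Check a robots.txt-style reservation before adding a page to a corpus."""
    parser = RobotFileParser()
    parser.parse(robots_lines)   # parse rules directly, no network access
    return parser.can_fetch(crawler_name, url)

# Hypothetical rules: the site reserves its paywalled section from all agents.
rules = [
    "User-agent: *",
    "Disallow: /members-only/",
]
print(may_ingest(rules, "example-trainer-bot", "https://example.com/blog/post"))
print(may_ingest(rules, "example-trainer-bot", "https://example.com/members-only/a"))
```

A real pipeline would combine checks like this with licensing records and human review, since robots.txt alone does not establish that content is lawful to use.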
Systemic Risk Classification: Article 51
Article 51 of the AI Act introduces the idea of “systemic risk” for general-purpose AI models. In simple terms, a model might fall into this category if it’s extremely powerful, either because it hits a high technical threshold or because it has a significant impact on society. The European Commission also has the authority to label a model as systemic-risk based on how and where it’s used. For now, only a few AI providers are likely to be affected, but as technology evolves, more models could cross this line.
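The “high technical threshold” has a concrete form in the Act: Article 51(2) presumes systemic risk when the cumulative compute used to train a model exceeds 10^25 floating-point operations. The sketch below illustrates only that presumption; the function name is invented, and the Commission can also designate models on other grounds, so a check like this is not by itself a compliance determination.

```python
# Article 51(2): systemic risk is presumed above a cumulative training
# compute of 10**25 floating-point operations (FLOPs).
SYSTEMIC_RISK_FLOP_THRESHOLD = 10**25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Illustrative check of the Article 51(2) compute presumption only."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(5e24))   # below the threshold
print(presumed_systemic_risk(3e25))   # above the threshold
```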
Systemic Risk Mitigation Measures
For providers identified as systemic-risk actors, the Code introduces a suite of commitments focused on risk mitigation, safety, and governance. These organizations must perform regular, standardized evaluations such as red-teaming, robustness testing, and simulations that explore failure scenarios. Systemic risk assessments must also address how the AI system could affect society, the environment, public safety, or economic stability. This analysis should be reviewed periodically and updated as the model evolves or is retrained.
Technical risk mitigation involves designing AI models with built-in safety protocols. This includes stress-testing against adversarial inputs, ensuring model resilience against malicious use, and integrating robust monitoring systems. Incident reporting is another key requirement. Providers must maintain secure systems to log irregular behavior and share reports with the EU AI Office or national authorities.
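An incident log of the kind described above could be made tamper-evident by chaining record hashes, so that reports shared with the EU AI Office or national authorities can be verified against the provider’s internal log. This is a minimal sketch: the class, field names, and hash-chain design are assumptions for illustration, not a format the Code prescribes.

```python
import hashlib
import json
from datetime import datetime, timezone

class IncidentLog:
    """Append-only incident records linked by a SHA-256 hash chain."""

    def __init__(self):
        self._records = []
        self._last_hash = "0" * 64   # genesis value for the chain

    def report(self, severity: str, description: str) -> dict:
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "severity": severity,
            "description": description,
            "prev_hash": self._last_hash,   # links this entry to the one before
        }
        self._last_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = self._last_hash
        self._records.append(record)
        return record

    def export(self) -> str:
        """Serialized report for a regulator request."""
        return json.dumps(self._records, indent=2)

log = IncidentLog()
log.report("high", "Model produced disallowed output under adversarial prompt")
print(len(json.loads(log.export())))
```

Because each entry embeds the previous entry’s hash, deleting or altering an earlier record breaks the chain, which an auditor can detect by recomputing the hashes.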
Governance is another major area of focus. Providers must establish dedicated governance structures to oversee systemic-risk compliance. This may involve creating internal boards or committees to assess and guide risk management efforts, publishing annual compliance reports, and ensuring executive accountability. Cybersecurity must also meet EU regulatory standards. This includes secure logging, user authentication, encrypted data flows, and regular audits to ensure systems cannot be exploited or manipulated.
Ocean Enterprise Compliance-Aligned Features
Well before the EU announced its new AI Code of Practice, Ocean Enterprise Collective had already begun building Ocean Enterprise (OE), next-generation AI and data ecosystem technology designed to ensure transparency, safety, and governance.
Ocean Enterprise was designed to comply with the EU AI Act, Data Act, and GDPR out of the box. As next-generation technology for sovereign and decentralized data ecosystems, it ensures users retain ownership and control over their data through smart contracts and GDPR-aligned governance.
In a regulatory landscape defined by transparency and trust, OE leads by design. It supports full traceability, explainability, and auditability of data flows, which are essential for high-risk AI under EU law, and its interoperable infrastructure connects seamlessly with initiatives like GAIA-X.
Cybersecurity is a key part of OE’s foundational layer: its encrypted communication protocols, access controls, and audit logging are all in line with European best practices. For organizations aiming to responsibly monetize data in 2025 and beyond, the technology offered by OE isn’t just compliant; it’s built for the future.
The EU’s General-Purpose AI Code of Practice marks a new chapter in the governance of artificial intelligence and offers a structured path to building systems that are not only high-performing but also legally compliant and socially responsible.
The final version of the Code of Practice is expected to be confirmed by the European Commission in August 2025. From then onward, the transparency and copyright commitments become enforceable, and providers that qualify as systemic-risk actors must implement their additional measures from that same date.
As we move forward into the second half of 2025, the Ocean Enterprise Collective remains committed to aligning our innovation with the principles of transparency, accountability, and safety. In doing so, the OEC hopes to contribute to a more trustworthy, collaborative, and ethical AI ecosystem.
TL;DR:
The EU’s 2025 General-Purpose AI Code of Practice introduces new compliance obligations for AI providers, particularly around transparency, copyright, and systemic risk. All providers must document their models and avoid using copyrighted data without permission, while advanced models classified under Article 51 face stricter governance, safety, and reporting requirements.
By offering sovereign data governance, GDPR-compliant privacy controls, real-time compliance monitoring, and AI-ready infrastructure designed for traceability and interoperability across European data ecosystems, Ocean Enterprise is already well positioned to comply with existing EU AI Act provisions as well as the new EU General-Purpose AI Code of Practice. As the AI Act, Data Act, and GDPR converge, Ocean Enterprise Collective demonstrates how compliance can be a competitive advantage and a foundation for responsible innovation.
About Ocean Enterprise Collective
The Ocean Enterprise Collective (OEC) is a non-profit association focused on developing Ocean Enterprise, a free, open-source, next-generation data and AI ecosystem for enterprise solutions.
Ocean Enterprise enables companies and public institutions to securely manage and monetize proprietary AI & data products and services in a trusted and compliant environment.
OEC members span eight countries and nine industries, including agriculture, healthcare, aerospace, and manufacturing.
Get in touch with the Ocean Enterprise team: info@oceanenterprise.io

