EU Drops Final Code of Practice for Foundation Models -Here’s What’s Inside (and Why Big Tech Is Sweating)

The European Commission’s new voluntary playbook for general-purpose AI kicks off a 12-month countdown to hard-law enforcement. We break down the must-know clauses, industry push-back, and global ripple effects.

Published: 11 July 2025 · 7-min read · by AI Trend Scout

TL;DR Brussels just released the final General-Purpose AI Code of Practice. It is “voluntary” for now but becomes legally binding on 2 August 2026 with fines up to €35 million or 7 % of global revenue. Transparency disclosures, copyright filters and safety stress-tests are now table-stakes for anyone shipping foundation models in Europe.

1. What happened?

The European Commission quietly published its long-awaited General-Purpose AI Code of Practice on 10 July 2025, giving model builders a one-year grace period to align with the AI Act’s next enforcement wave. Although the document is branded “voluntary,” it is effectively a dry-run for legally binding obligations that kick in on 2 August 2026—a timeline the Commission insists will not be delayed.

2. Why this matters

  • Global reach – Any model that ends up in EU products or services is covered.
  • Clock is ticking – One year of voluntary compliance, then real fines (up to €35 million or 7 %).
  • GDPR déjà-vu – The code is being pitched to G7 partners as a template.
  • Reg-tech boom incoming – Get ready for tools that automate model cards, dataset audits and watermarking.

3. What’s inside the Code?

  1. Transparency Pack
    Model Cards with architecture, training-data summaries, evaluation scores and energy footprints.
    Dataset provenance plus mandatory labelling of synthetic content.
  2. Copyright Guard-Rails
    • Track copyrighted works in training data or compensate rightsholders.
    • Provide an opt-out and quick takedown channel for creators.
  3. Safety & Systemic-Risk Controls
    • Compulsory red-team reports.
    • Alignment tests and kill-switch procedures for frontier-scale models.
  4. EU AI Office Oversight
    • A public registry of GPAI models plus annual stress-tests—real enforcement powers start in 2026.

4. Industry reaction: “Stop the clock!”

More than 40 heavyweight European brands – Airbus, Mercedes-Benz, Philips, even open-source darling Mistral—signed an open letter urging a two-year delay, calling the rules “unclear, overlapping and increasingly complex.” Brussels’ response: no grace period, no pause.

5. What happens next?

TimelineMilestone
Jul 2025Member States review the Code’s adequacy; Commission issues guidance.
2 Aug 2025AI Act “Wave 2” risk-based rules begin (high-risk systems & GPAI disclosures).
2 Aug 2026Code’s requirements become binding law; fines and market bans start.

6. The bigger picture

  • Open-source frameworks (e.g., Hugging Face) are racing to bundle “EU-ready” compliance kits.
  • VC term sheets now come with “AI-Act-ready” warranties—echoes of GDPR clauses in 2018.
  • US policymakers lose their favourite talking point (“heavy-hit regulation can’t be done”). Watch for renewed lobbying in Washington.
  • Start-ups may pivot to smaller, domain-specific models to dodge exhaustive reporting overhead.

🎯 Key takeaway for builders

If your model touches an EU user, you have 12 months to: document data, prove safety, and label everything—or budget for a compliance team larger than your research team.

Further reading

  • European Commission press release, General-Purpose AI Code of Practice now available (10 July 2025)
  • AP News, EU unveils AI code of practice to help businesses comply with bloc’s rules (11 July 2025)
  • TechXplore, More than 40 EU companies ask Brussels to delay rules (11 July 2025)

Enjoyed the breakdown? Hit follow and share to keep your feed a step ahead of tomorrow’s AI headlines!

Meta Launches SecAlign-70B: First Open Source LLM Built to Block Prompt Injection


Quick-Fire Summary (TL;DR)

Meta just dropped SecAlign-70B (plus a lighter 8B variant) — the first openly-licensed language models with built-in, model-level defenses against prompt-injection attacks. On launch-day benchmarks, the 70-billion-parameter model slashed attack success rates to almost zero while keeping everyday utility on par with GPT-4o-mini. Security folk are already calling it a milestone for “secure-by-default” AI. (arxiv.org, huggingface.co)


What Happened?

  • Release date: 4 July 2025 (arXiv pre-print + weights on HuggingFace). (arxiv.org, huggingface.co)
  • Models shipped:
    • SecAlign-70B – a fine-tuned offspring of Llama-3.3-70B-Instruct.
    • SecAlign-8B – a LoRA-style adapter for laptops and edge devices. (huggingface.co)
  • License: FAIR Non-Commercial Research — free to inspect, fork, and benchmark. (huggingface.co)

Why It Matters

  1. Prompt-Injection = #1 AI Threat. OWASP (2025) lists prompt injection at the very top of its LLM-risk chart, beating data poisoning and jailbreaks. (sizhe-chen.github.io)
  2. Open Models, Closed Defenses. Until now, robust PI defenses lived behind APIs (GPT-4o-mini, Gemini-Flash-2.5). SecAlign brings comparable protection into the open-source world. (arxiv.org, huggingface.co)
  3. Research Accelerator. With full weights + training recipe published, red-teamers and academics can iterate on attacks and defenses without NDAs, hopefully raising the security floor for everyone. (arxiv.org, arxiv.org)

How SecAlign Works (Under the Hood)

  • “Preference-Optimization” Training.
    1. Build a preference dataset where each sample has a safe output and a malicious, injected counterpart.
    2. Fine-tune with Direct Preference Optimization (DPO) so the model learns to prefer safe completions. (sizhe-chen.github.io)
  • Results in Numbers (select highlights): (huggingface.co) Benchmark Metric Llama-3.3-70B SecAlign-70B GPT-4o-mini AlpacaFarm (PI attack) Attack Success ↓ 93.8 % 1.4 % 0.5 % AgentDojo (no attack) Task Success ↑ 56.7 % 77.3 % 67.0 % MMLU-Pro (5-shot) Accuracy ↑ 67.7 % 67.6 % 64.8 % Bottom line: security improves by two orders of magnitude with virtually zero utility tax.

Early Buzz

  • Security Twitter & Mastodon lit up with “FINALLY, open weights + security!” threads within hours of the drop.
  • Researchers: Several red-team labs have already scheduled live-streamed hackathons to probe SecAlign’s limits next week.
  • Enterprises: CISOs at fintechs say the model could speed up internal LLM adoption because they can now audit both weights and defenses. (Expect a wave of downstream LoRA adapters.)

What’s Next?

HorizonWhat to WatchPotential Impact
DaysOpen-source folk port SecAlign-8B to vLLM / Ollama for local testing.Desktop-grade secure assistants.
WeeksBenchmark shoot-outs vs. GPT-4o-mini & Gemini-Flash-2.5 on new “adversarial” leaderboards.Standardizes security as a first-class metric.
MonthsForks integrating multimodal inputs and tool-calling policies.Safer autonomous agents for code, browsing, and ops.
2025 Q4Possible SecAlign-MoE or 400B variant if adoption proves strong.Puts pressure on closed vendors to open their own defenses.

Takeaways for Readers

  • If you build with Llama today, swapping in SecAlign could neutralize most off-the-shelf PI attacks with minimal refactor.
  • If you secure AI systems, SecAlign is a living test-bed: try to break it, publish results, iterate. The open weights make responsible disclosure easier.
  • If you’re a policy-maker, the release showcases how transparent, community-auditable models can advance both innovation and safety.

Written in collaboration with AI Trend Scout, tracking emerging AI stories within 48 hours of publication.