European CEOs urge pause on AI Act as Brussels weighs major changes
Over 150 European CEOs urge the EU to pause the AI Act, citing risks to innovation and competitiveness. Brussels is considering key revisions to the landmark AI regulation.

Brussels, July 3, 2025 — A group of top European chief executives has issued a unified call to the European Commission, urging a temporary halt to the implementation of the European Union's Artificial Intelligence Act (AI Act). As Brussels weighs substantial amendments to the landmark regulation, industry leaders argue that certain provisions risk stifling innovation and eroding Europe's competitive edge in the global AI race.
Rising Dissent from the Corporate Elite
More than 150 CEOs from prominent companies across industries—including automotive, finance, healthcare, and technology—signed an open letter addressed to Ursula von der Leyen, President of the European Commission, and Thierry Breton, Commissioner for the Internal Market. Among the signatories were executives from Siemens, Airbus, Renault, SAP, and Deutsche Telekom.
The letter states:
"While we support a robust and ethical AI framework, the current structure of the AI Act may lead to disproportionate obligations, increased legal uncertainty, and a chilling effect on AI investments in Europe."
Their primary concern revolves around the proposed classification of “high-risk” AI applications, which the CEOs claim is overly broad and could include critical innovations such as autonomous driving, healthcare diagnostics, and even cybersecurity tools.
Brussels Revisits Core Provisions
The AI Act, introduced in 2021 and finalized in 2024, is designed to regulate AI usage across the 27-member bloc. It is the world’s first major regulatory framework for AI, setting global precedents for ethics, transparency, and data use.
However, following feedback from industry stakeholders, the European Commission is now considering revisions, particularly regarding:
- Definition of high-risk systems
- Liability for AI-related failures
- Compliance costs for SMEs and startups
- The role of open-source AI models
A spokesperson for the Commission confirmed that internal reviews are ongoing and consultations with businesses and civil society are active.
"Our goal is to strike the right balance between innovation and fundamental rights. We welcome constructive dialogue," said the official.
Market Reaction: AI-Linked Stocks Edge Lower
European tech stocks came under mild pressure following the publication of the CEOs' letter. Shares of SAP and Siemens edged down by 0.8% and 0.5%, respectively, in Frankfurt as markets digested the implications of regulatory uncertainty.
However, some analysts say the temporary dip could reverse if Brussels moderates the legislation in response to the letter.
Anya Richter, technology analyst at Société Générale, remarked:
"Investors are wary of excessive regulation in emerging sectors like AI. But if the Commission fine-tunes the AI Act to be more innovation-friendly, we could see renewed optimism, especially among mid-cap AI developers and SaaS firms."
Concerns from the Innovation Ecosystem
Startups, often viewed as the drivers of disruptive AI technologies, are particularly apprehensive. They argue that costly compliance procedures could tip the scales in favor of U.S. and Chinese tech giants.
Julien Meunier, CEO of Paris-based AI startup Synthex, said:
"If you’re a five-person team working on generative AI and suddenly face a compliance bill of €400,000, your survival is at risk. Regulation should be risk-based, but also context-aware."
Several startup incubators and VCs have echoed these sentiments, urging regulators to adopt a phased or tiered approach based on company size and product impact.
Ethical Imperatives vs. Commercial Realities
While businesses are pushing for a pause, civil rights groups caution against diluting the AI Act's original intent, which is to safeguard privacy, prevent algorithmic bias, and ensure accountability.
Dr. Lena Vogt, a digital rights expert at the University of Amsterdam, warned:
"Ethical oversight is not a luxury—it's a necessity. Any pause or revision must not become a pretext for deregulation."
This tug-of-war reflects a broader global tension between fostering innovation and ensuring ethical AI deployment, set against a backdrop of geopolitical competition.
Investor Outlook: Watchful but Cautiously Optimistic
Despite the short-term volatility, investors remain cautiously optimistic about Europe's long-term role in the AI space.
Thomas Albrecht, portfolio manager at Erste Asset Management, explained:
"This is part of the growing pains. If Europe manages to recalibrate its AI Act thoughtfully, it could establish itself as a global AI governance leader—especially important as countries like the U.S. and China adopt more fragmented approaches."
Meanwhile, ESG-focused funds are paying close attention to how the AI Act aligns with broader sustainability and governance criteria. Some funds have already begun evaluating AI-related risk disclosures in their European tech holdings.
The Road Ahead
The European Commission is expected to release a revised draft of the AI Act by Q4 2025, following a summer of stakeholder engagement and parliamentary debates. Industry leaders are hopeful that their concerns will be reflected in the final iteration.
Whether Brussels slows the rollout or not, one thing is clear: the AI Act will shape not only the future of artificial intelligence in Europe but also the contours of global tech regulation.