Artificial intelligence (AI) has slipped into enterprise security faster than most organisations can comfortably admit. What started as a proof of concept here and a pilot project there is now deeply embedded in detection, response, identity, fraud prevention, and developer tooling.
Yet for most CISOs, AI is a domain they are expected to control without ever having been given the visibility or understanding needed to do so.
This disparity is becoming more pronounced as attackers move faster and face fewer constraints. They are no longer simply automating known attacks. Instead, they are weaponising generative AI (GenAI) itself, from deepfake-based social engineering to AI-assisted malware development and reconnaissance.
This is all happening while governance lags, creating a widening gap between how AI is used against enterprises and how it is controlled within them.
Nowhere is this tension clearer than where GenAI and large language models (LLMs) enter the enterprise. Security organisations face a new set of risks: prompt injection, sensitive data leakage, model manipulation, and the unintentional disclosure of proprietary information through third-party LLMs.
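To make the last of those risks concrete, the sketch below shows what a minimal pre-flight screen on outbound prompts might look like. It is purely illustrative: the function names are hypothetical, the patterns are simplistic, and a real deployment would rely on dedicated DLP tooling and policy engines rather than a hand-rolled regex list.

```python
import re

# Hypothetical, deliberately minimal patterns; real DLP controls go far beyond this.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9]{16,}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in an outbound prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def send_to_llm(prompt: str) -> str:
    """Block (or redact and log) prompts before they reach a third-party model."""
    findings = screen_prompt(prompt)
    if findings:
        raise ValueError(f"Prompt blocked, sensitive data detected: {findings}")
    # The actual call to an external LLM provider would go here.
    return "forwarded to external LLM"

if __name__ == "__main__":
    print(send_to_llm("Summarise this meeting"))                     # allowed
    print(screen_prompt("Customer IBAN is DE44500105175407324931"))  # flagged: ['iban']
```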
At the same time, executives are being promised autonomous SOCs, AI copilots, and self-healing security operations, often without clarity on what is genuinely achievable today, and what remains aspirational.
Europe adds a further layer of complexity. Unlike in other markets, AI adoption in Europe is happening within a fast-moving regulatory environment. The EU AI Act, alongside NIS2, DORA, and other regulations, is prompting businesses to consider not only whether AI improves security outcomes, but also whether it can be controlled, audited, and explained.
This is pushing CISOs into uncertain territory: balancing innovation with accountability, speed with compliance, and experimentation with long-term operational risk.
At the Barcelona Cybersecurity Congress, the discussion is rooted in reality. It’s not about the promise of AI, or the fear of a machine apocalypse. It’s about how AI and machine learning are being used across European businesses right now: where they’re succeeding, where they’re creating new attack vectors, and where the security side of the equation has not yet caught up. The reality: AI security is an operational problem, and it is happening now.
Top trends
- Attackers are operationalising AI faster than defenders can govern it: Threat actors are using generative AI to scale phishing, fraud, and reconnaissance, while many organisations still lack clear AI usage policies internally.
- LLM security is emerging as a distinct risk category: Prompt injection, data leakage, and model misuse are forcing security teams to rethink traditional application security controls for GenAI systems.
- Regulation is shaping architecture decisions: The EU AI Act is already influencing how enterprises design, deploy, and document AI-driven security capabilities, often before the law is fully in force.
Regulatory watch (EU / Spain)
As of this month, the EU AI Act is in a critical phase of implementation, with phased obligations already in force and major milestones approaching. The final set of obligations for high-risk AI systems, covering risk management, data quality, technical documentation, logging, human oversight, robustness, cybersecurity, and conformity assessments, is fully defined and becomes enforceable on 2 August 2026.
The European Commission missed its 2 February 2026 deadline for publishing guidelines on high-risk AI systems under Article 6(1), leaving providers without key guidance on classification rules, such as how to assess safety components in products. In Spain, regulators such as AESIA and the AEPD are prepared to enforce the EU AI Act, particularly following national rules on consent for deepfakes and the Spanish AI law.
CISO voice
“What we witness in our clients is that AI has become both their biggest accelerator and their biggest blind spot. They are using it everywhere in security operations, but governance is still playing catch-up. The question isn’t whether it works; it’s whether they can explain, control, and trust it when regulators or auditors start asking questions.”
- Yiannis Kanellopoulos, CEO & Founder, code4thought
Innovation / startup spotlight
This month, Adaptive ML is in the spotlight. The Paris-based startup is working to streamline reinforcement learning from human feedback (RLHF). Its platform enables continuous improvement of language models through real-time user interactions, allowing businesses to personalise AI outputs and improve decision-making. It also lets businesses train and adapt language models using their own user feedback, simplifying the otherwise complex process of fine-tuning and preference optimisation and reducing the time it takes to get AI-powered products to market.
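To give a sense of the mechanics involved, the sketch below shows a direct-preference-optimisation-style loss for a single preference pair (a response a user preferred versus one they rejected). This is a generic, purely illustrative example, not a description of Adaptive ML's product or training objective; all names and values are made up.

```python
import math

def dpo_style_loss(logp_chosen: float, logp_rejected: float,
                   ref_logp_chosen: float, ref_logp_rejected: float,
                   beta: float = 0.1) -> float:
    """Preference-optimisation loss for one (chosen, rejected) response pair.

    The policy is rewarded for widening its log-probability margin between the
    preferred and rejected responses, measured against a frozen reference model.
    """
    policy_margin = logp_chosen - logp_rejected
    ref_margin = ref_logp_chosen - ref_logp_rejected
    # -log(sigmoid(beta * (policy_margin - ref_margin)))
    return -math.log(1.0 / (1.0 + math.exp(-beta * (policy_margin - ref_margin))))

# Example: the policy already prefers the chosen response more strongly than the
# reference model does, so the loss falls below log(2) (~0.69).
print(dpo_style_loss(logp_chosen=-12.0, logp_rejected=-14.0,
                     ref_logp_chosen=-13.0, ref_logp_rejected=-13.5))
```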
Barcelona Cybersecurity Congress update
AI and cybersecurity will be a central pillar at Barcelona Cybersecurity Congress, with sessions exploring GenAI risk, regulatory readiness, and real-world enterprise adoption. Expect practitioner-led discussions that move beyond hype and focus on what is working (and, importantly, what still isn’t) across Europe.
Barcelona Cybersecurity Congress 2026
Dates: 3–5 November
Location: Barcelona
Co-located with: Smart City Expo World Congress
CONNECTING EUROPE’S CYBERSECURITY ECOSYSTEM