The RSAC 2026 Conference in San Francisco featured a pivotal panel where security executives from Google Cloud, Vodafone, and PayPal debated one of the most pressing questions in modern cybersecurity: Do AI deployments need a 'human in the loop' or will people merely slow things down?
The panel, titled 'From Threat to Strategy: The CISO's Playbook for the AI Revolution,' was moderated by The Wall Street Journal's James Rundle. Participants included Francis deSouza, chief operating officer and president of security products at Google Cloud; Emma Smith, global chief information security officer at Vodafone; and Shaun Khalfan, senior vice president and CISO at PayPal. The discussion centered on how security leaders can best adapt to the rapidly evolving AI landscape, particularly regarding the role of human oversight in AI-powered security tools.
The Core Debate: Human in the Loop vs. Human on the Loop
Traditionally, the 'human in the loop' concept has been a cornerstone of AI safety. It ensures that critical decisions are reviewed or validated by a person before being executed, thereby reducing the risk of errors or malicious manipulation. However, as AI agents become more autonomous and attacks become faster and more sophisticated, this model is facing increasing scrutiny.
Francis deSouza of Google Cloud argued that human-led defenses are often too slow to counter agent-led cyberattacks, which can execute thousands of malicious actions per second. 'We are moving toward agent-led defense,' deSouza stated, emphasizing that automation must keep pace with adversarial AI. Google itself now generates 50% of its code with AI assistance, showing a strong internal commitment to AI-driven processes.
Emma Smith of Vodafone strongly agreed. 'I totally agree that a human in the loop is not scalable if we think about our traditional security controls,' she said during the panel. 'The ones that rely on human behaviors are the ones that we don't rely on the most. Let's face it, we rely on the ones that are technical and automated and that we can prove over time. A human in the loop is not the solution for the long term, certainly on scaled operations, and I also worry that it will give a boring job to the human in the loop.'
Instead, Smith advocated for a 'human on the loop' approach, where humans provide strategic oversight and insights derived from AI outputs, rather than micromanaging every decision. 'It's just not going to scale,' she added. Vodafone has developed a heat map that evaluates the confidence level of AI outcomes alongside the potential risk impact. For very high-risk use cases, the company would only proceed with a human firmly in the loop, but for the vast majority of security operations, automation is preferred.
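The heat map Smith describes is essentially a two-axis decision rule: the AI outcome's confidence on one axis, its potential risk impact on the other, with a human pulled into the loop only at the high-risk end. A minimal sketch of that logic, with illustrative thresholds and tier names that are assumptions rather than Vodafone's actual values:

```python
from enum import IntEnum

class Risk(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    VERY_HIGH = 4

def review_mode(confidence: float, risk: Risk) -> str:
    """Map an AI outcome's confidence and risk impact to an oversight mode.

    Thresholds here are hypothetical, chosen only to show the shape of
    a confidence-vs-risk heat map.
    """
    if risk is Risk.VERY_HIGH:
        return "human-in-the-loop"   # a person approves before any action
    if confidence >= 0.9 and risk <= Risk.MEDIUM:
        return "fully-automated"     # act immediately, log for later audit
    return "human-on-the-loop"       # act, but surface for strategic review

print(review_mode(0.95, Risk.LOW))        # fully-automated
print(review_mode(0.95, Risk.VERY_HIGH))  # human-in-the-loop
```

The point of the structure is that the human-in-the-loop cell shrinks to the rare, very-high-risk cases, while the bulk of operations run automated or with after-the-fact oversight.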
AI Adoption Strategies at Leading Organizations
The panel also highlighted how each organization is integrating AI into its security operations, showcasing diverse approaches that still share common principles.

Vodafone's solution, called AI Booster, is a centralized machine learning platform built on Google Cloud technology. It provides a reusable codebase for deploying pre-trained models and custom tools at scale. The platform tracks business value and allows the privacy engineering team to enforce guardrails on each use case. Smith noted that this top-down approach was essential for ensuring safe, ethical, and responsible AI deployment across the company.
PayPal, which processes over a billion transactions monthly, uses AI to detect fraud in real time. Shaun Khalfan emphasized that all AI initiatives must be wrapped in a 'data security' framework. 'When we think about our key AI principles, it's data and security. It's privacy, it's transparency, it's explainability,' he said. PayPal ranks its AI models into tiers based on data sensitivity, establishes specific use cases, and applies controls to prevent tampering and prompt injection attacks. The company also works closely with the Coalition for Secure AI (CoSAI), an industry-wide initiative that provides white papers and documentation to promote secure AI deployments.
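The tiering approach Khalfan describes (rank models by data sensitivity, then attach the controls that tier requires) can be sketched as a simple lookup. The tier numbers, control names, and model name below are hypothetical; PayPal's actual scheme is not public:

```python
from dataclasses import dataclass, field

# Illustrative tiers only: higher tier = more sensitive data = more controls.
CONTROLS_BY_TIER = {
    1: {"access-logging"},                                   # public data
    2: {"access-logging", "input-sanitization"},             # internal data
    3: {"access-logging", "input-sanitization",
        "prompt-injection-filtering", "tamper-detection"},   # sensitive data
}

@dataclass
class AIModel:
    name: str
    data_sensitivity: int              # 1 = public .. 3 = sensitive
    controls: set = field(default_factory=set)

def apply_controls(model: AIModel) -> AIModel:
    """Attach the baseline controls required by the model's data tier."""
    model.controls |= CONTROLS_BY_TIER[model.data_sensitivity]
    return model

fraud_model = apply_controls(AIModel("fraud-scorer", data_sensitivity=3))
print(sorted(fraud_model.controls))
```

The design choice worth noting is that controls are derived from the data tier rather than assigned per model, so a model cannot be deployed against sensitive data without inheriting the anti-tampering and prompt-injection protections of that tier.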
Google Cloud's approach is inherently tied to its massive AI infrastructure. DeSouza noted that the shared data security model between AI vendors and customers remains 'something of a mess,' particularly regarding prompt injection risks that could leak sensitive corporate documents. He stressed the need for better collaboration and standardized security practices.
Wider Context: Challenges of the AI Revolution in Security
The panel occurred against a backdrop of growing concerns about AI-related security risks. Recent studies indicate that many organizations have yet to find success in their AI security deployments. The introduction of large language models (LLMs) has not only offered new capabilities but also introduced or exacerbated vulnerabilities, including prompt injection, data leakage, and the rise of 'vibe coding'—where developers rely heavily on AI-generated code without adequate human review. This trend makes the CISO's job more complex, as organizations may lean too heavily on AI without the right oversight mechanisms.
The debate about human involvement also touches on the nature of modern cyber threats. Agent-led attacks, which use AI to autonomously probe and exploit systems, are becoming more common. Defending against these requires speed and scalability that human operators simply cannot match. Vodafone's heat map approach offers a middle ground: automation for routine tasks, with human escalation only when the risk is exceptionally high.
Alexandra Rose, director of government partnerships and the Counter Threat Unit at Sophos, provided an external perspective. 'I think it's important that security is not the world of no,' she said in a separate interview. 'It's how do we get to yes, and how do we get to a yes in a way that we're protected?' This sentiment underscores the need for security teams to embrace innovation while maintaining robust controls.
Collaboration and Data Security as Key Pillars
Both Smith and Khalfan emphasized that collaboration is essential for safe AI deployment. Khalfan highlighted CoSAI's role in facilitating stakeholder collaboration and providing best-practice documentation across multiple workstreams. 'Part of this too involves collaborating with the larger ecosystem,' he said, noting that sharing knowledge helps the entire industry raise its security baseline.
Data security remains the fundamental backbone. PayPal's tiered model ranking system ensures that sensitive data is protected with appropriate controls, whether against tampering or unauthorized access. Similarly, Vodafone's AI Booster platform includes built-in privacy engineering capabilities that allow interventions at each stage of a use case's lifecycle.
The panel concluded with a consensus that while humans will always play a critical role in cybersecurity strategy, the day-to-day operational demands of AI-powered security require a move toward automation. The 'human on the loop' model, where humans monitor and guide AI systems from a strategic level, appears to be the emerging best practice for organizations looking to scale their defenses without sacrificing safety.
Source: Dark Reading News