
RSAC 2026: AI Security Needs Skills, Not Headcount

The cybersecurity workforce problem used to be described mostly as a shortage of people. That is still true. It is just no longer the whole truth.

RSA Conference 2026 surfaced a harder problem: security teams are being asked to operate in an environment where AI accelerates attack cycles, AI agents multiply non-human identities, business teams deploy new AI workflows faster than governance can mature, and boards expect CISOs to explain all of it without sounding like they are holding together the aircraft with compliance tape.

CSO Online’s RSA 2026 recap described the workforce discussion as one of the conference’s unresolved themes, including high agent-to-human ratios, AI-shaped R&D teams, faster adversarial operations, and warnings that CISOs are being handed more responsibility without enough structural support.

The Shortage Is Becoming a Skills Architecture Problem

The global cybersecurity workforce gap remains large. ISC2’s 2024 workforce analysis put the active global cyber workforce at 5.5 million and the global workforce gap at 4.8 million. That is the old problem: not enough qualified people for the amount of security work that exists.

The newer problem is that the work itself is changing. ISC2’s 2025 Cybersecurity Workforce Study reported that critical skills needs are increasingly outweighing simple headcount concerns, with AI cited as the most pressing skills need. The same study describes AI adoption across security operations and emphasizes the need for AI-related skills in threat detection, risk assessment, governance, privacy, data integrity, and regulatory compliance.

That matters because AI does not remove the need for cybersecurity professionals. It changes what those professionals need to supervise. Teams need people who can evaluate AI-driven alerts, secure model-connected data pipelines, govern agent identities, review data exposure, understand automation limits, and explain residual risk to executives without burying them under architecture diagrams from hell.

CISOs Are Being Asked to Govern What the Business Is Still Learning to Define

The CISO role was already swollen before AI arrived: incident response, cloud security, vendor risk, privacy adjacency, compliance, resilience, board reporting, data protection, and the endless joy of explaining why “we have MFA” is not a complete security strategy.

AI intensifies that pressure because it touches almost every business function. It arrives through procurement, employee experimentation, SaaS features, developer tools, analytics workflows, customer-facing products, and executive mandates. Security is expected to enable the business, reduce exposure, manage identity sprawl, answer board questions, and keep regulators from sharpening the silverware.

The problem is not that CISOs should be excluded from AI governance. They have to be central to it. The problem is expecting them to absorb AI risk alone. A CISO cannot personally review every AI use case, every model vendor, every prompt path, every retrieval account, every data pipeline, and every agent permission. The organization needs shared accountability and better evidence.

Burnout Is a Business Risk

The pressure is not theoretical. Proofpoint’s 2025 Voice of the CISO report found that 66% of CISOs report facing excessive expectations and 63% have experienced or witnessed burnout in the past year. It also found that GenAI has become both a strategic priority and a data-loss concern, with CISOs expected to enable adoption while controlling misuse.

Burnout is often treated as a personal resilience issue. That framing is too convenient. Burnout is also a business risk. When security leaders churn, long-term programs reset. Risk decisions lose continuity. Institutional knowledge walks out the door. The next CISO inherits the same problems with fewer illusions and a fresher slide template.

AI makes the stakes higher because the business cannot simply pause adoption while security builds the perfect model. Attackers are automating. Employees are experimenting. Vendors are embedding AI features. Boards are asking questions. The CISO is expected to say yes safely, not just no loudly.

Agent-to-Human Ratios Change the Work

The RSA discussion around agent-to-human ratios should not be treated as a precise workforce forecast. It is more useful as a warning about supervision at scale.

A small security team may soon be responsible for far more machine actors than human users: AI agents, service accounts, retrieval services, automation scripts, API clients, workflow bots, model gateways, and data connectors. These actors can retrieve information, summarize records, trigger actions, and interact with other systems. They do not fit neatly into old access-review rituals designed for people with managers and job titles.

That changes the work. Analysts should not spend hours reconstructing basic database activity from fragmented logs when automated monitoring should already provide the evidence. Engineers should not have to manually chase every service account tied to an AI workflow. CISOs should not defend AI governance with incomplete data-access visibility. Human judgment remains essential, but it should be applied to decisions that require judgment, not wasted on archaeological log reconstruction.
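One way to picture supervision at scale is a periodic review over an inventory of machine actors. The sketch below is purely illustrative (the identity names, fields, and review interval are assumptions, not a DataSunrise schema): it flags non-human identities whose access review is overdue, the kind of check that should run automatically rather than live in someone's spreadsheet.

```python
from datetime import date

# Hypothetical inventory of machine actors; names and fields are
# illustrative only, not a real product schema.
identities = [
    {"name": "rag-retriever-svc", "kind": "service_account", "last_review": date(2025, 3, 1)},
    {"name": "billing-agent", "kind": "ai_agent", "last_review": date(2026, 1, 15)},
    {"name": "etl-runner", "kind": "automation", "last_review": date(2024, 11, 2)},
]

REVIEW_INTERVAL_DAYS = 90  # assumed policy, not a standard


def stale_identities(inventory, today):
    """Return names of machine identities whose access review is overdue."""
    return [
        i["name"]
        for i in inventory
        if (today - i["last_review"]).days > REVIEW_INTERVAL_DAYS
    ]


print(stale_identities(identities, date(2026, 4, 1)))
# ['rag-retriever-svc', 'etl-runner']
```

The point is not the code itself but the shape of the work: the review loop runs continuously, and humans only see the exceptions.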

The Practical Response Is Better Evidence and Less Manual Drag

Security teams cannot hire their way out of this problem alone. They need to remove avoidable work from the system.

That means automated sensitive data discovery instead of manual inventories. Centralized activity monitoring instead of fragmented database logs. Dynamic masking instead of endless one-off exceptions. Audit trails that support investigation and compliance without forcing analysts to rebuild timelines from scratch. Runtime visibility into non-human identities instead of trusting that permissions still match business intent.
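To make "dynamic masking instead of endless one-off exceptions" concrete, here is a minimal sketch of the idea: sensitive fields are redacted at read time while the stored data stays intact. The field names and masking rules are hypothetical examples, not DataSunrise configuration.

```python
import re

# Toy masking rules keyed by field name; real policies would be far
# richer (roles, context, data classifications). These are assumptions.
MASK_RULES = {
    "email": lambda v: re.sub(r"^[^@]+", "****", v),  # hide local part
    "ssn": lambda v: "***-**-" + v[-4:],              # keep last four digits
}


def mask_row(row, rules=MASK_RULES):
    """Apply masking rules to a result row at read time."""
    return {k: rules[k](v) if k in rules else v for k, v in row.items()}


row = {"id": 7, "email": "jane@example.com", "ssn": "123-45-6789"}
print(mask_row(row))
# {'id': 7, 'email': '****@example.com', 'ssn': '***-**-6789'}
```

Because the rule lives in one place, an analytics or AI workflow gets masked values by default instead of accumulating per-user exceptions.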

Better tooling will not eliminate the need for skilled professionals. It should make them more effective. The goal is to shift human effort from repetitive evidence gathering to higher-value decisions: which risks matter, which access should be constrained, which AI workflows can be safely enabled, and when an automated system needs human supervision.

What This Looks Like in Practice

An analyst receives an alert that an AI-related service account accessed a sensitive customer table. To investigate, they need identity context, query history, affected fields, normal behavior, policy history, and whether the returned data was masked. In a weak environment, they spend hours pulling logs from separate systems and still cannot tell a clean story.

In a stronger environment, the evidence is already connected. The analyst can see what happened, decide whether it was expected, escalate if needed, and preserve a defensible audit trail. That is not replacing the analyst. That is rescuing the analyst from avoidable misery.
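"Evidence already connected" can be sketched as a simple join: identity context plus query history for one actor, rolled into a single report the analyst can act on. All records and field names below are toy assumptions, not a real monitoring schema.

```python
# Hypothetical identity context and query log; illustrative data only.
identity_context = {
    "svc-rag-01": {"owner": "data-platform", "purpose": "RAG retrieval"},
}

query_log = [
    {"actor": "svc-rag-01", "ts": "2026-03-02T09:14Z",
     "table": "customers", "fields": ["name", "email"], "masked": True},
    {"actor": "svc-rag-01", "ts": "2026-03-02T09:15Z",
     "table": "customers", "fields": ["ssn"], "masked": False},
]


def build_report(actor, log, context):
    """Assemble one alert's evidence: who the actor is, what it touched,
    and which accesses returned unmasked sensitive data."""
    events = [e for e in log if e["actor"] == actor]
    needs_review = [e for e in events if not e["masked"]]
    return {
        "actor": actor,
        "context": context.get(actor, {}),
        "events": events,
        "needs_review": needs_review,
    }


report = build_report("svc-rag-01", query_log, identity_context)
print(len(report["needs_review"]))  # 1 unmasked access to escalate
```

With the evidence pre-joined, the analyst's remaining task is the one that actually needs judgment: deciding whether that unmasked access was expected.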

Where DataSunrise Fits

DataSunrise is relevant to the workforce and CISO challenge because it reduces manual drag around data-layer evidence. Sensitive Data Discovery helps identify where regulated and high-value data lives. Activity Monitoring and Data Audit provide visibility into how humans, applications, and non-human identities interact with data. Dynamic Data Masking reduces unnecessary exposure in analytics, development, testing, and AI-adjacent workflows. Compliance Manager supports reporting and policy enforcement. DataSunrise does not replace security teams. It gives them the evidence and controls needed to survive AI-era scale without turning every investigation into a manual excavation project.

Protect Your Data with DataSunrise

Secure your data across every layer with DataSunrise. Detect threats in real time with Activity Monitoring, Data Masking, and Database Firewall. Enforce Data Compliance, discover sensitive data, and protect workloads across 50+ supported data sources spanning cloud, on-prem, and AI systems.
