Human Factors in Cybersecurity: Questions for Maria Cardow

Dec. 14, 2025
7 min read

Key Highlights

  • The real attack surface is people, not tools. The Qantas breach shows how one human decision or moment of confusion can bypass even sophisticated security controls.
  • Design security for real behavior, not “ideal” behavior. Cardow argues human error can’t be “trained out of existence,” so security must be built into processes and systems in ways that anticipate variability.
  • Unintentional insider risk is often a process + culture problem. Workarounds, shortcuts, and “urgent” clicks typically signal friction, missing context, or cultural disconnects — not malicious intent — so the fix is systemic, not punitive.
  • Shadow AI is the new human-factor accelerant. Employees reach for personal/external AI tools when enterprise options don’t fit their workflows, raising data leakage and legal risk and exposing misalignment between policy and how work actually happens.

The 2025 Scattered Spider-linked attack on Qantas exploited a third-party contact-center platform, compromising personal data for roughly 5.7–6 million customers. Exposed fields included names, email addresses, phone numbers, dates of birth, and frequent-flyer details — but not financial data, passport numbers, or login credentials. The breach underscores the reality that the most vulnerable point isn’t the firewall — it’s human behavior. 

Maria Cardow, the Chief Information Officer for LevelBlue, shares insights that underscore a critical shift in how organizations must approach cybersecurity: the most significant vulnerabilities are not rooted in technology, but in human behavior. Referencing incidents such as the Qantas breach, Cardow highlights how a single decision, a moment of confusion, or a lapse in awareness can undo even the most advanced technical protections. She challenges CIOs and CISOs to move beyond tool-centric strategies and recognize people as the primary variable in security architecture — something to be designed around, not trained out of existence.

Throughout the conversation, Cardow addresses overlooked cognitive biases, unintentional insider risk, the rise of Shadow AI, and the dangers of siloed teams. She emphasizes that most security failures stem from process gaps and cultural disconnects rather than malicious intent, advocating for psychologically safe environments where employees can admit mistakes and ask questions without fear. Effective security, she argues, comes from continuous, role-based education and collaboration between business, IT, and security units. Looking ahead to 2025 and beyond, Cardow urges leaders to become translators of risk, bridging people, process, and technology to create resilient, human-centric security cultures.

The Qantas breach highlighted how a single human interaction can override layers of technology. From your perspective, what does this incident reveal about the industry’s blind spots around human behavior?

What incidents like this consistently highlight is that the attack surface is fundamentally people. We tend to assume technology can solve every security problem if we just purchase the right tool. However, even the most sophisticated systems can be circumvented by a human decision, a moment of confusion, or a simple mistake. The industry's blind spot is treating human error as something to be trained out of existence rather than a variable to be accounted for in the architecture itself. We forget that we aren’t just protecting data; we are protecting people and their day-to-day work.

You’ve said there are “no tech problems, only people problems.” How should CIOs and CISOs reinterpret their security strategy when the primary variable is human decision-making rather than technical failure?

This philosophy is rooted in the realization that our most complex technical issues invariably trace back to human elements, whether habits, understanding, or choices. To reinterpret their strategy, leaders must approach cybersecurity problems with human-centric solutions. This means understanding that when you introduce people into the loop, you introduce greater variability. Security planning must be integrated earlier in the process, in the design phase, to account for how people actually interact with the technology. The goal is to create environments where employees are informed and empowered, viewing security as a shared value rather than a limiting checklist.

Traditional security models often assume predictable behavior, yet employees under pressure or lacking context make very different choices. What are the most overlooked cognitive biases or behavioral patterns that elevate insider risk?

The most overlooked element is that most of these incidents are not driven by malicious intent, but by a simple lack of awareness or context. People will often take shortcuts or use unapproved tools to get their jobs done, especially when under pressure, even if it means skipping a protocol or misconfiguring a system. The cognitive pattern at play often prioritizes immediate productivity over security measures that create friction. When an email looks genuinely urgent, people click the link, showing that the need to perform their task or respond to perceived urgency overrides their training.

Most insider incidents are unintentional rather than malicious. How can organizations better distinguish between negligent behavior, process failure, and cultural issues—and respond without creating a climate of distrust?

Organizations must first acknowledge that these incidents reflect more than just individual negligence; they signal gaps in security infrastructure, outdated processes, or cultural issues that need to be addressed. Since the individuals involved often use valid credentials, these risks easily go unnoticed. Distinguishing them requires leaders to stay close to the ground, understanding the tools people use and the obstacles they face. If people are resorting to workarounds, it's a signal that the process is failing them. The response should focus on illuminating those unknowns and addressing the systemic cause, fostering a culture where people feel safe admitting mistakes or asking questions, rather than being punished for errors.

As AI tools become embedded in workflows, employees are increasingly interacting with systems they don’t fully understand. Where do you see the biggest new human-factor risks emerging from AI adoption?

The most pressing new risk is Shadow AI. Employees often turn to external or personal AI tools, like those on their phones, because the approved enterprise solutions don’t meet their needs. While seemingly harmless, this behavior can introduce significant risks, including data leakage and legal liabilities. This emergence of Shadow AI signals a misalignment between security policy and how people actually work. Leaders must acknowledge the use of these unapproved tools and then seek to understand the motivation. If employees are seeking specific functionalities, the organization needs to ensure they have the right approved tools available.

Many CISOs say siloed teams are one of their biggest barriers to resilience. What practical steps can leaders take to break down cultural and operational silos between IT, security, and business units?

Silos are a major hidden risk because they allow critical security items to fall through the cracks — one team assumes another is covering a risk, and no one owns the gap. To break them down, leaders must be intentional about creating connectivity. This involves creating more opportunities for teams to connect early in the process, particularly in planning and design, so they can align architecture and how people will interact with new technology. Furthermore, leaders must lead security from the front, understanding the day-to-day realities and obstacles people face, which allows for better alignment and the proactive elimination of those operational gaps.

Security awareness programs have existed for decades, yet outcomes are uneven. What works when it comes to changing behavior, and what should organizations stop doing?

What works is shifting security from a one-time, generic training checklist to a shared organizational value and a community effort. Effective programs focus on continuous learning and behavioral reinforcement. This means providing regular, role-based training that evolves with the threat landscape, acknowledging that different employees need different tools and awareness based on their jobs. Organizations should stop treating security as a compliance box to check and instead encourage people to practice safe online behaviors, to keep learning, and to speak up when something looks off, creating a resilient culture where everyone takes accountability.

Looking ahead, what should security leaders prioritize in 2025–2026 to shift from reactive, control-heavy postures to proactive, collaborative, and human-centric resilience models?

Security leaders should prioritize being the translators of risk. They need to move beyond technical complexity to connect technology to people, process, and purpose. The focus must be on fostering a culture of collaboration and psychological safety, where people feel comfortable questioning, sharing ideas, and admitting mistakes, because diverse thinking is critical to countering threat actors who are constantly finding new angles. They must also bridge the gap between technical teams and the C-suite to ensure decisions aren't just dictated from the top, allowing them to illuminate unknowns and ask the right questions to build genuine, human-centric resilience.

About the Author

Steve Lasky

Contributor

Steve Lasky is a multiple-award-winning journalist with 45 years in professional journalism and 35 years in the security media industry. He is currently the Group Content Director for the Endeavor Business Security Media Group, the world's largest security media entity, serving more than 190,000 security professionals in print, interactive media and events. The group includes Security Executive, Security Business and Locksmith Ledger International magazines, as well as SecurityInfoWatch.com, the most visited security web portal in the world (www.securityinfowatch.com).

Steve helped launch two of the industry's premier end-user publications over the last three decades. In the early 2000s, his editorial vision helped create the first serious buzz about the convergence of physical and logical security, not only from a technology standpoint but also from an enterprise business management perspective. Dealing with real issues like compliance, metrics, and business drivers for security, Security Executive magazine is a top read for both the CSO and CISO communities.

Steve was a 26-year member of ASIS and served on the ASIS Physical Security Standing Committee for nine years. He has also been instrumental in several successful peer-to-peer events, including Secured Cities, SecureWorld Expos, and Global Security Operations 2010 (GSO 2010) conferences. In 2007, Steve was awarded the International Association of Professional Security Consultants' annual Charles A. Sennewald Award for Distinguished Service to the security industry. Steve is in demand as a moderator and speaker at security events around the country.

He is a former editor and writer with the Atlanta Journal-Constitution, Marietta Daily Journal, and Tampa Times and a correspondent for WEDU in Tampa, Florida. Steve is a graduate of the University of South Florida in Tampa and did his post-graduate work at Nicholls State University.
