Why Cyber-Physical over Operational Technology?
I am a lady of many opinions... to the point where it is a running joke among my colleagues and me that pigs will fly if I don't chime in on a conversation with no clear answer.
It is important to highlight that I am no contrarian: I don't take a position and argue for it simply because it is the opposite of what everyone else is doing. Instead, I like to logically connect concepts in my head and use those concepts to understand and explore the implications for new and novel challenges.
This brings me to my current opinion: the term "Operational Technology" has stopped being helpful, and "Cyber-Physical System" is the better framing.
What do I mean by this?
Well, first of all, I should probably explain the context in which I am talking. Something can only be helpful or unhelpful when put alongside a goal or objective, which provides the context in which it can be compared to other things.
The Context
Let me set the stage.
I'm a Cybersecurity Consultant by trade - and a bit of an oddball within that world. Unlike many of my peers who specialise in either breaking things (technical assurance) or building things securely (security engineering), I live in both camps. That means one day I might be elbows-deep in a firmware image or running a penetration test on a safety-critical system, and the next, I'm in a workshop helping a client refine their system boundary diagrams for a Threat Analysis and Risk Assessment (TARA).

But that's not the whole story. As a consultant, I wear many hats - and some of them aren't even technical. I'm often brought into client meetings to support the sales cycle. Sometimes that means solution architecture, translating security needs into delivery scopes and resource plans. Other times, it's about building trust - showing up with confidence, speaking in the customer's language, and reassuring them that yes, we've done this before, and no, you don't need to worry about how the sausage gets made. (That bit gets passed along to our long-suffering Account Managers.)

I work across industries: automotive, energy, rail, manufacturing - any sector where digital systems affect the physical world. In these spaces, "cybersecurity" doesn't just mean protecting data, but protecting outcomes. Safety. Availability. Functionality. That adds layers of complexity to every conversation - and it means that language matters more than ever.

Which brings us to the point of this piece. You see, when you're trying to deliver assurance across diverse domains and align with technical stakeholders, risk managers, and procurement all in one breath, you start to see how some of the terms we use, like "Operational Technology," aren't pulling their weight anymore. Because at the end of the day, my goals, and those of many in similar roles, come down to three things:
- Sell Security and Operational Resilience services to our customers. Not just as a product, but as a long-term investment.
- Interpret the customer's world in their own language - whether that's regulatory, technical, or domain-specific - and map it to the internal capabilities we bring to bear.
- Deliver assurance across increasingly interconnected, cross-domain, and safety-relevant systems.
And with those goals in mind, the language we choose, particularly the labels we use to describe whole categories of systems, becomes a vital part of how we communicate, build trust, and deliver value.
The Contender
Introducing... Cyber-Physical Systems

Let's talk about the term I'd like to nominate as a more useful framing: Cyber-Physical System, or CPS. This isn't a new term. It's well-established in academia and increasingly referenced in systems engineering, embedded software, and critical infrastructure discussions. But in industry, particularly within commercial cybersecurity and consulting contexts, it still plays second fiddle to the much more widespread "Operational Technology" (OT). And honestly? That's a problem. Where "Operational Technology" defines a system by what it isn't (i.e., not traditional IT), "Cyber-Physical System" makes an affirmative, descriptive statement: this is a system in which computation and networking are tightly coupled with, and control, physical processes.
That distinction is more than semantics. It's foundational. Because when you're working with the kinds of systems commonly lumped under "OT" - say, the braking logic in an EV, a robotic sorting system, or a protective relay in a power distribution grid - you're not just dealing with networked devices. You're dealing with systems that have feedback loops, real-time constraints, safety interlocks, and consequences that exist far outside a log file or a database. The CPS framing does several important things that OT simply doesn't:
1. It reflects how these systems actually behave

CPS encourages you to think systemically. It brings in concepts like control theory, feedback loops, physical actuation, and failure propagation. That mindset is essential when you're doing meaningful risk modelling or trying to understand the real-world impact of a compromised or malfunctioning system.
2. It helps clarify the skills and disciplines required

Cybersecurity in these environments isn't just "pen testing but spicy." It requires people who can:
- Interpret electrical schematics
- Understand how the system/component/product is expected to behave during its entire security lifecycle, including End of Line, Standard Runtime, In-field Diagnostics, and End of Life
- Understand physical failure modes and control loop stability
- Reason about the interaction between software, timing, and mechanical motion
You also need engineers who can write Risk Assessments and Method Statements (RAMS) - not because a health and safety officer demands it, but because you need it to avoid causing unintentional harm during testing. When you're testing a system that can move, heat up, go bang, or shut down an operational line, it's not about being risk-averse - it's about being risk-aware.
3. It forces a more nuanced approach to Denial of Service

In classic IT terms, DoS is a binary concept: the system is up or the system is down. But in CPS, availability is a spectrum.
- Is the process slowed?
- Is there degraded control?
- Has the safety margin shrunk?
- Are humans now being relied on to do something a system normally would?
These questions are critical, especially when the "failure" may not show up as a 500 error but as a subtle behavioural deviation that only becomes catastrophic under the right (or wrong) conditions.
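To make the spectrum idea concrete, here is a minimal sketch - every signal name and threshold below is invented for illustration, not drawn from any real system - that classifies a control process's availability by degree of degradation rather than as up/down:

```python
from enum import Enum

class Availability(Enum):
    NOMINAL = 0           # process within expected bounds
    SLOWED = 1            # throughput reduced, control still sound
    DEGRADED_CONTROL = 2  # tracking error growing, safety margin shrinking
    MANUAL_FALLBACK = 3   # humans doing what the system normally would
    DOWN = 4              # classic "DoS" - the only state a binary IT model captures

def classify(throughput_ratio: float, tracking_error: float,
             operator_in_loop: bool, responding: bool) -> Availability:
    """Map observed process signals onto an availability spectrum.

    All thresholds are illustrative placeholders, not tuned values.
    """
    if not responding:
        return Availability.DOWN
    if operator_in_loop:
        return Availability.MANUAL_FALLBACK
    if tracking_error > 0.2:    # control loop no longer holding its setpoint
        return Availability.DEGRADED_CONTROL
    if throughput_ratio < 0.9:  # process running, but slower than specified
        return Availability.SLOWED
    return Availability.NOMINAL
```

The point is not the thresholds but the shape: three of these states look identical to "up" to a monitor that only asks whether the service is responding.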
4. It supports a more future-proof model

"OT" still evokes beige PLCs in dusty cabinets. But the modern reality includes:
- Embedded Linux
- Adaptive systems
- Edge AI
- Cloud-connected diagnostics
- Dynamically reconfigurable hardware
The CPS framing gives us a better language to describe this increasingly hybrid world - where the physical and digital are deeply intertwined, and the traditional OT/IT divide just doesn't hold up anymore.
5. It transforms how we approach Assurance

And perhaps most importantly, it changes the very nature of what it means to do Assurance. In a CPS context, good Assurance isn't a solo cybersecurity activity. It's a team sport that draws on systems engineering, control systems, safety engineering, human factors, and operational expertise. You're not just checking for CVEs, you're assessing whether a system continues to do its job safely, predictably, and securely under stress, attack, or misconfiguration. That means:
- Writing and working to RAMS as standard practice
- Creating test plans that simulate realistic fault conditions, not just input fuzzing
- Collaborating with cross-functional teams to understand acceptable failure states
- Establishing shared language between safety and security requirements
- Being able to justify, not just detect, risks - especially when they cross domain boundaries
In other words, CPS elevates Assurance from a “checklist exercise” to a holistic, risk-informed process that reflects the reality of what these systems are, and how they fail.
From Checklists to Justification: The Rise of Structured Assurance
In traditional IT security, and even in legacy OT contexts, checklists were often seen as sufficient. Not because they offered deep insight, but because the systems they were applied to were assumed to be static, simple, and bounded. Risks were treated as discrete and predictable, so assurance followed a kind of binary logic:
- Are default credentials removed?
- Is network segmentation in place?
- Are patches up to date?
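That binary logic can be caricatured in a few lines of Python. The control names mirror the examples above; the model is deliberately naive, because that naivety is exactly the point:

```python
# A deliberately naive model of checklist-style assurance: each control
# is a named boolean, and "assured" just means every box is ticked.
checklist = {
    "default_credentials_removed": True,
    "network_segmentation_in_place": True,
    "patches_up_to_date": False,
}

def checklist_assured(controls: dict[str, bool]) -> bool:
    # Pure pass/fail: no notion of context, interaction between controls,
    # or degraded states - the mindset this section critiques.
    return all(controls.values())
```

Nothing in this model can express "the control exists but no longer works in this configuration," which is precisely where the approach collapses.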
This mindset worked, or at least seemed to, because it reflected the mental model, not the reality, of those systems. As long as environments didn’t change much and the threats remained conventional, the illusion held. But as systems have evolved, that model has collapsed under its own weight.
Cyber-Physical Systems (CPS), particularly when integrated into larger System-of-Systems (SoS), highlight where these axiomatic, checklist-based approaches begin to struggle and, in many cases, fail outright.
These environments exhibit:

- emergent behavior driven by dynamic interactions
- contextual safety and operational risks that defy binary classification
- interdisciplinary dependencies across software, hardware, human factors, and the physical world
- cross-boundary interactions where cause and effect aren't always local or immediate
In these systems, assurance must move beyond box-ticking and adopt a structured, risk-informed approach.
That’s where explicit and structured Assurance Cases come in, particularly those based on:
- CAE (Claims, Arguments, Evidence)
- or GSN (Goal Structuring Notation)
These methods are more than documentation frameworks. They enable:
- Clear articulation of why a system can be trusted, not just what controls exist
- Integration of safety, security, and engineering evidence into a single coherent argument
- Support for design-time assurance, not just after-the-fact validation
- Stakeholder alignment around acceptable risks, safety margins, and operational constraints
- Scalable reasoning across both individual CPS and complex, interconnected SoS environments
- Explicit consideration of the types of evidence and arguments that compel different stakeholders (regulators, investors, customers, senior management, etc.)
They also support a key shift: from reactive compliance ("did we do the checklist?") to proactive justification ("can we show this system is acceptably safe and secure in context?").
This is especially vital in domains where failure doesn't just mean downtime: it could mean injury, financial loss, or reputational damage. Threats are often subtle, sustained, and safety-aware, and systems are interacting, adapting, and often outliving their original threat models.
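As an illustration only - real CAE and GSN tooling is far richer, and notation details vary - the core CAE idea of claims bottoming out in evidence can be sketched as a small tree with a completeness check. The actuator scenario and the evidence names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    description: str  # e.g. a test report, analysis, or review record

@dataclass
class Claim:
    statement: str
    # A claim is refined into sub-claims and/or backed by direct evidence.
    subclaims: list["Claim"] = field(default_factory=list)
    evidence: list[Evidence] = field(default_factory=list)

def fully_supported(claim: Claim) -> bool:
    """A claim is supported if it has direct evidence, or if it has
    sub-claims and every one of them is supported in turn."""
    if claim.evidence:
        return True
    return bool(claim.subclaims) and all(fully_supported(c) for c in claim.subclaims)

# Illustrative fragment of an assurance case for a hypothetical actuator.
case = Claim(
    "The actuator fails safe under network loss",
    subclaims=[
        Claim("Watchdog halts motion within 50 ms of losing heartbeat",
              evidence=[Evidence("HIL test report TR-042 (hypothetical)")]),
        Claim("Safe-state entry cannot be blocked by application software"),
    ],
)
```

A real assurance case would also capture the argument itself (why the sub-claims are sufficient), plus context and assumptions; the sketch only shows the structural discipline of refusing to call a claim supported until every branch reaches evidence.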
As we move from the language of "Operational Technology" to the framing of Cyber-Physical Systems, and beyond that to SoS thinking, we’re not just updating terminology. We’re transforming our assurance models to match the real structure and behavior of modern systems.
Structured assurance is what allows us to do that responsibly, credibly, and with confidence.
Threat Actor Motivations: Why Language Matters
When considering cybersecurity, understanding threat actor motivations provides critical context. Traditional Operational Technology (OT) frameworks often lead organizations to narrowly focus on preventing unauthorized access or maintaining basic operational availability. However, threat actors targeting cyber-physical systems (CPS) frequently pursue goals far beyond simple disruption or data theft - they might aim to subtly degrade safety systems, manipulate physical processes without immediate detection, or establish conditions favorable for future exploitation.
For instance, consider Electric Vehicle Supply Equipment (EVSE). A threat actor might not intend to simply disable chargers outright but rather subtly manipulate charging parameters, causing accelerated battery degradation or even long-term reliability issues in vehicles. This subtle approach could remain undetected for extended periods, ultimately resulting in significant financial losses, safety hazards, or damage to consumer trust in EV infrastructure.
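A hedged sketch of why this matters for detection - all names, limits, and values here are hypothetical, not taken from any EVSE standard. A static bounds check on the commanded charging current never fires, yet a persistent bias relative to the vehicle's requested current shows up clearly as drift over time:

```python
# Hypothetical sketch: a compromised charger commands a current that
# always stays inside the static safety limit, so a per-sample bounds
# check never fires - yet a persistent upward bias slowly stresses the
# battery. Comparing the running mean against the vehicle-requested
# current exposes the manipulation.

MAX_CURRENT_A = 32.0     # static per-session limit (illustrative)
DRIFT_TOLERANCE_A = 0.5  # allowed mean deviation from requested current

def bounds_ok(commanded_a: float) -> bool:
    return 0.0 <= commanded_a <= MAX_CURRENT_A

def drift_suspicious(requested_a: float, commanded_history: list[float]) -> bool:
    mean_commanded = sum(commanded_history) / len(commanded_history)
    return abs(mean_commanded - requested_a) > DRIFT_TOLERANCE_A

# A subtle attack: the vehicle requested 30.0 A, but every commanded
# sample sits around 31 A - individually legal, collectively biased.
history = [30.9, 31.1, 31.0, 31.2, 30.8]
```

This is one reason CPS monitoring tends to reason about behaviour over time rather than single-sample limits.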
Similarly, attacks targeting transport or transit authorities via vulnerabilities introduced by system manufacturers or suppliers could aim beyond service disruption. Adversaries may quietly manipulate fare systems, signaling controls, or vehicle diagnostics to create intermittent issues or safety risks. Such manipulations might initially appear as isolated or random faults, complicating troubleshooting efforts and potentially masking the threat actor’s true intentions until significant cumulative damage or public harm has occurred.
Furthermore, regulatory frameworks and industry standards, such as GDPR, typically focus heavily on data breaches and violations of confidentiality, resulting in significant fines and punitive measures. This emphasis inadvertently encourages organizations to prioritize data protection scenarios over threats aimed at broader impact outcomes. Increasingly, threat actors employ sophisticated three-pronged strategies involving data exfiltration, encryption of backups (preventing data recovery and causing denial of service), and targeting of additional systems to achieve substantial disruption and greater overall impact.
Adopting a CPS-oriented mindset enables us, and our customers, to clearly see, anticipate, and defend against these nuanced threats. It elevates the security conversation beyond merely "keeping the lights on," instead prioritizing proactive defense of the integrity, safety, and resilience of critical systems.
Towards a Customer-Centric Future
I recognize that for many people, the term "Operational Technology" isn't just jargon, it's part of their professional identity. It's familiar, it's comfortable, and for some, it's even reassuring. And crucially, "OT" is often the exact term our clients or customers are expecting to see when they come to the table to discuss cybersecurity investments and strategies.
However, customer-centricity doesn't just mean repeating familiar language back to them, it means helping clients understand their challenges more clearly, equipping them to anticipate future shifts, and ultimately empowering them to make more informed, strategic choices. If our clients are currently purchasing "OT security," it's partly because that's the label we've offered them. But as experts and consultants, it's our role not only to respond to the market but also to shape it, to guide our customers toward frameworks that better reflect the evolving realities of their operational environments.
"Cyber-Physical Systems" is a term that empowers us, and our clients, to have clearer, deeper, and more effective conversations about security, safety, and resilience. It encourages everyone involved to think more comprehensively about risks, controls, and outcomes. CPS isn't just a better label; it's a more powerful mindset.
I didn't personally grow up attached to the term OT; for me, it always felt more like marketing shorthand rather than a meaningful technical category. But I do respect the history and comfort others might find in it. This isn't about discarding past terminology arbitrarily. It's about moving the conversation forward, ensuring our language keeps pace with technological and business realities.
So here's my call to action: let's challenge ourselves, and our clients, to adopt clearer, richer, and more precise language. Let's talk about Cyber-Physical Systems. Let's move beyond OT.