Insights from Upstream’s recent webinar with Ford’s Dan O’Reilly
“For years, APIs were just integration points,” Dan O’Reilly explained during a recent live webinar with Upstream, reflecting on his experience as a Vehicle and Mobility Cyber Security Supervisor at Ford. “They were how systems talked to each other.”
That role has expanded dramatically.
As AI, and LLMs in particular, become embedded across workflows that span vehicle commands, backend systems, and application-driven interactions, APIs are no longer just passing data. They are part of how decisions are made and executed. “You’re now using AI to interpret inputs and trigger actions across systems,” he said. “That turns APIs into decision surfaces.”
System behavior is no longer defined by static logic. It is shaped by how LLMs interpret context and how those interpretations propagate across connected systems.
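To make the idea of a "decision surface" concrete, here is a minimal, purely illustrative sketch of an LLM-mediated command flow. Every name in it (`interpret_request`, `ACTIONS`, the endpoints) is hypothetical, not Ford's or Upstream's actual implementation; the point is only that the interpreted action, not static logic, determines which backend behavior executes.

```python
# Hypothetical sketch: an API layer acting as a decision surface.
# All names and actions here are illustrative assumptions.

ACTIONS = {
    "unlock_doors": lambda vehicle_id: f"unlock sent to {vehicle_id}",
    "precondition_cabin": lambda vehicle_id: f"climate start sent to {vehicle_id}",
}

def interpret_request(text: str) -> str:
    """Stand-in for an LLM call that maps free-form input to an action name.

    In a real system this step is non-deterministic: the same input can
    resolve to different actions depending on the context the model sees.
    """
    if "unlock" in text.lower():
        return "unlock_doors"
    if "warm" in text.lower() or "cool" in text.lower():
        return "precondition_cabin"
    return "unknown"

def handle(text: str, vehicle_id: str) -> str:
    action = interpret_request(text)
    # The API no longer just passes data: the interpreted action decides
    # which backend behavior fires, so it must be validated as a decision.
    if action not in ACTIONS:
        return "rejected: unrecognized action"
    return ACTIONS[action](vehicle_id)

print(handle("please warm up my car", "VIN123"))
```

The allowlist check in `handle` is the security-relevant move: because the interpretation step is probabilistic, the API boundary has to treat the model's output as untrusted input rather than as trusted control flow.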
Cybersecurity Must Shift From Protecting Systems to Understanding Behavior
This shift changes the nature of cybersecurity itself.
“You are no longer securing systems,” Dan stated. “You are securing machine-driven behaviors.”
Traditional approaches assume that if each component is monitored and controlled, the overall system can be trusted. But when decisions are made dynamically across systems, risk no longer sits within a single component.
“The question is no longer just whether a system is working,” he said. “It’s whether the behavior you’re seeing is what you expect.”
Cybersecurity becomes a question of understanding outcomes across systems, not just protecting individual components.
OEMs Can See Signals Across Systems, but Connecting Them Is Still Challenging
OEMs already operate in data-rich environments.
“We have a ton of data,” Dan said. “That’s not the issue.”
Vehicle telemetry, backend logs, API activity, and service records all provide visibility into different parts of the system. But they are not connected in a way that reflects how decisions actually unfold.
“You can see the signals,” he noted. “But you can’t easily see the chain of decisions that led to them.”
This is where the underlying challenge becomes clear. Without a unified data layer that connects these domains, each system tells only part of the story. Understanding behavior requires stitching together vehicle data, backend activity, and service context into a single, continuous view.
Without that foundation, system behavior remains fragmented, and understanding what actually happened becomes an investigation in itself.
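The stitching problem described above can be sketched in a few lines: merge events from separate domains into one time-ordered timeline per vehicle, so the chain of decisions reads in sequence instead of as disconnected fragments. The event streams and field names below are invented for illustration.

```python
# Hypothetical sketch: unifying events from three domains into a single
# per-vehicle timeline. Field names (ts, vehicle, event) are assumptions.
from collections import defaultdict

api_activity = [{"ts": 1, "vehicle": "VIN123", "event": "POST /v1/commands"}]
backend_logs = [{"ts": 2, "vehicle": "VIN123", "event": "unlock_command_issued"}]
vehicle_telemetry = [{"ts": 3, "vehicle": "VIN123", "event": "doors_unlocked"}]

def unify(*streams):
    """Merge labeled event streams into a time-ordered timeline per vehicle."""
    timelines = defaultdict(list)
    for source, stream in streams:
        for e in stream:
            timelines[e["vehicle"]].append((e["ts"], source, e["event"]))
    for events in timelines.values():
        events.sort()  # order by timestamp so the decision chain is visible
    return dict(timelines)

timeline = unify(("api", api_activity),
                 ("backend", backend_logs),
                 ("vehicle", vehicle_telemetry))
for ts, source, event in timeline["VIN123"]:
    print(ts, source, event)
```

Even this toy version shows why the unified layer matters: the API request, the backend command, and the vehicle's response only tell a coherent story once they share a key and a clock.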
Teams Are Forced to Reconstruct Behavior After the Fact
This fragmentation becomes most visible during investigations. “You’re essentially reconstructing the system every time something happens,” Dan explained.
An issue does not appear as a single event. It surfaces as separate signals across systems that were never designed to be analyzed together in real time. Each team works within its own domain, without a shared view of how events connect. As a result, investigations become reactive. Instead of observing behavior as it unfolds, teams are forced to rebuild it after the fact, often under time pressure and with incomplete context.
AI Makes System Behavior Harder to Predict and Trace
AI compounds this complexity rather than replacing it.
“The behavior isn’t deterministic anymore,” Dan said. “It depends on context, inputs, and how things interact.”
When AI is introduced into API-driven systems, workflows become more dynamic. The same input can lead to different outcomes depending on how it is interpreted and how it propagates across systems. “The same input doesn’t always lead to the same outcome,” he noted.
This does not create an entirely new problem. It amplifies an existing one. The lack of connected data, the difficulty of tracing interactions, and the reliance on manual investigation all become more pronounced as system behavior becomes less predictable.
Risk Emerges From How Systems Act Together
This shift changes how risk should be understood. “This is not about one system failing,” Dan emphasized. “It’s about how systems behave together.”
Prompt manipulation, unintended access, and abnormal usage patterns are not isolated issues. They are expressions of system-level behavior deviating from expectations.
Risk is no longer tied to a single failure point. It emerges from how decisions are made, how they propagate, and how systems interact over time.
Addressing This Requires a Unified Data Foundation and Behavioral Visibility
Responding to this shift requires more than incremental improvements. “We don’t need more data,” Dan said. “We need to understand what the system is actually doing.”
That understanding starts with connecting data across domains. Vehicle telemetry, BOM and software context, API activity, and other relevant data must be stitched into a unified layer that reflects how the system operates in real time.
On top of that foundation, the focus must shift to behavior. Teams need to understand patterns of interaction, how decisions are executed across systems, and how behavior deviates from expected norms. “You have to look at patterns,” he explained. “How things interact over time. What’s normal, what’s not.”
Finally, scale introduces a prioritization challenge. “You can’t investigate everything,” Dan noted. “You need to know what actually matters.”
Without prioritization, visibility alone does not improve outcomes.
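The two steps above, comparing behavior against a norm and then prioritizing what deviates most, can be sketched together. This is a deliberately simple ratio-against-baseline check with illustrative numbers, not a real detection product; the endpoints, rates, and threshold are all assumptions.

```python
# Hypothetical sketch: flag endpoints whose observed call rate deviates
# from a learned baseline, then rank deviations so analysts see the most
# anomalous behavior first. All values are illustrative.

baseline = {"unlock": 10.0, "locate": 50.0, "firmware_update": 1.0}  # calls/hr
observed = {"unlock": 12.0, "locate": 48.0, "firmware_update": 9.0}

def deviations(baseline, observed, threshold=2.0):
    """Return (endpoint, ratio) pairs where the observed rate exceeds
    `threshold` times the baseline, sorted worst-first."""
    flagged = []
    for endpoint, expected in baseline.items():
        ratio = observed.get(endpoint, 0.0) / expected
        if ratio > threshold:
            flagged.append((endpoint, ratio))
    # Prioritize by how far behavior strays from the norm.
    flagged.sort(key=lambda item: item[1], reverse=True)
    return flagged

for endpoint, ratio in deviations(baseline, observed):
    print(f"{endpoint}: {ratio:.1f}x baseline")
```

The ranking step is the point Dan makes about scale: without it, a flood of flagged signals is just more data, not more understanding.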
OEMs Must Move From Observing Systems to Understanding Behavior
This is the shift Dan points to.
It is not a question of collecting more data or adding more alerts. The challenge is understanding how systems behave together as decisions are made and executed across them.
As AI becomes embedded across workflows that span vehicle commands, backend systems, and application-driven interactions, those decisions become distributed, contextual, and harder to trace.
Addressing this requires more than monitoring individual signals. It requires connecting data across domains, understanding patterns of interaction, and identifying where behavior deviates from expectation.
The shift is not incremental.
“You are no longer securing systems,” Dan said. “You are securing machine-driven behaviors.”