Beyond Red Teaming
In preparing for the next round of our Proactive Risk Avoidance course (the updated “Becoming Odysseus” red teaming course), we revised how we explain the relationship between system-oriented red teaming and risk. Here’s what we came up with. It’s a bit “back-of-the-envelope,” so take it for what it’s worth.
The challenge is all about “seeing” system risk over time (risk/time). Today’s red team takes a snapshot, but how much is that snapshot worth? Risk approaches a limit of zero as you compress the timeline to the present moment, a fraction of a moment, a fraction of a fraction of a moment. What you really need to grasp are the risk curves over time, overlaid on the system behaviors over time. Linear forecasting using a “predict and control” model won’t help much against adaptive adversaries. (Shades of Pierre Wack? Yes, absolutely!)
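The snapshot intuition can be sketched numerically. Treat risk as the integral of an instantaneous hazard rate over an observation window: as the window shrinks toward a single moment, measured risk goes to zero even though the underlying risk curve is unchanged. This is a toy model with an arbitrary, assumed hazard function, not a real threat model.

```python
# Toy model: cumulative risk over a window [t, t + dt] as the
# integral of a hazard rate h(t). The hazard shape below is an
# arbitrary illustration (assumed, not from any real system).

def hazard(t: float) -> float:
    """Instantaneous risk rate; rises as the system ages (assumed shape)."""
    return 0.1 + 0.05 * t

def window_risk(t: float, dt: float, steps: int = 1000) -> float:
    """Approximate the integral of hazard over [t, t + dt] (trapezoidal rule)."""
    h = dt / steps
    total = 0.0
    for i in range(steps):
        a, b = t + i * h, t + (i + 1) * h
        total += 0.5 * (hazard(a) + hazard(b)) * h
    return total

# Shrinking the snapshot window drives measured risk toward zero,
# even though the hazard curve itself never changes.
for dt in (1.0, 0.1, 0.01, 0.001):
    print(f"window = {dt:<6} risk ≈ {window_risk(0.0, dt):.6f}")
```

The point of the sketch is only that a point-in-time assessment captures vanishingly little of the curve; what matters is the shape of `hazard` over the whole horizon, and how adversaries can bend it.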
The real trick is to see and learn how the system and your adversaries might unfold jointly, reciprocally. You must understand the system better than your adversaries if you want to avoid getting burned. Your mental model must be the best of them all.
Finding the system shortcuts and cheats helps you shut down adversary options, but just as important, it also helps you stretch your mental model of the system and make better decisions regarding the future—decisions that in most cases take time to unfold and can be difficult and expensive to reverse.
Your mental model must be as accurate, adaptive, and responsive as possible. It should inform and guide your strategies, decisions, and investments. It should address baseline risks, edge cases, trends, potential patterns of divergence, and longer-term adversary preferences and capabilities.
It’s a lot to ask, and simply listing creative adversary paths is no longer adequate. To get to where we need to be, we must push beyond red teaming. We must comprehend the reciprocal system—how it breathes, moves, grows; how we can shape it, nudge it, own it; and how our adversaries can do the same.
Compounding the challenge, no privileged perspectives exist; all perspectives (even our own) are subject to manipulation and error. Even the deceiver is vulnerable to deception. This adds a highly ambiguous and subjective aspect to the problem. Who knows what? Who believes what? Who’s deceiving whom? Traditional risk frameworks struggle with this subjectivity. Red teaming can help, but only if we actively discern and engage the relevant mental models across the whole system.
In the end, then, it’s more than just risk/time; it’s the perception of risk/time. It’s a tough problem, but just framing it in this way is—we believe—a big step forward. It’s also the starting point for the course.