- Google security methodology compendium
- AI
- Threat modeling
- Are attacks the only threat?
- Why is threat modeling ignored?
- Miscellaneous
- e16n next Stage 4
Google is sharing a lot of their impressive security methodology in a recent collection of articles: How Google Does It: An inside look at cybersecurity. Of special interest to me: Threat modeling, from basics to AI.
AI
A very interesting political party in Japan is exploring radically new AI-based models … at first I thought it was ridiculous, but it’s actually clever; it just might be effective and even scale, and it certainly shouldn’t be immediately dismissed. Of course the devil is in the details, but they just might be onto something, and even if they fail the idea can be revised and applied elsewhere. Bruce Schneier has a good English-language overview (or, in Japanese, チームみらい).
Keep your eye on recent reports that exploit development by LLMs is taking off. A little exaggerated, I think (and hope!), but one over-the-top prediction is that anyone will be able to do vulnerability research by “pointing an agent at a source tree and typing ‘find me zero days’.” Or as Simon Willison puts it, AI-powered security research “is having a moment right now.” As always, extraordinary claims require extraordinary evidence. If this is happening at all, then much of how we build and use software will be deeply impacted, very quickly.
Bernie vs. Claude: the most insightful take on genAI I’ve seen from a legislator.
Threat modeling
Are attacks the only threat?
There seems to be no consensus (it isn’t clear that many people even think about it) on whether threat modeling should be limited to intentional malicious attacks or should cover all threats, including spontaneous and accidental ones. In my view this is crystal clear: we shouldn’t artificially separate intentional from unintentional threats.
- The boundary between the two is ill-defined, e.g. phishing originates from an attacker but deceives an authorized person to unintentionally cause harm.
- You need to anticipate unintentional threats anyway, so why run a separate threat modeling effort just for those?
- And if you think spontaneous failures (hardware or connectivity) need no defense, and that authorized users never make mistakes, you are likely to miss major threats and learn the hard way.
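To make the point concrete, a threat inventory can carry an intent field so that intentional, unintentional, and spontaneous threats all live in one model. This is a minimal sketch; the field names and example threats are my own illustrations, not taken from any particular methodology:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    intent: str       # "intentional" | "unintentional" | "spontaneous"
    entry_point: str  # where the harm enters the system
    mitigation: str

# One inventory covers the whole spectrum. Note how phishing blurs the
# intentional/unintentional boundary described in the bullets above.
threats = [
    Threat("phishing", "intentional",
           "attacker deceives an authorized user", "training + MFA"),
    Threat("operator typo in prod config", "unintentional",
           "authorized user makes a mistake", "review + canary deploys"),
    Threat("disk failure", "spontaneous",
           "hardware wears out", "replication + backups"),
    Threat("LLM hallucination", "unintentional",
           "model emits a plausible falsehood", "human review of outputs"),
]

# The same review loop handles every threat, whatever its intent.
for t in threats:
    print(f"{t.intent:13} {t.name}: mitigate via {t.mitigation}")
```

Keeping one list means one review process, one coverage check, and no class of threat silently falling through the gap between two separate efforts.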
A great example of this last point is the SL5 Standard for AI Security. It is timely and a good topic for everyone to think about: even if you don’t use AI, how do you check whether any of your dependencies do? Kudos for the standard starting with a threat model, but it seems to consider only malicious attacks and not hallucinations (unintentional), which I consider a primary threat.
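As a starting point for that dependency question, one crude but workable check is to scan a Python project’s requirements for packages known to call LLM services. A hedged sketch: the package list below is illustrative and far from complete, and catching transitive dependencies would require a full resolver rather than this line-by-line pass:

```python
# Flag direct dependencies that are known LLM/AI client libraries.
# The package set is illustrative, not authoritative.
AI_PACKAGES = {"openai", "anthropic", "google-generativeai",
               "langchain", "transformers", "litellm"}

def flag_ai_dependencies(requirements_text: str) -> list[str]:
    """Return requirement lines whose package name is a known AI client."""
    flagged = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # The package name ends at the first extras/version specifier.
        name = line.split(";")[0]
        for sep in ("==", ">=", "<=", "~=", ">", "<", "["):
            name = name.split(sep)[0]
        if name.strip().lower() in AI_PACKAGES:
            flagged.append(line)
    return flagged

reqs = """\
requests==2.32.0
openai>=1.0
numpy
langchain[all]~=0.2
"""
print(flag_ai_dependencies(reqs))  # ['openai>=1.0', 'langchain[all]~=0.2']
```

Even this blunt check turns “do my dependencies use AI?” from an unanswerable question into something a CI step can flag for human review.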
Opinion: Why is threat modeling ignored?
Threat modeling is the most powerful, yet grossly underutilized, easy-to-do security methodology we have today. By encapsulating security properties it guides design, makes implementation easier and more secure, informs testing, serves as a SecOps reference, and educates people within the organization as well as stakeholders; security is a team sport, so this knowledge sharing is essential. Contrary to popular belief, it can take many forms and need not be a major project resulting in a formal document.
So why aren’t we using it beyond the occasional project? I’ve never heard of an informed person with real experience giving it up as a waste of time. The common reasons I hear are (IMHO) not very good:
- not worth the (implied great) effort
- too hard to learn
- requires some special “security expertise” or “thinking like an attacker”
If I’m completely missing something here I certainly hope to learn what it is and be corrected … but I do have counter-arguments.
Miscellaneous
e16n next Stage 4
e16n was originally defined as three stages, but I think it moves on to get worse than that unless we do something. First, for awareness: in Stage 4, the $1T-class software corporations lobby governments and other powerful interests to *force* us to use their stuff: changes nobody wants; most corporations adopting AI to save money while sacrificing quality (when they all do it, customers have no alternative); AI declared “inevitable” (or, as Mrs Thatcher famously said, “There is no alternative”), cementing the effects. Then comes Stage 5…
Terrific book (I’ve only started it), and this podcast about it, Inventing the Renaissance: Ada Palmer on Golden & Dark Ages, goes beyond the book with more general insights into history (correcting the simplifications and lies taught in school). Ada is not only an impressive historian; she also writes Sci-Fi, which she explains is a way to project what history tells us into how it might impact the future (of course Sci-Fi never predicts, it just shows interesting possible timelines).
“You know — we’ve had to imagine the war here, and we have imagined that it was being fought by aging men like ourselves. We had forgotten that wars were fought by babies. When I saw those freshly shaved faces, it was a shock. “My God, my God—” I said to myself, “it’s the Children’s Crusade.” — Kurt Vonnegut