Threat modeling is the most powerful, underutilized, and easy-to-do security methodology we have. So why isn’t everybody doing it already, and why do those who do keep their work secret? If you already threat model your digital systems and products, you are doing security right, and you should share that work with pride. Short of a rigorous, detailed design and code review, a published threat model may be the best evidence of excellent security work that customers and users can appreciate. You’ve already done the work (or if not, you really should), and making it public is not only great promotion: it also helps all stakeholders understand their respective roles and responsibilities in securing larger systems. (about 4600 words)
[Read More]
Demand more
I applaud CISA leadership speaking out aggressively at the mWISE Conference 2024 about the dismal state of software security (based on reporting in The Register, but it would be nice for www.cisa.gov to publish transcripts in order to ensure we are interpreting remarks with full context, given that the videos are paywalled).
[Read More]
Threat Modeling threat modeling
Threat modeling isn’t just for software security; you can even threat model threat modeling. When a major software incident occurs, the first thing we should be asking is “show us the threat model”. (2300 words)
[Read More]
Crowdstrike further revelations
In a debunking blog post, Crowdstrike finally describes how content files are digitally signed for deployment. The initial report oddly referenced file timestamps rather than hashes to identify the bad and good versions of the infamous Channel File 291, but now we know those files were signed.
[Read More]
Crowdstrike External Technical Root Cause Analysis
The Crowdstrike July incident root cause analysis report provides new detail and requires reading between the lines to interpret (I welcome corrections with references if I got it wrong).
[Read More]
Why tamper LLMs with guardrails?
Say what you will about LLM technology, it’s remarkable that we can do computations on the scale of billions of parameters, trained on large chunks of humanity’s collective text and media, at all. It’s then remarkable that you can talk to “it” in everyday language and get any kind of recognizable response, often (but not always) a pretty good one. And all of this rests on the simple but powerful “select the best next token” algorithm run in a loop. The concept would have made a terrific sci-fi series, and here we are with it working in our cloud at scale.
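To make the “select the best next token” loop concrete, here is a minimal sketch in Python. The `toy_scores` model is a hypothetical stand-in: a real LLM would compute scores over a huge vocabulary using billions of parameters, but the surrounding loop (score, pick the best, append, repeat) is the same shape.

```python
def generate(model_scores, prompt, max_tokens=10, eos="<eos>"):
    """Greedy decoding: repeatedly pick the highest-scoring next token."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        scores = model_scores(tokens)       # dict mapping token -> score
        best = max(scores, key=scores.get)  # "select the best next token"
        if best == eos:
            break
        tokens.append(best)
    return tokens

# Toy "model" (illustrative only): scores tokens so the output spells
# out a fixed continuation after a one-token prompt.
CONTINUATION = ["world", "!", "<eos>"]

def toy_scores(tokens):
    step = len(tokens) - 1  # tokens generated beyond the prompt
    return {t: (1.0 if i == step else 0.0)
            for i, t in enumerate(CONTINUATION)}

print(generate(toy_scores, ["hello"]))  # ['hello', 'world', '!']
```

Real systems often sample from the score distribution instead of always taking the maximum, but greedy selection is the simplest version of the loop described above.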
[Read More]
Incoming message mess
July 30, 2024 — When will we address the unacceptable status quo of scam phone calls, SMS text, and email?
[Read More]