CrowdStrike and the threat of friendly fire

Threat modeling methodology centers on asking, “What could go wrong?” and then considering mitigations to address such an eventuality. The unending calamities of history vividly demonstrate how human intuition repeatedly fails to foresee many such events until after they happen, and even then we sometimes fail to learn and act. For example, consider the 2008 financial crisis: after all the bailout money was handed out around Wall Street, Congress never confronted the glaringly obvious problem of “too big to fail” institutions. As a result, large firms continued to consolidate, concentrating power and risk in still fewer institutions, creating conditions for a repetition that appears to be a matter of “when” rather than “if”. Traditionally threat modeling has been deployed exclusively within the context of secure software engineering, but I posit that it is just as effective and important a tool for anticipating potential harms of all kinds — not just malicious exploitations.

[Read More]

Secret Questions for password reset

Secret questions as credentials for online account authentication are simply a bad idea in my view: I have never seen them done well, I have often seen them done atrociously, and my most generous assessment would be that they are extremely hard to do well. But keeping an open mind, here is a brief sketch of why they are problematic, and I invite anyone interested to produce a brilliant design and prove me wrong.
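One way to see the core problem is to compare guessing spaces. The numbers below are illustrative assumptions, not measurements: even granting a generous ~1,000 plausible answers to a question like "mother's maiden name," a secret question offers only about 10 bits of entropy, versus roughly 71 bits for a random 12-character password.

```python
# Hypothetical sketch: why secret-question answers make weak credentials.
# The 1,000-answer space is an illustrative assumption; real answer
# distributions (common surnames, pet names, etc.) are often far smaller.
import math

answer_space = 1000  # generous guess at plausible answers to one question
entropy_bits = math.log2(answer_space)
print(f"secret-question entropy: ~{entropy_bits:.1f} bits")  # ~10.0 bits

# Compare with a random 12-character password over 62 alphanumeric symbols:
password_bits = 12 * math.log2(62)
print(f"random password entropy: ~{password_bits:.1f} bits")  # ~71.5 bits
```

At ~10 bits, an attacker who can make even rate-limited guesses will exhaust the space quickly, and unlike a password, the answer is often publicly discoverable.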

[Read More]

Learning from SolarWinds

ProPublica is a national journalistic treasure, and its recent reporting on the software industry is a terrific impetus to drive much-needed change. I sat in on many bug triage discussions over twenty years ago while working at Microsoft, and despite great advances in technology, the way these decisions are made appears to have evolved little. My purpose here is not to judge what transpired or who is at fault, but to glean better software practices from the reporting so we can at least learn from these events.

[Read More]

Trusting AI

Whenever considering applications that rely on generative AI, I believe we always need to ask whether we can trust it. Given the technology’s track record, it’s hard to imagine honestly answering “100%” any time soon. That’s why, for the time being, I think the following guideline will be very important.

[Read More]

Trusting AI

  • (220 words) June 2024 – Loren Kohnfelder

In one unscientific test, generative AI was asked whether a mushroom was safe to eat, and it misclassified a highly toxic variety that closely resembles a common edible one. Whether or not that test is representative, relying on such an answer before actually eating the mushroom is a terrible idea. It illustrates my rule of thumb:

[Read More]

Better security discussions

I’ve been a fan of threat modeling for many years, but only recently have I come to see that it isn’t just behind-the-scenes work for software professionals. Threats and mitigations need to be part of any discussion about security. Whether a news article urgently warns of the latest zero-day or privacy advocates decry the latest outrage from the big platforms, framing a good discussion requires outlining the threat model you are talking about.

[Read More]