LLMs dominate just about every discussion of software security these days, and lately I am asked to opine on this increasingly often. To be frank, I should state up front that what follows is based on my very limited knowledge of and experience with this new technology; however, I believe I can usefully contribute to the discussion from first principles, which I do not think ever change.
At this point (mid-2026), things are, in my opinion, so volatile that it is premature for anyone to claim a clear view of what lies ahead. For example, a few weeks ago the Anthropic Mythos announcement (whether or not its claims hold up) dramatically changed the conversation around both offensive and defensive cybersecurity.
Given this context, I can only offer these high-level points (all opinion; I could of course be wrong):
- Going back to first principles is the best way I know to escape all the hype (both pro and con).
- Threat modeling is essential (ask: using LLMs, what could possibly go wrong?), while recognizing that LLMs make modeling far harder, since they are unpredictable and difficult to keep within strict guardrails.
- Unless the LLM maker accepts responsibility (I have never heard of this happening), get consensus before deployment on who is responsible when something goes wrong.
- Roll out new LLM-based projects carefully and monitor how well they do the job.
- Stay flexible because things will change fast.
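To make the guardrails point above a little more concrete, here is a minimal sketch (every name here is hypothetical and not tied to any particular LLM API) of one first-principles stance: never trust model output to stay in bounds on its own, and instead validate it against an explicit allowlist in deterministic code before acting on it.

```python
import re

# Hypothetical scenario: an LLM is asked to propose a shell command to run.
# Rather than trusting the model, we only act on output that matches an
# explicit allowlist of commands and a conservative argument pattern.
ALLOWED_COMMANDS = {"ls", "cat", "grep"}
SAFE_ARG = re.compile(r"^[\w./-]+$")  # rejects shell metacharacters

def is_allowed(llm_output: str) -> bool:
    """Return True only if the proposed command passes the guardrail."""
    parts = llm_output.strip().split()
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        return False
    return all(SAFE_ARG.match(arg) for arg in parts[1:])

print(is_allowed("cat notes.txt"))        # True
print(is_allowed("rm -rf /"))             # False: command not allowlisted
print(is_allowed("cat notes.txt; rm x"))  # False: metacharacter in argument
```

The point is not this particular filter, which is far too crude for real use, but the design stance: the LLM proposes, and small, auditable, deterministic code disposes.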
Transparently sharing results (both successes and failures) is the best way for the software community to learn how best to integrate this remarkable technology, and to better understand what can go wrong and how to mitigate the downsides.