Loren Kohnfelder — 2026 February 15 (revised)
I propose two additional threat categories needed for STRIDE to keep up with modern times. Only some threat models will need to use these, and it should be quick and easy to determine which, but a complete set of threat categories is important for everyone to be aware of so they can apply it when appropriate.
First I present background motivation and a brief introduction to STRIDE as a starting point. The next two sections detail the new categories, proposing additional letters to expand the acronym. In closing I propose STRIPPED as a successor to STRIDE.
Background: STRIDE
The STRIDE threat categories date back to the twentieth century, debuting in a short paper I co-authored with Praerit Garg, “The Threats to Our Products” (April 1, 1999) at Microsoft. Back then home internet was 56 kbps dial-up, and PCs ran Windows 98 Second Edition (SE).
STRIDE is based on bedrock information security principles: the C-I-A triad and the so-called Gold Standard. The former refers to confidentiality, integrity, and availability; the latter consists of authentication, authorization, and auditing (all beginning with Au, the chemical symbol for gold). In a nutshell, the triad expresses what it means for an information system to be secure, and the Gold Standard describes essential ways of protecting information.
STRIDE is an acronym for categories of threat (below, with the briefest of explanations):
- Spoofing is Authenticity: phishing, stolen passwords
- Tampering is Integrity: unauthorized data modification
- Repudiation is why we Audit: plausible deniability, avoidance or destruction of logs
- Information disclosure is Confidentiality: data leaks, exfiltration
- Denial of Service is Availability: swamping a web server, ransomware
- Elevation of Privilege is Authorization: SQL injection, cross-site request forgery
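For the code-minded, this category-to-principle mapping can be captured as a simple lookup table. Here is a minimal Python sketch (the names are mine and purely illustrative, not part of any standard tooling):

```python
from enum import Enum

class Property(Enum):
    """Security properties: the C-I-A triad plus the Gold Standard."""
    CONFIDENTIALITY = "confidentiality"  # C-I-A triad
    INTEGRITY = "integrity"
    AVAILABILITY = "availability"
    AUTHENTICATION = "authentication"    # Gold Standard (Au-)
    AUTHORIZATION = "authorization"
    AUDITING = "auditing"

# Each STRIDE threat category maps to the security property it undermines.
STRIDE = {
    "Spoofing": Property.AUTHENTICATION,
    "Tampering": Property.INTEGRITY,
    "Repudiation": Property.AUDITING,
    "Information disclosure": Property.CONFIDENTIALITY,
    "Denial of service": Property.AVAILABILITY,
    "Elevation of privilege": Property.AUTHORIZATION,
}
```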
Since then we have seen the rise of cloud architecture, mobile computing, 5G networks, the Internet of Things, and now machine learning. It’s long past time for an update, but what exactly has fundamentally changed? From Turing machines to today, software fundamentals are unchanged even as hardware has radically evolved, from the colossal scale of modern data centers to miniaturized, low-cost, ubiquitously connected devices.
In 1999 most homes had one PC, networking was slow and primitive, you bought apps in a box, and corporations were just beginning to move processes and recordkeeping to these new systems. A few people were enamored enough of the new technology to work hard at integrating it into their personal lives. They used digital messaging instead of postal correspondence, kept calendar apps, and collected digital photos from the new wave of filmless cameras, but they still lived a primarily analog life. (A few pioneers attempted lifestreaming, but that was not only weird but very low tech by modern standards.)
Today digital devices and services are everywhere. Everyone has a phone and usually several other devices such as laptops or tablets; automobiles contain a dozen or so computers; most appliances and gadgets run on software. All corporations are thoroughly digitized, not to mention the automated phone trees and AI agents wrangling customers. I recently dined at a restaurant where I ordered by tablet and paid the bill at a vending machine (a human did bring the food, but a growing number of places have robots for that too). It’s hard to get away from computers in modern daily life.
Over those 25+ years of progress (if that’s what it is) the fundamental change that I see (it’s hard to miss) is how digital tech has become embedded in our lives. Or is it that our lives are now embedded in digital infrastructure?
We now live entwined with digital systems that expose us to new harms that have arisen since the days when STRIDE was envisioned. I propose adding two new threat categories to more completely define security requirements for the modern world: Personal Harm (human factors) and Physical Harm (real-world harm). The following two sections present initial definitions, examples, and observations to elucidate each new threat category.
It is true that for many kinds of software these additions are not needed: the new categories are only useful for a subset of systems, and it’s quite easy to determine when they can be ignored (a quick test, sketched in code after the list below).
- Systems that never handle information about people are exempt from Personal Harm.
- Systems of pure software (not connected to hardware) are exempt from Physical Harm.
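Here is a minimal Python sketch of that test (the function and parameter names are my own invention, not any standard tooling):

```python
def extra_categories(handles_personal_data: bool,
                     controls_physical_hardware: bool) -> set:
    """Return which of the two proposed categories a threat model needs.

    handles_personal_data: the system stores, processes, or transmits
        any information related to humans.
    controls_physical_hardware: the system drives hardware that acts in
        the real world (door locks, vehicles, medical devices, ...).
    """
    categories = set()
    if handles_personal_data:
        categories.add("Personal Harm")
    if controls_physical_hardware:
        categories.add("Physical Harm")
    return categories

# Example: a web service holding customer accounts but driving no hardware
# needs Personal Harm but is exempt from Physical Harm.
assert extra_categories(True, False) == {"Personal Harm"}
```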
P is for Personal Harm
Daily life so frequently depends on and is influenced by software that it inevitably represents a new category of potential harm to us as people. Classic STRIDE is based on information security principles, but given how pervasively software now touches us all, this is a new kind of threat and a much more subjective area. At a minimum, Personal Harm covers privacy harms as well as economic, legal, psychological, manipulation, harassment, and social harms.
This category might be easier to remember as People or Privacy; however, the choice I made matches the STRIDE keywords, which all (by design) name threat categories (harms) rather than enumerate the protection goals of a secure system.
These threats may be due to a vulnerability, a bug, a mistake, or policy. Let’s break those down with a few illustrative examples:
- vulnerability (weakness exploited by an attacker) — exfiltrating PII;
- bug (unintentional flaw in the code) — sending the customer database to the wrong recipient;
- mistake (operational error) — a misconfiguration making PII public, or transmitting sensitive data unprotected;
- policy (intentional harm or lack of care) — additional profit from selling customer data.
Policy threats are worth describing in some detail because the category is so broad, so pervasive, and perhaps by far the most damaging today. These include:
- management that has no understanding of data privacy and hence fails to set good policy, or that willfully ignores its responsibilities: oversharing, granting access too liberally, or violating its own privacy policy behind the scenes (where nobody can tell);
- lack of governance leading to poor practice by staff doing their work;
- government policy and law compelling companies to excessively disclose information;
- outdated or missing legal controls, including weak enforcement;
- overcollection of PII.
I use the term PII (personally identifying information) broadly, meaning information that by itself or in combination with other data can violate personal privacy. While some may limit this category to only what law and regulation impose, I would argue it should include moral and ethical considerations, as well as nonconformance to reasonable expectations.
Privacy is not at all the same as Information disclosure (the “I” of STRIDE), though they overlap when the personal harm is caused by a data leak. The difference is not uncommon: this category includes many threats where the harm is caused without any public disclosure at all. For example:
- unwarranted tracking (not just location but knowledge of purchases, activities, etc.)
- excessive and unwanted spamming
- needlessly upsetting customers or wasting their time
- selling data to others who use it to cause harm without publicly disclosing it
- requiring customers to divulge sensitive information
- retention of data longer than necessary or in spite of promising not to
People often experience harm as the direct consequence of misbehaving software in the form of a privacy infringement; however, Personal Harm is by no means limited to that. Consider:
- theft, extortion, or deceptive practices that take money from people
- mental health issues resulting from online bullying, abusive content
- addictive online experiences
- cheating at games to gain unfair advantage
P is for Physical Harm
In 1999 computers rarely interacted with the real world beyond keyboard, mouse, display, and peripherals such as printers, none of which are weaponizable at all. Today so much of our infrastructure, equipment, and gadgetry is operated by software that the potential for harm in the physical world (as opposed to pure information) is genuinely new.
It should be noted that the lion’s share of software can obviously ignore Physical Harm: if it doesn’t control hardware that acts in the real world, it has no potential to cause this kind of harm.
All readers have experience in the real world, so there should be no need to explain its potential for harm. A few examples (and there are much worse cases that readers can no doubt imagine):
- Harassment or crimes enabled by learning the victim’s location;
- Hardware that directly harms a person or property;
- Medical equipment software that injures a patient, or by inaction allows disease to progress;
- Bad healthcare guidance (e.g. naming the wrong pharmaceutical or dosage);
- An automated door lock system that fails and lets in vandals;
- An elevator that entraps riders for hours;
- A vehicle malfunction or crash due to a software bug.
The two “P”s definitely overlap when the harm is bodily: this is inevitable, and overlap is common in STRIDE already. (For example, Elevation of Privilege usually leads to one of the other categories to complete the exploit, as does Spoofing.) As noted above, Personal also overlaps with “I”, and the overlap here only serves to highlight the unique importance of protecting humans. In fact, one could argue that information security only matters when some human feels the pain, even if it’s just IT staff getting yelled at for screwing up. What happens purely in the digital realm stays in the digital realm (since bits cannot directly harm humans at all). Also, since STRIDE is intended to identify threats to a system, overlapping categories serve as different starting points, providing more opportunities to find additional threats.
Forget Intention
The security effort that STRIDE was part of focused on attacks (intentional actions), but now there is no reason not to include unintentional threats as well when threat modeling. At minimum, I strongly urge weighing this approach against any separate plan you might make to prepare for those other threats.
Whether a network outage results from an intruder shutting down the network or from spontaneous equipment failure makes no difference to the effect: either way it’s Denial of Service. Since things break, accidents happen, and software has bugs, excluding these threats (sources of potential harm) makes them less likely to be noticed and mitigated, unless you perform another system-wide analysis and handle them separately. Why do the work twice?
Furthermore, I would say that intention can be difficult to discern, especially proactively, since the borderline cases are endless. In the new Personal Harm category, many scenarios we want to protect against are a mix: for example, sharing seemingly harmless information that is later exploited against someone. A couple more examples may help land this point.
- An admin types a command that takes down the system, deletes all the backups, or misconfigures the cloud data storage with public read-write access: insider attack or fumble fingers? Not even the logs will distinguish these, and a clever insider attacker might craftily use a technique that would plausibly appear to be accidental.
- An executive demands a report in one hour; staff know that using AI without approval is strictly forbidden, but it’s the only way to meet the deadline, and after thinking carefully they are convinced that in this case there is no risk (they aren’t technically savvy) … and proprietary data is leaked, causing a huge debacle. They did intentionally break policy, yet they did so with the best of intentions, and as far as they knew no damage would be done. Insider attack or accident? (I have no idea which it is.)
Announcing STRIPPED
Fortunately there is a word to serve as a new acronym, though it doesn’t have a meaning related to its purpose that I can see. (STRIDE was a “stride forward” for software security.) Better suggestions are welcome, including renaming one or both of the new categories to get different letters if you can find better ones.
That we need to expand STRIDE to catch up with modern software seems incontrovertible to me. Certainly other acronyms, of which there are many, might cover the software threat landscape better than STRIDE + PP. I would also suggest that a privacy-specific method such as LINDDUN may work better for the privacy portion of Personal Harm (which I would say is a superset of information privacy). Additionally, I recommend including unintentional threats as well as attacks in threat modeling.
Whether these ideas are worthy and get traction remains to be seen; my opinion doesn’t really matter. Those who like STRIPPED are welcome to use it, and even amplify it by spreading the word. I offer these descriptions and examples as a start: anyone is free to expand, redefine, or otherwise tailor it to their preference.
More effective threat modeling is the goal: let’s work together to that end.