Who Needs a SOC When You Can Just Loop Everything Through ChatGPT
Another day, another security tool trying to sell itself as the be-all and end-all of security operations. “Who needs analysts when you can just ask our AI!” says one company, showing off how it “reasons” and “thinks” and lays out its entire thought process.
People buy it, too. That’s the sad part. No matter how many people remind them, or how often, that a Large Language Model (LLM) does not think, that it purely predicts sequences of tokens. Yet here we are, in the middle of 2025, with everyone trying to replace every single product with “idk, just ask AI lol”.
In the security space, you have companies that just ask ChatGPT whether an alert is bad and call it a day. Then you get into these “agentic” products, which not only respond with total confidence but take those responses, loop them, and execute whatever code the auto-complete returns (sketched below). Such an irresponsible use of the technology.
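To be concrete about what that loop looks like, here is a minimal sketch of the anti-pattern in Python. Everything in it is hypothetical, including the `ask_llm()` helper, which stands in for whatever chat-completion API a vendor wires up; the architecture, not the vendor, is the point.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion API call."""
    raise NotImplementedError("plug in your chat-completion client here")

def triage_alert(alert: dict) -> None:
    """The "agentic" anti-pattern: let the model judge, then let it act."""
    verdict = ask_llm(f"Is this alert malicious? {alert}")
    if "malicious" in verdict.lower():
        # The model is asked to write its own remediation code...
        remediation = ask_llm(f"Write Python to remediate this alert: {alert}")
        # ...and that unreviewed, non-deterministic text is executed with
        # the tool's full privileges. This line is the whole problem.
        exec(remediation)
```

A substring match on the model’s prose decides the verdict, and `exec()` runs whatever came back from the second call. At no point does a human, or even a deterministic rule, stand between a statistical guess and an action.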
Here are some of the most ridiculous company ideas I have seen so far. I am not linking to any of these companies because they don’t deserve referrals for this:
- Company X claims to basically loop your detections through ChatGPT, Gemini, et al., and then reach definitive conclusions so that your security team doesn’t have to.
- Company Y solves all your phishing problems magically using their AI. You can’t build any custom phishing detections either, but don’t worry, they pinky promise that they have 100% accuracy.
These companies have raised hundreds of millions in funding to basically buy a ChatGPT API key. Embarrassing from a VC standpoint.
And these are just the security tools. Again, no shoutouts; these companies don’t deserve one, even on my tiny platform of a website.
It’s irresponsible, the way society is moving. We are neither regulating nor punishing the fact that LLMs stole basically all of humanity’s copyrighted work. And now people are conflating well-written responses with authenticity and authoritative sources. This is dangerous, and it’s only going to get more dangerous as we go down this road.
What’s next?
Get rid of judges. Let’s just ask ChatGPT what the sentence should be for a guilty verdict in a specific case.
Actually, forget that. Get rid of police, forensic investigators, judges, and juries. Let’s just feed ChatGPT the evidence and ask if someone is guilty. Oh, you have an issue with that? Then stop trying to solve your threat detection problem with a ChatGPT prompt loop.
Most people would not trust ChatGPT to deliver a guilty verdict, even with the evidence in front of it, because it makes up legal case law. Yet somehow the prevailing attitude is that other kinds of verdicts, like “this alert is benign,” are totally fine.
Language models should be used as tools, like advanced auto-complete. Nothing more. Their responses are non-deterministic, they carry no authority, and they understand nothing they repeat back to you. Stop acting like they do.
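The non-determinism part is trivial to check for yourself. Below is a minimal sketch, assuming the OpenAI Python SDK and a placeholder model name (any chat-completion client would show the same thing): send the identical prompt a few times and count the distinct answers.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
prompt = "In one sentence, is a PowerShell download cradle alert malicious?"

# Ask the exact same question five times with default sampling settings.
answers = {
    client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model behaves similarly
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    for _ in range(5)
}

# With default sampling (temperature > 0), you will usually get back
# several distinct strings for the identical input.
print(f"{len(answers)} distinct responses to the same prompt")
```

Same alert, same prompt, different verdict depending on the dice roll. That is the “analysis” these products are selling.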
They should not be used to replace thought; they should not be used to replace actions. Not because of some fear of job loss or of AGI, but because believing a statistical model has better analytical abilities than humans is downright irresponsible from a security perspective and insulting from a humanity perspective.
None of this was written with AI, aka machine learning that got rebranded. End rant.