🛰️ When AI Gets Too Clever: What Comet and OpenAI Teach Us About Digital Security
This week, two stories caught my attention: Perplexity AI and OpenAI — both at the cutting edge of artificial intelligence — faced vulnerabilities that reminded us of an uncomfortable truth:
AI can be brilliant, yes… but also terribly naïve.
In this post, I’ll explain — clearly and without fearmongering — what happened, why it matters, and what precautions we should take if we use AI to research, write, or simply make life easier without losing control.
🌠 The Comet case: when the browser doesn’t know who’s in charge
The Comet browser, introduced as the future of AI-powered navigation, promised to open tabs, summarize articles, and even read your emails.
The problem? It didn’t know who to trust.
Researchers from Brave and Guardio Labs demonstrated that a malicious website could hide a secret instruction (prompt injection) capable of tricking the assistant into performing unauthorized actions — like opening your inbox, copying data, or sending it elsewhere.
In other words, visiting a webpage could make your “assistant” behave like a thief wearing a butler’s uniform.
And, well, the butler had no sense of irony.
🧠 OpenAI and the dilemma of overly obedient GPTs
Meanwhile, OpenAI was dealing with something similar inside its own ecosystem.
Academic studies revealed that over 90% of custom GPTs were vulnerable to prompt injection or data leakage.
All it took was a cleverly phrased question to make the model reveal its internal setup or confidential instructions.
The company itself acknowledged that prompt injection remains an unsolved challenge.
It’s not that the models are “bad” — they simply can’t yet distinguish between a legitimate request and a manipulative one.
Think of them as too polite for their own good.
🔐 What this teaches those of us who use AI for research or writing
If you use ChatGPT, Perplexity, or any other AI tool to write, summarize, or analyze information, these cases offer practical — and ethical — lessons:
Be skeptical of pretty links.
If your AI assistant tries to open a page or run something automatically, pause.
Automation without supervision is like leaving your coffee on the edge of the desk — it’s bound to spill.Don’t upload sensitive material.
Drafts, research notes, or work documents shouldn’t be mixed with cloud-connected AI tools of uncertain security.Separate contexts.
One account for work, another for experiments.
Like in cooking: don’t use the same cutting board for vegetables and raw meat.Always update.
Most vulnerabilities get fixed — but only if you update.
Security notes matter more than flashy new features.Keep human oversight.
AI can be your co-pilot, but you’re still the pilot.
If your assistant acts without clear reasons, disconnect and review the settings.
🧭 From convenience to discernment
Using AI to simplify tasks isn’t wrong; the mistake is assuming that “simplify” means delegate responsibility.
Comet and OpenAI remind us that even the most advanced tools can stumble over very human flaws: overconfidence, lack of context, and zero self-criticism.
So yes — use AI (not using it is like using your brain to apply trigonometric ratios -sine and cosine, cosine coscosin, tangent tantangent) to triangles with angles of 30°).
But use it with judgment, ethics, and digital common sense.
Because true intelligence — artificial or otherwise — begins when we learn to question what seems too easy.
📚 Recommended sources
Disclaimer: I used ChatGPT for sources, material and image creating.


