Mastodon Digest
Posts

Security nerds have launched a GoFundMe to buy back securityfocus.com, the domain that once hosted Bugtraq and the target of more than 120,000 now-dead links in the National Vulnerability Database. Whoops.

"Symantec killed Bugtraq in 2020 and let the domain lapse. Now it's squatted for $175k," writes Jonathan Brossard. "The NVD has 120,000+ broken links pointing there. The security community's memory is being held hostage."

gofundme.com/f/restore-securit

Not sure if this matters, but DomainTools says the domain was transferred to Accenture.com (Accenture Global Services Limited, Ireland).


Infosec exec sold eight zero-day exploit kits to Russia: DoJ theregister.com/2026/02/15/exl

Boosts

Why AI writing is so generic, boring, and dangerous: Semantic ablation.

(We can measure semantic ablation through entropy decay: run a text through successive AI "refinement" loops and its vocabulary diversity (type-token ratio) collapses.)

theregister.com/2026/02/16/sem
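
One way to sketch that measurement, assuming Python; rewrite() here is a hypothetical placeholder for whatever LLM "refinement" call you would use, not anything from the post:

    # Minimal sketch: track type-token ratio (vocabulary diversity)
    # across successive "refinement" passes.
    import re

    def type_token_ratio(text: str) -> float:
        # Crude word tokenizer; ratio of unique words to total words.
        tokens = re.findall(r"[a-z']+", text.lower())
        return len(set(tokens)) / len(tokens) if tokens else 0.0

    def measure_ablation(text: str, rewrite, passes: int = 5) -> list[float]:
        # rewrite: callable that returns one AI-"refined" version of the text.
        ratios = [type_token_ratio(text)]
        for _ in range(passes):
            text = rewrite(text)
            ratios.append(type_token_ratio(text))
        return ratios  # a steadily falling TTR is the collapse described above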

This morning I got an email from a sender that identified itself as an AI agent.

So - a plus for being upfront about it, but... please don't do this.

I get that a lot of people are really, really, really into AI tools. OK. I have my opinions on them, you have yours. I have major qualms about them; some people think they're the best thing ever.

OK. Fine. But when your use of these things spills over into the rest of the world, it's no longer a question of my opinion vs. your opinion, my decisions vs. your decisions.

At this point, things have moved from each person doing their own thing to you inflicting your use of AI on me without my consent.

Before this spirals out of control, which I can see happening *very* quickly, I'd like for us to agree on a piece of netiquette:

- it is rude in the extreme to set loose an AI agent to reach out to people who have not consented to interact with these things.

- it is rude to have an AI agent submit pull requests that human maintainers have to review.

- it is rude to have an AI agent autonomously interact with humans in any way when they have not consented to take part in whatever experiment you are running.

- it is unacceptable to have an AI agent autonomously interact with humans without identifying the person or organization behind the agent. If you're not willing to unmask and have a person reach out to you with their thoughts on this, then don't have an AI agent reach out to me.

Stuff like this really sours me on technology right now. If I didn't have a family and responsibilities, I'd be seriously considering how I could go live off the grid somewhere without having to interact with this stuff.

Again: I'm not demanding that other people not use AI/LLMs, etc. But when your use spills out into my having to interact with an agent's output, you need to reconsider. Your ability to spew things out into the universe puts an unwanted burden on other humans who have not consented to this.

#LLMs #AI #AgenticAI #OpenClaw #OpenSource