I don’t think serving 86 kilobytes to AI crawlers will make any difference in my bandwidth use :)
That would result in those fediverse servers theoretically requesting 333333 * 114 MB ≈ 38 terabytes; if those requests land within a thousand seconds or so (about 17 minutes), that’s roughly 38 GB/s.
On the other hand, if the linked site did not serve garbage, and fit in ~1 MB like a normal site, then this would be only ~325 MB/s, and while that’s still high, it’s not the end of the world. If it’s a site that actually puts effort into being optimized, and a request fits in ~300 KB (still a lot, in my book, for what is essentially a preview, with only tiny parts of the actual content loaded), then we’re looking at ~95 MB/s.
If said site puts effort into making its previews reasonable, and serves ~30 KB, then that’s ~9 MB/s. It’s 3190 in the Year of Our Lady Discord. A potato can serve that.
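To make the arithmetic explicit, here’s a quick sketch. The ~1000-second window is my assumption; it’s what makes the per-second figures above come out of the raw totals.

# Back-of-the-envelope numbers for a fediverse preview stampede.
# The 1000-second (~17 minute) window is an assumption, not a given.
SERVERS = 333_333
WINDOW_S = 1_000

for label, size_mb in [("bloated site", 114.0), ("normal site", 1.0),
                       ("heavy preview", 0.3), ("sane preview", 0.03)]:
    rate_mb_s = SERVERS * size_mb / WINDOW_S
    print(f"{label}: ~{rate_mb_s:,.0f} MB/s sustained")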
I only serve bloat to AI crawlers.
map $http_user_agent $badagent {
    default 0;
    # list of AI crawler user agents in "~crawler 1" format
}

# Flagged agents get internally rewritten to /gpt, whatever they asked for…
if ($badagent) {
    rewrite ^ /gpt;
}

# …which serves them the Bee Movie script instead of the real content.
location /gpt {
    proxy_pass https://courses.cs.washington.edu/courses/cse163/20wi/files/lectures/L04/bee-movie.txt;
}
…is a wonderful thing to put in my nginx config. (You can try

curl -Is -H "User-Agent: GPTBot" https://chronicles.mad-scientist.club/robots.txt | grep content-length:

to see it in action ;))
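The same check from a script, if curl isn’t handy. A minimal sketch: GPTBot is the only agent actually named above; the second entry is a hypothetical stand-in for whatever else the map flags.

import urllib.request

URL = "https://chronicles.mad-scientist.club/robots.txt"
# GPTBot comes from the curl example above; "ExampleAIBot" is a
# hypothetical stand-in for any other agent listed in the map.
for agent in ("GPTBot", "ExampleAIBot"):
    req = urllib.request.Request(URL, method="HEAD",
                                 headers={"User-Agent": agent})
    with urllib.request.urlopen(req) as resp:
        print(agent, resp.headers.get("Content-Length"))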
…and here I am, running a blog that wouldn’t bat an eye if it got 15k hits a second, and I could run it on a potato. Probably because I don’t serve hundreds of megabytes of garbage to visitors. (The preview image is also controllable iirc, so just, like, set it to something reasonably sized.)
There’s plenty, but I do not wish to hijack this thread, so… have a look at the Forgejo 7.0 release notes and the PRs it links to alongside notable features (and a boatload of bugfixes, many of which aren’t in Gitea). Then compare when (and if) similar features or fixes were implemented in Gitea.
The major difference between Gitea and Forgejo (apart from governance, and on a technical level) is that Forgejo cherry-picks from Gitea weekly (being a hard fork doesn’t mean all ties are severed; it means that development happens independently). Gitea does not cherry-pick from Forgejo. They could; the license permits it, and it even permits sublicensing, so it’s not an obstacle for Gitea Cloud or Gitea EE, either. They just don’t.
Ah! My bad.
mumbles something about big corps choosing way too generic names for their stuff
Threads does not interact with the Fediverse in its current form. It’s a horn blasting into the fediverse at best. It’s not participating in the fediverse, it’s shouting into it. As such, it’s correct not to report on how Threads denizens participate in the fediverse: they do not, not at this time.
I don’t use social media to stay connected with family. I pick up the phone, go visit, or, if we need to communicate online, I have an XMPP server for the family with end-to-end encryption. We can share pictures and text, do video calls if need be, send files, and so on.
Don’t see the need to involve any kind of social media.
There’s a very easy solution that lets you rest easy knowing your instance is how you want it to be: don’t do open registration. Vet the people you invite, and job done. If you want to be even safer, don’t post publicly - followers only. If you require follower approval, you can do some basic checks to see that whoever sends a follow request is someone you’re okay interacting with. This works on the microblogging side of the Fediverse quite well, today.
What I’m trying to say is that requiring admin approval for registrations gets you 99% of the way there, without needing anything more complex than that.
…and you think 14-17 year olds won’t circumvent this in mere seconds? Like, they’d just sign up at an instance that doesn’t implement these labels or doesn’t care about them, use their parents’ accounts, or ask them (or an older friend) to sign them up, and so on. Even if age verification were widespread and legally mandated, I highly doubt any sufficiently determined 14-17 year old would have trouble getting past it.
Nevertheless, as Bluesky grows, there are likely to be multiple professionally-run indexers for various purposes. For example, a company that performs sentiment analysis on social media activity about brands could easily create a whole-network index that provides insights to their clients.
(source)
Is that supposed to be a selling point? Because I’d like to stay far, far away from that, thank you very much.
A story like that, eh? Well, as it turns out, the entire configuration of my operating system is a story. Or rather, many stories.
And how would that improve anything? Like I said, any general purpose engine is a no-go for me, because they index things I have no desire to ever see in my search results. Kagi is no exception.
Been there, tried it, didn’t find it noticeably better than the other general purpose search engines.
I found that no general purpose search engine will ever serve my needs. Their goal is to index the entire internet (or a very large subset of it), and sadly, a very large part of the internet is garbage I have no desire to see. So I simply stopped using search engines. I have a carefully curated, topical list of links where I can look things up, plus RSS feeds, and those pretty much cover everything I used search for.
Lately, I have been experimenting with YaCy, and fed it my list of links to index. Effectively, I now have a personal search engine. If I come across anything interesting via my RSS feeds or via the Fediverse, I plug it into YaCy, and now it’s part of my search library. There’s no junk, no ads, no AI, no spam, and the search result quality is stellar. The downside is, of course, that I have to self-host YaCy and maintain a good quality index. It takes a lot of effort to start, but once there’s a good index, it works great. So far, I have found the effort/benefit ratio to be very much worth it.
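If you’re curious what querying it looks like, here’s a minimal sketch against YaCy’s JSON search API, assuming a local instance on its default port 8090; the field names follow its OpenSearch-style JSON, so double-check them against your instance.

import json
import urllib.parse
import urllib.request

# Ask a local YaCy instance for results via its JSON search API.
params = urllib.parse.urlencode({"query": "self-hosted search"})
with urllib.request.urlopen(f"http://localhost:8090/yacysearch.json?{params}") as resp:
    data = json.load(resp)

# The JSON mirrors YaCy's OpenSearch RSS layout: channels -> items.
for item in data["channels"][0]["items"]:
    print(item["title"], item["link"])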
I still have a SearxNG instance (which also searches my YaCy instance, with higher weight than other sources) to fall back on if I need to, but I haven’t needed it in the past two months, and only twice in the past six.
Very bad, because the usability of such a scheme would be a nightmare. If you have to unzip the files every time you need a password, that’s a huge burden. Not to mention that unzipping leaves the plaintext files there, unprotected, until you delete them again (if you remember to delete them in the first place). And if you leave the plaintext files around and only encrypt & zip for backups, that’s worse than just backing up the plaintext directly, because it gives you a false sense of security. You want to minimize the amount of time passwords spend in the clear.
Just use a password manager like Bitwarden. Simpler, more practical, more secure.
It’s not. It just doesn’t get enough hits for that 86k to matter. Fun fact: most AI crawlers hit /robots.txt first; they get served the Bee Movie script, fail to interpret it, and leave without crawling further. If I let them crawl the entire site, that would result in about two megabytes of traffic. By serving an 86 KB file that doesn’t parse as a robots.txt and has no links, I actually save bandwidth: not on a single request, but by preventing a hundred others.
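To put numbers on it, a tiny sketch with the figures from above:

# Serving the 86 KB decoy vs. letting a crawler fetch the whole site.
DECOY_KB = 86
FULL_CRAWL_KB = 2 * 1024  # "about two megabytes" of traffic

print(f"per crawler turned away: ~{FULL_CRAWL_KB - DECOY_KB} KB saved")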