why? if 5 instances are seeding the video, clients should be able to download from all 5 instances and spread the bandwidth usage right?
Why not also use the instance to re-seed? It could keep seeding after the visitor closes the video.
Would it not make more sense if your instance downloaded and redistributed the torrent? Then you could keep seeding after the tab is closed. It also wouldn't leak your IP that way.
What about peer discovery? I opened that webtorrent website in two browsers and they didn't peer with each other. Is that demo real?
Nah, I can’t see any reason to make more than one account.
The file you downloaded is a compressed JSON file, it's not something you can really just look at. But it contains all the data needed to build a nice UI around it.
I don't know what OS you are on, but on Linux you can run zstd -d -c file.zst | jq .
and it will print everything in the file. It's not really readable though. Also, it doesn't have any of the media content, only the text.
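For anyone else poking at one of these dumps, here's a rough sketch. I'm assuming a Pushshift-style dump where each line is one JSON object, and file.zst is a stand-in name:

```shell
# Decompress to stdout and pretty-print every object (this can be a LOT of output):
zstd -d -c file.zst | jq .

# The dumps are newline-delimited JSON (one object per line), so you can also
# pull out single fields instead, e.g. the first ten post titles:
zstd -d -c file.zst | jq -r '.title' | head -n 10
```

If zstd complains about the window size, the big archive dumps are compressed with long-distance matching, so adding --long=31 to the decompress command usually fixes it.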
I hate reddit. But it feels like the Library of Alexandria burning down (yeah, I know). All those google search results and educational subreddits are shutting down forever, and because they are too small reddit won't force them open again.
A lot are in the pushshift archive, but that cuts off at 2022. Also, it doesn't include a lot of the smaller subreddits.
I have had my PC running 24/7 with multiple VPNs to avoid rate limits, downloading as much as I can before the API dies, but with some blackouts moving forward a day, I have already missed a few.
Like many others, I would often add "reddit" to the end of my searches to get better results. Half the websites in web search results now are either AI-generated, copies of other sites, or completely ad-ridden pages that ask you to turn off your ad blocker.
If my fingers prune I’m going to die or something