• 1 Post
  • 135 Comments
Joined 6 months ago
Cake day: June 9th, 2024

  • Two commands: dd and resize2fs, assuming you’re using ext4 and not something more exotic.

    The first, dd, makes a block-level copy of one device to another, like so: dd if=/dev/source-drive of=/dev/destination-drive

    The other resizes the filesystem from whatever size it was to whatever size you tell it (or to fill the whole partition if you don’t give it a size; I’d have to go reread the manpage since it’s been a bit).

    The dd copy is completely safe, but the resize2fs step can break things. Even if it does, you’d still have the data on the original drive, so you could always start over. I’d unplug the source drive before you start doing any of the expansion stuff. (Rough sketch of the whole sequence below.)
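    A minimal sketch of the whole sequence, assuming ext4, a source drive at /dev/sdX, a bigger destination drive at /dev/sdY, and the filesystem on the destination’s first partition (the device names here are placeholders; double-check yours before running anything):

    # block-for-block copy of the whole drive, partition table included
    sudo dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync

    # grow the destination partition to fill the new drive first (fdisk, parted, whatever you like), then:
    sudo e2fsck -f /dev/sdY1     # resize2fs wants a clean filesystem check first
    sudo resize2fs /dev/sdY1     # with no size given, it grows to fill the partition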








  • sudo smartctl -a /dev/yourssd

    You’re looking for the Media_Wearout_Indicator, which is a percentage that starts at 100% and counts down to 0%, with 0% meaning no more spare sectors available and thus “failed”. A very important note here, though: a drive at 0% isn’t always going to result in data loss.

    Unless you have the shittiest SSD I’ve ever heard of or seen, it’ll almost certainly just go read-only: all your data will still be there, you just won’t be able to write any more to the drive.

    You’ll also probably be interested in the Total_LBAs_Written attribute, which (usually) needs converting from a raw LBA count into gigabytes or terabytes, and tells you how much data has been written to the drive over its life. (Quick example below.)
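    A quick way to pull just those two attributes and do the conversion, assuming the SSD is /dev/sda, the drive reports attributes under those names (they vary a bit by vendor), and it uses 512-byte LBAs:

    sudo smartctl -a /dev/sda | grep -E 'Media_Wearout_Indicator|Total_LBAs_Written'

    # raw LBA count x 512 bytes = bytes written
    # e.g. 2,000,000,000 LBAs x 512 = roughly 1 TB written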



  • As a FunFact™, you’re more likely to have the SSD controller die than the flash wear out at this point.

    Even really cheap SSDs will do hundreds and hundreds of TB written these days, and on a normal consumer workload that works out to years and years of expected lifespan.

    Even the cheap SSDs in my home server have been fine: they’re pushing 5 years on this specific build with about 200 TBW on the drives, and they’re still claiming 90% life left.

    At that rate, I’ll be dead well before those drives fail, lol.
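    Back-of-the-envelope from those numbers: 200 TBW in 5 years is roughly 40 TB/year, and if 200 TBW only used up 10% of the drives’ rated life, the remaining 90% implies something like 1,800 TBW of headroom, or about 45 more years at the same write rate.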



  • I went and whacked the scan-library button on a 30 TB collection and it didn’t read all that much data (looks like under 100 GB), and it was pretty quick, maybe 45 seconds. That’s on local drives and all that, so the speed of the scan matters less than the relatively small amount of data read. If all you had was 1 TB of media, I’d expect it to read just a couple of gigabytes, not huge amounts of data.

    I’d probably double-check that however you’ve mounted the WebDAV share actually supports partial (ranged) reads, since that feels like the first place something could go wrong and cause excessive amounts of file transfer. (Quick test below.)
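    One quick sanity check on the server side, assuming you can reach a file on the share directly over HTTP (the URL and credentials here are placeholders): ask for a small byte range and see whether you get 206 Partial Content back, or 200 with the whole file.

    curl -s -o /dev/null -w '%{http_code} %{size_download}\n' \
         -u user:pass -H 'Range: bytes=0-1023' \
         'https://example.com/webdav/some-movie.mkv'

    # 206 and ~1024 bytes downloaded = ranged reads work
    # 200 and the full file size = the client has to pull whole files, which would explain the traffic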