Similar with Y2K — it was only a nothingburger because it was taken seriously, and funded well. But the narrative is sometimes, “yeah lol it was a dud.”
All this hysteria over nuclear weapons is overblown. We’ve known how to build them for 75 years yet there hasn’t been a single one detonated on inhabited American soil. They’re harmless
You even dropped a few accidentally and nothing happened! Complete duds these things really
Yeah but not all people live on American soil…
It’s the American tradition to ignore that
WTF?
Unless that was sarcasm that I missed… 100’s of weapons have been tested on US soil…
Not sure what you mean.
The US was inhabited last I checked.
Pretty sure no human lived at the Trinity test site or anywhere else in the test sites where weapons were detonated, especially at the moment of detonation. And I’m pretty sure none have since moved onto those sites either. Hence “inhabited”. It’s not like we nuked cities and towns.
Right… Gotcha. So you’re a ‘change the goalposts to keep making me right as the argument and evidence changes’ kinda person.
No point engaging with your type.
Sounds like you’re the goalposts-mover here, shipmate, and it seems the rest of the readers here agree with me. Maybe this place ain’t your venue.
deleted by creator
The question is, what will happen in 2038 when Y2K happens again due to an integer overflow? People are already sounding the alarm, but who knows if everything will be fixed before it hits.
It’s already been addressed in Linux - not sure about other OSes. They doubled the size of the time value (time_t) from 32 to 64 bits, so now you can keep using it until after the heat death of the universe. If you’re around then.
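Rough illustration in C of what doubling the size buys you (just a sketch, nothing Linux-specific): a signed 32-bit seconds counter runs out on 19 January 2038, while a 64-bit one keeps counting for billions of years.

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void) {
    /* Seconds since 1970-01-01 (the Unix epoch). A signed 32-bit counter
     * tops out at 2,147,483,647 seconds, which lands on 19 January 2038. */
    int32_t old_limit = INT32_MAX;
    time_t t = (time_t)old_limit;          /* widen to this platform's time_t */
    printf("32-bit limit: %s", asctime(gmtime(&t)));

    /* A 64-bit time_t keeps counting for roughly 292 billion more years,
     * hence the "heat death of the universe" joke. */
    printf("sizeof(time_t) here: %zu bytes\n", sizeof(time_t));
    return 0;
}
```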
Finally it’d be the year of desktop Linux, once all the Windows users die off.
This is the funniest comment I have ever read here. Thank you.
Debian, for example, is currently at work recompiling everything from 32-bit to 64-bit timestamps (thanks to open source this is no problem). Dunno what happens to proprietary legacy software.
Obviously new systems are unaffected, the question is how many industrial controllers checking oil pipeline flow levels or whatever were installed before the fix and never updated.
Being somewhat adjacent to that in my work, there is a good chance that anything in a critical area (hopefully fields like utilities and petroleum, areas with enough energy to cause harm) has decently hardened or updated equipment, so it either isn’t an issue, will stop reporting trend data correctly, or will roll over to date “0”, which on industrial platforms tends to be 1970 in my personal experience. That said, there is always the chance that it will not be handled correctly and will either run away or stop entirely.
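For what it’s worth, a small C sketch (illustrative only, not any specific controller) of the two failure modes you usually see: the counter wrapping negative, which naive code renders as a date in late 1901, versus the clock being reset to 0, which reads as 1970.

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void) {
    /* One second past the 32-bit limit: the counter wraps to INT32_MIN. */
    int32_t wrapped = INT32_MIN;
    time_t as_signed = (time_t)wrapped;
    struct tm *tm = gmtime(&as_signed);   /* may be NULL where pre-1970 times are rejected */
    if (tm)
        printf("wrapped clock reads: %s", asctime(tm));   /* Dec 13 1901 on glibc */

    /* Firmware that clamps or resets the clock to 0 shows the epoch instead. */
    time_t clamped = 0;
    printf("reset-to-zero reads: %s", asctime(gmtime(&clamped)));  /* Jan 1 1970 */
    return 0;
}
```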
I think everything works in Windows except the old Windows Media Player. You can test it by setting the time in a Windows VM to 2039.
As a future boltzmann brain, I agree.
2038 is approaching super fast and nobody seems to care yet
At the rate of one year per year, even.
For each second that passes we’re one second closer to 2038
Except for leap seconds. Time is the worst to work with :(
omg
deleted by creator
AFAIK that’s not entirely true; e.g. Debian is changing the system time from a 32-bit integer to 64-bit, so I assume other distros are doing this as well. However, this does not help for industrial or IoT devices running deprecated Unix / Linux derivatives.
This is my concern, all the embedded devices happily running in underground systems like pipes and cables. I assume there are at least a few which nobody even considered patching because they’ve “just worked” for decades!
Or like… PLANES! Some planes still update their firmware using floppy disks
They do at least get updates though, and they’re big enough that they don’t get forgotten!
That’s not true, lots of people are panicking about how fast they’re getting older
Well that’s justifiable. We’re not sure if we’re even going to make it to then
I can’t remember the name but I think this is some kind of paradox.
Like, the preventative measures were so effective that they created a perception that there was no risk in the first place.
It’s called the prevention paradox: an issue is so severe that it gets prevented with proactive action, so no real consequences are felt, and people conclude it wasn’t severe in the first place.
Case in point: Measles. It was a thing when I was a kid. Then it wasn’t. Now my kids have to deal with Measles because we can’t teach scientific literacy.
That waste of effort, the Cold War… /s
“Lol Elon rocket go boom, science isn’t real” is also happening
Stupid people just think they’re the smartest ones in the room now
Elon musk isn’t a scientist, he’s a scammer who got lucky. That, and an asshole.
Well, considering the Elon situation, I wouldn’t blame anyone for making fun of his idiotic ventures. Also, Starship is actually dumb, and saying “you expected it to blow up” is something no real scientist would’ve said unless they were making a bomb.
How is Starship dumb exactly? Making a new thing at any extreme of our current capability is going to be hard, and it’s not unexpected when something goes wrong. What would be dumb is if they put human lives on the line.
It had no payload on any of its flights. Rockets that have enough time and money put into development to have a reasonable expectation of working on the first try (and don’t have such an ambitious design) normally launch with a payload on their first flight; sometimes even those fail on the first few flights. Having the first few of a new rocket design fail before reliability is achieved is common (e.g. Astra), and SpaceX’s other rocket, the Falcon 9, is known as the most reliable rocket; I even suspect it has lately achieved landings more often than most others achieve launches.
Starship’s last launch went decently well, reaching orbit (which is as far as most rockets go!) but failing during reentry. It is also supposed to be the rocket with the largest payload capacity to low earth orbit, with 100-150 tons when reused and likely 200-300 when expended.
Starship, as it is right now, is already a better rocket than SLS. It can already carry more mass, and even fully expended it is cheaper than the SLS’s roughly $4 billion cost per launch.
It will get better. Falcon 9 didn’t land the first time either, but now it has successfully landed more consecutive times than any other rocket has flown.
There’s nothing wrong with saying this is a test. This is only a test, and we don’t expect it to be perfect yet. Each time they learn from the data. And SpaceX hasn’t repeated the same mistake twice.
I wasn’t working in the IT field back then, as I was only 16, but as I knew that it’d most likely be my field one day (yup, I was right), I followed this closely due to interest, and applied patches accordingly.
Everything kept working fine except this one modem I had.
I kinda wish I knew what it was like working on Y2K stuff. It sounds like the most mundane bug to fix, but the problem is that it was everywhere. Which I imagine made it pretty expensive 👀
That’s a pretty good description. And most software back then didn’t use nice date utilities; each program had its own inline implementation. So sometimes you had to figure out what the original code was trying to do, and it was usually written by someone who wasn’t there anymore. But other times it was the most mundane thing, doing the same fix you’d already done in 200 other programs.
And computer networking, especially the ability to remote into a system and make changes or deliver updates en masse, was nowhere near as robust as it is today, meaning a lot of those fixes were done manually.
And that modem was handling the nuke codes, right?
Most of the Y2K problem was custom software and really old embedded stuff. In my case, all our systems were fine at the OS level, and I don’t remember any commercial software we had trouble with, but we had a lot of custom software with problems, as did our partners.
Y2K specifically makes no sense though. Any reasonable way of storing a year would use a binary integer of some length (especially when you want to use as little memory as possible). The same goes for manipulations; they are faster, more memory efficient, and easier to implement in binary. With an 8-bit signed integer counting from 1900, the concerning overflows would occur in 2028, not 2000. A base 10 representation would require at least 8 bits to store a two digit number anyway. There is no advantage to a base 10 representation, and there never has been. For Y2K to have been anything more significant than a text formatting issue, a whole lot of programmers would have had to go out of their way to be really, really bad at their jobs. Also, usage of dates beyond 2000 would have increased gradually for decades leading up to it, so the idea it would be any sort of sudden catastrophe is absurd.
The issue wasn’t using the dates. The issue was the computer believing it was now on those dates.
I’m going to assume you aren’t old enough to remember, but the “only two digits to represent the year” issue predates computers. Lots of paper forms just gave two digits. And a lot of early computer work was just digitising paper forms.
I remember paper forms having “19__” in the year field. Good times
When I was little I was scared of that problem and always put all four digits of the year in my homework.
You’re thinking of the problem with modern solutions in mind. Y2K originates from punch cards, where everything was stored in characters. To save space, only the last 2 digits of the year were stored, because back then you didn’t need the “19” of year 19xx. That way of storing data stayed the same for a long time even as technology advanced beyond punch cards. The assumption that it’s always 19xx caused the Y2K bug, because once the year rolls over to 00 the system doesn’t know if it’s 1900 or 2000.
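A toy C sketch of that two-character year field and the “windowing” trick a lot of Y2K remediation actually used (the helper names here are made up for illustration):

```c
#include <stdio.h>

/* A record stores the year as two decimal characters, punch-card style,
 * e.g. "99" for 1999. */
static int expand_year_naive(const char yy[2]) {
    int two = (yy[0] - '0') * 10 + (yy[1] - '0');
    return 1900 + two;                  /* the fatal assumption: always 19xx */
}

/* Common Y2K fix: "windowing" -- pick a pivot year to decide the century. */
static int expand_year_windowed(const char yy[2], int pivot) {
    int two = (yy[0] - '0') * 10 + (yy[1] - '0');
    return (two >= pivot) ? 1900 + two : 2000 + two;
}

int main(void) {
    printf("naive    \"99\" -> %d\n", expand_year_naive("99"));        /* 1999 */
    printf("naive    \"00\" -> %d\n", expand_year_naive("00"));        /* 1900, oops */
    printf("windowed \"00\" -> %d\n", expand_year_windowed("00", 70)); /* 2000 */
    return 0;
}
```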
Some of the computers in question predate standardizing on 8 bits to the byte. You’ve got a whole post here of bad assumptions about how things worked.
You don’t spend much time around them, do you?
You do realize that “counting from 1900” meant storing only the last two digits and just hardcoding the programs to print “19” in front of it in those days? At best, an overflow would lead to 19100, 1910, or 1900, depending on the print routines.
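That 19100 failure mode is easy to reproduce even today, because C’s struct tm still stores the year as “years since 1900”; a minimal sketch:

```c
#include <stdio.h>
#include <time.h>

int main(void) {
    /* struct tm counts the year from 1900. A lot of old display code
     * just glued a literal "19" in front of that number. */
    struct tm t = {0};
    t.tm_year = 100;                            /* i.e. the year 2000 */

    printf("buggy: 19%d\n", t.tm_year);         /* prints 19100 */
    printf("fixed: %d\n", 1900 + t.tm_year);    /* prints 2000  */
    return 0;
}
```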
Oh boy, you heavily underestimate the amount and level of bad decisions in legacy protocols. Read up on the topic. The date was, for a long time, stored as 6 decimal digits.
And then there is PIC 99 in COBOL. In modern languages it makes no sense, but there is still a lot of really old code around, and not everything is two’s complement, especially if you do not need the efficiency in memory and calculations.
Look up some info on BCD or EBCDIC.
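For anyone curious, a tiny C sketch of BCD (binary-coded decimal), where each decimal digit gets its own 4-bit nibble; this is roughly how those old decimal fields were stored, and why a plain binary increment corrupts them:

```c
#include <stdint.h>
#include <stdio.h>

/* Pack a two-digit year into one BCD byte: high nibble = tens, low = ones.
 * 99 -> 0x99. */
static uint8_t to_bcd(int two_digit) {
    return (uint8_t)(((two_digit / 10) << 4) | (two_digit % 10));
}

static int from_bcd(uint8_t b) {
    return (b >> 4) * 10 + (b & 0x0F);
}

int main(void) {
    uint8_t year = to_bcd(99);
    printf("1999 stored as 0x%02X\n", year);

    /* A naive binary increment yields 0x9A, which is not a valid pair of
     * decimal digits; decoded it reads back as 9*10 + 10 = 100, i.e. garbage. */
    year++;
    printf("after ++ : 0x%02X -> decodes to %d\n", year, from_bcd(year));
    return 0;
}
```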