Another commenter said that capsules are primarily designed around their abort landings. The US launches crewed missions from Florida, so the abort landing options are mostly in the Atlantic (water landings). Russia launches from the Baikonur Cosmodrome in Kazakhstan, so their abort landing options are mostly over Northern Asia (land landings).
Ignoring that the Soyuz is a more traditional capsule that does land on land, with timed rockets firing to slow its descent just before impact.
Let’s just say landing on water is better, based on the injuries various astronauts have suffered riding home on the Soyuz. Several American astronauts have experienced bruising and joint/back pain from the hard, “bone-jarring” landing.
bluemellophone@lemmy.world to Science Memes@mander.xyz • That's how the world works.
7 · 1 month ago
Yep, find your local CSA (Community Supported Agriculture) and get a membership.
bluemellophone@lemmy.world to Science Memes@mander.xyz • Shout out to the NYE shift at all the Emergency Departments around the world.
151 · 4 months ago
That’s nuclear-grade stupidity.
They could be manufacturing prototypes or examples for apprentices. It’s also possible replicas were made for trade, to demonstrate the shape to others, as tokens of appreciation, or to be adapted into toys / primitive dice. <shrug>
bluemellophone@lemmy.world to Asklemmy@lemmy.ml • You can choose 1 superpower, but the first reply is the side effect. What superpower would you choose?
91 · 5 months ago
You are now blind and deaf like Helen Keller. Good luck, she figured it out, you’ll be fine.
bluemellophone@lemmy.world to Programmer Humor@programming.dev • Apple forgot to disable production source maps on the App Store web app
8 · 6 months ago
Yep, it’s got a DMCA takedown now.
No, you are correct. Hinton began researching ReLUs in 2010, and his students Alex Krizhevsky and Ilya Sutskever used them to train a much deeper network (AlexNet) to win the 2012 ILSVRC. The reason AlexNet was so groundbreaking was that it brought gradient optimization improvements (SGD with momentum, as popularized by Schmidhuber), regularization (dropout), better activation functions (ReLU), a deeper network (8 layers), supervised training on very large datasets (necessary to learn good general-purpose convolutional kernels), and GPU acceleration together into a single approach.
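To make those ingredients concrete, here is a minimal PyTorch sketch of an AlexNet-style network. The layer sizes follow the 2012 paper, but this is an illustration only, not the original two-GPU implementation, and the optimizer settings are simply the commonly cited values:

```python
import torch
import torch.nn as nn

# AlexNet-style architecture: 5 convolutional layers + 3 fully connected layers,
# ReLU after every learned layer, dropout on the wide fully connected layers.
class AlexNetSketch(nn.Module):
    def __init__(self, num_classes: int = 1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
            nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)      # expects 3x227x227 inputs
        x = torch.flatten(x, 1)
        return self.classifier(x)

model = AlexNetSketch()
# Plain SGD with momentum and weight decay, the optimizer described in the AlexNet paper.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=5e-4)
```

Every piece looks ordinary today, which is exactly the point: the breakthrough was combining them and having the data and GPUs to actually train it.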
NNs, and specifically CNNs, won out because they were able to create more expressive and superior image feature representations compared to the hand-crafted features of competing algorithms. The proof was in the vastly better performance: it was a major jump at a time when performance on the ILSVRC was becoming saturated. Nobody was making nearly +10% improvements on that challenge back then; it blew everybody out of the water and made NNs and deep learning impossible to ignore.
Edit: to accentuate the point about datasets and GPUs, the original AlexNet developers really struggled to train their model on the GPUs available at the time. The model was too big, and they had to split it across two GPUs to make it work. They were some of the first researchers to train large CNNs on GPUs. Without large datasets like the ILSVRC they would not have been able to learn good deep hierarchical convolutions, and without better GPUs they wouldn’t have been able to make AlexNet sufficiently large or deep. Training AlexNet on CPU only for the ILSVRC was out of the question; it would have taken months of full-tilt, nonstop compute for a single training run. It was more than these two things, as detailed above, but removing those two barriers really allowed CNNs and deep learning to take off. Much of the underlying NN and optimization theory had been around for decades.
Before AlexNet, SVMs were the best algorithms around. LeNet was the only comparable success case for NNs back then, and it was largely seen as limited to MNIST digits because deep networks were too hard to train. People used HOG+SVM, SIFT, SURF, ORB, older Haar / Viola-Jones features, template matching, random forests, Hough transforms, sliding windows, deformable parts models… so many techniques that were made obsolete once the first deep networks became viable.
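For contrast, here is a minimal sketch of the kind of hand-crafted pipeline that dominated before 2012, assuming scikit-image and scikit-learn (the data here is random and purely for illustration):

```python
import numpy as np
from skimage.feature import hog    # hand-crafted gradient-orientation descriptor
from sklearn.svm import LinearSVC  # linear SVM classifier

# Stand-in data: 200 small grayscale images with binary labels.
images = np.random.rand(200, 64, 64)
labels = np.random.randint(0, 2, size=200)

# Extract a fixed-length HOG descriptor per image; in this style of pipeline
# the descriptor, not the classifier, carries most of the "intelligence".
features = np.array([
    hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    for img in images
])

# Train a linear SVM on top of the hand-crafted features.
clf = LinearSVC(C=1.0).fit(features, labels)
print(clf.score(features, labels))
```

Deep CNNs replaced the feature-engineering step itself, which is why so many of these techniques disappeared all at once.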
The problem is that your schooling was correct at the time, but the march of research progress eventually saw 1) the creation of large, million-scale supervised datasets (ImageNet) and 2) larger / faster GPUs with more on-card memory.
It was a fact back in ~2010 that SVMs were superior to NNs in nearly every aspect.
Source: started a PhD in computer vision in 2012
bluemellophone@lemmy.world to Programmer Humor@programming.dev • Fox news trying to explain github.
7 · 11 months ago
Thanks, dad
bluemellophone@lemmy.world to Programmer Humor@programming.dev • ultimate storage hack
6 · 1 year ago
That’s precisely when you bet on it.
Something said to me long ago by somebody I admired, back when I was a fuckwit teenager: “If you’re not embarrassed by the person you were last year, you’re not growing enough.” That has stuck with me for decades; I find it still applies, and I’m scared of the day I realize it doesn’t.
Well, if you ever find yourself in Portland, OR, I’ll buy you a beer. Nice of you to do that without any credit. Truly the lord’s work.
For those who haven’t read it:
Jazz hands, bitches!
That’s all you get.
Everybody in this thread needs to read Project Hail Mary by Andy Weir.
Exactly how would carbon dioxide get exchanged if the lungs are damaged?
It’ll 100% be chickcoal since the hand will be pushing Mach 5. Pretty sure the plasma will give it a nice sear.
bluemellophone@lemmy.world to Asklemmy@lemmy.ml • I have an interview tomorrow morning, any tips, suggestions advice?
12 · 2 years ago
Relax, show a willingness to learn and you’ll be ok.
I got my start working for university IT and made it all the way to a CS Ph.D. and into industry.
Edit: and get good sleep! It’s nearly midnight on the West Coast; get as much good-quality sleep as you can.
Pepperdine?