• 0 Posts
  • 20 Comments
Joined 2 years ago
Cake day: December 29th, 2023

  • The only real impacts it has had are making me pay more attention to how I feel and more intentional about how I eat. I’m not going back for seconds just because they’re there, since I consciously decided that my initial portion was a good amount. I’m also not skipping a day of eating just because it didn’t cross my mind, since I can see that little line slowly go down and it reminds me that food is good.

    Modern tech is amazing. The management of T1D has been reduced to a glance at my phone every once in a while and a couple keystrokes when I eat something.

    I was talking to someone recently who was diagnosed as a little kid, and the stuff they went through sounded awful. For all that, their adult management of it is no better than mine.




  • eRac@lemmings.world to Science Memes@mander.xyz · I AM BETTER · 6 months ago

    I mean, the guy didn’t know that water and ice are the same thing.

    The summaries I find reference him theorizing that water may be spherical, leading to the hexagon pattern. He also related the feathery ends to steam hitting a cold window.

    It seems to me that he knew that steam, water, and ice were the same thing.


  • Generative AI doesn’t get any additional training while in use. The explosion in public AI offerings falls into three categories:

    1. Saves the company labor by replacing support staff
    2. Used to entice users by offering features competitors lack (or as catch-up after competitors have added it for this reason)
    3. Rides the hype, because AI is the current hot thing that gets investors excited

    To make a good model you need two things:

    1. Clean data that is tagged in a way that allows you to grade model performance
    2. Lots of it

    User data might meet need 2, but it fails at need 1. Running random user data through neural networks to make it more exploitable (more accurate interest extraction, etc.) makes sense, but training on that data doesn’t.
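    The point about needing tagged data can be made concrete with a toy sketch (all names here are hypothetical, not any real system): you can only grade a model when each example carries a trusted label to compare predictions against; raw user data has no such labels.

    ```python
    # Minimal sketch: grading a model requires labeled examples.
    # The "model" and data below are hypothetical stand-ins.

    def accuracy(model, labeled_data):
        """Fraction of examples where the model's prediction matches the tag."""
        correct = sum(1 for x, label in labeled_data if model(x) == label)
        return correct / len(labeled_data)

    # A toy "model" that flags a comment as spam if it contains "buy now".
    spam_model = lambda text: "spam" if "buy now" in text.lower() else "ok"

    # Clean, tagged data: every example has a label, so grading is possible.
    tagged = [
        ("Buy now, limited offer!", "spam"),
        ("Great write-up, thanks.", "ok"),
        ("BUY NOW!!!", "spam"),
        ("What dose do you use?", "ok"),
    ]
    print(accuracy(spam_model, tagged))  # 1.0

    # Raw user data carries no labels -- there is nothing to compare a
    # prediction against, so there is no way to score (or train) on it.
    untagged = ["Buy now, limited offer!", "Great write-up, thanks."]
    ```

    The same gap applies to generative models: without some ground truth, user traffic alone can’t tell you whether an answer was right.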

    This is clearly demonstrated by Google’s search AI, which learned lots of useful info from Reddit but also learned absurd lies with the same weight. Not just overtuned-for-confidence lies, but straight-up glue-the-cheese-on lies.