• 2 Posts
  • 31 Comments
Joined 1 year ago
Cake day: June 16th, 2023

  • Oh, judicial duels have always been bad, tending to favor the wealthy who can afford training. The pistol duel was once considered egalitarian because you were just as likely to miss your opponent regardless of how much you trained. For most of the 20th century (until the 90s), Uruguay had legalized dueling. It was mostly used by politicians and the powerful to murder journalists and lawyers who “defamed” them.

    But if we are already living in a period where the rich act with impunity anyway, I want a world where there’s a nonzero chance that we get to watch Elon Musk take an estoc to the face because of a twitter argument.




  • How about writing a script to automate the deletion, minimizing the chance of human error? It could include checks like “Is this a folder with .git contents? Am I being invoked from /home/username/my_dev_workspace?”
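
    The checks described above could look something like this minimal sketch. The function name `safe_delete` and the workspace path are illustrative assumptions, not an existing tool:

    ```python
    #!/usr/bin/env python3
    # Hypothetical cleanup helper: refuses to delete anything that looks like
    # a git repository, and only deletes paths inside an expected workspace.
    import os
    import shutil
    import sys

    # Assumed workspace root, taken from the comment's example path.
    ALLOWED_PARENT = os.path.expanduser("~/my_dev_workspace")

    def safe_delete(path: str) -> bool:
        """Delete `path` only if the safety checks pass; return True on success."""
        full = os.path.abspath(path)
        # Check 1: refuse to touch anything containing a .git directory.
        if os.path.isdir(os.path.join(full, ".git")):
            print(f"refusing: {full} looks like a git repository", file=sys.stderr)
            return False
        # Check 2: only delete things inside the expected workspace.
        if not full.startswith(ALLOWED_PARENT + os.sep):
            print(f"refusing: {full} is outside {ALLOWED_PARENT}", file=sys.stderr)
            return False
        shutil.rmtree(full)
        return True
    ```

    Returning a boolean instead of raising lets a caller log refusals and keep going, which fits the “minimize human error” goal: the script fails closed rather than deleting on a bad invocation.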

    In a real aviation design scenario, designers want to minimize the busywork that takes up cognitive load on a pilot so they can focus on actually flying. Your ejection seat example would probably be replaced with an automatic ejection system managed by the flight computer.













  • I guess it would depend on whether or not the project spawns a dedicated community that lasts for a long time. Without a wide pool of knowledgeable contributors, I think it would be hard for an original team to both support the one design while also developing the next iteration.

    Not to bring it up as a whipping boy, but let’s take the case of Wayland, which is “just” a software protocol. It was started back in 2008 and is still under active development. As more projects support it, more edge cases come up, which is why new features are added to the protocol all the time. In those 15 years, they’ve had to adjust to technologies that didn’t exist back in 2008, like widespread adoption of 4K HDR displays, or Vulkan. Now imagine that, but with every aspect of a computer. In 2008, DDR3 RAM was just a year old. Today we’re on DDR5 and you (probably) can’t buy a new machine that takes DDR3. PCIe 2 was the latest shit in 2007. Now I see that PCIe 7 is planned for next year.

    A global corporation can support old products while also developing new technologies because they have unfathomable labor and capital at their beck and call.

    I think that free software can keep up with proprietary offerings because the barrier to entry is relatively low. You just need free time and a source control client. I think it would be different if your project toolchain involved literal tools that cost millions of dollars.


  • I think because such an undertaking would require a wide breadth of extremely specialized knowledge. It would require intense coordination of many experts to work together over many years, all to design something that:

    1. Will be obsolete within a few years
    2. Is outside the realm of replicability for individuals (I never heard of anyone with a nanometer-scale photolithography room in their house)

    Item 1 is OK for hobbyists, who might value open source over newness, but item 2 all but guarantees that only big corporations can actually get involved. They don’t care about free and open source. They just want a computing platform that their engineers can develop a product for. As long as there’s enough documentation for their goals, open source is irrelevant.

    The power of modern computing comes partly in how it enables abstraction. You don’t need to understand the physics of electrons through a transistor to write a video game. Overall, the open source community has generally converged on the idea that abstracting away the really hard stuff is an acceptable tradeoff.



  • I think there are real concerns to be addressed in the realm of AGI alignment. I’ve found Robert Miles’ talks on the subject to be quite fascinating, and as such I’m hesitant to label all of Eliezer Yudkowsky’s concerns as crank (although Roko’s Basilisk is BS of the highest degree, and effective altruism is a reimagined Pascal’s mugging for an atheist/agnostic crowd).

    Even though today’s LLMs are toys compared to what a hypothetical AGI could achieve, we already have demonstrable cases where the “AI” does not “desire” the same end goal that we want it to achieve. Without more advancement in how we approach AI alignment, the danger of misaligned goals will only grow as (if) we give AI-like systems more control over daily life.