Science is “empirically complete” when it is well funded, all unknowns are constrained in scope, and (n+1) generations of scientists produce no breakthroughs of any kind.

If a hypothetical entity could incorporate every aspect of science into its reasoning and ground that understanding in every aspect of the events in question, free from bias, what epistemological theory would this be?

I’ve been reading wiki articles on epistemology all afternoon and feel no closer to an answer amid the word salad in this space. My favorite LLM’s responses seem to reflect a similarly muddled understanding. Maybe someone here has a better grasp of the subject?

  • floofloof@lemmy.ca · 7 points · 5 months ago

    If you think epistemological theorizing happens prior to and independently of empirical science (a priori), the question doesn’t make much sense. If, on the other hand, you think epistemology follows and depends on the results of empirical science, you won’t know the answer until you get there.

    • j4k3@lemmy.world (OP) · 4 points · edited · 5 months ago

      I’m trying to simplify without telling a whole story to get there, but I’m about to fail. I’m working out how academics in a very distant future might argue for, and (in a losing argument) against, codifying science as an engineering corpus. I’m willing to gloss over the balance of cost versus return, in a society already well beyond hierarchical wealth (reputation and accolades drive large-scale hierarchy, and the average person lives within a heat/elemental-cycles budget), in a space-based civilization that has colonized the G-type stars within 7 parsecs of Sol. Colonization is driven by the time pressure of Sol’s expansion. Travel is one-way only, on generation ships powered by antimatter produced with the Solar infrastructure. No FTL, no aliens, no magic. The biggest magic is the assumption that a self-replicating drone is possible, but only at kilometer scale.

      The entire story idea stems from the notion that the present fear of AI is a mythos of the machine gods. I am setting up a story in which a fully understood human-like brain is synthesised in a lab as a biological computer. All human-scale technology is biological, but the bio-compute brayn is the crowning achievement for millennia, until someone finds a way to merge the brayn with lab-grown holoanencephaloids to create independent humanoid AGI. It is a roundabout way of creating a semi-plausible mortal flesh-and-blood AGI.

      I further the idea by integrating these entities with humans at every level of life. Later in life, these human-scale-intelligence AGI entities may receive invitations to join a collective Central AGI that functions as the governing system. I’m using the unique life experience of integration both as a counter to the Alignment Problem and as a form of representative democracy.

      I refuse to admit that this must be authoritarian, dystopian, or utopian. I believe we tend to assume similar ideas are one or more of these things because we are blind both to the potential for other forms of complex social hierarchy and to the true nature of our present forms of hierarchical display.

      The question posed in this post is not about absolute truth, but about what is plausible enough to win populist dominance.

      It is just a pointless hobby project, but a means to explore and keep my mind occupied.

      I like your premise.

      • floofloof@lemmy.ca · 2 points · edited · 5 months ago

        Ah, I didn’t understand that you were asking about a fictional scenario. I don’t know about your main question, but I like your notion of the social integration of humanoid AGIs with unique life experiences, and your observation that there’s no need to assume AGI will be godlike and to be feared. Some framings of the alignment problem carry a strange assumption: that AGI would be smarter than us and yet less able to recognize nuance and complexity in values, and that it would therefore be likely to pursue some goals to the exclusion of others in a way so crude we’d find it horrific.

        There’s no reason an AGI with a lived experience of socialization not dissimilar to ours couldn’t come to recognize the subtleties of social situations and respond appropriately and considerately. Your mortal flesh-and-blood AI need not be some towering black box, occupied with its own business, whose judgements and actions we struggle to understand; if integrated into society, it would be motivated like any social being to find common ground for communication and understanding, and tolerable societal arrangements. Even if we’re less smart, that doesn’t mean it would automatically consider us unworthy of care; that assumption always smells like a projection of the personalities of the people who imagine it. And maybe it would have new ideas about these arrangements that could help us stop repeating the same political mistakes again and again.

        • j4k3@lemmy.world (OP) · 3 points · 5 months ago

          All science fiction is a critique of the present and a vision of a future; I believe Asimov said something to that effect. In a way, I am playing with a more human humaniform robot.

          If I’m asking these questions in terms of fiction, is it science fiction or science fantasy?

          I think one of the biggest questions is how to establish trust in the presence of cognitive dissonance, especially when a citizen lacks the awareness to identify and understand their own condition while a governing entity sees it clearly. How does one allow healthy autonomy, while manipulating in the collective’s and the individual’s best interests, yet avoid authoritarianism, dystopianism, and utopianism? If AGI can overcome these hurdles, it becomes possible to solve political troubles in the near and distant future.