• psud@aussie.zone

    LLMs can’t cite. They don’t know what a citation is, other than a collection of text in a specific style.

    Even if you were lucky enough to get real sources out of an LLM, you’d be lucky if the number of references matched the number of items actually referenced.

    If the student is clever enough to remove the trap reference, the fact that the other references won’t be in the university library should be enough to sink the paper.

    • auzy@lemmy.world

      They can. There was that court case where the cases cited were made up by ChatGPT. On investigation it turned out they were all hallucinated, and the lawyer got into deep trouble.

    • uis@lemm.ee

      LLMs can’t cite. They don’t know what a citation is, other than a collection of text in a specific style.

      LLMs can cite. It’s called Retrieval-Augmented Generation (RAG): basically an LLM that can do information retrieval, which is just the academic term for search engines.

      Even if you were lucky enough to get real sources out of an LLM, you’d be lucky if the number of references matched the number of items actually referenced.

      You can just print the retrieval logs into the references. Well, kinda stretching the definition of “just”. Rough sketch of the idea below.
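
      Not claiming this is how any particular product does it; just a minimal sketch of the idea, assuming a toy in-memory corpus, a word-overlap scorer instead of a real retriever, and a placeholder generate() instead of an actual model call. All the names here (CORPUS, score, retrieve, generate) are invented for illustration. The point is that the retrieved documents are known to the pipeline, so they can be logged and emitted as a reference list that actually exists.

      ```python
      # Illustrative-only sketch of RAG with a retrieval log printed as references.
      # Everything here (CORPUS, score, retrieve, generate) is made up for the example;
      # a real pipeline would use a vector store / BM25 and an actual model call.
      from collections import Counter

      CORPUS = {  # hypothetical document store: id -> (title, text)
          "doc1": ("WHO COVID-19 Mortality Update", "global case fatality rate estimates by age group"),
          "doc2": ("CDC Vaccine Impact Report", "vaccination reduced covid19 mortality and severe disease"),
          "doc3": ("Gardening Weekly", "how to prune roses in early spring"),
      }

      def score(query: str, text: str) -> int:
          # Crude word-overlap score; stands in for embeddings or BM25.
          q, t = Counter(query.lower().split()), Counter(text.lower().split())
          return sum(min(q[w], t[w]) for w in q)

      def retrieve(query: str, k: int = 2):
          # Rank the corpus against the query and keep a log of what was retrieved.
          ranked = sorted(CORPUS.items(), key=lambda kv: score(query, kv[1][1]), reverse=True)
          hits = ranked[:k]
          log = [(doc_id, title) for doc_id, (title, _) in hits]
          return hits, log

      def generate(prompt: str) -> str:
          # Placeholder for the LLM call; not a real model.
          return f"<answer conditioned on:\n{prompt}>"

      query = "how deadly is covid19"
      hits, log = retrieve(query)
      context = "\n\n".join(text for _, (_, text) in hits)
      print(generate(f"Context:\n{context}\n\nQuestion: {query}"))

      print("\nReferences (from the retrieval log):")
      for doc_id, title in log:
          print(f"- [{doc_id}] {title}")
      ```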

      • notthebees@reddthat.com

        My question is whether the thing they are citing actually exists and, if it does exist, whether it contains the information it claims.

        • FutileRecipe@lemmy.world

          Depends. In my experience, it usually does exist. There are hallucinations where GPT makes things up or just misinterprets what it read, but it’s super easy to read the GPT output, look at the cited work, skim it for relevance, then tweak the wording and citations to match.

          If you just copy/paste and take GPT’s word for it without the minimal amount of checking, you’re digging your own grave.

        • uis@lemm.ee

          the thing they are citing actually exists

          In the case of RAG, it exists in the searched dataset.

          if it does exist, whether it contains the information it claims.

          Not guaranteed.

    • interdimensionalmeme@lemmy.ml

      Let’s give that a try. This is the classic model, no retrieval:

      Chatgpt how deadly is covid19, two paragraph answer, include citations


      COVID-19 varies in severity, with mortality rates heavily influenced by factors such as age, underlying health conditions, and access to healthcare. The World Health Organization (WHO) estimates a global case fatality rate (CFR) of around 1-3%, though this can be much higher among older adults and immunocompromised individuals (WHO, 2023). In comparison to seasonal influenza, COVID-19 has a higher mortality rate, particularly during the initial outbreaks before widespread vaccination. Long-term complications, often referred to as “long COVID,” further increase the disease’s burden beyond acute infection (NIH, 2022).

      Vaccination efforts, antiviral treatments, and improved healthcare protocols have reduced mortality rates over time, especially in developed countries (CDC, 2023). However, variants of concern, such as Delta and Omicron, have shown differing degrees of transmissibility and immune escape, complicating pandemic management. Even with lowered death rates in certain populations, COVID-19 remains a significant public health threat, particularly in areas with low vaccination coverage or overstretched healthcare systems (Johns Hopkins University, 2023).

      References:

      World Health Organization (WHO). (2023). COVID-19 Mortality Update.
      
      Centers for Disease Control and Prevention (CDC). (2023). Impact of Vaccines on COVID-19 Severity.
      
      National Institutes of Health (NIH). (2022). Long COVID and Post-Acute Sequelae of SARS-CoV-2 Infection.
      
      Johns Hopkins University. (2023). COVID-19 Global Trends and Variants.
      
    • TachyonTele@lemm.ee

      DuckDuckGo’s LLM cites its sources. I’ve seen screenshots of Google’s LLM doing the same.

      You’re basing your entire comment on made-up nonsense that, hilariously, might as well be an LLM result.

      • Swedneck@discuss.tchncs.de

        I’ve tried to get their AI thing to cite, and it does the standard AI thing of generating sensible-looking output that isn’t actually valid: it’ll link to pages that don’t exist but that do lead to valid websites.