• webp@mander.xyz · 7 points · 4 hours ago

    Why do they need AI at all? Wikipedia had existed long before it and was doing fine.

    • AmbitiousProcess (they/them)@piefed.social · 19 points · 4 hours ago

      You could make that argument about any tool Wikipedia editors use. Why should they need spellcheck? They were typing words just fine before.

      …except it just makes it easier to spot errors or get little suggestions on how you could reword something, and thus makes the whole process a little smoother.

      It’s not strictly necessary, but this could definitely be helpful to people for translation and proofreading. Doesn’t have to be something people are wholly reliant on to still be beneficial to their ability to edit Wikipedia.

    • fuckwit_mcbumcrumble@lemmy.dbzer0.com · 8 points · 4 hours ago

      Why should we use (insert tool) when we did just fine before?

      Because when used correctly it can be great for helping you be more productive and for finding errors or making improvements. Grammar is one area where AI does a surprisingly good job. Would you have gotten mad if they used Grammarly 5+ years ago? Having it rewrite an entire article is gonna be a bad idea, but asking it to rephrase a sentence, or check your phrasing for potential issues, is a much safer thing. Not everyone who speaks Spanish uses it the same way. Some words are innocuous in some regions but offensive in others.

      • webp@mander.xyz · 1 point · 3 hours ago

        Call me mad, call me crazy. AI shouldn’t be altering databases of knowledge, especially when it is so inconsistent. If there’s a question about whether certain words are appropriate, why can’t you ask another human being? They have forums for a reason, or someone else comes along and fixes it. Or look at a dictionary. The amount of energy spent for dubious information, holy. It’s not like there is a shortage of human beings on earth.

        • fuckwit_mcbumcrumble@lemmy.dbzer0.com · 1 point · 6 minutes ago

          AI isn’t altering databases of knowledge. AI is telling the writer there’s a better way to word this, and the writer has to explicitly change their wording.

          You only know to look at a dictionary for alternative wordings if you know there’s a problem. How do you know there’s a problem?

          If you ask someone else, what if that same someone uses your regional dialect and not the one that has problems? Your average writer can’t review every single word in the dictionary for every single article they edit. But AI can, and that’s something it’s actually good at. You may only know 5 Spanish speakers, but AI knows everything it was trained on.

        • Qwel@sopuli.xyz · 3 points · edited · 50 minutes ago

          https://en.wikipedia.org/wiki/Wikipedia:Writing_articles_with_large_language_models

          https://en.wikipedia.org/wiki/Wikipedia:LLM-assisted_translation

          The two related “policies” are rather short, you should read them if you haven’t.

          AI shouldn’t be altering databases of knowledge, especially when it is so inconsistent

          The policy only allows usage as an auto-translator (a task at which LLMs are no worse than the old-style auto-translators that were always allowed) and as a spellcheck/grammar check (where they are also no worse than other allowed options).

          None of those tools were previously seen as altering Wikipedia by themselves. The goal is that LLMs should be used and regarded the same way.

          To be clear, there have always been articles-for-creation submissions made from clearly Google-translated text, and they have always been dismissed as slop. To get an auto-translated article accepted, you need to clean it up until all the information is correct and the grammar is good enough. This is a rather standard workflow for translations. The same thing should apply to LLMs.

          The new issue here is that LLMs can “organically” change information when asked to translate. When a classic auto-translate changes the information, it often (not always) leaves a noticeable mess in the grammar. LLMs will insert their errors much more cleanly. Both texts acknowledge this, and, well, the texts will change if that becomes a recurring issue.