• 1 Post
  • 772 Comments
Joined 4 years ago
Cake day: January 17th, 2022


  • Sad but unsurprising.

    I've read quite a lot on the topic, including “Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass” (2019), and watched numerous documentaries, e.g. “Invisibles - Les travailleurs du clic” (2020).

    What I find interesting here is that the tasks seem to go beyond dataset annotation. In a way it is still annotation (you take data in, e.g. a photo, and circle part of it to label it, e.g. “cat”), but here it seems to be second-order: identifying the blind spots in how the dataset itself is handled. That still doesn't mean anything produced is more valuable, or that the expected outcome is feasible with larger datasets and more compute alone, yet it may show a change in the quality of the tasks to be done.

  • Brand-new example: “Skills” by Anthropic https://www.anthropic.com/news/skills. Even though the audience here is technical, it is still a marketing term. Why? Because the entire phrasing implies agency. There is no “one” acquiring new skills. It's as if, instead of adding bash scripts to my ~/bin directory and saying “the first script uses a regex to start the appropriate script,” I named my setup “Theodore” and said I was “teaching” it new “abilities.” It would be literally the same thing: functionally equivalent, with an identical implementation… but users, especially non-technical users, would assume there is more going on than branching options. They would also assume errors are just “it” in the process of “learning.”

    It’s really a brilliant marketing trick, but it’s nothing more.
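    The analogy above can be sketched as plain code. This is a hypothetical illustration (the names, patterns, and handlers are made up, not Anthropic's actual mechanism): a dispatcher that matches a request against a regex table and runs the corresponding handler. Calling it “Theodore” and the table entries “skills” would change nothing functionally.

```python
import re

# Hypothetical "skill" table: each entry is just a (regex, handler) pair.
# "Teaching a new skill" amounts to appending another pair to this list.
SKILLS = [
    (re.compile(r"summar", re.I), lambda text: "summary of: " + text),
    (re.compile(r"translat", re.I), lambda text: "translation of: " + text),
]

def dispatch(request: str) -> str:
    """Run the first handler whose regex matches the request."""
    for pattern, handler in SKILLS:
        if pattern.search(request):
            return handler(request)
    return "no matching skill"

print(dispatch("Please summarize this page"))
# -> summary of: Please summarize this page
```

    Whether this is described as “branching options” or as an agent “learning abilities” is purely a choice of framing; the code path is identical.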

  • The word “hallucination” is itself a marketing term. The fact that it is frequently used in the technical literature doesn't make it unproblematic. It is used because it highlights a real problem (namely that some LLM output is not factually correct), but the name itself is wrong. Hallucination implies there is someone, perceiving and with a world model, who, typically via heuristics (efficient interfaces, as Donald Hoffman suggests), perceives incorrectly, leading to bad decisions about the problem at hand.

    So… sure, “it” (trying not to use the term) is structural, but only because LLMs have no notion of veracity or truth (or of anything else, to be clear). They have no simulation against which to verify whether the output they propose (the tokens out, the sentence the user gets) is correct; it is merely highly probable given their training data.
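    The “highly probable, not verified” point can be shown with a toy example. This is a deliberately tiny sketch with a made-up corpus, not how an actual LLM works: it picks the most frequent continuation seen after a context, and frequency alone decides the output, with no check of whether the resulting sentence is true.

```python
from collections import Counter

# Hypothetical mini-corpus of (context, next-word) observations.
# The false continuation happens to be the most frequent one.
corpus = [
    ("the moon is made of", "cheese"),
    ("the moon is made of", "cheese"),
    ("the moon is made of", "rock"),
]

def most_probable_next(context: str) -> str:
    """Return the most frequent continuation of `context` in the corpus."""
    counts = Counter(nxt for ctx, nxt in corpus if ctx == context)
    return counts.most_common(1)[0][0]

print(most_probable_next("the moon is made of"))
# -> cheese  (most probable given the data, not true)
```

    Nothing in the selection step consults reality; it only consults the statistics of the data, which is the structural point above.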


  • Well, you already have a desktop, so the added value is that once you get the content off the disc, you don't need it anymore. You can store the discs if you want, but you no longer need to handle the physical objects, neither the DVDs nor the player.

    I mean, if you particularly enjoy very specific things, e.g. the bonus features, or the physical feel of the media (why not; harder to justify than with, say, vinyl, but still fine), you can keep doing that, but otherwise the physical media simply isn't needed anymore.