• 0 Posts
  • 376 Comments
Joined 1 year ago
Cake day: December 6th, 2024

  • Almost a decade in Investment Banking, and I started reading a lot about Economics (from books, not random websites) after the 2008 Crash to try and understand what the fuck had happened and what was being done about it.

    That said, take what I wrote with a large pinch of salt, especially the first part, which is an idea I have of how that part of things works (based on Mathematics and Finance industry knowledge), not a proper peer-reviewed theory from Economics.

    I’ve pieced together a lot of knowledge I read about with understanding I gained from the inside of the Finance Industry (such as their way of valuing future money as well as things like fair value and fundamentals when it comes to markets), but the assembled thing as a whole is my own theory.

    That said, my money is where my mouth is: I’ve been heavily invested in Gold (known as the ultimate safe asset) since 2012, and that has so far returned 500% on the original investment, so I seem to be at least partially right about the direction things are going (some kind of overall devaluation of traditional strong currencies, and near-stagflation getting worse as the inherent dysfunctions of the current value allocation system make it harder and harder for it to keep going as is), though that doesn’t mean I’m right about the Why.

    PS: Recommended books to read - “This Time is Different” for a historical perspective on economic crashes, and “Freakonomics” for a look at human decision making in an Economics context from Behavioural Economics, the only part of Economics that actually conducts experiments (real human behaviour turns out to be very different from the homo economicus model that underpins Free Market Economics theories).



  • Just remember that every year the World’s Economy has to grow enough to cover the interest payments on all outstanding debt (or money itself has to inflate away fast enough to offset it, which is unlikely, since interest rates are naturally set above inflation - otherwise Financial Institutions would be losing money).

    There are two ways to offset this:

    • Reduce the amount of outstanding debt.
    • Lower interest rates (which is what was done after the 2008 Crash, leading to the slowest recovery from a Crash in at least a century) so that for the same amount of debt there is less interest to pay.
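    The arithmetic above can be sketched in a few lines. All the numbers here are hypothetical, purely to illustrate the mechanism, not actual figures for any real economy:

```python
# Back-of-the-envelope sketch of the "grow enough to cover the interest"
# argument. All figures are made up for illustration only.

debt_to_gdp = 2.5        # total outstanding debt as a multiple of GDP
interest_rate = 0.04     # average nominal interest rate on that debt
inflation = 0.02         # annual inflation rate

# Interest due each year, expressed as a fraction of GDP:
interest_burden = debt_to_gdp * interest_rate

# Inflation erodes the real value of the debt, offsetting part of the
# burden; the rest has to come from real growth just to stand still:
required_real_growth = interest_burden - inflation * debt_to_gdp

print(f"Interest burden: {interest_burden:.1%} of GDP per year")
print(f"Real growth needed just to service debt: {required_real_growth:.1%}")
```

    With these toy numbers, servicing the debt alone eats 10% of GDP a year, of which inflation covers half, leaving 5% real growth required before the economy produces any net gain - and note the offset only works while inflation is high relative to rates, which, as said above, it normally isn’t.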

    Overall debt is increasing as per the article.

    Interest rates have been below the historical average ever since, because what was done after 2008, which was supposed to be temporary, was never fully wound back, so central banks have a lot less room to do something about it.

    Actually solving the underlying problems behind the 2008 Crash was pushed into the Future with some interest rate engineering, and it looks a lot like The Future Is Today. This time around, rather than just an over-indebtedness plus Finance overextension problem, we seem to have over-indebtedness, a massive Tech bubble (like in 2000) AND asset price bubbles in all manner of asset classes, from economically peripheral things like crypto to core things like housing.

    I’ve been expecting a massive crash ever since I saw what passed for a “solution” back in 2009-12, but shit is turning out to be way worse than I expected due to all the additional resource misallocation and mispricing in the Economy.


  • “Computer says” is a pretty standard excuse for doing fucked up shit as it adds a complex form of indirection and obfuscation between the will of a human and the actual actions that result from that will.

    It doesn’t work as an excuse with the people who actually make the software that makes the computer “say” something (for them, what is used is far less complex, so they know what’s behind it and that the software is just an agent of somebody’s will), but it seems to work on non-expert technology fans, and even more so on non-techies.

    With AI, the people using the computer as an excuse have doubled down on this, because in this case the software wasn’t even explicitly crafted to do what it does: it was trained (though in practice you can sort of guide it in some direction or other by choosing what you train it with), further obscuring the link between the output of a computer system and the will of the human who decided what it does (or at least decided which of the things it ended up doing after training are acceptable and which require changes to the training).

    Considering that just about the entirety of the Justice System, Legislative System and Regulatory System are technically ignorant, using “computer says” as an excuse often results in profit-enhancing outcomes, incentivising “greed above all” people to use it to confuse, block or manipulate such systems.






  • Even the LLM part might be considered Plagiarism.

    Basically, unlike humans it cannot assemble an output based on logical principles (i.e. building a logical model of the flows in a piece of code and then translating it to code); it can only produce text based on an N-dimensional space of probabilities derived from the works of others it has “read” (i.e. was fed during training).

    That text assembling could be the machine equivalent of Inspiration (such as how most programmers will include elements they’ve seen from others in their code) but it could also be Plagiarism.

    Ultimately it boils down to where the boundary between Inspiration and Plagiarism stands.

    As I see it, if for specific tasks there is overwhelming dominance of trained weights from a handful of works (which, one would expect, would probably be the case for a C-compiler coded in Rust), then that’s a lot more towards the Plagiarism side than the Inspiration side.

    Granted, it’s not the verbatim copying of an entire codebase that would legally be deemed Plagiarism, but if it’s almost entirely a montage made up of pieces from a handful of codebases, could it not be considered a variant of Plagiarism that is incredibly hard for humans to pull off but not so for an automated system?

    Note that obviously the LLM has no “intention to copy”, since it has no will or cognition at all. What I’m saying is that the people who made it have intentionally made an automated system that copies elements of existing works. Normally it assembles its results from very small textual elements (just as a person who has learned how letters and words work can create a unique work from letters and words), but its makers are aware that in some situations the automated system they created can produce output based on so few sources that, even though it’s assembling the output token by token, it’s pretty much copying whole blocks from those sources, the same as a human manually copying text from one document to another would.

    In summary, IMHO LLMs don’t always plagiarize, but they can sometimes do so, when the number of sources that ended up defining the volume of the N-dimensional probability space the LLM is following for that output is very low.


  • It’s even simpler than that: using an LLM to write a C compiler is the same as downloading an existing open source implementation of a C compiler from the Internet, but with extra steps, as the LLM was actually fed that code and is just re-assembling it, but with extra bugs - plagiarism hidden behind an automated text-parrot interface.

    A human can beat the LLM at that by simply finding and downloading an implementation of that more-than-solved problem from the Internet, which at worst will take maybe 1h.

    The LLM can “solve” simple and well defined problems because it’s basically plagiarizing existing code that solves those problems.




  • Also, later on, when one is choosing a kind of product one doesn’t usually buy, one might have forgotten one’s dislike for a brand due to its excessive use of advertising, and yet one’s subconscious will still produce a feeling of familiarity on seeing that brand’s name on a product, which makes it more likely that one will choose it over other options that don’t feel as familiar.

    Most advertising nowadays is meant to affect subconscious impulses, which do their thing with no cognitive effort, whilst the position the OP holds (and which I myself try to hold) is conscious and requires cognitive effort to maintain.


  • The problem is that LLMs don’t generate “an answer” as a whole: they just generate the next text element, token by token (tokens are generally word-sized, but not always), given the context of all the text elements so far (the whole conversation), and the confidence level is per-token.

    Further, the confidence level is not about logical correctness, it’s about “how likely is this token to appear in this context”.

    So even if you try using token confidence you still end up stuck, due to the underlying problem that the LLM’s architecture is that of a “realistic text generator”: that confidence level is all about “what text comes next” and not at all about the logical elements conveyed via text, such as questions and answers.
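    A minimal sketch of what “per-token confidence” means. The tokens and probabilities below are invented toy numbers, not output from any real model:

```python
import math

# Toy illustration: an LLM emits one token at a time, each with its own
# probability. A "confidence" for the whole answer is typically the
# product of per-token probabilities (accumulated as summed log-probs).
token_probs = {
    "The": 0.92, "capital": 0.85, "of": 0.97,
    "Australia": 0.60, "is": 0.95, "Sydney": 0.41,  # fluent, but wrong
}

log_prob = sum(math.log(p) for p in token_probs.values())
seq_prob = math.exp(log_prob)

print(f"Sequence probability: {seq_prob:.3f}")
# Every number here measures "how likely is this text in this context",
# not "is this statement true" - a confidently-worded wrong answer can
# still score well.
```

    Note how nothing in the computation ever touches truth or logic: aggregating per-token probabilities only ever yields a fluency score for the text as text.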


  • If you want a low power, cheap x86 mini-PC to run a Linux box for low demand uses (personal TV Box, PC for a family member that only ever does light web browsing and e-mail) they do have some nice processors.

    I mean, you can also use an ARM SBC for some of those things, but it’s handy to have an x86 processor because of easier availability of binaries, plus even the low power ones are actually more powerful than the ARM stuff.

    That’s about the only thing, really.


  • Aceticon@lemmy.dbzer0.comtoData is Beautiful@mander.xyzdeadly

    In all fairness, the unusual sells a lot better than the usual as news, so it makes sense that the news media goes for the former rather than the latter - any newspaper that reports based on prevalence and ignores the shock effect of an event simply doesn’t get read and goes bankrupt.

    So in this specifically, as I see it the problem isn’t that the news media chooses to report the unusual, but that very few people have been taught to beware of the natural fallacy of confusing exposure (how much something is talked about) with actual impact. You can see it very visibly in how people react to governments implementing authoritarian anti-Terrorism measures: the people who confuse exposure with impact support very authoritarian measures to supposedly combat Terrorism, whilst the people who instead actually check its impact tend to be against such measures, because they trade away a lot of everybody’s Freedom to supposedly combat something which in most countries has a lower death rate than slipping in a bathtub.

    As I see it, where the press fails to uphold Journalistic Integrity is in refraining from reporting on certain unusuals, for example political corruption and certain actions of the ultra-wealthy (whilst choosing to report on other actions of theirs - see celebrity culture).

    IMHO the dynamic we see in this graphic, which is really about impactful vs newsworthy, is pretty natural; what’s not natural is the selectivity in reporting of different but equally newsworthy events.


  • If every single one of the estimated 10,000 deaths per year due to the pollution from diesel vehicles in Europe was individually reported in the Press, we would have far stronger legislation against that kind of pollution, and the heads of the companies involved in the Diesel Scandal would be rotting in jail rather than some scapegoat engineer.

    (I’m using a European example because that’s the death data I remember, but I bet it’s the same or worse in the US.)


  • Aceticon@lemmy.dbzer0.comtoData is Beautiful@mander.xyzdeadly

    At first sight it seems to me that coverage being positively correlated with how unusual a death is, and with the number of people dying in a single event, would explain that graph.

    I bet if we dug into the details of the Accidents class we would see a pattern where uncommon kinds of accidents and/or those with a large number of deaths (“man killed by falling crane”, “plane crash”) get lots of coverage, whilst common kinds of accidents with few victims per event (“a car crash involving a single car”) get a lot less coverage.