Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 17 Posts
  • 780 Comments
Joined 2 years ago
Cake day: October 4th, 2023


  • You mean just the brand, or the manufacturing?

    I mean, branding something is trivial.

    But if you want to manufacture it in Europe, then you have to compete against companies who are going to be manufacturing in China, and manufacturing wages are going to be lower in China, so it’s going to be at a price disadvantage.

    I was just commenting yesterday on a thread where some guy wanted to buy a keyboard out of the EU or Canada instead of a Unicomp keyboard because he was pissed at the US. He was asking about buying a Cherry keyboard. Cherry just shut down their production in Germany after cheaper Chinese competition clobbered ’em.

    If you want to have stuff manufactured in Europe, you’ve got kinda limited options.

    1. Get some kind of patriotic “buy European” thing going, where people are intrinsically willing to pay a premium for things made in Europe.

    2. Ban imports. My guess is that in general, Europe will not do this unless the import carries some negative externality, like a national-security risk (think, say, Russian natural gas), since banning imports is economically-inefficient.

    3. Leverage some other kind of comparative advantage. Like, okay. Maybe one can’t have competitive unskilled assembly-line workers. But maybe if there’s really amazing, world-leading industrial automation, so that there’s virtually no marginal human-labor cost involved, and one scales production way up, it’s possible to eliminate enough of the assembly-line labor costs to be competitive.


  • I’ve never used the software package in question.

    If you already own the software, and if the hardware it uses to talk to the microcontroller is a serial port or a USB-attached serial port, then you can most-likely just run it under WINE. WINE isn’t a VM but a Windows compatibility layer, so you don’t need to run a copy of Windows in a VM and all that. It’d be my first shot: you get to use the program like any other Linux program, and you don’t blow extra memory or overhead on a Windows VM.

    So, say the program in question has an installer, picbasic-installer.exe.

    So you’re going to want to install WINE. I don’t use Arch, so I’ll leave that up to you, but I believe that the Arch package manager is pacman. They may have some graphical frontend that you prefer to use.
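    I don’t run Arch myself, so double-check this, but I believe the incantation would be something along these lines (and WINE may live in the multilib repository, which has to be enabled in /etc/pacman.conf):

    $ sudo pacman -S wine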

    Then go ahead and, in a virtual terminal program, invoke picbasic-installer.exe — assuming that that’s what the installer is called — under WINE:

    $ wine picbasic-installer.exe
    

    That’ll run the installer.

    Now, my guess is that that much won’t have problems: WINE will run the thing, and it’ll probably let you compile BASIC programs.

    You can go ahead and fire up your PICBASIC PRO program. I don’t know how you launch Windows programs in your Arch environment; in general, WINE installers will drop a .desktop file under ~/.local/share/applications, and that can be started the way any other application can.

    I use a launcher program, tofi, to start programs like that under sway via tofi-drun, but you probably have a completely different environment set up. My guess is that your desktop environment on Arch has some kind of application menu that will pick up WINE programs with a .desktop file, or maybe a program that shows a searchable list of applications and can launch from it. KDE Plasma, GNOME, Cinnamon, etc. will probably all have their own routes, but I don’t use those, so I can’t tell you what they do. I’ll leave that up to you.
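    Worst case, you can always launch it straight from a terminal by pointing wine at wherever the installer dropped the executable inside the WINE prefix. The path here is purely a made-up example; yours will be whatever the installer actually created under ~/.wine/drive_c:

    $ wine "$HOME/.wine/drive_c/Program Files (x86)/PICBASIC PRO/picbasic.exe"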

    What you’re likely to run into problems with is the serial port. If the PICBASIC PRO program wants to talk to that microcontroller programmer via a serial port (which on Windows would probably be COM1 or COM2 or whatever), it’s going to need to talk to /dev/ttyS0 or /dev/ttyS1 or whatever on Linux, or, if it’s USB-attached, /dev/ttyUSB0, /dev/ttyUSB1, etc. Ordinary users probably don’t have permission to write directly to those by default.
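    As an aside, if you’re not sure which device node the programmer shows up as, plug it in and look. Assuming it’s USB-attached, either of these should name it (the kernel log right after plugging it in will mention the new tty):

    $ ls -l /dev/ttyUSB* /dev/ttyACM*
    $ sudo dmesg | tail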

    There are a couple of ways to grant permission, but one of the most-straightforward is to add your user to a group that already has it.

    The basic Unix file permission system has each file — including device files, like /dev/ttyS0 — owned by one user and one group.

    On my Debian trixie system:

    $ ls -l /dev/ttyS0
    crw-rw---- 1 root dialout 4, 64 Jan 15 20:46 /dev/ttyS0
    $
    

    So that serial port device file is owned by the user root, which has read and write privileges (the first “rw”), and by the group dialout, which also has read and write privileges (the second “rw”). Any user that belongs to that group will be able to write to the serial ports.

    On my system, my user doesn’t belong to the “dialout” group:

    $ groups
    tal cdrom floppy sudo audio dip video plugdev users render netdev bluetooth lpadmin scanner docker libvirt ollama systemd-journal
    $
    

    So I’m going to want to add my user to that group:

    $ sudo usermod -aG dialout tal
    $
    

    Group membership gets assigned to your processes when you log in (that is, usermod just changes which groups your login process, and therefore all of its child processes, get). Technically, you don’t have to log out for this to take effect: you could run sg dialout at this point and then, from the shell that it starts, run wine and see if it works. But I’d probably just log out and back in again, to keep things simple. After you do that, you should see that you’re in the “dialout” group:

    $ groups
    night_petal <list of groups> dialout
    $
    
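    If you do want to test right away without logging out, the sg route I mentioned looks something like this, and “dialout” should then show up when you run groups from the shell that sg starts:

    $ sg dialout
    $ groups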

    After that, you should be able to use the program and write code to the microcontroller.



  • Unless you have some really serious hardware, 24 billion parameters is probably the maximum that would be practical for self-hosting on a reasonable hobbyist set-up.

    Eh…I don’t know if you’d call it “really serious hardware”, but when I picked up my 128GB Framework Desktop, it was $2k (without storage), and that box is often described as being aimed at the hobbyist AI market. That’s pricier than most video cards, but an AMD Radeon RX 7900 XTX GPU was north of $1k, an Nvidia RTX 4090 was about $2k, and it looks like the Nvidia RTX 5090 is presently something over $3k (and rising) on eBay, well over MSRP. None of those GPUs are dedicated hardware aimed at AI compute; they’re just high-end gaming cards that people have used to do AI stuff on.

    I think that the largest LLM I’ve run on the Framework Desktop was a 106 billion parameter GLM model at Q4_K_M quantization. It was certainly usable, and I wasn’t trying to squeeze as large a model as possible on the thing. I’m sure that one could run substantially-larger models.

    EDIT: Also, some of the newer LLMs are MoE-based, and for those, it’s not necessarily unreasonable to offload expert layers to main memory. If a particular expert isn’t being used, it doesn’t need to live in VRAM. That relaxes some of the hardware requirements, from needing a ton of VRAM to just needing a fair bit of VRAM plus a ton of main memory.
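    To make that concrete: with llama.cpp (which is where quant names like Q4_K_M come from), the usual trick on a machine with a discrete GPU is to put all the layers on the GPU and then override the expert tensors back onto the CPU buffer. Roughly like this (the model filename is made up, and the flag spellings have shifted between llama.cpp versions, so treat it as a sketch):

    $ llama-server -m glm-4.5-air-Q4_K_M.gguf -ngl 99 -ot "exps=CPU"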


  • Are Motorola ok?

    Depends on what you value in a phone. Like, I like a vanilla OS, a lot of memory, large battery, and a SIM slot. I don’t care much about the camera quality and don’t care at all about size and weight (in fact, if someone made a tablet-sized phone, I’d probably switch to that). That’s almost certainly not the mix that some other people want.

    There’s some phone comparison website I was using a while back that has a big database of phones and lets you compare and search based on specification.

    goes looking

    This one:

    https://www.phonearena.com/phones



  • That’s why they have the “Copilot+ PC” hardware requirement: they’re using an NPU on the local machine.

    searches

    https://learn.microsoft.com/en-us/windows/ai/npu-devices/

    Copilot+ PCs are a new class of Windows 11 hardware powered by a high-performance Neural Processing Unit (NPU) — a specialized computer chip for AI-intensive processes like real-time translations and image generation—that can perform more than 40 trillion operations per second (TOPS).

    It’s not…terribly beefy. Like, I have a Framework Desktop with an APU and 128GB of memory that schlorps down 120W or something, and it substantially outdoes what you’re going to do on a laptop. And that in turn is weaker computationally than something like the big Nvidia hardware going into datacenters.

    But it is doing local computation.


  • I’m kind of more-sympathetic to Microsoft than to some of the other companies involved.

    Microsoft is trying to leverage the Windows platform that they control to push local LLM use. I’m not at all sure that there’s actually enough memory out there to do that, or that it’s cost-effective to put a ton of memory and compute capacity in everyone’s home rather than time-sharing hardware in datacenters. Nor am I sold that laptops — which many “Copilot PCs” are — are a fantastic place to be doing a lot of heavyweight parallel compute.

    But…from a privacy standpoint, I kind of would like local LLMs to at least be available, even if they aren’t as affordable as cloud-based stuff. And Microsoft is at least supporting that route. A lot of companies are going to be oriented towards just doing AI stuff in the cloud.


  • You only need one piece of (timeless) advice regarding what to look for, really: if it looks too good to be true, it almost certainly is. Caveat emptor.

    I mean…normally, yes, but because the situation has been changing so radically in such a short period of time, the market hasn’t stabilized yet, and it probably is possible to get some bonkers deals in various niches.

    Like, a month and a half back, in early December, when prices had only been going up like crazy for a little while, I was posting links to some tiny retailers I could find on Google Shopping that still had RAM in stock at pre-price-increase rates. IIRC the University of Virginia bookstore was one, as they didn’t check that purchasers were actually students. I warned that they’d probably be cleaned out as soon as scalpers got to them, and that if someone wanted memory, they should probably get it ASAP. Some days prior to that, there was a small PC parts store in Hawaii that had some (though it was out of stock by the time I went looking again and mentioned the bookstore).

    That’s not to disagree with the point that @UnderpantsWeevil@lemmy.world is making, that this was awfully sketchy as a source, or your point that scavenging components off even a non-scam piece of secondhand non-functional hardware is risky. But in times of rapid change, it’s not impossible to find deals. In fact, it’s various parties doing so that cause prices to stabilize — anyone selling memory for way below market price is going to have scalpers grab it.


  • I don’t think that memory manufacturers are in some plot to promote SaaS. It’s just that they can make a ton of money off the demand right now for AI buildout, and they’re trying to make as much money as they can in the limited window that they have. All kinds of industries are going to be collateral damage for a while. Doesn’t require a more complicated explanation.

    Michael Crichton had a way of putting “it’s not about you” in Sphere that I remember liking.

    searches

    “I’m afraid that’s true,” Norman said. “The sphere was built to test whatever intelligent life might pick it up, and we simply failed that test.”

    “Is that what you think the sphere was made for?” Harry said. “I don’t.”

    “Then what?” Norman said.

    “Well,” Harry said, “look at it this way: Suppose you were an intelligent bacterium floating in space, and you came upon one of our communication satellites, in orbit around the Earth. You would think, What a strange, alien object this is, let’s explore it. Suppose you opened it up and crawled inside. You would find it very interesting in there, with lots of huge things to puzzle over. But eventually you might climb into one of the fuel cells, and the hydrogen would kill you. And your last thought would be: This alien device was obviously made to test bacterial intelligence and to kill us if we make a false step.

    “Now, that would be correct from the standpoint of the dying bacterium. But that wouldn’t be correct at all from the standpoint of the beings who made the satellite. From our point of view, the communications satellite has nothing to do with intelligent bacteria. We don’t even know that there are intelligent bacteria out there. We’re just trying to communicate, and we’ve made what we consider a quite ordinary device to do it.”

    Like, two years back, there was a glut of memory in the market. Samsung was losing a lot of money. They weren’t losing money back then because they were trying to promote personal computer ownership any more than they’re trying to deter personal computer ownership in 2026. It’s just that demand can gyrate more-rapidly than production capacity can adjust.


  • I’m not really a hardware person, but purely in terms of logic gates, making a memory circuit isn’t going to be hard. I mean, a lot of chips contain internal memory. I’m sure that anyone who can fabricate a chip can fabricate a design that includes some amount of memory.

    For PC use, there’s also going to be some interface hardware. Dunno how much sophistication is present there.

    I’m assuming that the catch is that it’s not trivial to go out and make something competitive in price, density, and speed with what the PC memory manufacturers are making. Like, if you want a microcontroller with 32 kB of onboard memory, I don’t think that’s going to be a problem. But that doesn’t really replace the kind of stuff these guys are making.

    EDIT: The other big thing to keep in mind is that this is a short-term problem, even if it’s a big problem. I mean, the problem isn’t the supply of memory over the long term. The problem is the supply of memory over the next couple of years. You can’t just build a factory and hire a workforce and get production going the moment that someone decides that they want several times more memory than the world has been producing to date.

    So what’s interesting is really going to be solutions that can produce memory in the near term. Like, I have no doubt that given years of time, someone could set up a new memory manufacturer and facilities. But to get (scaled-up) production in a year, say? Fewer options there.