Using a local AI model (running on your own GPU), you can:
Drag files onto the “AI bar” to bring them into the context of a prompt.
Tell it to perform actions, like “merge these two PDF files.”
Use it to search your filesystem: “find pictures of spiders.”
Ask it to find system settings, or tell it to change them: “turn on dark mode.”
It supports voice input, and I believe visual input as well (e.g. with a webcam).
Best of all: It doesn’t send any of that to some data center in the cloud! You can configure it to do that if you want, but you can just as easily use a local model like, say, qwen3.5.
Note: It’s not realistic to expect to use local models if your GPU has less than 16GB of VRAM. Sure, some 8-billion-parameter model will run in, say, 8GB, but you’re not going to be satisfied with the results most of the time 🤷
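To see where those numbers come from, here’s a rough back-of-the-envelope sketch (my own estimate, not anything official): weight memory is roughly parameter count times bytes per parameter, and the actual footprint is higher once you add the KV cache and runtime overhead.

```python
def estimate_vram_gb(params_billion: float, bytes_per_param: float) -> float:
    """Rough GB of VRAM needed just for model weights.

    1 billion params * 1 byte/param ~= 1 GB. Ignores KV cache and
    runtime overhead, so treat the result as a lower bound.
    """
    return params_billion * bytes_per_param

# An 8B model at fp16 (2 bytes/param) wants ~16 GB for weights alone...
print(estimate_vram_gb(8, 2))    # 16.0
# ...which is why 8GB cards push you to 4-bit quantization (~0.5 bytes/param):
print(estimate_vram_gb(8, 0.5))  # 4.0
```

That quantization squeeze is exactly the quality trade-off the note is about: the model fits, but you feel the difference.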
The AI features are actually pretty cool!