Hi all,
Perhaps a stupid question. Some time ago I received a Raspberry Pi Zero W as a gift, but as I did not have any use for it, I passed it on to somebody else in our electronics group. Now, that person has had a 30+ year career as a self-taught programmer (starting out with BASIC on DOS machines), so he showed off some of his old BASIC applications in DOSBox on the Pi.
So far so good, but he had an interesting question: "Years ago, I wrote a library in BASIC for screen/window applications in DOS (you know, pop-up text windows and so on). How do I do that on Linux (in C)?"
As I myself only do ‘backend’ coding (so no UI), I have to admit I did not have any answer to that.
So, the question: for somebody who has mostly coded in BASIC (first on DOS and later Visual Basic) and has now switched to C and Python, what is the best / easiest tool to write a basic UI application with windowing functions on Linux/Unix? I know there exist things like Qt and ncurses, but I never used them, so I have no idea.
Any advice?
Kr.
wrote a library in BASIC for screen / window applications in DOS. (you know, pop-up text-windows and so on). How do I do that on linux (in C)?
(…)
I know there exist things like Qt and ncurses
So is it a graphical interface we are after, or text-based?
For text, I agree with the others: ncurses.
For graphical:
- pyGTK: basically everything you need; some learning curve, as it's big and versatile. But to be honest, when trying to achieve something, I'd suggest starting from the GTK reference: to me it somehow conveys the logic better than the PyGTK reference.
- Kivy: haven't used it, but might be fun to use.
- wxWidgets: very cross-platform. Not only can you use it to write a UI that needs just minor fixes to share the same code between Windows and Linux, you can also tell it whether the toolkit used under the hood on Linux should be Qt or GTK.
- Tk: old and simple (fancier things need some gymnastics), but easy to use and supported in Python out of the box (you don't even need to install anything); see the sketch after this list.
- Qt: I'm putting it here just for fairness. I don't like it, don't like its signal-slot design, and I think it hogs too much resources. But the last time I used it was ~10 years ago, and in the end it does in fact work.
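To show what "out of the box" means for Tk, here is a minimal sketch using Python's built-in tkinter module; the window title and labels are just placeholders.

```python
import tkinter as tk

# A bare window with a label and a button, no extra packages needed
root = tk.Tk()
root.title("Hello Tk")

tk.Label(root, text="A minimal Tk window").pack(padx=20, pady=10)
tk.Button(root, text="Quit", command=root.destroy).pack(pady=10)

root.mainloop()
```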
Wow! So many answers in such a short time. Thanks all! 👍 (I will not spam the channel by sending a thank-you to everyone, but this is really greatly appreciated.)
Concerning ncurses: I did hear of it but never looked at it myself. What is not completely clear to me: I know you can use it for ‘low-level’ things, but does it also include ‘high-level’ concepts like windows, input fields and so on?
The blog mentioned in one of the other posts only shows low-level things.
but does it also include ‘high-level’ concepts like windows, input fields and so on
Yes, it allows you to build full-on TUIs.
does it also include ‘high-level’ concepts like windows, input fields and so on?
AFAIK MC uses ncurses for its UI. So while I don’t know if it has the concept of a non-modal dialog, for example, it certainly has dialogs, fields, radio buttons, boxes, etc.
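For what it's worth, even the low-level window primitives are already enough for pop-ups. Here is a minimal sketch using Python's standard curses module (the C ncurses API is very similar, and the C distribution also ships companion libraries like panel, menu and form for higher-level widgets); the sizes and strings are arbitrary.

```python
import curses

def main(stdscr):
    curses.curs_set(0)      # hide the cursor
    stdscr.clear()
    stdscr.refresh()

    # A bordered "pop-up" window, roughly centred on the screen
    height, width = 7, 40
    max_y, max_x = stdscr.getmaxyx()
    win = curses.newwin(height, width, (max_y - height) // 2, (max_x - width) // 2)
    win.box()
    win.addstr(2, 2, "Pop-up text window in curses")
    win.addstr(4, 2, "Press any key to close")
    win.refresh()
    win.getch()             # wait for a key press

curses.wrapper(main)        # sets up and tears down the terminal safely
```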
- pyGTK
If you want a terminal GUI, then ncurses may be suitable, and you can also use it from C++. Qt and GTK have C++ bindings.
Both GTK and Qt have good Python bindings.
GTK example: https://github.com/Taiko2k/GTK4PythonTutorial
There’s also PyQt, but it looks more complicated and I couldn’t find as nice and straightforward an example as for GTK. I did find this, though: https://realpython.com/qt-designer-python/
If you want to go to C, GTK works about the same way. If you want C++, Qt is pretty good there.
Otherwise you can go SDL and just put whatever pixels you want on the screen on your own.
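As a rough idea of what the Python side looks like, here is a minimal sketch assuming GTK 4 with PyGObject installed; the application id and labels are made up.

```python
import gi
gi.require_version("Gtk", "4.0")
from gi.repository import Gtk

def on_activate(app):
    # One top-level window containing a single button
    win = Gtk.ApplicationWindow(application=app, title="Hello GTK")
    button = Gtk.Button(label="Click me")
    button.connect("clicked", lambda _btn: print("clicked"))
    win.set_child(button)
    win.present()

app = Gtk.Application(application_id="org.example.hello")  # arbitrary id
app.connect("activate", on_activate)
app.run(None)
```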
I’d go for ncurses.
I’d recommend egui, though you have to use Rust for it (learning it should be easy, considering you have a background in C).
You don’t.
Use existing frameworks like GTK and Qt.
For me the quickest and most basic way would be Python and tkinter or PyQt. Failing that, push it to a web app with something like Flask or React.
raygui, FLTK, GTK. Qt if you’re working in C++.
I class myself as having similar experience to your friend, having used Power Basic and Turbo Pascal mainly under DOS. I was able to use tkinter to produce some simple GUI front-ends that show dialogue boxes, process data and feed it to GnuPlot.
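Not exactly my setup, but as an illustration of that kind of front-end, here is a minimal sketch: a tkinter dialog asks for one parameter and hands a plot command to gnuplot. It assumes gnuplot is installed and on the PATH; the formula and labels are made up.

```python
import subprocess
import tkinter as tk
from tkinter import simpledialog

root = tk.Tk()
root.withdraw()  # we only need the pop-up dialog, not a main window

# Ask the user for a single value in a dialogue box
freq = simpledialog.askfloat("Plot", "Sine frequency:", initialvalue=1.0)

if freq is not None:
    # Hand a one-line script to gnuplot; "pause -1" keeps the plot window open
    script = f"set title 'sin({freq}*x)'; plot sin({freq}*x); pause -1"
    subprocess.run(["gnuplot", "-e", script])
```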
It really depends on which language you want to use.
I’m not that deep into the topic, but I experimented with GTK and tkinter as a kid
ChatGPT will easily make you a basic GUI in Python; it used tkinter in my case. Can only recommend it. It can also explain how those things work, etc.
Hmmm … 🤔 The best way not to make friends with somebody with over 30 years of coding experience: suggest he use ChatGPT to write a computer program 🤣🤣
It is far more efficient to ask specific questions instead of reading the whole documentation. Asking those with relevant knowledge of the field is usually not an option. Asking GPT is an option we now have. Why would you not like it? It is like having Excel instead of a calculator and paper.
Why would you not like it?
It takes the fun out of programming
You don’t learn as well when you have someone/something else do the thinking for you. It’s nice to NOT have to keep going back to an LLM for answers.
I learn even less if the effort required is far too high to even try. GPT reduces this a whole lot, enabling me (and presumably many others) to do things they were unable to do previously.
I really do not understand how this community is so toxic regarding this.
I really do not understand how this community is so toxic regarding this.
I’m guessing it’s because you’re surrounded by people who DID spend the extra effort to learn something on their own without having their hand held, and now just see people trying to take the easy way out.
You’re not unique. We were all in your position once.
Define “without having their hand held”. Did they come up with all concepts themselves? Do they exclusively code in assembly? Wire their machines by hand? Operate the switches manually? Push the button of the Morse machine themselves? How far back should I go with the analogies before it is clear how nonsensical that is? I am a random hobbyist who is enabled to do such stuff because of GPT. I would not have been able to replace a broken BMS chip in my e-bike battery without GPT helping me digest the datasheet and get the registers, programming procedure etc. into code to read the old chip and write the new one. I am not 15 anymore; I cannot spend 50 hours learning some niche skill that I will never(!) use again just to fix something that is worth $200.
If you think that anyone can do that with GPT, you are not only mistaken, but at the same time I am shocked that you would not want that to be the case, just out of pettiness that you could not do it as easily but “had to learn it the hard way back in the day”. Disgusting.
I don’t care what you do, you do you. I just like actually knowing things when I need to know them, and have the capacity to solve problems myself without being dependent on tech for everything. It’s like being able to figure out how to change your own engine oil vs. paying somebody to do it for you.
Did they come up with all concepts themselves? Do they exclusively code in assembly? Wire their machines by hand? Operate the switches manually? Push the button of the Morse machine themselves?
We read books. We went to classes. We got our hands dirty and failed, again and again and again until it clicked and we got it right. That’s the part that’s hard. LLMs are a tool. Not a replacement for a good programmer who understands what they are doing. Use them to help you save time with tasks you are already familiar with. Don’t use them as a college professor. Because eventually it’s going to teach you wrong, that’s how they work. And without knowing some basic concepts about the subject you’re inquiring about, you’re not going to catch it when it does go wrong.
I’m 42 by the way, and I still learn new things every day.
It is far more efficient to ask specific questions instead of reading the whole documentation.
I’m going to bring up an excerpt of your previous comment, because this is an example I want to make. Say there is something in that datasheet (I’m completely making this up as an example) about needing a certain value resistor to set the charging current, and ChatGPT fails to mention this and simply tells you that the battery takes the voltage directly from the circuit without it? Then you have a fire on your hands, because you decided to NOT to read the datasheet and skip crucial info. If you keep taking AI generated text at face value, it’s going to bite you in the ass one day.
Electronics is my main hobby, so you can bet I’m poring over datasheets all day too, and little gotchas like that are all over the place. You simply cannot trust them with these things the way you can trust a good old book or someone that’s been doing it for a long time.
~20 years ago:
“Reading documentation is for wimps! Real programmers read the source code directly”
LLMs are just a tool. And meanwhile our needs and expectations of even the simplest pieces of code have risen.
To be honest, I have no personal experience with LLMs (kind of boring, if you ask me). I do have two colleagues at work who tried them. One, who has very basic coding skills (his own words), is very happy. The other, who has much more coding experience, says that his tests show they are only good at very basic problems. Once things become more complex, they fail very quickly.
I just fear that the result could be that, if LLMs can be used to provide sample code for any project, open-source projects will spend even less time writing documentation (“the boring work”).
The LLM is excellent at writing documentation… :D
As a side note, this reminds me of a discussion I have every so often about “tools that make things too easy”.
There is something I call “the Arduino effect”: people who write code for things based on example code they find left and right, and all kinds of libraries they mix together. It all works … for as long as it works. The problem is what happens when things do not work.
I once helped out somebody who had an issue with a simple project. He: “I don’t understand it. I have this sensor, and this library … and it works. Then I have this 433 MHz radio module with that library, and that also works. But when I use them together, it doesn’t work.” Me: “What have you tried?” He: “Well, I looked at the libraries. They all seem fine. Reinstalled all the software. That wasn’t it either.” Me: “Could it be that these two boards use the same hardware interrupt or the same timer?” He: “The what???”
I see similar issues with other platforms. GNU Radio is another nice example: people mix blocks together without knowing what exactly they do.
As said, this is all very nice, as long as it works
I wonder if code generated by LLMs will not result in the same kind of problems: people who do not have the background knowledge needed to troubleshoot issues once problems become more complex.
(Just a thought / question … not an assumption.)
That can become an issue, but IMO the person in your example used the tool wrong. Using it to write the boilerplate for you, get an MVP, and see how the libraries should be used sets one on the right track. But that track should then be used to start messing with the code and understanding why what goes where. An LLM used as a replacement for writing code is misuse; used as a time booster it is good. Unless you completely don’t want to learn it and just want something that works. But that assumption broke in your example the moment they decided to add something to it.
I have a very “hands-on” way of learning things. I have had situations in the past where I read a library’s whole documentation back to back, but in the end I had to copy something that somehow works and keep breaking it and fixing it to understand how it works. The part between documentation and MVP wasn’t any easier because I’d read the documentation.
For that kind of learning, having an LLM create something that works is a great speed-up. In theory a tutorial might help in such cases, but it has to exist, and very often “I want something like this, but …” can mean that one is exploring a direction that won’t address their use case.
EDIT: A thought experiment. If I go to Fiverr asking for a project, then for another one, and then start smashing them together, the problem is not in what the freelancers did; it’s in me not knowing what I’m doing. But if I can have a 100-line boilerplate file that only needs a little tinkering generated from a few sentences of text, that’s a great speed-up.
Hi,
Just to put things into perspective.
Well, this example dates from some years ago, before LLMs and ChatGPT. But I agree that the principle is the same (and that was exactly my point).
If you analyse this, the error the person made was that he assumed an Arduino to be like a PC … while it is not. An Arduino is a microcontroller. The difference is that a microcontroller has limited resources: pins, hardware interrupts, timers, … In addition, pins can be reconfigured for different functions (GPIO, UART, SPI, I2C, PWM, …). Also, a microcontroller of the Arduino class does not run an RTOS, so it is coded “bare-metal”. And as there is no operating system doing resource management for you, you have to do it in the application.
And that was the problem: although resource management is the responsibility of the application programmer, the Arduino environment has largely pushed that off to the libraries. The libraries configure the ports in the correct mode, set up timers and interrupts, configure I/O devices, … And in the end, this is where things went wrong. So, in essence, what happened is that the programmer made an assumption based on the illusion created by the libraries: writing an application on Arduino is just like using a library on a Unix box (which is not correct).
That is why I have become careful about promoting tools that make things too easy, that are too good at hiding the complexity of things. Unless they are really dummy-proof after years and decades of use, you have to be very careful not to create assumptions that are simply not true.
I am not saying LLMs are by definition bad. I am just careful about the assumptions they can create.
what happened is that the programmer made an assumption based on the illusion created by the libraries: writing an application on Arduino is just like using a library on a Unix box (which is not correct)
That is why I have become careful about promoting tools that make things too easy, that are too good at hiding the complexity of things. Unless they are really dummy-proof after years and decades of use, you have to be very careful not to create assumptions that are simply not true.
I know where you’re coming from. And I’m not saying you’re wrong. But just a thought: what do you think will prevail? Having many people bash pieces together and call in someone who understands the matter only for the things that don’t work? Or having more people understand the real depths?
I’m afraid that in cases where the point is not to become the expert, the first one will be chosen as the viable tactic.
A long time ago we were putting things together by manually crafting assembly code. Now we use high-level languages to churn out the code faster and solve inefficiencies by throwing more hardware at the problem until optimizations arrive in the interpreter/compiler. We’re already choosing the first one.
tkinter is pretty powerful but not exactly easy to use. I’d use something simpler to get started.
Hence GPT to help. I built a fairly big GUI that way, far bigger than GPT’s context window (about 3,500 lines), but as always we can break things into smaller pieces that are easy to manage.
The easiest GUI toolkit I’ve used was NiceGUI. The end result is a web app, but the Python code you write is extremely simple, and it felt very logical to me.
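For anyone curious how little code that is, here is a minimal sketch (assuming NiceGUI is installed via pip; the labels are arbitrary):

```python
from nicegui import ui

# Two widgets and a running web server is all it takes
ui.label("Hello from NiceGUI")
ui.button("Click me", on_click=lambda: ui.notify("Button clicked"))

ui.run()  # serves the app locally and opens it in the browser
```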
I’ve used pyTK to make some apps for personal use. Good stuff, somewhat easy to use once you follow some tutorials.