• 0 Posts
  • 279 Comments
Joined 5 years ago
Cake day: February 15th, 2021


  • In general, I agree that you can always use the CLI raw, but a frontend is a lot more friendly for many. It’s the reason some people prefer TUI over CLI as well (some people really like lazygit and lazydocker which are just frontends wrapping git and docker CLI calls and presenting it in a TUI). A TUI/GUI can structure information in panels, it can be more context-sensitive and it can help provide visual representations of the operation.

    Also, wrapping CLI commands (whether through a GUI or a TUI) means the wrapper can automatically combine commands in whatever way best suits a particular goal, or more conveniently set up batch processing… it’s helpful for people who don’t like having to write their own scripts or craft long one-liners.

    Plus: let’s say you have your computer hooked to your TV and don’t have space for a keyboard (but can use a small wireless mouse on the arm of your couch); a GUI wrapper that lets you perform operations with just a mouse can be very convenient.

    I don’t know what kind of GUIs you are imagining, but I’ve hardly ever seen 1-to-1 recreations of a single individual command (unless that command is extremely complex, or a graphical representation would actually be useful).

    Some examples:

    Gparted builds a job list of terminal commands for the disk manipulation, but it presents a graphical representation of the disks before you commit to executing them, so you can preview the result of the changes in the GUI before pressing the button that runs parted, fdisk, mkfs, resize2fs, etc. (it does wrap those commands when applying the changes), without you needing to work out the steps and specific syntax of each of them on your own.

    There are ffmpeg wrappers for video editing or transcoding that some people find convenient for discovering the available options, and/or for having a limited list of presets / sanitized options for those who don’t want to bother creating their own scripts. Some also show video previews as a graphical representation (useful when the operation is about cropping the image, or picking the exact millisecond where to cut). An example is LosslessCut, which keeps a log of the ffmpeg calls… or maybe Shutter Encoder (press Alt+C to see the console commands).
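    As a sketch of what such a wrapper does internally (the function and file names here are hypothetical, not LosslessCut’s actual code): translate the GUI state into an ffmpeg argument list and shell out.

    ```python
    import subprocess  # a real frontend would execute the command with this

    def build_cut_command(src: str, dst: str, start: str, duration: int) -> list:
        """Build an ffmpeg invocation that cuts a clip without re-encoding
        (stream copy), the way lossless-cut style GUIs do. Returning an
        argv list instead of a shell string avoids quoting issues."""
        return [
            "ffmpeg",
            "-ss", start,            # seek to the cut's start position
            "-i", src,               # input file
            "-t", str(duration),     # length of the cut in seconds
            "-c", "copy",            # copy the streams, no re-encode
            dst,
        ]

    cmd = build_cut_command("input.mp4", "clip.mp4", "00:01:30", 20)
    print(" ".join(cmd))
    # A frontend would then run it: subprocess.run(cmd, check=True)
    ```

    A log like the one LosslessCut keeps is then just a record of each command line the frontend executed.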

    In Synaptic, the GUI package manager, pressing “Apply” runs the appropriate APT commands as a CLI app inside a VTE terminal widget, with the selection of packages you previously chose to add/remove/update in the GUI listing. Some people like having a detailed graphical listing for conveniently browsing packages and reading their full descriptions, while still getting the raw output and accurate log from the installation that you would get when using the CLI directly.




  • The thing is that age verification in a digital world is not easy… what exactly does the government mandate as a valid verification method?

    Like… would asking the user their age be valid enough? Because it’s not like a reliable method exists (not even credit card verification prevents a minor from taking their parents’ card and going through with it). IMHO, until the government actually sets a standard, I don’t see why websites should give anything more than the most minimal effort possible when it comes to this.


    Personally, I feel that if it uses control characters to redraw earlier parts of the screen, altering the scroll buffer and moving the cursor beyond the current line to repaint the screen, then it’s a TUI.

    CLI programs only output plain text as a stream, using control characters just for coloring and formatting, and if they do any redrawing it’s only on the current line (e.g. progress bars and the like).

    So… even something like less is a TUI program… but things like more or sed would be CLI programs.
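    The distinction shows up directly in the escape sequences a program emits. A minimal Python sketch (standard ANSI/VT100 codes; the helper names are made up for illustration):

    ```python
    import sys

    # CLI-style redraw: only the current line is rewritten, using "\r".
    # The scrollback buffer above is never touched.
    def cli_progress(total=5):
        for i in range(total + 1):
            sys.stdout.write(f"\rprogress: {i}/{total}")
            sys.stdout.flush()
        sys.stdout.write("\n")

    # TUI-style redraw: escape sequences move the cursor to an absolute
    # position and repaint earlier parts of the screen.
    def tui_draw(row, col, text):
        sys.stdout.write(f"\x1b[{row};{col}H")  # CUP: cursor to row;col
        sys.stdout.write("\x1b[2K")             # EL: erase that whole line
        sys.stdout.write(text)
        sys.stdout.flush()
    ```

    By the definition above, anything that needs the second kind of sequence (absolute cursor positioning) has crossed into TUI territory.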


  • Isn’t the T for “text”? (ie. “Text User Interface”)

    I mean, in the context of Unix systems it’s most likely gonna be within a terminal emulator, but in theory you can have a TUI inside an SDL window rendering the text there (especially when they are ports from other systems where they might be using different character sets than what’s available in terminals… or if they want to force a specific font).

    The only example that comes to mind right now is ZZT, but I believe there are many games on Steam that use a TUI rendered within their own program, not a terminal.


  • I generally agree but it depends on the application and the computer purpose / input you will most use.

    Like… it doesn’t make much sense to have a CLI/TUI for an image editor… if you start using things like sixel you are essentially building a GUI that runs in a terminal, not a TUI. The same happens with videogames, video players and related entertainment applications.

    But like I said, I do generally agree. I’d even argue that when possible, GUIs should just be frontends that ultimately just call the corresponding CLI programs with the appropriate parameters, avoiding duplication.


  • The second most restrictive of the Creative Commons licenses (only behind the BY-NC-ND one). CC BY-NC-SA is not considered an open source license.

    It gives the most control to the original copyright holders, since only they can produce commercial and proprietary versions of the product. Free as in free beer, but not as in freedom.


    Oh, I see the misunderstanding.

    Note that “authentication and login” does not necessarily require network communication with a government service. In fact, in Europe the eIDs (eIDAS) are digital documents that use cryptography to authenticate without the need to spend resources on a government-funded public API, which could be vulnerable to DDoS attacks and would require a reliable internet connection for all digital authentication (which might not always be an online operation). The chips are just a secure way to store the digital document and lock the actual key in hardware, making it much harder to copy/replicate, but they don’t require an internet connection to produce government-certified digital signatures that can be used for authentication. This is the same whether the service you are logging into is online or offline.

    In any case, in your example where actual network communication is used, it would still be possible for the government to track you regardless of proxies, because then they can store a log of the data & messages exchanged in the authentication.

    They can either require the sites to authenticate with the government before using the API (which would make sense to prevent DDoS and other abuse, for example), which would tell them immediately which site you were logging into (much more directly than with “documents”); or simply provide a token to the site as the result of the user authentication (a common practice anyway, most authentication systems work through tokens) and later, at any point in the future, ask the sites which tokens are linked to each account (just like I was saying before with the “documents” example), so the government can map each token to an individual person and know which users of that site correspond to which individuals.


  • I feel you are talking about a different thing now. My point was surrounding what you initially said:

    The only right way to do this, is if governments provide their citizens with an eID that any site can ask “is this person 18+?” and get an accurate answer without any other identifiable info. And if you don’t want the government to know what sites you visit, have sites route the request through a proxy.

    An eID is a digital document. You yourself are proposing that sites should request people to provide a document, one that’s issued by the government to you, personally. Then later you said that using a proxy prevents the government from knowing what you visit.

    My answer was that if you are providing a government-issued document/file to the service, then the government (the issuer) can know whether you visit the site just by keeping track of whom they issued each document to and requesting copies of the documents from the sites. Even if the document itself does not say your name. And that’s regardless of how many proxy layers you use, since the traceability is in the document. This makes you fundamentally less anonymous to the government than before (when a proxy would indeed have prevented this); proxies are no longer a good defense.

    The service does not know you, but that’s not the point; what you said is that the government can’t know if you visit the site, which is the one thing I disagreed with.


  • They might not know the list of sites you visit right away in the same way they could by contacting your ISP when you are not using a proxy, but that wasn’t my point.

    My point is that they can check with a specific site that uses this verification method and see if you have an account on that site, and if you do, which account in particular. And in a way that is much more directly linked to you personally than an IP address (which might be linked to the household/internet access you’re using, but isn’t necessarily under your name).

    So in this situation they can indeed know if you use any one particular site that they choose to target, as long as that site is requiring you to provide them with a document, regardless of how many layers of proxies you (or the site) choose to be under.

    I’m not sure what you mean by “the site that’s requesting this”, the site does not need to request anything from the government, they just need to have previously agreed on a “secret” mathematical verification method that works for every document. The digital equivalent of a stamp/signature.


    They don’t need to know the requesting address in order to know it was you, the person corresponding to that proof of age, because the information is in the data being exchanged. These kinds of verifications don’t depend on IP addresses or networking; these are credentials checked at the application layer.

    In fact, they don’t even need to directly communicate with the government for this.

    This is equivalent to a registration office for a service asking you to provide a paper stamped by the government that certifies your age without the paper actually saying who you are. The service does not need to contact the government if they can trust the stamp on the paper and the government official’s signature (which in this case is mathematical proof). And even though the service office can’t see your name on the paper, the government knows that the number written on it links to you individually, because they can keep a record of which particular paper number was issued to which individual, even if your name wasn’t written in the document itself.

    So, the government can, at any given time, go to those offices, ask them to hand in the paper corresponding to a particular registration and check the number to see who it belongs to.

    The traceability is in the document, not in the manner in which you send it. It does not matter if you send the document to a different country for someone else to send it from a different address, on your behalf (ie. a proxy). If the government can internally cross-reference the registration papers as being the ones linked to your governmental ID, they can know it’s yours regardless of how it reached the offices. So this way they can check if you registered yourself in any particular place they wanna target and what your account is.
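    The argument reduces to a toy model (all names here are hypothetical, purely for illustration): the credential carries only a serial number and a claim, no name, yet the issuer can link it back because it logged the serial at issuance time.

    ```python
    issuer_log = {}  # serial -> real identity, kept only by the issuer

    def issue_credential(serial, person):
        """Issue an 'age>18' credential; the holder's name is NOT inside it."""
        issuer_log[serial] = person
        return {"serial": serial, "claim": "age>18"}

    def site_register(store, account, credential):
        """The service stores the proof it was given, mapped to the account."""
        store[account] = credential

    site_store = {}
    site_register(site_store, "user42", issue_credential("A-001", "Alice"))

    # Later the issuer asks the site for a stored credential and
    # cross-references it, regardless of how it was delivered (proxy or not).
    who = issuer_log[site_store["user42"]["serial"]]
    print(who)  # → Alice
    ```

    Note that no step here inspects a network address; the linkage lives entirely in the issuer’s log plus the serial embedded in the document.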


  • I agree that a government that wants privacy can actually do it in a way that ensures privacy. That’s also what I was saying.

    My point was that this is up to the government, and no amount of “route the request through a proxy” would patch that up; it’s not gonna help in this case, because this is not something tracked at the networking layer but at the application layer.

    If the government wants to protect privacy, they can do it without you needing to use proxies, and if the government wants to see what sites you visit using these certificates, they can do it even if you were to use proxies.


  • If you have no way to link the signature to the original document, then how do you validate that the signature is coming from a document without repetition / abuse?

    How do you ensure there aren’t hundreds of signatures used for different accounts all done by the same stolen eID that might be circulating online without the government realizing it?

    Can the government revoke the credentials of a specific individual? …because if they can’t, then that looks like a big gap that could create a market of ever-growing stolen eIDs (or reused eIDs from the deceased)… and if they can revoke, what stops the government from running a test in which they revoke one specific individual and then check which signatures end up being revoked, to identify which ones belong to that person? The government can mandate the services to provide all the data they have so it can be analyzed as if the government were Issuer, Registry and Verifier, all in one, without separation of powers.

    I know there are ways to try and fix this, but those ways have other problems too, which end up forcing the need for a compromise… there’s no algorithm that perfectly provides anonymity and full verifiability with a perfect method of revocation that does not require checks at every user login. For example, with the eIDAS 2.0 system (considered zero-knowledge proof), the government does have knowledge of the “secret serial number” that is used in revocation, so if they collude with the service they can identify people by running some tests on the data.
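    The revocation test mentioned above can be sketched as a toy model (hypothetical names, not the actual eIDAS algorithm): revoke one person’s credential, then watch which proofs on a colluding service stop verifying.

    ```python
    revoked = set()  # serials the issuer has revoked

    def issue(serial):
        """A proof that carries only its serial, no name."""
        return {"serial": serial}

    def is_valid(proof):
        """The service's check: a proof is valid unless its serial is revoked."""
        return proof["serial"] not in revoked

    proofs_on_service = {"acct1": issue("S-1"), "acct2": issue("S-2")}

    # The issuer revokes the target's credential...
    revoked.add("S-2")

    # ...and a colluding service reports which accounts' proofs now fail.
    suspects = [acct for acct, p in proofs_on_service.items() if not is_valid(p)]
    print(suspects)  # → ['acct2']
    ```

    One targeted revocation plus one collusion query is enough to map an “anonymous” account back to an individual, which is why revocation design is where these schemes end up compromising.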


  • That prevents the site from knowing your identity, but I’m not convinced it prevents the government from knowing you visit the site. The government could keep track of which document corresponds to which individual whenever they issue / sign it.

    So if the government mandated that each signed proof of “age>18” was stored by the service and mapped to each account (to validate their proof), then the government could request the service to provide them copy of the proof and then cross-check from their end which particular individual is linked to it.



  • if you don’t want the government to know what sites you visit, have sites route the request through a proxy.

    I feel a proxy would not really make much of a difference. If the government keeps a mapping of which eID corresponds to each real person on their end (which they would do if they want to know what sites you visit), then they can simply request that the services (and/or intermediaries) provide the account mapping of the eIDs (and they could mandate by law that those records be kept, like they often do with ISPs and their IP addresses). The service might not know who that eID belongs to… but the government can know, if they want.

    The government needs to want to protect your privacy. If the government really wants to know what sites you visit, there’s no reason why they would want to provide you with an eID that is truly anonymous at all levels and that isn’t really linked to you, not even in state-owned databases.


  • I agree, which is why I think running those open source apps in a separate computer, isolating infotainment from the more critical software, would be a stronger safety layer.

    Keeping them separated should, imho, be a precondition: it minimizes accidents and exploits in cars that might be running software that isn’t immediately up to date, since publicly known vulnerabilities keep being discovered as the code evolves.


  • Open source software is not bug free. I’d argue there are more vulnerabilities caused by human error than there are caused by malicious actors. More often than not, malicious actors are just exploiting the errors/gaps left by completely legit designers.

    Running those open source apps in a separate computer, isolating infotainment from the more critical software, would be an even stronger safety layer, imho.


    While it’s true that the Debian installer used to be a TUI and didn’t have a nice GUI “live-CD” installation image for a long time (I think until 2019), the Debian installation process has included a default DE for much longer than that (since 2000). And before that, the installation offered a choice between different window managers (back in the days before well-established DE suites were even a thing).

    They don’t customize the DE much, but neither does Archlinux, which is a very popular distro nowadays (and its installer is arguably even less friendly than Debian’s used to be).

    Personally, I feel it has more to do with how other distros (like Mint, Ubuntu, Knoppix, etc.) have built on the work of Debian to make their own variants that are essentially Debian + extra stuff, making them better recommendations for the average person (if one thinks of those as Debian variants, then I wouldn’t say Debian is “left out”). And for the not-so-average people, rolling-release distros (or even things like Nix/Guix) might be more interesting to experiment with.