Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 17 Posts
  • 894 Comments
Joined 2 years ago
Cake day: October 4th, 2023

  • tal@lemmy.today to Programming@programming.dev · Keyboard latency

    Another thing to note about gaming keyboards is that they often advertise “n-key rollover” (the ability to have n simultaneous keys pressed at once; for many key combinations, typical keyboards will only register two keys at once, excluding modifier keys). Although not generally tested here, I tried a “Razer DeathStalker Expert Gaming Keyboard” that advertises “Anti-ghosting capability for up to 10 simultaneous key presses”. The Razer gaming keyboard did not have this capability in any useful manner, and many combinations of three keys didn’t work. Their advertising claim could, I suppose, be technically true in that 3 could in some cases be “up to 10”, but like gaming keyboards claiming to have lower latency due to 1000 Hz polling, the claim is highly misleading at best.

    That being said, the real issue was keyboards that used matrix encoders, where all keys were represented in a matrix, addressed by one line going high on the X axis and one line going high on the Y axis. I understand that this is cheaper, probably because it requires running fewer traces from the keys to the controller than running one trace per key. It looks something like:

         X1   X2   X3
    Y1   “Q”  “W”  “E”
    Y2   “R”  “T”  “Y”
    Y3   “U”  “I”  “O”

    That’s just a 3x3 matrix, as an example. So if I press “Q” on my keyboard, the X1 and Y1 lines will go high. If I keep it pressed and then additionally press the “W” key, the Y1 line, which is already high, will stay high. The X2 line will then also go high. The controller can detect the keypress, since a new line has gone high.

    If I keep both keys pressed and then additionally press the “R” key, then the X1 line is already high due to the “Q” key being down, and will stay high. The Y2 line will go high. The controller can detect the keypress.

    However, if I then press the “T” key, it can’t be detected. Pressing it would normally send the X2 line and Y2 line high, but both are already high due to existing keys being pressed.

    In practice, keyboard manufacturers try to lay out their matrix to minimize these collisions, but there’s only so much they can do with a matrix encoder. They’ll also normally run independent lines for modifier keys.

    A controller using a matrix encoding can always detect at least two keys being simultaneously pressed, but may not be able to detect a third.
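
    To make the collision concrete, here’s a minimal Python sketch using the hypothetical 3x3 layout above (the scan model is simplified for illustration):

        # Rows are the Y lines, columns are the X lines, per the table above.
        MATRIX = [
            ["Q", "W", "E"],  # Y1
            ["R", "T", "Y"],  # Y2
            ["U", "I", "O"],  # Y3
        ]
        POS = {key: (y, x) for y, row in enumerate(MATRIX)
                           for x, key in enumerate(row)}

        def observable(pressed):
            """Return the keys the controller sees as pressed. A naive matrix
            scan only learns which X and Y lines are high, so every key at an
            intersection of a high X line and a high Y line looks pressed."""
            ys = {POS[k][0] for k in pressed}
            xs = {POS[k][1] for k in pressed}
            return {k for k, (y, x) in POS.items() if y in ys and x in xs}

        print(observable({"Q", "W"}))       # sees exactly Q and W: two keys are fine
        print(observable({"Q", "W", "R"}))  # a phantom "T" appears alongside Q, W, R

    Three pressed corners of a rectangle in the matrix make the fourth corner indistinguishable from a real keypress, which is why controllers have to drop or block such combinations.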

    Matrix encoders aren’t really an issue when typing, but some games do require you to press more than two non-modifier keys at once. For example, it’s common to use the “WASD” keys for movement, and moving diagonally requires holding two of those. If someone is playing a game that requires pressing another key or two at the same time, those might collide.

    As I recall, USB sends the full state of the keyboard, not events specific to a button when a button is pressed, and there are protocol-level restrictions on the number of “pressed keys” that fit in a report; the standard HID boot-protocol report only has room for six non-modifier keys. That means that USB keyboards, at least in that mode, don’t support n-key rollover, and it’s why you’ll see some companies selling gaming keyboards with a PS/2 option, because that protocol does send state on a per-button basis. (It’s also why, for those of us who have used PS/2 keyboards and have experienced this, it’s possible to get a key on a PS/2 keyboard “stuck” down until it’s pressed again if the OS, for whatever reason, misses a key-up event.) USB gaming keyboards probably (hopefully) won’t actually advertise n-key rollover. But they can avoid using a matrix encoder, and in general, one really doesn’t need n-key rollover for playing games, just the ability to detect up to the USB limit. We only have ten fingers, and I don’t think that there are any games that require even something like six keys to be down at once.
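
    For a rough picture of that limit, this is approximately what decoding the 8-byte boot-protocol report looks like (a Python sketch, not any particular keyboard’s firmware):

        ROLLOVER_ERROR = 0x01  # all six slots set to 0x01 signals "too many keys held"

        def decode_boot_report(report: bytes):
            """Decode a USB HID boot-protocol keyboard report: byte 0 is a
            modifier bitmask, byte 1 is reserved, and bytes 2-7 hold at most
            six concurrent non-modifier keycodes (hence "6-key rollover")."""
            assert len(report) == 8
            modifiers = report[0]
            keys = [k for k in report[2:8] if k]  # 0x00 means "no key in this slot"
            if keys and all(k == ROLLOVER_ERROR for k in keys):
                return modifiers, None  # overflow: the keyboard can't say which keys
            return modifiers, keys

        # "a" (usage 0x04) and "b" (0x05) held along with Left Shift (bit 1):
        print(decode_boot_report(bytes([0x02, 0x00, 0x04, 0x05, 0, 0, 0, 0])))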

    Obviously, in the case the author hit with the Razer keyboard, it wasn’t able to do that. I’m not sure what they’re doing (unless they’re simply fabricating their feature claim outright, which I assume they wouldn’t). They might be using a larger matrix and sparsely populating it, though I’m guessing there.


  • You can definitely feel 100 ms of input response time. That’s about what an analog modem’s latency would be, and I can tell you that it’s very much noticeable on a telnet or ssh connection when you’re typing (though to be fair, what matters there is really round-trip time, so one should probably double that).

    On that note, if someone hasn’t run into it, mosh uses UDP and adaptive local echo to shave down network latency for terminal connections, and might be worth looking into if you often do remote work in a terminal over a WAN. It uses ssh to bootstrap auth (relevant if you’re concerned about using a less-widely-used piece of software to do network authentication, which I remember I was). I find that it makes things more pleasant, and I also like some of its other features, like auto-reconnecting after an arbitrary length of time. One can just close a laptop, reopen it a week later, and the terminals still function. Tmux and GNU screen can also do something similar (and in fact, I think that mosh and tmux are good packages to pair with each other), but they don’t do quite the same thing, as they (a) require manual re-establishment of the connection and (b) are aimed at letting one reconnect from different clients. Mosh also displays a notice in the terminal if there’s temporary network unavailability until it’s re-established communication, so the user isn’t simply staring at his screen wondering whether the software on the remote machine is being unresponsive or whether it’s a network issue.
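
    If you do pair them, something like mosh user@example.com -- tmux new-session -A -s main (the hostname and session name being placeholders) will start or re-attach a named tmux session over mosh in one step.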


  • tal@lemmy.today to Programming@programming.dev · Keyboard latency

    That’s…actually substantially more latency than I’d expected. Not exactly the same thing, but for perspective: while I haven’t played multiplayer competitive FPSes for many years, back when I did, the limit of what I could really “feel” when it came to network latency was around 10 milliseconds. The latency the keyboards are adding, if it’s as high as measured, is a really substantial amount of delay if you’re talking about video games.

    considers

    Note that depending upon the keyswitch mechanism, the controller does need to debounce the switch to avoid duplicate keypresses. I’ve used a keyboard with a controller that didn’t adequately debounce, and it was extremely obnoxious: I’d occasionally get duplicate keypresses, and I had to filter them out on the computer’s side.
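
    For a rough picture of where that time goes, here’s a Python sketch of time-based debouncing (the 5 ms window is a made-up number); the stability window is latency added to every keypress:

        DEBOUNCE_MS = 5  # how long a reading must hold steady before it's believed

        class Debouncer:
            def __init__(self):
                self.stable = False     # last accepted (debounced) key state
                self.candidate = False  # most recent raw contact reading
                self.changed_at = 0.0   # when the raw reading last changed

            def sample(self, raw: bool, now_ms: float) -> bool:
                if raw != self.candidate:          # contact bounced or key moved
                    self.candidate = raw
                    self.changed_at = now_ms
                elif now_ms - self.changed_at >= DEBOUNCE_MS:
                    self.stable = self.candidate   # held long enough: accept it
                return self.stable

        db = Debouncer()
        # A press at t=0 that bounces until ~2 ms isn't reported until it has
        # been stable for DEBOUNCE_MS:
        for t, raw in [(0, True), (1, False), (2, True), (3, True), (8, True)]:
            print(t, db.sample(raw, t))  # only reads True at t=8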

    However, if you look at gamepad button latency, they also need to worry about bounce, and their latency is much lower:

    https://gamepadla.com/

    You can get gamepads with sub-2-millisecond latency on USB.

    EDIT: Note that one thing that I learned from following !ergomechkeyboards@lemmy.world is that there are some semi-standardized open-source firmwares for (fancy, expensive) microcontroller-based keyboards; I believe that QMK is popular. I don’t know how the latency on those microcontroller-based keyboards compares, but assuming that there aren’t any fundamental constraints imposed by the keyboard’s other hardware, it might be possible to shave some time off by tweaking the firmware.

    I believe that at least some keyswitch mechanisms become more prone to bouncing over time, but if so, it might be possible for a microcontroller to detect bounces and tune the wait time to the mechanism on a given keyboard to adapt to mechanism wear.


  • You would typically want to use static IP addresses for servers (because if you use DHCP, the IP is gonna change sooner or later, and it’s gonna be a pain in the butt).

    In this case, he controls the local DHCP server, which is gonna be running on the OpenWrt box, so he can set it to always assign whatever address he wants to a given MAC.
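
    Assuming the stock dnsmasq-based DHCP setup, a static lease looks something like this in /etc/config/dhcp (the name, MAC, and address here are placeholders):

        config host
                option name 'server'
                option mac '11:22:33:44:55:66'
                option ip '192.168.1.2'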


  • tal@lemmy.today to Selfhosted@lemmy.world · [Solved] OpenWrt & fail2ban

    except that all requests’ IP addresses are set to the router’s IP address (192.168.3.1), so I am unable to use proper rate limiting and especially fail2ban.

    I’d guess that however the network is configured, you have the router NATting traffic going from the LAN to the Internet (typical for a home broadband router) as well as from the home LAN to the server.

    That does provide security benefits, in that you’ve basically “put the server on the Internet side of things”: the server can’t just reach into the LAN, same as anything else on the Internet, because a NAT table entry only gets established when someone on the LAN side opens a connection.

    But…then all of those hosts on the LAN are going to have the same IP address from the server’s standpoint. That’s the same view of the hosts on your LAN that machines out on the Internet get.

    It sounds like you also want to use DHCP:

    Getting the router to actually assign an IP address to the server was quite a headache

    I’ve never used VLANs on Linux (or OpenWrt), and I don’t know how they interact with the router’s hardware.

    I guess what you want to do is to not NAT traffic moving between the LAN (where most of your hardware lives) and the DMZ (where the server lives), but still disallow the DMZ from initiating connections into the LAN.

    considers

    So, I don’t know whether the VLAN stuff is necessary on your hardware to prevent the router hardware from acting like a switch, moving Ethernet packets directly without them ever going through Linux. Might be the case.

    I suppose what you might do (from a network standpoint; I don’t know off-the-cuff how to do it on OpenWrt, though if you’re just using it as a generic Linux machine, without any OpenWrt-specific stuff, I’m pretty sure that it’s possible) is to give the OpenWrt machine two non-routable IP addresses, something like:

    192.168.1.1 for the LAN

    and

    192.168.2.1 for the DMZ

    The DHCP server listens on 192.168.1.1 and serves DHCP responses for the LAN that tell hosts to use 192.168.1.1 as their default route; ditto for hosts in the DMZ, with 192.168.2.1. It hands out addresses from the appropriate pool. So, for example, the server in the DMZ would maybe be assigned 192.168.2.2.

    Then it should be possible to route between the two networks: traffic from 192.168.1.0/24 bound for 192.168.2.0/24 goes via 192.168.2.1, and vice versa, traffic from 192.168.2.0/24 bound for 192.168.1.0/24 goes via 192.168.1.1. Linux is capable of doing that, as that’s standard IP routing stuff.
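
    At the plain-Linux level, that amounts to something like the following (the bridge/interface names are placeholders; OpenWrt would normally wrap this in its own configuration):

        ip addr add 192.168.1.1/24 dev br-lan   # LAN side
        ip addr add 192.168.2.1/24 dev br-dmz   # DMZ side
        sysctl -w net.ipv4.ip_forward=1         # let the box route between them

    The routes for the two directly-connected /24s show up automatically once the addresses are assigned.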

    When a LAN host initiates a TCP connection to a DMZ host, it’ll look up the destination IP address in its routing table, say “hey, that isn’t on the same network as me, send it to the default route”, and send the packet to 192.168.1.1 with a destination address of 192.168.2.2. The OpenWrt box forwards it, doing IP routing, over to its 192.168.2.1 side, says “ah, that’s on my network, send it out the network port with VLAN tag whatever”, and the switch fabric, configured to segregate the ports based on VLAN tag, only sends the packet out the port associated with the DMZ.

    The problem is that the reason home users typically derive indirect security benefits from NAT is that it intrinsically disallows incoming connections from the server to the LAN. This setup makes that go away. The LAN hosts and DMZ hosts will be on separate networks, so things like ARP requests and other purely-Ethernet-level stuff won’t reach each other, but they can freely communicate with each other at the IP level, because the two 192.168.x.1 virtual addresses will route packets between the two networks. You’re going to need to firewall off incoming TCP connections (and maybe UDP and ICMP and whatever else you want to block) coming into the 192.168.1.0/24 network from the 192.168.2.0/24 network. You can probably do that with iptables at the Linux level; OpenWrt may have some sort of existing firewall package that applies a set of iptables rules. I think that all the traffic should be reaching the Linux kernel in this scenario.
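
    A rough sketch of those rules in raw iptables (again, the interface names are placeholders, and OpenWrt’s own firewall configuration may be the better place for this):

        # The LAN may initiate connections into the DMZ:
        iptables -A FORWARD -i br-lan -o br-dmz -j ACCEPT
        # The DMZ may only answer established connections, not open new ones into the LAN:
        iptables -A FORWARD -i br-dmz -o br-lan -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -i br-dmz -o br-lan -j DROP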

    If you get that set up, the host at 192.168.2.2, in the DMZ, should see connections from 192.168.1.2, on the LAN, coming from its original IP address.

    That should work if what you had was a Linux box with three Ethernet cards (one each for the Internet, the LAN, and the DMZ) and the VLAN switch hardware wasn’t in the picture; you’d just not do any VLAN stuff then. I’m not 100% certain that the VLAN switching fabric won’t muck it up; I’ve only very rarely touched VLANs myself, and I’ve never tried to do this, using VLANs to make switch fabric attached directly to a router act like independent NICs. But I can believe that it’d work.

    If you do set it up, I’d also fire up sudo tcpdump on the server. If things are working correctly, sudo ping -b 192.168.1.255 on a host on the LAN shouldn’t show up as reaching the server. However, ping 192.168.2.2 should.

    You’re going to want traffic that doesn’t match a NAT table entry and is coming in from the Internet to be forwarded to the DMZ VLAN.

    That’s a high-level picture of what I believe needs to happen. But I can’t give you a hand-holding walkthrough for configuring it off the cuff, because I haven’t done a fair bit of this myself; sorry about that.

    EDIT: This isn’t the question you asked, but I’d add that what I’d probably do myself, if I were planning to set something like this up, is get a small, low-power Linux machine with multiple NICs (well, okay, probably one NIC with multiple ports). That cuts the switch-level stuff that I think you’d otherwise need to contend with out of the picture, and then I don’t think that you’d need to deal with VLANs, which is a headache I wouldn’t want, especially when getting it wrong might have security implications. If you need more ports for the LAN, just throw a regular old separate hardware Ethernet switch on the LAN port. You know then that the switch can’t be moving traffic between the LAN and DMZ networks itself, because it can’t touch the DMZ. But I don’t know whether that’d make financial sense in your case, if you’ve already got the router hardware.


  • You can get wrong answer with 100% token confidence, and correct one with 0.000001% confidence.

    If everything that I’ve seen in the past has said that 1+1 is 4, then sure — I’m going to say that 1+1 is 4. I will say that 1+1 is 4 and be confident in that.

    But if I’ve seen multiple sources of information that state differing things (say, half of the information that I’ve seen says that 1+1 is 4 and the other half says that 1+1 is 2), then I can expose that to the user.

    I do think that Aceticon raises a fair point: fully capturing uncertainty probably needs a higher level of understanding than an LLM directly generating text from its knowledge store is going to have. For example, having many ways of phrasing a response will spread probability across the phrasings and reduce the apparent confidence in each, even if the phrasings are semantically compatible. Being on the edge between calling an object, oh…“white” or “eggshell” will also reduce the confidence derived from token probability, even if the two responses are semantically more-or-less identical in the context of the given conversation.

    There’s probably enough information available to an LLM to do heuristics as to whether two different sentences are semantically equivalent, but you wouldn’t be able to do that efficiently with a trivial change.
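
    As a toy illustration of the phrasing problem (the sampled outputs and the equivalence table here are made up, and a real system would need something like embedding similarity rather than a lookup table):

        from collections import Counter

        # Pretend model outputs sampled for "what color is the object?":
        SAMPLES = ["white", "eggshell", "white", "off-white", "blue"]
        # Crude stand-in for a semantic-equivalence heuristic:
        EQUIV = {"eggshell": "white", "off-white": "white"}

        by_meaning = Counter(EQUIV.get(s, s) for s in SAMPLES)
        total = sum(by_meaning.values())
        for meaning, count in by_meaning.most_common():
            print(f"{meaning}: {count / total:.0%} of samples")
        # "white" pools to 80% even though no single phrasing dominates;
        # that's the confidence that raw token-level numbers understate.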


  • One major problem with very-long-term data-retention formats is that the hardware to read them may stop being available in a surprisingly short period of time. Like, if you assume that a given format isn’t bumping up against fundamental physical limits, then it will probably be supplanted down the line by something else, and people will probably stop making the devices that work with it before long. The media can easily outlast the devices, and there probably won’t be new devices produced.

    https://en.wikipedia.org/wiki/BBC_Domesday_Project#Concerns_over_electronic_preservation

    The BBC Domesday Project was a partnership between Acorn Computers, Philips, Logica, and the BBC (with some funding from the European Commission’s ESPRIT programme) to mark the 900th anniversary of the original Domesday Book, an 11th-century census of England. It has been cited as an example of digital obsolescence on account of the physical medium used for data storage.[1][2][3][4]

    This new multimedia edition of Domesday was compiled between 1984 and 1986 and published in 1986.

    In 2002, concerns emerged over the potential unreadability of the discs as computers capable of reading the format became rare and drives capable of accessing the discs even rarer.[14][15] Aside from the difficulty of emulating the original code, a major issue was that the still images had been stored on the laserdisc as single-frame analogue video, which were overlaid by the computer system’s graphical interface. The project had begun years before JPEG image compression and before truecolour computer video cards had become widely available.

    I think that realistically, if you want to maintain something for very long-term archival use, it’s probably going to need to be rolled over into a new format periodically.