• 0 Posts
  • 65 Comments
Joined 2 years ago
Cake day: December 14th, 2023


  • Imo that’s perfectly fine and not idiotic if you have a static IP, no ISP-blocked ports (or you don’t care about using alt ports), and you don’t mind people who find your domain knowing your IP.

    I did basically that when I had a fiber line, but then I added a local HAProxy in front to handle additional subdomains. I feel like people gravitate towards recommending that because it works regardless of the answers to the other questions - even their security tolerance, if you recommend access only over VPN.

    I have CGNAT now, so a reverse proxy in the cloud is my only option, but at least I’m free to reconfigure my LAN, or uproot everything and plant it on any other LAN, and it’ll all be fine.


  • This is 99% my setup, just with a traefik container attached to my wireguard container.

    Can recommend, especially because I can move apartments any time, not care about CGNAT (my current situation, which I predicted would be the case), and easily switch to any backup uplink in a pinch by sticking my boxes on any network with DHCP that can reach the Internet (like a 4G hotspot or a nanobeam pointed at a public wifi down the road), all without reconfiguring anything.
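
    Roughly, the shape is something like this - a hedged sketch, not my exact config (the image tags, the wg0.conf path, and the namespace-sharing trick are illustrative):

    # sketch: traefik shares the wireguard container's network namespace,
    # so it only listens on the tunnel to the cloud VPS
    services:
      wireguard:
        image: linuxserver/wireguard
        cap_add:
          - NET_ADMIN
        volumes:
          - ./wg0.conf:/config/wg_confs/wg0.conf
        restart: always
      traefik:
        image: traefik:v3
        network_mode: service:wireguard
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
        restart: always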



  • Immich is pretty good for this if you take pictures at each location. It has a global map that shows all your photos with a heatmap-style display, and a drawer that shows a grid of the photos within your viewport as you pan and zoom around. It doesn’t seem like you can view a specific album on the map currently, but you can at least filter the map to favorites or a date range.


  • No problem! I don’t really do much road mapping, so I’m still figuring it out (I mostly just map sidewalks and parking lots), but I’m noticing some intersections near me also have messed-up lane configurations.

    Another thing that might help is ab-street: if you download the desktop version, you can import an area using an Overpass query and open it in the viewer program bundled with ab-street, which lets you click on segments and see where the turns connect from and to. It might not be 100% accurate, but it’s another debugging aid. I found that a road near me shows up in ab-street with the wrong number of lanes, which pointed me to an issue.
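
    For example, a minimal Overpass query that grabs the roads in a bounding box and outputs OSM XML you can feed to the importer (the coordinates are placeholders, and I haven’t tested this exact one):

    [out:xml][bbox:44.85,-93.30,44.86,-93.29];
    (
      way["highway"];
      >;
    );
    out meta;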



  • I have no clue about the whole problem, but I looked at what I think is the first issue, where it looks like it tells you to turn left at the point where the slip lane splits off to go right into the Shred Right parking lot.

    My guess is that turn:lanes is supposed to be tagged immediately before the intersection it applies to, and since the turn:lanes are tagged on the way before the slip lane attaches, it’s applying left|none|none to the slip lane junction: it sees 2 outgoing ways and applies “left” to Old Shakopee (going straight) and probably “none” to the slip lane. I would try removing the turn:lanes from the Old Shakopee section before the slip lane, but keeping it on the way that is actually connected to the main intersection there.
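
    To illustrate (the tag values here are made up, not copied from the actual ways):

    # segment of Old Shakopee before the slip lane splits off:
    # remove turn:lanes here
    lanes=3

    # segment actually connected to the main intersection:
    # keep the turn lanes tagged here
    lanes=3
    turn:lanes=left|none|none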

    I also noticed that there is a turn restriction for coming out of the Shred Right slip lane entrance and then turning 180° to go eastbound on Shakopee. Since that slip lane entrance is one-way, I don’t think that turn restriction is necessary or really helps, and it might be confusing OsmAnd. Without that turn restriction, I don’t think it would be necessary to have Old Shakopee split into 2 ways (one before the slip lane and one from there up to the intersection), so merging those 2 segments of Old Shakopee might simplify things.

    So I guess that’s 2 ideas that I think might fix the issue. Hopefully that works and can be applied to some of the other issues. I’m not sure about the intersection itself; I’ll have to look at it more closely later.


  • I use a .dev and it just works with letsencrypt. I don’t do anything special with wildcards; I just let traefik request a cert for every subdomain I use and it works. I use the TLS challenge, which works on port 443, so I don’t think HSTS or port 80 matters, but I still forward port 80 so I can serve an http->https redirect, since stuff like curl and probably other tools might not know about HSTS.
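
    For reference, the relevant chunk of my traefik static config looks roughly like this (from memory, so treat it as a sketch; the email and storage path are placeholders):

    entryPoints:
      web:
        address: ":80"
        http:
          redirections:
            entryPoint:
              to: websecure
              scheme: https
      websecure:
        address: ":443"
    certificatesResolvers:
      letsencrypt:
        acme:
          email: me@example.dev
          storage: /acme.json
          tlsChallenge: {}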


  • Gotcha, thanks for the info! It looks like I would be fine with OCIS or OpenCloud, but since my main use case and pain points are document editing, which is Collabora, it probably wouldn’t change much besides simplifying the docker setup (I had to make a gross pile of nginx config pieced together from many forum help posts to get the Nextcloud FPM container to work smoothly). But it already works, so unless it breaks there’s little incentive for me to change.


  • Ah I see, I guess at least that would help with the main UI, but I’m already using Collabora through the Collabora CODE server in Nextcloud, so it sounds like I’ll probably have the same document editing experience with OCIS/OpenCloud. I used to use OnlyOffice, but after I tried out their mobile app, it started blocking me from editing documents in the Nextcloud app (which seemed to use the OnlyOffice web UI), so I was forced to switch unless I started paying for OnlyOffice.


  • What are the apps that you would miss? I basically only use my NC as a Google Drive and Docs replacement, so all it has to do is store docx files and let me edit them on desktop or mobile without being glitchy, and I’ve really wanted to consider OCIS or similar.

    That second requirement seems hard because of how complex office suites are, but NC is driving me to my wit’s end with how slow and error-prone it is, and how glitchy the NC office UI is (like glitches when selecting text, or randomly scrolling you to the beginning).


  • I think you are misunderstanding my mention of C2PA, which I only brought up offhand as an example of prior art in digital media provenance that takes AI into account. If C2PA is indeed not about making a go/no-go determination of AI presence, then I don’t think it’s relevant to what OP is asking about, because OP is asking about an “anti-ai proof”, and I don’t think a chain of trust that needs to be evaluated on an individual basis fulfills that role. I also did disclaim my mention of C2PA - that I haven’t read it and don’t know if it overlaps at all with this discussion. So in short, I’m not misunderstanding C2PA because I’m not talking about C2PA; I just mentioned it as an interesting, tangentially related project so that nobody feels the need to reply with “but you forgot about C2PA”.

    I’m more interested in the high-level: “can we solve this by guaranteeing the origin” question, and I think the answer to that is yes

    I think you are glossing over the possibility that someone uses Photoshop to maliciously edit a photo, adding Adobe to the chain of trust. If instead you are suggesting that only individuals sign the chain of trust, then there is no way anyone will bother looking up each random person who edited an image (let alone every photographer) to check if they’re trustworthy. Again, I don’t think that lines up with what OP is asking for. In addition, we already have a way to verify the origin of an image - just check the source. AP posting an image on their site is currently equivalent to them signing it, so the only difference is some provenance, which I don’t think provides any value unless the edit metadata is secured, as I mention below. If you can’t find the source, then it’s the same as an image without a signature chain. This system doesn’t force unverified images to have an untrustworthy signature chain, so you will mostly have either images with trustworthy signature chains that also include a credit you can manually check, or images without a source or a signature. The only way it can be useful is if checking the signature chain is easier than checking the website of the credited source, and if it still requires the user to make the same determination, I don’t think it will move the needle - beyond making it marginally faster for those who would have checked for the source anyway.

    I don’t think we need any sort of controls on defining the types of edits at all.

    I disagree; the entire idea of the signature chain appears to be identifying potentially untrustworthy edits. If you can’t be sure that the claimed edit is accurate, then you are deciding entirely based on the identity of the signatory - in which case storing the edit note is moot, because it can’t be used to narrow down which signature could be responsible for an AI modification.

    If AP said they cropped the image, and if I trust AP, then I trust them as a link in the chain

    The thing about this is that if you trust AP to be honest about their edits, then you likely already trust them to verify the source - this is something they already do, so the rest of the chain seems moot. To use your own example, I can’t see a world where we regularly need to verify that AP didn’t take an image edited by Infowars and posted on Facebook, crop it, and sign it with AP’s key. That is just about the only situation where I see value in having the whole chain, but that’s not solving a problem we currently have. If you were worried that a trusted source would get their image from an untrusted source, they wouldn’t be a trusted source. And if a trusted source posts an image somewhere it gets compressed or re-shared, it’ll be on their official account or website, which already vouches for it.

    Worrying about MITM attacks is not a reasonable argument against using a technology. By the same token, we shouldn’t use TLS for banking because it can be compromised

    The difference with TLS is that the malicious parties are not in possession of the endpoints, so it’s not at all comparable. In the case of a malicious photographer, the malicious party owns the hardware to be exploited, and if the malicious party has physical access to the hardware, it’s almost always game over.

    Absolutely, but you can prevent someone from taking a picture of an AI image and claiming that someone else took the picture. As with anything else, it comes down to whether I trust the photographer, rather than what they’ve produced.

    Yes, and this is exactly the problem: it comes down to whether you trust the photographer, meaning each user needs to research the source and make up their own mind. The system would change nothing from today, because in both cases you need to check the source and decide for yourself. You might argue that at least with a chain of signatures the source is attached to the image, but I don’t think that will change anything in practice, since any fake image will simply lack a signature, just as many fake images today are not credited. The question OP seems to be asking is about a system that can make that determination itself, because leaving it up to the user to check is exactly the problem we currently have.


  • I think you might be assuming that most of the problems I listed are about handling the trust of the software that made each modification - perhaps if you only read the first part of my comment. I’m not sure changing the signature to a chain really addresses any of them, besides creating a bigger “hit list” of companies to scrutinize.

    For reference, the issues I listed included:

    1. Trusted image editors adding or replacing a signature cannot do so securely without a TPM - without one, someone can memory-edit the image buffer without the program knowing, and have a “crop” edit signed by Adobe that actually replaces the image with an AI one
    2. Needs a system to grade the “types” of edits in a foolproof way - so that you can’t bypass having the image marked as “user imported an external image” by, for example, using an automated tool to paint the imported image’s pixels over the original
    3. Need to prevent MITM attacks on the camera sensor data, which would make the entire system moot
    4. You cannot prevent someone from taking a picture of a screen showing an AI image

    There are plenty of issues with how even a trusted piece of software lets you edit the picture, since trusted software would need to be able to distinguish between a benign edit and one adding AI. I don’t think a signature chain changes much, since the chain just increases the number of involved parties that need to be vetted, without changing any of the characteristics of what you are allowed to do.

    I think the main problem with the signature chain is that the chain by itself doesn’t let you attribute any particular part of the image to any party in the chain. You will be able to see all the responsible parties, but have no way of telling which company in the chain signed off on a malicious modification. If the chain contains Canon, GIMP, and Adobe, there is no way to tell whether the AI added to the image got there because the Canon camera was hacked, or because GIMP or Adobe had a workaround that let someone replace the image with an AI one. In the case of a malicious edit, I think it makes less sense to let the picture retain the Canon signature if the entire image could have been changed by Adobe - it essentially puts Canon’s signature reputation on the line for stuff they might not be responsible for.

    This would also bring a similar problem to the one I mentioned, where there would need to be a level of trust for each piece of editing software - and you might end up with a world where GIMP is out because nobody trusts it, so you can say goodbye to using any smaller developer’s image editor if you want your image to stay verified. That could be a nightmare if providers such as Facebook wanted to use the signature chain to block untrusted uploads; it would penalize using anything but Adobe products, for example.

    In short, I don’t think a chain changes much besides increasing the number of parties you have to evaluate, complicating validation without helping you attribute a malicious edit to any particular party. And now you have a situation where GIMP, for example, might be blamed for being in the chain when the vulnerability was in Adobe’s or Canon’s products. My understanding of the question is that the goal is an automatic, final determination of authenticity, which I think is infeasible. The chain you’ve proposed sounds closer to a “web of trust” style system, where every user needs to create their own trust criteria and decide for themselves what to trust - which I think defeats the purpose of preventing gullible people from falling for AI images.


  • I don’t think this is really feasible.

    I’ve heard of efforts (edit: this is the one: https://c2pa.org/ - I haven’t read it at all, so I don’t know if it overlaps with my ideas below) to come up with a system that digitally signs images when they are taken, using a tamper-resistant TPM or secure enclave built into cameras, but that doesn’t even begin to address the pile of potential attack vectors and challenges.

    For example, if only cameras can sign images, and the signature is only valid for that exact image, then editing the image in any way makes the signature invalid. So then you’d probably need image editors to be able to re-sign the edited image, assuming the edit is minor (crop, color correction), but you’d need a way to prevent rogue/hacked image editors from re-signing an edit that adds AI elements. So unless image editors require you to have a TPM that can verify your edit is minor / not adding AI, an image editor would be able to forge a signature on an AI edit.
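
    To make the “signature is only valid for that exact image” part concrete, here’s a toy sketch with openssl (the key files are hypothetical; a real camera would do the signing inside its secure enclave):

    # camera signs the image at capture time
    openssl dgst -sha256 -sign camera_key.pem -out photo.sig photo.jpg
    # anyone can verify against the camera's public key
    openssl dgst -sha256 -verify camera_pub.pem -signature photo.sig photo.jpg
    # any edit, even a simple crop, changes the digest and verification fails
    convert photo.jpg -crop 100x100+0+0 cropped.jpg
    openssl dgst -sha256 -verify camera_pub.pem -signature photo.sig cropped.jpg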

    Assuming you require every image editor to run on a device with a TPM in order to re-sign edits, there’s also the problem of deciding which edits are OK and which are too much. You probably can’t allow compositing with external images unless they are also signed, because you could just composite an AI image into an originally genuine one. You also probably couldn’t stop someone from using macros to paint every pixel of an AI image on top of a genuine image with the pencil tool at 1px brush size, so you would need some kind of heuristic running inside the TPM or TEE that can check how much the image changed. And you’d have to prevent someone from doing this piecewise (like overlaying only 1/10 of an AI image at a time so the heuristic won’t reject any single edit), so you might need to keep the full original image embedded in the signed package, so the final version can be checked against the original to see if it was edited too much.

    You might be able to solve some of the editing vulnerabilities by only allowing a limited set of editing operations (like crop/rotate or curves). If you did that, you could skip requiring a TPM for editing, as long as the editing software doesn’t create a new signature but just saves the edits as a list of changes alongside the original signed image. Maybe a system like this, where you can only crop/rotate and color correct images, would work for stock photos or news, but it would be super limiting for everyone else, so I can’t see it really taking off.

    And if that’s not enough, I’m sure that if this system were built, someone would just MITM the camera sensor and inject fake data, so you’d need to parts-pair all camera sensors to the TPM, iPhone home button style (iiuc, this exact kind of data injection attack is the justification for the iPhone home button fingerprint scanner’s parts pairing).

    Oh, and how do you stop someone from using such a camera to take a picture of a screen that has an AI image on it?


  • Hmm, well it doesn’t seem to be a problem with the docker-compose then, as best as I can tell. I picked a random ext4 flash drive and replicated your setup with the UID and GID set, and it seems to work fine:

    # /etc/fstab
    /dev/sda1       /home/<me>/mount/ext_hdd_01  ext4    defaults 0 2
    
    ~/mount % ls -an
    total 12
    drwxr-xr-x  3 1000 1000 4096 Mar 27 16:22 .
    drwx------ 86 1000 1000 4096 Mar 27 16:31 ..
    drwxrwxrwx  3    0    0 4096 Mar 27 16:26 ext_hdd_01
    
    ~/mount/ext_hdd_01 % ls -an
    total 6521728
    drwxrwxrwx 3    0    0       4096 Mar 27 16:26 .
    drwxr-xr-x 3 1000 1000       4096 Mar 27 16:22 ..
    -rw-r--r-- 1 1000 1000 6678214224 May  5  2024 PXL_20240504_233345242.mp4
    drwxrwxrwx 2    0    0      16384 May  5  2024 lost+found
    -rwxr--r-- 1 1000 1000          5 Mar 27 16:27 test.txt
    
    # ~/samba/docker-compose.yml
    services:
      samba:
        image: dockurr/samba
        container_name: samba
        environment:
          NAME: "Data"
          USER: "user"
          PASS: "pass"
          UID: "1000"
          GID: "1000"
        ports:
          - 445:445
        volumes:
          - /home/<me>/mount:/storage
        restart: always
    

    I was able to play the PXL.mp4 video from my desktop and write the test.txt file back.

    Have you checked the logs with docker logs -f samba to see if there’s anything there?

    Also, you could try to access the HDD from within the container: use docker exec -it samba bash, then cd into /storage and see what happens.
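
    For reference, as one-liners (assuming the container is named samba like above):

    docker logs -f samba
    docker exec -it samba ls -an /storage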


  • I would suggest adding “UID” and “GID” environment variables to the container, set to the numeric user and group IDs that show in place of your name when you run “ls -an” inside the “mount” folder (they will probably be the same number).

    For example, if inside your mount folder you see:

    ls -an
    total 12
    drwx------ 2 1001 1001 4096 Mar 27 13:54 .
    drwxr-xr-x 3 1000 1000 4096 Mar 27 13:51 ..
    -rwx------ 1 1001 1001    0 Mar 27 13:54 hello.txt
    -rwx------ 1 1001 1001    4 Mar 27 13:54 test.txt
    

    Then set UID: 1001 and GID: 1001
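
    In the compose file that’s just:

        environment:
          UID: "1001"
          GID: "1001"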

    I get the same error as you when I copy your docker-compose and try to access a folder owned by my user. When I add my user’s UID and GID to the docker-compose (1001 for me), the error goes away.