• [object Object]@lemmy.world · 5 hours ago

    I love WebP, but your explanation is a bit confused. WebP is typically lossy, just like JPEG; it's just compressed more efficiently, meaning a smaller file for the same image quality. So there's no ‘entire image data’, there are only different approximations of the original image and different compressed files. Truly lossless images, in PNG or other formats, take several times more data.

    Disabling WebP in favor of JPEG would use something like 20-40% more data in comparison. Which still sucks, but not as much.
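
    If you want to sanity-check the gap yourself, here's a minimal sketch using Pillow (the photo.jpg filename and quality settings are just placeholders; actual savings vary a lot with image content):

    ```python
    import io

    from PIL import Image

    img = Image.open("photo.jpg").convert("RGB")

    # Re-encode the same pixels in each format at a comparable quality setting.
    for fmt, kwargs in [
        ("JPEG", {"quality": 80}),
        ("WEBP", {"quality": 80}),  # lossy WebP, like most WebP on the web
        ("PNG", {}),                # lossless, for the 'several times more' point
    ]:
        buf = io.BytesIO()
        img.save(buf, format=fmt, **kwargs)
        print(f"{fmt}: {buf.tell():,} bytes")
    ```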

    • SleeplessCityLights@programming.dev · 5 hours ago

      I wasn’t going to get into the whole lossiness of the formats and just simplified it to ‘full image’ versus ‘compressed format’. It is interesting that it only saves 20-40%. I was under the impression that the page only rendered the image at the size necessary to fit the layout, not at full resolution. Forcing it to less lossy or lossless would mean that the larger image would always be available to be rendered without another web request.

      • [object Object]@lemmy.world · 3 hours ago

        That’s a rather interesting question: does rendering at smaller sizes let the browser skip decoding parts of the image?

        First, the presented file is normally loaded in full, because that’s how file transfer works over the web. Until recently, there were no different sizes available, and that only became widely-ish spread because of Apple’s ‘Retina’ displays with a different dots-per-inch resolution, HiDPI mostly being twice the linear resolution of standard DPI. Some sites, like Wikipedia, also support resizing images on the fly to some target dimensions, which produces a brand-new JPEG (or other format) file. In any case, to my somewhat experienced knowledge, JPEG itself doesn’t support sending every second row or anything like that (progressive JPEG does stream coarser passes first, but it’s still one file of a fixed size), so you always get a file of a predetermined size.
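
        Wikipedia-style on-the-fly resizing boils down to something like this sketch (a hypothetical thumbnail helper using Pillow; the real MediaWiki pipeline is of course far more involved):

        ```python
        from pathlib import Path

        from PIL import Image

        CACHE = Path("thumbs")
        CACHE.mkdir(exist_ok=True)

        def thumbnail(src: str, target_width: int) -> Path:
            """Resize src on first request, then serve the cached copy.

            The result is a brand-new JPEG at the requested dimensions: the
            server sends a smaller file, not a partial slice of the original.
            """
            out = CACHE / f"{Path(src).stem}_{target_width}px.jpg"
            if not out.exists():
                img = Image.open(src)
                height = round(img.height * target_width / img.width)
                img.resize((target_width, height)).convert("RGB").save(out, "JPEG", quality=80)
            return out
        ```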

        First-and-a-half, various web apps can implement their own methods for loading lower- or higher-res images, which they prepare in advance. E.g. a local analogue of Facebook almost certainly loads various prepared-in-advance low-res images for viewing in the apps or on the site, but has the full-res originals available on request, via a menu.
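
        The prepared-in-advance variant is the same idea run once at upload time rather than per request; the widths and naming below are made up for illustration:

        ```python
        from PIL import Image

        # Hypothetical rendition ladder, generated once at upload time so the
        # app can pick whichever size fits the current view without resizing live.
        WIDTHS = [320, 640, 1280, 2560]

        def make_renditions(src: str) -> list[str]:
            img = Image.open(src).convert("RGB")
            names = []
            for w in WIDTHS:
                if w >= img.width:  # never upscale past the original
                    break
                copy = img.resize((w, round(img.height * w / img.width)))
                name = f"{src.rsplit('.', 1)[0]}_{w}w.jpg"
                copy.save(name, "JPEG", quality=80)
                names.append(name)
            return names
        ```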

        Second, I would imagine that JPEG decoding always produces an image at the original size, which is then dynamically resized to fit the viewport of the target display, particularly since many apps allow zooming in and out of the image on the fly. Specifically, I think decoding a JPEG creates a native lossless image similar to BMP or somesuch (essentially just a 2D array of pixel colors), which is then fed to the OS’s rendering facilities, taking quite a chunk of memory. Of course, by now this is all heavily hardware-accelerated, with the common pipelines prepared to render raw pixels, JPEG, and a whole bunch of other formats.
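
        That decoded-size intuition is easy to check; with Pillow (filename hypothetical again), the raw pixel array dwarfs the compressed file:

        ```python
        from PIL import Image

        img = Image.open("photo.jpg")  # a few MB on disk, say
        img.load()                     # force the actual decode

        # The decoded form is just a raw pixel array: width * height * bytes
        # per pixel, no matter how small the compressed file was.
        bytes_per_pixel = len(img.getbands())  # 3 for RGB, 4 for RGBA
        raw = img.width * img.height * bytes_per_pixel
        print(f"{img.width}x{img.height} -> {raw / 2**20:.1f} MiB decoded")

        # Resizing to the viewport happens after decode, on those raw pixels.
        preview = img.resize((img.width // 4, img.height // 4))
        ```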

        It would be quite interesting if file decoding itself could just skip some of the rows or columns, but I don’t think that’s quite how compression works in current formats (at least in lossy ones, which depend on previous data to encode later data). Although, afaik, JPEG encodes the image in blocks (8x8 DCT blocks, grouped into 16x16 macroblocks with the usual chroma subsampling), so it could be that whole chunks can be skipped altogether.
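
        As it happens, JPEG decoders can skip a lot of that work: libjpeg can decode at 1/2, 1/4, or 1/8 scale by operating on those 8x8 DCT blocks directly, and Pillow exposes this via draft (filename hypothetical once more):

        ```python
        from PIL import Image

        img = Image.open("photo.jpg")
        print("full size:", img.size)

        # Ask the decoder for roughly quarter scale *before* loading; libjpeg
        # picks the nearest supported scale (1/2, 1/4, 1/8) by working on the
        # 8x8 DCT blocks directly, skipping most of the inverse-DCT work.
        img.draft("RGB", (img.width // 4, img.height // 4))
        img.load()
        print("decoded at:", img.size)
        ```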