I need to scan very large JSONL files efficiently and am considering a parallel grep-style approach over line-delimited text.

Would love to hear how you would design it.

  • Eager Eagle@lemmy.world · 6 hours ago
    1. How many grep-like ops per file?
    2. Is it interactive or run by another process?
    3. Do you know which files ahead of time?
    4. Do you have any control over that file creation?
    5. Is the JSONL append-only? Will the grep run while the file is being modified?
    6. How large is very large? 100s of MB? Few GB? 100s of GB? Whether or not it fits in memory could change the approach.
    7. You’re using files, plural; would parallelizing at the file level (e.g. one thread per file) be enough?
    8. How many files and how often is that executed?
  • Bazell@lemmy.zip · 11 hours ago (edited)

    Splitting the file into equal parts and analyzing each part in its own thread is basically the only efficient way I can think of to utilize modern CPU architectures for your task, since I doubt the data stored in your files can be processed quickly by the GPU (I assume you have text data).

    • bleistift2@sopuli.xyz · 11 hours ago

      Can a file really be split efficiently? And is reading from multiple files on the same disk really faster than scanning a single file from top to bottom?

      • entwine@programming.dev · 9 hours ago

        You don’t actually need to “split” anything; you just read from different offsets per thread. mmap might be the most efficient way to do this (or at least the easiest).
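
        A rough sketch of that (assuming the memmap2 crate; the file name, pattern, and thread count are placeholders). Each thread snaps its start offset forward to a line boundary and finishes whatever line straddles its end, so every line is scanned exactly once:

        ```rust
        // Parallel substring scan over one mmap'd file.
        use std::fs::File;
        use std::thread;

        use memmap2::Mmap;

        fn main() -> std::io::Result<()> {
            let file = File::open("logs.jsonl")?; // placeholder file
            let mmap = unsafe { Mmap::map(&file)? };
            let data: &[u8] = &mmap;
            let pattern: &[u8] = b"\"level\":\"error\""; // placeholder pattern
            let n_threads = 8;
            let chunk = (data.len() + n_threads - 1) / n_threads;

            thread::scope(|s| {
                for i in 0..n_threads {
                    s.spawn(move || {
                        let mut pos = i * chunk;
                        let end = ((i + 1) * chunk).min(data.len());
                        // Snap forward to the first line starting in our range.
                        while pos > 0 && pos < data.len() && data[pos - 1] != b'\n' {
                            pos += 1;
                        }
                        while pos < end {
                            let eol = data[pos..]
                                .iter()
                                .position(|&b| b == b'\n')
                                .map_or(data.len(), |p| pos + p);
                            let line = &data[pos..eol];
                            if line.windows(pattern.len()).any(|w| w == pattern) {
                                println!("{}", String::from_utf8_lossy(line));
                            }
                            pos = eol + 1;
                        }
                    });
                }
            });
            Ok(())
        }
        ```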

        Whether or not that’s going to run into hardware bottlenecks is a separate issue from designing a parallel algorithm. Idk what OP is trying to accomplish, but if their hardware is known (e.g. this is an internal tool meant to run in a data center), they’ll need to read up on their hardware and virtualization architecture to squeeze out the most I/O performance.

        But if parsing is actually the bottleneck, there’s a lot you can do to optimize it in software. Simdjson would be a good place to start.
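
        Simdjson itself is C++; for illustration, here is roughly what the Rust port (the simd-json crate) looks like per line. Note it parses in place, so each line needs a mutable byte buffer (the file name is a placeholder):

        ```rust
        use std::fs::File;
        use std::io::{BufRead, BufReader};

        fn main() -> std::io::Result<()> {
            let reader = BufReader::new(File::open("logs.jsonl")?); // placeholder
            for line in reader.lines() {
                // simd-json mutates its input buffer while parsing.
                let mut bytes = line?.into_bytes();
                match simd_json::to_borrowed_value(&mut bytes) {
                    Ok(_doc) => { /* inspect the parsed value here */ }
                    Err(e) => eprintln!("skipping malformed line: {e}"),
                }
            }
            Ok(())
        }
        ```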

      • Bazell@lemmy.zip · 9 hours ago (edited)

        If the task is just to read the data quickly without processing it (doing calculations, sorting, transformations, etc.), then yes, reading line by line is the fastest way. But the OP mentioned some processing of the data, which may require additional time and computing power, so it will be more efficient to first load the file into RAM, split it into chunks, give each thread a chunk to process, and then combine the results.

        In fact, my first comment suggested that you can read the file line by line, and once enough lines have been read into RAM, thread 1 can start processing them while thread 0 keeps reading new lines from the drive. Once another chunk is ready, thread 2 can start processing it, and so on.

        In conclusion, it all depends on what exactly you need to do with the data. Simply transferring it from disk to RAM is done by reading line by line, but the processing itself can be split among CPU cores to maximize the speed of the computations.
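
        A minimal sketch of that pipeline (the batch size, file name, and process step are placeholders; the bounded channel keeps the reader from getting too far ahead of the workers):

        ```rust
        use std::fs::File;
        use std::io::{BufRead, BufReader};
        use std::sync::{mpsc, Arc, Mutex};
        use std::thread;

        // Placeholder "processing": count lines containing a fixed substring.
        fn process(batch: Vec<String>) -> usize {
            batch.iter().filter(|l| l.contains("error")).count()
        }

        fn main() -> std::io::Result<()> {
            let (tx, rx) = mpsc::sync_channel::<Vec<String>>(4);
            let rx = Arc::new(Mutex::new(rx));

            let workers: Vec<_> = (0..4)
                .map(|_| {
                    let rx = Arc::clone(&rx);
                    thread::spawn(move || {
                        let mut hits = 0;
                        loop {
                            // Hold the lock only long enough to take one batch.
                            let msg = rx.lock().unwrap().recv();
                            match msg {
                                Ok(batch) => hits += process(batch),
                                Err(_) => break, // sender dropped: no more work
                            }
                        }
                        hits
                    })
                })
                .collect();

            // Thread 0 (here, the main thread) reads and batches lines.
            let reader = BufReader::new(File::open("logs.jsonl")?);
            let mut batch = Vec::new();
            for line in reader.lines() {
                batch.push(line?);
                if batch.len() == 10_000 {
                    tx.send(std::mem::take(&mut batch)).unwrap();
                }
            }
            if !batch.is_empty() {
                tx.send(batch).unwrap();
            }
            drop(tx); // close the channel so the workers exit

            let total: usize = workers.into_iter().map(|w| w.join().unwrap()).sum();
            println!("{total} matching lines");
            Ok(())
        }
        ```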

      • Ephera@lemmy.ml · 10 hours ago

        I think you could open the same file multiple times and then just skip ahead by some number of bytes before you start reading.

        But yeah, no idea if this would actually be efficient. The bottleneck is likely still the hard drive, and trying to fit multiple sections of the file into RAM might end up being worse than reading linearly…
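
        For the skipping itself, a common trick is to seek each handle to one byte before its range and discard one line, which leaves the reader at the first line that starts inside the range; each thread then reads one line past its end so the straddling line isn’t lost. A rough sketch (the file name and match string are placeholders):

        ```rust
        use std::fs::File;
        use std::io::{BufRead, BufReader, Seek, SeekFrom};
        use std::thread;

        const PATH: &str = "logs.jsonl"; // placeholder file

        fn main() -> std::io::Result<()> {
            let len = std::fs::metadata(PATH)?.len();
            let n: u64 = 4;
            let chunk = (len + n - 1) / n;
            let mut handles = Vec::new();
            for i in 0..n {
                let (start, end) = (i * chunk, ((i + 1) * chunk).min(len));
                handles.push(thread::spawn(move || -> std::io::Result<u64> {
                    let mut file = File::open(PATH)?;
                    let mut pos = start;
                    if start > 0 {
                        file.seek(SeekFrom::Start(start - 1))?;
                    }
                    let mut reader = BufReader::new(file);
                    let mut buf = String::new();
                    if start > 0 {
                        // Discard the (possibly partial) line at the seek point.
                        pos = start - 1 + reader.read_line(&mut buf)? as u64;
                    }
                    let mut hits = 0u64;
                    // Only lines whose first byte falls in [start, end) are ours.
                    while pos < end {
                        buf.clear();
                        let n_read = reader.read_line(&mut buf)?;
                        if n_read == 0 {
                            break; // EOF
                        }
                        if buf.contains("error") {
                            hits += 1; // placeholder match
                        }
                        pos += n_read as u64;
                    }
                    Ok(hits)
                }));
            }
            let mut total = 0;
            for h in handles {
                total += h.join().unwrap()?;
            }
            println!("{total} matching lines");
            Ok(())
        }
        ```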

        • Bazell@lemmy.zip · 9 hours ago (edited)

          This approach will indeed hit a bottleneck, even on an SSD. Instead, the file can be read into RAM line by line in thread 0; once a specified number of lines has been gathered, schedule thread 1 to process them while thread 0 keeps reading new lines. Once another chunk is ready, hand it to thread 2, and so on. This way you start processing the data asynchronously, as early as possible. A slightly slower but more convenient approach is to first read the whole file into RAM and only then assign parts of it to all the threads at once.

  • bizdelnick@lemmy.ml · 9 hours ago

    Bad idea. First, a file is read sequentially; you can’t parallelize that. Second, grep is a bad fit for structured files. Better to use jq or something similar.

      • bizdelnick@lemmy.ml · 6 hours ago

        Sorry, I missed that L, and I’d never heard of JSONL before (although I’ve worked with JSON logs that are effectively JSONL). So, well, you may use grep, but it can be inefficient (depending on the regex engine and on how good you are with regexes), and it’s easy to make a mistake if you’re not very proficient with them. So I’d prefer a JSON parser (jq or another, maybe lower-level if performance matters) over grep anyway.
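
        For instance, a hybrid sketch: a cheap substring check as a prefilter, then a real parse to confirm the match (assuming the serde_json crate; the level field and file name are made up):

        ```rust
        use std::fs::File;
        use std::io::{BufRead, BufReader};

        fn main() -> std::io::Result<()> {
            let reader = BufReader::new(File::open("logs.jsonl")?);
            for line in reader.lines() {
                let line = line?;
                // Cheap prefilter: only lines that might match pay for a parse.
                if !line.contains("error") {
                    continue;
                }
                if let Ok(v) = serde_json::from_str::<serde_json::Value>(&line) {
                    if v.get("level").and_then(|l| l.as_str()) == Some("error") {
                        println!("{line}");
                    }
                }
            }
            Ok(())
        }
        ```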

  • vfscanf()@discuss.tchncs.de · 10 hours ago

    The question is, what will be your limiting factor: CPU or disk I/O? Parallel processing doesn’t do much good if the workers have to wait on the disk to deliver more data. I’d start with an async architecture, where the program can do its processing while it is waiting on more data.

    • pelya@lemmy.world · 8 hours ago

      One additional trick is to compress your files before writing them to disk, using some kind of fast, lightweight compression like parallel gzip (the pigz command) or lzop. When parsing them, you will have smaller disk reads but higher CPU usage, which can be a speed advantage if you have a server-class CPU with lots of cache.
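
      On the read side this can stay a plain line loop, with decompression happening inline as you iterate. A sketch assuming the flate2 crate (MultiGzDecoder also handles concatenated gzip members; the file name and match string are placeholders):

      ```rust
      use std::fs::File;
      use std::io::{BufRead, BufReader};

      use flate2::read::MultiGzDecoder;

      fn main() -> std::io::Result<()> {
          let file = File::open("logs.jsonl.gz")?;
          let reader = BufReader::new(MultiGzDecoder::new(file));
          let mut hits = 0u64;
          for line in reader.lines() {
              if line?.contains("error") {
                  hits += 1;
              }
          }
          println!("{hits} matching lines");
          Ok(())
      }
      ```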

  • Lysergid@lemmy.ml · 10 hours ago

    How large is very large? Would it be something that jq can’t do? Is it purely string search or JSON-tree search?

    Generally you would want to get the file size and split it into ranges that can be read as valid UTF-8, then feed each range to a reader thread. This can be inefficient on HDDs, because each thread will access a random location on disk, forcing the needle to jump back and forth. You will also need to reread the ranges around each split point with some positive and negative offset, in case the content you want got split. Things get much more complicated if you want a JSON-tree grep: branches may get separated from their parent nodes across multiple ranges.