• tatterdemalion@programming.devOP · 5 hours ago

    There is a subtle scenario with read-modify-write transactions in MVCC where SQLite lacks some grace (in my opinion).

    In MVCC, each transaction works with a consistent, point-in-time “read snapshot” of the database, taken as of some moment between atomic commits.

    Consider this example:

    1. Transaction A begins and reads from table foo.
    2. Transaction B begins and updates table foo.
    3. Both transactions commit.

    There is no conflict here because these transactions are isolated from each other via the snapshot mechanism. Transaction A’s read snapshot is immutable and will not see any writes from transaction B, even if they are happening concurrently.
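
    To make this concrete, here’s a minimal sketch using Python’s sqlite3 module. WAL mode is assumed (that’s what lets a writer commit while a reader keeps its snapshot), and the file path and table contents are made up for the demo:

    ```python
    import os
    import sqlite3
    import tempfile

    path = os.path.join(tempfile.mkdtemp(), "demo.db")

    # isolation_level=None turns off Python's implicit transaction
    # management, so BEGIN/COMMIT happen exactly where written.
    a = sqlite3.connect(path, isolation_level=None)
    b = sqlite3.connect(path, isolation_level=None)
    a.execute("PRAGMA journal_mode=WAL")  # persists in the db file
    a.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY, n INTEGER)")
    a.execute("INSERT INTO foo VALUES (1, 0)")

    a.execute("BEGIN")                                # 1. A begins...
    print(a.execute("SELECT n FROM foo").fetchall())  #    ...and reads: [(0, 0)] -> [(0,)] for column n

    b.execute("BEGIN")                                # 2. B begins and
    b.execute("UPDATE foo SET n = 1")                 #    updates foo
    b.execute("COMMIT")                               # 3. B commits

    print(a.execute("SELECT n FROM foo").fetchall())  # still [(0,)]: A's
    a.execute("COMMIT")                               # snapshot is intact
    ```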

    Now what happens in this example (from the OP):

    1. Transaction A begins and reads from table foo.
    2. Transaction B begins and updates table foo.
    3. Transaction B commits.
    4. Transaction A tries to update foo.

    This is a true conflict because both transactions are trying to write to foo, and transaction A’s writes might be based on what it just read. There is no consistent way for A to proceed, because B already wrote to foo, invalidating A’s read snapshot.

    So SQLite handles this by returning an error to A, effectively requiring A to restart the transaction.
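
    Continuing that sketch, the conflicting interleaving surfaces as an immediate error on A’s write (SQLITE_BUSY_SNAPSHOT under the hood, which Python reports as “database is locked”):

    ```python
    # Reusing connections a and b from the sketch above.
    a.execute("BEGIN")
    print(a.execute("SELECT n FROM foo").fetchall())  # 1. A reads foo

    b.execute("BEGIN")
    b.execute("UPDATE foo SET n = n + 1")             # 2. B updates foo
    b.execute("COMMIT")                               # 3. B commits

    try:
        a.execute("UPDATE foo SET n = n + 1")         # 4. A tries to update
    except sqlite3.OperationalError as e:
        print(e)  # "database is locked", i.e. SQLITE_BUSY_SNAPSHOT
        a.execute("ROLLBACK")  # all A can do is restart the transaction
    ```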

    There are other ways this could be handled, though. The DB could optimistically retry the transaction for you. There is even a special BEGIN IMMEDIATE; statement that takes the write lock up front (SQLite’s write lock covers the whole database, not just foo), so the transaction can’t have its snapshot invalidated and be forced to restart by other writers; see the sketch below. But SQLite puts all of the responsibility on users to handle this.
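
    A retry helper along these lines (a hypothetical wrapper, not anything SQLite or the sqlite3 module ships) is the usual workaround:

    ```python
    import sqlite3
    import time

    def run_write_txn(conn, fn, attempts=5):
        """Hypothetical helper: run fn inside BEGIN IMMEDIATE, retrying.

        BEGIN IMMEDIATE takes the write lock up front, so the snapshot
        can never go stale mid-transaction; any SQLITE_BUSY here just
        means another writer currently holds the lock.
        """
        for i in range(attempts):
            try:
                conn.execute("BEGIN IMMEDIATE")
                fn(conn)
                conn.execute("COMMIT")
                return
            except sqlite3.OperationalError:
                if conn.in_transaction:
                    conn.execute("ROLLBACK")
                time.sleep(0.05 * (i + 1))  # crude linear backoff
        raise RuntimeError("could not complete write transaction")

    # e.g. run_write_txn(a, lambda c: c.execute("UPDATE foo SET n = n + 1"))
    ```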

    I’m not an expert, so there could be a very good reason that SQLite works this way, but it feels a bit annoying as a user.

    I don’t actually know off the top of my head how PostgreSQL handles this particular scenario.

  • FishFace@piefed.social · 3 hours ago

      Wouldn’t that be an error due to serialisation failure in Postgres if you enabled serialisable isolation?

      If you didn’t, you could get anything, because unless I’m mistaken this is the typical example used to illustrate what goes wrong when you don’t have full serialisable isolation.

      • tatterdemalion@programming.devOP · 3 hours ago

        I believe both SERIALIZABLE and REPEATABLE READ isolation levels would raise an error for this example.

        And I looked this up: Postgres won’t retry the transaction for you either. Though ultimately I think this is a good default behavior.
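
        For the record, here’s roughly what that looks like from Python with psycopg2 (the DSN and the foo table are made-up stand-ins for the example upthread):

        ```python
        import psycopg2
        from psycopg2 import errors

        a = psycopg2.connect("dbname=demo")
        b = psycopg2.connect("dbname=demo")
        a.set_session(isolation_level="REPEATABLE READ")
        b.set_session(isolation_level="REPEATABLE READ")
        ca, cb = a.cursor(), b.cursor()

        ca.execute("SELECT n FROM foo WHERE id = 1")         # 1. A reads (snapshot taken)
        cb.execute("UPDATE foo SET n = n + 1 WHERE id = 1")  # 2. B updates foo
        b.commit()                                           # 3. B commits

        try:
            # 4. A updates a row that changed after its snapshot was taken.
            ca.execute("UPDATE foo SET n = n + 1 WHERE id = 1")
        except errors.SerializationFailure as e:
            print(e)      # could not serialize access due to concurrent update
            a.rollback()  # Postgres leaves the retry to the client, like SQLite
        ```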

          • FishFace@piefed.social · 2 hours ago

          Why would repeatable read raise an error? I’m a little rusty, but I thought only serialisable could do that, and that repeatable read simply ensured that the committed writes of transaction 2 would not be read in transaction 1, so transaction 1 updating over the top of transaction 2 is fine.