sgarland an hour ago

> Imagine you need to add an index to a table with a few million rows. On a seeded database with 200 rows, the migration runs in milliseconds. Obviously. But on a branch with realistic data, it takes 40 seconds and needs CREATE INDEX CONCURRENTLY to avoid locking the table. The branch is isolated, so locking there isn't the issue — the point is that the rehearsal shows the production migration would need CONCURRENTLY.

A few million rows should take at most, on the most awful networked storage available, maybe 10 seconds. I just built an index locally on 10,000,000 rows in 4 seconds. More to the point, there are vanishingly few cases where you wouldn't want to use CONCURRENTLY in prod - you shouldn't need to run a test to tell you that.
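
For reference, the non-locking form is just this (table and column names are made up):

    -- builds the index without blocking writes;
    -- cannot run inside a transaction block
    CREATE INDEX CONCURRENTLY idx_orders_customer_id ON orders (customer_id);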

IMO branching can be a cool feature, but the use case I keep seeing touted (indexes) doesn't seem like a good fit for it. You should have a pretty good idea of how an index is going to behave before you build it, just from understanding the RDBMS. There are also tools like hypopg [0], which is available on cloud providers as well.
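
For the record, the hypopg flow is roughly this (table/column names again made up):

    -- hypothetical index: the planner sees it, but nothing is built
    CREATE EXTENSION IF NOT EXISTS hypopg;
    SELECT * FROM hypopg_create_index('CREATE INDEX ON orders (customer_id)');
    -- plain EXPLAIN (not EXPLAIN ANALYZE) will consider the hypothetical index
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;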

A better example would be testing a large schema change, like normalizing a JSON blob into proper columns or something, where you need to validate performance before committing to it.

0: https://github.com/HypoPG/hypopg

sastraxi 2 hours ago

I’ve done experiments using BTRFS and ZFS for local Postgres copy-on-write. You don’t need anything but vanilla Postgres (a newish version) and a supported file system to do it anymore; just clone the database using a template.
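
If I'm reading the docs right, the vanilla route is just this (names are placeholders):

    -- PG 15+; FILE_COPY copies the template's data files directly,
    -- which a CoW filesystem can make very cheap. The template must
    -- have no active connections while this runs.
    CREATE DATABASE mydb_branch TEMPLATE mydb STRATEGY = FILE_COPY;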

Looking at Xata’s technical deep dive, the site claims we need an additional Postgres instance per replica and proposes a network file system to work around that. But I don’t really understand why that’s needed. Can someone explain what I’m misunderstanding here?

  • eatonphil an hour ago

    I also don't really understand how correctness under physical branching with ZFS, or under physical backups of a filesystem, is different from crash safety in general. As long as you replay the WAL at the point where you branch (or take the physical backup of the filesystem), you should not lose data?

    At the same time Postgres people don't seem comfortable with the idea in practice so I'm not sure if this is actually ok to do.

    • hilariously an hour ago

      Crash safety does mean rolling back everything in progress, but yes: if your database cannot safely do that (even if it is yucky), then you do not have a database that is safe in any crash situation.

mininao 11 minutes ago

Using Neon for this and it's an absolute game changer. Would recommend implementing database branching, whatever solution you pick.

comrade1234 2 hours ago

I was on a big team working on a giant Oracle database over 25 years ago. I don't remember the term, but each developer had their own playground of the giant database that wasn't affected by anyone else. The DB admin would set it up for each developer in just a few minutes, so it definitely wasn't a copy. Then when a developer needed to reset and go back to the original DB, it again took just a few minutes. I just don't remember what it was called, but I think Postgres has had it for a few years now.

  • tremon an hour ago

    You don't actually need to physically copy data, just create a view for every table that does a replacing merge between the original read-only data and the developer's own copy. And you can put a trigger on the view to redirect writes to the same private-copy table, making the whole thing transparent to the user.
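
    A minimal sketch of that idea, assuming a base table orders with primary key id (all names here are made up, and DELETE support would need a tombstone table on top of this):

        -- per-developer overlay table with the same shape as the base
        CREATE TABLE orders_private (LIKE orders INCLUDING ALL);

        -- the view: the developer's rows shadow the read-only originals
        CREATE VIEW orders_v AS
        SELECT * FROM orders_private
        UNION ALL
        SELECT o.*
        FROM orders o
        WHERE NOT EXISTS (SELECT 1 FROM orders_private p WHERE p.id = o.id);

        -- redirect writes on the view into the private copy
        CREATE FUNCTION orders_v_write() RETURNS trigger AS $$
        BEGIN
          IF TG_OP = 'UPDATE' THEN
            -- replace any existing shadow row; the base row is untouched
            DELETE FROM orders_private WHERE id = NEW.id;
          END IF;
          INSERT INTO orders_private SELECT (NEW).*;
          RETURN NEW;
        END $$ LANGUAGE plpgsql;

        CREATE TRIGGER orders_v_redirect
        INSTEAD OF INSERT OR UPDATE ON orders_v
        FOR EACH ROW EXECUTE FUNCTION orders_v_write();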

    Not disputing that Oracle might have had something like this built-in, but it sounds like something that I could have whipped up in a day or so as a custom solution. I actually proposed a similar system to create anonymized datasets for researchers when I worked at a national archive institute.

    • TheMrZZ an hour ago

      Snowflake uses a similar system with their zero-copy cloning. It starts from the original table's micro-partitions and keeps track of the delta created by subsequent operations. Always found that built-in mechanism pretty neat!
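
      In Snowflake SQL that's just (from memory):

          CREATE TABLE orders_dev CLONE orders;  -- zero-copy, shares micro-partitions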

  • hilariously an hour ago

    Sounds like a snapshot - a file-based diff of the pages changed since the last full backup - which is easy to revert to for the same reasons.

Nihilartikel 2 hours ago

This kind of magic is the reason I'm itching to line up real work on Datomic or XTDB someday.

theaniketmaurya 2 hours ago

I was using Neon and they had a similar feature, but now I'm using PlanetScale. Would be curious to know how you all are doing it?

  • miketery an hour ago

    We used Neon at my last job. It seemed pretty cool. What made you switch to PlanetScale?