Show HN: DBOS TypeScript – Lightweight Durable Execution Built on Postgres

github.com

75 points by KraftyOne 2 days ago

Hi HN - Peter from DBOS here with my co-founder Qian (qianl_cs)

Today we want to share our TypeScript library for lightweight durable execution. We’ve been working on it since last year and recently released v2.0 with a ton of new features and a major API overhaul.

https://github.com/dbos-inc/dbos-transact-ts

Durable execution means persisting the execution state of your program while it runs, so if it is ever interrupted or crashes, it automatically resumes from where it left off.
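To make the idea concrete, here's a toy sketch of the mechanism (illustrative only, not the DBOS API): each completed step's result is persisted keyed by workflow ID, so re-running the workflow after a crash skips work that already finished.

```typescript
// Toy durable-execution sketch: an in-memory Map stands in for Postgres.
type Checkpoint = Map<string, unknown>;
const store = new Map<string, Checkpoint>(); // workflowId -> completed steps

let sideEffects = 0; // counts how many times real work actually runs

function runStep<T>(workflowId: string, stepName: string, fn: () => T): T {
  const cp = store.get(workflowId) ?? new Map<string, unknown>();
  store.set(workflowId, cp);
  if (cp.has(stepName)) return cp.get(stepName) as T; // already done: skip it
  const result = fn(); // do the real work
  sideEffects++;
  cp.set(stepName, result); // checkpoint before moving on
  return result;
}

function workflow(workflowId: string): number {
  const a = runStep(workflowId, "stepOne", () => 2);
  const b = runStep(workflowId, "stepTwo", () => a + 3);
  return b;
}

const firstRun = workflow("wf-1");  // executes both steps
const recovery = workflow("wf-1");  // "crash recovery": replays from checkpoints
```

Because the second run finds both steps checkpointed, it returns the same answer without re-executing any work.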

Durable execution is useful for a lot of things:

- Orchestrating long-running or business-critical workflows so they seamlessly recover from any failure.

- Running reliable background jobs with no timeouts.

- Processing incoming events (e.g. from Kafka) exactly once.

- Running a fault-tolerant distributed task queue.

- Running a reliable cron scheduler.

- Operating an AI agent, or anything that connects to an unreliable or non-deterministic API.

What’s unique about DBOS’s take on durable execution (compared to, say, Temporal) is that it’s implemented in a lightweight library that’s totally backed by Postgres. All you have to do to use DBOS is “npm install” it and annotate your program with decorators. The decorators store your program’s execution state in Postgres as it runs and recover it if it crashes. There are no other dependencies you have to manage, no separate workflow server–just your program and Postgres.

One big advantage of this approach is that you can add DBOS to ANY TypeScript application–it’s just a library. For example, you can use DBOS to add reliable background jobs or cron scheduling or queues to your Next.js app with no external dependencies except Postgres.

Also, because it’s all in Postgres, you get all the tooling you’re familiar with: backups, GUIs, CLI tools–it all just works.

Want to try DBOS out? Initialize a starter app with:

    npx @dbos-inc/create -t dbos-node-starter
Then build and start your app with:

    npm install
    npm run build
    npm run start
Also check out the docs: https://docs.dbos.dev/

We'd love to hear what you think! We’ll be in the comments for the rest of the day to answer any questions you may have.

e12e 5 hours ago

Interesting idea. It seems like zodb (https://zodb.org) might enable some similar things for python - by simply being an object database?

Is it possible to mix typescript and python steps?

CMCDragonkai 2 days ago

Could you genericise the Postgres requirement and provide a storage interface we could plug into? I think I have a use for this in Polykey (https://GitHub.com/MatrixAI/Polykey) but we use RocksDB (a transactional embedded key-value DB).

  • KraftyOne 2 days ago

    That's definitely worth considering! The core algorithms can work with any data store. That said, we're focused on Postgres right now because of its incredible support and popularity.

    • CMCDragonkai a day ago

You could imagine this working well for Cloudflare Workers, especially with their time limits on execution (or even with the AWS compute market).

qianli_cs 2 days ago

Hello! I'm a co-founder of DBOS and I'm happy to answer any questions :)

  • sarahdellysse 2 days ago

    Hi there, I think I might have found a typo in your example class in the github README. In the class's `workflow` method, shouldn't we be `await`-ing those steps?

  • nahuel0x a day ago

Can you change the workflow code for a running workflow that has already advanced some steps? What support does DBOS have for workflow evolution?

  • ilove196884 2 days ago

I know this might sound scripted or cliché, but what is the use case for DBOS?

    • qianli_cs 2 days ago

      The main use case is to build reliable programs. For example, orchestrating long-running workflows, running cron jobs, and orchestrating AI agents with human-in-the-loop.

      DBOS makes external asynchronous API calls reliable and crashproof, without needing to rely on an external orchestration service.

  • peterkelly a day ago

    How do you persist execution state? Does it hook into the Python interpreter to capture referenced variables/data structures etc, so they are available when the state needs to be restored?

    • KraftyOne a day ago

      That work is done by the decorators! They wrap around your functions and store the execution state of your workflows in Postgres, specifically:

      - Which workflows are executing

      - What their inputs were

      - Which steps have completed

      - What their outputs were

      Here's a reference for the Postgres tables DBOS uses to manage that state: https://docs.dbos.dev/explanations/system-tables
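      As a rough mental model, the recorded state looks something like the types below (field names are illustrative, made up for clarity; see the system-tables docs for the real schema):

```typescript
// Illustrative shape of the bookkeeping DBOS keeps in Postgres.
interface WorkflowRecord {
  workflowId: string;
  status: "PENDING" | "SUCCESS" | "ERROR"; // which workflows are executing
  inputs: string;                          // JSON-serialized arguments
}

interface StepRecord {
  workflowId: string;   // which workflow this step belongs to
  stepId: number;       // position of the step within the workflow
  output: string | null; // JSON-serialized return value, once completed
}

const wf: WorkflowRecord = { workflowId: "wf-1", status: "PENDING", inputs: "[42]" };
const stepRow: StepRecord = { workflowId: "wf-1", stepId: 0, output: JSON.stringify({ ok: true }) };
```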

      • CMCDragonkai a day ago

        All of this seems like it would fit any transactional key-value store.

  • mnembrini 2 days ago

    About workflow recovery: if I'm running multiple instances of my app that use DBOS and they all crash, how do you divide the work of retrying pending workflows?

  • Dinux 2 days ago

    Hi, really cool project! This is something I can actually use.

  • gbuk2013 2 days ago

    FYI the “Build Crashproof Apps” button in your docs doesn’t do anything.

    • qianli_cs 2 days ago

      You'll need to click either the Python or TypeScript icon. We support both languages and will add more icons there.

      • gbuk2013 a day ago

        Thanks, the icons work!

        I was originally looking at the docs to see if there was any information on multi-instance (horizontally scaled) apps. Is this supported? If so, how does that work?

        • qianli_cs a day ago

          Yeah, DBOS Cloud automatically (horizontally) scales your apps. For self-hosting, you can spin up multiple instances and connect them to the same Postgres database. For fan-out patterns, you may leverage DBOS Queues. This works because DBOS uses Postgres for coordination, rate limiting, and concurrency control. For example, you can enqueue tasks that are processed by multiple instances; DBOS makes sure that each task is dequeued by one instance.

          Docs for Queues and Parallelism: https://docs.dbos.dev/typescript/tutorials/queue-tutorial
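          The "each task is dequeued by exactly one instance" property can be pictured with a small simulation: every worker tries to claim every task, but one atomic check-and-set (the role Postgres plays for DBOS) guarantees a single winner per task. This is a sketch of the idea, not DBOS internals:

```typescript
// Simulated task queue where an atomic claim ensures exactly-once dequeue.
// In DBOS the claim happens in a Postgres transaction; here a Set plays that role.
const tasks = ["t1", "t2", "t3", "t4"];
const claimed = new Set<string>();

function claim(taskId: string): boolean {
  if (claimed.has(taskId)) return false; // another instance got it first
  claimed.add(taskId); // atomic in Postgres via a transactional update
  return true;
}

// Two "instances" race over the same queue. In this single-threaded sketch
// worker A claims everything; in a real concurrent deployment the tasks get
// split between instances, but each task still has exactly one winner.
function runWorker(): string[] {
  const mine: string[] = [];
  for (const t of tasks) if (claim(t)) mine.push(t);
  return mine;
}

const workerA = runWorker();
const workerB = runWorker();
```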

swyx a day ago

> What’s unique about DBOS’s take on durable execution (compared to, say, Temporal) is that it’s implemented in a lightweight library that’s totally backed by Postgres. All you have to do to use DBOS is “npm install” it and annotate your program with decorators. The decorators store your program’s execution state in Postgres as it runs and recover it if it crashes. There are no other dependencies you have to manage, no separate workflow server–just your program and Postgres.

this is good until your postgres server fills up with load and you need to scale up / fan out work to a bunch of workers. how do you handle that?

(disclosure, former temporal employee, but also no hate meant, i'm all for making more good orchestration choices)

  • KraftyOne a day ago

    That's a really good question! Because DBOS is backed by Postgres, it scales as well as Postgres does, so 10K+ steps per second with a large database server. That's good for most workloads. Past that, you can split your workload into multiple services or shard it. Past that, you've probably outscaled any Postgres-based solution (very few services need this scale).

    The big advantages of using Postgres are:

    1. Simpler architecturally, as there are no external dependencies.

    2. You have complete control over your execution state, as it's all on tables on your Postgres server (docs for those tables: https://docs.dbos.dev/explanations/system-tables#system-tabl...)

    • reissbaker a day ago

      Unaffiliated with DBOS but I agree that Postgres will scale much further than most startups will ever need! Even Meta still runs MySQL under the hood (albeit with a very thick layer of custom ORM).

chatmasta a day ago

Do you consider ”durability” to include idempotency? How can you guarantee that without requiring the developer to specify a (verifiable) rollback procedure for each “step?” If Step 1 inserts a new purchase into my local DB, and Step 2 calls the Stripe API to “create a new purchase,” what if Step 2 fails (even after retries, eg maybe my code is using the wrong URL or Stripe banned me)? Maybe you haven’t “committed” the transaction yet, but I’ve got a row in my database saying a purchase exists. Should something clean this up? Is it my responsibility to make sure that row includes something like a “transaction ID” provided by DBOS?

It just seems that the “durability” guarantees get less reliable as you add more dependencies on external systems. Or at least, the reliability is subject to the interpretation of whichever application code interacts with the result of these workflows (e.g. the shipping service must know to ignore rows in the local purchase DB if they’re not linked to a committed DBOS transaction).

  • KraftyOne a day ago

    Yes, if your workflow interacts with multiple external systems and you need it to fully back out and clean up after itself after a step fails, you'll need backup steps--this is basically a saga pattern.

    Where DBOS helps is in ensuring that the entire workflow, including all backup steps, always runs to completion. So if your service is interrupted and that causes the Stripe call to fail, then upon restart your program will automatically retry the Stripe call, and if that doesn't work, back out and run the step that closes out the failed purchase.
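    That saga pattern can be sketched in plain TypeScript as follows (illustrative; DBOS would additionally persist progress so the compensations survive a crash):

```typescript
// Saga sketch: run steps in order; if one fails, run the completed
// steps' compensations in reverse order to back the work out.
type SagaStep = { name: string; run: () => void; compensate: () => void };

const log: string[] = [];

function runSaga(steps: SagaStep[]): boolean {
  const done: SagaStep[] = [];
  for (const s of steps) {
    try {
      s.run();
      done.push(s);
    } catch {
      for (const d of done.reverse()) d.compensate(); // undo in reverse order
      return false;
    }
  }
  return true;
}

const ok = runSaga([
  { name: "insertPurchase", run: () => { log.push("insert"); }, compensate: () => { log.push("deleteRow"); } },
  { name: "chargeStripe", run: () => { throw new Error("Stripe failed"); }, compensate: () => {} },
]);
```

Here the Stripe step fails, so the local purchase row gets cleaned up by the first step's compensation.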

atsbbg 15 hours ago

What are the limits on retroaction? Can retroactive changes revise history?

For example, if I change the code/transactions in a step, how do you reconcile which state to prepare for which transactions? For example, you'd need to reconcile deleted and duplicated calls to the DB?

mfrye0 a day ago

I see the example for running a distributed task queue. The docs aren't so clear, though, for running a distributed workflow, apart from the comment about using a VM ID and the admin API.

We use spot instances for most things to keep costs down and job queues to link steps. Can you provide an example of a distributed workflow setup?

  • KraftyOne a day ago

    Got it! What specifically are you looking for? If you launch multiple DBOS instances connected to the same Postgres database, they'll automatically form a distributed task queue, dividing new work as it arrives on the queue. If you're looking for a lightweight deployment environment, we also have a hosted solution (DBOS Cloud).

darkteflon a day ago

What is the determinism constraint? I noticed it mentioned several times in blog posts, but one of the use-cases mentioned here is for use with LLMs, which produce non-deterministic outputs.

  • KraftyOne a day ago

    Great question! A workflow should be deterministic: if called multiple times with the same inputs, it should invoke the same steps with the same inputs in the same order. But steps don't have to be deterministic; they can invoke LLMs, third-party APIs, or any other operation. Docs page on determinism: https://docs.dbos.dev/typescript/tutorials/workflow-tutorial...
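    One way to picture the constraint (a toy sketch, not the DBOS implementation): step outputs are recorded on first execution, so a replay sees the same values and deterministically takes the same path, even though the step itself was non-deterministic.

```typescript
// Recorded step outputs make replay deterministic even for random steps.
const recorded = new Map<string, number>();

function step(name: string, fn: () => number): number {
  if (recorded.has(name)) return recorded.get(name)!; // replay: reuse old output
  const out = fn();
  recorded.set(name, out);
  return out;
}

function workflow(): number {
  // Non-deterministic step (imagine an LLM call); its output gets recorded.
  const roll = step("roll", () => Math.floor(Math.random() * 1000));
  // The workflow logic itself is deterministic given the step outputs.
  return roll * 2;
}

const firstResult = workflow();
const replayResult = workflow(); // same recorded output -> same result
```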

latchkey a day ago

Why typeorm over something like https://mikro-orm.io/?

psadri 2 days ago

Where is the state stored? In my own pg instance? Or is it stored somewhere in the cloud? Also, a small sample code snippet would be helpful.

  • KraftyOne 2 days ago

    The state can be stored in any Postgres instance, either locally or in any cloud.

    For code, here's the bare minimum code example for a workflow:

      class Example {
        @DBOS.step()
        static async step_one() {
          // step implementation goes here
        }
    
        @DBOS.step()
        static async step_two() {
          // step implementation goes here
        }
    
        @DBOS.workflow()
        static async workflow() {
          await Example.step_one()
          await Example.step_two()
        }
      }
    
    The steps can be any TypeScript function.

    Then we have a bunch more examples in our docs: https://docs.dbos.dev/.

    Or if you want to try it yourself, download a template:

        npx @dbos-inc/create
    • psadri 2 days ago

      Are there any constraints around which functions can be turned into steps? I assume their state (arguments?) needs to be serializable?

      Also, what happens with versioning? What if I want to deploy new code?

      • KraftyOne 2 days ago

        Yeah, the arguments and return values of steps have to be serializable to JSON.
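        That constraint is easy to check with a JSON round-trip; note that some values survive it only in degraded form (this is general JSON behavior, not anything DBOS-specific):

```typescript
// JSON round-trip: what step arguments and return values must survive.
const roundTrip = <T>(v: T): any => JSON.parse(JSON.stringify(v));

const plain = roundTrip({ id: 7, tags: ["a", "b"] }); // plain data: fine
const dated = roundTrip({ when: new Date(0) });       // Date degrades to an ISO string
```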

        For versioning, each workflow is tagged with the code version that ran it, and we recommend recovering workflows on an executor running the same code version as what the workflow started on. Docs for self hosting: https://docs.dbos.dev/typescript/tutorials/development/self-.... In our hosted service (DBOS Cloud) this is all done automatically.

        • CMCDragonkai a day ago

          If you were to use CBOR, you could support binary values more easily.