• rmam@programming.dev · 1 year ago

    I think it's easier for ops people to just use a proper database.

    SQLite is a proper database.

    For single-instance deployments, running SQLite means no network round-trip overhead at all, and things just work.
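
    To make that concrete, here's a minimal sketch using Python's standard-library sqlite3 module (the file name and schema are made up for illustration):

        import sqlite3

        # SQLite runs in-process: "connecting" just opens a file, so every
        # query is a library call instead of a network round trip.
        conn = sqlite3.connect("app.db")
        conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
        conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
        conn.commit()

        for row in conn.execute("SELECT id, name FROM users"):
            print(row)

        conn.close()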

      • rmam@programming.dev · 1 year ago

        What admin tooling do you need? You haven’t defined any problem requiring a solution.

        • gnus_migrate@programming.dev · 1 year ago

          The database ending up in a state that violates some assumption I'm making, where I have to intervene manually without taking down my application, for example. I need an audit trail of the changes being made to the database and who made them (rough sketch below). I need to create replicas to implement failover. I need to replicate my application across multiple machines, and all the replicas need to have the same view of the data. I need to mitigate the possibility of data leaks if multiple tenants share a database.

          I’m not saying that you’re wrong for using it. I’m just saying that it doesn’t work for everything.
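
          To make the audit-trail point concrete, here's a rough sketch of what I mean, done in Postgres through the psycopg2 driver (the driver choice, table names, and trigger are all just illustrative; the trigger leans on current_user, the per-connection identity that SQLite simply doesn't have, since it has no users or authentication at all):

              import psycopg2  # third-party driver, assumed for this sketch

              conn = psycopg2.connect("postgresql://app@localhost:5432/appdb")
              with conn, conn.cursor() as cur:
                  # Assumed application table.
                  cur.execute("CREATE TABLE IF NOT EXISTS users (id BIGSERIAL PRIMARY KEY, name TEXT)")

                  # Every change to users is copied into audit_log together with the
                  # database role that made it (current_user) and a timestamp.
                  cur.execute("""
                      CREATE TABLE IF NOT EXISTS audit_log (
                          id         BIGSERIAL   PRIMARY KEY,
                          table_name TEXT        NOT NULL,
                          action     TEXT        NOT NULL,
                          changed_by TEXT        NOT NULL DEFAULT current_user,
                          changed_at TIMESTAMPTZ NOT NULL DEFAULT now(),
                          row_data   JSONB
                      );

                      CREATE OR REPLACE FUNCTION log_change() RETURNS trigger AS $$
                      BEGIN
                          INSERT INTO audit_log (table_name, action, row_data)
                          VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(COALESCE(NEW, OLD)));
                          RETURN COALESCE(NEW, OLD);
                      END;
                      $$ LANGUAGE plpgsql;

                      DROP TRIGGER IF EXISTS users_audit ON users;
                      -- EXECUTE FUNCTION is Postgres 11+ syntax.
                      CREATE TRIGGER users_audit
                      AFTER INSERT OR UPDATE OR DELETE ON users
                      FOR EACH ROW EXECUTE FUNCTION log_change();
                  """)
              conn.close()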

    • eluvatar@programming.dev · 1 year ago

      I typically run Postgres locally too (in Docker). While there's still technically network overhead, it's not much compared to a real network, and you can easily move the database to another machine later without reworking your app to switch from SQLite to Postgres.
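
      A rough sketch of that setup, assuming the psycopg2 driver and a DATABASE_URL environment variable (both my choices, not anything the app has to use):

          import os
          import psycopg2  # third-party driver, assumed for this sketch

          # Local dev: the database is a container on the same machine, e.g.
          #   docker run -p 5432:5432 -e POSTGRES_PASSWORD=secret -e POSTGRES_USER=app -e POSTGRES_DB=appdb postgres
          # Moving Postgres to another machine later is just a change to this URL.
          DATABASE_URL = os.environ.get(
              "DATABASE_URL", "postgresql://app:secret@localhost:5432/appdb"
          )

          conn = psycopg2.connect(DATABASE_URL)
          with conn, conn.cursor() as cur:
              cur.execute("SELECT version()")
              print(cur.fetchone()[0])
          conn.close()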