This is something I have been thinking about a lot, since I recently saw a project that absolutely didn’t care about this in the slightest and used vendor-specific features of MS SQL all over the place, which brought many advantages in terms of performance optimization.

Basically everyone advises you to write your backend generically, using technologies like ODBC, JDBC, Hibernate, …, and never to use anything vendor-specific like stored procedures, vendor-specific datatypes, or meta queries, the argument being that you can later switch your DBMS without much hassle.
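To make concrete what I mean, here is a minimal made-up example (hypothetical users table): the same query in SQL Server’s T-SQL dialect and in portable standard SQL:

    -- T-SQL (MS SQL specific):
    SELECT TOP 10 name FROM users ORDER BY created_at DESC;

    -- Standard SQL:2008 row limiting (Postgres, Oracle 12c+, DB2;
    -- SQL Server itself wants OFFSET 0 ROWS FETCH NEXT 10 ROWS ONLY):
    SELECT name FROM users ORDER BY created_at DESC
    FETCH FIRST 10 ROWS ONLY;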

I really wonder if this actually happens in the real world with production software, or if it’s just advice that makes sense on the surface but never pans out in reality. I personally haven’t seen any large piece of software switch to a different DBMS, even when there would be long-term advantages to doing so, because the risk and the work to retest everything would be far too great.

The only examples I know of (like SAP) were really part of a much larger rewrite or update rather than “just” switching DBMS.

  • xantoxis@lemmy.world · 8 months ago

    In almost 30 years I’ve never seen anyone actually switch databases underneath an existing product. I have worked at one place where generic database APIs were required, because the product supported multiple databases, but no individual customer was really expected to switch from one database to another; that’s just how the product was written.

    I have heard of this happening, but it’s the kind of thing that happens in one of two scenarios:

    1. Very early in a product’s lifetime the developer (probably a startup) realizes the database they chose was a poor choice. Since the product doesn’t even exist yet, the switching cost is low, and generic database use wouldn’t have helped.

    2. A management shakeup in a very mature product causes the team to switch databases. This is, as you observed, usually part of a major rewrite of some kind, so lots of things are going to change at once. Also, critically, this only happens at companies that have more money than sense. Management doesn’t mind if it takes a long time to switch.

      It won’t go smoothly, at all, but nobody actually cares, so generic database use wouldn’t have helped.

  • walter_wiggles@lemmy.nz · 8 months ago

    I know of 2 projects that wanted to migrate from Oracle to Postgres, one of which was successful. Both migrations were driven by cost savings: Oracle can get exceedingly expensive.

    In both cases there was up-front analysis of the Oracle-specific features being used. A lot of that could be rewritten into standard SQL, but some of it required code-logic changes to compensate. Vendor lock-in is insidious and will show up in native queries, triggers, functions that use Oracle packages, etc.
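    To give a flavor of the kind of rewriting involved, here is a hypothetical sketch (made-up employees/departments tables, not from either project):

        -- Oracle-specific idioms: (+) outer join, NVL, SYSDATE
        SELECT e.name, NVL(d.name, 'none'), SYSDATE
        FROM employees e, departments d
        WHERE e.dept_id = d.id(+);

        -- The same query in standard SQL:
        SELECT e.name, COALESCE(d.name, 'none'), CURRENT_TIMESTAMP
        FROM employees e
        LEFT JOIN departments d ON e.dept_id = d.id;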

    Changing a project’s underlying database is rare, but not as rare as it used to be.

  • Pencilnoob@lemmy.world · 8 months ago

    I suppose this is a hot take, but I’d never intentionally select a closed source paid database or programming language. Your data is the most valuable thing you have. The idea that you’d lock yourself into a contract with a third party is extremely risky.

    For example, I’ve never seen a product on Oracle that didn’t want to migrate off, but every one had coupled everything so tightly to Oracle that it was nearly impossible. Why start with Oracle in the first place? Just stay away from paid databases; they are always the wrong decision. It’s a tax on people who think they need something special, when at most they just need to hire experts in an open-source database. It’ll be much, much cheaper to just hire talent.

    Meanwhile, I’ve done two major database shifts in my career, and you are correct: keeping to ANSI-standard SQL is extremely important. If a project isn’t disciplined about that, chances are it’s undisciplined about so many other things that the whole project is a mess that’ll be gone in ten years anyway. I know of very few projects that have survived more than fifteen years without calls for a “rewrite”. Those few have been extremely disciplined: roughly 50% of all effort goes to tech-debt repayment, everything is open source, and modernization is continuous.

  • xmunk@sh.itjust.works · 8 months ago

    I’ve done one DBMS migration (MySQL → PostgreSQL) and also maintained an RDBMS-agnostic system that supported MySQL, MSSQL/Transact-SQL, and PostgreSQL.

    The system I migrated was extremely young and had terrible SQL discipline (I actually had to move queries using ye olde mysql_query to a toolset I wrote on top of PDO). This was a little while after the MySQL/MariaDB fragmentation; we weren’t comfortable with Oracle’s stewardship, but we also didn’t have high confidence in MariaDB’s ability to resist further fragmentation.

    After that migration we committed hard to Postgres, and we utilize a lot of Postgres-specific features where they’re useful; I still encourage agnostic SQL where possible, though.

    I prefer writing raw SQL and, while I’ve added tooling to support Doctrine (an ORM framework in PHP), I don’t personally use it. I also advocate against complex stored procedures: I think developers have difficulty understanding the whole system when stored procedures are allowed to be non-trivial, and developers should be aware of the complexity that underlies their actions. I do, however, leverage materialized views extensively to keep a denormalized, performant layer over our normalized source of truth.
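    As a rough sketch of that pattern (hypothetical tables, Postgres syntax):

        -- Denormalized reporting layer over the normalized source of truth
        CREATE MATERIALIZED VIEW order_totals AS
        SELECT o.id AS order_id,
               c.name AS customer,
               SUM(li.qty * li.unit_price) AS total
        FROM orders o
        JOIN customers c ON c.id = o.customer_id
        JOIN line_items li ON li.order_id = o.id
        GROUP BY o.id, c.name;

        -- Re-run whenever the denormalized layer should catch up
        REFRESH MATERIALIZED VIEW order_totals;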

    I’m happy to answer any specific questions but it’s super late here so it’ll probably be in the morning.

  • Modva@lemmy.world · 8 months ago

    30 years of experience running enterprise development teams here. Switching databases has happened once for me: SQL Server to Postgres. We were busy with a huge rewrite of an existing system, an approximately $100 million project for a major company.

    The instruction to switch DBs came midway through the project, basically on a whim of the CIO. Luckily we were only lightly impacted by DB-specific features in a couple of procs, and the database was abstracted away from the code base, which made it achievable.

  • onlinepersona@programming.dev · 8 months ago

    I really wonder if this actually happens in the real world with production software, or if it’s just advice that makes sense on the surface but never pans out in reality.

    Oh yes! Yes yes yes. I worked at a company where the DB admins were tasked with moving away from OracleDB to MariaDB, and they are now moving to PostgreSQL. They have thus been migrating for about 20 years now. It is a huge expense, with multiple teams across the business dedicated to this sole task.
    One would think the devs would have learned from the first migration not to write DB-specific queries, but alas… To be fair, though, managers also pushed for quickly written code and hired a lot of junior engineers straight out of uni. Turnover was staggering back then (1-2 years max) and probably stayed about the same or got worse.

    At another company, the client used some old Microsoft database on-prem and wanted to migrate to PostgreSQL, but they had trouble with their datacenter (it couldn’t scale, IIRC) and had to migrate to the cloud first. Of course, the only vendor of the old version of that Microsoft DB is Azure, and by the time their migration to the cloud was complete, the DB was no longer officially supported. So now they are paying through the nose for premium support since, IINM, they are among the last clients still running that version of the DB.
    The entire code base is handwritten SQL in the flavor of Microsoft’s DB, which is of course incompatible with anything else.
    Their migration started close to a decade ago, and by the time I left, the end was nowhere in sight.

    So yeah, staying DB agnostic is not a joke.

  • Oliver Lowe@apubtest2.srcbeat.com · 8 months ago

    Great question. Short answer: yes!

    Long answer: I did this on a production system about 2 years ago.

    The system used MySQL, which was served from 3 virtual machines. Nobody took responsibility for that MySQL cluster, so outages and crazy-long maintenance windows were normal, especially as there was no DB admin expertise. The system had been hobbling along like that for 3 years regardless.

    One day the company contracting me asked for help migrating some applications to a new disaster-recovery (DR) datacentre. One by one, I patched codebases to make them more portable; I even needed to remove hard-coded IP addresses and paths provided by NFS mounts! Finally I got to the system that used the MySQL cluster. After some digging I discovered:

    1. The system was only ever configured to connect to one DB host
    2. There were no other apps connecting to the DB cluster
    3. It all ran on “classic” Docker Swarm (not even the last released version)

    My ex-colleague, who I got along with really well, wrote 90% of the system. They used a SQL query builder and never used any DB-engine-specific features. Thank you, ex-colleague! I realised I could scrap this insane not-actually-highly-available architecture and use SQLite instead, all in a single virtual machine with 512MB of memory and 1 vCPU. SQLite was perfect for the job: the system consisted of a single reader and writer, and the DB was only used for record-keeping for other long-running jobs.

    Swapping it over took about 3 days, mostly testing. No more outages, no more working around shitty network administration, no more “how does the backup work again?” when DB dumps failed, no more complex config management to bring DB clusters up and down. The ability to migrate DB engines led to a significant simplification of the overall system.

    But is this general advice? Not really. Programming with portability in mind is super important, but overly generic programs can also be a pain to work with. Hibernate, JDBC, et al. don’t come for free; convenience comes at the cost of complexity. Honestly, I’m a relational-database noob (I’m more of an SRE), so my approach is to try to understand the specific system/project and go from there. For example:

    I recently saw a project that absolutely didn’t care about this in the slightest and used vendor-specific features of MS SQL all over the place, which brought many advantages in terms of performance optimization.

    Things I would want to learn more about:

    • were those performance optimisations essential?
    • if so, is the database the best place to optimise? e.g. smarter queries versus fronting with a dumb cache
    • are there database experts who can help out later? do we want to manage a cache?

    Basically everyone advises you to write your backend generically, using technologies like ODBC, JDBC, Hibernate, …, and never to use anything vendor-specific like stored procedures, vendor-specific datatypes, or meta queries, the argument being that you can later switch your DBMS without much hassle.

    Things I would want to learn more about:

    • how many stored procedures could we be managing? 1, 100, 1000? it may not be worth taking on a dependency just to avoid writing, like, 3 stored procedures
    • is that tooling depended on by other projects already?
    • how much would the vendor-specific datatype be used? one column in one table? everywhere?
    • does using vendor-specific features make the code easier to understand, or just easier to write? big difference!

    My shitty conclusion: it depends.

  • lemmyng@lemmy.ca · 8 months ago

    I’ve seen it happen successfully, both for licensing costs plus a cloud migration (MSSQL → Spanner) and for scalability reasons (vanilla Postgres → CockroachDB). The first was a significant change in features; the latter sacrificed some native plugins. In the first case the company was using vendor-specific features and rewrote the backend to fit the new vendor.

    There’s vendor agnosticism, and then there’s platform agnosticism. Writing your code so that it’s not tied to one specific implementation of Postgres is fine, and lets you use a compatible drop-in. Writing your code so you can swap MSSQL for Oracle or Aurora or whatever at will does not make sense. In every case of attempted platform agnosticism I’ve seen, the project was abandoned within a year or two with nothing to show for it.

  • JakenVeina@lemm.ee · 8 months ago

    I think it’s a fallacy to say that you can or should build an application layer that’s completely DBMS-agnostic. Even if you are very careful to write only SQL queries whose features are part of the official SQL standard, you’re still coupled to your particular DBMS’s internal implementations of query compilation, planning, optimization, etc. At enterprise scale, there are still going to be plenty of queries that suddenly perform like crap after a DBMS swap.

    In my mind, standardization efforts like ODBC or Hibernate or Entity Framework or whatever else aren’t meant to abstract away the underlying DBMS; they’re meant to promote compatibility.

    Not to mention that you’re tying your own hands by locking yourself out of non-standard DBMS features that could be REALLY useful to you if you have the right use cases. JSON generation and indexing is the big one that comes to mind. Also, geospatial data tables.
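    For example, roughly what JSON indexing looks like in Postgres (a sketch with a made-up events table; other engines spell this differently):

        -- Semi-structured payloads, indexed for containment queries
        CREATE TABLE events (
            id      bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
            payload jsonb NOT NULL
        );
        CREATE INDEX events_payload_idx ON events USING GIN (payload jsonb_path_ops);

        -- Queries like this one can use the index:
        SELECT id FROM events WHERE payload @> '{"type": "login"}';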

    For context, my professional work for the past 6 years has been an Oracle/.NET/browser application, and we are HEAVILY invested in Oracle. Most notably, we do a LOT of ETL, and that all runs exclusively in the DBMS itself, in PL/SQL procedures orchestrated by the Oracle job scheduler. Attempting to do this kind of data manipulation by round-tripping it through .NET code would make things significantly worse.
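    The shape of that setup, as a trivial made-up sketch (ETL_PKG.LOAD_FACTS is a hypothetical procedure):

        -- Schedule a PL/SQL ETL procedure to run nightly inside the DBMS
        BEGIN
          DBMS_SCHEDULER.CREATE_JOB(
            job_name        => 'NIGHTLY_ETL',
            job_type        => 'STORED_PROCEDURE',
            job_action      => 'ETL_PKG.LOAD_FACTS',  -- hypothetical package procedure
            repeat_interval => 'FREQ=DAILY;BYHOUR=2',
            enabled         => TRUE);
        END;
        /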

    So my opinion could definitely be a result of what’s been normalized for me in my day job. But I’ve also had a few other collaborative side projects where I think the “don’t try to abstract away the DBMS” advice holds true.

  • NigelFrobisher@aussie.zone · 8 months ago

    I’ve done it, but only pre-live, and the client then wanted it put back into Oracle again because they were never ever ever going to escape Oracle’s clutches. It’s very rare that vendor-specific optimisations are actually worth using, though: 99.99% of performance issues are bad table/join design and indexing failures.

  • Lmaydev@programming.dev · 8 months ago

    In 15 years I have never actually seen this happen.

    As you’ve said, writing generically can have big performance implications.

    Almost all projects I’ve seen end up locked in one way or another.

    A better approach, if you want this flexibility, is to abstract the actual database access away from your main code.

    That way you can do whatever provider-specific stuff you need and still keep the option to rip it out with minimal refactoring.

    Your main code shouldn’t really care which provider you are using; only the code that interacts directly with the database should.

  • Lysergid@lemmy.ml · 8 months ago

    A colleague of mine was working on migrating around a dozen batch jobs written in Java. All of the jobs used JPA/Hibernate, but the people who wrote them didn’t understand abstraction and encapsulation, so it ended up as vendor-locked as you can imagine: procedures, read cursors, Oracle-specific functions, metadata, logic spread between Java and PL/SQL, all the fun stuff, you know. It took around a year of work to migrate to Postgres, and that’s with the support of a DBA who helped rewrite the most complicated queries and procs. So yeah, DBMS-specific features are worth worrying about.

    Don’t get me wrong, though: effort spent on design decisions should be proportional to the potential size/complexity growth of the software. If you’re building a todo list, how many different queries might you have in a fairly pessimistic scenario? Maybe a couple dozen. Don’t bother with JPQL or HQL. But if you expect your app to grow significantly, I’d do as much as possible to avoid my colleague’s fate.

  • abhibeckert@lemmy.world · 8 months ago

    Sure - for example we migrated all our stuff from MySQL to MariaDB.

    It was completely painless, because all of the source code, and many of the people who wrote that code, migrated to MariaDB at the same time; they made sure the transition was effortless. We spent months second-guessing ourselves, weighing all of our options, checking and triple-checking our backups, and verifying everything worked smoothly afterwards… but the actual transition itself was a very short shell script that ran in a few seconds.

    I will never use a proprietary database unless it’s one I wrote myself, and I’d be extremely reluctant to do even that. You’d need a damned good reason to convince me not to pick a good open-source option.

    My one exception to that rule is Backblaze B2. I do use their proprietary backup system, because it’s so cheap. But it’s only a backup and it’s not my only backup, so I could easily switch.

    I’m currently mid-transition from MariaDB to SQLite. That one is more complex, but not because we did anything MariaDB-specific. It’s more that SQLite is so different that we have a completely different database design: for one thing, we have hundreds of databases instead of just one. Some of those databases are less than 100KB; the server just reads the whole thing into RAM, and queries that were slow on our old monolithic database take less than 1 millisecond with this new system.

    never use anything vendor-specific like stored procedures, vendor-specific datatypes, or meta queries

    Yeah, we don’t do anything like that. All the data in our database is in JSON types (string, number, boolean, null), with the exception of binary data (primarily images). It doesn’t even distinguish between float and int, though our code obviously does. All of the queries we run are simple “get this row by primary key” or “find all rows matching these simple where clauses”. I don’t even use joins.
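    Concretely, the schema is about as simple as it sounds; a hypothetical sketch in SQLite flavor (not our actual table names):

        -- One document-style table; values live in a JSON-encoded column
        CREATE TABLE records (
            id   TEXT PRIMARY KEY,
            data TEXT NOT NULL  -- JSON: strings, numbers, booleans, nulls
        );

        -- The only query shapes: primary-key lookups and simple filters
        SELECT data FROM records WHERE id = ?;
        SELECT data FROM records WHERE json_extract(data, '$.status') = 'pending';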

    Stored procedures etc. are done in the application layer. For example, we don’t do an insert query anywhere. We have a “storage” object with simple read/write functions, and on top of that there’s an object for each model. That model does all kinds of things, such as writing the same data in different places (with different indexes) and catching “row not found” failures with an “OK, let’s check if it’s in this other place”. That’s also the layer where we enforce constraints, which include complex business rules such as “even if this data is invalid, we will record it anyway and flag it for a human to follow up on”.

    • Kissaki@programming.dev · 8 months ago

      MariaDB is a fork of MySQL that keeps compatibility with its APIs. That’s why the move was effortless for you: it’s a simple drop-in replacement, not because “many migrated at the same time”. (Maybe that’s what you were referring to, but it wasn’t written that way.)

    • Fal@yiffit.net · 8 months ago

      Wtf. You can’t possibly be suggesting that any of this is a good idea

  • theit8514@lemmy.world · 8 months ago

    My line of business is entirely a Microsoft shop, so everything we’ve ever written has been for MSSQL.

    That being said, I can understand the benefits of having a choice of backend. For example, for our Zabbix deployment some engineer just installed MariaDB + Zabbix on a server and called it a day. This has caused us no end of trouble (ibdata misconfigured, undo files too small, etc.). After the last time I had to rebuild it due to undo-file corruption, I swore that if it broke again I was switching to Postgres. So far (knocks on wood) we haven’t had any major issues. We’re still looking into and planning for a Postgres migration, but we’re hoping to hold out for a little more prep time.

    Maybe I should contribute a MSSQL engine for Zabbix so I can move it to a platform I’m more comfortable with. ;)