PostgreSQL 13.0 Beta 1 Released With Parallel Vacuum, Security Improvements + Benchmarks

  • PostgreSQL 13.0 Beta 1 Released With Parallel Vacuum, Security Improvements + Benchmarks


    The first beta of the forthcoming PostgreSQL 13.0 is now available for evaluation. PostgreSQL 13 is coming with many new features with this article serving as a quick look plus some very preliminary benchmarks...


  • #2
PostgreSQL is probably the most feature-complete SQL server there is. Probably the one most closely adhering to the SQL standard, too.

    Compare its features to other SQL database servers.

    Comment


    • #3
      Originally posted by uid313 View Post
PostgreSQL is probably the most feature-complete SQL server there is. Probably the one most closely adhering to the SQL standard, too.
      And yet the sysadmin in me feels like I just did the migrations from 11 to 12. Reading that 13 was in beta already has me reconsidering the shift to MariaDB on those machines just to simplify long-term administration headaches.

      Comment


      • #4
        Originally posted by Caffarius View Post

        And yet the sysadmin in me feels like I just did the migrations from 11 to 12. Reading that 13 was in beta already has me reconsidering the shift to MariaDB on those machines just to simplify long-term administration headaches.
        What kind of headaches?

        Comment


        • #5
          Originally posted by anarki2 View Post

          What kind of headaches?
          PostgreSQL major updates that require the database cluster to be manually migrated feel a bit too frequent. It's good that development is very active, but it makes things annoying for those who want to follow the newest releases for some reason.
          I'm not administering any production server anymore (when I did, I simply did not upgrade), but I do have an instance on my development machine, and once in a while when I run emerge --sync on my Gentoo box, I sigh: "oh, a PostgreSQL major update, again". I believe that's what Caffarius meant.
          Last edited by reavertm; 24 May 2020, 08:27 AM.

          Comment


          • #6
            Originally posted by reavertm View Post

            PostgreSQL major updates that require the database cluster to be manually migrated feel a bit too frequent. It's good that development is very active, but it makes things annoying for those who want to follow the newest releases for some reason.
            I'm not administering any production server anymore (when I did, I simply did not upgrade), but I do have an instance on my development machine, and once in a while when I run emerge --sync on my Gentoo box, I sigh: "oh, a PostgreSQL major update, again". I believe that's what Caffarius meant.
            For a production server you should use a distribution with long-term support (e.g. RHEL or CentOS).

            Comment


            • #7
              Such updates are once a year, but you can usually skip them for several years if you wish, since the last five release branches are kept up to date on their own package repo. I spent a while on 9.6 and moved to 12 this year when it was convenient. It's less easy if you're on a rolling distro, but I suspect most production databases aren't.

              pg_upgrade is good as well, though for 12 I used the old dump-and-restore method since I wanted to enable checksums and do a full vacuum at the same time. It's not automatic, but using it in the "link" mode is close.
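The dump-and-restore route mentioned here can be sketched roughly as follows. This is only an illustration of the approach, not GreenReaper's actual procedure; all paths, version numbers, and ports are hypothetical, and a real migration needs the release notes read first.

```shell
# Sketch of a dump-and-restore major-version upgrade that also enables
# data checksums on the new cluster (paths/ports are hypothetical).

# 1. Initialise the new cluster with checksums enabled.
/usr/lib/postgresql/12/bin/initdb --data-checksums \
    -D /var/lib/postgresql/12/main

# 2. Start the new cluster on a temporary port alongside the old one.
/usr/lib/postgresql/12/bin/pg_ctl -D /var/lib/postgresql/12/main \
    -o '-p 5433' start

# 3. Pipe a full dump of the old cluster straight into the new one.
pg_dumpall -p 5432 | psql -p 5433 -d postgres

# 4. Reclaim space and refresh planner statistics on the new cluster.
vacuumdb -p 5433 --all --full --analyze
```

The advantage over pg_upgrade is that the data files are rewritten from scratch, which is what makes turning on checksums (and getting a full vacuum) possible in the same step; the cost is downtime proportional to database size.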
              Last edited by GreenReaper; 24 May 2020, 09:38 AM.

              Comment


              • #8
                Originally posted by GreenReaper View Post
                Such updates are once a year, but you can usually skip them for several years if you wish, since the last five release branches are kept up to date on their own package repo. I spent a while on 9.6 and moved to 12 this year when it was convenient. It's less easy if you're on a rolling distro, but I suspect most production databases aren't.

                pg_upgrade is good as well, though for 12 I used the old dump-and-restore method since I wanted to enable checksums and do a full vacuum at the same time. It's not automatic, but using it in the "link" mode is close.
                I'm not at all familiar with SQL, but is it doable? 'Cause not keeping up with the latest and greatest (or at most one major version in between) often means more migration work, i.e. more time-consuming activities.

                Comment


                • #9
                  Originally posted by Vistaus View Post
                  I'm not at all familiar with SQL, but is it doable? 'Cause not keeping up with the latest and greatest (or at most one major version in between) often means more migration work, i.e. more time-consuming activities.
                  SQL itself is normally just added to, precisely because it is embedded within applications. There can be backwards-incompatible changes, but they are rare, and provided in the release notes.

                  And yes, they can bite you. For example, we relied on WITH clauses acting as an optimization fence (because the engine made a poor selectivity choice, probably due to a hard-to-estimate bitwise operation in the queries). When that behaviour changed, we had to add a few MATERIALIZED keywords here and there; otherwise the daily mailer used many times the CPU it needed to determine the names of people who'd posted work the user might want to see - the query went from 5ms to 500ms per account.
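The change being described is that PostgreSQL 12 began inlining CTEs into the outer query instead of always materializing them; adding the MATERIALIZED keyword restores the old fence behaviour. A minimal illustration, with hypothetical table and column names (not the poster's actual schema):

```sql
-- Before PostgreSQL 12, every WITH clause was an optimization fence.
-- From 12 on, the planner may inline CTEs; AS MATERIALIZED forces the
-- pre-12 behaviour when the planner's estimates are poor.
WITH recent_posts AS MATERIALIZED (
    SELECT author_id
    FROM posts
    WHERE created_at > now() - interval '1 day'
      AND (flags & 4) = 0   -- bitwise filters are hard to estimate
)
SELECT u.name
FROM users u
JOIN recent_posts rp ON rp.author_id = u.id;
```

Conversely, AS NOT MATERIALIZED can be used to request inlining where the new default would otherwise materialize (e.g. when the CTE is referenced more than once).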

                  Also potentially disruptive for system administrators are changes like a new location for the binary log files (and, for that matter, the text logs). But in both cases we'd have had to do the work anyway, and it's arguably easier to test a new setup once in three years than three times, once a year. We upgraded because we wanted a specific feature - better support for parallel queries, to run analytics recommendation queries on our under-utilized hot replica. Most fixes for features we already had were backported to 9.6 anyway.

                  Part of the purpose of betas like this is to shake out edge cases that haven't already been considered in the development process (which, like Linux's, is largely based around mailing lists) before the release gets into production.
                  Last edited by GreenReaper; 24 May 2020, 12:56 PM.

                  Comment


                  • #10
                    Originally posted by GreenReaper View Post
                    pg_upgrade is good as well, (...) It's not automatic, but using it in the "link" mode is close.
                    Yeah, I have a simple script that I use for upgrading on my dev machine: I just change the location of the previous binaries, run pg_upgrade in check mode, and if everything is okay, run it in link mode, minimizing the downtime.
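A script along those lines might look like the sketch below. This is a guess at the workflow described, not the poster's actual script; the binary and data directory paths are hypothetical and would be the parts edited for each release.

```shell
#!/bin/sh
# Sketch of a check-then-link pg_upgrade workflow as described above.
# All paths and versions are hypothetical; edit them per release.
set -e

OLD_BIN=/usr/lib/postgresql/11/bin     # previous binaries (edited each time)
NEW_BIN=/usr/lib/postgresql/12/bin
OLD_DATA=/var/lib/postgresql/11/main
NEW_DATA=/var/lib/postgresql/12/main

run_upgrade() {
    # "$1" is --check (dry run, clusters untouched) or --link
    # (hard-link data files into the new cluster instead of copying).
    "$NEW_BIN/pg_upgrade" \
        --old-bindir "$OLD_BIN" --new-bindir "$NEW_BIN" \
        --old-datadir "$OLD_DATA" --new-datadir "$NEW_DATA" \
        "$1"
}

# Only attempt the real upgrade if the compatibility check passes.
run_upgrade --check && run_upgrade --link
```

Link mode avoids copying the data files, which is why downtime stays minimal; the trade-off is that the old cluster cannot safely be started again once the new one has run.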

                    Comment
