Free PostgreSQL Backup Software


PostgreSQL is an enterprise-level, open-source relational database that supports both SQL and JSON querying. It has been around for more than 24 years, backed by a large community for most of that time. One of its most popular use cases is serving as the database management system for a wide range of applications with different purposes: web apps, mobile apps, analytical apps, geospatial apps, and so on.

Just as with any other database, PostgreSQL is typically used where the data it holds is of value, which creates the need for regular backups. The open-source nature of PostgreSQL has played its part in the appearance of many different backup options for this database, both with and without third-party PostgreSQL backup tools. One such tool is Bacula’s PostgreSQL backup software.

When it comes to backing up PostgreSQL databases, it is rarely adequate to simply copy the existing data files, since a live database can change while the copy is in progress. That is why it is typically necessary to first “dump” a consistent copy of that data into a dedicated directory and then create a proper backup of the dump.

Methods of backup creation for PostgreSQL

Generally speaking, there are two main ways to create a PostgreSQL backup: an SQL dump and a binary dump. The differences between them are bigger than you may expect. To perform an SQL dump, there is no need to prepare the server in any way; you just need sufficient user privileges. To perform a binary dump, you will have to “prepare” the database first, and the preparation process might seem complicated to some users.

At the same time, the SQL dump method is slower than the binary dump and puts more load on the system during the backup process, while the binary dump is faster and less resource-consuming.
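
To make the distinction concrete, here is a minimal sketch of both approaches run by hand, outside of Bacula (the target paths are arbitrary examples, and the binary variant additionally requires the WAL-related server setup described later in this article):

# sudo -u postgres pg_dumpall --clean --file=/var/backups/all-databases.sql
# sudo -u postgres pg_basebackup -v -D /var/backups/pg_base

The first command writes a plain-text SQL script covering all local databases; the second takes a byte-level copy of the whole cluster.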

SQL dump

We’ll start with the SQL dump as the easier way, and then go over the specifics of a binary dump. Once you have a sufficient access level within the database (and Bacula set up as well), all you have to do is create a Bacula backup job the normal way for the PostgreSQL service and then add a few specific lines to the configuration of that job, as follows:

# vi /etc/bacula/client.d/example.net.conf

....
FileSet {
  ....

  Include {
    ....

    # Database dump directory to include in the backup
    File = "/var/backups/bacula-pg_dump"
  }
}
....

Job {
  Name = "example.net"
  Type = Backup
  ....

  # Dump all of the local PostgreSQL databases in SQL format before the backup runs:
  RunScript {
    RunsWhen = Before
    FailJobOnError = No

    # Clean up and recreate the dump directory
    Command = "rm -rf /var/backups/bacula-pg_dump"
    Command = "mkdir -p /var/backups/bacula-pg_dump"
    Command = "chown -R postgres:postgres /var/backups/bacula-pg_dump"

    # Dump every local database into a timestamped SQL file
    Command = "/bin/bash -c 'sudo -u postgres pg_dumpall --clean --inserts --verbose --file=/var/backups/bacula-pg_dump/`date +\"%%Y-%%m-%%d_%%H:%%M\"`.sql 2>/var/log/bacula-pg_dump.log'"
  }

  RunScript {
    RunsWhen = After
    RunsOnFailure = yes

    # Remove the dump directory to free disk space once the job completes
    Command = "rm -rf /var/backups/bacula-pg_dump"
  }
}

To check that the configuration is correct, and to apply the changes above, run the following commands:

# bacula-dir -c /etc/bacula/bacula-dir.conf -t
# /etc/init.d/bacula-dir reload
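
Restoring from such a dump is straightforward once the .sql file has been retrieved from Bacula: the plain-text output of pg_dumpall can be fed back into the server with psql. A rough sketch, with a hypothetical file name standing in for the timestamped file the job above creates:

# sudo -u postgres psql -f /var/backups/bacula-pg_dump/2022-01-01_00:00.sql postgres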

Binary dump

As we’ve said before, the binary dump is slightly more difficult and requires more setup to work properly. The whole mechanism of a binary dump relies on PostgreSQL’s built-in transaction log, the write-ahead log (WAL). Activating this log and setting it up properly is a necessary part of the setup for the PostgreSQL binary backup process. The setup in question is performed as follows:

# vi /etc/postgresql/9.5/main/postgresql.conf

....
# Enable WAL archiving in the first place
wal_level = archive

# Number of past "pg_xlog" segment files to keep before they are recycled
# (each segment file is 16 MB by default, so this example keeps about 0.5 GB)
wal_keep_segments = 32

# Limit the number of connections that can be used for log replication
# (1 for the data offload and 1 more for current transactions)
max_wal_senders = 2
....

The second part of the initial setup for a binary dump is to allow a locally authenticated PostgreSQL user to make replication connections, restricted to local connections only, which is what lets our binary dump happen:

# vi /etc/postgresql/9.5/main/pg_hba.conf

....
# TYPE DATABASE USER ADDRESS METHOD
# --------------------------------------------
....

# Allow local replication connections for the postgres user (peer authentication)
local replication postgres peer

To apply these changes you’ll have to restart the database manually:

# /etc/init.d/postgresql restart
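
To confirm that the new settings took effect after the restart, you can query them directly from psql, for example:

# sudo -u postgres psql -c "SHOW wal_level;"
# sudo -u postgres psql -c "SHOW wal_keep_segments;"
# sudo -u postgres psql -c "SHOW max_wal_senders;"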

Now that the setup phase is complete, we can either use Bacula to create a job similar to the one we’ve shown before and automate the backup process, or perform a manual binary backup using the built-in PostgreSQL tool called “pg_basebackup”.

Creating a binary backup with Bacula’s PostgreSQL backup software is similar to what was shown before: we create a backup job and then add the specifics to it, as in the following example:

# vi /etc/bacula/client.d/example.net.conf

....
FileSet {
  ....

  Include {
    ....

    # Database dump directory to include in the backup
    File = "/var/backups/bacula-pg_basebackup"
  }
}
....

Job {
  Name = "example.net"
  Type = Backup
  ....

  # Dump the whole local PostgreSQL cluster in binary format before the backup runs:
  RunScript {
    RunsWhen = Before
    FailJobOnError = No

    # Clean up and recreate the dump directory
    Command = "rm -rf /var/backups/bacula-pg_basebackup"
    Command = "mkdir -p /var/backups/bacula-pg_basebackup"
    Command = "chown -R postgres:postgres /var/backups/bacula-pg_basebackup"

    # Take a binary base backup of the whole cluster
    Command = "/bin/bash -c 'sudo -u postgres pg_basebackup -v -P -X fetch -R -D /var/backups/bacula-pg_basebackup 2>/var/log/bacula-pg_basebackup.log'"
  }

  RunScript {
    RunsWhen = After
    RunsOnFailure = yes

    # Remove the dump directory to free disk space once the job completes
    Command = "rm -rf /var/backups/bacula-pg_basebackup"
  }
}

Of course, these changes need to be checked and then applied by reloading the Director:

# bacula-dir -c /etc/bacula/bacula-dir.conf -t
# /etc/init.d/bacula-dir reload

A manual binary backup of a PostgreSQL database using pg_basebackup is somewhat more involved: there are two extra steps before the backup process can start. First of all, we have to set up a directory for the backed-up data, and the user that will perform the backup should have exclusive access rights to it. The commands are as follows:

# mkdir -p /var/backups/postgresql
# chown -R postgres:postgres /var/backups/postgresql
# chmod -R go-rwx /var/backups/postgresql

Since this backup method copies literally everything within your databases, it’s always a good idea to clean up the directory in question beforehand:

# rm -r /var/backups/postgresql/*

And now we can safely start the complete binary backup process:

# sudo -u postgres pg_basebackup -v -X fetch -R -D /var/backups/postgresql

If you’ve done everything correctly, you should see log output like this:

pg_basebackup: initiating base backup, waiting for checkpoint to complete
pg_basebackup: checkpoint completed
transaction log start point: ...
transaction log end point: ...
pg_basebackup: base backup completed
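
Restoring from such a base backup is, in essence, the reverse of taking it: stop the server, replace the contents of the data directory with the backup, fix the ownership, and start the server again. A rough sketch, assuming the default Debian data directory for PostgreSQL 9.5 (adjust the paths to your installation):

# /etc/init.d/postgresql stop
# rsync -a /var/backups/postgresql/ /var/lib/postgresql/9.5/main/
# chown -R postgres:postgres /var/lib/postgresql/9.5/main
# /etc/init.d/postgresql start

Note that the -R flag used earlier writes a recovery.conf configured for standby operation; if you are restoring the primary server in place rather than creating a replica, remove that file before starting the server.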

As you can see, there are plenty of choices in how to perform a PostgreSQL backup, with or without third-party solutions. The choice between a binary dump and an SQL dump is always available to the end user, as is the choice of whether to use a third-party tool such as Bacula’s PostgreSQL backup software.

About the author

Rob Morrison

Rob Morrison started his IT career with Silicon Graphics in Switzerland, which developed into a wide variety of IT management roles over almost 10 years. He subsequently held various management positions in professional open source companies such as JBoss, Red Hat and Pentaho, contributing to the growth of these well-known companies. He is a graduate of Plymouth University, holds a BSc (Hons) degree in Digital Media and Communications, and completed an Overseas Studies Program.